# MLRun Community Edition
The Workbench2 MLRun module ships with an opt-in Helm install of the official MLRun Community Edition (CE) chart. CE provides the MLRun project store, Web UI, and MinIO-backed artifact bucket; the custom trainer sidecar continues to handle compute. This is the coexistence model adopted in ADR 0006 §Phase 5.
## Why coexistence (not replacement)?
| Capability | Trainer sidecar (this module) | MLRun CE |
|---|---|---|
| sklearn / xgboost / lightgbm / pytorch tabular | ✅ first-class | ❌ requires custom job spec |
| Reproducible Python generator | ✅ | ❌ |
| MongoDB-native feature pipelines | ✅ | indirect |
| `ecosystem-runtime` adapter logging | ✅ | indirect |
| Project store / Web UI | ❌ | ✅ |
| MinIO artifact store | ❌ | ✅ |
| K8s scheduling for ad-hoc jobs | indirect | ✅ |
When CE is enabled, both systems run side by side: the workbench backend prefers the CE API URL for project lookups (`Settings.select_mlrun_api_url`), while the trainer sidecar continues to receive every `POST /train` and `POST /invocations` call.
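As a rough illustration, this split can be sketched as a small routing helper. All names and the sidecar address below are hypothetical; the real selection logic lives in `Settings.select_mlrun_api_url`:

```python
# Hypothetical sketch of the coexistence routing: training/inference traffic
# always goes to the trainer sidecar, while project lookups prefer the CE API
# when CE is enabled. Addresses are illustrative, not the real backend config.
CE_API_URL = "http://localhost:30070"
SIDECAR_URL = "http://localhost:8080"  # assumed sidecar address


def route(endpoint: str, ce_enabled: bool) -> str:
    """Return the base URL that should serve the given endpoint."""
    if endpoint in ("/train", "/invocations"):
        return SIDECAR_URL  # the sidecar keeps all compute traffic
    if ce_enabled:
        return CE_API_URL   # CE is preferred for project/run lookups
    return SIDECAR_URL
```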
## Installation
```bash
cd backend
./scripts/setup-mlrun-ce.sh
```

The script:

- Verifies `kubectl` and `helm` are present.
- Switches the Kube context to `docker-desktop`.
- Creates the `mlrun` namespace.
- Resolves `MLRUN_CE_DBPATH` (default `LOCAL_DATA_PATH/mlrun-ce`) and writes a values overlay merged with `backend/k8s/mlrun-ce-values.yaml`.
- Adds the `mlrun-ce` Helm repo (idempotent).
- Runs `helm upgrade --install mlrun-ce mlrun/mlrun-ce`.
- Waits for the MLRun API pod to become `Ready`.
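The DBPATH resolution step can be illustrated with a short sketch. The function name is hypothetical; the real logic lives in the shell script:

```python
import os


def resolve_dbpath(env: dict) -> str:
    """Mirror the script's default: use MLRUN_CE_DBPATH when set and non-blank,
    otherwise fall back to LOCAL_DATA_PATH/mlrun-ce (illustrative sketch)."""
    explicit = env.get("MLRUN_CE_DBPATH", "").strip()
    if explicit:
        return explicit
    return os.path.join(env["LOCAL_DATA_PATH"], "mlrun-ce")
```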
Service URLs (NodePort-based):
| Service | URL | Purpose |
|---|---|---|
| MLRun API | http://localhost:30070 | Programmatic project / run access |
| MLRun UI | http://localhost:30060 | Web UI |
| MinIO | http://localhost:30090 | Artifact bucket |
## Settings
Workbench2 picks up CE through the following `backend/.env` keys:
```
MLRUN_CE_ENABLED=true
MLRUN_CE_NAMESPACE=mlrun
MLRUN_CE_RELEASE=mlrun-ce
MLRUN_CE_API_URL=http://localhost:30070
MLRUN_CE_UI_URL=http://localhost:30060
MLRUN_CE_MINIO_URL=http://localhost:30090
# Optional persistence overrides (default: $LOCAL_DATA_PATH/mlrun-ce)
# MLRUN_CE_DBPATH=/Users/me/data/mlrun-ce
# MLRUN_CE_ARTIFACT_PATH=/Users/me/data/mlrun-ce/artifacts
```

`Settings.select_mlrun_api_url()` returns the CE URL when CE is enabled and non-blank, otherwise falls back to `MLRUN_SIDECAR_URL`, then to `MLRUN_TRAINER_URL`.
## Teardown
```bash
cd backend
./scripts/teardown-mlrun-ce.sh
```

The script uninstalls the Helm release and removes the `mlrun` namespace. Persistent host paths are preserved so the same `DBPATH` can be reused across reinstalls; delete them manually if you want a clean slate.
## Operational notes
- Nuclio / Jupyter / Pipelines disabled: the bundled values file disables Nuclio, Jupyter, and Kubeflow Pipelines to keep the local install lightweight. Re-enable them in `backend/k8s/mlrun-ce-values.yaml` if you need them.
- arm64 only: the local install is opinionated for Apple Silicon. On Intel hardware, update the image tags before running the script.
- CE storage size: MinIO is given a 10 GiB `hostPath` PVC; raise it for non-trivial artifact stores.
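As a rough sketch, the overlay that trims the install might look like the fragment below. The exact value keys are assumptions about the `mlrun-ce` chart's schema; check `backend/k8s/mlrun-ce-values.yaml` for the real keys:

```yaml
# Illustrative overlay only; verify key names against the installed chart.
nuclio:
  enabled: false
jupyterNotebook:
  enabled: false
pipelines:
  enabled: false
minio:
  persistence:
    size: 10Gi   # raise for non-trivial artifact stores
```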
A live screenshot of the MLRun CE Web UI will be added once the Helm install has been run via `./scripts/setup-mlrun-ce.sh`. The default deployment is opt-in (`MLRUN_CE_ENABLED=false`), so this screenshot is captured separately from the seed lifecycle.