
MLFlow Integration

Models that are not trained using the ecosystem Server must be made available to the ecosystem Runtime as part of the deployment process. The supported approach is to use MLFlow as a model registry and import the models from MLFlow into the runtime.

Note

Integration is currently supported for models trained using H2O, where the MOJO is stored as an artifact in MLFlow.

Configuration

MLFlow integration requires the Runtime MCP API interface. Set the MLFLOW_TRACKING_URI environment variable to point to your MLFlow environment; the MLFlow security variables can also be configured if required. Specify the required models in the config file using the following format:

{
    "mlflow_models": [{"name":"recommender-demo","version":7,"mojo_artifact_path":"mojo"}]
}
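The config file is plain JSON, so it can be generated programmatically. The sketch below, which writes the example entry above to a temporary file, is illustrative only; the file path is an assumption and in practice the file would live wherever RUNTIME_CONFIG points.

```python
import json
import tempfile

# Example runtime config: pin version 7 of the "recommender-demo" model,
# whose H2O MOJO is stored under the "mojo" artifact path in MLFlow.
config = {
    "mlflow_models": [
        {"name": "recommender-demo", "version": 7, "mojo_artifact_path": "mojo"}
    ]
}

# Write the config to a temporary file; in a real deployment, write it to
# the location referenced by the RUNTIME_CONFIG environment variable.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f, indent=4)
    config_path = f.name
```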

The location of the config file is specified using the RUNTIME_CONFIG environment variable. Use the /update_runtime_config API to update the config file.
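A minimal client-side sketch of pushing an updated config via /update_runtime_config. The runtime address, the MLFlow URI, and the config file path are all assumptions; substitute the values for your deployment. The environment variables are set on the runtime, not the client, so they appear here only as comments.

```python
import json
import urllib.request

# On the runtime container, the deployment would set (values assumed):
#   MLFLOW_TRACKING_URI=http://mlflow:5000
#   RUNTIME_CONFIG=/config/runtime.json

runtime_url = "http://localhost:8091"  # assumed Runtime MCP address

config = {
    "mlflow_models": [
        {"name": "recommender-demo", "version": 7, "mojo_artifact_path": "mojo"}
    ]
}

# Build the /update_runtime_config request; send it with
# urllib.request.urlopen(req) once the runtime is reachable.
req = urllib.request.Request(
    runtime_url + "/update_runtime_config",
    data=json.dumps(config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```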

Calling the /refresh API on the Runtime MCP will, in addition to the standard /refresh functionality, download and load the models from MLFlow.
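The refresh step can be sketched the same way. The runtime address is again an assumption; once the runtime is reachable, issuing this request triggers the standard /refresh behaviour plus the MLFlow model download and load.

```python
import urllib.request

# Assumed Runtime MCP address; adjust for your deployment.
req = urllib.request.Request("http://localhost:8091/refresh", method="GET")

# urllib.request.urlopen(req)  # uncomment when the runtime is reachable
```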