MLFlow Integration
Models that are not trained using the ecosystem Server must be made available to the ecosystem Runtime as part of the deployment process. The supported approach is to use MLFlow as a model registry and import the models from MLFlow into the Runtime.
Configuration
MLFlow integration requires the use of the Runtime MCP API interface. The MLFLOW_TRACKING_URI environment variable should be configured, pointing to your MLFlow environment. The MLFlow security variables (for example MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD) can also be configured if required. Specify the required models in the config file using the following format:
{
  "mlflow_models": [
    {"name": "recommender-demo", "version": 7, "type": "h2o_mojo", "mojo_artifact_path": "mojo"},
    {"name": "recommender-demo", "version": 7, "type": "h2o_model", "h2o_url": "http://localhost:54321"}
  ]
}
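A config file in this format can be generated programmatically. The sketch below writes a minimal one-model config and checks that it round-trips as valid JSON; the field names come from the documentation above, while the file path is illustrative and should in practice match the RUNTIME_CONFIG environment variable described below.

```python
import json
import os
import tempfile

# Model entry fields as documented above; values are illustrative.
config = {
    "mlflow_models": [
        {"name": "recommender-demo", "version": 7, "type": "h2o_mojo",
         "mojo_artifact_path": "mojo"},
    ]
}

# Illustrative path -- in a real deployment, write to the location that
# the RUNTIME_CONFIG environment variable points at.
config_path = os.path.join(tempfile.gettempdir(), "runtime-config.json")
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

# Sanity-check that the file round-trips as valid JSON.
with open(config_path) as f:
    loaded = json.load(f)
print(loaded["mlflow_models"][0]["type"])
```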
The location of the config file is specified using the RUNTIME_CONFIG environment variable. Use the /update_runtime_config API to update the config file. The types currently supported in the config file are h2o_mojo and h2o_model. h2o_mojo is preferred and requires that the mojo is stored as an artifact in MLFlow. h2o_model can be used when the mojo is not stored in MLFlow, but it requires that the Runtime MCP has access to the H2O server used by MLFlow, so that the model can be loaded into H2O and the mojo downloaded from there.
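The environment variables mentioned above can be set as in the following sketch; all values are illustrative and should be replaced with your own MLFlow server address and config file path.

```shell
# Illustrative values -- substitute your own MLFlow server and config path.
export MLFLOW_TRACKING_URI="http://mlflow.example.com:5000"

# Optional MLFlow security variables, if your tracking server requires them:
export MLFLOW_TRACKING_USERNAME="runtime-user"
export MLFLOW_TRACKING_PASSWORD="change-me"

# Path the Runtime reads the model list from:
export RUNTIME_CONFIG="/etc/ecosystem/runtime-config.json"

echo "$MLFLOW_TRACKING_URI"
```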
Calling the /refresh API on the Runtime MCP will, in addition to the standard /refresh functionality, download and load the models from MLFlow.
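The two endpoints named above can be called in sequence to push a new config and reload the models. The endpoint names come from this documentation, but the base URL, HTTP method, and the RUNTIME_MCP_URL variable in this sketch are assumptions; adjust them to your deployment.

```python
import os
import urllib.request

# Endpoint names as documented above; order matters: update the config first,
# then refresh so the Runtime MCP downloads and loads the models from MLFlow.
ENDPOINTS = ("/update_runtime_config", "/refresh")

def refresh_models(base_url: str) -> None:
    """Push the new config and trigger a model refresh on the Runtime MCP.

    Assumes the endpoints accept a plain GET request; check your runtime's
    API reference for the actual method and any required payload.
    """
    for endpoint in ENDPOINTS:
        with urllib.request.urlopen(base_url + endpoint) as resp:
            print(endpoint, resp.status)

# RUNTIME_MCP_URL is a hypothetical variable used here so the calls only
# run when a runtime address is actually configured:
if os.environ.get("RUNTIME_MCP_URL"):
    refresh_models(os.environ["RUNTIME_MCP_URL"])
```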