
OpenShift

ecosystem.Ai can be installed on OpenShift. This can be tested locally using crc (OpenShift Local). Below we give example deployment configurations for the server, workbench, notebooks, runtime, Grafana and workbench2 components.

Environment Variables

A number of environment variables can be used when starting up the ecosystem.Ai Deployments. These can be set in a ConfigMap. In addition, you should create a secret containing your license key.

Environment Variable Config Map
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecosystem-env
  namespace: ecosystem
data:
  # Server
  ECOSYSTEM_SERVER_PORT: "3001"
  ECOSYSTEM_SERVER_IP: "http://server.ecosystem.svc.cluster.local"
  ECOSYSTEM_PROP_FILE: "/config/ecosystem.properties"
  CLI_SETTINGS: "-Dserver.port=3001"
  RESET_USER: "true"
  NO_WORKBENCH: "true"
  # Runtime
  MONITORING_DELAY: "240"
  ECOSYSTEM_RUNTIME_PORT: "8091"
  NO_MONGODB: "true"
  # Workbench
  WORKBENCH_IP: /route for server deployment/
  WORKBENCH_PORT: "3001"
  # Grafana
  GF_SECURITY_ALLOW_EMBEDDING: "true"
  GF_INSTALL_PLUGINS: "marcusolsson-json-datasource,volkovlabs-echarts-panel"
Commands to Create Config Map
oc apply -f ecosystem_configmap.yaml
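The Deployments below read the license key from a Secret named master-key with a key of master-key. A minimal sketch of creating it, assuming the key is supplied as a literal value:

oc create secret generic master-key \
  --from-literal=master-key='<your-license-key>' \
  -n ecosystem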

Persistent Volume Claims

Ideally, you should create a single ReadWriteMany PVC to mount into the various ecosystem.Ai Deployments, as this makes management easier; that approach is illustrated here. If your OpenShift instance does not support ReadWriteMany PVCs, you will need to create multiple PVCs.

Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ecosystem-data-pvc
  namespace: ecosystem
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: /desired storage capacity/Gi
Commands to Create Persistent Volume Claim
oc apply -f ecosystem_pvc.yaml
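After applying, you can confirm that the claim binds and, if it does not, check which storage classes your cluster offers:

oc get pvc ecosystem-data-pvc -n ecosystem
oc get storageclass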

Server Deployment

For the server Deployment, and for the subsequent components:

  • Images are pulled from Docker Hub. In general, the images should be mirrored to a local registry and that registry should be referenced in the Deployment (see the sketch after this list).
  • Resource requests and limits are not specified, as these are specific to each environment.
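As a minimal sketch, assuming an internal registry at registry.example.com (a placeholder) and using the server image referenced below as an example, the image can be mirrored with:

oc image mirror docker.io/ecosystemai/ecosystem-server:arm64 \
  registry.example.com/ecosystem/ecosystem-server:arm64

and an illustrative resources block (values are assumptions, not sizing guidance) can be added under the container spec:

          resources:
            requests:
              cpu: "1"
              memory: 4Gi
            limits:
              cpu: "2"
              memory: 8Gi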
Server Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-server
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-server
  template:
    metadata:
      labels:
        app: ecosystem-server
    spec:
      initContainers:
        - name: init-ecosystem-config
          image: registry.access.redhat.com/ubi9/ubi-minimal:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              cat << 'EOF' > /config/ecosystem.properties
              # ========
              logging.level=5
              cross.origin=ecosystem.ai
              date.format=yyyy-MM-dd'T'HH:mm:ss.SSSSSSXXX
              logger.level=500

              # these only needed for mongoexport and import
              mongo.ingestion.threads=7
              mongo.export.compress=true
              mongo.port=54445
              mongo.server=127.0.0.1
              mongo.data.port=54445
              mongo.data.server=127.0.0.1
              mongo.authentication.source=admin
              mongo.secure.data=true
              mongo.secure.workbench=true
              mongo.ecosystem.user=ecosystem_user
              mongo.ecosystem.password=EcoEco321
              mongo.profiles.user=ecosystem_user
              mongo.profiles.password=EcoEco321

              # this is primary connection string
              #user.profiles=profilesMaster
              user.profiles=ecosystem_meta
              mongo.connect=mongodb://ecosystem_user:EcoEco321@127.0.0.1:54445/?authSource=admin
              #mongo.connect=mongodb://ecosystem_user:EcoEco321@ecosystem-mongo:54445/?authSource=admin

              # prediction server
              prediction.server.h2o.mode=local
              prediction.server.h2o=http://127.0.0.1:54321
              prediction.server.pytorch={"train_model":"http://ecosystem-worker-pytorch:5010/train_model","training_status":"http://ecosystem-worker-pytorch:5010/get_model","download_model":"http://ecosystem-worker-pytorch:5010/download_model","get_model":"http://ecosystem-worker-pytorch:5010/get_model","load_model":"http://ecosystem-worker-pytorch:5000/load_model","get_current_model":"http://ecosystem-worker-pytorch:5000/get_current_model","score_document_info_formatted_h2o":"http://ecosystem-worker-pytorch:5000/score_document_info_formatted_h2o"}
              prediction.server.orbit={"server":"http://ecosystem-worker-orbit:5100/","connect":"mongodb://ecosystem_user:EcoEco321@ecosystem-server:54445/?authSource=admin"}
              prediction.server.prophet={"server":"http://ecosystem-worker-prophet:5110/","connect":"mongodb://ecosystem_user:EcoEco321@ecosystem-server:54445/?authSource=admin"}
              prediction.server.ecosystem=http://127.0.0.1:8080
              model.list.generative={"default":{"model":"llama3.2", "url":"http://host.docker.internal:11434/api/chat"}}

              # data and model storage
              user.home=./
              user.data=/data/
              user.generated.models=/data/models/
              user.deployed.models=/data/deployed/

              # apply for new application password in google: https://support.google.com/accounts/answer/185833
              user.email={"smtp":"smtp.gmail.com","port":587,"login":"","password":"","admin":"","cc":"","rule":"super_only"}

              # used for code generation of runtime
              sourcecontrol.runtime=[{"server":"https://github.com/ecogenetic/ecosystem-runtime.git","branch":"workbench","user":"ecosystemai","password":""}]

              # presto server
              presto.url=jdbc:trino://ecosystem-worker-trino:8084/
              presto.connection=local/master?user=admin

              logging.database=logging
              logging.collection=ecosystemruntime
              logging.collection.response=ecosystemruntime_response
              EOF
          volumeMounts:
            - name: ecosystem-data
              subPath: serverconfig
              mountPath: /config
      containers:
        - name: ecosystem-server
          image: docker.io/ecosystemai/ecosystem-server:arm64
          imagePullPolicy: IfNotPresent
          env:
            - name: MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: master-key
                  key: master-key
            - name: PORT
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: ECOSYSTEM_SERVER_PORT
            - name: IP
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: ECOSYSTEM_SERVER_IP
            - name: ECOSYSTEM_PROP_FILE
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: ECOSYSTEM_PROP_FILE
            - name: CLI_SETTINGS
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: CLI_SETTINGS
            - name: RESET_USER
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: RESET_USER
            - name: NO_WORKBENCH
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: NO_WORKBENCH
          ports:
            - containerPort: 3001
              name: http
            - containerPort: 54321
              name: htwoo
            - containerPort: 54445
              name: mongo
          volumeMounts:
            - name: ecosystem-data
              subPath: data
              mountPath: /data
            - name: ecosystem-data
              subPath: serverconfig
              mountPath: /config
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data-pvc
Commands to Create Server
oc apply -f server-deployment-openshift.yaml
oc expose deployment ecosystem-server --port=3001 --name=server
oc expose deployment ecosystem-server --port=54445 --name=mongo
oc expose deployment ecosystem-server --port=54321 --name=htwoo
oc expose svc server --port=3001
oc expose svc mongo --port=54445
oc expose svc htwoo --port=54321
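Once the routes are exposed, list them to confirm the generated hostnames; the server route is the value to use for WORKBENCH_IP in the ConfigMap above:

oc get routes -n ecosystem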

Notebooks and Grafana Deployments

For the notebooks Deployment, we configure runAsUser: 1001570001. This UID corresponds to a pre-created user in the Jupyter Notebooks environment; the user is pre-created because creating users at runtime would require the pod to run as root. If this runAsUser is not allowed by your service account, you will need to contact ecosystem.Ai to have an allowed UID preconfigured as a user.
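On OpenShift, the UID range a namespace is allowed to run with is recorded in the openshift.io/sa.scc.uid-range annotation, so you can check whether 1001570001 falls inside your namespace's range before deploying:

oc describe namespace ecosystem | grep sa.scc.uid-range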

The Grafana Deployment uses the nojwt image tag, which requires some manual configuration of the connection between Grafana and the server. Automating this connection can cause permission issues in OpenShift.

Notebooks and Grafana Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-notebooks
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-notebooks
  template:
    metadata:
      labels:
        app: ecosystem-notebooks
    spec:
      containers:
        - name: ecosystem-notebooks
          securityContext:
            runAsUser: 1001570001
          image: docker.io/ecosystemai/ecosystem-notebooks:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              name: notebooks
            - containerPort: 8010
              name: alt
          volumeMounts:
            - name: ecosystem-data
              mountPath: "/app/Shared Projects"
              subPath: "notebooks-users/notebooks"
            - name: ecosystem-data
              mountPath: "/home"
              subPath: "notebooks-users"
            - name: ecosystem-data
              subPath: data
              mountPath: "/data"
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data-pvc
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-grafana
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-grafana
  template:
    metadata:
      labels:
        app: ecosystem-grafana
    spec:
      containers:
        - name: ecosystem-grafana
          image: docker.io/ecosystemai/ecosystem-grafana:nojwt
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_SECURITY_ALLOW_EMBEDDING
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: GF_SECURITY_ALLOW_EMBEDDING
            - name: GF_AUTH_JWT_URL
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: GF_AUTH_JWT_URL
            - name: GF_AUTH_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: GF_AUTH_USERNAME
            - name: GF_AUTH_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: GF_AUTH_PASSWORD
          ports:
            - containerPort: 3000
              name: http
          volumeMounts:
            - name: ecosystem-data
              subPath: grafana
              mountPath: /var/lib/grafana
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data-pvc
      restartPolicy: Always
Commands to Create Notebooks and Grafana
oc apply -f notebooks-grafana-deployment-openshift.yaml
oc expose deployment ecosystem-grafana --port=3000
oc expose svc ecosystem-grafana --port=3000
oc expose deployment ecosystem-notebooks --port=8000 --name=jupyter
oc expose deployment ecosystem-notebooks --port=8010 --name=pythonserver
oc expose svc jupyter --port=8000
oc expose svc pythonserver --port=8010

Workbench and Runtime Deployments

The workbench Deployment utilises a ConfigMap which sets the port on which the workbench starts up. In this case we start up the workbench on port 8008.

A single runtime Deployment is created; separate Deployments should be created for each use case that needs to be pushed to a runtime. The replica count is not tuned in the runtime Deployment, as this assumes that a HorizontalPodAutoscaler will be created for the runtime (a sketch is given after the deployment commands below). Alternatively, the replica count can be set to a level that can handle the anticipated load.

Workbench and Runtime Deployments
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: ecosystem
data:
  default.conf: |
    server {
        listen 8008 default_server;
        listen [::]:8008 default_server;
        root /usr/share/nginx/html;
        index index.html;
        server_name _;
        location / {
            try_files $uri$args $uri$args/ /index.html;
        }
        location ~* \.(js|css|ttf|ttc|otf|eot|woff|woff2)$ {
            add_header access-control-allow-origin "*";
            expires max;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-workbench
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-workbench
  template:
    metadata:
      labels:
        app: ecosystem-workbench
    spec:
      containers:
        - name: ecosystem-workbench
          image: docker.io/ecosystemai/ecosystem-workbench:latest
          volumeMounts:
            - name: nginx-config
              mountPath: "/etc/nginx/conf.d"
          securityContext:
            runAsGroup: 0
          imagePullPolicy: IfNotPresent
          env:
            - name: IP
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: WORKBENCH_IP
            - name: PORT
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: WORKBENCH_PORT
          ports:
            - containerPort: 8008
              name: http
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-runtime1
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-runtime1
  template:
    metadata:
      labels:
        app: ecosystem-runtime1
    spec:
      containers:
        - name: ecosystem-runtime1
          image: docker.io/ecosystemai/ecosystem-runtime-solo:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: ecosystem-data
              subPath: data
              mountPath: /data
          env:
            - name: MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: master-key
                  key: master-key
            - name: NO_MONGODB
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: NO_MONGODB
            - name: MONITORING_DELAY
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: MONITORING_DELAY
            - name: PORT
              valueFrom:
                configMapKeyRef:
                  name: ecosystem-env
                  key: ECOSYSTEM_RUNTIME_PORT
          ports:
            - containerPort: 8091
              name: http
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data-pvc
      restartPolicy: Always
Commands to Create Workbench and Runtime
oc apply -f workbench-runtime-deployment-openshift.yaml
oc expose deployment ecosystem-workbench --port=8008
oc expose svc ecosystem-workbench --port=8008
oc expose deployment ecosystem-runtime1 --port=8091
oc expose svc ecosystem-runtime1 --port=8091
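As noted above, the runtime is normally scaled by a HorizontalPodAutoscaler rather than a fixed replica count. A minimal sketch targeting ecosystem-runtime1 follows; the min/max replica counts and the CPU target are illustrative assumptions, and CPU-based scaling also requires resource requests to be set on the runtime container:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecosystem-runtime1
  namespace: ecosystem
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecosystem-runtime1
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70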

Workbench2 Deployments

Workbench2 incorporates the generative AI-enabled UI and the notebooks functionality, through which the Python package can be used.

Workbench2 Deployments
apiVersion: v1
kind: ConfigMap
metadata:
  name: workbench2-config
  namespace: ecosystem
data:
  MONGODB_URI: "mongo_connection_string"
  MONGODB_DATABASE: "ecosystem_meta"
  ECOSYSTEM_SERVER: "http://server-ecosystem.apps-crc.testing"
  BACKEND_PORT: "8001"
  CORS_ORIGINS: "http://localhost:5270,http://workbench2.ecosystem.svc.cluster.local:5270"
  LOG_LEVEL: "INFO"
  H2O_URL: "http://htwoo.ecosystem.svc.cluster.local:54321"
  H2O_CONTAINER_DATA_PATH: "/data"
  LOCAL_DATA_PATH: "/data"
  H2O_MODELS: "/data/models"
  H2O_SCORER_PORT: "9090"
  H2O_MODELS_DIR: "/data/models"
  H2O_OFFLINE: "http://localhost:9090"
  H2O_PREFER_OFFLINE_SCORING: "true"
  TRINO_HOST: "trino.ecosystem.svc.cluster.local"
  TRINO_PORT: "8080"
  TRINO_USER: "ecosystem_user"
  TRINO_CATALOG: "mongodb"
  TRINO_SCHEMA: "master"
  CHAT_SERVER: "https://bedrock-runtime.us-west-2.amazonaws.com"
  CHAT_SERVER_TYPE: "bedrock"
  CHAT_SERVER_MODEL: "qwen.qwen3-coder-480b-a35b-v1:0"
  NGINX_PORT: "5270"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: workbench2-nginx-config
  namespace: ecosystem
data:
  supervisord.conf: |
    [supervisord]
    nodaemon=true
    logfile=/var/log/supervisor/supervisord.log
    pidfile=/tmp/supervisord.pid
    childlogdir=/var/log/supervisor

    [unix_http_server]
    file=/tmp/supervisor.sock
    chmod=0700

    [rpcinterface:supervisor]
    supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

    [supervisorctl]
    serverurl=unix:///tmp/supervisor.sock

    # ==============================================================================
    # H2O Java Scorer
    # ==============================================================================
    [program:h2o-scorer]
    command=java -jar /app/h2o/h2o-scorer-1.0.0.jar --port %(ENV_H2O_SCORER_PORT)s --models %(ENV_H2O_MODELS_DIR)s
    directory=/app/h2o
    autostart=true
    autorestart=true
    startsecs=10
    stopwaitsecs=30
    stdout_logfile=/var/log/supervisor/h2o-scorer.log
    stderr_logfile=/var/log/supervisor/h2o-scorer-error.log
    stdout_logfile_maxbytes=50MB
    stderr_logfile_maxbytes=50MB
    environment=JAVA_OPTS="-Xmx2g -Xms512m"
    priority=100

    # ==============================================================================
    # Backend FastAPI
    # ==============================================================================
    [program:backend]
    command=python -m uvicorn app.main:app --host 0.0.0.0 --port %(ENV_BACKEND_PORT)s
    directory=/app/backend
    autostart=true
    autorestart=true
    startsecs=5
    stopwaitsecs=30
    stdout_logfile=/dev/stdout
    stdout_logfile_maxbytes=0
    stderr_logfile=/dev/stderr
    stderr_logfile_maxbytes=0
    environment=PYTHONPATH="/app/backend/src",PYTHONUNBUFFERED="1"
    priority=200

    # ==============================================================================
    # Nginx (Frontend)
    # ==============================================================================
    [program:nginx]
    command=/usr/sbin/nginx -g "daemon off;"
    autostart=true
    autorestart=true
    startsecs=5
    stopwaitsecs=10
    stdout_logfile=/var/log/supervisor/nginx.log
    stderr_logfile=/var/log/supervisor/nginx-error.log
    stdout_logfile_maxbytes=50MB
    stderr_logfile_maxbytes=50MB
    priority=300

    # ==============================================================================
    # Group all services
    # ==============================================================================
    [group:ecosystem]
    programs=h2o-scorer,backend,nginx
    priority=999
---
apiVersion: v1
kind: Secret
metadata:
  name: workbench2-secrets
  namespace: ecosystem
type: Opaque
stringData:
  CHAT_SERVER_KEY: "INSERTKEYHERE"
  VITE_GOOGLE_MAPS_API_KEY: "INSERTKEYHERE"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workbench2
  namespace: ecosystem
spec:
  replicas: 1
  selector:
    matchLabels:
      app: workbench2
  template:
    metadata:
      labels:
        app: workbench2
    spec:
      containers:
        - name: workbench2
          image: docker.io/ecosystemai/ecosystem-workbench2:amd64
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 5270
          envFrom:
            - configMapRef:
                name: workbench2-config
            - secretRef:
                name: workbench2-secrets
          resources:
            requests:
              memory: 1Gi
            limits:
              memory: 2Gi
          volumeMounts:
            - name: nginx-config
              mountPath: "/etc/supervisor/conf.d/supervisord.conf"
              subPath: supervisord.conf
            - name: ecosystem-data
              subPath: data
              mountPath: /data
            - name: ecosystem-data
              subPath: workbench2-logs
              mountPath: /var/log/supervisor
          securityContext:
            runAsGroup: 0
      volumes:
        - name: nginx-config
          configMap:
            name: workbench2-nginx-config
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data-pvc
Commands to Create Workbench2
oc apply -f new-workbench.yaml
oc expose deployment workbench2 --port=5270
oc expose svc workbench2 --port=5270
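The MONGODB_URI value in the workbench2-config ConfigMap is a placeholder. If you are using the MongoDB bundled with the server Deployment, a connection string of the following form should work, where the hostname is an assumption based on the mongo service created earlier and the credentials match those in ecosystem.properties:

MONGODB_URI: "mongodb://ecosystem_user:EcoEco321@mongo.ecosystem.svc.cluster.local:54445/?authSource=admin"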

Conclusion

That’s it! You have now configured your ecosystem.Ai instance on OpenShift. Superset and Airflow components can be added using helm charts if required.
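If you do need Superset or Airflow, a minimal sketch of adding the community Helm charts follows; the release names are assumptions, and both charts typically need OpenShift-specific security context values, so consult their documentation before installing:

helm repo add superset https://apache.github.io/superset
helm repo add apache-airflow https://airflow.apache.org
helm repo update
helm install superset superset/superset -n ecosystem
helm install airflow apache-airflow/airflow -n ecosystem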
