OpenShift
ecosystem.Ai can be installed on OpenShift, and the installation can be tested locally using crc (OpenShift Local). Below are example Deployment configurations for the server, workbench, notebooks, runtime and grafana components.
Environment Variables
A number of environment variables can be used when starting up the ecosystem.Ai Deployments. These can be set in a ConfigMap. In addition, you should create a secret containing your license key.
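As a sketch, the ConfigMap and Secret could look like the following. The variable names and the secret key name shown here are illustrative assumptions; use the names documented for your ecosystem.Ai version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecosystem-config
data:
  # Illustrative environment variables; consult the ecosystem.Ai
  # documentation for the exact names your version expects.
  EXAMPLE_SETTING: "value"
---
apiVersion: v1
kind: Secret
metadata:
  name: ecosystem-license
type: Opaque
stringData:
  # The key name is an assumption; store your license key here.
  license-key: "<your-license-key>"
```

The ConfigMap can then be referenced from each Deployment with `envFrom`, and the license key injected with a `secretKeyRef`.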
Persistent Volume Claims
Ideally you should create a ReadWriteMany PVC to mount to the various ecosystem.Ai Deployments as it makes management easier. This is illustrated here. If your OpenShift instance does not support ReadWriteMany PVCs then you will need to create multiple PVCs.
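A minimal ReadWriteMany PVC along these lines could be used (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ecosystem-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  # storageClassName must reference a class in your cluster that
  # supports ReadWriteMany (typically a file-based provisioner).
```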
Server Deployment
For the server Deployment, and for the subsequent components:
- Images are pulled from Docker Hub. In production, the images should generally be stored in a local registry, and that registry should be referenced in the Deployment.
- Resources are not specified as they will be specific to the environment.
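A sketch of the server Deployment is shown below. The image path, mount path and environment variable name for the license key are assumptions for illustration; substitute the values for your environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-server
  template:
    metadata:
      labels:
        app: ecosystem-server
    spec:
      containers:
        - name: ecosystem-server
          # Replace with the image in your local registry; this path is illustrative.
          image: registry.example.com/ecosystemai/ecosystem-server:latest
          envFrom:
            - configMapRef:
                name: ecosystem-config
          env:
            - name: LICENSE_KEY  # variable name is an assumption
              valueFrom:
                secretKeyRef:
                  name: ecosystem-license
                  key: license-key
          volumeMounts:
            - name: ecosystem-data
              mountPath: /data  # mount path is illustrative
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data
```

Resource requests and limits would be added under the container spec once sized for your environment.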
Notebooks and Grafana Deployments
For the notebooks Deployment, we configure runAsUser: 1001570001. This UID corresponds to a pre-created user in the Jupyter Notebooks environment. We pre-create the user because creating a user at startup would require the pod to run as root. If this runAsUser is not allowed by your serviceAccount, you will need to contact ecosystem.Ai to have an allowed UID preconfigured as a user.
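The relevant fragment of the notebooks pod spec would look roughly like this (the image path is illustrative; only the securityContext is the point here):

```yaml
# Fragment of the notebooks Deployment pod template spec.
spec:
  securityContext:
    runAsUser: 1001570001  # UID of the user pre-created in the notebooks image
  containers:
    - name: ecosystem-notebooks
      # Replace with the image in your local registry; this path is illustrative.
      image: registry.example.com/ecosystemai/ecosystem-notebooks:latest
```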
The grafana deployment uses the nojwt image tag. This requires some manual configuration of the connection between grafana and the server. Automating the connection between grafana and the server can cause permission issues in OpenShift.
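The container fragment for grafana would then reference the nojwt tag, for example (the image path is an illustrative assumption):

```yaml
containers:
  - name: grafana
    # The nojwt tag skips the automated grafana-to-server connection,
    # which can run into permission issues on OpenShift; the connection
    # is instead configured manually after startup.
    image: registry.example.com/ecosystemai/ecosystem-grafana:nojwt
```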
Workbench and Runtime Deployments
The workbench Deployment utilises a ConfigMap which sets the port on which the workbench starts up. In this case we start up the workbench on port 8008.
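A sketch of that ConfigMap is shown below. The key name is an assumption; use whatever key the workbench image reads its port from:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workbench-config
data:
  # Key name is illustrative; it sets the port the workbench starts on.
  PORT: "8008"
```

The workbench Deployment would then reference this ConfigMap (for example via `envFrom`) so the container starts on port 8008.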
A single runtime Deployment is created; a separate Deployment should be created for each use case that needs to be pushed to a runtime. The replica count is not specified in the runtime Deployment, which assumes that a HorizontalPodAutoscaler will be created for the runtime. Alternatively, the replica count can be set to a level that can handle the anticipated load.
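A HorizontalPodAutoscaler targeting the runtime Deployment could be sketched as follows (the Deployment name, replica bounds and CPU target are illustrative and should be tuned to your load):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecosystem-runtime
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecosystem-runtime  # must match the runtime Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that the HPA requires CPU resource requests to be set on the runtime container for utilization-based scaling to work.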
Conclusion
That’s it! You have now configured your ecosystem.Ai instance on OpenShift. Superset and Airflow components can be added using their Helm charts if required.