Kubernetes
ecosystem.Ai can be installed on Kubernetes. This can be tested locally using Minikube. Here we give example deployment configurations for the server, workbench, notebooks, runtime and Grafana components.
Environment Variables
A number of environment variables can be used when starting up the ecosystem.Ai Deployments. These can be set in a ConfigMap. In addition, you should create a Secret containing your license key.
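As a minimal sketch, the ConfigMap and Secret might look like the following. The variable and key names here are illustrative assumptions, not the authoritative ecosystem.Ai settings; substitute the names required by your ecosystem.Ai version.

```yaml
# Illustrative ConfigMap; the variable names shown are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecosystem-config
data:
  MONGO_CONNECT: "mongodb://ecosystem-server-db:27017"  # example setting, assumed name
---
# Secret holding the license key; the key name is an assumption.
apiVersion: v1
kind: Secret
metadata:
  name: ecosystem-license
type: Opaque
stringData:
  ECOSYSTEM_LICENSE_KEY: "<your-license-key>"
```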
Persistent Volume Claims
Ideally, you should create a ReadWriteMany PVC to mount to the various ecosystem.Ai Deployments, as this makes management easier. This is illustrated below. If your Kubernetes instance does not support ReadWriteMany PVCs, you will need to create multiple PVCs.
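A sketch of such a PVC is shown here. The storageClassName is an assumption; use a class in your cluster that supports the ReadWriteMany access mode.

```yaml
# Shared ReadWriteMany PVC mounted by the ecosystem.Ai Deployments.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ecosystem-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client  # assumed class; pick one your cluster provides
  resources:
    requests:
      storage: 10Gi
```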
Server Deployment
The server Deployment and services are below. Here, and in the subsequent components:
- Images are pulled from Docker Hub. In production, the images should be stored in a local registry, and that registry should be referenced in the Deployment.
- LoadBalancer Services are created, which exposes the services externally in Minikube. In other environments, the Ingress approach supported by your Kubernetes instance should be used to expose the services.
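A minimal sketch of the server Deployment and its LoadBalancer Service follows. The image name, container port, and mount path are assumptions; adjust them to match your installation. The Deployment consumes the ConfigMap, Secret and PVC defined above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-server
  template:
    metadata:
      labels:
        app: ecosystem-server
    spec:
      containers:
        - name: ecosystem-server
          image: ecosystemai/ecosystem-server:latest  # assumed image; mirror to your local registry
          ports:
            - containerPort: 3001  # assumed server port
          envFrom:
            - configMapRef:
                name: ecosystem-config
            - secretRef:
                name: ecosystem-license
          volumeMounts:
            - name: ecosystem-data
              mountPath: /data  # assumed mount path
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data
---
# LoadBalancer Service exposing the server externally in Minikube.
apiVersion: v1
kind: Service
metadata:
  name: ecosystem-server
spec:
  type: LoadBalancer
  selector:
    app: ecosystem-server
  ports:
    - port: 3001
      targetPort: 3001
```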
Notebooks and Grafana Deployments
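The notebooks and Grafana Deployments follow the same pattern as the server, each with a matching LoadBalancer Service. A condensed sketch is given below; the notebooks image name and port are assumptions, while Grafana uses its standard image and default port 3000.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-notebooks
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-notebooks
  template:
    metadata:
      labels:
        app: ecosystem-notebooks
    spec:
      containers:
        - name: ecosystem-notebooks
          image: ecosystemai/ecosystem-notebooks:latest  # assumed image name
          ports:
            - containerPort: 8888  # assumed notebook port
          volumeMounts:
            - name: ecosystem-data
              mountPath: /data  # assumed mount path
      volumes:
        - name: ecosystem-data
          persistentVolumeClaim:
            claimName: ecosystem-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000  # Grafana's default port
```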
Workbench and Runtime Deployments
The workbench Deployment utilises a ConfigMap which sets the port on which the workbench starts up. In this case we start up the workbench on port 8008.
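As a sketch, the ConfigMap and the workbench Deployment that consumes it might look like this. The environment variable used to set the port, and the image name, are assumptions; the port value of 8008 matches the text above.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecosystem-workbench-config
data:
  PORT: "8008"  # assumed variable name; starts the workbench on port 8008
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-workbench
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecosystem-workbench
  template:
    metadata:
      labels:
        app: ecosystem-workbench
    spec:
      containers:
        - name: ecosystem-workbench
          image: ecosystemai/ecosystem-workbench:latest  # assumed image name
          ports:
            - containerPort: 8008
          envFrom:
            - configMapRef:
                name: ecosystem-workbench-config
```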
A single runtime Deployment is created. Separate Deployments should be created for each use case that needs to be pushed to a runtime. The replica count is not specified in the runtime Deployment, which assumes that a HorizontalPodAutoscaler will be created for the runtime. Alternatively, the replica count can be set to a level that can handle the anticipated load. A sketch of both resources is shown below.
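In this sketch, the runtime Deployment omits the replica count and a HorizontalPodAutoscaler manages scaling. The image name, port, CPU request and scaling thresholds are assumptions to be tuned for your workload.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecosystem-runtime-usecase1  # one Deployment per use case
spec:
  # replicas intentionally omitted; the HPA below manages the count
  selector:
    matchLabels:
      app: ecosystem-runtime-usecase1
  template:
    metadata:
      labels:
        app: ecosystem-runtime-usecase1
    spec:
      containers:
        - name: ecosystem-runtime
          image: ecosystemai/ecosystem-runtime:latest  # assumed image name
          ports:
            - containerPort: 8091  # assumed runtime port
          resources:
            requests:
              cpu: 500m  # a CPU request is required for CPU-based autoscaling
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecosystem-runtime-usecase1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecosystem-runtime-usecase1
  minReplicas: 1
  maxReplicas: 5  # assumed ceiling; size for your anticipated load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```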
Conclusion
That’s it! You have now configured your ecosystem.Ai instance on Kubernetes.