This guide walks you through deploying a Digital.ai Deploy Docker image to an on-premise Kubernetes cluster using a Helm chart.
The Helm chart automates and simplifies installing and managing Digital.ai Deploy on an on-premise Kubernetes cluster by providing the essential features you need to keep your clusters up and running.
- Administrators and DevOps with a working knowledge of Docker, Kubernetes, and Helm.
- Digital.ai Deploy users with an understanding of Deploy concepts.
- What are we going to install?
- Before you begin
- Best practices
- Using an existing DB
- Using an existing MQ
- Configuring SSL/TLS
- Sample values.yaml file
The Helm chart installs the following components:
- A single-instance PostgreSQL database pod
- RabbitMQ in a highly available configuration
- The HAProxy Ingress Controller
- Digital.ai Deploy Masters and Workers in a highly available configuration
Note: For production-grade installations, we recommend that you use an external PostgreSQL instance (for example, the CrunchyData PostgreSQL operator). See Configuring the CrunchyData PostgreSQL operator for more details.
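To follow that recommendation, you would typically disable the chart's embedded PostgreSQL and point Deploy at your external database through a `values.yaml` override. The sketch below is illustrative only: the key names (`postgresql.install`, `UseExistingDB`, and the `XLD_DB_*` settings) are assumptions, so verify them against the `values.yaml` shipped with your chart version.

```yaml
# Illustrative values.yaml override (key names are assumptions;
# check your chart version's values.yaml for the exact keys).
postgresql:
  install: false            # skip the bundled single-instance PostgreSQL pod

UseExistingDB:
  Enabled: true
  XLD_DB_URL: jdbc:postgresql://my-postgres.example.com:5432/xld  # placeholder host
  XLD_DB_USER: xld
  XLD_DB_PASS: my-db-password                                     # placeholder secret
```

You would pass this file to Helm with `--values` (or `-f`) at install or upgrade time.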
- The Deploy Helm chart supports Digital.ai Deploy version 9.6 or later.
- You must configure support for a Persistent Volume (PV) provisioner, such as the Network File System (NFS), in the underlying infrastructure, along with the StorageClass that you plan to use with Digital.ai Deploy.
- You need a license file for Digital.ai Deploy in base64-encoded format.
- You need a repository keystore file in base64-encoded format. For more information on keystores, see this article.
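Both the license and the keystore must be supplied as a single base64 string. One way to produce that on Linux is shown below; `deploy-license.lic` is a placeholder standing in for your real license file, and `-w0` is a GNU coreutils option (on macOS, plain `base64` already emits one line).

```shell
# Placeholder file standing in for your real license; substitute your own.
printf 'example license content' > deploy-license.lic

# Encode to a single-line base64 string (-w0 suppresses line wrapping);
# paste the resulting string into the corresponding value in values.yaml.
base64 -w0 deploy-license.lic > deploy-license.b64
cat deploy-license.b64

# Sanity check: decoding should reproduce the original file byte-for-byte.
base64 -d deploy-license.b64 > decoded.lic
diff decoded.lic deploy-license.lic && echo "round-trip OK"
```

The same steps apply to the repository keystore file.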
- xebialabs/xl-deploy - Docker Hub repository for xl-deploy
- stable/rabbitmq-ha - GitHub repository for the RabbitMQ Helm chart
- bitnami/postgresql - GitHub repository for the PostgreSQL Helm chart
- haproxy-ingress/charts - GitHub repository for the HAProxy Ingress Controller Helm chart
- stable/nfs-client-provisioner - GitHub repository for the NFS Client Provisioner Helm chart