Best practices

  • Schedule a minimum of three replicas for the Digital.ai Deploy master so it can handle network-split scenarios, and always keep an odd number of replicas (see the sketch after this list).
  • The recommended resources for the Digital.ai Deploy pod are a memory request of 1700Mi, a memory limit of 6Gi, a CPU request of 0.7, and a CPU limit of 3.
  • Set the parameter “archiveOnDelete: true” on the StorageClass. This ensures that PVs are archived rather than deleted. See the nfs-client-provisioner documentation for further details.
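
A minimal sketch of how these settings could look, combining a fragment of the chart values with a StorageClass manifest for the nfs-client-provisioner. The value keys (replicaCount, resources) and the provisioner name are assumptions that depend on the chart version and on how the provisioner was installed; verify them against your own values.yaml and cluster before applying.

```yaml
# values.yaml fragment (sketch) — key names are assumptions; check the chart's values.yaml.
replicaCount: 3            # odd number of masters to tolerate a network split
resources:
  requests:
    memory: "1700Mi"
    cpu: "0.7"
  limits:
    memory: "6Gi"
    cpu: "3"
---
# Separate file: StorageClass for the nfs-client-provisioner with archiveOnDelete enabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-client-provisioner   # depends on your provisioner installation
parameters:
  archiveOnDelete: "true"   # archive the PV data instead of deleting it
```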

Recommendations

  • Before making any modification to the StorageClass, PVs, or PVCs, we recommend that you take a backup of all volumes, including those used by the Digital.ai Deploy masters, Digital.ai Deploy workers, the PostgreSQL database, and RabbitMQ.
  • For production-grade installations, we recommend using an external PostgreSQL Helm chart that supports high availability, so that the database is not a single point of failure (see the sketch after this list).
  • Stop all running deployments before upgrading.
  • We also recommend upgrading one minor version at a time.
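
One possible way to disable the bundled database and point the chart at an externally managed, highly available PostgreSQL instance. All key names below (postgresql.install, externalDatabase.*) are hypothetical placeholders; look up the actual keys in the chart's values.yaml before using them.

```yaml
# Hypothetical values — verify the exact key names in the chart's values.yaml.
postgresql:
  install: false                       # do not deploy the bundled single-instance PostgreSQL
externalDatabase:
  host: postgres-ha.example.internal   # endpoint of the externally managed HA PostgreSQL
  port: 5432
  database: xldeploy
  username: xldeploy
  existingSecret: xld-db-credentials   # password kept in a pre-created Kubernetes Secret
```

With the bundled database disabled, backup and failover of the database are handled by the external PostgreSQL setup rather than by this chart.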

Limitations

  • This chart deploys a single instance of PostgreSQL. If the database pod restarts for any reason, it takes a few seconds to come back up on another Kubernetes node. You may experience some downtime, but because the database uses a persistent volume, no data is lost.
  • If you use custom plugins with Digital.ai Deploy, you must build a custom Docker image that contains those plugins (see the sketch after this list).
  • Deploy Satellites are currently not supported, whether the satellite runs on a virtual machine or on Kubernetes. The upcoming version will support Satellites running on virtual machines outside the Kubernetes environment; there are currently no plans to support Satellites running as containers on Kubernetes. As a workaround, we recommend building a custom Docker image of Digital.ai Deploy to establish connectivity with satellites running outside the Kubernetes cluster.
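
A sketch of a custom image that bundles extra plugins; the same approach applies when adding configuration for connectivity to satellites outside the cluster. The base image name, tag, and plugin directory are assumptions based on typical Digital.ai Deploy installations; confirm them against the official image documentation before building.

```dockerfile
# Dockerfile (sketch) — base image, tag, and plugin path are assumptions to verify.
FROM xebialabs/xl-deploy:22.1

# Copy custom plugins into the server's plugin directory so they are loaded at startup.
COPY plugins/ /opt/xebialabs/xl-deploy-server/plugins/
```

Build and push the image to your registry (for example, docker build -t my-registry/xl-deploy-custom:22.1 .), then reference it in the chart's image settings.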

Next Step