Install Deploy on Kubernetes On-premise Platform

This section describes how to perform a fresh installation of the Deploy application on a Kubernetes on-premise cluster using the operator-based installer.

Intended Audience

This guide is intended for administrators with cluster administrator credentials who are responsible for application deployment.

Before You Begin

The following prerequisites are required to install Deploy using the operator-based installer:

  • Docker version 17.03 or later
  • The kubectl command-line tool
  • Access to a Kubernetes cluster version 1.17 or later
  • The Kubernetes cluster configuration (kubeconfig) file

Step 1—Create a folder for installation tasks

Create a folder on your workstation from which you will run the installation tasks, for example, DeployInstallation.

Step 2—Download the Operator ZIP

  1. Download the deploy-operator-onprem.zip file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the DeployInstallation folder (a command-line sketch of these two steps follows this list).
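
The commands below sketch Steps 1 and 2 on a Linux workstation; the folder name follows the example above, and you can use any extraction tool you prefer:

```
# Create the working folder and switch into it
mkdir DeployInstallation && cd DeployInstallation
# Download deploy-operator-onprem.zip from the Deploy/Release Software
# Distribution site into this folder, then extract it here
unzip deploy-operator-onprem.zip
```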

Step 3—Update the Kubernetes On-premise Cluster Information

To deploy the Deploy application on the Kubernetes cluster, update the infrastructure file (infrastructure.yaml) in the location where you extracted the ZIP file with the values from your Kubernetes on-premise cluster configuration (kubeconfig) file, as described in the following table. The kubeconfig file is typically located at ~/.kube/config; ensure the file is available in your home directory. A sketch of how to read these values from the kubeconfig file follows the table.

Note: The deployment will not proceed if the infrastructure.yaml file contains incorrect details.

| Infrastructure File Parameter | Kubernetes Cluster Configuration (kubeconfig) File Parameter | Parameter Value |
| --- | --- | --- |
| apiServerURL | server | Enter the server URL of the cluster. |
| caCert | certificate-authority-data | The value in the kubeconfig file is base64-encoded; decode it before setting this parameter. |
| tlsCert | client-certificate-data | The value in the kubeconfig file is base64-encoded; decode it before setting this parameter. |
| tlsPrivateKey | client-key-data | The value in the kubeconfig file is base64-encoded; decode it before setting this parameter. |
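
If kubectl is already configured for the target cluster, you can read these values directly from the kubeconfig file. The commands below are a sketch that assumes the default kubeconfig location and a single cluster and user entry; adjust the JSONPath indexes if your file contains several entries:

```
# API server URL (maps to apiServerURL)
kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.server}'; echo

# The certificate and key data in the kubeconfig file are base64-encoded; decode them
kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d   # caCert
kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d            # tlsCert
kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-key-data}' | base64 -d                    # tlsPrivateKey
```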

Step 4—Update the Default Digital.ai Deploy Custom Resource

  1. Update the daideploy_cr.yaml file in the digitalai-deploy/kubernetes folder of the extracted ZIP file.
  2. Update the mandatory parameters as described in the following table:

    Note: For deployments on test environments, most of the parameters in the daideploy_cr.yaml file can be used with their default values.

    | Parameter | Description |
    | --- | --- |
    | K8sSetup.Platform | Platform on which to install the chart. For the Kubernetes on-premise cluster, you must set the value to PlainK8s. |
    | ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. |
    | xldLicense | The Digital.ai Deploy license file, converted to base64 format. |
    | RepositoryKeystore | The repository keystore file, converted to base64 format. |
    | KeystorePassphrase | The passphrase for the RepositoryKeystore. |
    | postgresql.persistence.storageClass | Storage class to be defined for PostgreSQL. |
    | rabbitmq.persistence.storageClass | Storage class to be defined for RabbitMQ. |
    | Persistence.StorageClass | The storage class to be defined for the Kubernetes on-premise platform. |

    Note: For deployments on production environments, you must configure all the parameters required for your Kubernetes on-premise production setup in the daideploy_cr.yaml file. The table in Step 4.4 lists these parameters and their default values; override the defaults with values that match your setup and workload.

  3. Update the daideploy_cr.yaml file with the license and keystore details.

    1. Convert the Deploy license and the repository keystore files to base64 format.
    2. Run the following commands:

      • To convert the xldLicense into base64 format, run:

        cat <License.lic> | base64 -w 0
      • To convert RepositoryKeystore to base64 format, run:

        cat <keystore.jks> | base64 -w 0

        Note: The above commands are for Linux-based systems. Windows has no built-in command that directly performs base64 encoding and decoding; however, you can use the built-in certutil -encode/-decode command to perform base64 encoding and decoding indirectly.

  4. Update the default parameters as described in the following table:

    Note: The following table describes the default parameters in the daideploy_cr.yaml file. If you want to use your own database and messaging queue, refer to the Using Existing DB and Using Existing MQ topics and update the daideploy_cr.yaml file accordingly. For information on how to configure SSL/TLS with Digital.ai Deploy, see Configuring SSL/TLS. An illustrative sketch of the resulting custom resource file follows the table.

    | Parameter | Description | Default |
    | --- | --- | --- |
    | K8sSetup.Platform | Platform on which to install the chart. Allowed values are PlainK8s and AWSEKS. | PlainK8s |
    | XldMasterCount | Number of master replicas. | 3 |
    | XldWorkerCount | Number of worker replicas. | 3 |
    | ImageRepository | Image name. | Truncated |
    | ImageTag | Image tag. | 10.1 |
    | ImagePullPolicy | Image pull policy. Defaults to 'Always' if the image tag is 'latest'; otherwise set to 'IfNotPresent'. | Always |
    | ImagePullSecret | docker-registry secret names. Secrets must be created manually in the namespace. | nil |
    | haproxy-ingress.install | Install the haproxy subchart. If you already have haproxy installed, set 'install' to 'false'. | true |
    | haproxy-ingress.controller.kind | Type of deployment, DaemonSet or Deployment. | DaemonSet |
    | haproxy-ingress.controller.service.type | Kubernetes Service type for haproxy. Can be changed to LoadBalancer or NodePort. | NodePort |
    | nginx-ingress-controller.install | Set to false when haproxy is used as the ingress controller. | false (for HAProxy) |
    | nginx-ingress.controller.install | Install the nginx subchart. If you already have nginx installed, set 'install' to 'false'. | true |
    | nginx-ingress.controller.image.pullSecrets | pullSecrets name for the nginx ingress controller. | myRegistryKeySecretName |
    | nginx-ingress.controller.replicaCount | Number of replicas. | 1 |
    | nginx-ingress.controller.service.type | Kubernetes Service type for nginx. Can be changed to LoadBalancer or NodePort. | NodePort |
    | haproxy-ingress.install | Set to false when nginx is used as the ingress controller. | false (for NGINX) |
    | ingress.Enabled | Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. | true |
    | ingress.annotations | Annotations for the ingress controller. | ingress.kubernetes.io/ssl-redirect: "false", kubernetes.io/ingress.class: haproxy, ingress.kubernetes.io/rewrite-target: /, ingress.kubernetes.io/affinity: cookie, ingress.kubernetes.io/session-cookie-name: JSESSIONID, ingress.kubernetes.io/session-cookie-strategy: prefix, ingress.kubernetes.io/config-backend: |
    | ingress.path | Route an Ingress to different Services based on the path. | /xl-deploy/ |
    | ingress.hosts | DNS name for accessing the UI of Digital.ai Deploy. | example.com |
    | ingress.tls.secretName | Secret that holds the TLS private key and certificate. | example-secretsName |
    | ingress.tls.hosts | DNS name for accessing the UI of Digital.ai Deploy using TLS. | example.com |
    | AdminPassword | Admin password for xl-deploy. | If no password is provided, a random 10-character alphanumeric string is generated |
    | xldLicense | Content of the xl-deploy.lic file, converted to base64. | nil |
    | RepositoryKeystore | Content of the keystore.jks file, converted to base64. | nil |
    | KeystorePassphrase | Passphrase for the keystore.jks file. | nil |
    | resources | CPU/memory resource requests and limits. Change as needed. | nil |
    | postgresql.install | Install the PostgreSQL chart (single instance). If you have an existing database deployment, set 'install' to 'false'. | true |
    | postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres). | postgres |
    | postgresql.postgresqlPassword | PostgreSQL user password. | Random 10-character alphanumeric string |
    | postgresql.postgresqlExtendedConf.listenAddresses | TCP/IP address(es) on which the server listens for connections from client applications. | '*' |
    | postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections. | 500 |
    | postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information (can be used with initdbScriptsConfigMap or initdbScripts). The value is evaluated as a template. | postgresql-init-sql-xld |
    | postgresql.service.port | PostgreSQL port. | 5432 |
    | postgresql.persistence.enabled | Enable persistence using a PVC. | true |
    | postgresql.persistence.size | PVC storage request for the PostgreSQL volume. | 50Gi |
    | postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template. | nil |
    | postgresql.resources.requests | CPU/memory resource requests. | memory: 1Gi, cpu: 250m |
    | postgresql.resources.limits | CPU/memory resource limits. | memory: 2Gi, cpu: 1 |
    | postgresql.nodeSelector | Node labels for pod assignment. | {} |
    | postgresql.affinity | Affinity labels for pod assignment. | {} |
    | postgresql.tolerations | Toleration labels for pod assignment. | [] |
    | UseExistingDB.Enabled | To use an existing database, set this to 'true' and change 'postgresql.install' to 'false'. | false |
    | UseExistingDB.XL_DB_URL | Database URL for xl-deploy. | nil |
    | UseExistingDB.XL_DB_USERNAME | Database user for xl-deploy. | nil |
    | UseExistingDB.XL_DB_PASSWORD | Database password for xl-deploy. | nil |
    | rabbitmq-ha.install | Install the RabbitMQ chart. If you have an existing message queue deployment, set 'install' to 'false'. | true |
    | rabbitmq-ha.rabbitmqUsername | RabbitMQ application username. | guest |
    | rabbitmq-ha.rabbitmqPassword | RabbitMQ application password. | Random 24-character alphanumeric string |
    | rabbitmq-ha.rabbitmqErlangCookie | Erlang cookie. | DEPLOYRABBITMQCLUSTER |
    | rabbitmq-ha.rabbitmqMemoryHighWatermark | Memory high watermark. | 500MB |
    | rabbitmq-ha.rabbitmqNodePort | Node port. | 5672 |
    | rabbitmq-ha.extraPlugins | Additional plugins to add to the default configmap. | rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_jms_topic_exchange, rabbitmq_management |
    | rabbitmq-ha.replicaCount | Number of replicas. | 3 |
    | rabbitmq-ha.rbac.create | If true, create and use RBAC resources. | true |
    | rabbitmq-ha.service.type | Type of service to create. | ClusterIP |
    | rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created. | false |
    | rabbitmq-ha.persistentVolume.size | Persistent volume size. | 20Gi |
    | rabbitmq-ha.persistentVolume.annotations | Persistent volume annotations. | {} |
    | rabbitmq-ha.persistentVolume.resources | Persistent volume resources. | {} |
    | rabbitmq-ha.persistentVolume.requests | CPU/memory resource requests. | memory: 250Mi, cpu: 100m |
    | rabbitmq-ha.persistentVolume.limits | CPU/memory resource limits. | memory: 550Mi, cpu: 200m |
    | rabbitmq-ha.definitions.policies | HA policies to add to definitions.json. | {"name": "ha-all", "pattern": ".*", "vhost": "/", "definition": {"ha-mode": "all", "ha-sync-mode": "automatic", "ha-sync-batch-size": 1}} |
    | rabbitmq-ha.definitions.globalParameters | Pre-configured global parameters. | {"name": "cluster_name", "value": ""} |
    | rabbitmq-ha.prometheus.operator.enabled | Enable the Prometheus Operator. | false |
    | UseExistingMQ.Enabled | To use an existing message queue, set this to 'true' and change 'rabbitmq-ha.install' to 'false'. | false |
    | UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue. | nil |
    | UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue. | nil |
    | UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue. | nil |
    | UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue. | nil |
    | HealthProbes | Enable health probes. | true |
    | HealthProbesLivenessTimeout | Delay before the liveness probe is initiated. | 90 |
    | HealthProbesReadinessTimeout | Delay before the readiness probe is initiated. | 90 |
    | HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 12 |
    | HealthPeriodScans | How often to perform the probe. | 10 |
    | nodeSelector | Node labels for pod assignment. | {} |
    | tolerations | Toleration labels for pod assignment. | [] |
    | affinity | Affinity labels for pod assignment. | {} |
    | Persistence.Enabled | Enable persistence using a PVC. | true |
    | Persistence.StorageClass | PVC storage class for the volume. | nil |
    | Persistence.Annotations | Annotations for the PVC. | {} |
    | Persistence.AccessMode | PVC access mode for the volume. | ReadWriteOnce |
    | Persistence.XldExportPvcSize | XLD master PVC storage request for the volume. For a production-grade setup, the size must be changed. | 10Gi |
    | Persistence.XldWorkPvcSize | XLD worker PVC storage request for the volume. For a production-grade setup, the size must be changed. | 5Gi |
    | satellite.Enabled | Enable satellite support to use with Deploy. | false |
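
For orientation, the fragment below sketches how the mandatory values from this step typically map into the custom resource file, with each dotted parameter name becoming a nested YAML key. It is only an illustration: the apiVersion, kind, metadata name, and all values are placeholders and assumptions, so edit the daideploy_cr.yaml file shipped in the ZIP rather than recreating it from this fragment.

```
# Illustrative sketch only -- keep the structure already present in the
# extracted daideploy_cr.yaml and change the values to match your setup.
apiVersion: xld.digital.ai/v1alpha1        # assumed; use the value already in the file
kind: DigitalaiDeploy                      # assumed; use the value already in the file
metadata:
  name: dai-xld
spec:
  K8sSetup:
    Platform: PlainK8s                     # required value for Kubernetes on-premise
  ingress:
    hosts:
      - deploy.example.com                 # DNS name for the Deploy UI
  xldLicense: <base64-encoded xl-deploy.lic>
  RepositoryKeystore: <base64-encoded keystore.jks>
  KeystorePassphrase: <keystore passphrase>
  Persistence:
    StorageClass: <storage class for the on-premise platform>
  postgresql:
    persistence:
      storageClass: <storage class for PostgreSQL>
  rabbitmq:
    persistence:
      storageClass: <storage class for RabbitMQ>
```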

Step 5—Download and set up the XL CLI

Download the XL-CLI binaries.

```
wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl
```
**Note**: For `$VERSION`, substitute with the version that matches your product version in the [public folder](https://dist.xebialabs.com/public/xl-cli/).
  1. Enable execute permissions.

    chmod +x xl
  2. Copy the xl binary to a directory that is on your PATH.

    echo $PATH

    Example

    cp xl /usr/local/bin
  3. Verify the XL CLI version.

    xl version

Step 6—Set up the XL Deploy Container instance

  1. Run the following command to download and start the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to http://<host IP address>:4516/. A quick check that the container is up is sketched after this list.
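
Before opening the UI, you can confirm the container started correctly. These are standard Docker commands; the container name xld matches the docker run command above:

```
# Verify the container is running
docker ps --filter name=xld
# Follow the startup logs until the server reports that it is up
docker logs -f xld
```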

Step 7—Activate the deployment process

Go to the root directory of the extracted ZIP file and run the following command:

xl apply -v -f digital-ai.yaml

Step 8—Verify the deployment status

  1. Check the deployment job completion using the XL CLI.
    The deployment job executes the tasks defined in the digital-ai.yaml file sequentially. If an execution error occurs while the scripts run, the system displays error messages. On average, the job takes around 10 minutes to complete. You can also watch the workloads come up in the cluster while the job runs, as sketched after this step.

    Note: The running time depends on the environment.

    Deployment Status

    To troubleshoot runtime errors, see Troubleshooting Operator-Based Installer.
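
In parallel with the XL CLI output, you can watch the workloads start in the cluster. This is a generic sketch; resource names and namespaces depend on your configuration:

```
# Watch all pods until the Deploy, PostgreSQL, and RabbitMQ pods become Ready
kubectl get pods -A -w
```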

Step 9—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

    Successful Deploy Deployment
  • Run the following commands in a terminal or command prompt (the screenshot below shows the expected output; a generic kubectl sketch follows this list):

    Deployment Status Using CLI Command
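
The exact commands are shown in the screenshot above. A generic equivalent, assuming kubectl access to the cluster (namespaces and resource names vary with your setup), is:

```
# Confirm that the Deploy-related workloads and pods are Ready
kubectl get deployments,statefulsets -A
kubectl get pods -A
```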

Step 10—Perform sanity checks

Open the Deploy application and perform the required deployment sanity checks.