Managing Plugins in the Operator Environment

To manage Digital.ai Deploy plugins on a Deploy cluster created with the Operator-based installer, follow these high-level steps:

  1. Create a temporary pod—dai-xld-plugin-management.
  2. Stop all the Deploy pods except the newly created pod—dai-xld-plugin-management.
  3. Log on (SSH) to the newly created temporary pod—dai-xld-plugin-management.
  4. Add or remove plugins using the Plugin Manager CLI.
  5. Restart all the Deploy pods.
  6. Delete the temporary pod—dai-xld-plugin-management.

Note: This topic uses the default namespace, digitalai, for illustrative purposes. Use your own namespace if you have installed Deploy in a custom namespace.

  1. Verify the PVC name in your current namespace (it depends on the CR name):

    ❯ kubectl get pvc -n digitalai
    NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                   AGE
    data-dai-xld-postgresql-0                       Bound    pvc-6878064b-aa7e-4bf8-9cef-2d8754181f2d   1Gi        RWO            vp-azure-aks-test-cluster-disk-storage-class   10m
    data-dai-xld-rabbitmq-0                         Bound    pvc-794e00a7-5689-4cc3-a16b-6e5c15c62f99   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   10m
    data-dir-dai-xld-digitalai-deploy-cc-server-0   Bound    pvc-7c793808-5792-4ccd-8664-4c9f7614ed2a   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   10m
    data-dir-dai-xld-digitalai-deploy-master-0      Bound    pvc-bb365ffb-a4eb-4f48-a8ca-2141f2eb4404   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   10m
    data-dir-dai-xld-digitalai-deploy-master-1      Bound    pvc-6102c45b-2399-49d2-90bc-024befca15ba   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   6m41s
    data-dir-dai-xld-digitalai-deploy-worker-0      Bound    pvc-97f7dad9-be8c-4d00-8d19-883ba77915ef   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   10m
    data-dir-dai-xld-digitalai-deploy-worker-1      Bound    pvc-9bed5dc1-6b8c-4c4c-9037-2d257957c4d5   1Gi        RWO            vp-azure-aks-test-cluster-file-storage-class   9s

    Suppose the Deploy master’s PVC name is data-dir-dai-xld-digitalai-deploy-master-0.
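
    If you prefer to capture just the master PVC name, a simple filter over the PVC list can help; the output shown matches the example listing above:

    ❯ kubectl get pvc -n digitalai -o name | grep deploy-master-0
    persistentvolumeclaim/data-dir-dai-xld-digitalai-deploy-master-0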

  2. Create a pod-dai-xld-plugin-management.yaml file and add the following code to the file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dai-xld-plugin-management
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: sleeper
          command: ["/bin/sh"]
          args: ["-c", "cp -f /opt/xebialabs/db-libs/postgresql*.jar /opt/xebialabs/xl-deploy-server/lib; sleep 1d;"]
          image: xebialabs/xl-deploy:<deploy version>
          imagePullPolicy: Always
          volumeMounts:
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/work
              subPath: work
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/conf
              subPath: conf
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/centralConfiguration
              subPath: centralConfiguration
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/ext
              subPath: ext
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/hotfix/lib
              subPath: lib
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/hotfix/plugins
              subPath: plugins
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/hotfix/satellite-lib
              subPath: satellite-lib
            - name: data-dir
              mountPath: /opt/xebialabs/xl-deploy-server/log
              subPath: log
      restartPolicy: Never
      volumes:
        - name: data-dir
          persistentVolumeClaim:
            claimName: <deploy PVC name>

    Replace the placeholders:

    • <deploy version> - with your Deploy version
    • <deploy PVC name> - with the PVC name from the first step
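
    For example, with an illustrative Deploy version of 22.3.0 and the master PVC name from the first step, the two placeholder lines would read:

    image: xebialabs/xl-deploy:22.3.0
    ...
    claimName: data-dir-dai-xld-digitalai-deploy-master-0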
  3. Apply the pod-dai-xld-plugin-management.yaml file.

    ❯ kubectl apply -f pod-dai-xld-plugin-management.yaml -n digitalai
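
    Optionally, wait until the pod is ready before continuing (the same check is used in the automation script at the end of this topic):

    ❯ kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n digitalai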
  4. Stop the Deploy pods.

    1. Set the number of masters and workers to 0.

      ❯ kubectl get digitalaideploys.xld.digital.ai -n digitalai
      NAME      AGE
      dai-xld   179m
      
      ❯ kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
       --type=merge \
       --patch '{"spec":{"XldMasterCount":0, "XldWorkerCount":0}}'
    2. Restart the Deploy stateful sets; the names depend on the CR name (alternatively, you can wait a few seconds, as the update is applied automatically after the previous change):

      ❯ kubectl rollout restart sts dai-xld-digitalai-deploy-worker -n digitalai
      ❯ kubectl rollout restart sts dai-xld-digitalai-deploy-master -n digitalai

    Wait until all the Deploy pods terminate.
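
    For example, you can watch the pods in the namespace until the master and worker pods have terminated:

      ❯ kubectl get pods -n digitalai --watch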

  5. Log on (SSH) to the newly created pod.

    kubectl exec -it dai-xld-plugin-management -n digitalai -- bash
  6. Add or remove plugins using the Plugin Manager CLI. After you log on, go to the bin directory:

    cd /opt/xebialabs/xl-deploy-server/bin

    See Plugin Manager CLI for more information about adding and removing plugins.

    For example, the following commands delete the tomcat-plugin plugin.

    bash-4.2$ ./plugin-manager-cli.sh -list
    ...
    
    bash-4.2$ ./plugin-manager-cli.sh -delete tomcat-plugin
    
    tomcat-plugin deleted from database
    Please verify and delete plugin file in other cluster members' plugins directory if needed
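
    To add a plugin instead, copy the plugin file into the pod from another terminal with kubectl cp, and then install it with the -add option inside the pod (the plugin file name below is illustrative; the automation script at the end of this topic uses the same approach):

    ❯ kubectl cp /tmp/demo-plugin-22.3.0-705.113.xldp digitalai/dai-xld-plugin-management:/tmp/

    bash-4.2$ ./plugin-manager-cli.sh -add /tmp/demo-plugin-22.3.0-705.113.xldp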

    Exit the SSH shell using the exit command.

  7. Restart the Deploy pods:

    1. Set the number of masters and workers back to the required number (2 replicas in this example).

      ❯ kubectl get digitalaideploys.xld.digital.ai -n digitalai
      NAME      AGE
      dai-xld   179m
      
      ❯ kubectl patch digitalaideploys.xld.digital.ai dai-xld -n digitalai \
       --type=merge \
       --patch '{"spec":{"XldMasterCount":2, "XldWorkerCount":2}}'
    2. Restart the Deploy stateful sets; the names depend on the CR name (alternatively, you can wait a few seconds, as the update is applied automatically after the previous change):

      ❯ kubectl rollout restart sts dai-xld-digitalai-deploy-worker -n digitalai
      ❯ kubectl rollout restart sts dai-xld-digitalai-deploy-master -n digitalai
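
    Optionally, follow the rollout until the masters and workers are back up (the statefulset names assume the default CR name used in this topic):

      ❯ kubectl rollout status sts dai-xld-digitalai-deploy-master -n digitalai --timeout=300s
      ❯ kubectl rollout status sts dai-xld-digitalai-deploy-worker -n digitalai --timeout=300s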
  8. Delete the temporary pod—dai-xld-plugin-management.

    ❯ kubectl delete pod dai-xld-plugin-management -n digitalai
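
    To confirm that the temporary pod has been removed and the Deploy master and worker pods are running again, list the pods in the namespace:

    ❯ kubectl get pods -n digitalai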

A Script to Automate All the Above Steps

Replace the environment variables with the values for your Kubernetes environment.

Here is an example bash script that shows how this can be done with kubectl.

Note: Stop all the pods before you run the script.

#!/bin/bash

SOURCE_PLUGIN_DIR=/tmp
PLUGIN_NAME=demo-plugin
PLUGIN_VERSION=22.3.0-705.113
SOURCE_PLUGIN_FILE=$PLUGIN_NAME-$PLUGIN_VERSION.xldp
DEPLOY_MASTER_STS=dai-xld-digitalai-deploy-master
DEPLOY_WORKER_STS=dai-xld-digitalai-deploy-worker
DEPLOY_CR=dai-xld
NAMESPACE=digitalai
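# Capture the current Deploy replica counts so they can be restored later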
MASTER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldMasterCount}')
WORKER_REPLICA_COUNT=$(kubectl get digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE -o 'jsonpath={.spec.XldWorkerCount}')

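# Create the temporary plugin-management pod and scale the Deploy masters and workers down to 0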
kubectl apply -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
      --type=merge \
      --patch '{"spec":{"XldMasterCount":0, "XldWorkerCount":0}}'
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s
kubectl wait --for condition=Ready --timeout=60s pod dai-xld-plugin-management -n $NAMESPACE

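# Copy the plugin file into the management pod and install it with the Plugin Manager CLI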
kubectl cp $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE $NAMESPACE/dai-xld-plugin-management:$SOURCE_PLUGIN_DIR/  
kubectl exec dai-xld-plugin-management -n $NAMESPACE -- /opt/xebialabs/xl-deploy-server/bin/plugin-manager-cli.sh -add $SOURCE_PLUGIN_DIR/$SOURCE_PLUGIN_FILE

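# Restore the original replica counts, remove the temporary pod, and wait for the Deploy pods to come back up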
kubectl patch digitalaideploys.xld.digital.ai $DEPLOY_CR -n $NAMESPACE \
      --type=merge \
      --patch "{\"spec\":{\"XldMasterCount\":$MASTER_REPLICA_COUNT, \"XldWorkerCount\":$WORKER_REPLICA_COUNT }}"
kubectl delete -f pod-dai-xld-plugin-management.yaml -n $NAMESPACE
sleep 30; kubectl rollout status sts $DEPLOY_MASTER_STS -n $NAMESPACE --timeout=300s
kubectl rollout status sts $DEPLOY_WORKER_STS -n $NAMESPACE --timeout=300s