Install Release on OpenShift AWS Cluster

This section describes the procedure for a fresh installation of the Release application on an OpenShift AWS cluster using the operator-based installer.

Intended Audience

This guide is intended for administrators with cluster administrator credentials who are responsible for application deployment.

Before You Begin

The following prerequisites are required to install Release using the operator-based installer:

  • Docker version 17.03 or later
  • The OpenShift oc command-line tool
  • Access to a Kubernetes cluster version 1.17 or later
  • The Release application backup and Kubernetes cluster configuration backup for recovery
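
You can optionally confirm the tooling prerequisites from your workstation before you begin. The following standard commands assume the docker and oc clients are already on your PATH:

    # Confirm the Docker client version (17.03 or later is required)
    docker --version

    # Confirm the oc client is installed and can reach the cluster
    oc version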

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the Operator ZIP file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.

Step 3—Update the platform information

To deploy the Release application on the OpenShift AWS cluster, update the infrastructure file (infrastructure.yaml) in the folder where you extracted the ZIP file with the parameters from the OpenShift AWS cluster configuration (kubeconfig) file, as described in the following table. The kubeconfig file is in its default location, ~/.kube/config; ensure that it is present in your home directory.

Note: The deployment will fail if the infrastructure.yaml file is updated with incorrect details.

| Infrastructure File Parameter | OpenShift AWS Cluster Configuration File Parameter | Parameter Value |
| --- | --- | --- |
| serverUrl | server | Enter the server URL. |
| openshiftToken | openshiftToken | The access token that allows the Identity and Access Management (IAM) user to access AWS using the CLI. Note: This parameter is not available in the Kubernetes configuration file. |
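
For illustration only, the relevant fragment of infrastructure.yaml might look like the following. Keep the structure of the file shipped in the extracted ZIP and change only the two values; the URL and token shown here are hypothetical placeholders:

    # Fragment of infrastructure.yaml (illustrative; edit the values only)
    # serverUrl comes from the "server" entry in ~/.kube/config
    # openshiftToken is the OpenShift access token (not present in the kubeconfig file)
    serverUrl: https://api.example-cluster.example.com:6443
    openshiftToken: sha256~REPLACE-WITH-YOUR-TOKEN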

Step 4—Update the Custom Resource Definitions (dairelease_cr.yaml)

  1. Update the dairelease_cr file with the details mentioned in the following table:

    | Field in dairelease_cr.yaml | Description | Default Value |
    | --- | --- | --- |
    | AdminPassword | The administrator password | No default value |
    | KeystorePassphrase | The passphrase for the keystore.jks file | No default value |
    | RepositoryKeystore | The keystore.jks file content, converted to Base64 format | No default value |
    | StorageClass | The storage class (AWS) | No default value |
    | xlrLicense | The Release license | No default value |
  2. Convert the Release license file to Base64 format:

    cat <License.lic> | base64 -w 0
  3. Convert the RepositoryKeystore file to Base64 format:

    cat <keystore.jks> | base64 -w 0

    Example

    keytool -genseckey -alias deployit-passsword-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123
  4. Run the following command to retrieve the StorageClass values for the Server, PostgreSQL, and RabbitMQ:

    oc get sc
    
    
  5. Update the mandatory parameters as described in the following table (an illustrative fragment combining these settings appears after this list):

    Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

    | Parameter | Description |
    | --- | --- |
    | Persistence.StorageClass | PVC storage class for the volume |
    | Postgresql.Persistence.StorageClass | PVC storage class for PostgreSQL |
    | Rabbitmq.Persistence.StorageClass | PVC storage class for RabbitMQ |

    Note: For deployments on production environments, you must configure all the parameters required for your OpenShift AWS production setup in the dairelease_cr.yaml file. The table in Step 4.6 lists these parameters and their default values, which you can override in the custom resource file to suit your setup and workload requirements.

  6. Update the default parameters as described in the following table based on your requirements:

    Note: The following table describes the default parameters in the Digital.ai dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to the Using Existing DB and Using Existing MQ topics, and update the dairelease_cr.yaml file accordingly. For information on how to configure AWS RDS with Digital.ai Release, see Configuring AWS RDS.

    | Field to be updated in dairelease_cr.yaml | Description | Default Value |
    | --- | --- | --- |
    | AdminPassword | The administrator password | NA |
    | ImageRepository | Image name | xebialabs/xl-release |
    | ImageTag | Image tag | 10.2 |
    | AdminPassword | Admin password for xl-release | admin |
    | Resources | CPU/memory resource requests and limits; change as needed for your workload | NA |
    | postgresql.install | Install the PostgreSQL chart (single instance). If you have an existing database deployment, set install to false. | TRUE |
    | postgresql.postgresqlUsername | PostgreSQL user (creates a non-admin user when postgresqlUsername is not postgres) | postgres |
    | postgresql.postgresqlPassword | PostgreSQL user password | postgres |
    | postgresql.replication.enabled | Enable replication | false |
    | postgresql.postgresqlExtendedConf.listenAddresses | Specifies the TCP/IP address(es) on which the server listens for connections from client applications | * |
    | postgresql.postgresqlExtendedConf.maxConnections | Maximum total connections | 500 |
    | postgresql.initdbScriptsSecret | Secret with initdb scripts that contain sensitive information. Note: This can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template. | postgresql-init-sql-xld |
    | postgresql.service.port | PostgreSQL port | 5432 |
    | postgresql.persistence.enabled | Enable persistence using PVC | TRUE |
    | postgresql.persistence.size | PVC storage request for the PostgreSQL volume | 50Gi |
    | postgresql.persistence.existingClaim | Provide an existing PersistentVolumeClaim; the value is evaluated as a template | NA |
    | postgresql.resources.requests | CPU/memory resource requests; change as needed for your workload | cpu: 250m, memory: 256Mi |
    | postgresql.nodeSelector | Node labels for pod assignment | {} |
    | postgresql.affinity | Affinity labels for pod assignment | {} |
    | postgresql.tolerations | Toleration labels for pod assignment | [] |
    | UseExistingDB.Enabled | If you want to use an existing database, change postgresql.install to false. | false |
    | UseExistingDB.XL_DB_URL | Database URL for xl-release | NA |
    | UseExistingDB.XL_DB_USERNAME | Database user for xl-release | NA |
    | UseExistingDB.XL_DB_PASSWORD | Database password for xl-release | NA |
    | rabbitmq.install | Install the RabbitMQ chart. If you have an existing message queue deployment, set install to false. | TRUE |
    | rabbitmq.extraPlugins | Additional plugins to add to the default configmap | rabbitmq_jms_topic_exchange |
    | rabbitmq.replicaCount | Number of replicas | 3 |
    | rabbitmq.rbac.create | If true, create and use RBAC resources | TRUE |
    | rabbitmq.service.type | Type of service to create | ClusterIP |
    | rabbitmq.persistence.enabled | If true, persistent volume claims are created | TRUE |
    | rabbitmq.persistence.size | Persistent volume size | 8Gi |
    | UseExistingMQ.Enabled | If you want to use an existing message queue, change rabbitmq-ha.install to false. | false |
    | UseExistingMQ.XLD_TASK_QUEUE_USERNAME | Username for the xl-deploy task queue | NA |
    | UseExistingMQ.XLD_TASK_QUEUE_PASSWORD | Password for the xl-deploy task queue | NA |
    | UseExistingMQ.XLD_TASK_QUEUE_URL | URL for the xl-deploy task queue | NA |
    | UseExistingMQ.XLD_TASK_QUEUE_DRIVER_CLASS_NAME | Driver class name for the xl-deploy task queue | NA |
    | HealthProbes | Enable health probes | TRUE |
    | HealthProbesLivenessTimeout | Delay before the liveness probe is initiated | 60 |
    | HealthProbesReadinessTimeout | Delay before the readiness probe is initiated | 60 |
    | HealthProbeFailureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 12 |
    | HealthPeriodScans | How often to perform the probe | 10 |
    | nodeSelector | Node labels for pod assignment | {} |
    | tolerations | Toleration labels for pod assignment | [] |
    | Persistence.Enabled | Enable persistence using PVC | TRUE |
    | Persistence.Annotations | Annotations for the PVC | {} |
    | Persistence.AccessMode | PVC access mode for the volume | ReadWriteOnce |
    | Persistence.XldMasterPvcSize | XLD master PVC storage request for the volume. For a production-grade setup, the size must be changed. | 10Gi |
    | Persistence.XldWorkPvcSize | XLD worker PVC storage request for the volume. For a production-grade setup, the size must be changed. | 10Gi |
    | satellite.Enabled | Enable satellite support for use with Deploy | false |
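
As a minimal sketch that ties together the values gathered in the previous substeps, the fragment below shows how the mandatory fields might look in dairelease_cr.yaml. The nesting is assumed from the parameter names in the tables above, and every value is a placeholder; the file shipped in the extracted ZIP is authoritative:

    # Illustrative fragment of dairelease_cr.yaml (structure assumed; values are placeholders)
    spec:
      AdminPassword: "my-admin-password"        # administrator password
      xlrLicense: "PGxpY2Vuc2U+..."             # output of: cat <License.lic> | base64 -w 0
      RepositoryKeystore: "zs7OzgAAAAI..."      # output of: cat <keystore.jks> | base64 -w 0
      KeystorePassphrase: "test123"             # passphrase used when the keystore was created
      Persistence:
        StorageClass: "gp2"                     # a storage class name returned by: oc get sc
      Postgresql:
        Persistence:
          StorageClass: "gp2"
      Rabbitmq:
        Persistence:
          StorageClass: "gp2"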

Step 5—Set up the CLI

  1. Download the XL-CLI libraries.

    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl

    Note: For $VERSION, substitute with the version that matches your product version in the public folder.

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory in your PATH.

    echo $PATH
    cp xl /usr/local/bin
  4. Verify the XL CLI release version.

    xl version
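
For example, assuming 10.2.0 is the XL CLI version that matches your product (check the public folder for the exact number), the full sequence looks like this:

    # Hypothetical version number; replace with the one that matches your product
    VERSION=10.2.0
    wget https://dist.xebialabs.com/public/xl-cli/$VERSION/linux-amd64/xl
    chmod +x xl
    # Copying to /usr/local/bin may require sudo
    cp xl /usr/local/bin
    xl version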

Step 6—Set up the Deploy container instance

  1. Run the following command to download and run the Digital.ai Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. Go to the following URL to access the Deploy application:
    http://<host IP address>:4516/
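
Optionally, confirm that the container has started before opening the URL. These are standard Docker commands, and the container name xld matches the docker run command above:

    # Check that the xld container is running
    docker ps --filter name=xld

    # Follow the startup log until the server reports it has started
    docker logs -f xld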

Step 7—Delete RabbitMQ PVC of the existing deployment

  1. Verify the existing PVCs

    oc get pvc
  2. The RabbitMQ PVC names end with rabbitmq-0, rabbitmq-1, and rabbitmq-2.
    To delete the RabbitMQ PVCs, run the following command:

    oc delete pvc <RabbitMQ PVC name>

    Example

    oc delete pvc data-brownfield-rabbitmq-0
    oc delete pvc data-brownfield-rabbitmq-1
    oc delete pvc data-brownfield-rabbitmq-2
    

    Note: Ensure that you delete only the RabbitMQ PVCs.
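
To be sure you are removing only the RabbitMQ claims, you can first list the matching PVCs. This assumes the PVC names contain the string rabbitmq, as in the example above:

    # List only the RabbitMQ persistent volume claims before deleting them
    oc get pvc | grep rabbitmq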

Step 8—Activate the Release Deployment process

  1. Go to the root of the extracted folder and run the following command to activate the Release deployment process:

    xl apply -v -f digital-ai.yaml

Step 9—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 1 minute.

    Note: The running time depends on the environment.

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.
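
While the job runs, you can also watch the pods directly with oc. These are standard commands and do not depend on the release name:

    # Watch the pods until they reach the Running/Ready state (Ctrl+C to stop)
    oc get pods -w

    # Inspect recent events if a pod stays in Pending or CrashLoopBackOff
    oc get events --sort-by=.metadata.creationTimestamp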

Step 10—Verify if the deployment was successful

Access the UI and check the deployment status.

To check the deployment status using CLI, run the following command:

oc get pod

Step 11—Perform sanity checks

Open the Release application and perform the required deployment sanity checks.