Install Release on AWS EKS Cluster

This section describes how to perform a fresh installation of the Release application on an AWS EKS cluster using the operator-based installer.

Intended Audience

This guide is intended for administrators with cluster administrator credentials who are responsible for application deployment.

Before You Begin

The following prerequisites are required for the operator-based installation:

  • Docker version 17.03 or later
  • The kubectl command-line tool
  • Access to a Kubernetes cluster version 1.17 or later
  • The Kubernetes cluster configuration file (kubeconfig)
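The version prerequisites above can be checked from a shell. The `version_ge` helper below is a small illustrative function (not part of the product tooling) that compares dotted version strings with `sort -V`; the actual docker and kubectl invocations are left commented because they require the tools to be installed:

```shell
# Illustrative helper: succeeds when version $1 >= version $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks (run on the workstation that will perform the install):
# docker version --format '{{.Client.Version}}'   # expect >= 17.03
# kubectl version --client --short                # expect >= 1.17
version_ge "1.20" "1.17" && echo "Kubernetes version OK"
```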

Step 1—Create a folder for installation tasks

Create a folder on your workstation from where you will execute the installation tasks, for example, ReleaseInstallation.

Step 2—Download the Operator ZIP

  1. Download the file from the Deploy/Release Software Distribution site.
  2. Extract the ZIP file to the ReleaseInstallation folder.
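Steps 1 and 2 can be performed from a shell. The archive name below is a placeholder for the ZIP you downloaded from the distribution site:

```shell
# Create the working folder for the installation tasks.
mkdir -p ReleaseInstallation

# Extract the Operator ZIP into it (archive name is a placeholder;
# substitute the file you actually downloaded):
# unzip <release-operator>.zip -d ReleaseInstallation
ls -d ReleaseInstallation
```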

Step 3—Update the AWS EKS cluster information

To deploy the Release application on the Kubernetes cluster, update the infrastructure.yaml file (Infrastructure File Parameters) in the ReleaseInstallation folder with the values from your kubeconfig file (AWS EKS Kubernetes Cluster Configuration File Parameters), as described in the table below. By default, the Kubernetes cluster information is stored in ~/.kube/config. Ensure that the kubeconfig file is located in your home directory.

Note: The deployment will not proceed if infrastructure.yaml is updated with incorrect details.

Infrastructure File Parameter    AWS EKS Kubeconfig Parameter    Steps to Follow
apiServerURL    server    Enter the API server URL of the cluster.
caCert    certificate-authority-data    Decode the base64-encoded certificate-authority-data value before entering the certificate.
regionName    region    Enter the AWS region of the cluster.
clusterName    cluster-name    Enter the name of the cluster.
accessKey    Not applicable    The access key that allows the AWS Identity and Access Management (IAM) user to access AWS using the CLI.
Note: This parameter is not available in the Kubernetes configuration file.
accessSecret    Not applicable    The secret access key that the IAM user must provide to access AWS using the CLI.
Note: This parameter is not available in the Kubernetes configuration file.
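Putting the table together, the mapped section of infrastructure.yaml might look like the sketch below. All values are placeholders, and the field nesting is an assumption; keep the structure of the infrastructure.yaml shipped in the Operator ZIP:

```yaml
# Placeholder values only; keep the structure of the shipped infrastructure.yaml.
apiServerURL: https://EXAMPLE1234.gr7.us-east-1.eks.amazonaws.com  # "server" in kubeconfig
caCert: |                      # decoded "certificate-authority-data"
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
regionName: us-east-1          # AWS region of the cluster
clusterName: my-eks-cluster    # cluster name from kubeconfig
accessKey: AKIAEXAMPLE         # IAM access key ID (not in kubeconfig)
accessSecret: wJalrEXAMPLEKEY  # IAM secret access key (not in kubeconfig)
```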

Step 4—Update the default Custom Resource Definitions

  1. Run the following command to get the storage class list:

    kubectl get sc
  2. Run the keytool command below to generate the RepositoryKeystore:

    keytool -genseckey {-alias alias} {-keyalg keyalg} {-keysize keysize} [-keypass keypass] {-storetype storetype} {-keystore keystore} [-storepass storepass]

    For example:

    keytool -genseckey -alias deployit-password-key -keyalg AES -keysize 128 -keypass deployit -storetype jceks -keystore /tmp/repository-keystore.jceks -storepass test123
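To confirm the keystore was generated correctly, you can list its entry with keytool. This assumes a JDK's keytool on the PATH; the fallback message is only for shells where it (or the keystore) is missing:

```shell
# List the generated key entry; prints a fallback message when keytool
# (or the keystore) is not available in this shell.
command -v keytool >/dev/null \
  && keytool -list -storetype jceks \
       -keystore /tmp/repository-keystore.jceks -storepass test123 \
  || echo "keytool or keystore not available in this shell"
```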
  3. Convert the Release license and the repository keystore files to the base64 format:

    • To convert the xlrLicense into base64 format, run:

      cat <License.lic> | base64 -w 0
    • To convert RepositoryKeystore to base64 format, run:

      cat <repository-keystore.jceks> | base64 -w 0

    Note: The above commands are for Linux-based systems. Windows has no built-in command that performs Base64 encoding and decoding directly; however, you can use the built-in certutil -encode and certutil -decode commands to do so indirectly.
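A quick way to confirm that an encoded value is intact is to decode it back and compare it with the original file. A minimal round-trip check on a Linux-based system, using a throwaway sample file rather than a real license:

```shell
# Round-trip check with a sample file: encode, decode, and compare.
printf 'sample-license-content' > /tmp/sample.lic
encoded=$(base64 -w 0 < /tmp/sample.lic)
echo "$encoded" | base64 -d > /tmp/sample.decoded
cmp /tmp/sample.lic /tmp/sample.decoded && echo "encoding verified"
```

The same pattern applies to the real License.lic and repository-keystore.jceks files.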

  4. Update dairelease_cr.yaml file with the mandatory parameters as described in the following table:

    Note: For deployments on test environments, you can use most of the parameters with their default values in the dairelease_cr.yaml file.

    Parameter Description
    KeystorePassphrase The passphrase for the RepositoryKeystore.
    Persistence.StorageClass The storage class defined in the AWS EKS cluster.
    RepositoryKeystore The repository keystore file for Release, converted to base64 format.
    ingress.hosts The DNS name for accessing the Release UI.
    postgresql.persistence.storageClass The storage class to be used by PostgreSQL.
    rabbitmq.persistence.storageClass The storage class to be used by RabbitMQ.
    xlrLicense The Release license, converted to base64 format.

    Note: For deployments on production environments, you must configure all the parameters required for your AWS EKS production setup in the dairelease_cr.yaml file. The table in Step 4.5 lists these parameters and their default values, which you can override to match your setup and workload.
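As an illustration of the mandatory parameters above, a test-environment dairelease_cr.yaml might contain values like the sketch below. The nesting shown is an assumption and the gp2 storage class is only an example; keep the structure of the dairelease_cr.yaml shipped in the Operator ZIP:

```yaml
# Placeholder values; keep the structure of the shipped dairelease_cr.yaml.
xlrLicense: PHhsLWxpY2Vuc2U+...          # base64 of xl-release.lic
RepositoryKeystore: zs7OzgAAAAIAAAAB...  # base64 of repository-keystore.jceks
KeystorePassphrase: test123              # -storepass used when generating the keystore
ingress:
  hosts:
    - release.example.com                # DNS name for the Release UI
Persistence:
  StorageClass: gp2                      # a storage class from "kubectl get sc"
postgresql:
  persistence:
    storageClass: gp2
rabbitmq:
  persistence:
    storageClass: gp2
```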

  5. Update the default parameters as described in the following table:

    Note: The following table describes the default parameters in the dairelease_cr.yaml file. If you want to use your own database and messaging queue, refer to the Using Existing DB and Using Existing MQ topics and update the dairelease_cr.yaml file accordingly. For information on how to configure SSL/TLS with Release, see Configuring SSL/TLS.

Parameter Description Default
K8sSetup.Platform The platform on which you install the chart. Allowed values are PlainK8s and AWSEKS AWSEKS
ImageRepository Image name xebialabs/xl-release
ImageTag Image tag 10.2
ImagePullPolicy Image pull policy. Defaults to Always if the image tag is latest; otherwise set it to IfNotPresent Always
ImagePullSecret Specify docker-registry secret names. Secrets must be manually created in the namespace None
haproxy-ingress.install Install haproxy subchart. If you have haproxy already installed, set install to false FALSE
haproxy-ingress.controller.kind Type of deployment, DaemonSet or Deployment DaemonSet
haproxy-ingress.controller.service.type Kubernetes Service type for haproxy. It can be changed to LoadBalancer or NodePort LoadBalancer
ingress.Enabled Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster TRUE
ingress.annotations Annotations for the Ingress controller (rewrite target /$2, session cookie SESSION_XLR, timeout values "60", ssl-redirect "false") See dairelease_cr.yaml for the exact defaults
ingress.path You can route an Ingress to different Services based on the path /xl-release(/|$)(.*)
ingress.hosts DNS name for accessing the UI of Release None
AdminPassword Admin password for xl-release admin
xlrLicense Convert xl-release.lic files content to base64 None
RepositoryKeystore Convert repository-keystore.jceks files content to base64 None
KeystorePassphrase Passphrase for repository-keystore.jceks file None
Resources CPU/Memory resource requests/limits. User can change the parameter accordingly. None
postgresql.install Install the postgresql subchart (single-instance PostgreSQL). If you have an existing database deployment, set install to false. TRUE
postgresql.postgresqlUsername PostgreSQL username (a non-admin user is created when postgresqlUsername is not postgres) postgres
postgresql.postgresqlPassword PostgreSQL user password postgres
postgresql.postgresqlExtendedConf.listenAddresses Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications *
postgresql.postgresqlExtendedConf.maxConnections Maximum total connections 500
postgresql.initdbScriptsSecret Secret with initdb scripts that contain sensitive information
Note: This parameter can be used with initdbScriptsConfigMap or initdbScripts. The value is evaluated as a template.
postgresql.service.port PostgreSQL port 5432
postgresql.persistence.enabled Enable persistence using PVC TRUE
postgresql.persistence.size PVC Storage Request for PostgreSQL volume 50Gi
postgresql.persistence.existingClaim Provide an existing PersistentVolumeClaim, the value is evaluated as a template. None
postgresql.resources.requests CPU/Memory resource requests requests: cpu: 250m, memory: 256Mi
postgresql.nodeSelector Node labels for pod assignment {}
postgresql.affinity Affinity labels for pod assignment {}
postgresql.tolerations Toleration labels for pod assignment []
UseExistingDB.Enabled If you want to use an existing database, change postgresql.install to false. FALSE
UseExistingDB.XLDBURL Database URL for xl-release None
UseExistingDB.XLDBUSERNAME Database user for xl-release None
UseExistingDB.XLDBPASSWORD Database password for xl-release None
rabbitmq.install Install rabbitmq chart. If you have an existing message queue deployment, set install to false. TRUE
rabbitmq.extraPlugins Additional plugins to add to the default configmap rabbitmq_jms_topic_exchange
rabbitmq.replicaCount Number of replicas 3
rabbitmq.rbac.create If true, create and use RBAC resources TRUE
rabbitmq.service.type Type of service to create ClusterIP
UseExistingMQ.Enabled If you want to use an existing message queue, change rabbitmq.install to false FALSE
UseExistingMQ.XLRTASKQUEUE_USERNAME Username for the xl-release task queue None
UseExistingMQ.XLRTASKQUEUE_PASSWORD Password for the xl-release task queue None
UseExistingMQ.XLRTASKQUEUE_URL URL for the xl-release task queue None
UseExistingMQ.XLRTASKQUEUEDRIVERCLASS_NAME Driver class name for the xl-release task queue None
HealthProbes Enables health probes for the application TRUE
HealthProbesLivenessTimeout Delay before liveness probe is initiated 60
HealthProbesReadinessTimeout Delay before readiness probe is initiated 60
HealthProbeFailureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 12
HealthPeriodScans How often to perform the probe 10
nodeSelector Node labels for pod assignment {}
tolerations Toleration labels for pod assignment []
Persistence.Enabled Enable persistence using PVC TRUE
Persistence.StorageClass PVC Storage Class for volume None
Persistence.Annotations Annotations for the PVC {}
Persistence.AccessMode PVC Access Mode for volume ReadWriteOnce

Step 5—Download and set up the XL CLI

  1. Download the XL-CLI binaries.


    Note: For $VERSION, substitute with the version that matches your product version in the public folder.
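The download command itself is not shown above; the URL pattern below is an assumption based on the public XebiaLabs distribution site and should be verified against the site before use:

```shell
# Assumed URL pattern (verify against the distribution site);
# substitute your product version and platform.
VERSION="10.2.0"
PLATFORM="linux-amd64"     # or darwin-amd64 / windows-amd64
XL_URL="https://dist.xebialabs.com/public/xl-cli/${VERSION}/${PLATFORM}/xl"
echo "$XL_URL"
# curl -LO "$XL_URL"       # uncomment to download
```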

  2. Enable execute permissions.

    chmod +x xl
  3. Copy the XL binary to a directory on your PATH.

    echo $PATH


    cp xl /usr/local/bin
  4. Verify the installed XL CLI version.

    xl version

Step 6—Set up the local XL Deploy Container instance

  1. Run the following command to download and start the local Deploy instance:

    docker run -d -e "ADMIN_PASSWORD=admin" -e "ACCEPT_EULA=Y" -p 4516:4516 --name xld xebialabs/xl-deploy:10.2
  2. To access the Deploy interface, go to:
    http://<host IP address>:4516/

Step 7—Activate the deployment process

Go to the root of the extracted ZIP file and run the following command:

xl apply -v -f digital-ai.yaml

Step 8—Verify the deployment status

  1. Check the deployment job completion using XL CLI.
    The deployment job starts the execution of various tasks as defined in the digital-ai.yaml file in a sequential manner. If you encounter an execution error while running the scripts, the system displays error messages. The average time to complete the job is around 10 minutes.

    Note: The running time depends on the environment.

    [Screenshot: Deployment Status]

    To troubleshoot runtime errors, see Troubleshooting Operator Based Installer.

Step 9—Verify if the deployment was successful

To verify the deployment succeeded, do one of the following:

  • Open the local Deploy application, go to the Explorer tab, and from Library, click Monitoring > Deployment tasks

    [Screenshot: Successful Deploy Deployment]
  • Run the following command in a terminal or command prompt:

    [Screenshot: Deployment Status Using CLI Command]
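As an additional check, the Release pods can be inspected with kubectl. The namespace name depends on your setup and "digitalai" below is only an assumption; the fallback message keeps the snippet harmless on a workstation with no reachable cluster:

```shell
# Inspect the Release pods; the namespace "digitalai" is an assumption,
# adjust it to your setup. Prints a message when no cluster is reachable.
kubectl get pods -n digitalai 2>/dev/null \
  || echo "no Kubernetes cluster reachable from this shell"
```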

Step 10—Perform sanity checks

Open the newly installed Release application and perform the required sanity checks.