Introduction to XL UP (BETA)

XL UP - Kubernetes Installation Wizard is a feature of the XL CLI that allows you to deploy the DevOps Platform to Kubernetes. XL UP uses XebiaLabs’ own best-in-class deployment tools and blueprint-based CLI to spin up the platform safely and quickly according to your specifications, and to perform administrative functions such as upgrading, undeploying, and monitoring instances.

Benefits of XL UP

For users who want to run the DevOps Platform in Kubernetes, XL UP offers some tangible benefits:

  • Greatly simplifies the process of installing Release and Deploy in the cloud. For example, it:

    • Allows you to define your configuration through a straightforward set of questions in the developer-friendly command line interface
    • Automates many installation steps, such as setting up the PostgreSQL database, and takes care of necessary platform details such as load balancers and file storage
    • Ensures correct configuration of the DevOps Platform according to our best practice recommendations
  • Deploys a set of default monitoring tools in the containers
  • Generates a set of answer files that allow users to repeat the deployment with high predictability
  • Supports a variety of cloud installation environments

How it works

XL UP uses existing Deploy/Release tools to manage the deployment of the DevOps Platform to Kubernetes. This starts with a set of custom CLI commands and blueprints where you can specify the deployment paths and configuration options.

Additionally, XL UP uses a lightweight version of Deploy to manage the deployment of the platform into Kubernetes. This does not have a GUI and limits the available functionality to what is strictly needed to enable the deployment of the objects into the target environments.

The following videos will help you to set up a basic and advanced deployment using XL UP:

Quick local setup

Advanced deployment

XL UP

To test a basic XL UP deployment, you can run the XL UP workshop.

XL UP command line flags and syntax

The full list of CLI commands for XL UP can be found here: XL UP command details.

XL UP supported platforms and requirements

Currently, XL UP can deploy the DevOps Platform to the following environments; you will be asked to specify the target environment during setup. Each environment has a set of requirements, specified below:

  • Local Kubernetes for Docker Desktop on Mac or Windows
  • Minikube (Unix/Linux only)

    Note that Docker Desktop and Minikube are intended for local testing purposes only.

  • AWS EKS
  • Azure AKS
  • Google Kubernetes Engine
  • Multinode Kubernetes cluster. This can be any kind of multinode K8s cluster installed on premises or in the cloud that is managed by the user, and not by a cloud provider such as Google or Amazon.

Generic requirements

For the machine that will run XL UP:

  • Docker installed and available
  • The latest XL CLI downloaded
  • Internet access to the Distribution website and Docker Hub
  • License files and a keystore file for the DevOps Platform

    If you do not already have a keystore file, you can generate one with the keytool utility from the Java JDK.
    For example, the following command generates a keystore file with the store password ‘test123’:

      keytool -genseckey -alias deployit-passsword-key -keyalg aes -keysize 128 -keypass deployit -keystore /tmp/repository-keystore.jceks -storetype jceks -storepass test123

Kubernetes requirements

You must have a Kubernetes cluster available in the intended deployment target environment. You should also have a kubeconfig file that allows the Deploy/Release CLI to install software in your Kubernetes cluster.
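XL UP reads the cluster connection details from a standard kubeconfig file (by default `~/.kube/config`). As a rough sketch only, the relevant parts of such a file look like the following; the cluster, context, and user names and the server URL are placeholders for your own values:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster                        # placeholder name
    cluster:
      server: https://203.0.113.10:6443     # your cluster's API server
      certificate-authority-data: <base64 CA certificate>
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: my-user
current-context: my-context                 # the context the CLI will use
users:
  - name: my-user
    user:
      client-certificate-data: <base64 client certificate>
      client-key-data: <base64 client key>
```

Tools such as kubectl use the same file, so if kubectl can reach the cluster, the Deploy/Release CLI should be able to as well.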

Required system resources

The resources required for a Kubernetes setup are:

Minimum requirements

  • Local - minimum of 1 node with 6GB of memory and 4 CPU cores
  • Production - minimum of 3 nodes with 16GB of memory and 4 CPU cores for each node

Optimal requirements

Optimally, the setup should have 24GB of memory and 8 CPU cores allocated for each node.

Local Kubernetes from Docker Desktop for Mac/Windows

The local K8s deployment will probably not be used in production, and is mostly for testing the deployment. We suggest the following resources:

  1. Allocate 6 GB of memory and 4 CPUs for Docker in the advanced settings: macOS / Windows
  2. Enable Docker’s built-in Kubernetes engine: macOS / Windows
  3. XL UP deployments to local K8s setups such as Minikube or K8s for Docker Desktop do not use persistent storage by default. If you wish to persist data on your local setup, use the Advanced option and select an external database.

Minikube

The local Minikube deployment will probably not be used in production, and is mostly for testing the deployment. We suggest the following setup:

  1. Allocate 6 GB of memory and 4 CPUs for Minikube in the command line:

    minikube config set memory 6000
    minikube config set cpus 4
  2. Make sure Minikube is running:

    minikube start
    minikube v1.6.2 on Darwin 10.15.2
    Selecting 'hyperkit' driver from user configuration (alternates: [virtualbox])
    Creating hyperkit VM (CPUs=4, Memory=6000MB, Disk=20000MB) ...
    Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
    Pulling images ...
    Launching Kubernetes ...
    Waiting for cluster to come online ...
    Done! kubectl is now configured to use "minikube"
  3. XL UP deployments to local K8s setups such as Minikube or K8s for Docker Desktop do not use persistent storage by default. If you wish to persist data on your local setup, use the Advanced option and select an external database.

For more information, see https://kubernetes.io/docs/setup/learning-environment/minikube/

AWS EKS

An AWS EFS file system is needed, and the EFS file system must be mountable on the AWS EKS worker nodes.

Azure AKS

Note: the AKS deployment can take a long time to deploy. This can sometimes result in failures due to AKS taking too long to provision storage for the pods.

Google Kubernetes Engine

A Google Cloud Platform (GCP) Filestore file system is needed, and the Filestore share must be mountable on the GCP GKE worker nodes.

Multinode Kubernetes cluster

An NFS file system is needed, and the NFS share must be mountable on the worker nodes of the plain multinode Kubernetes cluster.
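Behind the scenes, an NFS share of this kind is surfaced to the cluster as a Kubernetes persistent volume. Purely as an illustration (XL UP provisions the actual storage objects for you; the object name, server, and path below are made up), an NFS-backed PersistentVolume looks like this:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xebialabs-nfs-pv          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany               # the share must be mountable by multiple worker nodes
  nfs:
    server: nfs.example.com       # your NFS server
    path: /exports/xebialabs      # the exported directory
```

The ReadWriteMany access mode is what the "mountable on the worker nodes" requirement translates to: every node running a platform pod must be able to mount the same share.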

XL UP setup

Type xl up --help for a list of optional commands to use during the setup, or refer to the XL UP command details.

  1. Generally, you can simply run xl up to begin the setup, and the default blueprints will be loaded from dist.xebialabs.com.
  2. Initially you will be asked to select between the advanced and the quick setup. The quick setup uses default values for many of the answers, while the advanced setup requires you to fill in every value and contains advanced deployment features, such as using a custom Docker image for the platform. For more information on the resources you will need to enter, see Deploy advanced setup resources and Release advanced setup resources.

      Select the setup mode?  [Use arrows to move, type to filter]
        > advanced
          quick
  3. Select the environment for deployment.

      Select the Kubernetes setup where the DevOps Platform will be installed:  [Use arrows to move, type to filter, ? for more help]
      > Local K8s from Docker Desktop for Mac/Windows (LocalK8S)
      AWS EKS (AwsEKS)
      Google Kubernetes Engine (GoogleGKE)
      Plain multi-node K8s cluster (PlainK8SCluster)
  4. Choose the version of Deploy and Release to deploy.
  5. Fill out the questions if you have not already supplied an answer file.
  6. The deployment will run when the answers have been filled in. You will see something like the following:
    Generated files successfully!
    Spinning up xl seed!

    Deploying K8s-NameSpace
    Deployed K8s-NameSpace
    Deploying K8s-Ingress-Controller
    Deploying PostgreSQL
    Deploying Answers-Configmap-Deployment
    Deployed K8s-Ingress-Controller
    Deployed PostgreSQL
    Deployed Answers-Configmap-Deployment
    Deploying XL-Deploy-Deployment
    Deployed XL-Deploy-Deployment
    Deploying XL-Release-Deployment
    Deployed XL-Release-Deployment

    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    XL-DEPLOY URL - http://127.0.0.1:30080/deploy/
    XL-RELEASE URL - http://127.0.0.1:30080/xl-release/
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

These URLs can be used to access Deploy and Release, as well as the monitoring tools.

  7. You will also be asked if you want to update the config.yaml files with the new URLs. For more information, see Updating the config file after a deployment, below.

The successful deployment will result in the following items being created in the deployment location:

  • A xebialabs folder containing configuration information for the install.
  • generated_answers.yaml - an answers file generated from the answers you supplied during the setup, including the admin login details for Deploy and Release.

Note The generated_answers.yaml file contains secret values and you should carefully manage the users who can access it.
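The generated answers file is a flat YAML map of blueprint question identifiers to the values you entered or that were generated for you. The exact keys depend on the blueprint version; the keys and values below are illustrative placeholders only, not the actual field names:

```yaml
# Illustrative sketch only - actual keys depend on the blueprint version
UpSetupMode: quick
K8sSetup: LocalK8S
XldAdminPassword: s3cr3t    # secret value - restrict access to this file
XlrAdminPassword: s3cr3t    # secret value
```

Passing this file back to XL UP is what makes a deployment repeatable with high predictability, as described above.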

Updating the config file after a deployment

When a deployment is finished, you are asked whether you want your config file to be updated. If you reply yes, the .xebialabs/config.yaml file that the CLI uses will be updated with the username and password that were either entered manually or generated, and with the URLs generated during deployment. If you reply no, the file is not changed, and you will need to update the config file manually if you want to use the XL CLI with the new deployment.

? Do you want to modify your xebialabs/config.yaml to point to the new Release and Deploy instances deployed Yes
Setting XLD config in CLI global config
Setting XLR config in CLI global config
Config has been updated successfully

Note The config.yaml file contains secret values and you should carefully manage the users who can access it.
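After the update, the relevant entries in .xebialabs/config.yaml look roughly like the following; the passwords are placeholders, and the URLs match the ones printed at the end of the deployment:

```yaml
xl-deploy:
  url: http://127.0.0.1:30080/deploy/
  username: admin
  password: <your admin password>    # secret value - restrict access to this file
xl-release:
  url: http://127.0.0.1:30080/xl-release/
  username: admin
  password: <your admin password>
```

With these entries in place, subsequent XL CLI commands are directed at the newly deployed instances without any further flags.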

Deploy advanced setup resources

If you install Deploy in Advanced mode, you will be asked to specify the number of master nodes and worker nodes you want to spin up, and how they should be configured. For more information about the master-worker setup in Deploy, see High availability with master-worker setup.

You will also be asked to set the persistent volume size for data stored on the NFS share. The “export directory” is the location where Deploy will store deployment packages that users export from the server. The “work directory” is where Deploy will temporarily store data that cannot be kept in memory, such as large binary files that need to be deployed.
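On the Kubernetes side, the persistent volume size you enter translates into a storage request on the volume claim backing these directories. For illustration only (the claim name is hypothetical, and XL UP creates the actual objects for you), a 10Gi request looks like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xl-deploy-export-pvc    # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # shared across Deploy master and worker pods
  resources:
    requests:
      storage: 10Gi             # the persistent volume size you enter during setup
```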

Release advanced setup resources

If you install Release in Advanced mode, you will be asked to specify how the Release pods should be configured, including the RAM requests and limits and the CPU requests and limits. You will also be asked to set the persistent volume size for the reports directory, where Release will store generated release audit reports on the NFS share.
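The RAM and CPU requests and limits you enter correspond to the standard Kubernetes resources stanza on the Release containers. For illustration, with hypothetical values:

```yaml
# Illustrative values only - you choose the actual values during setup
resources:
  requests:
    memory: "4Gi"   # RAM request: the scheduler guarantees this much to the pod
    cpu: "2"        # CPU request, in cores
  limits:
    memory: "8Gi"   # RAM limit: the pod is terminated if it exceeds this
    cpu: "4"        # CPU limit: the pod is throttled above this
```

Requests determine which node a pod can be scheduled on; limits cap what the running pod may consume.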