Add, start, and use workers

You can view and manage workers from the Monitoring section of the XL Deploy GUI. To see all the workers registered with the master, go to the Explorer, expand Monitoring in the left pane, and double click Workers. You can see the list of workers, their connection states (as seen from the master you are connected to), and the number of deployment and control tasks that are assigned to each worker. To view the workers in the XL Deploy GUI, you must have global admin permissions.

For a more detailed description of the master worker setup and the different types of workers, see High availability with master-worker setup.

Activate workers

  1. Install XL Deploy using the standard installation procedure.
  2. Set up XL Deploy to connect to your database. Note: Using Derby as an external database is not recommended for a production setup.
  3. To synchronize the configuration for an external worker, copy the master installation directory to a different location, either on the same machine or a different one.
  4. To deactivate the in-process worker, set the relevant property to false in the XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf file.
  5. Open a terminal and start the XL Deploy master.
  6. Check the XL Deploy master log for these lines:

    2019-03-12 14:46:01.451 [main] {} INFO  com.xebialabs.deployit.Server - XL Deploy Server has started.
    2019-03-12 14:46:01.452 [main] {} INFO  com.xebialabs.deployit.Server - External workers can connect to xld-master-host:8180
    2019-03-12 14:46:01.453 [main] {} INFO  com.xebialabs.deployit.Server - You can now point your browser to http://xld-master-host:4516/

    Use the printed values to connect the workers to the master. Note: For an active/hot-standby or active/active setup, the value of the -api parameter should be set to the load balancer endpoint and not to xld-master-host. For an active/active setup, the -master parameter should point to the DNS Service name for XL Deploy. The DNS Service should return an SRV record listing each of the masters' IP addresses. The worker will poll this list and connect or disconnect dynamically as needed.

  7. Start one or more workers.
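As a sketch of step 7, and of the active/hot-standby note in step 6, the worker start commands could look like the following. All hostnames are placeholders (the load balancer and DNS service names are invented for illustration); substitute your own values.

```shell
#!/bin/sh
# Sketch only: hostnames below are placeholders based on the startup log
# above; substitute your own values.
# Standalone master: connect directly to the host printed in the log.
STANDALONE_CMD="worker -api http://xld-master-host:4516/ -master xld-master-host:8180"
# Active/active: -api points at the load balancer endpoint and -master at
# the DNS service name whose SRV records list the masters (both hypothetical).
ACTIVE_ACTIVE_CMD="worker -api http://xld-lb.example.com:4516/ -master xld-masters.example.com:8180"
echo "$STANDALONE_CMD"
echo "$ACTIVE_ACTIVE_CMD"
```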

Example: Start a local worker in the same folder as master

  1. Run this script from the installation directory:
  2. Enter the required information: 'number' -api http://localhost:4516/ -master localhost:8180, where 'number' is the number of the worker you want to create.

    If you specify 3 as the 'number', this command is executed automatically:
    LOGFILE=deployit-worker-3 worker -name worker-3 -port 8183 -work work-3 -api http://localhost:4516/ -master localhost:8180

    You can add a custom local worker by executing this command:
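The custom command itself is not shown above; as an illustration only, following the pattern of the generated worker-3 command, a custom local worker with assumed values (name, port, work folder) could be started like this:

```shell
#!/bin/sh
# All values are hypothetical, following the worker-3 pattern above:
# pick an unused port and a dedicated work folder per worker.
NAME="my-worker"
PORT=8190
CMD="worker -name $NAME -port $PORT -work work-$NAME -api http://localhost:4516/ -master localhost:8180"
echo "LOGFILE=deployit-$NAME $CMD"
```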

Example: Start an external worker

The required command to start an external worker is: worker -api 'http://hostname:port' -master 'hostname:remotingport'

Example with values: worker -api loadbalancer:4516 -master xld-master:8180 -name worker1 -hostname xld-worker -port 8181

List of flag values

  • -api REST_ENDPOINT is the REST API endpoint for XL Deploy.
  • -master MASTER_ADDRESS_AND_PORT is the address and port where workers register with the master. This information is printed when the XL Deploy server starts.
  • LOGFILE=logfile_name is an environment variable that you can use to create a new log file different from the master log file.
  • -hostname is the hostname that the XL Deploy master can reach this worker on.

    Note: Hostname must be resolvable by each of the masters.

  • -port WORKER_PORT is a port that can be specified for a worker that runs on the same machine as the master.
  • -work WORK_FOLDER specifies a work directory in which task files are stored for recovery at the worker level.
  • -name WORKER_NAME is used to specify a custom name for a worker.
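Putting the flags together: a sketch of a full start command using the LOGFILE variable, with placeholder values throughout (hosts, ports, and names are examples, not defaults):

```shell
#!/bin/sh
# All values are placeholders; substitute your own hosts, ports, and names.
API="http://xld-master-host:4516/"   # -api: REST API endpoint of XL Deploy
MASTER="xld-master-host:8180"        # -master: where workers register
NAME="worker-1"                      # -name: custom worker name
WHOST="xld-worker-host"              # -hostname: must resolve on every master
PORT=8181                            # -port: port this worker listens on
WORK="work-1"                        # -work: persistent task-file folder
CMD="worker -api $API -master $MASTER -name $NAME -hostname $WHOST -port $PORT -work $WORK"
echo "LOGFILE=deployit-$NAME $CMD"   # LOGFILE gives the worker its own log file
```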


  1. You cannot use both internal and external workers. If the in-process worker setting is false, the internal worker is disconnected and tasks are not assigned to it.
  2. If there are no workers connected to the master, tasks cannot be executed.


To configure secure communication between master and workers, see Configure secure communication with workers and satellites.

Upgrading from a previous version

If you are upgrading XL Deploy, you must first run XL Deploy with the default in-process worker. To add an external worker, ensure that you copy the new master configuration folder to a different location. The new configuration structure merges settings from other configuration files into the xl-deploy.conf file. For more information, see XL Deploy configuration files.

Compatibility of masters and workers

For workers to execute tasks without problems, masters and workers must have the same plugins and, for the most part, the same configuration settings. This is enforced by calculating and comparing a hash over these configuration items; see High availability with master-worker setup for more details.
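As a sketch of the idea only (not the product's actual implementation), comparing a configuration fingerprint between master and worker might look like this; the directory layout, file names, and property are invented for the example:

```shell
#!/bin/sh
# Illustrative only: hash the configuration on both sides and compare.
TMP="$(mktemp -d)"
mkdir -p "$TMP/master/conf" "$TMP/worker/conf"
echo 'example.setting=true' > "$TMP/master/conf/xl-deploy.conf"
cp "$TMP/master/conf/xl-deploy.conf" "$TMP/worker/conf/"
# Hash all .conf files in a directory into a single fingerprint.
hash_conf() { cat "$1"/*.conf | sha256sum | cut -d' ' -f1; }
if [ "$(hash_conf "$TMP/master/conf")" = "$(hash_conf "$TMP/worker/conf")" ]; then
  RESULT="compatible"
else
  RESULT="mismatch: worker goes to draining"
fi
echo "$RESULT"
rm -r "$TMP"
```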

Draining the workers

You can manually shut down a worker from the GUI. If the worker still has running tasks when you shut it down, its state changes to draining and the master does not assign new tasks to it. A worker in draining state shuts down only after its last task is completed.

A worker changes state to draining when:

  • An admin shuts down the worker.
  • The worker detects configuration changes on master.

Shut down workers using the GUI

To shut down a worker:

  1. Go to the Explorer.
  2. In the left pane, expand Monitoring.
  3. Double click Workers.
  4. Select a worker, click Explorer action menu, and select Shutdown. The worker process is closed after draining has completed.

Important: You cannot shut down an internal worker.

Remove workers

To remove a worker:

  1. Go to the Explorer.
  2. In the left pane, expand Monitoring.
  3. Double click Workers.
  4. Select a worker, click Explorer action menu, and select Remove worker. The worker is removed from the list.

Important: If you removed the in-process worker and you want to add it back, modify the XL Deploy configuration by re-enabling the in-process worker setting in the XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf file.

Crashes, termination, and recovery

While a worker is executing tasks, it may be terminated accidentally or, in unfortunate circumstances, crash. Workers write out .task files as usual; see The XL Deploy work directory and Understanding tasks in XL Deploy. After a restart, a worker finds any .task files and recovers the associated tasks. Note that these files are bound to a single worker.

Note: Work directories and .task files must not be shared. For recovery to work, they need to be persistent across worker invocations.
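To illustrate the recovery scan (folder and file names are invented for the example), a worker restarting with a persistent work directory would find its leftover .task files:

```shell
#!/bin/sh
# Illustrative only: simulate a worker's persistent -work folder containing
# a task file left over from a crash, then scan for files to recover.
TMP="$(mktemp -d)"
WORK_DIR="$TMP/work-1"                       # per-worker, persistent, never shared
mkdir -p "$WORK_DIR"
touch "$WORK_DIR/20190312-1446-deploy.task"  # hypothetical interrupted task
RECOVERED="$(find "$WORK_DIR" -name '*.task' | wc -l | tr -d ' ')"
echo "tasks to recover: $RECOVERED"
rm -r "$TMP"
```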

See also Recovery at worker level.

Trigger draining mode when restarting XL Deploy

When an admin user shuts down the XL Deploy master instance due to configuration changes, the workers automatically detect differences between the master and the workers. The state of the workers changes to draining.

If all the workers are in draining state, the master cannot send any tasks to be executed. You must add new workers to execute tasks or manually update the configuration of the existing workers to be synchronized with the master. You must restart the workers after the configuration changes.

If you are running XL Deploy with multiple registered workers and you want to restart XL Deploy due to configuration, plugin, or type system changes:

  1. To make sure you have available resources to start a worker with the new configuration, shut down a worker:
  2. Go to the Explorer and, in the left pane, expand Monitoring.
  3. Double click Workers, select a worker, click Explorer action menu, and click Shutdown. If there are tasks assigned to the worker, it will go to the draining state.
  4. Finish any tasks that are running on the worker in draining state. You can finish, cancel, or abort the tasks.
  5. Make any desired configuration changes to XL Deploy. If the worker is running in a different configuration folder from the master, make sure you copy the changes to the worker configuration folder.
  6. Restart XL Deploy and the updated worker. Any new tasks will be assigned to the updated worker.
  7. Due to the configuration changes, all other running workers will go to draining state until all the tasks are finished.
  8. After all the workers in draining state are empty and all tasks running are finished, you can manually synchronize the configuration changes and restart the workers.