Introduction to the XL Deploy F5 BIG-IP plugin
The F5 BIG-IP plugin adds the ability to manage deployments to application servers and web servers whose traffic is managed by a BIG-IP load balancing device.
With this plugin, you can:
- Take servers or services out of the load balancing pool before deployment
- Put servers or services back into the load balancing pool after deployment is complete

For information about plugin dependencies and the configuration items (CIs) that the plugin provides, refer to the F5 BIG-IP Plugin Reference.
Download the plugin distribution ZIP file from the XebiaLabs Software Distribution site. Place the plugin JAR file and all dependent plugin files in your `XL_DEPLOY_SERVER_HOME/plugins` directory.
Install Python 2.7.x on the host that has access to the BIG-IP load balancer device.
Note: If you are using a plugin version prior to 5.5.0, you must also install the pycontrol 2.0+ and suds 0.3.9+ Python libraries.
The plugin works in conjunction with the group-based orchestrator to disable and enable all containers that are part of a single deployment group at once. The group-based orchestrator divides the deployment into multiple phases, based on the `deploymentGroup` property of the containers being targeted. Each group is disabled in BIG-IP just before it is deployed to, and is re-enabled right after the deployment to that group completes. This ensures that there is no downtime during the deployment.
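To illustrate the phased behavior described above, here is a minimal sketch (not plugin code) of how a group-based orchestrator could partition target containers into sequential phases using the `deploymentGroup` property; the container names are hypothetical:

```python
# Illustrative sketch: partition containers into deployment phases
# keyed on their deploymentGroup property.
from itertools import groupby

def split_into_phases(containers):
    """Group containers by deploymentGroup, ordered by group number."""
    ordered = sorted(containers, key=lambda c: c["deploymentGroup"])
    return [
        {"group": group, "containers": [c["name"] for c in members]}
        for group, members in groupby(ordered, key=lambda c: c["deploymentGroup"])
    ]

containers = [
    {"name": "server-1", "deploymentGroup": 1},
    {"name": "server-2", "deploymentGroup": 2},
    {"name": "server-3", "deploymentGroup": 1},
]

phases = split_into_phases(containers)
# Each phase is disabled in BIG-IP, deployed to, then re-enabled before
# the next phase starts, so part of the pool keeps serving traffic.
```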
The plugin adds the following properties to every container to control how the server is known in the BIG-IP load balancer and whether it should take part in the load balancing deployment:
| Property | Type | Description |
|----------|------|-------------|
| | STRING | The address this server is registered under in the BIG-IP load balancer |
| | STRING | The BIG-IP load balancer pool this server is a member of |
| | INTEGER | The port of the service of this server that is load balanced by the BIG-IP load balancer |
| | BOOLEAN (default: `true`) | Whether this server should be disabled in the load balancer when it is being deployed to |
The plugin adds two steps to the deployment of each deployment group:
- A disable server step that stops traffic to the servers that are managed by the load balancer.
- An enable server step that resumes traffic to the servers that were previously disabled.
Traffic management to the server is done by enabling and disabling the referenced BIG-IP pool member in the BIG-IP load balancing pool.
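As background, a pool member's traffic state can be toggled through BIG-IP's iControl REST API by patching the member's session state. The following is a hedged sketch of what such a call might look like; the device address, pool, and member names are illustrative, and the plugin itself may use a different mechanism (older versions used the pycontrol SOAP interface):

```python
# Sketch of toggling a BIG-IP pool member's session state via the
# iControl REST API. The device address, pool, and member names are
# illustrative assumptions, not values from this document.
import json
import urllib.request

BIGIP = "https://bigip.example.com"  # assumed management address

def member_url(pool, member):
    # iControl REST encodes the "/Common/<name>" path as "~Common~<name>"
    return "%s/mgmt/tm/ltm/pool/~Common~%s/members/~Common~%s" % (BIGIP, pool, member)

def session_payload(enabled):
    # "user-enabled" resumes traffic; "user-disabled" stops new
    # connections while letting existing ones drain.
    return {"session": "user-enabled" if enabled else "user-disabled"}

def set_member_state(pool, member, enabled):
    req = urllib.request.Request(
        member_url(pool, member),
        data=json.dumps(session_payload(enabled)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    # Authentication (e.g. a Basic auth header or a token) must be
    # added for a real device; it is omitted in this sketch.
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a reachable BIG-IP device):
# set_member_state("web_pool", "10.0.0.11:8080", enabled=False)  # before deploying
# set_member_state("web_pool", "10.0.0.11:8080", enabled=True)   # after deploying
```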
To set up XL Deploy to use your BIG-IP load balancing device:
- In the XL Deploy Repository, create a BIG-IP Local Traffic Manager (`big-ip.LocalTrafficManager`) configuration item in the Infrastructure tree under a host, and add it as a member of the environment (`udm.Environment`). The host configuration item indicates how to connect to the BIG-IP device.
- Add all of the containers that the BIG-IP device manages to the `managedServers` collection of the BIG-IP LocalTrafficManager configuration item.
- Populate the BIG-IP address, user name, password, and partition connection properties, as seen from the host machine.
- Update all managed containers with the appropriate deployment group and BIG-IP member data and add them to the same environment as the BIG-IP LocalTrafficManager CI.
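If you script repository setup with the XL Deploy CLI (Jython), the first two steps might look like the sketch below; the CI path, environment name, and host are illustrative assumptions, and the exact property keys should be checked against the F5 BIG-IP Plugin Reference:

```python
# XL Deploy CLI (Jython) sketch -- all paths and names are illustrative.
# Create the Local Traffic Manager CI under an existing host CI.
ltm = factory.configurationItem(
    'Infrastructure/lb-host/big-ip',   # assumed host path
    'big-ip.LocalTrafficManager')
repository.create(ltm)

# Add the CI to the environment alongside the managed containers.
env = repository.read('Environments/Production')  # assumed environment
env.members += [ltm.id]
repository.update(env)
```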
If you have an Apache httpd server that fronts a website backed by one or more application servers, you can set up a more complex load balancing scenario that ensures the served website is not broken during the deployment. For this, the `www.ApacheHttpdServer` configuration item from the bundled Web Server plugin is augmented with a property called `applicationServers`.

Whenever a deployment is done to one or more of the containers listed in the `applicationServers` property of a web server residing in the same environment, the following happens in addition to the standard behavior:
- Just before the first application server is deployed to, the web server is removed from the load balancing configuration.
- After the last application server linked to the web server has been deployed to, the web server is put back into the load balancing configuration.
If you use `*-by-deployment-*` orchestrators, you might also want to use the `sequential-by-loadbalancer-group` orchestrator. This orchestrator splits the execution plan into a sequence of three sub-plans:
- Disable affected servers in load balancers
- Do the deployment
- Enable affected servers in load balancers
You can combine this orchestrator with other orchestrators to accomplish the desired deployment scenarios.