Set Up ActiveMQ Artemis HA with TCP

This article provides information on how to create an Artemis cluster (JMS 2.0) and integrate it with Deploy.

There are currently two flavours of ActiveMQ available:

  1. Classic 5.x Broker
  2. Next Generation Artemis Broker. To create the classic ActiveMQ cluster, refer to Deploy-Active MQ Classic (JMS 1.1) HA setup.

Note: We highly recommend that you refer to the official Apache documentation to build and configure your Artemis cluster.

Follow the steps below to create the next generation Artemis message broker.

Apache ActiveMQ Artemis clusters can be connected together in many different topologies; a symmetric cluster is probably the most common.

The steps given below build a symmetric cluster. For other cluster topologies, see the official Apache documentation.

  1. Download the latest distribution of Artemis from the Download ActiveMQ Artemis page.
  2. Extract the archive and move to the bin folder.

    tar -xzf <file_name.tar.gz>
  3. Create the Artemis message broker with the artemis script in the bin folder.

    Note: It is recommended to always create the message broker outside the installation directory.

    ./artemis create /opt/master-broker
  4. Modify the broker.xml file under the broker's etc directory to create a cluster-specific configuration.

    vi /opt/master-broker/etc/broker.xml

Broadcast Groups:

Broadcast groups can be configured using either UDP or JGroups. The snippet below defines a UDP broadcast group. For a more detailed explanation, see the official Apache documentation.

   <broadcast-group name="my-broadcast-group">
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
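A fuller sketch of a broadcast group, wrapped in its enclosing broadcast-groups element. The multicast address, port, and broadcast period below are illustrative values patterned on the Apache documentation's examples; choose values suitable for your own network:

```xml
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <group-address>231.7.7.7</group-address>  <!-- example multicast address -->
      <group-port>9876</group-port>             <!-- example multicast port -->
      <broadcast-period>2000</broadcast-period> <!-- broadcast every 2 seconds -->
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
```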

Discovery Groups:

For cluster connections, discovery groups are defined in the server-side configuration file broker.xml. All discovery groups must be defined inside a discovery-groups tag. The configuration snippet given below is specific to UDP.

   <discovery-group name="my-discovery-group">
      <group-address>231.7.7.7</group-address>  <!-- example: must match the broadcast group -->
      <group-port>9876</group-port>             <!-- example: must match the broadcast group -->
   </discovery-group>

Note: Each discovery group must be configured with a broadcast endpoint (UDP or JGroups) that matches its broadcast group counterpart. For example, if broadcast is configured using UDP, the discovery group must also use UDP, and the same multicast address.

Configure Cluster Connections:

To set up ActiveMQ Artemis to form a symmetric cluster, we simply need to mark each broker as clustered and define a cluster-connection in broker.xml.

The cluster-connection tells the nodes what other nodes to make connections to. With a cluster-connection, each node that we connect to can either be specified individually, or we can use UDP discovery to find out what other nodes are in the cluster.

Using UDP discovery makes configuration simpler since we don’t have to know what nodes are available at any one time.

Here is the relevant snippet from the broker configuration, which tells the broker to form a cluster with the other nodes:

<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
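A fuller cluster-connection sketch, again using illustrative values patterned on the Apache documentation's symmetric-cluster example. The connector-ref names the connector this broker advertises to its peers:

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
```

With max-hops set to 1 and UDP discovery, every node connects directly to every other node, which is exactly the symmetric topology described above.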

Changing the Default Cluster User and Password:

Under core, add the cluster-user and cluster-password elements.

        <cluster-user>cluster_user</cluster-user>
        <cluster-password>cluster_user_password</cluster-password>

Important: After a cluster broker has been configured, it is common to copy the configuration to other brokers to produce a symmetric cluster. However, when copying the broker files, do not copy any of the following directories from one broker to another:

  * bindings
  * journal
  * large-messages

When a broker is started for the first time and initializes its journal files, it also persists a special identifier to the journal directory. This id must be unique among brokers in the cluster, or the cluster will not form properly.
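The copy described above can be staged in shell by copying the instance and then removing the per-broker identity directories before first start. The paths and the demo layout below are hypothetical, for illustration only:

```shell
# Sketch: copy a configured broker to a second node's layout, then remove the
# directories that must not be shared (paths here are hypothetical examples).
SRC=/tmp/master-broker
DST=/tmp/second-broker

# Demo layout standing in for a real broker instance
mkdir -p "$SRC/etc" "$SRC/data/bindings" "$SRC/data/journal" "$SRC/data/large-messages"
touch "$SRC/etc/broker.xml" "$SRC/data/journal/server.lock"

rm -rf "$DST"
cp -r "$SRC" "$DST"
# Remove the copied journal state so the new broker generates its own node id
rm -rf "$DST/data/bindings" "$DST/data/journal" "$DST/data/large-messages"
```

The new broker then initializes fresh journal files, and a unique node id, on its first start.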

  1. Configuring Client Discovery (Connecting to Deploy): Use the udp URL scheme with a host:port combination that matches the group-address and group-port of the corresponding broadcast-group on the server (udp://) in the deploy-task.yaml file.
      # External task queue, used only if
      jms-driver-classname: "org.apache.activemq.ActiveMQConnectionFactory"
      jms-url: "udp://"
      jms-username: "admin"
      jms-password: "admin"

If you are unable to connect using the UDP group address above, a static list of possible servers can also be used by a normal client. A list of servers for the initial connection attempt can be specified in the connection URI as a comma-separated list inside parentheses, e.g. (tcp://myhost:61616,tcp://myhost2:61616)?reconnectAttempts=5 in the deploy-task.yaml file.

      # External task queue, used only if
      jms-driver-classname: "org.apache.activemq.ActiveMQConnectionFactory"
      jms-url: "failover:(tcp://,tcp://)"
      jms-username: "admin"
      jms-password: "admin"
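Assuming two hypothetical broker hosts, artemis-1 and artemis-2, both listening on the default port 61616, a filled-in static-list configuration could look like the following (hostnames are placeholders, not part of the original setup):

```yaml
      # External task queue settings in deploy-task.yaml (hostnames are hypothetical)
      jms-driver-classname: "org.apache.activemq.ActiveMQConnectionFactory"
      jms-url: "failover:(tcp://artemis-1:61616,tcp://artemis-2:61616)"
      jms-username: "admin"
      jms-password: "admin"
```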
Apache ActiveMQ Artemis supports three different strategies for backing up a server:

  1. shared store
  2. replication
  3. live-only

The strategy is configured via the ha-policy configuration element.

See the official Apache documentation to set up an HA policy as per your needs.
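As a sketch, a replication-based ha-policy on the master broker could look like this inside the core element (element names follow the Artemis configuration schema; on the backup broker, slave replaces master):

```xml
<ha-policy>
   <replication>
      <master>
         <!-- ask the cluster whether a live server is already running before starting -->
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>
```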

Sample broker.xml

Given below is a complete sample configuration of the broker.xml file for reference:

<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-->

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:activemq:core ">

<!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files -->
<!-- This value was determined through a calculation.
Your system could perform 22.73 writes per millisecond on the current journal configuration.
That translates as a sync write every 44000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false. -->

<!-- When using ASYNCIO, this will determine the writing queue depth for libaio. -->

<!-- You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC> -->

<!--Use this to use an HTTP server to validate the network
<network-check-URL-list></network-check-URL-list> -->

<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->

<!-- this is a comma separated list, no spaces, just DNS or IPs it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->

<!-- <network-check-list></network-check-list> -->

<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->

<!-- how often we are looking for how many bytes are being used on the disk in ms -->

<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->


<!-- should the broker detect dead locks and other issues -->

<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.            
You may specify a different value here if you need to customize it to your needs.

<global-max-size>100Mb</global-max-size> -->

<!--  <acceptors> -->
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->

<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large,
so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->

<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See for more information. -->

<!-- Acceptor for every supported protocol -->
<!--     <acceptor name="artemis">tcp://;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor> -->

<!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
<!--     <acceptor name="amqp">tcp://;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor> -->

<!-- STOMP Acceptor. -->
<!--     <acceptor name="stomp">tcp://;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> -->

<!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<!--     <acceptor name="hornetq">tcp://;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> -->

<!-- MQTT Acceptor -->
<!--       <acceptor name="mqtt">tcp://;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> -->
<!-- </acceptors> -->

<!-- Connectors -->
            <connector name="netty-connector">tcp://devops-centos-13:61616</connector>

<!-- Acceptors -->
   <acceptor name="netty-acceptor">tcp://devops-centos-13:61616</acceptor>

<!-- Clustering configuration -->
        <broadcast-group name="Artemis-broadcast-group">
        <!--   <local-bind-address></local-bind-address>  -->
        </broadcast-group>

        <discovery-group name="Artemis-discovery-group">
        <!-- <local-bind-address></local-bind-address>  -->
        </discovery-group>

<!-- This is a symmetric cluster -->
        <cluster-connection name="Globular_cluster">
          <discovery-group-ref discovery-group-name="Artemis-discovery-group"/>
        </cluster-connection>

         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>

<!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="">
            <!-- with -1 only the global-max-size is in use for limiting -->
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <!-- with -1 only the global-max-size is in use for limiting -->
         </address-setting>

           <address name="DLQ">
              <anycast><queue name="DLQ" /></anycast>
           </address>
           <address name="ExpiryQueue">
              <anycast><queue name="ExpiryQueue" /></anycast>
           </address>
           <address name="xld-tasks-queue">
              <anycast><queue name="xld-tasks-queue" /></anycast>
           </address>

<!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
-->

<!-- The HA option used here is replication; change <master/> to <slave/> on the slave server -->