Posted to users@activemq.apache.org by Vilius Šumskas <v....@advantes.tech.INVALID> on 2021/11/29 20:54:37 UTC

ActiveMQ 5.16.x Master/Slave topology question

Hi,

I'm trying to set up a simple ActiveMQ 5.16.x master/slave cluster using the shared filesystem option. I just need one broker running at any given time, plus a slave for HA.

The filesystem part is clear and I have already configured it according to ActiveMQ requirements.

What I don't understand is: do I _need_ to configure networkConnectors for master/slave to work properly? Almost every configuration example I found on the internet says that to get a master/slave topology one only needs to point kahaDB to the same shared data directory, and every article and blog post that talks about networkConnectors says they are only related to the Network of Brokers functionality. However, this page https://activemq.apache.org/networks-of-brokers#masterslave-discovery says something vague about the masterslave:// transport, and this blog post https://medium.com/@chamilad/creating-a-simple-activemq-master-slave-setup-e3de33a6bcc2 says that a networkConnector is needed?

Can somebody clear things up? Is a master/slave cluster also considered a network of (two) brokers? Can I run it with or without networkConnectors, and what is the difference from the clients' perspective?

--
   Best Regards,

    Vilius


RE: ActiveMQ 5.16.x Master/Slave topology question

Posted by Vilius Šumskas <v....@advantes.tech.INVALID>.
I understand that in the case of an unplanned master failure I need to handle the fallacies of distributed computing on the client side, since there is always a switchover delay. However, I was looking more at cases where ActiveMQ is restarted during maintenance. In those cases I would have expected the switchover to be handled smoothly, as is usual in clustered software, by exchanging topology information between nodes and informing clients during a normal shutdown. Hence my question regarding networkConnectors.

I see now that this is not the case with ActiveMQ. From what I gathered, I have only these options:

1. Use a failover:(tcp://master,tcp://slave) URI and a TransportListener in the client -- failover speed is determined by the broker's datastore polling period + the TransportListener polling period (?) on the client
2. Use a failover:(tcp://master,tcp://slave)?timeout=X URI in the client -- failover speed is determined by the broker's datastore polling period + the timeout X in the URI
3. Use an external load balancer and an ssl://loadbalancer_host URI in the client -- failover speed is determined by the load balancer's health check polling period
4. Use something like Kubernetes and a single-instance ActiveMQ image -- failover speed is determined by the Kubernetes health check polling period

Does this sound about right?
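For concreteness, option 2 would look something like this on the client side (the timeout value is an illustrative assumption; randomize and timeout are standard options of the ActiveMQ failover transport):

```
failover:(tcp://master:61616,tcp://slave:61616)?randomize=false&timeout=3000
```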

-- 
    Vilius

-----Original Message-----
From: Simon Lundström <si...@su.se> 
Sent: Wednesday, December 1, 2021 10:48 AM
To: users@activemq.apache.org
Subject: Re: ActiveMQ 5.16.x Master/Slave topology question


Re: ActiveMQ 5.16.x Master/Slave topology question

Posted by Tim Bain <tb...@alumni.duke.edu>.
Just FYI, networkConnectors and the masterslave transport are for making
networks of brokers, which might be networks of failover pairs. If you just
have a single active-passive failover pair, you don't need those things.

Tim


Re: ActiveMQ 5.16.x Master/Slave topology question

Posted by Simon Lundström <si...@su.se>.
On Tue, 2021-11-30 at 17:20:31 +0100, Vilius Šumskas wrote:
>[...]
> As an alternative, does anybody know if I can use non-HTTP SSL load balancer and set client URI to something like ssl://loadbalancer_host:61616 ? I'm thinking, if slave servers do not respond to the request until they become master maybe that would allow me to have a simpler configuration for my clients. If I will ever need to add more slaves I would just add them under the same load balancer.

Yep. That's what JB meant by "but not bound as the broker is waiting for the lock".

Using an external hardware LB is how we currently use ActiveMQ, and it works great. Just make sure that your load balancer health checks cover all the protocols you are using, not just one protocol or a simple ping.
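For illustration only, a plain TCP health check along these lines (HAProxy syntax; host names and timings are assumptions) marks the slave down until it takes over, because the slave never binds its transport connector while it waits for the lock:

```
# haproxy.cfg sketch (illustrative): route OpenWire traffic to whichever
# broker currently holds the KahaDB lock. The slave does not listen on
# 61616, so its TCP check fails until it becomes master.
listen activemq
    bind *:61616
    mode tcp
    server brokerA machineA:61616 check inter 2s fall 2 rise 1
    server brokerB machineB:61616 check inter 2s fall 2 rise 1
```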

> If that's possible, which of the methods will be faster? We are deploying a point-of-sale application and I want the failover to be done in an instant, without losing any transactions (if that's possible :)).

Producers always have to deal with network or mq being down, see
<https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing>.

ActiveMQ polls to check whether the current master is still up, and you don't want to poll NFS too often.

How does your application currently deal with the fallacies of distributed computing when you're doing synchronous integrations?

BR,
- Simon


RE: ActiveMQ 5.16.x Master/Slave topology question

Posted by Vilius Šumskas <v....@advantes.tech.INVALID>.
Thank you for the detailed explanation. The part about datastore locking and how the brokers behave is more or less clear.

So would you recommend using randomize=false? We will have periods where the slave becomes the master and stays that way for an extended time. Does this mean that with randomize=false clients will still have to wait the timeout=x amount of time every time they connect, because the old master (which is first in the URI list) no longer responds now that it is the slave? Or should I just use a TransportListener on the client side and ignore the randomization parameter?

As an alternative, does anybody know if I can use a non-HTTP SSL load balancer and set the client URI to something like ssl://loadbalancer_host:61616? I'm thinking that if the slave servers do not respond to requests until they become master, that might allow a simpler configuration for my clients. If I ever need to add more slaves, I would just add them under the same load balancer.

If that's possible, which of the methods will be faster? We are deploying a point-of-sale application and I want the failover to happen in an instant, without losing any transactions (if that's possible :)).

-- 
    Vilius

-----Original Message-----
From: Jean-Baptiste Onofré <jb...@nanthrax.net> 
Sent: Tuesday, November 30, 2021 6:01 PM
To: users@activemq.apache.org
Subject: Re: ActiveMQ 5.16.x Master/Slave topology question


Re: ActiveMQ 5.16.x Master/Slave topology question

Posted by Jean-Baptiste Onofré <jb...@nanthrax.net>.
Hi,

The masterslave: transport is deprecated. You can achieve the same thing with randomize=false, basically.

Correct: updateClusterClientsOnRemove only applies to network connectors, i.e. when you have active/active (a real network of brokers).

No, the clients won't be stuck: they will reconnect to the new master.

Let me illustrate this:
- you have an NFS shared filesystem on machine C
- machine A mounts the NFS filesystem (from C) at /opt/kahadb
- machine B mounts the NFS filesystem (from C) at /opt/kahadb
- you start brokerA on machineA; brokerA is the master (transport connector tcp on 61616)
- you start brokerB on machineB; brokerB is a slave (transport connector tcp on 61616, but not bound, as the broker is waiting for the lock)
- in your client connection factory, you configure the broker URL as failover:(tcp://machineA:61616,tcp://machineB:61616)
- as brokerA is the master, your clients are connected to brokerA
- you shut down brokerA; brokerB takes the lock and becomes the new master
- your clients automatically reconnect to brokerB
- you start brokerA again; it's now a slave (as the lock is held by brokerB)
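The broker URL in that illustration is just a string; as a minimal sketch (the helper below is hypothetical, not an ActiveMQ API), it can be composed like this, with randomize=false added so clients keep the listed broker order instead of picking at random:

```java
public class FailoverUriSketch {

    // Hypothetical helper: build the failover broker URL from a list of
    // broker URIs plus optional transport options.
    static String failoverUrl(String options, String... brokers) {
        String list = String.join(",", brokers);
        return (options == null || options.isEmpty())
                ? "failover:(" + list + ")"
                : "failover:(" + list + ")?" + options;
    }

    public static void main(String[] args) {
        // randomize=false keeps the broker order fixed, matching the scenario above.
        String url = failoverUrl("randomize=false",
                "tcp://machineA:61616", "tcp://machineB:61616");
        System.out.println(url);
    }
}
```

The resulting URL goes straight into the client's connection factory as the brokerURL.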

Regards
JB

On 30/11/2021 09:45, Vilius Šumskas wrote:
> Thank you for your response!
> 
> Just out of curiosity, what is this masterslave:() transport is about then?
> 
> Also,  if I don't configure network connection will updateClusterClientsOnRemove parameter take effect?
> 
> My main concern is that clients will go into stuck state during/after the failover. I'm not sure if everything I need is just handle this in the code with  TransportListener or do I need to set updateClusterClientsOnRemove and updateClusterClients on the broker side to make failover smooth?
> 

RE: ActiveMQ 5.16.x Master/Slave topology question

Posted by Vilius Šumskas <v....@advantes.tech.INVALID>.
Thank you for your response!

Just out of curiosity, what is this masterslave:() transport about, then?

Also, if I don't configure a network connector, will the updateClusterClientsOnRemove parameter take effect?

My main concern is that clients will get stuck during or after the failover. I'm not sure whether handling this in code with a TransportListener is all I need, or whether I also need to set updateClusterClientsOnRemove and updateClusterClients on the broker side to make failover smooth?

-- 
    Vilius

-----Original Message-----
From: Jean-Baptiste Onofre <jb...@nanthrax.net> 
Sent: Tuesday, November 30, 2021 7:01 AM
To: users@activemq.apache.org
Subject: Re: ActiveMQ 5.16.x Master/Slave topology question


Re: ActiveMQ 5.16.x Master/Slave topology question

Posted by Jean-Baptiste Onofre <jb...@nanthrax.net>.
Hi

No need to use networkConnector with master/slave.

Just use failover on the client side.

So basically,

1. On the broker side, you share the same filesystem (using NFS, a LUN, whatever) and configure kahaDB in activemq.xml to point to that shared filesystem.
2. On the client side, you use failover:(master,slave) in the brokerURL to allow the client to automatically switch to the "new" master.
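As a sketch of step 1, both brokers would carry the same persistence adapter in activemq.xml (the directory path and brokerName here are illustrative; each broker needs its own brokerName):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <persistenceAdapter>
    <!-- Both brokers point at the same shared directory; whichever broker
         starts first takes the KahaDB lock and becomes the master. -->
    <kahaDB directory="/opt/kahadb"/>
  </persistenceAdapter>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```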

Regards
JB
