Posted to users@activemq.apache.org by Simon Lundström <si...@su.se> on 2021/12/01 08:47:40 UTC

Re: ActiveMQ 5.16.x Master/Slave topology question

On Tue, 2021-11-30 at 17:20:31 +0100, Vilius Šumskas wrote:
>[...]
> As an alternative, does anybody know if I can use non-HTTP SSL load balancer and set client URI to something like ssl://loadbalancer_host:61616 ? I'm thinking, if slave servers do not respond to the request until they become master maybe that would allow me to have a simpler configuration for my clients. If I will ever need to add more slaves I would just add them under the same load balancer.

Yep. That's what JB meant by "but not bound as the broker is waiting
for the lock".

Using an external hw LB is how we currently use ActiveMQ and it works
great. Just make sure that your loadbalancer healthchecks check all the
protocols you are using and not just one protocol or just pinging.
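
Something along these lines is usually all the client needs when it goes
through the LB (the hostname, port and reconnect options below are just
placeholders for illustration, not a recommendation):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class LbClientSketch {
        public static void main(String[] args) throws Exception {
            // Point the client at the load balancer only; the LB decides which
            // broker (the current master) actually receives the connection.
            // Keeping the failover:() wrapper makes the client retry the LB
            // transparently while its health check flips to the new master.
            String url = "failover:(ssl://loadbalancer_host:61616)"
                       + "?initialReconnectDelay=100&maxReconnectDelay=5000";
            // SSL trust/key material is picked up from the usual
            // javax.net.ssl.* system properties.
            Connection connection = new ActiveMQConnectionFactory(url).createConnection();
            connection.start();
            // ... create sessions, producers and consumers as usual ...
            connection.close();
        }
    }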

> If that's possible, which of the methods will be faster? We are deploying a point-of-sale application and I want the failover to be done in an instant, without losing any transactions (if that's possible :)).

Producers always have to deal with network or mq being down, see
<https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing>.
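
For example, on the producer side you typically end up with something like
this (broker hostnames, queue name and the 3 second timeout are made-up
values, just to show where the handling has to live):

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PosProducerSketch {
        public static void main(String[] args) throws Exception {
            // timeout=3000 makes a send fail with a JMSException after ~3s when
            // no broker is reachable, instead of blocking until failover is done.
            String url = "failover:(tcp://machineA:61616,tcp://machineB:61616)"
                       + "?randomize=false&timeout=3000";
            Connection connection = new ActiveMQConnectionFactory(url).createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("pos.sales"));
            try {
                producer.send(session.createTextMessage("sale #42"));
            } catch (JMSException e) {
                // This is the part the application has to own: retry, spool
                // locally, or surface the error to the operator.
                System.err.println("Broker unavailable: " + e);
            } finally {
                connection.close();
            }
        }
    }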

ActiveMQ polls to see if the current master is up, and you don't want
to poll NFS too often.
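
For reference, the interval a slave waits between attempts to grab the shared
lock is configurable. Roughly like this via the embedded-broker API (class and
setter names here mirror the XML attributes as I remember them, so verify
against your 5.16 install):

    import java.io.File;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.SharedFileLocker;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class SharedStorageBrokerSketch {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("brokerB");
            broker.addConnector("tcp://0.0.0.0:61616");

            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setDirectory(new File("/opt/kahadb")); // the shared NFS mount

            // How long a slave sleeps between attempts to acquire the lock.
            // Shorter means faster takeover but more load on NFS.
            SharedFileLocker locker = new SharedFileLocker();
            locker.setLockAcquireSleepInterval(10000);    // milliseconds
            kahaDB.setLocker(locker);

            broker.setPersistenceAdapter(kahaDB);
            broker.start(); // a slave blocks here until it gets the lock
        }
    }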

How does your application currently deal with the fallacies of
distributed computing when you're doing synchronous integrations?

BR,
- Simon

> -----Original Message-----
> From: Jean-Baptiste Onofré <jb...@nanthrax.net> 
> Sent: Tuesday, November 30, 2021 6:01 PM
> To: users@activemq.apache.org
> Subject: Re: ActiveMQ 5.16.x Master/Slave topology question
> 
> Hi,
> 
> masterslave: transport is deprecated. You can achieve the same with randomize=false basically.
> 
> Correct: updateClusterClientsOnRemove only applies to network connectors, i.e. when you have active/active (a real network of brokers).
> 
> No, the clients won't be stuck: they will reconnect to the new master.
> 
> Let me illustrate this:
> - you have an NFS shared filesystem on machine C
> - machine A mounts the NFS filesystem (from C) on /opt/kahadb
> - machine B mounts the NFS filesystem (from C) on /opt/kahadb
> - you start brokerA on machineA, brokerA is the master (transport connector tcp on 61616)
> - you start brokerB on machineB, brokerB is a slave (transport connector tcp on 61616, but not bound as the broker is waiting for the lock)
> - in your client connection factory, you configure the broker URL with
> failover:(tcp://machineA:61616,tcp://machineB:61616)
> - as brokerA is master, your clients are connected to brokerA
> - you shutdown brokerA, brokerB will take the lock and become the new master
> - your clients will automatically reconnect to brokerB
> - you start brokerA, it's now a slave (as the lock is on brokerB)
> 
> Regards
> JB
> 
> On 30/11/2021 09:45, Vilius Šumskas wrote:
> > Thank you for your response!
> > 
> > Just out of curiosity, what is this masterslave:() transport about then?
> > 
> > Also, if I don't configure a network connection, will the updateClusterClientsOnRemove parameter take effect?
> > 
> > My main concern is that clients will go into a stuck state during/after the failover. I'm not sure whether all I need is to handle this in the code with a TransportListener, or whether I need to set updateClusterClientsOnRemove and updateClusterClients on the broker side to make failover smooth.
> > 

RE: ActiveMQ 5.16.x Master/Slave topology question

Posted by Vilius Šumskas <v....@advantes.tech.INVALID>.
I understand that in case of an unplanned master failure I need to handle the fallacies of distributed computing on the client side, since there is always a switchover delay. However, I was looking more at cases where ActiveMQ is restarted during maintenance. In those cases I would have expected the switchover to be handled smoothly, as is usual in clustered software, by exchanging topology information between nodes and informing clients about it during a normal shutdown. Hence my question regarding NetworkConnector.

I see now that it's not the case with ActiveMQ. From what I gathered I have only these options:

1. Use a failover:(tcp://master,tcp://slave) URI and a TransportListener in the client (sketched below) -- failover speed is decided by the broker's datastore lock polling period + the TransportListener polling period (?) on the client
2. Use a failover:(tcp://master,tcp://slave)?timeout=X URI in the client -- failover speed is decided by the broker's datastore lock polling period + the timeout X set in the URI
3. Use an external load balancer and an ssl://loadbalancer_host URI in the client -- failover speed is decided by the load balancer's health check polling period
4. Use something like Kubernetes and a single-instance ActiveMQ image -- failover speed is decided by the Kubernetes health check polling period

Does this sound about right?
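
For options 1 and 2, on the client side I'm picturing roughly the following
(hostnames, queue-less setup and the 5 second timeout are placeholders, and I
haven't verified the listener API beyond the javadoc):

    import java.io.IOException;
    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnection;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.transport.TransportListener;

    public class FailoverAwareClientSketch {
        public static void main(String[] args) throws Exception {
            // Option 2: timeout=X lets pending requests fail after X ms instead
            // of hanging while the slave is still waiting for the lock.
            String url = "failover:(tcp://master:61616,tcp://slave:61616)"
                       + "?randomize=false&timeout=5000";
            Connection connection = new ActiveMQConnectionFactory(url).createConnection();

            // Option 1: get notified about the failover instead of polling for it.
            ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
                public void onCommand(Object command) { }
                public void onException(IOException error) {
                    System.err.println("Transport failed: " + error);
                }
                public void transportInterupted() { // (mis)spelling is ActiveMQ's own
                    System.err.println("Connection to broker lost, failover in progress");
                }
                public void transportResumed() {
                    System.err.println("Reconnected to a broker");
                }
            });

            connection.start();
            // ... normal JMS work ...
        }
    }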

-- 
    Vilius

-----Original Message-----
From: Simon Lundström <si...@su.se> 
Sent: Wednesday, December 1, 2021 10:48 AM
To: users@activemq.apache.org
Subject: Re: ActiveMQ 5.16.x Master/Slave topology question

On Tue, 2021-11-30 at 17:20:31 +0100, Vilius Šumskas wrote:
>[...]
> As an alternative, does anybody know if I can use non-HTTP SSL load balancer and set client URI to something like ssl://loadbalancer_host:61616 ? I'm thinking, if slave servers do not respond to the request until they become master maybe that would allow me to have a simpler configuration for my clients. If I will ever need to add more slaves I would just add them under the same load balancer.

Yep. That's what JB meant by "but not bound as the broker is waiting for the lock".

Using an external hw LB is how we currently use ActiveMQ and it works great. Just make sure that your loadbalancer healthchecks check all the protocols you are using and not just one protocol or just pinging.

> If that's possible, which of the methods will be faster? We are deploying a point-of-sale application and I want the failover to be done in an instant, without losing any transactions (if that's possible :)).

Producers always have to deal with network or mq being down, see <https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing>.

ActiveMQ polls to see if the current master is up, and you don't want to poll NFS too often.

How does your application currently deal with the fallacies of distributed computing when you're doing synchronous integrations?

BR,
- Simon

> -----Original Message-----
> From: Jean-Baptiste Onofré <jb...@nanthrax.net>
> Sent: Tuesday, November 30, 2021 6:01 PM
> To: users@activemq.apache.org
> Subject: Re: ActiveMQ 5.16.x Master/Slave topology question
> 
> Hi,
> 
> masterslave: transport is deprecated. You can achieve the same with randomize=false basically.
> 
> Correct: updateClusterClientsOnRemove only applies to network connectors, i.e. when you have active/active (a real network of brokers).
> 
> No, the clients won't be stuck: they will reconnect to the new master.
> 
> Let me illustrate this:
> - you have an NFS shared filesystem on machine C
> - machine A mounts the NFS filesystem (from C) on /opt/kahadb
> - machine B mounts the NFS filesystem (from C) on /opt/kahadb
> - you start brokerA on machineA, brokerA is the master (transport 
> connector tcp on 61616)
> - you start brokerB on machineB, brokerB is a slave (transport 
> connector tcp on 61616, but not bound as the broker is waiting for the 
> lock)
> - in your client connection factory, you configure the broker URL with
> failover:(tcp://machineA:61616,tcp://machineB:61616)
> - as brokerA is master, your clients are connected to brokerA
> - you shutdown brokerA, brokerB will take the lock and become the new 
> master
> - your clients will automatically reconnect to brokerB
> - you start brokerA, it's now a slave (as the lock is on brokerB)
> 
> Regards
> JB
> 
> On 30/11/2021 09:45, Vilius Šumskas wrote:
> > Thank you for your response!
> > 
> > Just out of curiosity, what is this masterslave:() transport about then?
> > 
> > Also, if I don't configure a network connection, will the updateClusterClientsOnRemove parameter take effect?
> > 
> > My main concern is that clients will go into a stuck state during/after the failover. I'm not sure whether all I need is to handle this in the code with a TransportListener, or whether I need to set updateClusterClientsOnRemove and updateClusterClients on the broker side to make failover smooth.
> > 

Re: ActiveMQ 5.16.x Master/Slave topology question

Posted by Tim Bain <tb...@alumni.duke.edu>.
Just FYI, networkConnectors and the masterslave transport are for making
networks of brokers, which might be networks of failover pairs. If you just
have a single active-passive failover pair, you don't need those things.
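
For completeness, this is roughly what a network connector looks like with the
embedded-broker API, in case it helps to see what networkConnectors (and the
deprecated masterslave: URI, which would go in the same place as the static:
URI) are actually for. Broker names and URLs below are made up:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class NetworkOfBrokersSketch {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("siteA");
            broker.addConnector("tcp://0.0.0.0:61616");

            // Only needed for a network of brokers, e.g. forwarding messages to
            // the failover pair at another site. A single active-passive pair
            // sharing a store needs no networkConnector at all.
            NetworkConnector nc = broker.addNetworkConnector(
                    "static:(tcp://siteB-master:61616,tcp://siteB-slave:61616)");
            nc.setDuplex(true);

            broker.start();
        }
    }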

Tim

On Wed, Dec 1, 2021, 1:49 AM Simon Lundström <si...@su.se> wrote:

> On Tue, 2021-11-30 at 17:20:31 +0100, Vilius Šumskas wrote:
> >[...]
> > As an alternative, does anybody know if I can use non-HTTP SSL load
> balancer and set client URI to something like ssl://loadbalancer_host:61616
> ? I'm thinking, if slave servers do not respond to the request until they
> become master maybe that would allow me to have a simpler configuration for
> my clients. If I will ever need to add more slaves I would just add them
> under the same load balancer.
>
> Yep. That's what JB meant by "but not bound as the broker is waiting
> for the lock".
>
> Using an external hw LB is how we currently use ActiveMQ and it works
> great. Just make sure that your loadbalancer healthchecks check all the
> protocols you are using and not just one protocol or just pinging.
>
> > If that's possible, which of the methods will be faster? We are
> deploying a point-of-sale application and I want the failover to be done in
> an instant, without losing any transactions (if that's possible :)).
>
> Producers always have to deal with network or mq being down, see
> <https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing>.
>
> ActiveMQ polls to see if the current master is up, and you don't want to
> poll NFS too often.
>
> How does your application currently deal with the fallacies of
> distributed computing when you're doing synchronous integrations?
>
> BR,
> - Simon
>
> > -----Original Message-----
> > From: Jean-Baptiste Onofré <jb...@nanthrax.net>
> > Sent: Tuesday, November 30, 2021 6:01 PM
> > To: users@activemq.apache.org
> > Subject: Re: ActiveMQ 5.16.x Master/Slave topology question
> >
> > Hi,
> >
> > masterslave: transport is deprecated. You can achieve the same with
> randomize=false basically.
> >
> > Correct: updateClusterClientsOnRemove only applies to network connectors,
> i.e. when you have active/active (a real network of brokers).
> >
> > No, the clients won't be stuck: they will reconnect to the new master.
> >
> > Let me illustrate this:
> > - you have an NFS shared filesystem on machine C
> > - machine A mounts the NFS filesystem (from C) on /opt/kahadb
> > - machine B mounts the NFS filesystem (from C) on /opt/kahadb
> > - you start brokerA on machineA, brokerA is the master (transport
> connector tcp on 61616)
> > - you start brokerB on machineB, brokerB is a slave (transport connector
> tcp on 61616, but not bound as the broker is waiting for the lock)
> > - in your client connection factory, you configure the broker URL with
> > failover:(tcp://machineA:61616,tcp://machineB:61616)
> > - as brokerA is master, your clients are connected to brokerA
> > - you shutdown brokerA, brokerB will take the lock and become the new
> master
> > - your clients will automatically reconnect to brokerB
> > - you start brokerA, it's now a slave (as the lock is on brokerB)
> >
> > Regards
> > JB
> >
> > On 30/11/2021 09:45, Vilius Šumskas wrote:
> > > Thank you for your response!
> > >
> > > Just out of curiosity, what is this masterslave:() transport about
> then?
> > >
> > > Also, if I don't configure a network connection, will the
> updateClusterClientsOnRemove parameter take effect?
> > >
> > > My main concern is that clients will go into a stuck state during/after
> the failover. I'm not sure whether all I need is to handle this in the
> code with a TransportListener, or whether I need to set
> updateClusterClientsOnRemove and updateClusterClients on the broker side to
> make failover smooth.
> > >
>