Posted to users@activemq.apache.org by Archibald <ar...@gmx.net> on 2018/04/24 08:29:35 UTC

[ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Hi, 

I'm trying to create an environment with 3 live servers and a shared
JDBC-Store.
But this obviously fails due to live locks (all are using the same
NODE_MANAGER_STORE table).

So first server starts fine.
Second doesn't (timeout waiting for live lock).
Third doesn't (timeout waiting for live lock).

Is there a way to run this scenario successfully?

Thanks, 
Archibald.



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
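
[For context, a minimal sketch of the kind of jdbc-store setup being
described; the connection URL, driver and table names are placeholders, not
taken from the thread. Because node-manager-store-table-name is not set per
broker, all three live servers contend for the live lock in the same default
NODE_MANAGER_STORE table:

    <store>
      <database-store>
        <!-- placeholder connection settings -->
        <jdbc-connection-url>jdbc:mysql://db-host:3306/artemis</jdbc-connection-url>
        <jdbc-driver-class-name>com.mysql.cj.jdbc.Driver</jdbc-driver-class-name>
        <bindings-table-name>BINDINGS</bindings-table-name>
        <message-table-name>MESSAGES</message-table-name>
        <large-message-table-name>LARGE_MESSAGES</large-message-table-name>
        <page-store-table-name>PAGE_STORE</page-store-table-name>
        <!-- node-manager-store-table-name is left at its default, so every
             broker polls the same NODE_MANAGER_STORE table for the live lock -->
      </database-store>
    </store>]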

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Clebert Suconic <cl...@gmail.com>.
I'm not sure what the issue is, to be honest.

You should try the latest version of Artemis, and if you still hit an issue
you would have to give us a way to reproduce it.


On Mon, Feb 11, 2019 at 3:32 PM jraju <j....@gmail.com> wrote:

> Hi,
>
> Any update to this issue please?
>
> I am also trying a Symmetric cluster with Multi server environment where
> JGroups is used for discovery. The JGroup ping file is pointed to shared
> file location.
>
> Thanks.
> Raju J
>
>
>
> --
> Sent from:
> http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
>
-- 
Clebert Suconic

RE: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by jraju <j....@gmail.com>.
Hi,

Any update to this issue please?

I am also trying a symmetric cluster in a multi-server environment where
JGroups is used for discovery. The JGroups ping file points to a shared
file location.

Thanks.
Raju J
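
[A minimal sketch of how JGroups-based discovery of this kind is typically
wired in broker.xml; the file, channel and connector names are illustrative,
and the referenced JGroups stack would contain a FILE_PING protocol pointing
at the shared location (e.g. <FILE_PING location="/shared/pingdata"/>):

    <broadcast-groups>
      <broadcast-group name="bg-group1">
        <jgroups-file>jgroups-file_ping.xml</jgroups-file>
        <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
        <connector-ref>netty-connector</connector-ref>
      </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
      <discovery-group name="dg-group1">
        <jgroups-file>jgroups-file_ping.xml</jgroups-file>
        <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
      </discovery-group>
    </discovery-groups>]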



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

RE: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by "Stefaniuk, Marcin " <ma...@credit-suisse.com>.
Any follow-up on this? I'm also struggling with version 2.5.0.

Marcin

-----Original Message-----
From: Archibald [mailto:archi99@gmx.net] 
Sent: 26 April 2018 20:37
To: users@activemq.apache.org
Subject: Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

@jbertram: 2.5.0



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html



Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
@jbertram: 2.5.0



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Justin Bertram <jb...@apache.org>.
What version of Artemis are you using?


Justin

On Thu, Apr 26, 2018 at 5:00 AM, Archibald <ar...@gmx.net> wrote:

> The more I search for answers by reading other threads the more I get the
> impression, that the whole replication strategy only works on your test
> cases (or single host) but not in a real clustered (multi host)
> environment...
>
> See
> http://activemq.2283324.n4.nabble.com/H-A-colocated-
> replication-environment-not-working-as-expected-td4719539.html
> or
> http://activemq.2283324.n4.nabble.com/Artemis-2-5-0-
> Problems-with-colocated-scaledown-td4737583.html
>
>
>
>
>
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-
> f2341805.html
>

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
The more I search for answers in other threads, the more I get the
impression that the whole replication strategy only works in your test
cases (or on a single host) but not in a real clustered (multi-host)
environment...

See 
http://activemq.2283324.n4.nabble.com/H-A-colocated-replication-environment-not-working-as-expected-td4719539.html
or 
http://activemq.2283324.n4.nabble.com/Artemis-2-5-0-Problems-with-colocated-scaledown-td4737583.html





--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
Hi Justin,

After spending hours reading through the documentation and examples I
finally decided to continue with the replication/colocated approach.

So I've started two brokers that should pair with each other using static
connectors.


    <ha-policy>
      <replication>
        <colocated>
          <request-backup>true</request-backup>
          <max-backups>1</max-backups>
          <backup-request-retries>-1</backup-request-retries>
          <backup-request-retry-interval>5000</backup-request-retry-interval>
          <master>
            <check-for-live-server>true</check-for-live-server>
          </master>
          <slave>
             <allow-failback>true</allow-failback>
          </slave>
        </colocated>
      </replication>
    </ha-policy>

    <cluster-connections>
      <cluster-connection name="my-cluster">
        <address></address>
        <connector-ref>netty-connector</connector-ref>
        <check-period>1000</check-period>
        <connection-ttl>5000</connection-ttl>
        <min-large-message-size>50000</min-large-message-size>
        <call-timeout>5000</call-timeout>
        <retry-interval>500</retry-interval>
        <retry-interval-multiplier>1.0</retry-interval-multiplier>
        <max-retry-interval>5000</max-retry-interval>
        <initial-connect-attempts>-1</initial-connect-attempts>
        <reconnect-attempts>-1</reconnect-attempts>
        <use-duplicate-detection>true</use-duplicate-detection>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <confirmation-window-size>32000</confirmation-window-size>
        <call-failover-timeout>30000</call-failover-timeout>
        <notification-interval>1000</notification-interval>
        <notification-attempts>2</notification-attempts>
        <static-connectors>
          <connector-ref>cluster-connector</connector-ref>
        </static-connectors>
      </cluster-connection>
    </cluster-connections>

    <connectors>
      <connector name="netty-connector">tcp://10.0.1.109:61616</connector>   
      <connector name="cluster-connector">tcp://10.0.1.111:61616</connector>
    </connectors>


Both servers find each other and the backup request is also successful, e.g.

2018-04-26 06:51:19,387 INFO  [org.apache.activemq.artemis.core.server]
AMQ221027: Bridge ClusterConnectionBridge@4af78b15 [...] is connected
2018-04-26 06:51:23,258 INFO  [org.apache.activemq.artemis.core.server]
AMQ221066: Initiating quorum vote: RequestBackupQuorumVote
2018-04-26 06:51:23,271 INFO  [org.apache.activemq.artemis.core.server]
AMQ221060: Sending quorum vote request to 10.0.1.111/10.0.1.111:61616:
RequestBackupVote [backupsSize=-1, nodeID=null, backupAvailable=false]
2018-04-26 06:51:23,277 INFO  [org.apache.activemq.artemis.core.server]
AMQ221061: Received quorum vote response from 10.0.1.111/10.0.1.111:61616:
RequestBackupVote [backupsSize=0,
nodeID=3691529e-491e-11e8-a0d4-0242ac120006, backupAvailable=true]
2018-04-26 06:51:23,795 INFO  [org.apache.activemq.artemis.core.server]
AMQ221062: Received quorum vote request: RequestBackupVote [backupsSize=-1,
nodeID=null, backupAvailable=false]
2018-04-26 06:51:23,795 INFO  [org.apache.activemq.artemis.core.server]
AMQ221063: Sending quorum vote response: RequestBackupVote [backupsSize=0,
nodeID=36502b14-491e-11e8-9a28-0242ac120006, backupAvailable=true]
2018-04-26 06:51:23,952 INFO  [org.apache.activemq.artemis.core.server]
AMQ221000: backup Message Broker is starting with configuration Broker
Configuration
(clustered=true,journalDirectory=data/journalcolocated_backup_01,bindingsDirectory=data/bindingscolocated_backup_01,largeMessagesDirectory=data/large-messagescolocated_backup_01,pagingDirectory=data/pagingcolocated_backup_01)

I then stop the second server and start it again. The cluster reconnects,
but the backup request fails, e.g.

2018-04-26 06:57:02,134 WARN  [org.apache.activemq.artemis.core.client]
AMQ212037: Connection failure has been detected: AMQ119015: The connection
was disconnected because of server shutdown [code=DISCONNECTED]
2018-04-26 06:57:02,138 WARN  [org.apache.activemq.artemis.core.server]
AMQ222095: Connection failed with failedOver=false
2018-04-26 06:57:02,138 WARN  [org.apache.activemq.artemis.core.client]
AMQ212037: Connection failure has been detected: AMQ119015: The connection
was disconnected because of server shutdown [code=DISCONNECTED]
2018-04-26 06:57:02,139 WARN  [org.apache.activemq.artemis.core.client]
AMQ212037: Connection failure has been detected: AMQ119015: The connection
was disconnected because of server shutdown [code=DISCONNECTED]
2018-04-26 06:57:02,273 INFO  [org.apache.activemq.artemis.core.server]
AMQ221029: stopped bridge
$.artemis.internal.sf.fleeture.3691529e-491e-11e8-a0d4-0242ac120006
2018-04-26 06:57:02,276 WARN  [org.apache.activemq.artemis.core.server]
AMQ222095: Connection failed with failedOver=false
2018-04-26 06:57:22,240 INFO  [org.apache.activemq.artemis.core.server]
AMQ221062: Received quorum vote request: RequestBackupVote [backupsSize=-1,
nodeID=null, backupAvailable=false]
2018-04-26 06:57:22,240 INFO  [org.apache.activemq.artemis.core.server]
AMQ221063: Sending quorum vote response: RequestBackupVote [backupsSize=1,
nodeID=36502b14-491e-11e8-9a28-0242ac120006, backupAvailable=false]
2018-04-26 06:57:22,903 INFO  [org.apache.activemq.artemis.core.server]
AMQ221027: Bridge ClusterConnectionBridge@6b293748 [...] is connected
2018-04-26 06:57:27,295 INFO  [org.apache.activemq.artemis.core.server]
AMQ221062: Received quorum vote request: RequestBackupVote [backupsSize=-1,
nodeID=null, backupAvailable=false]
2018-04-26 06:57:27,296 INFO  [org.apache.activemq.artemis.core.server]
AMQ221063: Sending quorum vote response: RequestBackupVote [backupsSize=1,
nodeID=36502b14-491e-11e8-9a28-0242ac120006, backupAvailable=false]
2018-04-26 06:57:32,311 INFO  [org.apache.activemq.artemis.core.server]
AMQ221062: Received quorum vote request: RequestBackupVote [backupsSize=-1,
nodeID=null, backupAvailable=false]
2018-04-26 06:57:32,311 INFO  [org.apache.activemq.artemis.core.server]
AMQ221063: Sending quorum vote response: RequestBackupVote [backupsSize=1,
nodeID=36502b14-491e-11e8-9a28-0242ac120006, backupAvailable=false]
...

After connecting to the cluster, the second server (which was restarted)
continuously posts:

2018-04-26 06:57:27,286 INFO  [org.apache.activemq.artemis.core.server]
AMQ221066: Initiating quorum vote: RequestBackupQuorumVote
2018-04-26 06:57:27,294 INFO  [org.apache.activemq.artemis.core.server]
AMQ221060: Sending quorum vote request to 10.0.1.109/10.0.1.109:61616:
RequestBackupVote [backupsSize=-1, nodeID=null, backupAvailable=false]
2018-04-26 06:57:27,297 INFO  [org.apache.activemq.artemis.core.server]
AMQ221061: Received quorum vote response from 10.0.1.109/10.0.1.109:61616:
RequestBackupVote [backupsSize=1,
nodeID=36502b14-491e-11e8-9a28-0242ac120006, backupAvailable=false]

The whole scenario only stabilizes if I restart the first server as well
(but I doubt that it will recover from the backup; instead it will ask the
second server for a new backup).

Why doesn't colocation work here? Is this a configuration issue?

A short note: the whole scenario runs on Docker, so after a restart each
server gets a new IP address.
This issue looks similar to what Ikka described in
http://activemq.2283324.n4.nabble.com/Artemis-2-5-0-Problems-with-colocated-scaledown-td4737583.html
but that thread never got a reply, and in my scenario it doesn't matter
whether I use UDP broadcasts or static connectors.

Thank you for any help,

Archibald



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Justin Bertram <jb...@apache.org>.
> what I understood from the documentation is that with the scale-down
option it should very well be possible to offload the current work to
another live server (in case of normal shutdown, of course). Or am I wrong?

Your understanding is correct. However, scale-down and HA are not the same
thing, although they are related. HA protects data in case of a broker
failure. The typical scale-down use case is for dynamic clusters where
nodes may be brought up and then shut down in response to increasing and
decreasing loads. Your previous message talked about HA and not scale-down
specifically.
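
[For reference, a rough sketch of a scale-down policy using a static
connector (the connector name is illustrative); on a clean shutdown the
broker's remaining messages are moved to the node behind that connector:

    <ha-policy>
      <live-only>
        <scale-down>
          <connectors>
            <connector-ref>cluster-connector</connector-ref>
          </connectors>
        </scale-down>
      </live-only>
    </ha-policy>]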

> A client currently connected to a live server is notified that this
server is going down and which server will take over the work. By what? The
connector-ref definition in some cluster configuration (which might not be
publicly accessible at all)? Or the public acceptor definition of the target
server?

Where the broker scales down to is determined by the configuration, as
outlined in the documentation. I'm not sure what you mean by "public" in
this context.

> ...what happens to current transactions/messages in progress if I
shutdown (not abort) the live server?

It depends on your configuration. If you've set failover-on-shutdown = true
(as discussed in the documentation) then the backup will activate and all
your persistent data (including transaction state) will be available on the
backup. If failover-on-shutdown = false then the backup will not activate
and your persistent data will not be available again until you restart that
broker.
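
[A minimal sketch of where that flag lives on the live broker of a
shared-store pair; treat it as illustrative rather than a drop-in
configuration:

    <ha-policy>
      <shared-store>
        <master>
          <!-- let the backup take over even on a clean shutdown -->
          <failover-on-shutdown>true</failover-on-shutdown>
        </master>
      </shared-store>
    </ha-policy>]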

> What happens to messages which are not marked as PERSISTENT?

If you shut down a broker without scale-down then all non-persistent data
will be lost. That's how non-persistent data works - it doesn't survive a
broker failure or restart.


Justin

On Tue, Apr 24, 2018 at 11:02 AM, Archibald <ar...@gmx.net> wrote:

> Hi Justin,
>
> what I understood from the documentation is that with the scale-down option
> it should very well be possible to offload the current work to another live
> server (in case of normal shutdown, of course). Or am I wrong?
>
> And how's scaled down interally handled? A client currently connected to a
> live server is being notified that this server is going down and which
> server will take up the work. By what? The connector-ref definition in some
> cluster configuration (which might not be public accessable at all)? Or the
> public acceptor definition of the target server?
>
> If I now change the environment to one live and one backup server with a
> shared storage (jdbc), what happens to current transactions/messages in
> progress if I shutdown (not abort) the live server?
> Will those be migrated to the backup server? What happens to messages which
> are not marked as PERSISTENT?
>
> Sorry for bothering with so many questions. But clustering is (was never)
> easy to implement...
>
> Br,
> A.
>
>
>
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-
> f2341805.html
>

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
Hi Justin,

what I understood from the documentation is that with the scale-down option
it should very well be possible to offload the current work to another live
server (in case of normal shutdown, of course). Or am I wrong?

And how is scale-down handled internally? A client currently connected to a
live server is notified that this server is going down and which server will
take over the work. By what? The connector-ref definition in some cluster
configuration (which might not be publicly accessible at all)? Or the public
acceptor definition of the target server?

If I now change the environment to one live and one backup server with a
shared storage (JDBC), what happens to current transactions/messages in
progress if I shut down (not abort) the live server?
Will those be migrated to the backup server? What happens to messages which
are not marked as PERSISTENT?

Sorry for bothering you with so many questions, but clustering is not (and
never was) easy to implement...

Br, 
A.



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Francesco Nigro <ni...@gmail.com>.
You've taken the words right out of my mouth, Justin :)
@archibald I can only say: as he said :)

Il giorno mar 24 apr 2018 alle ore 15:04 Justin Bertram <jb...@redhat.com>
ha scritto:

> > If have 3 live servers running in one cluster which do load balancing,
> HA, etc. Do I really need backup servers? If one is down, I suspect the two
> others to take over message processing...
>
> The issue here is that HA isn't provided between live servers. HA only
> works between live and backup servers. Therefore, if you want HA you'll
> want to configure backups.
>
> Read through the HA documentation [1] and/or check out the HA examples [2]
> for more details.
>
>
> Justin
>
> [1] https://activemq.apache.org/artemis/docs/latest/ha.html
> [2]
> https://github.com/apache/activemq-artemis/tree/master/examples/features/ha
>
> On Tue, Apr 24, 2018 at 7:51 AM, Archibald <ar...@gmx.net> wrote:
>
> > Hi Franz,
> >
> > I just checked the artemis-configuration.xsd and you're right! There's a
> > node-manager-store-table-name specified. I was missing that in the latest
> > documentation about the jdbc-store. So I can have multiple tables for
> each
> > broker and a shared set of tables for bindings and messages?
> >
> > If have 3 live servers running in one cluster which do load balancing,
> HA,
> > etc. Do I really need backup servers?
> > If one is down, I suspect the two others to take over message
> processing...
> >
> > Thanks, A.
> >
> >
> >
> > --
> > Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-
> > f2341805.html
> >
>

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Justin Bertram <jb...@redhat.com>.
> If I have 3 live servers running in one cluster which do load balancing,
HA, etc., do I really need backup servers? If one is down, I expect the two
others to take over message processing...

The issue here is that HA isn't provided between live servers. HA only
works between live and backup servers. Therefore, if you want HA you'll
want to configure backups.

Read through the HA documentation [1] and/or check out the HA examples [2]
for more details.


Justin

[1] https://activemq.apache.org/artemis/docs/latest/ha.html
[2]
https://github.com/apache/activemq-artemis/tree/master/examples/features/ha
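
[As an illustration of such a live/backup pairing, a minimal replication
sketch in its non-colocated form; names and values are illustrative, see the
linked documentation and examples for complete configurations:

    <!-- live broker -->
    <ha-policy>
      <replication>
        <master>
          <check-for-live-server>true</check-for-live-server>
        </master>
      </replication>
    </ha-policy>

    <!-- backup broker -->
    <ha-policy>
      <replication>
        <slave>
          <allow-failback>true</allow-failback>
        </slave>
      </replication>
    </ha-policy>]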

On Tue, Apr 24, 2018 at 7:51 AM, Archibald <ar...@gmx.net> wrote:

> Hi Franz,
>
> I just checked the artemis-configuration.xsd and you're right! There's a
> node-manager-store-table-name specified. I was missing that in the latest
> documentation about the jdbc-store. So I can have multiple tables for each
> broker and a shared set of tables for bindings and messages?
>
> If have 3 live servers running in one cluster which do load balancing, HA,
> etc. Do I really need backup servers?
> If one is down, I suspect the two others to take over message processing...
>
> Thanks, A.
>
>
>
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-
> f2341805.html
>

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
Hi Franz, 

I just checked the artemis-configuration.xsd and you're right! There's a
node-manager-store-table-name specified. I was missing that in the latest
documentation about the jdbc-store. So I can have multiple tables for each
broker and a shared set of tables for bindings and messages?

If I have 3 live servers running in one cluster which do load balancing, HA,
etc., do I really need backup servers?
If one is down, I expect the two others to take over message processing...

Thanks, A.



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Michael André Pearce <mi...@me.com>.
Good old politics, fair enough.

Re Docker: I think there is now such a thing as a StatefulSet (I think it was
called a PetSet for a while before), which essentially gives you a stateful
disk.

But I totally get your reasons; half our decisions in life are due to
internal politics :)



Sent from my iPhone

> On 24 Apr 2018, at 14:09, Archibald <ar...@gmx.net> wrote:
> 
> @MichelAndrePearce
> 
> That's a valid question. And the answer is more or less politically driven.
> If you break down a monolith application which does everything in one
> transaction people are getting nervious about their data. Keeping messages
> within a database (which itself is also clustered and there's global
> confidence, etc) keep other's calm as you can always see, nothing is lost.
> See your data (in form of messages) is still persistent. We're running the
> brokers in some docker environment where containers (or even nodes) go up
> and down. And there's no confidence about the data being stored locally on
> some nodes.
> 
> One the other hand without a database it is way easier to create (configure)
> an elastic cluster of brokers...
> 
> Thanks A.  
> 
> 
> 
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Archibald <ar...@gmx.net>.
@MichelAndrePearce

That's a valid question, and the answer is more or less politically driven.
If you break down a monolithic application which does everything in one
transaction, people get nervous about their data. Keeping messages
within a database (which is itself also clustered, has global confidence,
etc.) keeps others calm, since you can always see that nothing is lost and
your data (in the form of messages) is still persistent. We're running the
brokers in a Docker environment where containers (or even nodes) go up
and down, and there's no confidence about the data being stored locally on
some nodes.

On the other hand, without a database it is way easier to create (configure)
an elastic cluster of brokers...

Thanks, A.



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Michael André Pearce <mi...@me.com>.
Is there a reason you haven't looked at a replicated journal setup?

The journal is the primary and most performant way to run, especially in a
scalable way, which is what multi-master is about.

JDBC is really just for legacy users; you won't get any benefit from
multi-master because the single shared JDBC backend will be your bottleneck.

Sent from my iPhone

> On 24 Apr 2018, at 11:12, Francesco Nigro <ni...@gmail.com> wrote:
> 
> Hi Archibald!
> 
> If you want to have 3 servers running over the same shared store I think
> you have to choose 1 to be the live and the remaining 2 to be backups.
> 
> If you want to have 3 live servers, each one with shared store HA you need
> to specify for each [live, backups] group a different node manager store
> table
> (for the journal too) to allow live and backup locks to be handled
> correctly..
> 
> Cheers,
> Franz
> 
> 
> Il giorno mar 24 apr 2018 alle ore 10:29 Archibald <ar...@gmx.net> ha
> scritto:
> 
>> Hi,
>> 
>> I'm trying to create an environment with 3 live servers and a shared
>> JDBC-Store.
>> But this obviously fails due to live locks (all are using the same
>> NODE_MANAGER_STORE table).
>> 
>> So first server starts fine.
>> Second doesn't (timeout waiting for live lock).
>> Third doesn't (timeout waiting for live lock).
>> 
>> Is there a way to run this scenario successfully?
>> 
>> Thanks,
>> Archibald.
>> 
>> 
>> 
>> --
>> Sent from:
>> http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
>> 

Re: [ARTEMIS] Clustered broker with multiple live servers and shared JDBC-Store

Posted by Francesco Nigro <ni...@gmail.com>.
Hi Archibald!

If you want to have 3 servers running over the same shared store, I think
you have to choose 1 to be the live and the remaining 2 to be backups.

If you want to have 3 live servers, each one with shared-store HA, you need
to specify a different node manager store table for each [live, backups]
group (and separate journal tables too) so that live and backup locks are
handled correctly.

Cheers,
Franz
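
[A minimal sketch of what that per-group separation might look like in the
jdbc-store of one [live, backup] pair; table names are illustrative, and a
second pair would use its own set (e.g. ..._B) while still pointing at the
same database:

    <store>
      <database-store>
        <jdbc-connection-url>jdbc:mysql://db-host:3306/artemis</jdbc-connection-url>
        <jdbc-driver-class-name>com.mysql.cj.jdbc.Driver</jdbc-driver-class-name>
        <!-- journal tables private to group A (live + its backup) -->
        <bindings-table-name>BINDINGS_A</bindings-table-name>
        <message-table-name>MESSAGES_A</message-table-name>
        <large-message-table-name>LARGE_MESSAGES_A</large-message-table-name>
        <page-store-table-name>PAGE_STORE_A</page-store-table-name>
        <!-- lock table private to group A, so only this pair competes for
             the live lock -->
        <node-manager-store-table-name>NODE_MANAGER_STORE_A</node-manager-store-table-name>
      </database-store>
    </store>]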


Il giorno mar 24 apr 2018 alle ore 10:29 Archibald <ar...@gmx.net> ha
scritto:

> Hi,
>
> I'm trying to create an environment with 3 live servers and a shared
> JDBC-Store.
> But this obviously fails due to live locks (all are using the same
> NODE_MANAGER_STORE table).
>
> So first server starts fine.
> Second doesn't (timeout waiting for live lock).
> Third doesn't (timeout waiting for live lock).
>
> Is there a way to run this scenario successfully?
>
> Thanks,
> Archibald.
>
>
>
> --
> Sent from:
> http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
>