Posted to users@activemq.apache.org by Kristo Kuusküll <kr...@transferwise.com> on 2016/05/25 08:35:59 UTC

[Artemis] Clustering and Management interface

Hello.

We have used Artemis for some time now with a single live-node cluster. Our
approach has been that all queues and addresses are created automatically
on micro-service startup, using the Artemis management interface/API (
https://activemq.apache.org/artemis/docs/1.2.0/management.html, using JMS
messages). This has been a very flexible approach, especially with lots of
micro-services. Every micro-service knows which queues and addresses it
needs, in particular the redelivery and DLA settings on the addresses it
consumes, so the need for a central registry (i.e. a central broker.xml)
has been avoided.
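
For reference, a trimmed sketch of the kind of code we run at startup,
following the management documentation linked above (the broker URL and
the queue name are just illustrative):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueRequestor;
import javax.jms.QueueSession;
import javax.jms.Session;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class CreateQueueViaManagement {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory cf =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            QueueSession session = (QueueSession)
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Management requests are ordinary JMS messages sent to the
            // management queue; replies come back via a QueueRequestor.
            Queue managementQueue =
                ActiveMQJMSClient.createQueue("activemq.management");
            QueueRequestor requestor = new QueueRequestor(session, managementQueue);
            connection.start();

            Message request = session.createMessage();
            // Invokes JMSServerControl.createQueue on the broker this session
            // is connected to -- which is exactly why the queue ends up on
            // nodeA only, as described below.
            JMSManagementHelper.putOperationInvocation(
                request, "jms.server", "createQueue", "MyQueue10");
            Message reply = requestor.request(request);
            System.out.println("created: "
                + JMSManagementHelper.hasOperationSucceeded(reply));
        }
    }
}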

We are now looking into adding another live node to the cluster, but it
turns out that our current approach does not work well in a cluster with
multiple live nodes.

Basically, the management API creates queues and addresses only on the node
the particular JMS session is connected to; let's say it is nodeA, and
let's also say that the queue name is 'jms.queue.MyQueue10'.

Now, when a consumer connects to nodeB, it just gets
'org.apache.activemq.artemis.api.core.ActiveMQNonExistentQueueException:
AMQ119017: Queue jms.queue.MyQueue10 does not exist', even though
auto-creating and auto-deleting queues is enabled on nodeB. At the same
time, consuming a queue (let's say jms.queue.X) which does not exist on
nodeA works (the queue is automatically created on nodeB).
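
For clarity, by "enabled" I mean the usual address-setting in nodeB's
broker.xml, along these lines (the catch-all match is illustrative):

<address-settings>
   <address-setting match="#">
      <auto-create-jms-queues>true</auto-create-jms-queues>
      <auto-delete-jms-queues>true</auto-delete-jms-queues>
   </address-setting>
</address-settings>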

A side note: we discovered that diverts created over the management API are
not persistent; they are gone when Artemis is restarted.

Questions:
Is the management API not intended for the multi-live-node cluster scenario,
so that all configuration should happen via broker.xml (with identical queue
and address configuration on all nodes)?

If so, is there a way to apply broker.xml changes without restarting
Artemis? Since every message seems to "live" on one node only (i.e. there
is no mirroring like in RabbitMQ), every restart will delay some messages,
making the Artemis cluster not highly available while queue and address
configuration is being modified. I can imagine scaling down could be one
approach, but it may be slow with millions of messages.

PS.
One case of "the problem" can be reproduced with the example
'apache-artemis-1.2.0/examples/features/clustered/clustered-queue' by
commenting out the 'jms' section in the broker.xml files. When the test
connects to the second node, it fails with a queue-does-not-exist exception.
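
That is, in each node's broker.xml of that example, comment out the section
that pre-defines the queue, roughly (from memory; I believe the shipped
example uses 'exampleQueue'):

<!--
<jms xmlns="urn:activemq:jms">
   <queue name="exampleQueue"/>
</jms>
-->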

Thanks,
Kristo Kuusküll

Re: [Artemis] Clustering and Management interface

Posted by Justin Bertram <jb...@apache.com>.
The address-settings on the broker where the address is located are applied, not the settings from the broker where the address was created.  In my opinion it wouldn't make sense to apply settings from one broker in a cluster to another, as one broker may be on different hardware with different capabilities.  If you want the settings to be the same, then simply configure them that way.
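
For example, putting the same address-setting in both nodeA's and nodeB's
broker.xml would keep the behavior consistent (values below are illustrative,
based on your numbers; the DLA name and delivery attempts are made up):

<address-settings>
   <address-setting match="jms.queue.MyQueue10">
      <max-size-bytes>10485760</max-size-bytes>
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <max-delivery-attempts>5</max-delivery-attempts>
   </address-setting>
</address-settings>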


Justin

Re: [Artemis] Clustering and Management interface

Posted by Kristo Kuusküll <kr...@transferwise.com>.
Thanks for the answer; the fix seems valid.

However, I noticed that when a queue is created on nodeB, an address with
the same name is also created on nodeB, but the address gets nodeB's
default settings instead of the settings from nodeA. This raises a
question: which address settings will be used when the next producer
connects, by round-robin, to nodeB and sends a message to that address?

Let's say I created an address 'jms.queue.MyQueue10' on nodeA with
maxSizeBytes of 10M. Now when the queue is created automatically on nodeB,
the address 'jms.queue.MyQueue10' there has the default maxSizeBytes of
100M. Memory settings don't seem very important, but when the same address
on nodeA and nodeB ends up having, let's say, a different DLA, it can
create unexpected behaviour when some producers start connecting to nodeA
and others to nodeB.

Kristo Kuusküll

Re: [Artemis] Clustering and Management interface

Posted by Justin Bertram <jb...@apache.com>.
> Now, when a consumer connects to nodeB, it just gets
> 'org.apache.activemq.artemis.api.core.ActiveMQNonExistentQueueException:
> AMQ119017: Queue jms.queue.MyQueue10 does not exist', even though
> auto-creating and auto-deleting queues is enabled on nodeB. At the same
> time, consuming a queue (let's say jms.queue.X) which does not exist on
> nodeA works (the queue is automatically created on nodeB).

I believe this has been fixed via https://issues.apache.org/jira/browse/ARTEMIS-218.  Using HEAD of the "master" branch from GitHub, I tried to reproduce this problem by commenting out the JMS queues from the broker.xml files for the "clustered-queue" example, and everything worked fine.


> Is the management API not intended for the multi-live-node cluster
> scenario, so that all configuration should happen via broker.xml (with
> identical queue and address configuration on all nodes)?

When you create a queue on a node in a cluster, the queue "lives" on that particular node, but all the other nodes in the cluster are notified about it, so messages can be routed to/from that queue from any other node in the cluster.
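
That notification propagates over the cluster-connection; the
"clustered-queue" example configures it roughly like this (quoting from
memory, so treat the exact names and values as a sketch):

<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty-connector</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>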

I believe the problem you experienced was simply due to a bug (now fixed).


Justin
