Posted to users@activemq.apache.org by "Dondorp, Erwin" <er...@cgi.com.INVALID> on 2023/02/28 23:54:03 UTC
scale-down on cluster leaves old internal addresses and queues
Hello!
I have an Artemis cluster that is set up for auto-discovery of its nodes using a jgroups file on a shared location.
When the number of members is reduced, all nodes still keep the $.artemis.internal.sf.CLUSTERNAME.UUID addresses and queues that are now unused.
But I expected those to be removed, since they are no longer in use...
Some more details:
Starting with a 5-node cluster, no problems in forming the cluster, all 5 nodes have cluster connections with the other members.
Adding 3 additional nodes, no problems in growing the cluster, all 8 nodes have cluster connections with the other members.
Then the 3 nodes are deleted again. After a while (reconnect-attempts times retry-interval, i.e. 60 x 5000 ms = 5 minutes with the config below) the cluster is happy again with the remaining 5 nodes.
The list of queues under $.activemq.notifications correctly drops back to 4 (one per 'other' node).
But the list of $.artemis.internal.sf.CLUSTERNAME.UUID addresses and queues unexpectedly stays at 7 addresses+queues.
The consumer counts are as expected: 4 queues have consumerCount=1, the other 3 have consumerCount=0.
Is it expected that the unused $.artemis.internal.sf.CLUSTERNAME.UUID addresses and queues remain after a simple scale down?
Would creating these queues with auto-delete=true help?
Thx!
Erwin
Config:
<management-notification-address>$.activemq.notifications</management-notification-address>
and
<cluster-connections>
  <cluster-connection name="mycluster">
    <address></address>
    <!-- see comment with actual connector -->
    <connector-ref>cluster-connector</connector-ref>
    <check-period>5000</check-period>
    <connection-ttl>60000</connection-ttl>
    <min-large-message-size>100000</min-large-message-size>
    <!-- call-timeout -->
    <retry-interval>5000</retry-interval>
    <!-- retry-interval-multiplier -->
    <!-- max-retry-interval -->
    <initial-connect-attempts>180</initial-connect-attempts>
    <reconnect-attempts>60</reconnect-attempts>
    <use-duplicate-detection>true</use-duplicate-detection>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
    <call-failover-timeout>30000</call-failover-timeout>
    <discovery-group-ref discovery-group-name="discovery-group" />
  </cluster-connection>
</cluster-connections>
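
For reference, the auto-delete variant I asked about above would look something like the address-setting below. This is an untested sketch; the match string is my guess at a pattern covering the internal store-and-forward addresses, and I don't know whether the broker honours auto-delete for its own cluster-created queues:

<address-settings>
  <!-- sketch: match the $.artemis.internal.sf.CLUSTERNAME.UUID addresses -->
  <address-setting match="$.artemis.internal.sf.#">
    <!-- delete queues once they have no consumers and no messages -->
    <auto-delete-queues>true</auto-delete-queues>
    <!-- delete the address once it has no queues left -->
    <auto-delete-addresses>true</auto-delete-addresses>
  </address-setting>
</address-settings>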