Posted to users@activemq.apache.org by Mark Johnson <sp...@gmail.com> on 2022/10/14 08:44:59 UTC

Classic Message Groups and Horizontal Scaling of Consumers

Hi,

We're considering using Message Groups to resolve update contention between
consumers.

The consumers already handle contention by rolling back and retrying, but
that significantly reduces effective throughput, so the objective is to
eliminate as much contention as possible.

Message ordering is not strictly required; loss of sequence within a
reasonable time window is acceptable.

The set of GroupIDs is not known in advance but is derived from the message
content. The set will mostly contain fewer than 20k groups, but some sets
could be up to 1m.
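
For context, here is a sketch of the producer side we have in mind (the
customerId field and the "cust-" prefix are illustrative assumptions, not
our actual schema). With Classic, a message is pinned to a group by setting
the standard JMSXGroupID string property before sending:

```java
// Producer-side sketch. The actual JMS send would look like:
//
//   TextMessage msg = session.createTextMessage(body);
//   msg.setStringProperty("JMSXGroupID", groupIdFor(customerId));
//   producer.send(msg);
//
public final class GroupIds {
    // Derive a stable group id from message content. Messages with no
    // usable key get no JMSXGroupID and are dispatched ungrouped.
    static String groupIdFor(String customerId) {
        if (customerId == null || customerId.trim().isEmpty()) {
            return null; // leave the property unset -> not grouped
        }
        return "cust-" + customerId.trim();
    }

    public static void main(String[] args) {
        System.out.println(groupIdFor("  42 ")); // cust-42
        System.out.println(groupIdFor(null));    // null -> ungrouped
    }
}
```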

Consumers are horizontally scaled on demand. At the moment this is done by
scaling out a set of VMs in the cloud, but we have plans to slim down the
app and scale out within Kubernetes.
The total number of consumers is unlikely to exceed 1k.

I have a few questions.

1. Auto Rebalance
Some means of redistributing the load is required; otherwise scaling
consumers on demand will achieve nothing. (We have the same issue with some
old web code that requires sticky sessions.)

Artemis has a 'group-rebalance' config; does Classic have any equivalent
feature?
If not, is there a suggested implementation pattern?
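
One pattern we've been sketching (this is our own assumption, built on the
documented "close group" mechanism, so please correct me if it's wrong) is
to have the producer periodically close groups by sending a message with
the JMSXGroupSeq int property set to -1, which drops the broker's
group-to-consumer pin; the next message for that group then gets freshly
assigned, letting newly added consumers pick up groups. A toy in-memory
model of that reassignment, not broker code:

```java
import java.util.HashMap;
import java.util.Map;

public final class RebalanceModel {
    // Models the broker's group -> consumer pinning.
    private final Map<String, Integer> owner = new HashMap<>();

    // First message of an unowned group pins it; later messages stick.
    int dispatch(String group, int candidateConsumer) {
        return owner.computeIfAbsent(group, g -> candidateConsumer);
    }

    // Models a producer sending JMSXGroupSeq = -1 for the group:
    //   msg.setStringProperty("JMSXGroupID", group);
    //   msg.setIntProperty("JMSXGroupSeq", -1);
    void closeGroup(String group) {
        owner.remove(group);
    }

    public static void main(String[] args) {
        RebalanceModel m = new RebalanceModel();
        System.out.println(m.dispatch("g1", 0)); // 0: pinned to consumer 0
        System.out.println(m.dispatch("g1", 1)); // 0: group is sticky
        m.closeGroup("g1");                      // producer closes the group
        System.out.println(m.dispatch("g1", 1)); // 1: re-pinned after close
    }
}
```

The open question for us is how the producer knows when to close groups
without tracking broker-side consumer counts itself.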

2. GroupID Memory
Given that the consumer count will generally be much smaller than the group
set size, hashing group IDs to reduce the effective group set size seems
essential for keeping memory consumption down when there can be 1m groups,
e.g. Classic's MessageGroupHashBucket or Artemis group-buckets.

Approximately how much memory should be allowed for 1024 buckets?
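
For reference, this is how I understand bucketed grouping is enabled per
destination in conf/activemq.xml (the queue name is a placeholder; please
correct me if I've misread the docs):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- "orders.queue" is a placeholder destination name -->
      <policyEntry queue="orders.queue"
                   messageGroupMapFactoryType="bucket?bucketCount=1024"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

My reading is that the bucket map holds a fixed array of bucketCount
entries (each mapping to a consumer), so memory should be bounded by the
bucket count rather than the number of distinct groups, but I'd like to
confirm that.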

I assume that auto rebalance redistributes the buckets rather than
individual groups when group buckets are enabled; is that correct?
A consequence of the random nature of hashing is that there is a small
possibility that very busy groups land in the same bucket, in which case
rebalancing won't help to spread the load.

3. Anything else that I should be considering?

Thanks

Mark Johnson