Limiting the number of concurrent consumers across multiple queues

Posted to users@qpid.apache.org by Helen Kwong <he...@gmail.com> on 2014/01/16 20:20:20 UTC

Hi Qpid users / experts,

I need to limit the number of consumers concurrently processing messages
considered to be in the same group, across multiple queues, and was
wondering if anyone has ideas about how to do it. We’re using the Java
broker and client, and have multiple queues, each with multiple listeners,
each listener’s session listening to multiple queues. Some messages are
associated with groups, and for a given group we want at most K listeners
processing messages from the group at any given time. The messages are
enqueued to multiple queues, and it’s possible for messages from the same
group to be in different queues.

If messages in the same group can go into only one queue, then the message
groups feature will give us what we need (it’d work directly with K = 1 and
with K > 1 we can tweak the grouping value, e.g., hash it to one of 1 to K
and append the number to the grouping value). But since messages considered
to be in the same group can be in different queues, the feature is not
enough for our case.
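
For what it's worth, the K > 1 tweak would look roughly like this on the
producer side -- just a sketch, assuming the broker is configured to take its
group key from the JMSXGroupID property, and with all names made up:

import java.util.concurrent.atomic.AtomicInteger;

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch only: split each logical group into K sub-groups by picking a
// bucket per message (round-robin here; hashing some per-message value works
// too) and appending it to the grouping value. Message groups then serialize
// each sub-group, so at most K consumers process the logical group at once.
public class GroupedSender {
    private static final int K = 3;
    private static final AtomicInteger COUNTER = new AtomicInteger();

    static void send(Session session, MessageProducer producer,
                     String logicalGroup, String payload) throws JMSException {
        int bucket = Math.abs(COUNTER.getAndIncrement() % K) + 1; // 1..K
        TextMessage message = session.createTextMessage(payload);
        // Assumes the broker reads its group key from JMSXGroupID.
        message.setStringProperty("JMSXGroupID", logicalGroup + "-" + bucket);
        producer.send(message);
    }
}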

Since it looks like the broker side doesn’t have what we need exactly,
we’re thinking about how to do this from the client side. We’re thinking
along the lines of having some semaphore object per group, shared between
the different listeners, and whenever a listener receives a message, it
will try to acquire a permit from the semaphore for that group. If it’s
able to acquire a permit, it processes the message and releases the permit
upon completion. If it cannot acquire a permit, it reenqueues the message in
some way (a rough sketch of this listener logic follows the options below).
For example:

1) Reenqueue the message back to the same queue so it can be retried right
away. But this would lead to a lot of churning when permits are not
available for a while, so we’ve ruled this out.

2) Same as #1, but sleep for a short while first so we wouldn’t have the
high churning. But since each listener’s session is responsible for
multiple queues, this can decrease the throughput of other queues.

3) Enqueue the message to a special queue that stores messages waiting for
a permit, a queue that is not listened to by anyone. A periodic sweeper job
would wake up once in a while, say every minute, pull all the messages off
the waiting queue, and reenqueue them to their respective original queues.
But throughput would be limited by the sweeper interval.

4) Like #3, but don’t use a periodic sweeper. Instead, when a listener that
was able to acquire a permit is done with a message, look up the next
waiting message of the same group in the waiting queue using a JMS
selector, and reenqueue it back to the original queue. But lookup
performance might be poor if the queue depth is high.
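
For concreteness, the listener side of the semaphore idea would look roughly
like this -- the group property name and the process() / reenqueue() helpers
are placeholders for our own code, and reenqueue() would do whichever of the
options above we pick:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Semaphore;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch only: one semaphore per group, shared by all listeners in the
// process, limiting each group to K concurrently processed messages.
public class GroupLimitedListener implements MessageListener {
    private static final int K = 2;
    private static final ConcurrentMap<String, Semaphore> PERMITS =
            new ConcurrentHashMap<String, Semaphore>();

    public void onMessage(Message message) {
        try {
            String group = message.getStringProperty("groupKey"); // made-up name
            if (group == null) {
                process(message);        // messages without a group are unlimited
                return;
            }
            Semaphore semaphore = PERMITS.get(group);
            if (semaphore == null) {
                PERMITS.putIfAbsent(group, new Semaphore(K));
                semaphore = PERMITS.get(group);
            }
            if (semaphore.tryAcquire()) {
                try {
                    process(message);    // normal handling
                } finally {
                    semaphore.release();
                }
            } else {
                reenqueue(message);      // one of options 1-4 above
            }
        } catch (JMSException e) {
            // error handling omitted in this sketch
        }
    }

    private void process(Message message) { /* ... */ }
    private void reenqueue(Message message) { /* ... */ }
}

(This of course only limits listeners within a single process.)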

Each of these has some drawbacks. Does anyone have ideas about other
possible approaches (maybe entirely different from the above), or has done
something similar?

Thanks,
Helen

Re: Limiting the number of concurrent consumers across multiple queues

Posted by Robbie Gemmell <ro...@gmail.com>.
That's a little hard to parse, and all the more difficult to reason about
without additional detail hehe. Regardless, it does sound like anything
workable using groups would involve more change to your systems than you
want.

One last thing that occurred to me. You mention listeners, which I am
assuming means use of onMessage(), as well as having multiple consumers per
Session to consume from multiple queues. Given your concern about
starvation and concurrent processing, and also depending on exactly what
you meant by things taking a long time to process, it's worth noting that
if you are using AMQP 0-10 then the client's message prefetch is controlled
per-consumer, so by having multiple consumers using onMessage() on a single
session you are essentially forcing only one consumer at a time to actually
receive a message, whilst the prefetched messages for all the other
consumers wait their turn for the single-threaded session to call
onMessage() for them.
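
If that is the case, the usual way around it is to give each consumer its own
session, so that their onMessage() callbacks can be dispatched concurrently.
Roughly like the below (plain JMS, names illustrative; a listener shared this
way then needs to be thread safe):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: one Session (and so one dispatch thread) per consumer,
// instead of several consumers sharing a single session.
public class PerConsumerSessions {
    static void attach(Connection connection, MessageListener listener,
                       String... queueNames) throws JMSException {
        for (String name : queueNames) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(name);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(listener);
        }
        connection.start();
    }
}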

Robbie

On 21 January 2014 01:55, Helen Kwong <he...@gmail.com> wrote:

> Hi Robbie,
>
> Really appreciate your taking the time to look at this and making the
> change to add the extra options / modes for message grouping.
>
> Yes you're right about what I was thinking about with using selectors; I
> see what Gordon was suggesting now.
>
> More about our situation: we have many different types of messages, and for
> each type we route messages into 10 different queues, based on a certain
> property P that can be put into 10 different groups, because we want to
> make sure one group's messages taking a long time to process won't starve
> another group for a long time. (There are no concurrency requirements
> between these groups in general.) For each queue we have multiple
> listeners, and each listener's session has listeners to multiple queues --
> this is done so that our consumer resources are flexible. A certain type T
> of messages may have a concurrency requirement based on a property Q and
> limit K (what Q and K are depends on T), the need being that for a given
> value x, at most K consumers can be concurrently processing T messages with
> Q = x. We don't know what the groups / Q values are ahead of time. And like
> I mentioned before, messages with Q = x can have different values of P and
> thus can be in different queues in our current setup. No particular Q
> values would be more important than others, but the volumes of different
> values can be very different.
>
> Since we don't know the values of Q ahead of time, we can't have consumers
> selecting on specific Q values. But we could have made messages of the same
> type T to all go to 1 queue, and had some consumers selecting for the first
> P group, some selecting for the second P group, etc., for all 10 groups of
> P. Then we could just use message groups on the single queue. However, this
> would actually involve some pretty major changes to how we set up our
> queues and clients, and things we have in place to ensure "fairness" at
> different levels. For example, we sometimes can have a huge number of
> messages with a certain P value (say P = a), and we don't want them
> starving other messages within the same P group, so we have a mechanism in
> place to route messages with P = a to a separate queue when that happens,
> to protect other messages in the same P group. If we are to use selectors
> instead of queues, we could achieve something similar by closing the
> current consumers and replace each of them with an extra selector criterion
> P != a (I'm assuming that's possible), but this requires a lot of changes
> to how we do things, so it's not a route we want to take.
>
> As for message ordering, ideally we'd be able to preserve it, but it's not
> required. The approach I mentioned would indeed destroy the order, but it's
> something we can live with. What we need is just to limit the number of
> concurrently processing threads for a group (for Q).
>
> So unfortunately at this point, it looks like we probably won't be using
> message groups, with the way we have queues and fairness mechanisms set up.
> If I'm missing something and you think there's another way, please let me
> know. I also hope making the change you made for message groups didn't take
> up too much of your time!
>
> Thanks,
> Helen
>
>
> On Mon, Jan 20, 2014 at 10:43 AM, Robbie Gemmell
> <ro...@gmail.com>wrote:
>
> > Hi Helen,
> >
> > I think you and Gordon might be talking about a different use of
> > selectors; yours being for the case where you try to pick off particular
> > messages from an existing backlog at a specific time, and Gordon's
> > suggestion being more around use of selectors on all your long-lived
> > consumers to consume the messages as they arrive and remove the need to
> > pick out specific messages later.
> >
> > Picking individual messages off a queue with new consumers using a
> selector
> > is never likely to be that fast, because as you say it might have to
> > evaluate every message first to find a match (assuming there is one). On
> > the other hand, when using [a number of] long lived consumers aimed at
> > [collectively] consuming all the messages on the queue, the selectors are
> > simply evaluated during the regular process of attempting to deliver each
> > message to the available consumers. The overhead then moves from checking
> > every message specifically until it matches, which may be dependent on
> the
> > queue depth, to the regular competition between particular subscriptions
> > for accepting the messages as they are processed, which in some ways
> > isn't.
> >
> > I also presume that, as discussed on the other thread, you could also
> > currently be seeing a hit in performance due to using 'shared groups'
> with
> > unique'ish keys; more on this later.
> >
> > Particular distributions of selectors could be used to ensure that at
> most
> > K consumers could ever process particular messages like X, and thus do so
> > at one time, but would additionally mean that if those particular
> consumers
> > were busy processing messages like Y which they can also consume, then it
> > could be possible for other consumers to sit idle because their selectors
> > indicate they can't process messages like X. It would need to be
> considered
> > as a balance, governed by whatever it is you are looking to achieve by
> > limiting the maximum number of concurrent consumers for a given type of
> > message.
> >
> > The same approach could technically be used across multiple queues, which
> > might also use groups (which is a little weird to write, since groups are
> > usually used to prevent concurrent delivery), but doing so would add an
> > additional element to the 'idle consumers' balancing problem, wherein at
> a
> > given time some sources might have messages of interest and others might
> > not.
> >
> > The above, and any other specific ideas people have, might all depend
> > on what
> > your messages, groups, and queues are actually like. E.g. are some higher
> > volume than others, are some more important than others, do you know what
> > the groups are ahead of time, etc etc.
> >
> > If I am reading them correctly it seems like all of the original options
> in
> > this thread allow for a message to go round in circles between the
> various
> > queues, potentially forever. There also seems to be scope for interesting
> > ordering effects from re-enqueuing messages. Given that ordering is
> another
> > key reason for using message grouping, what are your actual ordering
> > requirements?
> >
> > On a related-but-not note, see the other thread for further discussion
> > around improvements for queues with 'shared groups'.
> >
> > Robbie
> >
> > On 17 January 2014 21:38, Helen Kwong <he...@gmail.com> wrote:
> >
> > > Hi Gordon,
> > >
> > > In the tests that we've run, the time it takes to dequeue messages
> using
> > > selectors seems to increase with the depth of the queue. Since the
> number
> > > of unprocessed messages can sometimes be quite high (e.g., >200000), if
> > > they are all on the same queue and we use selectors, the dequeue time
> > will
> > > increase by a lot (e.g., 3-4 seconds if we're selecting the 200000th
> > > message), and the performance hit is probably too much for us. Is
> there a
> > > way to dequeue using selectors quickly from a high-depth queue?
> > >
> > > Helen
> > >
> > >
> > > On Fri, Jan 17, 2014 at 2:40 AM, Gordon Sim <gs...@redhat.com> wrote:
> > >
> > > > On 01/16/2014 07:20 PM, Helen Kwong wrote:
> > > >
> > > >> Hi Qpid users / experts,
> > > >>
> > > >> I need to limit the number of consumers concurrently processing
> > messages
> > > >> considered to be in the same group, across multiple queues, and was
> > > >> wondering if anyone has ideas about how to do it. We’re using the
> Java
> > > >> broker and client, and have multiple queues, each with multiple
> > > listeners,
> > > >> each listener’s session listening to multiple queues. Some messages
> > are
> > > >> associated with groups, and for a given group we want at most K
> > > listeners
> > > >> processing messages from the group at any given time. The messages
> are
> > > >> enqueued to multiple queues, and it’s possible for messages from the
> > > same
> > > >> group to be in different queues.
> > > >>
> > > >> If messages in the same group can go into only one queue, then the
> > > message
> > > >> groups feature will give us what we need (it’d work directly with K
> =
> > 1
> > > >> and
> > > >> with K > 1 we can tweak the grouping value, e.g., hash it to one of
> 1
> > > to K
> > > >> and append the number to the grouping value). But since messages
> > > >> considered
> > > >> to be in the same group can be in different queues, the feature is
> not
> > > >> enough for our case.
> > > >>
> > > >
> > > > Instead of multiple queues, could you have one queue with different
> > > > selectors pulling subsets of the messages?
> > > >
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > > For additional commands, e-mail: users-help@qpid.apache.org
> > > >
> > > >
> > >
> >
>

Re: Limiting the number of concurrent consumers across multiple queues

Posted by Helen Kwong <he...@gmail.com>.
Hi Robbie,

Really appreciate your taking the time to look at this and making the
change to add the extra options / modes for message grouping.

Yes you're right about what I was thinking about with using selectors; I
see what Gordon was suggesting now.

More about our situation: we have many different types of messages, and for
each type we route messages into 10 different queues, based on a certain
property P that can be put into 10 different groups, because we want to
make sure one group's messages taking a long time to process won't starve
another group for a long time. (There are no concurrency requirements
between these groups in general.) For each queue we have multiple
listeners, and each listener's session has listeners on multiple queues --
this is done so that our consumer resources are flexible. A certain type T
of messages may have a concurrency requirement based on a property Q and a
limit K (what Q and K are depends on T), the need being that for a given
value x, at most K consumers can be concurrently processing T messages with
Q = x. We don't know what the groups / Q values are ahead of time. And like
I mentioned before, messages with Q = x can have different values of P and
thus can be in different queues in our current setup. No particular Q
values would be more important than others, but the volumes of different
values can be very different.
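
Conceptually, the routing by P looks something like the sketch below (the
property and queue names are simplified, and the hash just stands in for
however a P value gets bucketed into the 10 groups):

import javax.jms.JMSException;
import javax.jms.Message;

// Simplified sketch of the current routing: messages of a type T fan out
// over 10 queues according to which P group the message's P value falls in.
public class QueueRouter {
    private static final int P_GROUPS = 10;

    static String queueFor(String messageType, Message message) throws JMSException {
        String p = message.getStringProperty("P");                  // stand-in name
        int pGroup = (p.hashCode() & Integer.MAX_VALUE) % P_GROUPS; // 0..9
        return messageType + "." + pGroup;                          // e.g. "typeT.3"
    }
}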

Since we don't know the values of Q ahead of time, we can't have consumers
selecting on specific Q values. But we could have made all messages of the
same type T go to one queue, and had some consumers selecting for the first
P group, some selecting for the second P group, etc., for all 10 groups of
P. Then we could just use message groups on the single queue. However, this
would actually involve some pretty major changes to how we set up our
queues and clients, and things we have in place to ensure "fairness" at
different levels. For example, we sometimes can have a huge number of
messages with a certain P value (say P = a), and we don't want them
starving other messages within the same P group, so we have a mechanism in
place to route messages with P = a to a separate queue when that happens,
to protect other messages in the same P group. If we were to use selectors
instead of queues, we could achieve something similar by closing the current
consumers and replacing each of them with one that has an extra selector
criterion P != a (I'm assuming that's possible), but this would require a
lot of changes
to how we do things, so it's not a route we want to take.

As for message ordering, ideally we'd be able to preserve it, but it's not
required. The approach I mentioned would indeed destroy the order, but it's
something we can live with. What we need is just to limit the number of
threads concurrently processing a group (a given Q value).

So unfortunately at this point, it looks like we probably won't be using
message groups, with the way we have queues and fairness mechanisms set up.
If I'm missing something and you think there's another way, please let me
know. I also hope making the change you made for message groups didn't take
up too much of your time!

Thanks,
Helen


On Mon, Jan 20, 2014 at 10:43 AM, Robbie Gemmell
<ro...@gmail.com>wrote:

> Hi Helen,
>
> I think you and Gordon might be talking about a different use of
> selectors; yours being for the case where you try to pick off particular
> messages from an existing backlog at a specific time, and Gordon's
> suggestion being more around use of selectors on all your long-lived
> consumers to consume the messages as they arrive and remove the need to
> pick out specific messages later.
>
> Picking individual messages off a queue with new consumers using a selector
> is never likely to be that fast, because as you say it might have to
> evaluate every message first to find a match (assuming there is one). On
> the other hand, when using [a number of] long lived consumers aimed at
> [collectively] consuming all the messages on the queue, the selectors are
> simply evaluated during the regular process of attempting to deliver each
> message to the available consumers. The overhead then moves from checking
> every message specifically until it matches, which may be dependent on the
> queue depth, to the regular competition between particular subscriptions
> for accepting the messages as they are processed, which in some ways isn't.
>
> I also presume that, as discussed on the other thread, you could also
> currently be seeing a hit in performance due to using 'shared groups' with
> unique'ish keys; more on this later.
>
> Particular distributions of selectors could be used to ensure that at most
> K consumers could ever process particular messages like X, and thus do so
> at one time, but would additionally mean that if those particular consumers
> were busy processing messages like Y which they can also consume, then it
> could be possible for other consumers to sit idle because their selectors
> indicate they can't process messages like X. It would need to be considered
> as a balance, governed by whatever it is you are looking to achieve by
> limiting the maximum number of concurrent consumers for a given type of
> message.
>
> The same approach could technically be used across multiple queues, which
> might also use groups (which is a little weird to write, since groups are
> usually used to prevent concurrent delivery), but doing so would add an
> additional element to the 'idle consumers' balancing problem, wherein at a
> given time some sources might have messages of interest and others might
> not.
>
> The above, and any other specific ideas people have, might all depend on what
> your messages, groups, and queues are actually like. E.g. are some higher
> volume than others, are some more important than others, do you know what
> the groups are ahead of time, etc etc.
>
> If I am reading them correctly it seems like all of the original options in
> this thread allow for a message to go round in circles between the various
> queues, potentially forever. There also seems to be scope for interesting
> ordering effects from re-enqueuing messages. Given that ordering is another
> key reason for using message grouping, what are your actual ordering
> requirements?
>
> On a related-but-not note, see the other thread for further discussion
> around improvements for queues with 'shared groups'.
>
> Robbie
>
> On 17 January 2014 21:38, Helen Kwong <he...@gmail.com> wrote:
>
> > Hi Gordon,
> >
> > In the tests that we've run, the time it takes to dequeue messages using
> > selectors seems to increase with the depth of the queue. Since the number
> > of unprocessed messages can sometimes be quite high (e.g., >200000), if
> > they are all on the same queue and we use selectors, the dequeue time
> will
> > increase by a lot (e.g., 3-4 seconds if we're selecting the 200000th
> > message), and the performance hit is probably too much for us. Is there a
> > way to dequeue using selectors quickly from a high-depth queue?
> >
> > Helen
> >
> >
> > On Fri, Jan 17, 2014 at 2:40 AM, Gordon Sim <gs...@redhat.com> wrote:
> >
> > > On 01/16/2014 07:20 PM, Helen Kwong wrote:
> > >
> > >> Hi Qpid users / experts,
> > >>
> > >> I need to limit the number of consumers concurrently processing
> messages
> > >> considered to be in the same group, across multiple queues, and was
> > >> wondering if anyone has ideas about how to do it. We’re using the Java
> > >> broker and client, and have multiple queues, each with multiple
> > listeners,
> > >> each listener’s session listening to multiple queues. Some messages
> are
> > >> associated with groups, and for a given group we want at most K
> > listeners
> > >> processing messages from the group at any given time. The messages are
> > >> enqueued to multiple queues, and it’s possible for messages from the
> > same
> > >> group to be in different queues.
> > >>
> > >> If messages in the same group can go into only one queue, then the
> > message
> > >> groups feature will give us what we need (it’d work directly with K =
> 1
> > >> and
> > >> with K > 1 we can tweak the grouping value, e.g., hash it to one of 1
> > to K
> > >> and append the number to the grouping value). But since messages
> > >> considered
> > >> to be in the same group can be in different queues, the feature is not
> > >> enough for our case.
> > >>
> > >
> > > Instead of multiple queues, could you have one queue with different
> > > selectors pulling subsets of the messages?
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > For additional commands, e-mail: users-help@qpid.apache.org
> > >
> > >
> >
>

Re: Limiting the number of concurrent consumers across multiple queues

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Helen,

I think you and Gordon might be talking about a different use of
selectors; yours being for the case where you try to pick off particular
messages from an existing backlog at a specific time, and Gordon's
suggestion being more around use of selectors on all your long-lived
consumers to consume the messages as they arrive and remove the need to
pick out specific messages later.

Picking individual messages off a queue with new consumers using a selector
is never likely to be that fast, because as you say it might have to
evaluate every message first to find a match (assuming there is one). On
the other hand, when using [a number of] long lived consumers aimed at
[collectively] consuming all the messages on the queue, the selectors are
simply evaluated during the regular process of attempting to deliver each
message to the available consumers. The overhead then moves from checking
every message specifically until it matches, which may be dependent on the
queue depth, to the regular competition between particular subscriptions
for accepting the messages as they are processed, which in some ways isn't.

I also presume that, as discussed on the other thread, you could also
currently be seeing a hit in performance due to using 'shared groups' with
unique'ish keys; more on this later.

Particular distributions of selectors could be used to ensure that at most
K consumers could ever process particular messages like X, and thus do so
at one time, but would additionally mean that if those particular consumers
were busy processing messages like Y which they can also consume, then it
could be possible for other consumers to sit idle because their selectors
indicate they can't process messages like X. It would need to be considered
as a balance, governed by whatever it is you are looking to achieve by
limiting the maximum number of concurrent consumers for a given type of
message.
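
As a very rough illustration of what I mean (the bucket property and how it
would be derived from your Q values are entirely made up): give each consumer
a selector covering a set of bucket values, arranged so that any given bucket
appears in the selectors of exactly K consumers.

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: each consumer gets a selector over a set of bucket values (a
// message property derived from Q), so a given bucket can only ever be worked
// on by the consumers whose selectors include it.
public class SelectorDistribution {
    static MessageConsumer forBuckets(Session session, Queue queue, int... buckets)
            throws JMSException {
        StringBuilder selector = new StringBuilder();
        for (int i = 0; i < buckets.length; i++) {
            if (i > 0) {
                selector.append(" OR ");
            }
            selector.append("bucket = ").append(buckets[i]);
        }
        return session.createConsumer(queue, selector.toString());
    }
}

With three consumers and buckets 0-2, for example, selectors over {0,1},
{1,2} and {2,0} would mean each bucket can be processed by at most two
consumers at once -- and equally that a consumer sits idle whenever only the
buckets it cannot match have messages, which is the balance I mention above.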

The same approach could technically be used across multiple queues, which
might also use groups (which is a little weird to write, since groups are
usually used to prevent concurrent delivery), but doing so would add an
additional element to the 'idle consumers' balancing problem, wherein at a
given time some sources might have messages of interest and others might
not.

The above, and any other specific ideas people have, might all depend on what
your messages, groups, and queues are actually like. E.g. are some higher
volume than others, are some more important than others, do you know what
the groups are ahead of time, etc etc.

If I am reading them correctly it seems like all of the original options in
this thread allow for a message to go round in circles between the various
queues, potentially forever. There also seems to be scope for interesting
ordering effects from re-enqueuing messages. Given that ordering is another
key reason for using message grouping, what are your actual ordering
requirements?

On a related-but-not note, see the other thread for further discussion
around improvements for queues with 'shared groups'.

Robbie

On 17 January 2014 21:38, Helen Kwong <he...@gmail.com> wrote:

> Hi Gordon,
>
> In the tests that we've run, the time it takes to dequeue messages using
> selectors seems to increase with the depth of the queue. Since the number
> of unprocessed messages can sometimes be quite high (e.g., >200000), if
> they are all on the same queue and we use selectors, the dequeue time will
> increase by a lot (e.g., 3-4 seconds if we're selecting the 200000th
> message), and the performance hit is probably too much for us. Is there a
> way to dequeue using selectors quickly from a high-depth queue?
>
> Helen
>
>
> On Fri, Jan 17, 2014 at 2:40 AM, Gordon Sim <gs...@redhat.com> wrote:
>
> > On 01/16/2014 07:20 PM, Helen Kwong wrote:
> >
> >> Hi Qpid users / experts,
> >>
> >> I need to limit the number of consumers concurrently processing messages
> >> considered to be in the same group, across multiple queues, and was
> >> wondering if anyone has ideas about how to do it. We’re using the Java
> >> broker and client, and have multiple queues, each with multiple
> listeners,
> >> each listener’s session listening to multiple queues. Some messages are
> >> associated with groups, and for a given group we want at most K
> listeners
> >> processing messages from the group at any given time. The messages are
> >> enqueued to multiple queues, and it’s possible for messages from the
> same
> >> group to be in different queues.
> >>
> >> If messages in the same group can go into only one queue, then the
> message
> >> groups feature will give us what we need (it’d work directly with K = 1
> >> and
> >> with K > 1 we can tweak the grouping value, e.g., hash it to one of 1
> to K
> >> and append the number to the grouping value). But since messages
> >> considered
> >> to be in the same group can be in different queues, the feature is not
> >> enough for our case.
> >>
> >
> > Instead of multiple queues, could you have one queue with different
> > selectors pulling subsets of the messages?
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > For additional commands, e-mail: users-help@qpid.apache.org
> >
> >
>

Re: Limiting the number of concurrent consumers across multiple queues

Posted by Helen Kwong <he...@gmail.com>.
Hi Gordon,

In the tests that we've run, the time it takes to dequeue messages using
selectors seems to increase with the depth of the queue. Since the number
of unprocessed messages can sometimes be quite high (e.g., >200000), if
they are all on the same queue and we use selectors, the dequeue time will
increase by a lot (e.g., 3-4 seconds if we're selecting the 200000th
message), and the performance hit is probably too much for us. Is there a
way to dequeue using selectors quickly from a high-depth queue?

Helen


On Fri, Jan 17, 2014 at 2:40 AM, Gordon Sim <gs...@redhat.com> wrote:

> On 01/16/2014 07:20 PM, Helen Kwong wrote:
>
>> Hi Qpid users / experts,
>>
>> I need to limit the number of consumers concurrently processing messages
>> considered to be in the same group, across multiple queues, and was
>> wondering if anyone has ideas about how to do it. We’re using the Java
>> broker and client, and have multiple queues, each with multiple listeners,
>> each listener’s session listening to multiple queues. Some messages are
>> associated with groups, and for a given group we want at most K listeners
>> processing messages from the group at any given time. The messages are
>> enqueued to multiple queues, and it’s possible for messages from the same
>> group to be in different queues.
>>
>> If messages in the same group can go into only one queue, then the message
>> groups feature will give us what we need (it’d work directly with K = 1
>> and
>> with K > 1 we can tweak the grouping value, e.g., hash it to one of 1 to K
>> and append the number to the grouping value). But since messages
>> considered
>> to be in the same group can be in different queues, the feature is not
>> enough for our case.
>>
>
> Instead of multiple queues, could you have one queue with different
> selectors pulling subsets of the messages?
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org
>
>

Re: Limiting the number of concurrent consumers across multiple queues

Posted by Gordon Sim <gs...@redhat.com>.
On 01/16/2014 07:20 PM, Helen Kwong wrote:
> Hi Qpid users / experts,
>
> I need to limit the number of consumers concurrently processing messages
> considered to be in the same group, across multiple queues, and was
> wondering if anyone has ideas about how to do it. We’re using the Java
> broker and client, and have multiple queues, each with multiple listeners,
> each listener’s session listening to multiple queues. Some messages are
> associated with groups, and for a given group we want at most K listeners
> processing messages from the group at any given time. The messages are
> enqueued to multiple queues, and it’s possible for messages from the same
> group to be in different queues.
>
> If messages in the same group can go into only one queue, then the message
> groups feature will give us what we need (it’d work directly with K = 1 and
> with K > 1 we can tweak the grouping value, e.g., hash it to one of 1 to K
> and append the number to the grouping value). But since messages considered
> to be in the same group can be in different queues, the feature is not
> enough for our case.

Instead of multiple queues, could you have one queue with different 
selectors pulling subsets of the messages?
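
I.e. something along these lines, assuming an application-set property on the
message identifies the subset (all names purely illustrative):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: one queue, with each consumer taking just its subset of the
// messages via a selector on an application-set property.
public class SubsetConsumers {
    static MessageConsumer forSubset(Connection connection, int subset)
            throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("all.messages.for.type.T"); // illustrative
        return session.createConsumer(queue, "subset = " + subset);
    }
}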


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org