Posted to users@activemq.apache.org by Mike Wilson <mi...@hotmail.com> on 2014/04/01 12:55:23 UTC

queue dispatch policy based on "consumer load"

According to:
http://activemq.apache.org/dispatch-policies.html
queue load balancing uses either the round-robin or "strict" method.

Round-robin gives the same amount of work to each consumer regardless
of whether it is fast or slow, possibly leading to work building up
on a slow consumer even when the other consumers are idle.

Would it be possible to implement a queue dispatch policy that, for
example, prefers to dispatch to the consumer with the fewest
unacknowledged messages, or perhaps to the one with the most free
prefetch slots?

Or is the solution to use prefetch values small enough that the
maximum amount of work queued on any single consumer can be handled
in a short time?
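
For reference, a small per-consumer prefetch can be set with an ActiveMQ
destination option; a minimal sketch (the broker URL and queue name below
are placeholders):

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SmallPrefetchConsumer {
        public static void main(String[] args) throws Exception {
            // Placeholder broker URL.
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The destination option limits this consumer to one prefetched
            // message, so a slow consumer never accumulates pending work.
            Queue queue = session.createQueue("EXAMPLE.QUEUE?consumer.prefetchSize=1");
            MessageConsumer consumer = session.createConsumer(queue);

            // consumer.receive() / consumer.setMessageListener(...) as usual.
        }
    }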

Thanks
Mike


Re: queue dispatch policy based on "consumer load"

Posted by Gary Tully <ga...@gmail.com>.
Use a small prefetch, or 0, and messages will not get stuck pending
consumption by slow consumers.
Dispatch will not exceed the prefetch for a given consumer in either
mode; the real difference is ordering.

With messages 0-9 in the queue and 9 consumers with prefetch=10:
 - with round-robin, each consumer gets one message.
 - with strict order, consumer 0 gets all the messages.

With a prefetch of 1, each consumer gets one message in both scenarios.
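
A minimal sketch of applying a small queue prefetch to every consumer on
a connection, via the connection factory's prefetch policy (the broker
URL is a placeholder):

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ConnectionWidePrefetch {
        public static void main(String[] args) throws Exception {
            // Placeholder broker URL.
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

            // Every queue consumer created from this factory's connections
            // will prefetch at most one message.
            factory.getPrefetchPolicy().setQueuePrefetch(1);

            // The same limit can also be set as a broker URL option:
            //   tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1
        }
    }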





-- 
http://redhat.com
http://blog.garytully.com