Posted to users@qpid.apache.org by Wolgemuth Greg <wo...@eseri.com> on 2010/11/05 22:35:09 UTC

Understanding "capacity" flag for Python receivers

I have a system with long-processing consumers which pull off shared
queues. I'm using a direct exchange. Here's what I want to have happen:

Consumers A, B read from shared queue Q
A and B are both busy when messages 1, 2, 3 arrive on Q
A unblocks and begins processing 1
B unblocks and begins processing 2
B finishes 2, and begins processing 3
A finishes 1

Questions are as follows:

What's the difference in behaviour for setting the capacity of a
receiver to 0 versus 1?

If consumer B calls fetch() before consumer A calls acknowledge(), can
consumer B end up with the same message as consumer A?

For long-processing consumers, is it better to acknowledge when
processing starts or when processing ends?

What happens when my consumers read from many shared queues, and use the
session.next_receiver() functionality? From my understanding,
next_receiver() needs prefetch enabled to function. However, I don't
want messages pulled from shared queues onto consumers which have just
started a long processing block - preventing other, idle consumers from
processing the message in question.

Thanks in advance,

Greg


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: Understanding "capacity" flag for Python receivers

Posted by Gordon Sim <gs...@redhat.com>.
On 11/05/2010 09:35 PM, Wolgemuth Greg wrote:
> I have a system with long-processing consumers which pull off shared
> queues. I'm using a direct exchange. Here's what I want to have happen:
>
> Consumers A, B read from shared queue Q
> A and B are both busy when messages 1, 2, 3 arrive on Q
> A unblocks and begins processing 1
> B unblocks and begins processing 2
> B finishes 2, and begins processing 3
> A finishes 1
>
> Questions are as follows:
>
> What's the difference in behaviour for setting the capacity of a
> receiver to 0 versus 1?

With capacity 0, the broker will only ever send a message to the client 
in response to a fetch() call. With capacity 1, the broker may send one 
message before a fetch() call is made, in anticipation of such a call 
(i.e. there is a 'prefetch' of 1).

In the context of your example above, if A has capacity set to 1 then 
when it calls fetch and gets the first message (which will have been 
sent prior to the fetch() call being made), the broker may then send the 
next message down to it in anticipation of further fetch() calls. If, on 
the other hand, it had capacity 0, then the first message would only be
sent in response to the fetch() and no further message would be sent 
until another fetch() was issued.

Having no prefetch tends to give more intuitive behaviour in simple 
cases like these. However, prefetch increases throughput, so it is 
desirable in many cases.
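
As a minimal sketch of that difference using the Python qpid.messaging 
API (the broker address and the queue name "Q" below are placeholders):

    from qpid.messaging import Connection

    conn = Connection("localhost:5672")  # assumed broker address
    conn.open()
    ssn = conn.session()
    rx = ssn.receiver("Q")               # placeholder queue name

    rx.capacity = 0                      # no prefetch: a message travels only
    msg = rx.fetch(timeout=10)           # in response to this fetch() call
    ssn.acknowledge(msg)

    rx.capacity = 1                      # prefetch of 1: the broker may push
    msg = rx.fetch(timeout=10)           # the next message ahead of this call
    ssn.acknowledge(msg)

    conn.close()

With capacity 1, a second message may already be sitting on the client 
by the time the second fetch() is called.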

> If consumer B calls fetch() before consumer A calls acknowledge(), can
> consumer B end up with the same message as consumer A?

No. Once the message is sent to A, it will not be sent to anyone else 
unless the session for A is lost or A explicitly releases the message.

>
> For long-processing consumers, is it better to acknowledge when
> processing starts or when processing ends?

When processing ends, I would say. The acknowledgement is a reliability 
mechanism: if your client fails, you may want the message to be requeued 
and sent to another worker. Acknowledging receipt informs the broker 
that there is no need to redeliver, so doing this after processing is 
safer.
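
As a rough worker-loop sketch along those lines (the broker address and 
queue name are placeholders, process() is a hypothetical long-running 
function, and the Empty exception is assumed to come from 
qpid.messaging.exceptions):

    from qpid.messaging import Connection
    from qpid.messaging.exceptions import Empty

    conn = Connection("localhost:5672")  # assumed broker address
    conn.open()
    ssn = conn.session()
    rx = ssn.receiver("Q")               # placeholder queue name
    rx.capacity = 0                      # leave unfetched messages on the broker

    while True:
        try:
            msg = rx.fetch(timeout=60)
        except Empty:
            break
        process(msg.content)             # hypothetical long-running work
        ssn.acknowledge(msg)             # acknowledge only once processing is done

    conn.close()

If the worker dies between fetch() and acknowledge(), the broker can 
requeue the message for another consumer.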

> What happens when my consumers read from many shared queues, and use the
> session.next_receiver() functionality? From my understanding,
> next_receiver() needs prefetch enabled to function. However, I don't
> want messages pulled from shared queues onto consumers which have just
> started a long processing block - preventing other, idle consumers from
> processing the message in question.

If you don't know which of a number of queues will have a message first, 
you need to allow the broker to send you one from any of them. The 
protocol defines credit per subscriber, not per session. This means you 
have to set the capacity greater than zero on the receivers for those 
queues.

Now, when you get the first message, you could set the capacity back to 
0 and release any messages that may have concurrently become available 
on other queues, before you start processing the first message.
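
A rough sketch of that approach, with placeholder queue names, a 
hypothetical process() function, and assuming the Disposition and 
RELEASED symbols exported by qpid.messaging for the explicit release:

    from qpid.messaging import Connection, Disposition, RELEASED

    conn = Connection("localhost:5672")  # assumed broker address
    conn.open()
    ssn = conn.session()

    receivers = [ssn.receiver(q) for q in ("Q1", "Q2")]  # placeholder queues
    for rx in receivers:
        rx.capacity = 1                  # next_receiver() needs some prefetch

    while True:
        ready = ssn.next_receiver()      # blocks until some queue has a message
        msg = ready.fetch()

        # Stop further prefetch and hand back anything already prefetched,
        # so idle workers can take those messages while this one is busy.
        for rx in receivers:
            rx.capacity = 0
            while rx.available() > 0:
                extra = rx.fetch(timeout=0)
                ssn.acknowledge(extra, Disposition(RELEASED))

        process(msg.content)             # hypothetical long-running work
        ssn.acknowledge(msg)

        for rx in receivers:             # restore prefetch before waiting again
            rx.capacity = 1

Released messages go back to their queues, so an idle consumer can pick 
them up instead of waiting for this one to finish.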


