Posted to users@qpid.apache.org by Praveen M <le...@gmail.com> on 2011/10/27 00:48:55 UTC

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Hi Jakub,

Thanks for your reply. Yes, I did find the prefetch setting, reran my test,
and ran into another issue.

I set the prefetch to 1 and ran the same test described in my earlier mail.
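
For reference, a minimal sketch of how the prefetch can be set with the Java
client, assuming the maxprefetch connection URL option (host and credentials
below are placeholders):

import javax.jms.Connection;
import org.apache.qpid.client.AMQConnectionFactory;

String url = "amqp://guest:guest@clientid/test"
        + "?brokerlist='tcp://localhost:5672'&maxprefetch='1'";
Connection connection = new AMQConnectionFactory(url).createConnection();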

In this case the behavior I see is:
The 1st consumer gets the 1st message and works on it for a while, while the
2nd consumer consumes 8 messages and then does nothing (even though there was
1 more unconsumed message). When the 1st consumer completed its long-running
message, it got around to consuming the remaining message. However, I was
expecting the 2nd consumer to dequeue all 9 remaining messages while the 1st
consumer was busy working on the long message.

Then I thought that perhaps a prefetch count of 1 means that, while a
consumer is working on a message, one more message from the queue is
prefetched to that consumer from the persistent store. That could explain
the behavior above.

What I actually wanted was to turn off prefetching entirely (yes, I'm OK
with taking the throughput hit).

So I reran my test with prefetch = 0 and saw a really weird result.

With prefetch 0, the 1st consumer gets the 1st message and works on it for a
while, while the 2nd consumer consumes 7 messages (why 7?) and then does
nothing (even though there were 2 more unconsumed messages). When the 1st
consumer completed processing its message, it got to consume the remaining
two messages too. (Did it somehow prefetch 2?)

Can someone please tell me whether this is a bug or I'm doing something
completely wrong? I'm using the latest Java Broker and client (from trunk)
with DerbyMessageStore for my tests.

Also, can someone please tell me what'd be the best way to turn off
prefetching?

Thanks a lot,
Praveen


On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz> wrote:

> Hi Praveen,
>
> Have you set the capacity / prefetch for the receivers to one message?
> I believe the capacity defines how many messages can be "buffered" by
> the client API in the background while you are still processing the first
> message. That may cause both of your clients to receive 5 messages,
> even when the processing in the first client takes a long time.
>
> Regards
> Jakub
>
> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com> wrote:
> > Hi,
> >
> > I ran the following test
> >
> > 1) I created 1 Queue
> > 2) Registered 2 consumers to the queue
> > 3) Enqueued 10 messages to the Queue. [The first enqueued message is
> > long-running; I simulated it such that the first message takes about
> > 50 seconds to process.]
> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
> > 5) The 1st consumer that got the long-running message works on it for a
> > long time, while the 2nd consumer that got the 2nd message keeps
> > processing and moving on to the next message, but only gets as far as
> > processing 5 of the 10 messages enqueued. Then the 2nd consumer stops
> > processing.
> > 6) When the 1st consumer with the long-running message completes, it then
> > ends up processing the remaining messages and my test completes.
> >
> > So it seems like the two consumers each took a fair share of the
> > messages, regardless of the time it takes to process individual messages:
> > 10 messages enqueued, consumer 1 processed its share of 5, and consumer
> > 2 processed its share of 5.
> >
> >
> > This is against the behavior that I'd like to see. The desired behavior
> > in my case is that each consumer keeps going as long as it's done with
> > its current message and there are other messages to process.
> >
> > In the above test, I'd expect that while consumer 1 is working on the
> > long message, the 2nd consumer works its way through all the remaining
> > messages.
> >
> > Is there some config I'm missing that could cause this effect? Any
> > advice on tackling this would be great.
> >
> > Also, Can someone please explain in what order are messages delivered to
> the
> > consumers in the following cases?
> >
> > Case 1)
> >  There is a single Queue with more than 1 message in it and multiple
> > consumers registered to it.
> >
> > Case 2)
> > There are multiple queues, each with more than 1 message in it and
> > multiple consumers registered to it.
> >
> >
> >
> > Thank you,
> > --
> > -Praveen
> >
>
>


-- 
-Praveen

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Rob, thanks for the background info, definitely helpful!

Cheers! Dan





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Hi Helen,

glad to hear you have things working.

The single consumer against multiple queues code is already committed to
trunk, and so will be in 0.32 ... I've not had the chance to do a huge
amount of testing yet, so you might want to play with it a bit before 0.32
gets finalised.  I'm not sure what the release plan for 0.32 is yet - I'd
be quite keen to get another Java release out by the end of the year.


To enable the feature on the broker you currently need to run with
-Dqpid.enableMultiQueueConsumers=true (or you can set it directly as a
"context variable" [1] in the broker configuration).

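As a sketch, the context variable in the broker's config.json would look
something like this fragment (its placement within the file is described in
the management docs at [1]):

"context" : {
    "qpid.enableMultiQueueConsumers" : "true"
}
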
To consume from such an address you need to use an address string like


ADDR: '' ; {node : { type : queue }, link : { x-subscribe : { arguments : {
x-multiqueue : [ q1, q2, q3 ] } } } }

where q1, q2 and q3 are the queues you wish to consume from.
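
From JMS, a consumer against such an address might be created roughly like
this (a sketch -- it assumes org.apache.qpid.client.AMQAnyDestination
accepts the ADDR string above, and connection/session setup is elided):

Destination multiQueue = new AMQAnyDestination(
        "ADDR: '' ; {node : { type : queue }, link : { x-subscribe : "
        + "{ arguments : { x-multiqueue : [ q1, q2, q3 ] } } } }");
MessageConsumer consumer = session.createConsumer(multiQueue);
Message received = consumer.receive();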


If you do get a chance to do any testing and find any issues (or if you
need any help with the upgrading of the configuration of your current
broker to what is now on trunk), please let me know,

Cheers,
Rob


[1]
https://qpid.apache.org/releases/qpid-trunk/java-broker/book/Java-Broker-Management-Managing-Broker.html#Java-Broker-Management-Managing-Broker-Context

On 21 October 2014 00:30, Helen Kwong <he...@gmail.com> wrote:

> Hi Rob,
>
> We have implemented the workaround Dan mentioned before -- by suspending
> the session with sendSuspendChannel() and releasing prefetched messages
> with rejectMessagesForConsumerTag() whenever we detect a long-running
> message -- and so far it is working as expected. Thank you again for all
> your help.
>
> For the long term solution, we plan to upgrade our client and broker to the
> latest version, and use the "single consumer across many queues" feature
> that you're building. So I'm wondering:
> 1. Do you still plan to have this available in v0.32 of Qpid?
> 2. What is the planned release date of v0.32?
>
> Thanks!
>
> Helen
>
> On Tue, Sep 9, 2014 at 12:54 PM, xiaodan.wang <xiaodan.wang@salesforce.com
> >
> wrote:
>
> > Hi Rob, in case you are interested, wanted to give you an update from our
> > side. First, let me begin by saying how awesome the Qpid community has
> > been.
> > Your patience and responsiveness in addressing our questions are way
> > beyond
> > what we could have hoped for from this community. I think I speak for
> > Salesforce in saying that we owe you a couple rounds of beer, in case you
> > are ever bored and in San Francisco :)
> >
> > Our plan is to address per-consumer prefetching on the v0.16 clients we
> > currently run in production using the suspend & release prefetched
> messages
> > approach described above. The general flow of our solution is as follows:
> >
> > 1) Each time a message is dispatched and onMessage is invoked, we
> register
> > the parent Qpid session with a session tracker
> >
> > 2)  The session tracker is scheduled every x seconds and looks for Qpid
> > sessions that are processing a long-running message
> >
> > 3) Once the session tracker finds a long running message, it invokes
> > "sendSuspendChannel" and "rejectMessagesForConsumerTag" on the session to
> > suspend dequeue and release any prefetched messages
> >
> > 4) When the long running message is completed, it unregisters the parent
> > Qpid session from the session tracker and invokes "sendSuspendChannel" to
> > resume dequeue
> >
> > We are in the process of performance testing this solution to make sure
> > that
> > a) overhead of suspend and release is manageable and b) no anomalies
> arise
> > from repeatedly suspending and unsuspending the session
> >
> >
> > For our long term plan, we plan to upgrade to the latest version of the
> > client/broker and use the single-consumer-multiple-queue feature you've
> > built in v0.32 for AMQP 0-10. After we've patched our v0.16 client, we
> plan
> > to focus on vetting this.
> >
> > We are also abandoning efforts to switch to per-session prefetch with
> AMQP
> > 0-91. We are grateful for bulk dequeue support in AMQP 0-91, but in
> > subsequent testing, we encountered other incompatibilities due to our use
> > case. (Unfortunately have not had time to dig further). Given that the
> > v0.16
> > band-aid solution eliminated the immediate urgency, we feel that taking
> the
> > time to build a solution around AMQP 0-10 is the right approach going
> > forward.
> >
> > By the way, what is the typical release time frame for Qpid (with
> > v0.32/trunk in mind)?
> > Cheers, Dan
> >
> >
> >
> >

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Helen Kwong <he...@gmail.com>.
Hi Rob,

We have implemented the workaround Dan mentioned before -- by suspending
the session with sendSuspendChannel() and releasing prefetched messages
with rejectMessagesForConsumerTag() whenever we detect a long-running
message -- and so far it is working as expected. Thank you again for all
your help.

For the long term solution, we plan to upgrade our client and broker to the
latest version, and use the "single consumer across many queues" feature
that you're building. So I'm wondering:
1. Do you still plan to have this available in v0.32 of Qpid?
2. What is the planned release date of v0.32?

Thanks!

Helen

On Tue, Sep 9, 2014 at 12:54 PM, xiaodan.wang <xi...@salesforce.com>
wrote:

> Hi Rob, in case you are interested, wanted to give you an update from our
> side. First, let me begin by saying how awesome the Qpid community has
> been.
> Your patience and responsiveness in addressing our questions are way beyond
> what we could have hoped for from this community. I think I speak for
> Salesforce in saying that we owe you a couple rounds of beer, in case you
> are ever bored and in San Francisco :)
>
> Our plan is to address per-consumer prefetching on the v0.16 clients we
> currently run in production using the suspend & release prefetched messages
> approach described above. The general flow of our solution is as follows:
>
> 1) Each time a message is dispatched and onMessage is invoked, we register
> the parent Qpid session with a session tracker
>
> 2)  The session tracker is scheduled every x seconds and looks for Qpid
> sessions that are processing a long-running message
>
> 3) Once the session tracker finds a long running message, it invokes
> "sendSuspendChannel" and "rejectMessagesForConsumerTag" on the session to
> suspend dequeue and release any prefetched messages
>
> 4) When the long running message is completed, it unregisters the parent
> Qpid session from the session tracker and invokes "sendSuspendChannel" to
> resume dequeue
>
> We are in the process of performance testing this solution to make sure
> that
> a) overhead of suspend and release is manageable and b) no anomalies arise
> from repeatedly suspending and unsuspending the session
>
>
> For our long term plan, we plan to upgrade to the latest version of the
> client/broker and use the single-consumer-multiple-queue feature you've
> built in v0.32 for AMQP 0-10. After we've patched our v0.16 client, we plan
> to focus on vetting this.
>
> We are also abandoning efforts to switch to per-session prefetch with AMQP
> 0-91. We are grateful for bulk dequeue support in AMQP 0-91, but in
> subsequent testing, we encountered other incompatibilities due to our use
> case. (Unfortunately have not had time to dig further). Given that the
> v0.16
> band-aid solution eliminated the immediate urgency, we feel that taking the
> time to build a solution around AMQP 0-10 is the right approach going
> forward.
>
> By the way, what is the typical release time frame for Qpid (with
> v0.32/trunk in mind)?
> Cheers, Dan
>
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Rob, in case you are interested, wanted to give you an update from our
side. First, let me begin by saying how awesome the Qpid community has been.
Your patience and responsiveness in addressing our questions are way beyond
what we could have hoped for from this community. I think I speak for
Salesforce in saying that we owe you a couple rounds of beer, in case you
are ever bored and in San Francisco :)

Our plan is to address per-consumer prefetching on the v0.16 clients we
currently run in production using the suspend & release prefetched messages
approach described above. The general flow of our solution is as follows:

1) Each time a message is dispatched and onMessage is invoked, we register
the parent Qpid session with a session tracker

2)  The session tracker is scheduled every x seconds and looks for Qpid
sessions that are processing a long-running message

3) Once the session tracker finds a long running message, it invokes
"sendSuspendChannel" and "rejectMessagesForConsumerTag" on the session to
suspend dequeue and release any prefetched messages

4) When the long running message is completed, it unregisters the parent
Qpid session from the session tracker and invokes "sendSuspendChannel" to
resume dequeue 
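
A rough sketch of the tracker (the class name and threshold are ours and
hypothetical; the rejectMessagesForConsumerTag call is left as a comment
since its exact signature isn't public on the stock v0.16 client):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.qpid.client.AMQSession_0_10;

class SessionTracker
{
    private final Map<AMQSession_0_10, Long> inFlight =
            new ConcurrentHashMap<AMQSession_0_10, Long>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private static final long LONG_RUNNING_MILLIS = 30000L; // our threshold

    void start()
    {
        scheduler.scheduleAtFixedRate(new Runnable()
        {
            public void run() { scan(); }
        }, 5, 5, TimeUnit.SECONDS);
    }

    // called when onMessage is invoked (step 1)
    void register(AMQSession_0_10 session)
    {
        inFlight.put(session, Long.valueOf(System.currentTimeMillis()));
    }

    // called when the long-running message completes (step 4)
    void unregister(AMQSession_0_10 session) throws Exception
    {
        inFlight.remove(session);
        session.sendSuspendChannel(false); // resume dequeue
    }

    // scheduled scan for long-running messages (steps 2 and 3)
    private void scan()
    {
        long now = System.currentTimeMillis();
        for (Map.Entry<AMQSession_0_10, Long> entry : inFlight.entrySet())
        {
            if (now - entry.getValue().longValue() > LONG_RUNNING_MILLIS)
            {
                try
                {
                    entry.getKey().sendSuspendChannel(true); // suspend dequeue
                    // ...then release prefetched messages via
                    // rejectMessagesForConsumerTag (call elided)
                }
                catch (Exception e)
                {
                    // log -- sketch only
                }
            }
        }
    }
}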

We are in the process of performance testing this solution to make sure that
a) overhead of suspend and release is manageable and b) no anomalies arise
from repeatedly suspending and unsuspending the session


For our long term plan, we plan to upgrade to the latest version of the
client/broker and use the single-consumer-multiple-queue feature you've
built in v0.32 for AMQP 0-10. After we've patched our v0.16 client, we plan
to focus on vetting this.

We are also abandoning efforts to switch to per-session prefetch with AMQP
0-91. We are grateful for bulk dequeue support in AMQP 0-91, but in
subsequent testing, we encountered other incompatibilities due to our use
case. (Unfortunately have not had time to dig further). Given that the v0.16
band-aid solution eliminated the immediate urgency, we feel that taking the
time to build a solution around AMQP 0-10 is the right approach going
forward.

By the way, what is the typical release time frame for Qpid (with
v0.32/trunk in mind)?
Cheers, Dan






Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Cool beans, we did not set max message delivery attempt or enable DLQ on our
broker :)





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
To my knowledge the only issue would be if you enabled max message delivery
features, in which case if you reject a message a given number of times
then it will be DLQd... however if you've not enabled that then I don't see
an issue.

 -- Rob


On 9 September 2014 17:35, xiaodan.wang <xi...@salesforce.com> wrote:

> Thanks Rob! The reason I asked is because we are planning to expose/make
> public AMQSession#rejectMessagesForConsumerTag on the v0.16 client that we
> use in production as a short term solution until we upgrade. I see that
> underneath the hood it is calling rejectMessage for each prefetched
> message.
>
> Cheers, Dan
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Thanks Rob! The reason I asked is because we are planning to expose/make
public AMQSession#rejectMessagesForConsumerTag on the v0.16 client that we
use in production as a short term solution until we upgrade. I see that
underneath the hood it is calling rejectMessage for each prefetched message.

Cheers, Dan





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Hi Dan,

messages only get dequeued at the broker when it receives an
acknowledgement for that message... so, no - messages shouldn't get lost in
the case you are describing above.

-- Rob


On 9 September 2014 05:53, xiaodan.wang <xi...@salesforce.com> wrote:

> Hi Rob, when using AMQSession_0_10#sendSuspendChannel(), we noticed that
> existing prefetched messages are not released after suspending the session.
>
> If we were to use AMQSession#rejectMessage(UnprocessedMessage message,
> boolean
> reenqueue) to release the message using reenqueue=true, is the message
> guaranteed to be re-delivered? Namely, can the message get lost if the
> client dies after discarding the message but before it is re-enqueued?
>
> Cheers, Dan
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Rob, when using AMQSession_0_10#sendSuspendChannel(), we noticed that
existing prefetched messages are not released after suspending the session. 

If we were to use AMQSession#rejectMessage(UnprocessedMessage message, boolean
reenqueue) to release the message using reenqueue=true, is the message
guaranteed to be re-delivered? Namely, can the message get lost if the
client dies after discarding the message but before it is re-enqueued?

Cheers, Dan





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Hi Helen,

that's interesting (and not the way 0-9 works - it doesn't release the
prefetched messages when suspended - so I hadn't actually thought of that
:-)  ).

In terms of overhead, obviously you're going to be doing more work in terms
of re-sending messages over and over again... and the broker will have to
keep resetting the consumer "pointers" into the queues as messages are
released back into the queue... however I guess the only way to determine
if this overhead is significant is for you to test it with your use-case.
 It would certainly make sense if you could somehow determine when you are
processing a long-running message and suspend the session then... if your
message processing takes < 1s then I'd wonder whether the work of releasing
all the messages and then having them resent might not dominate the time
actually spent on message processing...

It certainly sounds like it might be a workaround for you at the moment
though.

As I said to Dan earlier I'll have a look around the 0-9 codepath to look
at implementing a more sensible mechanism for synchronous sessions with low
prefetch buffers (because this is actually an issue that I've run into
elsewhere).  However it does seem like this might be a lower impact
solution for you if the overhead is not too significant and/or you can
choose to only do this if you determine that you have spent > x seconds
processing the message or something.

-- Rob


On 5 September 2014 21:36, Helen Kwong <he...@gmail.com> wrote:

> Hi Rob,
>
> We're looking into one idea that you suggested, where you said:
>
> > It may be possible to code a client library side change without changing
> > the broker (basically reduce consumer credit to 0 as soon as one consumer
> > has a message, and release any messages that have been prefetched), but
> > that probably isn't a trivial piece of work.
>
> I'm looking at the 0.16 client on 0-10 (which is what we've been using),
> and while I couldn't find where the consumer credit is tracked in the
> client code (perhaps this is done on the broker), I did find a method,
> AMQSession_0_10#sendSuspendChannel(), that seems like it might solve our
> problem. What I'm doing right now is calling sendSuspendChannel(true) to
> suspend the session's message flow whenever a message listener receives a
> message in onMessage(), and then at the end of onMessage(), we call
> sendSuspendChannel(false) to resume the message flow. The behavior we've
> seen is that when we call sendSuspendChannel(true), it releases the
> messages prefetched by this session, so that another session's consumer
> is able to receive them.
>
> Do you see any problems with this possible solution? What is the overhead
> of suspending and resuming the session each time a listener receives a
> message?
>
> Thanks,
> Helen
>
>
> On Fri, Aug 29, 2014 at 5:26 PM, xiaodan.wang <xiaodan.wang@salesforce.com
> >
> wrote:
>
> > rgodfrey wrote
> > > Have you tried the latest trunk revision (or rather any trunk version
> > > after
> > > http://svn.apache.org/r1621143).  I've made some changes that may (or
> > may
> > > not) help.  In particular messages that arrive in 0-9-1 in ADDR mode
> will
> > > get ADDR addresses and (hopefully) a type appropriate to their nature.
> > >
> > > I haven't requested inclusion of this change into 0.30 yet... but let
> me
> > > know if it helps.
> >
> > Awesome, the latest trunk solved the AMQTopic/Queue issue.
> >
> > Cheers! Dan
> >
> >
> >
> >

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Helen Kwong <he...@gmail.com>.
Hi Rob,

We're looking into one idea that you suggested, where you said:

> It may be possible to code a client library side change without changing
> the broker (basically reduce consumer credit to 0 as soon as one consumer
> has a message, and release any messages that have been prefetched), but
> that probably isn't a trivial piece of work.

I'm looking at the 0.16 client on 0-10 (which is what we've been using),
and while I couldn't find where the consumer credit is tracked in the
client code (perhaps this is done on the broker), I did find a method,
AMQSession_0_10#sendSuspendChannel(), that seems like it might solve our
problem. What I'm doing right now is calling sendSuspendChannel(true) to
suspend the session's message flow whenever a message listener receives a
message in onMessage(), and then at the end of onMessage(), we call
sendSuspendChannel(false) to resume the message flow. The behavior we've
seen is that when we call sendSuspendChannel(true), it releases the
messages prefetched by this session, so that another session's consumer
is able to receive them.
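
In code, the listener is now shaped roughly like this (a sketch -- session
is our AMQSession_0_10 field, process() stands in for our handler, and our
real error handling is elided):

public void onMessage(Message message)
{
    try
    {
        // suspending the flow also releases the messages already
        // prefetched for this session
        session.sendSuspendChannel(true);
        process(message);
    }
    catch (Exception e)
    {
        // log / recover -- sketch only
    }
    finally
    {
        try
        {
            // resume the session's message flow
            session.sendSuspendChannel(false);
        }
        catch (Exception e)
        {
            // log -- sketch only
        }
    }
}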

Do you see any problems with this possible solution? What is the overhead
of suspending and resuming the session each time a listener receives a
message?

Thanks,
Helen


On Fri, Aug 29, 2014 at 5:26 PM, xiaodan.wang <xi...@salesforce.com>
wrote:

> rgodfrey wrote
> > Have you tried the latest trunk revision (or rather any trunk version
> > after
> > http://svn.apache.org/r1621143).  I've made some changes that may (or
> may
> > not) help.  In particular messages that arrive in 0-9-1 in ADDR mode will
> > get ADDR addresses and (hopefully) a type appropriate to their nature.
> >
> > I haven't requested inclusion of this change into 0.30 yet... but let me
> > know if it helps.
>
> Awesome, the latest trunk solved the AMQTopic/Queue issue.
>
> Cheers! Dan
>
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
rgodfrey wrote
> Have you tried the latest trunk revision (or rather any trunk version
> after
> http://svn.apache.org/r1621143).  I've made some changes that may (or may
> not) help.  In particular messages that arrive in 0-9-1 in ADDR mode will
> get ADDR addresses and (hopefully) a type appropriate to their nature.
> 
> I haven't requested inclusion of this change into 0.30 yet... but let me
> know if it helps.

Awesome, the latest trunk solved the AMQTopic/Queue issue.

Cheers! Dan






Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
On 27 August 2014 23:16, xiaodan.wang <xi...@salesforce.com> wrote:

> Hi Rob,
>
> Per your earlier question, with AMQP 0-10 we grabbed the queue name from
> the
> jms message as follows:
> String queueName = ((Queue)message.getJMSDestination()).getQueueName();
>
> With AMQP 0-91 we had to make the following change:
> String queueName =
> ((AMQTopic)message.getJMSDestination()).getRoutingKey().toString();
>
>
OK - I'll try to look into that tmr (my time)


> Also, regarding this statement
> >The 0.30 broker release provides some enhancements both to 0-9-1
> > functionality and message conversions between 0-9-1 and the other AMQP
> > protocols.
>
> Does this mean that in pre-0.30 brokers a message enqueued by a AMQP 0-10
> client cannot be dequeued by a AMQP 0-91 client?
>
>
No - just that the conversions have been improved.  In particular for
non-JMS compliant messages - messages with application headers which are
lists.  Such messages are not legal in JMS, but if a non-JMS client was
sending pure AMQP they might arrive at the JMS client.  Previously the 0-9
client didn't support the AMQP FieldArray type (roughly equivalent to
list).  In 0.30 that type is now supported in the message headers, but - as
above - you can't legally use that from JMS.

There has been conversion between 0-9 and 0-10 in place since the 0-10
protocol was first introduced. There have been a few updates since then to
improve things as and when we've found things that could be improved.  As
long as you stick to legal JMS you should be good - improvements have (as
far as I can remember) been around non-JMS corner cases.

Hope this helps,
Rob


> Cheers! Dan
>
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
On 30 August 2014 00:26, xiaodan.wang <xi...@salesforce.com> wrote:

> Nope, the last version I pulled was from Tuesday. Will test with the latest
> and report back :)
>
> Two more questions regarding alternatives to achieving per session
> prefetching:
>
> 1) Does AMQP 1.0 resemble AMQP 0-10 or AMQP 0-91 in terms of per consumer
> vs
> per session prefetching?
>

AMQP 1.0 has a mixed model.  You can set a session level prefetch, which is
in terms of frames rather than messages... this is primarily intended for
efficient throughput management.  At each consumer there is the ability to
manage credit.  The real difference in 1.0 is that credit is always
maintained explicitly rather than being a window that is moved by
acknowledgement.  An AMQP 1.0 client would find it easier (I think) to
release prefetched messages and maintain what is essentially a time bound
(rather than space bound) prefetch.


>
> 2) Assuming that per-consumer prefetching is used, is it feasible for
> messages to expire from the prefetch buffer so that long running messages
> do
> not block indefinitely? For example having the broker expire/invalidate the
> client buffer after some preset time and redeliver the message. Then again
> the broker might not know what messages the client is actively working on.
>

Yes - that is possible and 1.0 provides more explicit mechanisms for doing
this (though they aren't implemented yet on either the client or broker
side).  As the client the issue you'd have to watch out for is that you
might do the work on a message only to have the "acknowledge" fail because
the broker expired your lease on the message.  AMQP 1.0 actually allows for
more explicit state management such that a client could potentially "lock"
the message once they start working on it rather than allowing the message
lease to be rescinded by the broker... again though we've not yet tried to
implement these more complex interaction patterns.

Hope this helps,
Rob


>
> Cheers! Dan
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Nope, the last version I pulled was from Tuesday. Will test with the latest
and report back :)

Two more questions regarding alternatives to achieving per session
prefetching:

1) Does AMQP 1.0 resemble AMQP 0-10 or AMQP 0-91 in terms of per consumer vs
per session prefetching?

2) Assuming that per-consumer prefetching is used, is it feasible for
messages to expire from the prefetch buffer so that long running messages do
not block indefinitely? For example having the broker expire/invalidate the
client buffer after some preset time and redeliver the message. Then again
the broker might not know what messages the client is actively working on.

Cheers! Dan





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Have you tried the latest trunk revision (or rather any trunk version after
http://svn.apache.org/r1621143).  I've made some changes that may (or may
not) help.  In particular messages that arrive in 0-9-1 in ADDR mode will
get ADDR addresses and (hopefully) a type appropriate to their nature.

I haven't requested inclusion of this change into 0.30 yet... but let me
know if it helps.

-- Rob


On 30 August 2014 00:10, xiaodan.wang <xi...@salesforce.com> wrote:

> rgodfrey wrote
> > Is the destination that the message was sent to / the consumer that was
> > created to receive the message a Queue or a Topic?
> >
> > I did a little test myself (sending to a queue object, receiving from a
> > queue object) and the object that I got back from getJMSDestination() was
> > a
> > queue (and not a Topic as you seem to be getting)... however I did notice
> > that the queue seemed to be in binding URL and not Address format - so I
> > will try to fix that at least.
>
> Hi Rob, some additional context on the behavior I'm seeing. The queues were
> originally created on a v0.16 broker using AMQP 0-10 and are addressed as
> follows:
> 'Q1'/None; {
>   'create': 'always',
>   'node': {
>     'durable': 'true',
>     'type': 'queue',
>     'x-declare': {
>       'arguments': {
>         'x-qpid-priorities': 10
>       }
>     }
>   }
> }
>
> Using an AMQP 0-10 client, I'm seeing the following:
> a) When message is enqueued type is: AMQAnyDestination
> b) When we create the consumer: AMQAnyDestination
> c) when we call getJMSDestination after message is received:
> AMQAnyDestination
>
> Using an AMQP 0-91 client (JVM argument -Dqpid.amqp.version=0-91 with no other
> code changes):
> a) When message is enqueued type is: AMQAnyDestination
> b) When we create the consumer: AMQAnyDestination
> c) when we call getJMSDestination after message is received: *AMQTopic*
> topic://<<default>>/Q1/?routingkey='Q1'&exclusive='true'&autodelete='true'
>
> I did notice a slight difference in the destination string when a message
> is enqueued through an AMQP 0-91 client:
>
> 'Q1'/None; {
>   'create': 'always',
>   'node': {
>     'durable': 'true',
>     'type': 'queue',
>     'x-declare': {
>       'arguments': {
>         *'no-local': False,*
>         'x-qpid-priorities': 10
>       }
>     }
>   }
> }
>
> I think the "*'no-local': False,*" parameter has no impact on our
> application since we use separate connections to enqueue messages vs
> receiving messages.
>
> Now that I think about it, I'm seeing another issue that might be related
> to
> this (i.e. message received in AMQP 0-91 returns a destination of type
> AMQTopic). With AMQP 0-10, we were able to repeatedly call consumer.receive
> in order to fetch additional (>1) messages from the queue. Using AMQP 0-91,
> consumer.receive only returns a single message; subsequent invocations time
> out (even though messages are available). Only if we committed on the
> session after each receive were we able to fetch all messages from the
> queue.
>
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
rgodfrey wrote
> Is the destination that the message was sent to / the consumer that was
> created to receive the message a Queue or a Topic?
> 
> I did a little test myself (sending to a queue object, receiving from a
> queue object) and the object that I got back from getJMSDestination() was
> a
> queue (and not a Topic as you seem to be getting)... however I did notice
> that the queue seemed to be in binding URL and not Address format - so I
> will try to fix that at least.

Hi Rob, some additional context on the behavior I'm seeing. The queues were
originally created on a v0.16 broker using AMQP 0-10 and are addressed as
follows:
'Q1'/None; {
  'create': 'always',
  'node': {
    'durable': 'true',
    'type': 'queue',
    'x-declare': {
      'arguments': {
        'x-qpid-priorities': 10
      }
    }
  }
}

Using an AMQP 0-10 client, I'm seeing the following:
a) When message is enqueued type is: AMQAnyDestination
b) When we create the consumer: AMQAnyDestination
c) when we call getJMSDestination after message is received:
AMQAnyDestination

Using an AMQP 0-91 client (JVM argument -Dqpid.amqp.version=0-91 with no other
code changes):
a) When message is enqueued type is: AMQAnyDestination
b) When we create the consumer: AMQAnyDestination
c) when we call getJMSDestination after message is received: *AMQTopic*
topic://<<default>>/Q1/?routingkey='Q1'&exclusive='true'&autodelete='true'

I did notice a slight difference in the destination string when a message
is enqueued through an AMQP 0-91 client:

'Q1'/None; {
  'create': 'always',
  'node': {
    'durable': 'true',
    'type': 'queue',
    'x-declare': {
      'arguments': {
        *'no-local': False,*
        'x-qpid-priorities': 10
      }
    }
  }
}

I think the "*'no-local': False,*" parameter has no impact on our
application since we use separate connections to enqueue messages vs
receiving messages.

Now that I think about it, I'm seeing another issue that might be related to
this (i.e. message received in AMQP 0-91 returns a destination of type
AMQTopic). With AMQP 0-10, we were able to repeatedly call consumer.receive
in order to fetch additional (>1) messages from the queue. Using AMQP 0-91,
consumer.receive only returns a single message; subsequent invocations time
out (even though messages are available). Only if we committed on the
session after each receive were we able to fetch all messages from the
queue.
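
Concretely, the drain loop that works for us under 0-91 looks roughly like
this (a sketch -- transacted session assumed, and the timeout value is
arbitrary):

MessageConsumer consumer = session.createConsumer(queue);
Message message;
while ((message = consumer.receive(1000)) != null)
{
    process(message); // our handler (hypothetical name)
    session.commit(); // under 0-91, the next receive succeeds only after this
}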






Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
On 27 August 2014 23:16, xiaodan.wang <xi...@salesforce.com> wrote:

> Hi Rob,
>
> Per your earlier question, with AMQP 0-10 we grabbed the queue name from
> the
> jms message as follows:
> String queueName = ((Queue)message.getJMSDestination()).getQueueName();
>
> With AMQP 0-91 we had to make the following change:
> String queueName =
> ((AMQTopic)message.getJMSDestination()).getRoutingKey().toString();
>
>
Is the destination that the message was sent to / the consumer that was
created to receive the message a Queue or a Topic?

I did a little test myself (sending to a queue object, receiving from a
queue object) and the object that I got back from getJMSDestination() was a
queue (and not a Topic as you seem to be getting)... however I did notice
that the queue seemed to be in binding URL and not Address format - so I
will try to fix that at least.

-- Rob


> Also, regarding this statement
> >The 0.30 broker release provides some enhancements both to 0-9-1
> > functionality and message conversions between 0-9-1 and the other AMQP
> > protocols.
>
> Does this mean that in pre-0.30 brokers a message enqueued by a AMQP 0-10
> client cannot be dequeued by a AMQP 0-91 client?
>
> Cheers! Dan
>
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Rob,

Per your earlier question, with AMQP 0-10 we grabbed the queue name from the
jms message as follows:
String queueName = ((Queue)message.getJMSDestination()).getQueueName();

With AMQP 0-91 we had to make the following change:
String queueName =
((AMQTopic)message.getJMSDestination()).getRoutingKey().toString();

Also, regarding this statement
>The 0.30 broker release provides some enhancements both to 0-9-1
> functionality and message conversions between 0-9-1 and the other AMQP
> protocols.

Does this mean that in pre-0.30 brokers a message enqueued by a AMQP 0-10
client cannot be dequeued by a AMQP 0-91 client?

Cheers! Dan
 





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
On 27 August 2014 02:33, xiaodan.wang <xi...@salesforce.com> wrote:

> Hi Rob, just verified with 0.32/trunk client and the v0.16 broker that we
> currently use. It is indeed doing prefetching at a per session level once I
> started the client with AMQP 0-91. Thanks so much for the suggestion!


Great!


> The
> only issue I encountered is that getJMSDestination on the message returns
> an
> instance of AMQTopic instead of a javax.jms.Queue object, which was
> straightforward to work around.


Can you give me an example of the sort of address that you set the replyTo
to - I'll try to look into this and fix it before the 0.30 release goes out


> Did not encounter any other issues between
> 0.32 client/0.16 broker so far, *fingers-crossed*.
>
> Long term, we would definitely be interested in updating the broker to test
> out the multiple queue per consumer feature you are adding to trunk
> considering our options with AMQP 0-10 are non-existent at the moment.
>
> Couple of quick questions:
>
> 1) Did session-level prefetching get dropped as part of changes to the
> protocol from AMQP 0-91 -> AMQP 0-10?
>
>
Yes - it's a change at the underlying protocol level.  If you are
interested in all the gory details, the 0-9-1 specification is here:

http://www.amqp.org/specification/0-9-1/amqp-org-download

and the 0-10 specification is here

http://www.amqp.org/specification/0-10/amqp-org-download

In 0-9-1 flow control is enforced through the basic.qos method which allows
you to set a prefetch size (in terms of either bytes or messages).  In 0-10
flow control is enforced through the message.flow command which is on a per
subscription basis.


> 2) Are there any plans to drop support on the broker for versions below AMQP
> 0-10?
>
>
Absolutely not!  We have a large number of users using 0-8 / 0-9 / 0-9-1.
 The aim for the Java Broker is to support every publicly released version
of AMQP and allow interoperation between these versions as much as
possible.  The 0.30 broker release provides some enhancements both to 0-9-1
functionality and message conversions between 0-9-1 and the other AMQP
protocols.

Hope this helps,
Rob


> Thanks!
>
>
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Rob, just verified with 0.32/trunk client and the v0.16 broker that we
currently use. It is indeed doing prefetching at a per session level once I
started the client with AMQP 0-91. Thanks so much for the suggestion! The
only issue I encountered is that getJMSDestination on the message returns an
instance of AMQTopic instead of a javax.jms.Queue object, which was
straightforward to work around. Did not encounter any other issues between
0.32 client/0.16 broker so far, *fingers-crossed*.

Long term, we would definitely be interested in updating the broker to test
out the multiple queue per consumer feature you are adding to trunk
considering our options with AMQP 0-10 are non-existent at the moment.

Couple of quick questions:

1) Did session-level prefetching get dropped as part of changes to the
protocol from AMQP 0-91 -> AMQP 0-10?

2) Are there any plans to drop support on the broker for versions below AMQP
0-10?

Thanks!





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
On 25 August 2014 20:12, John Buisson <jb...@salesforce.com> wrote:

> Rob,
>
> Just to make sure we are on the same page... If we use the ADDR
> functionality, we need to test with trunk instead of 0.28, right?
>
>
Correct.  The ADDR code for 0-9-1 is on trunk, and will be in the next 0.30
beta release.


>  "Other than that, as discussed, changing the consuming client to use the
> 0-9-1 protocol will give you session level flow control.  The current trunk
> code (as of about 30 minutes ago) should also support the use of ADDR style
> addresses (i.e. the address style that could only previously be used in
> 0-10)."
>
>
>
> When would this be ready for an official release?  Or is the recommendation
> to build our prod jars out of trunk?  That makes me a little nervous....
>
>
So, as above, the work has been merged into the 0.30 release branch...
which means it'll be in the next beta release (which should be happening on
Wednesday). The 0.30 release schedule is here
https://cwiki.apache.org/confluence/display/qpid/0.30+Release (at the foot
of the page).

Obviously if you have the opportunity to test with trunk / the beta release
then I can look to resolve any issues you may find before the final 0.30
release.

Meanwhile I've also knocked up an implementation of the "single consumer
across many queues" idea that I floated the other day.  That won't make it
into 0.30 (it's currently only on my laptop), but I would hope it would be
in 0.32, or whatever the next release is called.  Note that this change
will require both a new client and a new broker, but should also be easily
available from all the other 0-10 clients.

Hope this helps,
Rob


> On Sun, Aug 24, 2014 at 4:21 PM, Rob Godfrey <ro...@gmail.com>
> wrote:
>
> > So the "simplest" solution right now would seem to be to use the client
> in
> > 0-9-1 mode with the broker you have, unless that causes you a lot of
> > issues... more recent clients (e.g. trunk - what will become 0.32) should
> > still work with 0.16 (I haven't personally tested, but there really
> > shouldn't
> > be any reason why they would not).  Is there a reason that this won't
> work
> > for you?
> >
> > If trying to stick with AMQP 0-10, I think the obvious code change to the
> > broker would also need a code change client side... (to cope with
> messages
> > being repossessed, or simply assigned with a lease).
> >
> > It may be possible to code a client library side change without changing
> > the broker (basically reduce consumer credit to 0 as soon as one consumer
> > has a message, and release any messages that have been prefetched), but
> > that probably isn't a trivial piece of work.
> >
> > The only pure broker side change I can think of that wouldn't require a
> > client library change (but might impact your application design), is to
> > allow a single consumer to consume from multiple queues (i.e. you would
> > have a single consumer which is associated with all your hundreds of
> > queues, thus issuing one consumer credit will get you one message from
> one
> > of the possible queues).  This is something I want to add anyway, but
> it'd
> > most likely be something added to the current broker code and not easy to
> > backport (the broker internals have changed a bit since 0.16 in how
> > consumers are represented).
> >
> >
> > -- Rob
> >
> >
> > On 24 August 2014 22:03, John Buisson <jb...@salesforce.com> wrote:
> >
> > > Prediction might be possible, but it would certainly not be 100% and
> we'd
> > > still get the large spikes.  We have some message types that are
> > > consistently long, while others that would mix long and not long.
> > >
> > > Once we hit this problem, I was anticipating this being an effort to
> fix.
> > >  We are essentially blocked at this point from continuing forward with
> > > QPID.  Our users will absolutely not accept the latency spikes, so
> > > upgrading the broker and finding a solution is preferable to having to
> > > stop.  A change we could make to our very old version (0.16) would
> > > obviously be the simplest, but I'm much more interested in finding some
> > > kind of solution for now.
> > >
> > > I guess I should also note that forking off QPID and forcing a lack of
> > > caching just on our branch is also an option.  It would break the
> > protocol,
> > > but the protocol seems to be the problem with how we use it.  Not
> > > something I want to jump in to, but an option.  We will see if we can
> > play
> > > with the mix and match protocol in both 0.16 and the latest version.
> > That
> > > seems like the least-painful option so far.
> > >
> > > Thanks for the info, we really appreciate it :)
> > >
> > > John
> > >
> > >
> > >
> > > On Sun, Aug 24, 2014 at 12:44 PM, Rob Godfrey <rob.j.godfrey@gmail.com
> >
> > > wrote:
> > >
> > > > Hi John,
> > > >
> > > > I can't immediately think of any elegant solutions (or really many
> > > > inelegant ones) which wouldn't require a fairly significant change in
> > > your
> > > > application design.
> > > > (About the best I can think of is that if you can anticipate the
> amount
> > > of
> > > > processing time a particular message is going to take once you
> receive
> > > it,
> > > > you reconfigure your client to close any consumers on other queues and
> > > only
> > > > reestablish after you have processed the message.  (Note - I'd need
> to
> > > > check if in 0-10 closing the consumer actually returns prefetched
> > > messages,
> > > > I know in the 0-9-1 code it doesn't actually return messages until
> you
> > > > close the session...).
> > > >
> > > > In general is it the case that messages on a given queue take a
> > > predictable
> > > > amount of time (i.e. that there are some queues for which every
> message
> > > is
> > > > going to take an hour to process, whereas for others all messages
> will
> > > only
> > > > take milliseconds) or is it the case that the monster messages are
> > > > distributed across many queues which might also hold millisecond
> jobs.
> > > >
> > > > Other than that, as discussed, changing the consuming client to use
> the
> > > > 0-9-1 protocol will give you session level flow control.  The current
> > > trunk
> > > > code (as of about 30 minutes ago) should also support the use of ADDR
> > > style
> > > > addresses (i.e. the address style that could only previously be used
> in
> > > > 0-10).
> > > >
> > > > I'm certainly going to spend some time thinking about changes that we
> in
> > > the
> > > > Qpid development community can make in either the client or the
> broker
> > > that
> > > > could work around this problem for you... but I'm not sure I have any
> > > > immediate answers there (and I guess upgrading the broker is
> probably a
> > > big
> > > > change to ask you to take on)
> > > >
> > > > -- Rob.
> > > >
> > > >
> > > > On 24 August 2014 16:23, John Buisson <jb...@salesforce.com>
> wrote:
> > > >
> > > > > We are having some pretty major problems with this, so any advice
> you
> > > can
> > > > > give would be appreciated.  We have an extremely diverse group of
> > 450+
> > > > > types of messages.  They range from a few ms processing time to
> > several
> > > > > hours and we isolate them by queue.  With this setup, we are
> hitting
> > > > > problems where a high throughput message gets "stuck" behind a long
> > > > running
> > > > > message.  This can give us spikes of hours on our dequeue latency
> > where
> > > > the
> > > > > only good reason for it is the caching of the server....  We asked
> a
> > > > pretty
> > > > > specific question, but any thoughts on how we could work around the
> > > > larger
> > > > > issue would be very much appreciated!
> > > > >
> > > > > John
> > > > >
> > > > >
> > > > >
> > > > > On Sat, Aug 23, 2014 at 3:36 AM, Rob Godfrey <
> > rob.j.godfrey@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > For information, if you use a mixture of clients using AMQP
> > > > > 0-8/0-9/0-9-1
> > > > > > (which are all substantially the same protocol) and AMQP 0-10
> > (which
> > > is
> > > > > a
> > > > > > bit different) then the Java Broker should be able to translate
> > > > > > automatically between them allowing messages sent from one
> protocol
> > > to
> > > > be
> > > > > > received by the other.  As long as you are using standard JMS any
> > > such
> > > > > > translation should be pretty much invisible.  If you are doing
> > > non-JMS
> > > > > > things like sending Lists as values in the application headers
> then
> > > you
> > > > > may
> > > > > > run into issues.  The AMQP 0-9(-1) <-> AMQP 0-10 conversion in
> the
> > > 0.30
> > > > > > version of the broker has been improved and should deal with this
> > > case
> > > > > and
> > > > > > a few others.
> > > > > >
> > > > > > As you've discovered the 0-8/9/9-1 codepath doesn't currently
> > support
> > > > the
> > > > > > "ADDR" addressing syntax...  Unfortunately the current
> > implementation
> > > > of
> > > > > > that is somewhat mixed in with 0-10 specific features.
> > > > > >
> > > > > > -- Rob
> > > > > >
> > > > > >
> > > > > > On 23 August 2014 09:09, xiaodan.wang <
> xiaodan.wang@salesforce.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Thanks Robbie & Rob! Was able to use your suggestion to force
> the
> > > > > client
> > > > > > to
> > > > > > > use AMQP 0-9, will re-run our tests to validate session-wide
> > > > > prefetching.
> > > > > > >
> > > > > > > @Vijay, unfortunately ran into "The new addressing based sytanx
> > is
> > > > not
> > > > > > > supported for AMQP 0-8/0-9 versions" exception when trying to
> > > create
> > > > a
> > > > > > > consumer using AMQP 0-9. Will get it sorted out tomorrow :)
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > View this message in context:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> http://qpid.2158936.n2.nabble.com/Re-1-Queue-with-2-Consumers-turn-off-pre-fetching-tp6934582p7612411.html
> > > > > > > Sent from the Apache Qpid users mailing list archive at
> > Nabble.com.
> > > > > > >
> > > > > > >
> > > ---------------------------------------------------------------------
> > > > > > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > > > > > For additional commands, e-mail: users-help@qpid.apache.org
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by John Buisson <jb...@salesforce.com>.
Rob,

Just to make sure we are on the same page... If we use the ADDR
functionality, we need to test with trunk instead of 0.28, right?

"Other than that, as discussed, changing the consuming client to use the
0-9-1 protocol will give you session level flow control.  The current trunk
code (as of about 30 minutes ago) should also support the use of ADDR style
addresses (i.e. the address style that could only previously be used in
0-10)."



When would this be ready for an official release?  Or is the recommendation
to build our prod jars out of trunk?  That makes me a little nervous....



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
So the "simplest" solution right now would seem to be to use the client in
0-9-1 mode with the broker you have, unless that causes you a lot of
issues... more recent clients (e.g. trunk - what will become 0.32) should
still work with 0.16 (I haven't personally tested, but there really shouldn't
be any reason why they would not).  Is there a reason that this won't work
for you?

If trying to stick with AMQP 0-10, I think the obvious code change to the
broker would also need a code change client side... (to cope with messages
being repossessed, or simply assigned with a lease).

It may be possible to code a client library side change without changing
the broker (basically reduce consumer credit to 0 as soon as one consumer
has a message, and release any messages that have been prefetched), but
that probably isn't a trivial piece of work.

The only pure broker side change I can think of that wouldn't require a
client library change (but might impact your application design), is to
allow a single consumer to consume from multiple queues (i.e. you would
have a single consumer which is associated with all your hundreds of
queues, thus issuing one consumer credit will get you one message from one
of the possible queues).  This is something I want to add anyway, but it'd
most likely be something added to the current broker code and not easy to
backport (the broker internals have changed a bit since 0.16 in how
consumers are represented).


-- Rob



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by John Buisson <jb...@salesforce.com>.
Prediction might be possible, but it would certainly not be 100% and we'd
still get the large spikes.  We have some message types that are
consistently long, while others mix long-running and short-running messages.

Once we hit this problem, I anticipated that it would take real effort to fix.
 We are essentially blocked at this point from continuing forward with
QPID.  Our users will absolutely not accept the latency spikes, so
upgrading the broker and finding a solution is preferable to having to
stop.  A change we could make to our very old version (0.16) would
obviously be the simplest, but I'm much more interested in finding some
kind of solution for now.

I guess I should also note that forking off QPID and forcing a lack of
caching just on our branch is also an option.  It would break the protocol,
but the protocol seems to be the problem with how we use it.  Not
something I want to jump into, but an option.  We will see if we can play
with the mix-and-match protocol support in both 0.16 and the latest version.  That
seems like the least-painful option so far.

Thanks for the info, we really appreciate it :)

John




Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Hi John,

I can't immediately think of any elegant solutions (or really many
inelegant ones) which wouldn't require a fairly significant change in your
application design.
(About the best I can think of is that if you can anticipate the amount of
processing time a particular message is going to take once you receive it,
you reconfigure your client to close any consumers on other queues and only
reestablish after you have processed the message.)  (Note - I'd need to
check if in 0-10 closing the consumer actually returns prefetched messages,
I know in the 0-9-1 code it doesn't actually return messages until you
close the session...).
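
To make that concrete, something along these lines is what I have in mind
(just a sketch - the "longRunning" message property, the process() method
and the re-creation of the closed consumers are all things the application
would have to supply):

    import java.util.List;
    import javax.jms.*;

    // Hypothetical listener: when a message we expect to be slow arrives,
    // close the consumers on the other queues so they stop prefetching,
    // and re-open them once the slow message has been processed.
    public class SlowJobListener implements MessageListener {
        private final Session session;                      // a transacted session
        private final List<MessageConsumer> otherConsumers; // consumers on other queues

        public SlowJobListener(Session session, List<MessageConsumer> others) {
            this.session = session;
            this.otherConsumers = others;
        }

        public void onMessage(Message message) {
            try {
                if (message.getBooleanProperty("longRunning")) { // assumed app-set header
                    for (MessageConsumer c : otherConsumers) {
                        // whether prefetched messages are returned here is
                        // protocol-dependent, as noted above
                        c.close();
                    }
                }
                process(message);
                session.commit();
            } catch (JMSException e) {
                try { session.rollback(); } catch (JMSException ignored) { }
            }
            // re-establishing the closed consumers is left to the application
        }

        private void process(Message m) { /* the actual application work */ }
    }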

In general is it the case that messages on a given queue take a predictable
amount of time (i.e. that there are some queues for which every message is
going to take an hour to process, whereas for others all messages will only
take milliseconds) or is it the case that the monster messages are
distributed across many queues which might also hold millisecond jobs?

Other than that, as discussed, changing the consuming client to use the
0-9-1 protocol will give you session level flow control.  The current trunk
code (as of about 30 minutes ago) should also support the use of ADDR style
addresses (i.e. the address style that could only previously be used in
0-10).

I'm certainly going to spend some time thinking about changes that we in the
Qpid development community can make in either the client or the broker that
could work around this problem for you... but I'm not sure I have any
immediate answers there (and I guess upgrading the broker is probably a big
change to ask you to take on)

-- Rob.



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by John Buisson <jb...@salesforce.com>.
We are having some pretty major problems with this, so any advice you can
give would be appreciated.  We have an extremely diverse group of 450+
types of messages.  They range from a few ms processing time to several
hours and we isolate them by queue.  With this setup, we are hitting
problems where a high throughput message gets "stuck" behind a long running
message.  This can give us spikes of hours on our dequeue latency, where the
only plausible cause is the caching on the server....  We asked a pretty
specific question, but any thoughts on how we could work around the larger
issue would be very much appreciated!

John




Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
For information, if you use a mixture of clients using AMQP 0-8/0-9/0-9-1
(which are all substantially the same protocol) and AMQP 0-10 (which is a
bit different) then the Java Broker should be able to translate
automatically between them allowing messages sent from one protocol to be
received by the other.  As long as you are using standard JMS any such
translation should be pretty much invisible.  If you are doing non-JMS
things like sending Lists as values in the application headers then you may
run into issues.  The AMQP 0-9(-1) <-> AMQP 0-10 conversion in the 0.30
version of the broker has been improved and should deal with this case and
a few others.

As you've discovered the 0-8/9/9-1 codepath doesn't currently support the
"ADDR" addressing syntax...  Unfortunately the current implementation of
that is somewhat mixed in with 0-10 specific features.
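
To illustrate the difference, in a JNDI properties file the two styles would
look roughly like this (the queue name "orders" is just an example):

    # BURL style, which works on the 0-8/0-9/0-9-1 codepath
    destination.ordersQueue = BURL:direct://amq.direct//orders?routingkey='orders'

    # ADDR style, which until the recent trunk change required AMQP 0-10
    destination.ordersQueue = ADDR:orders; {create: always, node: {type: queue}}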

-- Rob



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Thanks Robbie & Rob! Was able to use your suggestion to force the client to
use AMQP 0-9, will re-run our tests to validate session-wide prefetching.

@Vijay, unfortunately ran into "The new addressing based sytanx is not
supported for AMQP 0-8/0-9 versions" exception when trying to create a
consumer using AMQP 0-9. Will get it sorted out tomorrow :)





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Vijay,

It is certainly possible to make the client/broker use 0-9-1 and you would
then end up getting per-session prefetch, however as the 0-8/0-9/0-9-1 and
0-10 paths in the client are fairly distinct in various areas you would
also see a number of other differing behaviours, which is the main reason
they are documented separately. Exactly how differently they behave would
depend on how you are using the client.

You can instruct the client what protocol version to attempt initially (it
defaults to 0-10), by setting the JVM system property qpid.amqp.version,
e.g. to "0-9-1".
Alternatively, you can instruct the broker not to support 0-10 on some or
all of its AMQP ports such that the java client will negotiate down to the
lower supported protocol version after the initial attempt at 0-10 is
rejected. How to do this varies depending on the broker version, but is
fairly easy on recent versions via the port settings in the web management.
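
For example (a minimal sketch - the broker URL and credentials below are
placeholders):

    import javax.jms.Connection;
    import org.apache.qpid.client.AMQConnection;

    public class ForceAmqp091 {
        public static void main(String[] args) throws Exception {
            // Equivalent to passing -Dqpid.amqp.version=0-9-1 on the command line
            System.setProperty("qpid.amqp.version", "0-9-1");
            Connection connection = new AMQConnection(
                    "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'");
            connection.start();
            connection.close();
        }
    }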

Robbie


Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Vijay Devadhar <vd...@salesforce.com>.
Thanks Rob.  Is it feasible for us to configure the client-broker to work
in AMQP 0-8/0-9/0-9-1 mode? If yes, do we end up getting the session level
prefetch limit?



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Dan,

Rob has beaten me to reading your mail, and his reply is almost exactly
what I would have said.

The per-consumer prefetching of the 0-10 client is mentioned in its
documentation, e.g:
http://qpid.apache.org/releases/qpid-0.28/programming/book/QpidJNDI.html#section-jms-connection-url


Robbie


Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Rob Godfrey <ro...@gmail.com>.
Hi Dan,

The document you refer to is discussing the behaviour of the client when
speaking AMQP 0-8/0-9/0-9-1 - these versions of AMQP allow for setting of
credit (prefetch) at a session wide level, but not on a per consumer basis.

I believe you are using the client and broker in AMQP 0-10 mode.  In AMQP
0-10 credit is issued on a per consumer (rather than per session) basis.  I
don't believe there is a way of setting a session wide credit limit in the
AMQP 0-10 protocol.
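
For illustration, that per-consumer credit is what the maxprefetch connection
option controls in 0-10 mode, e.g. (host and credentials are placeholders):

    amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='1'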


-- Rob



Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by "xiaodan.wang" <xi...@salesforce.com>.
Hi Robbie, sorry to resurrect an old thread :)

We are seeing an interesting behavior on the Qpid Java client with respect
to prefetching of messages (maxprefetch). Based on documentation from the
following link, we believe that the prefetch buffer is defined on a per
session basis. However, after running a few benchmarks, the prefetch buffer
seems to be allocated on a per consumer (or per destination queue, not sure
which one) basis. Namely, when we setup multiple consumers using the same
session, each consumer is buffering separately. Is this the expected
behavior? If so, can you suggest a workaround to either turn off client side
prefetching or have prefetch buffers scoped at the session level.

https://qpid.apache.org/releases/qpid-0.26/jms-client-0-8/book/JMS-Client-0-8-Client-Understanding-Session.html

We configured Qpid with asynchronous onMessage delivery with transacted
sessions. Prefetch count on the client is set to 1 (setting prefetch to 0 did
not solve our issue). The experiment involves 2 sessions (A and B) and 2
destination queues (Q1 and Q2). Each session creates 2 consumers that listen
on Q1 and Q2 respectively. Next, we enqueue 2 messages, one long running
message on Q1 and a short running message on Q2. A consumer on session A
pulls the long running message from Q1 and starts working on it. In the
meantime, session B does nothing even though there is an unconsumed message
on Q2. Once session A finishes the long running message, it consumes the
message from Q2.
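
Roughly, our setup looks like the sketch below (the connection URL, queue
names and the work done in onMessage stand in for our real code):

    import javax.jms.*;
    import org.apache.qpid.client.AMQConnection;

    public class PrefetchExperiment {
        public static void main(String[] args) throws Exception {
            Connection connection = new AMQConnection(
                "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='1'");
            // Two transacted sessions, each with one consumer on Q1 and one on Q2
            Session sessionA = connection.createSession(true, Session.SESSION_TRANSACTED);
            Session sessionB = connection.createSession(true, Session.SESSION_TRANSACTED);
            for (Session s : new Session[] { sessionA, sessionB }) {
                s.createConsumer(s.createQueue("Q1")).setMessageListener(m -> work(s, m));
                s.createConsumer(s.createQueue("Q2")).setMessageListener(m -> work(s, m));
            }
            connection.start();
        }

        private static void work(Session s, Message m) {
            try {
                // simulate short or long processing here, then commit so the
                // broker can issue the consumer more credit
                s.commit();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }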

This seems to contradict our expectation. So we suspected that the prefetch
buffer is allocated for each consumer on a session (i.e. for session A, with
prefetch of 1, it will buffer 1 message from Q1 and 1 for Q2 for a total of
2 messages). To test this theory, we modified the above experiment to use a
single destination queue (Q1) and consumer for sessions A and B. We enqueued
both long running and short running messages in Q1 and did not observe any
instance in which a consumer was doing nothing while an unconsumed message
was sitting on the queue. (We repeated this several times with many
messages).

This seems to indicate that there is a separate prefetch buffer for each
consumer within the same session. Thanks in advance for any help
interpreting this behavior!

Cheers! Dan





Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Praveen M <le...@gmail.com>.
Hi Robbie,

I tested this fix today. It works like a charm. Thanks a lot.

Praveen

On Sun, Oct 30, 2011 at 12:02 PM, Praveen M <le...@gmail.com> wrote:

> awesome. Thanks a lot Robbie :-)
>
>
> On Sun, Oct 30, 2011 at 11:52 AM, Robbie Gemmell <robbie.gemmell@gmail.com
> > wrote:
>
>> I have made a change to the client on trunk that should result in it
>> now doing what you want when using prefetch=1 on transacted sessions
>> when using onMessage().
>>
>> Robbie
>>
>> On 28 October 2011 02:25, Robbie Gemmell <ro...@gmail.com>
>> wrote:
>> > Ok, I havent actually tried this yet, but after sneaking a look at the
>> > code I am pretty sure I see a problem in the client specific to
>> > transacted AMQP 0-10 sessions with prefetch=1 that would cause the
>> > behaviour you are seeing. I'll look into it at the weekend. Time for
>> > sleep, before 3am comes along ;)
>> >
>> > Robbie
>> >
>> > On 28 October 2011 01:18, Praveen M <le...@gmail.com> wrote:
>> >> Hi Robbie,
>> >>
>> >> I was testing against trunk, and also, I was calling commit after my
>> >> simulated processing delay, yes.
>> >>
>> >> Thanks,
>> >> Praveen
>> >>
>> >> On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <
>> robbie.gemmell@gmail.com>wrote:
>> >>
>> >>> Just to be clear for when I look at it...were you using trunk or 0.12
>> >>> for those tests, and presumably you were calling commit after your
>> >>> simulated processing delay?
>> >>>
>> >>> Robbie
>> >>>
>> >>> On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
>> >>> > Hi Robbie,
>> >>> >
>> >>> > I was using asynchronous onMessage delivery with transacted session
>> for
>> >>> my
>> >>> > tests.
>> >>> >
>> >>> > So from your email, I'm afraid it might be an issue. It will be
>> great if
>> >>> you
>> >>> > could investigate a little on this and keep us update.
>> >>> >
>> >>> > Thanks a lot,
>> >>> > Praveen
>> >>> >
>> >>> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
>> >>> > <ro...@gmail.com>wrote:
>> >>> >
>> >>> >> From the below, would I be right in thinking you were using
>> receive()
>> >>> >> calls with an AutoAck session? If so then you would see the
>> behaviour
>> >>> >> you observed as the message gets acked just before receive()
>> returns,
>> >>> >> which makes the broker send the next one to the client. That
>> shouldnt
>> >>> >> happen if you were using asynchronous onMessage delivery (since the
>> >>> >> ack gets since when the onMessage() handler returns), or if you you
>> >>> >> used a ClientAck or Transacted session in which you only
>> acknowledged
>> >>> >> the message / commited the session after the processing is
>> complete.
>> >>> >>
>> >>> >> I must admit to having never used the client with prefetch set to
>> 0,
>> >>> >> which should in theory give you what you are looking for even with
>> >>> >> AutoAck but based on your comments appears not to have. I will try
>> and
>> >>> >> take a look into that at the weekend to see if there are any
>> obvious
>> >>> >> issues we can JIRA for fixing.
>> >>> >>
>> >>> >> Robbie
>> >>> >>
>> >>> >> On 26 October 2011 23:48, Praveen M <le...@gmail.com>
>> wrote:
>> >>> >> > Hi Jakub,
>> >>> >> >
>> >>> >> > Thanks for your reply. Yes I did find the prefetch model and
>> reran my
>> >>> >> test
>> >>> >> > and now ran into another issue.
>> >>> >> >
>> >>> >> > I set the prefetch to 1 and ran the same test described in my
>> earlier
>> >>> >> mail.
>> >>> >> >
>> >>> >> > In this case the behavior I see is,
>> >>> >> > The 1st consumer gets the 1st message and works on it for a
>> while, the
>> >>> >> 2nd
>> >>> >> > consumer consumes 8 messages and then does nothing(even though
>> there
>> >>> was
>> >>> >> 1
>> >>> >> > more unconsumed message). When the first consumer completed its
>> long
>> >>> >> running
>> >>> >> > message it got around and consumed the remaining 1 message.
>> However,
>> >>>  I
>> >>> >> was
>> >>> >> > expecting the 2nd consumer to dequeue all 9 messages(the number
>> of
>> >>> >> remaining
>> >>> >> > messages) while the 1st consumer was busy working on the long
>> message.
>> >>> >> >
>> >>> >> > Then, I thought, perhaps the prefetch count meant that, when a
>> >>> consumer
>> >>> >> is
>> >>> >> > working on a message, another message in the queue is prefetched
>> to
>> >>> the
>> >>> >> > consumer from the persistant store as my prefetch count is 1.
>> That
>> >>> could
>> >>> >> > explain why I saw the behavior as above.
>> >>> >> >
>> >>> >> > What i wanted to achieve was to actually turn of any kinda
>> prefetching
>> >>> >> > (Yeah, I'm ok with taking the throughput hit)
>> >>> >> >
>> >>> >> > So I re ran my test now with prefetch = 0, and saw a really weird
>> >>> result.
>> >>> >> >
>> >>> >> > With prefetch 0, the 1st consumer gets the 1st message and works
>> on it
>> >>> >> for a
>> >>> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and
>> then
>> >>> does
>> >>> >> > nothing(even though there were 2 more unconsumed messages). When
>> the
>> >>> 1st
>> >>> >> > consumer completed processing it's message it got to consume the
>> >>> >> remaining
>> >>> >> > two messages too. (Did it kinda prefetch 2?)
>> >>> >> >
>> >>> >> > Can someone please tell me if Is this a bug or am I doing
>> something
>> >>> >> > completely wrong? I'm using the latest Java Broker & client (from
>> >>> trunk)
>> >>> >> > with DerbyMessageStore for my tests.
>> >>> >> >
>> >>> >> > Also, can someone please tell me what'd be the best way to turn
>> off
>> >>> >> > prefetching?
>> >>> >> >
>> >>> >> > Thanks a lot,
>> >>> >> > Praveen
>> >>> >> >
>> >>> >> >
>> >>> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz>
>> >>> wrote:
>> >>> >> >
>> >>> >> >> Hi Praveen,
>> >>> >> >>
>> >>> >> >> Have you set the capacity / prefetch for the receivers to one
>> >>> message?
>> >>> >> >> I believe the capacity defines how many messages can be
>> "buffered" by
>> >>> >> >> the client API in background while you are still processing the
>> first
>> >>> >> >> message. That may cause that both your clients receive 5
>> messages,
>> >>> >> >> even when the processing in the first client takes a longer
>> time.
>> >>> >> >>
>> >>> >> >> Regards
>> >>> >> >> Jakub
>> >>> >> >>
>> >>> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <
>> lefthandmagic@gmail.com>
>> >>> >> wrote:
>> >>> >> >> > Hi,
>> >>> >> >> >
>> >>> >> >> > I ran the following test
>> >>> >> >> >
>> >>> >> >> > 1) I created 1 Queue
>> >>> >> >> > 2) Registered 2 consumers to the queue
>> >>> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued
>> message
>> >>> is
>> >>> >> >> long
>> >>> >> >> > running. I simulated such that the first message on
>> consumption
>> >>> takes
>> >>> >> >> about
>> >>> >> >> > 50 seconds to be processed]
>> >>> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
>> >>> message.
>> >>> >> >> > 5) The 1st consumer that got the long running message works
>> on it
>> >>> for
>> >>> >> a
>> >>> >> >> long
>> >>> >> >> > time while the second consumer that got the second message
>> keeps
>> >>> >> >> processing
>> >>> >> >> > and going to the next message, but  only goes as far until it
>> >>> >> processes 5
>> >>> >> >> of
>> >>> >> >> > the 10 messages enqueued. Then the 2nd consumer gives up
>> >>> processing.
>> >>> >> >> > 6) When the 1st consumer with the  long running message
>> completes,
>> >>> it
>> >>> >> >> then
>> >>> >> >> > ends up processing the remaining messages and my test
>> completes.
>> >>> >> >> >
>> >>> >> >> > So it seems like the two consumers were trying to take a fair
>> share
>> >>> of
>> >>> >> >> > messages that they were processing immaterial of the time it
>> takes
>> >>> to
>> >>> >> >> > process individual messages. Enqueued message = 10, Consumer 1
>> >>> share
>> >>> >> of 5
>> >>> >> >> > messages were processed by it, and Consumer 2's share of 5
>> messages
>> >>> >> were
>> >>> >> >> > processed by it.
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> > This is kinda against the behavior that I'd like to see. The
>> >>> desired
>> >>> >> >> > behavior in my case is that of each consumer keeps going on
>> if it's
>> >>> >> done
>> >>> >> >> and
>> >>> >> >> > has other messages to process.
>> >>> >> >> >
>> >>> >> >> > In the above test, I'd expect as consumer 1 is working on the
>> long
>> >>> >> >> message,
>> >>> >> >> > the second consumer should work its way through all the
>> remaining
>> >>> >> >> messages.
>> >>> >> >> >
>> >>> >> >> > Is there some config that I'm missing that could cause this
>> >>> effect??
>> >>> >> Any
>> >>> >> >> > advice on tackling this will be great.
>> >>> >> >> >
>> >>> >> >> > Also, Can someone please explain in what order are messages
>> >>> delivered
>> >>> >> to
>> >>> >> >> the
>> >>> >> >> > consumers in the following cases?
>> >>> >> >> >
>> >>> >> >> > Case 1)
>> >>> >> >> >  There is a single Queue with more than 1 message in it and
>> >>> multiple
>> >>> >> >> > consumers registered to it.
>> >>> >> >> >
>> >>> >> >> > Case 2)
>> >>> >> >> > There are multiple queues each with more than 1 message in
>> it, and
>> >>> has
>> >>> >> >> > multiple consumers registered to it.
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> > Thank you,
>> >>> >> >> > --
>> >>> >> >> > -Praveen
>> >>> >> >> >
>> >>> >> >>
>> >>> >> >>
>> ---------------------------------------------------------------------
>> >>> >> >> Apache Qpid - AMQP Messaging Implementation
>> >>> >> >> Project:      http://qpid.apache.org
>> >>> >> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >>> >> >>
>> >>> >> >>
>> >>> >> >
>> >>> >> >
>> >>> >> > --
>> >>> >> > -Praveen
>> >>> >> >
>> >>> >>
>> >>> >>
>> ---------------------------------------------------------------------
>> >>> >> Apache Qpid - AMQP Messaging Implementation
>> >>> >> Project:      http://qpid.apache.org
>> >>> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >>> >>
>> >>> >>
>> >>> >
>> >>> >
>> >>> > --
>> >>> > -Praveen
>> >>> >
>> >>>
>> >>> ---------------------------------------------------------------------
>> >>> Apache Qpid - AMQP Messaging Implementation
>> >>> Project:      http://qpid.apache.org
>> >>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >>>
>> >>>
>> >>
>> >>
>> >> --
>> >> -Praveen
>> >>
>> >
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>
>>
>
>
> --
> -Praveen
>



-- 
-Praveen

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Praveen M <le...@gmail.com>.
Awesome. Thanks a lot, Robbie :-)

On Sun, Oct 30, 2011 at 11:52 AM, Robbie Gemmell
<ro...@gmail.com>wrote:

> I have made a change to the client on trunk that should result in it
> now doing what you want when using prefetch=1 on transacted sessions
> when using onMessage().
>
> Robbie
>
> On 28 October 2011 02:25, Robbie Gemmell <ro...@gmail.com> wrote:
> > Ok, I havent actually tried this yet, but after sneaking a look at the
> > code I am pretty sure I see a problem in the client specific to
> > transacted AMQP 0-10 sessions with prefetch=1 that would cause the
> > behaviour you are seeing. I'll look into it at the weekend. Time for
> > sleep, before 3am comes along ;)
> >
> > Robbie
> >
> > On 28 October 2011 01:18, Praveen M <le...@gmail.com> wrote:
> >> Hi Robbie,
> >>
> >> I was testing against trunk, and also, I was calling commit after my
> >> simulated processing delay, yes.
> >>
> >> Thanks,
> >> Praveen
> >>
> >> On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <
> robbie.gemmell@gmail.com>wrote:
> >>
> >>> Just to be clear for when I look at it...were you using trunk or 0.12
> >>> for those tests, and presumably you were calling commit after your
> >>> simulated processing delay?
> >>>
> >>> Robbie
> >>>
> >>> On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
> >>> > Hi Robbie,
> >>> >
> >>> > I was using asynchronous onMessage delivery with transacted session
> for
> >>> my
> >>> > tests.
> >>> >
> >>> > So from your email, I'm afraid it might be an issue. It will be
> great if
> >>> you
> >>> > could investigate a little on this and keep us update.
> >>> >
> >>> > Thanks a lot,
> >>> > Praveen
> >>> >
> >>> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> >>> > <ro...@gmail.com>wrote:
> >>> >
> >>> >> From the below, would I be right in thinking you were using
> receive()
> >>> >> calls with an AutoAck session? If so then you would see the
> behaviour
> >>> >> you observed as the message gets acked just before receive()
> returns,
> >>> >> which makes the broker send the next one to the client. That
> shouldnt
> >>> >> happen if you were using asynchronous onMessage delivery (since the
> >>> >> ack gets sent when the onMessage() handler returns), or if you
> >>> >> used a ClientAck or Transacted session in which you only
> acknowledged
> >>> >> the message / commited the session after the processing is complete.
> >>> >>
> >>> >> I must admit to having never used the client with prefetch set to 0,
> >>> >> which should in theory give you what you are looking for even with
> >>> >> AutoAck but based on your comments appears not to have. I will try
> and
> >>> >> take a look into that at the weekend to see if there are any obvious
> >>> >> issues we can JIRA for fixing.
> >>> >>
> >>> >> Robbie
> >>> >>
> >>> >> On 26 October 2011 23:48, Praveen M <le...@gmail.com>
> wrote:
> >>> >> > Hi Jakub,
> >>> >> >
> >>> >> > Thanks for your reply. Yes I did find the prefetch model and
> reran my
> >>> >> test
> >>> >> > and now ran into another issue.
> >>> >> >
> >>> >> > I set the prefetch to 1 and ran the same test described in my
> earlier
> >>> >> mail.
> >>> >> >
> >>> >> > In this case the behavior I see is,
> >>> >> > The 1st consumer gets the 1st message and works on it for a
> while, the
> >>> >> 2nd
> >>> >> > consumer consumes 8 messages and then does nothing(even though
> there
> >>> was
> >>> >> 1
> >>> >> > more unconsumed message). When the first consumer completed its
> long
> >>> >> running
> >>> >> > message it got around and consumed the remaining 1 message.
> However,
> >>>  I
> >>> >> was
> >>> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
> >>> >> remaining
> >>> >> > messages) while the 1st consumer was busy working on the long
> message.
> >>> >> >
> >>> >> > Then, I thought, perhaps the prefetch count meant that, when a
> >>> consumer
> >>> >> is
> >>> >> > working on a message, another message in the queue is prefetched
> to
> >>> the
> >>> >> > consumer from the persistant store as my prefetch count is 1. That
> >>> could
> >>> >> > explain why I saw the behavior as above.
> >>> >> >
> >>> >> > What i wanted to achieve was to actually turn of any kinda
> prefetching
> >>> >> > (Yeah, I'm ok with taking the throughput hit)
> >>> >> >
> >>> >> > So I re ran my test now with prefetch = 0, and saw a really weird
> >>> result.
> >>> >> >
> >>> >> > With prefetch 0, the 1st consumer gets the 1st message and works
> on it
> >>> >> for a
> >>> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
> >>> does
> >>> >> > nothing(even though there were 2 more unconsumed messages). When
> the
> >>> 1st
> >>> >> > consumer completed processing it's message it got to consume the
> >>> >> remaining
> >>> >> > two messages too. (Did it kinda prefetch 2?)
> >>> >> >
> >>> >> > Can someone please tell me if Is this a bug or am I doing
> something
> >>> >> > completely wrong? I'm using the latest Java Broker & client (from
> >>> trunk)
> >>> >> > with DerbyMessageStore for my tests.
> >>> >> >
> >>> >> > Also, can someone please tell me what'd be the best way to turn
> off
> >>> >> > prefetching?
> >>> >> >
> >>> >> > Thanks a lot,
> >>> >> > Praveen
> >>> >> >
> >>> >> >
> >>> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz>
> >>> wrote:
> >>> >> >
> >>> >> >> Hi Praveen,
> >>> >> >>
> >>> >> >> Have you set the capacity / prefetch for the receivers to one
> >>> message?
> >>> >> >> I believe the capacity defines how many messages can be
> "buffered" by
> >>> >> >> the client API in background while you are still processing the
> first
> >>> >> >> message. That may cause that both your clients receive 5
> messages,
> >>> >> >> even when the processing in the first client takes a longer time.
> >>> >> >>
> >>> >> >> Regards
> >>> >> >> Jakub
> >>> >> >>
> >>> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <
> lefthandmagic@gmail.com>
> >>> >> wrote:
> >>> >> >> > Hi,
> >>> >> >> >
> >>> >> >> > I ran the following test
> >>> >> >> >
> >>> >> >> > 1) I created 1 Queue
> >>> >> >> > 2) Registered 2 consumers to the queue
> >>> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued
> message
> >>> is
> >>> >> >> long
> >>> >> >> > running. I simulated such that the first message on consumption
> >>> takes
> >>> >> >> about
> >>> >> >> > 50 seconds to be processed]
> >>> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
> >>> message.
> >>> >> >> > 5) The 1st consumer that got the long running message works on
> it
> >>> for
> >>> >> a
> >>> >> >> long
> >>> >> >> > time while the second consumer that got the second message
> keeps
> >>> >> >> processing
> >>> >> >> > and going to the next message, but  only goes as far until it
> >>> >> processes 5
> >>> >> >> of
> >>> >> >> > the 10 messages enqueued. Then the 2nd consumer gives up
> >>> processing.
> >>> >> >> > 6) When the 1st consumer with the  long running message
> completes,
> >>> it
> >>> >> >> then
> >>> >> >> > ends up processing the remaining messages and my test
> completes.
> >>> >> >> >
> >>> >> >> > So it seems like the two consumers were trying to take a fair
> share
> >>> of
> >>> >> >> > messages that they were processing immaterial of the time it
> takes
> >>> to
> >>> >> >> > process individual messages. Enqueued message = 10, Consumer 1
> >>> share
> >>> >> of 5
> >>> >> >> > messages were processed by it, and Consumer 2's share of 5
> messages
> >>> >> were
> >>> >> >> > processed by it.
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > This is kinda against the behavior that I'd like to see. The
> >>> desired
> >>> >> >> > behavior in my case is that of each consumer keeps going on if
> it's
> >>> >> done
> >>> >> >> and
> >>> >> >> > has other messages to process.
> >>> >> >> >
> >>> >> >> > In the above test, I'd expect as consumer 1 is working on the
> long
> >>> >> >> message,
> >>> >> >> > the second consumer should work its way through all the
> remaining
> >>> >> >> messages.
> >>> >> >> >
> >>> >> >> > Is there some config that I'm missing that could cause this
> >>> effect??
> >>> >> Any
> >>> >> >> > advice on tackling this will be great.
> >>> >> >> >
> >>> >> >> > Also, Can someone please explain in what order are messages
> >>> delivered
> >>> >> to
> >>> >> >> the
> >>> >> >> > consumers in the following cases?
> >>> >> >> >
> >>> >> >> > Case 1)
> >>> >> >> >  There is a single Queue with more than 1 message in it and
> >>> multiple
> >>> >> >> > consumers registered to it.
> >>> >> >> >
> >>> >> >> > Case 2)
> >>> >> >> > There are multiple queues each with more than 1 message in it,
> and
> >>> has
> >>> >> >> > multiple consumers registered to it.
> >>> >> >> >
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > Thank you,
> >>> >> >> > --
> >>> >> >> > -Praveen
> >>> >> >> >
> >>> >> >>
> >>> >> >>
> ---------------------------------------------------------------------
> >>> >> >> Apache Qpid - AMQP Messaging Implementation
> >>> >> >> Project:      http://qpid.apache.org
> >>> >> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >>> >> >>
> >>> >> >>
> >>> >> >
> >>> >> >
> >>> >> > --
> >>> >> > -Praveen
> >>> >> >
> >>> >>
> >>> >>
> ---------------------------------------------------------------------
> >>> >> Apache Qpid - AMQP Messaging Implementation
> >>> >> Project:      http://qpid.apache.org
> >>> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >>> >>
> >>> >>
> >>> >
> >>> >
> >>> > --
> >>> > -Praveen
> >>> >
> >>>
> >>> ---------------------------------------------------------------------
> >>> Apache Qpid - AMQP Messaging Implementation
> >>> Project:      http://qpid.apache.org
> >>> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >>>
> >>>
> >>
> >>
> >> --
> >> -Praveen
> >>
> >
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:users-subscribe@qpid.apache.org
>
>


-- 
-Praveen

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
I have made a change to the client on trunk that should result in it
now doing what you want when using prefetch=1 on transacted sessions
with onMessage().
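
For anyone wanting to try that combination, here is a minimal sketch.
It assumes the 0-x Java client's connection URL format (the maxprefetch
option) and uses placeholder broker details, credentials and virtualhost,
so double-check the option name against the client version you run:

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.qpid.client.AMQConnectionFactory;

public class PrefetchOneSetup {
    public static void main(String[] args) throws Exception {
        // maxprefetch='1' asks the client to hold at most one outstanding
        // message per session; everything in the URL is a placeholder.
        Connection connection = new AMQConnectionFactory(
            "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='1'")
            .createConnection();
        // Transacted session; register a consumer and MessageListener on it
        // as usual, and commit after each message has been processed.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        connection.start();
    }
}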

Robbie

On 28 October 2011 02:25, Robbie Gemmell <ro...@gmail.com> wrote:
> Ok, I havent actually tried this yet, but after sneaking a look at the
> code I am pretty sure I see a problem in the client specific to
> transacted AMQP 0-10 sessions with prefetch=1 that would cause the
> behaviour you are seeing. I'll look into it at the weekend. Time for
> sleep, before 3am comes along ;)
>
> Robbie
>
> On 28 October 2011 01:18, Praveen M <le...@gmail.com> wrote:
>> Hi Robbie,
>>
>> I was testing against trunk, and also, I was calling commit after my
>> simulated processing delay, yes.
>>
>> Thanks,
>> Praveen
>>
>> On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <ro...@gmail.com>wrote:
>>
>>> Just to be clear for when I look at it...were you using trunk or 0.12
>>> for those tests, and presumably you were calling commit after your
>>> simulated processing delay?
>>>
>>> Robbie
>>>
>>> On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
>>> > Hi Robbie,
>>> >
>>> > I was using asynchronous onMessage delivery with transacted session for
>>> my
>>> > tests.
>>> >
>>> > So from your email, I'm afraid it might be an issue. It will be great if
>>> you
>>> > could investigate a little on this and keep us update.
>>> >
>>> > Thanks a lot,
>>> > Praveen
>>> >
>>> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
>>> > <ro...@gmail.com>wrote:
>>> >
>>> >> From the below, would I be right in thinking you were using receive()
>>> >> calls with an AutoAck session? If so then you would see the behaviour
>>> >> you observed as the message gets acked just before receive() returns,
>>> >> which makes the broker send the next one to the client. That shouldnt
>>> >> happen if you were using asynchronous onMessage delivery (since the
>>> >> ack gets sent when the onMessage() handler returns), or if you
>>> >> used a ClientAck or Transacted session in which you only acknowledged
>>> >> the message / commited the session after the processing is complete.
>>> >>
>>> >> I must admit to having never used the client with prefetch set to 0,
>>> >> which should in theory give you what you are looking for even with
>>> >> AutoAck but based on your comments appears not to have. I will try and
>>> >> take a look into that at the weekend to see if there are any obvious
>>> >> issues we can JIRA for fixing.
>>> >>
>>> >> Robbie
>>> >>
>>> >> On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
>>> >> > Hi Jakub,
>>> >> >
>>> >> > Thanks for your reply. Yes I did find the prefetch model and reran my
>>> >> test
>>> >> > and now ran into another issue.
>>> >> >
>>> >> > I set the prefetch to 1 and ran the same test described in my earlier
>>> >> mail.
>>> >> >
>>> >> > In this case the behavior I see is,
>>> >> > The 1st consumer gets the 1st message and works on it for a while, the
>>> >> 2nd
>>> >> > consumer consumes 8 messages and then does nothing(even though there
>>> was
>>> >> 1
>>> >> > more unconsumed message). When the first consumer completed its long
>>> >> running
>>> >> > message it got around and consumed the remaining 1 message. However,
>>>  I
>>> >> was
>>> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
>>> >> remaining
>>> >> > messages) while the 1st consumer was busy working on the long message.
>>> >> >
>>> >> > Then, I thought, perhaps the prefetch count meant that, when a
>>> consumer
>>> >> is
>>> >> > working on a message, another message in the queue is prefetched to
>>> the
>>> >> > consumer from the persistant store as my prefetch count is 1. That
>>> could
>>> >> > explain why I saw the behavior as above.
>>> >> >
>>> >> > What i wanted to achieve was to actually turn of any kinda prefetching
>>> >> > (Yeah, I'm ok with taking the throughput hit)
>>> >> >
>>> >> > So I re ran my test now with prefetch = 0, and saw a really weird
>>> result.
>>> >> >
>>> >> > With prefetch 0, the 1st consumer gets the 1st message and works on it
>>> >> for a
>>> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
>>> does
>>> >> > nothing(even though there were 2 more unconsumed messages). When the
>>> 1st
>>> >> > consumer completed processing it's message it got to consume the
>>> >> remaining
>>> >> > two messages too. (Did it kinda prefetch 2?)
>>> >> >
>>> >> > Can someone please tell me if Is this a bug or am I doing something
>>> >> > completely wrong? I'm using the latest Java Broker & client (from
>>> trunk)
>>> >> > with DerbyMessageStore for my tests.
>>> >> >
>>> >> > Also, can someone please tell me what'd be the best way to turn off
>>> >> > prefetching?
>>> >> >
>>> >> > Thanks a lot,
>>> >> > Praveen
>>> >> >
>>> >> >
>>> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz>
>>> wrote:
>>> >> >
>>> >> >> Hi Praveen,
>>> >> >>
>>> >> >> Have you set the capacity / prefetch for the receivers to one
>>> message?
>>> >> >> I believe the capacity defines how many messages can be "buffered" by
>>> >> >> the client API in background while you are still processing the first
>>> >> >> message. That may cause that both your clients receive 5 messages,
>>> >> >> even when the processing in the first client takes a longer time.
>>> >> >>
>>> >> >> Regards
>>> >> >> Jakub
>>> >> >>
>>> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com>
>>> >> wrote:
>>> >> >> > Hi,
>>> >> >> >
>>> >> >> > I ran the following test
>>> >> >> >
>>> >> >> > 1) I created 1 Queue
>>> >> >> > 2) Registered 2 consumers to the queue
>>> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message
>>> is
>>> >> >> long
>>> >> >> > running. I simulated such that the first message on consumption
>>> takes
>>> >> >> about
>>> >> >> > 50 seconds to be processed]
>>> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
>>> message.
>>> >> >> > 5) The 1st consumer that got the long running message works on it
>>> for
>>> >> a
>>> >> >> long
>>> >> >> > time while the second consumer that got the second message keeps
>>> >> >> processing
>>> >> >> > and going to the next message, but  only goes as far until it
>>> >> processes 5
>>> >> >> of
>>> >> >> > the 10 messages enqueued. Then the 2nd consumer gives up
>>> processing.
>>> >> >> > 6) When the 1st consumer with the  long running message completes,
>>> it
>>> >> >> then
>>> >> >> > ends up processing the remaining messages and my test completes.
>>> >> >> >
>>> >> >> > So it seems like the two consumers were trying to take a fair share
>>> of
>>> >> >> > messages that they were processing immaterial of the time it takes
>>> to
>>> >> >> > process individual messages. Enqueued message = 10, Consumer 1
>>> share
>>> >> of 5
>>> >> >> > messages were processed by it, and Consumer 2's share of 5 messages
>>> >> were
>>> >> >> > processed by it.
>>> >> >> >
>>> >> >> >
>>> >> >> > This is kinda against the behavior that I'd like to see. The
>>> desired
>>> >> >> > behavior in my case is that of each consumer keeps going on if it's
>>> >> done
>>> >> >> and
>>> >> >> > has other messages to process.
>>> >> >> >
>>> >> >> > In the above test, I'd expect as consumer 1 is working on the long
>>> >> >> message,
>>> >> >> > the second consumer should work its way through all the remaining
>>> >> >> messages.
>>> >> >> >
>>> >> >> > Is there some config that I'm missing that could cause this
>>> effect??
>>> >> Any
>>> >> >> > advice on tackling this will be great.
>>> >> >> >
>>> >> >> > Also, Can someone please explain in what order are messages
>>> delivered
>>> >> to
>>> >> >> the
>>> >> >> > consumers in the following cases?
>>> >> >> >
>>> >> >> > Case 1)
>>> >> >> >  There is a single Queue with more than 1 message in it and
>>> multiple
>>> >> >> > consumers registered to it.
>>> >> >> >
>>> >> >> > Case 2)
>>> >> >> > There are multiple queues each with more than 1 message in it, and
>>> has
>>> >> >> > multiple consumers registered to it.
>>> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> > Thank you,
>>> >> >> > --
>>> >> >> > -Praveen
>>> >> >> >
>>> >> >>
>>> >> >> ---------------------------------------------------------------------
>>> >> >> Apache Qpid - AMQP Messaging Implementation
>>> >> >> Project:      http://qpid.apache.org
>>> >> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>> >> >>
>>> >> >>
>>> >> >
>>> >> >
>>> >> > --
>>> >> > -Praveen
>>> >> >
>>> >>
>>> >> ---------------------------------------------------------------------
>>> >> Apache Qpid - AMQP Messaging Implementation
>>> >> Project:      http://qpid.apache.org
>>> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>> >>
>>> >>
>>> >
>>> >
>>> > --
>>> > -Praveen
>>> >
>>>
>>> ---------------------------------------------------------------------
>>> Apache Qpid - AMQP Messaging Implementation
>>> Project:      http://qpid.apache.org
>>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>>
>>>
>>
>>
>> --
>> -Praveen
>>
>

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
OK, I haven't actually tried this yet, but after sneaking a look at the
code I am pretty sure I see a problem in the client, specific to
transacted AMQP 0-10 sessions with prefetch=1, that would cause the
behaviour you are seeing. I'll look into it at the weekend. Time for
sleep, before 3am comes along ;)

Robbie

On 28 October 2011 01:18, Praveen M <le...@gmail.com> wrote:
> Hi Robbie,
>
> I was testing against trunk, and also, I was calling commit after my
> simulated processing delay, yes.
>
> Thanks,
> Praveen
>
> On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <ro...@gmail.com>wrote:
>
>> Just to be clear for when I look at it...were you using trunk or 0.12
>> for those tests, and presumably you were calling commit after your
>> simulated processing delay?
>>
>> Robbie
>>
>> On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
>> > Hi Robbie,
>> >
>> > I was using asynchronous onMessage delivery with transacted session for
>> my
>> > tests.
>> >
>> > So from your email, I'm afraid it might be an issue. It will be great if
>> you
>> > could investigate a little on this and keep us update.
>> >
>> > Thanks a lot,
>> > Praveen
>> >
>> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
>> > <ro...@gmail.com>wrote:
>> >
>> >> From the below, would I be right in thinking you were using receive()
>> >> calls with an AutoAck session? If so then you would see the behaviour
>> >> you observed as the message gets acked just before receive() returns,
>> >> which makes the broker send the next one to the client. That shouldnt
>> >> happen if you were using asynchronous onMessage delivery (since the
>> >> ack gets sent when the onMessage() handler returns), or if you
>> >> used a ClientAck or Transacted session in which you only acknowledged
>> >> the message / commited the session after the processing is complete.
>> >>
>> >> I must admit to having never used the client with prefetch set to 0,
>> >> which should in theory give you what you are looking for even with
>> >> AutoAck but based on your comments appears not to have. I will try and
>> >> take a look into that at the weekend to see if there are any obvious
>> >> issues we can JIRA for fixing.
>> >>
>> >> Robbie
>> >>
>> >> On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
>> >> > Hi Jakub,
>> >> >
>> >> > Thanks for your reply. Yes I did find the prefetch model and reran my
>> >> test
>> >> > and now ran into another issue.
>> >> >
>> >> > I set the prefetch to 1 and ran the same test described in my earlier
>> >> mail.
>> >> >
>> >> > In this case the behavior I see is,
>> >> > The 1st consumer gets the 1st message and works on it for a while, the
>> >> 2nd
>> >> > consumer consumes 8 messages and then does nothing(even though there
>> was
>> >> 1
>> >> > more unconsumed message). When the first consumer completed its long
>> >> running
>> >> > message it got around and consumed the remaining 1 message. However,
>>  I
>> >> was
>> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
>> >> remaining
>> >> > messages) while the 1st consumer was busy working on the long message.
>> >> >
>> >> > Then, I thought, perhaps the prefetch count meant that, when a
>> consumer
>> >> is
>> >> > working on a message, another message in the queue is prefetched to
>> the
>> >> > consumer from the persistant store as my prefetch count is 1. That
>> could
>> >> > explain why I saw the behavior as above.
>> >> >
>> >> > What i wanted to achieve was to actually turn of any kinda prefetching
>> >> > (Yeah, I'm ok with taking the throughput hit)
>> >> >
>> >> > So I re ran my test now with prefetch = 0, and saw a really weird
>> result.
>> >> >
>> >> > With prefetch 0, the 1st consumer gets the 1st message and works on it
>> >> for a
>> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
>> does
>> >> > nothing(even though there were 2 more unconsumed messages). When the
>> 1st
>> >> > consumer completed processing it's message it got to consume the
>> >> remaining
>> >> > two messages too. (Did it kinda prefetch 2?)
>> >> >
>> >> > Can someone please tell me if Is this a bug or am I doing something
>> >> > completely wrong? I'm using the latest Java Broker & client (from
>> trunk)
>> >> > with DerbyMessageStore for my tests.
>> >> >
>> >> > Also, can someone please tell me what'd be the best way to turn off
>> >> > prefetching?
>> >> >
>> >> > Thanks a lot,
>> >> > Praveen
>> >> >
>> >> >
>> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz>
>> wrote:
>> >> >
>> >> >> Hi Praveen,
>> >> >>
>> >> >> Have you set the capacity / prefetch for the receivers to one
>> message?
>> >> >> I believe the capacity defines how many messages can be "buffered" by
>> >> >> the client API in background while you are still processing the first
>> >> >> message. That may cause that both your clients receive 5 messages,
>> >> >> even when the processing in the first client takes a longer time.
>> >> >>
>> >> >> Regards
>> >> >> Jakub
>> >> >>
>> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com>
>> >> wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > I ran the following test
>> >> >> >
>> >> >> > 1) I created 1 Queue
>> >> >> > 2) Registered 2 consumers to the queue
>> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message
>> is
>> >> >> long
>> >> >> > running. I simulated such that the first message on consumption
>> takes
>> >> >> about
>> >> >> > 50 seconds to be processed]
>> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
>> message.
>> >> >> > 5) The 1st consumer that got the long running message works on it
>> for
>> >> a
>> >> >> long
>> >> >> > time while the second consumer that got the second message keeps
>> >> >> processing
>> >> >> > and going to the next message, but  only goes as far until it
>> >> processes 5
>> >> >> of
>> >> >> > the 10 messages enqueued. Then the 2nd consumer gives up
>> processing.
>> >> >> > 6) When the 1st consumer with the  long running message completes,
>> it
>> >> >> then
>> >> >> > ends up processing the remaining messages and my test completes.
>> >> >> >
>> >> >> > So it seems like the two consumers were trying to take a fair share
>> of
>> >> >> > messages that they were processing immaterial of the time it takes
>> to
>> >> >> > process individual messages. Enqueued message = 10, Consumer 1
>> share
>> >> of 5
>> >> >> > messages were processed by it, and Consumer 2's share of 5 messages
>> >> were
>> >> >> > processed by it.
>> >> >> >
>> >> >> >
>> >> >> > This is kinda against the behavior that I'd like to see. The
>> desired
>> >> >> > behavior in my case is that of each consumer keeps going on if it's
>> >> done
>> >> >> and
>> >> >> > has other messages to process.
>> >> >> >
>> >> >> > In the above test, I'd expect as consumer 1 is working on the long
>> >> >> message,
>> >> >> > the second consumer should work its way through all the remaining
>> >> >> messages.
>> >> >> >
>> >> >> > Is there some config that I'm missing that could cause this
>> effect??
>> >> Any
>> >> >> > advice on tackling this will be great.
>> >> >> >
>> >> >> > Also, Can someone please explain in what order are messages
>> delivered
>> >> to
>> >> >> the
>> >> >> > consumers in the following cases?
>> >> >> >
>> >> >> > Case 1)
>> >> >> >  There is a single Queue with more than 1 message in it and
>> multiple
>> >> >> > consumers registered to it.
>> >> >> >
>> >> >> > Case 2)
>> >> >> > There are multiple queues each with more than 1 message in it, and
>> has
>> >> >> > multiple consumers registered to it.
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > Thank you,
>> >> >> > --
>> >> >> > -Praveen
>> >> >> >
>> >> >>
>> >> >> ---------------------------------------------------------------------
>> >> >> Apache Qpid - AMQP Messaging Implementation
>> >> >> Project:      http://qpid.apache.org
>> >> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >> > --
>> >> > -Praveen
>> >> >
>> >>
>> >> ---------------------------------------------------------------------
>> >> Apache Qpid - AMQP Messaging Implementation
>> >> Project:      http://qpid.apache.org
>> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >>
>> >>
>> >
>> >
>> > --
>> > -Praveen
>> >
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>
>>
>
>
> --
> -Praveen
>

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Praveen M <le...@gmail.com>.
Hi Robbie,

I was testing against trunk, and yes, I was calling commit after my
simulated processing delay.

Thanks,
Praveen

On Thu, Oct 27, 2011 at 5:11 PM, Robbie Gemmell <ro...@gmail.com>wrote:

> Just to be clear for when I look at it...were you using trunk or 0.12
> for those tests, and presumably you were calling commit after your
> simulated processing delay?
>
> Robbie
>
> On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
> > Hi Robbie,
> >
> > I was using asynchronous onMessage delivery with transacted session for
> my
> > tests.
> >
> > So from your email, I'm afraid it might be an issue. It will be great if
> you
> > could investigate a little on this and keep us update.
> >
> > Thanks a lot,
> > Praveen
> >
> > On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> > <ro...@gmail.com>wrote:
> >
> >> From the below, would I be right in thinking you were using receive()
> >> calls with an AutoAck session? If so then you would see the behaviour
> >> you observed as the message gets acked just before receive() returns,
> >> which makes the broker send the next one to the client. That shouldnt
> >> happen if you were using asynchronous onMessage delivery (since the
> >> ack gets sent when the onMessage() handler returns), or if you
> >> used a ClientAck or Transacted session in which you only acknowledged
> >> the message / commited the session after the processing is complete.
> >>
> >> I must admit to having never used the client with prefetch set to 0,
> >> which should in theory give you what you are looking for even with
> >> AutoAck but based on your comments appears not to have. I will try and
> >> take a look into that at the weekend to see if there are any obvious
> >> issues we can JIRA for fixing.
> >>
> >> Robbie
> >>
> >> On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
> >> > Hi Jakub,
> >> >
> >> > Thanks for your reply. Yes I did find the prefetch model and reran my
> >> test
> >> > and now ran into another issue.
> >> >
> >> > I set the prefetch to 1 and ran the same test described in my earlier
> >> mail.
> >> >
> >> > In this case the behavior I see is,
> >> > The 1st consumer gets the 1st message and works on it for a while, the
> >> 2nd
> >> > consumer consumes 8 messages and then does nothing(even though there
> was
> >> 1
> >> > more unconsumed message). When the first consumer completed its long
> >> running
> >> > message it got around and consumed the remaining 1 message. However,
>  I
> >> was
> >> > expecting the 2nd consumer to dequeue all 9 messages(the number of
> >> remaining
> >> > messages) while the 1st consumer was busy working on the long message.
> >> >
> >> > Then, I thought, perhaps the prefetch count meant that, when a
> consumer
> >> is
> >> > working on a message, another message in the queue is prefetched to
> the
> >> > consumer from the persistant store as my prefetch count is 1. That
> could
> >> > explain why I saw the behavior as above.
> >> >
> >> > What i wanted to achieve was to actually turn of any kinda prefetching
> >> > (Yeah, I'm ok with taking the throughput hit)
> >> >
> >> > So I re ran my test now with prefetch = 0, and saw a really weird
> result.
> >> >
> >> > With prefetch 0, the 1st consumer gets the 1st message and works on it
> >> for a
> >> > while, which the 2nd consumer consumes 7 messages(why 7?) and then
> does
> >> > nothing(even though there were 2 more unconsumed messages). When the
> 1st
> >> > consumer completed processing it's message it got to consume the
> >> remaining
> >> > two messages too. (Did it kinda prefetch 2?)
> >> >
> >> > Can someone please tell me if Is this a bug or am I doing something
> >> > completely wrong? I'm using the latest Java Broker & client (from
> trunk)
> >> > with DerbyMessageStore for my tests.
> >> >
> >> > Also, can someone please tell me what'd be the best way to turn off
> >> > prefetching?
> >> >
> >> > Thanks a lot,
> >> > Praveen
> >> >
> >> >
> >> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz>
> wrote:
> >> >
> >> >> Hi Praveen,
> >> >>
> >> >> Have you set the capacity / prefetch for the receivers to one
> message?
> >> >> I believe the capacity defines how many messages can be "buffered" by
> >> >> the client API in background while you are still processing the first
> >> >> message. That may cause that both your clients receive 5 messages,
> >> >> even when the processing in the first client takes a longer time.
> >> >>
> >> >> Regards
> >> >> Jakub
> >> >>
> >> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com>
> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I ran the following test
> >> >> >
> >> >> > 1) I created 1 Queue
> >> >> > 2) Registered 2 consumers to the queue
> >> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message
> is
> >> >> long
> >> >> > running. I simulated such that the first message on consumption
> takes
> >> >> about
> >> >> > 50 seconds to be processed]
> >> >> > 4) Once the enqueue is committed, the 2 consumers each pick a
> message.
> >> >> > 5) The 1st consumer that got the long running message works on it
> for
> >> a
> >> >> long
> >> >> > time while the second consumer that got the second message keeps
> >> >> processing
> >> >> > and going to the next message, but  only goes as far until it
> >> processes 5
> >> >> of
> >> >> > the 10 messages enqueued. Then the 2nd consumer gives up
> processing.
> >> >> > 6) When the 1st consumer with the  long running message completes,
> it
> >> >> then
> >> >> > ends up processing the remaining messages and my test completes.
> >> >> >
> >> >> > So it seems like the two consumers were trying to take a fair share
> of
> >> >> > messages that they were processing immaterial of the time it takes
> to
> >> >> > process individual messages. Enqueued message = 10, Consumer 1
> share
> >> of 5
> >> >> > messages were processed by it, and Consumer 2's share of 5 messages
> >> were
> >> >> > processed by it.
> >> >> >
> >> >> >
> >> >> > This is kinda against the behavior that I'd like to see. The
> desired
> >> >> > behavior in my case is that of each consumer keeps going on if it's
> >> done
> >> >> and
> >> >> > has other messages to process.
> >> >> >
> >> >> > In the above test, I'd expect as consumer 1 is working on the long
> >> >> message,
> >> >> > the second consumer should work its way through all the remaining
> >> >> messages.
> >> >> >
> >> >> > Is there some config that I'm missing that could cause this
> effect??
> >> Any
> >> >> > advice on tackling this will be great.
> >> >> >
> >> >> > Also, Can someone please explain in what order are messages
> delivered
> >> to
> >> >> the
> >> >> > consumers in the following cases?
> >> >> >
> >> >> > Case 1)
> >> >> >  There is a single Queue with more than 1 message in it and
> multiple
> >> >> > consumers registered to it.
> >> >> >
> >> >> > Case 2)
> >> >> > There are multiple queues each with more than 1 message in it, and
> has
> >> >> > multiple consumers registered to it.
> >> >> >
> >> >> >
> >> >> >
> >> >> > Thank you,
> >> >> > --
> >> >> > -Praveen
> >> >> >
> >> >>
> >> >> ---------------------------------------------------------------------
> >> >> Apache Qpid - AMQP Messaging Implementation
> >> >> Project:      http://qpid.apache.org
> >> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >> >>
> >> >>
> >> >
> >> >
> >> > --
> >> > -Praveen
> >> >
> >>
> >> ---------------------------------------------------------------------
> >> Apache Qpid - AMQP Messaging Implementation
> >> Project:      http://qpid.apache.org
> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >>
> >>
> >
> >
> > --
> > -Praveen
> >
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:users-subscribe@qpid.apache.org
>
>


-- 
-Praveen

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
Just to be clear for when I look at it... were you using trunk or 0.12
for those tests, and presumably you were calling commit after your
simulated processing delay?

Robbie

On 28 October 2011 00:28, Praveen M <le...@gmail.com> wrote:
> Hi Robbie,
>
> I was using asynchronous onMessage delivery with transacted session for my
> tests.
>
> So from your email, I'm afraid it might be an issue. It will be great if you
> could investigate a little on this and keep us update.
>
> Thanks a lot,
> Praveen
>
> On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
> <ro...@gmail.com>wrote:
>
>> From the below, would I be right in thinking you were using receive()
>> calls with an AutoAck session? If so then you would see the behaviour
>> you observed as the message gets acked just before receive() returns,
>> which makes the broker send the next one to the client. That shouldnt
>> happen if you were using asynchronous onMessage delivery (since the
>> ack gets sent when the onMessage() handler returns), or if you
>> used a ClientAck or Transacted session in which you only acknowledged
>> the message / commited the session after the processing is complete.
>>
>> I must admit to having never used the client with prefetch set to 0,
>> which should in theory give you what you are looking for even with
>> AutoAck but based on your comments appears not to have. I will try and
>> take a look into that at the weekend to see if there are any obvious
>> issues we can JIRA for fixing.
>>
>> Robbie
>>
>> On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
>> > Hi Jakub,
>> >
>> > Thanks for your reply. Yes I did find the prefetch model and reran my
>> test
>> > and now ran into another issue.
>> >
>> > I set the prefetch to 1 and ran the same test described in my earlier
>> mail.
>> >
>> > In this case the behavior I see is,
>> > The 1st consumer gets the 1st message and works on it for a while, the
>> 2nd
>> > consumer consumes 8 messages and then does nothing(even though there was
>> 1
>> > more unconsumed message). When the first consumer completed its long
>> running
>> > message it got around and consumed the remaining 1 message. However,  I
>> was
>> > expecting the 2nd consumer to dequeue all 9 messages(the number of
>> remaining
>> > messages) while the 1st consumer was busy working on the long message.
>> >
>> > Then, I thought, perhaps the prefetch count meant that, when a consumer
>> is
>> > working on a message, another message in the queue is prefetched to the
>> > consumer from the persistant store as my prefetch count is 1. That could
>> > explain why I saw the behavior as above.
>> >
>> > What i wanted to achieve was to actually turn of any kinda prefetching
>> > (Yeah, I'm ok with taking the throughput hit)
>> >
>> > So I re ran my test now with prefetch = 0, and saw a really weird result.
>> >
>> > With prefetch 0, the 1st consumer gets the 1st message and works on it
>> for a
>> > while, which the 2nd consumer consumes 7 messages(why 7?) and then does
>> > nothing(even though there were 2 more unconsumed messages). When the 1st
>> > consumer completed processing it's message it got to consume the
>> remaining
>> > two messages too. (Did it kinda prefetch 2?)
>> >
>> > Can someone please tell me if Is this a bug or am I doing something
>> > completely wrong? I'm using the latest Java Broker & client (from trunk)
>> > with DerbyMessageStore for my tests.
>> >
>> > Also, can someone please tell me what'd be the best way to turn off
>> > prefetching?
>> >
>> > Thanks a lot,
>> > Praveen
>> >
>> >
>> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz> wrote:
>> >
>> >> Hi Praveen,
>> >>
>> >> Have you set the capacity / prefetch for the receivers to one message?
>> >> I believe the capacity defines how many messages can be "buffered" by
>> >> the client API in background while you are still processing the first
>> >> message. That may cause that both your clients receive 5 messages,
>> >> even when the processing in the first client takes a longer time.
>> >>
>> >> Regards
>> >> Jakub
>> >>
>> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com>
>> wrote:
>> >> > Hi,
>> >> >
>> >> > I ran the following test
>> >> >
>> >> > 1) I created 1 Queue
>> >> > 2) Registered 2 consumers to the queue
>> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
>> >> long
>> >> > running. I simulated such that the first message on consumption takes
>> >> about
>> >> > 50 seconds to be processed]
>> >> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
>> >> > 5) The 1st consumer that got the long running message works on it for
>> a
>> >> long
>> >> > time while the second consumer that got the second message keeps
>> >> processing
>> >> > and going to the next message, but  only goes as far until it
>> processes 5
>> >> of
>> >> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
>> >> > 6) When the 1st consumer with the  long running message completes, it
>> >> then
>> >> > ends up processing the remaining messages and my test completes.
>> >> >
>> >> > So it seems like the two consumers were trying to take a fair share of
>> >> > messages that they were processing immaterial of the time it takes to
>> >> > process individual messages. Enqueued message = 10, Consumer 1 share
>> of 5
>> >> > messages were processed by it, and Consumer 2's share of 5 messages
>> were
>> >> > processed by it.
>> >> >
>> >> >
>> >> > This is kinda against the behavior that I'd like to see. The desired
>> >> > behavior in my case is that of each consumer keeps going on if it's
>> done
>> >> and
>> >> > has other messages to process.
>> >> >
>> >> > In the above test, I'd expect as consumer 1 is working on the long
>> >> message,
>> >> > the second consumer should work its way through all the remaining
>> >> messages.
>> >> >
>> >> > Is there some config that I'm missing that could cause this effect??
>> Any
>> >> > advice on tackling this will be great.
>> >> >
>> >> > Also, Can someone please explain in what order are messages delivered
>> to
>> >> the
>> >> > consumers in the following cases?
>> >> >
>> >> > Case 1)
>> >> >  There is a single Queue with more than 1 message in it and multiple
>> >> > consumers registered to it.
>> >> >
>> >> > Case 2)
>> >> > There are multiple queues each with more than 1 message in it, and has
>> >> > multiple consumers registered to it.
>> >> >
>> >> >
>> >> >
>> >> > Thank you,
>> >> > --
>> >> > -Praveen
>> >> >
>> >>
>> >> ---------------------------------------------------------------------
>> >> Apache Qpid - AMQP Messaging Implementation
>> >> Project:      http://qpid.apache.org
>> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
>> >>
>> >>
>> >
>> >
>> > --
>> > -Praveen
>> >
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>
>>
>
>
> --
> -Praveen
>

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org


Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Praveen M <le...@gmail.com>.
Hi Robbie,

I was using asynchronous onMessage delivery with a transacted session for
my tests.
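
Roughly, the setup looks like the sketch below (the broker URL, queue
name and the 50-second sleep are stand-ins for my real test):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.qpid.client.AMQConnectionFactory;

public class TransactedListenerTest {
    public static void main(String[] args) throws Exception {
        Connection connection = new AMQConnectionFactory(
            "amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='1'")
            .createConnection();
        final Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("test-queue"));
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                try {
                    Thread.sleep(50 * 1000); // simulated long-running processing
                    session.commit();        // commit only after processing is done
                } catch (Exception e) {
                    // roll back so the message is redelivered
                    try { session.rollback(); } catch (JMSException ignored) {}
                }
            }
        });
        connection.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for delivery
    }
}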

So from your email, I'm afraid it might be an issue. It would be great if
you could investigate this a little and keep us updated.

Thanks a lot,
Praveen

On Thu, Oct 27, 2011 at 11:49 AM, Robbie Gemmell
<ro...@gmail.com>wrote:

> From the below, would I be right in thinking you were using receive()
> calls with an AutoAck session? If so then you would see the behaviour
> you observed as the message gets acked just before receive() returns,
> which makes the broker send the next one to the client. That shouldnt
> happen if you were using asynchronous onMessage delivery (since the
> ack gets sent when the onMessage() handler returns), or if you
> used a ClientAck or Transacted session in which you only acknowledged
> the message / commited the session after the processing is complete.
>
> I must admit to having never used the client with prefetch set to 0,
> which should in theory give you what you are looking for even with
> AutoAck but based on your comments appears not to have. I will try and
> take a look into that at the weekend to see if there are any obvious
> issues we can JIRA for fixing.
>
> Robbie
>
> On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
> > Hi Jakub,
> >
> > Thanks for your reply. Yes I did find the prefetch model and reran my
> test
> > and now ran into another issue.
> >
> > I set the prefetch to 1 and ran the same test described in my earlier
> mail.
> >
> > In this case the behavior I see is,
> > The 1st consumer gets the 1st message and works on it for a while, the
> 2nd
> > consumer consumes 8 messages and then does nothing(even though there was
> 1
> > more unconsumed message). When the first consumer completed its long
> running
> > message it got around and consumed the remaining 1 message. However,  I
> was
> > expecting the 2nd consumer to dequeue all 9 messages(the number of
> remaining
> > messages) while the 1st consumer was busy working on the long message.
> >
> > Then, I thought, perhaps the prefetch count meant that, when a consumer
> is
> > working on a message, another message in the queue is prefetched to the
> > consumer from the persistant store as my prefetch count is 1. That could
> > explain why I saw the behavior as above.
> >
> > What i wanted to achieve was to actually turn of any kinda prefetching
> > (Yeah, I'm ok with taking the throughput hit)
> >
> > So I re ran my test now with prefetch = 0, and saw a really weird result.
> >
> > With prefetch 0, the 1st consumer gets the 1st message and works on it
> for a
> > while, which the 2nd consumer consumes 7 messages(why 7?) and then does
> > nothing(even though there were 2 more unconsumed messages). When the 1st
> > consumer completed processing it's message it got to consume the
> remaining
> > two messages too. (Did it kinda prefetch 2?)
> >
> > Can someone please tell me if Is this a bug or am I doing something
> > completely wrong? I'm using the latest Java Broker & client (from trunk)
> > with DerbyMessageStore for my tests.
> >
> > Also, can someone please tell me what'd be the best way to turn off
> > prefetching?
> >
> > Thanks a lot,
> > Praveen
> >
> >
> > On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz> wrote:
> >
> >> Hi Praveen,
> >>
> >> Have you set the capacity / prefetch for the receivers to one message?
> >> I believe the capacity defines how many messages can be "buffered" by
> >> the client API in background while you are still processing the first
> >> message. That may cause that both your clients receive 5 messages,
> >> even when the processing in the first client takes a longer time.
> >>
> >> Regards
> >> Jakub
> >>
> >> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com>
> wrote:
> >> > Hi,
> >> >
> >> > I ran the following test
> >> >
> >> > 1) I created 1 Queue
> >> > 2) Registered 2 consumers to the queue
> >> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
> >> long
> >> > running. I simulated such that the first message on consumption takes
> >> about
> >> > 50 seconds to be processed]
> >> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
> >> > 5) The 1st consumer that got the long running message works on it for
> a
> >> long
> >> > time while the second consumer that got the second message keeps
> >> processing
> >> > and going to the next message, but  only goes as far until it
> processes 5
> >> of
> >> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
> >> > 6) When the 1st consumer with the  long running message completes, it
> >> then
> >> > ends up processing the remaining messages and my test completes.
> >> >
> >> > So it seems like the two consumers were trying to take a fair share of
> >> > messages that they were processing immaterial of the time it takes to
> >> > process individual messages. Enqueued message = 10, Consumer 1 share
> of 5
> >> > messages were processed by it, and Consumer 2's share of 5 messages
> were
> >> > processed by it.
> >> >
> >> >
> >> > This is kinda against the behavior that I'd like to see. The desired
> >> > behavior in my case is that of each consumer keeps going on if it's
> done
> >> and
> >> > has other messages to process.
> >> >
> >> > In the above test, I'd expect as consumer 1 is working on the long
> >> message,
> >> > the second consumer should work its way through all the remaining
> >> messages.
> >> >
> >> > Is there some config that I'm missing that could cause this effect??
> Any
> >> > advice on tackling this will be great.
> >> >
> >> > Also, Can someone please explain in what order are messages delivered
> to
> >> the
> >> > consumers in the following cases?
> >> >
> >> > Case 1)
> >> >  There is a single Queue with more than 1 message in it and multiple
> >> > consumers registered to it.
> >> >
> >> > Case 2)
> >> > There are multiple queues each with more than 1 message in it, and has
> >> > multiple consumers registered to it.
> >> >
> >> >
> >> >
> >> > Thank you,
> >> > --
> >> > -Praveen
> >> >
> >>
> >> ---------------------------------------------------------------------
> >> Apache Qpid - AMQP Messaging Implementation
> >> Project:      http://qpid.apache.org
> >> Use/Interact: mailto:users-subscribe@qpid.apache.org
> >>
> >>
> >
> >
> > --
> > -Praveen
> >
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:users-subscribe@qpid.apache.org
>
>


-- 
-Praveen

Re: 1 Queue with 2 Consumers - turn off pre-fetching?

Posted by Robbie Gemmell <ro...@gmail.com>.
From the below, would I be right in thinking you were using receive()
calls with an AutoAck session? If so then you would see the behaviour
you observed, as the message gets acked just before receive() returns,
which makes the broker send the next one to the client. That shouldn't
happen if you were using asynchronous onMessage delivery (since the
ack gets sent when the onMessage() handler returns), or if you used a
ClientAck or Transacted session in which you only acknowledge the
message / commit the session after the processing is complete.
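
As an illustration, a minimal sketch of the transacted variant might
look like the following (untested; the connection/queue setup and the
process() helper are placeholders for whatever your test harness does):

    import javax.jms.*;

    // 'connection' comes from your usual ConnectionFactory lookup and
    // has been started; 'queue' is the Queue the consumers listen on.
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    MessageConsumer consumer = session.createConsumer(queue);

    while (true) {
        Message message = consumer.receive(); // blocks until a message arrives
        process(message);                     // the potentially long-running work
        session.commit();                     // the ack/commit happens only after
                                              // the work is done, not when
                                              // receive() returns as with AutoAck
    }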

I must admit to having never used the client with prefetch set to 0,
which should in theory give you what you are looking for even with
AutoAck, but based on your comments appears not to have done so. I will
try and take a look into that at the weekend to see if there are any
obvious issues we can JIRA for fixing.
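
In case it helps while you experiment: with the Java client the prefetch
is normally set either globally via the JVM property max_prefetch, or per
connection with the maxprefetch option on the connection URL. A sketch
(broker host and credentials are placeholders, and it's worth
double-checking the option names against your client version):

    // Globally, for every connection created in this JVM:
    //     java -Dmax_prefetch=0 ...
    //
    // Or per connection, on the connection URL:
    String url = "amqp://guest:guest@clientid/test"
               + "?brokerlist='tcp://localhost:5672'&maxprefetch='0'";
    javax.jms.ConnectionFactory factory =
            new org.apache.qpid.client.AMQConnectionFactory(url);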

Robbie

On 26 October 2011 23:48, Praveen M <le...@gmail.com> wrote:
> Hi Jakub,
>
> Thanks for your reply. Yes I did find the prefetch model and reran my test
> and now ran into another issue.
>
> I set the prefetch to 1 and ran the same test described in my earlier mail.
>
> In this case the behavior I see is,
> The 1st consumer gets the 1st message and works on it for a while, the 2nd
> consumer consumes 8 messages and then does nothing(even though there was 1
> more unconsumed message). When the first consumer completed its long running
> message it got around and consumed the remaining 1 message. However,  I was
> expecting the 2nd consumer to dequeue all 9 messages(the number of remaining
> messages) while the 1st consumer was busy working on the long message.
>
> Then, I thought, perhaps the prefetch count meant that, when a consumer is
> working on a message, another message in the queue is prefetched to the
> consumer from the persistant store as my prefetch count is 1. That could
> explain why I saw the behavior as above.
>
> What i wanted to achieve was to actually turn of any kinda prefetching
> (Yeah, I'm ok with taking the throughput hit)
>
> So I re ran my test now with prefetch = 0, and saw a really weird result.
>
> With prefetch 0, the 1st consumer gets the 1st message and works on it for a
> while, which the 2nd consumer consumes 7 messages(why 7?) and then does
> nothing(even though there were 2 more unconsumed messages). When the 1st
> consumer completed processing it's message it got to consume the remaining
> two messages too. (Did it kinda prefetch 2?)
>
> Can someone please tell me if Is this a bug or am I doing something
> completely wrong? I'm using the latest Java Broker & client (from trunk)
> with DerbyMessageStore for my tests.
>
> Also, can someone please tell me what'd be the best way to turn off
> prefetching?
>
> Thanks a lot,
> Praveen
>
>
> On Wed, Oct 26, 2011 at 3:45 AM, Jakub Scholz <ja...@scholz.cz> wrote:
>
>> Hi Praveen,
>>
>> Have you set the capacity / prefetch for the receivers to one message?
>> I believe the capacity defines how many messages can be "buffered" by
>> the client API in background while you are still processing the first
>> message. That may cause that both your clients receive 5 messages,
>> even when the processing in the first client takes a longer time.
>>
>> Regards
>> Jakub
>>
>> On Wed, Oct 26, 2011 at 03:02, Praveen M <le...@gmail.com> wrote:
>> > Hi,
>> >
>> > I ran the following test
>> >
>> > 1) I created 1 Queue
>> > 2) Registered 2 consumers to the queue
>> > 3) Enqueued 10 messages to the Queue. [ The first enqueued message is
>> long
>> > running. I simulated such that the first message on consumption takes
>> about
>> > 50 seconds to be processed]
>> > 4) Once the enqueue is committed, the 2 consumers each pick a message.
>> > 5) The 1st consumer that got the long running message works on it for a
>> long
>> > time while the second consumer that got the second message keeps
>> processing
>> > and going to the next message, but  only goes as far until it processes 5
>> of
>> > the 10 messages enqueued. Then the 2nd consumer gives up processing.
>> > 6) When the 1st consumer with the  long running message completes, it
>> then
>> > ends up processing the remaining messages and my test completes.
>> >
>> > So it seems like the two consumers were trying to take a fair share of
>> > messages that they were processing immaterial of the time it takes to
>> > process individual messages. Enqueued message = 10, Consumer 1 share of 5
>> > messages were processed by it, and Consumer 2's share of 5 messages were
>> > processed by it.
>> >
>> >
>> > This is kinda against the behavior that I'd like to see. The desired
>> > behavior in my case is that of each consumer keeps going on if it's done
>> and
>> > has other messages to process.
>> >
>> > In the above test, I'd expect as consumer 1 is working on the long
>> message,
>> > the second consumer should work its way through all the remaining
>> messages.
>> >
>> > Is there some config that I'm missing that could cause this effect?? Any
>> > advice on tackling this will be great.
>> >
>> > Also, Can someone please explain in what order are messages delivered to
>> the
>> > consumers in the following cases?
>> >
>> > Case 1)
>> >  There is a single Queue with more than 1 message in it and multiple
>> > consumers registered to it.
>> >
>> > Case 2)
>> > There are multiple queues each with more than 1 message in it, and has
>> > multiple consumers registered to it.
>> >
>> >
>> >
>> > Thank you,
>> > --
>> > -Praveen
>> >
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>
>>
>
>
> --
> -Praveen
>

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org