Posted to users@qpid.apache.org by Tom Mathews <da...@hotmail.com> on 2014/06/05 20:19:44 UTC

The waiting game [client sends 0 outgoing size]


Qpid Proton sets the session's outgoing window size (the maximum number of
transfer frames the peer should expect from the client) when negotiating
the BEGIN of a session to the number of messages currently enqueued. Our
AMQP service honors this when replying with the initial FLOW frame, setting
its incoming window size (the maximum number of transfer frames the client
is allowed to send) to the same value.

 

The problem is that there is rarely a message enqueued when the session is
started, so the outgoing/incoming window size is set to 0, which blocks the
client from sending any further transfers. The developer in charge of the
service points out that they are honoring the client's stated expectations,
and I tend to agree: it makes sense that they could optimize for a link
with 0 expected transfers and wait for an updated flow to renegotiate a
new window.

We're not using the Messenger class; we're using the lower-level engine
classes. I can reproduce this behavior by running the proton example with
the command-line parameters -c 127.0.0.1 -a TESTING against a version of
the service running locally.

Diving into the code, pn_session_outgoing_window looks only at the
currently pending session->outgoing_deliveries. That's correctly updated
in pn_advance_sender when I submit a message... but in
pn_process_tpwork_sender we have a 0 remote_incoming_window, so we never
send a transfer. Naturally, the one place a pn_post_flow occurs on a
sender link is in pn_do_transfer... after a transfer:

  // XXX: need better policy for when to refresh window
  if (!ssn->state.incoming_window && (int32_t) link->state.local_handle >= 0) {
    pn_post_flow(transport, ssn, link);
  }

I can't call pn_link_flow, as that only modifies receiver link credit, and
it asserts on a sender.

Questions:
- Am I using AMQP wrong? :)
- Is there any way to send a flow for the sending link to set a new
  anticipated window?
- How do we renegotiate as our window shrinks?

Thank you very much for your time,
-Tom Mathews
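
For reference, a minimal sketch (not the actual reproducer) of the sending
path described above, using the proton-c engine API; the link name,
address, delivery tag, and payload are placeholders, and a real payload
would be AMQP-encoded (e.g. with pn_message_encode). Queuing the delivery
updates the session's pending outgoing deliveries, but with a remote
incoming window of 0 the transport never emits a transfer frame for it:

  /* sketch: queue one delivery on a sender link (placeholder names/payload) */
  #include <proton/engine.h>

  static void queue_one_message(pn_session_t *ssn)
  {
    pn_link_t *snd = pn_sender(ssn, "sender");
    pn_terminus_set_address(pn_link_target(snd), "TESTING");
    pn_link_open(snd);

    static const char body[] = "hello";         /* placeholder payload */
    pn_delivery(snd, pn_dtag("d-0", 3));        /* becomes the link's current delivery */
    pn_link_send(snd, body, sizeof(body) - 1);  /* buffer the bytes */
    pn_link_advance(snd);                       /* hand the delivery to the transport */
  }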

 		 	   		  

Re: The waiting game [client sends 0 outgoing size]

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Thu, Jun 5, 2014 at 2:59 PM, Ted Ross <tr...@redhat.com> wrote:

> Tom,
>
> I'm not sure I understand why the server sets the incoming window the
> same as the client's outgoing window.  Shouldn't the server set the
> incoming window to some value large enough to prevent pipeline-stalling
> and small enough to prevent incoming frames from consuming too much memory?
>
> If your objective is to manage a very large number of clients and you
> don't want to provide incoming capacity until there are messages to be
> sent, I think pn_session_t would need to add something like "set_offer"
> so the sender can indicate that there are bytes/frames to send.
>

I don't think additional API would be necessary here. The session's
outgoing window should be computable based on the state of its various
links, e.g. if messages are being offered on one or more links, we should
be able to factor that in when we compute the outgoing session window.
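
As a rough illustration of that idea (session_outgoing_window_hint and the
offered[] array are hypothetical; pn_link_queued() is a real engine call,
and offered[] stands in for whatever the engine records from
pn_link_offered()):

  /* sketch: derive a session outgoing window from the state of its sender links */
  #include <proton/engine.h>

  static size_t session_outgoing_window_hint(pn_link_t *senders[], size_t nlinks,
                                             const int offered[])
  {
    size_t window = 0;
    for (size_t i = 0; i < nlinks; i++) {
      window += pn_link_queued(senders[i]);      /* deliveries already buffered */
      if (offered[i] > 0) window += offered[i];  /* messages the app says it will send */
    }
    return window;
  }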

Regardless, I don't think using the "available" protocol field in the way
you describe will work in general since it is an optional field. As a
server you need to use a strategy that will work even if available is never
set. The point of the available field is to provide extra information to
distribute credit more optimally, but you can't rely on it as an absolute
signal. For example as a server that has more clients than available
credit, you can revoke credit from idle clients and give it instead to
blocked clients. You can use the information from the available field in
order to pick a blocked client that will definitely be able to use the
credit, but if none of your clients supply that information, you will still
need to ensure that all clients eventually have an opportunity to send.
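
A hedged sketch of that kind of policy (the receivers[] array and the
credit pool come from hypothetical server bookkeeping; pn_link_credit()
and pn_link_flow() are the real proton-c calls):

  /* sketch: share a bounded credit pool across many client sender links,
     granting credit even to clients that never set "available" */
  #include <proton/engine.h>

  static void distribute_credit(pn_link_t *receivers[], size_t n, int pool)
  {
    for (size_t i = 0; i < n && pool > 0; i++) {
      if (pn_link_credit(receivers[i]) > 0)
        continue;                      /* this client already has credit to use */
      pn_link_flow(receivers[i], 1);   /* top up a blocked client */
      pool--;
    }
  }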

--Rafael

Re: The waiting game [client sends 0 outgoing size]

Posted by Rafael Schloming <rh...@alum.mit.edu>.
Hi there, after digging a bit I think this is definitely a bug. I would
recommend filing a JIRA and attaching the telemetry as a patch. Do you have
the option of testing against trunk builds, or do you only work from
officially released artifacts?

--Rafael


On Fri, Jun 6, 2014 at 12:29 PM, Tom Mathews <da...@hotmail.com> wrote:

> Here's the telemetry log, with the relevant bits bolded. Once the server
> responds with an incoming window set to 0, adding a new delivery won't
> trigger a renegotiated flow.
>
> Connected to 127.0.0.1:6053
> [0000006F6E3311D0]:  -> AMQP
> [0000006F6E3311D0]:0 -> @open(16) [container-id="TOMM-DT2", hostname="127.0.0.1"]
> [0000006F6E3311D0]:0 -> @begin(17) [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=0]
> [0000006F6E3311D0]:0 -> @attach(18) [name="sender", handle=0, role=false, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0, dynamic=false], target=@target(41) [address="TESTING", durable=0, timeout=0, dynamic=false], initial-delivery-count=0]
> [0000006F6E3311D0]:0 -> @attach(18) [name="receiver", handle=1, role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="TESTING", durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false], initial-delivery-count=0]
> [0000006F6E3311D0]:0 -> @flow(19) [incoming-window=2147483647, next-outgoing-id=0, outgoing-window=0, handle=1, delivery-count=0, link-credit=1, drain=false]
> [0000006F6E3311D0]:  <- AMQP
> [0000006F6E3311D0]:0 <- @open(16) [container-id="M2099774168P21368", max-frame-size=65536, channel-max=10000]
> [0000006F6E3311D0]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1, incoming-window=0, outgoing-window=5000]
> [0000006F6E3311D0]:0 <- @attach(18) [name="sender", handle=0, role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0, dynamic=false], target=@target(41) [address="TESTING", durable=0, timeout=0, dynamic=false], max-message-size=18446744073709551615]
> [0000006F6E3311D0]:0 <- @flow(19) [next-incoming-id=0, incoming-window=0, next-outgoing-id=1, outgoing-window=5000, handle=0, delivery-count=0, link-credit=1000, available=0, echo=false]
> sent delivery: 0
> [0000006F6E3311D0]:0 <- @attach(18) [name="receiver", handle=1, role=false, snd-settle-mode=1, source=@source(40) [address="TESTING", durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=18446744073709551615]
>
> > Date: Fri, 6 Jun 2014 09:47:55 -0400
> > Subject: Re: The waiting game [client sends 0 outgoing size]
> > From: rhs@alum.mit.edu
> > To: users@qpid.apache.org
> >
> > On Thu, Jun 5, 2014 at 8:15 PM, Tom Mathews <da...@hotmail.com>
> wrote:
> >
> > >
> > >
> > >
> > > We are indeed planning on handling a large number of clients (millions
> of
> > > concurrent connections, multiple links per connection, distributed of
> > > course across load-balanced servers).
> > > What would set_offer look like?  I see pn_link_offered, but I can't
> tell
> > > that it does anything effective (link->available doesn't seem to be
> used).
> > >
> >
> > I don't think any sort of offering should be necessary in your situation,
> > both for the reasons I explained in my reply to Ted, but also because the
> > test program you're using (assuming it is the same proton.c I'm looking
> at)
> > is actually creating a delivery and attempting to send it.
> >
> > The offer API is intended for situations where you *may* have messages
> > available, but you don't know for sure. For example, say you have a queue
> > with competing consumers. You can use the offer API to provide a hint to
> > your consumers that you have messages available for transfer, but by the
> > time any given consumer acts upon the hint, its competitors may have
> > already eaten up the messages you were offering. A simple client should
> > never need to use the API since they will usually just be supplying the
> > messages directly.
> >
> > I believe what you're seeing is either a bug, or an interop issue, or
> > possibly both. It's hard to be sure without seeing the protocol trace,
> but
> > I think the fact that the outgoing window is initially zero shouldn't
> > matter. When a delivery is available the window should be
> recomputed/resent
> > regardless.
> >
> > --Rafael
>
>

RE: The waiting game [client sends 0 outgoing size]

Posted by Tom Mathews <da...@hotmail.com>.
Here's the telemetry log; the relevant bits are the server's @begin and
@flow frames. Once the server responds with an incoming window set to 0,
adding a new delivery won't trigger a renegotiated flow.

Connected to 127.0.0.1:6053
[0000006F6E3311D0]:  -> AMQP
[0000006F6E3311D0]:0 -> @open(16) [container-id="TOMM-DT2", hostname="127.0.0.1"]
[0000006F6E3311D0]:0 -> @begin(17) [next-outgoing-id=0, incoming-window=2147483647, outgoing-window=0]
[0000006F6E3311D0]:0 -> @attach(18) [name="sender", handle=0, role=false, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0, dynamic=false], target=@target(41) [address="TESTING", durable=0, timeout=0, dynamic=false], initial-delivery-count=0]
[0000006F6E3311D0]:0 -> @attach(18) [name="receiver", handle=1, role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="TESTING", durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false], initial-delivery-count=0]
[0000006F6E3311D0]:0 -> @flow(19) [incoming-window=2147483647, next-outgoing-id=0, outgoing-window=0, handle=1, delivery-count=0, link-credit=1, drain=false]
[0000006F6E3311D0]:  <- AMQP
[0000006F6E3311D0]:0 <- @open(16) [container-id="M2099774168P21368", max-frame-size=65536, channel-max=10000]
[0000006F6E3311D0]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1, incoming-window=0, outgoing-window=5000]
[0000006F6E3311D0]:0 <- @attach(18) [name="sender", handle=0, role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0, dynamic=false], target=@target(41) [address="TESTING", durable=0, timeout=0, dynamic=false], max-message-size=18446744073709551615]
[0000006F6E3311D0]:0 <- @flow(19) [next-incoming-id=0, incoming-window=0, next-outgoing-id=1, outgoing-window=5000, handle=0, delivery-count=0, link-credit=1000, available=0, echo=false]
sent delivery: 0
[0000006F6E3311D0]:0 <- @attach(18) [name="receiver", handle=1, role=false, snd-settle-mode=1, source=@source(40) [address="TESTING", durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false], initial-delivery-count=0, max-message-size=18446744073709551615]

> Date: Fri, 6 Jun 2014 09:47:55 -0400
> Subject: Re: The waiting game [client sends 0 outgoing size]
> From: rhs@alum.mit.edu
> To: users@qpid.apache.org
> 
> On Thu, Jun 5, 2014 at 8:15 PM, Tom Mathews <da...@hotmail.com> wrote:
> 
> >
> >
> >
> > We are indeed planning on handling a large number of clients (millions of
> > concurrent connections, multiple links per connection, distributed of
> > course across load-balanced servers).
> > What would set_offer look like?  I see pn_link_offered, but I can't tell
> > that it does anything effective (link->available doesn't seem to be used).
> >
> 
> I don't think any sort of offering should be necessary in your situation,
> both for the reasons I explained in my reply to Ted, but also because the
> test program you're using (assuming it is the same proton.c I'm looking at)
> is actually creating a delivery and attempting to send it.
> 
> The offer API is intended for situations where you *may* have messages
> available, but you don't know for sure. For example, say you have a queue
> with competing consumers. You can use the offer API to provide a hint to
> your consumers that you have messages available for transfer, but by the
> time any given consumer acts upon the hint, its competitors may have
> already eaten up the messages you were offering. A simple client should
> never need to use the API since they will usually just be supplying the
> messages directly.
> 
> I believe what you're seeing is either a bug, or an interop issue, or
> possibly both. It's hard to be sure without seeing the protocol trace, but
> I think the fact that the outgoing window is initially zero shouldn't
> matter. When a delivery is available the window should be recomputed/resent
> regardless.
> 
> --Rafael

Re: The waiting game [client sends 0 outgoing size]

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Thu, Jun 5, 2014 at 8:15 PM, Tom Mathews <da...@hotmail.com> wrote:

>
>
>
> We are indeed planning on handling a large number of clients (millions of
> concurrent connections, multiple links per connection, distributed of
> course across load-balanced servers).
> What would set_offer look like?  I see pn_link_offered, but I can't tell
> that it does anything effective (link->available doesn't seem to be used).
>

I don't think any sort of offering should be necessary in your situation,
both for the reasons I explained in my reply to Ted, but also because the
test program you're using (assuming it is the same proton.c I'm looking at)
is actually creating a delivery and attempting to send it.

The offer API is intended for situations where you *may* have messages
available, but you don't know for sure. For example, say you have a queue
with competing consumers. You can use the offer API to provide a hint to
your consumers that you have messages available for transfer, but by the
time any given consumer acts upon the hint, its competitors may have
already eaten up the messages you were offering. A simple client should
never need to use the API since they will usually just be supplying the
messages directly.
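
For illustration, a hedged sketch of where pn_link_offered() would fit on
the broker side (queue_depth() is a hypothetical helper; pn_link_offered()
is the real call that feeds the optional "available" field of the next
flow frame):

  #include <proton/engine.h>

  /* hypothetical: how many messages sit in the backing queue right now */
  extern int queue_depth(const char *address);

  static void advertise_offer(pn_link_t *outgoing)
  {
    int depth = queue_depth(pn_terminus_get_address(pn_link_source(outgoing)));
    if (depth > 0)
      pn_link_offered(outgoing, depth);  /* only a hint -- competing consumers may drain it first */
  }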

I believe what you're seeing is either a bug, or an interop issue, or
possibly both. It's hard to be sure without seeing the protocol trace, but
I think the fact that the outgoing window is initially zero shouldn't
matter. When a delivery is available the window should be recomputed/resent
regardless.

--Rafael

RE: The waiting game [client sends 0 outgoing size]

Posted by Tom Mathews <da...@hotmail.com>.


We are indeed planning on handling a large number of clients (millions of concurrent connections, multiple links per connection, distributed of course across load-balanced servers).
What would set_offer look like?  I see pn_link_offered, but I can't tell that it does anything effective (link->available doesn't seem to be used).
-TomM

> Date: Thu, 5 Jun 2014 14:59:00 -0400
> From: tross@redhat.com
> To: users@qpid.apache.org
> Subject: Re: The waiting game [client sends 0 outgoing size]
> 
> Tom,
> 
> I'm not sure I understand why the server sets the incoming window the
> same as the client's outgoing window.  Shouldn't the server set the
> incoming window to some value large enough to prevent pipeline-stalling
> and small enough to prevent incoming frames from consuming too much memory?
> 
> If your objective is to manage a very large number of clients and you
> don't want to provide incoming capacity until there are messages to be
> sent, I think pn_session_t would need to add something like "set_offer"
> so the sender can indicate that there are bytes/frames to send.
> 
> -Ted
> 
> On 06/05/2014 02:19 PM, Tom Mathews wrote:
> > 
> > 
> > AMQP Qpid sets the outgoing window size (maximum
> > transfer frames to expect from client) when negotiating the BEGIN of a
> > session equal to the currently enqueued message count. Our AMQP service honors
> > this when replying with the initial FLOW message, setting the incoming
> > window size (maximum transfer frames allowed to be sent) to the same
> > value.
> > 
> >  
> > 
> > The problem is that there is rarely a message enqueued when
> > the session is started, and so the outgoing/incoming window size is set to 0,
> > which prevents the client from further communication. The developer in charge of the service points out that they are honoring the expectations of the client, and I tend to agree with them: it makes sense that they could optimize a link while it has 0 expected transfers, and wait for an updated flow to renegotiate a new window.
> > 
> > We're not using the Messenger class; we're using the lower-level engine
> > classes. I can reproduce this behavior by running the proton example with
> > the command-line parameters -c 127.0.0.1 -a TESTING against a version of
> > the service running locally.
> >
> > Diving into the code, pn_session_outgoing_window looks only at the
> > currently pending session->outgoing_deliveries. That's correctly updated
> > in pn_advance_sender when I submit a message... but in
> > pn_process_tpwork_sender we have a 0 remote_incoming_window, so we never
> > send a transfer. Naturally, the one place a pn_post_flow occurs on a
> > sender link is in pn_do_transfer... after a transfer:
> >
> >   // XXX: need better policy for when to refresh window
> >   if (!ssn->state.incoming_window && (int32_t) link->state.local_handle >= 0) {
> >     pn_post_flow(transport, ssn, link);
> >   }
> >
> > I can't call pn_link_flow, as that only modifies receiver link credit, and
> > it asserts on a sender.
> >
> > Questions:
> > - Am I using AMQP wrong? :)
> > - Is there any way to send a flow for the sending link to set a new
> >   anticipated window?
> > - How do we renegotiate as our window shrinks?
> >
> > Thank you very much for your time,
> > -Tom Mathews
> > 
> >  		 	   		  
> > 
> 

Re: The waiting game [client sends 0 outgoing size]

Posted by Ted Ross <tr...@redhat.com>.
Tom,

I'm not sure I understand why the server sets the incoming window the
same as the client's outgoing window.  Shouldn't the server set the
incoming window to some value large enough to prevent pipeline-stalling
and small enough to prevent incoming frames from consuming too much memory?

If your objective is to manage a very large number of clients and you
don't want to provide incoming capacity until there are messages to be
sent, I think pn_session_t would need to add something like "set_offer"
so the sender can indicate that there are bytes/frames to send.

-Ted

On 06/05/2014 02:19 PM, Tom Mathews wrote:
> 
> 
> AMQP Qpid sets the outgoing window size (maximum
> transfer frames to expect from client) when negotiating the BEGIN of a
> session equal to the currently enqueued message count. Our AMQP service honors
> this when replying with the initial FLOW message, setting the incoming
> window size (maximum transfer frames allowed to be sent) to the same
> value.
> 
>  
> 
> The problem is that there is rarely a message enqueued when
> the session is started, and so the outgoing/incoming window size is set to 0,
> which prevents the client from further communication. The developer in charge of the service points out that they are honoring the expectations of the client, and I tend to agree with them: it makes sense that they could optimize a link while it has 0 expected transfers, and wait for an updated flow to renegotiate a new window.
> 
> We're not using the Messenger class; we're using the lower-level engine
> classes. I can reproduce this behavior by running the proton example with
> the command-line parameters -c 127.0.0.1 -a TESTING against a version of
> the service running locally.
>
> Diving into the code, pn_session_outgoing_window looks only at the
> currently pending session->outgoing_deliveries. That's correctly updated
> in pn_advance_sender when I submit a message... but in
> pn_process_tpwork_sender we have a 0 remote_incoming_window, so we never
> send a transfer. Naturally, the one place a pn_post_flow occurs on a
> sender link is in pn_do_transfer... after a transfer:
>
>   // XXX: need better policy for when to refresh window
>   if (!ssn->state.incoming_window && (int32_t) link->state.local_handle >= 0) {
>     pn_post_flow(transport, ssn, link);
>   }
>
> I can't call pn_link_flow, as that only modifies receiver link credit, and
> it asserts on a sender.
>
> Questions:
> - Am I using AMQP wrong? :)
> - Is there any way to send a flow for the sending link to set a new
>   anticipated window?
> - How do we renegotiate as our window shrinks?
>
> Thank you very much for your time,
> -Tom Mathews
> 
>  		 	   		  
> 



Re: The waiting game [client sends 0 outgoing size]

Posted by Rafael Schloming <rh...@alum.mit.edu>.
Hi Tom,

Can you post the protocol trace from your reproducer? You should be able to
turn on tracing like so:

  shell$ export PN_TRACE_FRM=1
  shell$ proton -c 127.0.0.1 -a TESTING
  ...

With the help of the protocol trace, we should be able to figure out what
is going on and hopefully answer your question.
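
(As an alternative, the same frame tracing can be enabled from code; a
minimal sketch using the proton-c call pn_transport_trace():)

  #include <proton/transport.h>

  static void enable_frame_trace(pn_transport_t *transport)
  {
    pn_transport_trace(transport, PN_TRACE_FRM);  /* turn on AMQP frame tracing for this transport */
  }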

--Rafael




On Thu, Jun 5, 2014 at 2:19 PM, Tom Mathews <da...@hotmail.com> wrote:

>
>
> AMQP Qpid sets the outgoing window size (maximum
> transfer frames to expect from client) when negotiating the BEGIN of a
> session equal to the currently enqueued message count. Our AMQP service
> honors
> this when replying with the initial FLOW message, setting the incoming
> window size (maximum transfer frames allowed to be sent) to the same
> value.
>
>
>
> The problem is that there is rarely a message enqueued when
> the session is started, and so the outgoing/incoming window size is set to
> 0,
> which prevents the client from further communication. The developer in
> charge of the service points out that they are honoring the expectations of
> the client, and I tend to agree with them: it makes sense that they could
> optimize a link while it has 0 expected transfers, and wait for an updated
> flow to renegotiate a new window.
>
> We're not using the Messenger class; we're using the lower-level engine
> classes. I can reproduce this behavior by running the proton example with
> the command-line parameters -c 127.0.0.1 -a TESTING against a version of
> the service running locally.
>
> Diving into the code, pn_session_outgoing_window looks only at the
> currently pending session->outgoing_deliveries. That's correctly updated
> in pn_advance_sender when I submit a message... but in
> pn_process_tpwork_sender we have a 0 remote_incoming_window, so we never
> send a transfer. Naturally, the one place a pn_post_flow occurs on a
> sender link is in pn_do_transfer... after a transfer:
>
>   // XXX: need better policy for when to refresh window
>   if (!ssn->state.incoming_window && (int32_t) link->state.local_handle >= 0) {
>     pn_post_flow(transport, ssn, link);
>   }
>
> I can't call pn_link_flow, as that only modifies receiver link credit, and
> it asserts on a sender.
>
> Questions:
> - Am I using AMQP wrong? :)
> - Is there any way to send a flow for the sending link to set a new
>   anticipated window?
> - How do we renegotiate as our window shrinks?
>
> Thank you very much for your time,
> -Tom Mathews
>
>