Posted to users@qpid.apache.org by Alan Conway <ac...@redhat.com> on 2016/11/02 17:23:36 UTC

Re: Bounding proton sender memory, or: on_sendable and PN_TRANSPORT and buffers, Oh My!

On Fri, 2016-10-28 at 00:53 -0700, Cliff Jansen wrote:
> This nearly covers the issue I was raising, and most likely addresses
> the major use case.
> 
> I was envisaging multiple sessions over the same connection, possibly
> with different priorities and flow control needs. The question shifts
> from "when is it a good time to send messages on this session" to
> "when is the connection output buffer ready for a new message (from
> any session)". The PN_TRANSPORT event you suggest is actually session
> agnostic and could fill that role here as well. pn_transport_pending()
> could serve to decide when to add more messages to the connection's
> output stream.

What I'm looking for is a bound on total memory use. Since sessions are
multiplexed onto a connection, it's not simple to have a fixed per-
connection bound with a variable number of sessions (since each session
does independent buffering), so a per-session bound seems like a
reasonable compromise. If there are a fixed number of sessions you can
divide your total per-connection goal by the number of sessions.
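
As a rough sketch of that check (untested; CONNECTION_BUDGET and
session_can_buffer() are hypothetical names of mine, but
pn_session_outgoing_bytes() is the real call that reports a session's
buffered output):

    #include <stdbool.h>
    #include <proton/session.h>

    /* Hypothetical per-connection memory goal; not a proton API. */
    #define CONNECTION_BUDGET (1024 * 1024)

    /* True if this session is still under its share of the budget. */
    static bool session_can_buffer(pn_session_t *ssn, size_t n_sessions) {
      size_t per_session = CONNECTION_BUDGET / n_sessions;
      return pn_session_outgoing_bytes(ssn) < per_session;
    }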

It's the same problem as distributing an integral amount of credit over
a variable number of subscribers to give a fixed total credit, and
usually that gets solved the same way - by doing a per-subscriber
bound, not a per-queue bound.
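
For illustration, a per-subscriber grant could look like this (untested;
CREDIT_PER_SUBSCRIBER is an arbitrary value I picked, pn_link_credit()
and pn_link_flow() are the real proton calls):

    #include <proton/link.h>

    /* Fixed per-subscriber window; an arbitrary illustrative value. */
    #define CREDIT_PER_SUBSCRIBER 100

    /* Top up a receiver to its fixed window instead of splitting a
     * queue-wide total over a changing number of subscribers. */
    static void replenish(pn_link_t *receiver) {
      int shortfall = CREDIT_PER_SUBSCRIBER - pn_link_credit(receiver);
      if (shortfall > 0) pn_link_flow(receiver, shortfall);
    }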

My proposal was to treat the sessions as independent - so rely on
proton for fairness with respect to the connection. You could do
something more complex and scan all your sessions each time there is
more transport space.
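
The more complex variant might look roughly like this on PN_TRANSPORT
(untested; app_send_pending() is a hypothetical application hook, not a
proton API):

    #include <proton/connection.h>
    #include <proton/event.h>
    #include <proton/session.h>

    /* Hypothetical application hook: encode queued messages on a session. */
    extern void app_send_pending(pn_session_t *ssn);

    /* On PN_TRANSPORT, scan every active session and let each one top
     * up its output, instead of relying on proton's fairness. */
    static void on_transport_scan(pn_event_t *event) {
      pn_connection_t *conn = pn_event_connection(event);
      pn_state_t active = PN_LOCAL_ACTIVE | PN_REMOTE_ACTIVE;
      for (pn_session_t *ssn = pn_session_head(conn, active); ssn;
           ssn = pn_session_next(ssn, active)) {
        app_send_pending(ssn);
      }
    }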

> The right value of a high/low water mark would vary a lot between an
> 8Gig network and some low power wireless transport.
> 
> If we did not generate PN_TRANSPORT events by default, many
> applications would run just fine and not pay the price of ignoring
> them. Those that wanted them could perhaps register for them.
> 
> Or, assuming the application knows whether it is a large hub server
> or a webcam, perhaps it could inform Proton appropriately and
> register what it thought was the low water mark via an API call:
> 
>     pn_transport_set_low_watermark(t, n_bytes);
> 
> which would be edge triggered when the buffer transitioned from above
> the watermark to below.
> 
> Or the application could just tell Proton each time it is interested
> in a low watermark event:
> 
>     void send_available() {
>       /* ... */
>       /* try_to_send returns false because we hit the high water
>          mark, which leads to: */
>       /* Message not sent, leave on available list for next
>          send_available() */
>       /* and... */
>       pn_transport_notify_pending_less_than(t, n_bytes);
>     }
> 
> which fires exactly once, either on the transition, or "soon" if
> transport_pending is currently less than n_bytes.
> 
> I acknowledge that the saner approach in most cases might be to have
> separate connections to manage complicated session link priorities
> and
> that it may not really be too onerous to generate a transport event
> whenever the output buffer shrinks.

I think probably not. The output buffer only shrinks in response to
some IO write-done event. Under load you'd expect a reasonable
accumulation of data per write, so it's not like PN_FLOW being
generated every time anything changes credit state in the app or on
the wire. A single IO read event can generate a ton of PN_FLOW.

Given that, I think we might as well provide the event: you could
implement the edge trigger in the IO layer by checking buffers after
each write-done, but that's not much different from generating a
PN_TRANSPORT and letting the application do the check. The application
can tailor the check to be exactly as simple or complex as needed,
whereas in the IO layer you'd have to have some configurable "common
case" check that might be more than the app requires.
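
To make that concrete, here is a minimal sketch of the application-side
edge trigger (untested; LOW_WATERMARK and app_send_more() are
hypothetical, pn_transport_pending() is the real call):

    #include <stdbool.h>
    #include <proton/event.h>
    #include <proton/transport.h>

    #define LOW_WATERMARK (64 * 1024)  /* app-chosen; illustrative */

    /* Hypothetical application hook: queue more messages to send. */
    extern void app_send_more(pn_transport_t *transport);

    /* Remember whether output was above the mark and only wake the
     * sender on the above->below transition, not on every PN_TRANSPORT. */
    static bool was_above = false;

    static void on_transport_watermark(pn_event_t *event) {
      pn_transport_t *transport = pn_event_transport(event);
      ssize_t pending = pn_transport_pending(transport);
      if (pending < 0) return;          /* transport closed (PN_EOS) */
      if (pending >= LOW_WATERMARK) {
        was_above = true;
      } else if (was_above) {
        was_above = false;
        app_send_more(transport);       /* drained below the watermark */
      }
    }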

> 
> Cliff
> 
> On Thu, Oct 27, 2016 at 1:35 PM, Alan Conway <ac...@redhat.com> wrote:
> > 
> > Cliff has been bugging me about transport events, sending messages
> > and memory bounding for ages, and the penny finally dropped. I think
> > it deserves a wider audience:
> > 
> > The issue is bounding memory use in a proton sending application.
> > AMQP flow control (as shown in our examples) covers many/most
> > cases, provided the receiver sets a "reasonable" credit limit.
> > 
> > However, on the sender, if the receiver sets infinite credit, or
> > has a much bigger notion of "reasonable", proton will buffer
> > messages without regard to sender constraints. It is quite
> > plausible that receiver/sender have very different memory
> > constraints - one might be a large hub server, the other embedded
> > on millions of small devices (webcams, for example).
> > 
> > The `on_sendable` or PN_FLOW event tells you the remote end has
> > given credit, so you can write a sender that waits for credit
> > before sending. I think we can use the C PN_TRANSPORT event in a
> > similar way to limit sender memory. Attached is a C
> > example/explanation.
> > 
> > Some of our language bindings don't expose TRANSPORT and we might
> > want to think of a more intuitive way to express this. Also
> > TRANSPORT is a bit of a catch-all event: it does fire when data
> > moves from session to transport buffers, but it fires for other
> > things too. We might want to look at the event model.
> > 
> > Meantime I think the attached is a workable approach in C. Would
> > love to hear comments; this is something we probably should
> > incorporate into the language bindings.
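
For reference, the credit-gated send pattern the quoted message
describes looks roughly like this (untested sketch; msg_bytes/msg_size
stand in for a pre-encoded AMQP message, and a real application needs a
unique tag per unsettled delivery):

    #include <proton/delivery.h>
    #include <proton/link.h>

    /* Hypothetical: the application's next pre-encoded message. */
    extern const char *msg_bytes;
    extern size_t msg_size;

    /* Send one message only when the receiver has granted credit. */
    static void on_link_flow(pn_link_t *sender) {
      if (pn_link_credit(sender) > 0) {
        pn_delivery(sender, pn_dtag("1", 1));      /* new delivery; use unique tags */
        pn_link_send(sender, msg_bytes, msg_size); /* write message body */
        pn_link_advance(sender);                   /* finish this delivery */
      }
    }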


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org