Posted to users@qpid.apache.org by Fraser Adams <fr...@blueyonder.co.uk> on 2013/12/08 21:06:20 UTC

still trying to get proton messenger to behave properly asynchronously

Hey all,
I've been able to get messenger to behave fairly sensibly in a 
non-blocking way, but what I've yet to achieve is getting it to behave 
in a properly "asynchronous" event-driven way where I fire up a looping 
"notifier" after everything is initialised and the notifier calls the 
"callbacks", you know, the usual event driven pattern.


For my send-async code I've got a "main_loop" function that performs the 
guts of the work, the key bits look like this:

#define SENT_MESSAGE 0
#define STOPPING 1


void main_loop(void *arg) {
    printf("                          *** main_loop ***\n");

    int err = pn_messenger_work(messenger, 0); // Sends any outstanding messages queued for messenger.
    int pending = pn_messenger_outgoing(messenger); // Get the number of pending messages in the outgoing message queue.

    printf("err = %d\n", err);
    printf("pending = %d\n", pending);

    if (state == SENT_MESSAGE && !pending) {
        printf("calling stop\n");
        pn_message_free(message); // Release message.
        pn_messenger_stop(messenger);
        state = STOPPING;
    } else if (state == STOPPING && !err) {
        printf("exiting\n");
        pn_messenger_free(messenger);
        exit(0);
    }
}



In the main method I set messenger to non-blocking, create and send the 
message, and set state to SENT_MESSAGE.

Then I have a "notifier" loop like the following:

   while (1) {
     main_loop(NULL);

     struct timeval timeout;
     timeout.tv_sec = 0;
     timeout.tv_usec = 16667;
     select(0, NULL, NULL, NULL, &timeout);
   }

The approach above works fine and reaches the final state and exits, but 
it's not really what I want, as it's essentially a busy-wait loop, albeit 
with a 16.7 ms delay. What I *really* want is for the main notifier 
event loop to block until activity is happening.


I tried:

   while (1) {
     pn_driver_wait(messenger->driver, -1); // Block indefinitely until there has been socket activity.
     main_loop(NULL);
   }

But that doesn't even compile (error: dereferencing pointer to 
incomplete type). I guess the definition of pn_messenger_t isn't visible 
outside messenger.c, so I can't access the driver instance?

If I do:

   while (1) {
     pn_messenger_work(messenger, -1); // Block indefinitely until there has been socket activity.
     main_loop(NULL);
   }


That "kind of" works, but it doesn't get as far as the exit state (the 
last err value is -7), so I think there's socket activity that I'm missing.

In any case I *really* don't like having to do a blocking call to 
pn_messenger_work() just to do the select/poll call that I really want 
to do.



If I'm honest I don't *think* messenger is currently entirely geared up 
for asynchronous behaviour, despite the ability to put it in non-blocking 
mode. What I mean is that the heart of many of the calls is 
pn_messenger_tsync, which is called in both blocking and non-blocking 
modes, and that's calling pn_driver_wait, potentially in a loop even in 
non-blocking mode. So even if my notifier could do a simple block on 
pn_driver_wait I'd still be calling poll multiple times: once blocking 
in my notifier waiting for activity, and then non-blocking when I do 
pn_messenger_work(messenger, 0) to send any outstanding messages 
queued for messenger.

Even when it is working well, I suspect the loop will cause more calls 
to main_loop than is really desirable, as it'll trigger on all socket 
activity, not just the send. Perhaps that's necessary because of the 
AMQP handshaking, but it doesn't "feel" especially efficient. I'm 
certainly far from an expert on the guts of messenger, though - it made 
my head explode :-)


I guess that most uses to date have been for traditional blocking 
scenarios, but I'm thinking that in the future a more asynchronous 
approach is likely to become important - what I mean is that as the 
number of processor cores increases I suspect that asynchronous 
approaches like Grand Central Dispatch 
http://en.wikipedia.org/wiki/Grand_Central_Dispatch will probably scale 
better across large numbers of cores, so it might be good to have an 
efficient asynchronous programming model for proton.

Am I roughly on the right track? Any ideas how to make what I'm trying 
to do a little neater?

At this stage I'm trying things out and trying to educate myself, so if 
there are limitations it's not necessarily a huge deal, but it might be 
a useful point for further conversation on asynchronous behaviour - 
anyone else musing over this?

Best regards,
Frase

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org


Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Mon, Dec 16, 2013 at 3:11 PM, Fraser Adams <fraser.adams@blueyonder.co.uk
> wrote:

> D'oh......
> OK so I reread your notes around the async python code and noted:
>
>
> "Also, the status is not updated when a message is settled without an
> explicit disposition. I believe this is just a bug. The workaround is to
> set a nonzero incoming window and explicitly accept/reject at the receiver."
>
> that turned out to be more significant than I had realised (slaps head
> very hard with back of hand......) adding
> pn_messenger_set_outgoing_window(messenger, 1024);
> pn_messenger_set_incoming_window(messenger, 1024);
>
> made it work .......
>
>
> would you perhaps be able to explain what's going on with respect to
> "disposition" and "window" here? Am I correct in thinking that the received
> message *should* be implicitly accepted? I've *no idea* what the optimal
> window sizes would be so I copied your values.
>

Ok, so this kind of benefits from an understanding of how acknowledgments
work in the 1.0 protocol. Unlike 0-x the 1.0 acknowledgement semantics are
factored into two independent aspects, namely delivery state, and
settlement. The delivery state encodes information about the processing of
a delivery, things like ACCEPTED, REJECTED, RELEASED, etc. Settlement on
the other hand deals exclusively with whether an endpoint retains state
regarding a delivery. In other words settling a delivery is really
synonymous with forgetting about its existence. The disposition frame in
the protocol is used to communicate when changes occur around delivery
state or settlement. It's important to note that these things can happen
somewhat independently, e.g. a delivery can be settled before its delivery
state reaches any meaningful value.

To map this back to the incoming/outgoing windows, these windows really
control settlement, not delivery state. The delivery state is controlled by
the explicit use of accept/reject in the API. So a window of size N means
that the endpoint will remember the state of the last N deliveries. When
deliveries roll off this window, they will be settled automatically, and in
the case of the incoming window this can happen before a delivery state is
explicitly assigned via use of accept or reject. In fact by default the
incoming window is zero, and so incoming deliveries are settled as soon as
they are read off of the wire, and there is no opportunity to explicitly
accept/reject them.

So to answer your question, incoming messages (or deliveries to be more
precise) are never implicitly accepted by messenger, however they are
implicitly settled, and can in fact be implicitly settled prior to being
accepted. This is in fact what is happening. It results in a disposition
frame being sent indicating the delivery is settled and that the delivery
state is null (its default value). The bug is that the sender should be
noticing that the receiver settled the delivery and updating the status of
the delivery as visible through the messenger's API. It is in fact not
doing this and the delivery still remains in the PENDING state at the
sender.

Regarding the sizes of the windows, there isn't really an optimal per se.
The numbers chosen really have more to do with application semantics than
performance tuning directly. These aren't credit windows and won't in and
of themselves cause any throttling to occur, it's just that deliveries
rolling off the window will be forgotten. That said, of course if your
application needs to track the status of every single delivery, then it
will need to ensure that it never sends more than whatever the outgoing
window size is, and it may need to throttle itself and/or choose a large
enough window to ensure that it doesn't lose information.


I'm fairly familiar with the qpid::messaging setCapacity() stuff and how
> that helps tune the internal queues, but I'm not clear at all how messenger
> concepts map to the qpid::messaging/JMS stuff that I'm more familiar with.
> I can see how to get the internal queue depth, but I'm not clear if it's
> bounded or unbounded - is the window stuff the equivalent of the
> sender/receiver setCapacity() or something else entirely?
>

Hopefully this is a bit clearer now. An incoming window of size zero is
roughly analogous to JMS auto-ack mode. Setting it to a positive value is
roughly analogous to client-ack mode.

The setCapacity stuff in the messaging API is really about flow control and
so is pretty orthogonal to the incoming/outgoing window.


>
> I don't suppose that you have a "messenger for qpid::messaging
> programmers" kick start guide do you :-D
>
> As I get a bit more familiar I'd really like to understand how to get the
> best performance out of this - though I'm probably a little way off that at
> the moment - are there any plans to add "proton perftest" code as a sort of
> canonical "here's how to make proton rock" guide? That'd be *really*
> useful.
>

We've had a couple of fits and starts in this direction. I think Ken may
have done some work on this. We could probably use a bit more of a
concerted effort to resurrect/complete/tie in some of this stuff.


>
>
> Anyway thanks yet again for all of your help, I'm definitely making
> progress now.
>

Excellent, hope to hear more soon.

--Rafael

Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Mon, Dec 16, 2013 at 3:11 PM, Fraser Adams <fraser.adams@blueyonder.co.uk
> wrote:

> D'oh......
> OK so I reread your notes around the async python code and noted:
>
>
> "Also, the status is not updated when a message is settled without an
> explicit disposition. I believe this is just a bug. The workaround is to
> set a nonzero incoming window and explicitly accept/reject at the receiver."
>
> that turned out to be more significant than I had realised (slaps head
> very hard with back of hand......) adding
> pn_messenger_set_outgoing_window(messenger, 1024);
> pn_messenger_set_incoming_window(messenger, 1024);
>
> made it work .......
>

FYI, I fixed the bug on trunk (PROTON-478), so you shouldn't need to mess
with the incoming window anymore. Setting up the outgoing window is still
necessary if you want to be notified when messages are actually delivered.
If you didn't care about this you could omit that and modify the example to
not have callbacks on delivery.

--Rafael

Re: still trying to get proton messenger to behave properly asynchronously

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
D'oh......
OK so I reread your notes around the async python code and noted:

"Also, the status is not updated when a message is settled without an 
explicit disposition. I believe this is just a bug. The workaround is to 
set a nonzero incoming window and explicitly accept/reject at the receiver."

that turned out to be more significant than I had realised (slaps head 
very hard with back of hand......) adding
pn_messenger_set_outgoing_window(messenger, 1024);
pn_messenger_set_incoming_window(messenger, 1024);

made it work .......


would you perhaps be able to explain what's going on with respect to 
"disposition" and "window" here? Am I correct in thinking that the 
received message *should* be implicitly accepted? I've *no idea* what 
the optimal window sizes would be so I copied your values.


I'm fairly familiar with the qpid::messaging setCapacity() stuff and how 
that helps tune the internal queues, but I'm not clear at all how 
messenger concepts map to the qpid::messaging/JMS stuff that I'm more 
familiar with. I can see how to get the internal queue depth, but I'm 
not clear if it's bounded or unbounded - is the window stuff the 
equivalent of the sender/receiver setCapacity() or something else entirely?

I don't suppose that you have a "messenger for qpid::messaging 
programmers" kick start guide do you :-D

As I get a bit more familiar I'd really like to understand how to get 
the best performance out of this - though I'm probably a little way off 
that at the moment - are there any plans to add "proton perftest" code 
as a sort of canonical "here's how to make proton rock" guide? That'd be 
*really* useful.


Anyway thanks yet again for all of your help, I'm definitely making 
progress now.
Cheers,
Frase



On 15/12/13 18:40, Fraser Adams wrote:
> Hi again Rafael,
> really sorry to keep on at this one :-(
> I seem to be having trouble with the tracker/status stuff now......
>
> I got your python examples working, so I figured I'd try and go along 
> similar lines (using pn_messenger_work as the loop blocker for now 
> rather than my pn_messenger_wait just to rule out that as a potential 
> issue) so I've got:
>
>   while (1) {
>     //pn_messenger_wait(messenger, -1); // Block indefinitely until 
> there has been socket activity.
>     pn_messenger_work(messenger, -1); // Block indefinitely until 
> there has been socket activity.
>     process(NULL);
>   }
>
> I've tried doing the explicit accept using a tracker but my 
> recv-async.c doesn't *seem* to be sending it??
>
> I've tried my ./recv-async using your python send_async.py and I don't 
> end up in the ACCEPTED state :-(
>
> similarly when I try ./recv_async.py and use my ./send-async my stuff 
> keeps reporting status = 0 which is PN_STATUS_UNKNOWN 
> <http://qpid.apache.org/releases/qpid-proton-0.5/protocol-engine/c/api/messenger_8h.html#a242e4ee54b9c0a416443c7da5f6e045ba0b46b1041679460baaba2ddcdb2173f2>
>
> This is really starting to drive me nuts now. I must be doing 
> something stupid, but I can't see what.
>
> The code I've attached is rather hacky as I've been trying to mess 
> around, but I'd be really grateful for any suggestions as to what I'm 
> doing wrong.
>
> This is feeling a bit like voodoo at the moment :-D
>
> Cheers.
> Frase
>
>
>
> On 13/12/13 18:40, Rafael Schloming wrote:
>> On Fri, Dec 13, 2013 at 12:54 PM, Fraser Adams <
>> fraser.adams@blueyonder.co.uk> wrote:
>>
>>> Hey Rafael,
>>> many thanks again for your replies, I'll take a look at the python code.
>>>
>>> For info in the branch that I'm doing my JavaScript stuff in I "pimped"
>>> messenger.h and messenger.c slightly adding
>>>
>>> PN_EXTERN int pn_messenger_wait(pn_messenger_t *messenger, int timeout);
>>>
>>> to messenger.h and
>>>
>>> int pn_messenger_wait(pn_messenger_t *messenger, int timeout)
>>> {
>>>      return pn_driver_wait(messenger->driver, timeout);
>>> }
>>>
>>> to messenger.c
>>>
>>> so my notifier now looks like:
>>>
>>>    while (1) {
>>>      pn_messenger_wait(messenger, -1); // Block indefinitely until there
>>> has been socket activity.
>>>      main_loop(NULL);
>>>    }
>>>
>>> And that works perfectly - yay :-)
>>>
>>> Would you have any issues with that going forward as an interim step until
>>> you're able to move forward with the fully decoupled driver?
>>>
>> Not at all. Please feel free.
>>
>> --Rafael
>>
>


Re: still trying to get proton messenger to behave properly asynchronously

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
Hi again Rafael,
really sorry to keep on at this one :-(
I seem to be having trouble with the tracker/status stuff now......

I got your python examples working, so I figured I'd try and go along 
similar lines (using pn_messenger_work as the loop blocker for now 
rather than my pn_messenger_wait just to rule out that as a potential 
issue) so I've got:

   while (1) {
     //pn_messenger_wait(messenger, -1); // Block indefinitely until there has been socket activity.
     pn_messenger_work(messenger, -1); // Block indefinitely until there has been socket activity.
     process(NULL);
   }

I've tried doing the explicit accept using a tracker but my recv-async.c 
doesn't *seem* to be sending it??

I've tried my ./recv-async using your python send_async.py and I don't 
end up in the ACCEPTED state :-(

similarly when I try ./recv_async.py and use my ./send-async my stuff 
keeps reporting status = 0 which is PN_STATUS_UNKNOWN 
<http://qpid.apache.org/releases/qpid-proton-0.5/protocol-engine/c/api/messenger_8h.html#a242e4ee54b9c0a416443c7da5f6e045ba0b46b1041679460baaba2ddcdb2173f2>

This is really starting to drive me nuts now. I must be doing something 
stupid, but I can't see what.

The code I've attached is rather hacky as I've been trying to mess 
around, but I'd be really grateful for any suggestions as to what I'm 
doing wrong.

This is feeling a bit like voodoo at the moment :-D

Cheers.
Frase



On 13/12/13 18:40, Rafael Schloming wrote:
> On Fri, Dec 13, 2013 at 12:54 PM, Fraser Adams <
> fraser.adams@blueyonder.co.uk> wrote:
>
>> Hey Rafael,
>> many thanks again for your replies, I'll take a look at the python code.
>>
>> For info in the branch that I'm doing my JavaScript stuff in I "pimped"
>> messenger.h and messenger.c slightly adding
>>
>> PN_EXTERN int pn_messenger_wait(pn_messenger_t *messenger, int timeout);
>>
>> to messenger.h and
>>
>> int pn_messenger_wait(pn_messenger_t *messenger, int timeout)
>> {
>>      return pn_driver_wait(messenger->driver, timeout);
>> }
>>
>> to messenger.c
>>
>> so my notifier now looks like:
>>
>>    while (1) {
>>      pn_messenger_wait(messenger, -1); // Block indefinitely until there
>> has been socket activity.
>>      main_loop(NULL);
>>    }
>>
>> And that works perfectly - yay :-)
>>
>> Would you have any issues with that going forward as an interim step until
>> you're able to move forward with the fully decoupled driver?
>>
> Not at all. Please feel free.
>
> --Rafael
>


Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Fri, Dec 13, 2013 at 12:54 PM, Fraser Adams <
fraser.adams@blueyonder.co.uk> wrote:

> Hey Rafael,
> many thanks again for your replies, I'll take a look at the python code.
>
> For info in the branch that I'm doing my JavaScript stuff in I "pimped"
> messenger.h and messenger.c slightly adding
>
> PN_EXTERN int pn_messenger_wait(pn_messenger_t *messenger, int timeout);
>
> to messenger.h and
>
> int pn_messenger_wait(pn_messenger_t *messenger, int timeout)
> {
>     return pn_driver_wait(messenger->driver, timeout);
> }
>
> to messenger.c
>
> so my notifier now looks like:
>
>   while (1) {
>     pn_messenger_wait(messenger, -1); // Block indefinitely until there
> has been socket activity.
>     main_loop(NULL);
>   }
>
> And that works perfectly - yay :-)
>
> Would you have any issues with that going forward as an interim step until
> you're able to move forward with the fully decoupled driver?
>

Not at all. Please feel free.

--Rafael

Re: still trying to get proton messenger to behave properly asynchronously

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
Hey Rafael,
many thanks again for your replies, I'll take a look at the python code.

For info in the branch that I'm doing my JavaScript stuff in I "pimped" 
messenger.h and messenger.c slightly adding

PN_EXTERN int pn_messenger_wait(pn_messenger_t *messenger, int timeout);

to messenger.h and

int pn_messenger_wait(pn_messenger_t *messenger, int timeout)
{
     return pn_driver_wait(messenger->driver, timeout);
}

to messenger.c

so my notifier now looks like:

   while (1) {
     pn_messenger_wait(messenger, -1); // Block indefinitely until there has been socket activity.
     main_loop(NULL);
   }

And that works perfectly - yay :-)

Would you have any issues with that going forward as an interim step 
until you're able to move forward with the fully decoupled driver?

Cheers,
Frase


On 13/12/13 12:49, Rafael Schloming wrote:
>
> On Tue, Dec 10, 2013 at 2:33 PM, Fraser Adams 
> <fraser.adams@blueyonder.co.uk <ma...@blueyonder.co.uk>> 
> wrote:
>
>     On 10/12/13 15:51, Rafael Schloming wrote:
>
>         To clarify my comment a little, it's not the
>         pn_messenger_work(..., -1) in the loop that I found confusing.
>         That usage of it, i.e. using it to block/avoid busy looping,
>         is quite expected. It's the additional usage of it inside the
>         main_loop() body that I found surprising. As far as I can tell
>         you could just remove that call and your code would work. (Or
>         at least work better.)
>
>     Funnily enough....... after my last mail I ended up thinking that
>     you might come back with exactly this response :-)
>
>     I probably ought to add some more nuance to the saga :-D
>
>     You might have seen my post a couple of weeks ago about me getting
>     a proof of concept JavaScript implementation of messenger running
>     by compiling from C to JavaScript using emscripten, so to do this
>     I was able to keep the core proton messenger and engine code
>     completely unchanged and only had to modify send/recv to behave
>     asynchronously.
>
>     In my original send-async code I actually had
>
>     #if EMSCRIPTEN
>       emscripten_set_main_loop(main_loop, 0, 0);
>     #else
>
>
>       while (1) {
>         main_loop(NULL);
>
>         struct timeval timeout;
>         timeout.tv_sec = 0;
>         timeout.tv_usec = 16667;
>         select(0, NULL, NULL, NULL, &timeout);
>       }
>     #endif
>
>     So for emscripten there's a notifier (in this case under the hood
>     it's implemented by a JavaScript timeout) and for native code as
>     you can see I just had a slightly crappy select based sleep. That
>     code works, but is effectively polling at 60 frames per second.
>
>     After I got *something* working that's when I got into the game of
>     trying to be properly asynchronous, which is what precipitated
>     this thread..
>
>
>     For the emscripten side of things I've been adding some code that
>     enables me to get notified asynchronously by the WebSocket data
>     handler, I've been trying to do essentially the same for the
>     native C code block in the stuff I've been describing in this thread.
>
>     That's why I've ended up with the non-blocking
>     pn_messenger_work(messenger, 0) in the main_loop() function - in
>     practical terms data arrives on the WebSocket, it gets added to
>     the buffer that the mapped C socket read will read from, then I
>     trigger the callback to main_loop, which then does the proton
>     goodness, so I *have* to call pn_messenger_work in the main_loop
>     for emscripten - I'm being called from a real event handler.
>
>
>     I guess that I *could* put a conditional compilation block around
>     the pn_messenger_work(messenger, 0) in main loop and only call it
>     from the emscripten based version and then use the blocking
>     version in the main notification loop for the native C version,
>     but TBH I'm not really keen on that, there really ought to be a
>     way to get it to behave the same way however event notification
>     has been triggered.
>
>     It does feel slightly (to me at any rate) that pn_messenger_work
>     is doing "too much" in this context (really I guess
>     pn_messenger_tsync). TBH when I look at pn_messenger_tsync there's
>     quite a bit of logic going on there and it's combining the
>     blocking behaviour with quite a bit of other stuff (the messenger
>     internal state updates).
>
>     I guess that from my perspective it feels much cleaner to have the
>     notifier loop doing no more than blocking until there's activity
>     (an event), with the callback actually executing the business
>     logic - surely a true notifier should just be doing the
>     notification of an event (such as socket activity in this case)?
>     It doesn't feel quite right to me in an async model that the
>     notifier loop is also updating the internal state.
>
>     As I say that's what I can achieve in my crazy JavaScript backed
>     world, but not with native C because I can't access the basic
>     blocking call without also executing the business logic that's
>     updating the state.
>
>     Clearly I *want* to update the state, I just don't think I should
>     be doing it at the same time in an async model :-)
>
>
> Ah, I think I follow a little bit more. I certainly agree that the 
> driver and messenger are too closely coupled. I actually hope to 
> support completely decoupling them at some point. Ideally you should 
> be able to supply your own I/O with messenger rather than having to 
> use messenger's built-in posix driver. What you see right now with 
> pn_messenger_work is really a halfway point in between messenger's 
> original blocking behaviour and a fully decoupled driver. I simply 
> haven't had time yet to take that the rest of the way yet, but once we 
> get there I expect you would have full control over exactly where/how 
> you want to block and where/how you want to process I/O events.
>
>
>
>
>
>         Ah, so you want to be notified when messages are acknowledged?
>         For some
>         reason I got it stuck in my head that you were trying to be
>         notified of
>         incoming messages.
>
>         FWIW, I don't think good things would ever come of being able
>         to directly
>         call pn_driver_wait or block on the socket. The
>         pn_messenger_work call is
>         pretty much just blocking on the socket for you and then doing
>         updates of
>         messenger's internal state. Blocking on the socket without
>         actually doing
>         those updates won't actually accomplish anything since you'd
>         never see any
>         changes to messenger's visible state.
>
>     As above it's all about cleanly separating responsibilities in the
>     asynchronous case, yes "The pn_messenger_work call is pretty much
>     just blocking on the socket for you ", but the key bit is that it
>     goes on "then doing updates of messenger's internal state ". I
>     think that the former certainly belongs in the notifier loop, but
>     I believe that the latter belongs in the callback "business logic".
>
>     I certainly agree that "Blocking on the socket without actually
>     doing those updates won't actually accomplish anything since you'd
>     never see any changes to messenger's visible state. ", but as I
>     say I think that doing the update in the callback using the
>     non-blocking pn_messenger_work call is actually the correct thing
>     to do in asynchronous code.
>
>
>     Actually you mention "For some reason I got it stuck in my head
>     that you were trying to be notified of incoming messages." so I've
>     not actually mentioned my asynchronous receiver yet - basically an
>     async version of recv.c well funnily enough I think that actually
>     backs up my argument about separating the responsibilities of
>     blocking/notification from those of updating state in an async
>     world. In recv-async my main loop looks like:
>
>     void main_loop(void *arg) {
>
>         pn_messenger_recv(messenger, -1); // Receive as many messages
>     as messenger can buffer
>         check(messenger);
>
>         while(pn_messenger_incoming(messenger))
>         {
>           pn_messenger_get(messenger, message);
>           check(messenger);
>
>           char buffer[1024];
>           size_t buffsize = sizeof(buffer);
>           pn_data_t *body = pn_message_body(message);
>           pn_data_format(body, buffer, &buffsize);
>
>           printf("Address: %s\n", pn_message_get_address(message));
>           const char* subject = pn_message_get_subject(message);
>           printf("Subject: %s\n", subject ? subject : "(no subject)");
>           printf("Content: %s\n", buffer);
>         }
>
>     }
>
>
>     So again I'd really like my notifier loop to look something like
>
>       while (1) {
>         <block until some relevant socket activity event>
>         main_loop(NULL);
>       }
>
>     Again I need the non-blocking pn_messenger_recv in the callback
>     code as I'm truly being called asynchronously in the JavaScript case.
>
>     All just my opinion of course.
>
>     Hopefully this wider context of what I've been up to makes my
>     mails on this subject to date seem just a little less weird?
>
>     Again thanks for your responses.
>
>
> I finally managed to finish off the async example. I did it in python 
> for brevity, but the same pattern should work in C. It definitely 
> shows up a few gaps in the API, but it works. I don't know if it's 
> particularly useful to you given what you're trying to do, but it 
> should give you some idea of what the work stuff is currently geared 
> towards.
>
> The two notable gaps it shows up are that there is no way to ask 
> messenger for trackers that have an updated status. This is because 
> they were initially added for scenarios where the user had the tracker 
> already and was interested in the status, so the code simply iterates 
> over all the trackers to call the async handlers. This works but could 
> obviously be more efficient. Also, the status is not updated when a 
> message is settled without an explicit disposition. I believe this is 
> just a bug. The workaround is to set a nonzero incoming window and 
> explicitly accept/reject at the receiver. In any case, I've attached 
> the three files. The async.py file is a shared by the sender and 
> receiver. It provides an adapter for using callbacks with messenger.
>
> The files are attached.
>
> --Rafael
>
>


Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Tue, Dec 10, 2013 at 2:33 PM, Fraser Adams <fraser.adams@blueyonder.co.uk
> wrote:

> On 10/12/13 15:51, Rafael Schloming wrote:
>
>> To clarify my comment a little, it's not the pn_messenger_work(..., -1)
>> in the loop that I found confusing. That usage of it, i.e. using it to
>> block/avoid busy looping, is quite expected. It's the additional usage of
>> it inside the main_loop() body that I found surprising. As far as I can
>> tell you could just remove that call and your code would work. (Or at least
>> work better.)
>>
> Funnily enough....... after my last mail I ended up thinking that you
> might come back with exactly this response :-)
>
> I probably ought to add some more nuance to the saga :-D
>
> You might have seen my post a couple of weeks ago about me getting a proof
> of concept JavaScript implementation of messenger running by compiling from
> C to JavaScript using emscripten, so to do this I was able to keep the core
> proton messenger and engine code completely unchanged and only had to
> modify send/recv to behave asynchronously.
>
> In my original send-async code I actually had
>
> #if EMSCRIPTEN
>   emscripten_set_main_loop(main_loop, 0, 0);
> #else
>
>
>   while (1) {
>     main_loop(NULL);
>
>     struct timeval timeout;
>     timeout.tv_sec = 0;
>     timeout.tv_usec = 16667;
>     select(0, NULL, NULL, NULL, &timeout);
>   }
> #endif
>
> So for emscripten there's a notifier (in this case under the hood it's
> implemented by a JavaScript timeout) and for native code as you can see I
> just had a slightly crappy select based sleep. That code works, but is
> effectively polling at 60 frames per second.
>
> After I got *something* working that's when I got into the game of trying
> to be properly asynchronous, which is what precipitated this thread.
>
>
> For the emscripten side of things I've been adding some code that enables
> me to get notified asynchronously by the WebSocket data handler, I've been
> trying to do essentially the same for the native C code block in the stuff
> I've been describing in this thread.
>
> That's why I've ended up with the non-blocking
> pn_messenger_work(messenger, 0) in the main_loop() function - in practical
> terms data arrives on the WebSocket, it gets added to the buffer that the
> mapped C socket read will read from, then I trigger the callback to
> main_loop, which then does the proton goodness, so I *have* to call
> pn_messenger_work in the main_loop for emscripten - I'm being called from a
> real event handler.
>
>
> I guess that I *could* put a conditional compilation block around the
> pn_messenger_work(messenger, 0) in main loop and only call it from the
> emscripten based version and then use the blocking version in the main
> notification loop for the native C version, but TBH I'm not really keen on
> that, there really ought to be a way to get it to behave the same way
> however event notification has been triggered.
>
> It does feel slightly (to me at any rate) that pn_messenger_work is doing
> "too much" in this context (really I guess pn_messenger_tsync). TBH when I
> look at pn_messenger_tsync there's quite a bit of logic going on there and
> it's combining the blocking behaviour with quite a bit of other stuff (the
> messenger internal state updates).
>
> I guess that from my perspective it feels much cleaner to have the
> notifier loop doing no more than blocking until there's activity (an
> event), with the callback actually executing the business logic - surely a
> true notifier should just be doing the notification of an event (such as
> socket activity in this case)? It doesn't feel quite right to me in an
> async model that the notifier loop is also updating the internal state.
>
> As I say that's what I can achieve in my crazy JavaScript backed world,
> but not with native C because I can't access the basic blocking call
> without also executing the business logic that's updating the state.
>
> Clearly I *want* to update the state, I just don't think I should be doing
> it at the same time in an async model :-)


Ah, I think I follow a little bit more. I certainly agree that the driver
and messenger are too closely coupled. I actually hope to support
completely decoupling them at some point. Ideally you should be able to
supply your own I/O with messenger rather than having to use messenger's
built-in posix driver. What you see right now with pn_messenger_work is
really a halfway point in between messenger's original blocking behaviour
and a fully decoupled driver. I simply haven't had time yet to take that
the rest of the way yet, but once we get there I expect you would have full
control over exactly where/how you want to block and where/how you want to
process I/O events.


>
>
>
>
>> Ah, so you want to be notified when messages are acknowledged? For some
>> reason I got it stuck in my head that you were trying to be notified of
>> incoming messages.
>>
>> FWIW, I don't think good things would ever come of being able to directly
>> call pn_driver_wait or block on the socket. The pn_messenger_work call is
>> pretty much just blocking on the socket for you and then doing updates of
>> messenger's internal state. Blocking on the socket without actually doing
>> those updates won't actually accomplish anything since you'd never see any
>> changes to messenger's visible state.
>>
>>  As above it's all about cleanly separating responsibilities in the
> asynchronous case, yes "The pn_messenger_work call is pretty much just
> blocking on the socket for you ", but the key bit is that it goes on "then
> doing updates of messenger's internal state ". I think that the former
> certainly belongs in the notifier loop, but I believe that the latter
> belongs in the callback "business logic".
>
> I certainly agree that "Blocking on the socket without actually doing
> those updates won't actually accomplish anything since you'd never see any
> changes to messenger's visible state. ", but as I say I think that doing
> the update in the callback using the non-blocking pn_messenger_work call is
> actually the correct thing to do in asynchronous code.
>
>
> Actually you mention "For some reason I got it stuck in my head that you
> were trying to be notified of incoming messages." so I've not actually
> mentioned my asynchronous receiver yet - basically an async version of
> recv.c. Well, funnily enough, I think that actually backs up my argument about
> separating the responsibilities of blocking/notification from those of
> updating state in an async world. In recv-async my main loop looks like:
>
> void main_loop(void *arg) {
>
>     pn_messenger_recv(messenger, -1); // Receive as many messages as
> messenger can buffer
>     check(messenger);
>
>     while(pn_messenger_incoming(messenger))
>     {
>       pn_messenger_get(messenger, message);
>       check(messenger);
>
>       char buffer[1024];
>       size_t buffsize = sizeof(buffer);
>       pn_data_t *body = pn_message_body(message);
>       pn_data_format(body, buffer, &buffsize);
>
>       printf("Address: %s\n", pn_message_get_address(message));
>       const char* subject = pn_message_get_subject(message);
>       printf("Subject: %s\n", subject ? subject : "(no subject)");
>       printf("Content: %s\n", buffer);
>     }
>
> }
>
>
> So again I'd really like my notifier loop to look something like
>
>   while (1) {
>     <block until some relevant socket activity event>
>     main_loop(NULL);
>   }
>
> Again I need the non-blocking pn_messenger_recv in the callback code as
> I'm truly being called asynchronously in the JavaScript case.
>
> All just my opinion of course.
>
> Hopefully this wider context of what I've been up to makes my mails on
> this subject to date seem just a little less weird?
>
> Again thanks for your responses.


I finally managed to finish off the async example. I did it in python for
brevity, but the same pattern should work in C. It definitely shows up a
few gaps in the API, but it works. I don't know if it's particularly useful
to you given what you're trying to do, but it should give you some idea of
what the work stuff is currently geared towards.

The two notable gaps it shows up are that there is no way to ask messenger
for trackers that have an updated status. This is because they were
initially added for scenarios where the user had the tracker already and
was interested in the status, so the code simply iterates over all the
trackers to call the async handlers. This works but could obviously be more
efficient. Also, the status is not updated when a message is settled
without an explicit disposition. I believe this is just a bug. The
workaround is to set a nonzero incoming window and explicitly accept/reject
at the receiver. In any case, I've attached the three files. The async.py
file is shared by the sender and receiver. It provides an adapter for
using callbacks with messenger.
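[Editor's sketch: the tracker-iteration pattern Rafael describes above might look roughly like this in C. This is not the attached async.py, just an illustration against the Messenger tracker calls pn_messenger_outgoing_tracker, pn_messenger_status and pn_messenger_settle; the bookkeeping names (track_put, dispatch_status_updates, on_status) are invented.]

```c
#include <proton/messenger.h>
#include <stdio.h>

#define MAX_TRACKED 64

/* There is no API to ask messenger which trackers have updates, so we
   remember the tracker from each put and poll all of them. */
static pn_tracker_t trackers[MAX_TRACKED];
static int ntrackers = 0;

/* Hypothetical async handler, fired once per settled delivery. */
static void on_status(pn_tracker_t t, pn_status_t s) {
    printf("tracker %lld -> status %d\n", (long long) t, (int) s);
}

void track_put(pn_messenger_t *m, pn_message_t *msg) {
    pn_messenger_put(m, msg);
    if (ntrackers < MAX_TRACKED)
        trackers[ntrackers++] = pn_messenger_outgoing_tracker(m);
}

/* Called from the event loop after each pn_messenger_work(m, 0):
   iterate over every remembered tracker, dispatch the ones whose
   status has moved past PENDING, then settle and forget them. */
void dispatch_status_updates(pn_messenger_t *m) {
    for (int i = 0; i < ntrackers; i++) {
        pn_status_t s = pn_messenger_status(m, trackers[i]);
        if (s != PN_STATUS_UNKNOWN && s != PN_STATUS_PENDING) {
            on_status(trackers[i], s);
            pn_messenger_settle(m, trackers[i], 0);
            trackers[i] = trackers[--ntrackers];  /* drop it */
            i--;
        }
    }
}
```

As Rafael notes, scanning every tracker on each pass works but is obviously less efficient than an API that reports only the updated ones.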

The files are attached.

--Rafael

Re: still trying to get proton messenger to behave properly asynchronously

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
On 10/12/13 15:51, Rafael Schloming wrote:
> To clarify my comment a little, it's not the pn_messenger_work(..., 
> -1) in the loop that I found confusing. That usage of it, i.e. using 
> it to block/avoid busy looping, is quite expected. It's the additional 
> usage of it inside the main_loop() body that I found surprising. As 
> far as I can tell you could just remove that call and your code would 
> work. (Or at least work better.) 
Funnily enough....... after my last mail I ended up thinking that you 
might come back with exactly this response :-)

I probably ought to add some more nuance to the saga :-D

You might have seen my post a couple of weeks ago about me getting a 
proof of concept JavaScript implementation of messenger running by 
compiling from C to JavaScript using emscripten, so to do this I was 
able to keep the core proton messenger and engine code completely 
unchanged and only had to modify send/recv to behave asynchronously.

In my original send-async code I actually had

#if EMSCRIPTEN
   emscripten_set_main_loop(main_loop, 0, 0);
#else

   while (1) {
     main_loop(NULL);

     struct timeval timeout;
     timeout.tv_sec = 0;
     timeout.tv_usec = 16667;
     select(0, NULL, NULL, NULL, &timeout);
   }
#endif

So for emscripten there's a notifier (in this case under the hood it's 
implemented by a JavaScript timeout) and for native code as you can see 
I just had a slightly crappy select based sleep. That code works, but is 
effectively polling at 60 frames per second.

After I got *something* working that's when I got into the game of 
trying to be properly asynchronous, which is what precipitated this thread.


For the emscripten side of things I've been adding some code that 
enables me to get notified asynchronously by the WebSocket data handler, 
I've been trying to do essentially the same for the native C code block 
in the stuff I've been describing in this thread.

That's why I've ended up with the non-blocking 
pn_messenger_work(messenger, 0) in the main_loop() function - in 
practical terms data arrives on the WebSocket, it gets added to the 
buffer that the mapped C socket read will read from, then I trigger the 
callback to main_loop, which then does the proton goodness, so I *have* 
to call pn_messenger_work in the main_loop for emscripten - I'm being 
called from a real event handler.


I guess that I *could* put a conditional compilation block around the 
pn_messenger_work(messenger, 0) in main loop and only call it from the 
emscripten based version and then use the blocking version in the main 
notification loop for the native C version, but TBH I'm not really keen 
on that, there really ought to be a way to get it to behave the same way 
however event notification has been triggered.

It does feel slightly (to me at any rate) that pn_messenger_work is 
doing "too much" in this context (really I guess pn_messenger_tsync). 
TBH when I look at pn_messenger_tsync there's quite a bit of logic going 
on there and it's combining the blocking behaviour with quite a bit of 
other stuff (the messenger internal state updates).

I guess that from my perspective it feels much cleaner to have the 
notifier loop doing no more than blocking until there's activity (an 
event), with the callback actually executing the business logic - surely 
a true notifier should just be doing the notification of an event (such 
as socket activity in this case)? It doesn't feel quite right to me in 
an async model that the notifier loop is also updating the internal state.

As I say that's what I can achieve in my crazy JavaScript backed world, 
but not with native C because I can't access the basic blocking call 
without also executing the business logic that's updating the state.

Clearly I *want* to update the state, I just don't think I should be 
doing it at the same time in an async model :-)


>
> Ah, so you want to be notified when messages are acknowledged? For some
> reason I got it stuck in my head that you were trying to be notified of
> incoming messages.
>
> FWIW, I don't think good things would ever come of being able to directly
> call pn_driver_wait or block on the socket. The pn_messenger_work call is
> pretty much just blocking on the socket for you and then doing updates of
> messenger's internal state. Blocking on the socket without actually doing
> those updates won't actually accomplish anything since you'd never see any
> changes to messenger's visible state.
>
As above it's all about cleanly separating responsibilities in the 
asynchronous case, yes "The pn_messenger_work call is pretty much just 
blocking on the socket for you ", but the key bit is that it goes on 
"then doing updates of messenger's internal state ". I think that the 
former certainly belongs in the notifier loop, but I believe that the 
latter belongs in the callback "business logic".

I certainly agree that "Blocking on the socket without actually doing 
those updates won't actually accomplish anything since you'd never see 
any changes to messenger's visible state. ", but as I say I think that 
doing the update in the callback using the non-blocking 
pn_messenger_work call is actually the correct thing to do in 
asynchronous code.


Actually you mention "For some reason I got it stuck in my head that you 
were trying to be notified of incoming messages." so I've not actually 
mentioned my asynchronous receiver yet - basically an async version of 
recv.c. Well, funnily enough, I think that actually backs up my argument 
about separating the responsibilities of blocking/notification from 
those of updating state in an async world. In recv-async my main loop 
looks like:

void main_loop(void *arg) {

     pn_messenger_recv(messenger, -1); // Receive as many messages as 
messenger can buffer
     check(messenger);

     while(pn_messenger_incoming(messenger))
     {
       pn_messenger_get(messenger, message);
       check(messenger);

       char buffer[1024];
       size_t buffsize = sizeof(buffer);
       pn_data_t *body = pn_message_body(message);
       pn_data_format(body, buffer, &buffsize);

       printf("Address: %s\n", pn_message_get_address(message));
       const char* subject = pn_message_get_subject(message);
       printf("Subject: %s\n", subject ? subject : "(no subject)");
       printf("Content: %s\n", buffer);
     }

}


So again I'd really like my notifier loop to look something like

   while (1) {
     <block until some relevant socket activity event>
     main_loop(NULL);
   }

Again I need the non-blocking pn_messenger_recv in the callback code as 
I'm truly being called asynchronously in the JavaScript case.

All just my opinion of course.

Hopefully this wider context of what I've been up to makes my mails on 
this subject to date seem just a little less weird?

Again thanks for your responses.
Frase








Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Mon, Dec 9, 2013 at 2:58 PM, Fraser Adams
<fr...@blueyonder.co.uk> wrote:

> On 09/12/13 14:36, Rafael Schloming wrote:
>
>> The -7 error code means PN_TIMEOUT, i.e. there was no work that could be
>> performed within the time limit you passed in, in this case 0. I'm a
>> little
>> puzzled why you are calling pn_messenger_work twice. The way you have your
>> code structured the first and second call will happen right after each
>> other with the first call blocking indefinitely, and the second call
>> returning immediately. It seems like the second call will serve no purpose
>> but to return a potentially confusing error code.
>>
> Oh, I agree :-)
>
> I guess my explanation of what I was doing wasn't clear. The *only* reason
> that I have the pn_messenger_work(messenger, -1); in the main loop is
> because that was the only way that I could get things to block until I had
> socket activity. To be clear it was really just an experiment.
>
> In my original mail I mentioned that I had actually tried to do
>
>
>     pn_driver_wait(messenger->driver, -1); // Block indefinitely until
> there has been socket activity.
>
> But that doesn't even compile (error: dereferencing pointer to incomplete
> type). I guess pn_messenger_t isn't externally visible outside messenger.c
> so I can't access the driver instance? Which is when I tried
> pn_messenger_work as a bit of a hacky "look see" experiment.
>
> So what I'd like to do really is to block until there's some socket
> activity, then call my callback - and it's the callback that does the
> non-blocking
>
>
>     int err = pn_messenger_work(messenger, 0); // Sends any outstanding
> messages
>
> So yeah I'm not surprised that you were puzzled, but I hope you see what
> I'm trying to achieve now?
>
> I think that your explanation around the -7 probably accounts for why I'm
> not reaching the final exit state, so if I can figure out how to block
> until socket activity without having to use pn_messenger_work to do the
> blocking hopefully I'll be sorted.


To clarify my comment a little, it's not the pn_messenger_work(..., -1) in
the loop that I found confusing. That usage of it, i.e. using it to
block/avoid busy looping, is quite expected. It's the additional usage of
it inside the main_loop() body that I found surprising. As far as I can
tell you could just remove that call and your code would work. (Or at least
work better.)


>
>
>  You're correct that main_loop might get called a few extra times, but I
>> doubt it is much of a performance concern. In any performance related
>> scenario the bulk of what main_loop is going to be doing is
>> sending/receiving messages, and in this case, 99.99% of socket activity
>> will be related to exactly that, so even though it may occasionally get
>> called due to socket activity related to hand shaking or heart beating or
>> something like that, the odds of there not also being message related work
>> to process become vanishingly small the more messages you're
>> sending/receiving.
>>
> Seems fair enough, I guess a few extra system calls are *probably* not
> going to be the critical path of a non-trivial messaging system, might get
> more interesting if we try to push the bounds with large numbers of small
> messages.


I haven't done too much performance testing on messenger, so I'm sure we'll
hit some issues if we do. I'm pretty comfortable with this particular
aspect of the design though. This is probably because of my background
designing the protocol. I know that if there were a lot of socket activity
unrelated to message transfer then either the protocol itself is
inefficient or it is being used in a pathological way, e.g. spamming the
wire with lots of empty frames for no reason.


>
>
>
>> The asynchronous APIs have been used, but only in one or two cases, so you
>> are definitely exploring uncharted territory here. FWIW, I agree
>> completely
>> that the asynchronous case is going to be key in the future.
>>
> That's cool, I'm quite happy playing in uncharted waters, though I'm
> extremely glad to be getting useful advice :-)
>
> I'm really pleased that I'm not the only one musing over the rise of
> asynchrony in a multi-core world, so I really appreciate your comment there
> too!!
>
>
>  I think you're close, but I'm not sure exactly what you're trying to do. I
>> don't see where your code is actually sending outgoing messages or
>> processing incoming ones. One thing I would say is that you probably only
>> need to call pn_messenger_work in one place, i.e. where you need to block.
>>
> Hopefully my explanation above makes sense? All I'm really doing is a
> simple async. rewrite of send.c the message send happens before the
> "notifier" loop in this simple case:
> .......
>   pn_messenger_put(messenger, message);
>   check(messenger);
>
>
>   pn_messenger_send(messenger, 0);
>   state = SENT_MESSAGE;
>   check(messenger);
>
> while (1) {
> ........
> And as I say the  pn_messenger_work I've got in the main loop is only
> 'cause it's the only way I could figure how to get it to block waiting on a
> file descriptor. I could use a "vanilla" poll, but I'd need to figure out
> the descriptors to check so that would get awfully convoluted. Ideally (I
> think) being able to access the driver instance to do
> pn_driver_wait(messenger->driver, -1); would be the right thing, but as I
> say I couldn't get that to compile :-(


Ah, so you want to be notified when messages are acknowledged? For some
reason I got it stuck in my head that you were trying to be notified of
incoming messages.

FWIW, I don't think good things would ever come of being able to directly
call pn_driver_wait or block on the socket. The pn_messenger_work call is
pretty much just blocking on the socket for you and then doing updates of
messenger's internal state. Blocking on the socket without actually doing
those updates won't actually accomplish anything since you'd never see any
changes to messenger's visible state.


>
>
>> I think we could definitely use a set of asynchronous examples to help get
>> people started here. If you can describe a bit more about what you're
>> trying to set up I'll see if I can take a crack at sketching out some
>> example code.
>>
> That'd be cool, as I say at this stage I am simply trying to get an
> asynchronous version of send.c and recv.c working using a sort of
> "notifier" paradigm, I'm sure that I'll have more complex scenario when/if
> I ever get the simple one working.
>

I'll see if I can work up an asynchronous version of the examples.

--Rafael

Re: still trying to get proton messenger to behave properly asynchronously

Posted by Fraser Adams <fr...@blueyonder.co.uk>.
On 09/12/13 14:36, Rafael Schloming wrote:
> The -7 error code means PN_TIMEOUT, i.e. there was no work that could be
> performed within the time limit you passed in, in this case 0. I'm a little
> puzzled why you are calling pn_messenger_work twice. The way you have your
> code structured the first and second call will happen right after each
> other with the first call blocking indefinitely, and the second call
> returning immediately. It seems like the second call will serve no purpose
> but to return a potentially confusing error code.
Oh, I agree :-)

I guess my explanation of what I was doing wasn't clear. The *only* 
reason that I have the pn_messenger_work(messenger, -1); in the main 
loop is because that was the only way that I could get things to block 
until I had socket activity. To be clear it was really just an experiment.

In my original mail I mentioned that I had actually tried to do

     pn_driver_wait(messenger->driver, -1); // Block indefinitely until there has been socket activity.

But that doesn't even compile (error: dereferencing pointer to 
incomplete type). I guess pn_messenger_t isn't externally visible 
outside messenger.c so I can't access the driver instance? Which is when 
I tried pn_messenger_work as a bit of a hacky "look see" experiment.

So what I'd like to do really is to block until there's some socket 
activity, then call my callback - and it's the callback that does the 
non-blocking

     int err = pn_messenger_work(messenger, 0); // Sends any outstanding messages

So yeah I'm not surprised that you were puzzled, but I hope you see what 
I'm trying to achieve now?

I think that your explanation around the -7 probably accounts for why 
I'm not reaching the final exit state, so if I can figure out how to 
block until socket activity without having to use pn_messenger_work to 
do the blocking hopefully I'll be sorted.

> You're correct that main_loop might get called a few extra times, but I
> doubt it is much of a performance concern. In any performance related
> scenario the bulk of what main_loop is going to be doing is
> sending/receiving messages, and in this case, 99.99% of socket activity
> will be related to exactly that, so even though it may occasionally get
> called due to socket activity related to hand shaking or heart beating or
> something like that, the odds of there not also being message related work
> to process become vanishingly small the more messages you're
> sending/receiving.
Seems fair enough, I guess a few extra system calls are *probably* not 
going to be the critical path of a non-trivial messaging system, might 
get more interesting if we try to push the bounds with large numbers of 
small messages.

>
> The asynchronous APIs have been used, but only in one or two cases, so you
> are definitely exploring uncharted territory here. FWIW, I agree completely
> that the asynchronous case is going to be key in the future.
That's cool, I'm quite happy playing in uncharted waters, though I'm 
extremely glad to be getting useful advice :-)

I'm really pleased that I'm not the only one musing over the rise of 
asynchrony in a multi-core world, so I really appreciate your comment 
there too!!

> I think you're close, but I'm not sure exactly what you're trying to do. I
> don't see where your code is actually sending outgoing messages or
> processing incoming ones. One thing I would say is that you probably only
> need to call pn_messenger_work in one place, i.e. where you need to block.
Hopefully my explanation above makes sense? All I'm really doing is a 
simple async. rewrite of send.c the message send happens before the 
"notifier" loop in this simple case:
.......
   pn_messenger_put(messenger, message);
   check(messenger);


   pn_messenger_send(messenger, 0);
   state = SENT_MESSAGE;
   check(messenger);

while (1) {
........
And as I say the  pn_messenger_work I've got in the main loop is only 
'cause it's the only way I could figure how to get it to block waiting 
on a file descriptor. I could use a "vanilla" poll, but I'd need to 
figure out the descriptors to check so that would get awfully 
convoluted. Ideally (I think) being able to access the driver instance 
to do pn_driver_wait(messenger->driver, -1); would be the right thing, 
but as I say I couldn't get that to compile :-(
>
> I think we could definitely use a set of asynchronous examples to help get
> people started here. If you can describe a bit more about what you're
> trying to set up I'll see if I can take a crack at sketching out some
> example code.
That'd be cool, as I say at this stage I am simply trying to get an 
asynchronous version of send.c and recv.c working using a sort of 
"notifier" paradigm, I'm sure that I'll have more complex scenario 
when/if I ever get the simple one working.

Many thanks for the response Rafael,
Cheers,
Frase




Re: still trying to get proton messenger to behave properly asynchronously

Posted by Rafael Schloming <rh...@alum.mit.edu>.
On Sun, Dec 8, 2013 at 3:06 PM, Fraser Adams
<fr...@blueyonder.co.uk> wrote:

> Hey all,
> I've been able to get messenger to behave fairly sensibly in a
> non-blocking way, but what I've yet to achieve is getting it to behave in a
> properly "asynchronous" event-driven way where I fire up a looping
> "notifier" after everything is initialised and the notifier calls the
> "callbacks", you know, the usual event driven pattern.
>
>
> For my send-async code I've got a "main_loop" function that performs the
> guts of the work, the key bits look like this:
>
> #define SENT_MESSAGE 0
> #define STOPPING 1
>
>
> void main_loop(void *arg) {
> printf("                          *** main_loop ***\n");
>
>     int err = pn_messenger_work(messenger, 0); // Sends any outstanding
> messages queued for messenger.
>     int pending = pn_messenger_outgoing(messenger); // Get the number of
> pending messages in the outgoing message queue.
>
> printf("err = %d\n", err);
> printf("pending = %d\n", pending);
>
>     if (state == SENT_MESSAGE && !pending) {
> printf("calling stop\n");
>         pn_message_free(message); // Release message.
>         pn_messenger_stop(messenger);
>         state = STOPPING;
>     } else if (state == STOPPING && !err) {
> printf("exiting\n");
>         pn_messenger_free(messenger);
>         exit(0);
>     }
> }
>
>
>
> in the main method I set messenger to non-blocking, create and send the
> message and set state to SENT_MESSAGE
>
> then I have a "notifier" loop like the following:
>
>   while (1) {
>     main_loop(NULL);
>
>     struct timeval timeout;
>     timeout.tv_sec = 0;
>     timeout.tv_usec = 16667;
>     select(0, NULL, NULL, NULL, &timeout);
>   }
>
> The approach above works fine and reaches the final state and exits, but
> it's not really what I want as it's essentially a busy-wait loop albeit
> with a 16.7 ms delay. What I *really* want is for the main notifier event
> loop to block until activity is happening.
>
>
> I tried:
>
>   while (1) {
>     pn_driver_wait(messenger->driver, -1); // Block indefinitely until
> there has been socket activity.
>     main_loop(NULL);
>   }
>
> But that doesn't even compile (error: dereferencing pointer to incomplete
> type) I guess pn_messenger_t isn't externally visible outside messenger.c
> so I can't access the driver instance?
>
> If I do:
>
>   while (1) {
>     pn_messenger_work(messenger, -1); // Block indefinitely until there
> has been socket activity.
>     main_loop(NULL);
>   }
>
>
> That "kind of" works, but it doesn't get as far as the exit state (the
> last err value is -7), so there's socket activity that I'm missing I think.
>
> In any case I *really* don't like having to do a blocking call to
> pn_messenger_work() just to do the select/poll call that I really want to
> do.
>

The -7 error code means PN_TIMEOUT, i.e. there was no work that could be
performed within the time limit you passed in, in this case 0. I'm a little
puzzled why you are calling pn_messenger_work twice. The way you have your
code structured the first and second call will happen right after each
other with the first call blocking indefinitely, and the second call
returning immediately. It seems like the second call will serve no purpose
but to return a potentially confusing error code.
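[Editor's sketch: concretely, Rafael's suggestion amounts to keeping a single blocking pn_messenger_work call in the notifier loop and dropping the one inside main_loop, so the waiting and the state updates happen in one place. An untested sketch, reusing the globals and state machine from Fraser's send-async code earlier in the thread:]

```c
#include <proton/messenger.h>
#include <stdlib.h>

/* Globals as in Fraser's send-async code. */
extern pn_messenger_t *messenger;
extern pn_message_t *message;
extern int state;

#define SENT_MESSAGE 0
#define STOPPING 1

void notifier_loop(void) {
    while (1) {
        /* The only pn_messenger_work call: blocks until socket
           activity and performs messenger's internal state updates. */
        int err = pn_messenger_work(messenger, -1);
        int pending = pn_messenger_outgoing(messenger);

        if (state == SENT_MESSAGE && !pending) {
            pn_message_free(message);      /* release message */
            pn_messenger_stop(messenger);
            state = STOPPING;
        } else if (state == STOPPING && !err) {
            pn_messenger_free(messenger);  /* stop has completed */
            exit(0);
        }
    }
}
```

With the redundant non-blocking call gone, the loop can no longer see the confusing PN_TIMEOUT (-7) from a second call made when no work remains.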


> If I'm honest I don't *think* messenger is currently entirely geared up
> for asynchronous behaviour despite the ability to put it in non-blocking
> mode. What I mean is that the heart of many of the calls is
> pn_messenger_tsync, which is called in blocking and non-blocking modes and
> that's calling pn_driver_wait, potentially in a loop even in non-blocking
> mode, so even if my notifier could do a simple block on pn_driver_wait I'd
> still be calling poll multiple times - once blocking in my notifier waiting
> for activity and then non-blocking when I do
> pn_messenger_work(messenger, 0); // Sends any outstanding messages queued
> for messenger.
>
> Even when it is working well I'm suspecting that the loop will be causing
> more calls to main_loop than really desirable as it'll trigger on all
> socket activity not just the send - perhaps that's necessary because of the
> AMQP handshaking, but it doesn't "feel" especially efficient, but I'm
> certainly far from an expert about the guts of messenger - it made my head
> explode :-)
>

You're correct that main_loop might get called a few extra times, but I
doubt it is much of a performance concern. In any performance related
scenario the bulk of what main_loop is going to be doing is
sending/receiving messages, and in this case, 99.99% of socket activity
will be related to exactly that, so even though it may occasionally get
called due to socket activity related to hand shaking or heart beating or
something like that, the odds of there not also being message related work
to process become vanishingly small the more messages you're
sending/receiving.


> I guess that most uses to date have been for traditional blocking
> scenarios, but I'm thinking that in the future a more asynchronous approach
> is likely to become important - what I mean is that as the number of
> processor cores increases I suspect that asynchronous approaches like Grand
> Central Dispatch http://en.wikipedia.org/wiki/Grand_Central_Dispatch will
> probably scale better across large numbers of cores, so it might be good to
> have an efficient asynchronous programming model for proton.
>

The asynchronous APIs have been used, but only in one or two cases, so you
are definitely exploring uncharted territory here. FWIW, I agree completely
that the asynchronous case is going to be key in the future.


>
> Am I roughly on the right track? Any ideas how to make what I'm trying to
> do a little neater?
>

I think you're close, but I'm not sure exactly what you're trying to do. I
don't see where your code is actually sending outgoing messages or
processing incoming ones. One thing I would say is that you probably only
need to call pn_messenger_work in one place, i.e. where you need to block.

I think we could definitely use a set of asynchronous examples to help get
people started here. If you can describe a bit more about what you're
trying to set up I'll see if I can take a crack at sketching out some
example code.


> At this stage I'm trying things out and trying to educate myself, so if
> there are limitations it's not necessarily a huge deal, but it might be a
> useful point for further conversation on asynchronous behaviour - anyone
> else musing over this?
>

Much of the async stuff was developed in response to some of the use cases
brought up on the proton list, so there should be at least one other user
of these APIs.

--Rafael