Posted to dev@mynewt.apache.org by Wayne Keenan <wa...@gmail.com> on 2016/11/04 12:57:04 UTC

os_events manamgnent

Hi,

I had to move from the commonly found 'statically allocated' os_event
usage pattern to something dynamic.

This is what I'm using:

// Post event:

    struct os_event *my_event =
        (struct os_event *)os_memblock_get(&bleprph_mbuf_mpool);

    if (!my_event) {
        // TODO: scream loudly
        return;
    }
    my_event->ev_queued = 0;
    my_event->ev_arg = arg;
    my_event->ev_type = OS_EVENT_T_PERUSER + 1;   // *1
    my_event(&MY_EVENT_QUEUE, my_event);



// Poll handler:

            case OS_EVENT_T_PERUSER+1:
                my_data = ev->ev_arg;
                os_eventq_remove(&btask_evq, ev);
                os_memblock_put(&task_mbuf_mpool, ev);   // *2
                process_data(my_data);
                break;


This seems to work and will fail quickly (reset) if step *2 isn't
performed, but I have some questions.

1. Is this really correct?  I've not seen it in the samples or the docs.
2. The samples and docs didn't make *2 obvious, e.g. who owns the memory
pool.
3. Would it be better if the task didn't care which pool the memory came
from, i.e. for inter-task events?
4. I've assumed OS_EVENT_T_PERUSER+n is a valid thing to do; is it?  Up to
what n?  Is there a documented range limit?


It would be nice to have an API that perhaps looks something like this:

// Post:
        rc = my_event(&MY_EVENT_QUEUE, OS_EVENT_T_PERUSER+1, arg ); //*1
        if (rc) {
            // TODO: scream loudly
            return;
        }


At *1 a default application pool, registered with the OS, is used (this
would require additional API and setup in the application's main startup).
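
For illustration only, a minimal sketch of what that startup registration
might look like; os_eventq_set_default_pool() and the pool names are
invented for this example, not existing Mynewt API:

    /* Hypothetical startup code -- os_eventq_set_default_pool() is an
     * invented call showing the proposed registration step. */
    static os_membuf_t app_event_buf[
        OS_MEMPOOL_SIZE(16, sizeof (struct os_event))];
    static struct os_mempool app_event_pool;

    int
    main(void)
    {
        os_mempool_init(&app_event_pool, 16, sizeof (struct os_event),
                        app_event_buf, "app_events");
        os_eventq_set_default_pool(&app_event_pool);   /* invented API */
        /* ... remaining application startup ... */
    }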


// Poll:
        struct os_event ev;

        os_eventq_get(&task_evq, &ev);   // *1

        switch (ev.ev_type) {
            case OS_EVENT_T_PERUSER+1:
                my_data = ev.ev_arg;
                process_data(my_data);
                break;
        }


At *1 the event data is copied out and the event itself is released by
os_eventq_get.
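
For what it's worth, a hedged sketch of how that copy-out get might work
on top of the existing calls; os_eventq_get_copy() is an invented name,
and this assumes all dynamic events come from the single registered
app_event_pool above:

    /* Hypothetical copy-out variant -- not existing Mynewt API.  The
     * existing os_eventq_get() returns a pointer to the queued event. */
    void
    os_eventq_get_copy(struct os_eventq *evq, struct os_event *out_ev)
    {
        struct os_event *ev;

        ev = os_eventq_get(evq);               /* existing blocking get */
        *out_ev = *ev;                         /* copy the event data out */
        os_memblock_put(&app_event_pool, ev);  /* release the original */
    }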


All the best
Wayne

Re: os_events manamgnent

Posted by will sanfilippo <wi...@runtime.io>.
Thanks! I was not sure about responding.

Will
> On Nov 4, 2016, at 9:42 AM, Christopher Collins <cc...@apache.org> wrote:
> 
> (responding to Runtime only)
> 
> I will reply to Wayne's email this morning, and follow it up with a
> message to dev regarding the recent eventq changes.  I just wanted to
> send this note so no one spends time writing a response.
> 
> Thanks,
> Chris
> 
> On Fri, Nov 04, 2016 at 01:08:50PM +0000, Wayne Keenan wrote:
>> [...]


Re: os_events manamgnent

Posted by Christopher Collins <cc...@apache.org>.
(responding to Runtime only)

I will reply to Wayne's email this morning, and follow it up with a
message to dev regarding the recent eventq changes.  I just wanted to
send this note so no one spends time writing a response.

Thanks,
Chris

On Fri, Nov 04, 2016 at 01:08:50PM +0000, Wayne Keenan wrote:
> [...]

Re: os_events manamgnent

Posted by Wayne Keenan <wa...@gmail.com>.
Sorry for the typo'd subject; also, these corrections are required:

In the first post example this line:
    my_event(&MY_EVENT_QUEUE, my_event);
should be:
    os_eventq_put(&MY_EVENT_QUEUE, my_event);


In the second post example this line:

rc = my_event(&MY_EVENT_QUEUE, OS_EVENT_T_PERUSER+1, arg ); //*1


should read:

rc = os_eventq_put(&MY_EVENT_QUEUE, OS_EVENT_T_PERUSER+1, arg ); //*1



All the best
Wayne

On 4 November 2016 at 12:57, Wayne Keenan <wa...@gmail.com> wrote:

> [...]

Re: os_events manamgnent

Posted by aditi hilbert <ad...@runtime.io>.
+1 

> I'm now not suggesting changing what's in the core; instead, perhaps we
> could make available a simple struct wrapper and support functions/macros
> as 'codified' best practice of this pattern.
> I can take a look at creating a PR once I've seen the latest Event API and
> am closer to the v1.0 codebase.
> 



aditi

Re: os_events manamgnent

Posted by Christopher Collins <cc...@apache.org>.
On Mon, Nov 07, 2016 at 11:07:56AM +0000, Wayne Keenan wrote:
[...]

> I'm now not suggesting changing what's in the core; instead, perhaps we
> could make available a simple struct wrapper and support functions/macros
> as 'codified' best practice of this pattern.
> I can take a look at creating a PR once I've seen the latest Event API and
> am closer to the v1.0 codebase.

I think that sounds great.  Then it is possible to have it both ways :).

Thanks,
Chris

Re: os_events manamgnent

Posted by Wayne Keenan <wa...@gmail.com>.
Thanks Chris,

On 4 November 2016 at 18:11, Christopher Collins <cc...@apache.org>
wrote:

> Hi Wayne,
>
> Good question.  I'll address the specifics below, but I should mention
> that OS event handling has undergone some fairly major changes in recent
> days.  Later today, I will send a follow up email to the dev list
> detailing these changes and the rationale.
>

ok cool.

> [...]
> I think this is a good idea that is safer than the current approach.
> However, I believe there are tradeoffs involved.  If an event knows how
> to free itself, then it must either:
>
>     1. contain a pointer to its source pool, or
>     2. come from the same pool as all other events.
>
> But I could be missing something.  Do you see another option?
>

nope.


> The first option increases the size of each event, even events which
> are statically allocated.  The second solution restricts how much
> control the developer has, as you can no longer pre-allocate for a set
> number of events of a particular type.  Others can certainly chime in,
> but personally I don't know that I'm comfortable with either of these
> tradeoffs.
>


> The expectation is that most events will be statically
> allocated, so I think it is important to keep that path simple and
> efficient.


I'm totally with you that it's best to keep Mynewt simple; an application
developer can pass a user-defined struct as the arg which holds the actual
user data and also a pointer to the pool the event should be freed back
to; they can also create their own convenience functions/macros around it,
as sketched below.
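
A minimal sketch of what I mean; every name here (my_event_wrapper,
my_event_alloc, my_event_free) is invented for illustration, not existing
API:

    /* Hypothetical wrapper -- all names invented for illustration. */
    struct my_event_wrapper {
        struct os_event we_ev;        /* must be the first member */
        struct os_mempool *we_pool;   /* pool to free this event back to */
        void *we_data;                /* the actual user data */
    };

    /* Allocate a wrapped event from the given pool. */
    static struct my_event_wrapper *
    my_event_alloc(struct os_mempool *pool, void *data)
    {
        struct my_event_wrapper *w;

        w = os_memblock_get(pool);
        if (w != NULL) {
            w->we_ev.ev_queued = 0;
            w->we_ev.ev_type = OS_EVENT_T_PERUSER + 1;
            w->we_ev.ev_arg = w;      /* event points back at its wrapper */
            w->we_pool = pool;
            w->we_data = data;
        }
        return w;
    }

    /* Free a wrapped event back to whichever pool it came from. */
    static void
    my_event_free(struct os_event *ev)
    {
        struct my_event_wrapper *w = ev->ev_arg;

        os_memblock_put(w->we_pool, w);
    }

The consuming task reads w->we_data, processes it, and calls
my_event_free(ev) instead of hard-coding a particular pool in an
os_memblock_put() call.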

In my case I'm (currently) passing inbound serial character data as an
event, in order to decouple the IRQ handler context from the MicroPython
interpreter and its REPL.

Eventually I'll switch back to using the Console when it has the low-level
TX/RX handler support (ticket #469).

But I also need to use the same pattern to decouple GPIO IRQs and the
Python callbacks, in order to support this:

from gpiozero import Button

def buttonA_pressed():
    print("Button pressed.")

buttonA = Button(17)
buttonA.when_pressed = buttonA_pressed

(In case you were wondering, I backported the 0.10.0 nRF GPIO HAL to
0.9.0; thanks Will :)

So the event struct wrapping and helper function could become a much more
commonly used pattern as 3rd-party libraries (which hook into IRQs and
events) and other scripting languages are ported to Mynewt.

I'm now not suggesting changing what's in the core; instead, perhaps we
could make available a simple struct wrapper and support functions/macros
as 'codified' best practice of this pattern.
I can take a look at creating a PR once I've seen the latest Event API and
am closer to the v1.0 codebase.


All the best
Wayne

Re: os_events manamgnent

Posted by Christopher Collins <cc...@apache.org>.
Hi Wayne,

Good question.  I'll address the specifics below, but I should mention
that OS event handling has undergone some fairly major changes in recent
days.  Later today, I will send a follow up email to the dev list
detailing these changes and the rationale.

On Fri, Nov 04, 2016 at 12:57:04PM +0000, Wayne Keenan wrote:
> [...]
> 
> 1. Is this really correct?  I've not seen it in the samples or the docs.

Yes, this is the expected way to manage events when there can be several
of the same type.  As in your example, dynamically allocated events are
needed when each has a unique ev_arg value.

My opinion is that the event handler is the correct place to free the
os_event (as you do in your example).  It may bug some that the module
that does the allocating is different from the one that does the
freeing.  One approach that makes this less objectionable is to create a
function to allocate and enqueue the event as an atomic operation.  If
this function lives in the same module where event processing occurs,
there is less uncertainty about event ownership.
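
For example, a minimal sketch of such a function; my_task_evq,
my_task_event_pool, and my_task_send_event are hypothetical names invented
for this illustration:

    /* Hypothetical module-local queue and pool. */
    static struct os_eventq my_task_evq;
    static struct os_mempool my_task_event_pool;

    /* Allocate an event and enqueue it in one step.  Because this lives
     * in the same module that dequeues and frees the events, ownership of
     * the pool stays in one place. */
    int
    my_task_send_event(void *arg)
    {
        struct os_event *ev;

        ev = os_memblock_get(&my_task_event_pool);
        if (ev == NULL) {
            return OS_ENOMEM;
        }
        ev->ev_queued = 0;
        ev->ev_type = OS_EVENT_T_PERUSER + 1;
        ev->ev_arg = arg;
        os_eventq_put(&my_task_evq, ev);

        return 0;
    }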

> 2. The samples and docs didn't make *2 obvious, e.g. who owns the memory
> pool.

One package which does dynamic event allocation is the NimBLE
controller (net/nimble/controller).  An event is allocated and enqueued
when a packet is received (ble_ll_hci_cmd_rx()), and freed after
processing in the main task loop (ble_ll_task()).

> 3. Would it be better if the task didn't care which pool the memory came
> from, i.e. for inter-task events?
> 4. I've assumed OS_EVENT_T_PERUSER+n is a valid thing to do; is it?  Up to
> what n?  Is there a documented range limit?

Yes, using OS_EVENT_T_PERUSER+n was (!) correct.  Event types are among
the things that have changed recently.

> It would be nice to have an API that perhaps looks something like this:
> [...]
> 
> At *1 the event data is copied out and the event itself is released by
> os_eventq_get.

I think this is a good idea that is safer than the current approach.
However, I believe there are tradeoffs involved.  If an event knows how
to free itself, then it must either:

    1. contain a pointer to its source pool, or
    2. come from the same pool as all other events.

But I could be missing something.  Do you see another option?

The first option increases the size of every event, even events that
are statically allocated.  The second solution restricts how much
control the developer has, as you can no longer pre-allocate for a set
number of events of a particular type.  Others can certainly chime in,
but personally I don't know that I'm comfortable with either of these
tradeoffs.  The expectation is that most events will be statically
allocated, so I think it is important to keep that path simple and
efficient.
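
To make the first option concrete, a sketch of what it might look like;
the ev_pool field is invented here purely to show the per-event cost, and
the other fields follow the pre-1.0 os_event layout as I recall it:

    /* Hypothetical os_event variant for option 1 -- ev_pool is NOT part
     * of the real struct. */
    struct os_event {
        uint8_t ev_queued;
        uint8_t ev_type;
        void *ev_arg;
        struct os_mempool *ev_pool;   /* NULL for static events, else the
                                       * pool to free back to; costs one
                                       * pointer per event either way. */
        STAILQ_ENTRY(os_event) ev_next;
    };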

Thanks,
Chris