Posted to commits@mynewt.apache.org by ad...@apache.org on 2016/06/23 21:19:01 UTC

[6/9] incubator-mynewt-site git commit: Added os_mutex_init and os_mutex_pend pages correctly. Pushed core OS function descriptor tables in pull request #95 by bgiori

http://git-wip-us.apache.org/repos/asf/incubator-mynewt-site/blob/ea84c9e9/latest/mkdocs/search_index.json
----------------------------------------------------------------------
diff --git a/latest/mkdocs/search_index.json b/latest/mkdocs/search_index.json
index eb8ab4b..5d2c26d 100644
--- a/latest/mkdocs/search_index.json
+++ b/latest/mkdocs/search_index.json
@@ -1723,7 +1723,7 @@
         {
             "location": "/os/core_os/context_switch/os_sched_get_current_task/", 
             "text": "os_sched_get_current_task \n\n\nstruct\n \nos_task\n \n*os_sched_get_current_task\n(\nvoid\n)\n\n\n\n\n\nReturns the pointer to task which is currently \nrunning\n.\n\n\nArguments\n\n\nN/A\n\n\nReturned values\n\n\nSee description.\n\n\nNotes\n\n\nExample\n\n\n\n\nvoid\n\n\nos_time_delay\n(\nint32_t\n \nosticks\n)\n{\n    \nos_sr_t\n \nsr\n;\n\n    \nif\n (\nosticks\n \n \n0\n) {\n        \nOS_ENTER_CRITICAL\n(\nsr\n);\n        \nos_sched_sleep\n(\nos_sched_get_current_task\n(), (\nos_time_t\n)\nosticks\n);\n        \nOS_EXIT_CRITICAL\n(\nsr\n);\n        \nos_sched\n(\nNULL\n);\n    }\n}", 
-            "title": "os_/sched_get_current_task"
+            "title": "os_sched_get_current_task"
         }, 
         {
             "location": "/os/core_os/context_switch/os_sched_get_current_task/#os_sched_get_current_task", 
@@ -1947,7 +1947,7 @@
         }, 
         {
             "location": "/os/core_os/time/os_time/", 
-            "text": "OS_Time\n\n\nThe system time for the Mynewt OS.\n\n\nDescription\n\n\nThe Mynewt OS contains an incrementing time that drives the OS scheduler and time delays. The time is a fixed size (e.g. 32 bits) and will eventually wrap back to zero. The time to wrap from zero back to zero is called the \nOS time epoch\n. \n\n\nThe frequency of the OS time tick is specified in the architecture-specific OS code \nos_arch.h\n and is named \nOS_TICKS_PER_SEC\n.\n\n\nThe Mynewt OS also provides APIs for setting and retrieving the wallclock time (also known as local time or time-of-day in other operating systems).\n\n\nData Structures\n\n\nTime is stored in Mynewt as an \nos_time_t\n value. \n\n\nWallclock time is represented using the \nstruct os_timeval\n and \nstruct os_timezone\n tuple.\n\n\nstruct os_timeval\n represents the number of seconds elapsed since 00:00:00 Jan 1, 1970 UTC.\n\n\nstruct os_timeval {\n    int64_t tv_sec;  /\n seconds since Jan 1 1970 UTC \n/\n    int3
 2_t tv_usec; /\n fractional seconds \n/\n};\n\n\nstruct os_timeval tv = { 1457400000, 0 };  /\n 01:20:00 Mar 8 2016 UTC \n/\n\n\n\nstruct os_timezone\n is used to specify the offset of local time from UTC and whether daylight savings is in effect. Note that \ntz_minuteswest\n is a positive number if the local time is \nbehind\n UTC and a negative number if the local time is \nahead\n of UTC.\n\n\nstruct os_timezone {\n    int16_t tz_minuteswest;\n    int16_t tz_dsttime;\n};\n\n\n/\n Pacific Standard Time is 08:00 hours west of UTC \n/\nstruct os_timezone PST = { 480, 0 };\nstruct os_timezone PDT = { 480, 1 };\n\n\n/\n Indian Standard Time is 05:30 hours east of UTC \n/\nstruct os_timezone IST = { -330, 0 };\n\n\n\nList of Functions\n\n\nThe functions available in time are:\n\n\n\n\nos_time_delay\n\n\nos_time_get\n\n\nos_time_tick\n\n\nos_settimeofday\n\n\nos_gettimeofday\n\n\n\n\nList of Macros\n\n\nSeveral macros help with the evalution of times with respect to each other.\n\n\n\n\
 nOS_TIME_TICK_LT(t1,t2)\n -- evaluates to true if t1 is before t2 in time.\n\n\nOS_TIME_TICK_GT(t1,t2)\n -- evaluates to true if t1 is after t2 in time \n\n\nOS_TIME_TICK_GEQ(t1,t2)\n -- evaluates to true if t1 is on or after t2 in time.\n\n\n\n\nNOTE:  For all of these macros the calculations are done modulo 'os_time_t'.  \n\n\nEnsure that comparison of OS time always uses the macros above (to compensate for the possible wrap of OS time).\n\n\nThe following macros help adding or subtracting time when represented as \nstruct os_timeval\n. All parameters to the following macros are pointers to \nstruct os_timeval\n.\n\n\n\n\nos_timeradd(tvp, uvp, vvp)\n --  Add \nuvp\n to \ntvp\n and store result in \nvvp\n.\n\n\nos_timersub(tvp, uvp, vvp)\n -- Subtract \nuvp\n from \ntvp\n and store result in \nvvp\n.\n\n\n\n\nSpecial Notes\n\n\nIts important to understand how quickly the time wraps especially when doing time comparison using the macros above (or by any other means). \n\n\nFor examp
 le, if a tick is 1 millisecond and \nos_time_t\n is 32-bits the OS time will wrap back to zero in about 49.7 days or stated another way, the OS time epoch is 49.7 days.\n\n\nIf two times are more than 1/2 the OS time epoch apart, any time comparison will be incorrect.  Ensure at design time that comparisons will not occur between times that are more than half the OS time epoch.", 
+            "text": "OS_Time\n\n\nThe system time for the Mynewt OS.\n\n\nDescription\n\n\nThe Mynewt OS contains an incrementing time that drives the OS scheduler and time delays. The time is a fixed size (e.g. 32 bits) and will eventually wrap back to zero. The time to wrap from zero back to zero is called the \nOS time epoch\n. \n\n\nThe frequency of the OS time tick is specified in the architecture-specific OS code \nos_arch.h\n and is named \nOS_TICKS_PER_SEC\n.\n\n\nThe Mynewt OS also provides APIs for setting and retrieving the wallclock time (also known as local time or time-of-day in other operating systems).\n\n\nData Structures\n\n\nTime is stored in Mynewt as an \nos_time_t\n value. \n\n\nWallclock time is represented using the \nstruct os_timeval\n and \nstruct os_timezone\n tuple.\n\n\nstruct os_timeval\n represents the number of seconds elapsed since 00:00:00 Jan 1, 1970 UTC.\n\n\nstruct os_timeval {\n    int64_t tv_sec;  /\n seconds since Jan 1 1970 UTC \n/\n    int3
 2_t tv_usec; /\n fractional seconds \n/\n};\n\n\nstruct os_timeval tv = { 1457400000, 0 };  /\n 01:20:00 Mar 8 2016 UTC \n/\n\n\n\nstruct os_timezone\n is used to specify the offset of local time from UTC and whether daylight savings is in effect. Note that \ntz_minuteswest\n is a positive number if the local time is \nbehind\n UTC and a negative number if the local time is \nahead\n of UTC.\n\n\nstruct os_timezone {\n    int16_t tz_minuteswest;\n    int16_t tz_dsttime;\n};\n\n\n/\n Pacific Standard Time is 08:00 hours west of UTC \n/\nstruct os_timezone PST = { 480, 0 };\nstruct os_timezone PDT = { 480, 1 };\n\n\n/\n Indian Standard Time is 05:30 hours east of UTC \n/\nstruct os_timezone IST = { -330, 0 };\n\n\n\nList of Functions\n\n\nThe functions available in time are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_time_delay\n\n\nPut the current task to sleep for the given number of ticks.\n\n\n\n\n\n\nos_time_get\n\n\nGet the current value of OS time.\n\n\n
 \n\n\n\nos_time_tick\n\n\nIncrements the OS time tick for the system.\n\n\n\n\n\n\nos_settimeofday\n\n\nSet the current time of day to the given time structs.\n\n\n\n\n\n\nos_gettimeofday\n\n\nPopulate the given timeval and timezone structs with current time data.\n\n\n\n\n\n\n\n\nList of Macros\n\n\nSeveral macros help with the evaluation of times with respect to each other.\n\n\n\n\nOS_TIME_TICK_LT(t1,t2)\n -- evaluates to true if t1 is before t2 in time.\n\n\nOS_TIME_TICK_GT(t1,t2)\n -- evaluates to true if t1 is after t2 in time \n\n\nOS_TIME_TICK_GEQ(t1,t2)\n -- evaluates to true if t1 is on or after t2 in time.\n\n\n\n\nNOTE:  For all of these macros the calculations are done modulo 'os_time_t'.  \n\n\nEnsure that comparison of OS time always uses the macros above (to compensate for the possible wrap of OS time).\n\n\nThe following macros help adding or subtracting time when represented as \nstruct os_timeval\n. All parameters to the following macros are pointers to \nstruct os
 _timeval\n.\n\n\n\n\nos_timeradd(tvp, uvp, vvp)\n --  Add \nuvp\n to \ntvp\n and store result in \nvvp\n.\n\n\nos_timersub(tvp, uvp, vvp)\n -- Subtract \nuvp\n from \ntvp\n and store result in \nvvp\n.\n\n\n\n\nSpecial Notes\n\n\nIt's important to understand how quickly the time wraps, especially when doing time comparison using the macros above (or by any other means). \n\n\nFor example, if a tick is 1 millisecond and \nos_time_t\n is 32-bits, the OS time will wrap back to zero in about 49.7 days or, stated another way, the OS time epoch is 49.7 days.\n\n\nIf two times are more than 1/2 the OS time epoch apart, any time comparison will be incorrect.  Ensure at design time that comparisons will not occur between times that are more than half the OS time epoch.", 
             "title": "toc"
         }, 
         {
@@ -1967,7 +1967,7 @@
         }, 
         {
             "location": "/os/core_os/time/os_time/#list-of-functions", 
-            "text": "The functions available in time are:   os_time_delay  os_time_get  os_time_tick  os_settimeofday  os_gettimeofday", 
+            "text": "The functions available in time are:     Function  Description      os_time_delay  Put the current task to sleep for the given number of ticks.    os_time_get  Get the current value of OS time.    os_time_tick  Increments the OS time tick for the system.    os_settimeofday  Set the current time of day to the given time structs.    os_gettimeofday  Populate the given timeval and timezone structs with current time data.", 
             "title": "List of Functions"
         }, 
         {
@@ -2132,7 +2132,7 @@
         }, 
         {
             "location": "/os/core_os/task/task/", 
-            "text": "Task\n\n\nA task, along with the scheduler, forms the basis of the Mynewt OS. A task \nconsists of two basic elements: a task stack and a task function. The task \nfunction is basically a forever loop, waiting for some \"event\" to wake it up. \nThere are two methods used to signal a task that it has work to do: event queues \nand semaphores (see the appropriate manual sections for descriptions of these \nfeatures).\n\n\nThe Mynewt OS is a multi-tasking, preemptive OS. Every task is assigned a task \npriority (from 0 to 255), with 0 being the highest priority task. If a higher \npriority task than the current task wants to run, the scheduler preempts the \ncurrently running task and switches context to the higher priority task. This is \njust a fancy way of saying that the processor stack pointer now points to the \nstack of the higher priority task and the task resumes execution where it left \noff.\n\n\nTasks run to completion unless they are preempted by a hi
 gher priority task. The \ndeveloper must insure that tasks eventually \"sleep\"; otherwise lower priority \ntasks will never get a chance to run (actually, any task lower in priority than \nthe task that never sleeps). A task will be put to sleep in the following cases: \nit puts itself to sleep using \nos_time_delay()\n, it waits on an event queue \nwhich is empty or attempts to obtain a mutex or a semaphore that is currently \nowned by another task.\n\n\nNote that other sections of the manual describe these OS features in more \ndetail.\n\n\nDescription\n\n\nIn order to create a task two data structures need to be defined: the task \nobject (struct os_task) and its associated stack. Determining the stack size can \nbe a bit tricky; generally developers should not declare large local variables \non the stack so that task stacks can be of limited size. However, all \napplications are different and the developer must choose the stack size \naccordingly. NOTE: be careful when declarin
 g your stack! The stack is in units \nof \nos_stack_t\n sized elements (generally 32-bits). Looking at the example given \nbelow and assuming \nos_stack_t\n is defined to be a 32-bit unsigned value, \n\"my_task_stack\" will use 256 bytes. \n\n\nA task must also have an associated \"task function\". This is the function that \nwill be called when the task is first run. This task function should never \nreturn!\n\n\nIn order to inform the Mynewt OS of the new task and to have it added to the \nscheduler, the \nos_task_init()\n function is called. Once \nos_task_init()\n is \ncalled, the task is made ready to run and is added to the active task list. Note \nthat a task can be initialized (started) before or after the os has started \n(i.e. before \nos_start()\n is called) but must be initialized after the os has \nbeen initialized (i.e. 'os_init' has been called). In most of the examples and \ncurrent Mynewt projects, the os is initialized, tasks are initialized, and the \nthe os is st
 arted. Once the os has started, the highest priority task will be \nthe first task set to run.\n\n\nInformation about a task can be obtained using the \nos_task_info_get_next()\n \nAPI. Developers can walk the list of tasks to obtain information on all created \ntasks. This information is of type \nos_task_info\n and is described below.\n\n\nThe following is a very simple example showing a single application task. This \ntask simply toggles an LED at a one second interval.\n\n\n/* Create a simple \nproject\n with a task that blinks a LED every second */\n\n\n\n/* Define task stack and task object */\n\n\n#define MY_TASK_PRI         (OS_TASK_PRI_HIGHEST) \n\n\n#define MY_STACK_SIZE       (64) \n\n\nstruct\n \nos_task\n \nmy_task\n; \n\nos_stack_t\n \nmy_task_stack\n[\nMY_STACK_SIZE\n]; \n\n\n/* This is the task function */\n\n\nvoid\n \nmy_task_func\n(\nvoid\n \n*arg\n) {\n    \n/* Set the led pin as an output */\n\n    \nhal_gpio_init_out\n(\nLED_BLINK_PIN\n, \n1\n);\n\n    \n/* The
  task is a forever loop that does not return */\n\n    \nwhile\n (\n1\n) {\n        \n/* Wait one second */\n \n        \nos_time_delay\n(\n1000\n);\n\n        \n/* Toggle the LED */\n \n        \nhal_gpio_toggle\n(\nLED_BLINK_PIN\n);\n    }\n}\n\n\n/* This is the main function for the project */\n\n\nint\n \nmain\n(\nvoid\n) {\n    \nint\n \nrc\n;\n\n    \n/* Initialize OS */\n\n    \nos_init\n();\n\n    \n/* Initialize the task */\n\n    \nos_task_init\n(\nmy_task\n, \nmy_task\n, \nmy_task_func\n, \nNULL\n, \nMY_TASK_PRIO\n, \n                 \nOS_WAIT_FOREVER\n, \nmy_task_stack\n, \nMY_STACK_SIZE\n);\n\n    \n/* Start the OS */\n\n    \nos_start\n();\n\n    \n/* os start should never return. If it does, this should be an error */\n\n    \nassert\n(\n0\n);\n\n    \nreturn\n \nrc\n;\n}\n\n\n\n\n\nData structures\n\n\n/* The highest and lowest task priorities */\n\n\n#define OS_TASK_PRI_HIGHEST         (0)\n\n\n#define OS_TASK_PRI_LOWEST          (0xff)\n\n\n\n/* Task states */\n\n
 \ntypedef\n \nenum\n \nos_task_state\n {\n    \nOS_TASK_READY\n \n=\n \n1\n, \n    \nOS_TASK_SLEEP\n \n=\n \n2\n\n} \nos_task_state_t\n;\n\n\n/* Task flags */\n\n\n#define OS_TASK_FLAG_NO_TIMEOUT     (0x0001U)\n\n\n#define OS_TASK_FLAG_SEM_WAIT       (0x0002U)\n\n\n#define OS_TASK_FLAG_MUTEX_WAIT     (0x0004U)\n\n\n\ntypedef\n \nvoid\n (\n*\nos_task_func_t\n)(\nvoid\n \n*\n);\n\n\n#define OS_TASK_MAX_NAME_LEN (32)\n\n\n\n\n\n\n\n\nstruct\n \nos_task\n {\n    \nos_stack_t\n \n*t_stackptr\n;\n    \nos_stack_t\n \n*t_stacktop\n;\n\n    \nuint16_t\n \nt_stacksize\n;\n    \nuint16_t\n \nt_flags\n;\n\n    \nuint8_t\n \nt_taskid\n;\n    \nuint8_t\n \nt_prio\n;\n    \nuint8_t\n \nt_state\n;\n    \nuint8_t\n \nt_pad\n;\n\n    \nchar\n \n*t_name\n;\n    \nos_task_func_t\n \nt_func\n;\n    \nvoid\n \n*t_arg\n;\n\n    \nvoid\n \n*t_obj\n;\n\n    \nstruct\n \nos_sanity_check\n \nt_sanity_check\n; \n\n    \nos_time_t\n \nt_next_wakeup\n;\n    \nos_time_t\n \nt_run_time\n;\n    \nuint32_t\n \nt_ct
 x_sw_cnt\n;\n\n    \n/* Global list of all tasks, irrespective of run or sleep lists */\n\n    \nSTAILQ_ENTRY\n(\nos_task\n) \nt_os_task_list\n;\n\n    \n/* Used to chain task to either the run or sleep list */\n \n    \nTAILQ_ENTRY\n(\nos_task\n) \nt_os_list\n;\n\n    \n/* Used to chain task to an object such as a semaphore or mutex */\n\n    \nSLIST_ENTRY\n(\nos_task\n) \nt_obj_list\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nt_stackptr\n\n\nCurrent stack pointer\n\n\n\n\n\n\nt_stacktop\n\n\nThe address of the top of the task stack. The stack grows downward\n\n\n\n\n\n\nt_stacksize\n\n\nThe size of the stack, in units of os_stack_t (not bytes!)\n\n\n\n\n\n\nt_flags\n\n\nTask flags (see flag definitions)\n\n\n\n\n\n\nt_taskid\n\n\nA numeric id assigned to each task\n\n\n\n\n\n\nt_prio\n\n\nThe priority of the task. The lower the number, the higher the priority\n\n\n\n\n\n\nt_state\n\n\nThe task state (see state definitions)\n\n\n\n\n\n\nt_pad\n\n\npa
 dding (for alignment)\n\n\n\n\n\n\nt_name\n\n\nName of task\n\n\n\n\n\n\nt_func\n\n\nPointer to task function\n\n\n\n\n\n\nt_obj\n\n\nGeneric object used by mutexes and semaphores when the task is waiting on a mutex or semaphore\n\n\n\n\n\n\nt_sanity_check\n\n\nSanity task data structure\n\n\n\n\n\n\nt_next_wakeup\n\n\nOS time when task is next scheduled to wake up\n\n\n\n\n\n\nt_run_time\n\n\nThe amount of os time ticks this task has been running\n\n\n\n\n\n\nt_ctx_sw_cnt\n\n\nThe number of times that this task has been run\n\n\n\n\n\n\nt_os_task_list\n\n\nList pointer for global task list. All tasks are placed on this list\n\n\n\n\n\n\nt_os_list\n\n\nList pointer used by either the active task list or the sleeping task list\n\n\n\n\n\n\nt_obj_list\n\n\nList pointer for tasks waiting on a semaphore or mutex\n\n\n\n\n\n\n\n\n\n\nstruct\n \nos_task_info\n {\n    \nuint8_t\n \noti_prio\n;\n    \nuint8_t\n \noti_taskid\n;\n    \nuint8_t\n \noti_state\n;\n    \nuint8_t\n \noti_flags\n;\
 n    \nuint16_t\n \noti_stkusage\n;\n    \nuint16_t\n \noti_stksize\n;\n    \nuint32_t\n \noti_cswcnt\n;\n    \nuint32_t\n \noti_runtime\n;\n    \nos_time_t\n \noti_last_checkin\n;\n    \nos_time_t\n \noti_next_checkin\n;\n\n    \nchar\n \noti_name\n[\nOS_TASK_MAX_NAME_LEN\n];\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\noti_prio\n\n\nTask priority\n\n\n\n\n\n\noti_taskid\n\n\nTask id\n\n\n\n\n\n\noti_state\n\n\nTask state\n\n\n\n\n\n\noti_flags\n\n\nTask flags\n\n\n\n\n\n\noti_stkusage\n\n\nAmount of stack used by the task (in os_stack_t units)\n\n\n\n\n\n\noti_stksize\n\n\nThe size of the stack (in os_stack_t units)\n\n\n\n\n\n\noti_cswcnt\n\n\nThe context switch count\n\n\n\n\n\n\noti_runtime\n\n\nThe amount of time that the task has run (in os time ticks)\n\n\n\n\n\n\noti_last_checkin\n\n\nThe time (os time) at which this task last checked in to the sanity task\n\n\n\n\n\n\noti_next_checkin\n\n\nThe time (os time) at which this task last checked in to
  the sanity task\n\n\n\n\n\n\noti_name\n\n\nName of the task\n\n\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in task are:\n\n\n\n\nos_task_init\n\n\nos_task_count\n\n\nos_task_info_get_next", 
+            "text": "Task\n\n\nA task, along with the scheduler, forms the basis of the Mynewt OS. A task \nconsists of two basic elements: a task stack and a task function. The task \nfunction is basically a forever loop, waiting for some \"event\" to wake it up. \nThere are two methods used to signal a task that it has work to do: event queues \nand semaphores (see the appropriate manual sections for descriptions of these \nfeatures).\n\n\nThe Mynewt OS is a multi-tasking, preemptive OS. Every task is assigned a task \npriority (from 0 to 255), with 0 being the highest priority task. If a higher \npriority task than the current task wants to run, the scheduler preempts the \ncurrently running task and switches context to the higher priority task. This is \njust a fancy way of saying that the processor stack pointer now points to the \nstack of the higher priority task and the task resumes execution where it left \noff.\n\n\nTasks run to completion unless they are preempted by a hi
 gher priority task. The \ndeveloper must ensure that tasks eventually \"sleep\"; otherwise lower priority \ntasks will never get a chance to run (actually, any task lower in priority than \nthe task that never sleeps). A task will be put to sleep in the following cases: \nit puts itself to sleep using \nos_time_delay()\n, it waits on an event queue \nwhich is empty, or attempts to obtain a mutex or a semaphore that is currently \nowned by another task.\n\n\nNote that other sections of the manual describe these OS features in more \ndetail.\n\n\nDescription\n\n\nIn order to create a task, two data structures need to be defined: the task \nobject (struct os_task) and its associated stack. Determining the stack size can \nbe a bit tricky; generally developers should not declare large local variables \non the stack so that task stacks can be of limited size. However, all \napplications are different and the developer must choose the stack size \naccordingly. NOTE: be careful when declarin
 g your stack! The stack is in units \nof \nos_stack_t\n sized elements (generally 32-bits). Looking at the example given \nbelow and assuming \nos_stack_t\n is defined to be a 32-bit unsigned value, \n\"my_task_stack\" will use 256 bytes. \n\n\nA task must also have an associated \"task function\". This is the function that \nwill be called when the task is first run. This task function should never \nreturn!\n\n\nIn order to inform the Mynewt OS of the new task and to have it added to the \nscheduler, the \nos_task_init()\n function is called. Once \nos_task_init()\n is \ncalled, the task is made ready to run and is added to the active task list. Note \nthat a task can be initialized (started) before or after the os has started \n(i.e. before \nos_start()\n is called) but must be initialized after the os has \nbeen initialized (i.e. 'os_init' has been called). In most of the examples and \ncurrent Mynewt projects, the os is initialized, tasks are initialized, and \nthe os is st
 arted. Once the os has started, the highest priority task will be \nthe first task set to run.\n\n\nInformation about a task can be obtained using the \nos_task_info_get_next()\n \nAPI. Developers can walk the list of tasks to obtain information on all created \ntasks. This information is of type \nos_task_info\n and is described below.\n\n\nThe following is a very simple example showing a single application task. This \ntask simply toggles an LED at a one second interval.\n\n\n/* Create a simple \nproject\n with a task that blinks a LED every second */\n\n\n\n/* Define task stack and task object */\n\n\n#define MY_TASK_PRI         (OS_TASK_PRI_HIGHEST) \n\n\n#define MY_STACK_SIZE       (64) \n\n\nstruct\n \nos_task\n \nmy_task\n; \n\nos_stack_t\n \nmy_task_stack\n[\nMY_STACK_SIZE\n]; \n\n\n/* This is the task function */\n\n\nvoid\n \nmy_task_func\n(\nvoid\n \n*arg\n) {\n    \n/* Set the led pin as an output */\n\n    \nhal_gpio_init_out\n(\nLED_BLINK_PIN\n, \n1\n);\n\n    \n/* The
  task is a forever loop that does not return */\n\n    \nwhile\n (\n1\n) {\n        \n/* Wait one second */\n \n        \nos_time_delay\n(\n1000\n);\n\n        \n/* Toggle the LED */\n \n        \nhal_gpio_toggle\n(\nLED_BLINK_PIN\n);\n    }\n}\n\n\n/* This is the main function for the project */\n\n\nint\n \nmain\n(\nvoid\n) {\n    \nint\n \nrc\n;\n\n    \n/* Initialize OS */\n\n    \nos_init\n();\n\n    \n/* Initialize the task */\n\n    \nos_task_init\n(\nmy_task\n, \nmy_task\n, \nmy_task_func\n, \nNULL\n, \nMY_TASK_PRIO\n, \n                 \nOS_WAIT_FOREVER\n, \nmy_task_stack\n, \nMY_STACK_SIZE\n);\n\n    \n/* Start the OS */\n\n    \nos_start\n();\n\n    \n/* os start should never return. If it does, this should be an error */\n\n    \nassert\n(\n0\n);\n\n    \nreturn\n \nrc\n;\n}\n\n\n\n\n\nData structures\n\n\n/* The highest and lowest task priorities */\n\n\n#define OS_TASK_PRI_HIGHEST         (0)\n\n\n#define OS_TASK_PRI_LOWEST          (0xff)\n\n\n\n/* Task states */\n\n
 \ntypedef\n \nenum\n \nos_task_state\n {\n    \nOS_TASK_READY\n \n=\n \n1\n, \n    \nOS_TASK_SLEEP\n \n=\n \n2\n\n} \nos_task_state_t\n;\n\n\n/* Task flags */\n\n\n#define OS_TASK_FLAG_NO_TIMEOUT     (0x0001U)\n\n\n#define OS_TASK_FLAG_SEM_WAIT       (0x0002U)\n\n\n#define OS_TASK_FLAG_MUTEX_WAIT     (0x0004U)\n\n\n\ntypedef\n \nvoid\n (\n*\nos_task_func_t\n)(\nvoid\n \n*\n);\n\n\n#define OS_TASK_MAX_NAME_LEN (32)\n\n\n\n\n\n\n\n\nstruct\n \nos_task\n {\n    \nos_stack_t\n \n*t_stackptr\n;\n    \nos_stack_t\n \n*t_stacktop\n;\n\n    \nuint16_t\n \nt_stacksize\n;\n    \nuint16_t\n \nt_flags\n;\n\n    \nuint8_t\n \nt_taskid\n;\n    \nuint8_t\n \nt_prio\n;\n    \nuint8_t\n \nt_state\n;\n    \nuint8_t\n \nt_pad\n;\n\n    \nchar\n \n*t_name\n;\n    \nos_task_func_t\n \nt_func\n;\n    \nvoid\n \n*t_arg\n;\n\n    \nvoid\n \n*t_obj\n;\n\n    \nstruct\n \nos_sanity_check\n \nt_sanity_check\n; \n\n    \nos_time_t\n \nt_next_wakeup\n;\n    \nos_time_t\n \nt_run_time\n;\n    \nuint32_t\n \nt_ct
 x_sw_cnt\n;\n\n    \n/* Global list of all tasks, irrespective of run or sleep lists */\n\n    \nSTAILQ_ENTRY\n(\nos_task\n) \nt_os_task_list\n;\n\n    \n/* Used to chain task to either the run or sleep list */\n \n    \nTAILQ_ENTRY\n(\nos_task\n) \nt_os_list\n;\n\n    \n/* Used to chain task to an object such as a semaphore or mutex */\n\n    \nSLIST_ENTRY\n(\nos_task\n) \nt_obj_list\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nt_stackptr\n\n\nCurrent stack pointer\n\n\n\n\n\n\nt_stacktop\n\n\nThe address of the top of the task stack. The stack grows downward\n\n\n\n\n\n\nt_stacksize\n\n\nThe size of the stack, in units of os_stack_t (not bytes!)\n\n\n\n\n\n\nt_flags\n\n\nTask flags (see flag definitions)\n\n\n\n\n\n\nt_taskid\n\n\nA numeric id assigned to each task\n\n\n\n\n\n\nt_prio\n\n\nThe priority of the task. The lower the number, the higher the priority\n\n\n\n\n\n\nt_state\n\n\nThe task state (see state definitions)\n\n\n\n\n\n\nt_pad\n\n\npa
 dding (for alignment)\n\n\n\n\n\n\nt_name\n\n\nName of task\n\n\n\n\n\n\nt_func\n\n\nPointer to task function\n\n\n\n\n\n\nt_obj\n\n\nGeneric object used by mutexes and semaphores when the task is waiting on a mutex or semaphore\n\n\n\n\n\n\nt_sanity_check\n\n\nSanity task data structure\n\n\n\n\n\n\nt_next_wakeup\n\n\nOS time when task is next scheduled to wake up\n\n\n\n\n\n\nt_run_time\n\n\nThe amount of os time ticks this task has been running\n\n\n\n\n\n\nt_ctx_sw_cnt\n\n\nThe number of times that this task has been run\n\n\n\n\n\n\nt_os_task_list\n\n\nList pointer for global task list. All tasks are placed on this list\n\n\n\n\n\n\nt_os_list\n\n\nList pointer used by either the active task list or the sleeping task list\n\n\n\n\n\n\nt_obj_list\n\n\nList pointer for tasks waiting on a semaphore or mutex\n\n\n\n\n\n\n\n\n\n\nstruct\n \nos_task_info\n {\n    \nuint8_t\n \noti_prio\n;\n    \nuint8_t\n \noti_taskid\n;\n    \nuint8_t\n \noti_state\n;\n    \nuint8_t\n \noti_flags\n;\
 n    \nuint16_t\n \noti_stkusage\n;\n    \nuint16_t\n \noti_stksize\n;\n    \nuint32_t\n \noti_cswcnt\n;\n    \nuint32_t\n \noti_runtime\n;\n    \nos_time_t\n \noti_last_checkin\n;\n    \nos_time_t\n \noti_next_checkin\n;\n\n    \nchar\n \noti_name\n[\nOS_TASK_MAX_NAME_LEN\n];\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\noti_prio\n\n\nTask priority\n\n\n\n\n\n\noti_taskid\n\n\nTask id\n\n\n\n\n\n\noti_state\n\n\nTask state\n\n\n\n\n\n\noti_flags\n\n\nTask flags\n\n\n\n\n\n\noti_stkusage\n\n\nAmount of stack used by the task (in os_stack_t units)\n\n\n\n\n\n\noti_stksize\n\n\nThe size of the stack (in os_stack_t units)\n\n\n\n\n\n\noti_cswcnt\n\n\nThe context switch count\n\n\n\n\n\n\noti_runtime\n\n\nThe amount of time that the task has run (in os time ticks)\n\n\n\n\n\n\noti_last_checkin\n\n\nThe time (os time) at which this task last checked in to the sanity task\n\n\n\n\n\n\noti_next_checkin\n\n\nThe time (os time) at which this task last checked in to
  the sanity task\n\n\n\n\n\n\noti_name\n\n\nName of the task\n\n\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in task are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_task_init\n\n\nCalled to create a task. This adds the task object to the list of ready to run tasks.\n\n\n\n\n\n\nos_task_count\n\n\nReturns the number of tasks that have been created.\n\n\n\n\n\n\nos_task_info_get_next\n\n\nPopulates the os task info structure given with task information.", 
             "title": "toc"
         }, 
         {
@@ -2152,7 +2152,7 @@
         }, 
         {
             "location": "/os/core_os/task/task/#list-of-functions", 
-            "text": "The functions available in task are:   os_task_init  os_task_count  os_task_info_get_next", 
+            "text": "The functions available in task are:     Function  Description      os_task_init  Called to create a task. This adds the task object to the list of ready to run tasks.    os_task_count  Returns the number of tasks that have been created.    os_task_info_get_next  Populates the os task info structure given with task information.", 
             "title": "List of Functions"
         }, 
         {
@@ -2232,7 +2232,7 @@
         }, 
         {
             "location": "/os/core_os/event_queue/event_queue/", 
-            "text": "Event Queues\n\n\nEvent queue is a way of serializing events arring to a task. This makes it easy to queue processing to happen inside task's context. This would be done either from an interrupt handler, or from another task.\n\n\nEvents arrive in a form of a data structure \nstruct os_event\n.\n\n\nDescription\n\n\nEvents are in form of a data structure \nstruct os_event\n, and they are queued to data structure \nstruct os_eventq\n.\n\n\nQueue must be initialized before trying to add events to it. This is done using \nos_eventq_init()\n.\n\n\nCommon way of using event queues is to have a task loop while calling \nos_eventq_get()\n, waiting for work to do.\nOther tasks (or interrupts) then call \nos_eventq_put()\n to wake it up. Once event has been queued task waiting on that queue is woken up, and will get a pointer to queued event structure.\nProcessing task would then act according to event type.\n\n\nWhen \nos_event\n is queued, it should not be freed until 
 processing task is done with it.\n\n\nIt is assumed that there is only one task consuming events from an event queue. Only one task should be sleeping on a particular \nos_eventq\n at a time.\n\n\nNote that os_callout subsystem assumes that event queue is used as the wakeup mechanism.\n\n\nData structures\n\n\nstruct\n \nos_event\n {\n    \nuint8_t\n \nev_queued\n;\n    \nuint8_t\n \nev_type\n;\n    \nvoid\n \n*ev_arg\n;\n    \nSTAILQ_ENTRY\n(\nos_event\n) \nev_next\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nev_queued\n\n\nInternal field, which tells whether event is linked into an \nos_eventq\n already\n\n\n\n\n\n\nev_type\n\n\nType of an event. This should be unique, as it should be used by processing task to figure out what the event means\n\n\n\n\n\n\nev_arg\n\n\nCan be used further as input to task processing this event\n\n\n\n\n\n\nev_next\n\n\nLinkage attaching this event to an event queue\n\n\n\n\n\n\n\n\nstruct\n \nos_eventq\n {\n    \nstruc
 t\n \nos_task\n \n*evq_task\n;\n    \nSTAILQ_HEAD\n(, \nos_event\n) \nevq_list\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nevq_task\n\n\nPointer to task if there is task sleeping on \nos_eventq_get()\n\n\n\n\n\n\nevq_list\n\n\nQueue head for list of events in this queue\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in event queue feature are:\n\n\n\n\nos_eventq_get\n\n\nos_eventq_init\n\n\nos_eventq_put\n\n\nos_eventq_remove", 
+            "text": "Event Queues\n\n\nEvent queue is a way of serializing events arriving at a task. This makes it easy to queue processing to happen inside task's context. This would be done either from an interrupt handler, or from another task.\n\n\nEvents arrive in a form of a data structure \nstruct os_event\n.\n\n\nDescription\n\n\nEvents are in form of a data structure \nstruct os_event\n, and they are queued to data structure \nstruct os_eventq\n.\n\n\nQueue must be initialized before trying to add events to it. This is done using \nos_eventq_init()\n.\n\n\nCommon way of using event queues is to have a task loop while calling \nos_eventq_get()\n, waiting for work to do.\nOther tasks (or interrupts) then call \nos_eventq_put()\n to wake it up. Once event has been queued task waiting on that queue is woken up, and will get a pointer to queued event structure.\nProcessing task would then act according to event type.\n\n\nWhen \nos_event\n is queued, it should not be freed until 
 processing task is done with it.\n\n\nIt is assumed that there is only one task consuming events from an event queue. Only one task should be sleeping on a particular \nos_eventq\n at a time.\n\n\nNote that the os_callout subsystem assumes that an event queue is used as the wakeup mechanism.\n\n\nData structures\n\n\nstruct\n \nos_event\n {\n    \nuint8_t\n \nev_queued\n;\n    \nuint8_t\n \nev_type\n;\n    \nvoid\n \n*ev_arg\n;\n    \nSTAILQ_ENTRY\n(\nos_event\n) \nev_next\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nev_queued\n\n\nInternal field which tells whether the event is linked into an \nos_eventq\n already\n\n\n\n\n\n\nev_type\n\n\nType of the event. This should be unique, as it is used by the processing task to figure out what the event means\n\n\n\n\n\n\nev_arg\n\n\nCan be used as input to the task processing this event\n\n\n\n\n\n\nev_next\n\n\nLinkage attaching this event to an event queue\n\n\n\n\n\n\n\n\nstruct\n \nos_eventq\n {\n    \nstruct\n \nos_task\n \n*evq_task\n;\n    \nSTAILQ_HEAD\n(, \nos_event\n) \nevq_list\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nevq_task\n\n\nPointer to the task, if any, that is sleeping on \nos_eventq_get()\n\n\n\n\n\n\nevq_list\n\n\nQueue head for the list of events in this queue\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in the event queue feature are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_eventq_get\n\n\nFetches the first event from a queue. The task will sleep until an event is queued.\n\n\n\n\n\n\nos_eventq_init\n\n\nInitializes the given event queue, making it ready for use.\n\n\n\n\n\n\nos_eventq_put\n\n\nQueues an event at the tail of the event queue.\n\n\n\n\n\n\nos_eventq_remove\n\n\nRemoves an event which has been placed in a queue.", 
             "title": "toc"
         }, 
         {
@@ -2252,7 +2252,7 @@
         }, 
         {
             "location": "/os/core_os/event_queue/event_queue/#list-of-functions", 
-            "text": "The functions available in event queue feature are:   os_eventq_get  os_eventq_init  os_eventq_put  os_eventq_remove", 
+            "text": "The functions available in the event queue feature are:     Function  Description      os_eventq_get  Fetches the first event from a queue. The task will sleep until an event is queued.    os_eventq_init  Initializes the given event queue, making it ready for use.    os_eventq_put  Queues an event at the tail of the event queue.    os_eventq_remove  Removes an event which has been placed in a queue.", 
             "title": "List of Functions"
         }, 
         {
@@ -2372,7 +2372,7 @@
         }, 
         {
             "location": "/os/core_os/semaphore/semaphore/", 
-            "text": "Semaphore\n\n\nA semaphore is a structure used for gaining exclusive access (much like a mutex), synchronizing task operations and/or use in a \"producer/consumer\" roles. Semaphores like the ones used by the myNewt OS are called \"counting\" semaphores as they are allowed to have more than one token (explained below).\n\n\nDescription\n\n\nA semaphore is a fairly simple construct consisting of a queue for waiting tasks and the number of tokens currently owned by the semaphore. A semaphore can be obtained as long as there are tokens in the semaphore. Any task can add tokens to the semaphore and any task can request the semaphore, thereby removing tokens. When creating the semaphore, the initial number of tokens can be set as well.\n\n\nWhen used for exclusive access to a shared resource the semaphore only needs a single token. In this case, a single task \"creates\" the semaphore by calling \nos_sem_init\n with a value of one (1) for the token. When a task desir
 es exclusive access to the shared resource it requests the semaphore by calling \nos_sem_pend\n. If there is a token the requesting task will acquire the semaphore and continue operation. If no tokens are available the task will be put to sleep until there is a token. A common \"problem\" with using a semaphore for exclusive access is called \npriority inversion\n. Consider the following scenario: a high and low priority task both share a resource which is locked using a semaphore. If the low priority task obtains the semaphore and then the high priority task requests the semaphore, the high priority task is now blocked until the low priority task releases the semaphore. Now suppose that there are tasks between the low priority task and the high priority task that want to run. These tasks will preempt the low priority task which owns the semaphore. Thus, the high priority task is blocked waiting for the low priority task to finish using the semaphore but the low priority task cannot
  run since other tasks are running. Thus, the high priority tasks is \"inverted\" in priority; in effect running at a much lower priority as normally it would preempt the other (lower priority) tasks. If this is an issue a mutex should be used instead of a semaphore.\n\n\nSemaphores can also be used for task synchronization. A simple example of this would be the following. A task creates a semaphore and initializes it with no tokens. The task then waits on the semaphore, and since there are no tokens, the task is put to sleep. When other tasks want to wake up the sleeping task they simply add a token by calling \nos_sem_release\n. This will cause the sleeping task to wake up (instantly if no other higher priority tasks want to run).\n\n\nThe other common use of a counting semaphore is in what is commonly called a \"producer/consumer\" relationship. The producer adds tokens (by calling \nos_sem_release\n) and the consumer consumes them by calling \nos_sem_pend\n. In this relationship
 , the producer has work for the consumer to do. Each token added to the semaphore will cause the consumer to do whatever work is required. A simple example could be the following: every time a button is pressed there is some work to do (ring a bell). Each button press causes the producer to add a token. Each token consumed rings the bell. There will exactly the same number of bell rings as there are button presses. In other words, each call to \nos_sem_pend\n subtracts exactly one token and each call to \nos_sem_release\n adds exactly one token.\n\n\nData structures\n\n\nstruct\n \nos_sem\n\n{\n    \nSLIST_HEAD\n(, \nos_task\n) \nsem_head\n;     \n/* chain of waiting tasks */\n\n    \nuint16_t\n    \n_pad\n;\n    \nuint16_t\n    \nsem_tokens\n;             \n/* # of tokens */\n\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsem_head\n\n\nQueue head for list of tasks waiting on semaphore\n\n\n\n\n\n\n_pad\n\n\nPadding for alignment\n\n\n\n\n\n\nsem_tokens\n\
 n\nCurrent number of tokens\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in semaphore are:\n\n\n\n\nos_sem_init\n\n\nos_sem_pend\n\n\nos_sem_release", 
+            "text": "Semaphore\n\n\nA semaphore is a structure used for gaining exclusive access (much like a mutex), synchronizing task operations, and/or for use in \"producer/consumer\" roles. Semaphores like the ones used by the Mynewt OS are called \"counting\" semaphores as they are allowed to have more than one token (explained below).\n\n\nDescription\n\n\nA semaphore is a fairly simple construct consisting of a queue for waiting tasks and the number of tokens currently owned by the semaphore. A semaphore can be obtained as long as there are tokens in the semaphore. Any task can add tokens to the semaphore and any task can request the semaphore, thereby removing tokens. When creating the semaphore, the initial number of tokens can be set as well.\n\n\nWhen used for exclusive access to a shared resource the semaphore only needs a single token. In this case, a single task \"creates\" the semaphore by calling \nos_sem_init\n with a value of one (1) for the token. When a task desir
 es exclusive access to the shared resource it requests the semaphore by calling \nos_sem_pend\n. If there is a token the requesting task will acquire the semaphore and continue operation. If no tokens are available the task will be put to sleep until there is a token. A common \"problem\" with using a semaphore for exclusive access is called \npriority inversion\n. Consider the following scenario: a high and low priority task both share a resource which is locked using a semaphore. If the low priority task obtains the semaphore and then the high priority task requests the semaphore, the high priority task is now blocked until the low priority task releases the semaphore. Now suppose that there are tasks between the low priority task and the high priority task that want to run. These tasks will preempt the low priority task which owns the semaphore. Thus, the high priority task is blocked waiting for the low priority task to finish using the semaphore but the low priority task cannot
 run since other tasks are running. Thus, the high priority task is \"inverted\" in priority; in effect running at a much lower priority than normal, as it would otherwise preempt the other (lower priority) tasks. If this is an issue, a mutex should be used instead of a semaphore.\n\n\nSemaphores can also be used for task synchronization. A simple example of this would be the following. A task creates a semaphore and initializes it with no tokens. The task then waits on the semaphore, and since there are no tokens, the task is put to sleep. When other tasks want to wake up the sleeping task they simply add a token by calling \nos_sem_release\n. This will cause the sleeping task to wake up (instantly if no other higher priority tasks want to run).\n\n\nThe other common use of a counting semaphore is in what is commonly called a \"producer/consumer\" relationship. The producer adds tokens (by calling \nos_sem_release\n) and the consumer consumes them by calling \nos_sem_pend\n. In this relationship
 , the producer has work for the consumer to do. Each token added to the semaphore will cause the consumer to do whatever work is required. A simple example could be the following: every time a button is pressed there is some work to do (ring a bell). Each button press causes the producer to add a token. Each token consumed rings the bell. There will be exactly the same number of bell rings as there are button presses. In other words, each call to \nos_sem_pend\n subtracts exactly one token and each call to \nos_sem_release\n adds exactly one token.\n\n\nData structures\n\n\nstruct\n \nos_sem\n\n{\n    \nSLIST_HEAD\n(, \nos_task\n) \nsem_head\n;     \n/* chain of waiting tasks */\n\n    \nuint16_t\n    \n_pad\n;\n    \nuint16_t\n    \nsem_tokens\n;             \n/* # of tokens */\n\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nsem_head\n\n\nQueue head for list of tasks waiting on semaphore\n\n\n\n\n\n\n_pad\n\n\nPadding for alignment\n\n\n\n\n\n\nsem_tokens\n\n\nCurrent number of tokens\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in the semaphore API are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_sem_init\n\n\nInitialize a semaphore with a given number of tokens.\n\n\n\n\n\n\nos_sem_pend\n\n\nWait for a semaphore for a given amount of time.\n\n\n\n\n\n\nos_sem_release\n\n\nRelease a semaphore that you are holding. This adds a token to the semaphore.", 
             "title": "toc"
         }, 
         {
@@ -2392,7 +2392,7 @@
         }, 
         {
             "location": "/os/core_os/semaphore/semaphore/#list-of-functions", 
-            "text": "The functions available in semaphore are:   os_sem_init  os_sem_pend  os_sem_release", 
+            "text": "The functions available in the semaphore API are:     Function  Description      os_sem_init  Initialize a semaphore with a given number of tokens.    os_sem_pend  Wait for a semaphore for a given amount of time.    os_sem_release  Release a semaphore that you are holding. This adds a token to the semaphore.", 
             "title": "List of Functions"
         }, 
         {
@@ -2487,7 +2487,7 @@
         }, 
         {
             "location": "/os/core_os/mutex/mutex/", 
-            "text": "Mutex\n\n\nMutex is short for \"mutual exclusion\"; a mutex provides mutually exclusive access to a shared resource. A mutex provides \npriority inheritance\n in order to prevent \npriority inversion\n. Priority inversion occurs when a higher priority task is waiting on a resource owned by a lower priority task. Using a mutex, the lower priority task will inherit the highest priority of any task waiting on the mutex. \n\n\nDescription\n\n\nThe first order of business when using a mutex is to declare the mutex globally. The mutex needs to be initialized before it is used (see the examples). It is generally a good idea to initialize the mutex before tasks start running in order to avoid a task possibly using the mutex before it is initialized.\n\n\nWhen a task wants exclusive access to a shared resource it needs to obtain the mutex by calling \nos_mutex_pend\n. If the mutex is currently owned by a different task (a lower priority task), the requesting task will be
  put to sleep and the owners priority will be elevated to the priority of the requesting task. Note that multiple tasks can request ownership and the current owner is elevated to the highest priority of any task waitin on the mutex. When the task is done using the shared resource, it needs to release the mutex by called \nos_mutex_release\n. There needs to be one release per call to pend. Note that nested calls to \nos_mutex_pend\n are allowed but there needs to be one release per pend.\n\n\nThe following example will illustrate how priority inheritance works. In this example, the task number is the same as its priority. Remember that the lower the number, the higher the priority (i.e. priority 0 is higher priority than priority 1). Suppose that task 5 gets ownership of a mutex but is preempted by task 4. Task 4 attempts to gain ownership of the mutex but cannot as it is owned by task 5. Task 4 is put to sleep and task 5 is temporarily raised to priority 4. Before task 5 can release
  the mutex, task 3 runs and attempts to acquire the mutex. At this point, both task 3 and task 4 are waiting on the mutex (sleeping). Task 5 now runs at priority 3 (the highest priority of all the tasks waiting on the mutex). When task 5 finally releases the mutex it will be preempted as two higher priority tasks are waiting for it. \n\n\nNote that when multiple tasks are waiting on a mutex owned by another task, once the mutex is released the highest priority task waiting on the mutex is run. \n\n\nData structures\n\n\nstruct\n \nos_mutex\n\n{\n    \nSLIST_HEAD\n(, \nos_task\n) \nmu_head\n;\n    \nuint8_t\n     \n_pad\n;\n    \nuint8_t\n     \nmu_prio\n;\n    \nuint16_t\n    \nmu_level\n;\n    \nstruct\n \nos_task\n \n*mu_owner\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nmu_head\n\n\nQueue head for list of tasks waiting on mutex\n\n\n\n\n\n\n_pad\n\n\nPadding\n\n\n\n\n\n\nmu_prio\n\n\nDefault priority of owner of mutex. Used to reset priority of task
  when mutex released\n\n\n\n\n\n\nmu_level\n\n\nCall nesting level (for nested calls)\n\n\n\n\n\n\nmu_owner\n\n\nPointer to task structure which owns mutex\n\n\n\n\n\n\n\n\nList of Functions\n\n\n\n\nThe functions available in this OS feature are:\n\n\n\n\nos_mutex_init\n\n\nos_mutex_pend\n\n\nos_mutex_release", 
+            "text": "Mutex\n\n\nMutex is short for \"mutual exclusion\"; a mutex provides mutually exclusive access to a shared resource. A mutex provides \npriority inheritance\n in order to prevent \npriority inversion\n. Priority inversion occurs when a higher priority task is waiting on a resource owned by a lower priority task. Using a mutex, the lower priority task will inherit the highest priority of any task waiting on the mutex. \n\n\nDescription\n\n\nThe first order of business when using a mutex is to declare the mutex globally. The mutex needs to be initialized before it is used (see the examples). It is generally a good idea to initialize the mutex before tasks start running in order to avoid a task possibly using the mutex before it is initialized.\n\n\nWhen a task wants exclusive access to a shared resource it needs to obtain the mutex by calling \nos_mutex_pend\n. If the mutex is currently owned by a different task (a lower priority task), the requesting task will be
 put to sleep and the owner's priority will be elevated to the priority of the requesting task. Note that multiple tasks can request ownership and the current owner is elevated to the highest priority of any task waiting on the mutex. When the task is done using the shared resource, it needs to release the mutex by calling \nos_mutex_release\n. There needs to be one release per call to pend. Note that nested calls to \nos_mutex_pend\n are allowed but there needs to be one release per pend.\n\n\nThe following example will illustrate how priority inheritance works. In this example, the task number is the same as its priority. Remember that the lower the number, the higher the priority (i.e. priority 0 is higher priority than priority 1). Suppose that task 5 gets ownership of a mutex but is preempted by task 4. Task 4 attempts to gain ownership of the mutex but cannot as it is owned by task 5. Task 4 is put to sleep and task 5 is temporarily raised to priority 4. Before task 5 can release
  the mutex, task 3 runs and attempts to acquire the mutex. At this point, both task 3 and task 4 are waiting on the mutex (sleeping). Task 5 now runs at priority 3 (the highest priority of all the tasks waiting on the mutex). When task 5 finally releases the mutex it will be preempted as two higher priority tasks are waiting for it. \n\n\nNote that when multiple tasks are waiting on a mutex owned by another task, once the mutex is released the highest priority task waiting on the mutex is run. \n\n\nData structures\n\n\nstruct\n \nos_mutex\n\n{\n    \nSLIST_HEAD\n(, \nos_task\n) \nmu_head\n;\n    \nuint8_t\n     \n_pad\n;\n    \nuint8_t\n     \nmu_prio\n;\n    \nuint16_t\n    \nmu_level\n;\n    \nstruct\n \nos_task\n \n*mu_owner\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nmu_head\n\n\nQueue head for list of tasks waiting on mutex\n\n\n\n\n\n\n_pad\n\n\nPadding\n\n\n\n\n\n\nmu_prio\n\n\nDefault priority of owner of mutex. Used to reset priority of task
  when mutex released\n\n\n\n\n\n\nmu_level\n\n\nCall nesting level (for nested calls)\n\n\n\n\n\n\nmu_owner\n\n\nPointer to task structure which owns mutex\n\n\n\n\n\n\n\n\nList of Functions\n\n\n\n\nThe functions available in this OS feature are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_mutex_init\n\n\nInitialize the mutex. Must be called before the mutex can be used.\n\n\n\n\n\n\nos_mutex_pend\n\n\nAcquire ownership of a mutex.\n\n\n\n\n\n\nos_mutex_release\n\n\nRelease ownership of a mutex.", 
             "title": "toc"
         }, 
         {
@@ -2507,13 +2507,43 @@
         }, 
         {
             "location": "/os/core_os/mutex/mutex/#list-of-functions", 
-            "text": "The functions available in this OS feature are:   os_mutex_init  os_mutex_pend  os_mutex_release", 
+            "text": "The functions available in this OS feature are:     Function  Description      os_mutex_init  Initialize the mutex. Must be called before the mutex can be used.    os_mutex_pend  Acquire ownership of a mutex.    os_mutex_release  Release ownership of a mutex.", 
             "title": "List of Functions"
         }, 
         {
+            "location": "/os/core_os/mutex/os_mutex_init/", 
+            "text": "os_mutex_init\n\n\nos_error_t\n \nos_mutex_init\n(\nstruct\n \nos_mutex\n \n*mu\n)\n\n\n\n\n\nInitialize the mutex. Must be called before the mutex can be used.\n\n\nArguments\n\n\n\n\n\n\n\n\nArguments\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\n*mu\n\n\nPointer to mutex\n\n\n\n\n\n\n\n\nReturned values\n\n\nOS_INVALID_PARM: returned when *mu is NULL on entry.\n\n\nOS_OK: mutex initialized successfully.\n\n\nNotes\n\n\n\n\nExample\n\n\nstruct\n \nos_mutex\n \ng_mutex1\n;\n\nos_error_t\n \nerr\n;\n\n\nerr\n \n=\n \nos_mutex_init\n(\n&g_mutex1\n);\n\nassert\n(\nerr\n \n==\n \nOS_OK\n);", 
+            "title": "os_mutex_init"
+        }, 
+        {
+            "location": "/os/core_os/mutex/os_mutex_init/#os_mutex_init", 
+            "text": "os_error_t   os_mutex_init ( struct   os_mutex   *mu )  Initialize the mutex. Must be called before the mutex can be used.", 
+            "title": "os_mutex_init"
+        }, 
+        {
+            "location": "/os/core_os/mutex/os_mutex_init/#arguments", 
+            "text": "Arguments  Description      *mu  Pointer to mutex", 
+            "title": "Arguments"
+        }, 
+        {
+            "location": "/os/core_os/mutex/os_mutex_init/#returned-values", 
+            "text": "OS_INVALID_PARM: returned when *mu is NULL on entry.  OS_OK: mutex initialized successfully.", 
+            "title": "Returned values"
+        }, 
+        {
+            "location": "/os/core_os/mutex/os_mutex_init/#notes", 
+            "text": "", 
+            "title": "Notes"
+        }, 
+        {
+            "location": "/os/core_os/mutex/os_mutex_init/#example", 
+            "text": "struct   os_mutex   g_mutex1 ; os_error_t   err ; err   =   os_mutex_init ( &g_mutex1 ); assert ( err   ==   OS_OK );", 
+            "title": "Example"
+        }, 
+        {
             "location": "/os/core_os/mutex/os_mutex_pend/", 
             "text": "os_mutex_pend \n\n\nos_error_t\n \nos_mutex_pend\n(\nstruct\n \nos_mutex\n \n*mu\n, \nuint32_t\n \ntimeout\n) \n\n\n\n\n\nAcquire ownership of a mutex.\n\n\nArguments\n\n\n\n\n\n\n\n\nArguments\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\n*mu\n\n\nPointer to mutex\n\n\n\n\n\n\ntimeout\n\n\nTimeout, in os ticks. A value of 0 means no timeout. A value of 0xFFFFFFFF means to wait forever.\n\n\n\n\n\n\n\n\nReturned values\n\n\nOS_INVALID_PARM: returned when *mu is NULL on entry.\n\n\nOS_OK: mutex was successfully acquired.\n\n\nOS_TIMEOUT: the mutex was not available within the timeout specified.\n\n\nOS_NOT_STARTED: Attempt to release a mutex before the os has been started.\n\n\nNotes\n\n\nIf the mutex is owned by another task and the timeout is 0 the function returns immediately with the error code OS_TIMEOUT. The calling task \ndoes not\n own the mutex when this occurs.\n\n\nExample\n\n\nstruct\n \nos_mutex\n \ng_mutex1\n;\n\nos_error_t\n \nerr\n;\n\n\nerr\n \n=\n \nos_
 mutex_pend\n(\ng_mutex1\n, \n0\n);\n\nassert\n(\nerr\n \n==\n \nOS_OK\n);\n\n\n/* Perform operations requiring exclusive access */\n\n\n\nerr\n \n=\n \nos_mutex_release\n(\ng_mutex1\n);\n\nassert\n(\nerr\n \n==\n \nOS_OK\n);", 
-            "title": "os_mutex_init"
+            "title": "os_mutex_pend"
         }, 
         {
             "location": "/os/core_os/mutex/os_mutex_pend/#os_mutex_pend", 
@@ -2567,7 +2597,7 @@
         }, 
         {
             "location": "/os/core_os/memory_pool/memory_pool/", 
-            "text": "Memory Pools\n\n\nA memory pool is a collection of fixed sized elements called memory blocks. Generally, memory pools are used when the developer wants to allocate a certain amount of memory to a given feature. Unlike the heap, where a code module is at the mercy of other code modules to insure there is sufficient memory, memory pools can insure sufficient memory allocation.\n\n\nDescription\n\n\nIn order to create a memory pool the developer needs to do a few things. The first task is to define the memory pool itself. This is a data structure which contains information about the pool itself (i.e. number of blocks, size of the blocks, etc).\n\n\nstruct\n \nos_mempool\n \nmy_pool\n;\n\n\n\n\n\n\nThe next order of business is to allocate the memory used by the memory pool. This memory can either be statically allocated (i.e. a global variable) or dynamically allocated (i.e. from the heap). When determining the amount of memory required for the memory pool, simply 
 multiplying the number of blocks by the size of each block is not sufficient as the OS may have alignment requirements. The alignment size definition is named \nOS_ALIGNMENT\n and can be found in os_arch.h as it is architecture specific. The memory block alignment is usually for efficiency but may be due to other reasons. Generally, blocks are aligned on 32-bit boundaries. Note that memory blocks must also be of sufficient size to hold a list pointer as this is needed to chain memory blocks on the free list.\n\n\nIn order to simplify this for the user two macros have been provided: \nOS_MEMPOOL_BYTES(n, blksize)\n and \nOS_MEMPOOL_SIZE(n, blksize)\n. The first macro returns the number of bytes needed for the memory pool while the second returns the number of \nos_membuf_t\n elements required by the memory pool. The \nos_membuf_t\n type is used to guarantee that the memory buffer used by the memory pool is aligned on the correct boundary. \n\n\nHere are some examples. Note that if a 
 custom malloc implementation is used it must guarantee that the memory buffer used by the pool is allocated on the correct boundary (i.e. OS_ALIGNMENT).\n\n\nvoid\n \n*my_memory_buffer\n;\n\nmy_memory_buffer\n \n=\n \nmalloc\n(\nOS_MEMPOOL_BYTES\n(\nNUM_BLOCKS\n, \nBLOCK_SIZE\n));\n\n\n\n\n\nos_membuf_t\n \nmy_memory_buffer\n[\nOS_MEMPOOL_SIZE\n(\nNUM_BLOCKS\n, \nBLOCK_SIZE\n)];\n\n\n\n\n\n\nNow that the memory pool has been defined as well as the memory required for the memory blocks which make up the pool the user needs to initialize the memory pool by calling \nos_mempool_init\n.\n\n\nos_mempool_init\n(\nmy_pool\n, \nNUM_BLOCKS\n, \nBLOCK_SIZE\n, \nmy_memory_buffer\n,\n                         \nMyPool\n);\n\n\n\n\n\n\nOnce the memory pool has been initialized the developer can allocate memory blocks from the pool by calling \nos_memblock_get\n. When the memory block is no longer needed the memory can be freed by calling \nos_memblock_put\n. \n\n\nData structures\n\n\nstruct\n \n
 os_mempool\n {\n    \nint\n \nmp_block_size\n;\n    \nint\n \nmp_num_blocks\n;\n    \nint\n \nmp_num_free\n;\n    \nuint32_t\n \nmp_membuf_addr\n;\n    \nSTAILQ_ENTRY\n(\nos_mempool\n) \nmp_list\n;    \n    \nSLIST_HEAD\n(,\nos_memblock\n);\n    \nchar\n \n*name\n;\n};\n\n\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nmp_block_size\n\n\nSize of the memory blocks, in bytes. This is not the actual  number of bytes used by each block; it is the requested size of each block. The actual memory block size will be aligned to OS_ALIGNMENT bytes\n\n\n\n\n\n\nmp_num_blocks\n\n\nNumber of memory blocks in the pool\n\n\n\n\n\n\nmp_num_free\n\n\nNumber of free blocks left\n\n\n\n\n\n\nmp_membuf_addr\n\n\nThe address of the memory block. This is used to check that a valid memory block is being freed.\n\n\n\n\n\n\nmp_list\n\n\nList pointer to chain memory pools so they can be displayed by newt tools\n\n\n\n\n\n\nSLIST_HEAD(,os_memblock)\n\n\nList pointer to chain free memory
  blocks\n\n\n\n\n\n\nname\n\n\nName for the memory block\n\n\n\n\n\n\n\n\nList of Functions\n\n\nThe functions available in mem_pool are:\n\n\n\n\nos_memblock_get\n\n\nos_mempool_init\n\n\nos_memblock_put\n\n\nOS_MEMPOOL_BYTES\n\n\nOS_MEMPOOL_SIZE", 
+            "text": "Memory Pools\n\n\nA memory pool is a collection of fixed-size elements called memory blocks. Generally, memory pools are used when the developer wants to allocate a certain amount of memory to a given feature. Unlike the heap, where a code module is at the mercy of other code modules to ensure there is sufficient memory, memory pools can ensure sufficient memory allocation.\n\n\nDescription\n\n\nIn order to create a memory pool the developer needs to do a few things. The first task is to define the memory pool itself. This is a data structure which contains information about the pool itself (i.e. number of blocks, size of the blocks, etc).\n\n\nstruct\n \nos_mempool\n \nmy_pool\n;\n\n\n\n\n\n\nThe next order of business is to allocate the memory used by the memory pool. This memory can either be statically allocated (i.e. a global variable) or dynamically allocated (i.e. from the heap). When determining the amount of memory required for the memory pool, simply 
 multiplying the number of blocks by the size of each block is not sufficient as the OS may have alignment requirements. The alignment size definition is named \nOS_ALIGNMENT\n and can be found in os_arch.h as it is architecture specific. The memory block alignment is usually for efficiency but may be due to other reasons. Generally, blocks are aligned on 32-bit boundaries. Note that memory blocks must also be of sufficient size to hold a list pointer as this is needed to chain memory blocks on the free list.\n\n\nIn order to simplify this for the user two macros have been provided: \nOS_MEMPOOL_BYTES(n, blksize)\n and \nOS_MEMPOOL_SIZE(n, blksize)\n. The first macro returns the number of bytes needed for the memory pool while the second returns the number of \nos_membuf_t\n elements required by the memory pool. The \nos_membuf_t\n type is used to guarantee that the memory buffer used by the memory pool is aligned on the correct boundary. \n\n\nHere are some examples. Note that if a 
 custom malloc implementation is used it must guarantee that the memory buffer used by the pool is allocated on the correct boundary (i.e. OS_ALIGNMENT).\n\n\nvoid\n \n*my_memory_buffer\n;\n\nmy_memory_buffer\n \n=\n \nmalloc\n(\nOS_MEMPOOL_BYTES\n(\nNUM_BLOCKS\n, \nBLOCK_SIZE\n));\n\n\n\n\n\nos_membuf_t\n \nmy_memory_buffer\n[\nOS_MEMPOOL_SIZE\n(\nNUM_BLOCKS\n, \nBLOCK_SIZE\n)];\n\n\n\n\n\n\nNow that the memory pool has been defined as well as the memory required for the memory blocks which make up the pool the user needs to initialize the memory pool by calling \nos_mempool_init\n.\n\n\nos_mempool_init\n(\n&my_pool\n, \nNUM_BLOCKS\n, \nBLOCK_SIZE\n, \nmy_memory_buffer\n,\n                         \n\"MyPool\"\n);\n\n\n\n\n\n\nOnce the memory pool has been initialized the developer can allocate memory blocks from the pool by calling \nos_memblock_get\n. When the memory block is no longer needed the memory can be freed by calling \nos_memblock_put\n. \n\n\nData structures\n\n\nstruct\n \n
 os_mempool\n {\n    \nint\n \nmp_block_size\n;\n    \nint\n \nmp_num_blocks\n;\n    \nint\n \nmp_num_free\n;\n    \nuint32_t\n \nmp_membuf_addr\n;\n    \nSTAILQ_ENTRY\n(\nos_mempool\n) \nmp_list\n;    \n    \nSLIST_HEAD\n(,\nos_memblock\n);\n    \nchar\n \n*name\n;\n};\n\n\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nmp_block_size\n\n\nSize of the memory blocks, in bytes. This is not the actual  number of bytes used by each block; it is the requested size of each block. The actual memory block size will be aligned to OS_ALIGNMENT bytes\n\n\n\n\n\n\nmp_num_blocks\n\n\nNumber of memory blocks in the pool\n\n\n\n\n\n\nmp_num_free\n\n\nNumber of free blocks left\n\n\n\n\n\n\nmp_membuf_addr\n\n\nThe address of the memory block. This is used to check that a valid memory block is being freed.\n\n\n\n\n\n\nmp_list\n\n\nList pointer to chain memory pools so they can be displayed by newt tools\n\n\n\n\n\n\nSLIST_HEAD(,os_memblock)\n\n\nList pointer to chain free memory
 blocks\n\n\n\n\n\n\nname\n\n\nName for the memory block\n\n\n\n\n\n\n\n\nList of Functions/Macros\n\n\nThe functions/macros available in mem_pool are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_memblock_get\n\n\nAllocates an element from the memory pool.\n\n\n\n\n\n\nos_mempool_init\n\n\nInitializes the memory pool.\n\n\n\n\n\n\nos_memblock_put\n\n\nReleases a previously allocated element back to the pool.\n\n\n\n\n\n\nOS_MEMPOOL_BYTES\n\n\nCalculates how many bytes of memory are used by n elements, when the individual element size is blksize bytes.\n\n\n\n\n\n\nOS_MEMPOOL_SIZE\n\n\nCalculates the number of os_membuf_t elements used by n blocks of size blksize bytes.", 
             "title": "toc"
         }, 
         {
@@ -2586,9 +2616,9 @@
             "title": "Data structures"
         }, 
         {
-            "location": "/os/core_os/memory_pool/memory_pool/#list-of-functions", 
-            "text": "The functions available in mem_pool are:   os_memblock_get  os_mempool_init  os_memblock_put  OS_MEMPOOL_BYTES  OS_MEMPOOL_SIZE", 
-            "title": "List of Functions"
+            "location": "/os/core_os/memory_pool/memory_pool/#list-of-functionsmacros", 
+            "text": "The functions/macros available in mem_pool are:     Function  Description      os_memblock_get  Allocates an element from the memory pool.    os_mempool_init  Initializes the memory pool.    os_memblock_put  Releases a previously allocated element back to the pool.    OS_MEMPOOL_BYTES  Calculates how many bytes of memory are used by n elements when each element is blksize bytes.    OS_MEMPOOL_SIZE  Calculates the number of os_membuf_t elements used by n blocks of size blksize bytes.", 
+            "title": "List of Functions/Macros"
         }, 
         {
             "location": "/os/core_os/memory_pool/os_memblock_get/", 
@@ -2737,7 +2767,7 @@
         }, 
         {
             "location": "/os/core_os/heap/heap/", 
-            "text": "Heap\n\n\nAPI for doing dynamic memory allocation.\n\n\nDescription\n\n\nThis provides malloc()/free() functionality with locking.  The shared resource heap needs to be protected from concurrent access when OS has been started. \nos_malloc()\n function grabs a mutex before calling \nmalloc()\n.\n\n\nData structures\n\n\nN/A\n\n\nList of Functions\n\n\nThe functions available in heap are:\n\n\n\n\nos_free\n\n\nos_malloc\n\n\nos_realloc", 
+            "text": "Heap\n\n\nAPI for doing dynamic memory allocation.\n\n\nDescription\n\n\nThis provides malloc()/free() functionality with locking.  The shared resource heap needs to be protected from concurrent access when the OS has been started. The \nos_malloc()\n function grabs a mutex before calling \nmalloc()\n.\n\n\nData structures\n\n\nN/A\n\n\nList of Functions\n\n\nThe functions available in heap are:\n\n\n\n\n\n\n\n\nFunction\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nos_free\n\n\nFrees previously allocated memory back to the heap.\n\n\n\n\n\n\nos_malloc\n\n\nAllocates the given number of bytes from the heap and returns a pointer to it.\n\n\n\n\n\n\nos_realloc\n\n\nTries to resize a previously allocated memory block and returns a pointer to the resized memory.", 
             "title": "toc"
         }, 
         {
@@ -2757,7 +2787,7 @@
         }, 
         {
             "location": "/os/core_os/heap/heap/#list-of-functions", 
-            "text": "The functions available in heap are:   os_free  os_malloc  os_realloc", 
+            "text": "The functions available in heap are:     Function  Description      os_free  Frees previously allocated memory back to the heap.    os_malloc  Allocates the given number of bytes from the heap and returns a pointer to it.    os_realloc  Tries to resize a previously allocated memory block and returns a pointer to the resized memory.", 
             "title": "List of Functions"
         }, 
         {
@@ -2852,7 +2882,7 @@
         }, 
         {
             "location": "/os/core_os/mbuf/mbuf/", 
-            "text": "Mbufs\n\n\nThe mbuf (short for memory buffer) is a common concept in networking stacks. The mbuf is used to hold packet data as it traverses the stack. The mbuf also generally stores header information or other networking stack information that is carried around with the packet. The mbuf and its associated library of functions were developed to make common networking stack operations (like stripping and adding protocol headers) efficient and as copy-free as possible.\n\n\nIn its simplest form, an mbuf is a memory block with some space reserved for internal information and a pointer which is used to \"chain\" memory blocks together in order to create a \"packet\". This is a very important aspect of the mbuf: the ability to chain mbufs together to create larger \"packets\" (chains of mbufs).\n\n\nWhy use mbufs?\n\n\nThe main reason is to conserve memory. Consider a networking protocol that generally sends small packets but occasionally sends large ones. The Blueto
 oth Low Energy (BLE) protocol is one such example. A flat buffer would need to be sized so that the maximum packet size could be contained by the buffer. With the mbuf, a number of mbufs can be chained together so that the occasional large packet can be handled while leaving more packet buffers available to the networking stack for smaller packets.\n\n\nPacket Header mbuf\n\n\nNot all mbufs are created equal. The first mbuf in a chain of mbufs is a special mbuf called a \"packet header mbuf\". The reason that this mbuf is special is that it contains the length of all the data contained by the chain of mbufs (the packet length, in other words). The packet header mbuf may also contain a user defined structure (called a \"user header\") so that networking protocol specific information can be conveyed to various layers of the networking stack. Any mbufs that are part of the packet (i.e. in the mbuf chain but not the first one) are \"normal\" (i.e. non-packet header) mbufs. A normal mbuf
  does not have any packet header or user packet header structures in it; it only contains the basic mbuf header (\nstruct os_mbuf\n). Figure 1 illustrates these two types of mbufs. Note that the numbers/text in parentheses denote the size of the structures/elements (in bytes) and that MBLEN is the memory block length of the memory pool used by the mbuf pool.\n\n\n\n\nNormal mbuf\n\n\nNow let's take a deeper dive into the mbuf structure. Figure 2 illustrates a normal mbuf and breaks out the various fields in the \nos_mbuf\n structure. \n\n\n\n\nThe \nom_data\n field is a pointer to where the data starts inside the data buffer. Typically, mbufs that are allocated from the mbuf pool (discussed later) have their om_data pointer set to the start of the data buffer, but there are cases where this may not be desirable (when adding a protocol header to a packet, for example). \n\n\nThe \nom_flags\n field is a set of flags used internally by the mbuf library. Currently, no flags have been define
 d. \n\n\nThe \nom_pkthdr_len\n field is the total length of all packet headers in the mbuf. For normal mbufs this is set to 0 as there are no packet or user packet headers. For packet header mbufs, this would be set to the length of the packet header structure (16) plus the size of the user packet header (if any). Note that it is this field which differentiates packet header mbufs from normal mbufs (i.e. if \nom_pkthdr_len\n is zero, this is a normal mbuf; otherwise it is a packet header mbuf). \n\n\nThe \nom_len\n field contains the amount of user data in the data buffer. When initially allocated, this field is 0 as there is no user data in the mbuf. \n\n\nThe \nom_omp\n field is a pointer to the pool from which this mbuf has been allocated. This is used internally by the mbuf library. \n\n\nThe \nom_next\n field is a linked list element which is used to chain mbufs.\n\n\n\n\nFigure 2 also shows a normal mbuf with actual values in the \nos_mbuf\n structure. This mbuf starts at ad
 dress 0x1000 and is 256 bytes in total length. In this example, the user has copied 33 bytes into the data buffer starting at address 0x1010 (this is where om_data points). Note that the packet header length in this mbuf is 0 as it is not a packet header mbuf.\n\n\n\n\nFigure 3 illustrates the packet header mbuf along with some chained mbufs (i.e a \"packet\"). In this example, the user header structure is defined to be 8 bytes. Note that in figure 3 we show a number of different mbufs with varying \nom_data\n pointers and lengths since we want to show various examples of valid mbufs. For all the mbufs (both packet header and normal ones) the total length of the memory block is 128 bytes.\n\n\n\n\nMbuf pools\n\n\nMbufs are collected into \"mbuf pools\" much like memory blocks. The mbuf pool itself contains a pointer to a memory pool. The memory blocks in this memory pool are the actual mbufs; both normal and packet header mbufs. Thus, the memory block (and corresponding memory pool)
  must be sized correctly. In other words, the memory blocks which make up the memory pool used by the mbuf pool must be at least: sizeof(struct os_mbuf) + sizeof(struct os_mbuf_pkthdr) + sizeof(struct user_defined_header) + desired minimum data buffer length. For example, if the developer wants mbufs to contain at least 64 bytes of user data and they have a user header of 12 bytes, the size of the memory block would be (at least): 64 + 12 + 16 + 8, or 100 bytes. Yes, this is a fair amount of overhead. However, the flexibility provided by the mbuf library usually outweighs overhead concerns.\n\n\nCreate mbuf pool\n\n\nCreating an mbuf pool is fairly simple: create a memory pool and then create the mbuf pool using that memory pool. Once the developer has determined the size of the user data needed per mbuf (this is based on the application/networking stack and is outside the scope of this discussion) and the size of the user header (if any), the memory blocks can be sized. In the exam
 ple shown below, the application requires 64 bytes of user data per mbuf and also allocates a user header (called struct user_hdr). Note that we do not show the user header data structure as there really is no need; all we need to do is to account for it when creating the memory pool. In the example, we use the macro \nMBUF_PKTHDR_OVERHEAD\n to denote the amount of packet header overhead per mbuf and \nMBUF_MEMBLOCK_OVERHEAD\n to denote the total amount of overhead required per memory block. The macro \nMBUF_BUF_SIZE\n is used to denote the amount of payload that the application requires (aligned on a 32-bit boundary in this case). All this leads to the total memory block size required, denoted by the macro \nMBUF_MEMBLOCK_SIZE\n.\n\n\n#define MBUF_PKTHDR_OVERHEAD    sizeof(struct os_mbuf_pkthdr) + sizeof(struct user_hdr)\n\n\n#define MBUF_MEMBLOCK_OVERHEAD  sizeof(struct os_mbuf) + MBUF_PKTHDR_OVERHEAD\n\n\n\n#define MBUF_NUM_MBUFS      (32)\n\n\n#define MBUF_PAYLOAD_SIZE   (64
 )\n\n\n#define MBUF_BUF_SIZE       OS_ALIGN(MBUF_PAYLOAD_SIZE, 4)\n\n\n#define MBUF_MEMBLOCK_SIZE  (MBUF_BUF_SIZE + MBUF_MEMBLOCK_OVERHEAD)\n\n\n#define MBUF_MEMPOOL_SIZE   OS_MEMPOOL_SIZE(MBUF_NUM_MBUFS, MBUF_MEMBLOCK_SIZE)\n\n\n\nstruct\n \nos_mbuf_pool\n \ng_mbuf_pool\n; \n\nstruct\n \nos_mempool\n \ng_mbuf_mempool\n;\n\nos_membuf_t\n \ng_mbuf_buffer\n[\nMBUF_MEMPOOL_SIZE\n];\n\n\nvoid\n\n\ncreate_mbuf_pool\n(\nvoid\n)\n{\n    \nint\n \nrc\n;\n\n    \nrc\n \n=\n \nos_mempool_init\n(\n&g_mbuf_mempool\n, \nMBUF_NUM_MBUFS\n, \n                          \nMBUF_MEMBLOCK_SIZE\n, \n&g_mbuf_buffer\n[\n0\n], \n\"mbuf_pool\"\n);\n    \nassert\n(\nrc\n \n==\n \n0\n);\n\n    \nrc\n \n=\n \nos_mbuf_pool_init\n(\n&g_mbuf_pool\n, \n&g_mbuf_mempool\n, \nMBUF_MEMBLOCK_SIZE\n, \n                           \nMBUF_NUM_MBUFS\n);\n    \nassert\n(\nrc\n \n==\n \n0\n);\n}\n\n\n\n\n\nUsing mbufs\n\n\nThe following examples illustrate typical mbuf usage. There are two basic mbuf allocation APIs: \nos_mbuf_get()\n 
 and \nos_mbuf_get_pkthdr()\n. The first API obtains a normal mbuf whereas the latter obtains a packet header mbuf. Typically, application developers use \nos_mbuf_get_pkthdr()\n and rarely, if ever, need to call \nos_mbuf_get()\n as the rest of the mbuf API (e.g. \nos_mbuf_append()\n, \nos_mbuf_copyinto()\n, etc.) typically deals with allocating and chaining mbufs. It is recommended to use the provided API to copy data into/out of mbuf chains and/or manipulate mbufs.\n\n\nIn \nexample1\n, the developer creates a packet and then sends the packet to a networking interface. The code sample also provides an example of copying data out of an mbuf as well as use of the \"pullup\" api (another very common mbuf api).\n\n\nint\n\n\nmbuf_usage_example1\n(\nuint8_t\n \n*mydata\n, \nint\n \nmydata_length\n)\n{\n    \nint\n \nrc\n;\n    \nstruct\n \nos_mbuf\n \n*om\n;\n\n    \n/* get a packet header mbuf */\n\n    \nom\n \n=\n \nos_mbuf_get_pkthdr\n(\n&g_mbuf_pool\n, \nsizeof\n(\nstruct\n \nuser_
 hdr\n));\n    \nif\n (\nom\n) {\n        \n/* \n\n\n         * Copy user data into mbuf. NOTE: if mydata_length is greater than the\n\n\n         * mbuf payload size (64 bytes using above example), mbufs are allocated\n\n\n         * and chained together to accommodate the total packet length.\n\n\n         */\n\n        \nrc\n \n=\n \nos_mbuf_copyinto\n(\nom\n, \n0\n, \nmydata\n, \nmydata_length\n);\n        \nif\n (\nrc\n) {\n            \n/* Error! Could not allocate enough mbufs for total packet length */\n\n            \nreturn\n \n-\n1\n;\n        }\n\n        \n/* Send packet to networking interface */\n\n        \nsend_pkt\n(\nom\n);\n    }\n}\n\n\n\n\n\nIn \nexample2\n we show use of the pullup api as this illustrates some of the typical pitfalls developers encounter when using mbufs. The first pitfall is one of alignment/padding. Depending on the processor and/or compiler, the sizeof() of a structure may vary. Thus, the size of \nmy_protocol_header\n may be different inside the packet 
 data of the mbuf than the size of the structure on the stack or as a global variable, for instance. While some networking protocols may align protocol information on convenient processor boundaries, many others try to conserve bytes \"on the air\" (i.e. inside the packet data). Typical methods used to deal with this are \"packing\" the structure (i.e. forcing the compiler not to pad) or creating protocol headers that do not require padding. \nexample2\n assumes that one of these methods was used when defining the \nmy_protocol_header\n structure.\n\n\nAnother common pitfall occurs around endianness. A network protocol may be little endian or big endian; it all depends on the protocol specification. Processors also have an endianness; this means that the developer has to be careful that the processor endianness and the protocol endianness are handled correctly. In \nexample2\n, some common networking functions are used: \nntohs()\n and \nntohl()\n. These are shorthand for \"network order to h
 ost order, short\" and \"network order to host order, long\". Basically, these functions convert data of a certain size (i.e. 16 bits, 32 bits, etc) to the endianness of the host. Network byte order is big-endian (most significant byte first), so these functions convert big-endian byte order to host order (thus, the implementation of these functions is host dependent). Note that the BLE networking stack \"on the air\" format is least significant byte first (i.e. little endian), so a \"bletoh\" function would have to take little endian format and convert to host format.\n\n\nA long story short: the developer must take care when copying structure data to/from mbufs and flat buffers!\n\n\nA final note: these examples assume the same mbuf structure and definitions used in the first example. \n\n\nint\n\n\nmbuf_usage_example2\n(\nstruct\n \nos_mbuf\n \n*rxpkt\n)\n{\n    \nint\n \nrc\n;\n    \nuint8_t\n \npacket_data\n[\n16\n];\n    \nstruct\n \nos_mbuf\n \n*om\n;\n    \nstruct\n \nmy_protocol_
 header\n \n*phdr\n;\n\n    \n/* Make sure that \nmy_protocol_header\n bytes are contiguous in mbuf */\n\n    \nom\n \n=\n \nos_mbuf_pullup\n(\nrxpkt\n, \nsizeof\n(\nstruct\n \nmy_protocol_header\n));\n    \nif\n (\n!om\n) {\n        \n/* Not able to pull up data into contiguous area */\n\n        \nreturn\n \n-\n1\n;\n    }\n\n    \n/* \n\n\n     * Get the protocol information from the packet. In this example we presume that we\n\n\n     * are interested in protocol types that are equal to MY_PROTOCOL_TYPE, are not zero\n\n\n     * length, and have had some time in flight.\n\n\n     */\n\n    \nphdr\n \n=\n \nOS_MBUF_DATA\n(\nom\n, \nstruct\n \nmy_protocol_header\n \n*\n);\n    \ntype\n \n=\n \nntohs\n(\nphdr->prot_type\n);\n    \nlength\n \n=\n \nntohs\n(\nphdr->prot_length\n);\n    \ntime_in_flight\n \n=\n \nntohl\n(\nphdr->prot_tif\n);\n\n    \nif\n ((\ntype\n \n==\n \nMY_PROTOCOL_TYPE\n) \n&&\n (\nlength\n \n>\n \n0\n) \n&&\n (\ntime_in_flight\n \n>\n \n0\n)) {\n        \nrc\n \n=\n 
 \nos_mbuf_copydata\n(\nrxpkt\n, \nsizeof\n(\nstruct\n \nmy_protocol_header\n), \n16\n, \npacket_data\n);\n        \nif\n (\n!rc\n) {\n            \n/* Success! Perform operations on packet data */\n\n            \n... \nuser\n \ncode\n \nhere\n ...\n\n        }\n    }\n\n    \n/* Free passed in packet (mbuf chain) since we don\nt need it anymore */\n\n    \nos_mbuf_free_chain\n(\nom\n);\n}\n\n\n\n\n\n\n\nData Structures\n\n\nstruct\n \nos_mbuf_pool\n {\n    \nuint16_t\n \nomp_databuf_len\n;\n    \nuint16_t\n \nomp_mbuf_count\n;\n    \nstruct\n \nos_mempool\n \n*omp_pool\n;\n    \nSTAILQ_ENTRY\n(\nos_mbuf_pool\n) \nomp_next\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nomp_databuf_len\n\n\nThe length, in bytes, of the \"data buffer\" of the mbuf. The data buffer of the mbuf is everything except the os_mbuf structure (which is present in all types of mbufs)\n\n\n\n\n\n\nomp_mbuf_count\n\n\nTotal number of mbufs in the pool when allocated. This is NOT the 
 number of free mbufs in the pool!\n\n\n\n\n\n\nomp_pool\n\n\nThe memory pool from which the mbufs are allocated\n\n\n\n\n\n\nomp_next\n\n\nThis is a linked list pointer which chains memory pools. It is used by the system memory pool library\n\n\n\n\n\n\n\n\n\n\nstruct\n \nos_mbuf_pkthdr\n {\n    \nuint16_t\n \nomp_len\n;\n    \nuint16_t\n \nomp_flags\n;\n    \nSTAILQ_ENTRY\n(\nos_mbuf_pkthdr\n) \nomp_next\n;\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nomp_len\n\n\nLength, in bytes, of the \"packet\". This is the sum of the user data in all the mbufs chained to the packet header mbuf (including the packet header mbuf)\n\n\n\n\n\n\nomp_flags\n\n\nPacket header flags.\n\n\n\n\n\n\nomp_next\n\n\nLinked list pointer to chain \"packets\". This can be used to add mbuf chains to a queue or linked list and is there for convenience.\n\n\n\n\n\n\n\n\n\n\nstruct\n \nos_mbuf\n {\n    \nuint8_t\n \n*om_data\n;\n    \nuint8_t\n \nom_flags\n;\n    \nuint8_t\n \nom_pkthd
 r_len\n;\n    \nuint16_t\n \nom_len\n;\n    \nstruct\n \nos_mbuf_pool\n \n*om_omp\n;\n    \nSLIST_ENTRY\n(\nos_mbuf\n) \nom_next\n;\n    \nuint8_t\n \nom_databuf\n[\n0\n];\n};\n\n\n\n\n\n\n\n\n\n\n\nElement\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nom_data\n\n\nPointer to start of user data in mbuf data buffer\n\n\n\n\n\n\nom_flags\n\n\nmbuf flags field. Currently all flags unused.\n\n\n\n\n\n\nom_pkthdr_len\n\n\nThe total length of all packet headers in the mbuf (mbuf packet header plus user packet header), in bytes\n\n\n\n\n\n\nom_len\n\n\nThe length of the user data contained in this mbuf, in bytes\n\n\n\n\n\n\nom_omp\n\n\nMemory pool pointer. This is the mbuf pool from which this mbuf was allocated.\n\n\n\n\n\n\nom_next\n\n\nPointer to next mbuf in packet chain\n\n\n\n\n\n\nom_databuf\n\n\nmbuf data buffer (accessor to start of mbuf data buffer). Note that the mbuf data buffer refers to the start of either the user data in normal mbufs or the start of the os mbuf packet header for p
 acket header mbufs\n\n\n\n\n\n\n\n\nList of Functions/Macros\n\n\nThe functions/macros available in mbuf are:\n\n\n\n\nOS_MBUF_PKTHDR\n\n\nOS_MBUF_PKTHDR_TO_MBUF\n\n\nOS_MBUF_PKTLEN\n\n\nOS_MBUF_DATA\n\n\nOS_MBUF_USRHDR\n\n\nOS_MBUF_USRHDR_LEN\n\n\nOS_MBUF_LEADINGSPACE\n\n\nOS_MBUF_TRAILINGSPACE\n\n\nos_mbuf_adj\n\n\nos_mbuf_append\n\n\nos_mbuf_concat\n\n\nos_mbuf_copydata\n\n\nos_mbuf_copyinto\n\n\nos_mbuf_dup\n\n\nos_mbuf_extend\n\n\nos_mbuf_free_chain\n\n\nos_mbuf_get\n\n\nos_mbuf_get_pkthdr\n\n\nos_mbuf_memcmp\n\n\nos_mbuf_off\n\n\nos_mbuf_pool_init\n\n\nos_mbuf_prepend\n\n\nos_mbuf_pullup", 
+            "text": "Mbufs\n\n\nThe mbuf (short for memory buffer) is a common concept in networking stacks. The mbuf is used to hold packet data as it traverses the stack. The mbuf also generally stores header information or other networking stack information that is carried around with the packet. The mbuf and its associated library of functions were developed to make common networking stack operations (like stripping and adding protocol headers) efficient and as copy-free as possible.\n\n\nIn its simplest form, an mbuf is a memory block with some space reserved for internal information and a pointer which is used to \"chain\" memory blocks together in order to create a \"packet\". This is a very important aspect of the mbuf: the ability to chain mbufs together to create larger \"packets\" (chains of mbufs).\n\n\nWhy use mbufs?\n\n\nThe main reason is to conserve memory. Consider a networking protocol that generally sends small packets but occasionally sends large ones. The Blueto
 oth Low Energy (BLE) protocol is one such example. A flat buffer would need to be sized so that the maximum packet size could be contained by the buffer. With the mbuf, a number of mbufs can be chained together so that the occasional large packet can be handled while leaving more packet buffers available to the networking stack for smaller packets.\n\n\nPacket Header mbuf\n\n\nNot all mbufs are created equal. The first mbuf in a chain of mbufs is a special mbuf called a \"packet header mbuf\". The reason that this mbuf is special is that it contains the length of all the data contained by the chain of mbufs (the packet length, in other words). The packet header mbuf may also contain a user defined structure (called a \"user header\") so that networking protocol specific information can be conveyed to various layers of the networking stack. Any mbufs that are part of the packet (i.e. in the mbuf chain but not the first one) are \"normal\" (i.e. non-packet header) mbufs. A normal mbuf
  does not have any packet header or user packet header structures in it; it only contains the basic mbuf header (\nstruct os_mbuf\n). Figure 1 illustrates these two types of mbufs. Note that the numbers/text in parentheses denote the size of the structures/elements (in bytes) and that MBLEN is the memory block length of the memory pool used by the mbuf pool.\n\n\n\n\nNormal mbuf\n\n\nNow let's take a deeper dive into the mbuf structure. Figure 2 illustrates a normal mbuf and breaks out the various fields in the \nos_mbuf\n structure. \n\n\n\n\nThe \nom_data\n field is a pointer to where the data starts inside the data buffer. Typically, mbufs that are allocated from the mbuf pool (discussed later) have their om_data pointer set to the start of the data buffer, but there are cases where this may not be desirable (when adding a protocol header to a packet, for example). \n\n\nThe \nom_flags\n field is a set of flags used internally by the mbuf library. Currently, no flags have been define
 d. \n\n\nThe \nom_pkthdr_len\n field is the total length of all packet headers in the mbuf. For normal mbufs this is set to 0 as there are no packet or user packet headers. For packet header mbufs, this would be set to the length of the packet header structure (16) plus the size of the user packet header (if any). Note that it is this field which differentiates packet header mbufs from normal mbufs (i.e. if \nom_pkthdr_len\n is zero, this is a normal mbuf; otherwise it is a packet header mbuf). \n\n\nThe \nom_len\n field contains the amount of user data in the data buffer. When initially allocated, this field is 0 as there is no user data in the mbuf. \n\n\nThe \nom_omp\n field is a pointer to the pool from which this mbuf has been allocated. This is used internally by the mbuf library. \n\n\nThe \nom_next\n field is a linked list element which is used to chain mbufs.\n\n\n\n\nFigure 2 also shows a normal mbuf with actual values in the \nos_mbuf\n structure. This mbuf starts at ad
 dress 0x1000 and is 256 bytes in total length. In this example, the user has copied 33 bytes into the data buffer starting at address 0x1010 (this is where om_data points). Note that the packet header length in this mbuf is 0 as it is not a packet header mbuf.\n\n\n\n\nFigure 3 illustrates the packet header mbuf along with some chained mbufs (i.e a \"packet\"). In this example, the user header structure is defined to be 8 bytes. Note that in figure 3 we show a number of different mbufs with varying \nom_data\n pointers and lengths since we want to show various examples of valid mbufs. For all the mbufs (both packet header and normal ones) the total length of the memory block is 128 bytes.\n\n\n\n\nMbuf pools\n\n\nMbufs are collected into \"mbuf pools\" much like memory blocks. The mbuf pool itself contains a pointer to a memory pool. The memory blocks in this memory pool are the actual mbufs; both normal and packet header mbufs. Thus, the memory block (and corresponding memory pool)
  must be sized correctly. In other words, the memory blocks which make up the memory pool used by the mbuf pool must be at least: sizeof(struct os_mbuf) + sizeof(struct os_mbuf_pkthdr) + sizeof(struct user_defined_header) + des

<TRUNCATED>