Posted to commits@nuttx.apache.org by GitBox <gi...@apache.org> on 2022/05/23 23:17:47 UTC

[GitHub] [incubator-nuttx] anchao commented on a diff in pull request #6279: include:add recursive lock

anchao commented on code in PR #6279:
URL: https://github.com/apache/incubator-nuttx/pull/6279#discussion_r879932826


##########
include/nuttx/mutex.h:
##########
@@ -205,6 +214,200 @@ static inline int nxmutex_unlock(FAR mutex_t *mutex)
   return nxsem_post(mutex);
 }
 
+/****************************************************************************
+ * Name: nxrmutex_init
+ *
+ * Description:
+ *   This function initializes the UNNAMED recursive mutex. Following a
+ *   successful call to nxrmutex_init(), the recursive mutex may be used in
+ *   subsequent calls to nxrmutex_lock(), nxrmutex_unlock(),
+ *   and nxrmutex_trylock(). The recursive mutex remains usable
+ *   until it is destroyed.
+ *
+ * Parameters:
+ *   rmutex - Recursive mutex to be initialized
+ *
+ * Return Value:
+ *   This is an internal OS interface and should not be used by applications.
+ *   It follows the NuttX internal error return policy:  Zero (OK) is
+ *   returned on success.  A negated errno value is returned on failure.
+ *
+ ****************************************************************************/
+
+static inline int nxrmutex_init(FAR rmutex_t *rmutex)
+{
+  rmutex->count = 0;
+  rmutex->holder = INVALID_PROCESS_ID;
+  return nxmutex_init(&rmutex->mutex);
+}
+
+/****************************************************************************
+ * Name: nxrmutex_destroy
+ *
+ * Description:
+ *   This function destroys the UNNAMED recursive mutex.
+ *
+ * Parameters:
+ *   rmutex - Recursive mutex to be destroyed
+ *
+ * Return Value:
+ *   This is an internal OS interface and should not be used by applications.
+ *   It follows the NuttX internal error return policy:  Zero (OK) is
+ *   returned on success.  A negated errno value is returned on failure.
+ *
+ ****************************************************************************/
+
+static inline int nxrmutex_destroy(FAR rmutex_t *rmutex)
+{
+  return nxmutex_destroy(&rmutex->mutex);
+}
+
+/****************************************************************************
+ * Name: nxrmutex_lock
+ *
+ * Description:
+ *   This function attempts to lock the recursive mutex referenced by
+ *   'rmutex'. The recursive mutex can be locked multiple times in the same
+ *   thread.
+ *
+ * Parameters:
+ *   rmutex - Recursive mutex descriptor.
+ *
+ * Return Value:
+ *   This is an internal OS interface and should not be used by applications.
+ *   It follows the NuttX internal error return policy:  Zero (OK) is
+ *   returned on success.  A negated errno value is returned on failure.
+ *   Possible returned errors:
+ *
+ ****************************************************************************/
+
+static inline int nxrmutex_lock(FAR rmutex_t *rmutex)
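
The hunk is truncated at the declaration of nxrmutex_lock() above. Purely to illustrate the behaviour documented in its comment block (this is a sketch, not the code from this PR; using gettid() for the calling thread's identity is an assumption), a recursive lock of this shape is typically layered on the plain mutex like so:

```c
/* Illustrative sketch only: recursive acquisition built on nxmutex_lock(),
 * using the holder/count fields set up by nxrmutex_init() above. */

static inline int nxrmutex_lock(FAR rmutex_t *rmutex)
{
  pid_t tid = gettid();   /* assumed way to identify the calling thread */
  int ret = OK;

  if (rmutex->holder == tid)
    {
      /* Already held by this thread: just bump the recursion count */

      rmutex->count++;
    }
  else
    {
      /* First acquisition by this thread: take the underlying mutex */

      ret = nxmutex_lock(&rmutex->mutex);
      if (ret >= 0)
        {
          rmutex->holder = tid;
          rmutex->count  = 1;
        }
    }

  return ret;
}
```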

Review Comment:
   @xiaoxiang781216  @pkarashchenko 
   Some simple functions seem to get better performance when implemented as inline functions.
   The following is an optimization of up_interrupt_context(); the inline version saves a few instruction cycles at every call site:
   
   Test on Cortex-R:

   Original (out-of-line):
   
   ```
   0000bb88 <up_interrupt_context>:
       bb88: e59f300c  ldr r3, [pc, #12] ; bb9c <up_interrupt_context+0x14>
       bb8c: e5930000  ldr r0, [r3]
       bb90: e2500000  subs  r0, r0, #0
       bb94: 13a00001  movne r0, #1
       bb98: e12fff1e  bx  lr
       bb9c: 0001a978  .word 0x0001a978
   
   int file_mq_send(FAR struct file *mq, FAR const char *msg, size_t msglen,
                    unsigned int prio)
   {
       1c38: e92d47f3  push  {r0, r1, r4, r5, r6, r7, r8, r9, sl, lr}
       1c3c: e1a09003  mov r9, r3
   ...
       1c80: eb000556  bl  31e0 <sched_lock>
       1c84: e10fa000  mrs sl, CPSR
       1c88: f10c0080  cpsid i
       1c8c: eb0027bd  bl  bb88 <up_interrupt_context>
       1c90: e3500000  cmp r0, #0
       1c94: 0a000005  beq 1cb0 <file_mq_send+0x78>
   ...
   ```
   
   Inline:
   
   ```
   int file_mq_send(FAR struct file *mq, FAR const char *msg, size_t msglen,
                    unsigned int prio)
   {
       1c38: e92d47f3  push  {r0, r1, r4, r5, r6, r7, r8, r9, sl, lr}
       1c3c: e1a09003  mov r9, r3
   ...
       1c8c: e59f3078  ldr r3, [pc, #120]  ; 1d0c <file_mq_send+0xd4>
       1c90: e5933000  ldr r3, [r3]
       1c94: e3530000  cmp r3, #0
       1c98: 0a000005  beq 1cb4 <file_mq_send+0x7c>
       
   ```
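
   In C terms the only difference is where that one-line test is emitted. A minimal sketch of the inline form being measured (the g_current_regs name and its role as the "in interrupt" marker are assumptions read off the disassembly above, not taken verbatim from the NuttX sources):

   ```c
   #include <stdbool.h>
   #include <stddef.h>

   /* Assumed to be non-NULL only while an interrupt is being serviced,
    * matching the single load-and-compare sequence in the listings above. */

   extern volatile void *g_current_regs;

   static inline bool up_interrupt_context(void)
   {
     /* Inlining this one compare removes the bl/bx pair and lets the
      * compiler fold the test directly into callers such as file_mq_send(). */

     return g_current_regs != NULL;
   }
   ```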
       
   You can see that the optimized version saves about 6 instructions per inlined call.
   In addition, some functions (such as file_mq_send) call up_interrupt_context() repeatedly, e.g.:
   ```
   file_mq_send
   |
    ->up_interrupt_context
    ->nxmq_alloc_msg
      ->up_interrupt_context
   ```
      
   In this way we can save about 12 instructions in a single send.
   It is worth noting that this optimization is even more visible in some scenarios:
   
   mq_send perf count test:
   
   mq_send (low prio -> hi prio):
   
   thread 1 (prio 100):
   ```
   begin = up_perf_gettime();
   ret = mq_send(mqdes, msg, 1, 1);
   ```
   
   thread 2 (prio 101):
   ```
   ret = mq_receive(mqdes, msg, sizeof(msg), NULL);
   end = up_perf_gettime();
   ```
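
   Put together, a self-contained version of the measurement might look like the sketch below. up_perf_gettime() is the NuttX cycle counter used above; the queue name, message sizes, priority wiring and the omission of error handling are assumptions added only to make the two fragments runnable as one program:

   ```c
   /* Sketch only: combines the two fragments above into a single test. */

   #include <fcntl.h>
   #include <mqueue.h>
   #include <pthread.h>
   #include <sched.h>
   #include <stdint.h>
   #include <stdio.h>

   #include <nuttx/arch.h>              /* up_perf_gettime() (assumed header) */

   static mqd_t g_mqdes;
   static volatile uint32_t g_begin;

   /* Thread 1, lower priority (100): timestamp right before mq_send() */

   static void *sender(void *arg)
   {
     char msg[1] = { 'x' };

     g_begin = up_perf_gettime();
     mq_send(g_mqdes, msg, 1, 1);
     return NULL;
   }

   /* Thread 2, higher priority (101): timestamp right after mq_receive()
    * returns, so end - begin covers the send plus the context switch. */

   static void *receiver(void *arg)
   {
     char msg[32];
     uint32_t end;

     mq_receive(g_mqdes, msg, sizeof(msg), NULL);
     end = up_perf_gettime();
     printf("mq_send low prio -> hi prio: %lu cycles\n",
            (unsigned long)(end - g_begin));
     return NULL;
   }

   int main(void)
   {
     struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 32 };
     struct sched_param param;
     pthread_attr_t pattr;
     pthread_t rx;
     pthread_t tx;

     g_mqdes = mq_open("/perfmq", O_RDWR | O_CREAT, 0644, &attr);

     pthread_attr_init(&pattr);

     param.sched_priority = 101;        /* receiver starts first and blocks */
     pthread_attr_setschedparam(&pattr, &param);
     pthread_create(&rx, &pattr, receiver, NULL);

     param.sched_priority = 100;        /* sender then wakes it up */
     pthread_attr_setschedparam(&pattr, &param);
     pthread_create(&tx, &pattr, sender, NULL);

     pthread_join(tx, NULL);
     pthread_join(rx, NULL);

     mq_close(g_mqdes);
     mq_unlink("/perfmq");
     return 0;
   }
   ```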
   
   Before optimization:
   `end - begin = 718`
   Optimized:
   `end - begin = 688`
   
   The optimized version needs 688 cycle counts, but that is still far behind other RTOSes.
   From the data below you can see that the performance gap between NuttX and ThreadX is very large:
   
   ```
                        nuttx  threadx  (cycles)
   mq_send(hi->low)       833      282
   mq_send(low->hi)       688      172
   mq_send(multi)        1978      467
   sem_post(low->hi)      225      147
   ```
   


