Posted to commits@nuttx.apache.org by ma...@apache.org on 2020/10/30 10:06:27 UTC

[incubator-nuttx] branch releases/10.0 updated (1456514 -> a881495)

This is an automated email from the ASF dual-hosted git repository.

masayuki pushed a change to branch releases/10.0
in repository https://gitbox.apache.org/repos/asf/incubator-nuttx.git.


    from 1456514  s/BUSY/EBUSY typo fix, which resulted in compile error
     new 2629090  sched: Fix DEBUGASSERT() in sched_unlock() for SMP
     new a881495  sched: Fix sched_lock() logic for SMP

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 sched/sched/sched_lock.c   | 19 ++++++-------------
 sched/sched/sched_unlock.c |  5 ++---
 2 files changed, 8 insertions(+), 16 deletions(-)


[incubator-nuttx] 02/02: sched: Fix sched_lock() logic for SMP

Posted by ma...@apache.org.

masayuki pushed a commit to branch releases/10.0
in repository https://gitbox.apache.org/repos/asf/incubator-nuttx.git

commit a881495f7489995591d0c1f86c31de3799e4643d
Author: Masayuki Ishikawa <ma...@gmail.com>
AuthorDate: Thu Oct 15 10:03:25 2020 +0900

    sched: Fix sched_lock() logic for SMP
    
    Summary:
    - I noticed that the sched_lock() logic differs from sched_unlock()
    - sched_lock() should use a critical section
    - The code should also be kept as simple as sched_unlock()
    - This commit fixes these issues
    
    Impact:
    - Affects SMP only
    
    Testing:
    - Tested with spresense:wifi_smp (both NCPUS=2 and 3)
    - Tested with lc823450-xgevk:rndis
    - Tested with maix-bit:smp (QEMU)
    - Tested with esp32-core:smp (QEMU)
    - Tested with sabre-6quad:smp (QEMU)
    
    Signed-off-by: Masayuki Ishikawa <Ma...@jp.sony.com>
---
 sched/sched/sched_lock.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/sched/sched/sched_lock.c b/sched/sched/sched_lock.c
index 33b3f71..2b678ee 100644
--- a/sched/sched/sched_lock.c
+++ b/sched/sched/sched_lock.c
@@ -133,29 +133,20 @@ volatile cpu_set_t g_cpu_lockset SP_SECTION;
 int sched_lock(void)
 {
   FAR struct tcb_s *rtcb;
-  irqstate_t flags;
-  int cpu;
 
   /* If the CPU supports suppression of interprocessor interrupts, then
    * simple disabling interrupts will provide sufficient protection for
    * the following operation.
    */
 
-  flags = up_irq_save();
-
-  cpu  = this_cpu();
-  rtcb = current_task(cpu);
+  rtcb = this_task();
 
   /* Check for some special cases:  (1) rtcb may be NULL only during early
    * boot-up phases, and (2) sched_lock() should have no effect if called
    * from the interrupt level.
    */
 
-  if (rtcb == NULL || up_interrupt_context())
-    {
-      up_irq_restore(flags);
-    }
-  else
+  if (rtcb != NULL && !up_interrupt_context())
     {
       /* Catch attempts to increment the lockcount beyond the range of the
        * integer type.
@@ -163,6 +154,8 @@ int sched_lock(void)
 
       DEBUGASSERT(rtcb->lockcount < MAX_LOCK_COUNT);
 
+      irqstate_t flags = enter_critical_section();
+
       /* We must hold the lock on this CPU before we increment the lockcount
        * for the first time. Holding the lock is sufficient to lockout
        * context switching.
@@ -196,8 +189,6 @@ int sched_lock(void)
 
       rtcb->lockcount++;
 
-      up_irq_restore(flags);
-
 #if defined(CONFIG_SCHED_INSTRUMENTATION_PREEMPTION) || \
     defined(CONFIG_SCHED_CRITMONITOR)
       /* Check if we just acquired the lock */
@@ -223,6 +214,8 @@ int sched_lock(void)
       nxsched_merge_prioritized((FAR dq_queue_t *)&g_readytorun,
                                 (FAR dq_queue_t *)&g_pendingtasks,
                                 TSTATE_TASK_PENDING);
+
+      leave_critical_section(flags);
     }
 
   return OK;


[incubator-nuttx] 01/02: sched: Fix DEBUGASSERT() in sched_unlock() for SMP

Posted by ma...@apache.org.

masayuki pushed a commit to branch releases/10.0
in repository https://gitbox.apache.org/repos/asf/incubator-nuttx.git

commit 2629090a6ff5dfb2366266494a98540ff3728f44
Author: Masayuki Ishikawa <ma...@gmail.com>
AuthorDate: Thu Oct 15 09:43:27 2020 +0900

    sched: Fix DEBUGASSERT() in sched_unlock() for SMP
    
    Summary:
    - I noticed a DEBUGASSERT() firing in sched_unlock()
    - The test was a Wi-Fi audio streaming stress test on spresense with 3 cores
    - g_cpu_schedlock was locked, but g_cpu_lockset was incorrect
    - The root cause: the cpu index was obtained before enter_critical_section()
    - The task was then moved from one CPU to another CPU
    - However, that call should be made within the critical section
    - This commit fixes this issue
    
    Impact:
    - Affects SMP only
    
    Testing:
    - Tested with spresense:wifi_smp (both NCPUS=2 and 3)
    - Tested with lc823450-xgevk:rndis
    - Tested with maix-bit:smp (QEMU)
    - Tested with esp32-core:smp (QEMU)
    - Tested with sabre-6quad:smp (QEMU)
    
    Signed-off-by: Masayuki Ishikawa <Ma...@jp.sony.com>
---
 sched/sched/sched_unlock.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/sched/sched/sched_unlock.c b/sched/sched/sched_unlock.c
index 857c682..3991b16 100644
--- a/sched/sched/sched_unlock.c
+++ b/sched/sched/sched_unlock.c
@@ -54,14 +54,12 @@
 int sched_unlock(void)
 {
   FAR struct tcb_s *rtcb;
-  int cpu;
 
   /* This operation is safe because the scheduler is locked and no context
    * switch may occur.
    */
 
-  cpu  = this_cpu();
-  rtcb = current_task(cpu);
+  rtcb = this_task();
 
   /* Check for some special cases:  (1) rtcb may be NULL only during
    * early boot-up phases, and (2) sched_unlock() should have no
@@ -73,6 +71,7 @@ int sched_unlock(void)
       /* Prevent context switches throughout the following. */
 
       irqstate_t flags = enter_critical_section();
+      int cpu = this_cpu();
 
       /* Decrement the preemption lock counter */