Posted to commits@spark.apache.org by do...@apache.org on 2024/02/26 22:32:20 UTC

(spark) branch master updated: [SPARK-45527][CORE][TESTS][FOLLOWUP] Reduce the number of threads from 1k to 100 in `TaskSchedulerImplSuite`

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 76c4fd56c5a5 [SPARK-45527][CORE][TESTS][FOLLOWUP] Reduce the number of threads from 1k to 100 in `TaskSchedulerImplSuite`
76c4fd56c5a5 is described below

commit 76c4fd56c5a53bf9f726820a44ca0f610f7b91f6
Author: Dongjoon Hyun <dh...@apple.com>
AuthorDate: Mon Feb 26 14:32:10 2024 -0800

    [SPARK-45527][CORE][TESTS][FOLLOWUP] Reduce the number of threads from 1k to 100 in `TaskSchedulerImplSuite`
    
    ### What changes were proposed in this pull request?
    
    This PR is a follow-up of #43494 that reduces the number of SparkContext threads from 1k to 100 in the test environment.
    
    ### Why are the changes needed?
    
    To reduce the test's resource requirements. 1000 threads appear to be too many for some CI systems with limited resources, as the following CI run shows:
    - https://github.com/apache/spark/actions/workflows/build_maven_java21_macos14.yml
      - https://github.com/apache/spark/actions/runs/8054862135/job/22000403549
    ```
    Warning: [766.327s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 4096k, guardsize: 16k, detached.
    Warning: [766.327s][warning][os,thread] Failed to start the native thread for java.lang.Thread "dispatcher-event-loop-840"
    *** RUN ABORTED ***
    An exception or error caused a run to abort: unable to create native thread: possibly out of memory or process/resource limits reached
      java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
    ```
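    
    For illustration only (editorial note, not part of the original PR description): a minimal, hypothetical stand-alone sketch of the failure mode above. It keeps starting parked daemon threads until the OS refuses to create more, which the JVM surfaces as the same `java.lang.OutOfMemoryError: unable to create native thread` that aborted the run. The object name `ThreadLimitRepro` is made up for this sketch.
    
    ```scala
    // Hypothetical reproduction sketch: start parked daemon threads until
    // pthread_create fails, which the JVM reports as an OutOfMemoryError.
    object ThreadLimitRepro {
      def main(args: Array[String]): Unit = {
        var count = 0
        try {
          while (true) {
            val t = new Thread(() => Thread.sleep(Long.MaxValue))
            t.setDaemon(true)
            t.start()
            count += 1
          }
        } catch {
          case _: OutOfMemoryError =>
            // Same symptom as the aborted CI run: the native thread limit was hit.
            println(s"Could not start thread number ${count + 1}")
        }
      }
    }
    ```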
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, this is a test-case update.
    
    ### How was this patch tested?
    
    Pass the CIs and monitor the Daily Apple Silicon test.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #45264 from dongjoon-hyun/SPARK-45527.
    
    Authored-by: Dongjoon Hyun <dh...@apple.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
index 3e43442583ec..f7b868c66468 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
@@ -2489,7 +2489,7 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext
     val taskCpus = 1
     val taskGpus = 0.3
     val executorGpus = 4
-    val executorCpus = 1000
+    val executorCpus = 100
 
     // each tasks require 0.3 gpu
     val taskScheduler = setupScheduler(numCores = executorCpus,

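A rough back-of-the-envelope check of why the change is safe for this test (editorial context, not code from the commit; the values are the ones quoted in the hunk above, and the min-over-resources slot formula is assumed for illustration rather than copied from `TaskSchedulerImplSuite`): with 0.3 GPU per task and 4 GPUs per executor, the GPU budget caps concurrency at 13 tasks, so it remains the binding limit whether the executor advertises 1000 or 100 CPUs.

```scala
// Sketch of the slot arithmetic behind the change (values from the hunk above;
// the min-over-resources rule is assumed here for illustration).
object SlotCheck {
  def main(args: Array[String]): Unit = {
    val taskCpus = 1
    val taskGpus = 0.3
    val executorGpus = 4

    def slots(executorCpus: Int): Int = {
      val byCpu = executorCpus / taskCpus          // CPU-limited slots
      val byGpu = (executorGpus / taskGpus).toInt  // floor(4 / 0.3) = 13
      math.min(byCpu, byGpu)
    }

    // Both before and after the change the GPU limit (13) wins, so the test
    // exercises the same scheduling behavior with far fewer executor threads.
    println(s"executorCpus = 1000 -> ${slots(1000)} concurrent tasks")
    println(s"executorCpus = 100  -> ${slots(100)} concurrent tasks")
  }
}
```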

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org