Posted to reviews@spark.apache.org by "srowen (via GitHub)" <gi...@apache.org> on 2023/02/28 14:55:48 UTC

[GitHub] [spark] srowen commented on a diff in pull request #40212: [SPARK-42613][CORE][PYTHON][YARN] PythonRunner should set OMP_NUM_THREADS to task cpus instead of executor cores by default

srowen commented on code in PR #40212:
URL: https://github.com/apache/spark/pull/40212#discussion_r1120198019


##########
core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala:
##########
@@ -140,7 +140,7 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       // SPARK-28843: limit the OpenMP thread pool to the number of cores assigned to this executor
       // this avoids high memory consumption with pandas/numpy because of a large OpenMP thread pool
       // see https://github.com/numpy/numpy/issues/10455
-      execCoresProp.foreach(envVars.put("OMP_NUM_THREADS", _))
+      envVars.put("OMP_NUM_THREADS", conf.get("spark.task.cpus", "1"))
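
For context, a minimal self-contained sketch of the behavior after this change
(the SparkConf wiring and the envVars map below are simplified stand-ins for
PythonRunner's actual plumbing, not the real code path):

    import org.apache.spark.SparkConf

    object OmpNumThreadsSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
        val envVars = new java.util.HashMap[String, String]()
        // After the change: size the OpenMP thread pool to the CPUs reserved
        // per task (spark.task.cpus, default "1"), not the executor core count.
        envVars.put("OMP_NUM_THREADS", conf.get("spark.task.cpus", "1"))
        println(envVars) // {OMP_NUM_THREADS=1} unless spark.task.cpus is set
      }
    }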

Review Comment:
   This change is logical. The only argument for the original setting is that
   over-committing might actually help some tasks fully use the CPUs (assuming
   not all tasks use all of their threads at the same time). However, it hurts
   throughput as the executor gets closer to saturation.
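
   To make the saturation point concrete, here is a rough worked example; the
   numbers (8 executor cores, 1 cpu per task) are illustrative assumptions,
   not values from this PR:

       object OverCommitExample extends App {
         // Illustrative arithmetic only; core/cpu counts are assumed values.
         val executorCores = 8
         val taskCpus = 1
         val concurrentTasks = executorCores / taskCpus      // 8 Python workers in flight
         val oldOmpThreads = concurrentTasks * executorCores // 8 x 8 = 64 OpenMP threads on 8 cores
         val newOmpThreads = concurrentTasks * taskCpus      // 8 x 1 = 8 OpenMP threads on 8 cores
         println(s"old=$oldOmpThreads, new=$newOmpThreads")
       }

   While the executor is under-loaded, the extra threads can soak up idle
   cores; once all task slots are busy, they only add contention.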



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

