Posted to reviews@spark.apache.org by "HyukjinKwon (via GitHub)" <gi...@apache.org> on 2023/03/01 02:14:17 UTC

[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40212: [SPARK-42613][CORE][PYTHON][YARN] PythonRunner should set OMP_NUM_THREADS to task cpus instead of executor cores by default

HyukjinKwon commented on code in PR #40212:
URL: https://github.com/apache/spark/pull/40212#discussion_r1121034208


##########
core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala:
##########
@@ -140,7 +140,7 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       // SPARK-28843: limit the OpenMP thread pool to the number of cores assigned to this executor
       // this avoids high memory consumption with pandas/numpy because of a large OpenMP thread pool
       // see https://github.com/numpy/numpy/issues/10455
-      execCoresProp.foreach(envVars.put("OMP_NUM_THREADS", _))
+      envVars.put("OMP_NUM_THREADS", conf.get("spark.task.cpus", "1"))

Review Comment:
   I am good with the current proposal, but it may be worth adding this to the release notes.
   
   FWIW, for the very initial PR, I was worried about limiting the number of cores (and misunderstood the concept a bit there) and its potential impact. After several syncs with the ML folks, I now understand it better :-).
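
   The gist of the diff above, sketched as standalone Scala (hypothetical helper names, not the actual Spark source): before the patch, `OMP_NUM_THREADS` was set only when the executor core count was known, so a Python worker's OpenMP pool could be sized to the whole executor; after the patch it always defaults to `spark.task.cpus` (falling back to "1"), matching the CPU share of a single task.
   
   ```scala
   // Hypothetical sketch of the behavioral change discussed in this PR.
   object OmpNumThreadsSketch {
     // Old behavior: only set OMP_NUM_THREADS when executor cores were known,
     // sizing the OpenMP thread pool to the entire executor.
     def oldEnv(execCoresProp: Option[String]): Map[String, String] = {
       var envVars = Map.empty[String, String]
       execCoresProp.foreach(c => envVars += ("OMP_NUM_THREADS" -> c))
       envVars
     }
   
     // New behavior: always set OMP_NUM_THREADS from spark.task.cpus,
     // defaulting to "1", so the pool matches one task's CPU allocation.
     def newEnv(conf: Map[String, String]): Map[String, String] =
       Map("OMP_NUM_THREADS" -> conf.getOrElse("spark.task.cpus", "1"))
   }
   ```
   
   With the old behavior an executor running, say, 8 cores and 8 one-cpu tasks could spawn 8 Python workers each with an 8-thread OpenMP pool; the new default avoids that oversubscription.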



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org