Posted to reviews@spark.apache.org by "jzhuge (via GitHub)" <gi...@apache.org> on 2023/02/28 01:47:15 UTC

[GitHub] [spark] jzhuge commented on a diff in pull request #40199: [SPARK-42596][CORE][YARN] OMP_NUM_THREADS not set to number of executor cores by default

jzhuge commented on code in PR #40199:
URL: https://github.com/apache/spark/pull/40199#discussion_r1119492302


##########
core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala:
##########
@@ -135,6 +135,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
     val execCoresProp = Option(context.getLocalProperty(EXECUTOR_CORES_LOCAL_PROPERTY))
     val memoryMb = Option(context.getLocalProperty(PYSPARK_MEMORY_LOCAL_PROPERTY)).map(_.toLong)
     val localdir = env.blockManager.diskBlockManager.localDirs.map(f => f.getPath()).mkString(",")
+    // if OMP_NUM_THREADS is not explicitly set, override it with the number of cores
+    if (conf.getOption("spark.executorEnv.OMP_NUM_THREADS").isEmpty) {
+      // SPARK-28843: limit the OpenMP thread pool to the number of cores assigned to this executor
+      // this avoids high memory consumption with pandas/numpy because of a large OpenMP thread pool
+      // see https://github.com/numpy/numpy/issues/10455
+      execCoresProp.foreach(envVars.put("OMP_NUM_THREADS", _))

Review Comment:
   Thanks for the clarification! Makes sense.
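
   For context, the guard in the diff can be sketched in Python as a small pure function (a hedged illustration only; the function name and the dict-based config are hypothetical, while the key `spark.executorEnv.OMP_NUM_THREADS` and the `OMP_NUM_THREADS` variable match the diff above):

   ```python
   def apply_omp_default(env_vars, executor_cores, spark_conf):
       """Mirror the Scala logic: set OMP_NUM_THREADS to the executor's
       core count only when the user has not configured it explicitly.

       env_vars: mutable dict of environment variables for the Python worker
       executor_cores: string core count, or None if unknown
       spark_conf: dict-like view of the Spark configuration
       """
       if "spark.executorEnv.OMP_NUM_THREADS" not in spark_conf:
           # SPARK-28843: cap the OpenMP thread pool so pandas/numpy do not
           # spawn one thread per host CPU on a many-core machine.
           if executor_cores is not None:
               env_vars["OMP_NUM_THREADS"] = executor_cores
       return env_vars

   # No explicit user setting: the executor core count is applied.
   print(apply_omp_default({}, "4", {}))
   # Explicit user setting: the environment is left untouched.
   print(apply_omp_default({}, "4", {"spark.executorEnv.OMP_NUM_THREADS": "2"}))
   ```

   Note that OpenMP-backed libraries read `OMP_NUM_THREADS` when they initialize their thread pools, which is why the value must be placed in the worker's environment before the Python process starts.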



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org