Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/19 01:19:52 UTC

[GitHub] [spark] zhouyejoe commented on a change in pull request #35504: [SPARK-38194][YARN][MESOS][K8S] Make memory overhead factor configurable

zhouyejoe commented on a change in pull request #35504:
URL: https://github.com/apache/spark/pull/35504#discussion_r810426309



##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -105,6 +105,17 @@ package object config {
     .bytesConf(ByteUnit.MiB)
     .createOptional
 
+  private[spark] val DRIVER_MEMORY_OVERHEAD_FACTOR =
+    ConfigBuilder("spark.driver.memoryOverheadFactor")
+      .doc("This sets the Memory Overhead Factor on the driver that will allocate memory to " +
+        "non-JVM memory, which includes off-heap memory allocations, non-JVM tasks, various " +
+        "systems processes, and tmpfs-based local directories.")
+      .version("3.3.0")
+      .doubleConf
+      .checkValue(factor => factor > 0,

Review comment:
       Should we also check that the factor is not configured to be larger than 1.0? The same question applies to spark.executor.memoryOverheadFactor below.
   We already do this kind of check for spark.memory.fraction and spark.memory.storageFraction.
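
   For illustration, a bounded check in the same ConfigBuilder style might look like the sketch below, modeled on the existing spark.memory.storageFraction validation. The (0, 1) bound, the error message, and the 0.1 default are assumptions made for this sketch, not the code the PR actually merged.

       // Hypothetical sketch for
       // core/src/main/scala/org/apache/spark/internal/config/package.scala.
       // The (0, 1) bound, message, and 0.1 default are illustrative assumptions.
       private[spark] val DRIVER_MEMORY_OVERHEAD_FACTOR =
         ConfigBuilder("spark.driver.memoryOverheadFactor")
           .doc("This sets the Memory Overhead Factor on the driver that will allocate " +
             "memory to non-JVM memory, which includes off-heap memory allocations, " +
             "non-JVM tasks, various systems processes, and tmpfs-based local directories.")
           .version("3.3.0")
           .doubleConf
           // Reject values outside (0, 1), mirroring the storageFraction-style check.
           .checkValue(factor => factor > 0 && factor < 1,
             "Memory overhead factor must be a double between 0 and 1")
           .createWithDefault(0.1)

   One argument for leaving the upper bound open, though, is that an overhead factor does not partition a fixed budget the way spark.memory.storageFraction does, so a value above 1.0 (overhead larger than the heap) is not self-evidently invalid; that may be why the comment above is phrased as a question.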




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


