Posted to issues@spark.apache.org by "Michael Moss (JIRA)" <ji...@apache.org> on 2017/11/20 20:33:00 UTC

[jira] [Created] (SPARK-22567) spark.mesos.executor.memoryOverhead equivalent for the Driver when running on Mesos

Michael Moss created SPARK-22567:
------------------------------------

             Summary: spark.mesos.executor.memoryOverhead equivalent for the Driver when running on Mesos
                 Key: SPARK-22567
                 URL: https://issues.apache.org/jira/browse/SPARK-22567
             Project: Spark
          Issue Type: Improvement
          Components: Mesos
    Affects Versions: 2.2.0
            Reporter: Michael Moss
            Priority: Minor


spark.mesos.executor.memoryOverhead is:
"The amount of additional memory, specified in MB, to be allocated per executor. By default, the overhead will be larger of either 384 or 10% of spark.executor.memory"

Every JVM process needs memory available beyond its heap (-Xmx) for native allocations such as thread stacks, metaspace, and direct buffers.

When using the MesosClusterDispatcher and running the Driver on Mesos (https://spark.apache.org/docs/latest/running-on-mesos.html#cluster-mode), the Driver's Mesos sandbox appears to be allocated exactly the amount of memory configured with spark.driver.memory, i.e. the same value used for the heap (-Xmx) itself. With no headroom above the heap, OOM failures become more likely.
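A minimal sketch of the requested improvement, assuming a hypothetical spark.mesos.driver.memoryOverhead setting (not an existing config in 2.2.0) that pads the driver's Mesos resource request the same way the executor request is padded:

    object DriverMemorySketch {
      // Hypothetical driver-side equivalent: size the Mesos sandbox beyond -Xmx.
      // `driverMemoryMb` corresponds to spark.driver.memory in MB;
      // `configuredOverheadMb` would come from the proposed spark.mesos.driver.memoryOverhead key.
      def driverSandboxMemoryMb(driverMemoryMb: Int, configuredOverheadMb: Option[Int]): Int = {
        val defaultOverheadMb = math.max(384, (0.10 * driverMemoryMb).toInt)
        driverMemoryMb + configuredOverheadMb.getOrElse(defaultOverheadMb)
      }
    }

Requesting driverMemoryMb plus the overhead from Mesos, while still passing only driverMemoryMb as -Xmx, would leave headroom for native allocations and mirror the existing executor behavior.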



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
