Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2015/09/04 21:54:45 UTC

[jira] [Resolved] (SPARK-10453) There's no way to use spark.dynamicAllocation.enabled with PySpark

     [ https://issues.apache.org/jira/browse/SPARK-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-10453.
------------------------------------
    Resolution: Not A Problem

From http://spark.apache.org/docs/latest/running-on-yarn.html:

{noformat}
spark.yarn.executor.memoryOverhead

executorMemory * 0.10, with minimum of 384

The amount of off heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
{noformat}
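
For concreteness, a quick sketch of how that default works out, assuming a hypothetical 4g executor heap (the value is an illustration, not from this ticket):

{noformat}
# Hypothetical illustration of the default overhead formula quoted above;
# the 4 GB executor heap is an assumed example value.
executor_memory_mb = 4096  # spark.executor.memory = 4g
overhead_mb = max(int(executor_memory_mb * 0.10), 384)
print(overhead_mb)  # 409 (MB): the 10% term wins over the 384 MB floor
{noformat}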

That also encompasses the Python workers.
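
In practice that means sizing spark.yarn.executor.memoryOverhead for whatever the Python workers need, rather than expecting a separate setting. A minimal PySpark sketch, assuming hypothetical sizes (4g heap, 1024 MB overhead) that are not from this ticket:

{noformat}
# Hedged example: the specific values are assumptions chosen for illustration.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true")       # required for dynamic allocation on YARN
        .set("spark.executor.memory", "4g")                  # JVM heap per executor
        .set("spark.yarn.executor.memoryOverhead", "1024"))  # MB; off-heap room shared by Python workers
sc = SparkContext(conf=conf)
{noformat}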


> There's no way to use spark.dynamicAllocation.enabled with PySpark
> ------------------------------------------------------------------
>
>                 Key: SPARK-10453
>                 URL: https://issues.apache.org/jira/browse/SPARK-10453
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.4.0
>         Environment: When using spark.dynamicAllocation.enabled, the assumption is that memory/core resources will be mediated by the YARN resource manager.  Unfortunately, whatever value is used for spark.executor.memory is consumed as JVM heap space by the executor.  There's no way to account for the memory requirements of the PySpark worker.  Executor JVM heap space should be decoupled from spark.executor.memory.
>            Reporter: Michael Procopio
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org