Posted to issues@spark.apache.org by "Michael Procopio (JIRA)" <ji...@apache.org> on 2015/09/04 21:35:45 UTC

[jira] [Created] (SPARK-10453) There's no way to use spark.dynamicAllocation.enabled with pyspark

Michael Procopio created SPARK-10453:
----------------------------------------

             Summary: There's no way to use spark.dynamicAllocation.enabled with pyspark
                 Key: SPARK-10453
                 URL: https://issues.apache.org/jira/browse/SPARK-10453
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 1.4.0
         Environment: When spark.dynamicAllocation.enabled is used, the assumption is that memory/core resources will be mediated by the YARN resource manager.  Unfortunately, whatever value is set for spark.executor.memory is consumed entirely as JVM heap space by the executor, so there is no way to account for the memory requirements of the PySpark worker processes.  Executor JVM heap space should be decoupled from spark.executor.memory (a possible configuration workaround is sketched below).
            Reporter: Michael Procopio
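
Until executor heap and container sizing are decoupled, one possible workaround under YARN is to reserve non-heap headroom for the Python worker processes via spark.yarn.executor.memoryOverhead, and optionally bound per-worker aggregation memory with spark.python.worker.memory. The sketch below is illustrative only and not part of the original report; the application name and all memory values are assumptions that would need tuning for a real workload.

    # Minimal PySpark sketch, assuming Spark 1.4 on YARN (submitted via spark-submit
    # with a YARN master) and the external shuffle service running on the NodeManagers.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("dynamic-allocation-pyspark-sketch")    # hypothetical app name
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.shuffle.service.enabled", "true")       # required by dynamic allocation
            .set("spark.executor.memory", "4g")                 # becomes executor JVM heap only
            .set("spark.yarn.executor.memoryOverhead", "1536")  # MB of non-heap container headroom,
                                                                # which the Python workers can use
            .set("spark.python.worker.memory", "1g"))           # per-Python-worker memory before spilling

    sc = SparkContext(conf=conf)

This does not fix the underlying issue: memoryOverhead is a single fudge factor per container, and there is still no setting that explicitly accounts for PySpark worker memory separately from the executor JVM heap.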






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org