Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/12/07 09:52:58 UTC

[jira] [Assigned] (SPARK-18765) Make values for spark.yarn.{am|driver|executor}.memoryOverhead have configurable units

     [ https://issues.apache.org/jira/browse/SPARK-18765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-18765:
------------------------------------

    Assignee: Apache Spark

> Make values for spark.yarn.{am|driver|executor}.memoryOverhead have configurable units
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-18765
>                 URL: https://issues.apache.org/jira/browse/SPARK-18765
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.3
>            Reporter: Daisuke Kobayashi
>            Assignee: Apache Spark
>            Priority: Trivial
>
> {{spark.yarn.\{driver|executor|am\}.memoryOverhead}} values are limited to megabytes today: users provide a value without a unit and Spark assumes it is in MiB. Since the overhead is often a few gigabytes, the memory overhead configs should accept units the same way the executor and driver memory configs do.
> Given that 2.0 already covers this, it is worth having the 1.x code line support this capability as well. My PR lets users pass the value in several forms (backward compatibility is not broken), for example:
> {code}
> spark.yarn.executor.memoryOverhead=300m --> converted to 300
> spark.yarn.executor.memoryOverhead=500 --> converted to 500
> spark.yarn.executor.memoryOverhead=1g --> converted to 1024
> spark.yarn.executor.memoryOverhead=1024m --> converted to 1024
> {code}
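> The conversion itself could, for instance, be built on {{SparkConf.getSizeAsMb}}, which already parses size strings with optional unit suffixes and assumes MiB when no unit is given. A minimal Scala sketch (illustrative only, not necessarily how the PR implements it; the config value set below is just an example):
> {code}
> import org.apache.spark.SparkConf
>
> object MemoryOverheadExample {
>   def main(args: Array[String]): Unit = {
>     // Example setting; any of "300m", "500", "1g", "1024m" would be accepted.
>     val conf = new SparkConf().set("spark.yarn.executor.memoryOverhead", "1g")
>
>     // getSizeAsMb parses the suffix and falls back to MiB for unit-less values:
>     // "1g" -> 1024, "300m" -> 300, "500" -> 500
>     val overheadMb = conf.getSizeAsMb("spark.yarn.executor.memoryOverhead", "384")
>     println(s"executor memory overhead: $overheadMb MiB")
>   }
> }
> {code}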



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org