Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/10/16 20:43:06 UTC

[jira] [Commented] (SPARK-11154) make specification of spark.yarn.executor.memoryOverhead consistent with typical JVM options

    [ https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961184#comment-14961184 ] 

Sean Owen commented on SPARK-11154:
-----------------------------------

This should be done for all similar properties, not just this one. The twist is that the current syntax has to remain supported: "1000" must keep meaning "1000 megabytes". But then someone writing "1000000" would be surprised to find that it means "1000000 megabytes". (Note that CM might do just this.) Hence I'm actually not sure whether this is feasible.
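A minimal sketch of the backward-compatible parsing described above, assuming a hypothetical helper class `MemorySize` (not part of Spark): a bare number keeps its legacy meaning of megabytes, while JVM-style 'k'/'m'/'g' suffixes make the unit explicit.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper, for illustration only; Spark's actual utilities differ.
public class MemorySize {
    // Optional case-insensitive k/m/g suffix after the digits.
    private static final Pattern SIZE = Pattern.compile("(?i)^(\\d+)\\s*([kmg]?)$");

    /** Returns the size in megabytes. */
    public static long parseMb(String s) {
        Matcher m = SIZE.matcher(s.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Invalid size: " + s);
        }
        long n = Long.parseLong(m.group(1));
        switch (m.group(2).toLowerCase()) {
            case "k": return n / 1024;   // kilobytes, rounded down to whole MB
            case "g": return n * 1024;   // gigabytes
            default:  return n;          // "m" or no suffix: legacy behavior, plain MB
        }
    }
}
```

This preserves the twist above: `parseMb("1000")` stays 1000 MB, so a user writing "1000000" while intending bytes still gets a surprising 1000000 MB; only an explicit suffix changes the unit.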

> make specification of spark.yarn.executor.memoryOverhead consistent with typical JVM options
> ---------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11154
>                 URL: https://issues.apache.org/jira/browse/SPARK-11154
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, Spark Submit
>            Reporter: Dustin Cote
>            Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by default, but it would be nice to let users specify the size as they would for a typical JVM -Xmx option, where 'm' or 'g' can be appended to explicitly denote megabytes or gigabytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org