Posted to issues@spark.apache.org by "Sandy Ryza (JIRA)" <ji...@apache.org> on 2014/12/28 04:30:13 UTC

[jira] [Updated] (SPARK-4585) Spark dynamic executor allocation maxExecutors as initial number

     [ https://issues.apache.org/jira/browse/SPARK-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandy Ryza updated SPARK-4585:
------------------------------
    Summary: Spark dynamic executor allocation maxExecutors as initial number  (was: Spark dynamic scaling executors use upper limit value as default.)

> Spark dynamic executor allocation maxExecutors as initial number
> ----------------------------------------------------------------
>
>                 Key: SPARK-4585
>                 URL: https://issues.apache.org/jira/browse/SPARK-4585
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Chengxiang Li
>
> With SPARK-3174, one can configure a minimum and maximum number of executors for a Spark application on YARN. However, the application always starts with the maximum. It seems more reasonable, at least for Hive on Spark, to start from the minimum and scale up toward the maximum as needed.
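
A minimal sketch of the configuration this issue concerns, using the dynamic allocation properties introduced by SPARK-3174 (the `--conf` invocation and `your-app.jar` are illustrative placeholders; the initial executor count is exactly the behavior this issue proposes to change):

```shell
# Enable dynamic executor allocation on YARN.
# At the time of this report, the application starts at maxExecutors
# rather than minExecutors.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  your-app.jar
```

Note that dynamic allocation on YARN also requires the external shuffle service (`spark.shuffle.service.enabled=true`) so that shuffle files survive executor removal.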



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
