Posted to issues@spark.apache.org by "Chengxiang Li (JIRA)" <ji...@apache.org> on 2014/11/25 03:11:13 UTC

[jira] [Created] (SPARK-4585) Spark dynamic executor scaling uses the upper limit value as the default.

Chengxiang Li created SPARK-4585:
------------------------------------

             Summary: Spark dynamic executor scaling uses the upper limit value as the default.
                 Key: SPARK-4585
                 URL: https://issues.apache.org/jira/browse/SPARK-4585
             Project: Spark
          Issue Type: Bug
          Components: Spark Core, YARN
    Affects Versions: 1.1.0
            Reporter: Chengxiang Li


With SPARK-3174, one can configure a minimum and maximum number of executors for a Spark application on YARN. However, the application always starts with the maximum number of executors. It seems more reasonable, at least for Hive on Spark, to start from the minimum and scale up toward the maximum as needed.
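
For reference, a minimal Scala sketch of how these bounds are set, assuming the spark.dynamicAllocation.* properties introduced with SPARK-3174; the application name and executor counts below are illustrative only, and nothing in this sketch changes the reported behavior that the application starts at the maximum:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative configuration: bound dynamic allocation between a minimum
    // and a maximum executor count. Property names follow the dynamic
    // allocation configuration from SPARK-3174; values are examples only.
    val conf = new SparkConf()
      .setAppName("dynamic-allocation-example")
      .set("spark.dynamicAllocation.enabled", "true")
      // The external shuffle service must be enabled for dynamic allocation.
      .set("spark.shuffle.service.enabled", "true")
      // Lower bound; per this report, the application should ideally start here.
      .set("spark.dynamicAllocation.minExecutors", "2")
      // Upper bound; as reported, this is also the count the application starts with.
      .set("spark.dynamicAllocation.maxExecutors", "50")

    val sc = new SparkContext(conf)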


