Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2014/11/25 09:50:13 UTC

[jira] [Commented] (SPARK-4585) Spark dynamic executor scaling uses the upper limit as the default.

    [ https://issues.apache.org/jira/browse/SPARK-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224244#comment-14224244 ] 

Sean Owen commented on SPARK-4585:
----------------------------------

Given the discussion in SPARK-3174, it seems this behavior is actually more desirable for Hive. At least there is a good argument for it, so I am not sure that starting at the minimum is better. In any event this is not a bug; is a smarter heuristic available, and is one being proposed here?

> Spark dynamic executor scaling uses the upper limit as the default.
> --------------------------------------------------------------------
>
>                 Key: SPARK-4585
>                 URL: https://issues.apache.org/jira/browse/SPARK-4585
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Chengxiang Li
>
> With SPARK-3174, one can configure a minimum and maximum number of executors for a Spark application on YARN. However, the application always starts with the maximum. It seems more reasonable, at least for Hive on Spark, to start from the minimum and scale up toward the maximum as demand grows.
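
For context, a minimal sketch of the SPARK-3174 dynamic allocation settings under discussion, assuming a YARN deployment with the external shuffle service available; the executor counts and application name below are illustrative, not values taken from this issue:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: enable dynamic allocation and bound the executor count.
    // As described in the issue above, in the affected versions the
    // application still begins at the maximum rather than the minimum.
    val conf = new SparkConf()
      .setAppName("dynamic-allocation-sketch")          // illustrative name
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")     // dynamic allocation requires the external shuffle service
      .set("spark.dynamicAllocation.minExecutors", "2") // illustrative lower bound
      .set("spark.dynamicAllocation.maxExecutors", "50")// illustrative upper bound
    val sc = new SparkContext(conf)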



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
