Posted to issues@spark.apache.org by "Nathan M (JIRA)" <ji...@apache.org> on 2014/12/22 02:37:15 UTC

[jira] [Comment Edited] (SPARK-3174) Provide elastic scaling within a Spark application

    [ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255344#comment-14255344 ] 

Nathan M edited comment on SPARK-3174 at 12/22/14 1:36 AM:
-----------------------------------------------------------

This is something we're very interested in, but we aren't using YARN.

Is there a JIRA to add this feature to Mesos?


was (Author: nemccarthy):
Is there a JIRA to add this feature to Mesos?

> Provide elastic scaling within a Spark application
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>             Fix For: 1.2.0
>
>         Attachments: SPARK-3174design.pdf, SparkElasticScalingDesignB.pdf, dynamic-scaling-executors-10-6-14.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that applications have a fixed allocation that doesn't grow and shrink with their resource needs.  We're blocked on YARN-1197 for dynamically changing the resources within executors, but we can still allocate and discard whole executors.
> It would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Discard executors when they are idle
> See the latest design doc for more information.
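
As a minimal sketch of the heuristics described above, assuming the
dynamic-allocation settings that shipped with Spark 1.2 (the executor
bounds and timeout values below are illustrative assumptions, not
recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Dynamic allocation requires the external shuffle service so that
    // shuffle files outlive executors that get discarded.
    val conf = new SparkConf()
      .setAppName("elastic-scaling-sketch") // hypothetical app name
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")
      // Bounds on how far the executor count may grow or shrink.
      .set("spark.dynamicAllocation.minExecutors", "2")
      .set("spark.dynamicAllocation.maxExecutors", "20")
      // "Request more executors when many pending tasks are building up":
      // seconds the task backlog may persist before executors are requested.
      .set("spark.dynamicAllocation.schedulerBacklogTimeout", "5")
      // "Discard executors when they are idle":
      // seconds an executor may sit idle before it is released.
      .set("spark.dynamicAllocation.executorIdleTimeout", "60")

    val sc = new SparkContext(conf)

Note that in the 1.2 release this mechanism is implemented for YARN only
(hence the Mesos question in the comment above), and whole executors are
the unit of scaling while YARN-1197 blocks resizing resources within a
running executor.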



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org