Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2015/07/20 10:14:05 UTC

[jira] [Closed] (SPARK-5349) Spark standalone should support dynamic resource scaling

     [ https://issues.apache.org/jira/browse/SPARK-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or closed SPARK-5349.
----------------------------
    Resolution: Duplicate

> Spark standalone should support dynamic resource scaling
> --------------------------------------------------------
>
>                 Key: SPARK-5349
>                 URL: https://issues.apache.org/jira/browse/SPARK-5349
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Tobias Bertelsen
>
> The resource requirements of an interactive shell vary heavily. Sometimes heavy commands are executed; at other times the user is thinking, getting coffee, or has stepped away.
> A Spark shell allocates a fixed number of worker cores (at least in standalone mode). A user thus has the choice either to block other users from the cluster by allocating all cores (the default behavior), or to restrict themselves to only a few cores using the {{--total-executor-cores}} option (see the launch sketch after this description). Either way, the cores allocated to the shell have low utilization, since they spend much of their time waiting on the user.
> Instead, the Spark shell should allocate only the resources required to run the driver, and request worker cores only when computation is actually performed on RDDs.
> This should allow multiple users to use an interactive shell concurrently while still utilizing the entire cluster when performing heavy operations.
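
For reference, the capped alternative mentioned in the description is selected at shell startup; a minimal sketch (the master URL and core count here are placeholders, not from the report):

    $ spark-shell --master spark://master-host:7077 --total-executor-cores 4

This pins the shell to four cores for its entire lifetime, whether or not a job is running, which is exactly the under-utilization the report describes.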
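The requested behavior roughly matches Spark's dynamic allocation feature. A minimal sketch, assuming a release in which standalone mode supports dynamic allocation and the external shuffle service has been enabled on each worker (the configuration keys are standard Spark properties; their use here is illustrative, not part of this report):

    $ spark-shell --master spark://master-host:7077 \
        --conf spark.dynamicAllocation.enabled=true \
        --conf spark.shuffle.service.enabled=true \
        --conf spark.dynamicAllocation.minExecutors=0 \
        --conf spark.dynamicAllocation.executorIdleTimeout=60s

With these settings the shell starts with no executors, requests them when a job is submitted, and releases any executor that has been idle longer than the timeout, so other users can reclaim the cores.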



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org