Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2015/01/26 01:59:34 UTC

[jira] [Updated] (SPARK-1706) Allow multiple executors per worker in Standalone mode

     [ https://issues.apache.org/jira/browse/SPARK-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-1706:
-----------------------------
    Affects Version/s: 1.0.0

> Allow multiple executors per worker in Standalone mode
> ------------------------------------------------------
>
>                 Key: SPARK-1706
>                 URL: https://issues.apache.org/jira/browse/SPARK-1706
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>    Affects Versions: 1.0.0
>            Reporter: Patrick Wendell
>            Assignee: Nan Zhu
>
> Right now, if people want to launch multiple executors on each machine, they need to start multiple standalone workers. This is not too difficult, but it means you have extra JVMs sitting around.
> We should just allow users to set the number of cores they want per executor in standalone mode and then allow packing multiple executors onto each node. This would make standalone mode more consistent with YARN in the way you request resources.
> It's not too big of a change as far as I can see. You'd need to:
> 1. Introduce a configuration for how many cores you want per executor.
> 2. Change the scheduling logic in Master.scala to take this into account.
> 3. Change CoarseGrainedSchedulerBackend to not assume a one-to-one correspondence between hosts and executors.
> And maybe modify a few other places.
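The packing logic in step 2 above could be sketched roughly as follows. This is an illustrative Scala sketch only, not Spark's actual Master code: `WorkerInfo`, `scheduleExecutors`, and the round-robin packing strategy are all assumptions made for the example, and the name of the cores-per-executor setting is left open.

```scala
// Hypothetical sketch: pack multiple fixed-size executors onto workers,
// given a per-executor core count (step 1's new configuration) and a
// total number of cores the application wants. Illustrative names only.
case class WorkerInfo(id: String, coresFree: Int)

def scheduleExecutors(coresPerExecutor: Int,
                      coresWanted: Int,
                      workers: Seq[WorkerInfo]): Map[String, Int] = {
  var remaining = coresWanted
  // executors assigned so far, per worker id
  val assigned = scala.collection.mutable.Map[String, Int]().withDefaultValue(0)
  // cores still free, per worker id
  val free = scala.collection.mutable.Map(workers.map(w => w.id -> w.coresFree): _*)
  var progress = true
  // Round-robin over workers, launching one executor at a time on any
  // worker that still has enough free cores, until the request is met
  // or no worker can fit another executor.
  while (remaining >= coresPerExecutor && progress) {
    progress = false
    for (w <- workers
         if remaining >= coresPerExecutor && free(w.id) >= coresPerExecutor) {
      assigned(w.id) += 1            // launch one more executor here
      free(w.id) -= coresPerExecutor
      remaining -= coresPerExecutor
      progress = true
    }
  }
  assigned.toMap
}
```

For example, with two 8-core workers and 4 cores per executor, a request for 16 cores would pack two executors onto each worker. Spreading round-robin rather than filling one worker first is a deliberate choice here; it mirrors the spread-out behavior the standalone scheduler already applies when placing work across workers.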



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org