Posted to issues@spark.apache.org by "Zhan Zhang (JIRA)" <ji...@apache.org> on 2016/09/22 18:34:21 UTC

[jira] [Commented] (SPARK-17637) Packed scheduling for Spark tasks across executors

    [ https://issues.apache.org/jira/browse/SPARK-17637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514105#comment-15514105 ] 

Zhan Zhang commented on SPARK-17637:
------------------------------------

The plan is to introduce a new configuration so that different scheduling algorithms can be used for task scheduling.
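To make the trade-off concrete, here is a minimal sketch (not Spark code; the function name, the `policy` values, and the free-core bookkeeping are all illustrative) contrasting spread-out assignment, which approximates round-robin by picking the least-loaded executor, with packed assignment, which fills the busiest executor first so idle executors can be released under dynamic allocation:

```python
def assign_tasks(tasks, free_cores, policy="roundrobin"):
    """Assign each task to an executor with a free core.

    tasks: list of task ids.
    free_cores: dict of executor id -> number of free cores (mutated).
    policy: "roundrobin" spreads load; "packed" concentrates it.
    Returns a dict of task id -> executor id.
    """
    placement = {}
    for task in tasks:
        candidates = [e for e, free in free_cores.items() if free > 0]
        if not candidates:
            break  # no capacity left; remaining tasks wait
        if policy == "packed":
            # Fill the executor with the fewest free cores first, so
            # fully idle executors can be reclaimed by dynamic allocation.
            target = min(candidates, key=lambda e: free_cores[e])
        else:
            # Spread tasks onto the executor with the most free cores,
            # approximating round-robin load balancing.
            target = max(candidates, key=lambda e: free_cores[e])
        placement[task] = target
        free_cores[target] -= 1
    return placement
```

With two tasks and two 2-core executors, the packed policy places both tasks on one executor, leaving the other idle (and releasable), while the spread policy puts one task on each.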

> Packed scheduling for Spark tasks across executors
> --------------------------------------------------
>
>                 Key: SPARK-17637
>                 URL: https://issues.apache.org/jira/browse/SPARK-17637
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>            Reporter: Zhan Zhang
>            Priority: Minor
>
> Currently the Spark scheduler assigns tasks to executors in a round-robin fashion. This is great in that it distributes the load evenly across the cluster, but it can lead to significant resource waste in some cases, especially when dynamic allocation is enabled, because executors that each hold only a few tasks cannot be released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org