Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/09/23 16:20:20 UTC

[jira] [Assigned] (SPARK-17637) Packed scheduling for Spark tasks across executors

     [ https://issues.apache.org/jira/browse/SPARK-17637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17637:
------------------------------------

    Assignee: Apache Spark

> Packed scheduling for Spark tasks across executors
> --------------------------------------------------
>
>                 Key: SPARK-17637
>                 URL: https://issues.apache.org/jira/browse/SPARK-17637
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>            Reporter: Zhan Zhang
>            Assignee: Apache Spark
>            Priority: Minor
>
> Currently the Spark scheduler implements round-robin scheduling of tasks across executors, which is great in that it distributes load evenly across the cluster. However, it can lead to significant resource waste in some cases, especially when dynamic allocation is enabled.
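
To illustrate the trade-off described above, here is a minimal sketch (not Spark's actual scheduler code; the function names and numbers are hypothetical) contrasting round-robin placement with a "packed" policy that fills one executor to capacity before using the next. Under dynamic allocation, packing leaves whole executors idle so they can be released back to the cluster:

```python
# Hypothetical illustration only -- not Spark's scheduler implementation.

def round_robin(num_tasks, executors):
    """Assign tasks to executors in rotation (spreads load evenly)."""
    assignment = {e: 0 for e in executors}
    for i in range(num_tasks):
        assignment[executors[i % len(executors)]] += 1
    return assignment

def packed(num_tasks, executors, cores_per_executor):
    """Fill each executor to capacity before moving to the next one."""
    assignment = {e: 0 for e in executors}
    remaining = num_tasks
    for e in executors:
        take = min(cores_per_executor, remaining)
        assignment[e] = take
        remaining -= take
        if remaining == 0:
            break
    return assignment

executors = ["exec-1", "exec-2", "exec-3", "exec-4"]
# Round-robin: every executor gets at least one task, so none can be freed.
print(round_robin(6, executors))
# Packed (4 cores each): exec-3 and exec-4 stay idle and could be released.
print(packed(6, executors, 4))
```

With 6 tasks on 4 executors of 4 cores each, round-robin keeps all four executors busy, while the packed policy uses only two, which is exactly the resource-release opportunity this issue targets.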



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org