Posted to issues@spark.apache.org by "Saisai Shao (JIRA)" <ji...@apache.org> on 2015/05/20 04:02:00 UTC

[jira] [Commented] (SPARK-4352) Incorporate locality preferences in dynamic allocation requests

    [ https://issues.apache.org/jira/browse/SPARK-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14551649#comment-14551649 ] 

Saisai Shao commented on SPARK-4352:
------------------------------------

Hi all, I'd like to take a crack at this. Here is my proposed design doc (https://docs.google.com/document/d/1YH6TtZg1Rlcp6wcXpXpqSo3NNLG4O9FajGdmzABh-c8/edit?usp=sharing); any comments are greatly appreciated.

> Incorporate locality preferences in dynamic allocation requests
> ---------------------------------------------------------------
>
>                 Key: SPARK-4352
>                 URL: https://issues.apache.org/jira/browse/SPARK-4352
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.2.0
>            Reporter: Sandy Ryza
>            Priority: Critical
>
> Currently, achieving data locality in Spark is difficult unless an application takes resources on every node in the cluster.  preferredNodeLocalityData provides a sort of hacky workaround that has been broken since 1.0.
> With dynamic executor allocation, Spark requests executors in response to demand from the application.  When this occurs, it would be useful to look at the pending tasks and communicate their location preferences to the cluster resource manager. 
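
A minimal sketch of the idea in the description above, not Spark's actual API: when dynamic allocation decides to request executors, aggregate the locality preferences of the pending tasks into a host -> task-count map and pass it to the cluster resource manager alongside the requested executor count. The names here (PendingTask, ExecutorRequest, hostToPendingTaskCount) are hypothetical illustrations, not identifiers from Spark or the linked design doc.

    // Hypothetical stand-in for a pending task and its preferred hosts.
    case class PendingTask(id: Long, preferredHosts: Seq[String])

    object LocalityAwareRequest {

      // Count how many pending tasks prefer each host (one vote per task per host).
      def hostToPendingTaskCount(pending: Seq[PendingTask]): Map[String, Int] =
        pending
          .flatMap(_.preferredHosts.distinct)
          .groupBy(identity)
          .map { case (host, votes) => host -> votes.size }

      // Hypothetical request shape: total executors wanted plus locality hints
      // the cluster manager (e.g. YARN) could use when placing containers.
      case class ExecutorRequest(totalExecutors: Int, hostToTaskCount: Map[String, Int])

      def buildRequest(totalExecutors: Int, pending: Seq[PendingTask]): ExecutorRequest =
        ExecutorRequest(totalExecutors, hostToPendingTaskCount(pending))

      def main(args: Array[String]): Unit = {
        val pending = Seq(
          PendingTask(1, Seq("host1", "host2")),
          PendingTask(2, Seq("host1")),
          PendingTask(3, Seq("host3"))
        )
        // Prints: ExecutorRequest(2,Map(host1 -> 2, host2 -> 1, host3 -> 1)) (map order may vary)
        println(buildRequest(totalExecutors = 2, pending))
      }
    }

How the resource manager consumes these hints is left to the design doc linked in the comment above; the point of the sketch is only that the locality information already carried by pending tasks can be summarized cheaply at request time.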



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org