Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/08 05:45:10 UTC

[jira] [Resolved] (SPARK-24016) Yarn does not update node blacklist in static allocation

     [ https://issues.apache.org/jira/browse/SPARK-24016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-24016.
----------------------------------
    Resolution: Incomplete

> Yarn does not update node blacklist in static allocation
> --------------------------------------------------------
>
>                 Key: SPARK-24016
>                 URL: https://issues.apache.org/jira/browse/SPARK-24016
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, YARN
>    Affects Versions: 2.3.0
>            Reporter: Imran Rashid
>            Priority: Major
>              Labels: bulk-closed
>
> Task-based blacklisting keeps track of bad nodes and updates YARN with that set of nodes so that Spark will not receive more containers on them. However, that only happens with dynamic allocation. Though it's far more important with dynamic allocation, this matters even with static allocation: if executors die, or if the cluster was too busy at the original resource request to grant all the containers, the Spark application will request new containers mid-run, and we want an updated node blacklist for those requests.
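For context, a minimal sketch of the configuration combination the issue describes: task-based blacklisting enabled while dynamic allocation is disabled (static allocation). The blacklist property names below are the Spark 2.x configuration keys; the thresholds and application name are illustrative values only, not recommendations.

```shell
# Illustrative spark-submit invocation (example values, not recommendations).
# With dynamic allocation disabled, Spark 2.3 tracks blacklisted nodes
# internally but, per this issue, does not forward that blacklist to YARN
# when it requests replacement containers for lost executors.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.blacklist.enabled=true \
  --conf spark.blacklist.application.maxFailedTasksPerExecutor=2 \
  --conf spark.blacklist.application.maxFailedExecutorsPerNode=2 \
  my_app.py
```

Under this configuration, a node that exceeds the failure thresholds is excluded by Spark's task scheduler, but a replacement container requested after an executor loss could still land on that same node.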



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org