Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2018/12/05 06:23:00 UTC

[jira] [Assigned] (SPARK-26269) YarnAllocator should have the same blacklist behaviour as YARN to maximize use of cluster resources

     [ https://issues.apache.org/jira/browse/SPARK-26269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-26269:
------------------------------------

    Assignee:     (was: Apache Spark)

> YarnAllocator should have the same blacklist behaviour as YARN to maximize use of cluster resources
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-26269
>                 URL: https://issues.apache.org/jira/browse/SPARK-26269
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 2.3.1, 2.3.2, 2.4.0
>            Reporter: wuyi
>            Priority: Minor
>             Fix For: 2.4.0
>
>
> Currently, YarnAllocator may add a node to its blacklist when a container on that node completes with an exit status other than SUCCESS, PREEMPTED, KILLED_EXCEEDED_VMEM, or KILLED_EXCEEDED_PMEM. However, for some of those other exit statuses, e.g. KILLED_BY_RESOURCEMANAGER, YARN itself does not consider the node blacklist-worthy (see YARN's explanation for details: https://github.com/apache/hadoop/blob/228156cfd1b474988bc4fedfbf7edddc87db41e3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java#L273). So, relaxing the current blacklist rule to match YARN's behaviour would maximize use of cluster resources.
>  
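> As a rough sketch of the relaxed rule (illustrative only, not the actual YarnAllocator code; it assumes YARN's ContainerExitStatus constants, and the authoritative status list would come from YARN's Apps.shouldCountTowardsNodeBlacklisting), it could look like this:
>
>     import org.apache.hadoop.yarn.api.records.ContainerExitStatus
>
>     object BlacklistSketch {
>       // Exit statuses that are not the node's fault, so the node should not be
>       // penalized. The entries here are examples taken from this issue; the full
>       // set should mirror YARN's Apps.java.
>       private val notNodeFaultStatuses: Set[Int] = Set(
>         ContainerExitStatus.SUCCESS,
>         ContainerExitStatus.PREEMPTED,
>         ContainerExitStatus.KILLED_EXCEEDED_VMEM,
>         ContainerExitStatus.KILLED_EXCEEDED_PMEM,
>         ContainerExitStatus.KILLED_BY_RESOURCEMANAGER)
>
>       // Returns true only if the exit status suggests the node itself is unhealthy
>       // and should count towards blacklisting.
>       def countsTowardsNodeBlacklisting(exitStatus: Int): Boolean =
>         !notNodeFaultStatuses.contains(exitStatus)
>     }
>
> With a helper like this, YarnAllocator would blacklist a node only for exit statuses that YARN itself treats as node failures, instead of for every non-success completion.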



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org