Posted to issues@spark.apache.org by "Steven Rand (Jira)" <ji...@apache.org> on 2019/11/05 03:13:00 UTC

[jira] [Commented] (SPARK-29683) Job failed due to executor failures all available nodes are blacklisted

    [ https://issues.apache.org/jira/browse/SPARK-29683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967188#comment-16967188 ] 

Steven Rand commented on SPARK-29683:
-------------------------------------

We're experiencing this as well during HA YARN failover.

> Job failed due to executor failures all available nodes are blacklisted
> -----------------------------------------------------------------------
>
>                 Key: SPARK-29683
>                 URL: https://issues.apache.org/jira/browse/SPARK-29683
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 3.0.0
>            Reporter: Genmao Yu
>            Priority: Major
>
> My streaming job will fail with *due to executor failures all available nodes are blacklisted*. This exception is thrown only when all nodes are blacklisted:
> {code:java}
> // the job is aborted once the blacklist covers every node known to the cluster
> def isAllNodeBlacklisted: Boolean = currentBlacklistedYarnNodes.size >= numClusterNodes
> // the blacklist is the union of user-configured excludes and the scheduler/allocator blacklists
> val allBlacklistedNodes = excludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
> {code}
> After diving into the code, I found some critical conditions that are not handled properly:
>  - unchecked `excludeNodes`: it comes from user configuration. If set incorrectly, it can make "currentBlacklistedYarnNodes.size >= numClusterNodes" true, for example when the configured nodes are not part of the YARN cluster:
> {code:java}
> excludeNodes = (invalid1, invalid2, invalid3)
> clusterNodes = (valid1, valid2)
> {code}
>  - `numClusterNodes` may equal 0: during HA YARN failover, it takes some time for all NodeManagers to re-register with the ResourceManager. During that window, `numClusterNodes` may be 0 or smaller than the real cluster size, and the Spark driver fails.
>  - overly strict condition check: the Spark driver fails as soon as "currentBlacklistedYarnNodes.size >= numClusterNodes" holds, but this condition does not necessarily indicate an unrecoverable failure; for example, some NodeManagers may simply be restarting. We could allow a grace period before failing the job (a rough sketch of these ideas follows below).
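> A minimal sketch of the three mitigations above, assuming hypothetical names (`clusterNodeNames`, `gracePeriodMs`, `firstAllBlacklistedTime`, and the `BlacklistTrackerSketch` object are made up for illustration and are not the actual Spark implementation):
> {code:java}
> // Rough sketch only, not the actual Spark code.
> object BlacklistTrackerSketch {
>   // In the real tracker these would come from YARN and from Spark configuration.
>   var clusterNodeNames: Set[String] = Set.empty            // nodes currently known to the RM
>   var currentBlacklistedYarnNodes: Set[String] = Set.empty // nodes currently blacklisted
>   var excludeNodes: Set[String] = Set.empty                // user-configured excludes
>   def numClusterNodes: Int = clusterNodeNames.size
>
>   // 1. Ignore configured excludes that are not actually part of the YARN cluster.
>   def effectiveExcludeNodes: Set[String] = excludeNodes.filter(clusterNodeNames.contains)
>
>   // 2. An empty cluster view (e.g. right after RM failover) should not count as "all blacklisted".
>   def isAllNodeBlacklisted: Boolean =
>     numClusterNodes > 0 && currentBlacklistedYarnNodes.size >= numClusterNodes
>
>   // 3. Fail the job only if the condition persists longer than a grace period.
>   private var firstAllBlacklistedTime: Option[Long] = None
>   def shouldAbort(now: Long, gracePeriodMs: Long): Boolean = {
>     if (isAllNodeBlacklisted) {
>       if (firstAllBlacklistedTime.isEmpty) firstAllBlacklistedTime = Some(now)
>       now - firstAllBlacklistedTime.get >= gracePeriodMs
>     } else {
>       firstAllBlacklistedTime = None
>       false
>     }
>   }
> }
> {code}
> With something along these lines, a transient RM failover (where `numClusterNodes` briefly drops to 0) or a short NodeManager restart would no longer abort the job immediately.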



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org