Posted to issues@spark.apache.org by "Genmao Yu (Jira)" <ji...@apache.org> on 2019/10/31 09:36:00 UTC

[jira] [Created] (SPARK-29683) Job failed due to executor failures all available nodes are blacklisted

Genmao Yu created SPARK-29683:
---------------------------------

             Summary: Job failed due to executor failures all available nodes are blacklisted
                 Key: SPARK-29683
                 URL: https://issues.apache.org/jira/browse/SPARK-29683
             Project: Spark
          Issue Type: Bug
          Components: YARN
    Affects Versions: 3.0.0
            Reporter: Genmao Yu


My streaming job fails with *due to executor failures all available nodes are blacklisted*. This exception is thrown only when all nodes are blacklisted:
{code}
def isAllNodeBlacklisted: Boolean = currentBlacklistedYarnNodes.size >= numClusterNodes

val allBlacklistedNodes = excludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
{code}

After diving into the code, I found some critical conditions that are not handled properly:
- unchecked `excludeNodes`: it comes from user config. If it is not set properly, it can make "currentBlacklistedYarnNodes.size >= numClusterNodes" hold. For example, we may set nodes that are not in the YARN cluster at all (see the first sketch after this list):
{code}
excludeNodes = (invalid1, invalid2, invalid3)
clusterNodes = (valid1, valid2)
{code}
- `numClusterNodes` may equal 0: during a YARN ResourceManager HA failover, it takes some time for all NodeManagers to re-register with the ResourceManager. In that window, `numClusterNodes` may be 0 or some other too-small value, and the Spark driver fails (also handled in the second sketch below).
- too strict a condition check: the Spark driver fails as soon as "currentBlacklistedYarnNodes.size >= numClusterNodes" holds. This condition does not necessarily indicate an unrecoverable fatal state; for example, some NodeManagers may just be restarting. We could allow some waiting time before failing the job (see the second sketch after this list).
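
A minimal sketch of a fix for the first point, using assumed names (`clusterNodeNames`, `effectiveExcludeNodes`) rather than the real fields in the YARN allocator blacklist tracker: validate the user-configured exclude nodes against the nodes the ResourceManager actually reports before counting them toward the blacklist.
{code}
// Sketch only: clusterNodeNames and effectiveExcludeNodes are illustrative
// names, not the actual allocator fields.
val clusterNodeNames: Set[String] = Set("valid1", "valid2")             // from the RM node report
val excludeNodes: Set[String] = Set("invalid1", "invalid2", "invalid3") // user config

// Only count configured entries that match a real NodeManager, so bogus
// entries cannot push the blacklist size past numClusterNodes.
val effectiveExcludeNodes = excludeNodes.intersect(clusterNodeNames)

val schedulerBlacklist: Set[String] = Set.empty       // stand-ins for the real trackers
val allocatorBlacklist: Map[String, Long] = Map.empty
val allBlacklistedNodes = effectiveExcludeNodes ++ schedulerBlacklist ++ allocatorBlacklist.keySet
{code}
With the inputs above, `allBlacklistedNodes` stays empty instead of reaching size 3 against a 2-node cluster.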
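
And a sketch for the other two points, again with assumed names (`shouldFail`, `gracePeriodMs`; the real tracker fails the application immediately): skip the check while the ResourceManager reports zero registered NodeManagers, and tolerate a fully blacklisted cluster for a grace period before failing.
{code}
// Sketch of a more forgiving check; gracePeriodMs and allBlacklistedSince
// are hypothetical knobs, not existing Spark configs.
object BlacklistCheck {
  private var allBlacklistedSince: Option[Long] = None
  private val gracePeriodMs = 2 * 60 * 1000L  // e.g. wait 2 minutes before failing

  def shouldFail(numClusterNodes: Int, numBlacklisted: Int, now: Long): Boolean = {
    if (numClusterNodes <= 0 || numBlacklisted < numClusterNodes) {
      // Either not all nodes are blacklisted, or the RM currently reports no
      // registered NodeManagers at all (e.g. right after an HA failover): reset.
      allBlacklistedSince = None
      false
    } else {
      // All known nodes are blacklisted: fail only if the cluster has stayed
      // that way for the whole grace period, tolerating NodeManager restarts.
      if (allBlacklistedSince.isEmpty) allBlacklistedSince = Some(now)
      now - allBlacklistedSince.get >= gracePeriodMs
    }
  }
}
{code}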



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
