Posted to issues@spark.apache.org by "Thomas Graves (Jira)" <ji...@apache.org> on 2020/09/29 19:56:00 UTC

[jira] [Created] (SPARK-33031) scheduler with blacklisting doesn't appear to pick up new executor added

Thomas Graves created SPARK-33031:
-------------------------------------

             Summary: scheduler with blacklisting doesn't appear to pick up new executor added
                 Key: SPARK-33031
                 URL: https://issues.apache.org/jira/browse/SPARK-33031
             Project: Spark
          Issue Type: Bug
          Components: Scheduler
    Affects Versions: 3.0.0
            Reporter: Thomas Graves


I was running a test with blacklisting on YARN (and in standalone mode) in which all the executors were initially blacklisted. Then one of the executors died and we were allocated another one, but the scheduler did not appear to pick up the new executor and schedule tasks on it.

You can reproduce this by starting a master and slave on a single node, then launching a shell configured so that you get multiple executors (in this case I got 3):

$SPARK_HOME/bin/spark-shell --master spark://yourhost:7077 --executor-cores 4 --conf spark.blacklist.enabled=true
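For context, these are the blacklist thresholds that come into play here. The values below are the documented defaults in Spark 3.0 (a sketch for reference, not settings taken from this report; check your build's configuration docs):

{code}
spark.blacklist.enabled=true
# An executor is blacklisted for a given task after 1 failed attempt on it:
spark.blacklist.task.maxTaskAttemptsPerExecutor=1
# ...and for the whole stage after 2 failed tasks on it:
spark.blacklist.stage.maxFailedTasksPerExecutor=2
# ...and for the whole application after 2 failed tasks on it:
spark.blacklist.application.maxFailedTasksPerExecutor=2
{code}

With tasks that fail on their first two attempts (as in the snippet below), every executor quickly hits these limits, which is how all executors end up blacklisted.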

From shell run:
{code:java}
import org.apache.spark.TaskContext

val rdd = sc.makeRDD(1 to 1000, 5).mapPartitions { it =>
  val context = TaskContext.get()
  // Fail the first two attempts of every task so that the
  // executors running them get blacklisted.
  if (context.attemptNumber() < 2) {
    throw new Exception("test attempt num")
  }
  it
}
// Run an action (any action works) to actually launch the tasks.
rdd.collect()
{code}

Note that I tried both with and without dynamic allocation enabled.
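For the dynamic-allocation variant of the test, the usual settings to enable it on standalone/YARN are shown below (assumed typical configuration, not taken from the original report):

{code}
spark.dynamicAllocation.enabled=true
# Dynamic allocation normally requires the external shuffle service:
spark.shuffle.service.enabled=true
{code}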


A related screenshot can be seen on https://issues.apache.org/jira/browse/SPARK-33029



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org