Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/12 19:27:28 UTC

[GitHub] [spark] pgandhi999 opened a new pull request #24072: [SPARK-27112] : Spark Scheduler encounters two independent Deadlocks …

pgandhi999 opened a new pull request #24072: [SPARK-27112] : Spark Scheduler encounters two independent Deadlocks …
URL: https://github.com/apache/spark/pull/24072
 
 
   …when trying to kill executors either due to dynamic allocation or blacklisting
   
   Recently, a few Spark users in our organization reported that their jobs were getting stuck. On further analysis, we found that there are two independent deadlocks, each of which occurs under different circumstances. Screenshots of both deadlocks are attached below.
   
   We were able to reproduce the deadlocks with the following piece of code:
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.{FileSystem, Path}
   
   import org.apache.spark._
   import org.apache.spark.TaskContext
   
   // Simple example of Word Count in Scala
   object ScalaWordCount {
     def main(args: Array[String]): Unit = {
   
       if (args.length < 2) {
         System.err.println("Usage: ScalaWordCount <inputFilesUri> <outputFilesUri>")
         System.exit(1)
       }
   
       val conf = new SparkConf().setAppName("Scala Word Count")
       val sc = new SparkContext(conf)
   
       // get the input file uri
       val inputFilesUri = args(0)
   
       // get the output file uri
       val outputFilesUri = args(1)
   
       while (true) {
         val textFile = sc.textFile(inputFilesUri)
         val counts = textFile.flatMap(line => line.split(" "))
           .map { word =>
             // Fail the first attempt of partition 5 so that the executor gets blacklisted
             if (TaskContext.get.partitionId == 5 && TaskContext.get.attemptNumber == 0) {
               throw new Exception("Fail for blacklisting")
             } else {
               (word, 1)
             }
           }
           .reduceByKey(_ + _)
         counts.saveAsTextFile(outputFilesUri)
   
         // Delete the output directory so the next iteration can write it again
         val hadoopConf: Configuration = new Configuration()
         val path: Path = new Path(outputFilesUri)
         val hdfs: FileSystem = FileSystem.get(hadoopConf)
         hdfs.delete(path, true)
       }
   
       sc.stop()
     }
   }
   ```
   
   Additionally, to ensure that the deadlock surfaces soon enough, I also added a small delay to the Spark code here:
   
   https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala#L256
   
   ```scala
   executorIdToFailureList.remove(exec)
   updateNextExpiryTime()
   Thread.sleep(2000)
   killBlacklistedExecutor(exec)
   ```
   Also make sure that the following configs are set when launching the above Spark job:
   - `spark.blacklist.enabled=true`
   - `spark.blacklist.killBlacklistedExecutors=true`
   - `spark.blacklist.application.maxFailedTasksPerExecutor=1`
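   
   For reference, the same flags can also be set programmatically. A minimal sketch, assuming it replaces the `SparkConf` construction in the `main` method of the reproduction above:
   
   ```scala
   import org.apache.spark.{SparkConf, SparkContext}
   
   // Equivalent programmatic form of the flags listed above; they must be set
   // before the SparkContext is created for the reproduction to work.
   val conf = new SparkConf()
     .setAppName("Scala Word Count")
     .set("spark.blacklist.enabled", "true")
     .set("spark.blacklist.killBlacklistedExecutors", "true")
     .set("spark.blacklist.application.maxFailedTasksPerExecutor", "1")
   val sc = new SparkContext(conf)
   ```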
   
   Screenshots of the deadlock between the task-result-getter thread and the spark-dynamic-executor-allocation thread:
   <img width="1417" alt="Screen Shot 2019-02-26 at 4 10 26 PM" src="https://user-images.githubusercontent.com/22228190/54062129-de943e80-41c9-11e9-8c62-cc810b7be74d.png">
   
   <img width="1416" alt="Screen Shot 2019-02-26 at 4 10 48 PM" src="https://user-images.githubusercontent.com/22228190/54062135-e653e300-41c9-11e9-99a2-7569045487bd.png">
   
   Screenshots of the deadlock between the task-result-getter thread and the dispatcher-event-loop thread:
   
   <img width="1417" alt="Screen Shot 2019-02-26 at 4 11 11 PM" src="https://user-images.githubusercontent.com/22228190/54062148-fc61a380-41c9-11e9-9014-9199b46863d8.png">
   
   <img width="1417" alt="Screen Shot 2019-02-26 at 4 11 26 PM" src="https://user-images.githubusercontent.com/22228190/54062150-01beee00-41ca-11e9-8d0b-a7a1836a4e11.png">
   
   
   
   ## What changes were proposed in this pull request?
   
   There are two deadlocks as a result of the interplay between three different threads:
   
   - **task-result-getter thread**
   - **spark-dynamic-executor-allocation thread**
   - **dispatcher-event-loop thread** (`makeOffers()`)
   
   The fix imposes an ordered synchronization constraint: `makeOffers()` now acquires the lock on `TaskSchedulerImpl` before acquiring the lock on `CoarseGrainedSchedulerBackend`, and the `isExecutorBusy()` call is moved out of the block synchronized on the `CoarseGrainedSchedulerBackend` object.
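   
   For illustration only (this is not the patch itself; the object and lock names below are placeholders rather than actual Spark classes), a minimal sketch of the lock-ordering idea:
   
   ```scala
   // Hypothetical sketch of the lock-ordering rule: every path that needs both
   // locks takes the scheduler lock first and the backend lock second, so the
   // two monitors are never acquired in opposite orders and the circular wait
   // behind the deadlocks cannot form.
   object LockOrderingSketch {
     private val schedulerLock = new Object // stands in for TaskSchedulerImpl
     private val backendLock = new Object   // stands in for CoarseGrainedSchedulerBackend
   
     // e.g. the dispatcher-event-loop path (makeOffers)
     def makeOffers(): Unit = schedulerLock.synchronized {
       backendLock.synchronized {
         // ... build resource offers and launch tasks ...
       }
     }
   
     // e.g. the dynamic-allocation / blacklisting path that kills executors
     def killExecutors(): Unit = schedulerLock.synchronized {
       backendLock.synchronized {
         // ... mark executors as pending to remove ...
       }
     }
   }
   ```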
   
   ## How was this patch tested?
   
   The code used to reproduce the deadlock issue is documented above.
