Posted to reviews@spark.apache.org by brad-kaiser <gi...@git.apache.org> on 2017/10/02 17:14:39 UTC

[GitHub] spark pull request #19041: [SPARK-21097][CORE] Add option to recover cached ...

Github user brad-kaiser commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19041#discussion_r142199422
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -88,6 +89,12 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
       @GuardedBy("CoarseGrainedSchedulerBackend.this")
       private val executorsPendingToRemove = new HashMap[String, Boolean]
     
    +  // Mark executors that we will request to kill in the near future.
     +  // This is different from executors in executorsPendingToRemove, which have already been
     +  // asked to be killed.
    +  @GuardedBy("CoarseGrainedSchedulerBackend.this")
    +  private val executorsToBeKilled = mutable.Set.empty[String]
    --- End diff --
    
    I removed executorsToBeKilled. I tried using ```disableExecutor```, but it had the side effect of immediately removing executors. Instead I reused the pre-existing ```executorsPendingToRemove``` map and added a new parameter to ```killExecutors``` to get the behavior I need.
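    As a rough sketch of the approach described above (not the actual Spark implementation; the object name, the simplified ```killExecutors``` signature, and the extra ```replace``` parameter here are all hypothetical), reusing a pending-removal map keyed by executor id instead of a separate set might look like:

    ```scala
    import scala.collection.mutable

    // Hypothetical sketch: a single pending-removal map replaces the separate
    // executorsToBeKilled set. The Boolean value records per-executor intent
    // (here, whether the executor's resources should be replaced after the kill),
    // mirroring the HashMap[String, Boolean] shown in the diff above.
    object PendingRemovalSketch {
      // executor id -> replace-after-kill flag
      private val executorsPendingToRemove = mutable.HashMap.empty[String, Boolean]

      // Simplified stand-in for killExecutors with an extra parameter that
      // carries the new behavior; returns the ids actually accepted for removal.
      def killExecutors(ids: Seq[String], replace: Boolean): Seq[String] = synchronized {
        val accepted = ids.filterNot(executorsPendingToRemove.contains)
        accepted.foreach(id => executorsPendingToRemove(id) = replace)
        accepted
      }

      def isPending(id: String): Boolean = synchronized {
        executorsPendingToRemove.contains(id)
      }
    }
    ```

    Because the map already drives the removal path, tagging each entry with a flag avoids keeping two collections in sync under the same lock.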


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org