Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/02/10 12:50:44 UTC

[GitHub] [spark] SaurabhChawla100 commented on a change in pull request #26440: [SPARK-20628][CORE][K8S] Start to improve Spark decommissioning & preemption support

URL: https://github.com/apache/spark/pull/26440#discussion_r377042992
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ##########
 @@ -560,7 +618,9 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
   override def isExecutorActive(id: String): Boolean = synchronized {
     executorDataMap.contains(id) &&
       !executorsPendingToRemove.contains(id) &&
-      !executorsPendingLossReason.contains(id)
+      !executorsPendingLossReason.contains(id) &&
+      !executorsPendingDecommission.contains(id)
 
 Review comment:
   Here the entire node will be decommissioned. So instead of checking whether each executor id is contained in executorsPendingDecommission, can we filter at the node level, with the hostname as the input parameter? That way only one entry, the node's hostname, would be added to some decommission tracker. In the end we don't want any task to be scheduled on executors running on that node. We can check this in org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint#makeOffers. A minimal sketch of this idea follows below.
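   To make the suggestion concrete, here is a minimal sketch of the host-level check (not part of this PR; hostsPendingDecommission is a hypothetical set maintained by the backend, while executorDataMap, executorsPendingToRemove and executorsPendingLossReason are the existing fields shown in the diff above):

       // Hypothetical: one entry per host marked for decommission, instead of
       // one entry per executor in executorsPendingDecommission.
       private val hostsPendingDecommission = new scala.collection.mutable.HashSet[String]

       override def isExecutorActive(id: String): Boolean = synchronized {
         executorDataMap.contains(id) &&
           !executorsPendingToRemove.contains(id) &&
           !executorsPendingLossReason.contains(id) &&
           // Skip any executor whose host has been marked for decommission,
           // so a single hostname entry covers every executor on that node.
           !hostsPendingDecommission.contains(executorDataMap(id).executorHost)
       }

   The same hostsPendingDecommission set could then be consulted in DriverEndpoint#makeOffers so that no new task offers are built for executors on a decommissioning host.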
