Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/12/01 03:41:57 UTC

[GitHub] [spark] Ngone51 commented on a diff in pull request #38852: [SPARK-41341][CORE] Wait shuffle fetch to finish when decommission executor

Ngone51 commented on code in PR #38852:
URL: https://github.com/apache/spark/pull/38852#discussion_r1036661551


##########
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala:
##########
@@ -352,8 +353,20 @@ private[spark] class CoarseGrainedExecutorBackend(
                 // We can only trust allBlocksMigrated boolean value if there were no tasks running
                 // since the start of computing it.
                 if (allBlocksMigrated && (migrationTime > lastTaskRunningTime)) {
-                  logInfo("No running tasks, all blocks migrated, stopping.")
-                  exitExecutor(0, ExecutorLossMessage.decommissionFinished, notifyDriver = true)
+                  val pendingFetches = env.blockManager.getNumPendingBlockFetches()

Review Comment:
   Shouldn't the condition `executor.numRunningTasks == 0` avoid this situation already? If there are pending shuffle fetches, running tasks should be non-empty, right?
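
   For context, here is a minimal standalone sketch of the exit condition under discussion. It is not the actual Spark source: `getNumPendingBlockFetches()` is taken from the diff above, while the other names are simplified, hypothetical stand-ins for the executor and block-manager state.

   ```scala
   // Minimal sketch of the decommission exit check, NOT the real Spark code.
   object DecommissionExitSketch {

     // Hypothetical stand-ins for executor and block-manager state.
     final case class State(
         numRunningTasks: Int,       // executor.numRunningTasks in the real code
         allBlocksMigrated: Boolean, // from the block migration status
         migrationTime: Long,        // nanos when the migration snapshot was taken
         lastTaskRunningTime: Long,  // nanos when a task was last seen running
         pendingBlockFetches: Int)   // blockManager.getNumPendingBlockFetches()

     // The pre-PR check exits once all blocks are migrated and no task has run
     // since the migration snapshot; the PR additionally waits for in-flight
     // shuffle fetches to drain before exiting.
     def readyToExit(s: State): Boolean = {
       val migrated = s.allBlocksMigrated && s.migrationTime > s.lastTaskRunningTime
       s.numRunningTasks == 0 && migrated && s.pendingBlockFetches == 0
     }

     def main(args: Array[String]): Unit = {
       // The scenario behind the question above: zero running tasks but a
       // shuffle fetch still in flight -- the extra guard blocks the exit.
       val draining = State(numRunningTasks = 0, allBlocksMigrated = true,
         migrationTime = 200L, lastTaskRunningTime = 100L, pendingBlockFetches = 1)
       val drained = draining.copy(pendingBlockFetches = 0)
       println(s"fetch in flight -> exit: ${readyToExit(draining)}") // false
       println(s"fetches drained -> exit: ${readyToExit(drained)}")  // true
     }
   }
   ```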



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org