Posted to issues@spark.apache.org by "wuyi (Jira)" <ji...@apache.org> on 2021/11/10 03:27:00 UTC

[jira] [Updated] (SPARK-36575) Executor lost may cause spark stage to hang

     [ https://issues.apache.org/jira/browse/SPARK-36575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wuyi updated SPARK-36575:
-------------------------
    Issue Type: Improvement  (was: Bug)

> Executor lost may cause spark stage to hang
> -------------------------------------------
>
>                 Key: SPARK-36575
>                 URL: https://issues.apache.org/jira/browse/SPARK-36575
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 2.3.3
>            Reporter: hujiahua
>            Assignee: wuyi
>            Priority: Major
>
> When an executor finishes a task of some stage, the driver receives a `StatusUpdate` event to handle it. If the driver finds at the same time that the executor's heartbeat has timed out, it also needs to handle an `ExecutorLost` event for that executor. There is a race condition between the two, which can leave the task never rescheduled and the stage hung forever.
> The problem is that `TaskResultGetter.enqueueSuccessfulTask` uses an asynchronous thread to handle the successful task, which means the synchronized lock on `TaskSchedulerImpl` is released prematurely, midway through the handling [https://github.com/apache/spark/blob/branch-2.3/core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala#L61]. So `TaskSchedulerImpl` may handle `executorLost` first, and only afterwards does the asynchronous thread go on to handle the successful task. This leaves `TaskSetManager.successful` and `TaskSetManager.tasksSuccessful` with wrong values.
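> A minimal sketch of that window, with simplified, hypothetical names (not Spark's real classes): the scheduler lock is released as soon as the result is handed to the result-getter's thread pool, not when the success is actually recorded, so `executorLost` can take the lock in between.
> {code:scala}
> import java.util.concurrent.Executors
>
> // Hypothetical simplification of the race; names are illustrative only.
> object LockWindowSketch {
>   private val lock = new Object
>   private val resultGetterPool = Executors.newSingleThreadExecutor()
>   private var taskSuccessful = false
>
>   // Models statusUpdate: the synchronized block exits (releasing the lock)
>   // as soon as the work is enqueued, long before the pool thread runs.
>   def statusUpdate(): Unit = lock.synchronized {
>     resultGetterPool.submit(new Runnable {
>       def run(): Unit = lock.synchronized { taskSuccessful = true }
>     })
>   }
>
>   // Models executorLost: if it wins the race, it still sees the task as
>   // running and fails/resubmits it; the late "success" from the pool
>   // thread then corrupts the TaskSetManager bookkeeping.
>   def executorLost(): Unit = lock.synchronized {
>     if (!taskSuccessful) { /* mark task failed, resubmit */ }
>   }
> }
> {code}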
> Then `HeartbeatReceiver.expireDeadHosts` executes `killAndReplaceExecutor`, which causes `TaskSchedulerImpl.executorLost` to run twice. `executorLost` performs `copiesRunning(index) -= 1`, so running it twice drives `copiesRunning(index)` to -1, which leaves the stage hanging.
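> Sketched concretely (a hypothetical, stripped-down version of the bookkeeping, not the real `TaskSetManager`), the double decrement looks like this:
> {code:scala}
> // Hypothetical simplification: two executorLost calls for the same task
> // drive copiesRunning below zero, so the task is never offered again.
> object CopiesRunningSketch {
>   val copiesRunning = Array(1)   // one running copy of the task at index 0
>
>   def executorLost(index: Int): Unit = {
>     copiesRunning(index) -= 1    // no guard against a repeated call
>   }
>
>   def main(args: Array[String]): Unit = {
>     executorLost(0)              // heartbeat timeout:      1 -> 0
>     executorLost(0)              // killAndReplaceExecutor: 0 -> -1
>     // A task is only offered while copiesRunning(index) == 0, so a value
>     // of -1 means it is never resubmitted and the stage hangs.
>     println(copiesRunning(0))    // prints -1
>   }
> }
> {code}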
> Relevant log output from when the issue occurred:
> 21/08/05 02:58:14,784 INFO [dispatcher-event-loop-8] TaskSetManager: Starting task 4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724, partition 4004, ANY, 7994 bytes)
> 21/08/05 03:00:24,126 ERROR [dispatcher-event-loop-4] TaskSchedulerImpl: Lost executor 366724 on 10.109.89.3: Executor heartbeat timed out after 140830 ms
> 21/08/05 03:00:24,218 WARN [dispatcher-event-loop-4] TaskSetManager: Lost task 4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724): ExecutorLostFailure (executor 366724 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 140830 ms
> 21/08/05 03:00:24,542 INFO [task-result-getter-2] TaskSetManager: Finished task 4004.0 in stage 1328625.0 (TID 347212402) in 129758 ms on 10.109.89.3 (executor 366724) (3047/5400)
> 21/08/05 03:00:34,621 INFO [dispatcher-event-loop-8] TaskSchedulerImpl: Executor 366724 on 10.109.89.3 killed by driver.
> 21/08/05 03:00:34,771 INFO [spark-listener-group-executorManagement] ExecutorMonitor: Executor 366724 removed (new total is 793)
> 21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Executor lost: 366724 (epoch 417416)
> 21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14] BlockManagerMasterEndpoint: Trying to remove executor 366724 from BlockManagerMaster.
> 21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14] BlockManagerMasterEndpoint: Removing block manager BlockManagerId(366724, 10.109.89.3, 43402, None)
> 21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] BlockManagerMaster: Removed 366724 successfully in removeExecutor
> 21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle files lost for executor: 366724 (epoch 417416)
> 21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Executor lost: 366724 (epoch 417473)
> 21/08/05 03:00:44,584 INFO [dispatcher-event-loop-15] BlockManagerMasterEndpoint: Trying to remove executor 366724 from BlockManagerMaster.
> 21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] BlockManagerMaster: Removed 366724 successfully in removeExecutor
> 21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle files lost for executor: 366724 (epoch 417473)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org