Posted to issues@spark.apache.org by "Thomas Graves (Jira)" <ji...@apache.org> on 2020/01/22 17:08:00 UTC

[jira] [Commented] (SPARK-27082) Dynamic Allocation: we should consider the scenario where a speculative task is killed and never resubmitted

    [ https://issues.apache.org/jira/browse/SPARK-27082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021284#comment-17021284 ] 

Thomas Graves commented on SPARK-27082:
---------------------------------------

I ran into this case while testing. I think when you say "killed" here you really just mean that one of the 2 tasks finishes (succeeds) and the other task that was still running is killed, correct?  The dynamic allocation manager never removes it from speculativeTaskIndices, and thus we always keep more executors than we actually need.
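
For reference, a minimal REPL-style sketch of the stale-tracking effect (not Spark code; the map name just mirrors stageIdToSpeculativeTaskIndices, and the real structure is keyed per stage and updated from listener events):

{code:scala}
import scala.collection.mutable

// Per-stage set of task indices that currently have a speculative copy in flight.
val stageIdToSpeculativeTaskIndices = mutable.HashMap[Int, mutable.HashSet[Int]]()

// A speculative copy of task index 7 in stage 0 is launched.
stageIdToSpeculativeTaskIndices.getOrElseUpdate(0, mutable.HashSet.empty) += 7

// The original attempt of index 7 succeeds and the speculative copy is killed.
// Nothing removes the index here, so it keeps counting toward the executor
// target even though no attempt of it will ever run again.
stageIdToSpeculativeTaskIndices(0).size  // 1, but the real demand is 0
{code}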

Would you be interested in putting up a PR with a fix?

> Dynamic Allocation: we should consider the scenario where a speculative task is killed and never resubmitted
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27082
>                 URL: https://issues.apache.org/jira/browse/SPARK-27082
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Zhen Fan
>            Priority: Major
>              Labels: patch
>
> Issue background:
> When we enable dynamic allocation, we expect executors to be removed appropriately, especially in stages with data skew. With speculation enabled, the speculative copy of a task can be killed once the original task succeeds, and vice versa. In that case TaskSetManager sets successful(index)=true and never resubmits the killed task. However, ExecutorAllocationManager, which drives dynamic allocation, doesn't handle this scenario.
> See SPARK-8366. That change ignores the scenario where the killed task is a speculative copy: TaskSetManager marks the task index of the stage as successful and never resubmits the killed task, so we shouldn't treat it as a pending task here.
> This distorts the computation of maxNumExecutorsNeeded; as a result, we retain unnecessary executors and waste cluster resources.
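> To make the effect concrete, here is a rough, REPL-style sketch of how the executor target follows the pending + running task counts (simplified; the real maxNumExecutorsNeeded in ExecutorAllocationManager also applies an allocation ratio and reads the counts from its listener, so the numbers below are purely illustrative):
> {code:scala}
> // Illustrative inputs only; not Spark code.
> val tasksPerExecutor = 4
>
> def maxNumExecutorsNeeded(pendingTasks: Int, runningTasks: Int): Int =
>   math.ceil((pendingTasks + runningTasks).toDouble / tasksPerExecutor).toInt
>
> // A killed speculative copy whose sibling already succeeded is still counted
> // as pending, so the target never drops back to the real demand:
> maxNumExecutorsNeeded(1, 0)  // 1 executor kept alive for a task that will never run
> maxNumExecutorsNeeded(0, 0)  // 0, what is actually needed
> {code}
>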
> Solution:
> When the task index is marked as speculative and the mirror task has succeeded, we won't treat it as a pending task.
> The code below has been tested.
> {code:scala}
> // Track, for each stage, the speculative task indices together with a flag
> // recording whether some attempt of that index has already succeeded.
> private val stageIdToSpeculativeTaskIndices =
>   new mutable.HashMap[Int, mutable.HashMap[Int, Boolean]]
> ...
> // Inside the onTaskEnd handler:
> val speculativeTaskIndices = stageIdToSpeculativeTaskIndices.get(stageId)
> if (taskEnd.reason == Success) {
>   // Record that this index now has a successful attempt, so a later "killed"
>   // event for its sibling copy is not treated as work to resubmit.
>   if (speculativeTaskIndices.isDefined && speculativeTaskIndices.get.contains(taskIndex)) {
>     speculativeTaskIndices.get(taskIndex) = true
>   }
> } else {
>   // A killed task only needs to be resubmitted if no attempt of the same
>   // index has already succeeded.
>   var resubmitTask = true
>   if (taskEnd.taskInfo.killed) {
>     resubmitTask = !(speculativeTaskIndices.isDefined &&
>         speculativeTaskIndices.get.getOrElse(taskIndex, false))
>   }
>   if (resubmitTask) {
>     // The task will run again, so mark the scheduler backlogged and let the
>     // index count as pending once more (SPARK-8366).
>     if (totalPendingTasks() == 0) {
>       allocationManager.onSchedulerBacklogged()
>     }
>     if (taskEnd.taskInfo.speculative) {
>       stageIdToSpeculativeTaskIndices.get(stageId).foreach {_.remove(taskIndex)}
>     } else {
>       stageIdToTaskIndices.get(stageId).foreach {_.remove(taskIndex)}
>     }
>   }
> }
> {code}
> Please take a look. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
