Posted to issues@spark.apache.org by "meiyoula (JIRA)" <ji...@apache.org> on 2015/07/01 05:44:04 UTC
[jira] [Updated] (SPARK-8366) When a task fails and a new one is appended,
the ExecutorAllocationManager can't sense the new tasks
[ https://issues.apache.org/jira/browse/SPARK-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
meiyoula updated SPARK-8366:
----------------------------
Description:
I use the *dynamic executor allocation* feature.
When an executor is killed, all tasks running on it fail. Until maxTaskFailures is reached, each failed task is re-run with a new task id.
But the `ExecutorAllocationManager` does not add these re-run tasks to its pending-task count, because the total number of tasks for a stage is only recorded when the stage is submitted.
was: I use the *dynamic executor allocation* feature. When an executor is killed, all tasks running on it fail. When the new (resubmitted) tasks are appended, no new executor is added.
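
The effect can be seen with a minimal listener sketch (Scala, using the public SparkListener callbacks; the class and field names below, such as PendingTaskTracker and stageIdToNumTasks, are illustrative and are not the actual ExecutorAllocationManager code): a tracker that fixes the task total at stage submission and marks indices as started on onTaskStart never counts a resubmitted task as pending again.

    import scala.collection.mutable
    import org.apache.spark.scheduler._

    // Illustrative tracker: pending tasks = tasks recorded at stage submission
    // minus task indices that have ever started. A task that fails and is
    // resubmitted keeps its index, so it is never counted as pending again.
    class PendingTaskTracker extends SparkListener {
      private val stageIdToNumTasks = new mutable.HashMap[Int, Int]
      private val stageIdToTaskIndices = new mutable.HashMap[Int, mutable.HashSet[Int]]

      override def onStageSubmitted(event: SparkListenerStageSubmitted): Unit = {
        // The total is fixed here and never grows when tasks are resubmitted.
        stageIdToNumTasks(event.stageInfo.stageId) = event.stageInfo.numTasks
      }

      override def onTaskStart(event: SparkListenerTaskStart): Unit = {
        stageIdToTaskIndices.getOrElseUpdate(event.stageId, new mutable.HashSet[Int]) +=
          event.taskInfo.index
      }

      def totalPendingTasks(): Int =
        stageIdToNumTasks.map { case (stageId, numTasks) =>
          numTasks - stageIdToTaskIndices.get(stageId).map(_.size).getOrElse(0)
        }.sum
    }

With this kind of bookkeeping, once every index of a stage has started at least once, totalPendingTasks() stays at zero even while failed tasks are being re-run, so no additional executors are requested for the remaining work.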
> When a task fails and a new one is appended, the ExecutorAllocationManager can't sense the new tasks
> -----------------------------------------------------------------------------------------------------
>
> Key: SPARK-8366
> URL: https://issues.apache.org/jira/browse/SPARK-8366
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.4.0
> Reporter: meiyoula
>
> I use the *dynamic executor allocation* feature.
> When an executor is killed, all tasks running on it fail. Until maxTaskFailures is reached, each failed task is re-run with a new task id.
> But the `ExecutorAllocationManager` does not add these re-run tasks to its pending-task count, because the total number of tasks for a stage is only recorded when the stage is submitted.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)