Posted to issues@spark.apache.org by "meiyoula (JIRA)" <ji...@apache.org> on 2015/10/27 04:03:27 UTC

[jira] [Created] (SPARK-11334) numRunningTasks can't be less than 0, or it will affect executor allocation

meiyoula created SPARK-11334:
--------------------------------

             Summary: numRunningTasks can't be less than 0, or it will affect executor allocation
                 Key: SPARK-11334
                 URL: https://issues.apache.org/jira/browse/SPARK-11334
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
            Reporter: meiyoula


With dynamic allocation enabled, when a task has failed more than maxFailure times, all of its dependent jobs, stages, and tasks are killed or aborted. In this process, the SparkListenerTaskEnd event can arrive after the SparkListenerStageCompleted and SparkListenerJobEnd events, as in the event log below:



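The effect of such out-of-order delivery can be sketched with a small listener. This is a minimal, illustrative Scala sketch only, not the actual Spark patch: the RunningTaskCounter class and its numRunningTasks field are hypothetical, while the SparkListener callbacks are real Spark APIs.

{code:scala}
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerTaskEnd, SparkListenerTaskStart}

// Hypothetical listener illustrating the problem: if a late
// SparkListenerTaskEnd arrives after SparkListenerStageCompleted /
// SparkListenerJobEnd have already cleared per-stage bookkeeping,
// a naive decrement drives the running-task count below zero.
class RunningTaskCounter extends SparkListener {
  private var numRunningTasks = 0

  override def onTaskStart(taskStart: SparkListenerTaskStart): Unit = synchronized {
    numRunningTasks += 1
  }

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
    // Clamp at zero so a late TaskEnd cannot make the count negative,
    // which would otherwise skew the executor-allocation target.
    numRunningTasks = math.max(numRunningTasks - 1, 0)
  }

  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = synchronized {
    // Stage-level bookkeeping is reset here; any TaskEnd events for this
    // stage still queued on the listener bus will be delivered afterwards.
  }
}
{code}

With a guard like the one in onTaskEnd, the count stays non-negative even when events are delivered out of order, so executor-allocation math based on it is not skewed.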
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org