Posted to issues@spark.apache.org by "meiyoula (JIRA)" <ji...@apache.org> on 2015/10/27 04:11:27 UTC

[jira] [Updated] (SPARK-11334) numRunningTasks can't be less than 0, or it will affect executor allocation

     [ https://issues.apache.org/jira/browse/SPARK-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

meiyoula updated SPARK-11334:
-----------------------------
    Description: 
With *Dynamic Allocation* enabled, when a task fails more than *maxFailure* times, all of its dependent jobs, stages, and tasks are killed or aborted. In this process, the *SparkListenerTaskEnd* event can arrive after *SparkListenerStageCompleted* and *SparkListenerJobEnd*, as in the event log below:
{quote}
{"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of Tasks":200}
{"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
{"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task Info":{"Task ID":1955,"Index":88,"Attempt":2,"Launch Time":1444914699763,"Executor ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
{quote}

Because of that, *numRunningTasks* can drop below 0, which in turn affects executor allocation.
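
A minimal sketch of the guard implied by the summary, assuming a standalone SparkListener (the class name, field, and clamping approach below are illustrative, not the actual ExecutorAllocationManager code):
{code:scala}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd, SparkListenerTaskStart}

// Hypothetical tracker: counts running tasks and never lets the counter go
// below zero, even if a TaskEnd for a killed task is delivered after the
// stage/job has already been reported as finished.
class RunningTaskTracker extends SparkListener {
  private var numRunningTasks = 0

  override def onTaskStart(taskStart: SparkListenerTaskStart): Unit = synchronized {
    numRunningTasks += 1
  }

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
    // SparkListenerTaskEnd can arrive after SparkListenerStageCompleted and
    // SparkListenerJobEnd (see the event log above), so clamp at zero.
    numRunningTasks = math.max(0, numRunningTasks - 1)
  }

  def running: Int = synchronized { numRunningTasks }
}
{code}
A tracker like this could be registered with SparkContext.addSparkListener; clamping only keeps the counter sane, while the late-arriving TaskEnd ordering shown above remains the underlying cause.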

  was:With Dynamic Allocation enabled, when a task fails more than maxFailure times, all of its dependent jobs, stages, and tasks are killed or aborted. In this process, the SparkListenerTaskEnd event can arrive after SparkListenerStageCompleted and SparkListenerJobEnd, as in the event log below:


> numRunningTasks can't be less than 0, or it will affect executor allocation
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-11334
>                 URL: https://issues.apache.org/jira/browse/SPARK-11334
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> With *Dynamic Allocation* enabled, when a task fails more than *maxFailure* times, all of its dependent jobs, stages, and tasks are killed or aborted. In this process, the *SparkListenerTaskEnd* event can arrive after *SparkListenerStageCompleted* and *SparkListenerJobEnd*, as in the event log below:
> {quote}
> {"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":20,"Stage Attempt ID":0,"Stage Name":"run at AccessController.java:-2","Number of Tasks":200}
> {"Event":"SparkListenerJobEnd","Job ID":9,"Completion Time":1444914699829}
> {"Event":"SparkListenerTaskEnd","Stage ID":20,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"TaskKilled"},"Task Info":{"Task ID":1955,"Index":88,"Attempt":2,"Launch Time":1444914699763,"Executor ID":"5","Host":"linux-223","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1444914699864,"Failed":true,"Accumulables":[]}}
> {quote}
> Because of that, *numRunningTasks* can drop below 0, which in turn affects executor allocation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org