Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/05/23 10:08:04 UTC

[jira] [Assigned] (SPARK-20713) Speculative task that got CommitDenied exception shows up as failed

     [ https://issues.apache.org/jira/browse/SPARK-20713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-20713:
------------------------------------

    Assignee:     (was: Apache Spark)

> Speculative task that got CommitDenied exception shows up as failed
> -------------------------------------------------------------------
>
>                 Key: SPARK-20713
>                 URL: https://issues.apache.org/jira/browse/SPARK-20713
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Thomas Graves
>
> When running speculative tasks you can end up getting a task failure on a speculative task (the other task succeeded) because that task got a CommitDenied exception, when really it was "killed" by the driver. It is a race between when the driver kills the task and when the executor tries to commit.
> I think ideally we should fix up the task state here to be killed, because the fact that this task failed doesn't matter since the other speculative task succeeded. Tasks showing up as failures confuse the user and could make other scheduler cases harder.
> This is somewhat related to SPARK-13343, where I think we should correctly account for speculative tasks. Only one of the 2 tasks really succeeded and committed; the other should be marked differently.
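A minimal Scala sketch of the mapping the report proposes, for illustration only. The types below (SpeculativeTaskStateSketch, CommitDenied, RecordedState, recordTaskEnd) are hypothetical stand-ins and are not Spark's actual TaskEndReason hierarchy or scheduler code; the sketch only shows the idea that a commit-denied speculative attempt should be recorded as killed rather than failed.

object SpeculativeTaskStateSketch {
  // Hypothetical, simplified stand-ins for the ways a task attempt can end.
  sealed trait TaskEndReason
  case object Success extends TaskEndReason
  case class TaskKilled(reason: String) extends TaskEndReason
  case class CommitDenied(jobId: Int, partitionId: Int, attemptNumber: Int) extends TaskEndReason
  case class OtherFailure(message: String) extends TaskEndReason

  // How the attempt ends up being recorded and shown to the user.
  sealed trait RecordedState
  case object Succeeded extends RecordedState
  case object Killed extends RecordedState
  case object Failed extends RecordedState

  // Proposed mapping: a commit-denied speculative attempt is effectively a kill,
  // since the winning attempt already committed the partition's output, so it
  // should not count or display as a failure.
  def recordTaskEnd(reason: TaskEndReason): RecordedState = reason match {
    case Success               => Succeeded
    case TaskKilled(_)         => Killed
    case CommitDenied(_, _, _) => Killed  // today this surfaces as Failed, which is the reported bug
    case OtherFailure(_)       => Failed
  }

  def main(args: Array[String]): Unit = {
    // The losing speculative attempt: denied the commit because its sibling won the race.
    println(recordTaskEnd(CommitDenied(jobId = 0, partitionId = 7, attemptNumber = 1)))  // Killed
  }
}

Under this mapping the losing attempt neither counts toward task-failure limits nor shows up as a failure in the UI, which matches the accounting concern raised in SPARK-13343.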



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org