Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2015/08/19 20:56:48 UTC

[jira] [Commented] (SPARK-8029) ShuffleMapTasks must be robust to concurrent attempts on the same executor

    [ https://issues.apache.org/jira/browse/SPARK-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703567#comment-14703567 ] 

Reynold Xin commented on SPARK-8029:
------------------------------------

I have retargeted this and downgraded it from Blocker to Critical, since it has been around for a while and is not a regression.

> ShuffleMapTasks must be robust to concurrent attempts on the same executor
> --------------------------------------------------------------------------
>
>                 Key: SPARK-8029
>                 URL: https://issues.apache.org/jira/browse/SPARK-8029
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Imran Rashid
>            Assignee: Imran Rashid
>            Priority: Critical
>         Attachments: AlternativesforMakingShuffleMapTasksRobusttoMultipleAttempts.pdf
>
>
> When stages get retried, a task may have more than one attempt running at the same time, on the same executor.  Currently this causes problems for ShuffleMapTasks, since all attempts try to write to the same output files.
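
To make the failure mode concrete, here is a minimal sketch of why two attempts collide. It assumes the shuffle_<shuffleId>_<mapId>_<reduceId>.data naming scheme used for shuffle output files; the object and method below are illustrative stand-ins, not Spark's actual API:

    // Sketch only: shuffle output paths are keyed by (shuffleId, mapId,
    // reduceId). The task attempt number is not part of the name, so two
    // attempts of the same map task resolve to the same file on disk.
    object ShuffleFileCollision {
      // Hypothetical stand-in for the shuffle block naming scheme.
      def shuffleDataFile(shuffleId: Int, mapId: Int, reduceId: Int): String =
        s"shuffle_${shuffleId}_${mapId}_${reduceId}.data"

      def main(args: Array[String]): Unit = {
        // Two concurrent attempts of the same map task (say attempts 0 and 1)
        // derive the identical path, so their writes race with each other:
        val attempt0 = shuffleDataFile(shuffleId = 3, mapId = 7, reduceId = 0)
        val attempt1 = shuffleDataFile(shuffleId = 3, mapId = 7, reduceId = 0)
        println(attempt0 == attempt1)  // prints: true
      }
    }

A robust scheme would either incorporate the attempt id into the path or have each attempt write to a temporary file and atomically commit a single winner; the attached design doc surveys such alternatives.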



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org