Posted to issues@spark.apache.org by "Imran Rashid (JIRA)" <ji...@apache.org> on 2015/06/02 07:27:19 UTC

[jira] [Commented] (SPARK-8029) ShuffleMapTasks must be robust to concurrent attempts on the same executor

    [ https://issues.apache.org/jira/browse/SPARK-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568552#comment-14568552 ] 

Imran Rashid commented on SPARK-8029:
-------------------------------------

This issue is a subset of the problems originally reported in https://issues.apache.org/jira/browse/SPARK-7308; it was split out to give it a smaller scope, but one hopefully still large enough to drive the design discussion.

https://issues.apache.org/jira/browse/SPARK-7829 is an ad-hoc proposal for fixing this issue.

> ShuffleMapTasks must be robust to concurrent attempts on the same executor
> --------------------------------------------------------------------------
>
>                 Key: SPARK-8029
>                 URL: https://issues.apache.org/jira/browse/SPARK-8029
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Imran Rashid
>         Attachments: AlternativesforMakingShuffleMapTasksRobusttoMultipleAttempts.pdf
>
>
> When stages get retried, a task may have more than one attempt running at the same time, on the same executor.  Currently this causes problems for ShuffleMapTasks, since all attempts try to write to the same output files.
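
One common way to tolerate concurrent attempts writing the same output is to have each attempt write to an attempt-specific temporary file and then atomically rename it into the final location, so the last committed attempt wins and no attempt observes a partially written file. The sketch below illustrates that pattern in plain Java; the names (writeShuffleOutput, mapId, attemptId) are illustrative and are not Spark APIs, and this is not necessarily the fix Spark adopted.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AttemptSafeWriter {

    // Each attempt writes to its own temp file (keyed by attemptId),
    // then atomically renames it to the shared final path. Concurrent
    // attempts never write to the same file, and a reader of the final
    // path only ever sees a fully written output.
    public static Path writeShuffleOutput(Path dir, int mapId, long attemptId,
                                          byte[] data) throws IOException {
        Path tmp = dir.resolve("shuffle_" + mapId + "." + attemptId + ".tmp");
        Path finalPath = dir.resolve("shuffle_" + mapId + ".data");
        Files.write(tmp, data);
        // ATOMIC_MOVE: the rename is all-or-nothing; if another attempt
        // already committed, this attempt simply replaces the file with
        // equivalent content (on POSIX filesystems rename(2) semantics).
        Files.move(tmp, finalPath,
                   StandardCopyOption.ATOMIC_MOVE,
                   StandardCopyOption.REPLACE_EXISTING);
        return finalPath;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("shuffle-demo");
        byte[] payload = "partition-0-bytes".getBytes();
        // Two attempts of the same task racing on the same executor:
        writeShuffleOutput(dir, 0, 1L, payload);
        Path out = writeShuffleOutput(dir, 0, 2L, payload);
        System.out.println(Files.readAllBytes(out).length);
    }
}
```

A design note: this trades an extra rename per attempt for crash safety; the alternative of locking a single shared file keeps one writer at a time but leaves a half-written file if the holder dies mid-write.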



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org