Posted to issues@spark.apache.org by "Imran Rashid (JIRA)" <ji...@apache.org> on 2015/06/02 07:18:17 UTC

[jira] [Created] (SPARK-8029) ShuffleMapTasks must be robust to concurrent attempts on the same executor

Imran Rashid created SPARK-8029:
-----------------------------------

             Summary: ShuffleMapTasks must be robust to concurrent attempts on the same executor
                 Key: SPARK-8029
                 URL: https://issues.apache.org/jira/browse/SPARK-8029
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.4.0
            Reporter: Imran Rashid


When a stage is retried, a task may have more than one attempt running at the same time, on the same executor.  Currently this causes problems for ShuffleMapTasks, since all attempts of a given task write to the same shuffle output files: concurrent attempts can interleave or overwrite each other's writes and corrupt the map output.
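To illustrate the collision, here is a minimal sketch (simplified, not Spark's actual shuffle-writer code; the helper name and the exact file-name format are assumptions for illustration). It shows how two attempts of the same map task resolve to the same on-disk file when the attempt id is not part of the path:

{code:scala}
// Simplified sketch of the collision: the shuffle output path is derived
// only from (shuffleId, mapId, reduceId), so the task attempt number plays
// no role and two concurrent attempts target the identical file.
import java.io.File

object ShuffleFileCollision {
  // Hypothetical helper mirroring a shuffle_<shuffle>_<map>_<reduce> naming
  // scheme; the attempt id is deliberately absent, as in the reported bug.
  def shuffleOutputFile(localDir: File, shuffleId: Int, mapId: Int, reduceId: Int): File =
    new File(localDir, s"shuffle_${shuffleId}_${mapId}_${reduceId}.data")

  def main(args: Array[String]): Unit = {
    val localDir = new File(System.getProperty("java.io.tmpdir"))

    // Attempt 0 (original stage) and attempt 1 (stage retry) of map task 5:
    val attempt0Out = shuffleOutputFile(localDir, shuffleId = 0, mapId = 5, reduceId = 2)
    val attempt1Out = shuffleOutputFile(localDir, shuffleId = 0, mapId = 5, reduceId = 2)

    // Both attempts resolve to the identical file, so if they run
    // concurrently on the same executor their writes can clobber each other.
    println(attempt0Out.getPath == attempt1Out.getPath) // prints: true
  }
}
{code}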


