Posted to issues@spark.apache.org by "Mohit Sabharwal (JIRA)" <ji...@apache.org> on 2015/06/02 06:09:20 UTC

[jira] [Commented] (SPARK-7953) Spark should cleanup output dir if job fails

    [ https://issues.apache.org/jira/browse/SPARK-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568491#comment-14568491 ] 

Mohit Sabharwal commented on SPARK-7953:
----------------------------------------

Thanks, [~joshrosen]. I don't have cycles currently, but will jump on this when I have some.

> Spark should cleanup output dir if job fails
> --------------------------------------------
>
>                 Key: SPARK-7953
>                 URL: https://issues.apache.org/jira/browse/SPARK-7953
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.0
>            Reporter: Mohit Sabharwal
>
> MR calls abortTask and abortJob on the {{OutputCommitter}} to clean up temporary output directories, but Spark does not appear to do so when writing an RDD to a Hadoop FS.
> For example, {{PairRDDFunctions.saveAsNewAPIHadoopDataset}} should call {{committer.abortTask(hadoopContext)}} in the finally block inside the writeShard closure, and {{jobCommitter.abortJob(jobTaskContext, JobStatus.State.FAILED)}} should be called if the job fails.
> Additionally, MR removes the output dir if the job fails, but Spark doesn't (a sketch of both fixes follows below).
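
For reference, here is a minimal sketch of the cleanup the description asks for, written against the new-API Hadoop {{OutputCommitter}}. The names {{writeShard}}, {{committer}}, {{hadoopContext}}, {{jobCommitter}} and {{jobTaskContext}} are taken from the description above; everything else (the simplified signatures, the {{runAndCommit}} wrapper and its {{runTasks}} parameter) is a hypothetical stand-in, not the actual {{PairRDDFunctions}} source:

{code:scala}
import org.apache.hadoop.mapreduce.{JobContext, JobStatus, OutputCommitter, RecordWriter, TaskAttemptContext}

// Task side: commit on success, abort on failure so the attempt's
// _temporary output is deleted instead of being left behind.
def writeShard[K, V](
    committer: OutputCommitter,
    writer: RecordWriter[K, V],
    hadoopContext: TaskAttemptContext,
    records: Iterator[(K, V)]): Unit = {
  try {
    while (records.hasNext) {
      val (k, v) = records.next()
      writer.write(k, v)
    }
    writer.close(hadoopContext)
    if (committer.needsTaskCommit(hadoopContext)) {
      committer.commitTask(hadoopContext) // promote this attempt's output
    }
  } catch {
    case t: Throwable =>
      committer.abortTask(hadoopContext) // remove the attempt's temp files
      throw t
  }
}

// Job side: abort the whole job on failure so the committer can clean
// up the temporary job directory, mirroring what MR does.
def runAndCommit(jobCommitter: OutputCommitter, jobTaskContext: JobContext)(runTasks: => Unit): Unit = {
  try {
    runTasks
    jobCommitter.commitJob(jobTaskContext)
  } catch {
    case t: Throwable =>
      jobCommitter.abortJob(jobTaskContext, JobStatus.State.FAILED)
      throw t
  }
}
{code}

Note the abort calls sit on the failure path only (catch-and-rethrow rather than a bare finally), since aborting a task or job that has already committed could discard valid output.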


