Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/06/05 20:41:02 UTC
[jira] [Resolved] (SPARK-1677) Allow users to avoid Hadoop output checks if desired
[ https://issues.apache.org/jira/browse/SPARK-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Patrick Wendell resolved SPARK-1677.
------------------------------------
Resolution: Fixed
Fix Version/s: 1.1.0, 1.0.1
Issue resolved by pull request 947
[https://github.com/apache/spark/pull/947]
> Allow users to avoid Hadoop output checks if desired
> ----------------------------------------------------
>
> Key: SPARK-1677
> URL: https://issues.apache.org/jira/browse/SPARK-1677
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 1.0.0
> Reporter: Patrick Wendell
> Assignee: Nan Zhu
> Fix For: 1.0.1, 1.1.0
>
>
> For compatibility with older versions of Spark, it would be nice to have an option `spark.hadoop.validateOutputSpecs` (default true) with the description: "If set to true, validates the output specification used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing output directories."
> This would just wrap the checking added in the PR below behind a check of the Spark conf:
> https://issues.apache.org/jira/browse/SPARK-1100
> https://github.com/apache/spark/pull/11
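For context, the resolved behavior can be sketched as follows. This is a minimal illustration, not code from the pull request: the property name and its default come from this issue, while the app name, data, and output path are made up for the example.

```scala
// Sketch (Spark 1.0.1 / 1.1.0+): disabling Hadoop output-spec validation.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("overwrite-example")                  // illustrative name
  .set("spark.hadoop.validateOutputSpecs", "false") // default is "true"

val sc = new SparkContext(conf)
val rdd = sc.parallelize(Seq("a", "b", "c"))

// With validation disabled, saveAsTextFile (which goes through
// saveAsHadoopFile) no longer raises an exception when the output
// directory already exists; Spark skips the Hadoop checkOutputSpecs call.
rdd.saveAsTextFile("/tmp/output") // illustrative path
```

Note that disabling the check trades safety for compatibility: a job can then silently write into (and partially mix with) a pre-existing output directory, which is why the option defaults to true.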
--
This message was sent by Atlassian JIRA
(v6.2#6252)