Posted to issues@spark.apache.org by "Cheng Lian (JIRA)" <ji...@apache.org> on 2015/05/25 18:30:17 UTC
[jira] [Resolved] (SPARK-7842) For InsertIntoHadoopFsRelation, if an exception is thrown while committing a task, the task is not aborted
[ https://issues.apache.org/jira/browse/SPARK-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheng Lian resolved SPARK-7842.
-------------------------------
Resolution: Fixed
Fix Version/s: 1.4.0
Issue resolved by pull request 6378
[https://github.com/apache/spark/pull/6378]
> For InsertIntoHadoopFsRelation, if an exception is thrown while committing a task, the task is not aborted
> ----------------------------------------------------------------------------------------------------------
>
> Key: SPARK-7842
> URL: https://issues.apache.org/jira/browse/SPARK-7842
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.4.0
> Reporter: Cheng Lian
> Assignee: Cheng Lian
> Priority: Critical
> Fix For: 1.4.0
>
>
> This is related to SPARK-7838, where an exception is thrown while committing a task that writes a Parquet file. More specifically, the exception is thrown from {{OutputWriter.close()}}. In this case, we should catch the exception and call {{abortTask()}} accordingly.
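> For illustration, here is a minimal Scala sketch of the guard this fix calls for: keep {{close()}} inside the protected region and fall back to {{abortTask()}} on any failure. The {{Writer}} and {{Committer}} traits below are hypothetical stand-ins, not Spark's actual internal API.
>
> trait Writer {
>   def close(): Unit  // may throw, e.g. while flushing a Parquet footer
> }
>
> trait Committer {
>   def commitTask(): Unit
>   def abortTask(): Unit  // cleans up partially written task output
> }
>
> def finishTask(writer: Writer, committer: Committer): Unit = {
>   var committed = false
>   try {
>     // Closing the writer is part of producing the output, so it must sit
>     // inside the guarded region: a failure here should abort the task.
>     writer.close()
>     committer.commitTask()
>     committed = true
>   } finally {
>     if (!committed) {
>       committer.abortTask()  // don't leave half-written files behind
>     }
>   }
> }
>
> With this shape, an exception from {{OutputWriter.close()}} still propagates to the caller, but only after {{abortTask()}} has cleaned up the partial output. Spark 1.4.0 applies this guard to the InsertIntoHadoopFsRelation write path via pull request 6378.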
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)