Posted to issues@spark.apache.org by "Cheng Lian (JIRA)" <ji...@apache.org> on 2015/05/23 16:05:17 UTC
[jira] [Created] (SPARK-7842) For InsertIntoHadoopFsRelation, if an exception is thrown while committing a task, the task is not aborted
Cheng Lian created SPARK-7842:
---------------------------------
Summary: For InsertIntoHadoopFsRelation, if an exception is thrown while committing a task, the task is not aborted
Key: SPARK-7842
URL: https://issues.apache.org/jira/browse/SPARK-7842
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Lian
Assignee: Cheng Lian
Priority: Critical
This is related to SPARK-7838, where an exception is thrown while committing a task that writes a Parquet file. More specifically, the exception is thrown from {{OutputWriter.close()}}. In this case we should catch the exception and call {{abortTask()}} accordingly, as in the sketch below.
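A minimal, self-contained sketch of the intended commit/abort pattern. This is a simplified stand-in, not Spark's actual write-path code: the {{OutputWriter}} trait and the {{commitTask}}/{{abortTask}} parameters here are hypothetical placeholders for the corresponding internals.
{code:scala}
// Sketch of the commit/abort pattern this ticket asks for. The
// OutputWriter trait and the commitTask/abortTask parameters are
// simplified stand-ins, not Spark's real internal API.
object WriteSketch {
  trait OutputWriter {
    def write(row: String): Unit
    def close(): Unit // may throw, e.g. while flushing a Parquet footer
  }

  def writeAndCommit(
      writer: OutputWriter,
      rows: Iterator[String],
      commitTask: () => Unit,
      abortTask: () => Unit): Unit = {
    try {
      rows.foreach(writer.write)
      // close() must stay inside the try block: if it throws (as observed
      // in SPARK-7838), the catch clause below aborts the task instead of
      // leaving partially written output behind.
      writer.close()
      commitTask()
    } catch {
      case cause: Throwable =>
        abortTask() // clean up task output before propagating the failure
        throw new RuntimeException("Task failed while writing rows.", cause)
    }
  }
}
{code}
The key point the sketch illustrates is that {{close()}} is itself a fallible step of the commit sequence, so it must be covered by the same error handling that guards the write loop.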