Posted to issues@hive.apache.org by "Sahil Takiar (JIRA)" <ji...@apache.org> on 2018/03/22 02:10:00 UTC
[jira] [Comment Edited] (HIVE-18831) Differentiate errors that are thrown by Spark tasks
[ https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16388881#comment-16388881 ]
Sahil Takiar edited comment on HIVE-18831 at 3/22/18 2:09 AM:
--------------------------------------------------------------
Before this patch the console output would look like:
{code}
Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
{code}
Now it looks like:
{code}
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed due to Spark task failures: Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
{code}
So this change just combined these two lines and cleaned up the error message a bit.
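For illustration only, here is a minimal sketch of how the two lines might be merged by pulling the root-cause message out of the propagated exception chain; the class and method names below are assumptions made for this comment, not the actual patch:
{code}
// Hypothetical sketch (not the actual HIVE-18831 change): build one console
// line that embeds the root-cause message from the propagated exception.
public final class SparkErrorMessages {

  // Walk getCause() down to the innermost throwable.
  private static Throwable getRootCause(Throwable t) {
    Throwable cause = t;
    while (cause.getCause() != null && cause.getCause() != cause) {
      cause = cause.getCause();
    }
    return cause;
  }

  // Combine the SparkTask failure line with the task-level error message.
  public static String buildConsoleMessage(int returnCode, Throwable jobError) {
    return "FAILED: Execution Error, return code " + returnCode
        + " from org.apache.hadoop.hive.ql.exec.spark.SparkTask."
        + " Spark job failed due to Spark task failures: "
        + getRootCause(jobError).getMessage();
  }
}
{code}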
Other changes:
* Did the same thing for Spark job failures
* Found a way to differentiate between Spark task failures and Spark job failures (a rough sketch of the idea follows after this list)
* Added some unit tests
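As a rough illustration of the differentiation idea, the sketch below assumes that task-originated errors surface as a HiveException somewhere in the cause chain, while everything else is treated as a job-level (HS2 / RSC) failure; the classifier class and method names are hypothetical, not taken from the patch:
{code}
import org.apache.hadoop.hive.ql.metadata.HiveException;

// Illustrative only: classify a propagated Spark failure by inspecting the
// cause chain for an exception thrown inside a Spark task.
public final class SparkFailureClassifier {

  public enum FailureSource { SPARK_TASK, SPARK_JOB }

  // Returns SPARK_TASK if any throwable in the cause chain is a HiveException
  // (e.g. thrown by an operator running inside a task); otherwise the error
  // is treated as a job-level failure.
  public static FailureSource classify(Throwable error) {
    for (Throwable t = error; t != null; t = t.getCause()) {
      if (t instanceof HiveException) {
        return FailureSource.SPARK_TASK;
      }
      if (t.getCause() == t) {
        break; // guard against self-referential cause chains
      }
    }
    return FailureSource.SPARK_JOB;
  }
}
{code}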
was (Author: stakiar):
Before this patch the console output would look like:
{code}
Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
{code}
Now it looks like:
{code}
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed due to Spark task failures: Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
{code}
So pretty much just combined these two lines and cleaned up the error message a bit.
> Differentiate errors that are thrown by Spark tasks
> ---------------------------------------------------
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Sahil Takiar
> Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, HIVE-18831.3.patch, HIVE-18831.4.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we don't differentiate between errors from HS2 / RSC vs. errors thrown by individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, it's difficult to know what part of the execution threw the exception.