Posted to issues@hive.apache.org by "Sahil Takiar (Jira)" <ji...@apache.org> on 2020/10/23 17:07:00 UTC

[jira] [Assigned] (HIVE-20273) Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo is interrupted

     [ https://issues.apache.org/jira/browse/HIVE-20273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar reassigned HIVE-20273:
-----------------------------------

    Assignee:     (was: Sahil Takiar)

> Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo is interrupted
> -----------------------------------------------------------------------------------
>
>                 Key: HIVE-20273
>                 URL: https://issues.apache.org/jira/browse/HIVE-20273
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-20273.1.patch, HIVE-20273.2.patch
>
>
> HIVE-19053 and HIVE-19733 added handling of {{InterruptedException}} to {{RemoteSparkJobStatus#getSparkJobInfo}} and {{RemoteSparkJobStatus#getSparkStagesInfo}}. These methods now catch {{InterruptedException}}, wrap it in a {{HiveException}}, and rethrow the new {{HiveException}}.
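> A minimal sketch of that wrapping, assuming a simplified method body (the future-style {{jobHandle.get(...)}} call and the timeout are illustrative, not the actual code in {{RemoteSparkJobStatus}}):
> {code:java}
> try {
>   // Hypothetical blocking call that can be interrupted while waiting on the remote job.
>   return jobHandle.get(timeoutSeconds, TimeUnit.SECONDS);
> } catch (InterruptedException e) {
>   // The original InterruptedException becomes the cause of a new HiveException.
>   throw new HiveException(e);
> }
> {code}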
> The new {{HiveException}} is then caught in {{RemoteSparkJobMonitor#startMonitor}}, which checks for exceptions matching the condition:
> {code:java}
> if (e instanceof InterruptedException ||
>     (e instanceof HiveException && e.getCause() instanceof InterruptedException))
> {code}
> If this condition is met (and here it is), the exception is wrapped in yet another {{HiveException}} and rethrown. The final exception is therefore a {{HiveException}} that wraps a {{HiveException}} that wraps an {{InterruptedException}}.
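> For illustration, the final exception seen downstream has this shape (sketch only):
> {code:java}
> // HiveException -> cause: HiveException -> cause: InterruptedException
> Throwable finalException =
>     new HiveException(new HiveException(new InterruptedException()));
> {code}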
> The doubly nested {{HiveException}} breaks the logic in {{SparkTask#setSparkException}}, so {{killJob}} is never triggered.
> As a result, interrupted Hive queries do not kill their corresponding Spark jobs.
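> A hedged sketch of the failure mode, reusing {{finalException}} from the sketch above (the one-level check stands in for the actual logic in {{SparkTask#setSparkException}}, and the cause-chain walk is one possible fix, not necessarily the committed patch):
> {code:java}
> // A one-level cause check misses the interrupt: the direct cause is a HiveException.
> boolean oneLevel = finalException.getCause() instanceof InterruptedException; // false
>
> // Walking the entire cause chain finds the InterruptedException at any depth.
> boolean interrupted = false;
> for (Throwable t = finalException; t != null; t = t.getCause()) {
>   if (t instanceof InterruptedException) {
>     interrupted = true; // killJob could then be triggered
>     break;
>   }
> }
> {code}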



--
This message was sent by Atlassian Jira
(v8.3.4#803005)