Posted to issues@spark.apache.org by "Michael Allman (JIRA)" <ji...@apache.org> on 2018/09/28 17:09:00 UTC
[jira] [Comment Edited] (SPARK-25561) HiveClient.getPartitionsByFilter throws an exception if Hive retries directSql
[ https://issues.apache.org/jira/browse/SPARK-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16632137#comment-16632137 ]
Michael Allman edited comment on SPARK-25561 at 9/28/18 5:08 PM:
-----------------------------------------------------------------
cc [~cloud_fan] [~ekhliang]
Hi [~karthik.manamcheri]. Thanks for reporting this. I can't take a look right now, but I believe we have test cases that exercise this scenario. If not, it's certainly a hole in our coverage. If we do, it may be that Hive's behavior in this scenario is version-dependent, and we don't have coverage for your version of Hive. What version of Hive are you using?
Thanks.
> HiveClient.getPartitionsByFilter throws an exception if Hive retries directSql
> ------------------------------------------------------------------------------
>
> Key: SPARK-25561
> URL: https://issues.apache.org/jira/browse/SPARK-25561
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.1.0
> Reporter: Karthik Manamcheri
> Priority: Major
>
> In HiveShim.scala, the current behavior is that if hive.metastore.try.direct.sql is enabled, we expect the getPartitionsByFilter call to succeed. If it fails, we'll throw a RuntimeException.
> However, this assumption does not always hold. Hive's direct SQL functionality is best-effort: Hive itself falls back to ORM if direct SQL fails. Spark should handle the resulting exception correctly when Hive falls back to ORM, rather than failing outright.
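
Below is a minimal, self-contained Scala sketch (not Spark's actual HiveShim code) illustrating the control flow under discussion: attempt the pushed-down partition filter, and either surface the failure as a RuntimeException (the current behavior when hive.metastore.try.direct.sql is enabled) or fall back to fetching all partitions for client-side pruning. The names metastoreGetPartitionsByFilter and metastoreGetAllPartitions are hypothetical stand-ins for the underlying metastore calls, which Spark actually invokes through reflection on Hive classes.

import scala.util.{Failure, Success, Try}

object PartitionPruningSketch {
  final case class Partition(values: Map[String, String])

  // Hypothetical stand-in: pushed-down filter call that fails, e.g. with a MetaException.
  def metastoreGetPartitionsByFilter(filter: String): Seq[Partition] =
    sys.error(s"MetaException: direct SQL failed for filter '$filter'")

  // Hypothetical stand-in: fetch all partitions so the caller can prune client-side.
  def metastoreGetAllPartitions(): Seq[Partition] =
    Seq(Partition(Map("ds" -> "2018-09-28")), Partition(Map("ds" -> "2018-09-27")))

  def getPartitionsByFilter(filter: String, tryDirectSql: Boolean): Seq[Partition] =
    Try(metastoreGetPartitionsByFilter(filter)) match {
      case Success(parts) => parts
      case Failure(e) if tryDirectSql =>
        // Behavior under discussion: assume direct SQL "should" have worked and
        // surface the failure as a RuntimeException.
        throw new RuntimeException(
          "Caught Hive MetaException attempting to get partition metadata by filter from Hive", e)
      case Failure(_) =>
        // Fallback path: fetch everything and prune on the client.
        metastoreGetAllPartitions()
    }

  def main(args: Array[String]): Unit = {
    // With direct SQL assumed reliable, the failure is fatal; the report argues this
    // assumption is wrong because Hive itself retries via ORM on direct SQL errors.
    println(Try(getPartitionsByFilter("ds='2018-09-28'", tryDirectSql = true)))
    println(getPartitionsByFilter("ds='2018-09-28'", tryDirectSql = false))
  }
}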