Posted to issues@hive.apache.org by "Ayush Saxena (Jira)" <ji...@apache.org> on 2023/02/14 05:12:00 UTC
[jira] [Resolved] (HIVE-27071) Select query with LIMIT clause can fail if there are marker files like "_SUCCESS" and "_MANIFEST"
[ https://issues.apache.org/jira/browse/HIVE-27071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ayush Saxena resolved HIVE-27071.
---------------------------------
Fix Version/s: 4.0.0
Resolution: Fixed
> Select query with LIMIT clause can fail if there are marker files like "_SUCCESS" and "_MANIFEST"
> -------------------------------------------------------------------------------------------------
>
> Key: HIVE-27071
> URL: https://issues.apache.org/jira/browse/HIVE-27071
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 4.0.0
> Reporter: Sai Hemanth Gantasala
> Assignee: Ayush Saxena
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Spark clients create marker files like "_SUCCESS" and "_MANIFEST" under the table/partition path at the end of a write operation, for example 'hdfs://name-node-host/table/partition/_SUCCESS'.
> Whenever Hive tries to read such a table with a LIMIT clause, it can fail with the following error:
> {code:java}
> ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1676095298574_0017_2_00, diagnostics=[Vertex vertex_1676095298574_0017_2_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: trade initializer failed, vertex=vertex_1676095298574_0017_2_00 [Map 1], org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://name-node-host/table/partition/_MANIFEST
> Input path does not exist: hdfs://name-node-host/table/partition/_SUCCESS at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:300)
> at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:240)
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:328)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:579) {code}
> The Hive execution engine should ignore these marker files when reading the table/partition data.
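The failure mode above comes down to path listing: the marker files all begin with an underscore, and Hadoop's FileInputFormat conventionally treats names starting with '_' or '.' as hidden and skips them. A minimal, self-contained sketch of that naming convention (not the actual Hive patch; the class and method names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MarkerFileFilter {
    // Mirrors the common Hadoop convention: names starting with
    // '_' (e.g. _SUCCESS, _MANIFEST) or '.' are non-data files.
    static boolean isDataFile(String name) {
        return !name.startsWith("_") && !name.startsWith(".");
    }

    public static void main(String[] args) {
        List<String> listing = Arrays.asList(
            "part-00000", "_SUCCESS", "_MANIFEST", ".hidden", "part-00001");
        // Only the part files survive the filter.
        List<String> data = listing.stream()
            .filter(MarkerFileFilter::isDataFile)
            .collect(Collectors.toList());
        System.out.println(data); // [part-00000, part-00001]
    }
}
```

In the real codebase this kind of check would be applied while building input splits, so that _SUCCESS and _MANIFEST never reach the "Input path does not exist" validation seen in the stack trace.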
--
This message was sent by Atlassian Jira
(v8.20.10#820010)