Posted to dev@hive.apache.org by "max_c (Jira)" <ji...@apache.org> on 2019/10/10 07:23:00 UTC

[jira] [Created] (HIVE-22318) java.io.IOException: Two readers for

max_c created HIVE-22318:
----------------------------

             Summary: java.io.IOException: Two readers for
                 Key: HIVE-22318
                 URL: https://issues.apache.org/jira/browse/HIVE-22318
             Project: Hive
          Issue Type: Bug
          Components: Hive, HiveServer2
    Affects Versions: 3.1.0
            Reporter: max_c
         Attachments: hiveserver2 for exception.log

I created an ACID table stored as ORC:
{noformat}
CREATE TABLE `some.TableA`( 
   ....
   )                                                                   
 ROW FORMAT SERDE                                   
   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'      
 STORED AS INPUTFORMAT                              
   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'  
 OUTPUTFORMAT                                       
   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'  
 TBLPROPERTIES (                                    
   'bucketing_version'='2',                         
   'orc.compress'='SNAPPY',                         
   'transactional'='true',                          
   'transactional_properties'='default'){noformat}
After executing a MERGE INTO operation:
{noformat}
MERGE INTO some.TableA AS a USING (SELECT vend_no FROM some.TableB UNION ALL SELECT vend_no FROM some.TableC) AS b ON a.vend_no=b.vend_no WHEN MATCHED THEN DELETE
{noformat}
the problem happens (the exception is also thrown when selecting from TableA):
{noformat}
java.io.IOException: java.io.IOException: Two readers for {originalWriteId: 4, bucket: 536870912(1.0.0), row: 2434, currentWriteId 25}: new [key={originalWriteId: 4, bucket: 536870912(1.0.0), row: 2434, currentWriteId 25}, nextRecord={2, 4, 536870912, 2434, 25, null}, reader=Hive ORC Reader(hdfs://hdpprod/warehouse/tablespace/managed/hive/some.db/tableA/delete_delta_0000015_0000026/bucket_00001, 9223372036854775807)], old [key={originalWriteId: 4, bucket: 536870912(1.0.0), row: 2434, currentWriteId 25}, nextRecord={2, 4, 536870912, 2434, 25, null}, reader=Hive ORC Reader(hdfs://hdpprod/warehouse/tablespace/managed/hive/some.db/tableA/delete_delta_0000015_0000026/bucket_00000{noformat}
Using orc-tools I scanned all the files (bucket_00000, bucket_00001, bucket_00002) under the delete_delta directory and found that all three bucket files contain identical rows. I think the duplicated rows produce the same key (RecordIdentifier) when bucket_00001 is scanned after bucket_00000, which triggers the "Two readers" exception, but I don't know why the rows are the same in all of these bucket files.
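For reference, here is a sketch of how the delete_delta bucket files can be dumped for comparison, using Hive's ORC file dump utility (the path is the illustrative one from the stack trace above; substitute your own warehouse location, and note the flags assume Hive 3.x):
{noformat}
# Dump the row data of each delete_delta bucket file so the
# (originalWriteId, bucket, rowId) keys can be compared across buckets.
for f in bucket_00000 bucket_00001 bucket_00002; do
  echo "=== $f ==="
  hive --orcfiledump -d \
    "hdfs://hdpprod/warehouse/tablespace/managed/hive/some.db/tablea/delete_delta_0000015_0000026/$f"
done
{noformat}
If the same (originalWriteId, bucket, rowId) key appears in more than one bucket file, the sorted delete-delta merger sees two readers positioned on the same key, which matches the exception above.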
--
This message was sent by Atlassian Jira
(v8.3.4#803005)