Posted to issues@hive.apache.org by "Oleksiy Sayankin (JIRA)" <ji...@apache.org> on 2016/09/21 14:39:20 UTC

[jira] [Commented] (HIVE-10685) Alter table concatenate operator will cause duplicate data

    [ https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15510141#comment-15510141 ] 

Oleksiy Sayankin commented on HIVE-10685:
-----------------------------------------

[prasanth_j] I can see 

a5afaa04538b6ca96b34febad49cc1daef9fe2f4 for HIVE-10685: Alter table concatenate...
and 
f4a68c9677602e24b06e4f2fd01d8b6258b709e6 for Revert "HIVE-10685: Alter table concatenate...

So was the patch for HIVE-10685 applied or not? Our customer is hitting an exception with Hive 1.2:

{code}
Caused by: java.io.EOFException: Read past end of RLE integer from compressed stream Stream for column 1 kind DATA position: 2108 length: 2108 range: 0 offset: 2108 limit: 2108 range 0 = 0 to 2108 uncompressed: 44422 to 44422
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:56)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:302)
        at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$LongTreeReader.next(TreeReaderFactory.java:564)
        at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.next(TreeReaderFactory.java:2004)
        at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1039)
        ... 18 more
{code}
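For what it's worth, whether a fix commit actually survives on a branch can be checked with plain git (`git merge-base --is-ancestor`). The sketch below builds a throwaway repo just to demonstrate the check; against a real Hive checkout you would substitute the two SHAs listed above:

```shell
#!/bin/sh
# Sketch: check whether a fix commit is an ancestor of HEAD (i.e. was applied),
# and whether a matching revert also landed. This demo uses a scratch repo;
# on a real Hive clone you would test the SHAs quoted in the comment instead.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "HIVE-10685: Alter table concatenate fix"
fix_sha=$(git rev-parse HEAD)
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'Revert "HIVE-10685: Alter table concatenate fix"'
# The fix is an ancestor of HEAD, so it was applied at some point...
git merge-base --is-ancestor "$fix_sha" HEAD && echo "fix commit is on this branch"
# ...but a revert referencing the same JIRA landed afterwards.
git log --oneline | grep -c "HIVE-10685"   # prints 2: the fix plus its revert
```

If both commands succeed and the count is 2, the branch carries the fix and the revert, i.e. the fix is effectively absent, which would explain still seeing the exception.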

> Alter table concatenate operator will cause duplicate data
> ----------------------------------------------------------
>
>                 Key: HIVE-10685
>                 URL: https://issues.apache.org/jira/browse/HIVE-10685
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
>            Reporter: guoliming
>            Assignee: guoliming
>            Priority: Critical
>             Fix For: 1.2.1
>
>         Attachments: HIVE-10685.patch, HIVE-10685.patch
>
>
> "Orders" table has 1500000000 rows and stored as ORC. 
> {noformat}
> hive> select count(*) from orders;
> OK
> 1500000000
> Time taken: 37.692 seconds, Fetched: 1 row(s)
> {noformat}
> The table contains 14 files; each file is about 2.1 ~ 3.2 GB.
> After executing the command: ALTER TABLE orders CONCATENATE;
> the table has 1530115000 rows.
> My Hive version is 1.1.0.
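
Taking the reporter's numbers at face value, the CONCATENATE pass added about 30 million rows, roughly 2% of the table:

```shell
# Row counts quoted in the report above (Hive 1.1.0)
before=1500000000
after=1530115000
echo "$((after - before)) duplicated rows"      # 30115000
echo "$((100 * (after - before) / before))%"    # ~2% of the table
```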



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)