Posted to dev@hive.apache.org by "Jihong Liu (JIRA)" <ji...@apache.org> on 2014/12/04 02:28:12 UTC

[jira] [Updated] (HIVE-8966) Delta files created by hive hcatalog streaming cannot be compacted

     [ https://issues.apache.org/jira/browse/HIVE-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jihong Liu updated HIVE-8966:
-----------------------------
    Fix Version/s: 0.14.1
           Labels: easyfix  (was: )
     Release Note: don't try to do compact on non-bucket files
           Status: Patch Available  (was: Open)

https://issues.apache.org/jira/browse/HIVE-8966

> Delta files created by hive hcatalog streaming cannot be compacted
> ------------------------------------------------------------------
>
>                 Key: HIVE-8966
>                 URL: https://issues.apache.org/jira/browse/HIVE-8966
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog
>    Affects Versions: 0.14.0
>         Environment: hive
>            Reporter: Jihong Liu
>            Assignee: Alan Gates
>            Priority: Critical
>              Labels: easyfix
>             Fix For: 0.14.1
>
>
> Hive hcatalog streaming also creates a file named bucket_n_flush_length in each delta directory, where "n" is the bucket number. But compactor.CompactorMR treats this file as one more file to compact. Since this file of course cannot be compacted, compactor.CompactorMR does not continue with the compaction.
> In a test, after the bucket_n_flush_length file was removed, the "alter table partition compact" command finished successfully. If that file is not deleted, nothing is compacted.
> This is probably a high-severity bug. Both 0.13 and 0.14 have this issue.
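
The fix suggested in the release note ("don't try to do compact on non-bucket files") can be sketched as a filename filter applied before the compactor picks up delta files. The sketch below is illustrative only, not the actual patch: the class name DeltaFileFilter, the method compactableFiles, and the regex are assumptions, chosen to distinguish real bucket files (bucket_00000) from the streaming side files (bucket_00000_flush_length).

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class DeltaFileFilter {
    // Hypothetical pattern: matches real bucket data files such as
    // "bucket_00000". Side files written by the streaming API, e.g.
    // "bucket_00000_flush_length", have a trailing suffix and do NOT match,
    // so they are excluded from the compaction input.
    private static final Pattern BUCKET_FILE = Pattern.compile("bucket_\\d+");

    // Keep only files the compactor can actually merge.
    static List<String> compactableFiles(List<String> deltaDirFiles) {
        return deltaDirFiles.stream()
                .filter(name -> BUCKET_FILE.matcher(name).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
                "bucket_00000",
                "bucket_00000_flush_length",
                "bucket_00001");
        // The flush-length side file is filtered out; only the two
        // bucket data files remain as compaction input.
        System.out.println(compactableFiles(files));
    }
}
```

With a filter like this in place, the compactor would skip the bucket_n_flush_length files instead of aborting when it encounters them.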



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)