Posted to issues@hbase.apache.org by "Shashank Thillainathan (Jira)" <ji...@apache.org> on 2021/04/29 10:57:00 UTC

[jira] [Updated] (HBASE-25827) Per Cell TTL tags get duplicated with increments causing tags length overflow

     [ https://issues.apache.org/jira/browse/HBASE-25827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashank Thillainathan updated HBASE-25827:
-------------------------------------------
    Description: 
Incrementing a cell with a per-cell TTL and then flushing corrupts the HFile.

Reproducing the issue:

Increment the same row and column with a per-cell TTL about 3,000 times and then flush; the resulting HFile is corrupt and the table is left unusable (a sketch of these steps is included below).
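
The following is a minimal sketch of that reproduction against the HBase Java client API; the table name, row, column family, qualifier, and TTL value are placeholders and are not taken from the original report.
{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PerCellTtlIncrementRepro {
  public static void main(String[] args) throws Exception {
    TableName tableName = TableName.valueOf("ttl_increment_test"); // placeholder table
    byte[] row = Bytes.toBytes("r1");
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(tableName);
         Admin admin = conn.getAdmin()) {
      // Increment the same cell ~3,000 times, attaching a per-cell TTL to each mutation.
      for (int i = 0; i < 3000; i++) {
        Increment inc = new Increment(row);
        inc.addColumn(family, qualifier, 1L);
        inc.setTTL(60L * 60L * 1000L); // per-cell TTL of one hour, in milliseconds
        table.increment(inc);
      }
      // Flush so the accumulated cell is written to an HFile; with the bug, the
      // duplicated TTL tags overflow the tags length and the file becomes unreadable.
      admin.flush(tableName);
    }
  }
}
{code}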

 
Cause:
Reading back the HFile shows that a duplicate TTL tag is appended to the cell on every increment.

Although a similar case was already addressed in HBASE-18030, the corruption still occurs even with that patch.
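
A back-of-the-envelope illustration of the overflow, assuming each serialized TTL tag occupies 11 bytes (2-byte tag length + 1-byte tag type + 8-byte long value) and that the reader interprets the 2-byte per-cell tags length as a signed short; the tag count is an illustrative number chosen to match the "about 3 thousand" increments and the error value in the stack trace below:
{code:java}
public class TagsLenOverflowDemo {
  public static void main(String[] args) {
    int tagsPerCell = 3116;              // illustrative count of duplicated TTL tags
    int tagsBytes = tagsPerCell * 11;    // 34276 bytes of accumulated tags
    short storedLen = (short) tagsBytes; // what a 2-byte length field read as a short holds
    System.out.println(storedLen);       // prints -31260, matching the stack trace below
  }
}
{code}
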
{code:java}
java.lang.IllegalStateException: Invalid currTagsLen -31260. Block offset: 250962, block length: 76568, position: 42207 (without header). path=hdfs://hdfs/file/path
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:388)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:327)
        at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1432)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2192)
        at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:577)
        at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:619)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{code}
 
  
  
  

  was:
Incrementing with per cell TTL and flushing corrupts the HFile.

Reproducing the issue:

Incrementing a row and a column with per cell TTL for about 3 thousand times and flushing corrupts the HFile leaving the table unusable.

 
Cause:
On reading the HFile, it is found that duplicate TTL tags get appended for each cell.


Though this case has already been addressed here at [HBASE-18030|https://issues.apache.org/jira/browse/HBASE-18030], corruption still occurs even with this patch.
{code:java}
// java.lang.IllegalStateException: Invalid currTagsLen -31260. Block offset: 250962, block length: 76568, position: 42207 (without header). path=hdfs://hdfs/file/path
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:388)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:327)
        at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1432)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2192)
        at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:577)
        at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:619)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{code}
 
 
 
 


> Per Cell TTL tags get duplicated with increments causing tags length overflow
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-25827
>                 URL: https://issues.apache.org/jira/browse/HBASE-25827
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 2.1.9, 2.2.6
>            Reporter: Shashank Thillainathan
>            Priority: Critical
>
> Incrementing a cell with a per-cell TTL and then flushing corrupts the HFile.
> Reproducing the issue:
> Increment the same row and column with a per-cell TTL about 3,000 times and then flush; the resulting HFile is corrupt and the table is left unusable.
>  
> Cause:
> Reading back the HFile shows that a duplicate TTL tag is appended to the cell on every increment.
> Although a similar case was already addressed in HBASE-18030, the corruption still occurs even with that patch.
> {code:java}
> java.lang.IllegalStateException: Invalid currTagsLen -31260. Block offset: 250962, block length: 76568, position: 42207 (without header). path=hdfs://hdfs/file/path
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkTagsLen(HFileReaderImpl.java:642)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:630)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
>         at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:388)
>         at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:327)
>         at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>         at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>         at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1432)
>         at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2192)
>         at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:577)
>         at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:619)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  
>   
>   
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)