Posted to issues@hbase.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2012/10/02 08:07:14 UTC

[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks

    [ https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467516#comment-13467516 ] 

Hudson commented on HBASE-6871:
-------------------------------

Integrated in HBase-0.94-security #58 (See [https://builds.apache.org/job/HBase-0.94-security/58/])
    HBASE-6871 HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks; ADDENDUM2 REAPPLICATION (Revision 1391879)
HBASE-6871 HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks; ADDENDUM2 OVERCOMMIT (Revision 1391878)
HBASE-6871 HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks; ADDENDUM2 (Revision 1391877)
HBASE-6871 HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks; ADDENDUM (Revision 1391869)
HBASE-6871 HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks (Revision 1391742)

     Result = SUCCESS
stack : 
Files : 
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* /hbase/branches/0.94/pom.xml
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* /hbase/branches/0.94/pom.xml
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

                
> HFileBlockIndex Write Error in HFile V2 due to incorrect split into intermediate index blocks
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-6871
>                 URL: https://issues.apache.org/jira/browse/HBASE-6871
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.1
>         Environment: redhat 5u4
>            Reporter: Fenng Wang
>            Assignee: Mikhail Bautin
>            Priority: Critical
>             Fix For: 0.92.3, 0.94.3, 0.96.0
>
>         Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.094.addendum2.txt, 6871.094.addendum.txt, 6871-0.94.txt, 6871-0.94v2.txt, 6871-hfile-index-0.92.txt, 6871-hfile-index-0.92-v2.txt, 6871.txt, 6871v2.txt, 787179746cc347ce9bb36f1989d17419.hfile, 960a026ca370464f84903ea58114bc75.hfile, d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the exception message is below:
> 2012-09-18 06:32:26,227 ERROR org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: Compaction failed regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 188.0k, 188.0k, 185.8k, 223.3k), priority=9, time=45826250816757428java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for reader reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895, compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn, lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0] to key http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)        
>         at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)        
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)        
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)        
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
>         at org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)        
>         at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997)        
>         at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
>         at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)        
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)        
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, prevBlockOffset=-1, dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120, fileOffset=218942
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.validateBlockType(HFileReaderV2.java:378)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:331)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:213)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:455)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)        
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)        
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
> After some debugging, I found that when the hfile is closing, if the rootChunk is empty, the single curInlineChunk is promoted to the root chunk. But if flushing the last block has made curInlineChunk exceed the maximum index block size, the root chunk (promoted from curInlineChunk) is split into intermediate index blocks and the index level is set to 2. So when BlockIndexReader reads the root index, it expects the next-level index block to be a leaf index (index level = 2), but the on-disk block is an intermediate block, and the error occurs.
> Adding a check of curInlineChunk's size when rootChunk is empty in shouldWriteBlock(boolean closing) fixes this bug; see the sketch below.
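> A minimal sketch of that check (an illustration of the idea, not the committed patch; getNonRootSize() and maxChunkSize are assumed members of HFileBlockIndex.BlockIndexWriter, while shouldWriteBlock, curInlineChunk and rootChunk are named above):
>
>     // Hypothetical sketch inside HFileBlockIndex.BlockIndexWriter, based on
>     // the description above rather than the committed patch.
>     public boolean shouldWriteBlock(boolean closing) {
>       if (curInlineChunk.getNumEntries() == 0) {
>         return false; // nothing buffered, nothing to flush
>       }
>       if (closing && rootChunk.getNumEntries() == 0) {
>         // On close, a lone curInlineChunk is normally promoted to the root
>         // chunk. Only allow that promotion when the chunk still fits into a
>         // single index block; if it is oversized, flush it as a leaf block
>         // instead, so the on-disk layout matches the two-level index that
>         // the subsequent split produces.
>         return curInlineChunk.getNonRootSize() >= maxChunkSize;
>       }
>       // Otherwise: always flush remaining entries on close, and flush
>       // mid-write once the inline chunk reaches the maximum block size.
>       return closing || curInlineChunk.getNonRootSize() >= maxChunkSize;
>     }
>
> With a check like this in place, an oversized final chunk is written as a leaf block and indexed from a real root chunk, so the index depth the reader expects matches what is on disk.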

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira