Posted to user@hbase.apache.org by zheng wang <18...@qq.com> on 2020/07/04 03:32:06 UTC

Re: Could not iterate StoreFileScanner - during compaction

Hi,
"cur=10259783_1010157000000008851/hb:B/1490097148981/Put/vlen=16591695"
"Invalid onDisksize=-969694035: expected to be at least 33 and at most 2147483647, or -1"


I guess there is a very big cell causing the block size to exceed Integer.MAX_VALUE, leading to an overflow error.
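
For what it's worth, the logged value matches a 32-bit truncation exactly: -969694035 + 2^32 = 3325273261 (about 3.1 GB). A minimal sketch of the mechanics in plain Java (not HBase source; the 3325273261 size is an assumption chosen to reproduce the logged number):

    public class OnDiskSizeOverflow {
        public static void main(String[] args) {
            // Hypothetical block size just over 3 GB, i.e. > Integer.MAX_VALUE (2147483647)
            long assumedOnDiskSize = 3325273261L;
            // Narrowing to int keeps only the low 32 bits, yielding a negative value
            int truncated = (int) assumedOnDiskSize;
            System.out.println(truncated); // prints -969694035, the value in the log
        }
    }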





------------------ Original Message ------------------
From: "Mohamed Meeran" <meeran.gladiatorz@gmail.com>
Sent: Friday, July 3, 2020, 10:26 PM
To: "user" <user@hbase.apache.org>

Subject: Could not iterate StoreFileScanner - during compaction



Hi,

We are using HBase-2.1.9 (Hadoop-3.1.3) in our setup. In the logs, we see that major compaction failed for some of the regions with the following errors.


Caused by: java.io.IOException: Could not iterate StoreFileScanner[HFileScanner for reader reader=hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081, compression=none, cacheConf=blockCache=LruBlockCache{blockCount=332, currentSize=485.88 MB, freeSize=333.32 MB, maxSize=819.20 MB, heapSize=485.88 MB, minSize=778.24 MB, minFactor=0.95, multiSize=389.12 MB, multiFactor=0.5, singleSize=194.56 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, firstKey=Optional[10259783_1010157000000008129/hb:B/1490097103780/Put/seqid=0], lastKey=Optional[10260211_1009658000000470017/hb:H/1490097295354/Put/seqid=0], avgKeyLen=43, avgValueLen=213357, entries=10134, length=2163318554, cur=10259783_1010157000000008851/hb:B/1490097148981/Put/vlen=16591695/seqid=0]
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:217)
	at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
	at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6593)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6757)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6527)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3158)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3407)
	... 5 more
Caused by: java.io.IOException: Invalid onDisksize=-969694035: expected to be at least 33 and at most 2147483647, or -1
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.checkAndGetSizeAsInt(HFileBlock.java:1673)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1746)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1610)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1496)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readNextDataBlock(HFileReaderImpl.java:931)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.isNextBlock(HFileReaderImpl.java:1064)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.positionForNextBlock(HFileReaderImpl.java:1058)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1076)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
	... 13 more

We analysed a file using the hfile tool. Attaching the output for printblocks and printblockheaders.
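
(For reference, the inspection was along these lines; the flags are standard HFilePrettyPrinter options, and the path is the reader path from the stack trace above:

    hbase hfile --printblocks --printblockheaders \
        -f hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081
)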


Any help to fix this would be greatly appreciated.
-- 
Thanks in advance,
Meeran

Re: Re: Could not iterate StoreFileScanner - during compaction

Posted by Meeran <me...@gmail.com>.
Hi,



We did not face this issue with our previous version, HBase-1.4.x (Hadoop-2.7.3). We recently upgraded our cluster to HBase-2.1.9 (Hadoop-3.1.3) and enabled the erasure coding policy XOR-2-1-1024k for testing purposes.



We faced the issue mentioned in the following JIRA when one of the datanodes became unreachable:



https://issues.apache.org/jira/browse/HDFS-14175



We applied the patch, and that fixed it.



I guess we started facing this issue after that.

Also, the hbck2 filesystem report looks fine.
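
(That report came from the HBCK2 filesystem command; a typical invocation looks like the following, where the jar name/version is illustrative and depends on the hbase-operator-tools build in use:

    hbase hbck -j ./hbase-hbck2-1.0.0.jar filesystem Test
)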



Regards,

Meeran.








---- On Sat, 04 Jul 2020 09:02:06 +0530 zheng wang <18...@qq.com> wrote ----



Hi,
"cur=10259783_1010157000000008851/hb:B/1490097148981/Put/vlen=16591695"
"Invalid onDisksize=-969694035: expected to be at least 33 and at most 2147483647, or -1"

I guess there is a very big cell causing the block size to exceed Integer.MAX_VALUE, leading to an overflow error.