Posted to dev@hbase.apache.org by Govind <go...@gmail.com> on 2016/05/02 12:04:36 UTC

Could not seekToPreviousRow

Hi all,

I'm getting an exception while performing a reverse scan on an HBase table.
It was working fine previously, but now there is a problem seeking to the
previous row. Any suggestions would be highly appreciated. The error log
follows:

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Mon May 02 10:59:29 CEST 2016, RpcRetryingCaller{globalStartTime=1462179569123, pause=100, retries=35},
java.io.IOException: java.io.IOException: Could not seekToPreviousRow StoreFileScanner[HFileScanner for reader
reader=file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd,
compression=none, cacheConf=blockCache=LruBlockCache{blockCount=149348, currentSize=9919772624,
freeSize=2866589744, maxSize=12786362368, heapSize=9919772624, minSize=12147044352, minFactor=0.95,
multiSize=6073522176, multiFactor=0.5, singleSize=3036761088, singleFactor=0.25}, cacheDataOnRead=true,
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false,
cacheDataCompressed=false, prefetchOnOpen=false,
firstKey=Danmark2010-01-26T21:02:50Z/outlinks:.dk/1459765153334/Put,
lastKey=Motorveje i Danmark2010-08-24T14:03:07Z/outlinks:\xC3\x98ver\xC3\xB8d/1459766037971/Put,
avgKeyLen=70, avgValueLen=20, entries=49195292, length=4896832843,
cur=Hj\xC3\xA6lp:Sandkassen2010-11-02T21:40:44Z/outlinks:Adriaterhav/1459771842796/Put/vlen=20/seqid=0]
to key Hj\xC3\xA6lp:Sandkassen2010-11-02T21:34:14Z/outlinks:\xC4\x8Crnomelj/1459771842779/Put/vlen=20/seqid=0
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:457)
    at org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:136)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: On-disk size without header provided is 196736, but block header contains 65582.
Block offset: -1, data starts with:
DATABLK*\x00\x01\x00.\x00\x01\x00\x1A\x00\x00\x00\x00\x8D\xA08\xE2\x01\x00\x00@ \x00\x00\x01\x00
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
    ... 13 more
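
For reference, the scan that hits this is just a plain reversed scan set up through the client API, along the lines of the sketch below. This is only a minimal sketch, not the actual job: the class name is made up, the table name and column family are taken from the store file path in the trace, and printing the row keys is just illustrative.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReverseScanExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("dawikitable"))) {
      Scan scan = new Scan();
      scan.setReversed(true);                      // walk rows in descending order
      scan.addFamily(Bytes.toBytes("outlinks"));   // column family from the store file path
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toStringBinary(result.getRow()));
        }
      }
    }
  }
}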

Regards,
Govind

Re: Could not seekToPreviousRow

Posted by Govind <go...@gmail.com>.
I'm using hbase-1.1.2 and yes, file 3eac358ffb9d43018221fbddf9274ffd produces
the same error every time. I tested the same code on another table and it
worked fine. What could be wrong with this file?
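
Looking at the cause again, "On-disk size without header provided is 196736, but block header contains 65582" reads as the size the reader expected for a block not matching the size recorded in the block's own header, which makes me suspect the store file itself rather than the reverse-scan code. One way to narrow it down might be to read that HFile directly, outside the region server, either with the bundled HFile tool (hbase hfile) or with something along the lines of the sketch below. It uses the internal reader API as I understand it in 1.1.x and the path from the trace, so treat it as an illustration rather than exact code.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileReadCheck {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    Path path = new Path("file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/"
        + "c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd");
    FileSystem fs = path.getFileSystem(conf);
    // Open the store file with the same low-level reader the region server uses.
    HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf), conf);
    try {
      reader.loadFileInfo();
      // Plain forward scan over every cell; a damaged block should surface
      // as an IOException near the offset where the sizes stop matching.
      HFileScanner scanner = reader.getScanner(false, false);
      long cells = 0;
      if (scanner.seekTo()) {
        do {
          cells++;
        } while (scanner.next());
      }
      System.out.println("Read " + cells + " cells without error");
    } finally {
      reader.close();
    }
  }
}

If that read fails around the same block, it would at least confirm the damage is in the file on disk rather than in the reverse-scan path.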

On Mon, May 2, 2016 at 3:42 PM, Ted Yu <yu...@gmail.com> wrote:

> Which release of HBase are you using?
>
> Does file 3eac358ffb9d43018221fbddf9274ffd always produce this error during
> a reverse scan?
>
> Thanks
>

Re: Could not seekToPreviousRow

Posted by Ted Yu <yu...@gmail.com>.
Which release of HBase are you using?

Does file 3eac358ffb9d43018221fbddf9274ffd always produce this error during
a reverse scan?

Thanks
