Posted to user@hbase.apache.org by João Alves <jo...@5dlab.com> on 2015/04/23 17:17:33 UTC

Index out of bounds exception when reading row

Hi all,

I have a cluster on the HDP 2.1 stack running HBase 0.98.0.2. I have one HBase table with at least one row that is impossible to get using either the Java API or the HBase shell. I was unable to find any examples of this particular situation online; maybe you guys can help me. The error output is the following:


ERROR: java.io.IOException
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2046)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
	at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException
	at java.nio.Buffer.checkBounds(Buffer.java:559)
	at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:143)
	at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:489)
	at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:540)
	at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.seekToKeyInBlock(BufferedDataBlockEncoder.java:336)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1134)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:501)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:515)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:238)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:153)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
	at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1847)
	at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3716)
	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1890)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1876)
	at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1853)
	at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4738)
	at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4712)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2847)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28857)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
	... 5 more
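
For reference, this is a minimal sketch of the kind of read that fails for us; the table name "events" and the row key are placeholders here, not our real values, and the shell equivalent, get 'events', 'bad-row-key', dies the same way:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetSingleRow {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "events" and "bad-row-key" stand in for the real table and row key.
        HTable table = new HTable(conf, "events");
        try {
            Get get = new Get(Bytes.toBytes("bad-row-key"));
            get.addFamily(Bytes.toBytes("d")); // the 'd' family from the schema below
            Result result = table.get(get);    // fails with the IOException above
            System.out.println(result);
        } finally {
            table.close();
        }
    }
}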


The description of the table is:

{NAME => 'd', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'SNAPPY', VERSIONS => '1', TTL => '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
{NAME => 'm', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
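
If it helps with diagnosing, I can also try dumping one of the affected store files directly with the bundled HFile tool, along these lines (the path is a placeholder; the real one would come from the region directory of the affected table under the HBase root dir):

hbase org.apache.hadoop.hbase.io.hfile.HFile -v -p \
  -f /apps/hbase/data/data/default/events/<region>/d/<storefile>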

Thanks for the help,
João


Re: Index out of bounds exception when reading row

Posted by João Alves <jo...@5dlab.com>.
Hey,

Thanks all for the help!

Best,
João

> On 23 Apr 2015, at 23:54, Dejan Menges <de...@gmail.com> wrote:
> 
> Hi,
> 
> Yes, that's the one.
> 
> FYI, we are using Hortonworks, so it's even part of the patch set for 2.1.
> 
> Cheers,
> Dejan


Re: Index out of bounds exception when reading row

Posted by Dejan Menges <de...@gmail.com>.
Hi,

Yes, that's the one.

FYI, we are using Hortonworks, so it's even part of the patch set for 2.1.

Cheers,
Dejan

On Thu, Apr 23, 2015, 22:25 Enis Söztutar <en...@apache.org> wrote:

> In case this is HBASE-11234, HDP-2.2 releases contain the fix.
>
> Enis

Re: Index out of bounds exception when reading row

Posted by Enis Söztutar <en...@apache.org>.
In case this is HBASE-11234, HDP-2.2 releases contain the fix.
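
A quick way to check which build you are on (assuming the hbase script is on the PATH of a cluster node):

  hbase version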

Enis

On Thu, Apr 23, 2015 at 12:06 PM, Ted Yu <yu...@gmail.com> wrote:

> I think Dejan was referring to HBASE-11234
>
> Cheers

Re: Index out of bounds exception when reading row

Posted by Ted Yu <yu...@gmail.com>.
I think Dejan was referring to HBASE-11234

Cheers

On Thu, Apr 23, 2015 at 8:28 AM, Dejan Menges <de...@gmail.com>
wrote:

> Hi,
>
> This is a known bug; there's a fix already. We had it as well.
>
> Cheers,
> Dejan

Re: Index out of bounds exception when reading row

Posted by Dejan Menges <de...@gmail.com>.
Hi,

This is a known bug; there's a fix already. We had it as well.

Cheers,
Dejan

On Thu, Apr 23, 2015 at 5:19 PM João Alves <jo...@5dlab.com> wrote:

> Hi all,
>
> I have a cluster on the HDP 2.1 stack running HBase 0.98.0.2. I have one
> HBase table with at least one row that is impossible to get using either
> the Java API or the HBase shell.