Posted to user@phoenix.apache.org by Konstantinos Kougios <ko...@googlemail.com> on 2015/10/01 00:15:40 UTC

Re: unexpected throwable? probably due to query

Hi,

hbase-1.1.2
phoenix-4.5.1-HBase-1.1

(hadoop 2.7.1)
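
Btw, the trace goes through SpillManager and SpillableGroupByCache, so it
looks like the GROUP BY aggregation cache is spilling to disk on the region
servers and then failing while reading the spilled entries back. As a
possible workaround I might try turning the spillable cache off, or giving
it more memory, via hbase-site.xml on the region servers. A minimal sketch,
assuming the phoenix.groupby.* server-side properties apply to 4.5.1 (the
values are only illustrative, and I haven't tested this):

<!-- hbase-site.xml on each region server; needs a restart to take effect -->
<property>
  <!-- keep the GROUP BY cache purely in memory, avoiding the spill path -->
  <name>phoenix.groupby.spillable</name>
  <value>false</value>
</property>
<property>
  <!-- or keep spilling enabled but raise the in-memory cache size (bytes) -->
  <name>phoenix.groupby.maxCacheSize</name>
  <value>524288000</value>
</property>

If spilling is disabled the query should fail fast when the cache outgrows
memory instead of reaching the spill code path, which would at least confirm
where the problem is.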

Cheers

On 30/09/15 21:55, Samarth Jain wrote:
> Hi Konstantinos,
>
> Can you tell us what versions of Phoenix and HBase you are using?
>
> - Samarth
>
>
>
> On Wed, Sep 30, 2015 at 1:46 PM, anil gupta <anilgupta84@gmail.com> wrote:
>
>     As per the stack trace, that looks like a bug to me.
>
>     On Wed, Sep 30, 2015 at 7:27 AM, Konstantinos Kougios
>     <kostas.kougios@googlemail.com> wrote:
>
>         Seems I am having all sorts of trouble with my query:
>
>         select count(*),word from words group by word limit 10;
>
>         I am getting this in the logs of the region servers. Any ideas?
>
>         2015-09-30 15:12:40,836 ERROR [IndexRpcServer.handler=6,queue=0,port=16020] ipc.RpcServer: Unexpected throwable object
>         java.lang.ArrayIndexOutOfBoundsException
>                 at org.apache.hadoop.hbase.util.Bytes.putBytes(Bytes.java:299)
>                 at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1102)
>                 at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:650)
>                 at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:578)
>                 at org.apache.phoenix.util.KeyValueUtil.newKeyValue(KeyValueUtil.java:63)
>                 at org.apache.phoenix.cache.aggcache.SpillManager.getAggregators(SpillManager.java:204)
>                 at org.apache.phoenix.cache.aggcache.SpillManager.toCacheEntry(SpillManager.java:243)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$EntryIterator.next(SpillableGroupByCache.java:285)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$EntryIterator.next(SpillableGroupByCache.java:261)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2.next(SpillableGroupByCache.java:364)
>                 at org.apache.phoenix.coprocessor.BaseRegionScanner.next(BaseRegionScanner.java:40)
>                 at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:60)
>                 at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>                 at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
>                 at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>                 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>                 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>                 at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>                 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>                 at java.lang.Thread.run(Thread.java:745)
>         2015-09-30 15:13:38,744 WARN [B.defaultRpcServer.handler=15,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1443621466063,"responsesize":9,"method":"Scan","processingtimems":952547,"client":"192.168.0.11:55994","queuetimems":558,"class":"HRegionServer"}
>         2015-09-30 15:13:39,129 ERROR [IndexRpcServer.handler=8,queue=0,port=16020] ipc.RpcServer: Unexpected throwable object
>         java.lang.ArrayIndexOutOfBoundsException
>                 at org.apache.hadoop.hbase.util.Bytes.putBytes(Bytes.java:299)
>                 at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1102)
>                 at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:650)
>                 at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:578)
>                 at org.apache.phoenix.util.KeyValueUtil.newKeyValue(KeyValueUtil.java:63)
>                 at org.apache.phoenix.cache.aggcache.SpillManager.getAggregators(SpillManager.java:204)
>                 at org.apache.phoenix.cache.aggcache.SpillManager.toCacheEntry(SpillManager.java:243)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$EntryIterator.next(SpillableGroupByCache.java:285)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$EntryIterator.next(SpillableGroupByCache.java:261)
>                 at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2.next(SpillableGroupByCache.java:364)
>                 at org.apache.phoenix.coprocessor.BaseRegionScanner.next(BaseRegionScanner.java:40)
>                 at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:60)
>                 at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>                 at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
>                 at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>                 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>                 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>                 at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>                 at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>                 at java.lang.Thread.run(Thread.java:745)
>
>
>
>
>     -- 
>     Thanks & Regards,
>     Anil Gupta
>
>