Posted to issues@phoenix.apache.org by "Lars Hofhansl (JIRA)" <ji...@apache.org> on 2018/09/28 23:06:00 UTC

[jira] [Commented] (PHOENIX-4933) DELETE FROM throws NPE when a local index is present

    [ https://issues.apache.org/jira/browse/PHOENIX-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16632627#comment-16632627 ] 

Lars Hofhansl commented on PHOENIX-4933:
----------------------------------------

As soon as I drop the local index, the delete succeeds.

{{create table test (pk integer primary key, v1 float, v2 float, v3 integer) SALT_BUCKETS=8, DISABLE_WAL=true;}}
{{delete from test where v1 < 0.99;}}
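
For reference, a complete sequence that should reproduce it would look roughly like the following; the local index DDL isn't included in this comment, so the index name, indexed column, and sample row below are only placeholders:

{{create table test (pk integer primary key, v1 float, v2 float, v3 integer) SALT_BUCKETS=8, DISABLE_WAL=true;}}
{{upsert into test values (1, 0.5, 0.5, 1); -- placeholder row so the delete scan has something to project}}
{{create local index test_v1_idx on test (v1); -- placeholder local index; its presence is what triggers the NPE described below}}
{{delete from test where v1 < 0.99; -- fails with the NPE while the local index exists}}
{{drop index test_v1_idx on test;}}
{{delete from test where v1 < 0.99; -- succeeds once the local index is gone}}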

> DELETE FROM throws NPE when a local index is present
> ----------------------------------------------------
>
>                 Key: PHOENIX-4933
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4933
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Priority: Major
>
> Just ran into this. When a local index is present, DELETE FROM <table> throws the following NPE:
> Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TEST,,1537573236513.ef4b34358717193907bddb3a5bec3b26.: null
>  at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>  at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
>  at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:195)
>  at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:557)
>  at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:239)
>  at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:287)
>  at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3130)
>  at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36359)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2369)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  Caused by: java.lang.NullPointerException
>  at org.apache.phoenix.execute.TupleProjector.projectResults(TupleProjector.java:283)
>  at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:185)
>  ... 10 more (state=08000,code=101)
> It fails here:
> {{long maxTS = tuple.getValue(0).getTimestamp();}}, because {{tuple.getValue(0)}} returns null.


