Posted to dev@phoenix.apache.org by "Rajeshbabu Chintaguntla (JIRA)" <ji...@apache.org> on 2016/07/14 18:16:21 UTC

[jira] [Comment Edited] (PHOENIX-3021) Using local index during compaction is producing NPE

    [ https://issues.apache.org/jira/browse/PHOENIX-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377902#comment-15377902 ] 

Rajeshbabu Chintaguntla edited comment on PHOENIX-3021 at 7/14/16 6:16 PM:
---------------------------------------------------------------------------

[~ndimiduk] Yes, it's isolated to 4.8 only, and it became possible after PHOENIX-2628, where we moved the scanning of local index reference files from IndexHalfStoreFileReader to LocalIndexStoreFileScanner.
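
To make the failure mode concrete, here is a minimal, self-contained sketch (hypothetical names, not the actual Phoenix or HBase code) of the reseek pattern that moved in PHOENIX-2628: once a split daughter's compaction rewrites the data, the reference (half) file backing the local index scanner can go away underneath it, and the next block read dereferences null.
{noformat}
import java.io.IOException;

public class LocalIndexReseekSketch {

    /** Stand-in for an HFile-backed scanner over a reference (half) file. */
    interface FileScanner {
        /** Returns null once the underlying file has been compacted away. */
        byte[] readBlockAt(long offset) throws IOException;
    }

    static byte[] reseek(FileScanner scanner, long offset) throws IOException {
        byte[] block = scanner.readBlockAt(offset);
        if (block == null) {
            // Pre-fix behaviour: downstream code dereferences 'block' -> NPE.
            // A defensive alternative: raise a retriable IOException instead,
            // so the client can re-drive the scan against the compacted store.
            throw new IOException("reference file no longer readable; retry the scan");
        }
        return block;
    }

    public static void main(String[] args) {
        FileScanner compactedAway = offset -> null; // simulates the file vanishing mid-scan
        try {
            reseek(compactedAway, 0L);
        } catch (IOException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
{noformat}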


> Using local index during compaction is producing NPE
> ----------------------------------------------------
>
>                 Key: PHOENIX-3021
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3021
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.8.0
>            Reporter: Sergio Peleato
>            Assignee: Sergey Soldatov
>            Priority: Critical
>             Fix For: 4.8.0
>
>         Attachments: PHOENIX-3021-1.patch
>
>
> When a table is going through a split, querying it sometimes produces the following exception:
> {noformat}
> 2016-06-21 06:20:13,327|beaver.machine|INFO|938|139891910878976|MainThread|Done
> 2016-06-21 06:20:13,339|beaver.machine|INFO|938|139891910878976|MainThread|1/2          !outputformat xmlattr
> 2016-06-21 06:20:13,340|beaver.machine|INFO|938|139891910878976|MainThread|2/2          SELECT COUNT(unsig_long_id) AS Result FROM SECONDARY_LARGE_TABLE AS S INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id=L.id GROUP BY unsig_long_id ORDER BY unsig_long_id DESC;
> 2016-06-21 06:20:21,607|beaver.machine|INFO|938|139891910878976|MainThread|<resultset>
> 2016-06-21 06:20:21,608|beaver.machine|INFO|938|139891910878976|MainThread|<result RESULT="10000"/>
> 2016-06-21 06:20:21,608|beaver.machine|INFO|938|139891910878976|MainThread|<result RESULT="10000"/>
> 2016-06-21 06:20:21,608|beaver.machine|INFO|938|139891910878976|MainThread|<result RESULT="10000"/>
> 2016-06-21 06:20:21,608|beaver.machine|INFO|938|139891910878976|MainThread|<result RESULT="10000"/>
> 2016-06-21 06:20:21,609|beaver.machine|INFO|938|139891910878976|MainThread|<result RESULT="10000"/>
> 2016-06-21 06:20:21,609|beaver.machine|INFO|938|139891910878976|MainThread|</resultset>
> 2016-06-21 06:20:21,612|beaver.machine|INFO|938|139891910878976|MainThread|5 rows selected (8.269 seconds)
> ....
> ....
> ....
> 2016-06-21 06:20:22,024|beaver.component.hbase|INFO|938|139891910878976|MainThread|Writing commands to file /grid/0/hadoopqe/artifacts/tmp-689403
> 2016-06-21 06:20:22,024|beaver.component.hbase|INFO|938|139891910878976|MainThread| 'split 'GIGANTIC_TABLE''
> 2016-06-21 06:20:22,025|beaver.component.hbase|INFO|938|139891910878976|MainThread|Done writing commands to file. Will execute them now.
> ....
> ....
> ....
> ...
> 2016-06-21 06:20:44,667|beaver.machine|INFO|938|139891910878976|MainThread|Done
> 2016-06-21 06:20:44,675|beaver.machine|INFO|938|139891910878976|MainThread|1/2          !outputformat xmlattr
> 2016-06-21 06:20:44,676|beaver.machine|INFO|938|139891910878976|MainThread|2/2          SELECT COUNT(unsig_long_id) AS Result FROM SECONDARY_LARGE_TABLE AS S INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id=L.id GROUP BY unsig_long_id ORDER BY unsig_long_id DESC;
> 2016-06-21 06:20:48,973|beaver.machine|INFO|938|139891910878976|MainThread|Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> 2016-06-21 06:20:48,974|beaver.machine|INFO|938|139891910878976|MainThread|java.sql.SQLException: Encountered exception in sub plan [0] execution.
> 2016-06-21 06:20:48,974|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:198)
> 2016-06-21 06:20:48,974|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:143)
> 2016-06-21 06:20:48,974|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
> 2016-06-21 06:20:48,974|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.Commands.execute(Commands.java:822)
> 2016-06-21 06:20:48,975|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.Commands.sql(Commands.java:732)
> 2016-06-21 06:20:48,976|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.dispatch(SqlLine.java:808)
> 2016-06-21 06:20:48,976|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.runCommands(SqlLine.java:1711)
> 2016-06-21 06:20:48,977|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.Commands.run(Commands.java:1285)
> 2016-06-21 06:20:48,977|beaver.machine|INFO|938|139891910878976|MainThread|at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2016-06-21 06:20:48,977|beaver.machine|INFO|938|139891910878976|MainThread|at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2016-06-21 06:20:48,977|beaver.machine|INFO|938|139891910878976|MainThread|at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2016-06-21 06:20:48,977|beaver.machine|INFO|938|139891910878976|MainThread|at java.lang.reflect.Method.invoke(Method.java:498)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.dispatch(SqlLine.java:804)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.initArgs(SqlLine.java:613)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.begin(SqlLine.java:656)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.start(SqlLine.java:398)
> 2016-06-21 06:20:48,978|beaver.machine|INFO|938|139891910878976|MainThread|at sqlline.SqlLine.main(SqlLine.java:292)
> 2016-06-21 06:20:48,979|beaver.machine|INFO|938|139891910878976|MainThread|Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: GIGANTIC_TABLE,\x80\x03\xD0\xAF,1466490035955.090582e969d6b25f6e4f7423b995fd0d.: null
> 2016-06-21 06:20:48,979|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
> 2016-06-21 06:20:48,979|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
> 2016-06-21 06:20:48,984|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:444)
> 2016-06-21 06:20:48,985|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
> 2016-06-21 06:20:48,985|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2402)
> 2016-06-21 06:20:48,985|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> 2016-06-21 06:20:48,985|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
> 2016-06-21 06:20:48,985|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> 2016-06-21 06:20:48,986|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> 2016-06-21 06:20:48,986|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> 2016-06-21 06:20:48,986|beaver.machine|INFO|938|139891910878976|MainThread|at java.lang.Thread.run(Thread.java:745)
> 2016-06-21 06:20:48,986|beaver.machine|INFO|938|139891910878976|MainThread|Caused by: java.lang.NullPointerException
> 2016-06-21 06:20:48,986|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1397)
> 2016-06-21 06:20:48,987|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
> 2016-06-21 06:20:48,987|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
> 2016-06-21 06:20:48,987|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:439)
> 2016-06-21 06:20:48,987|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:713)
> 2016-06-21 06:20:48,988|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1256)
> 2016-06-21 06:20:48,988|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:152)
> 2016-06-21 06:20:48,988|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseekToProperKey(LocalIndexStoreFileScanner.java:234)
> 2016-06-21 06:20:48,988|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseek(LocalIndexStoreFileScanner.java:226)
> 2016-06-21 06:20:48,989|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.reseek(LocalIndexStoreFileScanner.java:94)
> 2016-06-21 06:20:48,989|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> 2016-06-21 06:20:48,989|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
> 2016-06-21 06:20:48,989|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268)
> 2016-06-21 06:20:48,989|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:815)
> 2016-06-21 06:20:48,990|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:792)
> 2016-06-21 06:20:48,990|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:592)
> 2016-06-21 06:20:48,990|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> 2016-06-21 06:20:48,990|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5699)
> 2016-06-21 06:20:48,991|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5850)
> 2016-06-21 06:20:48,991|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5637)
> 2016-06-21 06:20:48,991|beaver.machine|INFO|938|139891910878976|MainThread|at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:414)
> ....
> ....
> ....
> {noformat}
> We are not seeing any relevant exception in the HBase logs.
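> The failing sequence above condenses to the following (schematic; the DDL for the two tables is not shown in this report):
> {noformat}
> -- sqlline: this query succeeds before the split
> SELECT COUNT(unsig_long_id) AS Result
> FROM SECONDARY_LARGE_TABLE AS S
> INNER JOIN GIGANTIC_TABLE AS L ON S.sec_id = L.id
> GROUP BY unsig_long_id ORDER BY unsig_long_id DESC;
>
> # hbase shell: force a split, which leaves the local index data in
> # reference (half) files until compaction rewrites them
> split 'GIGANTIC_TABLE'
>
> -- sqlline: the same query re-run while the split/compaction is in
> -- flight fails with the NullPointerException above
> {noformat}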


