Posted to dev@phoenix.apache.org by "Nithin (JIRA)" <ji...@apache.org> on 2016/08/22 23:19:20 UTC

[jira] [Updated] (PHOENIX-3196) Array Index Out Of Bounds Exception

     [ https://issues.apache.org/jira/browse/PHOENIX-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nithin updated PHOENIX-3196:
----------------------------
    Description: 
Data set size: a table with 156 million rows and 200 columns.

This issue appears to have been fixed as of Phoenix 3.0, but it is still occurring.

Phoenix throws the following exception -

Error: org.apache.hadoop.hbase.DoNotRetryIOException: EPOEVENT: 18
	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7764)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 18
	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java:565)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:860)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451)
	... 10 more


To reproduce -
1) Create a table.
2) Create an index whose column list names the same column more than once; Phoenix rejects it with an error stating that the column name is used multiple times (see the sketch after these steps).
3) Correct the DDL and run the index creation again.
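
A hypothetical DDL sequence illustrating these steps (the report does not include the actual schema; the table name EPOEVENT is taken from the error message above, and the column and index names are made up):

CREATE TABLE EPOEVENT (ID BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR, COL2 VARCHAR);

-- Faulty DDL: COL1 is listed twice, so Phoenix rejects the statement
-- with an error that the column name is used multiple times
CREATE INDEX IDX_EPOEVENT ON EPOEVENT (COL1, COL1);

-- Corrected DDL; after the rejected attempt above, re-running index
-- creation is followed by the ArrayIndexOutOfBoundsException shown above
CREATE INDEX IDX_EPOEVENT ON EPOEVENT (COL1, COL2);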

It is not certain that running the faulty index-creation DDL is the root cause of this exception, but the error first appeared after performing the steps above.
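
If the rejected DDL left partial index metadata behind, it may be visible in the SYSTEM.CATALOG table. A hypothetical check (the index name is illustrative, as above):

SELECT TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'IDX_EPOEVENT';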


> Array Index Out Of Bounds Exception
> -----------------------------------
>
>                 Key: PHOENIX-3196
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3196
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Amazon EMR - 4.7.2
>            Reporter: Nithin
>            Priority: Critical
>             Fix For: 4.8.1
>


