Posted to user@phoenix.apache.org by Alex Kamil <al...@gmail.com> on 2016/02/10 19:36:59 UTC
OutOfOrderScannerNextException with phoenix 4.6-HBase-1.0-cdh5.5
I'm getting the exception below in a SELECT DISTINCT query over a tenant-specific
connection with phoenix 4.6-HBase-1.0-cdh5.5.
The exception disappears if I either switch to a non-tenant connection or
remove DISTINCT from the query.
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 2326 number_of_rows: 100 close_scanner: false
next_call_seq: 0 client_handles_partials: true
client_handles_heartbeats: true
I'm using phoenix 4.6 for cloudera cdh5.5.1 community edition:
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5
Below are the test case, error log, and hbase-site.xml settings:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class Test {
    public static void main(String[] args) {
        Connection conn = null;
        String tenant = "SYSTEMTENANT";
        String url = "my.ip";
        System.out.println("trying to initialize tenant-specific connection to hbaseUrl="
                + url + " for tenant=" + tenant);

        Properties connProps = new Properties();
        connProps.setProperty("TenantId", tenant);

        String query = "SELECT DISTINCT ROWKEY,VS FROM TABLE1 ORDER BY VS DESC";

        try {
            conn = DriverManager.getConnection("jdbc:phoenix:" + url, connProps);
            Statement st = conn.createStatement();
            ResultSet resultSet = st.executeQuery(query);
            while (resultSet.next()) {
                String rowKey = resultSet.getString(1);
                String versionSerial = resultSet.getString(2);
                System.out.println("rowkey=" + rowKey + ", versionserial=" + versionSerial);
            }
        } catch (SQLException e) {
            // logger.error(e);
            e.printStackTrace();
        }
    }
}
Stack trace:

org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry
of OutOfOrderScannerNextException: was there a rpc timeout?
	at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
	at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:558)
	at org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
	at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:84)
	at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:111)
	at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
	at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:771)
	at Test.main(Test.java:26)
Caused by: java.util.concurrent.ExecutionException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry
of OutOfOrderScannerNextException: was there a rpc timeout?
	at java.util.concurrent.FutureTask.report(Unknown Source)
	at java.util.concurrent.FutureTask.get(Unknown Source)
	at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:554)
	... 6 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed
after retry of OutOfOrderScannerNextException: was there a rpc
timeout?
	at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
	at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:61)
	at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:107)
	at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:125)
	at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:83)
	at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:62)
	at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:78)
	at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:109)
	at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:100)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 2326 number_of_rows: 100 close_scanner: false
next_call_seq: 0 client_handles_partials: true
client_handles_heartbeats: true
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2177)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
	at java.lang.Thread.run(Thread.java:744)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:328)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:255)
	at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:371)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:345)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
hbase-site.xml settings:

<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value>
</property>
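In case it's relevant to reproducing this: the same keys can also be supplied per-connection through the JDBC Properties object instead of (or in addition to) hbase-site.xml. This is only a sketch — the class name `ConnPropsSketch` is made up, and whether this particular Phoenix/CDH build honors every key when set this way is an assumption worth verifying:

```java
import java.util.Properties;

public class ConnPropsSketch {
    public static void main(String[] args) {
        // Same settings as the hbase-site.xml above, supplied per-connection.
        // (Assumption: the Phoenix client merges these Properties into its
        // client-side HBase Configuration for this build.)
        Properties connProps = new Properties();
        connProps.setProperty("TenantId", "SYSTEMTENANT");
        connProps.setProperty("hbase.rpc.timeout", "60000");
        connProps.setProperty("hbase.client.scanner.timeout.period", "60000");
        connProps.setProperty("phoenix.query.timeoutMs", "60000");
        connProps.setProperty("hbase.client.scanner.caching", "100");

        // These Properties would then be passed to
        // DriverManager.getConnection("jdbc:phoenix:" + url, connProps);
        System.out.println(connProps.getProperty("hbase.rpc.timeout"));
    }
}
```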
Thanks,
Alex