Posted to user@phoenix.apache.org by Nathan Davis <na...@salesforce.com> on 2016/08/17 14:26:51 UTC

UnknowScanner/ScannerTimeoutException using IndexTool MR

Hi All,
I'm getting the error below (apologies for pasting the full stack trace) when I
run the IndexTool MR job to populate an index I created with ASYNC. I have
used IndexTool successfully before.

2016-08-17 13:30:48,024 INFO  [main] mapreduce.Job: Task Id : attempt_1471372816200_0005_m_000051_0, Status : FAILED
Error: java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 60209ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:159)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:565)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.phoenix.exception.PhoenixIOException: 60209ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)
    at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:110)
    at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
    at org.apache.phoenix.iterate.LookAheadResultIterator.next(LookAheadResultIterator.java:67)
    at org.apache.phoenix.iterate.RoundRobinResultIterator$RoundRobinIterator.next(RoundRobinResultIterator.java:309)
    at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:97)
    at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
    at org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:152)
    ... 11 more
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 60209ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
    ... 18 more
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 149, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2374)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:329)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:262)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 149, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2374)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
    ... 9 more



It seems like I need to increase 'phoenix.query.timeoutMs' and/or
'hbase.rpc.timeout', but I'm not sure how to configure those settings for the
MR job's internal HBase client...
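[Editor's sketch: since IndexTool is a standard Hadoop Tool launched through ToolRunner, settings like these can generally be passed into the job's Configuration as generic -D options on the command line. The jar name, schema/table names, ZooKeeper quorum, output path, and the 600000ms values below are placeholders for illustration, not recommendations; hbase.client.scanner.timeout.period is the HBase 1.x client-side scanner timeout that typically governs ScannerTimeoutException.]

```shell
# Sketch: feed timeout overrides into the MR job's Configuration via
# generic -D options (picked up by ToolRunner/GenericOptionsParser).
# All names and values are illustrative placeholders.
HADOOP_CLASSPATH="$(hbase classpath)" hadoop jar phoenix-<version>-client.jar \
  org.apache.phoenix.mapreduce.index.IndexTool \
  -Dphoenix.query.timeoutMs=600000 \
  -Dhbase.rpc.timeout=600000 \
  -Dhbase.client.scanner.timeout.period=600000 \
  --schema MY_SCHEMA --data-table MY_TABLE --index-table MY_INDEX \
  --output-path /tmp/MY_INDEX_HFILES
```

Alternatively, putting the same properties in an hbase-site.xml that is on the job's classpath should reach both the mappers and the client the job creates.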

Thanks for the help,
-nathan

Re: UnknowScanner/ScannerTimeoutException using IndexTool MR

Posted by James Taylor <ja...@apache.org>.
Hi Nathan,
If the index is not completely built, then that'd definitely be a bug for
it to be put in an active state. Please file a JIRA if that's the case. Is
it possible that the part of the job that failed was retried and ended up
being successful?

Also, any chance you could use 4.8.0?

Thanks,
James

On Wednesday, August 17, 2016, Nathan Davis <na...@salesforce.com>
wrote:

> One additional problem here (perhaps the more important issue): even
> though the MR job failed because of this exception, the underlying index
> table still got set to ACTIVE. That I think is a bug.
> I'm using v4.7 against HBase 1.2.1 in EMR.

Re: UnknowScanner/ScannerTimeoutException using IndexTool MR

Posted by Nathan Davis <na...@salesforce.com>.
One additional problem here (perhaps the more important issue): even though
the MR job failed because of this exception, the underlying index table still
got set to ACTIVE. I think that is a bug.
I'm using v4.7 against HBase 1.2.1 in EMR.
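[Editor's sketch: until the state-handling question is resolved, an index left ACTIVE by a failed build can be taken out of use by hand with standard Phoenix DDL, e.g. through sqlline. The index/table names and ZooKeeper quorum below are placeholders.]

```shell
# Sketch: manually disable an index that a failed build left ACTIVE,
# so queries stop using the incomplete index. Names are placeholders.
cat > /tmp/disable_index.sql <<'EOF'
ALTER INDEX MY_INDEX ON MY_TABLE DISABLE;
EOF
sqlline.py zk-host:2181 /tmp/disable_index.sql
```

After disabling, the build can be retried with IndexTool, or `ALTER INDEX MY_INDEX ON MY_TABLE REBUILD` will kick off a synchronous rebuild.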
