Posted to dev@phoenix.apache.org by "Maryann Xue (JIRA)" <ji...@apache.org> on 2016/08/31 22:12:20 UTC

[jira] [Resolved] (PHOENIX-3202) Error trying to remove Hash Cache - Followed by CallQueueTooBigException

     [ https://issues.apache.org/jira/browse/PHOENIX-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maryann Xue resolved PHOENIX-3202.
----------------------------------
    Resolution: Not A Problem

Closing this issue, since it was caused by using a Phoenix JDBC connection as a persistent, long-lived connection.
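For context, Phoenix JDBC connections are cheap to create and are meant to be short-lived: open one per unit of work and close it promptly, rather than holding a single connection for the application's lifetime. A minimal sketch of that pattern is below; the ZooKeeper quorum, table name, and columns are placeholders, not taken from this issue.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PhoenixShortLivedConnection {

    // Phoenix JDBC URLs take the form jdbc:phoenix:<zookeeper quorum>.
    static String phoenixUrl(String zkQuorum) {
        return "jdbc:phoenix:" + zkQuorum;
    }

    // Open a fresh connection for this batch of work and let
    // try-with-resources close it immediately afterwards, instead of
    // keeping one persistent connection open across the application.
    static void upsertRow(String zkQuorum, int id, String name) throws SQLException {
        try (Connection conn = DriverManager.getConnection(phoenixUrl(zkQuorum));
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
            conn.commit(); // Phoenix connections default to autoCommit=false
        }
    }
}
```

Because each connection is scoped to one operation, server-side resources such as hash caches are released promptly instead of accumulating until their removal RPCs pile up against a full call queue.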

> Error trying to remove Hash Cache - Followed by CallQueueTooBigException
> ------------------------------------------------------------------------
>
>                 Key: PHOENIX-3202
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3202
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Amazon EMR - 4.7.2
>            Reporter: Nithin
>            Priority: Critical
>              Labels: Phoenix, hbase
>             Fix For: 4.7.0
>
>
> This issue is in continuation to - 
> https://issues.apache.org/jira/browse/PHOENIX-3200
> 14:26:14.653 [pool-2-thread-46] ERROR org.apache.phoenix.cache.ServerCacheClient - Error trying to remove hash cache for region=EPOEVENT,,1471683321681.6d9e058365a24e63e5802ca783c09660., hostname=ip-172-31-8-132.us-west-2.compute.internal,60020,1471683102656, seqNum=1935501
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
> Tue Aug 23 14:16:03 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:03 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, java.io.IOException: Call to ip-172-31-8-132.us-west-2.compute.internal/172.31.8.132:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=949539152, waitTime=60001, operationTimeout=60000 expired.
> Tue Aug 23 14:17:03 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:04 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:06 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:10 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:21 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:31 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:41 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:17:51 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:18:11 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:18:31 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
> Tue Aug 23 14:18:51 UTC 2016, RpcRetryingCaller{globalStartTime=1471961763091, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.CallQueueTooBigException): Call queue is full on /172.31.8.132:60020, too many items queued ?
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.removeServerCache(ServerCachingProtos.java:3378) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.phoenix.cache.ServerCacheClient$2.call(ServerCacheClient.java:343) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.phoenix.cache.ServerCacheClient$2.call(ServerCacheClient.java:322) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1751) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_101]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_101]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_101]
>         at java.lang.Thread.run(Thread.java:745) [?:1.7.0_101]
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: Call queue is full on /172.31.8.132:60020, too many items queued ?
>         at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1235) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:32675) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1624) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) ~[PhoenixMultiCopyDataLoader-0.0.1-SNAPSHOT-jar-with-dependencies.jar:?]
>         ... 10 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)