Posted to dev@phoenix.apache.org by "Nick Dimiduk (JIRA)" <ji...@apache.org> on 2016/05/06 19:21:12 UTC

[jira] [Commented] (PHOENIX-2508) Phoenix Connections Stopped Working

    [ https://issues.apache.org/jira/browse/PHOENIX-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15274589#comment-15274589 ] 

Nick Dimiduk commented on PHOENIX-2508:
---------------------------------------

Hi [~gcagrici], what workload are you running when this happens? Can you describe your schema at all? Do you have any metrics in place? I'm curious if you're also seeing excessively high RPC call times from the region server. Any chance you can take a jstack of the RS while this is happening?
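For anyone following along, the thread dumps and RPC metrics asked about above can be gathered with something like the sketch below. The host name, PID discovery, and file names are placeholders, not values from this issue; the `16030` info port and the `/jmx` servlet are standard HBase defaults, so adjust for your deployment.

```shell
#!/bin/sh
# Sketch: capture a few thread dumps of the RegionServer JVM while the
# hang is in progress, then pull its RPC metrics over JMX.

# Find the RegionServer PID via jps (assumes a JDK on the RS host).
RS_PID=$(jps | awk '/HRegionServer/ {print $1; exit}')

# Take three dumps ten seconds apart so blocked threads are visible.
for i in 1 2 3; do
  jstack "$RS_PID" > "rs-jstack-${i}.txt"
  sleep 10
done

# RPC-layer metrics (queue times, call times) from the RS info server;
# "rs-host" is a placeholder for the region server's hostname.
curl -s "http://rs-host:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=IPC"
```

Comparing successive dumps for handler threads parked in `getRowLockInternal` would confirm whether they are all queued behind the same SYSTEM.CATALOG row lock.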

> Phoenix Connections Stopped Working
> -----------------------------------
>
>                 Key: PHOENIX-2508
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2508
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.6.0
>         Environment: 1 HBase Master, 2 RS
>            Reporter: Gokhan Cagrici
>            Priority: Blocker
>             Fix For: 4.8.0
>
>
> Existing connections stopped working and no new connections can be established.
> HBASE SHELL:
> hbase(main):004:0> status
> 2 servers, 0 dead, 282.0000 average load
> RS1 LOG:
> 2015-12-10 13:55:35,063 ERROR [B.defaultRpcServer.handler=21,queue=0,port=16020] coprocessor.MetaDataEndpointImpl: createTable failed
> java.io.IOException: Timed out waiting for lock for row: \x00SYSTEM\x00CATALOG
> 	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5013)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1283)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1171)
> 	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
> 	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> 	at java.lang.Thread.run(Thread.java:745)
> 2015-12-10 13:56:25,544 ERROR [B.defaultRpcServer.handler=2,queue=2,port=16020] coprocessor.MetaDataEndpointImpl: createTable failed
> java.io.IOException: Timed out waiting for lock for row: \x00SYSTEM\x00CATALOG
> 	(stack trace identical to the one above)
> RS2 LOG:
> 2015-12-10 13:58:10,668 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 	(same message repeated 22 more times from the same compaction thread, roughly every 2 ms, through 13:58:10,710)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)