Posted to dev@phoenix.apache.org by "James Taylor (JIRA)" <ji...@apache.org> on 2017/10/11 22:47:00 UTC
[jira] [Updated] (PHOENIX-4041) CoprocessorHConnectionTableFactory should not open a new HConnection when shutting down
[ https://issues.apache.org/jira/browse/PHOENIX-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
James Taylor updated PHOENIX-4041:
----------------------------------
Fix Version/s: (was: 4.11.1)
> CoprocessorHConnectionTableFactory should not open a new HConnection when shutting down
> ---------------------------------------------------------------------------------------
>
> Key: PHOENIX-4041
> URL: https://issues.apache.org/jira/browse/PHOENIX-4041
> Project: Phoenix
> Issue Type: Bug
> Reporter: Samarth Jain
> Assignee: Samarth Jain
> Labels: secondary_index
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4041.patch
>
>
> It is wasteful for the CoprocessorHConnectionTableFactory to potentially establish a brand-new HConnection while it is shutting down, which is exactly what the getConnection(conf) call below can do if no connection exists yet.
> {code}
> @Override
> public void shutdown() {
>     try {
>         getConnection(conf).close();
>     } catch (IOException e) {
>         LOG.error("Exception caught while trying to close the HConnection used by CoprocessorHConnectionTableFactory");
>     }
> }
> {code}
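> The fix this implies can be sketched as a guarded shutdown: keep a reference to the connection once it is opened, and have shutdown() close only an existing connection instead of calling a getter that may open one. The class and field names below are hypothetical, not Phoenix's actual implementation; a plain Closeable stands in for the HBase HConnection:
> {code}
> import java.io.Closeable;
> import java.io.IOException;
>
> // Hypothetical sketch: a factory whose shutdown() never opens a new connection.
> class GuardedTableFactory {
>     private Closeable connection; // null until first use
>     int opened = 0;               // counter, for illustration only
>
>     // Normal access path: lazily opens the connection on first use.
>     synchronized Closeable getConnection() {
>         if (connection == null) {
>             opened++;
>             connection = new Closeable() {
>                 @Override
>                 public void close() { /* release resources */ }
>             };
>         }
>         return connection;
>     }
>
>     // Shutdown path: closes the connection only if one was ever opened,
>     // so a dying server never pays to create (and then close) a new one.
>     synchronized void shutdown() {
>         if (connection != null) {
>             try {
>                 connection.close();
>             } catch (IOException e) {
>                 // log and continue; we are shutting down anyway
>             }
>         }
>     }
> }
> {code}
> With this guard, calling shutdown() on a factory that never handed out a connection is a no-op rather than a ZooKeeper round trip.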
> In fact, in one of the test runs the region server aborted when the getConnection() call in shutdown() ran into an OutOfMemoryError:
> {code}
> org.apache.hadoop.hbase.regionserver.HRegionServer(1950): ABORTING region server asf921.gq1.ygridcore.net,43200,1500441052416: Caught throwable while processing event M_RS_CLOSE_REGION
> java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
>     at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:165)
>     at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>     at java.lang.Thread.start0(Native Method)
>     at java.lang.Thread.start(Thread.java:714)
>     at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:406)
>     at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
>     at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
>     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
>     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:135)
>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:171)
>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:145)
>     at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1872)
>     at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82)
>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:926)
>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:711)
>     at org.apache.hadoop.hbase.client.CoprocessorHConnection.<init>(CoprocessorHConnection.java:113)
>     at org.apache.phoenix.hbase.index.write.IndexWriterUtils$CoprocessorHConnectionTableFactory.getConnection(IndexWriterUtils.java:124)
>     at org.apache.phoenix.hbase.index.write.IndexWriterUtils$CoprocessorHConnectionTableFactory.shutdown(IndexWriterUtils.java:137)
>     at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.stop(ParallelWriterIndexCommitter.java:228)
>     at org.apache.phoenix.hbase.index.write.IndexWriter.stop(IndexWriter.java:225)
>     at org.apache.phoenix.hbase.index.Indexer.stop(Indexer.java:222)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.shutdown(CoprocessorHost.java:755)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment.shutdown(RegionCoprocessorHost.java:148)
>     at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.shutdown(CoprocessorHost.java:318)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$5.postEnvCall(RegionCoprocessorHost.java:518)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1746)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postClose(RegionCoprocessorHost.java:511)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1280)
>     at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1141)
>     at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:151)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)