Posted to user@flink.apache.org by 圣眼之翼 <24...@qq.com> on 2019/09/24 12:21:19 UTC

Re: HBaseTableSource for SQL query errors

While debugging through the source code, I found the following exception thrown in the MiniCluster.executeJobBlocking method:

akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#997865675]] after [10000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".

https://issues.apache.org/jira/browse/FLINK-8485
This looks like it may be the previously reported bug above. How can we solve it?
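A sketch of one possible workaround (an assumption, not a confirmed fix, since the ZooKeeper errors further down suggest the real root cause is connectivity): raise the RPC ask timeout for the local environment so that slow job submission does not trip the default 10000 ms limit. akka.ask.timeout is a standard Flink option.

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;

    // Sketch only: raise the ask timeout above the default "10 s" so that a slow
    // createInputSplits (e.g. HBase client retries) does not time out the dispatcher.
    Configuration flinkConf = new Configuration();
    flinkConf.setString("akka.ask.timeout", "60 s");
    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(flinkConf);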

 

------------------ Original Message ------------------
From: "圣眼之翼" <2463889@qq.com>;
Sent: Tuesday, September 24, 2019, 4:04 PM
To: "user" <user@flink.apache.org>;

Subject: HBaseTableSource for SQL query errors

 

I am getting errors when running a SQL query through the HBaseTableSource class. The same HBase access works without errors in a standalone demo outside Flink.
My Flink version is 1.8.1, using the Flink Table & SQL API.

The Flink code is as follows:
    // environment configuration
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

    String currentTableName = "table";
    // obtain a Configuration object holding the connection parameters
    Configuration conf = HBaseConfiguration.create();
    ...
    conf.set("hbase.zookeeper.quorum", quorum);
    HBaseTableSource hSrc = new HBaseTableSource(conf, "table");
    hSrc.addColumn("base", "rowkey", String.class);
    tEnv.registerTableSource(currentTableName, hSrc);

    // "table" is a reserved keyword in Flink SQL, so it must be quoted with backticks
    Table res = tEnv.sqlQuery("select * from `table`");
    DataSet<Row> result = tEnv.toDataSet(res, Row.class);
    result.print();  // DataSet#print() already triggers execution; a separate env.execute() afterwards would fail
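Since the trace below shows the client trying quorum=localhost:2181 instead of the configured quorum, one quick sanity check (a sketch only, reusing the same conf object as above) is to open a plain HBase connection with the exact same Configuration before registering the table source:

    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Sanity-check sketch: if this fails with the same ConnectionLoss error,
    // the Configuration (or the hbase-site.xml on the classpath) is wrong
    // independently of Flink.
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
        Admin admin = connection.getAdmin();
        System.out.println("HBase reachable; tables: "
                + java.util.Arrays.toString(admin.listTableNames()));
    }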

The error is as follows:
<2019-09-24 15:40:36,436>[ WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect - org.apache.zookeeper.ClientCnxn
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
ERROR RecoverableZooKeeper ZooKeeper getData failed after 4 attempts
ERROR ZooKeeperWatcher hconnection-0x39ee0d3e0x0, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:354)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:625)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:486)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:167)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:606)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:587)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:560)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1227)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1194)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
    at org.apache.hadoop.hbase.client.MetaScanner.listTableRegionLocations(MetaScanner.java:343)
    at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:141)
    at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:117)
    at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:205)
    at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:44)
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:253)
    at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:853)
    at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:232)
    at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:100)
    at org.apache.flink.runtime.jobmaster.JobMaster.createExecutionGraph(JobMaster.java:1198)
    at org.apache.flink.runtime.jobmaster.JobMaster.createAndRestoreExecutionGraph(JobMaster.java:1178)
    at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:287)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:83)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:37)
    at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
    at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:76)
    at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:351)
    at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
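One detail in the trace worth noting: ZooKeeperWatcher reports quorum=localhost:2181, i.e. the HBase client building the input splits is not seeing the quorum set in the code. A hedged guess (an assumption, not established by the trace alone) is that HBaseConfiguration.create() falls back to its defaults because no hbase-site.xml is visible on the classpath where createInputSplits runs. A sketch of loading the site file explicitly (the path is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    // Hypothetical path: point this at the cluster's real hbase-site.xml so the
    // quorum no longer falls back to the default localhost:2181.
    conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));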

Thanks!