Posted to issues@hbase.apache.org by "vamshi (JIRA)" <ji...@apache.org> on 2011/07/25 11:11:09 UTC

[jira] [Commented] (HBASE-2827) HBase Client doesn't handle master failover

    [ https://issues.apache.org/jira/browse/HBASE-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070383#comment-13070383 ] 

vamshi commented on HBASE-2827:
-------------------------------

Hi Jonathan, maybe this question is irrelevant here, but please let me know whether we can implement distributed hashing in HBase for fast lookup/scanning purposes. I want to implement a scalable data structure, i.e. a DHT, in HBase; how can I proceed? Thank you.

> HBase Client doesn't handle master failover
> -------------------------------------------
>
>                 Key: HBASE-2827
>                 URL: https://issues.apache.org/jira/browse/HBASE-2827
>             Project: HBase
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 0.90.0
>            Reporter: Nicolas Spiegelberg
>            Assignee: Jonathan Gray
>
> A client on our beta tier was stuck in this exception loop when we issued a new HMaster after the old one died:
> Exception while trying to connect hBase
> java.lang.reflect.UndeclaredThrowableException
> at $Proxy1.getClusterStatus(Unknown Source)
> at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:912)
> at org.apache.hadoop.hbase.client.HTable.getCurrentNrHRS(HTable.java:170)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:143)
> ...
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.18.34.212:60000]
> at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:406)
> at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:309)
> at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:856)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:724)
> at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:252)
> ... 20 more
> 12:52:55,863 [pool-4-thread-5182] INFO PersistentUtil:153 - Retry after 1 second...
> Looking at the client code, the HConnectionManager does not watch ZK for NodeDeleted & NodeCreated events on /hbase/master, so it never notices that a new master has come up.
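
The fix implied above is for the client to register a ZooKeeper watch on the master znode so that failover is noticed instead of endlessly retrying a dead address. Below is a minimal, illustrative sketch using the plain ZooKeeper Java API; the znode path /hbase/master, the class name MasterTracker, and the error handling are assumptions for illustration, not the actual HConnectionManager implementation.

  import java.io.IOException;
  import org.apache.zookeeper.KeeperException;
  import org.apache.zookeeper.WatchedEvent;
  import org.apache.zookeeper.Watcher;
  import org.apache.zookeeper.ZooKeeper;

  // Illustrative sketch: watch /hbase/master so the client notices master
  // failover instead of retrying a dead address. Not the real client code.
  public class MasterTracker implements Watcher {

    private static final String MASTER_ZNODE = "/hbase/master"; // assumed default path
    private final ZooKeeper zk;
    private volatile byte[] masterAddress; // serialized address of current master, or null

    public MasterTracker(String quorum, int sessionTimeoutMs)
        throws IOException, KeeperException, InterruptedException {
      this.zk = new ZooKeeper(quorum, sessionTimeoutMs, this);
      readMasterAndRewatch();
    }

    @Override
    public void process(WatchedEvent event) {
      // NodeCreated, NodeDeleted, or NodeDataChanged on /hbase/master all mean
      // the cached master location may be stale; re-read and re-register the watch.
      if (MASTER_ZNODE.equals(event.getPath())) {
        try {
          readMasterAndRewatch();
        } catch (Exception e) {
          // In real code: log and retry with backoff.
        }
      }
    }

    private void readMasterAndRewatch() throws KeeperException, InterruptedException {
      // exists() with a watcher fires on both NodeCreated and NodeDeleted.
      if (zk.exists(MASTER_ZNODE, this) != null) {
        masterAddress = zk.getData(MASTER_ZNODE, this, null);
      } else {
        masterAddress = null; // no master currently registered
      }
    }

    public byte[] getMasterAddress() {
      return masterAddress;
    }
  }

With something like this in place, the connection layer could consult getMasterAddress() (or wait until it becomes non-null) before each master RPC, rather than looping on the stale cached address as in the stack trace above.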

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira