Posted to common-user@hadoop.apache.org by 聪聪 <17...@qq.com> on 2014/11/28 10:34:25 UTC

Re: after QJM failover, HBase cannot write

Part of the DataNode log is as follows:


2014-11-28 16:51:56,420 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: l-hbase1.dba.dev.cn0/10.86.36.217:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-11-28 16:52:12,421 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: l-hbase1.dba.dev.cn0/10.86.36.217:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-11-28 16:52:27,422 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.net.ConnectException: Call From l-hbase3.dba.dev.cn0.qunar.com/10.86.36.219 to l-hbase1.dba.dev.cn0:8020 failed on connection exception: java.net.ConnectException: Connection timed out; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1413)
        at org.apache.hadoop.ipc.Client.call(Client.java:1362)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:178)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:566)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:664)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:834)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1461)
        at org.apache.hadoop.ipc.Client.call(Client.java:1380)
        ... 14 more
2014-11-28 16:52:43,424 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: l-hbase1.dba.dev.cn0/10.86.36.217:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-11-28 16:52:59,424 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: l-hbase1.dba.dev.cn0/10.86.36.217:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-11-28 16:53:15,425 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: l-hbase1.dba.dev.cn0/10.86.36.217:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)






------------------ Original Message ------------------
From: "mail list" <lo...@gmail.com>
Sent: Friday, November 28, 2014, 5:31 PM
To: "user" <us...@hadoop.apache.org>

Subject: Re: after QJM failover, HBase cannot write



Hi,

Please attach your log from when the problem happened.

On Nov 28, 2014, at 14:32, 聪聪 <17...@qq.com> wrote:

Hi there,
I have run into a problem that has me stuck.


The Hadoop version I use is hadoop-2.3.0-cdh5.1.0, with NameNode HA using the Quorum Journal Manager (QJM) feature. The dfs.ha.fencing.methods option is as follows:
<property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
               shell(q_hadoop_fence.sh $target_host $target_port)
        </value>
</property>



or


<property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
               shell(/bin/true)
        </value>
</property>
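
For reference, the fencing methods above are tried in order: if sshfence fails (for example because the crashed machine is unreachable over SSH), the shell method runs next, and shell(/bin/true) always succeeds, so automatic failover is never blocked by fencing. Client-side failover is a separate concern and depends on the HA client settings in hdfs-site.xml. Below is a minimal sketch of those settings, with an assumed nameservice name (mycluster) and placeholder hosts (nn1-host, nn2-host) rather than actual values; HBase must likewise point hbase.rootdir at the nameservice URI (e.g. hdfs://mycluster/hbase), not at a single NameNode host:

<!-- Assumed nameservice name; substitute the cluster's actual nameservice -->
<property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
</property>
<property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
</property>
<!-- Placeholder hosts; the real active/standby NameNode addresses go here -->
<property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>nn1-host:8020</value>
</property>
<property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>nn2-host:8020</value>
</property>
<property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

With these settings a client addresses HDFS as hdfs://mycluster, and ConfiguredFailoverProxyProvider transparently retries the other NameNode after a failover.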



I used iptables to simulate a crash of the active NameNode machine (roughly as sketched below). After automatic failover completed, HDFS could write normally, for example ./bin/hdfs dfs -put a.txt /tmp, but HBase still could not write.
After a very long time HBase could write again, but I could not measure how long it took.
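
The exact rules used were not posted; this is a minimal sketch of iptables rules (run on the active NameNode host) that could simulate such a crash, assuming the default NameNode RPC port (8020) and ZooKeeper client port (2181):

# Assumption: drop inbound traffic to the NameNode RPC port so DataNodes
# and clients can no longer reach the active NameNode.
iptables -A INPUT -p tcp --dport 8020 -j DROP
# Assumption: also cut the outbound ZooKeeper session so the failover
# controller sees the active NameNode as dead and triggers failover.
iptables -A OUTPUT -p tcp --dport 2181 -j DROP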
I want to ask:
1. After HDFS completes failover, why can HBase not write?
2. After HDFS completes failover, how long until HBase can write again?
Looking forward to your responses!