Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2013/08/21 23:36:52 UTC

[jira] [Commented] (HBASE-9292) Syncer fails but we won't go down

    [ https://issues.apache.org/jira/browse/HBASE-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746872#comment-13746872 ] 

stack commented on HBASE-9292:
------------------------------

The client sees the condition like this:

{code}
hbase(main):003:0> scan 'TestTable'
ROW                                                  COLUMN+CELL

ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=7, exceptions:
Wed Aug 21 14:14:43 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, java.net.ConnectException: Connection refused
Wed Aug 21 14:14:43 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: a2434.halxg.cloudera.com/10.20.212.42:60020
Wed Aug 21 14:14:44 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: a2434.halxg.cloudera.com/10.20.212.42:60020
Wed Aug 21 14:14:45 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: a2434.halxg.cloudera.com/10.20.212.42:60020
Wed Aug 21 14:14:55 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, java.net.ConnectException: Connection refused
Wed Aug 21 14:15:05 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, java.net.ConnectException: Connection refused
Wed Aug 21 14:15:15 PDT 2013, org.apache.hadoop.hbase.client.RpcRetryingCaller@7d05e560, java.net.ConnectException: Connection refused

{code}
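
The attempts=7 and the pause between attempts above are driven by client-side configuration (hbase.client.retries.number and hbase.client.pause). A minimal Java sketch against the 0.96-era client API showing where those knobs live; the table name is the one from the shell session above and the values are illustrative only:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Retry count and pause between attempts; example values, not recommendations.
    conf.setInt("hbase.client.retries.number", 7);
    conf.setLong("hbase.client.pause", 1000);

    HTable table = new HTable(conf, "TestTable");
    try {
      // Same scan the shell issues; it ends in RetriesExhaustedException while
      // the region server is unreachable / on the failed-servers list.
      ResultScanner scanner = table.getScanner(new Scan());
      try {
        for (Result r : scanner) {
          System.out.println(r);
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}
{code}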

In the master log I see this:

{code}
2013-08-20 16:51:56,706 ERROR [RpcServer.handler=26,port=60000] master.HMaster: Region server a2434.halxg.cloudera.com,60020,1377031955847 reported a fatal error:
ABORTING region server a2434.halxg.cloudera.com,60020,1377031955847: Failed log close in log roller
Cause:
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: #1377042716482
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.cleanupCurrentWriter(FSHLog.java:681)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:519)
        at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:96)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716482 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
...
{code}
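
That FailedLogCloseException escaping the roll is what the master records above as the abort: the log roller treats a failed close as fatal and asks the region server to go down. Schematically (an illustrative sketch against the 0.95/0.96 API, not the actual LogRoller source):

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.Server;
import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
import org.apache.hadoop.hbase.regionserver.wal.HLog;

// Simplified stand-in for the roll-and-abort path, not the real LogRoller.
class LogRollerSketch implements Runnable {
  private final Server server; // owning region server, used here only to abort
  private final HLog wal;      // the write-ahead log being rolled

  LogRollerSketch(Server server, HLog wal) {
    this.server = server;
    this.wal = wal;
  }

  @Override
  public void run() {
    try {
      // Roll: open a new WAL file, then clean up (close) the old writer.
      wal.rollWriter(true);
    } catch (FailedLogCloseException e) {
      // Closing the old writer failed (here because HDFS would not allocate a
      // block), so the roller reports a fatal error and the server aborts.
      server.abort("Failed log close in log roller", e);
    } catch (IOException e) {
      server.abort("Log rolling failed", e);
    }
  }
}
{code}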
                
> Syncer fails but we won't go down
> ---------------------------------
>
>                 Key: HBASE-9292
>                 URL: https://issues.apache.org/jira/browse/HBASE-9292
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.95.3
>         Environment: hadoop-2.1.0-beta and tip of 0.95 branch
>            Reporter: stack
>            Priority: Critical
>             Fix For: 0.96.0
>
>
> Running some simple loading tests, I ran into the following on hadoop-2.1.0-beta.
> {code}
> 2013-08-20 16:51:56,310 DEBUG [regionserver60020.logRoller] regionserver.LogRoller: HLog roll requested
> 2013-08-20 16:51:56,314 DEBUG [regionserver60020.logRoller] wal.FSHLog: cleanupCurrentWriter  waiting for transactions to get synced  total 655761 synced till here 655750
> 2013-08-20 16:51:56,360 INFO  [regionserver60020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042714402 with entries=985, filesize=122.5 M; new WAL /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311
> 2013-08-20 16:51:56,378 WARN  [Thread-4788] hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at $Proxy13.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at $Proxy13.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>         at $Proxy14.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
> ...
> {code}
> Thereafter the server is up but useless and can't go down because it just keeps doing this:
> {code}
> 2013-08-20 16:51:56,380 FATAL [RpcServer.handler=3,port=60020] wal.FSHLog: Could not sync. Requesting roll of hlog
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at $Proxy13.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at $Proxy13.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>         at $Proxy14.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
> ...
> {code}
> It goes on like this forever.
> Here is a bit more log:
> {code}
> 2013-08-21 04:30:07,932 ERROR [regionserver60020.logSyncer] wal.FSHLog: Error while syncing, requesting close of hlog
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716482 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at $Proxy13.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at $Proxy13.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>         at $Proxy14.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
> ...
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at $Proxy13.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at $Proxy13.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>         at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>         at $Proxy14.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
> 2013-08-21 04:30:07,932 FATAL [regionserver60020.logSyncer] wal.FSHLog: Could not sync. Requesting roll of hlog
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716482 could only be replicated to 0 nodes instead of minReplication (=1).  There are 5 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
> {code}
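> The loop above, reduced to its shape: the syncer hits the HDFS block-allocation error, logs FATAL and requests a roll, the roll never succeeds, and nothing ever forces the process down. An illustrative Java sketch of that shape (class and method names are stand-ins, not the actual FSHLog/LogSyncer internals):
> {code}
> import java.io.IOException;
>
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
>
> // Stand-in sketch of the observed behaviour, not the real FSHLog code.
> class SyncLoopSketch {
>   private static final Log LOG = LogFactory.getLog(SyncLoopSketch.class);
>   private volatile boolean stopped = false;
>
>   void loop() {
>     while (!stopped) {
>       try {
>         syncWal();            // keeps failing: the WAL file cannot get an HDFS block
>       } catch (IOException e) {
>         LOG.fatal("Could not sync. Requesting roll of hlog", e);
>         requestLogRoll();     // the requested roll never succeeds either, and
>                               // nothing here escalates to taking the server down
>       }
>     }
>   }
>
>   // Hypothetical stand-ins for the real sync and roll-request calls.
>   void syncWal() throws IOException {
>     throw new IOException("could only be replicated to 0 nodes");
>   }
>
>   void requestLogRoll() {
>     // ask the log roller to roll the WAL
>   }
> }
> {code}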
> We broke something in here (hadoop-2.1.0-beta going bad all of a sudden is interesting too).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira