Posted to hdfs-dev@hadoop.apache.org by "suja s (JIRA)" <ji...@apache.org> on 2012/05/18 13:32:06 UTC

[jira] [Created] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

suja s created HDFS-3443:
----------------------------

             Summary: Unable to catch up edits during standby to active switch due to NPE
                 Key: HDFS-3443
                 URL: https://issues.apache.org/jira/browse/HDFS-3443
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: auto-failover
            Reporter: suja s


Start the NN.
Let the NN standby services be started.
Before the editLogTailer is initialised, start ZKFC and allow the active services startup to proceed further.


At this point editLogTailer has not yet been initialised, so editLogTailer.catchupDuringFailover() throws an NPE in startActiveServices():

{noformat}
void startActiveServices() throws IOException {
    LOG.info("Starting services required for active state");
    writeLock();
    try {
      FSEditLog editLog = dir.fsImage.getEditLog();

      if (!editLog.isOpenForWrite()) {
        // During startup, we're already open for write during initialization.
        editLog.initJournalsForWrite();
        // May need to recover
        editLog.recoverUnclosedStreams();

        LOG.info("Catching up to latest edits from old active before " +
            "taking over writer role in edits logs.");
        editLogTailer.catchupDuringFailover();
        ...
{noformat}

{noformat}
2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from XX.XX.XX.55:58003: output error
2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive from XX.XX.XX.55:58004: error: java.lang.NullPointerException
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020 caught an exception
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
	at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
	at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
	at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
	at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
{noformat}
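
To make the ordering problem concrete, here is a minimal standalone Java sketch of the race (illustrative only; the class and member names are hypothetical stand-ins, not Hadoop code). The field standing in for editLogTailer is only assigned by the standby startup path, so an active transition that arrives before that path has run dereferences null, exactly as in the stack trace above:

{noformat}
// Illustrative only: a standalone simulation of the race described above.
// FailoverRaceDemo and its members are hypothetical, not part of the Hadoop code base.
public class FailoverRaceDemo {

    // Stands in for FSNamesystem.editLogTailer; assigned during standby startup.
    private volatile Runnable editLogTailer;

    // Stands in for startStandbyServices(): initialises the tailer.
    void startStandbyServices() {
        editLogTailer = () -> System.out.println("catching up edits from old active");
    }

    // Stands in for startActiveServices(): uses the tailer unconditionally.
    void startActiveServices() {
        // Throws NullPointerException if standby initialisation has not run yet.
        editLogTailer.run();
    }

    public static void main(String[] args) {
        FailoverRaceDemo ns = new FailoverRaceDemo();

        // ZKFC requests transitionToActive before standby startup has finished:
        try {
            ns.startActiveServices();
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the reported failure: " + e);
        }

        // With the expected ordering the same call succeeds:
        ns.startStandbyServices();
        ns.startActiveServices();
    }
}
{noformat}

Whether the fix should delay the transition until standby startup completes or guard the catchup call in startActiveServices() is left open here.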
