Posted to hdfs-dev@hadoop.apache.org by "Jing Zhao (JIRA)" <ji...@apache.org> on 2014/03/18 19:23:45 UTC

[jira] [Resolved] (HDFS-6113) Rolling upgrade exception

     [ https://issues.apache.org/jira/browse/HDFS-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao resolved HDFS-6113.
-----------------------------

    Resolution: Invalid

Based on Kihwal's and Nicholas's comments, let's close this jira for now. Fengdong, thanks for the testing, and please feel free to open new jiras if you run into other issues.
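
For context, the supported flow from the HDFS rolling upgrade guide looks roughly like the sketch below (the exact steps depend on the HA setup; the commands are the documented ones, not what was run in this report):

{code}
# Prepare a rollback fsimage before touching any daemon
hdfs dfsadmin -rollingUpgrade prepare

# Poll until the rollback image is reported as ready
hdfs dfsadmin -rollingUpgrade query

# For each NN in turn: stop it, switch to the new software,
# then start it with the rolling upgrade flag
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode -rollingUpgrade started

# After all daemons run the new software, finalize
hdfs dfsadmin -rollingUpgrade finalize
{code}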

> Rolling upgrade exception
> -------------------------
>
>                 Key: HDFS-6113
>                 URL: https://issues.apache.org/jira/browse/HDFS-6113
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.4.0
>            Reporter: Fengdong Yu
>
> I have a hadoop-2.3 cluster running in non-secure mode. I then built a trunk instance, also non-secure.
> NN1 - active
> NN2 - standby
> DN1 - datanode 
> DN2 - datanode
> JN1,JN2,JN3 - Journal and ZK
> Then, on NN2:
> {code}
> hadoop-daemon.sh stop namenode
> hadoop-daemon.sh stop zkfc
> {code}
> Then I changed the environment variables on NN2 to point to the new Hadoop (trunk) build.
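> For reference, the environment switch amounted to something like the following sketch (variable names and paths here are illustrative assumptions, not the exact values used):
> {code}
> # Illustrative values only -- shows which variables were repointed at the trunk build
> export HADOOP_HOME=/opt/hadoop-trunk
> export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
> export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
> {code}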
> Then I started the NameNode:
> {code}
> hadoop-daemon.sh start namenode
> {code}
> NN2 throws an exception:
> {code}
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not journal CTime for one more JournalNodes. 1 exceptions thrown:
> 10.100.91.33:8485: Failed on local exception: java.io.EOFException; Host Details : local host is: "10-204-8-136/10.204.8.136"; destination host is: "jn33.com":8485;
>         at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>         at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
>         at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.getJournalCTime(QuorumJournalManager.java:631)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getSharedLogCTime(FSEditLog.java:1383)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:738)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:600)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:360)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:258)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:894)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:653)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:444)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:500)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:656)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:641)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1294)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1360)
> {code}
> The JN throws an exception:
> {code}
> 2014-03-18 12:19:01,960 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8485: readAndProcess threw exception java.io.IOException: Unable to read authentication method from client 10.204.8.136. Count of bytes read: 0
> java.io.IOException: Unable to read authentication method
> 	at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1344)
> 	at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:761)
> 	at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:560)
> 	at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:535)
> 2014-03-18 12:19:01,960 DEBUG org.apache.hadoop.ipc.Server: IPC Server listener on 8485: disconnecting client 10.204.8.136:39063. Number of active connections: 1
> {code}


