Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2008/05/02 03:15:55 UTC

[jira] Commented: (HADOOP-3337) Name-node fails to start because DatanodeInfo format changed.

    [ https://issues.apache.org/jira/browse/HADOOP-3337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12593687#action_12593687 ] 

Konstantin Shvachko commented on HADOOP-3337:
---------------------------------------------

This patch works on my old file system image. Minor comments, please:
- remove the import of UTF8
- add comments to the two new *FSEditLog() methods explaining what they are for.

> Name-node fails to start because DatanodeInfo format changed.
> -------------------------------------------------------------
>
>                 Key: HADOOP-3337
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3337
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: 3337_20080501.patch
>
>
> HADOOP-3283 introduced a new field, ipcPort, in DatanodeInfo, but the change was not reflected in the reading/writing of file system image files.
> In particular, reading edits generated by the previous version of Hadoop throws the following exception:
> {code}
> 08/05/02 00:02:50 ERROR dfs.NameNode: java.lang.IllegalArgumentException: No enum const class org.apache.hadoop.dfs.DatanodeInfo$AdminStates.0?
> /56.313
> 	at java.lang.Enum.valueOf(Enum.java:192)
> 	at org.apache.hadoop.io.WritableUtils.readEnum(WritableUtils.java:399)
> 	at org.apache.hadoop.dfs.DatanodeInfo.readFields(DatanodeInfo.java:318)
> 	at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
> 	at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:499)
> 	at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:794)
> 	at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:664)
> 	at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:280)
> 	at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:81)
> 	at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:276)
> 	at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:257)
> 	at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:133)
> 	at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:178)
> 	at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:164)
> 	at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:777)
> 	at org.apache.hadoop.dfs.NameNode.main(NameNode.java:786)
> {code}
> and startup fails.
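The failure above is the classic symptom of adding a field to a Writable without versioning its serialization: an old edits stream has no ipcPort bytes, so the deserializer reads the next record's data as the new field and the stream misaligns (here surfacing as a bogus AdminStates enum constant). A minimal sketch of the usual fix is below. The class and constant names (NodeRecord, IPC_PORT_VERSION) are illustrative, not Hadoop's actual API; the point is only the layout-version branch in readFields.

{code}
import java.io.*;

// Hypothetical sketch: a record that gained an ipcPort field must branch on
// the image layout version when deserializing, so edits written by an older
// release still parse without misaligning the stream.
class NodeRecord {
    // Assumed layout version that introduced ipcPort (Hadoop layout
    // versions are negative and decrease over time).
    static final int IPC_PORT_VERSION = -16;

    String host;
    int ipcPort;

    void write(DataOutput out) throws IOException {
        out.writeUTF(host);
        out.writeInt(ipcPort); // new field, always written by the new version
    }

    void readFields(DataInput in, int layoutVersion) throws IOException {
        host = in.readUTF();
        // Older images predate ipcPort; reading it unconditionally would
        // consume bytes belonging to the next field and corrupt the rest
        // of the stream (cf. the garbled enum constant in the trace above).
        if (layoutVersion <= IPC_PORT_VERSION) {
            ipcPort = in.readInt();
        } else {
            ipcPort = -1; // default for records read from old images
        }
    }
}
{code}

With this shape, a new-format round trip reads ipcPort back, while a stream written by the old format (host only) is consumed exactly and the field falls back to its default.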

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.