Posted to dev@mesos.apache.org by "Bill Zhao (JIRA)" <ji...@apache.org> on 2011/07/20 02:49:57 UTC

[jira] [Resolved] (MESOS-27) Trouble starting hadoop datanode with the bundled version of Hadoop (hadoop-0.20.2)

     [ https://issues.apache.org/jira/browse/MESOS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bill Zhao resolved MESOS-27.
----------------------------

    Resolution: Fixed

This was caused by running two different versions of Hadoop in the same local directory.  The "bin/hadoop namenode -format" command only removes the dfs/name directory, not the dfs/data directory, so the datanode storage was left over from the other Hadoop version.  After "rm -rf dfs/data", everything works.
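
For reference, a minimal sketch of the full reset sequence (assuming a single-node setup and the /app/hadoop/tmp storage directory shown in the error below; adjust paths to your installation):

    # stop any running Hadoop daemons first
    bin/stop-all.sh
    # reformat HDFS; this reinitializes dfs/name but leaves the old dfs/data behind
    bin/hadoop namenode -format
    # remove the stale datanode storage so it is recreated with the expected layoutVersion
    rm -rf /app/hadoop/tmp/dfs/data
    # bring everything back up
    bin/start-all.sh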

> Trouble starting hadoop datanode with the bundled version of Hadoop (hadoop-0.20.2)
> ----------------------------------------------------------------------------------
>
>                 Key: MESOS-27
>                 URL: https://issues.apache.org/jira/browse/MESOS-27
>             Project: Mesos
>          Issue Type: Bug
>          Components: master
>         Environment: Mac OS X, Ubuntu Linux
>            Reporter: Bill Zhao
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Not sure if this is related to https://issues.apache.org/jira/browse/HADOOP-2345.
> Summary:
> I was trying to get Hadoop to run on top of Mesos.  However, I keep getting an error message when I try to start the datanode.  The problem is related to the value of layoutVersion in /current/VERSION.  However, when I ran Hadoop version 0.20.203.0, I did not observe the same problem.
> The full error looked like this:
> 11/07/14 15:58:05 ERROR datanode.DataNode: org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /app/hadoop/tmp/dfs/data. Reported: -31. Expecting = -18.
>         at org.apache.hadoop.hdfs.server.common.Storage.getFields(Storage.java:647)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.getFields(DataStorage.java:178)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:227)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:216)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:228)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
> My Workaround:
> 1. Stop all Hadoop processes with "stop-all.sh".
> 2. Edit the layoutVersion parameter in ${hadoop.tmp.dir}/dfs/data/current/VERSION so that it matches the value in ${hadoop.tmp.dir}/dfs/name/current/VERSION (see the example after this list).
> 3. Start the namenode: bin/hadoop namenode
> 4. Start the datanode: bin/hadoop datanode
> 5. Start the jobtracker: bin/hadoop jobtracker
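
A rough sketch of step 2 above, assuming the /app/hadoop/tmp storage directory from the error message and GNU sed (on Mac OS X, use "sed -i ''" instead of "sed -i"):

    # compare the layout versions recorded for the namenode and datanode storage
    grep layoutVersion /app/hadoop/tmp/dfs/name/current/VERSION
    grep layoutVersion /app/hadoop/tmp/dfs/data/current/VERSION
    # make the datanode value match the namenode value (-18 here, the value the
    # bundled hadoop-0.20.2 expects according to the error above)
    sed -i 's/^layoutVersion=.*/layoutVersion=-18/' /app/hadoop/tmp/dfs/data/current/VERSION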

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira