Posted to common-user@hadoop.apache.org by ahmednagy <ah...@hotmail.com> on 2011/02/07 19:22:54 UTC

Data Nodes do not start

Dear All,
Please help. I have tried to start the data nodes with ./start-all.sh on a
7-node cluster; however, I receive an "incompatible namespaceIDs" error when
I try to put any file on HDFS. I tried the suggestions in the known issues
for changing the VERSION number in HDFS, but it did not work. Any ideas?
Please advise. I am attaching the error from the datanode log file.
Regards


https://issues.apache.org/jira/browse/HDFS-107


2011-02-07 18:52:28,691 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = n01/192.168.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.21.0
STARTUP_MSG:   classpath =
/home/ahmednagy/HadoopStandalone/hadoop-0.21.0/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/ahmednagy/HadoopStandalone$
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r
985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
************************************************************/
2011-02-07 18:52:28,881 WARN org.apache.hadoop.hdfs.server.common.Util: Path
/tmp/mylocal/ should be specified as a URI in configuration files. Please
updat$
2011-02-07 18:52:29,115 INFO org.apache.hadoop.security.Groups: Group
mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=3000$
2011-02-07 18:52:29,580 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
Incompatible namespaceIDs in /tmp/mylocal: namenode name$
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:237)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:152)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:336)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:260)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:237)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1440)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1393)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1407)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1552)

-- 
View this message in context: http://old.nabble.com/Data-Nodes-do-not-start-tp30866323p30866323.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Data Nodes do not start

Posted by suresh srinivas <sr...@gmail.com>.
On Tue, Feb 8, 2011 at 11:05 PM, rahul patodi <pa...@gmail.com> wrote:

> I think you should copy the namespaceID of your master which is in
> name/current/VERSION file to all the slaves
>

This is a sure recipe for disaster. The VERSION file is a file system
metadata file and is not to be messed around with. At worst, this can cause
the loss of the entire file system's data! Rahul, please update your blog to
reflect this.

Some background on namespace ID:
A namespace ID is created on the namenode when it is formatted. It is
propagated to the datanodes when they register with the namenode for the
first time. From then on, this ID is burnt into the datanodes.
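To see whether the IDs actually disagree, you can compare the namespaceID line in each node's VERSION file. The sketch below uses mock files under /tmp/demo purely for illustration; on a real cluster the files live under the directories configured as dfs.name.dir and dfs.data.dir, in their current/ subdirectory.

```shell
# Create two mock VERSION files to illustrate the comparison
# (real files live under ${dfs.name.dir}/current and ${dfs.data.dir}/current).
mkdir -p /tmp/demo/name/current /tmp/demo/data/current
printf 'namespaceID=123456789\nlayoutVersion=-24\n' > /tmp/demo/name/current/VERSION
printf 'namespaceID=987654321\nlayoutVersion=-24\n' > /tmp/demo/data/current/VERSION

# Extract and compare the IDs; a mismatch is exactly what the
# "Incompatible namespaceIDs" error in the datanode log reports.
nn_id=$(grep '^namespaceID=' /tmp/demo/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/demo/data/current/VERSION | cut -d= -f2)
if [ "$nn_id" != "$dn_id" ]; then
  echo "namespaceID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```

Comparing the files read-only like this is safe; it is editing them, as discussed below, that is dangerous.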

A mismatch between the namespace IDs of a datanode and the namenode means
one of the following:
# The datanode is pointing to the wrong namenode, perhaps in a different
cluster (the datanode's config points to the wrong namenode address).
# The namenode was previously running with one storage directory and was
switched to a different storage directory containing a different file
system image.


Why is editing the namespace ID a bad idea?
Given that either the namenode has loaded the wrong namespace or the
datanode is pointing to the wrong namenode, editing the namespaceID on
either the namenode or the datanode merely allows the datanode to register
with the namenode. When the datanode then sends its block report, the blocks
on the datanode do not belong to the namespace loaded by the namenode,
resulting in the deletion of all the blocks on that datanode.

Please find out which of these problems exists in your setup and fix it.
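If the config is correct and the datanode's blocks are expendable (a data dir under /tmp, as in this log, suggests a disposable test cluster), the usual recovery is to clear the datanode's storage directory and restart it, so it re-registers and receives the namenode's current namespaceID. A sketch, with a mock directory created so the example is self-contained:

```shell
# DANGER: this discards every block stored on this datanode. Only do
# this when the blocks are replicated elsewhere or the cluster is a
# disposable test setup.
DATA_DIR=/tmp/mylocal            # the value of dfs.data.dir on this node

# (mock setup so the example runs standalone; on a real node the
# directory already exists, and the daemon must be stopped first,
# e.g. with bin/hadoop-daemon.sh stop datanode)
mkdir -p "$DATA_DIR/current"
printf 'namespaceID=987654321\n' > "$DATA_DIR/current/VERSION"

# Remove the stale storage; on restart the datanode registers with the
# namenode and is handed the namenode's current namespaceID.
rm -rf "$DATA_DIR/current"
```

This repairs the datanode side without ever hand-editing a VERSION file.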

Re: Data Nodes do not start

Posted by rahul patodi <pa...@gmail.com>.
I think you should copy the namespaceID of your master, which is in the
name/current/VERSION file, to all the slaves.
Also use ./start-dfs.sh and then ./start-mapred.sh to start the respective
daemons.

http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-distributed-mode.html

*Regards*,
Rahul Patodi
Software Engineer,
Impetus Infotech (India) Pvt Ltd,
www.impetus.com
Mob:09907074413


On Wed, Feb 9, 2011 at 11:48 AM, madhu phatak <ph...@gmail.com> wrote:

> Don't use start-all.sh ,use data node daemon script to start the data node
> .




Re: Data Nodes do not start

Posted by madhu phatak <ph...@gmail.com>.
Don't use start-all.sh; use the datanode daemon script to start the data node.
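For reference, the per-daemon commands look like this (a sketch assuming the stock 0.21 script layout under the install's bin/ directory; these require a configured Hadoop installation, so they are illustrative rather than standalone):

```shell
# On a slave, start only the datanode daemon instead of the
# cluster-wide start-all.sh (which can mask per-node failures):
bin/hadoop-daemon.sh start datanode

# On the master, start the HDFS and MapReduce daemons separately:
bin/start-dfs.sh
bin/start-mapred.sh
```

Starting the daemon directly also makes its log output easier to attribute to the failing node.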

On Mon, Feb 7, 2011 at 11:52 PM, ahmednagy <ah...@hotmail.com>wrote:
