Posted to common-user@hadoop.apache.org by Kaushal Amin <ka...@gmail.com> on 2009/11/10 05:10:03 UTC

Hadoop NameNode not starting up

I am running Hadoop on a single server. The issue I am running into is that
the start-all.sh script is not starting up the NameNode.

The only way I can start the NameNode is by formatting it, and I end up
losing the data in HDFS.

Does anyone have a solution to this issue?

 

Kaushal

 


Re: Hadoop NameNode not starting up

Posted by Starry SHI <st...@gmail.com>.
Actually, you can put hadoop.tmp.dir somewhere other than /tmp, e.g.
/opt/hadoop_tmp or /var/hadoop_tmp. First create the folder there and give it
the right mode (chmod 777 so every user can use Hadoop), then change the conf
XML file accordingly, run "hadoop namenode -format", and start it up.
Hopefully that will work.

My experience is that putting hadoop.tmp.dir in /tmp makes Hadoop unstable,
especially for long-running jobs.
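
For example, a minimal sketch (the path here is just an illustration; pick
whatever suits your box):

mkdir -p /opt/hadoop_tmp
chmod 777 /opt/hadoop_tmp

and then point hadoop.tmp.dir at it in your conf (core-site.xml on 0.20,
hadoop-site.xml on older releases):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop_tmp</value>
  <description>A base for other temporary directories.</description>
</property>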

Best regards,
Starry

/* Tomorrow is another day. So is today. */


On Thu, Nov 12, 2009 at 04:04, Edward Capriolo <ed...@gmail.com> wrote:

> The property you are going to need to set is
>
> <property>
>  <name>dfs.name.dir</name>
>  <value>${hadoop.tmp.dir}/dfs/name</value>
>  <description>Determines where on the local filesystem the DFS name node
>      should store the name table.  If this is a comma-delimited list
>      of directories then the name table is replicated in all of the
>      directories, for redundancy. </description>
> </property>
>
>
> If you are running 0.20 or later, the information about the critical
> variables you need to set up to get running is here (give these a good
> read-through):
>
> http://hadoop.apache.org/common/docs/current/quickstart.html
> http://hadoop.apache.org/common/docs/current/cluster_setup.html
>
> If you are running a version older than 0.20, you can look in
> hadoop-default.xml and make changes to hadoop-site.xml.
>
> Edward

Re: Hadoop NameNode not starting up

Posted by Edward Capriolo <ed...@gmail.com>.
The property you are going to need to set is

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table.  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>


If you are running 0.20 or later, the information about the critical
variables you need to set up to get running is here (give these a good
read-through):

http://hadoop.apache.org/common/docs/current/quickstart.html
http://hadoop.apache.org/common/docs/current/cluster_setup.html

If you are running a version older than 0.20, you can look in
hadoop-default.xml and make changes to hadoop-site.xml.
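
For instance, a rough sketch of overriding it to a location that survives
reboots (the path is only an example) would go in hdfs-site.xml on 0.20, or
hadoop-site.xml on older releases:

<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value>
</property>

With something like that in place the name table no longer lives under /tmp,
so a reboot or a tmp cleaner will not wipe it out. Note that after changing
the location you will need to run "hadoop namenode -format" once on the new
directory (or copy the contents of the old name directory over, if you still
have them).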

Edward

On Wed, Nov 11, 2009 at 2:55 PM, Kaushal Amin <ka...@gmail.com> wrote:
> which configuration file?

Re: Hadoop NameNode not starting up

Posted by Kaushal Amin <ka...@gmail.com>.
which configuration file?

On Wed, Nov 11, 2009 at 1:50 PM, Edward Capriolo <ed...@gmail.com> wrote:

> Are you starting Hadoop as a different user?
> Maybe the first time you started it as user hadoop, and this time you
> are starting it as user root.
>
> Or, as stated above, something is cleaning out your /tmp. Use your
> configuration files to have the namenode write to a permanent place.
>
> Edward

Re: Hadoop NameNode not starting up

Posted by Edward Capriolo <ed...@gmail.com>.
Are you starting Hadoop as a different user?
Maybe the first time you started it as user hadoop, and this time you
are starting it as user root.

Or, as stated above, something is cleaning out your /tmp. Use your
configuration files to have the namenode write to a permanent place.
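
A quick way to check (the path below is the one from your log) is to look at
whether the storage directory is still there and who owns it:

ls -ld /tmp/hadoop-root/dfs/name
ls -ld /tmp/hadoop-*

If the directory is gone, something emptied /tmp; if it belongs to a
different user, you are probably starting the daemons as the wrong account.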

Edward

On Wed, Nov 11, 2009 at 2:36 PM, Kaushal Amin <ka...@gmail.com> wrote:
> I am seeing the following error in my NameNode log file.
>
> 2009-11-11 10:59:59,407 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> 2009-11-11 10:59:59,449 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory
> /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory
> does not exist or is not accessible.
>
> Any idea?

Re: Hadoop NameNode not starting up

Posted by Kaushal Amin <ka...@gmail.com>.
I am seeing the following error in my NameNode log file.

2009-11-11 10:59:59,407 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
2009-11-11 10:59:59,449 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory
/tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory
does not exist or is not accessible.

Any idea?


On Mon, Nov 9, 2009 at 10:10 PM, Kaushal Amin <ka...@gmail.com> wrote:

>  I am running Hadoop on a single server. The issue I am running into is
> that the start-all.sh script is not starting up the NameNode.
>
> The only way I can start the NameNode is by formatting it, and I end up
> losing the data in HDFS.
>
> Does anyone have a solution to this issue?
>
>
>
> Kaushal
>
>
>

Re: Hadoop NameNode not starting up

Posted by Edmund Kohlwey <ek...@gmail.com>.
Is there error output from start-all.sh?

On 11/9/09 11:10 PM, Kaushal Amin wrote:
> I am running Hadoop on a single server. The issue I am running into is that
> the start-all.sh script is not starting up the NameNode.
>
> The only way I can start the NameNode is by formatting it, and I end up
> losing the data in HDFS.
>
> Does anyone have a solution to this issue?
>
>
>
> Kaushal
>
>
>
>
>    


Re: Hadoop NameNode not starting up

Posted by Sagar <sn...@attributor.com>.
Did you format it for the first time?
Another quick way to figure it out is to run the namenode in the foreground:

${HADOOP_HOME}/bin/hadoop namenode

and see what error it gives.
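
(If you would rather have it write to a log file instead of the console, I
think the daemon form is ${HADOOP_HOME}/bin/hadoop-daemon.sh start namenode,
but the foreground run above is the quickest way to see the error.)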

-Sagar

Stephen Watt wrote:
> You need to go to your logs directory and have a look at what is going on
> in the namenode log. What version are you using?
>
> I'm going to take a guess at your issue here and say that you used /tmp as
> a path for some of your hadoop conf settings and you have rebooted lately.
> The /tmp dir is wiped out on reboot.
>
> Kind regards
> Steve Watt


Re: Hadoop NameNode not starting up

Posted by Stephen Watt <sw...@us.ibm.com>.
You need to go to your logs directory and have a look at what is going on
in the namenode log. What version are you using?

I'm going to take a guess at your issue here and say that you used /tmp as
a path for some of your hadoop conf settings and you have rebooted lately.
The /tmp dir is wiped out on reboot.
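
The namenode log sits under the logs directory of your Hadoop install, with
a name along the lines of hadoop-<user>-namenode-<hostname>.log (the exact
file name depends on the user you start the daemons as and your hostname),
for example:

ls $HADOOP_HOME/logs/
tail -100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log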

Kind regards
Steve Watt



From: "Kaushal Amin" <ka...@gmail.com>
To: <co...@hadoop.apache.org>
Date: 11/10/2009 08:47 AM
Subject: Hadoop NameNode not starting up



I am running Hadoop on a single server. The issue I am running into is that
the start-all.sh script is not starting up the NameNode.

The only way I can start the NameNode is by formatting it, and I end up
losing the data in HDFS.

Does anyone have a solution to this issue?

 

Kaushal