Posted to common-user@hadoop.apache.org by William Kang <we...@gmail.com> on 2010/03/08 06:28:47 UTC

Namenode problem

Hi all,
I am running HDFS in pseudo-distributed mode. Every time after I restart the
machine, I have to format the namenode, otherwise localhost:50070
won't show up. It is quite annoying to do so, since all the data is
lost. Does anybody know why this happens, and how should I fix this problem?
Many thanks.


William

RE: Namenode problem

Posted by sagar_shukla <sa...@persistent.co.in>.
Hi William,
      Can you provide a snapshot of the log file log/hadoop-hadoop-namenode.log from when the service fails to start after a reboot of the machine? Also, what does your configuration look like?
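
For example, something along these lines should capture both (the paths assume a standard layout under $HADOOP_HOME, and the exact log file name will include your user name and hostname):

    cat $HADOOP_HOME/conf/core-site.xml $HADOOP_HOME/conf/hdfs-site.xml
    tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log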

Thanks,
Sagar

-----Original Message-----
From: William Kang [mailto:weliam.cloud@gmail.com] 
Sent: Monday, March 08, 2010 10:59 AM
To: core-user@hadoop.apache.org
Subject: Namenode problem

Hi all,
I am running HDFS in Pseudo-distributed mode. Every time after I restarted
the machine, I have to format the namenode otherwise the localhost:50070
wont show up. It is quite annoying to do so since all the data would be
lost. Does anybody know this happens? And how should I fix this problem?
Many thanks.


William


Re: Namenode problem

Posted by "Eason.Lee" <le...@gmail.com>.
This is the datanode's log.
You'd better post the namenode's log (the filename contains "namenode").
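
For example, to pick out the right file (just a sketch; the file name includes your user name and hostname):

    ls $HADOOP_HOME/logs/ | grep namenode
    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log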


2010/3/10 William Kang <we...@gmail.com>

> Hi,
> I got the log dumped here:
>
> 2010-03-09 00:36:47,795 INFO
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
> succeeded for blk_6221934658367436050_1025
> 2010-03-09 00:46:49,155 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 12 blocks
> got processed in 11 msecs
> 2010-03-09 01:08:08,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
> ************************************************************/
> 2010-03-09 22:45:54,715 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = weliam-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.1
> STARTUP_MSG:   build =
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
> 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
> ************************************************************/
> 2010-03-09 22:45:55,330 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call
> to localhost/127.0.0.1:9000 failed on local exception:
> java.io.IOException:
> Connection reset by peer
>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
> at org.apache.hadoop.ipc.Client.call(Client.java:742)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> at $Proxy4.getProtocolVersion(Unknown Source)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
> at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>  at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
> at
>
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
> at
>
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>  at
>
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
> at
>
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
> Caused by: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcher.read0(Native Method)
>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
>  at sun.nio.ch.IOUtil.read(IOUtil.java:206)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>  at
>
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
> at
>
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>  at
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>  at java.io.FilterInputStream.read(FilterInputStream.java:116)
> at
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>  at java.io.DataInputStream.readInt(DataInputStream.java:370)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
> 2010-03-09 22:45:55,334 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
> ************************************************************/
>
> At this point, unless I format the Namenode, the web interface for hadoop
> at
> port 50070 is not coming back.
>
>
> William
>
> On Mon, Mar 8, 2010 at 10:59 PM, Eason.Lee <le...@gmail.com> wrote:
>
> > It's usually in $HADOOP_HOME/logs
> >
> > 2010/3/9 William Kang <we...@gmail.com>
> >
> > > Hi,
> > > If the namenode is not up, how can I get the logdir?
> > >
> > >
> > > William
> > >
> > > On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <le...@gmail.com>
> wrote:
> > >
> > > > 2010/3/9 William Kang <we...@gmail.com>
> > > >
> > > > > Hi Eason,
> > > > > Thanks a lot for your reply. But I do have another folder which in
> > not
> > > > > inside /tmp. I did not use default settings.
> > > > >
> > > >
> > > > you'd better post your configuration in detail~~
> > > >
> > > >
> > > > > To make it clear, I will describe what happened:
> > > > > 1. hadoop namenode -format
> > > > > 2. start-all.sh
> > > > > 3. running fine, http://localhost:50070 is accessible
> > > > > 4. stop-all.sh
> > > > > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > > > > Unless I format the namenode, the HDFS master
> > > > > http://localhost:50070/dfshealth.jsp is not accessible.
> > > > >
> > > >
> > > > Try "jps" to see if the namenode is up~~
> > > > If the namenode is not up, maybe there is some error log in logdir,
> try
> > > to
> > > > post the error~~
> > > >
> > > >
> > > > > So, I have to redo step 1, 2 again to gain access to
> > > > > http://localhost:50070/dfshealth.jsp. But all data would be lost
> > after
> > > > > format.
> > > > >
> > > >
> > > > format will delete the old namespace, so everything will lost~~
> > > >
> > > >
> > > > >
> > > > >
> > > > > William
> > > > >
> > > > > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com>
> > wrote:
> > > > >
> > > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > > >
> > > > > > > Hi guys,
> > > > > > > Thanks for your replies. I did not put anything in /tmp. It's
> > just
> > > > that
> > > > > > >
> > > > > >
> > > > > > default setting of dfs.name.dir/dfs.data.dir is set to the subdir
> > in
> > > > /tmp
> > > > > >
> > > > > > every time when I restart the hadoop, the localhost:50070 does
> not
> > > show
> > > > > up.
> > > > > > > The localhost:50030 is fine. Unless I reformat namenode, I wont
> > be
> > > > able
> > > > > > to
> > > > > > > see the HDFS' web page at 50070. It did not clean /tmp
> > > automatically.
> > > > > But
> > > > > > >
> > > > > >
> > > > > > It's not you clean the /tmp dir. Some operation clean it
> > > > automatically~~
> > > > > >
> > > > > >
> > > > > > > after format, everything is gone, well, it is a format. I did
> not
> > > > > really
> > > > > > > see
> > > > > > > anything in log. Not sure what caused it.
> > > > > > >
> > > > > > >
> > > > > > > William
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > > > > > bradfordstephens@gmail.com> wrote:
> > > > > > >
> > > > > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long
> > > run.
> > > > > > > >
> > > > > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <
> leongfans@gmail.com
> > >
> > > > > wrote:
> > > > > > > > > Your /tmp directory is cleaned automaticly?
> > > > > > > > >
> > > > > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > > > > > > >
> > > > > > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > > > > > >
> > > > > > > > >> Hi all,
> > > > > > > > >> I am running HDFS in Pseudo-distributed mode. Every time
> > after
> > > I
> > > > > > > > restarted
> > > > > > > > >> the machine, I have to format the namenode otherwise the
> > > > > > > localhost:50070
> > > > > > > > >> wont show up. It is quite annoying to do so since all the
> > data
> > > > > would
> > > > > > > be
> > > > > > > > >> lost. Does anybody know this happens? And how should I fix
> > > this
> > > > > > > problem?
> > > > > > > > >> Many thanks.
> > > > > > > > >>
> > > > > > > > >>
> > > > > > > > >> William
> > > > > > > > >>
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale
> > > data
> > > > > > > > solution. Process, store, query, search, and serve all your
> > data.
> > > > > > > >
> > > > > > > > http://www.roadtofailure.com -- The Fringes of Scalability,
> > > Social
> > > > > > > > Media, and Computer Science
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Namenode problem

Posted by William Kang <we...@gmail.com>.
Hi,
I got the log dumped here:

2010-03-09 00:36:47,795 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
succeeded for blk_6221934658367436050_1025
2010-03-09 00:46:49,155 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 12 blocks
got processed in 11 msecs
2010-03-09 01:08:08,430 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
************************************************************/
2010-03-09 22:45:54,715 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = weliam-desktop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2010-03-09 22:45:55,330 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.IOException: Connection reset by peer
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
        at org.apache.hadoop.ipc.Client.call(Client.java:742)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
        at sun.nio.ch.IOUtil.read(IOUtil.java:206)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.FilterInputStream.read(FilterInputStream.java:116)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.DataInputStream.readInt(DataInputStream.java:370)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)

2010-03-09 22:45:55,334 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
************************************************************/

At this point, unless I format the namenode, the Hadoop web interface on port
50070 does not come back.
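
A quick way to confirm whether the namenode is actually up and listening on its RPC port (assuming the usual pseudo-distributed fs.default.name of hdfs://localhost:9000; this is just a sketch):

    jps | grep NameNode
    netstat -tln | grep 9000

If the NameNode process is missing, the datanode error above is only a downstream symptom; the namenode's own log should show why it failed to start.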


William

On Mon, Mar 8, 2010 at 10:59 PM, Eason.Lee <le...@gmail.com> wrote:

> It's usually in $HADOOP_HOME/logs
>
> 2010/3/9 William Kang <we...@gmail.com>
>
> > Hi,
> > If the namenode is not up, how can I get the logdir?
> >
> >
> > William
> >
> > On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <le...@gmail.com> wrote:
> >
> > > 2010/3/9 William Kang <we...@gmail.com>
> > >
> > > > Hi Eason,
> > > > Thanks a lot for your reply. But I do have another folder which in
> not
> > > > inside /tmp. I did not use default settings.
> > > >
> > >
> > > you'd better post your configuration in detail~~
> > >
> > >
> > > > To make it clear, I will describe what happened:
> > > > 1. hadoop namenode -format
> > > > 2. start-all.sh
> > > > 3. running fine, http://localhost:50070 is accessible
> > > > 4. stop-all.sh
> > > > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > > > Unless I format the namenode, the HDFS master
> > > > http://localhost:50070/dfshealth.jsp is not accessible.
> > > >
> > >
> > > Try "jps" to see if the namenode is up~~
> > > If the namenode is not up, maybe there is some error log in logdir, try
> > to
> > > post the error~~
> > >
> > >
> > > > So, I have to redo step 1, 2 again to gain access to
> > > > http://localhost:50070/dfshealth.jsp. But all data would be lost
> after
> > > > format.
> > > >
> > >
> > > format will delete the old namespace, so everything will lost~~
> > >
> > >
> > > >
> > > >
> > > > William
> > > >
> > > > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com>
> wrote:
> > > >
> > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > >
> > > > > > Hi guys,
> > > > > > Thanks for your replies. I did not put anything in /tmp. It's
> just
> > > that
> > > > > >
> > > > >
> > > > > default setting of dfs.name.dir/dfs.data.dir is set to the subdir
> in
> > > /tmp
> > > > >
> > > > > every time when I restart the hadoop, the localhost:50070 does not
> > show
> > > > up.
> > > > > > The localhost:50030 is fine. Unless I reformat namenode, I wont
> be
> > > able
> > > > > to
> > > > > > see the HDFS' web page at 50070. It did not clean /tmp
> > automatically.
> > > > But
> > > > > >
> > > > >
> > > > > It's not you clean the /tmp dir. Some operation clean it
> > > automatically~~
> > > > >
> > > > >
> > > > > > after format, everything is gone, well, it is a format. I did not
> > > > really
> > > > > > see
> > > > > > anything in log. Not sure what caused it.
> > > > > >
> > > > > >
> > > > > > William
> > > > > >
> > > > > >
> > > > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > > > > bradfordstephens@gmail.com> wrote:
> > > > > >
> > > > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long
> > run.
> > > > > > >
> > > > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <leongfans@gmail.com
> >
> > > > wrote:
> > > > > > > > Your /tmp directory is cleaned automaticly?
> > > > > > > >
> > > > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > > > > > >
> > > > > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > > > > >
> > > > > > > >> Hi all,
> > > > > > > >> I am running HDFS in Pseudo-distributed mode. Every time
> after
> > I
> > > > > > > restarted
> > > > > > > >> the machine, I have to format the namenode otherwise the
> > > > > > localhost:50070
> > > > > > > >> wont show up. It is quite annoying to do so since all the
> data
> > > > would
> > > > > > be
> > > > > > > >> lost. Does anybody know this happens? And how should I fix
> > this
> > > > > > problem?
> > > > > > > >> Many thanks.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> William
> > > > > > > >>
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale
> > data
> > > > > > > solution. Process, store, query, search, and serve all your
> data.
> > > > > > >
> > > > > > > http://www.roadtofailure.com -- The Fringes of Scalability,
> > Social
> > > > > > > Media, and Computer Science
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Namenode problem

Posted by "Eason.Lee" <le...@gmail.com>.
It's usually in $HADOOP_HOME/logs

2010/3/9 William Kang <we...@gmail.com>

> Hi,
> If the namenode is not up, how can I get the logdir?
>
>
> William
>
> On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <le...@gmail.com> wrote:
>
> > 2010/3/9 William Kang <we...@gmail.com>
> >
> > > Hi Eason,
> > > Thanks a lot for your reply. But I do have another folder which in not
> > > inside /tmp. I did not use default settings.
> > >
> >
> > you'd better post your configuration in detail~~
> >
> >
> > > To make it clear, I will describe what happened:
> > > 1. hadoop namenode -format
> > > 2. start-all.sh
> > > 3. running fine, http://localhost:50070 is accessible
> > > 4. stop-all.sh
> > > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > > Unless I format the namenode, the HDFS master
> > > http://localhost:50070/dfshealth.jsp is not accessible.
> > >
> >
> > Try "jps" to see if the namenode is up~~
> > If the namenode is not up, maybe there is some error log in logdir, try
> to
> > post the error~~
> >
> >
> > > So, I have to redo step 1, 2 again to gain access to
> > > http://localhost:50070/dfshealth.jsp. But all data would be lost after
> > > format.
> > >
> >
> > format will delete the old namespace, so everything will lost~~
> >
> >
> > >
> > >
> > > William
> > >
> > > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com> wrote:
> > >
> > > > 2010/3/8 William Kang <we...@gmail.com>
> > > >
> > > > > Hi guys,
> > > > > Thanks for your replies. I did not put anything in /tmp. It's just
> > that
> > > > >
> > > >
> > > > default setting of dfs.name.dir/dfs.data.dir is set to the subdir in
> > /tmp
> > > >
> > > > every time when I restart the hadoop, the localhost:50070 does not
> show
> > > up.
> > > > > The localhost:50030 is fine. Unless I reformat namenode, I wont be
> > able
> > > > to
> > > > > see the HDFS' web page at 50070. It did not clean /tmp
> automatically.
> > > But
> > > > >
> > > >
> > > > It's not you clean the /tmp dir. Some operation clean it
> > automatically~~
> > > >
> > > >
> > > > > after format, everything is gone, well, it is a format. I did not
> > > really
> > > > > see
> > > > > anything in log. Not sure what caused it.
> > > > >
> > > > >
> > > > > William
> > > > >
> > > > >
> > > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > > > bradfordstephens@gmail.com> wrote:
> > > > >
> > > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long
> run.
> > > > > >
> > > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com>
> > > wrote:
> > > > > > > Your /tmp directory is cleaned automaticly?
> > > > > > >
> > > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > > > > >
> > > > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > > > >
> > > > > > >> Hi all,
> > > > > > >> I am running HDFS in Pseudo-distributed mode. Every time after
> I
> > > > > > restarted
> > > > > > >> the machine, I have to format the namenode otherwise the
> > > > > localhost:50070
> > > > > > >> wont show up. It is quite annoying to do so since all the data
> > > would
> > > > > be
> > > > > > >> lost. Does anybody know this happens? And how should I fix
> this
> > > > > problem?
> > > > > > >> Many thanks.
> > > > > > >>
> > > > > > >>
> > > > > > >> William
> > > > > > >>
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale
> data
> > > > > > solution. Process, store, query, search, and serve all your data.
> > > > > >
> > > > > > http://www.roadtofailure.com -- The Fringes of Scalability,
> Social
> > > > > > Media, and Computer Science
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Namenode problem

Posted by William Kang <we...@gmail.com>.
Hi,
If the namenode is not up, how can I get the logdir?


William

On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <le...@gmail.com> wrote:

> 2010/3/9 William Kang <we...@gmail.com>
>
> > Hi Eason,
> > Thanks a lot for your reply. But I do have another folder which in not
> > inside /tmp. I did not use default settings.
> >
>
> you'd better post your configuration in detail~~
>
>
> > To make it clear, I will describe what happened:
> > 1. hadoop namenode -format
> > 2. start-all.sh
> > 3. running fine, http://localhost:50070 is accessible
> > 4. stop-all.sh
> > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > Unless I format the namenode, the HDFS master
> > http://localhost:50070/dfshealth.jsp is not accessible.
> >
>
> Try "jps" to see if the namenode is up~~
> If the namenode is not up, maybe there is some error log in logdir, try to
> post the error~~
>
>
> > So, I have to redo step 1, 2 again to gain access to
> > http://localhost:50070/dfshealth.jsp. But all data would be lost after
> > format.
> >
>
> format will delete the old namespace, so everything will lost~~
>
>
> >
> >
> > William
> >
> > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com> wrote:
> >
> > > 2010/3/8 William Kang <we...@gmail.com>
> > >
> > > > Hi guys,
> > > > Thanks for your replies. I did not put anything in /tmp. It's just
> that
> > > >
> > >
> > > default setting of dfs.name.dir/dfs.data.dir is set to the subdir in
> /tmp
> > >
> > > every time when I restart the hadoop, the localhost:50070 does not show
> > up.
> > > > The localhost:50030 is fine. Unless I reformat namenode, I wont be
> able
> > > to
> > > > see the HDFS' web page at 50070. It did not clean /tmp automatically.
> > But
> > > >
> > >
> > > It's not you clean the /tmp dir. Some operation clean it
> automatically~~
> > >
> > >
> > > > after format, everything is gone, well, it is a format. I did not
> > really
> > > > see
> > > > anything in log. Not sure what caused it.
> > > >
> > > >
> > > > William
> > > >
> > > >
> > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > > bradfordstephens@gmail.com> wrote:
> > > >
> > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long run.
> > > > >
> > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com>
> > wrote:
> > > > > > Your /tmp directory is cleaned automaticly?
> > > > > >
> > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > > > >
> > > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > > >
> > > > > >> Hi all,
> > > > > >> I am running HDFS in Pseudo-distributed mode. Every time after I
> > > > > restarted
> > > > > >> the machine, I have to format the namenode otherwise the
> > > > localhost:50070
> > > > > >> wont show up. It is quite annoying to do so since all the data
> > would
> > > > be
> > > > > >> lost. Does anybody know this happens? And how should I fix this
> > > > problem?
> > > > > >> Many thanks.
> > > > > >>
> > > > > >>
> > > > > >> William
> > > > > >>
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
> > > > > solution. Process, store, query, search, and serve all your data.
> > > > >
> > > > > http://www.roadtofailure.com -- The Fringes of Scalability, Social
> > > > > Media, and Computer Science
> > > > >
> > > >
> > >
> >
>

Re: Namenode problem

Posted by "Eason.Lee" <le...@gmail.com>.
2010/3/9 William Kang <we...@gmail.com>

> Hi Eason,
> Thanks a lot for your reply. But I do have another folder which in not
> inside /tmp. I did not use default settings.
>

you'd better post your configuration in detail~~


> To make it clear, I will describe what happened:
> 1. hadoop namenode -format
> 2. start-all.sh
> 3. running fine, http://localhost:50070 is accessible
> 4. stop-all.sh
> 5. start-all.sh, http://localhost:50070 is NOT accessible
> Unless I format the namenode, the HDFS master
> http://localhost:50070/dfshealth.jsp is not accessible.
>

Try "jps" to see if the namenode is up~~
If the namenode is not up, there is probably an error in the log dir; try to
post the error~~
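
For example (the output below is only a sketch; PIDs and the exact set of daemons will differ):

    $ jps
    12345 NameNode
    12399 DataNode
    12453 SecondaryNameNode
    12517 JobTracker
    12581 TaskTracker
    12640 Jps

If NameNode is missing from that list, look in logs/hadoop-<user>-namenode-<host>.log for the reason it did not start.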


> So, I have to redo step 1, 2 again to gain access to
> http://localhost:50070/dfshealth.jsp. But all data would be lost after
> format.
>

Formatting will delete the old namespace, so everything will be lost~~


>
>
> William
>
> On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com> wrote:
>
> > 2010/3/8 William Kang <we...@gmail.com>
> >
> > > Hi guys,
> > > Thanks for your replies. I did not put anything in /tmp. It's just that
> > >
> >
> > default setting of dfs.name.dir/dfs.data.dir is set to the subdir in /tmp
> >
> > every time when I restart the hadoop, the localhost:50070 does not show
> up.
> > > The localhost:50030 is fine. Unless I reformat namenode, I wont be able
> > to
> > > see the HDFS' web page at 50070. It did not clean /tmp automatically.
> But
> > >
> >
> > It's not you clean the /tmp dir. Some operation clean it automatically~~
> >
> >
> > > after format, everything is gone, well, it is a format. I did not
> really
> > > see
> > > anything in log. Not sure what caused it.
> > >
> > >
> > > William
> > >
> > >
> > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > bradfordstephens@gmail.com> wrote:
> > >
> > > > Yeah. Don't put things in /tmp. That's unpleasant in the long run.
> > > >
> > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com>
> wrote:
> > > > > Your /tmp directory is cleaned automaticly?
> > > > >
> > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > > >
> > > > > 2010/3/8 William Kang <we...@gmail.com>
> > > > >
> > > > >> Hi all,
> > > > >> I am running HDFS in Pseudo-distributed mode. Every time after I
> > > > restarted
> > > > >> the machine, I have to format the namenode otherwise the
> > > localhost:50070
> > > > >> wont show up. It is quite annoying to do so since all the data
> would
> > > be
> > > > >> lost. Does anybody know this happens? And how should I fix this
> > > problem?
> > > > >> Many thanks.
> > > > >>
> > > > >>
> > > > >> William
> > > > >>
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
> > > > solution. Process, store, query, search, and serve all your data.
> > > >
> > > > http://www.roadtofailure.com -- The Fringes of Scalability, Social
> > > > Media, and Computer Science
> > > >
> > >
> >
>

Re: Namenode problem

Posted by William Kang <we...@gmail.com>.
Hi Eason,
Thanks a lot for your reply. But I do have another folder which is not
inside /tmp; I did not use the default settings.
To make it clear, I will describe what happened:
1. hadoop namenode -format
2. start-all.sh
3. running fine, http://localhost:50070 is accessible
4. stop-all.sh
5. start-all.sh, http://localhost:50070 is NOT accessible
Unless I format the namenode, the HDFS master page at
http://localhost:50070/dfshealth.jsp is not accessible.
So I have to redo steps 1 and 2 to regain access to
http://localhost:50070/dfshealth.jsp, but all data is lost after the
format.
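
One thing worth checking between steps 4 and 5 (and again after a reboot) is whether the directory pointed to by dfs.name.dir still contains the namenode image. In 0.20 it should look roughly like this (the path is only an example):

    $ ls /path/to/dfs/name/current
    VERSION  edits  fsimage  fstime

If that directory is empty or missing after a reboot, whatever is cleaning it is the reason the namenode refuses to start without a fresh format.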


William

On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <le...@gmail.com> wrote:

> 2010/3/8 William Kang <we...@gmail.com>
>
> > Hi guys,
> > Thanks for your replies. I did not put anything in /tmp. It's just that
> >
>
> default setting of dfs.name.dir/dfs.data.dir is set to the subdir in /tmp
>
> every time when I restart the hadoop, the localhost:50070 does not show up.
> > The localhost:50030 is fine. Unless I reformat namenode, I wont be able
> to
> > see the HDFS' web page at 50070. It did not clean /tmp automatically. But
> >
>
> It's not you clean the /tmp dir. Some operation clean it automatically~~
>
>
> > after format, everything is gone, well, it is a format. I did not really
> > see
> > anything in log. Not sure what caused it.
> >
> >
> > William
> >
> >
> > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > bradfordstephens@gmail.com> wrote:
> >
> > > Yeah. Don't put things in /tmp. That's unpleasant in the long run.
> > >
> > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com> wrote:
> > > > Your /tmp directory is cleaned automaticly?
> > > >
> > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > > >
> > > > 2010/3/8 William Kang <we...@gmail.com>
> > > >
> > > >> Hi all,
> > > >> I am running HDFS in Pseudo-distributed mode. Every time after I
> > > restarted
> > > >> the machine, I have to format the namenode otherwise the
> > localhost:50070
> > > >> wont show up. It is quite annoying to do so since all the data would
> > be
> > > >> lost. Does anybody know this happens? And how should I fix this
> > problem?
> > > >> Many thanks.
> > > >>
> > > >>
> > > >> William
> > > >>
> > > >
> > >
> > >
> > >
> > > --
> > > http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
> > > solution. Process, store, query, search, and serve all your data.
> > >
> > > http://www.roadtofailure.com -- The Fringes of Scalability, Social
> > > Media, and Computer Science
> > >
> >
>

Re: Namenode problem

Posted by "Eason.Lee" <le...@gmail.com>.
2010/3/8 William Kang <we...@gmail.com>

> Hi guys,
> Thanks for your replies. I did not put anything in /tmp. It's just that
>

The default setting of dfs.name.dir/dfs.data.dir puts them in a subdirectory of /tmp.
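For reference, the 0.20 defaults (from core-default.xml and hdfs-default.xml) are roughly:

    hadoop.tmp.dir = /tmp/hadoop-${user.name}
    dfs.name.dir   = ${hadoop.tmp.dir}/dfs/name
    dfs.data.dir   = ${hadoop.tmp.dir}/dfs/data

so unless they are overridden, both end up under /tmp.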

> every time when I restart the hadoop, the localhost:50070 does not show up.
> The localhost:50030 is fine. Unless I reformat namenode, I wont be able to
> see the HDFS' web page at 50070. It did not clean /tmp automatically. But
>

It's not that you cleaned the /tmp dir yourself; something on the system cleans it automatically~~


> after format, everything is gone, well, it is a format. I did not really
> see
> anything in log. Not sure what caused it.
>
>
> William
>
>
> On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> bradfordstephens@gmail.com> wrote:
>
> > Yeah. Don't put things in /tmp. That's unpleasant in the long run.
> >
> > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com> wrote:
> > > Your /tmp directory is cleaned automaticly?
> > >
> > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> > >
> > > 2010/3/8 William Kang <we...@gmail.com>
> > >
> > >> Hi all,
> > >> I am running HDFS in Pseudo-distributed mode. Every time after I
> > restarted
> > >> the machine, I have to format the namenode otherwise the
> localhost:50070
> > >> wont show up. It is quite annoying to do so since all the data would
> be
> > >> lost. Does anybody know this happens? And how should I fix this
> problem?
> > >> Many thanks.
> > >>
> > >>
> > >> William
> > >>
> > >
> >
> >
> >
> > --
> > http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
> > solution. Process, store, query, search, and serve all your data.
> >
> > http://www.roadtofailure.com -- The Fringes of Scalability, Social
> > Media, and Computer Science
> >
>

Re: Namenode problem

Posted by William Kang <we...@gmail.com>.
Hi guys,
Thanks for your replies. I did not put anything in /tmp. It's just that
every time I restart Hadoop, localhost:50070 does not show up;
localhost:50030 is fine. Unless I reformat the namenode, I won't be able to
see the HDFS web page at 50070. /tmp did not get cleaned automatically, but
after the format everything is gone (well, it is a format). I did not really see
anything in the log. Not sure what caused it.


William


On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
bradfordstephens@gmail.com> wrote:

> Yeah. Don't put things in /tmp. That's unpleasant in the long run.
>
> On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com> wrote:
> > Your /tmp directory is cleaned automaticly?
> >
> > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
> >
> > 2010/3/8 William Kang <we...@gmail.com>
> >
> >> Hi all,
> >> I am running HDFS in Pseudo-distributed mode. Every time after I
> restarted
> >> the machine, I have to format the namenode otherwise the localhost:50070
> >> wont show up. It is quite annoying to do so since all the data would be
> >> lost. Does anybody know this happens? And how should I fix this problem?
> >> Many thanks.
> >>
> >>
> >> William
> >>
> >
>
>
>
> --
> http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
> solution. Process, store, query, search, and serve all your data.
>
> http://www.roadtofailure.com -- The Fringes of Scalability, Social
> Media, and Computer Science
>

Re: Namenode problem

Posted by Bradford Stephens <br...@gmail.com>.
Yeah. Don't put things in /tmp. That's unpleasant in the long run.

On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <le...@gmail.com> wrote:
> Your /tmp directory is cleaned automaticly?
>
> Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
>
> 2010/3/8 William Kang <we...@gmail.com>
>
>> Hi all,
>> I am running HDFS in Pseudo-distributed mode. Every time after I restarted
>> the machine, I have to format the namenode otherwise the localhost:50070
>> wont show up. It is quite annoying to do so since all the data would be
>> lost. Does anybody know this happens? And how should I fix this problem?
>> Many thanks.
>>
>>
>> William
>>
>



-- 
http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
solution. Process, store, query, search, and serve all your data.

http://www.roadtofailure.com -- The Fringes of Scalability, Social
Media, and Computer Science

Re: Namenode problem

Posted by "Eason.Lee" <le...@gmail.com>.
Is your /tmp directory being cleaned automatically?

Try setting dfs.name.dir/dfs.data.dir to a safe (persistent) dir~~
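
For example, a minimal hdfs-site.xml along these lines would do it (the /home/hadoop paths are only placeholders; any directory that survives a reboot works):

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/dfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/dfs/data</value>
      </property>
    </configuration>

After changing these you will need to run "hadoop namenode -format" once more (data already under /tmp is not migrated automatically) and then restart with start-all.sh.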

2010/3/8 William Kang <we...@gmail.com>

> Hi all,
> I am running HDFS in Pseudo-distributed mode. Every time after I restarted
> the machine, I have to format the namenode otherwise the localhost:50070
> wont show up. It is quite annoying to do so since all the data would be
> lost. Does anybody know this happens? And how should I fix this problem?
> Many thanks.
>
>
> William
>