Posted to common-user@hadoop.apache.org by Keith Wiley <kw...@keithwiley.com> on 2013/02/19 20:37:08 UTC

webapps/ CLASSPATH err

This is Hadoop 2.0, but using the separate MR1 package (hadoop-2.0.0-mr1-cdh4.1.3), not yarn.  I formatted the namenode ("./bin/hadoop namenode -format") and saw no errors in the shell or in the logs/[namenode].log file (in fact, simply formatting the namenode doesn't even create the log file yet).  I believe that merely formatting the namenode shouldn't leave any persistent java processes running, so I wouldn't expect "ps aux | grep java" to show anything, which of course it doesn't.

I then started the namenode with "./bin/hadoop-daemon.sh start namenode".  This produces the log file and still shows no errors.  The final entry in the log is:
2013-02-19 19:15:19,477 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000

Curiously, I still don't see any java processes running and netstat doesn't show any obvious 9000 listeners.  I get this:
$ netstat -a -t --numeric-ports -p
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 localhost:25                *:*                         LISTEN      -                  
tcp        0      0 *:22                        *:*                         LISTEN      -                  
tcp        0      0 ip-13-0-177-11:60765        ec2-50-19-38-112.compute:22 ESTABLISHED 23591/ssh          
tcp        0      0 ip-13-0-177-11:22           13.0.177.165:56984          ESTABLISHED -                  
tcp        0      0 ip-13-0-177-11:22           13.0.177.165:38081          ESTABLISHED -                  
tcp        0      0 *:22                        *:*                         LISTEN      -                  

Note that ip-13-0-177-11 is the current machine (it is also specified as the master in /etc/hosts and is indicated via localhost in fs.default.name on port 9000 (fs.default.name = "hdfs://localhost:9000")).  So, at this point, I'm beginning to get confused because I don't see a java namenode process and I don't see a port 9000 listener...but still haven't seen any blatant error messages.
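
For reference, the relevant property in conf/core-site.xml looks roughly like this (other properties omitted):
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>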

Next, I try "hadoop fs -ls /".  I then get the shell error I have been wrestling with recently:
ls: Call From ip-13-0-177-11/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Furthermore, this last step adds the following entry to the namenode log file:
2013-02-19 19:15:20,434 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: ReplicationMonitor thread received InterruptedException.
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3025)
	at java.lang.Thread.run(Thread.java:679)
2013-02-19 19:15:20,438 WARN org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2013-02-19 19:15:20,442 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-02-19 19:15:20,445 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-02-19 19:15:20,445 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
	at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:560)
	at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:247)
	at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:171)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:89)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
2013-02-19 19:15:20,447 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-02-19 19:15:20,474 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
************************************************************/

This is particularly confusing because, while the hadoop-2.0.0-mr1-cdh4.1.3/ dir does have a webapps/ dir, there is no "hdfs" file or dir in that webapps/.  It contains job/, static/, and task/.

If I start over from a freshly formatted namenode and take a slightly different approach -- if I try to start the datanode immediately after starting the namenode -- once again it fails, and in a very similar way.  This time the command to start the datanode has two effects: the namenode log still can't find webapps/hdfs, just as shown above, and also, there is now a datanode log file, and it likewise can't find webapps/datanode ("java.io.FileNotFoundException: webapps/datanode not found in CLASSPATH") so I get two very similar errors at once, one on the namenode and one on the datanode.

This webapps/ dir business makes no sense since the files (or directories) the logs claim to be looking for inside webapps/ ("hdfs" and "datanode") don't exist!
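
To illustrate (the directory listing matches what I described above; the classpath check is only a sketch, assuming this tarball's bin/hadoop supports the "classpath" subcommand):
$ ls webapps/
job  static  task
$ ./bin/hadoop classpath | tr ':' '\n' | grep webapps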

Thoughts?

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"It's a fine line between meticulous and obsessive-compulsive and a slippery
rope between obsessive-compulsive and debilitatingly slow."
                                           --  Keith Wiley
________________________________________________________________________________


Re: webapps/ CLASSPATH err

Posted by Keith Wiley <kw...@keithwiley.com>.
On Feb 19, 2013, at 11:43 , Harsh J wrote:

> Hi Keith,
> 
> The webapps/hdfs bundle is present in the
> $HADOOP_PREFIX/share/hadoop/hdfs/ directory of the Hadoop 2.x release
> tarball. This should get on the classpath automatically as well.

Hadoop 2.0 Yarn does indeed have a share/ dir, but Hadoop 2.0 MR1 doesn't have a share/ dir at all.  Is MR1 not usable?  I was hoping to use it as a stepping stone between older versions of Hadoop (for which I have found some EC2 support, not the least being an actual ec2/ dir and associated scripts in src/contrib/ec2) and Yarn, for which I have found no such support, provided scripts, or online walkthroughs yet.  However, I am discovering that H2 MR1 is sufficiently different from older versions of Hadoop that my previous successes don't easily carry over (the bin/ directory is quite different, for one thing).  At the same time, H2 MR1 is also sufficiently different from Yarn that I can't easily apply Yarn advice to it (as noted, I don't even see a share/ directory in H2 MR1, so I'm not sure how to apply the response above).
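
A concrete illustration of the layout difference (the Yarn tarball path below is just an example of where a separate Yarn install might live):
$ ls ~/hadoop-2.0.0-mr1-cdh4.1.3/ | grep share    # nothing; H2 MR1 has no share/ dir
$ ls ~/hadoop-2.0.0-cdh4.1.3/ | grep share        # the Yarn tarball does list share/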

> What "bin/hadoop-daemon.sh" script are you using, the one from the MR1
> "aside" tarball or the chief hadoop-2 one?

I figured that as long as I'm trying to use MR1, I would use it exclusively and not touch the Yarn installation at all, so I'm relying entirely on the conf/ and bin/ dirs under MR1 (note that MR1's sbin/ dir only contains a nonexecutable "task-controller", not all the other stuff that Yarn's sbin/ dir contains).  So I'm using MR1's bin/hadoop and bin/hadoop-daemon.sh, nothing else.

> On my tarball setups, I 'start-dfs.sh' via the regular tarball, and it
> works fine.

MR1's bin/ dir has no such executable, nor does it have the conventional start-all.sh I'm used to.  I recognize those script names from older versions of Hadoop, but H2 MR1 doesn't provide them.  I'm using hadoop-2.0.0-mr1-cdh4.1.3.

> Another simple check you could do is to try to start with
> "$HADOOP_PREFIX/bin/hdfs namenode" to see if it at least starts well
> this way and brings up the NN as a foreground process.

H2 MR1's bin/ dir doesn't have an hdfs executable in it.  Admittedly, H2 Yarn's bin/ dir does.  The following are my H2 MR1 bin/ options:
~/hadoop-2.0.0-mr1-cdh4.1.3/ $ ls bin/
total 60
 4 drwxr-xr-x  2 ec2-user ec2-user  4096 Feb 18 23:45 ./
 4 drwxr-xr-x 17 ec2-user ec2-user  4096 Feb 19 00:08 ../
20 -rwxr-xr-x  1 ec2-user ec2-user 17405 Jan 27 01:07 hadoop*
 8 -rwxr-xr-x  1 ec2-user ec2-user  4356 Jan 27 01:07 hadoop-config.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  3988 Jan 27 01:07 hadoop-daemon.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1227 Jan 27 01:07 hadoop-daemons.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  2710 Jan 27 01:07 rcc*
 4 -rwxr-xr-x  1 ec2-user ec2-user  2043 Jan 27 01:07 slaves.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1159 Jan 27 01:07 start-mapred.sh*
 4 -rwxr-xr-x  1 ec2-user ec2-user  1068 Jan 27 01:07 stop-mapred.sh*

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"You can scratch an itch, but you can't itch a scratch. Furthermore, an itch can
itch but a scratch can't scratch. Finally, a scratch can itch, but an itch can't
scratch. All together this implies: He scratched the itch from the scratch that
itched but would never itch the scratch from the itch that scratched."
                                           --  Keith Wiley
________________________________________________________________________________


Re: webapps/ CLASSPATH err

Posted by Harsh J <ha...@cloudera.com>.
Hi Keith,

The webapps/hdfs bundle is present in the
$HADOOP_PREFIX/share/hadoop/hdfs/ directory of the Hadoop 2.x release
tarball. This should get on the classpath automatically as well.
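
A quick sanity check, assuming the stock 2.x tarball layout (paths relative to the extracted tarball):
$ ls share/hadoop/hdfs/webapps/
$ bin/hadoop classpath | tr ':' '\n' | grep share/hadoop/hdfs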

What "bin/hadoop-daemon.sh" script are you using, the one from the MR1
"aside" tarball or the chief hadoop-2 one?

On my tarball setups, I 'start-dfs.sh' via the regular tarball, and it
works fine.

Another simple check you could do is to try to start with
"$HADOOP_PREFIX/bin/hdfs namenode" to see if it at least starts well
this way and brings up the NN as a foreground process.

On Wed, Feb 20, 2013 at 1:07 AM, Keith Wiley <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0, but using the separate MR1 package (hadoop-2.0.0-mr1-cdh4.1.3), not yarn.  I formatted the namenode ("./bin/hadoop namenode -format") and saw no errors in the shell or in the logs/[namenode].log file (in fact, simply formatting the namenode doesn't even create the log file yet).  I believe that merely formatting the namenode shouldn't leave any persistent java processes running, so I wouldn't expect "ps aux | grep java" to show anything, which of course it doesn't.
>
> I then started the namenode with "./bin/hadoop-daemon.sh start namenode".  This produces the log file and still shows no errors.  The final entry in the log is:
> 2013-02-19 19:15:19,477 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
>
> Curiously, I still don't see any java processes running and netstat doesn't show any obvious 9000 listeners.  I get this:
> $ netstat -a -t --numeric-ports -p
> (Not all processes could be identified, non-owned process info
>  will not be shown, you would have to be root to see it all.)
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
> tcp        0      0 localhost:25                *:*                         LISTEN      -
> tcp        0      0 *:22                        *:*                         LISTEN      -
> tcp        0      0 ip-13-0-177-11:60765        ec2-50-19-38-112.compute:22 ESTABLISHED 23591/ssh
> tcp        0      0 ip-13-0-177-11:22           13.0.177.165:56984          ESTABLISHED -
> tcp        0      0 ip-13-0-177-11:22           13.0.177.165:38081          ESTABLISHED -
> tcp        0      0 *:22                        *:*                         LISTEN      -
>
> Note that ip-13-0-177-11 is the current machine (it is also specified as the master in /etc/hosts and is indicated via localhost in fs.default.name on port 9000 (fs.default.name = "hdfs://localhost:9000")).  So, at this point, I'm beginning to get confused because I don't see a java namenode process and I don't see a port 9000 listener...but still haven't seen any blatant error messages.
>
> Next, I try "hadoop fs -ls /".  I then get the shell error I have been wrestling with recently:
> ls: Call From ip-13-0-177-11/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
> Furthermore, this last step adds the following entry to the namenode log file:
> 2013-02-19 19:15:20,434 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: ReplicationMonitor thread received InterruptedException.
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3025)
>         at java.lang.Thread.run(Thread.java:679)
> 2013-02-19 19:15:20,438 WARN org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
> 2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
> 2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
> 2013-02-19 19:15:20,442 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
> 2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2013-02-19 19:15:20,445 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2013-02-19 19:15:20,445 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
>         at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:560)
>         at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:247)
>         at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:171)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:89)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:87)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-19 19:15:20,447 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-19 19:15:20,474 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
>
> This is particularly confusing because, while the hadoop-2.0.0-mr1-cdh4.1.3/ dir does have a webapps/ dir, there is no "hdfs" file or dir in that webapps/.  It contains job/, static/, and task/.
>
> If I start over from a freshly formatted namenode and take a slightly different approach -- if I try to start the datanode immediately after starting the namenode -- once again it fails, and in a very similar way.  This time the command to start the datanode has two effects: the namenode log still can't find webapps/hdfs, just as shown above, and also, there is now a datanode log file, and it likewise can't find webapps/datanode ("java.io.FileNotFoundException: webapps/datanode not found in CLASSPATH") so I get two very similar errors at once, one on the namenode and one on the datanode.
>
> This webapps/ dir business makes no sense since the files (or directories) the logs claim to be looking for inside webapps/ ("hdfs" and "datanode") don't exist!
>
> Thoughts?
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>
> "It's a fine line between meticulous and obsessive-compulsive and a slippery
> rope between obsessive-compulsive and debilitatingly slow."
>                                            --  Keith Wiley
> ________________________________________________________________________________
>



--
Harsh J
