Posted to common-user@hadoop.apache.org by lazikid <ra...@hotmail.com> on 2009/10/18 15:44:24 UTC

Datanode Throwing NoRouteToHostException

I need some help setting up a Hadoop cluster. The datanode on the slave
is not coming up; it throws java.net.NoRouteToHostException: No route to
host. Please see the details below.

I have a CentOS master (host "centos") and a Fedora slave (host "fedora").
Both have Java 6 and Hadoop 0.20.1, and I installed Hadoop under /opt on
both machines.

The machines can ping one another by hostname and by IP, and they can make
password-less SSH connections to one another using both hostname and IP.
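For example, all of the following succeed in both directions:

   ping -c 1 centos              # by hostname (likewise for fedora)
   ping -c 1 192.168.1.125       # by IP
   ssh fedora hostname           # logs in with no password prompt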

This is what I have in the /etc/hosts file for the master:

************************************************
127.0.0.1       localhost.localdomain localhost
192.168.1.125   centos
192.168.1.102   fedora
::1             localhost6.localdomain6 localhost6
************************************************

This is what I have in /etc/hosts for the slave:

************************************************
127.0.0.1	localhost.localdomain localhost
192.168.1.104	ubuntu64
192.168.1.102	fedora
192.168.1.125	centos
::1         localhost localhost.localdomain localhost6
localhost6.localdomain6
************************************************



These are my configuration files:

1. core-site.xml 

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://centos:54310</value>
  <description>Determines the host, port, etc. for the default
filesystem.</description>
</property>
</configuration>

2. mapred-site.xml

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://centos:54311</value>
  <description>....</description>
</property>
</configuration>

3. masters
  
   centos

4. slaves

   fedora


I issued bin/start-dfs.sh from the master (centos) machine, and this is
what I see:

starting namenode, logging to xxxxxxxxxxx
fedora: starting datanode, logging to xxxxxxx
centos: starting secondarynamenode, logging to xxxxxx

When I checked the slave (fedora) logs, this is what I see:

*************************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = fedora/192.168.1.102
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2009-10-17 17:05:43,385 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 0 time(s).
2009-10-17 17:05:44,387 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 1 time(s).
2009-10-17 17:05:45,389 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 2 time(s).
2009-10-17 17:05:46,390 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 3 time(s).
2009-10-17 17:05:47,388 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 4 time(s).
2009-10-17 17:05:48,390 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 5 time(s).
2009-10-17 17:05:49,393 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 6 time(s).
2009-10-17 17:05:50,394 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 7 time(s).
2009-10-17 17:05:51,395 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 8 time(s).
2009-10-17 17:05:52,398 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: centos/192.168.1.125:54310. Already tried 9 time(s).
2009-10-17 17:05:52,404 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call
to centos/192.168.1.125:54310 failed on local exception:
java.net.NoRouteToHostException: No route to host
	at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
	at org.apache.hadoop.ipc.Client.call(Client.java:742)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
	at $Proxy4.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
	at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
	at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.net.NoRouteToHostException: No route to host
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
	at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:859)
	at org.apache.hadoop.ipc.Client.call(Client.java:719)
	... 13 more

2009-10-17 17:05:52,407 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at fedora/192.168.1.102
************************************************************/

*************************************************************************





The log on the master shows:

2009-10-17 17:05:39,352 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = centos/192.168.1.125
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2009-10-17 17:05:39,615 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=54310
2009-10-17 17:05:39,620 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
centos/192.168.1.125:54310
2009-10-17 17:05:39,622 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-10-17 17:05:39,624 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-10-17 17:05:39,779 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=joe,joe
2009-10-17 17:05:39,780 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2009-10-17 17:05:39,781 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2009-10-17 17:05:39,877 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
Initializing FSNamesystemMetrics using context
object:org.apache.hadoop.metrics.spi.NullContext
2009-10-17 17:05:39,883 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
2009-10-17 17:05:39,929 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 1
2009-10-17 17:05:39,933 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
2009-10-17 17:05:39,933 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 97 loaded in 0 seconds.
2009-10-17 17:05:39,934 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /tmp/hadoop-joe/dfs/name/current/edits of size 4 edits # 0
loaded in 0 seconds.
2009-10-17 17:05:39,972 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 97 saved in 0 seconds.
2009-10-17 17:05:40,130 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 469 msecs
2009-10-17 17:05:40,131 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
= 0
2009-10-17 17:05:40,131 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2009-10-17 17:05:40,131 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2009-10-17 17:05:40,132 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of 
over-replicated blocks = 0
2009-10-17 17:05:40,132 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Leaving safe mode after 0 secs.
2009-10-17 17:05:40,132 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2009-10-17 17:05:40,132 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2009-10-17 17:05:40,320 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2009-10-17 17:05:40,518 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50070
2009-10-17 17:05:40,520 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50070
webServer.getConnectors()[0].getLocalPort() returned 50070
2009-10-17 17:05:40,520 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50070
2009-10-17 17:05:40,520 INFO org.mortbay.log: jetty-6.1.14
2009-10-17 17:05:46,612 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50070
2009-10-17 17:05:46,612 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
0.0.0.0:50070
2009-10-17 17:05:46,613 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2009-10-17 17:05:46,615 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 54310: starting
2009-10-17 17:05:46,629 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 54310: starting
2009-10-17 17:05:46,634 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 54310: starting
2009-10-17 17:05:46,636 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 54310: starting
2009-10-17 17:05:46,637 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 54310: starting
2009-10-17 17:05:46,641 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 54310: starting
2009-10-17 17:05:46,642 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 54310: starting
2009-10-17 17:05:46,643 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 54310: starting
2009-10-17 17:05:46,643 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 54310: starting
2009-10-17 17:05:46,900 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 54310: starting
2009-10-17 17:05:46,916 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 54310: starting
2009-10-17 17:11:10,967 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from
192.168.1.125
2009-10-17 17:11:10,968 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
0 Total time for transactions(ms): 0 Number of
transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2009-10-17 17:11:11,518 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from
192.168.1.125
2009-10-17 17:11:11,519 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
0 Total time for transactions(ms): 0 Number of
transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 2


   


Re: Datanode Throwing NoRouteToHostException

Posted by Huy Phan <da...@gmail.com>.
1. Can you run netstat on your namenode machine to make sure that
port 54310 is open?
2. If it is open, check whether iptables is blocking your
connection.
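
For example (a rough sketch; exact flags vary by distro):

   netstat -tlnp | grep 54310   # on the master: is the NameNode listening, and on which address?
   iptables -L -n               # any REJECT/DROP rules covering that port?
   telnet centos 54310          # from the slave: can the port be reached at all?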

Best,
Huy Phan

lazikid wrote:
> I need some help setting up a Hadoop cluster. The datanode on the slave
> is not coming up; it throws java.net.NoRouteToHostException: No route to
> host. Please see the details below.
> [...]


Re: Datanode Throwing NoRouteToHostException

Posted by Last-chance Architect <ar...@galatea.com>.
Do you by chance have any firewall configuration blocking those ports
(though I'd imagine the error would be different)?

Out of curiosity, does the command 'hostname' return the correct name
on both boxes?
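
For example (using the hostnames from your /etc/hosts):

   hostname               # should print "centos" on the master, "fedora" on the slave
   ping -c 1 `hostname`   # should resolve to the LAN IP, not 127.0.0.1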

Lajos


lazikid wrote:
> I need some help setting up a Hadoop cluster. The datanode on the slave
> is not coming up; it throws java.net.NoRouteToHostException: No route to
> host. Please see the details below.
> [...]

-- 
***************************
The 'Last-Chance' Architect
www.galatea.com
(US) +1 303 731 3116
(UK) +44 20 8144 4367
***************************

Re: Datanode Throwing NoRouteToHostException

Posted by lazikid <ra...@hotmail.com>.
Thanks very much for your reply and time. It turned out to be a firewall
issue: after stopping iptables, everything worked fine.
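
(For the record, on CentOS/Fedora of that vintage this was roughly:

   service iptables stop        # stop the firewall now
   chkconfig iptables off       # keep it off across reboots

though opening just the Hadoop ports, e.g. 54310, 54311 and 50070, would be
the safer fix.)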

Thanks again.





Re: Datanode Throwing NoRouteToHostException

Posted by Doss_IPH <do...@intellipowerhive.com>.
First, comment out all the IPs in /etc/hosts except localhost (127.0.0.1),
as I have shown below. Then, in conf/hadoop-site.xml, set the
fs.default.name parameter to hdfs://[Master Node IP]:9000/ and the
mapred.job.tracker parameter to [Job Tracker IP]:9001.
Make sure SSH is enabled and that sshd accepts password-less logins.
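
A minimal sketch of such a conf/hadoop-site.xml, using this thread's master
IP (192.168.1.125) to stand in for the bracketed placeholders; note that
mapred.job.tracker conventionally takes host:port, without a URI scheme:

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.125:9000/</value> <!-- [Master Node IP] -->
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.1.125:9001</value> <!-- [Job Tracker IP]:port -->
</property>
</configuration>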


lazikid wrote:
> 
> I need some help setting up a Hadoop cluster. The datanode on the
> slave is not coming up; it throws java.net.NoRouteToHostException: No route
> to host. Please see the details below.
> [...]
> 
> This is what I have in the /etc/hosts file for the master:
> 
> ************************************************
> 127.0.0.1       localhost.localdomain localhost
> #192.168.1.125   centos
> #192.168.1.102   fedora
> ::1             localhost6.localdomain6 localhost6
> ************************************************
> 
> This is what I have in /etc/hosts for the slave:
> 
> ************************************************
> 127.0.0.1	localhost.localdomain localhost
> #192.168.1.104	ubuntu64
> #192.168.1.102	fedora
> #192.168.1.125	centos
> ::1         localhost localhost.localdomain localhost6
> localhost6.localdomain6
> ************************************************
> 
> [...]
