Posted to common-user@hadoop.apache.org by asif md <as...@gmail.com> on 2009/06/06 00:24:00 UTC

No Route To Host at Slave

Hello all,

I'm struggling to fix the 'No route to host' problem on my only slave node. The
datanode log is as follows:

2009-06-05 15:12:41,076 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ********
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.3
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250;
compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
************************************************************/
2009-06-05 15:12:42,346 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 0 time(s).
2009-06-05 15:12:43,349 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ******** Already tried 1 time(s).
2009-06-05 15:12:44,352 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 2 time(s).
2009-06-05 15:12:45,355 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ******** Already tried 3 time(s).
2009-06-05 15:12:46,358 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 4 time(s).
2009-06-05 15:12:47,374 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ******** Already tried 5 time(s).
2009-06-05 15:12:48,377 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ******** Already tried 6 time(s).
2009-06-05 15:12:49,380 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 7 time(s).
2009-06-05 15:12:50,383 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 8 time(s).
2009-06-05 15:12:51,386 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: ********. Already tried 9 time(s).
2009-06-05 15:12:51,389 ERROR org.apache.hadoop.dfs.DataNode:
java.io.IOException: Call to ******** failed on local exception:
java.net.NoRouteToHostException: No route to host
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:751)
    at org.apache.hadoop.ipc.Client.call(Client.java:719)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
    at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:335)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:372)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:309)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:286)
    at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:277)
    at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
    at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3071)
    at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:3026)
    at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:3034)
    at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3156)
Caused by: java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
    at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:301)
    at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:178)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:820)
    at org.apache.hadoop.ipc.Client.call(Client.java:705)
    ... 13 more

2009-06-05 15:12:51,390 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at *******************************

Has anyone fixed this problem before? I've found similar problems on this list,
but the solutions aren't working for me.

Can anyone suggest a solution?

Thanks

Asif.

Re: No Route To Host at Slave

Posted by asif md <as...@gmail.com>.
When I run nmap against port 54310 from the slave, it gives:


[ ~]$ nmap -PN -p54310 master

Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2009-06-05 19:14 PDT
Interesting ports on master (***.**.**.***):
PORT      STATE    SERVICE
54310/tcp filtered unknown

Nmap finished: 1 IP address (1 host up) scanned in 0.094 seconds
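For what it's worth, "filtered" is the telling state here: "closed" would mean the master answered with a reset, while "filtered" means the probes got no answer at all, which is the classic signature of a firewall dropping packets. The same three-way distinction can be checked with plain sockets; this is just a sketch, and the `probe` helper is mine, not part of Hadoop or nmap (the `master`/54310 values are the ones from this thread):

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    """Classify a TCP port roughly the way nmap does: open, closed, or filtered."""
    try:
        # A completed handshake means something is listening (nmap: open).
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        # An RST came back: the host is reachable but nothing listens (closed).
        return "closed"
    except socket.timeout:
        # No reply at all: packets are likely being dropped (filtered).
        return "filtered"
    except OSError as e:
        # EHOSTUNREACH is what the JVM surfaces as java.net.NoRouteToHostException;
        # a firewall answering with ICMP host-unreachable is a common cause.
        if e.errno == errno.EHOSTUNREACH:
            return "filtered"
        raise

# e.g. probe("master", 54310) -- on the poster's slave this should come back
# "filtered", matching the nmap output above.
```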


On Fri, Jun 5, 2009 at 7:13 PM, asif md <as...@gmail.com> wrote:

> [quoted message trimmed]

Re: No Route To Host at Slave

Posted by asif md <as...@gmail.com>.
I've found the problem but am clueless about how to fix it.

When I did the following on the master after running
"$HADOOP_HOME/bin/start-dfs.sh":

[ ~]$ nmap -PN -p54310 localhost

Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2009-06-05 17:04 PDT
Interesting ports on localhost (127.0.0.1):
PORT      STATE  SERVICE
*54310/tcp closed unknown*

Nmap finished: 1 IP address (1 host up) scanned in 5.605 seconds


It shows that port 54310 (my namenode port) is closed.

*********************
hadoop-site.xml
*********************

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/utdhadoop1/Hadoop/hadoop-0.18.3/hadoop-datastore/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>
**************************

Also, running netstat on the master gives the following output:

[ ~]$ sudo netstat -a -t -n -p | grep java | grep LISTEN*
tcp        0      0 :::50020                    :::*                        LISTEN      3683/java
tcp        0      0 :::38148                    :::*                        LISTEN      3576/java
tcp        0      0 ::ffff:198.55.35.229:54310  :::*                        LISTEN      3576/java
tcp        0      0 :::50090                    :::*                        LISTEN      3846/java
tcp        0      0 :::49134                    :::*                        LISTEN      3683/java
tcp        0      0 :::50070                    :::*                        LISTEN      3576/java
tcp        0      0 :::50010                    :::*                        LISTEN      3683/java
tcp        0      0 :::50075                    :::*                        LISTEN      3683/java
tcp        0      0 :::51837                    :::*                        LISTEN      3846/java
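Note that the netstat output shows the namenode bound to the specific address ::ffff:198.55.35.229:54310 rather than the wildcard :::, which is exactly why the nmap of localhost reported the port closed even though the daemon is up: 127.0.0.1 is a different address. The effect can be reproduced with nothing but sockets (a sketch; on Linux, all of 127/8 is routed to loopback, so 127.0.0.2 stands in for 198.55.35.229):

```python
import socket

# Bind a listener to one specific address only, the way the poster's namenode
# is bound to 198.55.35.229 instead of the wildcard.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.2", 0))          # stand-in for the master's external IP
srv.listen(1)
port = srv.getsockname()[1]

def state(host):
    """Report whether host:port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return "open"
    except ConnectionRefusedError:
        return "closed"

print(state("127.0.0.2"))  # open   -- probing the address the socket is bound to
print(state("127.0.0.1"))  # closed -- same machine, different address
```

The "filtered" result seen from the slave is a separate symptom: those packets are being dropped in transit, which between two hosts on one network usually points to a host firewall (e.g. iptables) on the master blocking port 54310.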

Does anyone have any suggestions on how to fix this?

Thanks and Regards

Asif.

On Fri, Jun 5, 2009 at 5:24 PM, asif md <as...@gmail.com> wrote:

> [quoted message trimmed]