Posted to common-user@hadoop.apache.org by neo anderson <ja...@yahoo.co.uk> on 2010/02/24 18:16:31 UTC
java.net.SocketException: Network is unreachable
While running the example program ('hadoop jar *example*jar pi 2 2'), I
encounter a 'Network is unreachable' error (in
$HADOOP_HOME/logs/userlogs/.../stderr), as below:
Exception in thread "main" java.io.IOException: Call to /127.0.0.1:<port>
failed on local exception: java.net.SocketException: Network is unreachable
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
    at org.apache.hadoop.ipc.Client.call(Client.java:742)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at org.apache.hadoop.mapred.$Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
    at org.apache.hadoop.mapred.Child.main(Child.java:64)
Caused by: java.net.SocketException: Network is unreachable
    at sun.nio.ch.Net.connect(Native Method)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
    at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:859)
    at org.apache.hadoop.ipc.Client.call(Client.java:719)
    ... 6 more
Initially this looked like a firewall issue, but after disabling iptables the
example program still fails.
Commands used to disable iptables:
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -X
iptables -F
When starting up the Hadoop cluster (start-dfs.sh and start-mapred.sh), the
namenode appears to start correctly, because its log contains:
... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.5:10010
... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.4:10010
... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.3:10010
Also, in the datanode log:
...
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /111.222.333.4:34539, dest: /111.222.333.5:50010, bytes: 4, op: HDFS_WRITE, ...
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /111.222.333.4:51610, dest: /111.222.333.3:50010, bytes: 118, op: HDFS_WRITE, cliID: ...
...
The command 'hadoop fs -ls' lists the data uploaded to HDFS without a
problem, and jps shows the necessary processes are running.
name node:
7710 SecondaryNameNode
7594 NameNode
8038 JobTracker
data nodes:
3181 TaskTracker
3000 DataNode
Environment: Debian squeeze, Hadoop 0.20.1, JDK 1.6.x
I searched online and couldn't find the root cause. What might cause such an
issue, or where could I look for more detailed information?
Thanks for the help.
--
View this message in context: http://old.nabble.com/java.net.SocketException%3A-Network-is-unreachable-tp27714253p27714253.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
Re: java.net.SocketException: Network is unreachable
Posted by neo anderson <ja...@yahoo.co.uk>.
Finally got this problem solved. Edit /etc/sysctl.d/bindv6only.conf and
change net.ipv6.bindv6only=1 to net.ipv6.bindv6only=0; the error goes away.
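The change can be scripted. The sketch below rehearses the edit on a local copy of the file; on a real Debian system you would edit /etc/sysctl.d/bindv6only.conf itself (as root) and then reload it, e.g. with `sysctl -p /etc/sysctl.d/bindv6only.conf` or a reboot.

```shell
# Rehearse the fix on a local copy of the conf file; the real file lives
# under /etc/sysctl.d/ and requires root to modify.
conf=./bindv6only.conf
echo "net.ipv6.bindv6only = 1" > "$conf"   # the Debian squeeze default
# Flip the setting from 1 to 0 in place.
sed -i 's/net\.ipv6\.bindv6only *= *1/net.ipv6.bindv6only = 0/' "$conf"
cat "$conf"   # net.ipv6.bindv6only = 0
```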
neo anderson wrote:
> [snip]
--
View this message in context: http://old.nabble.com/java.net.SocketException%3A-Network-is-unreachable-tp27714253p27714443.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
Re: java.net.SocketException: Network is unreachable
Posted by Alvaro Cabrerizo <to...@gmail.com>.
Hi:
Hope this helps: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560142
Regards.
2010/2/24 neo anderson <ja...@yahoo.co.uk>
> [snip]
Re: java.net.SocketException: Network is unreachable
Posted by Todd Lipcon <to...@cloudera.com>.
Hi Neo,
See this bug:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560044
as well as the discussion here:
http://issues.apache.org/jira/browse/HADOOP-6056
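For reference, the workaround discussed in HADOOP-6056 amounts to forcing the JVM onto the IPv4 stack. A hedged sketch (the conf path is an assumption for a typical Hadoop 0.20 layout):

```shell
# Append to $HADOOP_HOME/conf/hadoop-env.sh (path assumed) so every Hadoop
# daemon JVM prefers IPv4 sockets instead of the dual-stack default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
echo "$HADOOP_OPTS"
```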
Thanks
-Todd
On Wed, Feb 24, 2010 at 9:16 AM, neo anderson
<ja...@yahoo.co.uk> wrote:
> [snip]