Posted to user@nutch.apache.org by bhavin pandya <bv...@gmail.com> on 2008/10/07 08:51:49 UTC
DataNode - IOException: Call failed on local exception
Hi,
I am trying to configure Nutch on a single server, so I am running all
modules on localhost. This is the tutorial I followed:
http://wiki.apache.org/nutch/NutchHadoopTutorial
I am using nutch-1.0-dev and JDK 1.5.
When I run start-all.sh, I find the following exception in the datanode
log file. The secondary namenode log shows the same exception:
2008-10-07 03:27:36,627 ERROR dfs.DataNode - java.io.IOException: Call
failed on local exception
at org.apache.hadoop.ipc.Client.call(Client.java:718)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:306)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:343)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:288)
at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:244)
at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:789)
at org.apache.hadoop.ipc.Client.call(Client.java:704)
... 12 more
It seems the datanode is not able to start.
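The "Connection refused" at the bottom of the trace suggests that nothing is listening on the NameNode port when the datanode tries to connect. A quick way to double-check is to probe the port directly (a sketch using bash's /dev/tcp; port 9000 comes from fs.default.name in my hadoop-site.xml below, so adjust it if yours differs):

```shell
# Probe the NameNode RPC port. "refused" here matches the datanode's
# ConnectException: no process is accepting connections on that port.
if (exec 3<>/dev/tcp/localhost/9000) 2>/dev/null; then
  echo "port 9000 open"
else
  echo "port 9000 refused"
fi
```

If this still reports "refused" after start-all.sh, the namenode log would be the next place to look for a startup failure.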
From the jobtracker log (this seems to be working fine):
2008-10-07 03:27:29,547 INFO util.Container - Started
org.mortbay.jetty.servlet.WebApplicationHandler@18352d8
2008-10-07 03:27:29,589 INFO util.Container - Started
WebApplicationContext[/,/]
2008-10-07 03:27:29,589 INFO util.Container - Started HttpContext[/logs,/logs]
2008-10-07 03:27:29,589 INFO util.Container - Started
HttpContext[/static,/static]
2008-10-07 03:27:29,590 INFO http.SocketListener - Started
SocketListener on 0.0.0.0:50030
2008-10-07 03:27:29,591 INFO util.Container - Started
org.mortbay.jetty.Server@f42ad0
From the tasktracker log (this seems to be working fine):
2008-10-07 03:27:30,747 INFO util.Container - Started
org.mortbay.jetty.servlet.WebApplicationHandler@1d36dfe
2008-10-07 03:27:30,781 INFO util.Container - Started
WebApplicationContext[/,/]
2008-10-07 03:27:30,782 INFO util.Container - Started HttpContext[/logs,/logs]
2008-10-07 03:27:30,782 INFO util.Container - Started
HttpContext[/static,/static]
2008-10-07 03:27:30,785 INFO http.SocketListener - Started
SocketListener on 0.0.0.0:50060
2008-10-07 03:27:30,785 INFO util.Container - Started
org.mortbay.jetty.Server@b8deef
Here is the content of my hadoop-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
    <description>
      The name of the default file system. Either the literal string
      "local" or a host:port for NDFS.
    </description>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001/</value>
    <description>
      The host and port that the MapReduce job tracker runs at. If
      "local", then jobs are run in-process as a single map and
      reduce task.
    </description>
  </property>

  <property>
    <name>mapred.map.tasks</name>
    <value>1</value>
    <description>
      define mapred.map tasks to be number of slave hosts
    </description>
  </property>

  <property>
    <name>mapred.reduce.tasks</name>
    <value>1</value>
    <description>
      define mapred.reduce tasks to be number of slave hosts
    </description>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/nutch/filesystem/name</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/nutch/filesystem/data</value>
  </property>

  <property>
    <name>mapred.system.dir</name>
    <value>/nutch/filesystem/mapreduce/system</value>
  </property>

  <property>
    <name>mapred.local.dir</name>
    <value>/nutch/filesystem/mapreduce/local</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.logging.level</name>
    <value>all</value>
  </property>

</configuration>
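One thing I am not sure about: the stock hadoop-default.xml of this era gives mapred.job.tracker as a bare host:port rather than an hdfs:// URL, so the value above may be in the wrong form. For comparison, the host:port style would look like this (a sketch only, keeping my port 9001):

```xml
<!-- Comparison only: mapred.job.tracker in bare host:port form, as
     shown in hadoop-default.xml. I am unsure whether the hdfs:// URL
     form I used above is accepted. -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```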
Here is the content of my /etc/hosts file:
127.0.0.1 localhost.localdomain localhost nutch-master
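To rule out a name-resolution problem, it may also be worth checking what these hostnames actually resolve to (a sketch; both commands should print 127.0.0.1 given the hosts line above):

```shell
# Print the first address each Hadoop-relevant hostname resolves to.
# An empty line means the name does not resolve at all.
getent hosts localhost | awk '{print $1}' | head -n1
getent hosts nutch-master | awk '{print $1}' | head -n1
```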
Any pointers would be really helpful.
Thanks.
Bhavin