Posted to common-user@hadoop.apache.org by sr...@epfl.ch on 2008/11/04 18:11:38 UTC
Problem while starting Hadoop
Hi,
I am trying to use Hadoop 0.18.1. After I start Hadoop, I can see
the NameNode running on the master, but the DataNode on the client
machine is unable to connect to the NameNode. I use two machines, with
hostnames lca2-s3-pc01 and lca2-s3-pc04 respectively. The client log
file shows the following messages:
2008-11-04 17:19:25,253 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = lca2-s3-pc04/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.18.1
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 694836; compiled by 'hadoopqa' on Fri Sep 12 23:29:35 UTC 2008
************************************************************/
2008-11-04 17:19:26,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 0 time(s).
2008-11-04 17:19:27,468 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 1 time(s).
2008-11-04 17:19:28,472 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 2 time(s).
2008-11-04 17:19:29,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 3 time(s).
2008-11-04 17:19:30,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 4 time(s).
2008-11-04 17:19:31,483 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 5 time(s).
2008-11-04 17:19:32,487 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 6 time(s).
2008-11-04 17:19:33,491 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 7 time(s).
2008-11-04 17:19:34,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 8 time(s).
2008-11-04 17:19:35,499 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: lca2-s3-pc01/128.178.156.221:9000. Already tried 9 time(s).
2008-11-04 17:19:35,502 ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: Call failed on local exception
        at org.apache.hadoop.ipc.Client.call(Client.java:718)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:306)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:343)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:288)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:244)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:789)
        at org.apache.hadoop.ipc.Client.call(Client.java:704)
        ... 12 more
2008-11-04 17:19:35,502 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at lca2-s3-pc04/127.0.1.1
************************************************************/
Here is the hadoop-site.xml configuration that I use on both the
master and the client:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/okkam/datastore/hadoop</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://lca2-s3-pc01:9000</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
Could you please tell me what mistake I am making?
Thanks a lot in advance,
Srikanth.
Re: Problem while starting Hadoop
Posted by Steve Loughran <st...@apache.org>.
srikanth.bondalapati@epfl.ch wrote:
> Hi Alex,
>
> ping works on both machines, and in fact I can ssh into both
> of them. I stopped the service and reformatted the namenode, but the
> problem persists. I use the same hadoop-site.xml configuration file
> on both machines; its content is as follows:
>
>> <configuration>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/okkam/datastore/hadoop</value>
>> </property>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://lca2-s3-pc01:9000</value>
Try to telnet to that port. If you can connect on that machine via a
'telnet localhost 9000' but not remotely, you have a firewall in the
way.
Steve Loughran http://www.1060.org/blogxter/publish/5
Author: Ant in Action http://antbook.org/
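[Steve's telnet check can also be done programmatically. A minimal
sketch, using the hostname and port from this thread (adjust for your
own cluster): if the connection succeeds on the NameNode host itself
but fails from the DataNode host, a firewall or a loopback-only bind
is the likely cause.]

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this on the NameNode machine first, then from the DataNode machine:
print(can_connect("lca2-s3-pc01", 9000))
```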
Re: Problem while starting Hadoop
Posted by sr...@epfl.ch.
Hi Alex,
ping works on both machines, and in fact I can ssh into both
of them. I stopped the service and reformatted the namenode, but the
problem persists. I use the same hadoop-site.xml configuration file
on both machines; its content is as follows:
> <configuration>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/okkam/datastore/hadoop</value>
> </property>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://lca2-s3-pc01:9000</value>
> </property>
>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
> </configuration>
Thanks,
Srikanth.
--------------------------------------------------------
Does 'ping lca2-s3-pc01' resolve from lca2-s3-pc04 and vice versa? Are your
'slaves' and 'master' configuration files configured correctly?
You can also try stopping everything, deleting all of your Hadoop data on
each machine (by default in /tmp), reformatting the namenode, and starting
everything again.
Alex
Quoting "srikanth.bondalapati@epfl.ch" <sr...@epfl.ch>:
> [full original message snipped; it appears at the top of the thread]
Re: Problem while starting Hadoop
Posted by Jason Venner <ja...@attributor.com>.
Is it possible there is a firewall blocking port 9000 on one or more of
the machines? We had that happen to us with some machines that were
kickstarted by our IT; the firewall was configured to allow only ssh.
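[A quick way to tell a firewall from a missing listener, sketched in
Python with the host and port from this thread (an illustration, not
Hadoop tooling): "connection refused" usually means the host answered
but nothing is listening on that port (or the firewall sends REJECT),
while a timeout usually means a firewall is silently dropping packets.]

```python
import socket

def probe(host, port, timeout=3.0):
    """Classify why a TCP connection to host:port fails (best effort)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused: host reachable but nothing listening (or firewall REJECT)"
    except socket.timeout:
        return "timeout: packets likely dropped by a firewall"
    except OSError as exc:
        return "error: %s" % exc

print(probe("lca2-s3-pc01", 9000))
```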
srikanth.bondalapati@epfl.ch wrote:
> [full original message snipped; it appears at the top of the thread]
Re: Problem while starting Hadoop
Posted by Alex Loddengaard <al...@cloudera.com>.
Does 'ping lca2-s3-pc01' resolve from lca2-s3-pc04 and vice versa? Are your
'slaves' and 'master' configuration files configured correctly?
You can also try stopping everything, deleting all of your Hadoop data on
each machine (by default in /tmp), reformatting the namenode, and starting
everything again.
Alex
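[Alex's ping question is really about name resolution. One thing worth
checking alongside it, since the startup banner shows the DataNode
identifying itself as lca2-s3-pc04/127.0.1.1: on Debian/Ubuntu,
/etc/hosts often maps the machine's own hostname to 127.0.1.1, which
can make a daemon advertise a loopback address instead of its LAN
address. A small sketch, using the hostnames from this thread, to see
what each name resolves to:]

```python
import socket

# Each hostname should resolve to a real LAN address (the NameNode here
# is 128.178.156.221); a 127.x.x.x result for a remote host, or for the
# local hostname, points at an /etc/hosts entry that needs fixing.
for host in ("lca2-s3-pc01", "lca2-s3-pc04", socket.gethostname()):
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        addr = "does not resolve"
    print(host, "->", addr)
```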
On Tue, Nov 4, 2008 at 11:11 AM, <sr...@epfl.ch> wrote:
> [full original message snipped; it appears at the top of the thread]