Posted to common-user@hadoop.apache.org by Saurabh Jain <Sa...@symantec.com> on 2013/04/08 16:29:52 UTC

Problem accessing HDFS from a remote machine

Hi All,

I have set up a single-node cluster (release hadoop-1.0.4). Following is the configuration used -

core-site.xml :-

<property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:54310</value>
</property>

masters:-
localhost

slaves:-
localhost

I am able to successfully format the Namenode and perform file system operations by running the CLIs on the Namenode.

But I am receiving the following error when I try to access HDFS from a remote machine -

$ bin/hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.

13/04/08 07:13:56 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 0 time(s).
13/04/08 07:13:57 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 1 time(s).
13/04/08 07:13:58 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 2 time(s).
13/04/08 07:13:59 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 3 time(s).
13/04/08 07:14:00 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 4 time(s).
13/04/08 07:14:01 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 5 time(s).
13/04/08 07:14:02 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 6 time(s).
13/04/08 07:14:03 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 7 time(s).
13/04/08 07:14:04 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 8 time(s).
13/04/08 07:14:05 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10.209.10.206:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to 10.209.10.206/10.209.10.206:54310 failed on connection exception: java.net.ConnectException: Connection refused

Here 10.209.10.206 is the IP of the server hosting the Namenode, and it is also the configured value for "fs.default.name" in the core-site.xml file on the remote machine.

Executing 'bin/hadoop fs -fs hdfs://10.209.10.206:54310 -ls /' also results in the same output.

Also, I am writing a C application using libhdfs to communicate with HDFS. How do we provide credentials while connecting to HDFS?
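
On the credentials part: with Hadoop 1.x "simple" authentication (the default, no Kerberos) there is no password at all; libhdfs only lets the client choose the user name at connect time, via hdfsConnectAsUser() from hdfs.h. A minimal sketch, assuming the Hadoop 1.x form of the call (host, port, user) and a placeholder user name "hdfsuser" that is not from this thread; at runtime the CLASSPATH must contain the Hadoop jars:

#include <stdio.h>
#include "hdfs.h"   /* libhdfs header shipped with Hadoop */

int main(void)
{
    /* With simple authentication the user name is the only credential
       sent. Signature assumed from Hadoop 1.x hdfs.h: (host, port, user);
       some older releases also took a group list. */
    hdfsFS fs = hdfsConnectAsUser("10.209.10.206", 54310, "hdfsuser");
    if (fs == NULL) {
        fprintf(stderr, "failed to connect to HDFS\n");
        return 1;
    }

    /* ... hdfsOpenFile()/hdfsRead()/hdfsWrite() as needed ... */

    hdfsDisconnect(fs);
    return 0;
}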

Thanks
Saurabh



RE: Problem accessing HDFS from a remote machine

Posted by Saurabh Jain <Sa...@symantec.com>.
Thanks for all the help.

Changing the fs.default.name value from localhost to the IP in all the conf files and making a configuration change in /etc/conf did the job.
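
For reference, the working core-site.xml on both the Namenode and the remote client would presumably have been (assuming the Namenode host 10.209.10.206 from this thread):

<property>
     <name>fs.default.name</name>
     <value>hdfs://10.209.10.206:54310</value>
</property>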

Thanks
Saurabh

Re: Problem accessing HDFS from a remote machine

Posted by Rishi Yadav <ri...@infoobjects.com>.
Have you checked the firewall on the namenode?

If you are running Ubuntu and the namenode port is 8020, the command is
-> ufw allow 8020
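
Since the Namenode in this thread is configured on port 54310, the corresponding command here would presumably be
-> ufw allow 54310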

Thanks and Regards,

Rishi Yadav

InfoObjects Inc || http://www.infoobjects.com (Big Data Solutions)

Re: Problem accessing HDFS from a remote machine

Posted by Azuryy Yu <az...@gmail.com>.
Can you run the "jps" command on your localhost to see if there is a NameNode process running?
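
For reference, on a healthy Hadoop 1.x single-node setup the jps listing should include a NameNode entry, e.g. (PIDs here are illustrative):

$ jps
12305 NameNode
12460 DataNode
12621 SecondaryNameNode
12705 JobTracker
12860 TaskTracker
12990 Jps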


Re: Problem accessing HDFS from a remote machine

Posted by Bjorn Jonsson <bj...@gmail.com>.
Yes, the namenode port is not open for your cluster. I had this problem too.
First, log into your namenode and run netstat -nap to see what ports are
listening. You can run service --status-all to see if the namenode service is
running. Basically, you need Hadoop to bind to the correct IP (an external
one, or at least one reachable from your remote machine), so listening on
127.0.0.1, localhost, or an IP on a private network will not be sufficient.
Check your /etc/hosts file and /etc/hadoop/conf/*-site.xml files to configure
the correct IP/ports.
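
Concretely, the checks above might look like this on the Namenode, assuming port 54310 as configured earlier in this thread:

$ netstat -nap | grep 54310     # which address is the Namenode bound to?
$ service --status-all          # is the namenode service running?

If the first command shows the port bound to 127.0.0.1 instead of 0.0.0.0 or 10.209.10.206, remote clients will see exactly the "Connection refused" error above.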

I'm no expert, so my understanding might be limited/wrong...but I hope this
helps :)

Best,
B


On Mon, Apr 8, 2013 at 7:29 AM, Saurabh Jain <Sa...@symantec.com> wrote:

> Hi All,
>
> I have set up a single-node cluster (release hadoop-1.0.4). Following is
> the configuration used -
>
> core-site.xml :-
>
> <property>
>      <name>fs.default.name</name>
>      <value>hdfs://localhost:54310</value>
> </property>
>
> masters:-
> localhost
>
> slaves:-
> localhost
>
> I am able to successfully format the Namenode and perform file system
> operations by running the CLIs on the Namenode.
>
> But I am receiving the following error when I try to access HDFS from a
> remote machine -
>
> $ bin/hadoop fs -ls /
> Warning: $HADOOP_HOME is deprecated.
>
> 13/04/08 07:13:56 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 0 time(s).
> 13/04/08 07:13:57 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 1 time(s).
> 13/04/08 07:13:58 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 2 time(s).
> 13/04/08 07:13:59 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 3 time(s).
> 13/04/08 07:14:00 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 4 time(s).
> 13/04/08 07:14:01 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 5 time(s).
> 13/04/08 07:14:02 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 6 time(s).
> 13/04/08 07:14:03 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 7 time(s).
> 13/04/08 07:14:04 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 8 time(s).
> 13/04/08 07:14:05 INFO ipc.Client: Retrying connect to server:
> 10.209.10.206/10.209.10.206:54310. Already tried 9 time(s).
> Bad connection to FS. command aborted. exception: Call to
> 10.209.10.206/10.209.10.206:54310 failed on connection exception:
> java.net.ConnectException: Connection refused
>
> Where 10.209.10.206 is the IP of the server hosting the Namenode; it is
> also the configured value for "fs.default.name" in the core-site.xml file
> on the remote machine.
>
> Executing 'bin/hadoop fs -fs hdfs://10.209.10.206:54310 -ls /' also
> results in the same output.
>
> Also, I am writing a C application using libhdfs to communicate with HDFS.
> How do we provide credentials while connecting to HDFS?
>
> Thanks
> Saurabh
>
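
On the libhdfs credentials question: with Hadoop 1.x's default "simple"
authentication there is no password exchange; the client just asserts a user
name, which libhdfs exposes through hdfsConnectAsUser. A minimal sketch
(assuming the three-argument hdfsConnectAsUser declared in 1.x's hdfs.h;
"hadoopuser" is a placeholder name and error handling is abbreviated):

#include <stdio.h>
#include "hdfs.h"   /* libhdfs header shipped with hadoop-1.0.4 */

int main(void)
{
    /* Simple auth: the user name is asserted as-is, no password is sent.
     * "hadoopuser" is a placeholder; host/port match the setup above. */
    hdfsFS fs = hdfsConnectAsUser("10.209.10.206", 54310, "hadoopuser");
    if (fs == NULL) {
        fprintf(stderr, "could not connect to the namenode\n");
        return 1;
    }

    /* Equivalent of 'hadoop fs -ls /' */
    int n = 0;
    hdfsFileInfo *entries = hdfsListDirectory(fs, "/", &n);
    if (entries != NULL) {
        for (int i = 0; i < n; i++)
            printf("%s\n", entries[i].mName);
        hdfsFreeFileInfo(entries, n);
    }

    hdfsDisconnect(fs);
    return 0;
}

On a Kerberos-secured cluster the asserted name alone would not be accepted;
the process would instead be expected to pick up credentials from a ticket
cache obtained with kinit before connecting.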
