Posted to hdfs-user@hadoop.apache.org by 帝 <un...@foxmail.com> on 2012/04/25 10:04:58 UTC

When I connect to Hadoop it always says 'retrying connect'; is something wrong with my configuration?

I am running a Hadoop cluster of about 40 machines, and I have run into a problem with HDFS.

When I try to access HDFS, say with the 'hadoop fs -ls /' command, the client sometimes has to retry the connection first:

12/04/25 11:22:01 INFO ipc.Client: Retrying connect to server: master/10.10.10.51:8020. Already tried 0 time(s).

After retrying it connects and returns the result, but the retries add latency, and I am looking for a way to eliminate them.

Is there something wrong with my Hadoop configuration, or with the network?
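For reference, the retry loop in that log line comes from the client-side ipc.Client, whose retry count can be tuned in core-site.xml. A minimal sketch follows; the property name and default are recalled from core-default.xml and should be verified against the core-default.xml shipped with your release:

```xml
<!-- Sketch only: client-side IPC retry knob in core-site.xml.
     Lowering it makes a dead NameNode fail faster; it does not
     fix whatever is causing the retries in the first place. -->
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>10</value> <!-- believed default: retries before the client gives up -->
</property>
```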

Re: When I connect to Hadoop it always says 'retrying connect'; is something wrong with my configuration?

Posted by Ravi Prakash <ra...@gmail.com>.
Or maybe it's a flaky network connection? Perhaps you can run a ping and
check whether the network link is reliable.

The only daemon that needs to be up for this command is the NameNode, and unless
you are taking it down and bringing it back up often (please don't), you should
not see that message.
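Both checks can be scripted. A rough sketch; the host and port are assumptions read off the log line ('master/10.10.10.51:8020'), and the port probe uses bash's /dev/tcp redirection:

```shell
#!/bin/sh
# Diagnostic sketch only -- adjust host/port to your cluster.
NAMENODE_HOST=master    # assumed from 'master/10.10.10.51:8020' in the log
NAMENODE_PORT=8020

check_port() {
  # Prints "open" if a TCP connection to host $1 port $2 succeeds, else "closed".
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Is the NameNode RPC port reachable at all?
check_port "$NAMENODE_HOST" "$NAMENODE_PORT"

# Is the link flaky? Send a few pings and look at the packet-loss summary.
ping -c 5 "$NAMENODE_HOST" 2>/dev/null | tail -2
```

If the port probe reports "closed" while the NameNode process is running, suspect a firewall or a bind-address problem rather than the daemon itself.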


RE: When I connect to Hadoop it always says 'retrying connect'; is something wrong with my configuration?

Posted by Lukáš Kryške <lu...@hotmail.cz>.
I get this message when I try to process some data in HDFS but the necessary daemons were not started before my HDFS request (I am using Hadoop v0.20.2, so I start them with the start-all.sh script). You write that it works after some time, so I would guess the problem is in the communication between the HDFS daemons in your cluster.

Was HDFS formatted on all machines, or have you added some nodes to an already-formatted cluster?

_________________
Regards,
Lukas
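One quick way to test the missing-daemons theory is to check which JVMs are actually up before issuing the HDFS request. A sketch, assuming a tarball install; HADOOP_HOME and the 0.20.2 start-all.sh path are assumptions to adjust:

```shell
#!/bin/sh
# Sketch only -- paths assume a Hadoop 0.20.2 tarball install.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}

daemon_running() {
  # True if a JVM named $1 (e.g. NameNode) appears in jps output.
  jps 2>/dev/null | grep -q " $1$"
}

if daemon_running NameNode; then
  echo "NameNode is up"
else
  echo "NameNode is down"
  # 0.20.2 ships a combined start-all.sh; later releases split it into
  # start-dfs.sh and start-mapred.sh.
  if [ -x "$HADOOP_HOME/bin/start-all.sh" ]; then
    "$HADOOP_HOME/bin/start-all.sh"
  else
    echo "start-all.sh not found under $HADOOP_HOME"
  fi
fi
```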

