Posted to hdfs-user@hadoop.apache.org by Ayon Sinha <ay...@yahoo.com> on 2010/08/18 00:49:56 UTC

HDFS java client connect retry count

Hi,
I have a Java HDFS client which connects to a production cluster and gets data.
In our staging environment it cannot connect to the namenode (which is expected),
and it keeps retrying 45 times. I am looking for a way to set the retry count to a
much lower value.

This is what we see in server logs:

2010-08-17 15:15:06,973 INFO  [Client] Retrying connect to server: 
xxxx.yyyy.zzzz.com/192.168.1.11:9000. Already tried 0 time(s).
2010-08-17 15:15:27,979 INFO  [Client] Retrying connect to 
server: xxxx.yyyy.zzzz.com/192.168.1.11:9000. Already tried 1 time(s).
2010-08-17 15:15:48,984 INFO  [Client] Retrying connect to 
server: xxxx.yyyy.zzzz.com/192.168.1.11:9000. Already tried 2 time(s).
2010-08-17 15:16:09,989 INFO  [Client] Retrying connect to 
server: xxxx.yyyy.zzzz.com/192.168.1.11:9000. Already tried 3 time(s).
..
..
..
2010-08-17 15:16:09,989 INFO  [Client] Retrying connect to 
server: xxxx.yyyy.zzzz.com/192.168.1.11:9000. Already tried 44 time(s).

I have tried setting the client config

job.set("ipc.client.connect.max.retries", "5");

but it doesn't seem to take effect.
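
For context, this is roughly how the client is wired up (a minimal, stripped-down
sketch using a plain Configuration instead of our real JobConf; the namenode address
and the path are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRetryExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder namenode address; matches the host/port in the log lines above.
    conf.set("fs.default.name", "hdfs://xxxx.yyyy.zzzz.com:9000");
    // Ask the IPC layer to give up after 5 connect attempts instead of the default (10).
    conf.set("ipc.client.connect.max.retries", "5");

    FileSystem fs = FileSystem.get(conf);
    // Placeholder path; the real client reads data from several directories.
    System.out.println(fs.exists(new Path("/data/input")));
  }
}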

Where is this count of 45 retries coming from?

Thanks in advance. -Ayon


      

Re: HDFS java client connect retry count

Posted by Ayon Sinha <ay...@yahoo.com>.
Looks like Client.java has the following code:

        } catch (SocketTimeoutException toe) {
          /* The max number of retries is 45,
           * which amounts to 20s*45 = 15 minutes retries.
           */
          handleConnectionFailure(timeoutFailures++, 45, toe);
        } catch (IOException ie) {
          handleConnectionFailure(ioFailures++, maxRetries, ie);
        }
So it doesn't use the maxRetries variable when it gets a SocketTimeoutException; the 45 is hardcoded there.
Our client is 0.18.3 (I know it's ancient), but is this fixed in a later release?
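
In the meantime, since that 45 appears to be hardcoded on the timeout path, one
workaround we are considering is probing the namenode port ourselves with a short
socket timeout before creating the FileSystem, so the staging client can fail fast.
A rough sketch (the host, port and timeout values are just examples for our setup):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class NamenodeProbe {
  /**
   * Try a plain TCP connect to the namenode with a short timeout.
   * Returns false quickly if the namenode is unreachable, so the caller
   * can skip FileSystem.get() and its long retry loop altogether.
   */
  public static boolean namenodeReachable(String host, int port, int timeoutMs) {
    Socket socket = new Socket();
    try {
      socket.connect(new InetSocketAddress(host, port), timeoutMs);
      return true;
    } catch (IOException e) {
      return false;
    } finally {
      try {
        socket.close();
      } catch (IOException ignored) {
        // nothing useful to do here
      }
    }
  }

  public static void main(String[] args) {
    // Example values; our namenode listens on port 9000.
    if (!namenodeReachable("xxxx.yyyy.zzzz.com", 9000, 2000)) {
      System.err.println("Namenode not reachable, skipping HDFS fetch.");
      return;
    }
    // ... proceed with FileSystem.get(conf) as usual ...
  }
}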
-Ayon





