Posted to common-user@hadoop.apache.org by "Babu, Suresh" <su...@corp.aol.com> on 2008/07/18 11:22:10 UTC
No exception received by application on HDFS restart
Hi,
I have written a client application in Java that writes Apache log
data to an HDFS cluster. When the HDFS cluster is brought down, the client
attempts to reconnect to the cluster. I have a few questions about this
behavior.
1) When the HDFS cluster is brought down, the application does not get
any exception; instead I see IPC reconnect messages logged by the
DFS client. Is there a way to get an exception/notification when the
cluster is down?
2) The DFS client, which is not part of the application code, tries
to connect to the HDFS cluster indefinitely. Is there a way to configure
the number of connection attempts?
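Regarding question 2, one knob that may apply (assuming a Hadoop version whose
hadoop-default.xml includes it) is the IPC client's retry count, which could be
lowered by overriding it in hadoop-site.xml:

```xml
<property>
  <!-- Assumed property name from hadoop-default.xml: caps how many times
       the IPC client retries a failed connection before giving up. -->
  <name>ipc.client.connect.max.retries</name>
  <value>3</value>
</property>
```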
Is there an example application that writes data to an HDFS cluster and is
fault-tolerant, i.e. handles an HDFS cluster shutdown/restart?
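Until such an example exists, one approach I am considering is to bound the
retries at the application level: wrap each write in a loop that retries a
fixed number of times and then rethrows, so the application finally sees the
failure instead of the client reconnecting forever. A minimal, self-contained
sketch (the class name, retry count, and simulated flaky write are all
hypothetical, not Hadoop API):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of an application-level retry wrapper: retries a task a bounded
// number of times on IOException, then rethrows the last failure so the
// caller is notified instead of blocking on endless reconnect attempts.
public class BoundedRetry {
    public static <T> T call(Callable<T> task, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (IOException e) {   // treat as a transient connection failure
                last = e;
                System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(backoffMillis);
            }
        }
        throw last;  // give up: the application finally sees the exception
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky HDFS write: fails twice, then succeeds.
        int[] calls = {0};
        String result = call(() -> {
            if (++calls[0] < 3) throw new IOException("cluster unreachable");
            return "written";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: written after 3 attempts
    }
}
```

In the real application the Callable body would be the actual HDFS write; if
the cluster stays down past maxAttempts, the wrapper rethrows and the caller
can fail over or buffer the log data locally.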
Thanks
Suresh