Posted to common-user@hadoop.apache.org by Marc Sturlese <ma...@gmail.com> on 2011/07/08 19:49:30 UTC
check namenode, jobtracker, datanodes and tasktracker status
Hey there,
I've written some scripts to check DFS disk space, the number of datanodes,
the number of tasktrackers, heap in use...
I'm on Hadoop 0.20.2, and to do that I use the DFSClient and JobClient
APIs.
I do things like:
JobClient jc = new JobClient(socketJT, conf);
ClusterStatus clusterStatus = jc.getClusterStatus(true);
clusterStatus.getTaskTrackers();
...
jc.close();
DFSClient client = new DFSClient(socketNN, conf);
DatanodeInfo[] dni = client.datanodeReport(DatanodeReportType.ALL);
...
client.close();
FileSystem fs = FileSystem.get(new URI("hdfs://" + host + "/"), conf);
fs.getStatus().getCapacity();
...
fs.close();
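For reference, here is a self-contained sketch of this kind of check (not from
the original script), assuming Hadoop 0.20.2, where DatanodeReportType still
lives in FSConstants. The class name, hostnames, and ports are illustrative
placeholders, and the getMapTasks()/getUsed() calls are just additional
ClusterStatus/FsStatus accessors shown for completeness:

import java.net.InetSocketAddress;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.FSConstants.DatanodeReportType;
import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Placeholder addresses; substitute the real JobTracker and NameNode.
        InetSocketAddress socketJT = new InetSocketAddress("jobtracker.example.com", 8021);
        InetSocketAddress socketNN = new InetSocketAddress("namenode.example.com", 8020);
        String host = "namenode.example.com";

        // TaskTracker count and currently running map tasks.
        JobClient jc = new JobClient(socketJT, conf);
        ClusterStatus clusterStatus = jc.getClusterStatus(true);
        System.out.println("tasktrackers: " + clusterStatus.getTaskTrackers());
        System.out.println("running maps: " + clusterStatus.getMapTasks());
        jc.close();

        // Datanode report covering live and dead nodes.
        DFSClient client = new DFSClient(socketNN, conf);
        DatanodeInfo[] dni = client.datanodeReport(DatanodeReportType.ALL);
        System.out.println("datanodes:    " + dni.length);
        client.close();

        // Overall DFS capacity and usage.
        FileSystem fs = FileSystem.get(new URI("hdfs://" + host + "/"), conf);
        FsStatus status = fs.getStatus();
        System.out.println("capacity:     " + status.getCapacity());
        System.out.println("used:         " + status.getUsed());
        fs.close();
    }
}

Each client is opened, queried, and closed in turn, so the script holds at
most one connection to the NameNode or JobTracker at any time.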
It's working well, but I'm worried it could be harmful for the cluster to
run the script continuously (it consumes resources). Is it alright, for
example, to run it every 10 or 15 minutes? If not, what is a good practice
for monitoring the cluster?
Thanks in advance.
--
Re: check namenode, jobtracker, datanodes and tasktracker status
Posted by Bharath Mundlapudi <bh...@yahoo.com>.
It shouldn't be a problem, but making sure you disconnect this monitoring client's connection may be helpful at peak loads.
-Bharath
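To illustrate that advice (a sketch, not from the thread, reusing the imports
and 0.20.2 API from the example above, plus java.io.IOException): wrapping
each client in try/finally guarantees the monitoring client disconnects even
if a check throws.

static DatanodeInfo[] checkDatanodes(InetSocketAddress socketNN, Configuration conf)
        throws IOException {
    DFSClient client = new DFSClient(socketNN, conf);
    try {
        // Run the check while the connection is open...
        return client.datanodeReport(DatanodeReportType.ALL);
    } finally {
        // ...and always close it, so no connection lingers on the NameNode at peak load.
        client.close();
    }
}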