Posted to mapreduce-user@hadoop.apache.org by cho ju il <tj...@kgrid.co.kr> on 2014/10/17 09:22:18 UTC

How do I find volume failures using Java code?

Hadoop 2.4.1
2 namenodes (HA), 3 datanodes.
I want to find failed volumes, but getVolumeFailures() always returns zero.
How do I find volume failures using Java code?
 
		Configuration conf = getConf(configPath);
		FileSystem fs = null;
		try {
			fs = FileSystem.get(conf);
			if (!(fs instanceof DistributedFileSystem)) {
				System.err.println("FileSystem is " + fs.getUri());
				return;
			}

			DistributedFileSystem dfs = (DistributedFileSystem) fs;
			DatanodeInfo[] nodes = dfs.getDataNodeStats(DatanodeReportType.ALL);
			for (DatanodeInfo node : nodes) {
				if (node instanceof DatanodeID) {
					DatanodeDescriptor desc = new DatanodeDescriptor(node);
					// getVolumeFailures() always returns zero.
					System.out.println(desc.getVolumeFailures());
				}
			}
		} catch (IOException ioe) {
			System.err.println("FileSystem is inaccessible due to:\n"
					+ StringUtils.stringifyException(ioe));
			return;
		}

Re: How do I find volume failures using Java code?

Posted by Mahesh Kumar Vasanthu Somashekar <mv...@pivotal.io>.
Hi Cho Ju Il,

Some of the objects you are trying to access, such as DatanodeDescriptor,
are internal to the namenode and not intended for clients. A client always
gets a freshly constructed instance, not the one the namenode updates as
part of heartbeats, and that is why getVolumeFailures() returns zero.

https://github.com/apache/hadoop-hdfs/blob/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/DatanodeDescriptor.java
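Since DatanodeDescriptor is namenode-internal, one client-side alternative is to read the datanode's own JMX metrics over HTTP: the datanode's /jmx servlet exposes an FSDatasetState bean with a VolumeFailures counter. The sketch below is a minimal illustration, not a definitive recipe: the web port (50075 was the typical datanode HTTP port in this era) and the exact bean name may differ in your deployment, and the regex-based parsing is a deliberate simplification to avoid pulling in a JSON library.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JmxVolumeFailures {

    // Pull the VolumeFailures counter out of a /jmx JSON response.
    // A crude regex scan, sufficient for a flat numeric attribute;
    // returns -1 when the attribute is not present.
    static int extractVolumeFailures(String jmxJson) {
        Matcher m = Pattern.compile("\"VolumeFailures\"\\s*:\\s*(\\d+)").matcher(jmxJson);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    // Fetch a URL and return the response body as a string.
    static String fetch(String urlStr) throws IOException {
        InputStream in = new URL(urlStr).openStream();
        try {
            Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A");
            return s.hasNext() ? s.next() : "";
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            // e.g. http://datanode1:50075/jmx?qry=Hadoop:service=DataNode,name=FSDatasetState*
            // (host, port, and bean query are assumptions; adjust for your cluster)
            System.out.println("VolumeFailures = " + extractVolumeFailures(fetch(args[0])));
        } else {
            // Offline demonstration on a sample /jmx-style response.
            String sample = "{\"beans\":[{\"VolumeFailures\":2}]}";
            System.out.println(extractVolumeFailures(sample));
        }
    }
}
```

Newer Hadoop releases later added failed-volume reporting to the client-visible datanode report itself, but on 2.4.1 polling each datanode's metrics endpoint like this is one workable approach.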

Thanks,
Mahesh


On Fri, Oct 17, 2014 at 12:22 AM, cho ju il <tj...@kgrid.co.kr> wrote:

> Hadoop 2.4.1
> 2 namenodes (HA), 3 datanodes.
> I want to find failed volumes, but getVolumeFailures() always returns zero.
> How do I find volume failures using Java code?
>
> Configuration conf = getConf(configPath);
> FileSystem fs = null;
> try {
>     fs = FileSystem.get(conf);
>     if (!(fs instanceof DistributedFileSystem)) {
>         System.err.println("FileSystem is " + fs.getUri());
>         return;
>     }
>
>     DistributedFileSystem dfs = (DistributedFileSystem) fs;
>     DatanodeInfo[] nodes = dfs.getDataNodeStats(DatanodeReportType.ALL);
>     for (DatanodeInfo node : nodes) {
>         if (node instanceof DatanodeID) {
>             DatanodeDescriptor desc = new DatanodeDescriptor(node);
>             // getVolumeFailures() always returns zero.
>             System.out.println(desc.getVolumeFailures());
>         }
>     }
> } catch (IOException ioe) {
>     System.err.println("FileSystem is inaccessible due to:\n"
>             + StringUtils.stringifyException(ioe));
>     return;
> }
