Posted to common-user@hadoop.apache.org by Christophe Bisciglia <ch...@cloudera.com> on 2009/08/14 04:02:16 UTC
Announcement: Cloudera Hadoop Training in San Francisco (August 26-28)
Hadoop Fans, please pardon the short notice, but we wanted to let you know
that we are offering a 3 day training program at the end of the month in San
Francisco. There is a $300 discount for those who register before 11PM PDT
on August 20th.
Day 1: Hadoop Basics + Ecosystem and Deployment (data center, EC2, etc)
Day 2: Hive, Pig, and Data Processing Pipelines
Day 3: Advanced APIs + MapReduce Debugging and Optimization
You can see the full agenda here: http://www.eventbrite.com/event/408826812
We are using a smaller space than usual, and as such, can only accommodate
20 people. If you do want to come, please take advantage of the early bird
discount by registering soon :-)
Cheers,
Christophe
--
get hadoop: cloudera.com/hadoop
online training: cloudera.com/hadoop-training
blog: cloudera.com/blog
twitter: twitter.com/cloudera
RE: Cluster Disk Usage
Posted by zjffdu <zj...@gmail.com>.
You can use the JobTracker web UI to view the disk usage.
-----Original Message-----
From: Arvind Sharma [mailto:arvind321@yahoo.com]
Sent: August 20, 2009 15:57
To: common-user@hadoop.apache.org
Subject: Cluster Disk Usage
Is there a way to find out how much disk space - overall or on a per-DataNode
basis - is available before creating a file?
I am trying to address an issue where the disk got full (config error) and
the client was not able to create a file on HDFS.
I want to be able to check if there is space left on the grid before trying
to create the file.
Arvind
RE: Cluster Disk Usage
Posted by zjffdu <zj...@gmail.com>.
Arvind,
You can use this API to get the amount of space used by the file system:
FileSystem.getUsed()
But I did not find an API that returns the remaining space. You could write
some code of your own:
remaining disk space = total disk space - operating system space -
FileSystem.getUsed()
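The arithmetic above can be sketched as a small helper. This is only a sketch with hypothetical byte counts: in a real client the `used` figure would come from FileSystem.getUsed() (the Hadoop 0.19 API mentioned above), while the total-disk and operating-system figures are not reported by Hadoop 0.19 and would have to come from your own cluster inventory.

```java
// Sketch of the remaining-space estimate suggested in the reply above.
// All byte counts here are hypothetical placeholders; in a real client,
// `used` would be the value returned by FileSystem.getUsed(), and
// `totalDisk` / `osSpace` would come from your own inventory.
public class RemainingSpaceEstimate {

    // remaining = total disk space - operating system space - DFS used
    static long remainingBytes(long totalDisk, long osSpace, long used) {
        return totalDisk - osSpace - used;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024L * 1024L;
        long totalDisk = 500 * gb; // raw disk across the cluster (hypothetical)
        long osSpace   = 20 * gb;  // reserved for the OS (hypothetical)
        long used      = 350 * gb; // what FileSystem.getUsed() might return

        System.out.println(remainingBytes(totalDisk, osSpace, used) / gb + " GB free");
    }
}
```

Note that this is only an estimate: it ignores non-DFS files on the DataNode disks and any dfs.datanode.du.reserved headroom, so treat the result as an upper bound rather than a guarantee.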
Re: Cluster Disk Usage
Posted by Arvind Sharma <ar...@yahoo.com>.
Sorry, I also sent a direct e-mail in reply to one response....
There I asked one question: what is the cost of these APIs? Are they expensive calls? Does the API only go to the NameNode, which stores this data?
Thanks!
Arvind
Re: Cluster Disk Usage
Posted by Arvind Sharma <ar...@yahoo.com>.
Using hadoop-0.19.2
Cluster Disk Usage
Posted by Arvind Sharma <ar...@yahoo.com>.
Is there a way to find out how much disk space - overall or on a per-DataNode basis - is available before creating a file?
I am trying to address an issue where the disk got full (config error) and the client was not able to create a file on HDFS.
I want to be able to check if there is space left on the grid before trying to create the file.
Arvind