Posted to general@hadoop.apache.org by rahul <rm...@apple.com> on 2010/10/02 06:27:22 UTC

Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Hi,

I am using Hadoop 0.20.2 for data processing, with a Hadoop cluster set up on two nodes.

I am continuously adding more disk space to the nodes.

Can somebody let me know how to get the total space available on the Hadoop cluster from the command line?

In other words, is there a Hadoop equivalent of the Unix "df" command?

Any input is helpful.

Thanks
Rahul

Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by Marcos Pinto <ma...@gmail.com>.
HDFS uses the disk space available on the partition that holds your
hadoop.tmp.dir (set in core-site.xml).
I hope this helps you.


For example: user hadoop => /home/hadoop
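
For reference, hadoop.tmp.dir is set in core-site.xml. A minimal sketch (the path below is only an example, not a recommendation):

```xml
<!-- core-site.xml: base directory HDFS falls back to when
     dfs.data.dir / dfs.name.dir are not set explicitly.
     The path below is an example value. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
```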


Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by rahul <rm...@apple.com>.
Thanks Jonathan,

It's really a great help.

Rahul


RE: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by Jonathan Gray <jg...@facebook.com>.
Rahul,

There is a ton of documentation available for Hadoop (including books).

The best place to start is the wiki: http://wiki.apache.org/hadoop/

On your specific issue, you need to configure Hadoop to tell it which directories to use for storing data.

The configuration parameter is 'dfs.data.dir', which takes a comma-delimited list of directories to use for data storage.

JG
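
A minimal sketch of the setting Jonathan describes; the mount points below are examples, assuming one HDFS data directory per mounted disk:

```xml
<!-- hdfs-site.xml (Hadoop 0.20.x): comma-delimited list of datanode
     storage directories, typically one per physical disk.
     The paths are illustrative examples. -->
<property>
  <name>dfs.data.dir</name>
  <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
</property>
```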



Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by rahul <rm...@apple.com>.
Hi Marcos,

The same thing is happening for me as well.

I have multiple disks mounted on my system, but by default, when I formatted HDFS, it took the disk on which the Hadoop binary is present.

Is there a way I can format all the drives mounted on my system?

In other words, can we control which drives or locations are formatted for HDFS?

Thanks,
Rahul



Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by Marcos Pinto <ma...@gmail.com>.
I got the same problem; I remember it was something related to the user's
partition.
For example, I created a hadoop user, so HDFS took the partition closest to
that user's home directory.
I don't remember exactly, but it was something like that. I hope it helps you
in some way.


Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by rahul <rm...@apple.com>.
Thanks, Glenn.



RE: Total Space Available on Hadoop Cluster Or Hadoop version of "df".

Posted by Glenn Gore <Gl...@melbourneit.com.au>.
hadoop dfsadmin -report

Regards

Glenn
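
This is the cluster-wide "df" equivalent for HDFS in 0.20.x: `hadoop dfsadmin -report` prints configured capacity, DFS used, and DFS remaining for the whole cluster and per datanode. The filtering below is a sketch run against sample text (the sample lines are illustrative, not output from a real cluster; a live cluster would pipe the real report instead).

```shell
# Illustrative lines in the style of a dfsadmin report (not real output):
sample='Configured Capacity: 2000000000 (1.86 GB)
DFS Used: 500000000 (476.84 MB)
DFS Remaining: 1500000000 (1.40 GB)'

# On a running cluster you would use the actual command:
#   hadoop dfsadmin -report | grep -E "Capacity|DFS (Used|Remaining)"
# Here we apply the same filter to the sample text:
echo "$sample" | grep -E 'Capacity|DFS (Used|Remaining)'
```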

