Posted to common-user@hadoop.apache.org by 自己 <zx...@163.com> on 2013/04/24 04:26:27 UTC

namenode memory test

Hi, I would like to know how much memory our data takes on the name-node
per block, file, and directory.
For example, the metadata size of a file.
When I store some files in HDFS, how can I find out how much name-node memory they take?
Are there any tools or commands to measure the memory used on the name-node?


I'm looking forward to your reply! Thanks!

Re: namenode memory test

Posted by sudhakara st <su...@gmail.com>.
Every file, directory and block in HDFS is represented as an object in the
namenode’s memory, and each object consumes roughly 150 bytes on average.
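Using that rule of thumb, a rough heap estimate can be sketched in shell. The counts below are hypothetical placeholders; on a real cluster you would substitute the dir/file/block totals from the summary that `hadoop fsck /` prints:

```shell
# Hypothetical object counts; replace with the "Total dirs/files/blocks"
# figures reported by 'hadoop fsck /' on your cluster.
FILES=1000000
DIRS=50000
BLOCKS=1200000

# Each file, directory and block is one namenode object at ~150 bytes
# (rule of thumb, not an exact per-object size).
OBJECTS=$((FILES + DIRS + BLOCKS))
BYTES=$((OBJECTS * 150))
echo "objects=$OBJECTS approx_heap_mb=$((BYTES / 1024 / 1024))"
```

This is only a first-order estimate; actual heap usage varies with file name lengths, replication, and JVM overhead.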


On Wed, Apr 24, 2013 at 12:30 PM, Mahesh Balija
<ba...@gmail.com> wrote:

> Can you manually go into the directory configured for hadoop.tmp.dir in
> core-site.xml and run ls -l to see the on-disk metadata files? It will
> contain fsimage, edits, fstime, and VERSION.
> Or use basic commands like:
> hadoop fs -du
> hadoop fsck
>
>
>
> On Wed, Apr 24, 2013 at 7:56 AM, 自己 <zx...@163.com> wrote:
>
>> Hi, I would like to know how much memory our data takes on the name-node
>> per block, file, and directory.
>> For example, the metadata size of a file.
>> When I store some files in HDFS, how can I find out how much name-node
>> memory they take?
>> Are there any tools or commands to measure the memory used on the
>> name-node?
>>
>> I'm looking forward to your reply! Thanks!
>>
>>
>>
>


-- 

Regards,
.....  Sudhakara.st

Re: namenode memory test

Posted by Mahesh Balija <ba...@gmail.com>.
Can you manually go into the directory configured for hadoop.tmp.dir in
core-site.xml and run ls -l to see the on-disk metadata files? It will
contain fsimage, edits, fstime, and VERSION.
Or use basic commands like:
hadoop fs -du
hadoop fsck
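The end of a `hadoop fsck /` report includes total dir, file and block counts, which are the inputs for a namenode memory estimate. A small sketch of extracting them with awk; the summary lines below are made-up illustrative values, not real cluster output:

```shell
# Illustrative tail of a 'hadoop fsck /' report (counts are invented);
# on a real cluster you would pipe the actual fsck output in instead.
summary="Total dirs: 50000
Total files: 1000000
Total blocks (validated): 1200000"

# Sum the three "Total ..." counts to get the number of namenode objects.
objects=$(printf '%s\n' "$summary" | awk -F': ' '/^Total/ {sum += $2} END {print sum}')
echo "objects=$objects"
```

Note that fsck and `fs -du` report on-disk and metadata totals; they do not directly measure namenode heap, so any memory figure derived this way is an approximation.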



On Wed, Apr 24, 2013 at 7:56 AM, 自己 <zx...@163.com> wrote:

> Hi, I would like to know how much memory our data takes on the name-node
> per block, file, and directory.
> For example, the metadata size of a file.
> When I store some files in HDFS, how can I find out how much name-node
> memory they take?
> Are there any tools or commands to measure the memory used on the
> name-node?
>
> I'm looking forward to your reply! Thanks!
>
>
>
