Posted to common-user@hadoop.apache.org by bourne1900 <bo...@yahoo.cn> on 2011/12/23 03:35:15 UTC

DN limit

Hi all,
How many files can a datanode hold?
In my test case, when a datanode saves 14 million files, the cluster stops working.




Bourne

Re: Re: DN limit

Posted by bourne1900 <bo...@yahoo.cn>.
Hi,
The block replication factor is 1.
There are 150 million blocks in the NN web UI.




Bourne
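
A quick sanity check on those numbers: with replication 1, 150 million blocks across the 14 million files mentioned earlier works out to roughly ten blocks per file, so the files are multi-blocked rather than single-block. A minimal sketch of the arithmetic, using only the figures quoted in this thread:

    public class BlocksPerFile {
        public static void main(String[] args) {
            long files = 14_000_000L;   // file count from the original post
            long blocks = 150_000_000L; // block count from the NN web UI, replication 1
            // Roughly 10.7 blocks per file on average.
            System.out.printf("~%.1f blocks per file%n", (double) blocks / files);
        }
    }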

From: Harsh J
Sent: Saturday, December 24, 2011, 2:09 PM
To: common-user
Subject: Re: Re: DN limit
Bourne,

Do your 14 million files each take up a single block, or are these
files multi-blocked? What does the block count come up as in the live
nodes list of the NN web UI?

2011/12/23 bourne1900 <bo...@yahoo.cn>:
> Sorry, a detailed description:
> I want to know how many files a datanode can hold, so there is only one datanode in my cluster.
> When the datanode saves 14 million files, the cluster stops working; the datanode has used all of its memory (32 GB), while the namenode's memory is fine.
>
>
>
>
> Bourne
>
> Sender: Adrian Liu
> Date: Friday, December 23, 2011, 10:47 AM
> To: common-user@hadoop.apache.org
> Subject: Re: DN limit
> In my understanding, the max number of files stored in HDFS should be <MEM of namenode> / sizeof(inode struct). This max number of HDFS files should be no smaller than the max number of files a datanode can hold.
>
> Please feel free to correct me, because I'm just beginning to learn Hadoop.
>
> On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:
>
>> Hi all,
>> How many files can a datanode hold?
>> In my test case, when a datanode saves 14 million files, the cluster stops working.
>>
>>
>>
>>
>> Bourne
>
> Adrian Liu
> adrianl@yahoo-inc.com



-- 
Harsh J

Re: Re: DN limit

Posted by Harsh J <ha...@cloudera.com>.
Bourne,

Do your 14 million files each take up a single block, or are these
files multi-blocked? What does the block count come up as in the live
nodes list of the NN web UI?
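
For reference, the per-datanode block counts show up on the live-nodes page of the NN web UI, and "hadoop fsck /" prints the cluster-wide block total. File and directory counts can also be pulled programmatically; a minimal sketch using the stock FileSystem#getContentSummary API (it assumes the cluster configuration is on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NamespaceCounts {
        public static void main(String[] args) throws Exception {
            // Picks up the default filesystem from the configuration on the classpath.
            FileSystem fs = FileSystem.get(new Configuration());
            ContentSummary cs = fs.getContentSummary(new Path("/"));
            System.out.println("Files:       " + cs.getFileCount());
            System.out.println("Directories: " + cs.getDirectoryCount());
            System.out.println("Bytes:       " + cs.getLength());
        }
    }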

2011/12/23 bourne1900 <bo...@yahoo.cn>:
> Sorry, a detailed description:
> I want to know how many files a datanode can hold, so there is only one datanode in my cluster.
> When the datanode saves 14 million files, the cluster stops working; the datanode has used all of its memory (32 GB), while the namenode's memory is fine.
>
>
>
>
> Bourne
>
> Sender: Adrian Liu
> Date: Friday, December 23, 2011, 10:47 AM
> To: common-user@hadoop.apache.org
> Subject: Re: DN limit
> In my understanding, the max number of files stored in HDFS should be <MEM of namenode> / sizeof(inode struct). This max number of HDFS files should be no smaller than the max number of files a datanode can hold.
>
> Please feel free to correct me, because I'm just beginning to learn Hadoop.
>
> On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:
>
>> Hi all,
>> How many files can a datanode hold?
>> In my test case, when a datanode saves 14 million files, the cluster stops working.
>>
>>
>>
>>
>> Bourne
>
> Adrian Liu
> adrianl@yahoo-inc.com



-- 
Harsh J

Re: Re: DN limit

Posted by bourne1900 <bo...@yahoo.cn>.
Sorry, a detailed description:
I want to know how many files a datanode can hold, so there is only one datanode in my cluster.
When the datanode saves 14 million files, the cluster stops working; the datanode has used all of its memory (32 GB), while the namenode's memory is fine.




Bourne
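
Those symptoms fit the datanode itself running out of heap for replica bookkeeping: a datanode keeps an in-memory record of every block replica it stores, so heap usage grows with the replica count regardless of file sizes. A rough back-of-envelope sketch, assuming a few hundred bytes of heap per replica (the 200-byte figure is an assumption for illustration, not a measured value):

    public class DnHeapEstimate {
        public static void main(String[] args) {
            long replicas = 150_000_000L;  // block count reported elsewhere in this thread
            long bytesPerReplica = 200L;   // assumed per-replica overhead, not an exact figure
            double gib = replicas * bytesPerReplica / (1024.0 * 1024 * 1024);
            // ~28 GiB for bookkeeping alone, plausibly exhausting a 32 GB machine.
            System.out.printf("~%.0f GiB of datanode heap%n", gib);
        }
    }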

Sender: Adrian Liu
Date: Friday, December 23, 2011, 10:47 AM
To: common-user@hadoop.apache.org
Subject: Re: DN limit
In my understanding, the max number of files stored in HDFS should be <MEM of namenode> / sizeof(inode struct). This max number of HDFS files should be no smaller than the max number of files a datanode can hold.

Please feel free to correct me, because I'm just beginning to learn Hadoop.

On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

> Hi all,
> How many files can a datanode hold?
> In my test case, when a datanode saves 14 million files, the cluster stops working.
> 
> 
> 
> 
> Bourne

Adrian Liu
adrianl@yahoo-inc.com

Re: DN limit

Posted by Adrian Liu <ad...@yahoo-inc.com>.
In my understanding, the max number of files stored in HDFS should be <MEM of namenode> / sizeof(inode struct). This max number of HDFS files should be no smaller than the max number of files a datanode can hold.

Please feel free to correct me, because I'm just beginning to learn Hadoop.
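
A back-of-envelope version of that estimate, using the commonly cited rule of thumb of roughly 150 bytes of NameNode heap per namespace object (file, directory, or block). The 150-byte cost and the 32 GB heap below are illustrative assumptions, not measurements of an actual inode struct:

    public class NnCapacityEstimate {
        public static void main(String[] args) {
            long heapBytes = 32L * 1024 * 1024 * 1024; // hypothetical 32 GB NameNode heap
            long bytesPerObject = 150L;                // rule-of-thumb per-object cost
            long objects = heapBytes / bytesPerObject;
            // A single-block file costs at least two objects: the file and its block.
            System.out.println("Max namespace objects:  " + objects);
            System.out.println("Max single-block files: " + objects / 2);
        }
    }

Note that this bounds what the NameNode can track; the memory a datanode needs per stored replica is a separate limit, which is what the rest of this thread runs into.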

On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

> Hi all,
> How many files can a datanode hold?
> In my test case, when a datanode saves 14 million files, the cluster stops working.
> 
> 
> 
> 
> Bourne

Adrian Liu
adrianl@yahoo-inc.com