Posted to common-user@hadoop.apache.org by karthik raman <ka...@yahoo.com> on 2008/06/07 09:51:13 UTC

Maximum number of files in hadoop

Hi,
   What is the maximum number of files that can be stored on HDFS? Does it depend on the namenode memory configuration? Also, does this impact namenode performance in any way?
thanks in advance
Karthik



Re: Maximum number of files in hadoop

Posted by Dhruba Borthakur <dh...@gmail.com>.
The maximum number of files in HDFS depends on the amount of memory
available to the namenode. Each file object and each block object
takes about 150 bytes of memory. Thus, if you have 1 million files
and each file has one block, then you would need about 300 MB of
memory for the namenode.

thanks
dhruba
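
The rule of thumb above can be sketched as a quick calculation (a back-of-the-envelope estimate only; the 150-bytes-per-object figure is approximate and actual heap usage varies by Hadoop version, replication, and block layout):

```python
# Estimate NameNode heap needed for file and block metadata,
# using the ~150 bytes per object rule of thumb quoted above.
OBJECT_BYTES = 150  # approximate heap cost of one file or block object

def namenode_heap_bytes(num_files, blocks_per_file=1):
    """Rough NameNode heap estimate for the given namespace size."""
    # one file object plus one object per block of that file
    objects = num_files * (1 + blocks_per_file)
    return objects * OBJECT_BYTES

# 1 million single-block files -> 2 million objects * 150 bytes
print(namenode_heap_bytes(1_000_000))  # 300000000 bytes, i.e. roughly 300 MB
```

Note that by this arithmetic, 1 million single-block files come to about 300 MB of namenode heap; multi-block files or deeper directory trees push the number up proportionally.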


On Fri, Jun 6, 2008 at 11:51 PM, karthik raman <ka...@yahoo.com> wrote:
> Hi,
>    What is the maximum number of files that can be stored on HDFS? Does it depend on the namenode memory configuration? Also, does this impact namenode performance in any way?
> thanks in advance
> Karthik