Posted to common-user@hadoop.apache.org by Naama Kraus <na...@gmail.com> on 2008/06/30 21:10:08 UTC

Underlying file system block size

Hi All,

To my knowledge, the HDFS block size is 64 MB, which is fairly large. Is this a
requirement on the underlying file system, if one wishes to implement Hadoop on
top of it? Or is there a way to get along with a file system that supports a
smaller block size, such as 1 MB or even less? What is the case for the
existing non-HDFS FileSystem implementations in Hadoop (such as S3 or KFS)?
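For context, here is how I understand the block size to be set today; as far
as I can tell it is a per-file parameter rather than a fixed constant. This is
just a minimal sketch against the current FileSystem API; the dfs.block.size
property name and the long create() overload are my reading of the docs, so
please correct me if I have it wrong:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BlockSizeSketch {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Cluster-wide default: ask for 1 MB blocks instead of 64 MB.
          // ("dfs.block.size" is the property name as I read the docs.)
          conf.setLong("dfs.block.size", 1024L * 1024L);

          FileSystem fs = FileSystem.get(conf);
          // Per-file override via the long-form create():
          // (path, overwrite, bufferSize, replication, blockSize)
          fs.create(new Path("/tmp/small-blocks.dat"),
                    true, 4096, (short) 3, 1024L * 1024L).close();
      }
  }

So my question is really whether anything beneath the FileSystem layer
constrains how small that value can usefully go.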

Thanks for any input,
Naama

-- 
oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo
00 oo 00 oo
"If you want your children to be intelligent, read them fairy tales. If you
want them to be more intelligent, read them more fairy tales." (Albert
Einstein)