Posted to mapreduce-user@hadoop.apache.org by rakesh kothari <rk...@hotmail.com> on 2010/10/07 20:03:35 UTC

Hdfs Block Size

Is there a reason why the block size should be set to 2^N, for some integer N? Does it help with block fragmentation, etc.?

Thanks,
-Rakesh

Re: Hdfs Block Size

Posted by Jeff Zhang <zj...@gmail.com>.
Yes. It relates to the native file system's block size and the disk's
block size: a power-of-two HDFS block size is an exact multiple of
those sizes, so blocks stored as local files stay aligned, which can
reduce disk fragmentation.
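A minimal sketch of that alignment argument (the 4 KiB native filesystem block size below is an assumption, typical of ext3/ext4; the helper name is illustrative, not part of HDFS):

```python
# Why a power-of-two HDFS block size aligns with the native filesystem:
# each HDFS block is stored as a regular file on the DataNode's local
# filesystem, so a block size that is an exact multiple of the native
# block size leaves no partial native block at the tail.

FS_BLOCK = 4 * 1024  # assumed native filesystem block: 4 KiB (2^12)

def is_aligned(hdfs_block_size: int, fs_block: int = FS_BLOCK) -> bool:
    """True when the HDFS block size is an exact multiple of fs_block."""
    return hdfs_block_size % fs_block == 0

# Any 2^N size with N >= 12 is a multiple of a 4 KiB native block,
# e.g. the common 64 MiB (2^26) and 128 MiB (2^27) settings:
print(is_aligned(64 * 1024 * 1024))    # True
print(is_aligned(128 * 1024 * 1024))   # True
# A round-decimal size such as 100 MB (10^8 bytes) is not aligned:
print(is_aligned(100 * 1000 * 1000))   # False
```

Note that HDFS does not require a power of two; any multiple of the native block size gets the same alignment, which is why powers of two are a convenient convention rather than a hard rule.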



On Fri, Oct 8, 2010 at 2:03 AM, rakesh kothari <rk...@hotmail.com> wrote:
> Is there a reason why block size should be set to some 2^N, for some integer
> N ? Does it help with block defragmentation etc. ?
>
> Thanks,
> -Rakesh
>



-- 
Best Regards

Jeff Zhang