Posted to mapreduce-user@hadoop.apache.org by Dr Mich Talebzadeh <mi...@peridale.co.uk> on 2015/03/25 16:11:08 UTC

can block size for namenode be different from datanode block size?

Hi,

The block size for HDFS is currently set to 128 MB by default. This is
configurable.

My point is that I assume this parameter (dfs.blocksize in
hdfs-site.xml) sets the block size for both the namenode and the
datanodes. However, the storage and random-access pattern for metadata
on the namenode is different and suits smaller block sizes.

For example, in Linux the OS block size is typically 4 KB, which means
one HDFS block of 128 MB spans 32K OS blocks (134,217,728 / 4,096 =
32,768). For metadata this may not be useful, and a smaller block size
would be more suitable, hence my question.

Thanks,

Mich

Re: can block size for namenode be different from datanode block size?

Posted by Ravi Prakash <ra...@ymail.com>.
Hi Mich!

The block size you are referring to applies only to file data stored on the datanodes. The files that the namenode writes (the fsimage and the edit log) are not chunked using this block size.
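Those are ordinary local files under dfs.namenode.name.dir, so the OS block size, not dfs.blocksize, governs how they are stored. The HDFS block size is in fact a per-file attribute chosen by the client at write time; as a rough sketch (file names and paths here are only illustrative), you can even override it for a single upload:

  # write one file with a 64 MB block size instead of the cluster default
  hdfs dfs -D dfs.blocksize=67108864 -put localfile.dat /user/mich/localfile.dat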
HTH,
Ravi

On Wednesday, March 25, 2015 8:12 AM, Dr Mich Talebzadeh <mi...@peridale.co.uk> wrote:
 
Hi,

The block size for HDFS is currently set to 128 MB by default. This is
configurable.

My point is that I assume this parameter (dfs.blocksize in
hdfs-site.xml) sets the block size for both the namenode and the
datanodes. However, the storage and random-access pattern for metadata
on the namenode is different and suits smaller block sizes.

For example, in Linux the OS block size is typically 4 KB, which means
one HDFS block of 128 MB spans 32K OS blocks (134,217,728 / 4,096 =
32,768). For metadata this may not be useful, and a smaller block size
would be more suitable, hence my question.

Thanks,

Mich

  
