Posted to user@hadoop.apache.org by Mich Talebzadeh <mi...@peridale.co.uk> on 2015/03/25 21:17:56 UTC

Can block size for namenode be different from datanode block size?

Thank you all for your contributions.

 

I have summarised the findings below:

 

1.     The Hadoop block size is a configurable parameter, dfs.block.size (dfs.blocksize in Hadoop 2.x), specified in bytes. By default it is set to 134217728 bytes, i.e. 128MB
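
For example, to check the block size a client will actually use, or to override it for a single upload (a quick sketch; the file name, target path and the 256MB figure are illustrative):

hdfs getconf -confKey dfs.blocksize

# 268435456 bytes = 256MB; overrides the default for this one file
hdfs dfs -D dfs.blocksize=268435456 -put mydata.csv /user/mich/mydata.csv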

2.     The block size is only relevant to DataNodes (DN). NameNode (NN) does not use this parameter

3.     NN behaves like an in-memory database (IMDB) and uses an on-disk file called the FsImage to load the metadata at startup. This is the only place where I see value in a Solid State Disk, to make this initial load faster
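
To see the on-disk image itself, ask Hadoop where the NameNode keeps its metadata and list the image files there (a sketch; the actual directory varies per install):

hdfs getconf -confKey dfs.namenode.name.dir
ls <that-directory>/current | grep fsimage    # <that-directory> is whatever the previous command printed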

4.     For the remaining period, until HDFS is shut down or restarted, NN will use the in-memory cache to access metadata

5.     With regard to sizing the NN to store metadata, one can use the following rule of thumb (heuristic):

a.     NN consumes roughly 1GB of heap for every 1 million blocks (source: Hadoop Operations, Eric Sammer, ISBN 978-1-449-32705-7). So with a 128MB block size, every 1GB of NN memory covers 128 * 1E6 / (3 * 1024) = 41,666GB of data. The 3 comes from the fact that each block is replicated three times. In other words, just under 42TB of data per 1GB. So with 10GB of namenode cache you can have up to roughly 420TB of data on your datanodes
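
As a quick sanity check on that arithmetic (a sketch; the 1GB-per-million-blocks figure is the book's heuristic, not a guarantee):

# 1M blocks of 128MB each, over 3x replication, 1024 MB per GB
echo $(( 128 * 1000000 / (3 * 1024) ))   # prints 41666, i.e. just under 42TB
echo $(( 10 * 41666 ))                   # prints 416660GB, i.e. roughly 420TB for 10GB of heap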

6.     You can fetch the FsImage file from Hadoop and convert it into a text file as follows:

 

hdfs dfsadmin -fetchImage nnimage

 

15/03/25 20:17:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

15/03/25 20:17:41 INFO namenode.TransferFsImage: Opening connection to http://rhes564:50070/imagetransfer?getimage=1&txid=latest

15/03/25 20:17:41 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds

15/03/25 20:17:41 WARN namenode.TransferFsImage: Overwriting existing file nnimage with file downloaded from http://rhes564:50070/imagetransfer?getimage=1&txid=latest

15/03/25 20:17:41 INFO namenode.TransferFsImage: Transfer took 0.03s at 1393.94 KB/s

 

7.     That creates an image file in the current directory, which can be converted to a text file:

hdfs oiv -i nnimage -o nnimage.txt

 

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loading 2 strings

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loading 543 inodes.

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loading inode references

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loaded 0 inode references

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loading inode directory section

15/03/25 20:20:07 INFO offlineImageViewer.FSImageHandler: Loaded 198 directories

15/03/25 20:20:07 INFO offlineImageViewer.WebImageViewer: WebImageViewer started. Listening on /127.0.0.1:5978. Press Ctrl+C to stop the viewer.
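
If your release's oiv defaults to this web viewer rather than writing a text dump, you can select an explicit processor to get a file on disk, e.g. the XML one (a sketch; the set of available processors varies by Hadoop version):

hdfs oiv -p XML -i nnimage -o nnimage.xml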

 

Let me know if I missed anything or got it wrong.

 

HTH

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

 


 


Re: Can block size for namenode be different from datanode block size?

Posted by Harsh J <ha...@cloudera.com>.
> 2.     The block size is only relevant to DataNodes (DN). NameNode (NN)
> does not use this parameter

Actually, as a configuration, it's only relevant to the client. See also
http://www.quora.com/How-do-I-check-HDFS-blocksize-default-custom

Other points sound about right, except that (7) can now only be done if you
have the legacy fsimage write format enabled. The new OIV tool in recent
releases instead serves a REST-based web server for querying the file data.
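
For example, once the web viewer is listening (port 5978 in the log above), the
image can be browsed through the WebHDFS API (a sketch; the port and paths are
taken from the example above):

hdfs oiv -i nnimage
hdfs dfs -ls webhdfs://127.0.0.1:5978/
curl -s "http://127.0.0.1:5978/webhdfs/v1/?op=LISTSTATUS"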




-- 
Harsh J
