Posted to common-user@hadoop.apache.org by Mich Talebzadeh <mi...@peridale.co.uk> on 2015/03/26 11:16:34 UTC

Total memory available to NameNode

Is there any parameter that sets the total memory that NameNode can use?

 

Thanks

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

 

NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com] 
Sent: 25 March 2015 16:08
To: user@hadoop.apache.org; mich@peridale.co.uk
Subject: Re: can block size for namenode be different from wdatanode block size?

 

Correct, let's say you run the NameNode with just 1 GB of RAM.
This would be a very strong limitation for the cluster. For each file we need about 200 bytes, and about the same for each block. From that we can estimate the maximum capacity, depending on the HDFS block size and the average file size.
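As a rough back-of-the-envelope illustration of that estimate (this sketch is not from the original mail; it simply assumes ~200 bytes of heap per file object, ~200 bytes per block object, and one block per file):

    // Back-of-the-envelope sketch only -- assumes ~200 bytes of NameNode heap
    // per file object plus ~200 bytes per block object, one block per file.
    public class NameNodeCapacitySketch {
        public static void main(String[] args) {
            long heapBytes     = 1L * 1024 * 1024 * 1024;  // 1 GB NameNode heap
            long bytesPerFile  = 200 + 200;                // file object + one block object
            long maxFiles      = heapBytes / bytesPerFile; // roughly 2.7 million files
            long hdfsBlockSize = 128L * 1024 * 1024;       // 128 MB block size

            long bestCaseData = maxFiles * hdfsBlockSize;  // every file fills its block: ~360 TB
            long tinyFileData = maxFiles * 1024L;          // 1 KB average file size: only ~2.7 GB

            System.out.printf("files: %d, best-case data: %d bytes, tiny-file data: %d bytes%n",
                    maxFiles, bestCaseData, tinyFileData);
        }
    }

In other words, the limit is the number of file and block objects the heap can hold, not the raw disk capacity; larger blocks or larger average files stretch the same NameNode heap much further.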

 

Cheers,

Mirko

 

2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <mi...@peridale.co.uk>:

Hi Mirko,

Thanks for feedback.

Since I have worked with in-memory databases, this metadata caching sounds much like an IMDB that caches data at startup from disk-resident storage.

IMDBs tend to run into issues when the cache cannot hold all the data. Is this the case with the metadata as well?

Regards,

Mich

Let your email find you with BlackBerry from Vodafone

  _____  

From: Mirko Kämpf <mi...@gmail.com> 

Date: Wed, 25 Mar 2015 15:20:03 +0000

To: user@hadoop.apache.org<us...@hadoop.apache.org>

ReplyTo: user@hadoop.apache.org 

Subject: Re: can block size for namenode be different from datanode block size?

 

Hi Mich,

 

please see the comments in your text.

 

 

2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mi...@peridale.co.uk>:


Hi,

The block size for HDFS is currently set to 128MB by default. This is
configurable.

Correct, an HDFS client can override the config property and define a different block size for its HDFS blocks.
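As an illustration of that client-side override (this sketch is not part of the original mail; the path and sizes are made up, and it assumes a Hadoop 2.x client where the property is named dfs.blocksize):

    // Minimal sketch: write a file with a non-default block size from the client side.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeOverride {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Option 1: override the property for everything this client writes.
            conf.setLong("dfs.blocksize", 256L * 1024 * 1024);  // 256 MB instead of the 128 MB default

            FileSystem fs = FileSystem.get(conf);

            // Option 2: pass an explicit block size for a single file.
            Path p = new Path("/tmp/blocksize-demo.txt");       // made-up example path
            short replication = 3;
            long blockSize = 64L * 1024 * 1024;                 // 64 MB, for this file only
            try (FSDataOutputStream out = fs.create(p, true, 4096, replication, blockSize)) {
                out.writeUTF("hello");
            }
        }
    }

The block size is recorded per file by the NameNode at create time, so changing the property later does not affect files that already exist.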


My point is that I assume this parameter in hadoop-core.xml sets the
block size for both the namenode and the datanodes.

Correct, the block size is an HDFS-wide setting, but in general it is the HDFS client that creates the blocks.
  

However, the storage and
random access pattern for metadata in the namenode is different and suits smaller
block sizes.

The HDFS block size has no impact here. NameNode metadata is held in memory; for reliability it is also written to the local disks of the server.
 


For example, in Linux the OS block size is 4K, which means one HDFS block
of 128MB holds 32K OS blocks. For metadata this may not be
useful and a smaller block size would be more suitable, hence my question.

Remember, the metadata is in memory. The fsimage file, which contains the metadata,
is loaded on startup of the NameNode.

 

Please do not be confused by the two types of block sizes.

 

Hope this helps a bit.

Cheers,

Mirko

 


Thanks,

Mich

 

 


Re: Total memory available to NameNode

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Ah, yes. Tom's book is a good start, and Eric Sammer's book Hadoop Operations too :)

BR,
 AL


> On 26 Mar 2015, at 11:50, Mich Talebzadeh <mi...@peridale.co.uk> wrote:
> 
> Many thanks AL. I believe you meant “Hadoop: The Definitive Guide” :)
>  
> Mich Talebzadeh
>  
> http://talebzadehmich.wordpress.com <http://talebzadehmich.wordpress.com/>
>  
> Publications due shortly:
> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache
>  
> NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.
>  
> From: Alexander Alten-Lorenz [mailto:wget.null@gmail.com] 
> Sent: 26 March 2015 10:30
> To: user@hadoop.apache.org
> Subject: Re: Total memory available to NameNode
>  
> Hi Mich,
>  
> the book Hadoop Operations may be a good start:
> https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop%20memory%20namenode&f=false <https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop memory namenode&f=false>
>  
> BR,
>  AL
>  
>  
>> On 26 Mar 2015, at 11:16, Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>> wrote:
>>  
>> Is there any parameter that sets the total memory that NameNode can use?
>>  
>> Thanks
>>  
>> Mich Talebzadeh
>>  
>> http://talebzadehmich.wordpress.com <http://talebzadehmich.wordpress.com/>
>>  
>> Publications due shortly:
>> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache
>>  
>> NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.
>>  
>> From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com <ma...@gmail.com>] 
>> Sent: 25 March 2015 16:08
>> To: user@hadoop.apache.org <ma...@hadoop.apache.org>; mich@peridale.co.uk <ma...@peridale.co.uk>
>> Subject: Re: can block size for namenode be different from wdatanode block size?
>>  
>> Correct, let's say you run the NameNode with just 1 GB of RAM.
>> This would be a very strong limitation for the cluster. For each file we need about 200 bytes, and about the same for each block. From that we can estimate the maximum capacity, depending on the HDFS block size and the average file size.
>>  
>> Cheers,
>> Mirko
>>  
>> 2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
>> Hi Mirko,
>> 
>> Thanks for feedback.
>> 
>> Since I have worked with in-memory databases, this metadata caching sounds much like an IMDB that caches data at startup from disk-resident storage.
>> 
>> IMDBs tend to run into issues when the cache cannot hold all the data. Is this the case with the metadata as well?
>> 
>> Regards,
>> 
>> Mich
>> Let your email find you with BlackBerry from Vodafone
>> From: Mirko Kämpf <mirko.kaempf@gmail.com <ma...@gmail.com>> 
>> Date: Wed, 25 Mar 2015 15:20:03 +0000
>> To: user@hadoop.apache.org <ma...@hadoop.apache.org><user@hadoop.apache.org <ma...@hadoop.apache.org>>
>> ReplyTo: user@hadoop.apache.org <ma...@hadoop.apache.org>
>> Subject: Re: can block size for namenode be different from datanode block size?
>>  
>> Hi Mich,
>>  
>> please see the comments in your text.
>> 
>>  
>>  
>> 2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
>> 
>> Hi,
>> 
>> The block size for HDFS is currently set to 128MB by default. This is
>> configurable.
>> Correct, an HDFS client can override the config property and define a different block size for its HDFS blocks.
>>> 
>>> My point is that I assume this parameter in hadoop-core.xml sets the
>>> block size for both the namenode and the datanodes.
>> Correct, the block size is an HDFS-wide setting, but in general it is the HDFS client that creates the blocks.
>>   
>>> However, the storage and
>>> random access pattern for metadata in the namenode is different and suits smaller
>>> block sizes.
>> The HDFS block size has no impact here. NameNode metadata is held in memory; for reliability it is also written to the local disks of the server.
>>  
>>> 
>>> For example, in Linux the OS block size is 4K, which means one HDFS block
>>> of 128MB holds 32K OS blocks. For metadata this may not be
>>> useful and a smaller block size would be more suitable, hence my question.
>> Remember, the metadata is in memory. The fsimage file, which contains the metadata,
>> is loaded on startup of the NameNode.
>>  
>> Please do not be confused by the two types of block sizes.
>>  
>> Hope this helps a bit.
>> Cheers,
>> Mirko
>>  
>>> 
>>> Thanks,
>>> 
>>> Mich


RE: Total memory available to NameNode

Posted by Mich Talebzadeh <mi...@peridale.co.uk>.
Many thanks AL. I believe you meant “Hadoop: The Definitive Guide” :)

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

 

NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: Alexander Alten-Lorenz [mailto:wget.null@gmail.com] 
Sent: 26 March 2015 10:30
To: user@hadoop.apache.org
Subject: Re: Total memory available to NameNode

 

Hi Mich,

 

the book Hadoop Operations may be a good start:

https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop%20memory%20namenode&f=false

 

BR,

 AL

 

 

On 26 Mar 2015, at 11:16, Mich Talebzadeh <mi...@peridale.co.uk> wrote:

 

Is there any parameter that sets the total memory that NameNode can use?

 

Thanks

 

Mich Talebzadeh

 

 <http://talebzadehmich.wordpress.com/> http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

 

NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: Mirko Kämpf [ <ma...@gmail.com> mailto:mirko.kaempf@gmail.com] 
Sent: 25 March 2015 16:08
To:  <ma...@hadoop.apache.org> user@hadoop.apache.org;  <ma...@peridale.co.uk> mich@peridale.co.uk
Subject: Re: can block size for namenode be different from wdatanode block size?

 

Correct, let's say you run the NameNode with just 1 GB of RAM.
This would be a very strong limitation for the cluster. For each file we need about 200 bytes, and about the same for each block. From that we can estimate the maximum capacity, depending on the HDFS block size and the average file size.

 

Cheers,

Mirko

 

2015-03-25 15:34 GMT+00:00 Mich Talebzadeh < <ma...@peridale.co.uk> mich@peridale.co.uk>:

Hi Mirko,

Thanks for feedback.

Since I have worked with in-memory databases, this metadata caching sounds much like an IMDB that caches data at startup from disk-resident storage.

IMDBs tend to run into issues when the cache cannot hold all the data. Is this the case with the metadata as well?

Regards,

Mich

Let your email find you with BlackBerry from Vodafone

  _____  

From: Mirko Kämpf < <ma...@gmail.com> mirko.kaempf@gmail.com> 

Date: Wed, 25 Mar 2015 15:20:03 +0000

To:  <ma...@hadoop.apache.org> user@hadoop.apache.org< <ma...@hadoop.apache.org> user@hadoop.apache.org>

ReplyTo:  <ma...@hadoop.apache.org> user@hadoop.apache.org

Subject: Re: can block size for namenode be different from datanode block size?

 

Hi Mich,

 

please see the comments in your text.

 

 

2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh < <ma...@peridale.co.uk> mich@peridale.co.uk>:


Hi,

The block size for HDFS is currently set to 128MB by default. This is
configurable.

Correct, an HDFS client can override the config property and define a different block size for its HDFS blocks.


My point is that I assume this parameter in hadoop-core.xml sets the
block size for both the namenode and the datanodes.

Correct, the block size is an HDFS-wide setting, but in general it is the HDFS client that creates the blocks.
  

However, the storage and
random access pattern for metadata in the namenode is different and suits smaller
block sizes.

The HDFS block size has no impact here. NameNode metadata is held in memory; for reliability it is also written to the local disks of the server.
 


For example, in Linux the OS block size is 4K, which means one HDFS block
of 128MB holds 32K OS blocks. For metadata this may not be
useful and a smaller block size would be more suitable, hence my question.

Remember, the metadata is in memory. The fsimage file, which contains the metadata,
is loaded on startup of the NameNode.

 

Please do not be confused by the two types of block sizes.

 

Hope this helps a bit.

Cheers,

Mirko

 


Thanks,

Mich

 


Re: Total memory available to NameNode

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Mich,

the book Hadoop Operations may be a good start:
https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop%20memory%20namenode&f=false <https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop memory namenode&f=false>

BR,
 AL


> On 26 Mar 2015, at 11:16, Mich Talebzadeh <mi...@peridale.co.uk> wrote:
> 
> Is there any parameter that sets the total memory that NameNode can use?
>  
> Thanks
>  
> Mich Talebzadeh
>  
> http://talebzadehmich.wordpress.com <http://talebzadehmich.wordpress.com/>
>  
> Publications due shortly:
> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache
>  
> NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.
>  
> From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com <ma...@gmail.com>] 
> Sent: 25 March 2015 16:08
> To: user@hadoop.apache.org <ma...@hadoop.apache.org>; mich@peridale.co.uk <ma...@peridale.co.uk>
> Subject: Re: can block size for namenode be different from wdatanode block size?
>  
> Correct, let's say you run the NameNode with just 1 GB of RAM.
> This would be a very strong limitation for the cluster. For each file we need about 200 bytes, and about the same for each block. From that we can estimate the maximum capacity, depending on the HDFS block size and the average file size.
>  
> Cheers,
> Mirko
>  
> 2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
> Hi Mirko,
> 
> Thanks for feedback.
> 
> Since I have worked with in-memory databases, this metadata caching sounds much like an IMDB that caches data at startup from disk-resident storage.
> 
> IMDBs tend to run into issues when the cache cannot hold all the data. Is this the case with the metadata as well?
> 
> Regards,
> 
> Mich
> Let your email find you with BlackBerry from Vodafone
> From: Mirko Kämpf <mirko.kaempf@gmail.com <ma...@gmail.com>> 
> Date: Wed, 25 Mar 2015 15:20:03 +0000
> To: user@hadoop.apache.org <ma...@hadoop.apache.org><user@hadoop.apache.org <ma...@hadoop.apache.org>>
> ReplyTo: user@hadoop.apache.org <ma...@hadoop.apache.org>
> Subject: Re: can block size for namenode be different from datanode block size?
>  
> Hi Mich,
>  
> please see the comments in your text.
> 
>  
>  
> 2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
> 
> Hi,
> 
> The block size for HDFS is currently set to 128MB by default. This is
> configurable.
> Correct, an HDFS client can override the config property and define a different block size for its HDFS blocks.
>> 
>> My point is that I assume this parameter in hadoop-core.xml sets the
>> block size for both the namenode and the datanodes.
> Correct, the block size is an HDFS-wide setting, but in general it is the HDFS client that creates the blocks.
>   
>> However, the storage and
>> random access pattern for metadata in the namenode is different and suits smaller
>> block sizes.
> The HDFS block size has no impact here. NameNode metadata is held in memory; for reliability it is also written to the local disks of the server.
>  
>> 
>> For example, in Linux the OS block size is 4K, which means one HDFS block
>> of 128MB holds 32K OS blocks. For metadata this may not be
>> useful and a smaller block size would be more suitable, hence my question.
> Remember, the metadata is in memory. The fsimage file, which contains the metadata,
> is loaded on startup of the NameNode.
>  
> Please do not be confused by the two types of block sizes.
>  
> Hope this helps a bit.
> Cheers,
> Mirko
>  
>> 
>> Thanks,
>> 
>> Mich


Re: Total memory available to NameNode

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Mich,

the book Hadoop Operations may a good start:
https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop%20memory%20namenode&f=false <https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop memory namenode&f=false>

BR,
 AL


> On 26 Mar 2015, at 11:16, Mich Talebzadeh <mi...@peridale.co.uk> wrote:
> 
> Is there any parameter that sets the total memory that NameNode can use?
>  
> Thanks
>  
> Mich Talebzadeh
>  
> http://talebzadehmich.wordpress.com <http://talebzadehmich.wordpress.com/>
>  
> Publications due shortly:
> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache
>  
> NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.
>  
> From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com <ma...@gmail.com>] 
> Sent: 25 March 2015 16:08
> To: user@hadoop.apache.org <ma...@hadoop.apache.org>; mich@peridale.co.uk <ma...@peridale.co.uk>
> Subject: Re: can block size for namenode be different from wdatanode block size?
>  
> Correct, let's say you run the NameNode with just 1GB of RAM.
> This would be a very strong limitation for the cluster. For each file we need about 200 bytes and for each block as well. Now we can estimate the max. capacity depending on HDFS-Blocksize and average File size.
>  
> Cheers,
> Mirko
>  
> 2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
> Hi Mirko,
> 
> Thanks for feedback.
> 
> Since i have worked with in memory databases, this metadata caching sounds more like an IMDB that caches data at start up from disk resident storage.
> 
> IMDBs tend to get issues when the cache cannot hold all data. Is this the case the case with metada as well?
> 
> Regards,
> 
> Mich
> Let your email find you with BlackBerry from Vodafone
> From: Mirko Kämpf <mirko.kaempf@gmail.com <ma...@gmail.com>> 
> Date: Wed, 25 Mar 2015 15:20:03 +0000
> To: user@hadoop.apache.org <ma...@hadoop.apache.org><user@hadoop.apache.org <ma...@hadoop.apache.org>>
> ReplyTo: user@hadoop.apache.org <ma...@hadoop.apache.org>
> Subject: Re: can block size for namenode be different from datanode block size?
>  
> Hi Mich,
>  
> please see the comments in your text.
> 
>  
>  
> 2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk <ma...@peridale.co.uk>>:
> 
> Hi,
> 
> The block size for HDFS is currently set to 128MB by defauilt. This is
> configurable.
> Correct, an HDFS client can overwrite the cfg-property and define a different block size for HDFS blocks. 
>> 
>> My point is that I assume this  parameter in hadoop-core.xml sets the
>> block size for both namenode and datanode. 
> Correct, the block-size is a "HDFS wide setting" but in general the HDFS-client makes the blocks.
>   
>> However, the storage and
>> random access for metadata in nsamenode is different and suits smaller
>> block sizes.
> HDFS blocksize has no impact here. NameNode metadata is held in memory. For reliability it is dumped to local discs of the server.
>  
>> 
>> For example in Linux the OS block size is 4k which means one HTFS blopck
>> size  of 128MB can hold 32K OS blocks. For metadata this may not be
>> useful and smaller block size will be suitable and hence my question.
> Remember, metadata is in memory. The fsimage-file, which contains the metadata 
> is loaded on startup of the NameNode.
>  
> Please be not confused by the two types of block-sizes.
>  
> Hope this helps a bit.
> Cheers,
> Mirko
>  
>> 
>> Thanks,
>> 
>> Mich


Re: Total memory available to NameNode

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Mich,

the book Hadoop Operations may a good start:
https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop%20memory%20namenode&f=false <https://books.google.de/books?id=drbI_aro20oC&pg=PA308&lpg=PA308&dq=hadoop+memory+namenode&source=bl&ots=t_yltgk_i7&sig=_6LXkcSjfuwwqfz_kDGDi9ytgqU&hl=en&sa=X&ei=Nt8TVfn9AcjLPZyXgKAC&ved=0CFYQ6AEwBg#v=onepage&q=hadoop memory namenode&f=false>

BR,
 AL


> On 26 Mar 2015, at 11:16, Mich Talebzadeh <mi...@peridale.co.uk> wrote:
> 
> Is there any parameter that sets the total memory that NameNode can use?
>  
> Thanks
>  
> Mich Talebzadeh
>  
> http://talebzadehmich.wordpress.com <http://talebzadehmich.wordpress.com/>
>  
> Publications due shortly:
> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache
>  
> NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only, if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.
>  
> From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com] 
> Sent: 25 March 2015 16:08
> To: user@hadoop.apache.org; mich@peridale.co.uk
> Subject: Re: can block size for namenode be different from datanode block size?
>  
> Correct, let's say you run the NameNode with just 1GB of RAM.
> This would be a severe limitation for the cluster. For each file we need about 200 bytes of heap, and about the same for each block. Now we can estimate the maximum capacity depending on the HDFS block size and the average file size.
>  
> Cheers,
> Mirko
>  
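
A rough back-of-the-envelope version of the estimate above, assuming about 200 bytes of heap per file plus about 200 bytes per block, and one 128 MB block per file on average (ballpark figures only):

    1 GB heap                  ~ 1,073,741,824 bytes
    heap per file + its block  ~ 200 + 200 = 400 bytes
    max. files (1 block each)  ~ 1,073,741,824 / 400 ~ 2.7 million
    addressable data           ~ 2.7 million x 128 MB ~ 330 TB

So a 1 GB heap caps the namespace at a few million files and blocks, regardless of how much disk the DataNodes have, and a smaller average file size lowers that ceiling further.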
> 2015-03-25 15:34 GMT+00:00 Mich Talebzadeh <mich@peridale.co.uk>:
> Hi Mirko,
> 
> Thanks for feedback.
> 
> Since I have worked with in-memory databases, this metadata caching sounds more like an IMDB that caches data at start-up from disk-resident storage.
> 
> IMDBs tend to run into issues when the cache cannot hold all of the data. Is this also the case with metadata?
> 
> Regards,
> 
> Mich
> Let your email find you with BlackBerry from Vodafone
> From: Mirko Kämpf <mirko.kaempf@gmail.com> 
> Date: Wed, 25 Mar 2015 15:20:03 +0000
> To: user@hadoop.apache.org
> ReplyTo: user@hadoop.apache.org
> Subject: Re: can block size for namenode be different from datanode block size?
>  
> Hi Mich,
>  
> please see the comments in your text.
> 
>  
>  
> 2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk>:
> 
> Hi,
> 
> The block size for HDFS is currently set to 128MB by default. This is
> configurable.
> Correct, an HDFS client can override the cfg-property and define a different block size for HDFS blocks. 
>> 
>> My point is that I assume this parameter in hadoop-core.xml sets the
>> block size for both namenode and datanode. 
> Correct, the block size is an "HDFS-wide setting", but in general the HDFS client creates the blocks.
>   
>> However, the storage and
>> random access for metadata in the namenode is different and suits smaller
>> block sizes.
> HDFS block size has no impact here. NameNode metadata is held in memory. For reliability it is persisted to the local disks of the server.
>  
>> 
>> For example, in Linux the OS block size is 4k, which means one HDFS block
>> of 128MB spans 32K OS blocks. For metadata this may not be
>> useful and a smaller block size would be suitable, hence my question.
> Remember, metadata is in memory. The fsimage file, which contains the metadata, 
> is loaded on startup of the NameNode.
>  
> Please do not be confused by the two types of block sizes.
>  
> Hope this helps a bit.
> Cheers,
> Mirko
>  
>> 
>> Thanks,
>> 
>> Mich

