Posted to hdfs-user@hadoop.apache.org by Abbass MAROUNI <ab...@virtualscale.fr> on 2014/06/18 18:20:23 UTC

MapReduce Memory Utilization

Hi all,

I have a Hadoop cluster with 4 DataNodes (each also running a NodeManager) and 1 
NameNode (also running the ResourceManager). I'm launching an MR job (identity 
mapper and identity reducer) with the relevant memory settings set to appropriate 
values: mapreduce.[map|reduce].memory.mb, the child JVM heap options 
(mapreduce.[map|reduce].java.opts), the map sort buffer (mapreduce.task.io.sort.mb), 
the reduce-side shuffle buffers, and so on.
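
For reference, here is a minimal sketch of how I apply these settings when
submitting the job. The class name and the concrete values are placeholders,
not my actual configuration; the property names are the standard Hadoop 2
(YARN) ones:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IdentityJobSubmitter {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // YARN container sizes requested for each task (MB)
        conf.setInt("mapreduce.map.memory.mb", 2048);
        conf.setInt("mapreduce.reduce.memory.mb", 4096);

        // Child JVM heaps, kept comfortably below the container sizes
        conf.set("mapreduce.map.java.opts", "-Xmx1536m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx3072m");

        // Map-side sort buffer, kept well below the map heap
        conf.setInt("mapreduce.task.io.sort.mb", 512);

        Job job = Job.getInstance(conf, "identity-mr");
        job.setJarByClass(IdentityJobSubmitter.class);
        // The default Mapper and Reducer are identity pass-throughs,
        // so no setMapperClass/setReducerClass calls are needed.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}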

Does the framework guarantee that I will not run into an "Out of memory" 
situation for any input dataset size? In other words, are the only things that 
can lead to an "Out of memory" on mappers or reducers:
Bad memory settings (for example, a map sort buffer, mapreduce.task.io.sort.mb, set larger than the map task's JVM heap; see the sketch after this list)
Bad Mapper/Reducer code (user code)
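
To make the first case concrete, here is a sketch of the kind of
misconfiguration I have in mind (the class name and values are illustrative
only). The sort buffer is allocated as an in-heap byte array by each map task,
so a buffer that cannot fit in the heap fails immediately:

import org.apache.hadoop.conf.Configuration;

public class BadSortBufferConfig {
    public static Configuration build() {
        Configuration bad = new Configuration();
        bad.setInt("mapreduce.map.memory.mb", 1024);     // YARN container size (MB)
        bad.set("mapreduce.map.java.opts", "-Xmx512m");  // JVM heap inside the container
        // mapreduce.task.io.sort.mb is backed by a byte buffer inside the map
        // task's heap; a 1024 MB buffer cannot fit in a 512 MB heap, so every
        // map task fails with java.lang.OutOfMemoryError at startup.
        bad.setInt("mapreduce.task.io.sort.mb", 1024);
        return bad;
    }
}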

Best Regards,