Posted to hdfs-user@hadoop.apache.org by Dhanasekaran Anbalagan <bu...@gmail.com> on 2013/03/11 15:11:13 UTC

Best way to tune the Hadoop heap size parameters

Hi Guys,

We have a problem with our production Hadoop cluster: most of the time we see
Java heap size issues, and one of the Hadoop components hits an
OutOfMemoryError.

2013-03-08 08:01:10,749 WARN org.apache.hadoop.ipc.Server: IPC Server
handler 57 on 8020, call
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReport from
172.16.30.139:57087: error: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2760)
        at java.util.Arrays.copyOf(Arrays.java:2734)


Is there any mechanism to fine-tune the NameNode and JobTracker heap sizes?
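
At the moment the only knobs I know of are the per-daemon JVM options in
hadoop-env.sh, along the lines of the sketch below (the -Xmx values are just
placeholders, not settings we actually run with):

    # hadoop-env.sh (Hadoop 1.x style; values below are placeholders)
    export HADOOP_HEAPSIZE=2000                                       # default max heap, in MB, for all daemons
    export HADOOP_NAMENODE_OPTS="-Xmx8g $HADOOP_NAMENODE_OPTS"        # NameNode heap
    export HADOOP_JOBTRACKER_OPTS="-Xmx4g $HADOOP_JOBTRACKER_OPTS"    # JobTracker heap
    export HADOOP_DATANODE_OPTS="-Xmx1g $HADOOP_DATANODE_OPTS"        # DataNode heap

Is overriding -Xmx per daemon like this the recommended approach, or is there
a better way?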

In my current scenario, we have 181 TB of DFS capacity, and we keep adding
data to the cluster.

Is there any calculation [like a formula] that relates data size to the
NameNode heap size and the DataNode heap size?
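
The only rule of thumb I have come across so far (please correct me if this is
wrong) is that the NameNode keeps every file, directory and block in heap at
very roughly 150 bytes per object, often quoted as about 1 GB of heap per
million blocks. A rough back-of-the-envelope for our cluster, assuming the
default 64 MB block size and replication factor 3, would be:

    raw DFS size            : 181 TB
    unique data (repl. 3)   : 181 TB / 3   ~= 60 TB
    block count (64 MB)     : 60 TB / 64 MB ~= 1,000,000 blocks
    NameNode heap (blocks)  : ~1 GB, plus the file/directory objects on top

Is this the right way to think about it, and is there a similar guideline for
the DataNode heap?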

Please guide me.

-Dhanasekaran.

Did I learn something today? If not, I wasted it.