Posted to common-user@hadoop.apache.org by Suresh V <ve...@gmail.com> on 2016/01/21 22:52:05 UTC

OutOfMemoryError: Java heap space when running MapReduce

We have a MapReduce job that processes text files packaged inside a zip file.
The program has run fine on zip files of up to 40GB.

When we gave an 80MB zip file as input (it contains a single 1.2GB text
file), the job failed with the error below:

2016-01-21 14:47:19,384 FATAL [main]
org.apache.hadoop.mapred.YarnChild: Error running child :
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2271)


We suspect the map task is exceeding the memory available in its container;
the Arrays.copyOf frame points to an in-memory buffer growing past the heap limit.
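
If it helps to narrow things down: an OOM at Arrays.copyOf is the typical
signature of a growing in-memory buffer, since ByteArrayOutputStream expands
by copying its backing array. Below is a simplified sketch of the kind of
read loop that produces it (illustrative only, not our exact code). It would
also explain why much larger zips with smaller individual entries ran fine
while a single 1.2GB entry does not:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class ZipEntrySketch {
        // Reads one whole zip entry into memory. For a 1.2GB text file this
        // needs well over 1.2GB of heap: ByteArrayOutputStream grows by
        // copying its backing array (the Arrays.copyOf in the stack trace),
        // and toByteArray() makes one more full copy at the end.
        static byte[] readEntry(ZipInputStream zin) throws IOException {
            ZipEntry entry = zin.getNextEntry(); // e.g. the 1.2GB text file
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = zin.read(buf)) != -1) {
                out.write(buf, 0, n); // backing array doubles via Arrays.copyOf
            }
            return out.toByteArray();
        }
    }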

Could you please advise which parameters we should set on the job so that
it can process this zip file?

This is running on YARN.
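
For reference, here is a sketch of the settings we understand control
map-task memory on YARN. The property names are the standard Hadoop 2.x
ones, but the values shown are only guesses on our part:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobSetupSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Memory YARN allocates to each map task's container, in MB.
            conf.set("mapreduce.map.memory.mb", "4096");
            // Max JVM heap for the map task; commonly kept at roughly
            // 75-80% of the container size so non-heap usage does not
            // get the container killed.
            conf.set("mapreduce.map.java.opts", "-Xmx3276m");
            Job job = Job.getInstance(conf, "zip processing");
            // ... input/output formats, mapper class, etc. ...
        }
    }

Are these the right knobs to turn, and are values in this range reasonable
for a single 1.2GB zip entry?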


Please let me know if any additional information is required.

Thank you,

Suresh.