Posted to common-user@hadoop.apache.org by Saptarshi Guha <sa...@gmail.com> on 2008/12/28 22:00:27 UTC
OutOfMemory Error, in spite of large amounts provided
Hello,
I have worker machines with 32GB of RAM and have allocated 16GB to the heap size:
==hadoop-env.sh==
export HADOOP_HEAPSIZE=16384
==hadoop-site.xml==
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx16384m</value>
</property>
The same code runs fine outside Hadoop, but it fails with an
OutOfMemory error when run as a map task.
Are there other places where I can specify the memory available to the map tasks?
Regards
Saptarshi
--
Saptarshi Guha - saptarshi.guha@gmail.com
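A note on the configuration above (not stated in the thread, so treat it as an assumption to verify against your Hadoop version): HADOOP_HEAPSIZE sets the heap of the Hadoop daemons themselves (NameNode, JobTracker, TaskTracker), not of the spawned task JVMs; only mapred.child.java.opts reaches the children, and a job-side configuration can silently override it back to the -Xmx200m default unless the property is marked final. A sketch for hadoop-site.xml:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx16384m</value>
  <!-- final prevents per-job configurations from overriding this value -->
  <final>true</final>
</property>
```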
Re: OutOfMemory Error, in spite of large amounts provided
Posted by Amareshwari Sriramadasu <am...@yahoo-inc.com>.
Saptarshi Guha wrote:
> Caught it in action.
> Running ps -e -o 'vsz pid ruser args' |sort -nr|head -5
> on a machine where the map task was running
> 04812 16962 sguha /home/godhuli/custom/jdk1.6.0_11/jre/bin/java
> -Djava.library.path=/home/godhuli/custom/hadoop/bin/../lib/native/Linux-amd64-64:/home/godhuli/custom/hdfs/mapred/local/taskTracker/jobcache/job_200812282102_0003/attempt_200812282102_0003_m_000000_0/work
> -Xmx200m -Djava.io.tmpdir=/home/godhuli/custom/hdfs/mapred/local/taskTracker/jobcache/job_200812282102_0003/attempt_200812282102_0003_m_000000_0/work/tmp
> -classpath /attempt_200812282102_0003_m_000000_0/work
> -Dhadoop.log.dir=/home/godhuli/custom/hadoop/bin/../logs
> -Dhadoop.root.logger=INFO,TLA
> -Dhadoop.tasklog.taskid=attempt_200812282102_0003_m_000000_0
> -Dhadoop.tasklog.totalLogFileSize=0 org.apache.hadoop.mapred.Child
> 127.0.0.1 40443 attempt_200812282102_0003_m_000000_0 1525207782
>
> Also, the reducer only used 540MB. I notice -Xmx200m was passed; how
> do I change it?
> Regards
> Saptarshi
>
>
You can set the configuration property mapred.child.java.opts to -Xmx540m. The -Xmx200m you saw is that property's default value, which suggests your hadoop-site.xml setting is not reaching the job.
Thanks
Amareshwari
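Expanding on that reply (a sketch; the jar and class names here are hypothetical): when a job is submitted through ToolRunner/GenericOptionsParser, the property can also be supplied per job on the command line, which avoids editing hadoop-site.xml on every node:

```shell
# Hypothetical jar/class names; -D properties are picked up by
# ToolRunner-based jobs before the job is submitted.
hadoop jar myjob.jar org.example.MyJob \
    -Dmapred.child.java.opts=-Xmx540m \
    input/ output/
```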
> On Sun, Dec 28, 2008 at 10:19 PM, Saptarshi Guha
> <sa...@gmail.com> wrote:
>
>> On Sun, Dec 28, 2008 at 4:33 PM, Brian Bockelman <bb...@cse.unl.edu> wrote:
>>
>>> Hey Saptarshi,
>>>
>>> Watch the running child process while using "ps", "top", or Ganglia
>>> monitoring. Does the map task actually use 16GB of memory, or is the memory
>>> not getting set properly?
>>>
>>> Brian
>>>
>> I haven't figured out how to run Ganglia, and the children quit
>> before I can see their memory usage. The trackers all use 16GB (from
>> the ps command). However, I noticed some use only 512MB (when I
>> managed to catch them in time).
>>
>> Regards
Re: OutOfMemory Error, in spite of large amounts provided
Posted by Saptarshi Guha <sa...@gmail.com>.
Caught it in action.
Running ps -e -o 'vsz pid ruser args' |sort -nr|head -5
on a machine where the map task was running
04812 16962 sguha /home/godhuli/custom/jdk1.6.0_11/jre/bin/java
-Djava.library.path=/home/godhuli/custom/hadoop/bin/../lib/native/Linux-amd64-64:/home/godhuli/custom/hdfs/mapred/local/taskTracker/jobcache/job_200812282102_0003/attempt_200812282102_0003_m_000000_0/work
-Xmx200m -Djava.io.tmpdir=/home/godhuli/custom/hdfs/mapred/local/taskTracker/jobcache/job_200812282102_0003/attempt_200812282102_0003_m_000000_0/work/tmp
-classpath /attempt_200812282102_0003_m_000000_0/work
-Dhadoop.log.dir=/home/godhuli/custom/hadoop/bin/../logs
-Dhadoop.root.logger=INFO,TLA
-Dhadoop.tasklog.taskid=attempt_200812282102_0003_m_000000_0
-Dhadoop.tasklog.totalLogFileSize=0 org.apache.hadoop.mapred.Child
127.0.0.1 40443 attempt_200812282102_0003_m_000000_0 1525207782
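The ps invocation above can be tightened a little (a sketch, assuming GNU/Linux procps, whose --sort flag sorts inside ps and so keeps the header row intact):

```shell
# Show the five largest processes by virtual memory size (vsz, in KiB).
# --sort=-vsz sorts descending, so no external sort pipeline is needed.
ps -e -o vsz,pid,ruser,args --sort=-vsz | head -5
```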
Also, the reducer only used 540MB. I notice -Xmx200m was passed; how
do I change it?
Regards
Saptarshi
On Sun, Dec 28, 2008 at 10:19 PM, Saptarshi Guha
<sa...@gmail.com> wrote:
> On Sun, Dec 28, 2008 at 4:33 PM, Brian Bockelman <bb...@cse.unl.edu> wrote:
>> Hey Saptarshi,
>>
>> Watch the running child process while using "ps", "top", or Ganglia
>> monitoring. Does the map task actually use 16GB of memory, or is the memory
>> not getting set properly?
>>
>> Brian
>
> I haven't figured out how to run Ganglia, and the children quit
> before I can see their memory usage. The trackers all use 16GB (from
> the ps command). However, I noticed some use only 512MB (when I
> managed to catch them in time).
>
> Regards
>
--
Saptarshi Guha - saptarshi.guha@gmail.com
Re: OutOfMemory Error, in spite of large amounts provided
Posted by Saptarshi Guha <sa...@gmail.com>.
On Sun, Dec 28, 2008 at 4:33 PM, Brian Bockelman <bb...@cse.unl.edu> wrote:
> Hey Saptarshi,
>
> Watch the running child process while using "ps", "top", or Ganglia
> monitoring. Does the map task actually use 16GB of memory, or is the memory
> not getting set properly?
>
> Brian
I haven't figured out how to run Ganglia, and the children quit
before I can see their memory usage. The trackers all use 16GB (from
the ps command). However, I noticed some use only 512MB (when I
managed to catch them in time).
Regards
Re: OutOfMemory Error, in spite of large amounts provided
Posted by Brian Bockelman <bb...@cse.unl.edu>.
Hey Saptarshi,
Watch the running child process while using "ps", "top", or Ganglia
monitoring. Does the map task actually use 16GB of memory, or is the
memory not getting set properly?
Brian
On Dec 28, 2008, at 3:00 PM, Saptarshi Guha wrote:
> Hello,
> I have worker machines with 32GB of RAM and have allocated 16GB to the heap size:
> ==hadoop-env.sh==
> export HADOOP_HEAPSIZE=16384
>
> ==hadoop-site.xml==
> <property>
> <name>mapred.child.java.opts</name>
> <value>-Xmx16384m</value>
> </property>
>
> The same code runs fine outside Hadoop, but it fails with an
> OutOfMemory error when run as a map task.
> Are there other places where I can specify the memory available to the map tasks?
>
> Regards
> Saptarshi
>
> --
> Saptarshi Guha - saptarshi.guha@gmail.com