Posted to common-user@hadoop.apache.org by "Rahul.V." <gr...@gmail.com> on 2010/08/16 09:04:48 UTC

Error with Heap Space.

Hi,
I'm not sure if this is the right place to post this question. We tried
implementing M/R on a different distributed file system developed in house.
Every time I run more than 30 threads I get an error saying
"Java.lang.outofmemory: Heap exception"
Is the heap error due to the number of threads or more intermediate data
getting generated?

-- 
Regards,
R.V.

Re: Error with Heap Space.

Posted by "Rahul.V." <gr...@gmail.com>.
Thanks a lot for your reply. :)

On Mon, Aug 16, 2010 at 8:39 PM, Vitaliy Semochkin <vi...@gmail.com> wrote:

> Sorry for being inaccurate.
> if your question is
> "Is the heap error due to the number of threads or more intermediate data
> getting generated?"
> when you have 30 threads running, the answer is: it is definitely not the
> threads, because 30 threads consume very little memory.
>
> Regards,
> Vitaliy S
>
> On Mon, Aug 16, 2010 at 1:38 PM, Rahul.V. <gr...@gmail.com>
> wrote:
> > Hi,
> > I think you didn't get my question. I am not working on Hadoop but
> > implementing MapReduce on a different file system developed within my
> > workplace.
> > I posted this question here because I thought it was related.
> >
> > On Mon, Aug 16, 2010 at 2:56 PM, Vitaliy Semochkin <vitaliy.se@gmail.com>
> > wrote:
> >
> >> Hello Rahul,
> >>
> >> Is the heap error due to the number of threads or more intermediate data
> >> getting generated?
> >>
> >> Threads themselves do not consume a lot of memory. I think your problem
> >> is in the intermediate data.
> >> However, in that case I would expect an OutOfMemory exception in the
> >> M/R task rather than in Hadoop itself.
> >>
> >> If the problem happens in Hadoop itself, increase the HADOOP_HEAPSIZE
> >> variable to the desired value; the variable is referenced in the
> >> bin/hadoop script.
> >>
> >> Please let me know the value of mapred.job.reuse.jvm.num.tasks in your
> >> cluster.
> >>
> >> If the OutOfMemory exception happens in a MapReduce task, increase the
> >> -Xmx value.
> >>
> >> For instance, I have 1024 MB per JVM:
> >> -- from mapred-site.xml - file------------------------
> >> <property>
> >>  <name>mapred.child.java.opts</name>
> >>  <value>-Xmx1024m -XX:-UseGCOverheadLimit</value>
> >>  <!-- Not marked as final so jobs can include JVM debugging options -->
> >> </property>
> >> -----------------------------------------------------------
> >>
> >> Regards,
> >> Vitaliy S
> >>
> >> On Mon, Aug 16, 2010 at 11:04 AM, Rahul.V. <greatness.hardness@gmail.com>
> >> wrote:
> >> > Hi,
> >> > I'm not sure if this is the right place to post this question. We tried
> >> > implementing M/R on a different distributed file system developed in
> >> > house.
> >> > Every time I run more than 30 threads I get an error saying
> >> > "java.lang.OutOfMemoryError: Java heap space".
> >> > Is the heap error due to the number of threads or more intermediate
> >> > data getting generated?
> >> >
> >> > --
> >> > Regards,
> >> > R.V.
> >> >
> >>
> >
> >
> >
> > --
> > Regards,
> > R.V.
> >
>



-- 
Regards,
R.V.

Re: Error with Heap Space.

Posted by Vitaliy Semochkin <vi...@gmail.com>.
Sorry for being inaccurate.
if your question is
"Is the heap error due to the number of threads or more intermediate data
getting generated?"
when you have 30 threads running, the answer is: it is definitely not the
threads, because 30 threads consume very little memory.

Regards,
Vitaliy S

On Mon, Aug 16, 2010 at 1:38 PM, Rahul.V. <gr...@gmail.com> wrote:
> Hi,
> I think you didn't get my question. I am not working on Hadoop but
> implementing MapReduce on a different file system developed within my
> workplace.
> I posted this question here because I thought it was related.
>
> On Mon, Aug 16, 2010 at 2:56 PM, Vitaliy Semochkin <vi...@gmail.com> wrote:
>
>> Hello Rahul,
>>
>> Is the heap error due to the number of threads or more intermediate data
>> getting generated?
>>
>> Threads themselves do not consume a lot of memory. I think your problem
>> is in the intermediate data.
>> However, in that case I would expect an OutOfMemory exception in the
>> M/R task rather than in Hadoop itself.
>>
>> If the problem happens in Hadoop itself, increase the HADOOP_HEAPSIZE
>> variable to the desired value; the variable is referenced in the
>> bin/hadoop script.
>>
>> Please let me know the value of mapred.job.reuse.jvm.num.tasks in your
>> cluster.
>>
>> If the OutOfMemory exception happens in a MapReduce task, increase the
>> -Xmx value.
>>
>> For instance, I have 1024 MB per JVM:
>> -- from mapred-site.xml - file------------------------
>> <property>
>>  <name>mapred.child.java.opts</name>
>>  <value>-Xmx1024m -XX:-UseGCOverheadLimit</value>
>>  <!-- Not marked as final so jobs can include JVM debugging options -->
>> </property>
>> -----------------------------------------------------------
>>
>> Regards,
>> Vitaliy S
>>
>> On Mon, Aug 16, 2010 at 11:04 AM, Rahul.V. <gr...@gmail.com>
>> wrote:
>> > Hi,
>> > I'm not sure if this is the right place to post this question. We tried
>> > implementing M/R on a different distributed file system developed in
>> > house.
>> > Every time I run more than 30 threads I get an error saying
>> > "java.lang.OutOfMemoryError: Java heap space".
>> > Is the heap error due to the number of threads or more intermediate data
>> > getting generated?
>> >
>> > --
>> > Regards,
>> > R.V.
>> >
>>
>
>
>
> --
> Regards,
> R.V.
>

Re: Error with Heap Space.

Posted by "Rahul.V." <gr...@gmail.com>.
Hi,
I think you didn't get my question. I am not working on Hadoop but
implementing MapReduce on a different file system developed within my
workplace.
I posted this question here because I thought it was related.

On Mon, Aug 16, 2010 at 2:56 PM, Vitaliy Semochkin <vi...@gmail.com> wrote:

> Hello Rahul,
>
> Is the heap error due to the number of threads or more intermediate data
> getting generated?
>
> Threads themselves do not consume a lot of memory. I think your problem
> is in the intermediate data.
> However, in that case I would expect an OutOfMemory exception in the
> M/R task rather than in Hadoop itself.
>
> If the problem happens in Hadoop itself, increase the HADOOP_HEAPSIZE
> variable to the desired value; the variable is referenced in the
> bin/hadoop script.
>
> Please let me know the value of mapred.job.reuse.jvm.num.tasks in your
> cluster.
>
> If the OutOfMemory exception happens in a MapReduce task, increase the
> -Xmx value.
>
> For instance, I have 1024 MB per JVM:
> -- from mapred-site.xml - file------------------------
> <property>
>  <name>mapred.child.java.opts</name>
>  <value>-Xmx1024m -XX:-UseGCOverheadLimit</value>
>  <!-- Not marked as final so jobs can include JVM debugging options -->
> </property>
> -----------------------------------------------------------
>
> Regards,
> Vitaliy S
>
> On Mon, Aug 16, 2010 at 11:04 AM, Rahul.V. <gr...@gmail.com>
> wrote:
> > Hi,
> > I'm not sure if this is the right place to post this question. We tried
> > implementing M/R on a different distributed file system developed in
> > house.
> > Every time I run more than 30 threads I get an error saying
> > "java.lang.OutOfMemoryError: Java heap space".
> > Is the heap error due to the number of threads or more intermediate data
> > getting generated?
> >
> > --
> > Regards,
> > R.V.
> >
>



-- 
Regards,
R.V.

Re: Error with Heap Space.

Posted by Vitaliy Semochkin <vi...@gmail.com>.
Hello Rahul,

Is the heap error due to the number of threads or more intermediate data
getting generated?

Threads themselves do not consume a lot of memory. I think your problem
is in the intermediate data.
However, in that case I would expect an OutOfMemory exception in the
M/R task rather than in Hadoop itself.

If the problem happens in Hadoop itself, increase the HADOOP_HEAPSIZE
variable to the desired value; the variable is referenced in the
bin/hadoop script.
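
As a rough sketch (the 2000 here is only an illustration, in MB; pick what
fits your nodes), the variable is usually exported in conf/hadoop-env.sh:
---- conf/hadoop-env.sh (sketch) --------------------------
# Maximum heap, in MB, used by bin/hadoop when it launches the daemons
export HADOOP_HEAPSIZE=2000
-----------------------------------------------------------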


Please let me know the value of mapred.job.reuse.jvm.num.tasks in your cluster.
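
If you want to experiment with JVM reuse, a minimal sketch for
mapred-site.xml (the -1 value is just one option to try; it means a task
JVM is reused for an unlimited number of tasks of the same job, while the
default of 1 disables reuse):
---- mapred-site.xml (sketch) -----------------------------
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <!-- -1 = unlimited reuse within a job; 1 = a fresh JVM per task -->
  <value>-1</value>
</property>
-----------------------------------------------------------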

If the OutOfMemory exception happens in a MapReduce task, increase the
-Xmx value.

For instance, I have 1024 MB per JVM:
-- from mapred-site.xml - file------------------------
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m -XX:-UseGCOverheadLimit</value>
  <!-- Not marked as final so jobs can include JVM debugging options -->
</property>
-----------------------------------------------------------
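
To confirm which JVM is actually hitting its limit, one option is to log
the configured maximum heap from inside the task (or from your own
framework's worker threads). A small, self-contained Java sketch:
---- HeapReport.java (sketch) -----------------------------
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory()/freeMemory() show current usage
        long maxMb   = rt.maxMemory()   / (1024 * 1024);
        long totalMb = rt.totalMemory() / (1024 * 1024);
        long freeMb  = rt.freeMemory()  / (1024 * 1024);
        System.out.println("max=" + maxMb + "MB total=" + totalMb
                + "MB free=" + freeMb + "MB");
    }
}
-----------------------------------------------------------
If the reported max is much smaller than the intermediate data a single
thread is expected to hold, raising the heap limit, not reducing the thread
count, is the fix.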

Regards,
Vitaliy S

On Mon, Aug 16, 2010 at 11:04 AM, Rahul.V. <gr...@gmail.com> wrote:
> Hi,
> I'm not sure if this is the right place to post this question. We tried
> implementing M/R on a different distributed file system developed in house.
> Every time I run more than 30 threads I get an error saying
> "Java.lang.outofmemory: Heap exception"
> Is the heap error due to the number of threads or more intermediate data
> getting generated?
>
> --
> Regards,
> R.V.
>