Posted to common-user@hadoop.apache.org by ma qiang <ma...@gmail.com> on 2008/02/01 05:52:20 UTC

About an exception in a MapReduce program?

Hi all:
      I have run into the following problem:
      My map function reads from a table in HBase, merges several
strings, and finally saves the merged strings into another HBase
table. The strings are numerous and long. After about ten minutes,
Hadoop prints an "out of memory" error saying the Java heap is not
big enough. When I test the program with small strings there is no
error, but once the number and length of the strings grow, the error
occurs. I installed Hadoop in non-distributed (standalone) mode, and
my machine has 2 GB of memory, which in theory should be plenty for
this simple program.
     Can anyone tell me why?
     Thank you very much!


Best Wishes!

Re: About an exception in a MapReduce program?

Posted by Peeyush Bishnoi <pe...@yahoo-inc.com>.
You can also set the HBase heap size in the hbase-env.sh file itself,
which is located at $HBASE_HOME/conf/hbase-env.sh
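
For example (a minimal sketch: HBASE_HEAPSIZE is the knob hbase-env.sh
exposes for this, its value is interpreted in MB, and 1000 is only an
illustrative figure):

# In $HBASE_HOME/conf/hbase-env.sh: run the HBase JVMs with a 1000 MB heap.
export HBASE_HEAPSIZE=1000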

---
Peeyush Bishnoi

On Fri, 2008-02-01 at 10:55 +0530, Jaideep Dhok wrote:

> You can change the max memory used by the JVM using the -Xmx option. There is
> also a HADOOP_HEAPSIZE option in hadoop-env.sh, which you can increase.
> 
> On Feb 1, 2008 10:22 AM, ma qiang <ma...@gmail.com> wrote:
> 
> > [...]

Re: About an exception in a MapReduce program?

Posted by Arun C Murthy <ac...@yahoo-inc.com>.
On Jan 31, 2008, at 9:25 PM, Jaideep Dhok wrote:

> You can change the max memory used by the JVM using the -Xmx option. There is
> also a HADOOP_HEAPSIZE option in hadoop-env.sh, which you can increase.
>

They affect different components:

HADOOP_HEAPSIZE in conf/hadoop-env.sh controls the heap size for the
Hadoop daemons on a given node (NN/DN/JT/TT) -
http://hadoop.apache.org/core/docs/r0.15.3/cluster_setup.html#Configuring+the+Environment+of+the+Hadoop+Daemons

To give your map/reduce tasks more memory, configure the
*mapred.child.java.opts* knob in hadoop-site.xml:

<property>
   <name>mapred.child.java.opts</name>
   <value>-Xmx200m</value>
</property>

Bump the -Xmx value up to 512m or higher, depending on your needs.
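
The same property can also be set per job from driver code. A minimal
sketch using the old JobConf API (MyJob is a hypothetical driver class;
setting the property on the JobConf simply overrides the site-wide
value for that one job):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MyJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MyJob.class);
    // Ask for a 512 MB heap in each child task JVM (illustrative value).
    conf.set("mapred.child.java.opts", "-Xmx512m");
    // ... set input/output paths, mapper, and reducer here, then:
    JobClient.runJob(conf);
  }
}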

I've also opened http://issues.apache.org/jira/browse/HADOOP-2762 to  
help users...

Arun

> On Feb 1, 2008 10:22 AM, ma qiang <ma...@gmail.com> wrote:
>
>> [...]
> -- 
> Jaideep Dhok


Re: About an exception in a MapReduce program?

Posted by Jaideep Dhok <ja...@gmail.com>.
You can change the max memory used by the JVM using the -Xmx option. There is
also a HADOOP_HEAPSIZE option in hadoop-env.sh, which you can increase.
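
For example (a minimal sketch: HADOOP_HEAPSIZE lives in
conf/hadoop-env.sh and its value is interpreted in MB; 2000 is only an
illustrative figure):

# In conf/hadoop-env.sh: run the Hadoop JVMs with a 2000 MB heap.
export HADOOP_HEAPSIZE=2000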

On Feb 1, 2008 10:22 AM, ma qiang <ma...@gmail.com> wrote:

> [...]



-- 
Jaideep Dhok