Posted to common-user@hadoop.apache.org by madhuri72 <ak...@gmail.com> on 2009/02/26 02:05:25 UTC

Could not reserve enough space for heap in JVM

Hi,

I'm trying to run Hadoop 0.19 on Ubuntu with Java build 1.6.0_11-b03.
I'm getting the following error:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
make: *** [run] Error 1

I searched the forums and found some advice on setting the VM's memory via
the javac options 

-J-Xmx512m or -J-Xms256m

I have tried this with various sizes between 128 and 1024 MB, adding the
flag when I compile the source. It isn't working for me, and allocating
1 GB of memory is a lot for the machine I'm using. Is there some way to
make this work with Hadoop? Is there somewhere else I can set the heap
memory?

Thanks.





-- 
View this message in context: http://www.nabble.com/Could-not-reserve-enough-space-for-heap-in-JVM-tp22215608p22215608.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


Re: Could not reserve enough space for heap in JVM

Posted by Arijit Mukherjee <ar...@gmail.com>.
This is guesswork on my part.

Are the namenode, datanode, jobtracker and tasktracker separate Java
processes, each with its own heap? (Running jps would list them
separately - I suspect they are.) If so, and the total memory is X while
each daemon is allocated a heap of Y, then 4*Y must stay well below X.

I'm not sure - this is speculation. Can anyone confirm?
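
For example, a rough sanity check (the numbers are only illustrative):

jps      # in pseudo-distributed mode this typically lists NameNode, DataNode,
         # SecondaryNameNode, JobTracker and TaskTracker
free -m  # total and available memory in MB

# With X = 2048 MB total and 4-5 daemons, a per-daemon heap Y of
# 256-384 MB leaves room for the OS and the JVMs' non-heap overhead.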

Cheers
Arijit

-- 
"And when the night is cloudy,
There is still a light that shines on me,
Shine on until tomorrow, let it be."

Re: Could not reserve enough space for heap in JVM

Posted by Nick Cen <ce...@gmail.com>.
I have a question related to the HADOOP_HEAPSIZE variable. My machine has
16 GB of memory, but when I set HADOOP_HEAPSIZE to 4 GB, it threw the
exception referred to in this thread. How can I make full use of my memory?
Thanks.
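
One quick check, independent of Hadoop (a sketch - a 32-bit JVM usually
cannot reserve a contiguous 4 GB heap, regardless of physical RAM):

java -version             # note whether it reports a 64-bit VM
java -Xmx4096m -version   # if this fails, the limit is the JVM/OS, not Hadoop

Also, if I remember right, bin/hadoop appends "m" to HADOOP_HEAPSIZE when
building -Xmx, so it should be a plain number of megabytes (e.g. 4096),
not "4GB".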

-- 
http://daily.appspot.com/food/

Re: Could not reserve enough space for heap in JVM

Posted by Arijit Mukherjee <ar...@gmail.com>.
I was getting similar errors while running the MapReduce samples. I fiddled
with hadoop-env.sh (where HADOOP_HEAPSIZE is specified) and hadoop-site.xml,
and fixed it after some trial and error. But I would like to know if there
is a rule of thumb for this. Right now I have a Core Duo machine with 2 GB
RAM running Ubuntu 8.10, and I've found that a HEAPSIZE of 256 MB works
without any problems. Anything more than that gives the same error (even
when nothing else is running on the machine).
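
For example, the budget on my box might look like this (rough guesses, not
measurements):

# conf/hadoop-env.sh - value in megabytes, applied to each daemon:
export HADOOP_HEAPSIZE=256

# Rough budget for a 2048 MB pseudo-distributed node:
#   OS + desktop + other processes   ~  512 MB
#   5 daemons x 256 MB heap          ~ 1280 MB
#     (NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker)
#   headroom for non-heap JVM use    ~  256 MB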

Arijit

-- 
"And when the night is cloudy,
There is still a light that shines on me,
Shine on until tomorrow, let it be."

Re: Could not reserve enough space for heap in JVM

Posted by Anum Ali <mi...@gmail.com>.
If the solution given by Matei Zaharia doesn't work - and I suspect it
won't if you are using Eclipse 3.3.0 - that is because of a bug, which was
resolved in a later version, Eclipse 3.4 (Ganymede). Better to upgrade your
Eclipse version.
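
If upgrading isn't an option, the usual place to raise the IDE's own heap
is eclipse.ini, next to the eclipse executable (a sketch - the values are
only examples, and per the bug above 3.3.0 may ignore the file):

-vmargs
-Xms128m
-Xmx512m

Each argument has to be on its own line for the launcher to pick it up.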




Re: Could not reserve enough space for heap in JVM

Posted by Matei Zaharia <ma...@cloudera.com>.
These variables have to be set at runtime through a config file, not at
compile time. You can set them in hadoop-env.sh: uncomment the line with
export HADOOP_HEAPSIZE=<whatever> to set the heap size for all Hadoop
processes, or change the options for specific commands. Those settings
apply to the Hadoop daemons themselves; if you are getting the error in the
tasks you're running, set the heap in hadoop-site.xml through the
mapred.child.java.opts property, as follows:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
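
On the daemon side, the relevant line in conf/hadoop-env.sh ships commented
out (a sketch - 512 is just an example value):

# stock default in conf/hadoop-env.sh:
# export HADOOP_HEAPSIZE=2000
# uncomment and adjust; the value is in megabytes and applies to each daemon:
export HADOOP_HEAPSIZE=512

Note that mapred.child.java.opts applies to every task JVM separately, so
with several concurrent tasks per node the total heap is a multiple of -Xmx.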

By the way, I'm not sure -J-Xmx is the right syntax; I've always seen -Xmx
and -Xms.
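
If I remember right, -J in javac just forwards the flag to the JVM running
the compiler itself, so it never affects the program being compiled
(a minimal illustration - MyJob is a placeholder class name):

javac -J-Xmx512m MyJob.java   # raises the heap of the compiler's JVM only
java -Xmx512m MyJob           # sets the heap of the program when it runs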
