Posted to common-user@hadoop.apache.org by Adam Shook <as...@clearedgeit.com> on 2011/08/19 17:34:44 UTC

Hadoop JVM Size (Not MapReduce)

Hello all,

I have a Hadoop-related application that is integrated with HDFS and is started from the command line with "hadoop jar ..."  The amount of data the application uses changes from use case to use case, so I need to adjust the heap size of the JVM that the "hadoop jar" command starts.  Typically you would just pass the -Xmx and -Xms options as you do with a plain "java -jar" command, but that doesn't seem to work here.
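(For reference, by the usual pattern I mean something like the following, with a made-up jar name and sizes:

    java -Xms512m -Xmx2g -jar myapp.jar

There is no obvious equivalent of this when launching with "hadoop jar".)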

Does anyone know how I can set it?  Note that this is unrelated to the JVM size for map and reduce tasks - there is no MapReduce involved in my application.

Thanks in advance!

--Adam

PS - I imagine I could code my application to hook into HDFS or read the Hadoop configuration files by hand - but I would prefer Hadoop to do all the work for me!

Re: Hadoop JVM Size (Not MapReduce)

Posted by Harsh J <ha...@cloudera.com>.
Extending Bobby's pointer, you're looking for the HADOOP_CLIENT_OPTS env-var. You can set it to the -Xmx/-Xms options you need in hadoop-env.sh, or set it per command as you execute it.
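For example, something along these lines should do it (the jar name and heap sizes below are just placeholders for your own):

    # one-off, for a single invocation:
    HADOOP_CLIENT_OPTS="-Xms512m -Xmx2g" hadoop jar myapp.jar

    # or persistently, in hadoop-env.sh:
    export HADOOP_CLIENT_OPTS="-Xms512m -Xmx2g"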

-- 
Harsh J

Re: Hadoop JVM Size (Not MapReduce)

Posted by Robert Evans <ev...@yahoo-inc.com>.
The hadoop command is just a shell script that sets up the classpath before calling java.  I think if you set the HADOOP_JAVA_OPTS environment variable the options will show up on the java command line, but look at the top of the hadoop shell script to be sure - all the env vars it supports are listed there.
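For instance, here is a quick way to check which variables your copy of the script honors, and then to pass the heap settings through one of them (the variable name and sizes below are only a guess - the script itself is authoritative):

    # list the *_OPTS environment variables mentioned in the wrapper script
    grep -n "OPTS" $(which hadoop) | head

    # then export whichever one applies before launching the job, e.g.
    export HADOOP_OPTS="-Xms512m -Xmx2g"
    hadoop jar myapp.jar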

--Bobby Evans
