Posted to dev@spark.apache.org by Michel Dufresne <sp...@gmail.com> on 2015/01/16 18:56:08 UTC

Setting JVM options to Spark executors in Standalone mode

Hi All,

I'm trying to set some JVM options on the executor processes in a
standalone cluster. Here's what I have in *spark-env.sh*:

jmx_opt="-Dcom.sun.management.jmxremote"
> jmx_opt="${jmx_opt} -Djava.net.preferIPv4Stack=true"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.port=9999"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.rmi.port=9998"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.ssl=false"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.authenticate=false"
> jmx_opt="${jmx_opt} -Djava.rmi.server.hostname=${SPARK_PUBLIC_DNS}"
> export SPARK_WORKER_OPTS="${jmx_opt}"


However the options are showing up on the *daemon* JVM, not on the *workers*. It
has the same effect as if I were using SPARK_DAEMON_JAVA_OPTS (which is supposed
to set options on the daemon process).

Thanks in advance for your help,

Michel

Re: Setting JVM options to Spark executors in Standalone mode

Posted by Marcelo Vanzin <va...@cloudera.com>.
On Fri, Jan 16, 2015 at 10:07 AM, Michel Dufresne
<sp...@gmail.com> wrote:
> Thanks for your reply, I should have mentioned that spark-env.sh is the
> only option I found because:
>
>    - I'm creating the SparkConf/SparkContext from a Play application
>    (therefore I'm not using the spark-submit script)

Then you can set the configuration Zhan mentioned (spark.executor.extraJavaOptions)
directly on your SparkConf object.
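
Something along these lines (just a sketch, not tested; the master URL, app
name and the exact option string are placeholders):

  import org.apache.spark.{SparkConf, SparkContext}

  // Same effect as spark.executor.extraJavaOptions in spark-defaults.conf,
  // but set programmatically since the app does not go through spark-submit.
  val conf = new SparkConf()
    .setMaster("spark://your-master:7077")   // placeholder
    .setAppName("play-app")                  // placeholder
    .set("spark.executor.extraJavaOptions",
      "-Dcom.sun.management.jmxremote " +
      "-Dcom.sun.management.jmxremote.port=9999 " +
      "-Dcom.sun.management.jmxremote.authenticate=false " +
      "-Dcom.sun.management.jmxremote.ssl=false")
  val sc = new SparkContext(conf)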

BTW the env variable for what you want is SPARK_EXECUTOR_OPTS, but the
use of env variables to set app configuration is discouraged.
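
If you did want to go the env variable route anyway, it would look roughly
like this in spark-env.sh (sketch only, reusing a couple of the JMX flags
from your mail):

  # spark-env.sh: SPARK_EXECUTOR_OPTS ends up on the executor JVMs,
  # while SPARK_WORKER_OPTS / SPARK_DAEMON_JAVA_OPTS go to the daemons.
  export SPARK_EXECUTOR_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999"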


-- 
Marcelo



Re: Setting JVM options to Spark executors in Standalone mode

Posted by Michel Dufresne <sp...@gmail.com>.
Thanks for your reply, I should have mentioned that spark-env.sh is the
only option I found because:

   - I'm passing the public IP address of the slave (which is determined in
   the shell script)
   - I'm creating the SparkConf/SparkContext from a Play application
   (therefore I'm not using the spark-submit script)

Thanks

On Fri, Jan 16, 2015 at 1:02 PM, Zhan Zhang <zz...@hortonworks.com> wrote:

> You can try to add it in conf/spark-defaults.conf
>
>  # spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value
> -Dnumbers="one two three"
>
> Thanks.
>
> Zhan Zhang
>
> On Jan 16, 2015, at 9:56 AM, Michel Dufresne <
> sparkhealthanalytics@gmail.com> wrote:
>
> > Hi All,
> >
> > I'm trying to set some JVM options on the executor processes in a
> > standalone cluster. Here's what I have in *spark-env.sh*:
> >
> > jmx_opt="-Dcom.sun.management.jmxremote"
> > jmx_opt="${jmx_opt} -Djava.net.preferIPv4Stack=true"
> > jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.port=9999"
> > jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.rmi.port=9998"
> > jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.ssl=false"
> > jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.authenticate=false"
> > jmx_opt="${jmx_opt} -Djava.rmi.server.hostname=${SPARK_PUBLIC_DNS}"
> > export SPARK_WORKER_OPTS="${jmx_opt}"
> >
> >
> > However the options are showing up on the *daemon* JVM, not on the *workers*.
> > It has the same effect as if I were using SPARK_DAEMON_JAVA_OPTS (which is
> > supposed to set options on the daemon process).
> >
> > Thanks in advance for your help,
> >
> > Michel
>
>

Re: Setting JVM options to Spark executors in Standalone mode

Posted by Zhan Zhang <zz...@hortonworks.com>.
You can try to add it in conf/spark-defaults.conf

 # spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
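
Spelled out with the JMX flags from your message, that would be a single
(uncommented) line along these lines; note that spark-defaults.conf is not a
shell script, so ${SPARK_PUBLIC_DNS} would have to be written out by hand:

 spark.executor.extraJavaOptions  -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9998 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false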

Thanks.

Zhan Zhang

On Jan 16, 2015, at 9:56 AM, Michel Dufresne <sp...@gmail.com> wrote:

> Hi All,
> 
> I'm trying to set some JVM options on the executor processes in a
> standalone cluster. Here's what I have in *spark-env.sh*:
> 
> jmx_opt="-Dcom.sun.management.jmxremote"
> jmx_opt="${jmx_opt} -Djava.net.preferIPv4Stack=true"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.port=9999"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.rmi.port=9998"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.ssl=false"
> jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.authenticate=false"
> jmx_opt="${jmx_opt} -Djava.rmi.server.hostname=${SPARK_PUBLIC_DNS}"
> export SPARK_WORKER_OPTS="${jmx_opt}"
> 
> 
> However the options are showing up on the *daemon* JVM, not on the *workers*. It
> has the same effect as if I were using SPARK_DAEMON_JAVA_OPTS (which is
> supposed to set options on the daemon process).
> 
> Thanks in advance for your help,
> 
> Michel



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org