Posted to user@spark.apache.org by MEETHU MATHEW <me...@yahoo.co.in> on 2014/06/19 07:21:13 UTC

options set in spark-env.sh are not reflected in actual execution

Hi all,

I have a question regarding the options in spark-env.sh. I set the following values in the file on the master and the 2 workers:

SPARK_WORKER_MEMORY=7g
SPARK_EXECUTOR_MEMORY=6g
SPARK_DAEMON_JAVA_OPTS+="-Dspark.akka.timeout=300000 -Dspark.akka.frameSize=10000 -Dspark.blockManagerHeartBeatMs=800000 -Dspark.shuffle.spill=false"

But SPARK_EXECUTOR_MEMORY is showing as 4g in the web UI. Do I need to change it anywhere else to make it 6g and have that reflected in the web UI?

I am also getting a warning that blockManagerHeartBeatMs is exceeding 450000 while executing a job, even though I set it to 800000.

So I am not sure whether these options should instead be set as SPARK_MASTER_OPTS or SPARK_WORKER_OPTS.
 
Thanks & Regards, 
Meethu M

Re: options set in spark-env.sh are not reflected in actual execution

Posted by Andrew Or <an...@databricks.com>.
Hi Meethu,

Are you using Spark 1.0? If so, you should use spark-submit (
http://spark.apache.org/docs/latest/submitting-applications.html), which
has --executor-memory. If you don't want to specify this every time you
submit an application, you can also specify spark.executor.memory in
$SPARK_HOME/conf/spark-defaults.conf (
http://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
).
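
For example, either of these would give your application 6g executors (the class and jar names below are just placeholders):

    ./bin/spark-submit --class your.MainClass --executor-memory 6g your-app.jar

or, in $SPARK_HOME/conf/spark-defaults.conf:

    spark.executor.memory   6g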

SPARK_WORKER_MEMORY is for the worker daemon, not your individual
application. A worker can launch many executors, and the value of
SPARK_WORKER_MEMORY is shared across all executors running on that worker.
SPARK_EXECUTOR_MEMORY is deprecated and replaced by
"spark.executor.memory". This is the value you should set.
SPARK_DAEMON_JAVA_OPTS should not be used for setting Spark configs; it is
intended for Java options for the worker and master daemons (not for Spark
applications). Similarly, you shouldn't be setting SPARK_MASTER_OPTS or
SPARK_WORKER_OPTS to configure your application.
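
Concretely, keeping the property names and values from your mail as they are, the split would look roughly like this (just a sketch):

In conf/spark-env.sh (daemon-level settings only):

    SPARK_WORKER_MEMORY=7g

In conf/spark-defaults.conf (application-level spark.* settings):

    spark.executor.memory           6g
    spark.akka.timeout              300000
    spark.akka.frameSize            10000
    spark.blockManagerHeartBeatMs   800000
    spark.shuffle.spill             false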

The recommended way to set spark.* configurations is to do it
programmatically: create a new SparkConf, set these configurations on it,
and pass it to the SparkContext (see
http://spark.apache.org/docs/latest/configuration.html#spark-properties).
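A minimal sketch of that in Scala (the app name below is just a placeholder, and the property values are carried over from your mail):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MyApp")                      // placeholder name
      .set("spark.executor.memory", "6g")       // per-executor memory shown in the web UI
      .set("spark.akka.frameSize", "10000")
    val sc = new SparkContext(conf)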

Andrew


