Posted to user@spark.apache.org by Hu...@Dell.com on 2013/11/08 04:04:32 UTC

setting spark worker node jvm options not working as expected

Hi,
I have this defined in my spark-env.sh for each node
export SPARK_MEM=3g
export SPARK_WORKER_MEMORY=3g
export SPARK_JAVA_OPTS+=" -Dspark.local.dir=/tmp/spark  -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:+DisableExplicitGC -XX:MaxPermSize=256m"

How can I get the Spark workers to use a larger JVM heap, e.g. 3 GB instead of 512.0 MB? It looks like the settings in spark-env.sh are not working as expected.
Can someone also explain why the Spark UI shows each node with 3 GB of memory but lists memory per node as 512.0 MB?

My Spark UI (attached screenshot) shows that each worker node has 3 GB but memory per node is 512.0 MB, which matches the worker invocation in the node logs:
13/11/07 19:08:38 INFO Worker: Starting Spark worker poc2:37419 with 16 cores, 3.0 GB RAM
13/11/07 19:08:38 INFO Worker: Spark home: /opt/spark-0.8.0
13/11/07 19:08:39 INFO WorkerWebUI: Started Worker web UI at http://poc2:8081
13/11/07 19:08:39 INFO Worker: Connecting to master spark://poc1.stc1lab.local:7077
13/11/07 19:08:39 INFO Worker: Successfully registered with master
13/11/07 19:10:42 INFO Worker: Asked to launch executor app-20131107190253-0000/0 for OMDBQueryService
13/11/07 19:10:42 INFO ExecutorRunner: Launch command: "java" "-cp" ":/opt/spark-0.8.0/conf:/opt/spark-0.8.0/assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.0-incubating-hadoop1.0.4.jar" "-Dspark.local.dir=/tmp/spark" "-XX:+UseParallelGC" "-XX:+UseParallelOldGC" "-XX:+DisableExplicitGC" "-XX:MaxPermSize=256m" "-Dspark.local.dir=/tmp/spark" "-XX:+UseParallelGC" "-XX:+UseParallelOldGC" "-XX:+DisableExplicitGC" "-XX:MaxPermSize=256m" "-Dspark.local.dir=/tmp/spark" "-XX:+UseParallelGC" "-XX:+UseParallelOldGC" "-XX:+DisableExplicitGC" "-XX:MaxPermSize=256m" "-Dspark.local.dir=/tmp/spark" "-XX:+UseParallelGC" "-XX:+UseParallelOldGC" "-XX:+DisableExplicitGC" "-XX:MaxPermSize=256m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://spark@poc1:51800/user/StandaloneScheduler" "0" "poc2" "16"

[Attachment: image001.png (Spark UI screenshot)]


Thanks,
Hussam

RE: setting spark worker node jvm options not working as expected

Posted by Hu...@Dell.com.
Worked.

Thanks,
Hussam

From: Kapil Malik [mailto:kmalik@adobe.com]
Sent: Friday, November 08, 2013 2:29 AM
To: Jarada, Hussam; user@spark.incubator.apache.org
Subject: RE: setting spark worker node jvm options not working as expected


RE: setting spark worker node jvm options not working as expected

Posted by Kapil Malik <km...@adobe.com>.
Hi Hussam,

Did you try setting spark.executor.memory ?

export SPARK_JAVA_OPTS+="-Dspark.executor.memory=3g -Dspark.local.dir=/tmp/spark  -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:+DisableExplicitGC -XX:MaxPermSize=256m"
From the website, the difference between SPARK_WORKER_MEMORY and spark.executor.memory is as follows:

SPARK_WORKER_MEMORY

Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GB); note that each application's individual memory is configured using its spark.executor.memory property.
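
So with SPARK_WORKER_MEMORY=3g the worker advertises 3 GB to the cluster, but each application's executor still defaults to 512 MB unless spark.executor.memory raises it; that is exactly the 3 GB vs. 512.0 MB split you are seeing in the UI.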


Thanks and regards,

From: Hussam_Jarada@Dell.com [mailto:Hussam_Jarada@Dell.com]
Sent: 08 November 2013 08:35
To: user@spark.incubator.apache.org
Subject: setting spark worker node jvm options not working as expected


RE: setting spark worker node jvm options not working as expected

Posted by Stanley Burnitt <St...@huawei.com>.
Hi Hussam,

Try setting this system property in your Spark driver:  System.setProperty("spark.executor.memory", "3g");      It must be set before you create the SparkContext.

Also, add matching JVM args to your spark-env.sh:  -Xms3g -Xmx3g  (keep them in sync with the 'spark.executor.memory' system property).
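
Something like this in the driver, for instance (a minimal sketch against the Spark 0.8 API; the master URL and app name are just copied from your logs, and the job itself is a placeholder):

import org.apache.spark.SparkContext

object ExecutorMemoryExample {
  def main(args: Array[String]) {
    // Spark 0.8 reads spark.* settings from Java system properties once,
    // when the SparkContext is constructed, so this must come first.
    System.setProperty("spark.executor.memory", "3g")

    // Master URL and app name copied from the logs above; placeholder job.
    val sc = new SparkContext("spark://poc1.stc1lab.local:7077", "OMDBQueryService")
    try {
      // Executors launched for this app should now get -Xms3g -Xmx3g
      // instead of the 512 MB default.
      println(sc.parallelize(1 to 1000).count())
    } finally {
      sc.stop()
    }
  }
}
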
From: Hussam_Jarada@Dell.com [mailto:Hussam_Jarada@Dell.com]
Sent: Thursday, November 07, 2013 7:05 PM
To: user@spark.incubator.apache.org
Subject: setting spark worker node jvm options not working as expected
