Posted to issues@spark.apache.org by "Kalyana Chakravarthy Kadiyala (JIRA)" <ji...@apache.org> on 2014/11/10 07:57:33 UTC

[jira] [Commented] (SPARK-4311) ContainerLauncher setting up executor -- invalid Xms settings (-Xms0m -Xmx0m)

    [ https://issues.apache.org/jira/browse/SPARK-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204433#comment-14204433 ] 

Kalyana Chakravarthy Kadiyala commented on SPARK-4311:
------------------------------------------------------

Not sure what's going on with this... here is the snippet from the log that verifies the --executor-memory value, as set on the command line when submitting the job...

2014-11-08 18:12:40,195 INFO  [main] yarn.YarnAllocationHandler (Logging.scala:logInfo(59)) - Will allocate 3 executor containers, each with 256 MB memory including 256 MB overhead
2014-11-08 18:12:40,225 INFO  [main] yarn.YarnAllocationHandler (Logging.scala:logInfo(59)) - Container request (host: Any, priority: 1, capability: <memory:256, vCores:1>
2014-11-08 18:12:40,225 INFO  [main] yarn.YarnAllocationHandler (Logging.scala:logInfo(59)) - Container request (host: Any, priority: 1, capability: <memory:256, vCores:1>
2014-11-08 18:12:40,226 INFO  [main] yarn.YarnAllocationHandler (Logging.scala:logInfo(59)) - Container request (host: Any, priority: 1, capability: <memory:256, vCores:1>
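The allocation line above is telling: it reports 256 MB per container *including* 256 MB of overhead, which would leave nothing for the executor JVM heap and would explain the -Xms0m -Xmx0m flags. A minimal sketch of that arithmetic (the function name and the assumption that heap = container memory minus overhead are illustrative, not Spark's actual code):

```python
# Hedged sketch of the container-memory arithmetic the log suggests.
# executor_heap_mb is a hypothetical helper, not a Spark API.
def executor_heap_mb(container_mb: int, overhead_mb: int) -> int:
    """Heap left for the executor JVM after the YARN overhead is reserved."""
    return max(container_mb - overhead_mb, 0)

# Per the log: 256 MB per container, 256 MB of which is overhead.
heap = executor_heap_mb(256, 256)
print(f"-Xms{heap}m -Xmx{heap}m")  # matches the -Xms0m -Xmx0m seen below
```

If that reading is right, requesting a larger --executor-memory (or lowering the configured overhead) should restore a non-zero heap.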

> ContainerLauncher setting up executor -- invalid Xms settings (-Xms0m -Xmx0m)
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-4311
>                 URL: https://issues.apache.org/jira/browse/SPARK-4311
>             Project: Spark
>          Issue Type: Question
>          Components: YARN
>    Affects Versions: 1.1.0
>            Reporter: Kalyana Chakravarthy Kadiyala
>              Labels: spark_submit
>
> <spark_home>/conf/spark-defaults.conf entry for executor extra options:
> spark.executor.extraJavaOptions -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+UseCompressedOops
> Sample from container logs... (note: node names masked for privacy; the driver runs on node xxxx1 and the executor is being spawned on node xxxx2 in the 3-node YARN cluster)
>  - Setting up executor with commands: List($JAVA_HOME/bin/java, -server, -XX:OnOutOfMemoryError='kill %p', -Xms0m -Xmx0m , -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+UseCompressedOops, -Djava.io.tmpdir=$PWD/tmp, '-Dspark.authenticate=false', '-Dspark.akka.timeout=100', '-Dspark.akka.frameSize=10', '-Dspark.akka.heartbeat.pauses=600', '-Dspark.akka.failure-detector.threshold=300', '-Dspark.akka.heartbeat.interval=1000', '-Dspark.akka.threads=4', -Dspark.yarn.app.container.log.dir=<LOG_DIR>, org.apache.spark.executor.CoarseGrainedExecutorBackend, akka.tcp://sparkDriver@xxxx1.xxxxxx.xxx:49760/user/CoarseGrainedScheduler, 6, xxxx1.xxxxxxx.xxx, 1, application_1415440760385_0012, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
> 2014-11-08 17:19:07,201 INFO  [ContainerLauncher #3] yarn.ExecutorRunnable (Logging.scala:logInfo(59)) - Setting up executor with commands: List($JAVA_HOME/bin/java, -server, -XX:OnOutOfMemoryError='kill %p', -Xms0m -Xmx0m , -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+UseCompressedOops, -Djava.io.tmpdir=$PWD/tmp, '-Dspark.authenticate=false', '-Dspark.akka.timeout=100', '-Dspark.akka.frameSize=10', '-Dspark.akka.heartbeat.pauses=600', '-Dspark.akka.failure-detector.threshold=300', '-Dspark.akka.heartbeat.interval=1000', '-Dspark.akka.threads=4', -Dspark.yarn.app.container.log.dir=<LOG_DIR>, org.apache.spark.executor.CoarseGrainedExecutorBackend, akka.tcp://sparkDriver@xxxx1.xxxxx.xxx:49760/user/CoarseGrainedScheduler, 4, xxxx2.xxxxx.xxx, 1, application_1415440760385_0012, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
