Posted to user@spark.apache.org by firemonk9 <dh...@gmail.com> on 2014/11/25 17:11:21 UTC

Spark YARN cluster: Application Master node not running YARN container

I am running a 3-node YARN cluster (32 cores, 60 GB per node) for Spark jobs.

1) Below are my YARN memory settings:

yarn.nodemanager.resource.memory-mb = 52224
yarn.scheduler.minimum-allocation-mb = 40960
yarn.scheduler.maximum-allocation-mb = 52224
Apache Spark memory settings:

export SPARK_EXECUTOR_MEMORY=40G
export SPARK_EXECUTOR_CORES=27
export SPARK_EXECUTOR_INSTANCES=3
With the above settings I was hoping to see my job run on two nodes; however,
the job is not running on the node where the Application Master is running.
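
A back-of-the-envelope check suggests why (assuming the default scheduler
rounds every container request up to a multiple of
yarn.scheduler.minimum-allocation-mb, capped at the maximum, and that Spark
1.x adds its default 384 MB executor memory overhead -- both are assumptions
about the defaults, not something I have verified):

# Container arithmetic for settings (1); overhead and rounding are assumptions.
NODE_MB=52224                 # yarn.nodemanager.resource.memory-mb
MIN_MB=40960                  # yarn.scheduler.minimum-allocation-mb
EXEC_MB=$((40*1024 + 384))    # 40 GB executor + overhead = 41344 MB
# executor request: 41344 MB rounds past one 40960 MB unit and is capped
# at 52224 MB, i.e. a whole node per executor
# AM request: small (driver memory + overhead), but still rounds up to 40960 MB
echo "left on AM node: $((NODE_MB - MIN_MB)) MB"   # 11264 MB: no executor fits

So the Application Master's container alone reserves a node-sized slot, which
would explain why that node never runs an executor.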

2) YARN memory settings:

yarn.nodemanager.resource.memory-mb = 52224
yarn.scheduler.minimum-allocation-mb = 20480
yarn.scheduler.maximum-allocation-mb = 52224
Apache Spark memory settings:

export SPARK_EXECUTOR_MEMORY=18G
export SPARK_EXECUTOR_CORES=13
export SPARK_EXECUTOR_INSTANCES=4
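
If these second settings do spread executors across all three nodes, the same
sketch (same assumptions as above) would explain it:

# Container arithmetic for settings (2); same overhead/rounding assumptions.
NODE_MB=52224                 # yarn.nodemanager.resource.memory-mb
MIN_MB=20480                  # yarn.scheduler.minimum-allocation-mb
EXEC_MB=$((18*1024 + 384))    # 18 GB executor + overhead = 18816 MB
# executor request: 18816 MB rounds up to 20480 MB
# AM request: small, also rounds up to 20480 MB
echo "AM node usage: $((2 * MIN_MB)) MB of $NODE_MB MB"   # 40960: both fit
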
I would like to know how I can run the job on both nodes with the first
memory settings. Thanks for the help.
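
Would lowering yarn.scheduler.minimum-allocation-mb work here, so that the
Application Master's container only reserves roughly what it requests? A
sketch under the same assumptions, with an illustrative 1024 MB minimum and
placeholder class/jar names:

# With yarn.scheduler.minimum-allocation-mb = 1024 in yarn-site.xml:
#   executor: 41344 MB rounds up to 41984 MB
#   AM:       ~1408 MB rounds up to 2048 MB
#   41984 + 2048 = 44032 MB <= 52224 MB, so the AM node could host an executor
spark-submit --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 27 \
  --executor-memory 40G \
  --class com.example.MyApp my-app.jar   # placeholder class and jar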



