Posted to user@spark.apache.org by Assaf <as...@Intel.com> on 2014/03/09 22:10:43 UTC

Spark on YARN use only one node

Hi,

I've installed Spark 0.8.1 on IDH 3.0.2, running on YARN.
My cluster has 3 servers: 1 is both NN and DN, the other 2 are DN only.
I managed to launch spark-shell and run the MLlib KMeans clustering.
The problem is that the job uses only one node (the NN) and does not run on
the other 2 DNs.
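
For reference, this is roughly what I run in spark-shell (a minimal sketch;
the HDFS path and the k/iteration values are placeholders, not my exact job):

import org.apache.spark.mllib.clustering.KMeans

// Load and parse the input: one point per line, space-separated doubles
// (placeholder path).
val data = sc.textFile("hdfs:///tmp/kmeans_data.txt")
val parsed = data.map(_.split(' ').map(_.toDouble)).cache()

// Train with k = 2 over 20 iterations (illustrative values).
val model = KMeans.train(parsed, 2, 20)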

Please advise

My spark-env.sh file:

export SPARK_CLASSPATH=/usr/lib/hbase/hbase-0.94.7-Intel.jar:/usr/lib/hadoop/hadoop-auth-2.0.4-Intel.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/hadoop-common-2.0.4-Intel.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar
export SPARK_LIBRARY_PATH=/usr/lib/hadoop/lib/native
export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hbase/conf
export SPARK_PRINT_LAUNCH_COMMAND=1
export YARN_CONF_DIR=/etc/hadoop/conf
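
For what it's worth, this is how I check which hosts actually pick up tasks
(a minimal sketch from spark-shell using only core RDD calls; the 100/10
sizes are arbitrary):

// Each task reports the hostname it runs on; distinct() then shows
// how many machines actually took part in the job.
val hosts = sc.parallelize(1 to 100, 10)
  .map(_ => java.net.InetAddress.getLocalHost.getHostName)
  .distinct()
  .collect()
hosts.foreach(println)   // only the NN's hostname ever shows up here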

Thanks,
Assaf


