Posted to user@spark.apache.org by Pa Rö <pa...@googlemail.com> on 2015/07/14 10:43:27 UTC

spark submit configuration on yarn

hello community,

i want to run my spark app on a cluster (cloudera 5.4.4) with 3 nodes (each
node has an i7 with 8 cores and 16GB RAM). now i want to submit my spark job
on yarn (20GB RAM).

my script to submit the job is currently the following:

export HADOOP_CONF_DIR=/etc/hadoop/conf/
./spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --class mgm.tp.bigdata.ma_spark.SparkMain \
  --master yarn-cluster \
  --executor-memory 9G \
  --total-executor-cores 16 \
  ma-spark.jar \
  1000
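one thing to note about the flags above: --total-executor-cores only applies
to standalone and mesos masters; in yarn-cluster mode it is ignored, and the
executor count and cores per executor are set with --num-executors and
--executor-cores instead. a sketch of an equivalent submit for yarn (the
executor counts and sizes below are assumptions for 3 nodes with 16GB RAM
each, not tuned values):

```shell
export HADOOP_CONF_DIR=/etc/hadoop/conf/
# on yarn, request executors explicitly:
#   --num-executors    total executor containers across the cluster
#   --executor-cores   cores per executor (not a cluster-wide total)
# assumed sizing: one executor per node, leaving some memory and
# cores free on each node for yarn overhead and the OS.
./spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --class mgm.tp.bigdata.ma_spark.SparkMain \
  --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 5 \
  --executor-memory 9G \
  ma-spark.jar \
  1000
```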

maybe my configuration is not optimal?

best regards,
paul