Posted to user@spark.apache.org by jerryye <je...@gmail.com> on 2014/09/12 09:49:28 UTC

Preserving conf files when restarting ec2 cluster

Hi,
I'm using --use-existing-master to launch a previously stopped EC2 cluster
with spark-ec2. However, my configuration files are overwritten once the
cluster is set up. What's the best way to preserve existing configuration
files in spark/conf?

Alternatively, what I'm really trying to do is set SPARK_WORKER_CORES to use
fewer cores than the default. Is there a nice way to pass this while starting
the cluster, or is it possible to do this through SparkContext?

For now I'm copying my configuration back over and restarting the cluster
with the stop-all.sh and start-all.sh scripts. Anything better would be
greatly appreciated.
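Concretely, my current workaround looks roughly like the sketch below (paths
assume the default spark-ec2 layout on the master, and the core count of 4 is
just an example, so treat the details as approximate):

```shell
# Re-apply my setting after spark-ec2 has overwritten conf/ (a sketch,
# not a recommended approach -- this is exactly what I'd like to avoid):
echo 'export SPARK_WORKER_CORES=4' >> /root/spark/conf/spark-env.sh

# Restart the standalone daemons so the workers pick up the new value:
/root/spark/sbin/stop-all.sh
/root/spark/sbin/start-all.sh
```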

Thanks!

- jerry



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Perserving-conf-files-when-restarting-ec2-cluster-tp14070.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org