Posted to user@spark.apache.org by sequoiadb <ma...@sequoiadb.com> on 2015/03/19 09:00:15 UTC

how to specify multiple masters in sbin/start-slaves.sh script?

Hey guys,

Not sure if I'm the only one who has hit this. We are building a highly available standalone Spark environment, using ZooKeeper with 3 masters in the cluster.
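For context, each master's conf/spark-env.sh is set up for ZooKeeper recovery roughly like this (the ZK hostnames below are placeholders for our actual ensemble):

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"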
However, sbin/start-slaves.sh calls start-slave.sh for each member of the conf/slaves file, and specifies the master using $SPARK_MASTER_IP and $SPARK_MASTER_PORT:
exec "$sbin/slaves.sh" cd "$SPARK_HOME" \; "$sbin/start-slave.sh" 1 "spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT"

But if I want to specify more than one master node, I have to use the format
spark://host1:port1,host2:port2,host3:port3
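(Run by hand on a worker, start-slave.sh does seem to accept that multi-master URL, e.g. with 7077 as a placeholder port:

sbin/start-slave.sh 1 spark://host1:7077,host2:7077,host3:7077

so it looks like only the start-slaves.sh wrapper is limited to a single master.)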

In this case, it seems the original sbin/start-slaves.sh can’t do the trick.
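The only workaround I can see is to patch the script locally, something like this (an untested sketch; SPARK_MASTERS is a made-up variable holding the comma-separated host:port list):

# Untested sketch of a local patch to sbin/start-slaves.sh.
# SPARK_MASTERS is a made-up variable, e.g.
#   SPARK_MASTERS=host1:7077,host2:7077,host3:7077
if [ -n "$SPARK_MASTERS" ]; then
  # pass the full comma-separated master list to each worker
  exec "$sbin/slaves.sh" cd "$SPARK_HOME" \; "$sbin/start-slave.sh" 1 "spark://$SPARK_MASTERS"
else
  # original single-master behavior
  exec "$sbin/slaves.sh" cd "$SPARK_HOME" \; "$sbin/start-slave.sh" 1 "spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT"
fi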
Does everyone need to modify the script in order to build an HA cluster, or is there something I missed?

Thanks