Posted to user@spark.apache.org by Dhimant <dh...@gmail.com> on 2014/09/23 07:20:06 UTC

Change number of workers and memory

I have a Spark cluster in which some nodes are high-performance machines and
others have commodity (lower) specs.
When I configure worker memory and instances in spark-env.sh, the setting
applies to all the nodes.
Can I set the SPARK_WORKER_MEMORY and SPARK_WORKER_INSTANCES properties on a
per-node/machine basis?
I am using Spark 1.1.0.
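
For reference, my conf/spark-env.sh (the same file is pushed to every node)
currently looks roughly like this; the values below are just an example:

  # conf/spark-env.sh -- identical on every node at the moment (example values)
  export SPARK_WORKER_MEMORY=16g      # memory a worker can hand out to executors on this machine
  export SPARK_WORKER_INSTANCES=2     # number of worker processes started on this machine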



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Change-number-of-workers-and-memory-tp14866.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Change number of workers and memory

Posted by Liquan Pei <li...@gmail.com>.
Hi Dhimant,

One thread related to your question is
http://apache-spark-user-list.1001560.n3.nabble.com/heterogeneous-cluster-hardware-td11567.html

One argument for setting the same SPARK_WORKER_MEMORY on every machine is
that all tasks in a stage have to finish before the next stage can run. In
your setup, assuming the data is evenly distributed across the nodes, even if
you set SPARK_WORKER_MEMORY higher on the high-performance nodes, you still
have to wait for the tasks on the lower-configuration nodes to finish.
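
That said, if you do want to try per-node settings: in standalone mode each
worker process is started on its own machine and, as far as I know, reads the
conf/spark-env.sh that is local to that machine, so keeping a different copy
of the file on each machine (instead of syncing one copy everywhere) should
give you different values per node. A rough sketch, with made-up values:

  # conf/spark-env.sh on a high-performance node (example values)
  export SPARK_WORKER_MEMORY=64g     # let the worker offer more memory to executors here
  export SPARK_WORKER_INSTANCES=2    # run two worker processes on this machine

  # conf/spark-env.sh on a commodity node (example values)
  export SPARK_WORKER_MEMORY=8g
  export SPARK_WORKER_INSTANCES=1

Keep in mind, though, that at every shuffle boundary the slowest tasks still
set the pace for the whole job.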

Thanks,
Liquan

On Mon, Sep 22, 2014 at 10:20 PM, Dhimant <dh...@gmail.com>
wrote:

> I have a Spark cluster in which some nodes are high-performance machines
> and others have commodity (lower) specs.
> When I configure worker memory and instances in spark-env.sh, the setting
> applies to all the nodes.
> Can I set the SPARK_WORKER_MEMORY and SPARK_WORKER_INSTANCES properties on
> a per-node/machine basis?
> I am using Spark 1.1.0.


-- 
Liquan Pei
Department of Physics
University of Massachusetts Amherst