Posted to user@spark.apache.org by phagunbaya <ph...@falkonry.com> on 2015/07/22 13:20:21 UTC

Scaling spark cluster for a running application

I have a Spark cluster running in client mode, with the driver outside the
cluster. I want to scale the cluster up after an application has been
submitted. To do this, I'm creating new workers, and they register with the
master, but the issue I'm seeing is that the running application does not use
the newly added workers. So I cannot add more resources to an already-running
application.

Is there another way or a config setting to handle this use-case? How can I
make a running application ask for executors from the newly added worker
nodes?



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Scaling-spark-cluster-for-a-running-application-tp23951.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Scaling spark cluster for a running application

Posted by Romi Kuntsman <ro...@totango.com>.
Are you running the Spark cluster in standalone mode or on YARN?
In standalone mode, the application gets the available resources when it starts.
With YARN, you can try to turn on the setting
*spark.dynamicAllocation.enabled*
See https://spark.apache.org/docs/latest/configuration.html
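
For example, roughly something like this when constructing the SparkContext
(a sketch only; the app name and executor counts below are placeholders, not
recommendations):

import org.apache.spark.{SparkConf, SparkContext}

// Enable dynamic allocation so a running application can request executors
// as new cluster capacity becomes available. On YARN this also requires the
// external shuffle service to be running on each NodeManager.
val conf = new SparkConf()
  .setAppName("scalable-app")                          // placeholder name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")        // required by dynamic allocation
  .set("spark.dynamicAllocation.minExecutors", "2")    // illustrative values
  .set("spark.dynamicAllocation.maxExecutors", "20")

val sc = new SparkContext(conf)

The same keys can also be set in spark-defaults.conf or passed to spark-submit
with --conf.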

On Wed, Jul 22, 2015 at 2:20 PM phagunbaya <ph...@falkonry.com> wrote:

> I have a Spark cluster running in client mode, with the driver outside the
> cluster. I want to scale the cluster up after an application has been
> submitted. To do this, I'm creating new workers, and they register with the
> master, but the issue I'm seeing is that the running application does not
> use the newly added workers. So I cannot add more resources to an
> already-running application.
>
> Is there another way or a config setting to handle this use-case? How can I
> make a running application ask for executors from the newly added worker
> nodes?