Posted to user@spark.apache.org by Pranav Shukla <pr...@brevitaz.com> on 2017/03/15 00:47:47 UTC

Scaling Kafka Direct Streaming application

How do we scale, or ideally auto-scale, a Spark Streaming application that
consumes from Kafka using the Kafka direct stream approach? We are on Spark
1.6.3 and cannot move to 2.x unless there is a strong reason.

Scenario:
Kafka topic with 10 partitions
Standalone cluster running on kubernetes with 1 master and 2 workers

What we would like to do:
Increase the number of partitions (say from 10 to 15)
Add an additional worker node, without restarting the streaming application,
and start consuming from the additional partitions.

Is this possible? That is, can we start additional workers in the standalone
cluster to scale an existing, already-running Spark Streaming application, or
do we have to stop and resubmit the streaming app?

Best Regards,
Pranav Shukla
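
[Editor's note: for reference, a minimal sketch of the setup being described,
assuming Spark 1.6.x with the spark-streaming-kafka (Kafka 0.8) artifact; the
broker address, topic name, and batch interval are placeholders. With the
direct approach, each Kafka partition maps to one partition of the RDD in
every batch, which is why the topic's partition count drives the parallelism
of the job.]

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object DirectStreamSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("kafka-direct-sketch")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Broker list and topic are placeholders for this sketch.
        val kafkaParams = Map("metadata.broker.list" -> "kafka:9092")
        val topics = Set("my-topic") // e.g. a topic with 10 partitions

        // Direct approach: one Spark partition per Kafka partition in each batch RDD.
        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics)

        stream.foreachRDD { rdd =>
          // The partition count of each batch RDD mirrors the topic's partition count.
          println(s"partitions in this batch: ${rdd.partitions.length}")
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }

[With this 0.8-based direct stream, the set of topic partitions is resolved
when the stream is created, so partitions added to the topic afterwards are
generally not picked up without restarting the application.]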

Re: Scaling Kafka Direct Streaming application

Posted by vincent gromakowski <vi...@gmail.com>.
You would probably need dynamic allocation, which is only available on YARN
and Mesos. Or wait for the ongoing Spark-on-Kubernetes integration.
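
[Editor's note: for context, a minimal sketch of the standard settings behind
the dynamic allocation the reply refers to, expressed as SparkConf entries;
the executor bounds are placeholders, and the same keys can be passed to
spark-submit via --conf. Dynamic allocation also requires the external
shuffle service to be running on each worker.]

    import org.apache.spark.SparkConf

    // Core dynamic allocation settings; executor bounds below are illustrative only.
    val conf = new SparkConf()
      .setAppName("streaming-app")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true") // external shuffle service must be running
      .set("spark.dynamicAllocation.minExecutors", "2")
      .set("spark.dynamicAllocation.maxExecutors", "10")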

