Posted to users@kafka.apache.org by Mazen Ezzeddine <ma...@etu.univ-cotedazur.fr> on 2021/03/15 14:58:18 UTC

Rebalancing and scaling of consumers on Kubernetes: instantaneous scale to x consumer replicas ==> x rebalances?

Hi all,

I have a Kafka consumer pod running on Kubernetes. I executed the command kubectl scale consumerName --replicas=2, and as shown in the logs below two separate rebalances were triggered. So if the number of consumer replicas is scaled to 100, are one hundred separate rebalances going to be triggered? Is that accurate? Am I missing something? Is there any workaround to trigger a single rebalance regardless of the number of replicas in the scale command?
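For reference, kubectl scale expects a resource type along with the name; assuming the consumer pods are managed by a Deployment called consumerName (a hypothetical name, for illustration only), the full form of the command would be:

```shell
# Scale the hypothetical Deployment "consumerName" to 2 replicas;
# the same form works for statefulsets and replicasets.
kubectl scale deployment/consumerName --replicas=2
```

Scaling straight to a high replica count in one command still starts the pods near-simultaneously, so how many rebalances result depends on how quickly each new consumer actually joins the group, not on the kubectl invocation itself.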


group coordinator logs
=================


2021-03-15 13:57:34,230 INFO [GroupCoordinator 1]: Preparing to rebalance group debugconsumerlag in state PreparingRebalance with old generation 0 (__consumer_offsets-31) (reason: Adding new member consumer-debugconsumerlag-1-1a577d6c-7389-4217-883f-89535032ae02 with group instance id None) (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-5]
2021-03-15 13:57:37,266 INFO [GroupCoordinator 1]: Stabilized group debugconsumerlag generation 1 (__consumer_offsets-31) (kafka.coordinator.group.GroupCoordinator) [executor-Rebalance]
2021-03-15 13:57:37,784 INFO [GroupCoordinator 1]: Assignment received from leader for group debugconsumerlag for generation 1 (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-3]
2021-03-15 14:07:43,822 INFO [GroupCoordinator 1]: Preparing to rebalance group debugconsumerlag in state PreparingRebalance with old generation 1 (__consumer_offsets-31) (reason: Adding new member consumer-debugconsumerlag-1-e2e57bf6-6cbc-4dba-81d4-d7e58219c23f with group instance id None) (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-1]
2021-03-15 14:07:46,530 INFO [GroupCoordinator 1]: Stabilized group debugconsumerlag generation 2 (__consumer_offsets-31) (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-1]
2021-03-15 14:07:46,675 INFO [GroupCoordinator 1]: Assignment received from leader for group debugconsumerlag for generation 2 (kafka.coordinator.group.GroupCoordinator) [data-plane-kafka-request-handler-3]

Kind regards,

Re: Rebalancing and scaling of consumers on Kubernetes: instantaneous scale to x consumer replicas ==> x rebalances?

Posted by Sophie Blee-Goldman <so...@confluent.io.INVALID>.
Hey Mazen,

There's not necessarily one rebalance per new consumer; in theory, if all 100 consumers are started up at the same time, there may be just a single rebalance. It really depends on the timing. For example, in the log snippet you provided, you can see that the first member joined at 13:57:34 and the rebalance completed about 3 seconds later, at 13:57:37. Then the second member is seen joining at 14:07:43, which is just over 10 minutes later. I think you need to investigate why there's such a long delay between the first and second consumers joining the group.
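As an aside, the ~3 second gap between "Preparing to rebalance" and "Stabilized" in the generation-0 log lines matches the default of the broker setting group.initial.rebalance.delay.ms (3000 ms), which is how long the coordinator waits for additional members to join before completing the first rebalance of a new, empty group. A sketch of the relevant broker configuration, shown with its default value:

```properties
# server.properties (broker side) -- default value shown.
# The coordinator delays the first rebalance of a new, empty consumer group
# by this many milliseconds, so members that start at roughly the same time
# are folded into one initial rebalance instead of one rebalance each.
group.initial.rebalance.delay.ms=3000
```

Note that this delay only applies to the first rebalance of an empty group, so it would not merge in a member that joins ten minutes later, as in the logs above.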

