Posted to users@kafka.apache.org by Shantanu Deshmukh <sh...@gmail.com> on 2018/06/01 04:41:08 UTC

Re: Best Practice for Consumer Liveliness and avoid frequent rebalancing

Do you want to avoid rebalancing in such a way that if a consumer exits,
its previously owned partition is left disowned? But then who will consume
from the partition that was deserted by the exiting consumer? In that case
you can go for manual partition assignment. Then there is no question of
consumer-group management and, consequently, no rebalancing.
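
Just to illustrate, a rough sketch with the plain Java client could look
like the below (topic name, partition number and configs are only
placeholders, not a definitive implementation):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManuallyAssignedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // group.id is still used for storing committed offsets, but with
        // assign() there is no group membership and hence no rebalancing.
        props.put("group.id", "my-app");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Pin this server to one specific partition instead of subscribe().
            consumer.assign(Collections.singletonList(new TopicPartition("my-topic", 0)));
            while (true) {
                // poll(Duration) needs kafka-clients 2.0+; older clients use poll(long).
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync();
            }
        }
    }
}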

On Thu, May 31, 2018 at 6:00 PM M. Manna <ma...@gmail.com> wrote:

> Hello,
>
> We are trying to move from a single-partition to a multi-partition
> approach for our topics. The purpose is:
>
> 1) Each production/testbed server will have a non-daemon consumer thread
> running.
> 2) It will consume messages, commit offsets manually, and determine the
> next steps if the commit fails, the app fails, etc.
> 3) Ideally, one partition per server (consumer). If a rebalance occurs,
> the first (lexicographically ordered) server will end up with additional
> partition(s).
>
> As I previously understood, and as also described in the consumer article
> by Jason Gustafson
> <
> https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
> >,
> we should always close consumers for resource optimisation. But departing
> from a consumer group means that a rebalance will occur. In our case, we
> would like every consumer to stay alive (and sleep for a while) but still
> send heartbeats so that the rebalancing effort is saved. But we're worried
> whether this might cause a memory leak in our application.
>
> In other words, unless we restart the servers (via the shutdown hook), we
> would like to avoid invoking KafkaConsumer#close().
>
> Has anyone got a similar use case that they can share with us? We are
> simply interested to know whether keeping consumers alive like this is a
> valid use case or not a good practice.
>
> Any suggestion/help is appreciated.
>
> Regards,
>
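
For the "stay alive but keep sending heartbeats" part quoted above, one
rough sketch (assuming the plain Java client, 0.10.1 or newer where
heartbeats are sent from a background thread; group/topic names and the
idle/running flags are only placeholders) is to pause the assigned
partitions and keep calling poll() while the application is idle:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IdleButAliveConsumer {
    private static volatile boolean idle = false;    // toggled by the application
    private static volatile boolean running = true;  // flipped by a shutdown hook

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "server-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (running) {
                if (idle) {
                    // Stop fetching but keep calling poll() so this member stays
                    // in the group and keeps its partitions (no rebalance).
                    consumer.pause(consumer.assignment());
                } else {
                    consumer.resume(consumer.assignment());
                }
                // poll(Duration) needs kafka-clients 2.0+; older clients use poll(long).
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // application-specific processing goes here
                }
                if (!records.isEmpty()) {
                    try {
                        consumer.commitSync();   // manual commit, as in the question
                    } catch (CommitFailedException e) {
                        // decide the next step: retry, log, or reprocess later
                    }
                }
            }
        }   // close() still runs here (via try-with-resources) on a real shutdown
    }
}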

Re: Best Practice for Consumer Liveliness and avoid frequent rebalancing

Posted by "M. Manna" <ma...@gmail.com>.
What you are talking about is manual partition assignment, which is
different from reassignment upon rebalancing.

The consumer informs the group coordinator when close() is invoked, and
that will eventually cause a rebalance. I believe what you are referring to
is the rebalance listener.
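
For reference, a rough sketch of the rebalance listener hook (assuming a
consumer built as in the sketches above; the topic name is only a
placeholder):

import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// `consumer` is an existing KafkaConsumer<String, String>.
consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Commit whatever has been processed before the partitions move away.
        consumer.commitSync();
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Re-initialise any per-partition state or seek to stored offsets here.
    }
});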
