Posted to users@kafka.apache.org by Sameer Kumar <sa...@gmail.com> on 2018/01/25 09:27:09 UTC

Cater to processing longer than max.poll.interval.ms

I have a scenario: let's say that due to GC or any other issue, my consumer
takes longer than max.poll.interval.ms to process data. What is the
alternative for preventing the consumer from being marked dead and kicked
out of the consumer group?

The consumer has not actually died, and heartbeats are still being sent at
regular intervals within session.timeout.ms in this case.

-Sameer.

Re: Cater to processing longer than max.poll.interval.ms

Posted by Sameer Kumar <sa...@gmail.com>.
I am really not sure about max.poll.interval.ms; do we really need it?
Consumer liveness is already ensured by
session.timeout.ms/heartbeat.interval.ms.
max.poll.interval.ms - 5 minutes by default.
session.timeout.ms - 10 seconds.

If max.poll.interval.ms is exceeded, we kill the thread. So, let's say
there was only some intermittent problem: we have unnecessarily killed the
thread, and we are also not recreating that thread again. Aren't we wasting
processing here?

-Sameer.
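
For reference, both of these timeouts (and the heartbeat interval that backs
session.timeout.ms) are ordinary consumer properties; a minimal sketch,
assuming the Java client of that era and hypothetical broker, group and topic
names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TimeoutConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // hypothetical broker
        props.put("group.id", "example-group");                    // hypothetical group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        // Liveness of the process: heartbeats are sent by a background thread,
        // so a slow poll loop does not affect this check.
        props.put("session.timeout.ms", "10000");                  // default 10 s
        props.put("heartbeat.interval.ms", "3000");                 // default 3 s

        // Progress of the poll loop: if the gap between two poll() calls exceeds
        // this, the consumer leaves the group, so it must cover the worst-case
        // processing time of one batch.
        props.put("max.poll.interval.ms", "300000");                // default 5 min

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));  // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(100);    // Kafka 1.0-era poll(long)
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}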

On Fri, Jan 26, 2018 at 2:08 PM, Sameer Kumar <sa...@gmail.com>
wrote:

> Hi,
>
> I am talking w.r.t. Kafka 1.0. Reducing the poll interval and reducing
> the number of records polled are always options.
> I wanted to explore whether there are other options apart from these; in
> case of a GC pause, both of the above-mentioned options will not help.
>
> -Sameer.
>
> On Fri, Jan 26, 2018 at 1:35 AM, R Krishna <kr...@gmail.com> wrote:
>
>> I think newer versions have better ways of doing this. In 0.10.2, because
>> poll() ensures liveness, you can disable auto commits and use consumer
>> pause() so that poll() can still be called without fetching more records
>> (staying within max.poll.interval.ms); that way those partitions are not
>> reassigned to other consumers. You can also handle
>> ConsumerRebalanceListener onPartitionsAssigned, or reduce the amount of
>> data processed per poll using max.poll.records.
>> https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
>>
>> On Thu, Jan 25, 2018 at 1:27 AM, Sameer Kumar <sa...@gmail.com>
>> wrote:
>>
>> > I have a scenario: let's say that due to GC or any other issue, my consumer
>> > takes longer than max.poll.interval.ms to process data. What is the
>> > alternative for preventing the consumer from being marked dead and kicked
>> > out of the consumer group?
>> >
>> > The consumer has not actually died, and heartbeats are still being sent at
>> > regular intervals within session.timeout.ms in this case.
>> >
>> > -Sameer.
>> >
>>
>>
>>
>> --
>> Radha Krishna, Proddaturi
>> 253-234-5657
>>
>
>

Re: Cater to processing longer than max.poll.interval.ms

Posted by Sameer Kumar <sa...@gmail.com>.
Hi,

I am talking w.r.t. Kafka 1.0. Reducing the poll interval and reducing the
number of records polled are always options.
I wanted to explore whether there are other options apart from these; in
case of a GC pause, both of the above-mentioned options will not help.

-Sameer.
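
For reference, limiting how many records each poll() returns is just another
consumer property; a tiny sketch under the same assumptions as the
configuration above (the value is hypothetical):

import java.util.Properties;

public class MaxPollRecordsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // bootstrap.servers, group.id and deserializers would be set as usual.
        // Fewer records per poll() means less work per loop iteration, so each
        // iteration is more likely to finish within max.poll.interval.ms.
        props.put("max.poll.records", "50");   // default is 500 in Kafka 1.0
        System.out.println(props);             // placeholder; a real consumer would be built from these props
        // As noted above, neither knob helps when the stall is a long GC pause.
    }
}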

On Fri, Jan 26, 2018 at 1:35 AM, R Krishna <kr...@gmail.com> wrote:

> I think newer versions have better ways of doing this. In 0.10.2, because
> poll() ensures liveness, you can disable auto commits and use consumer
> pause() so that poll() can still be called without fetching more records
> (staying within max.poll.interval.ms); that way those partitions are not
> reassigned to other consumers. You can also handle
> ConsumerRebalanceListener onPartitionsAssigned, or reduce the amount of
> data processed per poll using max.poll.records.
> https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
>
> On Thu, Jan 25, 2018 at 1:27 AM, Sameer Kumar <sa...@gmail.com>
> wrote:
>
> > I have a scenario: let's say that due to GC or any other issue, my consumer
> > takes longer than max.poll.interval.ms to process data. What is the
> > alternative for preventing the consumer from being marked dead and kicked
> > out of the consumer group?
> >
> > The consumer has not actually died, and heartbeats are still being sent at
> > regular intervals within session.timeout.ms in this case.
> >
> > -Sameer.
> >
>
>
>
> --
> Radha Krishna, Proddaturi
> 253-234-5657
>

Re: Cater to processing longer than max.poll.interval.ms

Posted by R Krishna <kr...@gmail.com>.
I think newer versions have better ways of doing this. In 0.10.2, because
poll() ensures liveness, you can disable auto commits and use consumer
pause() so that poll() can still be called without fetching more records
(staying within max.poll.interval.ms); that way those partitions are not
reassigned to other consumers. You can also handle
ConsumerRebalanceListener onPartitionsAssigned, or reduce the amount of
data processed per poll using max.poll.records.
https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
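
A minimal sketch of that pause()-based pattern, assuming the 0.10.2-era Java
consumer and hypothetical broker, group and topic names: processing is handed
off to a worker thread so poll() can keep being called (which keeps the
consumer in the group) while the paused assignment returns no further records.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PauseResumeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");       // hypothetical broker
        props.put("group.id", "slow-processing-group");         // hypothetical group
        props.put("enable.auto.commit", "false");                // commit manually, after processing
        props.put("max.poll.records", "100");                    // keep batches small
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));  // hypothetical topic
            Future<?> inFlight = null;
            while (true) {
                // Keep calling poll(): while the assignment is paused it returns
                // no records, but the call still counts as progress for
                // max.poll.interval.ms and keeps the consumer in the group.
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (inFlight == null && !records.isEmpty()) {
                    consumer.pause(consumer.assignment());
                    inFlight = worker.submit(() -> records.forEach(r -> process(r.value())));
                } else if (inFlight != null && inFlight.isDone()) {
                    consumer.commitSync();                        // commit only after processing succeeded
                    consumer.resume(consumer.assignment());
                    inFlight = null;
                }
                // In production, a ConsumerRebalanceListener should re-pause the
                // newly assigned partitions after a rebalance, as noted above.
            }
        }
    }

    private static void process(String value) {
        // Long-running work goes here (hypothetical).
    }
}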

On Thu, Jan 25, 2018 at 1:27 AM, Sameer Kumar <sa...@gmail.com>
wrote:

> I have a scenario: let's say that due to GC or any other issue, my consumer
> takes longer than max.poll.interval.ms to process data. What is the
> alternative for preventing the consumer from being marked dead and kicked
> out of the consumer group?
>
> The consumer has not actually died, and heartbeats are still being sent at
> regular intervals within session.timeout.ms in this case.
>
> -Sameer.
>



-- 
Radha Krishna, Proddaturi
253-234-5657