Posted to dev@kafka.apache.org by "Ruslan Gryn (Jira)" <ji...@apache.org> on 2020/11/12 10:15:00 UTC

[jira] [Created] (KAFKA-10711) A low value in commit.interval.ms leads to unnecessary committing offsets

Ruslan Gryn created KAFKA-10711:
-----------------------------------

             Summary: A low value in commit.interval.ms leads to unnecessary committing offsets
                 Key: KAFKA-10711
                 URL: https://issues.apache.org/jira/browse/KAFKA-10711
             Project: Kafka
          Issue Type: Improvement
          Components: consumer, offset manager
    Affects Versions: 2.6.0
            Reporter: Ruslan Gryn


We want to avoid double delivery of the same records in Kafka. Therefore, we decided to set
{code:java}
commit.interval.ms=0
max.poll.records=1{code}
The default commit.interval.ms is 5 seconds, so if the app crashes at runtime, after restarting it will re-receive up to 5 seconds' worth of uncommitted records. If every record is committed individually instead, the app will re-receive at most 1 duplicated record.

We expect the consumer to poll(5 sec) a single record from the topic, and on the next poll(5 sec) to commit the offset of the record returned by the previous poll.


However, the consumer commits offsets without any delay, even when those offsets were already committed before. This high volume of commit requests puts unnecessary load on the Kafka brokers.

Could you please improve the consumer behavior so that it skips offsets that were already committed before and only commits an offset when necessary?
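As a sketch of the behavior being requested (this is an illustration, not Kafka's actual implementation): a small filter could track the last committed offset per partition and drop commit requests for offsets that have not advanced. The class and method names below are hypothetical, and plain String/Long pairs stand in for TopicPartition/OffsetAndMetadata so the example is self-contained.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: remembers the last committed offset per partition
// and returns only the offsets that have advanced since the last commit.
public class OffsetCommitFilter {
    private final Map<String, Long> lastCommitted = new HashMap<>();

    // Returns the subset of 'current' whose offset is higher than what was
    // previously committed; an empty map means no commit request is needed.
    public Map<String, Long> offsetsToCommit(Map<String, Long> current) {
        Map<String, Long> toCommit = new HashMap<>();
        for (Map.Entry<String, Long> e : current.entrySet()) {
            Long prev = lastCommitted.get(e.getKey());
            if (prev == null || e.getValue() > prev) {
                toCommit.put(e.getKey(), e.getValue());
            }
        }
        lastCommitted.putAll(toCommit);
        return toCommit;
    }

    public static void main(String[] args) {
        OffsetCommitFilter filter = new OffsetCommitFilter();
        Map<String, Long> positions = new HashMap<>();
        positions.put("topic-0", 42L);
        System.out.println(filter.offsetsToCommit(positions)); // first commit goes through
        System.out.println(filter.offsetsToCommit(positions)); // unchanged offset: empty map, no commit
        positions.put("topic-0", 43L);
        System.out.println(filter.offsetsToCommit(positions)); // advanced offset: committed again
    }
}
{code}

With commit.interval.ms=0 a check like this would suppress the redundant commits described above, since an identical offset map would produce an empty result and no request would be sent to the broker.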



--
This message was sent by Atlassian Jira
(v8.3.4#803005)