Posted to issues@camel.apache.org by "Chris McCarthy (Jira)" <ji...@apache.org> on 2020/04/21 01:07:00 UTC

[jira] [Created] (CAMEL-14935) KafkaConsumer commits old offset values in certain failure case causing offset reset error

Chris McCarthy created CAMEL-14935:
--------------------------------------

             Summary: KafkaConsumer commits old offset values in certain failure case causing offset reset error
                 Key: CAMEL-14935
                 URL: https://issues.apache.org/jira/browse/CAMEL-14935
             Project: Camel
          Issue Type: Bug
          Components: camel-kafka
    Affects Versions: 2.24.0
            Reporter: Chris McCarthy


We are getting unexpected offset reset errors occasionally.

The cause appears to be a failed offset commit during a rebalance, which leaves a stale value in the lastProcessedOffset hashMap; in certain situations that stale value is then re-read and re-committed across subsequent rebalances.

Our relevant configuration details are:

autoCommitEnable=false
allowManualCommit=true
autoOffsetReset=earliest

It seems that when the KafkaConsumer encounters an exception while committing the offset during a rebalance, the old offset value is left behind in the lastProcessedOffset hashMap.

If a subsequent rebalance assigns the same partition back to the same consumer, and another rebalance occurs shortly thereafter (before any messages have been processed successfully), the consumer commits this old offset again. The offset may be very old if many rebalances have occurred since the original rebalance that failed to commit.

If the offset is old enough that the corresponding message is no longer available, the outcome is an offset reset error.
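The failure sequence above can be illustrated with a minimal sketch. This is not Camel's actual source; the class and method names (OffsetCommitSketch, process, onPartitionsRevoked, commitFails) are hypothetical, and the commit exception is simulated with a flag, but it shows how a stale entry in a last-processed-offset map can survive a failed commit and be re-committed on a later rebalance:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a consumer that records the last processed offset
// per partition and tries to commit it when a partition is revoked on rebalance.
class OffsetCommitSketch {
    // partition -> last processed offset (stands in for lastProcessedOffset)
    final Map<Integer, Long> lastProcessedOffset = new HashMap<>();

    void process(int partition, long offset) {
        lastProcessedOffset.put(partition, offset);
    }

    // Called on rebalance; 'commitFails' simulates a commit exception.
    void onPartitionsRevoked(int partition, boolean commitFails) {
        Long offset = lastProcessedOffset.get(partition);
        if (offset == null) {
            return; // nothing processed on this partition, nothing to commit
        }
        if (commitFails) {
            // Commit threw: the entry is NOT cleared, so the stale offset
            // survives and will be re-committed on a later rebalance even
            // if no new records have been processed since.
            return;
        }
        // Successful commit: clear the entry so it cannot be re-committed.
        lastProcessedOffset.remove(partition);
    }
}
```

In this sketch, a failed commit followed by reassignment of the same partition and another rebalance (with no new messages processed) re-commits the stale offset, matching the behavior described above.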



--
This message was sent by Atlassian Jira
(v8.3.4#803005)