Posted to jira@kafka.apache.org by "Luigi Berrettini (Jira)" <ji...@apache.org> on 2020/09/24 15:41:00 UTC

[jira] [Created] (KAFKA-10522) Duplicate detection and max.in.flight.requests.per.connection details

Luigi Berrettini created KAFKA-10522:
----------------------------------------

             Summary: Duplicate detection and max.in.flight.requests.per.connection details
                 Key: KAFKA-10522
                 URL: https://issues.apache.org/jira/browse/KAFKA-10522
             Project: Kafka
          Issue Type: Wish
            Reporter: Luigi Berrettini


I was looking at [https://github.com/apache/kafka/pull/3743] and I was wondering if you could help me understand it better.

I saw that the [Sender|https://github.com/apache/kafka/blob/2.6.0/clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java#L602] checks for an {{Errors.DUPLICATE_SEQUENCE_NUMBER}} response, but I was not able to find where this error is raised on the broker side.

It seems to me that duplicate detection relies on checking whether the sequence number is greater than {{lastPersistedSeq + 1}}.
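To make my understanding of the sequence comparison concrete, here is a minimal sketch (not Kafka's actual ProducerStateManager code, and assuming a single producer writing to a single partition) of the broker-side check as I read it:

```java
// Hypothetical illustration of per-partition sequence-number checking.
// lastPersistedSeq tracks the sequence of the last batch written to the log.
public class SequenceCheckSketch {
    enum Outcome { ACCEPT, DUPLICATE, OUT_OF_ORDER }

    private int lastPersistedSeq = -1; // -1 means no batch persisted yet

    Outcome append(int batchSeq) {
        if (batchSeq == lastPersistedSeq + 1) {
            lastPersistedSeq = batchSeq;
            return Outcome.ACCEPT;       // exactly the next expected sequence
        } else if (batchSeq <= lastPersistedSeq) {
            return Outcome.DUPLICATE;    // already persisted: a retry of an acked batch
        } else {
            return Outcome.OUT_OF_ORDER; // a gap: reject the batch
        }
    }

    public static void main(String[] args) {
        SequenceCheckSketch state = new SequenceCheckSketch();
        System.out.println(state.append(0)); // ACCEPT
        System.out.println(state.append(1)); // ACCEPT
        System.out.println(state.append(1)); // DUPLICATE
        System.out.println(state.append(5)); // OUT_OF_ORDER
    }
}
```

If this sketch captures the essential check, the {{lastPersistedSeq}} value alone would seem sufficient to classify every incoming batch, which leads to the questions below.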

If this is the case:
 * why store the metadata for the last batches instead of relying only on the sequence number of the last message persisted in the log?
 * why limit {{max.in.flight.requests.per.connection}} to a maximum value of 5 if duplicates are still detected when metadata is not found (and therefore with any number of in-flight requests)?
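For reference, these are the producer settings I am asking about (a sketch only; the broker address is a placeholder). My understanding is that with idempotence enabled the client rejects values of {{max.in.flight.requests.per.connection}} above 5:

```java
import java.util.Properties;

// Sketch of the idempotent-producer configuration under discussion.
public class IdempotentProducerConfigSketch {
    static Properties idempotentProducerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers); // placeholder address
        props.put("enable.idempotence", "true");
        props.put("acks", "all"); // required with idempotence
        // Values greater than 5 are rejected when idempotence is enabled,
        // which is the limit this issue asks about.
        props.put("max.in.flight.requests.per.connection", "5");
        return props;
    }

    public static void main(String[] args) {
        Properties props = idempotentProducerProps("localhost:9092");
        System.out.println(props.getProperty("max.in.flight.requests.per.connection"));
    }
}
```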



--
This message was sent by Atlassian Jira
(v8.3.4#803005)