Posted to issues@flink.apache.org by "Alexis Sarda-Espinosa (Jira)" <ji...@apache.org> on 2023/04/15 12:23:00 UTC

[jira] [Commented] (FLINK-31305) KafkaWriter doesn't wait for errors for in-flight records before completing flush

    [ https://issues.apache.org/jira/browse/FLINK-31305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712648#comment-17712648 ] 

Alexis Sarda-Espinosa commented on FLINK-31305:
-----------------------------------------------

I understand the externalized release of the connector (v3.0) will only be compatible with Flink 1.17.x, but _if_ a Flink 1.16.2 patch is released, will it also include a non-externalized release of the connector? Given the criticality of this, I had hoped the externalized connector would also support 1.16.x so I could immediately use it with 1.16.1.

> KafkaWriter doesn't wait for errors for in-flight records before completing flush
> ---------------------------------------------------------------------------------
>
>                 Key: FLINK-31305
>                 URL: https://issues.apache.org/jira/browse/FLINK-31305
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.17.0, 1.16.1
>            Reporter: Mason Chen
>            Assignee: Mason Chen
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: kafka-3.0.0
>
>
> The KafkaWriter flush needs to wait for all in-flight records to be sent successfully. This can be achieved by tracking each request and completing it from the callback registered via producer#send().
> Otherwise there is potential for data loss: to preserve at-least-once semantics, the checkpoint must not complete while the outcome of in-flight records is still unknown, but currently the checkpoint does not accurately reflect whether all records have been sent successfully.
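A minimal sketch of the tracking scheme the issue describes (the InFlightTracker name and its methods are illustrative assumptions, not Flink's actual KafkaWriter API): each send increments an in-flight counter, the producer callback decrements it and records any asynchronous error, and flush blocks until the counter drains, rethrowing the first captured error so the checkpoint fails instead of silently losing records.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical helper mirroring the fix: count in-flight sends and surface
// async send errors during flush. Names are illustrative, not Flink's API.
class InFlightTracker {
    private final AtomicLong inFlight = new AtomicLong();
    private final AtomicReference<Exception> asyncError = new AtomicReference<>();

    // Called just before producer.send(record, callback).
    void recordSent() {
        inFlight.incrementAndGet();
    }

    // Invoked from the callback passed to producer.send() once the broker
    // acks (error == null) or fails (error != null) the record.
    void recordCompleted(Exception error) {
        if (error != null) {
            asyncError.compareAndSet(null, error); // keep the first error
        }
        inFlight.decrementAndGet();
    }

    // flush() must not return until every in-flight record has completed,
    // and must rethrow any async error so the checkpoint is aborted.
    void awaitAllCompleted() throws Exception {
        while (inFlight.get() > 0) {
            Thread.sleep(1); // real code would piggyback on producer.flush()
        }
        Exception e = asyncError.get();
        if (e != null) {
            throw e;
        }
    }
}
```

In the real writer, recordCompleted would be called from the Kafka producer's send callback, so a failed ack observed after producer.flush() returns still fails the in-progress checkpoint rather than being noticed only on the next send.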



--
This message was sent by Atlassian Jira
(v8.20.10#820010)