Posted to issues@flink.apache.org by "pengyusong (Jira)" <ji...@apache.org> on 2022/04/22 01:52:00 UTC

[jira] [Created] (FLINK-27343) Flink JDBC sink with default params can flush buffered records within a batch out of order

pengyusong created FLINK-27343:
----------------------------------

             Summary: Flink JDBC sink with default params can flush buffered records within a batch out of order
                 Key: FLINK-27343
                 URL: https://issues.apache.org/jira/browse/FLINK-27343
             Project: Flink
          Issue Type: Improvement
          Components: Connectors / JDBC
    Affects Versions: 1.13.6
         Environment: flink 1.13.6

kafka

postgres jdbc sink
            Reporter: pengyusong


* situation one

    I use the Flink SQL Kafka connector to re-consume a topic that already contains many messages, with the JDBC sink left at its default parameters.

    The Kafka topic is a compacted topic whose contents are CDC events for a MySQL table.

    Some records in one batch share the same key. They are buffered within the batch and finally written to Postgres out of order: a later record in the buffered batch may be executed first.

    As a result, an older Kafka message is applied after a newer one, and the results are inconsistent with the Kafka message order.
 * situation two

     If I set
h5. sink.buffer-flush.interval = 0
h5. sink.buffer-flush.max-rows = 1

   the results are inconsistent with the Kafka message order.
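For reference, the two options above are table options of the Flink JDBC connector and can be set in the sink table's DDL. A minimal sketch of such a definition (the table name, columns, and connection URL here are made up for illustration; only the two buffer-flush options come from this report):

```sql
-- Hypothetical Postgres sink table; 'sink.buffer-flush.*' values
-- match the settings described above.
CREATE TABLE pg_sink (
  id  BIGINT,
  val STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',
  'table-name' = 'pg_sink',
  'sink.buffer-flush.interval' = '0',
  'sink.buffer-flush.max-rows' = '1'
);
```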


So, I suspect that the execution order of the buffered records in the JDBC sink is non-deterministic, which leads to the unordered results in the JDBC sink.
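To illustrate this suspicion, here is a minimal, self-contained Java sketch. It is not Flink's actual executor code (the class and method names are made up); it only shows how reducing a batch by key into a `HashMap` and then flushing in map-iteration order can emit records in a different order than they arrived, whereas an insertion-ordered map preserves the arrival order:

```java
import java.util.*;

public class BufferOrderSketch {
    // Hypothetical reduce-by-key buffer, loosely mirroring what an upsert
    // JDBC sink might do: keep only the latest record per key, then flush
    // the whole batch in the buffer's iteration order.
    static List<String> flushOrder(Map<String, String> buffer, List<String[]> events) {
        for (String[] e : events) {
            buffer.put(e[0], e[1]); // a later record for the same key overwrites the earlier one
        }
        return new ArrayList<>(buffer.keySet()); // flush in map iteration order
    }

    public static void main(String[] args) {
        List<String[]> events = Arrays.asList(
            new String[]{"k9", "v1"},
            new String[]{"k1", "v2"},
            new String[]{"k5", "v3"});

        // HashMap: iteration order depends on key hashes, not arrival order,
        // so the flushed batch may not match the Kafka message order.
        System.out.println(flushOrder(new HashMap<>(), events));

        // LinkedHashMap: iteration order follows insertion order,
        // so the flushed batch matches the order in which keys first arrived.
        System.out.println(flushOrder(new LinkedHashMap<>(), events));
    }
}
```

If the sink's buffer behaves like the `HashMap` case, that alone would explain the reordering seen in situation one.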



--
This message was sent by Atlassian Jira
(v8.20.7#820007)