Posted to issues@flink.apache.org by "Tzu-Li (Gordon) Tai (JIRA)" <ji...@apache.org> on 2018/02/13 06:52:00 UTC

[jira] [Commented] (FLINK-5728) FlinkKafkaProducer should flush on checkpoint by default

    [ https://issues.apache.org/jira/browse/FLINK-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361900#comment-16361900 ] 

Tzu-Li (Gordon) Tai commented on FLINK-5728:
--------------------------------------------

There was some discussion on the mailing list [1] about doing this as part of a major rework of the Kafka / Kinesis connectors in Flink 1.6. I'll downgrade the priority and reopen this under a Kafka / Kinesis connector rework umbrella issue.


[1] [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Timestamp-watermark-support-in-Kinesis-consumer-td20910.html]

> FlinkKafkaProducer should flush on checkpoint by default
> --------------------------------------------------------
>
>                 Key: FLINK-5728
>                 URL: https://issues.apache.org/jira/browse/FLINK-5728
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>            Reporter: Tzu-Li (Gordon) Tai
>            Priority: Blocker
>
> As discussed in FLINK-5702, it might be a good idea to let the FlinkKafkaProducer flush on checkpoints by default. Currently, it is disabled by default.
> It's a very simple change, but we should think about whether we want to break existing user behaviour or provide a proper migration path for current usage.
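
For context, a minimal sketch of how a user currently has to opt in to this behaviour explicitly, since flushing on checkpoints defaults to false today. This assumes the FlinkKafkaProducer09 variant of the producer; the broker address, topic name, class name, and input elements are illustrative only:

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class FlushOnCheckpointExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(5000); // checkpoint every 5 seconds

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative broker address

            DataStream<String> stream = env.fromElements("a", "b", "c");

            FlinkKafkaProducer09<String> producer =
                    new FlinkKafkaProducer09<>("my-topic", new SimpleStringSchema(), props);
            // The flag discussed in this issue: without this call, records still buffered
            // in the Kafka client are not guaranteed to be flushed before a checkpoint
            // completes, so they can be lost on failure.
            producer.setFlushOnCheckpoint(true);

            stream.addSink(producer);
            env.execute("flush-on-checkpoint example");
        }
    }

Making true the default would remove the need for the explicit setFlushOnCheckpoint(true) call, which is the behaviour change the description above weighs against compatibility.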



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)