Posted to issues@flink.apache.org by "Danny Chen (Jira)" <ji...@apache.org> on 2020/08/31 09:24:00 UTC
[jira] [Comment Edited] (FLINK-19099) consumer kafka message repeat
[ https://issues.apache.org/jira/browse/FLINK-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17187592#comment-17187592 ]
Danny Chen edited comment on FLINK-19099 at 8/31/20, 9:23 AM:
--------------------------------------------------------------
Before FLINK-15221, the SQL Kafka connector only supported "at least once" semantics, so records may be duplicated after a failure. You can use the DataStream API instead.
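Since an at-least-once source can replay records after a restart, a common downstream mitigation is to deduplicate by (partition, offset). The sketch below is hypothetical plain Python, not Flink API; the record layout and function name are assumptions for illustration:

```python
# Hypothetical sketch: deduplicate at-least-once Kafka records by (partition, offset).
# After a restart, records whose offsets were already processed may be redelivered;
# since offsets within a partition are monotonically increasing, we can drop any
# record at or below the highest offset already seen for its partition.

def dedupe(records):
    """Yield each record's value at most once.

    `records` is an iterable of (partition, offset, value) tuples, assumed to be
    in offset order per partition (as Kafka delivers them).
    """
    last_seen = {}  # partition -> highest offset already processed
    for partition, offset, value in records:
        if offset <= last_seen.get(partition, -1):
            continue  # duplicate from a replayed batch; skip it
        last_seen[partition] = offset
        yield value

# Simulated replay: offsets 1-2 of partition 0 are redelivered after a crash.
stream = [(0, 0, "a"), (0, 1, "b"), (0, 2, "c"),
          (0, 1, "b"), (0, 2, "c"), (0, 3, "d")]
print(list(dedupe(stream)))  # ['a', 'b', 'c', 'd']
```

Note this only covers duplicates from offset replay; true exactly-once delivery needs transactional writes on the sink side as well.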
> consumer kafka message repeat
> -----------------------------
>
> Key: FLINK-19099
> URL: https://issues.apache.org/jira/browse/FLINK-19099
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream, Connectors / Kafka
> Affects Versions: 1.11.0
> Reporter: zouwenlong
> Priority: Major
>
> When a taskmanager is killed, my job has consumed some messages but their offsets are not yet committed.
> After the restart, my job consumes those Kafka messages again. I use checkpointing with a 5-second interval.
> I think this is a very common problem; how can it be solved?
--
This message was sent by Atlassian Jira
(v8.3.4#803005)