Posted to issues@flink.apache.org by "Robert Metzger (JIRA)" <ji...@apache.org> on 2016/02/11 12:33:18 UTC
[jira] [Commented] (FLINK-3386) Kafka consumers should not necessarily fail on expiring data
[ https://issues.apache.org/jira/browse/FLINK-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142614#comment-15142614 ]
Robert Metzger commented on FLINK-3386:
---------------------------------------
I think this issue is basically a duplicate of: https://issues.apache.org/jira/browse/FLINK-3264
> Kafka consumers should not necessarily fail on expiring data
> ------------------------------------------------------------
>
> Key: FLINK-3386
> URL: https://issues.apache.org/jira/browse/FLINK-3386
> Project: Flink
> Issue Type: Improvement
> Components: Kafka Connector, Streaming Connectors
> Affects Versions: 1.0.0
> Reporter: Gyula Fora
>
> Currently, if the data in a Kafka topic expires while a job is reading from it, the consumer hits an unrecoverable failure: subsequent retries also fail on the now-invalid offsets.
> While this might be the desired behaviour under some circumstances, in most cases it would probably be better to automatically jump to the earliest valid offset.
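The fallback described in the issue can be sketched as follows. This is a minimal, hypothetical simulation of the decision logic only, not the Flink connector's actual code; the function and parameter names (`resolve_offset`, `reset_to_earliest`, `earliest`, `latest`) are illustrative assumptions:

```python
def resolve_offset(requested: int, earliest: int, latest: int,
                   reset_to_earliest: bool = True) -> int:
    """Return a usable offset for a fetch request.

    Simulates the proposed behaviour: if the stored offset has fallen
    outside the broker's valid range (e.g. the data expired due to
    retention), jump to the earliest valid offset instead of failing.
    """
    if earliest <= requested <= latest:
        return requested  # offset still points at retained data
    if not reset_to_earliest:
        # Current behaviour per the issue: every retry fails again.
        raise ValueError(
            f"Offset {requested} out of range [{earliest}, {latest}]")
    # Expired data: resume from the earliest offset the broker still holds.
    return earliest


# Example: offsets below 100 have expired; the broker retains 100..500.
print(resolve_offset(42, 100, 500))   # expired offset -> reset to 100
print(resolve_offset(250, 100, 500))  # valid offset -> returned unchanged
```

In the real consumer this corresponds to catching the broker's offset-out-of-range error and re-seeking, rather than propagating the failure and retrying the same invalid offset.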
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)