Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2016/12/02 22:57:58 UTC
[jira] [Updated] (KAFKA-4473) RecordCollector should handle retriable exceptions more strictly
[ https://issues.apache.org/jira/browse/KAFKA-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Guozhang Wang updated KAFKA-4473:
---------------------------------
Summary: RecordCollector should handle retriable exceptions more strictly (was: KafkaStreams does *not* guarantee at-least-once delivery)
> RecordCollector should handle retriable exceptions more strictly
> ----------------------------------------------------------------
>
> Key: KAFKA-4473
> URL: https://issues.apache.org/jira/browse/KAFKA-4473
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 0.10.1.0
> Reporter: Thomas Schulz
> Priority: Critical
> Labels: architecture
>
> see: https://groups.google.com/forum/#!topic/confluent-platform/DT5bk1oCVk8
> There is probably a bug in the RecordCollector as described in my detailed Cluster test published in the aforementioned post.
> The class RecordCollector has the following behavior:
> - if there is no exception, add the message offset to a map
> - otherwise, do not add the message offset and instead log the above statement
> Is it possible that this offset map holds the latest offset to commit? If so, the offset of a failed message might be overridden by a later successful message, and the consumer would then commit every message up to that latest offset?
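The failure mode described above can be illustrated with a minimal, self-contained sketch. This is not the actual Kafka Streams source; the class and method names below are hypothetical, and it only models the pattern the report describes: a send callback that records an offset on success and merely logs on failure, so a later successful send can mask an earlier failed one when offsets are committed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the real RecordCollector) of the behavior
// described in the report.
public class OffsetMaskingSketch {
    // Maps a topic-partition to the highest successfully produced offset.
    static final Map<String, Long> offsets = new HashMap<>();

    // Mimics the producer send callback: record the offset only when
    // there is no exception; otherwise just log and move on.
    static void onCompletion(String topicPartition, long offset, Exception error) {
        if (error == null) {
            offsets.put(topicPartition, offset); // success: remember offset
        } else {
            // failure: the offset is NOT recorded, only logged
            System.err.println("Error sending record to " + topicPartition
                    + ": " + error.getMessage());
        }
    }

    public static void main(String[] args) {
        // The record at offset 5 fails with a retriable exception ...
        onCompletion("topic-0", 5, new RuntimeException("retriable send error"));
        // ... but a later record at offset 6 succeeds.
        onCompletion("topic-0", 6, null);
        // The map now holds 6, so committing up to the recorded offset
        // silently skips the failed record at offset 5.
        System.out.println(offsets.get("topic-0"));
    }
}
```

Running this prints 6: the map retains only the later successful offset, so a commit based on it would advance past the failed record at offset 5, which is exactly the at-least-once violation the report raises.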
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)