Posted to dev@kafka.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/12/13 10:10:58 UTC

[jira] [Commented] (KAFKA-4473) RecordCollector should handle retriable exceptions more strictly

    [ https://issues.apache.org/jira/browse/KAFKA-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744756#comment-15744756 ] 

ASF GitHub Bot commented on KAFKA-4473:
---------------------------------------

GitHub user dguy opened a pull request:

    https://github.com/apache/kafka/pull/2249

    KAFKA-4473: RecordCollector should handle retriable exceptions more strictly

    The `RecordCollectorImpl` currently drops messages on the floor if an exception is non-null in the producer callback. This will result in message loss and violates at-least-once processing.
    Rather than just logging an error in the callback, save the exception in a field. On subsequent calls to `send`, `flush`, or `close`, first check whether an exception has been recorded and throw a `StreamsException` if so. Also, in the callback, if an exception has already occurred, the `offsets` map should not be updated.
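    The approach described above can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the actual `RecordCollectorImpl` code; the class and method names here are made up for the example:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch: an error observed in the async producer callback is
    // remembered in a field and re-thrown on the next send/flush/close.
    public class CollectorSketch {
        // last exception seen in a producer callback, if any
        private volatile Exception sendException = null;
        // topic-partition -> last successfully acknowledged offset
        private final Map<String, Long> offsets = new HashMap<>();

        // stands in for the producer Callback invoked on completion
        public void onCompletion(String topicPartition, long offset, Exception exception) {
            if (exception == null) {
                // do not advance offsets once a failure has been recorded
                if (sendException == null) {
                    offsets.put(topicPartition, offset);
                }
            } else {
                // remember the failure instead of just logging it
                sendException = exception;
            }
        }

        private void checkForException() {
            if (sendException != null) {
                throw new RuntimeException("previous record send failed", sendException);
            }
        }

        public void send(String topicPartition) {
            checkForException(); // fail fast before accepting more records
            // ... hand the record to the producer here ...
        }

        public void flush() { checkForException(); }

        public void close() { checkForException(); }

        public Map<String, Long> offsets() { return offsets; }
    }
    ```

    The key design point is that the exception surfaces on the *next* call into the collector, so the failure is raised on the stream thread and aborts the commit, rather than being swallowed on the producer's I/O thread.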

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dguy/kafka kafka-4473

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/2249.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2249
    
----
commit 9ceaa67a9967edfa7522bb9daea64c1bc44738c0
Author: Damian Guy <da...@gmail.com>
Date:   2016-12-12T17:47:27Z

    throw exception in record collector instead of just dropping messages on the floor

----


> RecordCollector should handle retriable exceptions more strictly
> ----------------------------------------------------------------
>
>                 Key: KAFKA-4473
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4473
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 0.10.1.0
>            Reporter: Thomas Schulz
>            Assignee: Damian Guy
>            Priority: Critical
>              Labels: architecture
>
> see: https://groups.google.com/forum/#!topic/confluent-platform/DT5bk1oCVk8
> There is probably a bug in the RecordCollector, as described in the detailed cluster test published in the aforementioned post.
> The class RecordCollector has the following behavior:
> - if there is no exception, add the message offset to a map
> - otherwise, do not add the message offset and instead only log an error
> Is it possible that this offset map contains the latest offset to commit? If so, a message that fails might be overridden by a successful (later) message, and the consumer would commit every message up to the latest offset?
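A hypothetical illustration of the loss scenario the reporter describes: with the old behavior, a later successful ack overwrites the map entry for the same topic-partition, so the committed offset can skip past a failed record. The names below are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetLossDemo {
    public static void main(String[] args) {
        // topic-partition -> last acknowledged offset, as kept by the collector
        Map<String, Long> offsets = new HashMap<>();

        // offset 5 acks successfully -> recorded
        offsets.put("topic-0", 5L);

        // offset 6 FAILS: the old code only logs, the map is untouched

        // offset 7 acks successfully -> overwrites the entry
        offsets.put("topic-0", 7L);

        // the commit point is lastOffset + 1, so the failed record at
        // offset 6 is committed over and never redelivered
        long commit = offsets.get("topic-0") + 1;
        System.out.println("would commit offset " + commit); // prints 8
    }
}
```

This is exactly why the fix has to stop updating the `offsets` map once a failure is seen: otherwise at-least-once semantics are silently violated.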



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)