Posted to dev@kafka.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/05/22 06:28:04 UTC

[jira] [Commented] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

    [ https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019183#comment-16019183 ] 

ASF GitHub Bot commented on KAFKA-5211:
---------------------------------------

GitHub user becketqin opened a pull request:

    https://github.com/apache/kafka/pull/3114

    KAFKA-5211: Do not skip a corrupted record in consumer

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/becketqin/kafka KAFKA-5211

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/3114.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3114
    
----
commit e15530c262b4fc4727e61656f22881642d5e5049
Author: Jiangjie Qin <be...@gmail.com>
Date:   2017-05-22T06:24:22Z

    KAFKA-5211: Do not skip a corrupted record in consumer

----


> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-5211
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5211
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jiangjie Qin
>            Assignee: Jiangjie Qin
>              Labels: clients, consumer
>             Fix For: 0.11.0.0
>
>
> In 0.10.2, when the consumer encounters a corrupted record, KafkaConsumer.poll() throws an exception and stays blocked on that record. In the latest trunk this behavior has changed to skip the corrupted record (which is the old consumer's behavior). With KIP-98, skipping a corrupted message is dangerous because the message could be a control message for a transaction. We should fix this so that the KafkaConsumer blocks on corrupted messages instead of skipping them.
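
For context, below is a minimal sketch (not part of this patch) of a consumer poll loop that illustrates the behavior described above: when poll() hits a corrupted record it throws, and with the KAFKA-5211 fix the consumer stays positioned on that record rather than silently skipping it. The broker address, group id, topic name, and the choice to stop consuming on error are assumptions for illustration only; the exact exception type surfaced for a corrupted record is also assumed to be a KafkaException subclass.

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CorruptRecordPollLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "example-group");           // assumed group id
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // assumed topic
                while (true) {
                    try {
                        ConsumerRecords<String, String> records = consumer.poll(1000L);
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                        }
                    } catch (KafkaException e) {
                        // With the KAFKA-5211 fix the consumer does not advance past the
                        // corrupted record, so subsequent poll() calls will fail again until
                        // the application decides what to do (for example, seek past the bad
                        // offset explicitly, alert an operator, or shut down).
                        System.err.println("poll() failed, likely on a corrupted record: " + e);
                        break; // assumed handling: stop consuming rather than silently skip
                    }
                }
            }
        }
    }

The point of the fix is visible in the catch block: because the consumer's position is not advanced, skipping a bad record is an explicit application decision rather than something the client does on its own, which matters once control messages for transactions (KIP-98) are in the log.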



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)