Posted to jira@kafka.apache.org by "Alexandre Dupriez (Jira)" <ji...@apache.org> on 2021/03/12 15:29:00 UTC

[jira] [Commented] (KAFKA-7525) Handling corrupt records

    [ https://issues.apache.org/jira/browse/KAFKA-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300379#comment-17300379 ] 

Alexandre Dupriez commented on KAFKA-7525:
------------------------------------------

This was fixed in 2.6.0 by KAFKA-9206.

> Handling corrupt records
> ------------------------
>
>                 Key: KAFKA-7525
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7525
>             Project: Kafka
>          Issue Type: Improvement
>          Components: consumer, core
>    Affects Versions: 1.1.0
>            Reporter: Katarzyna Solnica
>            Priority: Major
>
> When Java consumer encounters a corrupt record on a partition it reads from, it throws:
> {code:java}
> org.apache.kafka.common.KafkaException: Received exception when fetching the next record from XYZ. If needed, please seek past the record to continue consumption.
>     at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1125)
>     at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:993)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:527)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:488)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1155)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
>     (...)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14){code}
> or:
> {code:java}
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
>     at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:936)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:485)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1155)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
>     (...){code}
> 1. Could you consider throwing CorruptRecordException from parseCompletedFetch() when error == Errors.CORRUPT_MESSAGE?
> 2. Seeking past the corrupt record means losing data. I've noticed that the record is often intact on a follower ISR, and manually moving the partition leadership to that follower resolves the issue when the partition is consumed by a single consumer group. Couldn't the Kafka server detect such situations and recover corrupt records from the logs available on other ISRs?
>  
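The "seek past the record" workaround named in the exception message can be sketched as below. This is a minimal, self-contained illustration, not the project's recommended handling: the Kafka client calls are shown only as comments, and the offset arithmetic is the actual logic (the next fetch should start one offset past the unparseable record, accepting the data loss the reporter points out in point 2). The class and method names here are illustrative, not part of the Kafka API.

```java
// Hedged sketch of the workaround suggested by the exception message:
// on a fetch failure caused by a corrupt record, seek one offset past
// the failing position and resume consumption. In a real application
// you would catch org.apache.kafka.common.KafkaException around
// consumer.poll(...), find the failing TopicPartition and offset, and
// call consumer.seek(tp, nextFetchOffset(failedOffset)).
public class SkipCorruptRecord {

    // The record at `corruptOffset` cannot be parsed; resume reading
    // from the next offset (the corrupt record's data is lost).
    static long nextFetchOffset(long corruptOffset) {
        return corruptOffset + 1;
    }

    public static void main(String[] args) {
        // Real client calls, shown as comments to keep the sketch
        // self-contained (assumes a subscribed KafkaConsumer `consumer`
        // and a TopicPartition `tp` where the fetch failed):
        //
        // try {
        //     ConsumerRecords<String, String> records = consumer.poll(timeout);
        // } catch (KafkaException e) {
        //     long failed = consumer.position(tp);
        //     consumer.seek(tp, nextFetchOffset(failed));
        // }
        System.out.println(nextFetchOffset(42L)); // resumes at offset 43
    }
}
```

Note that this trades durability for availability: as the reporter observes, the skipped record may still be intact on a follower replica.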



--
This message was sent by Atlassian Jira
(v8.3.4#803005)