Posted to jira@kafka.apache.org by "Ismael Juma (JIRA)" <ji...@apache.org> on 2017/07/24 12:22:02 UTC
[jira] [Updated] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException
[ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ismael Juma updated KAFKA-5630:
-------------------------------
Priority: Critical (was: Major)
> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
> Key: KAFKA-5630
> URL: https://issues.apache.org/jira/browse/KAFKA-5630
> Project: Kafka
> Issue Type: Bug
> Components: consumer
> Affects Versions: 0.11.0.0
> Reporter: Vincent Maurin
> Priority: Critical
> Labels: reliability
> Fix For: 0.11.0.1
>
>
> Hello,
> While consuming a topic with log compaction enabled, I am getting an infinite consumption loop over the same record: each call to poll returns the same record 500 times (500 is my max.poll.records). I am using the Java client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from `Fetcher.PartitionRecords.fetchRecords()`.
> There I get an `org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)`.
> The boolean `hasExceptionInLastFetch` is then set to true, causing the check in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the last record.
> I guess the corruption problem is similar to https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the client is probably not the intended one.
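The failure mode described above can be illustrated with a small, self-contained Java sketch. This is a hypothetical simplified model, not the actual Fetcher code: the class names and fields (`SimplifiedPartitionRecords`, `hadExceptionInLastFetch`, the `"CORRUPT"` marker) are invented for illustration. It shows how a never-cleared "exception in last fetch" flag makes every subsequent fetch re-deliver the cached last record instead of advancing past the corrupt one.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the retry logic the report describes.
// After a parse failure the flag stays set, so every later fetch keeps
// re-returning the last successfully parsed record.
class SimplifiedPartitionRecords {
    private final List<String> records;
    private int position = 0;                 // next record to parse
    private String cachedRecord = null;       // last record handed out
    private boolean hadExceptionInLastFetch = false;

    SimplifiedPartitionRecords(List<String> records) {
        this.records = records;
    }

    // Returns up to maxPollRecords records, loosely mimicking
    // Fetcher.PartitionRecords.fetchRecords().
    List<String> fetchRecords(int maxPollRecords) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < maxPollRecords; i++) {
            if (hadExceptionInLastFetch) {
                // Bug: the flag is never cleared, so the cached record is
                // re-returned on every iteration of every subsequent poll.
                out.add(cachedRecord);
                continue;
            }
            if (position >= records.size())
                break;
            String next = records.get(position);
            if (next.startsWith("CORRUPT")) {
                hadExceptionInLastFetch = true; // parse failed...
                continue;                       // ...and position never advances
            }
            cachedRecord = next;
            position++;
            out.add(next);
        }
        return out;
    }
}

public class StuckPollDemo {
    public static void main(String[] args) {
        SimplifiedPartitionRecords pr = new SimplifiedPartitionRecords(
                List.of("a", "CORRUPT", "b"));
        System.out.println(pr.fetchRecords(3)); // [a, a]
        System.out.println(pr.fetchRecords(3)); // [a, a, a] — stuck on "a"
    }
}
```

With max.poll.records of 500, the same stuck record fills the entire batch on every poll, matching the 500 duplicates per call reported above.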
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)