Posted to users@kafka.apache.org by David Garcia <Da...@bazaarvoice.com> on 2019/12/19 19:35:05 UTC

Corrupt Kafka file

Hello, we are getting the following error:

server.log:[2019-12-17 15:05:28,757] ERROR [ReplicaManager broker=5] Error processing fetch with max size 1048576 from consumer on partition my-topic-2: (fetchOffset=312239, logStartOffset=-1, maxBytes=1048576, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)
server.log:org.apache.kafka.common.errors.CorruptRecordException: Found record size -2010690462 smaller than minimum record overhead (14) in file /var/lib/kafka/my-topic-2/00000000000000307631.log.
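For reference, this is roughly how we dumped the segment named in the error (kafka-dump-log.sh ships with recent Kafka releases; older releases expose the same tool via kafka-run-class.sh kafka.tools.DumpLogSegments):

    bin/kafka-dump-log.sh \
      --files /var/lib/kafka/my-topic-2/00000000000000307631.log \
      --print-data-log

--print-data-log decodes the record payloads rather than just the batch metadata.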

This error started occurring after our brokers got overloaded with fetch requests from an errant Spark job. At the moment, our consumers aren’t able to progress past the affected offset. I found a few tickets that seemed relevant (e.g. https://issues.apache.org/jira/browse/KAFKA-6679), but they aren’t quite the same as our case: we were able to dump the records from the relevant files (see the command above).
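If we end up having to skip the bad record entirely, I assume something like the following would unstick the group (group name and bootstrap server are placeholders; 312240 is fetchOffset + 1 from the error above, and I’m reading my-topic-2 as partition 2 of topic my-topic). The group has to be stopped while resetting, and whatever lives at the corrupt offset is lost:

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group our-consumer-group --topic my-topic:2 \
      --reset-offsets --to-offset 312240 --execute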

-David