Posted to jira@kafka.apache.org by "Sergey Ivanov (Jira)" <ji...@apache.org> on 2020/11/12 06:33:00 UTC

[jira] [Commented] (KAFKA-9613) CorruptRecordException: Found record size 0 smaller than minimum record overhead

    [ https://issues.apache.org/jira/browse/KAFKA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17230378#comment-17230378 ] 

Sergey Ivanov commented on KAFKA-9613:
--------------------------------------

Hi,

We faced a similar issue with one of our Kafka brokers. It fails to start with the following error:


{code}
[2020-11-11T10:29:02,748][ERROR][category=kafka.log.LogManager] There was an error in one of the threads during logs loading: org.apache.kafka.common.errors.CorruptRecordException: Found record size 0 smaller than minimum record overhead (14) in file /var/opt/kafka/data/2/__consumer_offsets-4/00000000000021072037.log.
[2020-11-11T10:29:02,870][ERROR][category=kafka.server.KafkaServer] [KafkaServer id=2] Fatal error during KafkaServer startup. Prepare to shutdown
org.apache.kafka.common.errors.CorruptRecordException: Found record size 0 smaller than minimum record overhead (14) in file /var/opt/kafka/data/2/__consumer_offsets-4/00000000000021072037.log.
[2020-11-11T10:29:02,880][INFO][category=kafka.server.KafkaServer] [KafkaServer id=2] shutting down{code}
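
If I read the segment format correctly, every batch in a *.log file is framed by a 12-byte header: an 8-byte base offset followed by a 4-byte batch length, and on load the broker rejects any length smaller than the 14-byte minimum record overhead, which is exactly the message above. A length of 0 usually means the reader hit a zero-filled region of the file. Here is a minimal sketch of that check for a single file (our own illustration, not the broker's actual code):

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Reads the first batch header of a segment file and applies the same
// size validation that fails above (a zero-filled region yields size 0).
public class BatchHeaderCheck {
    static final int MIN_RECORD_OVERHEAD = 14; // smallest valid record size

    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            long baseOffset = in.readLong(); // bytes 0-7: base offset of the batch
            int size = in.readInt();         // bytes 8-11: batch length
            if (size < MIN_RECORD_OVERHEAD)
                System.out.printf("corrupt: batch size %d (base offset %d)%n",
                                  size, baseOffset);
        }
    }
}
{code}
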
Our Kafka is deployed in an OpenShift environment with local storage. The problem appeared after upgrading the cluster from 2.2.1 to 2.4.1; we performed the upgrade with a full cluster restart. On startup, the second broker failed with the error above, and it kept failing after every restart.

We tried removing the corrupted partition logs (__consumer_offsets-4), but after a restart we started to see new ones (__consumer_offsets-23, etc.). So we removed all partitions of the __consumer_offsets topic, in the hope that only this internal topic was corrupted.

But that didn't help: now we see similar errors for the Kafka Connect topics.
So it seems the workaround for us is to remove all corrupted partitions from this broker. But is there a cleaner way to find them all at once, rather than restarting the broker after each deletion? One possible offline scan is sketched below.
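
One way to do such a scan offline is to walk the broker's data directory and validate the batch framing of every *.log segment directly, reporting every corrupted file in a single pass. The helper below is our own sketch under the same framing assumption as above, not an official Kafka tool (pointing bin/kafka-dump-log.sh at an individual segment file should surface the same corruption):

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Walks a Kafka data directory and reports the first invalid batch header
// in every *.log segment, so all corrupted partitions are found in one
// pass instead of one broker restart per deletion.
public class CorruptSegmentScanner {
    static final int MIN_RECORD_OVERHEAD = 14; // broker's minimum record size
    static final int LOG_OVERHEAD = 12;        // 8-byte offset + 4-byte length

    public static void main(String[] args) throws IOException {
        Path dataDir = Paths.get(args[0]);     // e.g. /var/opt/kafka/data/2
        try (Stream<Path> files = Files.walk(dataDir)) {
            files.filter(p -> p.toString().endsWith(".log"))
                 .forEach(CorruptSegmentScanner::scan);
        }
    }

    static void scan(Path segment) {
        try (RandomAccessFile raf = new RandomAccessFile(segment.toFile(), "r")) {
            long pos = 0;
            long end = raf.length();
            while (pos + LOG_OVERHEAD <= end) {
                raf.seek(pos + 8);             // skip the 8-byte base offset
                int size = raf.readInt();      // 4-byte batch length (big-endian)
                if (size < MIN_RECORD_OVERHEAD) {
                    System.out.printf("CORRUPT   %s: batch size %d at byte %d%n",
                                      segment, size, pos);
                    return;
                }
                pos += LOG_OVERHEAD + size;    // jump to the next batch header
            }
            if (pos != end)                    // leftover partial bytes at EOF
                System.out.printf("TRUNCATED %s: partial batch at byte %d%n",
                                  segment, pos);
        } catch (IOException e) {
            System.out.printf("UNREADABLE %s: %s%n", segment, e.getMessage());
        }
    }
}
{code}

Running it against the data directory while the broker is stopped, e.g. java CorruptSegmentScanner /var/opt/kafka/data/2, should list every suspect partition at once.
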

Does anyone know what could have caused this error?

> CorruptRecordException: Found record size 0 smaller than minimum record overhead
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-9613
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9613
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Amit Khandelwal
>            Priority: Major
>
> 20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0] Error processing fetch with max size 1048576 from consumer on partition SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1, maxBytes=1048576, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)
> 20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException: Found record size 0 smaller than minimum record overhead (14) in file /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log.
> 20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
> 20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]: Member xxxxxxxx_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f in group yyyyyyyyy_011 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
>  
> [https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
>  
>  


