Posted to jira@kafka.apache.org by "Amit Khandelwal (Jira)" <ji...@apache.org> on 2020/02/26 20:10:00 UTC

[jira] [Created] (KAFKA-9613) CorruptRecordException: Found record size 0 smaller than minimum record overhead

Amit Khandelwal created KAFKA-9613:
--------------------------------------

             Summary: CorruptRecordException: Found record size 0 smaller than minimum record overhead
                 Key: KAFKA-9613
                 URL: https://issues.apache.org/jira/browse/KAFKA-9613
             Project: Kafka
          Issue Type: Bug
            Reporter: Amit Khandelwal


20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0] Error processing fetch with max size 1048576 from consumer on partition SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1, maxBytes=1048576, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)

20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException: Found record size 0 smaller than minimum record overhead (14) in file /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log.

20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]: Member xxxxxxxx_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f in group yyyyyyyyy_011 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
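
For reference, the segment named in the CorruptRecordException above can be checked with Kafka's DumpLogSegments tool to confirm where the corruption begins (a diagnostic sketch; the file path is taken from the log line and a standard broker bin/ layout is assumed):

bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --deep-iteration \
  --files /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log

The tool iterates the record batches in the segment, so the zero-size record should surface at the offset where the file is damaged, which helps distinguish a single corrupted segment from a broader disk or replication problem.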
[https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
--
This message was sent by Atlassian Jira
(v8.3.4#803005)