Posted to users@kafka.apache.org by G V <xo...@gmail.com> on 2021/06/10 14:25:39 UTC

Very big partition of __consumer_offsets topic

Hi all, I am using Kafka 2.0.0 with Java 8u191.
One partition of the __consumer_offsets topic has grown to 600 GB, with
roughly 6000 segments, and those segments are older than 4 months. There are
60 consumer groups, 90 topics and 100 partitions per topic.
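
For reference, this is the kind of quick check I used to see the segment ages
and sizes on disk (rough sketch only; the path and partition number below are
just an example, the real directory depends on log.dirs):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ListSegmentAges {
        public static void main(String[] args) throws IOException {
            // Example path only: substitute the partition directory under the broker's log.dirs
            Path partitionDir = Paths.get("/var/kafka-logs/__consumer_offsets-11");
            try (DirectoryStream<Path> segments = Files.newDirectoryStream(partitionDir, "*.log")) {
                for (Path segment : segments) {
                    long sizeMb = Files.size(segment) / (1024 * 1024);
                    System.out.println(segment.getFileName() + "  " + sizeMb + " MB  "
                            + "last modified " + Files.getLastModifiedTime(segment));
                }
            }
        }
    }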

There are no errors in the logs, and the topic uses the default compact cleanup policy.
What could be the problem? A bug in Kafka 2.0.0? A problem with Java 8u191?
What checks could I run?

My settings:
log.cleaner.enable=true
log.cleanup.policy = [delete]
log.retention.bytes = -1
log.segment.bytes = 268435456
log.retention.hours = 72
log.retention.check.interval.ms = 300000
...
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
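
To double-check the topic-level overrides (log.cleanup.policy above is only
the broker default for regular topics; __consumer_offsets should carry a
compact override), this is a rough sketch of the check I have in mind using
the AdminClient. The bootstrap address is just a placeholder for one of my
brokers:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeOffsetsTopicConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder address: point this at one of the brokers
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config config = admin.describeConfigs(Collections.singleton(topic))
                        .all().get().get(topic);
                // Print the effective config; cleanup.policy should come back as "compact"
                config.entries().forEach(e ->
                        System.out.println(e.name() + " = " + e.value()
                                + (e.isDefault() ? " (default)" : "")));
            }
        }
    }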

Thanks :)