Posted to users@kafka.apache.org by Jun Ma <mj...@gmail.com> on 2017/05/16 22:20:46 UTC

Log compaction failed because offset map doesn't have enough space

Hi team,

We are having an issue with compacting the __consumer_offsets topic in our
cluster. We’re seeing logs in log-cleaner.log saying:

[2017-05-16 11:56:28,993] INFO Cleaner 0: Building offset map for log
__consumer_offsets-15 for 349 segments in offset range [0, 619265471).
(kafka.log.LogCleaner)
[2017-05-16 11:56:29,014] ERROR [kafka-log-cleaner-thread-0], Error due to
 (kafka.log.LogCleaner)
java.lang.IllegalArgumentException: requirement failed: 306088059 messages
in segment __consumer_offsets-15/00000000000000000000.log but offset map
can fit only 74999999. You can increase log.cleaner.dedupe.buffer.size or
decrease log.cleaner.threads
at scala.Predef$.require(Predef.scala:219)
at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:584)
at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:580)
at
scala.collection.immutable.Stream$StreamWithFilter.foreach(Stream.scala:570)
at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:580)
at kafka.log.Cleaner.clean(LogCleaner.scala:322)
at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:230)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:208)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-05-16 11:56:29,016] INFO [kafka-log-cleaner-thread-0], Stopped
 (kafka.log.LogCleaner)

We have log.cleaner.dedupe.buffer.size=2000000000, which is slightly less
than 2G, but it can still only fit 74,999,999 messages. The segment has
306,088,059 messages, about 4 times more than the buffer can hold. We
tried setting log.cleaner.dedupe.buffer.size even larger, but then the log
says:
[2017-05-16 11:52:16,238] WARN [kafka-log-cleaner-thread-0], Cannot use
more than 2G of cleaner buffer space per cleaner thread, ignoring excess
buffer space... (kafka.log.LogCleaner)

The size of 00000000000000000000.log segment is 100MB, and
log.cleaner.threads=1. We’re running Kafka 0.9.0.1.
How can we get through this?
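The capacity figure in the error can be reproduced from the cleaner's offset map layout. A back-of-the-envelope sketch (the 24-byte entry size and the 0.9 load factor are assumptions based on Kafka's SkimpyOffsetMap and the default log.cleaner.io.buffer.load.factor, not stated in the thread):

```python
# Each offset-map entry stores a 16-byte MD5 of the message key plus an
# 8-byte offset, so one entry costs 24 bytes (assumed SkimpyOffsetMap layout).
ENTRY_SIZE = 16 + 8

dedupe_buffer_size = 2_000_000_000  # log.cleaner.dedupe.buffer.size from the post
threads = 1                         # log.cleaner.threads from the post
load_factor = 0.9                   # assumed default log.cleaner.io.buffer.load.factor

# The buffer is split per cleaner thread, divided into fixed-size slots,
# and only filled up to the load factor before the capacity check trips.
slots = dedupe_buffer_size // threads // ENTRY_SIZE  # 83,333,333 slots
capacity = int(slots * load_factor)

print(capacity)  # 74999999 -- the "can fit only 74999999" in the error
```

Under these assumptions the 306,088,059 offsets reported for the segment are roughly four times this capacity, which is why the require() check fails.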

Thanks,
Jun

Re: Log compaction failed because offset map doesn't have enough space

Posted by Tom Crayford <tc...@heroku.com>.
Hi,

You should upgrade Kafka versions, this was a bug fixed in KAFKA-3894:
https://issues.apache.org/jira/browse/KAFKA-3894

Generally it's a very good idea to keep on top of Kafka version upgrades:
numerous bugs are fixed with every release, and stability improves each
time.
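Until an upgrade is possible, the only knobs are the ones the error itself names, plus the map's load factor. A hedged server.properties sketch (on 0.9.0.1 the dedupe buffer is hard-capped at 2G per cleaner thread, so this can only shrink the gap, not fix the underlying bug):

```properties
# Already at the per-thread 2G cap, per the WARN in the original post.
log.cleaner.dedupe.buffer.size=2000000000

# Fewer threads means more buffer per thread; already at the minimum here.
log.cleaner.threads=1

# Raising this above the 0.9 default packs more entries into the same
# buffer, at the cost of more hash collisions in the offset map.
log.cleaner.io.buffer.load.factor=0.95
```

Even with these settings the segment in question would still exceed the map's capacity, which is why the version upgrade is the real fix.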
