Posted to jira@kafka.apache.org by "Tomohiro Hashidate (Jira)" <ji...@apache.org> on 2022/06/15 10:28:00 UTC
[jira] [Created] (KAFKA-13993) Large log.cleaner.buffer.size config breaks Kafka Broker
Tomohiro Hashidate created KAFKA-13993:
------------------------------------------
Summary: Large log.cleaner.buffer.size config breaks Kafka Broker
Key: KAFKA-13993
URL: https://issues.apache.org/jira/browse/KAFKA-13993
Project: Kafka
Issue Type: Bug
Components: core
Affects Versions: 3.1.1, 3.2.0, 3.0.1, 2.8.1, 2.7.2
Reporter: Tomohiro Hashidate
LogCleaner builds a Cleaner instance in the following way:
```
val cleaner = new Cleaner(id = threadId,
  offsetMap = new SkimpyOffsetMap(memory = math.min(config.dedupeBufferSize / config.numThreads, Int.MaxValue).toInt,
    hashAlgorithm = config.hashAlgorithm),
  ioBufferSize = config.ioBufferSize / config.numThreads / 2,
  maxIoBufferSize = config.maxMessageSize,
  dupBufferLoadFactor = config.dedupeBufferLoadFactor,
  throttler = throttler,
  time = time,
  checkDone = checkDone)
```
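The clamping problem can be reproduced in isolation. The sketch below (with hypothetical config values, not taken from the report) shows that once the per-thread dedupe buffer exceeds Int.MaxValue, the `math.min(...)` expression above yields exactly Int.MaxValue, which is larger than any array the Hotspot VM will allocate:

```scala
object BufferSizeDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical oversized config: log.cleaner.buffer.size = 3 GiB,
    // log.cleaner.threads = 1
    val dedupeBufferSize: Long = 3L * 1024 * 1024 * 1024
    val numThreads = 1

    // Same expression as in LogCleaner: clamps to Int.MaxValue
    val memory = math.min(dedupeBufferSize / numThreads, Int.MaxValue).toInt
    println(memory) // 2147483647, i.e. Int.MaxValue

    // java.nio.ByteBuffer.allocate(memory) would then throw
    // "OutOfMemoryError: Requested array size exceeds VM limit" on Hotspot,
    // because the maximum array length is slightly below Int.MaxValue.
  }
}
```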
If `log.cleaner.buffer.size` / `log.cleaner.threads` is larger than Int.MaxValue, SkimpyOffsetMap uses Int.MaxValue,
and SkimpyOffsetMap tries to allocate a ByteBuffer with Int.MaxValue capacity.
But in the implementation of the Hotspot VM, the maximum array size is Int.MaxValue - 5.
According to ArraysSupport in OpenJDK, SOFT_MAX_ARRAY_LENGTH is Int.MaxValue - 8 (this is more conservative).
If the ByteBuffer capacity exceeds the maximum array length, the Kafka Broker fails to start.
```
[2022-06-14 18:08:09,609] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:45)
at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:300)
at kafka.log.LogCleaner.$anonfun$startup$2(LogCleaner.scala:155)
at kafka.log.LogCleaner.startup(LogCleaner.scala:154)
at kafka.log.LogManager.startup(LogManager.scala:435)
at kafka.server.KafkaServer.startup(KafkaServer.scala:291)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
```
I suggest using `Int.MaxValue - 8` instead of `Int.MaxValue`.
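A minimal sketch of the suggested change (not the actual Kafka patch; the constant name is illustrative): clamp the per-thread dedupe buffer to Int.MaxValue - 8, matching OpenJDK's SOFT_MAX_ARRAY_LENGTH, so the result is always an allocatable array size:

```scala
object ClampDemo {
  // Conservative maximum array length, as in OpenJDK's ArraysSupport
  val MaxArraySize: Int = Int.MaxValue - 8

  // Per-thread offset-map memory, clamped to an allocatable size
  def dedupeMemory(dedupeBufferSize: Long, numThreads: Int): Int =
    math.min(dedupeBufferSize / numThreads, MaxArraySize.toLong).toInt

  def main(args: Array[String]): Unit = {
    // With an oversized buffer config the result now stays below the VM limit
    println(dedupeMemory(4L * 1024 * 1024 * 1024, 1)) // 2147483639
  }
}
```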
--
This message was sent by Atlassian Jira
(v8.20.7#820007)