Posted to dev@kafka.apache.org by "Dawid Kulig (JIRA)" <ji...@apache.org> on 2018/05/10 13:02:00 UTC

[jira] [Created] (KAFKA-6892) Kafka Streams memory usage grows

Dawid Kulig created KAFKA-6892:
----------------------------------

             Summary: Kafka Streams memory usage grows 
                 Key: KAFKA-6892
                 URL: https://issues.apache.org/jira/browse/KAFKA-6892
             Project: Kafka
          Issue Type: Bug
          Components: streams
    Affects Versions: 1.1.0
            Reporter: Dawid Kulig
         Attachments: kafka-streams-per-pod-resources-usage.png

Hi. I am observing indefinite memory growth in my kafka-streams application. It gets killed by the OS when it reaches the memory limit (10 GB).

It's running two unrelated pipelines (they read from 4 source topics, 100 partitions each, aggregate the data, and write to two destination topics).

My environment: 
 * Kubernetes cluster
 * 4 app instances
 * 10GB memory limit per pod (instance)
 * JRE 8

JVM and Kafka Streams settings (see the properties sketch below):
 * -Xms2g
 * -Xmx4g
 * num.stream.threads = 4
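For reference, num.stream.threads is a Kafka Streams property rather than a JVM flag; a minimal sketch of how it is set, with a placeholder application id and bootstrap servers (not our real values):
{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");  // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // placeholder
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);             // num.stream.threads = 4
{code}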


After running for about 24 hours the app reaches the 10 GB memory limit. The heap and GC look good, and average non-heap memory usage is 120 MB. I've read this might be related to RocksDB, which Kafka Streams uses underneath, so I tried to tune it following the [Confluent docs|https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config], but with no luck.

RocksDB config #1:
{code:java}
// set inside a custom RocksDBConfigSetter#setConfig(storeName, options, configs)
final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
tableConfig.setBlockCacheSize(16 * 1024 * 1024L);  // 16 MB block cache
tableConfig.setBlockSize(16 * 1024L);              // 16 KB data blocks
tableConfig.setCacheIndexAndFilterBlocks(true);    // index/filter blocks share the block cache
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);                // at most two memtables per store
{code}
RocksDB config #2:
{code:java}
// same setter, with a smaller block cache and write buffer
final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
tableConfig.setBlockCacheSize(1024 * 1024L);       // 1 MB block cache
tableConfig.setBlockSize(16 * 1024L);              // 16 KB data blocks
tableConfig.setCacheIndexAndFilterBlocks(true);    // index/filter blocks share the block cache
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);                // at most two memtables per store
options.setWriteBufferSize(8 * 1024L);             // 8 KB memtable (write buffer) size
{code}
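For completeness, a minimal sketch of how config #1 is wired in, following the pattern from the linked Confluent doc (the class name CustomRocksDBConfig is just an example, not our actual class):
{code:java}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// example class name; the body repeats config #1 above
public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
        tableConfig.setBlockSize(16 * 1024L);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2);
    }
}
{code}
The setter is registered through the Streams config:
{code:java}
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
{code}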

This behavior has only been observed with our production traffic, where the input rate is 10 msg/sec per topic. I am attaching the cluster resource usage from the last 24 hours.

Any help or advice would be much appreciated. 




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)