Posted to dev@kafka.apache.org by "Andrew Jorgensen (JIRA)" <ji...@apache.org> on 2016/07/20 20:10:20 UTC

[jira] [Created] (KAFKA-3980) JmxReporter using excessive memory

Andrew Jorgensen created KAFKA-3980:
---------------------------------------

             Summary: JmxReporter using excessive memory
                 Key: KAFKA-3980
                 URL: https://issues.apache.org/jira/browse/KAFKA-3980
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 0.9.0.1
            Reporter: Andrew Jorgensen


I have some nodes in a Kafka cluster that occasionally run out of memory when I restart the producers. I took heap dumps from a recently restarted Kafka node, which weighed in at about 20 MB, and from a node that had been running for 2 months, which was using over 700 MB. Looking at the heap dump, it appears the JmxReporter is holding on to metrics, causing them to build up over time.
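To quantify the buildup described above without another heap dump, the registered MBeans can be grouped by their `client-id` key property over JMX. This is a minimal sketch using only the standard `javax.management` API; the assumption (based on the heap dump) is that the leaked entries are per-client-id MBeans registered by JmxReporter, so a long-running broker would show far more distinct client ids than a fresh one.

```java
import java.lang.management.ManagementFactory;
import java.util.Map;
import java.util.TreeMap;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxMetricCount {
    // Count registered MBeans per "client-id" key property. Run inside the
    // broker JVM (or point a JMX connector at it); on a plain JVM the map
    // will simply be empty since no kafka.* MBeans are registered.
    public static Map<String, Integer> countByClientId() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Map<String, Integer> perClientId = new TreeMap<>();
        for (ObjectName name : server.queryNames(null, null)) {
            String clientId = name.getKeyProperty("client-id");
            if (clientId != null) {
                perClientId.merge(clientId, 1, Integer::sum);
            }
        }
        return perClientId;
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        System.out.println("Total MBeans: " + server.getMBeanCount());
        countByClientId().forEach(
                (id, n) -> System.out.println(id + " -> " + n + " MBeans"));
    }
}
```

If the per-client-id count keeps growing across producer restarts, that would confirm the retention pattern visible in the heap dump.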

!http://imgur.com/N6Cd0Ku.png!

!http://imgur.com/kQBqA2j.png!

The ultimate problem this causes is that there is a chance that when I restart the producers, the node will hit a Java heap space exception and OOM. The node then fails to start up correctly and writes a -1 as the leader number to the partitions it was responsible for, effectively resetting the offset and rendering those partitions unavailable. The Kafka process then needs to be restarted in order to re-assign the node to the partitions it owns.

I have a few questions:
1. I am not quite sure why there are so many client-id entries in that JmxReporter map.
2. Is there a way to have the JmxReporter release metrics after a set amount of time, or a way to turn off certain high-cardinality metrics like these?
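One thing that may be related to question 1: if each producer restart comes up with a new client id (for example an auto-generated one), every restart mints a fresh set of per-client-id metric names, and the old ones are never reclaimed. A hedged sketch of pinning a stable `client.id` in the producer config, which should keep the metric-name cardinality constant across restarts; `orders-service-producer` and the broker address are hypothetical values for illustration, and this works around the growth rather than fixing the broker-side retention itself.

```java
import java.util.Properties;

public class ProducerClientIdConfig {
    // Build producer config with an explicit, stable client.id so restarts
    // re-use the same JMX metric names instead of creating new ones.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");          // hypothetical address
        props.put("client.id", "orders-service-producer");      // stable, not auto-generated
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```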

I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)