Posted to dev@kafka.apache.org by "John Lu (JIRA)" <ji...@apache.org> on 2018/06/01 13:39:00 UTC

[jira] [Created] (KAFKA-6980) Recommended MaxDirectMemorySize for consumers

John Lu created KAFKA-6980:
------------------------------

             Summary: Recommended MaxDirectMemorySize for consumers
                 Key: KAFKA-6980
                 URL: https://issues.apache.org/jira/browse/KAFKA-6980
             Project: Kafka
          Issue Type: Wish
          Components: consumer, documentation
    Affects Versions: 0.10.2.0
         Environment: CloudFoundry
            Reporter: John Lu


We are observing that when MaxDirectMemorySize is set too low, our Kafka consumer threads fail with the following exception:

{{java.lang.OutOfMemoryError: Direct buffer memory}}
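For context, the limit in question is the standard JVM flag; our launch command looks roughly like the following (the main class name is only a placeholder for our consumer application, and the 64m value is an example of a setting that turned out to be too low):

{code}
java -Xmx2g -XX:MaxDirectMemorySize=64m -cp app.jar com.example.ConsumerMain
{code}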

Is there a way to estimate how much direct memory is required for optimal performance? The documentation suggests that the amount of memory required is roughly [Number of Partitions * max.partition.fetch.bytes].
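For illustration, here is how that documented rule of thumb works out (the partition count of 50 is an assumed example value, not a recommendation; the fetch size is the Kafka default of 1 MiB):

{code:java}
// Sketch of the documented estimate: partitions * max.partition.fetch.bytes.
public class DirectMemoryEstimate {
    public static void main(String[] args) {
        long partitions = 50;                  // partitions assigned to this consumer (assumption)
        long maxPartitionFetchBytes = 1 << 20; // max.partition.fetch.bytes default: 1048576 (1 MiB)
        long floorBytes = partitions * maxPartitionFetchBytes;
        // 50 MiB in this example, i.e. something like -XX:MaxDirectMemorySize=64m as a floor
        System.out.printf("estimated direct-memory floor: %d MiB%n", floorBytes >> 20);
    }
}
{code}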

When we pick a value slightly above that estimate, we no longer encounter the error, but if we double or triple the number, our throughput improves drastically. So we are wondering: is there another setting or parameter we should consider?
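For reference, these are the fetch-related consumer settings we suspect could also matter, pinned here to their stock defaults (whether each one actually affects direct buffer usage is exactly what we are asking; these are not tuned values):

{code:java}
import java.util.Properties;

// Consumer settings that plausibly influence direct buffer usage, shown at
// their Kafka defaults for reference only.
public class FetchSettings {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("max.partition.fetch.bytes", "1048576"); // 1 MiB per partition per fetch (default)
        props.put("fetch.max.bytes", "52428800");          // 50 MiB cap per fetch response (default)
        props.put("receive.buffer.bytes", "65536");        // TCP receive buffer hint (default; -1 = OS default)
        props.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
{code}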



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)