Posted to users@kafka.apache.org by sunil chaudhari <su...@gmail.com> on 2020/06/20 08:18:29 UTC

Memory for a broker

Hi,
I was going through this document.
https://docs.confluent.io/current/kafka/deployment.html
“ does not require setting heap sizes more than 6 GB. This will result in a
file system cache of up to 28-30 GB on a 32 GB machine.”

Can someone please shed some light on the above statement? It's a bit
unclear to me why the file system cache would reach 28-30 GB.
I have a 64 GB machine for each broker. Should I still stick to 6 GB, or
can I assign some more?

Regards,
Sunil.

Re: Memory for a broker

Posted by Ricardo Ferreira <ri...@riferrei.com>.
Sunil,

This has to do with Kafka being persistent and using the broker's
filesystem as the storage mechanism for the commit log. Modern operating
systems hand whatever RAM applications are not using over to the page
cache (easily *85%* of it on a dedicated broker), and therefore, with
Kafka running on a machine with *32GB* of RAM, roughly *28-30GB* will be
used to cache the log data.
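
As a rough illustration, you can watch this happening with free(1); the
buff/cache column is the page cache. The numbers below are made up for a
32GB box (the exact column layout varies by distro):

    $ free -g
                  total        used        free      shared  buff/cache   available
    Mem:             31           6           1           0          24          24
    Swap:             0           0           0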

That is why the JVM heap doesn't need to be larger than *~6GB*: all the
data is stored off-heap anyway ¯\_(ツ)_/¯
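
If you use the stock scripts, the heap is typically pinned through the
KAFKA_HEAP_OPTS environment variable, something like this (a sketch,
adjust the paths to your install):

    $ export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
    $ bin/kafka-server-start.sh config/server.properties

The same logic applies to your 64GB machines: keep the heap at *~6GB*
and let the OS use the rest as page cache for the log segments.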

Thanks,

-- Ricardo

On 6/20/20 4:18 AM, sunil chaudhari wrote:
> Hi,
> I was going through this document.
> https://docs.confluent.io/current/kafka/deployment.html
> “ does not require setting heap sizes more than 6 GB. This will result in a
> file system cache of up to 28-30 GB on a 32 GB machine.”
>
> Can someone please shed some light on the above statement? It's a bit
> unclear to me why the file system cache would reach 28-30 GB.
> I have a 64 GB machine for each broker. Should I still stick to 6 GB, or
> can I assign some more?
>
> Regards,
> Sunil.
>