Posted to users@kafka.apache.org by Pushkar Deole <pd...@gmail.com> on 2020/07/03 13:30:47 UTC

Need inputs - Confluent kafka memory usage pattern

Hi All,

We are using the Confluent distribution of Kafka, version 5.5.0, deployed as
pods on Kubernetes.
We run 3 broker pods with the pod memory request/limit set to 512Mi/2GiB
respectively, and we observed that all pods were almost touching the limit,
or going slightly over it (around 2.1GiB), yet they never ran out of memory.
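For reference, the pod spec described above would look roughly like this (a minimal sketch of the container resources section; only the request/limit values come from our setup, the rest is illustrative):

```yaml
# Hypothetical broker container resources matching the description above
resources:
  requests:
    memory: "512Mi"   # scheduling request
  limits:
    memory: "2Gi"     # hard cap; the cgroup OOM-kills the pod above this
```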

Since the pods were touching the limit, we increased the limit to 4GiB for
all of them and ran the same volume of load again. Now all pods are touching
the new limit, i.e. around 4GiB of memory is being used.

I am confused by these numbers. Earlier the same load consumed around 2GiB,
and now that the limit has been increased, it is consuming 4GiB.
Is it the behavior of Confluent Kafka to use all of the memory allocated to
it? If not, what is the logic behind this memory usage pattern?
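In case it helps frame the question: the broker's JVM heap is normally bounded by KAFKA_HEAP_OPTS rather than by the pod limit, and the rest of the pod's memory is typically taken by the Linux page cache, which the kernel grows to fill whatever the cgroup allows (and reclaims under pressure). A sketch of pinning the heap in the pod spec, with illustrative values:

```yaml
# Hypothetical env entry for the broker container; the -Xms/-Xmx values
# are illustrative, not a recommendation for this workload
env:
  - name: KAFKA_HEAP_OPTS
    value: "-Xms1g -Xmx1g"
```

With the heap fixed, comparing heap usage to total container usage would show how much of the 2GiB/4GiB figure is page cache rather than memory the broker has actually reserved.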