Posted to users@kafka.apache.org by Pushkar Deole <pd...@gmail.com> on 2021/05/15 04:48:37 UTC

kafka metric to monitor for consumer FETCH using disk caching and not going to disk

Hi All,

Is there any metric I can use to check whether the memory allocated
to Kafka is sufficient for the given load on the brokers, i.e. whether
Kafka is making optimal use of the page cache so that consumer fetch
reads are not going to disk on every read, slowing down overall
consumer processing and thus increasing consumer lag?

Which metric can tell me that I should assign more memory to the brokers?

Re: kafka metric to monitor for consumer FETCH using disk caching and not going to disk

Posted by Alexandre Dupriez <al...@gmail.com>.
Not that I know of - but others may advise otherwise.
The change from KIP-551 is fairly self-contained, though, and should
backport cleanly.
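
If backporting is not an option, you can approximate the same signal
from outside the broker: if I recall correctly, on Linux the KIP-551
implementation sources its numbers from the proc filesystem, and
/proc/<pid>/io already exposes a read_bytes counter for bytes actually
fetched from the storage layer. Below is a minimal sketch of polling
it, assuming a Linux host, that you pass the broker's pid as the first
argument, and that you run it as the broker's user or root (the class
name and the 10s interval are just placeholders):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BrokerDiskReads {
    public static void main(String[] args) throws Exception {
        Path io = Paths.get("/proc", args[0], "io"); // args[0] = broker pid
        long previous = readBytes(io);
        while (true) {
            Thread.sleep(10_000);
            long current = readBytes(io);
            System.out.printf("disk read bytes in last 10s: %d%n",
                    current - previous);
            previous = current;
        }
    }

    // Parse the "read_bytes" line: bytes this process caused to be
    // fetched from the block layer, i.e. reads the page cache missed.
    private static long readBytes(Path io) throws Exception {
        for (String line : Files.readAllLines(io)) {
            if (line.startsWith("read_bytes:")) {
                return Long.parseLong(line.split("\\s+")[1]);
            }
        }
        throw new IllegalStateException("read_bytes not found in " + io);
    }
}

A near-zero delta while consumers are actively fetching means fetches
are being served from the page cache; a sustained large delta when
only tail reads are expected is the sign you were asking about.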

Thanks,
Alexandre

On Sun, May 16, 2021 at 2:51 PM, Pushkar Deole <pd...@gmail.com> wrote:
>
> Thanks Alexandre... currently we are using Kafka 2.5.0, so is there any
> metric that can be used in 2.5.0?
>
> On Sun, May 16, 2021 at 6:02 PM Alexandre Dupriez <
> alexandre.dupriez@gmail.com> wrote:
>
> > Hi Pushkar,
> >
> > If you are using Linux and Kafka 2.6.0+, the closest metric to what
> > you are looking for is TotalDiskReadBytes [1], which measures data
> > transfer at the block layer.
> > Assuming your consumers are doing tail reads and there is no other
> > activity which requires loading pages from the disk on your system
> > (including log compaction from Kafka), you can determine if you are
> > actually hitting the disk or not.
> >
> > [1]
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-551%3A+Expose+disk+read+and+write+metrics
> >
> > Thanks,
> > Alexandre
> >
> > On Sat, May 15, 2021 at 5:49 AM, Pushkar Deole <pd...@gmail.com> wrote:
> > >
> > > Hi All,
> > >
> > > Is there any metric I can use to check whether the memory allocated
> > > to Kafka is sufficient for the given load on the brokers, i.e. whether
> > > Kafka is making optimal use of the page cache so that consumer fetch
> > > reads are not going to disk on every read, slowing down overall
> > > consumer processing and thus increasing consumer lag?
> > >
> > > Which metric can tell me that I should assign more memory to the brokers?
> >

Re: kafka metric to monitor for consumer FETCH using disk caching and not going to disk

Posted by Pushkar Deole <pd...@gmail.com>.
Thanks Alexandre... currently we are using Kafka 2.5.0, so is there any
metric that can be used in 2.5.0?

On Sun, May 16, 2021 at 6:02 PM Alexandre Dupriez <
alexandre.dupriez@gmail.com> wrote:

> Hi Pushkar,
>
> If you are using Linux and Kafka 2.6.0+, the closest metric to what
> you are looking for is TotalDiskReadBytes [1], which measures data
> transfer at the block layer.
> Assuming your consumers are doing tail reads and there is no other
> activity which requires loading pages from the disk on your system
> (including log compaction from Kafka), you can determine if you are
> actually hitting the disk or not.
>
> [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-551%3A+Expose+disk+read+and+write+metrics
>
> Thanks,
> Alexandre
>
> On Sat, May 15, 2021 at 5:49 AM, Pushkar Deole <pd...@gmail.com> wrote:
> >
> > Hi All,
> >
> > Is there any metric I can use to check whether the memory allocated
> > to Kafka is sufficient for the given load on the brokers, i.e. whether
> > Kafka is making optimal use of the page cache so that consumer fetch
> > reads are not going to disk on every read, slowing down overall
> > consumer processing and thus increasing consumer lag?
> >
> > Which metric can tell me that I should assign more memory to the brokers?
>

Re: kafka metric to monitor for consumer FETCH using disk caching and not going to disk

Posted by Alexandre Dupriez <al...@gmail.com>.
Hi Pushkar,

If you are using Linux and Kafka 2.6.0+, the closest metric to what
you are looking for is TotalDiskReadBytes [1], which measures data
transfer at the block layer.
Assuming your consumers are doing tail reads and there is no other
activity which requires loading pages from the disk on your system
(including log compaction from Kafka), you can determine if you are
actually hitting the disk or not.

[1] https://cwiki.apache.org/confluence/display/KAFKA/KIP-551%3A+Expose+disk+read+and+write+metrics
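
In case it helps, here is a minimal sketch of reading the gauge over
JMX. The MBean name below is my guess at how the metric is registered
- please verify the exact name with jconsole or against the KIP - and
localhost:9999 stands in for your broker's JMX endpoint (set e.g.
JMX_PORT=9999 when starting the broker):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DiskReadGauge {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Assumed registration of the KIP-551 gauge; confirm with
            // jconsole before wiring this into monitoring.
            ObjectName gauge = new ObjectName(
                    "kafka.server:type=KafkaServer,name=linux-disk-read-bytes");
            // Kafka's Yammer gauges expose their reading as "Value".
            System.out.println("TotalDiskReadBytes: "
                    + conn.getAttribute(gauge, "Value"));
        }
    }
}

Graphing the delta of this value over time, next to consumer lag,
should show whether fetch traffic is hitting the disk.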

Thanks,
Alexandre

On Sat, May 15, 2021 at 5:49 AM, Pushkar Deole <pd...@gmail.com> wrote:
>
> Hi All,
>
> Is there any metric I can use to check whether the memory allocated
> to Kafka is sufficient for the given load on the brokers, i.e. whether
> Kafka is making optimal use of the page cache so that consumer fetch
> reads are not going to disk on every read, slowing down overall
> consumer processing and thus increasing consumer lag?
>
> Which metric can tell me that I should assign more memory to the brokers?