Posted to users@kafka.apache.org by Sunil Parmar <su...@gmail.com> on 2017/07/28 22:49:25 UTC

Open file descriptors on a broker

Environment
CDH 5.7.
Kafka 0.9 (Cloudera)

Cloudera Manager is warning us about open file descriptors on the cluster;
the broker has around 17K file descriptors open. There is a configuration in
Cloudera Manager to change the warning and critical thresholds for the number
of file descriptors open at a given time. We could always make the warning go
away by raising those thresholds, but we're not sure that's the right way to
deal with it.
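In case it's useful, a quick way to spot-check the broker's actual FD count
on Linux (a minimal sketch; BROKER_PID is a placeholder, not our real PID):

    # Spot-check a broker's current open-FD count on Linux by listing
    # /proc/<pid>/fd. BROKER_PID below is hypothetical -- substitute the
    # Kafka process's real PID (e.g. as reported by ps).
    import os

    BROKER_PID = 12345  # placeholder
    fd_dir = "/proc/{}/fd".format(BROKER_PID)
    print("{} open file descriptors".format(len(os.listdir(fd_dir))))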

Is there a way to (roughly) calculate what we should set these thresholds
(allowable open file descriptors) to? What affects the number of open file
descriptors: topic partitions, small batch sizes, the number of consumers,
the number of producers? What makes a broker keep a file open for read,
write, or both? Some insight here would help us understand this.
We always see an upward trend in the number of file descriptors on the
broker. Does it ever go down, and if so, when? I've sketched the kind of
estimate I had in mind below.
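Here is the back-of-envelope I've been trying to sanity-check. The reasoning
is that the broker holds an open handle for every log segment file (.log
plus .index in 0.9), and every client or inter-broker connection is one
socket FD; all the numbers below are made-up placeholders, not our actual
cluster's values:

    # Back-of-envelope FD estimate for one broker -- a sketch, not a rule.
    # Every log segment contributes its open files, and every connection
    # contributes a socket FD. All values here are hypothetical.

    partitions_on_broker = 1000   # leader + replica partitions hosted here
    segments_per_partition = 5    # driven by log.segment.bytes and retention
    files_per_segment = 2         # .log + .index (0.10.1+ adds .timeindex)
    client_connections = 500      # producers + consumers
    broker_connections = 50       # replica fetchers, controller, etc.

    log_fds = partitions_on_broker * segments_per_partition * files_per_segment
    socket_fds = client_connections + broker_connections
    print("estimated open FDs: {}".format(log_fds + socket_fds))  # 10550 here

If that model is right, the count should only drop when retention deletes old
segments or when clients disconnect, and otherwise it keeps climbing as
segments accumulate. Is that correct?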

Also, although we're using Kafka 0.9 right now, I noticed the bug
https://issues.apache.org/jira/browse/KAFKA-3619, which appears to have been
found and fixed in 0.10. Can someone confirm that it is not an issue in 0.9?

Thanks,
Sunil Parmar