Posted to dev@kafka.apache.org by "Jeff Klukas (JIRA)" <ji...@apache.org> on 2016/05/25 13:43:12 UTC

[jira] [Created] (KAFKA-3753) Metrics for StateStores

Jeff Klukas created KAFKA-3753:
----------------------------------

             Summary: Metrics for StateStores
                 Key: KAFKA-3753
                 URL: https://issues.apache.org/jira/browse/KAFKA-3753
             Project: Kafka
          Issue Type: Improvement
          Components: streams
            Reporter: Jeff Klukas
            Assignee: Guozhang Wang
            Priority: Minor
             Fix For: 0.10.1.0


As a developer building a Kafka Streams application, I'd like to have visibility into what's happening with my state stores. How can I know if a particular store is growing large? How can I know if a particular store is frequently hitting disk?

I'd like to know whether there are existing mechanisms for extracting this information, and whether other people have thoughts on how we might approach this.

I can't think of a way to provide metrics generically, so each state store implementation would likely need to handle this separately. Given that the default RocksDBStore will likely be the most-used, it would be a first target for adding metrics.
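One per-implementation approach is to wrap each store in a metering decorator that counts operations before delegating. Here is a minimal, self-contained sketch of that idea; `SimpleKeyValueStore` and `MeteredStore` are hypothetical names for illustration, not the actual Kafka Streams `StateStore`/`KeyValueStore` interfaces:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified store interface (stand-in for the real API).
interface SimpleKeyValueStore<K, V> {
    V get(K key);
    void put(K key, V value);
}

// Decorator that counts gets, puts, and misses for any delegate store.
class MeteredStore<K, V> implements SimpleKeyValueStore<K, V> {
    private final SimpleKeyValueStore<K, V> delegate;
    final AtomicLong gets = new AtomicLong();
    final AtomicLong puts = new AtomicLong();
    final AtomicLong misses = new AtomicLong();

    MeteredStore(SimpleKeyValueStore<K, V> delegate) {
        this.delegate = delegate;
    }

    public V get(K key) {
        gets.incrementAndGet();
        V value = delegate.get(key);
        if (value == null) {
            misses.incrementAndGet();
        }
        return value;
    }

    public void put(K key, V value) {
        puts.incrementAndGet();
        delegate.put(key, value);
    }
}

public class MeteredStoreDemo {
    public static void main(String[] args) {
        // Back the delegate with a plain HashMap for the demo.
        final Map<String, String> map = new HashMap<>();
        SimpleKeyValueStore<String, String> inner = new SimpleKeyValueStore<String, String>() {
            public String get(String k) { return map.get(k); }
            public void put(String k, String v) { map.put(k, v); }
        };

        MeteredStore<String, String> store = new MeteredStore<>(inner);
        store.put("a", "1");
        store.get("a");  // hit
        store.get("b");  // miss
        System.out.println(store.gets.get() + " " + store.puts.get() + " " + store.misses.get());
    }
}
```

The counters could then be published through whatever metrics registry the application already uses, without each store implementation needing its own reporting code.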

I'd be interested in knowing the total number of entries in the store, the total size on disk and in memory, rates of gets and puts, and the hit/miss ratio for the MemoryLRUCache. Some of these numbers are likely calculable through the RocksDB API; others may simply not be accessible.
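Tracking a hit/miss ratio for an in-memory LRU cache is straightforward to sketch. The class below is a hypothetical stand-in for MemoryLRUCache, built on `LinkedHashMap` in access order with `removeEldestEntry` for eviction, just to show where the counters would live:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical LRU cache with hit/miss counters (not the actual MemoryLRUCache).
class LruCacheWithStats<K, V> {
    private final Map<K, V> map;
    private long hits;
    private long misses;

    LruCacheWithStats(final int maxEntries) {
        // Access-ordered LinkedHashMap; evict the least recently used entry
        // once the cache exceeds maxEntries.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    V get(K key) {
        V value = map.get(key);
        if (value == null) {
            misses++;
        } else {
            hits++;
        }
        return value;
    }

    void put(K key, V value) {
        map.put(key, value);
    }

    double hitRatio() {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}

public class LruStatsDemo {
    public static void main(String[] args) {
        LruCacheWithStats<String, Integer> cache = new LruCacheWithStats<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");     // hit; "a" becomes most recently used
        cache.put("c", 3);  // exceeds capacity, evicts "b"
        cache.get("b");     // miss
        System.out.println(cache.hitRatio());
    }
}
```

For the on-disk numbers, RocksDB itself exposes internal properties (e.g. estimated key count and SST file sizes) through its property-query API, which the RocksDBStore could surface in the same fashion.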

Would there be value to the wider community in having state stores register metrics?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)