Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2016/05/28 01:46:12 UTC

[jira] [Updated] (KAFKA-3753) Metrics for StateStores

     [ https://issues.apache.org/jira/browse/KAFKA-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang updated KAFKA-3753:
---------------------------------
    Labels: api  (was: )

> Metrics for StateStores
> -----------------------
>
>                 Key: KAFKA-3753
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3753
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Jeff Klukas
>            Assignee: Guozhang Wang
>            Priority: Minor
>              Labels: api
>             Fix For: 0.10.1.0
>
>
> As a developer building a Kafka Streams application, I'd like to have visibility into what's happening with my state stores. How can I know if a particular store is growing large? How can I know if a particular store frequently needs to hit disk?
> I'm interested to know whether there are existing mechanisms for extracting this information, or whether other people have thoughts on how we might approach this.
> I can't think of a way to provide metrics generically, so each state store implementation would likely need to handle this separately. Given that the default RocksDBStore will likely be the most-used implementation, it would be a natural first target for adding metrics.
> I'd be interested in knowing the total number of entries in the store, the total size on disk and in memory, rates of gets and puts, and the hit/miss ratio for the MemoryLRUCache. Some of these numbers are likely calculable through the RocksDB API; others may simply not be accessible.
> Would there be value to the wider community in having state stores register metrics?
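
For illustration only, here is a rough sketch of how a store wrapper could record a few of the numbers mentioned above (get/put rates and a hit ratio) through Kafka's existing org.apache.kafka.common.metrics API. This is not the KAFKA-3753 implementation; the class name, metric group, and metric names are invented for the example, and only get/put are shown rather than the full KeyValueStore interface.

    // Hypothetical sketch only -- not the actual KAFKA-3753 design.
    // Wraps a KeyValueStore and records get/put rates plus a hit ratio
    // using Kafka's own metrics library (org.apache.kafka.common.metrics).
    import java.util.Collections;

    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.Avg;
    import org.apache.kafka.common.metrics.stats.Rate;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class MeteredStoreSketch<K, V> {

        private final KeyValueStore<K, V> inner;
        private final Sensor putSensor;
        private final Sensor getSensor;
        private final Sensor hitSensor;

        public MeteredStoreSketch(KeyValueStore<K, V> inner, Metrics metrics, String storeName) {
            this.inner = inner;
            String group = "stream-state-metrics";  // hypothetical metric group name

            // puts per second
            putSensor = metrics.sensor(storeName + "-put");
            putSensor.add(new MetricName("put-rate", group, "puts per second",
                    Collections.<String, String>emptyMap()), new Rate());

            // gets per second
            getSensor = metrics.sensor(storeName + "-get");
            getSensor.add(new MetricName("get-rate", group, "gets per second",
                    Collections.<String, String>emptyMap()), new Rate());

            // fraction of gets that returned a non-null value
            hitSensor = metrics.sensor(storeName + "-hit-ratio");
            hitSensor.add(new MetricName("hit-ratio", group, "fraction of gets that found a value",
                    Collections.<String, String>emptyMap()), new Avg());
        }

        public void put(K key, V value) {
            inner.put(key, value);
            putSensor.record();
        }

        public V get(K key) {
            V value = inner.get(key);
            getSensor.record();
            hitSensor.record(value != null ? 1.0 : 0.0);  // 1 = hit, 0 = miss
            return value;
        }
    }

A wrapper of this kind would keep the rate/ratio instrumentation out of each store implementation; store-specific numbers such as RocksDB's entry count or size on disk would still have to come from the underlying store itself, as the ticket notes.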



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)