Posted to commits@cassandra.apache.org by "Serban Teodorescu (JIRA)" <ji...@apache.org> on 2019/07/18 15:28:00 UTC

[jira] [Commented] (CASSANDRA-13096) Snapshots slow down jmx scraping

    [ https://issues.apache.org/jira/browse/CASSANDRA-13096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888067#comment-16888067 ] 

Serban Teodorescu commented on CASSANDRA-13096:
-----------------------------------------------

See [https://github.com/criteo/cassandra_exporter#why-cache-metrics-results-this-is-not-the-prometheus-way-]

Snapshot scraping is expensive, and Prometheus triggers it frequently. That would explain why scrape times return to normal once you clear the snapshots.
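
The criteo exporter README linked above describes caching metric results so that frequent Prometheus scrapes do not re-pay the cost of expensive MBean reads. As a minimal sketch of that idea (illustrative names only, not the exporter's actual code), an expensive read can be wrapped in a TTL cache:

{code:java}
import java.util.function.Supplier;

/**
 * Minimal sketch (not the criteo exporter's actual code): wrap an expensive
 * metric read in a TTL cache so frequent Prometheus scrapes reuse the last
 * value instead of re-doing the costly work (e.g. walking snapshot dirs).
 */
final class CachedGauge<T> implements Supplier<T>
{
    private final Supplier<T> expensiveRead; // e.g. a JMX getAttribute call
    private final long ttlNanos;
    private volatile T cachedValue;
    private volatile long lastReadNanos;

    CachedGauge(Supplier<T> expensiveRead, long ttlMillis)
    {
        this.expensiveRead = expensiveRead;
        this.ttlNanos = ttlMillis * 1_000_000L;
    }

    @Override
    public synchronized T get()
    {
        long now = System.nanoTime();
        if (cachedValue == null || now - lastReadNanos > ttlNanos)
        {
            cachedValue = expensiveRead.get(); // pay the cost at most once per TTL
            lastReadNanos = now;
        }
        return cachedValue;
    }
}
{code}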

> Snapshots slow down jmx scraping
> --------------------------------
>
>                 Key: CASSANDRA-13096
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13096
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Observability/Metrics
>            Reporter: Maxime Fouilleul
>            Priority: Normal
>         Attachments: CPU Load.png, Clear Snapshots.png, JMX Scrape Duration.png
>
>
> Hello,
> We are scraping the JMX metrics through a Prometheus exporter and we noticed that some nodes became really slow to answer (more than 20 seconds). After some investigation we did not find any hardware problem or overload issue on these "slow" nodes. It happens on different clusters, some with only a few gigabytes of data, and it does not seem to be related to a specific version either, as it happens on 2.1, 2.2 and 3.0 nodes.
> After some unsuccessful actions, one of our ideas was to clean the snapshots staying on one problematic node:
> {code}
> nodetool clearsnapshot
> {code}
> And the magic happens... as you can see in the attached diagrams, the second we cleared the snapshots, the CPU activity dropped immediately and the time to scrape the JMX metrics went from over 20 seconds to nearly instantaneous...
> Can you enlighten us on this issue? Once again, it appears on all three of our versions (2.1, 2.2 and 3.0), with different data volumes, and it is not systematically linked to the snapshots, as we have some nodes with the same snapshot volume which are doing fine.
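
For anyone wanting to reproduce the measurement above, here is a rough timing harness (an illustration, not the exporter's code) that connects to a node's JMX port and times how long it takes to read every Cassandra metric attribute, which is roughly the work a JMX exporter does on each scrape. It assumes the default JMX port 7199 and no JMX authentication:

{code:java}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ScrapeTimer
{
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "localhost"; // node under test
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // All Cassandra metrics live under this JMX domain.
            Set<ObjectName> names = conn.queryNames(
                    new ObjectName("org.apache.cassandra.metrics:*"), null);
            long start = System.nanoTime();
            for (ObjectName name : names)
            {
                try
                {
                    conn.getAttribute(name, "Value"); // gauges expose "Value"
                }
                catch (Exception ignored)
                {
                    // counters/timers expose other attributes; skip them here
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
            System.out.println("Read " + names.size() + " MBeans in " + elapsedMs + " ms");
        }
    }
}
{code}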



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
