Posted to commits@cassandra.apache.org by "Edward Capriolo (JIRA)" <ji...@apache.org> on 2010/07/07 22:34:50 UTC
[jira] Updated: (CASSANDRA-1181) kinder gentler compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-1181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Edward Capriolo updated CASSANDRA-1181:
---------------------------------------
Attachment: stats.txt
Just wanted to check back in. I enabled the thread priorities and initiated a compaction.
Both my latency numbers and tpstats output looked on par with those of other nodes in the cluster that were not compacting at the time. This looks great. I included some system statistics to show it off. Thanks!
> kinder gentler compaction
> -------------------------
>
> Key: CASSANDRA-1181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1181
> Project: Cassandra
> Issue Type: Task
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Jonathan Ellis
> Fix For: 0.6.3
>
> Attachments: 1181.txt, CompactionManager.java, stats.txt
>
>
> I suggested this in a mailing-list thread, but it seems that nobody actually tried it. I think it's worth following up on:
> You could try setting the compaction thread to a lower priority. You could add a thread priority to NamedThreadPool and pass it up from the CompactionExecutor constructor. According to http://www.javamex.com/tutorials/threads/priority_what.shtml you have to run as root and add a JVM option for thread priorities to take effect.
> In particular, Brandon saw stress.py read latencies spike to 100ms during [anti]compaction on a 2 core machine. I'd like to see if this can mitigate that.
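The hook described in the quoted description could look roughly like the sketch below: a `ThreadFactory` that names its threads and assigns a configurable priority, which an executor for compaction-style work can then use with `Thread.MIN_PRIORITY`. The class and field names here are hypothetical illustrations, not Cassandra's actual `NamedThreadPool`/`CompactionExecutor` code; note also that `setPriority` is only a hint to the scheduler, and on some platforms (per the linked article) it has no effect unless the JVM is run as root with the appropriate `-XX` thread-priority option.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class LowPriorityFactoryDemo {

    // Hypothetical stand-in for a NamedThreadPool-style factory that
    // additionally takes a thread priority.
    static class NamedPriorityThreadFactory implements ThreadFactory {
        private final String baseName;
        private final int priority;
        private final AtomicInteger seq = new AtomicInteger(0);

        NamedPriorityThreadFactory(String baseName, int priority) {
            this.baseName = baseName;
            this.priority = priority;
        }

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, baseName + "-" + seq.incrementAndGet());
            t.setPriority(priority); // a hint only; see the JVM/OS caveats above
            t.setDaemon(true);
            return t;
        }
    }

    public static void main(String[] args) throws Exception {
        // A compaction-style executor whose single worker runs at MIN_PRIORITY.
        ExecutorService exec = Executors.newSingleThreadExecutor(
                new NamedPriorityThreadFactory("COMPACTION-POOL", Thread.MIN_PRIORITY));
        int prio = exec.submit(() -> Thread.currentThread().getPriority()).get();
        System.out.println("worker priority = " + prio);
        exec.shutdown();
    }
}
```

Running this prints `worker priority = 1` (`Thread.MIN_PRIORITY`); whether the OS actually deprioritizes the thread depends on the platform and JVM flags noted above.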
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.