Posted to commits@cassandra.apache.org by "Catalin Alexandru Zamfir (JIRA)" <ji...@apache.org> on 2015/01/06 18:50:34 UTC

[jira] [Commented] (CASSANDRA-7139) Default concurrent_compactors is probably too high

    [ https://issues.apache.org/jira/browse/CASSANDRA-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266473#comment-14266473 ] 

Catalin Alexandru Zamfir commented on CASSANDRA-7139:
-----------------------------------------------------

Our set-up was RAID5, so min(numberOfDisks, numberOfCores) evaluates to just 2 even though we have 40+ cores. With "concurrent_compactors" effectively "2", a lot of SSTables accumulate in high-cardinality tables (where the partition key is a UUID type) because compaction is limited to 2 threads. Looking at "dstat -lrv1", even with compaction_throughput_mb_per_sec set to 192 (spinning disk), disk writes max out at 10 MB/s.
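For illustration, the current default heuristic described above could be sketched like this (a minimal sketch; the function name is hypothetical, not Cassandra's actual code):

```python
def default_concurrent_compactors(num_disks, num_cores):
    """Documented default: the smaller of disk count and core count."""
    return min(num_disks, num_cores)

# A RAID5 array exposed as 2 data directories on a 40-core box:
print(default_concurrent_compactors(2, 40))  # -> 2, regardless of core count
```

This is why a many-core machine with few data directories ends up pinned at 2 compactors.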

IMHO, concurrent_compactors should instead be number_of_cores / compaction_throughput_mb_per_sec * 100, which in our case (40 cores, 192 MB/s) gives around 20-21 compactors, and on 8 cores gives 8 / 192 * 100 = 4 concurrent compactors.
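The proposed alternative formula can be sketched as follows (a hedged illustration of the suggestion above; the function name and the floor-with-minimum-of-1 behaviour are assumptions, not an agreed design):

```python
def proposed_concurrent_compactors(num_cores, throughput_mb_per_sec):
    """Proposed formula: cores / throughput * 100, floored, at least 1."""
    return max(1, int(num_cores / throughput_mb_per_sec * 100))

# 40 cores at 192 MB/s:
print(proposed_concurrent_compactors(40, 192))  # -> 20
# 8 cores at 192 MB/s:
print(proposed_concurrent_compactors(8, 192))   # -> 4
```

Note the formula scales the compactor count up with core count and down with configured throughput, which matches the numbers quoted in the comment.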

> Default concurrent_compactors is probably too high
> --------------------------------------------------
>
>                 Key: CASSANDRA-7139
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7139
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Benedict
>            Assignee: Jonathan Ellis
>            Priority: Minor
>             Fix For: 2.1 rc1
>
>         Attachments: 7139.txt
>
>
> The default number of concurrent compactors is probably too high for modern hardware with spinning disks for storage: a modern blade can easily have 24+ cores, which would result in a default of 24 concurrent compactions. This not only increases random IO, it also keeps obsoleted files around for an unnecessarily long time, as each compaction holds references to any possibly overlapping files that it isn't itself compacting - but these may have been obsoleted partway through by compactions that finished earlier. If you factor in the default compaction throughput rate of 16 MB/s, anything but a single default concurrent_compactor makes very little sense: a single thread should always be able to handle 16 MB/s, will cause less interference with other processes, and permits obsoleted files to be removed immediately.
> See [http://imgur.com/HDqhxFp] for a graph demonstrating the result of making this change on a box with 24 cores and 8 TB of storage (the first spike is the default settings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)