Posted to commits@cassandra.apache.org by "Lerh Chuan Low (JIRA)" <ji...@apache.org> on 2018/06/22 01:22:00 UTC

[jira] [Comment Edited] (CASSANDRA-10540) RangeAwareCompaction

    [ https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519900#comment-16519900 ] 

Lerh Chuan Low edited comment on CASSANDRA-10540 at 6/22/18 1:21 AM:
---------------------------------------------------------------------

Here is another benchmark run, using the same stressspec YAML as before. This time the process is to stop one of the nodes in a DC (same topology as before: 3 nodes in one DC and 2 in the other) and then insert for 10 minutes:
{code:bash}
nohup cassandra-stress user no-warmup profile=stressspec.yaml duration=10m cl=QUORUM ops\(insert=1\) -node file=nodelist.txt -rate threads=50 -log file=insert.log > nohup.txt &
{code}
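The comment does not say exactly how the node was stopped, so here is just a small sketch of that step (the hostname, service name, and stop mechanism below are assumptions, not necessarily what was used here):
{code:bash}
# Drain and stop one node in the 3-node DC; hostname and service name are assumptions.
ssh cassandra-dc1-node3 'nodetool drain && sudo service cassandra stop'

# Confirm the node shows as DN (Down/Normal) before starting the 10 minute insert phase.
nodetool status stresscql2
{code}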
After that, trigger a mixed stress workload and, at the same time, run a full repair in the DC:
{code:bash}
nohup cassandra-stress user no-warmup profile=stressspec.yaml duration=1h cl=QUORUM ops\(insert=10,simple1=10,range1=1\) -node file=nodelist.txt -rate threads=50 -log file=mixed.log > nohup.txt &


nohup nodetool repair --full stresscql2 typestest > nohup.txt &
{code}
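Not part of the run itself, but while the mixed workload and the full repair are going, progress can be watched with standard nodetool commands. A small sketch (the hostname and the 30 second interval are assumptions for illustration):
{code:bash}
# Each command in its own terminal; -h and the 30s interval are illustrative only.
watch -n 30 'nodetool -h cassandra-dc1-node1 compactionstats'  # pending/active compactions
watch -n 30 'nodetool -h cassandra-dc1-node1 netstats'         # streaming progress from the repair
nodetool -h cassandra-dc1-node1 tpstats                        # one-off thread pool snapshot
{code}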
Here are the results:


||Metric||RACS||Non RACS||
|Op rate|244 op/s [insert: 116 op/s, range1: 12 op/s, simple1: 116 op/s]|221 op/s [insert: 105 op/s, range1: 11 op/s, simple1: 105 op/s]|
|Partition rate|243 pk/s [insert: 116 pk/s, range1: 10 pk/s, simple1: 116 pk/s]|220 pk/s [insert: 105 pk/s, range1: 9 pk/s, simple1: 105 pk/s]|
|Row rate|274 row/s [insert: 116 row/s, range1: 41 row/s, simple1: 116 row/s]|248 row/s [insert: 105 row/s, range1: 38 row/s, simple1: 105 row/s]|
|Latency mean|204.6 ms [insert: 2.5 ms, range1: 387.4 ms, simple1: 388.8 ms]|226.2 ms [insert: 2.7 ms, range1: 428.8 ms, simple1: 429.1 ms]|
|Latency median|39.7 ms [insert: 2.0 ms, range1: 378.0 ms, simple1: 377.7 ms]|150.3 ms [insert: 2.0 ms, range1: 385.4 ms, simple1: 383.8 ms]|
|Latency 95th percentile|706.2 ms [insert: 3.2 ms, range1: 802.2 ms, simple1: 805.3 ms]|716.2 ms [insert: 3.0 ms, range1: 837.3 ms, simple1: 841.5 ms]|
|Latency 99th percentile|941.6 ms [insert: 19.7 ms, range1: 1,022.9 ms, simple1: 1,022.4 ms]|1047.5 ms [insert: 14.8 ms, range1: 1,210.1 ms, simple1: 1,230.0 ms]|
|Latency 99.9th percentile|1183.8 ms [insert: 65.5 ms, range1: 1,232.1 ms, simple1: 1,218.4 ms]|1830.8 ms [insert: 57.5 ms, range1: 2,029.0 ms, simple1: 2,063.6 ms]|
|Latency max|7314.9 ms [insert: 550.0 ms, range1: 1,472.2 ms, simple1: 7,314.9 ms]|7457.5 ms [insert: 6,358.6 ms, range1: 7,159.7 ms, simple1: 7,457.5 ms]|
|Total partitions|874,058 [insert: 419,116, range1: 36,428, simple1: 418,514]|790,543 [insert: 378,618, range1: 33,908, simple1: 378,017]|
|Total errors|0 [insert: 0, range1: 0, simple1: 0]|0 [insert: 0, range1: 0, simple1: 0]|
|Total GC count|0|0|
|Total GC memory|0.000 KiB|0.000 KiB|
|Total GC time|0.0 seconds|0.0 seconds|
|Avg GC time|NaN ms|NaN ms|
|StdDev GC time|0.0 ms|0.0 ms|
|Total operation time|01:00:00|01:00:00|
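For a quick relative read of the headline numbers, a rough back-of-the-envelope comparison using just the op rates and 99th percentile latencies copied from the table above:
{code:bash}
# Values copied from the results table; bc is used only for the percentage arithmetic.
racs_op=244;    non_racs_op=221
racs_p99=941.6; non_racs_p99=1047.5

echo "op rate:     +$(echo "scale=1; ($racs_op - $non_racs_op) * 100 / $non_racs_op" | bc)% with RACS"
echo "p99 latency:  $(echo "scale=1; ($racs_p99 - $non_racs_p99) * 100 / $non_racs_p99" | bc)% with RACS"
{code}
So in this run RACS comes out roughly 10% ahead on throughput and roughly 10% lower on 99th percentile latency.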

Big thanks to Jason Brown for the repair patch; it works like a charm :)

 



> RangeAwareCompaction
> --------------------
>
>                 Key: CASSANDRA-10540
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10540
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Marcus Eriksson
>            Assignee: Marcus Eriksson
>            Priority: Major
>              Labels: compaction, lcs, vnodes
>             Fix For: 4.x
>
>
> Broken out from CASSANDRA-6696, we should split sstables based on ranges during compaction.
> Requirements:
> * don't create tiny sstables - keep them bunched together until a single vnode is big enough (configurable how big that is)
> * make it possible to run existing compaction strategies on the per-range sstables
> We should probably add a global compaction strategy parameter that states whether this should be enabled or not.


