Posted to user@cassandra.apache.org by Sebastian Weber <se...@web.de> on 2019/08/10 07:30:07 UTC

Benchmarks showing unexpected results

Hi all,

I'm currently benchmarking a single Cassandra 3.11.4 node with YCSB 0.15.0
(https://github.com/brianfrankcooper/YCSB/releases/download/0.15.0/ycsb-0.15.0.tar.gz).
Cassandra and YCSB run on separate VMs (Ubuntu 18.04) in a cloud system.
The latencies YCSB reports differ between runs with different target
throughputs, which I didn't expect (I would expect them to be equal).
The runs have identical parameters except for the target throughput,
and the effect (lower target throughput but higher latencies) shows up
in every measurement I have taken so far (target throughput 100, 400,
800, and 1200 operations/sec; thread counts 1, 2, 4, and 8). I have
also reproduced it on another PC (Windows 10) and with another tool
(cassandra-stress). The output below shows two runs with one thread,
1,000,000 operations, and target throughput 100 and 0 respectively
(0 = no target throughput limit = maximum achievable throughput).
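For reference, the target-100 run is started roughly like the following
(host address and workload file are placeholders here, not my exact
setup); the target-0 run simply omits the -target option:

    bin/ycsb run cassandra-cql -P workloads/workloada \
        -p hosts=<cassandra-vm-ip> \
        -p operationcount=1000000 \
        -threads 1 -target 100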

target 100:
[OVERALL], Throughput(ops/sec), 99.9667010918663
[READ], Operations, 1000000
[READ], AverageLatency(us), 1261.235075
[READ], MinLatency(us), 507
[READ], MaxLatency(us), 80127
[READ], 5thPercentileLatency(us), 874
[READ], 10thPercentileLatency(us), 926
[READ], 15thPercentileLatency(us), 966
[READ], 20thPercentileLatency(us), 996
[READ], 25thPercentileLatency(us), 1022
[READ], 30thPercentileLatency(us), 1046
[READ], 35thPercentileLatency(us), 1069
[READ], 40thPercentileLatency(us), 1091
[READ], 45thPercentileLatency(us), 1113
[READ], 50thPercentileLatency(us), 1135
[READ], 55thPercentileLatency(us), 1157
[READ], 60thPercentileLatency(us), 1180
[READ], 65thPercentileLatency(us), 1204
[READ], 70thPercentileLatency(us), 1230
[READ], 75thPercentileLatency(us), 1260
[READ], 80thPercentileLatency(us), 1295
[READ], 85thPercentileLatency(us), 1341
[READ], 90thPercentileLatency(us), 1413
[READ], 95thPercentileLatency(us), 1530
[READ], 99thPercentileLatency(us), 5391
[READ], 999thPercentileLatency(us), 80127
[READ], 9999thPercentileLatency(us), 80127
[READ], Return=OK, 1000000
ping: rtt min/avg/max/mdev = 0.273/0.729/53.627/1.479 ms

target 0:
[OVERALL], Throughput(ops/sec), 1172.9462297989335
[READ], Operations, 1000000
[READ], AverageLatency(us), 845.826972
[READ], MinLatency(us), 382
[READ], MaxLatency(us), 59487
[READ], 5thPercentileLatency(us), 625
[READ], 10thPercentileLatency(us), 666
[READ], 15thPercentileLatency(us), 693
[READ], 20thPercentileLatency(us), 714
[READ], 25thPercentileLatency(us), 732
[READ], 30thPercentileLatency(us), 748
[READ], 35thPercentileLatency(us), 764
[READ], 40thPercentileLatency(us), 779
[READ], 45thPercentileLatency(us), 794
[READ], 50thPercentileLatency(us), 810
[READ], 55thPercentileLatency(us), 826
[READ], 60thPercentileLatency(us), 842
[READ], 65thPercentileLatency(us), 860
[READ], 70thPercentileLatency(us), 879
[READ], 75thPercentileLatency(us), 901
[READ], 80thPercentileLatency(us), 925
[READ], 85thPercentileLatency(us), 954
[READ], 90thPercentileLatency(us), 991
[READ], 95thPercentileLatency(us), 1053
[READ], 99thPercentileLatency(us), 1279
[READ], 999thPercentileLatency(us), 59487
[READ], 9999thPercentileLatency(us), 59487
[READ], Return=OK, 1000000
ping: rtt min/avg/max/mdev = 0.251/0.602/51.361/1.631 ms

The higher latencies at the lower target throughput are partly
explained by the ping differences (the average RTT differs by roughly
130us), but I don't know what causes the remaining difference.
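In numbers, taking the averages from the output above:

    average latency gap:  1261.2us - 845.8us = 415.4us
    average ping gap:     0.729ms - 0.602ms  = 127us
    unexplained:          415.4us - 127us    = ~288us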

Thank you in advance for any explanations or hints on how to find out
what causes this effect.

Regards
Sebastian Weber