Posted to commits@cassandra.apache.org by "Ryan McGuire (JIRA)" <ji...@apache.org> on 2013/09/27 16:58:04 UTC

[jira] [Comment Edited] (CASSANDRA-5932) Speculative read performance data show unexpected results

    [ https://issues.apache.org/jira/browse/CASSANDRA-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779994#comment-13779994 ] 

Ryan McGuire edited comment on CASSANDRA-5932 at 9/27/13 2:56 PM:
------------------------------------------------------------------

The good news is that speculative read has improved across the board.

However, this new batch of testing introduces some new mysteries.

Here are all of the runs from commit 7a87fc1186f39678382cf9b3e1dd224d9c71aead:

!5933-7a87fc11.png!

All of the speculative retry runs are better than with 2.0.0-rc1. However, I can't explain why sr=NONE did better than ALWAYS and 95percentile. There is no visible indication in the graph that a node went down for the sr=NONE run. I have double-checked the logs, and it did, in fact, go down.
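
For context, sr=NONE / ALWAYS / 95percentile refer to the table-level speculative_retry setting. A minimal sketch of how it can be flipped between runs (assuming the default Keyspace1/Standard1 schema that stress creates; adjust names for your own setup):

{code}
# Set the speculative retry mode on the stress table before each run.
# Valid values include 'NONE', 'ALWAYS', 'Xpercentile' and 'Yms'.
echo "ALTER TABLE \"Keyspace1\".\"Standard1\" WITH speculative_retry = '95percentile';" | cqlsh node4
{code}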

Compare this to the baseline of 1.2.8 and 2.0.0-rc1 (redone last night on the same hardware as above):

!5933-128_and_200rc1.png!

All of these have clear indications of the node going down.

You can [see all the data here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.5933.node_killed.json&metric=interval_op_rate&operation=stress-read&smoothing=1] - double-click the colored squares to toggle the visibility of the lines, since they overlap.

I've uploaded logs from all these runs as !5933-logs.tar.gz!.
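
If you want to verify the node deaths yourself from those logs, a quick check is to grep the surviving nodes' system.log for the gossip down notification (the log path below is just where my nodes write, and the exact phrasing can vary by version):

{code}
# Confirm the killed node was marked dead by the rest of the cluster
grep -i "is now down" /var/log/cassandra/system.log
{code}
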
                
> Speculative read performance data show unexpected results
> ---------------------------------------------------------
>
>                 Key: CASSANDRA-5932
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5932
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Ryan McGuire
>            Assignee: Aleksey Yeschenko
>             Fix For: 2.0.2
>
>         Attachments: 5932.txt, 5933-128_and_200rc1.png, 5933-7a87fc11.png, 5933-logs.tar.gz, compaction-makes-slow.png, compaction-makes-slow-stats.png, eager-read-looks-promising.png, eager-read-looks-promising-stats.png, eager-read-not-consistent.png, eager-read-not-consistent-stats.png, node-down-increase-performance.png
>
>
> I've done a series of stress tests with eager retries enabled that show undesirable behavior. I'm grouping these behaviors into one ticket as they are most likely related.
> 1) Killing off a node in a 4 node cluster actually increases performance.
> 2) Compactions make nodes slow, even after the compaction is done.
> 3) Eager Reads tend to lessen the *immediate* performance impact of a node going down, but not consistently.
> My Environment:
> 1 stress machine: node0
> 4 C* nodes: node4, node5, node6, node7
> My script:
> node0 writes some data: stress -d node4 -F 30000000 -n 30000000 -i 5 -l 2 -K 20
> node0 reads some data: stress -d node4 -n 30000000 -o read -i 5 -K 20
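> Putting those together, a rough sketch of the full node-down run as driven from node0 (the 450s timing and the choice of node to kill are illustrative, and the kill command is an approximation of what I do by hand):
> {code}
> # populate the cluster, then read back while killing one node mid-run
> stress -d node4 -F 30000000 -n 30000000 -i 5 -l 2 -K 20
> stress -d node4 -n 30000000 -o read -i 5 -K 20 &
> sleep 450 && ssh node5 "pkill -9 -f CassandraDaemon"
> wait
> {code}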
> h3. Examples:
> h5. A node going down increases performance:
> !node-down-increase-performance.png!
> [Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.just_20.json&metric=interval_op_rate&operation=stress-read&smoothing=1]
> At 450s, I kill -9 one of the nodes. There is a brief decrease in performance as the snitch adapts, but then it recovers... to even higher performance than before.
> h5. Compactions make nodes permanently slow:
> !compaction-makes-slow.png!
> !compaction-makes-slow-stats.png!
> The green and orange lines represent trials with eager retry enabled; they never recover their pre-compaction op-rate the way the red and blue lines do.
> [Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.compaction.2.json&metric=interval_op_rate&operation=stress-read&smoothing=1]
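> One way to confirm a compaction has actually finished on a node before trusting the post-compaction numbers is nodetool (the host name below is illustrative):
> {code}
> # verify no compactions are still in progress on the node
> nodetool -h node5 compactionstats
> {code}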
> h5. Speculative Read tends to lessen the *immediate* impact:
> !eager-read-looks-promising.png!
> !eager-read-looks-promising-stats.png!
> This graph looked the most promising to me: the two trials with eager retry (the green and orange lines) showed the smallest dip in performance at 450s.
> [Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.json&metric=interval_op_rate&operation=stress-read&smoothing=1]
> h5. But not always:
> !eager-read-not-consistent.png!
> !eager-read-not-consistent-stats.png!
> This is a retrial with the same settings as above, yet the 95percentile eager retry (red line) did poorly this time at 450s.
> [Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.just_20.rc1.try2.json&metric=interval_op_rate&operation=stress-read&smoothing=1]
