Posted to commits@cassandra.apache.org by "Marcus Eriksson (JIRA)" <ji...@apache.org> on 2015/02/17 09:37:16 UTC

[jira] [Commented] (CASSANDRA-8366) Repair grows data on nodes, causes load to become unbalanced

    [ https://issues.apache.org/jira/browse/CASSANDRA-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323935#comment-14323935 ] 

Marcus Eriksson commented on CASSANDRA-8366:
--------------------------------------------

The problem is that we don't block the nodetool command until anticompaction has completed on the other nodes. This means that when nodetool repair exits, we might still be running anticompaction on the other nodes, so if you start a new repair immediately after it exits, you might get that error.

I doubt anyone does this in real life, but we should probably fix it (for 3.0, as it needs a new message, AntiCompactionCompleted or something).

I added a sleep after each repair command and it completed without errors.
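
For anyone who needs to run repairs back to back before a fix lands, a minimal sketch of that workaround, assuming the nodes are reachable over ssh (hostnames and the sleep duration are illustrative, not from the ticket):

{noformat}
#!/bin/sh
# Run incremental parallel repair on each node in turn, pausing between
# invocations so anticompaction triggered by the previous repair can
# finish on the replicas before the next repair session starts.
for host in node9 node8 node7 node6; do
    ssh "$host" nodetool repair -inc -par
    sleep 300   # arbitrary; size to how long anticompaction takes on your data
done
{noformat}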

> Repair grows data on nodes, causes load to become unbalanced
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-8366
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8366
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: 4 node cluster
> 2.1.2 Cassandra
> Inserts and reads are done with CQL driver
>            Reporter: Jan Karlsson
>            Assignee: Marcus Eriksson
>         Attachments: results-10000000-inc-repairs.txt, results-17500000_inc_repair.txt, results-5000000_1_inc_repairs.txt, results-5000000_2_inc_repairs.txt, results-5000000_full_repair_then_inc_repairs.txt, results-5000000_inc_repairs_not_parallel.txt, run1_with_compact_before_repair.log, run2_no_compact_before_repair.log, run3_no_compact_before_repair.log, test.sh, testv2.sh
>
>
> There seems to be something weird going on when repairing data.
> I have a program that runs for 2 hours, inserting 250 random numbers and performing 250 reads per second. It creates 2 keyspaces with SimpleStrategy and an RF of 3. 
> I use size-tiered compaction for my cluster. 
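>
> A minimal sketch of this schema setup (keyspace and table names are made up here; the actual scripts are attached as test.sh and testv2.sh):
> {noformat}
> cqlsh -e "CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
> cqlsh -e "CREATE KEYSPACE ks2 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
> # size-tiered compaction is the default, but it can be set explicitly per table:
> cqlsh -e "CREATE TABLE ks1.data (id int PRIMARY KEY, val int) WITH compaction = {'class': 'SizeTieredCompactionStrategy'};"
> {noformat}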
> After those 2 hours I run a repair, and the load of all nodes goes up. If I run incremental repair, the load goes up a lot more. I saw the load shoot up to 8 times the original size multiple times with incremental repair (from 2 GB to 16 GB).
> With nodes 9, 8, 7 and 6, the repro procedure looked like this:
> (Note that running full repair first is not a requirement to reproduce.)
> {noformat}
> After 2 hours of 250 reads + 250 writes per second:
> UN  9  583.39 MB  256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  584.01 MB  256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  583.72 MB  256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  583.84 MB  256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> Repair -pr -par on all nodes sequentially
> UN  9  746.29 MB  256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  751.02 MB  256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  748.89 MB  256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.34 MB  256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  2.41 GB    256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.53 GB    256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.6 GB     256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  2.17 GB    256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> after rolling restart
> UN  9  1.47 GB    256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  1.5 GB     256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.46 GB    256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.19 GB    256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> compact all nodes sequentially
> UN  9  989.99 MB  256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  994.75 MB  256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  1.46 GB    256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.82 MB  256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  1.98 GB    256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.3 GB     256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  3.71 GB    256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB    256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> restart once more
> UN  9  2 GB       256     ?       28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.05 GB    256     ?       f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  4.1 GB     256     ?       2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB    256     ?       b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> {noformat}
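>
> The step labels above correspond roughly to these commands (the load figures come from nodetool status; hostnames are illustrative):
> {noformat}
> for host in node9 node8 node7 node6; do ssh "$host" nodetool repair -pr -par; done
> for host in node9 node8 node7 node6; do ssh "$host" nodetool repair -inc -par; done
> # rolling restart of all nodes, then:
> for host in node9 node8 node7 node6; do ssh "$host" nodetool compact; done
> {noformat}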
> Is there something I'm missing, or is this strange behavior?


