Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2011/02/15 20:48:57 UTC

[jira] Resolved: (CASSANDRA-2058) Load spikes due to MessagingService-generated garbage collection

     [ https://issues.apache.org/jira/browse/CASSANDRA-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-2058.
---------------------------------------

    Resolution: Fixed

Closing this so it's clear that the excessive object creation problem introduced in CASSANDRA-1905 is fixed in 0.6.11 / 0.7.1.

Opened CASSANDRA-2170 for the other load spikes.
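
For readers following along, the mechanism behind this kind of GC-driven load spike is allocation churn in a hot path. The sketch below is illustrative only (hypothetical class and method names; it is not the actual CASSANDRA-1905/2058 code): it shows how allocating a fresh buffer for every incoming message produces constant young-generation garbage at high message rates, and one common mitigation of reusing a per-thread scratch buffer.

    import java.io.DataInputStream;
    import java.io.IOException;

    public class MessageReadSketch
    {
        // Allocates a new byte[] for every message. At tens of thousands of
        // messages per second this creates constant young-gen garbage, which
        // surfaces as GC-driven CPU/load spikes.
        static byte[] readAllocating(DataInputStream in, int size) throws IOException
        {
            byte[] body = new byte[size];
            in.readFully(body);
            return body;
        }

        // Reuses a per-thread buffer, growing it only when a larger message
        // arrives, so steady-state reads allocate nothing per message.
        private static final ThreadLocal<byte[]> SCRATCH = new ThreadLocal<byte[]>()
        {
            @Override
            protected byte[] initialValue()
            {
                return new byte[4096];
            }
        };

        static byte[] readReusing(DataInputStream in, int size) throws IOException
        {
            byte[] buf = SCRATCH.get();
            if (buf.length < size)
            {
                buf = new byte[size];
                SCRATCH.set(buf);
            }
            in.readFully(buf, 0, size);
            return buf; // callers must copy if they need to retain the bytes
        }
    }

Whether buffer reuse, pooling, or simply smaller per-message objects is the right fix depends on the code path; the point is only that steady-state allocation per message is what turns high message volume into GC pressure.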

> Load spikes due to MessagingService-generated garbage collection
> ----------------------------------------------------------------
>
>                 Key: CASSANDRA-2058
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2058
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.6.10, 0.7.0
>         Environment: OpenJDK 64-Bit Server VM (build 1.6.0_0-b12, mixed mode)
> Ubuntu 8.10
> Linux pmc01 2.6.27-22-xen #1 SMP Fri Feb 20 23:58:13 UTC 2009 x86_64 GNU/Linux
>            Reporter: David King
>            Assignee: Jonathan Ellis
>             Fix For: 0.7.1, 0.6.11
>
>         Attachments: 2058-0.7-v2.txt, 2058-0.7-v3.txt, 2058-0.7.txt, 2058.txt, cassandra.pmc01.log.bz2, cassandra.pmc14.log.bz2, graph a.png, graph b.png
>
>
> (Filing as a placeholder bug as I gather information.)
> At ~10p 24 Jan, I upgraded our 20-node cluster from 0.6.8 to 0.6.10, turned on the DES, and moved some CFs from one KS into another (drain the whole cluster, take it down, move files, change schema, bring it back up). Since then, I've had four storms in which a node's load shoots to 700+ (400% CPU on a 4-CPU machine) and the node becomes totally unresponsive. After a moment or two like that, its neighbour dies too, and the failure cascades around the ring. Unfortunately, because of the high load, I'm not able to get into the machine to pull a thread dump to see what it's doing as it happens.
> I've also had an issue where a single node spikes to high load but recovers. This may or may not be the same issue as the one above, from which the nodes don't recover, but both are new behaviour.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira