Posted to commits@cassandra.apache.org by "Robert Stupp (JIRA)" <ji...@apache.org> on 2014/08/28 20:14:08 UTC

[jira] [Created] (CASSANDRA-7845) Negative load of C* nodes

Robert Stupp created CASSANDRA-7845:
---------------------------------------

             Summary: Negative load of C* nodes
                 Key: CASSANDRA-7845
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7845
             Project: Cassandra
          Issue Type: Bug
            Reporter: Robert Stupp


I've completed two C* workshops. Both groups also did an upgrade of C* 2.0.9 to 2.1.0rc6 in a 6 node multi-DC cluster.

Both groups encountered the same phenomenon: "nodetool status" and OpsCenter reported a negative load (data size) for most (but not all) nodes. I did not take the phenomenon seriously with the first group, because it consisted only of operations people who "did their best to crash the cluster". But the second group did nothing seriously wrong.

The 2.0.9 configuration was the default one with only the directories (data, commitlog, caches) and the cluster name changed. The configuration of the 2.1.0rc6 nodes matched the 2.0.9 config; they just removed five config parameters that were removed in 2.1. They did not run any repair or force a compaction.

After a rolling restart, both "nodetool status" and OpsCenter reported the correct load again.

I was not able to reproduce this locally.

I have a third group tomorrow and hope to have some time to do the upgrade again. Is there anything I can check? It should be possible to grab the data files from at least one node for further analysis. Is there anything else I can do to verify this?
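One thing that could be scripted during tomorrow's upgrade is an automatic check for the negative load values in "nodetool status" output. A minimal sketch (the sample output below and the helper name are my own illustration; the exact column layout of "nodetool status" varies between Cassandra versions):

```python
import re

def find_negative_loads(status_output):
    """Return (address, load) pairs for nodes whose reported load is negative.

    Assumes data lines of `nodetool status` look roughly like:
      UN  10.0.0.1  -123.45 KB  256  ...
    i.e. status/state flags, address, then the Load column with a unit.
    """
    hits = []
    for line in status_output.splitlines():
        # [UD] = Up/Down, [NLJM] = Normal/Leaving/Joining/Moving
        m = re.match(r'^[UD][NLJM]\s+(\S+)\s+(-?[\d.]+)\s+(bytes|KB|MB|GB|TB)', line)
        if m and float(m.group(2)) < 0:
            hits.append((m.group(1), float(m.group(2))))
    return hits

# Fabricated sample output for illustration only
sample = """Datacenter: dc1
==============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens  Owns  Rack
UN  10.0.0.1   -123.45 KB  256     ?     rack1
UN  10.0.0.2   512.00 KB   256     ?     rack1
"""
print(find_negative_loads(sample))  # -> [('10.0.0.1', -123.45)]
```

Running this against the output of each node (e.g. piped from "nodetool status") right after the upgrade, and again after the rolling restart, would at least pin down when the bogus values appear and disappear.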



--
This message was sent by Atlassian JIRA
(v6.2#6252)