Posted to user@cassandra.apache.org by Jason Tyler <ja...@yahoo-inc.com> on 2013/07/11 01:04:15 UTC

nodetool ring displays 33.33% owns on 3 node cluster with replication

Hello,

I recently upgraded cassandra from 1.1.9 to 1.2.6 on a three node cluster with {replication_factor : 3}.

When I run nodetool ring, the 'Owns' column now reports 33.33%; previously it reported 100.00% on each node. The following snapshots are from two different clusters, so please ignore the differences in Load. I did verify {replication_factor : 3} on both clusters.


1.1.9-xobni1 'nodetool -h 127.0.0.1 -p 8080 ring':
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Address         DC          Rack        Status State   Load            Effective-Ownership Token
                                                                                           170141183460469231731687303715884105728
Xxx.xx.xx.00   16          96          Up     Normal  225.03 GB       100.00%             56713727820156410577229101238628035242
Xxx.xx.xx.01   16          97          Up     Normal  226.43 GB       100.00%             113427455640312821154458202477256070484
Xxx.xx.xx.02   16          97          Up     Normal  231.76 GB       100.00%             170141183460469231731687303715884105728
------------------------------------------------------------------------------------------------------------------------------------------------------------------------


1.2.6-xobni1 'nodetool -h 127.0.0.1 -p 8080 ring':
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Address        Rack        Status State   Load            Owns                Token
                                                                              170141183460469231731687303715884105728
Xxx.xx.xx.00   97          Up     Normal  453.94 GB       33.33%              56713727820156410577229101238628035242
Xxx.xx.xx.01   97          Up     Normal  565.87 GB       33.33%              113427455640312821154458202477256070484
Xxx.xx.xx.02   96          Up     Normal  523.53 GB       33.33%              170141183460469231731687303715884105728
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
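
For reference, the replication factor can be double-checked from cqlsh along these lines ("MyKeyspace" is only a placeholder for the real keyspace name):

  cqlsh> DESCRIBE KEYSPACE MyKeyspace;
  -- The printed CREATE KEYSPACE statement should show a replication factor
  -- of 3 in its replication/strategy options on both clusters.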


Is this simply a display issue, or have I lost replication?

Thanks for any info.


Cheers,

~Jason

Re: nodetool ring displays 33.33% owns on 3 node cluster with replication

Posted by Andrew Bialecki <an...@gmail.com>.
Not sure if it's the best/intended behavior, but you should see it go back
to 100% if you run: nodetool -h 127.0.0.1 -p 8080 ring <keyspace>.

I think the rationale for showing 33.33% is that different keyspaces might
have different replication factors, so without a keyspace it's unclear what
to show for ownership. However, if you include the keyspace as part of the
command, ownership is weighted by that keyspace's RF. I believe the same
logic applies to nodetool status.
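
Concretely, something along these lines should show ownership weighted by a
specific keyspace's RF ("MyKeyspace" is just a placeholder for your keyspace
name):

  # Per-keyspace ownership, weighted by that keyspace's replication factor
  nodetool -h 127.0.0.1 -p 8080 ring MyKeyspace
  nodetool -h 127.0.0.1 -p 8080 status MyKeyspace

With RF=3 on a three-node cluster, each node should again show 100.00%
rather than 33.33%.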

Andrew


On Thu, Jul 11, 2013 at 12:58 PM, Jason Tyler <ja...@yahoo-inc.com> wrote:

>  Thanks Rob!  I was able to confirm with getendpoints.
>
>  Cheers,
>
>  ~Jason
>
>   From: Robert Coli <rc...@eventbrite.com>
> Reply-To: "user@cassandra.apache.org" <us...@cassandra.apache.org>
> Date: Wednesday, July 10, 2013 4:09 PM
> To: "user@cassandra.apache.org" <us...@cassandra.apache.org>
> Cc: Francois Richard <fr...@yahoo-inc.com>
> Subject: Re: nodetool ring displays 33.33% owns on 3 node cluster with
> replication
>
>   On Wed, Jul 10, 2013 at 4:04 PM, Jason Tyler <ja...@yahoo-inc.com> wrote:
>
>>  Is this simply a display issue, or have I lost replication?
>>
>
>  Almost certainly just a display issue. Do "nodetool -h localhost
> getendpoints <keyspace> <columnfamily> 0", which will tell you the
> endpoints for the non-transformed key "0." It should give you 3 endpoints.
> You could also do this test with a known existing key and then go to those
> nodes and verify that they have that data on disk via sstable2json.
>
>  (FWIW, it is an odd display issue/bug if it is one, because it has
> reverted to pre-1.1 behavior...)
>
>  =Rob
>

Re: nodetool ring displays 33.33% owns on 3 node cluster with replication

Posted by Jason Tyler <ja...@yahoo-inc.com>.
Thanks Rob!  I was able to confirm with getendpoints.

Cheers,

~Jason

From: Robert Coli <rc...@eventbrite.com>
Reply-To: "user@cassandra.apache.org" <us...@cassandra.apache.org>
Date: Wednesday, July 10, 2013 4:09 PM
To: "user@cassandra.apache.org" <us...@cassandra.apache.org>
Cc: Francois Richard <fr...@yahoo-inc.com>
Subject: Re: nodetool ring displays 33.33% owns on 3 node cluster with replication

On Wed, Jul 10, 2013 at 4:04 PM, Jason Tyler <ja...@yahoo-inc.com> wrote:
Is this simply a display issue, or have I lost replication?

Almost certainly just a display issue. Do "nodetool -h localhost getendpoints <keyspace> <columnfamily> 0", which will tell you the endpoints for the non-transformed key "0." It should give you 3 endpoints. You could also do this test with a known existing key and then go to those nodes and verify that they have that data on disk via sstable2json.

(FWIW, it is an odd display issue/bug if it is one, because it has reverted to pre-1.1 behavior...)

=Rob

Re: nodetool ring displays 33.33% owns on 3 node cluster with replication

Posted by Robert Coli <rc...@eventbrite.com>.
On Wed, Jul 10, 2013 at 4:04 PM, Jason Tyler <ja...@yahoo-inc.com> wrote:

>  Is this simply a display issue, or have I lost replication?
>

Almost certainly just a display issue. Do "nodetool -h localhost
getendpoints <keyspace> <columnfamily> 0", which will tell you the
endpoints for the non-transformed key "0." It should give you 3 endpoints.
You could also do this test with a known existing key and then go to those
nodes and verify that they have that data on disk via sstable2json.
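
For instance, something like the following (the keyspace, column family, and
sstable path are placeholders, not your actual schema):

  # Which nodes are replicas for the untransformed key "0"?
  nodetool -h localhost getendpoints MyKeyspace MyColumnFamily 0
  # With RF=3 on a three-node cluster this should print three node addresses.

  # Then, on one of those nodes, dump a relevant sstable and look for a known
  # key (the path is illustrative; point it at a real *-Data.db file):
  sstable2json /var/lib/cassandra/data/MyKeyspace/MyColumnFamily/MyKeyspace-MyColumnFamily-ic-1-Data.db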

(FWIW, it is an odd display issue/bug if it is one, because it has reverted
to pre-1.1 behavior...)

=Rob