Posted to user@cassandra.apache.org by Jens Rantil <je...@tink.se> on 2015/06/12 11:58:14 UTC

Question about "nodetool status ..." output

Hi,

I have one node in my 5-node cluster that effectively owns 100%, and it
looks like my cluster is rather imbalanced. Is this degree of imbalance
common for a cluster of 4-5 nodes?

My current output for a keyspace is:

$ nodetool status myks
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load       Tokens  Owns (effective)  Host ID                               Rack
UN  X.X.X.33  203.92 GB  256     41.3%             871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  RAC1
UN  X.X.X.32  200.44 GB  256     34.2%             d7cacd89-8613-4de5-8a5e-a2c53c41ea45  RAC1
UN  X.X.X.51  197.17 GB  256     100.0%            344b0adf-2b5d-47c8-8881-9a3f56be6f3b  RAC1
UN  X.X.X.52  113.63 GB  1       46.3%             55daa807-af49-44c5-9742-fe456df621a1  RAC1
UN  X.X.X.31  204.49 GB  256     78.3%             48cb0782-6c9a-4805-9330-38e192b6b680  RAC1

My keyspace has RF=3 and originally I added X.X.X.52 (num_tokens=1 was a
mistake) and then X.X.X.51. I haven't executed `nodetool cleanup` on any
nodes yet.

For the curious, the full ring can be found here:
https://gist.github.com/JensRantil/57ee515e647e2f154779

Cheers,
Jens

-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.rantil@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se

Facebook <https://www.facebook.com/#!/tink.se> Linkedin
<http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary>
 Twitter <https://twitter.com/tink>

Re: Question about "nodetool status ..." output

Posted by Jens Rantil <je...@tink.se>.
Hi Carlos,

Yes, I should have been more specific about that; basically all my primary
IDs are random UUIDs, so I find it very hard to believe that my data
model is the problem here. I will run a full repair of the cluster,
execute a cleanup and recommission the node, then.

Thanks,
Jens

On Fri, Jun 12, 2015 at 2:38 PM, Carlos Rolo <ro...@pythian.com> wrote:

> [quoted message trimmed]

Re: Question about "nodetool status ..." output

Posted by Carlos Rolo <ro...@pythian.com>.
Your data model also contributes to the balance (or lack thereof) of the
cluster. If your data is badly partitioned, Cassandra will not do any
magic.

Regarding that cluster, I would decommission the x.52 node and add it again
with the correct configuration. After the bootstrap, run a cleanup. If it is
still that off-balance, you need to look into your data model.
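That procedure could be sketched as a dry-run script (a hedged sketch only: the cassandra.yaml path and data directories are common package-install defaults, not taken from this thread; the `step` helper is invented, and nothing executes unless DRY_RUN is flipped on the node itself):

```python
import subprocess

DRY_RUN = True  # flip to False on the actual node; requires nodetool on PATH

def step(*cmd):
    """Print the command in dry-run mode; otherwise execute it and fail fast."""
    line = " ".join(cmd)
    if DRY_RUN:
        print("would run:", line)
    else:
        subprocess.run(cmd, check=True)
    return line

# 1. On X.X.X.52: stream its data to the other replicas and leave the ring.
step("nodetool", "decommission")
# 2. Fix the token count before rejoining (example path for package installs).
step("sed", "-i", "s/^num_tokens:.*/num_tokens: 256/",
     "/etc/cassandra/cassandra.yaml")
# 3. Wipe the old state so the node bootstraps fresh, then start it again.
step("rm", "-rf", "/var/lib/cassandra/data", "/var/lib/cassandra/commitlog")
step("service", "cassandra", "start")
# 4. After bootstrap completes, run cleanup on the other nodes so they drop
#    ranges they no longer own.
step("nodetool", "cleanup")
```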

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
<http://linkedin.com/in/carlosjuzarterolo>*
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Fri, Jun 12, 2015 at 11:58 AM, Jens Rantil <je...@tink.se> wrote:

> [quoted message trimmed]
