Posted to user@cassandra.apache.org by Andreas Finke <An...@solvians.com> on 2013/12/12 11:49:46 UTC

Unbalanced ring with C* 2.0.3 and vnodes after adding additional nodes

Hi,

After adding 2 more nodes to a previously 4-node cluster, we are experiencing high load on both new nodes. After some investigation we found the following:

- High cpu load on vm5+6
- Higher data load on vm5+6
- Write requests are evenly distributed to all 6 nodes by our client application (opscenter -> metrics -> WriteRequests)
- Local writes are roughly twice as high on vm5+6 (vm1-4: ~2800/s, vm5-6: ~6800/s)
- Nodetool output:


UN  vm1  9.51 GB    256     20,7%  13fa7bb7-19cb-44f5-af83-71a72e04993a  X1
UN  vm2  9.41 GB    256     20,0%  b71c2d3d-4721-4dde-a418-802f1af4b7a1  D1
UN  vm3  9.37 GB    256     18,9%  8ce4c419-d79c-4ef1-b3fd-8936bff3e44f  X1
UN  vm4  9.23 GB    256     19,5%  17974f20-5756-4eba-a377-52feed3a1b10  D1
UN  vm5  15.95 GB   256     10,7%  0c6db9ea-4c60-43f6-a12e-51a7d76f8e80  X1
UN  vm6  14.86 GB   256     10,2%  f64d1909-dd96-442b-b602-efee29eee0a0  D1


Although the ownership is lower on vm5-6 (which is already not right), the data load is much higher.


Some cluster facts:


Node: 4 CPU, 6 GB RAM, virtual appliance

Cassandra: 3 GB Heap, vnodes 256

Schema: NetworkTopologyStrategy, RF: 2


Does anyone have an idea what could be the cause of the imbalance? Maybe we missed a necessary step during or after the cluster expansion. We are open to any suggestions.


Regards

Andi



Re: Unbalanced ring with C* 2.0.3 and vnodes after adding additional nodes

Posted by Aaron Morton <aa...@thelastpickle.com>.
> I assume that by "seed list" you mean the "seed_provider" setting in cassandra.yaml. The current setting for vm1-vm6 is:
the value of seeds, e.g. 

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"

> seed_provider = vm1,vm2,vm3,vm4
> 
> This setting also applied when vm5 and vm6 were added. I checked the read repair metrics and the mean is about 20/s on vm5 and vm6.
5 and 6 should have bootstrapped then. 

> 2. for node in vm1 vm2 vm3 vm4 vm5 vm6 ; do cat /tmp/ring.txt |grep <ip_of($node)> | wc -l; done
> 
> This prints the number of times each node was listed as an endpoint:
Did you check the entries for vm5 and vm6 in ring.txt to see what they looked like?
The code that builds that info relies on resolving the host name; I'm just wondering whether they are not resolving to their host names.
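
A rough sketch of how you could check both from vm1, assuming the vm names resolve via DNS or /etc/hosts (adjust to your environment):

# Check that each node name resolves, then count how often its address
# appears as an endpoint in the describering output captured earlier.
for node in vm1 vm2 vm3 vm4 vm5 vm6; do
  ip=$(getent hosts "$node" | awk '{print $1}')
  echo "$node resolves to: ${ip:-UNRESOLVED}"
  [ -n "$ip" ] && grep -c "$ip" /tmp/ring.txt
done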


> 1. Is there any way we can fix that on a running production cluster?
> 2. Our backup plan is to snapshot all data, bring up a completely fresh 6-node cluster and stream the data using sstableloader. Are there any objections to that plan from your point of view?
IMHO it’s a better idea to fix what you have. It does not seem like a huge problem, just a mystery. 

I would ensure that repair and compaction have completed, i.e. there is nothing pending in nodetool compactionstats.
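
For example, something along these lines (a rough check, assuming nodetool can reach every node by name):

for node in vm1 vm2 vm3 vm4 vm5 vm6; do
  echo "== $node =="
  nodetool -h "$node" compactionstats   # expect "pending tasks: 0"
  nodetool -h "$node" netstats          # expect no active streams
done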

Another thought about where the extra local writes are coming from: they could be from automatically "hoisting" rows when the number of SSTables touched by a read is greater than the min compaction threshold (this only applies when STCS is used). Check how many SSTables are being used per read with nodetool cfhistograms.

There is no way to disable hoisting; it should settle down once compaction catches up.
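
For example (the keyspace is the one from your describering command; the table name here is just a placeholder):

# The SSTables column shows how many SSTables each read touched; values that are
# often above min_compaction_threshold (default 4) point at hoisting.
nodetool -h vm5 cfhistograms marketdata <table_name>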

Cheers

-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com



RE: Unbalanced ring with C* 2.0.3 and vnodes after adding additional nodes

Posted by Andreas Finke <An...@solvians.com>.
Hi Aaron,

I assume that by "seed list" you mean the "seed_provider" setting in cassandra.yaml. The current setting for vm1-vm6 is:

seed_provider = vm1,vm2,vm3,vm4

This setting also applied when vm5 and vm6 were added. I checked the read repair metrics and the mean is about 20/s on vm5 and vm6.

I tried to investigate the actual distribution of tokens again and ran the following on vm1:

1. nodetool describering marketdata >> /tmp/ring.txt
2. for node in vm1 vm2 vm3 vm4 vm5 vm6 ; do cat /tmp/ring.txt |grep <ip_of($node)> | wc -l; done

This prints the number of times each node was listed as an endpoint:

vm1: 303
vm2: 312
vm3: 332
vm4: 311
vm5: 901
vm6: 913

So this shows that we are really unbalanced. 
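
(For reference: 6 nodes x 256 vnodes gives 1536 token ranges, and with RF=2 each range lists two endpoints, i.e. 3072 entries in total, so a balanced ring would show roughly 3072 / 6 = 512 per node; ~300 on vm1-4 vs ~900 on vm5-6 is indeed far off.)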

1. Is there any way we can fix that on a running production cluster?
2. Our backup plan is to snapshot all data, bring up a completely fresh 6-node cluster and stream the data using sstableloader. Are there any objections to that plan from your point of view?
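
For reference, the loading step could look roughly like this per table (the target host and the snapshot path are placeholders; the directory handed to sstableloader has to end in <keyspace>/<table>):

# Stream the snapshotted SSTables of one table into the new cluster.
sstableloader -d <new_cluster_node> /path/to/snapshot/marketdata/<table_name>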

Thanks in advance!

Andi


Re: Unbalanced ring with C* 2.0.3 and vnodes after adding additional nodes

Posted by Aaron Morton <aa...@thelastpickle.com>.
> Node: 4 CPU, 6 GB RAM, virtual appliance
> 
> Cassandra: 3 GB Heap, vnodes 256
FWIW that’s a very low powered node. 

> Maybe we missed a necessary step during or after the cluster expansion. We are open to any suggestions.
Were the nodes in the seed list when they joined the cluster? If so, they do not bootstrap.
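
A quick way to check on vm5 and vm6, assuming default file locations (paths vary by install):

# A node that lists itself under "seeds" will not bootstrap.
grep -A3 'seed_provider' /etc/cassandra/cassandra.yaml
# If the node did bootstrap, the system log should mention it.
grep -i 'bootstrap' /var/log/cassandra/system.log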

The extra writes in nodes 5 and 6 could be from Read Repair writing to them. 

Cheers

-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
