Posted to user@cassandra.apache.org by Gábor Auth <au...@gmail.com> on 2018/04/30 09:39:29 UTC

Schema disagreement

Hi,

I've just tried to add a new DC and new node to my cluster (3 DCs and 10
nodes) and the new node has a different schema version:

Cluster Information:
        Name: cluster
        Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
        Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
        Schema versions:
                7e12a13e-dcca-301b-a5ce-b1ad29fbbacb: [x.x.x.x, ..., ...]
                bb186922-82b5-3a61-9c12-bf4eb87b9155: [new.new.new.new]
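
For reference, the listing above is the output of:

nodetool describecluster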

I've tried:
- node decommission and node re-addition
- resetlocalschema
- rebuild
- replace node
- repair
- cluster restart (node-by-node)
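
For reference, those attempts were roughly the following commands (the DC
name is a placeholder; the node replacement was done by restarting with the
usual -Dcassandra.replace_address=<old-ip> JVM option):

nodetool decommission        # on the new node, then re-add it
nodetool resetlocalschema    # on the new node; re-pulls schema from peers
nodetool rebuild -- <existing-dc>
nodetool repair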

The MigrationManager is constantly running on the new node, trying to
migrate the schema:
DEBUG [NonPeriodicTasks:1] 2018-04-30 09:33:22,405
MigrationManager.java:125 - submitting migration task for /x.x.x.x

What else can I do? :(

Bye,
Gábor Auth

Re: Schema disagreement

Posted by Gábor Auth <au...@gmail.com>.
Hi,

On Tue, May 1, 2018 at 10:27 PM Gábor Auth <au...@gmail.com> wrote:

> One or two years ago I tried the CDC feature but then switched it off...
> maybe it is a side effect of the switched-off CDC? How can I fix it? :)
>

Okay, I've worked it out. I updated the schema of the affected keyspaces on
the new nodes with 'cdc=false' and everything is okay now.

I think it is a strange bug around CDC...

Bye,
Gábor Auth

Re: Schema disagreement

Posted by Gábor Auth <au...@gmail.com>.
Hi,

On Tue, May 1, 2018 at 7:40 PM Gábor Auth <au...@gmail.com> wrote:

> What can I do? Any suggestions? :(
>

Okay, I've diffed the good and the bad system_schema tables. The only
difference is the `cdc` field in three keyspaces (in `tables` and `views`):
- the value of `cdc` field on the good node is `False`
- the value of `cdc` field on the bad node is `null`

The value of `cdc` field on the other keyspaces is `null`.
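
(I compared them with queries along these lines, run against a good node and
a bad node:)

SELECT keyspace_name, table_name, cdc FROM system_schema.tables;
SELECT keyspace_name, view_name, cdc FROM system_schema.views;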

One or two years ago I tried the CDC feature but then switched it off...
maybe it is a side effect of the switched-off CDC? How can I fix it? :)

Bye,
Gábor Auth

Re: Schema disagreement

Posted by Gábor Auth <au...@gmail.com>.
Hi,

On Mon, Apr 30, 2018 at 11:11 PM Gábor Auth <au...@gmail.com> wrote:

> On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail <Al...@petrolink.com>
> wrote:
>
>> What steps have you performed to add the new DC? Have you tried to follow
>> certain procedures like this?
>>
>> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>>
>
> Yes, exactly. :/
>

Okay, I removed all new nodes (with `removenode`) and cleared them all
(removed data and logs).
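
Roughly, that was (host IDs are placeholders, and the paths assume the
default data directories):

nodetool removenode <host-id>    # from a live node, once per new node
# then on each new node, with Cassandra stopped:
rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches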

I did all the steps described in the link (again).

Same result:

Cluster Information:
       Name: cluster
       Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
       Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
       Schema versions:
                5de14758-887d-38c1-9105-fc60649b0edf: [new, new, ...]
                f4ed784a-174a-38dd-a7e5-55ff6f3002b2: [old, old, ...]

The old nodes keep gossiping their own schema version:
DEBUG [InternalResponseStage:1] 2018-05-01 17:36:36,266
MigrationManager.java:572 - Gossiping my schema version
f4ed784a-174a-38dd-a7e5-55ff6f3002b2
DEBUG [InternalResponseStage:1] 2018-05-01 17:36:36,863
MigrationManager.java:572 - Gossiping my schema version
f4ed784a-174a-38dd-a7e5-55ff6f3002b2

The new nodes keep gossiping their own schema version:
DEBUG [InternalResponseStage:4] 2018-05-01 17:36:26,329
MigrationManager.java:572 - Gossiping my schema version
5de14758-887d-38c1-9105-fc60649b0edf
DEBUG [InternalResponseStage:4] 2018-05-01 17:36:27,595
MigrationManager.java:572 - Gossiping my schema version
5de14758-887d-38c1-9105-fc60649b0edf

What can I do? Any suggestions? :(

Bye,
Gábor Auth

Re: Schema disagreement

Posted by Gábor Auth <au...@gmail.com>.
Hi,

On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail <Al...@petrolink.com>
wrote:

> What steps have you performed to add the new DC? Have you tried to follow
> certain procedures like this?
>
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>

Yes, exactly. :/

Bye,
Gábor Auth

Re: Schema disagreement

Posted by Ali Hubail <Al...@petrolink.com>.
Hi,

What steps have you performed to add the new DC? Have you tried to follow 
certain procedures like this?
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
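
In outline, that procedure is roughly (DC names and replication factors
below are only examples):

# on each new node, before first start:
#   cassandra-rackdc.properties (or the equivalent for your snitch): dc=DC2
#   cassandra.yaml: auto_bootstrap: false, seeds pointing at the existing DC
# once all new nodes are up, add the new DC to the replication settings:
ALTER KEYSPACE app_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
# then stream the data to each new node:
nodetool rebuild -- DC1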

A node can appear offline to other nodes for various reasons. It would help
greatly to know what steps you have taken, so we can work out why you're
facing this.

Ali Hubail

Re: Schema disagreement

Posted by Gábor Auth <au...@gmail.com>.
Hi,

On Mon, Apr 30, 2018 at 11:39 AM Gábor Auth <au...@gmail.com> wrote:

> I've just tried to add a new DC and new node to my cluster (3 DCs and 10
> nodes) and the new node has a different schema version:
>

Is this normal? The node is marked down, yet a repair completes successfully?

WARN  [MigrationStage:1] 2018-04-30 20:36:56,579 MigrationTask.java:67 -
Can't send schema pull request: node /x.x.216.121 is down.
INFO  [AntiEntropyStage:1] 2018-04-30 20:36:56,611 Validator.java:281 -
[repair #323bf873-4cb6-11e8-bdd5-5feb84046dc9] Sending completed merkle
tree to /x.x.216.121 for keyspace.table

The `nodetool status` output looks good:
UN  x.x.216.121  959.29 MiB  32           ?       322e4e9b-4d9e-43e3-94a3-bbe012058516  RACK01

Bye,
Gábor Auth