Posted to commits@cassandra.apache.org by "Prateek Agarwal (JIRA)" <ji...@apache.org> on 2016/09/02 22:38:21 UTC

[jira] [Resolved] (CASSANDRA-12560) Cassandra Restart issues while restoring to a new cluster

     [ https://issues.apache.org/jira/browse/CASSANDRA-12560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prateek Agarwal resolved CASSANDRA-12560.
-----------------------------------------
    Resolution: Invalid

Turns out there were stale commit_log and saved_caches directories which I had missed deleting earlier. The instructions work correctly once those directories are deleted.
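
For reference, a minimal shell sketch of the cleanup that resolved this, assuming the default /var/lib/cassandra layout (on a default install the commit log directory is named commitlog; adjust names and paths to your setup):

{code}
# Run on each node while Cassandra is stopped.
sudo rm -rf /var/lib/cassandra/commitlog/*      # stale commit log segments
sudo rm -rf /var/lib/cassandra/saved_caches/*   # stale key/row caches
{code}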

> Cassandra Restart issues while restoring to a new cluster
> ---------------------------------------------------------
>
>                 Key: CASSANDRA-12560
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12560
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Configuration
>         Environment: distro: Ubuntu 14.04 LTS
>            Reporter: Prateek Agarwal
>
> I am restoring to a fresh new Cassandra 2.2.5 cluster consisting of 3 nodes.
> Initial cluster health of the NEW cluster:
> {code}
> --  Address       Load       Tokens       Owns    Host ID                               Rack
> UN  10.40.1.1   259.31 KB   256          ?       d2b29b08-9eac-4733-9798-019275d66cfc  uswest1adevc
> UN  10.40.1.2   230.12 KB   256          ?       5484ab11-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
> UN  10.40.1.3   248.47 KB   256          ?       bad95fe2-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
> {code}
> As part of the [restore instructions in Datastax 2.2 docs|http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsSnapshotRestoreNewCluster.html], I do the following on the new cluster (a shell sketch of these steps follows the list):
> 1) Stop Cassandra on all three nodes, one by one.
> 2) Edit cassandra.yaml on all three nodes with the backed-up token ring information. [Step 2 from docs]
> 3) Remove the contents of /var/lib/cassandra/data/system/* [Step 4 from docs]
> 4) Start Cassandra on nodes 10.40.1.1, 10.40.1.2, 10.40.1.3 respectively.
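>
> A minimal sketch of the above steps, assuming a packaged install driven by the service wrapper and the default data path (commands are illustrative, not quoted from the docs):
> {code}
> # 1) Stop Cassandra (repeat on each node, one at a time)
> sudo service cassandra stop
>
> # 2) In cassandra.yaml, set the node's saved tokens, e.g.:
> #    initial_token: <comma-separated tokens captured from the old cluster>
>
> # 3) Clear the system keyspace so the node forgets its old identity
> sudo rm -rf /var/lib/cassandra/data/system/*
>
> # 4) Start Cassandra again
> sudo service cassandra start
> {code}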
> Result: 10.40.1.1 comes back up successfully:
> {code}
> --  Address       Load       Tokens       Owns    Host ID                               Rack
> UN  10.40.1.1   259.31 KB   256          ?       2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
> {code}
> But the second and third nodes fail to restart, logging:
> {code}
> java.lang.RuntimeException: A node with address 10.40.1.2 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
>     at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:546) ~[apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:766) ~[apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.StorageService.initServer(StorageService.java:693) ~[apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.StorageService.initServer(StorageService.java:585) ~[apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) [apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516) [apache-cassandra-2.2.5.jar:2.2.5]
>     at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) [apache-cassandra-2.2.5.jar:2.2.5]
> INFO  [StorageServiceShutdownHook] 2016-08-09 18:13:21,980 Gossiper.java:1449 - Announcing shutdown
> {code}
> {code}
> java.lang.RuntimeException: A node with address 10.40.1.3 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
> ...
> {code}
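>
> For what it's worth, the workaround the exception points at would look roughly like this (a sketch; cassandra.replace_address is the flag named in the error, and cassandra-env.sh is the usual place to pass JVM options):
> {code}
> # In cassandra-env.sh on the affected node (e.g. 10.40.1.2),
> # add before starting it, then remove once the node has rejoined:
> JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.40.1.2"
> {code}
>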
> Eventual cluster health:
> {code}
> --  Address       Load       Tokens       Owns    Host ID                               Rack
> UN  10.40.1.1   259.31 KB   256          ?       2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
> DN  10.40.1.2   230.12 KB   256          ?       6w2321ad-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
> DN  10.40.1.3   248.47 KB   256          ?       9et4944d-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
> {code}
> I understand that the Host ID of a node might change after the system directories are removed.
> I think the restore docs are incomplete and need to mention the 'replace IP' part as well, OR am I missing something in my steps?


