Posted to user@cassandra.apache.org by Fd Habash <fm...@gmail.com> on 2017/08/03 13:00:34 UTC

Replacing a Seed Node

Hi all …
I know there is plenty of documentation on how to replace a seed node, but some of the steps are contradictory, e.g. the need to remove the node from the seed list for the entire cluster.

My cluster has 6 nodes with 3 seeds running C* 2.8. One seed node was terminated by AWS. 

I came up with this procedure. Did I miss anything …

1) Remove the node (decomm or removenode) based on its current status
2) Remove the node from its own seed list
a. No need to remove it from other nodes. My cluster has 3 seeds
3) Restart C* with auto_bootstrap = true
4) Once auto_bootstrap is done, re-add the node as a seed in its own cassandra.yaml again
5) Restart C* on this node
6) No need to restart other nodes in the cluster
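For what it's worth, steps 2 and 4 above can be sketched as a seed-list edit in cassandra.yaml. This is only an illustration: the IPs, the file path, and the demo copy are placeholders, and the surrounding nodetool removenode / service restart commands are environment-specific.

```shell
# Hedged sketch of steps 2 and 4: editing the seeds entry with sed.
# All IPs are placeholders. On a real node you would edit
# /etc/cassandra/cassandra.yaml in place; a demo copy is used here
# so the sketch is self-contained.
cat > /tmp/cassandra-seeds-demo.yaml <<'EOF'
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "172.31.0.10,172.31.0.11,172.31.0.12"
EOF

# Step 2: drop this node's own IP (172.31.0.10) from its seed list
# before the first start, so it is allowed to bootstrap.
sed -i 's/"172.31.0.10,/"/' /tmp/cassandra-seeds-demo.yaml
grep seeds /tmp/cassandra-seeds-demo.yaml

# Step 4 would be the reverse edit (adding 172.31.0.10 back) once
# bootstrap has finished, followed by one more restart.
```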


----------------
Thank you


Re: Replacing a Seed Node

Posted by Oleksandr Shulgin <ol...@zalando.de>.
On Thu, Aug 3, 2017 at 3:00 PM, Fd Habash <fm...@gmail.com> wrote:

> Hi all …
>
> I know there is plenty of docs on how to replace a seed node, but some are
> steps are contradictory e.g. need to remote the node from seed list for
> entire cluster.
>
>
>
> My cluster has 6 nodes with 3 seeds running C* 2.8. One seed node was
> terminated by AWS.
>

Hi,

First of all -- are you using instance storage or EBS?  If the latter: is
it attached with a setting to delete the volume on instance termination?
In other words: do you still have the data files from that node?

If you still have that EBS volume, you can start a replacement instance
with that volume attached with the same private IP address (unless it was
taken by any other EC2 instance meanwhile).  This would be the preferred
way, since the node just comes back UP without bootstrapping and only needs
to replay hints or be repaired (if it was down longer than max_hint_window,
which is 3 hours by default).
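If the volume did survive, the recovery might look roughly like the AWS CLI sketch below. All of the IDs are placeholders, not a tested runbook, and note that an EBS volume can only be attached to an instance in its own Availability Zone.

```shell
# Placeholder sketch -- every ID here is fictitious; the private IP
# must be the dead seed's old address, free in the subnet.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m4.xlarge \
    --subnet-id subnet-xxxxxxxx \
    --private-ip-address 172.31.xx.yyy

aws ec2 attach-volume \
    --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx \
    --device /dev/xvdf

# Then mount the volume at the old data_file_directories path and
# start Cassandra; the node should come back UP with its data intact.
```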

I came up with this procedure. Did I miss anything …
>
>
>
>    1. Remove the node (decomm or removenode) based on its current status
>    2. Remove the node from its own seed list
>       1. No need to remove it from other nodes. My cluster has 3 seeds
>    3. Restart C* with auto_bootstrap = true
>    4. Once autobootstrap is done, re-add the node as seed in its own
>    Cassandra.yaml again
>    5. Restart C* on this node
>    6. No need to restart other nodes in the cluster
>
You won't be able to decommission if the node is not up.  At the same
time, you can avoid changing topology twice (first to remove the dead node,
then to bootstrap a new one) by using -Dcassandra.replace_address=172.31.xx.yyy,
i.e. the address of that dead node.  If your Cassandra version supports it,
use replace_address_first_boot instead.
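In practice the flag is usually set via cassandra-env.sh on the replacement node before its first start; a sketch, with the same placeholder address as above. (Unlike plain replace_address, the _first_boot variant is ignored on subsequent restarts, so it does not strictly have to be removed afterwards.)

```shell
# On the replacement node only, before Cassandra's first start.
# 172.31.xx.yyy is a placeholder for the dead seed's address;
# the config path depends on your packaging.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=172.31.xx.yyy"' \
    | sudo tee -a /etc/cassandra/cassandra-env.sh
```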

This should bootstrap the node by streaming exactly the data your dead seed
node was responsible for previously.  After this is done, you still need to
do a rolling restart of all nodes, updating their seed lists.  You should
remove the IP address of the dead seed and add the address of any currently
healthy node, not necessarily the freshly bootstrapped one: consider
balancing Availability Zones, so that you have a seed node in each AZ.
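A rolling seed-list update could be sketched like this; hostnames, IPs, and the service command are placeholders, and the key point is one node at a time, waiting for the node to report UN in nodetool status before moving on.

```shell
# Placeholder sketch of a rolling restart to refresh seed lists.
# One healthy seed per AZ; all addresses are fictitious.
NEW_SEEDS='172.31.0.11,172.31.1.21,172.31.2.31'

for host in node1 node2 node3 node4 node5 node6; do
    ssh "$host" "sudo sed -i 's/- seeds: .*/- seeds: \"$NEW_SEEDS\"/' \
        /etc/cassandra/cassandra.yaml \
        && nodetool drain && sudo service cassandra restart"
    # Verify the node is back to UN in `nodetool status` before
    # continuing to the next one.
done
```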

Regards,
--
Alex