Posted to dev@storm.apache.org by Facundo Barceló <fa...@hotmail.com> on 2018/05/06 00:08:25 UTC

Best approach to reboot each node of Storm cluster

Hi all,
we have Storm 0.9.3 installed on a cluster composed of one nimbus node + 3 supervisor nodes + 3 ZK nodes, with more than 10 topologies running. We have a systems requirement to apply an OS patch to each of the boxes, and to finish that work each node needs to be rebooted. What would be the best approach to follow? Do we need to kill the topologies or shut down the Storm cluster, or don't we need to do anything at all? When each node boots again, will everything Storm related (nimbus, supervisors and ZK) start automatically?
I really appreciate your help!

Thanks,
Facundo

Re: Best approach to reboot each node of Storm cluster

Posted by Stig Rohde Døssing <sr...@apache.org>.
You should check how you've started the Storm/Zookeeper processes. If
you've set them up to start with e.g. systemd, then they should come back
up after reboot.
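
For example, if the services are managed by systemd, you can check whether they are enabled to start on boot with something like the following (the unit names storm-nimbus, storm-supervisor and zookeeper are only placeholders; substitute whatever unit names actually exist on your boxes):

    # check that the units are set to start automatically on boot
    systemctl is-enabled storm-nimbus storm-supervisor zookeeper
    # check that a unit is currently running
    systemctl status storm-supervisor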

You don't need to do anything special other than reboot the machines, but
if you'd like to pause processing while you reboot, you can go into Storm
UI and deactivate the topologies until you're done rebooting (or do it
through the Storm CLI).
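
From the command line, the storm client can pause and resume a topology without redeploying it; for example (the topology name below is just an illustration, repeat for each of your topologies):

    storm deactivate my-topology
    # reboot and patch the machines, then reactivate
    storm activate my-topology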
