Posted to solr-user@lucene.apache.org by Sourav Moitra <so...@gmail.com> on 2018/10/15 03:31:36 UTC

Zookeeper external vs internal

Hello,

As per the documentation, it is preferable to use an external ZooKeeper
service. I am provisioning 3 Solr servers, each running Solr 7.5 in
cloud mode with a separate ZooKeeper daemon process. The ZooKeeper
instances on these boxes are configured to form an ensemble among
themselves.
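
For context, each box's zoo.cfg is roughly along these lines (the
hostnames solr1/solr2/solr3 and the dataDir path are placeholders, not
the actual values):

    tickTime=2000
    initLimit=10
    syncLimit=5
    # each node also needs a myid file (containing 1, 2 or 3) inside dataDir
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=solr1:2888:3888
    server.2=solr2:2888:3888
    server.3=solr3:2888:3888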

My question: does running a separate ZooKeeper ensemble on the same
boxes provide any advantage over using the Solr embedded ZooKeeper?
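
To make the comparison concrete, by "embedded" vs "external" I mean
roughly the following (hostnames are illustrative):

    # embedded: Solr starts its own ZooKeeper on port 9983 (Solr port + 1000)
    bin/solr start -c

    # external: Solr connects to the separately running ensemble
    bin/solr start -c -z solr1:2181,solr2:2181,solr3:2181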


Sourav Moitra
https://souravmoitra.com

Re: Zookeeper external vs internal

Posted by Charlie Hull <ch...@flax.co.uk>.
It's also important to remember that you don't need a particularly large or
powerful node to run Zookeeper.
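
For example, a modest heap set via ZooKeeper's conf/java.env is usually
plenty (the exact figures here are just an illustration):

    # zkEnv.sh sources conf/java.env if it exists
    SERVER_JVMFLAGS="-Xms256m -Xmx512m"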

Charlie

On Sun, 14 Oct 2018 at 23:57, Shawn Heisey <ap...@elyograg.org> wrote:

> On 10/14/2018 9:31 PM, Sourav Moitra wrote:
> > My question: does running a separate ZooKeeper ensemble on the same
> > boxes provide any advantage over using the Solr embedded ZooKeeper?
>
> The major disadvantage to having ZK embedded in Solr is this:  If you
> stop or restart the Solr process, part of your ZK ensemble goes down
> too.  It is vastly preferable to have it running as a separate process,
> so that you can restart one of the services without causing disruption
> in the other service.
>
> Thanks,
> Shawn
>
>

Re: Zookeeper external vs internal

Posted by Shawn Heisey <ap...@elyograg.org>.
On 10/14/2018 9:31 PM, Sourav Moitra wrote:
> My question: does running a separate ZooKeeper ensemble on the same
> boxes provide any advantage over using the Solr embedded ZooKeeper?

The major disadvantage to having ZK embedded in Solr is this:  If you 
stop or restart the Solr process, part of your ZK ensemble goes down 
too.  It is vastly preferable to have it running as a separate process, 
so that you can restart one of the services without causing disruption 
in the other service.
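
As a concrete illustration (the service name and install path here are
assumptions, adjust them to your setup), with an external ensemble you
can do something like:

    # restart Solr on one box; the local ZooKeeper keeps serving
    sudo systemctl restart solr
    /opt/zookeeper/bin/zkServer.sh status    # should still report "Mode: follower" or "Mode: leader"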

Thanks,
Shawn