Posted to solr-user@lucene.apache.org by Lars Karlsson <la...@gmail.com> on 2017/07/06 21:04:42 UTC

Network segmentation of replica

Hi all, please help clarify how Solr will handle a network-segmented
replica while a configuration change and a reload of cores/nodes for one
collection are applied.

Does the replica become part of the collection after connectivity is
restored?

That is, the node is not down, but it has lost the ability to communicate
with ZooKeeper and the other nodes for a short while.

Regards
Lars

Re: Network segmentation of replica

Posted by Lars Karlsson <la...@gmail.com>.
If anyone is able to test this, or has already done so, please help
clarify.


Re: Network segmentation of replica

Posted by Dave <ha...@gmail.com>.
Sorry, that should have read: I have not tested that in Solr Cloud.


Re: Network segmentation of replica

Posted by Dave <ha...@gmail.com>.
I have tested that out in Solr Cloud, but for Solr master/slave replication
the configsets will not take effect without a reload, even if specified in
the slave settings.
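
The reload I mean is a CoreAdmin RELOAD against the slave, roughly like this
(untested sketch; the host and core name are made up):

    import requests

    # ask the slave to reload its core so the new config takes effect
    requests.get(
        "http://slave-host:8983/solr/admin/cores",
        params={"action": "RELOAD", "core": "mycore"},
    ).raise_for_status()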


Re: Network segmentation of replica

Posted by Erick Erickson <er...@gmail.com>.
I'm not entirely sure what happens if the sequence is:
1> node drops out due to network glitch but Solr is still running
2> you upload a new configset
3> the network glitch repairs itself
4> the Solr instance reconnects.

Certainly if the Solr node is _restarted_ or _reloaded_, the new
configs are read down.

The _index_ is always checked after a node has been unavailable, so I'm
sure of this sequence:
1> node drops out due to network glitch but Solr is still running
2> indexing continues
3> the network glitch repairs itself
4> the Solr instance reconnects.
5> the index is synchronized if necessary
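
And a rough sketch of watching step 5> happen, polling CLUSTERSTATUS until
the replica reports "active" again (untested; the collection, shard, and
replica names are made up):

    import time
    import requests

    def replica_state(collection, shard, replica):
        # CLUSTERSTATUS reports each replica's state as seen in ZooKeeper
        r = requests.get(
            "http://localhost:8983/solr/admin/collections",
            params={"action": "CLUSTERSTATUS", "collection": collection,
                    "wt": "json"},
        )
        r.raise_for_status()
        shards = r.json()["cluster"]["collections"][collection]["shards"]
        return shards[shard]["replicas"][replica]["state"]

    # typically goes down -> recovering -> active as the index syncs
    while replica_state("mycollection", "shard1", "core_node2") != "active":
        time.sleep(5)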

Anyone else want to chime in?


Best,
Erick


Re: Network segmentation of replica

Posted by Lars Karlsson <la...@gmail.com>.
OK, so although there was a configuration and/or schema change (during the
network segmentation) that normally requires a manual core reload (which
nowadays happens automatically via the Schema API), this replica will get
instructions from ZooKeeper to update its configuration and schema, reload
its core, then synchronize, and finally serve again.
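
For concreteness, the kind of Schema API change I mean is something like
this (untested sketch; assumes a managed schema, and the field and
collection names are made up):

    import requests

    # add a field via the Schema API; in SolrCloud this writes the updated
    # schema to ZooKeeper and the affected cores reload automatically
    requests.post(
        "http://localhost:8983/solr/mycollection/schema",
        json={"add-field": {"name": "mytext", "type": "string",
                            "stored": True}},
    ).raise_for_status()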

Please confirm.

Regards
Lars



Re: Network segmentation of replica

Posted by Erick Erickson <er...@gmail.com>.
Right, when the node connects to ZooKeeper again, it will also rejoin
the collection. At that point its index is synchronized with the leader,
and when it goes "active" it should again start serving queries.
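
If you want to see the reconnect from ZooKeeper's side: each live Solr node
is an ephemeral znode under /live_nodes, so something like this with the
kazoo Python client shows who ZooKeeper currently considers alive (sketch;
the ensemble address is made up):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")  # hypothetical
    zk.start()
    # each entry is an ephemeral znode, e.g. '10.0.0.5:8983_solr'; prepend
    # your chroot (e.g. /solr/live_nodes) if you use one
    print(zk.get_children("/live_nodes"))
    zk.stop()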

Best,
Erick
