Posted to dev@lucene.apache.org by "Mike Drob (JIRA)" <ji...@apache.org> on 2017/01/20 22:00:29 UTC

[jira] [Updated] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

     [ https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Drob updated SOLR-10006:
-----------------------------
    Attachment: SOLR-10006.patch

Patch that catches FNFE and NSFE and rethrows them as CorruptIndexException.

There might be an argument to be made for pushing the try/catch down to the various implementations of {{SegmentInfoFormat::read}} but I don't think that will be maintainable going forward.

Another option is to catch all IOExceptions in {{SegmentInfos::readCommit}}, but that's a pretty wide net to cast and would mask any IndexTooOldExceptions unless we specifically exclude them.
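
To make that concrete, here is a rough sketch of the kind of wrapping described above; the helper class, its method, and the call shape are illustrative placeholders, not the actual patch:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.NoSuchFileException;

import org.apache.lucene.index.CorruptIndexException;

final class ReadCommitSketch {

  interface SegmentInfoSource<T> {
    T read(String segmentFileName) throws IOException;
  }

  // Wraps a single per-segment read and translates "file is gone" errors into
  // CorruptIndexException, so callers treat the commit point as corrupt rather
  // than failing with a generic I/O error.
  static <T> T readSegmentInfo(String segmentFileName, SegmentInfoSource<T> source)
      throws IOException {
    try {
      return source.read(segmentFileName);
    } catch (FileNotFoundException | NoSuchFileException e) {
      throw new CorruptIndexException("missing segment file", segmentFileName, e);
    }
  }
}
{code}

Doing the translation once at the {{SegmentInfos::readCommit}} level keeps the change in one place, which is why I prefer it over pushing the try/catch into each {{SegmentInfoFormat::read}} implementation.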

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> -----------------------------------------------------------------------
>
>                 Key: SOLR-10006
>                 URL: https://issues.apache.org/jira/browse/SOLR-10006
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>    Affects Versions: 5.3.1, 6.4
>            Reporter: Erick Erickson
>         Attachments: SOLR-10006.patch
>
>
> Doing a full sync or fetchindex requires an open searcher, and if you can't open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index became corrupt. When the node was restarted, the replica tried to do a full sync but failed because the core couldn't open a searcher. The replica went into an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a similar situation. Create a 2x2 collection and index some docs. Then stop one of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a searcher. Sure, issuing a fetchindex is abusive.... but I think it's the same underlying issue: why should we care about the state of a replica's current index when we're going to completely replace it anyway?
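
For reference, the manual fetchindex mentioned in the quoted description can be issued directly against the replica's replication handler; the host, port, and core name below are placeholders:

{code}
curl "http://localhost:8983/solr/collection1_shard1_replica1/replication?command=fetchindex"
{code}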



