Posted to solr-user@lucene.apache.org by THADC <ti...@gmail.com> on 2018/07/24 11:52:55 UTC

Solr Optimization Failure due to Timeout Waiting for Server - How to Address

Hi,

We have recently been performing a bulk reindexing against a large database
of ours. At the end of reindexing all documents we successfully perform a
CloudSolrClient.commit(). The entire reindexing process takes around 9
hours. This is Solr 7.3, by the way.

Anyway, immediately after the commit, we execute a
CloudSolrClient.optimize(), but immediately receive a "SolrServerException:
Timeout occurred while waiting response from server at" (followed by URL of
this collection).

We have never had an issue with this against bulk reindexes of smaller
databases (most are much smaller and reindexing takes only 10-15 minutes
with those). The other difference with this environment is that the
reindexing is performed across multiple threads (calls to solrCloud server
from a multi-threaded setup) for performance reasons, rather than a single
thread. However, the reindexing process itself completes successfully; it's
just the subsequent optimization that fails.

Is there a simple way to avoid this timeout failure issue? Could it be a
matter of retrying until the optimize() request is successful (that is,
after a reasonable number of attempts) rather than just trying once and
quitting? Any and all ideas are greatly appreciated. Thanks!
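
For what it's worth, the retry idea can be framed as a small bounded-retry wrapper. The sketch below is self-contained: the class name, attempt count, and pause are made up for illustration, and a fake operation stands in for the real CloudSolrClient.optimize() call (which would be wrapped in the Callable in practice):

```java
import java.util.concurrent.Callable;

public class OptimizeRetry {
    // Retries an operation a bounded number of times with a fixed pause.
    // In practice "op" would wrap the real client.optimize() call.
    static <T> T withRetries(Callable<T> op, int maxAttempts, long pauseMillis)
            throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt == maxAttempts) throw e; // out of attempts: give up
                Thread.sleep(pauseMillis);           // back off, then retry
            }
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fake operation: fails twice with a "timeout", then succeeds.
        String status = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "OK";
        }, 5, 10);
        System.out.println(status + " after " + calls[0] + " attempts");
    }
}
```

Note that retrying only helps if the first optimize actually aborted; if it is still running server-side, a retry just queues more work.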



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html

Re: Solr Optimization Failure due to Timeout Waiting for Server - How to Address

Posted by THADC <ti...@gmail.com>.
Thanks, we feel confident we will not need the optimization for our
circumstances and will just remove the code. Appreciate the response!




Re: Solr Optimization Failure due to Timeout Waiting for Server - How to Address

Posted by Erick Erickson <er...@gmail.com>.
Does the optimize actually fail, or does it just take a long time? That is,
if you wait, does the index eventually get down to one segment?
For long-running operations, the _request_ can time out even though
the action is still continuing on the server.
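
One way to live with that is to issue the long call from a background thread and poll for completion rather than blocking on a single HTTP response. A minimal sketch, with a sleep standing in for the real (possibly hours-long) CloudSolrClient.optimize() call:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundOptimize {
    // Submits a long-running call and polls for completion, so the caller
    // never times out waiting on one response.
    static String runInBackground(ExecutorService pool, Callable<String> task)
            throws Exception {
        Future<String> result = pool.submit(task);
        while (!result.isDone()) {
            Thread.sleep(20); // poll; in real life, check segment count instead
        }
        return result.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // A sleep stands in for the real optimize call.
        String status = runInBackground(pool, () -> {
            Thread.sleep(200);
            return "optimize finished";
        });
        System.out.println(status);
        pool.shutdown();
    }
}
```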

But that brings up whether you should optimize in the first place.
Optimize will reclaim resources from deleted (or replaced) documents,
but the effect on query speed may be minimal, and it is very expensive. See:

https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/

So what I'd do: if, at the end of your indexing, the percentage of
deleted docs is less than, say, 20%, I wouldn't optimize.
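
That percentage can be computed from the numDocs (live docs) and maxDoc (live plus deleted) values Solr reports for each core; the figures below are made up for illustration:

```java
public class DeletedDocs {
    // maxDoc counts live + deleted docs; numDocs counts live docs only.
    static double deletedPercent(long numDocs, long maxDoc) {
        if (maxDoc == 0) return 0.0;
        return 100.0 * (maxDoc - numDocs) / maxDoc;
    }

    public static void main(String[] args) {
        long numDocs = 8_500_000L;  // hypothetical values, as Solr would
        long maxDoc = 10_000_000L;  // report them for a core
        double pct = deletedPercent(numDocs, maxDoc);
        System.out.printf("%.1f%% deleted%n", pct);
        // Following the ~20% rule of thumb above:
        System.out.println(pct < 20.0 ? "skip optimize" : "consider optimize");
    }
}
```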

If you have tests that show enough of an increase in query speed to be
worth the bother, then sure. But optimize isn't usually necessary.

Best,
Erick
