Posted to solr-user@lucene.apache.org by Jon Poulton <Jo...@vyre.com> on 2010/03/30 16:15:34 UTC

Some indexing requests to Solr fail

Hi there,
We have a setup in which our main application (running in a separate Tomcat instance) makes SolrJ calls to an instance of Solr running on the same box. SolrJ is used both for indexing and searching. Searching seems to be working fine, but quite frequently we see the following stack trace in our application logs:

org.apache.solr.common.SolrException: Service Unavailable
Service Unavailable
request: http://localhost:8070/solr/unify/update/javabin
  at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:424)
  at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243)
  at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
  at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:86)
  at vyre.content.rabida.index.RemoteIndexingThread.sendIndexRequest(RemoteIndexingThread.java:283)
  at vyre.content.rabida.index.RemoteIndexingThread.commitBatch(RemoteIndexingThread.java:195)
  at vyre.util.thread.AbstractBatchProcessor.commit(AbstractBatchProcessor.java:93)
  at vyre.util.thread.AbstractBatchProcessor.run(AbstractBatchProcessor.java:117)
  at java.lang.Thread.run(Thread.java:619)

Looking in the Solr logs, there do not appear to be any problems. The host and port number are correct; it's just that sometimes our content gets indexed (visible in the Solr logs) and sometimes it doesn't (nothing visible in the Solr logs). I'm not sure what could be causing this problem, but I can hazard a couple of guesses: is there any upper limit on the size of a javabin request, or any point at which the service would decide that the POST was too large? Has anyone else encountered a similar problem?

On a final note, scrolling back through the solr logs does reveal the following:

29-Mar-2010 17:05:25 org.apache.solr.core.SolrCore getSearcher
WARNING: [unify] Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try again later.
29-Mar-2010 17:05:25 org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {} 0 22
29-Mar-2010 17:05:25 org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try again later.
       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1029)
       at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:418)
       at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
       at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
       at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:48)
       at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
       at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
       at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
       at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
       at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
       at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
       at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
       at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
       at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
       at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
       at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
       at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
       at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
       at java.lang.Thread.run(Thread.java:619)

This appears to be an unrelated problem, as the timing is different from the rejected indexing requests. As there is a large amount of concurrent searching and indexing going on constantly, I'm guessing that I've set maxWarmingSearchers too low, and that while two searchers are warming up, further indexing causes more searchers to be warmed, exceeding this maximum. Does this sound like a reasonable conclusion?
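
For reference, the setting I mean lives in solrconfig.xml (in the <query> section, if I remember the layout correctly); a minimal sketch with the value of 2 that the log message shows:

  <query>
    <!-- Maximum number of searchers that may be warming at the same time.
         A commit that would push past this limit fails with the
         "exceeded limit of maxWarmingSearchers" error shown above. -->
    <maxWarmingSearchers>2</maxWarmingSearchers>
  </query>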

Thanks in advance for any help.

Jon

Re: Some indexing requests to Solr fail

Posted by Lance Norskog <go...@gmail.com>.
'waitFlush' means 'block until the data from this commit has been
completely written to disk'. 'waitSearcher' means 'block until Solr has
finished opening and registering the new searcher over the newly
committed index'.

Optimize rewrites the entire index on disk. It needs extra free disk
space in the same partition while it runs. Usually people run optimize
overnight, not during active production hours. There is a way to limit
the optimize pass so that it only makes the index 'more optimized': the
maxSegments parameter:

http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22commit.22_and_.22optimize.22
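
With SolrJ, the explicit calls look roughly like this (a sketch only: the URL
is the update target from your logs, 16 is an arbitrary maxSegments value, and
the three-argument optimize overload assumes a SolrJ build recent enough to
support maxSegments):

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

  public class CommitAndOptimize {
    public static void main(String[] args) throws Exception {
      // Update target taken from the URL in the logs above.
      SolrServer server = new CommonsHttpSolrServer("http://localhost:8070/solr/unify");

      // commit(waitFlush, waitSearcher):
      //   waitFlush    - block until the committed data has been written to disk
      //   waitSearcher - block until the new searcher has been opened and registered
      server.commit(true, true);

      // Partial optimize: merge down to at most 16 segments instead of one.
      // 16 is an arbitrary example value.
      server.optimize(true, true, 16);
    }
  }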

On Wed, Mar 31, 2010 at 10:04 AM, Jon Poulton <Jo...@vyre.com> wrote:
> Hi there,
> Thanks for the reply!
>
> Our backend code is currently set to commit every time it sends over a
> batch of documents - so it depends on how big the batch is and how
> often edits occur - probably too often. I've looked at the code, and
> the SolrJ commit() method takes two parameters - one is called
> waitSearcher, and another waitFlush. They aren't really documented too
> well, but I assume that the waitSearcher bool (currently set to false)
> may be part of the problem.
>
> I am considering removing the code that calls the commit() method
> altogether and relying on the settings for DirectUpdateHandler2 to
> determine when commits actually get done. That way we can tweak it on
> the Solr side without having to recompile and redeploy our main app
> (or add new settings, and the code to handle them, to our main
> app).
>
> Out of curiosity: how are people doing optimize() calls? Are you doing
> them immediately after every commit(), or periodically as part of a job?
>
> Jon
>



-- 
Lance Norskog
goksron@gmail.com

Re: Some indexing requests to Solr fail

Posted by Jon Poulton <Jo...@vyre.com>.
Hi there,
Thanks for the reply!

Our backend code is currently set to commit every time it sends over a  
batch of documents - so it depends on how big the batch is and how  
often edits occur - probably too often. I've looked at the code, and  
the SolrJ commit() method takes two parameters - one is called  
waitSearcher, and another waitFlush. They aren't really documented too  
well, but I assume that the waitSearcher bool (currently set to false)  
may be part of the problem.

I am considering removing the code that calls the commit() method  
altogether and relying on the settings for DirectUpdateHandler2 to  
determine when commits actually get done. That way we can tweak it on  
the Solr side without having to recompile and redeploy our main app  
(or add new settings, and the code to handle them, to our main
app).

Out of curiosity: how are people doing optimize() calls? Are you doing
them immediately after every commit(), or periodically as part of a job?

Jon

On 31 Mar 2010, at 05:11, Lance Norskog wrote:

> How often do you commit? New searchers are only created after a
> commit. You'll notice that handleCommit is in the stack trace :) This
> means that commits are happening too often for the amount of other
> traffic currently going on, and so Solr can't finish creating one
> searcher before the next commit starts another.
>
> The "service unavailable" messages are roughly the same problem: those
> commits might be timing out because the other end is too busy doing
> commits. You might try using autocommit instead: commits can happen
> every N documents, every T seconds, or both. This keeps the commit
> overhead to a controlled amount, and commits should no longer outpace
> the warming of previous searchers.
>
> -- 
> Lance Norskog
> goksron@gmail.com


Re: Some indexing requests to Solr fail

Posted by Lance Norskog <go...@gmail.com>.
How often do you commit? New searchers are only created after a
commit. You'll notice that handleCommit is in the stack trace :) This
means that commits are happening too often for the amount of other
traffic currently going on, and so Solr can't finish creating one
searcher before the next commit starts another.

The "service unavailable" messages are roughly the same problem: those
commits might be timing out because the other end is too busy doing
commits. You might try using autocommit instead: commits can happen
every N documents, every T seconds, or both. This keeps the commit
overhead to a controlled amount, and commits should no longer outpace
the warming of previous searchers.
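
Roughly, that means something like this in the <updateHandler> section of
solrconfig.xml (the numbers are only illustrative and need tuning to your
traffic):

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- Commit automatically after 10,000 added documents or 60 seconds
         (maxTime is in milliseconds); both figures are examples only. -->
    <autoCommit>
      <maxDocs>10000</maxDocs>
      <maxTime>60000</maxTime>
    </autoCommit>
  </updateHandler>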




-- 
Lance Norskog
goksron@gmail.com