Posted to solr-user@lucene.apache.org by Naveen Gupta <nk...@gmail.com> on 2011/08/14 03:47:51 UTC

exceeded limit of maxWarmingSearchers ERROR

Hi,

Most of the settings are default.

We have a single node (memory 1 GB, index size 4 GB).

We have a requirement for very fast commits. This is a near-real-time use
case: we poll many threads from a third party and index them into our
system.

We want these results to be available soon.

We are committing for each user (a user may have 10k threads, and one
thread may have 10 messages), so overall there will be around 0.1 million
(100,000) documents per user.

Earlier we were using a commitWithin of 10 milliseconds inside the document;
that slowed the indexing, but we were not getting any error.

Once we removed the commitWithin, indexing became very fast, but after that
we started seeing the error below in the system.

From reading many forums, this appears to happen because of a very fast
commit rate, but what is the solution for our problem?

We are using curl to post the data and commit.

Also, till now we are using the default solrconfig.
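For illustration, the update request itself can carry a commitWithin parameter instead of being followed by an explicit commit. The sketch below only echoes the curl commands rather than executing them, so it runs without a live Solr instance; the host, port, update path, and batch file name are placeholders, not values from this thread:

```shell
# Hypothetical sketch of the two commit styles discussed in this thread.
# SOLR_UPDATE_URL and batch.xml are placeholders; adjust for your deployment.
SOLR_UPDATE_URL="http://localhost:8983/solr/update"

# Gentler: post documents and ask Solr to make them visible within 30 seconds.
echo curl "${SOLR_UPDATE_URL}?commitWithin=30000" \
    -H "Content-Type: text/xml" --data-binary @batch.xml

# Problematic: an explicit commit per batch. Each commit opens a new
# searcher, and too many concurrently warming searchers trigger the
# maxWarmingSearchers error.
echo curl "${SOLR_UPDATE_URL}?commit=true" \
    -H "Content-Type: text/xml" --data-binary @batch.xml
```

The point is that commit frequency is controlled server-side (or per request) rather than by the client issuing commits as fast as it can.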

Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
exceeded limit of maxWarmingSearchers=2, try again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Loka <lo...@zensar.in>.
Erickson,

Thanks for your reply. Before your reply, I had googled, found the following, and added it under the
<updateHandler class="solr.DirectUpdateHandler2"> tag of the solrconfig.xml file:


<autoCommit> 
    <maxTime>30000</maxTime> 
  </autoCommit>

  <autoSoftCommit> 
    <maxTime>10000</maxTime> 
  </autoSoftCommit>

Is the above fine, or should I go strictly with your suggestion, i.e. as below:

<autoCommit> 
       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime> 
       <openSearcher>false</openSearcher> 
     </autoCommit> 

    <!-- softAutoCommit is like autoCommit except it causes a 
         'soft' commit which only ensures that changes are visible 
         but does not ensure that data is synced to disk.  This is 
         faster and more near-realtime friendly than a hard commit. 
      --> 

     <autoSoftCommit> 
       <maxTime>${solr.autoSoftCommit.maxTime:10000}</maxTime> 
     </autoSoftCommit> 



Please confirm.

But how can I check how much autowarming I am doing? As of now I have set maxWarmingSearchers to 2; should I increase the value?


Regards,
Lokanadham Ganta


----- Original Message -----
From: "Erick Erickson [via Lucene]" <ml...@n3.nabble.com>
To: "Loka" <lo...@zensar.in>
Sent: Friday, November 15, 2013 6:07:12 PM
Subject: Re: exceeded limit of maxWarmingSearchers ERROR

Where did you get that syntax? I've never seen that before. 

What you want to configure is the "maxTime" in your 
autocommit and autosoftcommit sections of solrconfig.xml, 
as: 

     <autoCommit> 
       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime> 
       <openSearcher>false</openSearcher> 
     </autoCommit> 

    <!-- softAutoCommit is like autoCommit except it causes a 
         'soft' commit which only ensures that changes are visible 
         but does not ensure that data is synced to disk.  This is 
         faster and more near-realtime friendly than a hard commit. 
      --> 

     <autoSoftCommit> 
       <maxTime>${solr.autoSoftCommit.maxTime:10000}</maxTime> 
     </autoSoftCommit> 

And you do NOT want to commit from your client. 

Depending on how long autowarm takes, you may still see this error, 
so check how much autowarming you're doing, i.e. how you've 
configured the caches in solrconfig.xml and what you 
have for newSearcher and firstSearcher. 

I'd start with autowarm numbers of, maybe, 16 or so at most. 
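(For illustration: autowarming is controlled by the autowarmCount attribute on the caches in solrconfig.xml. The sketch below is a placeholder, not a configuration from this thread; cache classes and sizes are illustrative, with the modest autowarm counts of 16 suggested above.)

```xml
<!-- Illustrative only: small autowarmCount values keep opening a new
     searcher cheap; sizes and cache classes are placeholders. -->
<filterCache class="solr.FastLRUCache"
             size="512" initialSize="512" autowarmCount="16"/>
<queryResultCache class="solr.LRUCache"
                  size="512" initialSize="512" autowarmCount="16"/>
```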

Best, 
Erick 


On Fri, Nov 15, 2013 at 2:46 AM, Loka < [hidden email] > wrote: 


> Hi Erickson, 
> 
> Thanks for your reply, basically, I used commitWithin tag as below in 
> solrconfig.xml file 
> 
> 
>  <requestHandler name="/update" class="solr.XmlUpdateRequestHandler"> 
>            <lst name="defaults"> 
>              <str name="update.processor">dedupe</str> 
>            </lst> 
>             <add commitWithin="10000"/> 
>          </requestHandler> 
> 
> <updateRequestProcessorChain name="dedupe"> 
>     <processor 
> class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory"> 
>       <bool name="enabled">true</bool> 
>       <str name="signatureField">id</str> 
>       <bool name="overwriteDupes">false</bool> 
>       <str name="fields">name,features,cat</str> 
>       <str 
> name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str> 
>     </processor> 
>     <processor class="solr.LogUpdateProcessorFactory" /> 
>     <processor class="solr.RunUpdateProcessorFactory" /> 
>   </updateRequestProcessorChain> 
> 
> 
> But this fix did not solve my problem; I again got the same error. 
> PFA the schema.xml, solrconfig.xml, solr-spring.xml, and 
> messaging-spring.xml files; can you suggest where I am going wrong. 
> 
> Regards, 
> Lokanadham Ganta 
> 
> ----- Original Message ----- 
> From: "Erick Erickson [via Lucene]" < 
> [hidden email] > 
> To: "Loka" < [hidden email] > 
> Sent: Thursday, November 14, 2013 8:38:17 PM 
> Subject: Re: exceeded limit of maxWarmingSearchers ERROR 
> 
> CommitWithin is either configured in solrconfig.xml for the 
> <autoCommit> or <autoSoftCommit> tags as the maxTime tag. I 
> recommend you do use this. 
> 
> The other way you can do it is if you're using SolrJ, one of the 
> forms of the server.add() method takes a number of milliseconds 
> to force a commit. 
> 
> You really, really do NOT want to use ridiculously short times for this 
> like a few milliseconds. That will cause new searchers to be 
> warmed, and when too many of them are warming at once you 
> get this error. 
> 
> Seriously, make your commitWithin or autocommit parameters 
> as long as you can, for many reasons. 
> 
> Here's a bunch of background: 
> 
> http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/ 
> 
> Best, 
> Erick 
> 
> 
> On Thu, Nov 14, 2013 at 5:13 AM, Loka < [hidden email] > wrote: 
> 
> 
> > Hi Naveen, 
> > I am also getting a similar problem, and I do not know how to use the 
> > commitWithin tag. Can you help me with how to use it? Can you give 
> > me an example? 
> > 
> > 
> > 
> > -- 
> > View this message in context: 
> > 
> http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-ERROR-tp3252844p4100864.html 
> > Sent from the Solr - User mailing list archive at Nabble.com. 
> > 
> 
> 
> 
> 
> 
> 
> solr-spring.xml (2K) <http://lucene.472066.n3.nabble.com/attachment/4101152/0/solr-spring.xml> 
> messaging-spring.xml (2K) <http://lucene.472066.n3.nabble.com/attachment/4101152/1/messaging-spring.xml> 
> schema.xml (6K) <http://lucene.472066.n3.nabble.com/attachment/4101152/2/schema.xml> 
> solrconfig.xml (61K) <http://lucene.472066.n3.nabble.com/attachment/4101152/3/solrconfig.xml> 
> 
> 
> 
> 
> 










Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
You're using Solr 1.4? That's long enough ago that I've mostly forgotten
the quirks there, sorry.

Erick


On Mon, Nov 18, 2013 at 2:38 AM, Loka <lo...@zensar.in> wrote:

> Hi Erickson,
>
> Thanks for your reply.
>
> I am getting the following error with Liferay Tomcat.
>
> 2013/11/18 07:29:42 ERROR
> com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:90)
> []
>
> [liferay/search_writer]
> org.apache.solr.common.SolrException: Not Found
>
> Not Found
>
> request:
> http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
> org.apache.solr.common.SolrException: Not Found
>
> Not Found
>
> request:
> http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Loka <lo...@zensar.in>.
Hi Erickson,

Thanks for your reply.

I am getting the following error with Liferay Tomcat.

2013/11/18 07:29:42 ERROR
com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:90) []

[liferay/search_writer] 
org.apache.solr.common.SolrException: Not Found

Not Found

request: http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
org.apache.solr.common.SolrException: Not Found

Not Found

request: http://10.43.4.155:8080/apache-solr-1.4.1/liferay/update?wt=javabin&version=2.2
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
	at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
	at com.liferay.portal.search.solr.server.BasicAuthSolrServer.request(BasicAuthSolrServer.java:93)
	at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
	at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:97)
	at com.liferay.portal.search.solr.SolrIndexWriterImpl.deleteDocument(SolrIndexWriterImpl.java:83)
	at com.liferay.portal.search.solr.SolrIndexWriterImpl.updateDocument(SolrIndexWriterImpl.java:133)
	at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.doReceive(SearchWriterMessageListener.java:86)
	at com.liferay.portal.kernel.search.messaging.SearchWriterMessageListener.receive(SearchWriterMessageListener.java:33)
	at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:63)
	at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:61)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:679)...




Can you help me understand why I am getting this error?

PFA the error log and the solr-spring.xml files.

Regards,
Lokanadham Ganta

----- Original Message -----
From: "Erick Erickson [via Lucene]" <ml...@n3.nabble.com>
To: "Loka" <lo...@zensar.in>
Sent: Friday, November 15, 2013 7:14:26 PM
Subject: Re: exceeded limit of maxWarmingSearchers ERROR

That's a fine place to start. This form: 

<maxTime>${solr.autoCommit.maxTime:15000}</maxTime> 

just allows you to define a sysvar to override the 15-second default, like 
java -Dsolr.autoCommit.maxTime=30000 -jar start.jar 


On Fri, Nov 15, 2013 at 8:11 AM, Loka < [hidden email] > wrote: 


> Hi Erickson, 
> 
> I have also seen the following on Google; can I use it in 
> <updateHandler class="solr.DirectUpdateHandler2">: 
> <commitWithin>     <softCommit>false</softCommit></commitWithin> 
> 
> If the above is correct to add, can I also add the below tags in 
> <updateHandler class="solr.DirectUpdateHandler2"> along with the above tag: 
> 
> <autoCommit> 
>     <maxTime>30000</maxTime> 
>   </autoCommit> 
> 
>   <autoSoftCommit> 
>     <maxTime>10000</maxTime> 
>   </autoSoftCommit> 
> 
> 
> so finally, it will look like as: 
> 
> <updateHandler class="solr.DirectUpdateHandler2"> 
> <autoCommit> 
>     <maxTime>30000</maxTime> 
>   </autoCommit> 
> 
>   <autoSoftCommit> 
>     <maxTime>10000</maxTime> 
>   </autoSoftCommit> 
> <commitWithin>     <softCommit>false</softCommit></commitWithin> 
> 
> </updateHandler> 
> 
> 
> Is the above one fine? 
> 
> 
> Regards, 
> Lokanadham Ganta 

solr_spring.xml (2K) <http://lucene.472066.n3.nabble.com/attachment/4101624/0/solr_spring.xml>
Liferay_Solr_Error_Log.txt (2K) <http://lucene.472066.n3.nabble.com/attachment/4101624/1/Liferay_Solr_Error_Log.txt>




--
View this message in context: http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-ERROR-tp3252844p4101624.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
That's a fine place to start. This form:

<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>

just allows you to define a system property to override the 15-second default, e.g.
java -Dsolr.autoCommit.maxTime=30000 -jar start.jar



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Loka <lo...@zensar.in>.
Hi Erickson,

I have also seen the following on Google; can I use it inside <updateHandler class="solr.DirectUpdateHandler2">:
<commitWithin>     <softCommit>false</softCommit></commitWithin>

If the above is correct to add, can I also add the tags below in <updateHandler class="solr.DirectUpdateHandler2"> along with the above tag:

<autoCommit> 
    <maxTime>30000</maxTime> 
  </autoCommit>

  <autoSoftCommit> 
    <maxTime>10000</maxTime> 
  </autoSoftCommit>


so finally, it will look like as:

<updateHandler class="solr.DirectUpdateHandler2"> 
<autoCommit> 
    <maxTime>30000</maxTime> 
  </autoCommit>

  <autoSoftCommit> 
    <maxTime>10000</maxTime> 
  </autoSoftCommit>
<commitWithin>     <softCommit>false</softCommit></commitWithin>

</updateHandler>


Is the above one fine?


Regards,
Lokanadham Ganta




----- Original Message -----
From: "Lokanadham Ganta" <lo...@zensar.in>
To: "Erick Erickson [via Lucene]" <ml...@n3.nabble.com>
Sent: Friday, November 15, 2013 6:33:20 PM
Subject: Re: exceeded limit of maxWarmingSearchers ERROR

Erickson,

Thanks for your reply. Before it arrived, I had googled, found the following, and added it under the
<updateHandler class="solr.DirectUpdateHandler2"> tag of the solrconfig.xml file.


<autoCommit> 
    <maxTime>30000</maxTime> 
  </autoCommit>

  <autoSoftCommit> 
    <maxTime>10000</maxTime> 
  </autoSoftCommit>

Is the above fine, or should I go strictly as per your suggestion, i.e. as below:

<autoCommit> 
       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime> 
       <openSearcher>false</openSearcher> 
     </autoCommit> 

    <!-- softAutoCommit is like autoCommit except it causes a 
         'soft' commit which only ensures that changes are visible 
         but does not ensure that data is synced to disk.  This is 
         faster and more near-realtime friendly than a hard commit. 
      --> 

     <autoSoftCommit> 
       <maxTime>${solr.autoSoftCommit.maxTime:10000}</maxTime> 
     </autoSoftCommit> 



Please confirm me.

But how can I check how much autowarming I am doing? As of now I have set maxWarmingSearchers to 2; should I increase the value?


Regards,
Lokanadham Ganta



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
Where did you get that syntax? I've never seen that before.

What you want to configure is the "maxTime" in your
autocommit and autosoftcommit sections of solrconfig.xml,
as:

     <autoCommit>
       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
       <openSearcher>false</openSearcher>
     </autoCommit>

    <!-- softAutoCommit is like autoCommit except it causes a
         'soft' commit which only ensures that changes are visible
         but does not ensure that data is synced to disk.  This is
         faster and more near-realtime friendly than a hard commit.
      -->

     <autoSoftCommit>
       <maxTime>${solr.autoSoftCommit.maxTime:10000}</maxTime>
     </autoSoftCommit>

And you do NOT want to commit from your client.

Depending on how long autowarm takes, you may still see this error,
so check how much autowarming you're doing, i.e. how you've
configured the caches in solrconfig.xml and what you
have for newSearcher and firstSearcher.

I'd start with autowarm numbers of, maybe, 16 or so at most.
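As a sketch of what that looks like (the cache classes and sizes below are illustrative defaults, not values taken from the attached solrconfig.xml), autowarm counts are set per cache in the <query> section of solrconfig.xml:

```xml
<!-- Illustrative solrconfig.xml fragment: small autowarmCount values
     keep new-searcher warming cheap so frequent commits don't pile up
     warming searchers. -->
<query>
  <filterCache class="solr.FastLRUCache"
               size="512" initialSize="512" autowarmCount="16"/>
  <queryResultCache class="solr.LRUCache"
                    size="512" initialSize="512" autowarmCount="16"/>
  <!-- documentCache cannot be autowarmed, so it has no autowarmCount -->
  <documentCache class="solr.LRUCache" size="512" initialSize="512"/>
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
```

Expensive newSearcher/firstSearcher warm-up queries in the same section add to warming time in the same way.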

Best,
Erick



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Loka <lo...@zensar.in>.
Hi Erickson,

Thanks for your reply, basically, I used commitWithin tag as below in solrconfig.xml file


 <requestHandler name="/update" class="solr.XmlUpdateRequestHandler">
           <lst name="defaults">
             <str name="update.processor">dedupe</str>
           </lst>
	    <add commitWithin="10000"/>
         </requestHandler>

<updateRequestProcessorChain name="dedupe">
    <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">id</str>
      <bool name="overwriteDupes">false</bool>
      <str name="fields">name,features,cat</str>
      <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory" />
    <processor class="solr.RunUpdateProcessorFactory" />
  </updateRequestProcessorChain>


But this fix did not solve my problem; I got the same error again.
Please find attached schema.xml, solrconfig.xml, solr-spring.xml, and messaging-spring.xml; can you suggest where I am going wrong?

Regards,
Lokanadham Ganta











solr-spring.xml (2K) <http://lucene.472066.n3.nabble.com/attachment/4101152/0/solr-spring.xml>
messaging-spring.xml (2K) <http://lucene.472066.n3.nabble.com/attachment/4101152/1/messaging-spring.xml>
schema.xml (6K) <http://lucene.472066.n3.nabble.com/attachment/4101152/2/schema.xml>
solrconfig.xml (61K) <http://lucene.472066.n3.nabble.com/attachment/4101152/3/solrconfig.xml>





Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
CommitWithin is either configured in solrconfig.xml for the
<autoCommit> or <autoSoftCommit> tags as the maxTime tag. I
recommend you do use this.

The other way you can do it is if you're using SolrJ, one of the
forms of the server.add() method takes a number of milliseconds
to force a commit.
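For example (a sketch, assuming a local Solr at the default port and the stock /update handler), commitWithin can be set as an attribute on the <add> element of an XML update message, so Solr commits within that window without the client ever issuing an explicit commit:

```xml
<!-- Hypothetical update message POSTed to http://localhost:8983/solr/update
     with Content-Type: text/xml. commitWithin asks Solr to commit within
     60 seconds of receiving the add. -->
<add commitWithin="60000">
  <doc>
    <field name="id">example-doc-1</field>
    <field name="name">example</field>
  </doc>
</add>
```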

You really, really do NOT want to use ridiculously short times for this
like a few milliseconds. That will cause new searchers to be
warmed, and when too many of them are warming at once you
get this error.

Seriously, make your commitWithin or autocommit parameters
as long as you can, for many reasons.

Here's a bunch of background:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Loka <lo...@zensar.in>.
Hi Naveen,
I am getting a similar problem, and I do not know how to use the
commitWithin tag. Can you help me with how to use it, and can you give
me an example?




Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Naveen Gupta <nk...@gmail.com>.
Hi Nagendra,

Thanks a lot. I will start working on NRT today; meanwhile, the old settings
(increased maxWarmingSearchers on the master) have not given me trouble so far.

NRT will be more suitable for us, though. I will work on it, analyze the
performance, and share the results with you.

Thanks
Naveen

2011/8/17 Nagendra Nagarajayya <nn...@transaxtions.com>

> Naveen:
>
> See below:
>
>> *NRT with Apache Solr 3.3 and RankingAlgorithm does need a commit for a
>>
>> document to become searchable*. Any document that you add through update
>> becomes  immediately searchable. So no need to commit from within your
>> update client code.  Since there is no commit, the cache does not have to
>> be
>> cleared or the old searchers closed or  new searchers opened, and warmed
>> (error that you are facing).
>>
>>
>> The link you mentioned is clearly what we wanted. But the real issue is
>> that you state "RA does need a commit for a document to become
>> searchable" (please take a look at the bold sentence).
>>
>>
> Yes, as said earlier you do not need a commit. A document becomes
> searchable as soon as you add it. Below is an example of adding a document
> with curl (this is from the wiki at
> http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x ):
>
> curl "http://localhost:8983/solr/update/csv?stream.file=/tmp/x1.csv&encapsulator=%1f"
>
>
> There is no commit included. The contents of the document become
> immediately searchable.
>
>
>> In the future, for higher loads, can it support master/slave
>> (replication), etc., to scale and perform better? If yes, we would like
>> to go with NRT, and the performance described in the article is
>> acceptable. We were expecting the same real-time performance for a
>> single user.
>>
>>
> There are no changes to Master/Slave (replication) process. So any changes
> you have currently will work as before or if you enable replication later,
> it should still work as without NRT.
>
>
>> What about multiple users? Should we wait 1-2 seconds between curl
>> requests to make Solr perform better, or will it handle multiple
>> requests internally (multithreaded, etc.)?
>>
>
> Again, for updating documents you do not have to change your current
> process or code. Everything remains the same, except that if you were
> including a commit, you no longer include it in your update statements.
> There is no change to the existing update process, so internally it will
> not queue or multi-thread updates. It works as in existing Solr
> functionality; there are no changes to the existing setup.
>
> Regarding performance, in the wiki paper every update through curl adds
> (streams) 500 documents, so you could take this approach. (This was
> something I chose arbitrarily to test the performance, but it seems to
> work well.)
>
>
>> What would be the doc size (10,000 docs) to allow the JVM to perform
>> better? Have you done any benchmarking in terms of multithreaded and
>> multi-user load for NRT, and also JVM tuning for Solr server performance?
>> Any kind of performance analysis would help us decide quickly whether to
>> switch over to NRT.
>>
>>
> The performance discussed in the wiki paper uses the MBArtists index. The
> MBArtists index is the index used as one of the examples in the book, Solr
> 1.4 Enterprise Search Server. You can download and build this index if you
> have the book or can also download the contents from musicbrainz.org.
>  Each doc may be about 100 bytes and has about 7 fields. Performance with
> wikipedia's xml dump, commenting out skipdoc field (include redirects) in
> the dataconfig.xml [ dataimport handler ], the update performance is about
> 15000 docs / sec (100 million docs), with the skipdoc enabled (does not skip
> redirects), the performance is about 1350 docs / sec [ time spent mostly
> converting validating/xml  than actual update ] (about 11 million docs ).
>  Documents in wikipedia can be quite big, at least avg size of about
> 2500-5000 bytes or more.
>
> I would suggest that you download and give NRT with Apache Solr 3.3 and
> RankingAlgorithm a try and get a feel of it as this would be the best way to
> see how your config works with it.
>
>
>  Questions in terms for switching over to NRT,
>>
>>
>> 1.Should we upgrade to SOLR 4.x ?
>>
>> 2. Any benchmarking (10,000 docs/secs).  The question here is more
>> specific
>>
>> the detail of individual doc (fields, number of fields, fields size,
>> parameters affecting performance with faceting or w/o faceting)
>>
>
> Please see the MBArtists index as discussed above.
>
>
>
>  3. What about multiple users ?
>>
>> A user in real time might have a large doc size of .1 million. How
>> to
>> break and analyze which one is better (though it is our task to do). But
>> still any kind of break up will help us. Imagine a user inbox.
>>
>>
> You may be able to stream the documents in a set as in the example in the
> wiki. The example streams 500 documents at a time. The wiki paper has an
> example of a document that was used. You could copy/paste that to try it
> out.
>
>
>  4. JVM tuning and performance result based on Multithreaded environment.
>>
>> 5. Machine Details (RAM, CPU, and settings from SOLR perspective).
>>
>>
> Default Solr settings with the shipped jetty container. The startup script
> used is available when you download Solr 3.3 with RankingAlgorithm. It has
> mx set to 2Gb and uses the default collector with parallel collection
> enabled for the young generation.  The system is an x86_64 Linux (2.6
> kernel), 2 core (2.5Ghz) and uses internal disks for indexing.
>
> My suggestion would be to download a version of Solr 3.3 with
> RankingAlgorithm and give it a try to see if any changes are needed from
> your existing setup.
>
>
> Regards,
>
> - Nagendra Nagarajayya
> http://solr-ra.tgels.org
> http://rankingalgorithm.tgels.org
>
>
>  Hoping that you are getting my point. We want to benchmark the
>> performance.
>> If you can involve me in your group, that would be great.
>>
>> Thanks
>> Naveen
>>
>>
>>
>> 2011/8/15 Nagendra Nagarajayya <nn...@transaxtions.com>
>>
>>  Bill:
>>>
>>> I did look at Mark's performance tests. Looks very interesting.
>>>
>>> Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
>>> http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x
>>>
>>>
>>>
>>> Regards
>>>
>>> - Nagendra Nagarajayya
>>> http://solr-ra.tgels.org
>>> http://rankingalgorithm.tgels.org
>>>
>>>
>>>
>>>
>>> On 8/14/2011 7:47 PM, Bill Bell wrote:
>>>
>>>  I understand.
>>>>
>>>> Have you looked at Mark's patch? From his performance tests, it looks
>>>> pretty good.
>>>>
>>>> When would RA work better?
>>>>
>>>> Bill
>>>>
>>>>
>>>> On 8/14/11 8:40 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
>>>> wrote:
>>>>
>>>>  Bill:
>>>>
>>>>> The technical details of the NRT implementation in Apache Solr with
>>>>> RankingAlgorithm (SOLR-RA) is available here:
>>>>>
>>>>> http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf
>>>>>
>>>>>
>>>>> (Some changes for Solr 3.x, but for most it is as above)
>>>>>
>>>>> Regarding support for 4.0 trunk, should happen sometime soon.
>>>>>
>>>>> Regards
>>>>>
>>>>> - Nagendra Nagarajayya
>>>>> http://solr-ra.tgels.org
>>>>> http://rankingalgorithm.tgels.org
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 8/14/2011 7:11 PM, Bill Bell wrote:
>>>>>
>>>>>  OK,
>>>>>>
>>>>>> I'll ask the elephant in the room...
>>>>>>
>>>>>> What is the difference between the new UpdateHandler from Mark and the
>>>>>> SOLR-RA?
>>>>>>
>>>>>> The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?
>>>>>>
>>>>>> Pros/Cons?
>>>>>>
>>>>>>
>>>>>> On 8/14/11 8:10 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
>>>>>> wrote:
>>>>>>
>>>>>>  Naveen:
>>>>>>
>>>>>>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit
>>>>>>> for a
>>>>>>> document to become searchable. Any document that you add through
>>>>>>> update
>>>>>>> becomes  immediately searchable. So no need to commit from within
>>>>>>> your
>>>>>>> update client code.  Since there is no commit, the cache does not
>>>>>>> have
>>>>>>> to be cleared or the old searchers closed or  new searchers opened,
>>>>>>> and
>>>>>>> warmed (error that you are facing).
>>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>> - Nagendra Nagarajayya
>>>>>>> http://solr-ra.tgels.org
>>>>>>> http://rankingalgorithm.tgels.org
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>>>>>>
>>>>>>>  Hi Mark/Erick/Nagendra,
>>>>>>>>
>>>>>>>> I was not very confident about NRT at that point of time, when we
>>>>>>>> started
>>>>>>>> project almost 1 year ago, definitely i would try NRT and see the
>>>>>>>> performance.
>>>>>>>>
>>>>>>>> The current requirement was working fine till we were using
>>>>>>>> commitWithin 10
>>>>>>>> millisecs in the XMLDocument which we were posting to SOLR.
>>>>>>>>
>>>>>>>> But due to which, we were getting very poor performance (almost 3
>>>>>>>> mins
>>>>>>>> for
>>>>>>>> 15,000 docs) per user. There are many parallel users committing to
>>>>>>>> our
>>>>>>>> SOLR.
>>>>>>>>
>>>>>>>> So we removed the commitWithin, and hence performance was much much
>>>>>>>> better.
>>>>>>>>
>>>>>>>> But then we are getting this maxWarmingSearcher Error, because we
>>>>>>>> are
>>>>>>>> committing separately as a curl request after once entire doc is
>>>>>>>> submitted
>>>>>>>> for indexing.
>>>>>>>>
>>>>>>>> The question here is what is the difference between commitWithin and
>>>>>>>> commit
>>>>>>>> (apart from the fact that commit takes memory and processes and
>>>>>>>> additional
>>>>>>>> hardware usage)
>>>>>>>>
>>>>>>>> Why we want it to be visible as soon as possible, since we are
>>>>>>>> applying
>>>>>>>> many
>>>>>>>> business rules on top of the results (older indexes as well as new
>>>>>>>> one)
>>>>>>>> and
>>>>>>>> apply different filters.
>>>>>>>>
>>>>>>>> up to 5 mins is fine for us. but more than that we need to think then
>>>>>>>> other
>>>>>>>> optimizations.
>>>>>>>>
>>>>>>>> We will definitely try NRT. But please tell me other options which
>>>>>>>> we
>>>>>>>> can
>>>>>>>> apply in order to optimize.?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Naveen
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>>>>>>> Erickson <er...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>  Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>>>>>
>>>>>>>>> Erick
>>>>>>>>>
>>>>>>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<
>>>>>>>>> markrmiller@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>  On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>>>>>>
>>>>>>>>>>  You either have to go to near real time (NRT), which is under
>>>>>>>>>>
>>>>>>>>>>> development, but not committed to trunk yet
>>>>>>>>>>>
>>>>>>>>>>>  NRT support is committed to trunk.
>>>>>>>>>>
>>>>>>>>>> - Mark Miller
>>>>>>>>>> lucidimagination.com
>>>>>>>>>>
>

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Nagendra Nagarajayya <nn...@transaxtions.com>.
Naveen:

See below:
> *NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
> document to become searchable*. Any document that you add through update
> becomes  immediately searchable. So no need to commit from within your
> update client code.  Since there is no commit, the cache does not have to be
> cleared or the old searchers closed or  new searchers opened, and warmed
> (error that you are facing).
>
>
> Looking at the link which you mentioned is clearly what we wanted. But the
> real thing is that you have "RA does not need a commit for a document to
> become searchable" (please take a look at the bold sentence).
>

Yes, as said earlier you do not need a commit. A document becomes 
searchable as soon as you add it. Below is an example of adding a 
document with curl (this from the wiki at 
http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x):

curl "http://localhost:8983/solr/update/csv?stream.file=/tmp/x1.csv&encapsulator=%1f"


There is no commit included. The contents of the document become 
immediately searchable.
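For context on the error in the thread's subject line: in stock Solr (without the NRT changes discussed here), each hard commit opens and warms a new searcher, and the number of searchers allowed to warm concurrently is capped in solrconfig.xml. A sketch of the relevant setting (the value shown is the default; raising it usually just masks over-frequent commits rather than fixing them):

```xml
<!-- solrconfig.xml (excerpt) -->
<!-- Maximum number of searchers that may be warming at once.
     Commits arriving faster than warming completes exceed this cap
     and produce the "exceeded limit of maxWarmingSearchers" error.
     Committing less often is the usual fix, not raising the limit. -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```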

> In future, for more loads, can it cater to Master Slave (Replication) and
> etc to scale and perform better? If yes, we would like to go for NRT and
> looking at the performance described in the article is acceptable. We were
> expecting the same real time performance for a single user.
>

There are no changes to Master/Slave (replication) process. So any 
changes you have currently will work as before or if you enable 
replication later, it should still work as without NRT.

> What about multiple users, should we wait for 1-2 secs before calling the
> curl request to make SOLR perform better. Or internally it will handle with
> multiple request (multithreaded and etc).

Again for updating documents, you do not have to change your current 
process or code. Everything remains the same, except that if you were 
including commit, you do not include commit in your update statements. 
There is no change to the existing update process so internally it will 
not queue or multi-thread updates. It is as in existing Solr
functionality; there are no changes to the existing setup.

Regarding performing better: in the Wiki paper every update through curl
adds (streams) 500 documents, so you could take this approach. (This was
a batch size I chose somewhat arbitrarily to test the performance, but it
seems to work well.)
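The 500-documents-per-update approach above can be scripted with standard tools. A minimal sketch, assuming a CSV with a header row and the stock example update URL (the URL, file names, and batch size are illustrative, not from the thread); the curl command is echoed rather than executed so the sketch runs without a live Solr:

```shell
#!/bin/sh
# Post a large CSV to Solr in 500-document batches, one curl call per
# batch, along the lines of the wiki example quoted above.
SOLR_URL="http://localhost:8983/solr/update/csv"
BATCH=500

# Demo input: a header line plus 1200 data rows.
rm -f /tmp/chunk.* /tmp/docs.csv
printf 'id,name\n' > /tmp/docs.csv
i=1
while [ "$i" -le 1200 ]; do
  printf '%s,doc%s\n' "$i" "$i" >> /tmp/docs.csv
  i=$((i + 1))
done

# Keep the header aside, split the body into 500-row chunks, and
# re-attach the header to each chunk so every batch is a valid CSV.
head -n 1 /tmp/docs.csv > /tmp/header.csv
tail -n +2 /tmp/docs.csv | split -l "$BATCH" - /tmp/chunk.
for c in /tmp/chunk.??; do
  cat /tmp/header.csv "$c" > "$c.csv"
  # Drop the echo to actually send the batch to a running Solr.
  echo curl "\"$SOLR_URL?stream.file=$c.csv\""
done
```

Each echoed line corresponds to one update request; with the 1200 demo rows this produces three batches of 500, 500, and 200 documents.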

> What would be doc size (10,000 docs) to allow JVM perform better? Have you
> done any kind of benchmarking in terms of multi threaded and multi user for
> NRT and also JVM tuning in terms of SOLR sever performance. Any kind of
> performance analysis would help us to decide quickly to switch over to NRT.
>

The performance discussed in the wiki paper uses the MBArtists index. 
The MBArtists index is the index used as one of the examples in the 
book, Solr 1.4 Enterprise Search Server. You can download and build this 
index if you have the book or can also download the contents from 
musicbrainz.org.  Each doc may be about 100 bytes and has about 7 fields.
Performance with wikipedia's xml dump: commenting out the skipdoc field
(include redirects) in the dataconfig.xml [ dataimport handler ], the
update performance is about 15000 docs / sec (100 million docs); with
the skipdoc enabled (does not skip redirects), the performance is about
1350 docs / sec [ time spent mostly on validating/converting XML rather
than on the actual update ] (about 11 million docs).  Documents in
wikipedia can be quite big, with an average size of about 2500-5000 bytes or more.

I would suggest that you download and give NRT with Apache Solr 3.3 and 
RankingAlgorithm a try and get a feel of it as this would be the best 
way to see how your config works with it.

> Questions in terms for switching over to NRT,
>
>
> 1.Should we upgrade to SOLR 4.x ?
>
> 2. Any benchmarking (10,000 docs/secs).  The question here is more specific
>
> the detail of individual doc (fields, number of fields, fields size,
> parameters affecting performance with faceting or w/o faceting)

Please see the MBArtists index as discussed above.


> 3. What about multiple users ?
>
> A user in real time might have a large doc size of .1 million. How to
> break and analyze which one is better (though it is our task to do). But
> still any kind of break up will help us. Imagine a user inbox.
>

You may be able to stream the documents in a set as in the example in the
wiki. The example streams 500 documents at a time. The wiki paper has an 
example of a document that was used. You could copy/paste that to try it 
out.

> 4. JVM tuning and performance result based on Multithreaded environment.
>
> 5. Machine Details (RAM, CPU, and settings from SOLR perspective).
>

Default Solr settings with the shipped jetty container. The startup 
script used is available when you download Solr 3.3 with 
RankingAlgorithm. It has mx set to 2Gb and uses the default collector 
with parallel collection enabled for the young generation.  The system 
is an x86_64 Linux (2.6 kernel), 2-core (2.5GHz) machine and uses internal disks
for indexing.
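The heap and collector settings described above translate roughly into JVM flags like the following; the exact flags live in the startup script shipped with the download, so treat these as an assumption rather than a transcription:

```shell
# Illustrative JVM options: 2 GB max heap, parallel (throughput)
# collection for the young generation. Verify against the shipped
# startup script before relying on them.
JAVA_OPTS="-Xmx2g -XX:+UseParallelGC"
# A jetty launch line using them would look like:
echo java $JAVA_OPTS -jar start.jar
```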

My suggestion would be to download a version of Solr 3.3 with 
RankingAlgorithm and give it a try to see if any changes are needed from 
your existing setup.

Regards,

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org


> Hoping that you are getting my point. We want to benchmark the performance.
> If you can involve me in your group, that would be great.
>
> Thanks
> Naveen
>
>
>
> 2011/8/15 Nagendra Nagarajayya<nn...@transaxtions.com>
>
>> Bill:
>>
>> I did look at Mark's performance tests. Looks very interesting.
>>
>> Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
>> http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x
>>
>>
>> Regards
>>
>> - Nagendra Nagarajayya
>> http://solr-ra.tgels.org
>> http://rankingalgorithm.tgels.org
>>
>>
>>
>> On 8/14/2011 7:47 PM, Bill Bell wrote:
>>
>>> I understand.
>>>
>>> Have you looked at Mark's patch? From his performance tests, it looks
>>> pretty good.
>>>
>>> When would RA work better?
>>>
>>> Bill
>>>
>>>
>>> On 8/14/11 8:40 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
>>> wrote:
>>>
>>>   Bill:
>>>> The technical details of the NRT implementation in Apache Solr with
>>>> RankingAlgorithm (SOLR-RA) is available here:
>>>>
>>>> http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf
>>>>
>>>> (Some changes for Solr 3.x, but for most it is as above)
>>>>
>>>> Regarding support for 4.0 trunk, should happen sometime soon.
>>>>
>>>> Regards
>>>>
>>>> - Nagendra Nagarajayya
>>>> http://solr-ra.tgels.org
>>>> http://rankingalgorithm.tgels.org
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 8/14/2011 7:11 PM, Bill Bell wrote:
>>>>
>>>>> OK,
>>>>>
>>>>> I'll ask the elephant in the room...
>>>>>
>>>>> What is the difference between the new UpdateHandler from Mark and the
>>>>> SOLR-RA?
>>>>>
>>>>> The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?
>>>>>
>>>>> Pros/Cons?
>>>>>
>>>>>
>>>>> On 8/14/11 8:10 PM, "Nagendra
>>>>> Nagarajayya"<nn...@transaxtions.com>
>>>>> wrote:
>>>>>
>>>>>   Naveen:
>>>>>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>>>>>> document to become searchable. Any document that you add through update
>>>>>> becomes  immediately searchable. So no need to commit from within your
>>>>>> update client code.  Since there is no commit, the cache does not have
>>>>>> to be cleared or the old searchers closed or  new searchers opened, and
>>>>>> warmed (error that you are facing).
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> - Nagendra Nagarajayya
>>>>>> http://solr-ra.tgels.org
>>>>>> http://rankingalgorithm.tgels.org
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>>>>>
>>>>>>> Hi Mark/Erick/Nagendra,
>>>>>>>
>>>>>>> I was not very confident about NRT at that point of time, when we
>>>>>>> started
>>>>>>> project almost 1 year ago, definitely i would try NRT and see the
>>>>>>> performance.
>>>>>>>
>>>>>>> The current requirement was working fine till we were using
>>>>>>> commitWithin 10
>>>>>>> millisecs in the XMLDocument which we were posting to SOLR.
>>>>>>>
>>>>>>> But due to which, we were getting very poor performance (almost 3 mins
>>>>>>> for
>>>>>>> 15,000 docs) per user. There are many parallel users committing to our
>>>>>>> SOLR.
>>>>>>>
>>>>>>> So we removed the commitWithin, and hence performance was much much
>>>>>>> better.
>>>>>>>
>>>>>>> But then we are getting this maxWarmingSearcher Error, because we are
>>>>>>> committing separately as a curl request after once entire doc is
>>>>>>> submitted
>>>>>>> for indexing.
>>>>>>>
>>>>>>> The question here is what is the difference between commitWithin and
>>>>>>> commit
>>>>>>> (apart from the fact that commit takes memory and processes and
>>>>>>> additional
>>>>>>> hardware usage)
>>>>>>>
>>>>>>> Why we want it to be visible as soon as possible, since we are
>>>>>>> applying
>>>>>>> many
>>>>>>> business rules on top of the results (older indexes as well as new
>>>>>>> one)
>>>>>>> and
>>>>>>> apply different filters.
>>>>>>>
>>>>>>> up to 5 mins is fine for us. but more than that we need to think then
>>>>>>> other
>>>>>>> optimizations.
>>>>>>>
>>>>>>> We will definitely try NRT. But please tell me other options which we
>>>>>>> can
>>>>>>> apply in order to optimize.?
>>>>>>>
>>>>>>> Thanks
>>>>>>> Naveen
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>>>>>> Erickson <er...@gmail.com> wrote:
>>>>>>>
>>>>>>>   Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>>>>> Erick
>>>>>>>>
>>>>>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>>>>>
>>>>>>>>>   You either have to go to near real time (NRT), which is under
>>>>>>>>>> development, but not committed to trunk yet
>>>>>>>>>>
>>>>>>>>> NRT support is committed to trunk.
>>>>>>>>>
>>>>>>>>> - Mark Miller
>>>>>>>>> lucidimagination.com
>>>>>>>>>


Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Naveen Gupta <nk...@gmail.com>.
Nagendra

You wrote,

Naveen:

*NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
document to become searchable*. Any document that you add through update
becomes  immediately searchable. So no need to commit from within your
update client code.  Since there is no commit, the cache does not have to be
cleared or the old searchers closed or  new searchers opened, and warmed
(error that you are facing).


Looking at the link which you mentioned is clearly what we wanted. But the
real thing is that you have "RA does not need a commit for a document to
become searchable" (please take a look at the bold sentence).

In future, for more loads, can it cater to Master Slave (Replication) and
etc to scale and perform better? If yes, we would like to go for NRT and
looking at the performance described in the article is acceptable. We were
expecting the same real time performance for a single user.

What about multiple users, should we wait for 1-2 secs before calling the
curl request to make SOLR perform better. Or internally it will handle with
multiple request (multithreaded and etc).

What would be doc size (10,000 docs) to allow JVM perform better? Have you
done any kind of benchmarking in terms of multi threaded and multi user for
NRT and also JVM tuning in terms of SOLR sever performance. Any kind of
performance analysis would help us to decide quickly to switch over to NRT.

Questions in terms for switching over to NRT,


1.Should we upgrade to SOLR 4.x ?

2. Any benchmarking (10,000 docs/secs).  The question here is more specific

the detail of individual doc (fields, number of fields, fields size,
parameters affecting performance with faceting or w/o faceting)

3. What about multiple users ?

A user in real time might have a large doc size of .1 million. How to
break and analyze which one is better (though it is our task to do). But
still any kind of break up will help us. Imagine a user inbox.

4. JVM tuning and performance result based on Multithreaded environment.

5. Machine Details (RAM, CPU, and settings from SOLR perspective).

Hoping that you are getting my point. We want to benchmark the performance.
If you can involve me in your group, that would be great.

Thanks
Naveen



2011/8/15 Nagendra Nagarajayya <nn...@transaxtions.com>

> Bill:
>
> I did look at Mark's performance tests. Looks very interesting.
>
> Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
> http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x
>
>
> Regards
>
> - Nagendra Nagarajayya
> http://solr-ra.tgels.org
> http://rankingalgorithm.tgels.org
>
>
>
> On 8/14/2011 7:47 PM, Bill Bell wrote:
>
>> I understand.
>>
>> Have you looked at Mark's patch? From his performance tests, it looks
>> pretty good.
>>
>> When would RA work better?
>>
>> Bill
>>
>>
>> On 8/14/11 8:40 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
>> wrote:
>>
>>  Bill:
>>>
>>> The technical details of the NRT implementation in Apache Solr with
>>> RankingAlgorithm (SOLR-RA) is available here:
>>>
>>> http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf
>>>
>>> (Some changes for Solr 3.x, but for most it is as above)
>>>
>>> Regarding support for 4.0 trunk, should happen sometime soon.
>>>
>>> Regards
>>>
>>> - Nagendra Nagarajayya
>>> http://solr-ra.tgels.org
>>> http://rankingalgorithm.tgels.org
>>>
>>>
>>>
>>>
>>>
>>> On 8/14/2011 7:11 PM, Bill Bell wrote:
>>>
>>>> OK,
>>>>
>>>> I'll ask the elephant in the room...
>>>>
>>>> What is the difference between the new UpdateHandler from Mark and the
>>>> SOLR-RA?
>>>>
>>>> The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?
>>>>
>>>> Pros/Cons?
>>>>
>>>>
>>>> On 8/14/11 8:10 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
>>>> wrote:
>>>>
>>>>  Naveen:
>>>>>
>>>>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>>>>> document to become searchable. Any document that you add through update
>>>>> becomes  immediately searchable. So no need to commit from within your
>>>>> update client code.  Since there is no commit, the cache does not have
>>>>> to be cleared or the old searchers closed or  new searchers opened, and
>>>>> warmed (error that you are facing).
>>>>>
>>>>> Regards
>>>>>
>>>>> - Nagendra Nagarajayya
>>>>> http://solr-ra.tgels.org
>>>>> http://rankingalgorithm.tgels.org
>>>>>
>>>>>
>>>>>
>>>>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>>>>
>>>>>> Hi Mark/Erick/Nagendra,
>>>>>>
>>>>>> I was not very confident about NRT at that point of time, when we
>>>>>> started
>>>>>> project almost 1 year ago, definitely i would try NRT and see the
>>>>>> performance.
>>>>>>
>>>>>> The current requirement was working fine till we were using
>>>>>> commitWithin 10
>>>>>> millisecs in the XMLDocument which we were posting to SOLR.
>>>>>>
>>>>>> But due to which, we were getting very poor performance (almost 3 mins
>>>>>> for
>>>>>> 15,000 docs) per user. There are many parallel users committing to our
>>>>>> SOLR.
>>>>>>
>>>>>> So we removed the commitWithin, and hence performance was much much
>>>>>> better.
>>>>>>
>>>>>> But then we are getting this maxWarmingSearcher Error, because we are
>>>>>> committing separately as a curl request after once entire doc is
>>>>>> submitted
>>>>>> for indexing.
>>>>>>
>>>>>> The question here is what is the difference between commitWithin and
>>>>>> commit
>>>>>> (apart from the fact that commit takes memory and processes and
>>>>>> additional
>>>>>> hardware usage)
>>>>>>
>>>>>> Why we want it to be visible as soon as possible, since we are
>>>>>> applying
>>>>>> many
>>>>>> business rules on top of the results (older indexes as well as new
>>>>>> one)
>>>>>> and
>>>>>> apply different filters.
>>>>>>
>>>>>> up to 5 mins is fine for us. but more than that we need to think then
>>>>>> other
>>>>>> optimizations.
>>>>>>
>>>>>> We will definitely try NRT. But please tell me other options which we
>>>>>> can
>>>>>> apply in order to optimize.?
>>>>>>
>>>>>> Thanks
>>>>>> Naveen
>>>>>>
>>>>>>
>>>>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>>>>> Erickson <er...@gmail.com> wrote:
>>>>>>
>>>>>>  Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>>>>
>>>>>>> Erick
>>>>>>>
>>>>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>>>>
>>>>>>>>  You either have to go to near real time (NRT), which is under
>>>>>>>>> development, but not committed to trunk yet
>>>>>>>>>
>>>>>>>> NRT support is committed to trunk.
>>>>>>>>
>>>>>>>> - Mark Miller
>>>>>>>> lucidimagination.com
>>>>>>>>
>

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Nagendra Nagarajayya <nn...@transaxtions.com>.
Bill:

I did look at Mark's performance tests. Looks very interesting.

Here is the Apache Solr 3.3 with RankingAlgorithm NRT performance:
http://solr-ra.tgels.com/wiki/en/Near_Real_Time_Search_ver_3.x

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org



On 8/14/2011 7:47 PM, Bill Bell wrote:
> I understand.
>
> Have you looked at Mark's patch? From his performance tests, it looks
> pretty good.
>
> When would RA work better?
>
> Bill
>
>
> On 8/14/11 8:40 PM, "Nagendra Nagarajayya"<nn...@transaxtions.com>
> wrote:
>
>> Bill:
>>
>> The technical details of the NRT implementation in Apache Solr with
>> RankingAlgorithm (SOLR-RA) is available here:
>>
>> http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf
>>
>> (Some changes for Solr 3.x, but for most it is as above)
>>
>> Regarding support for 4.0 trunk, should happen sometime soon.
>>
>> Regards
>>
>> - Nagendra Nagarajayya
>> http://solr-ra.tgels.org
>> http://rankingalgorithm.tgels.org
>>
>>
>>
>>
>>
>> On 8/14/2011 7:11 PM, Bill Bell wrote:
>>> OK,
>>>
> I'll ask the elephant in the room...
>>>
>>> What is the difference between the new UpdateHandler from Mark and the
>>> SOLR-RA?
>>>
>>> The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?
>>>
>>> Pros/Cons?
>>>
>>>
>>> On 8/14/11 8:10 PM, "Nagendra
>>> Nagarajayya"<nn...@transaxtions.com>
>>> wrote:
>>>
>>>> Naveen:
>>>>
>>>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>>>> document to become searchable. Any document that you add through update
>>>> becomes  immediately searchable. So no need to commit from within your
>>>> update client code.  Since there is no commit, the cache does not have
>>>> to be cleared or the old searchers closed or  new searchers opened, and
>>>> warmed (error that you are facing).
>>>>
>>>> Regards
>>>>
>>>> - Nagendra Nagarajayya
>>>> http://solr-ra.tgels.org
>>>> http://rankingalgorithm.tgels.org
>>>>
>>>>
>>>>
>>>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>>>> Hi Mark/Erick/Nagendra,
>>>>>
>>>>> I was not very confident about NRT at that point of time, when we
>>>>> started
>>>>> project almost 1 year ago, definitely i would try NRT and see the
>>>>> performance.
>>>>>
>>>>> The current requirement was working fine till we were using
>>>>> commitWithin 10
>>>>> millisecs in the XMLDocument which we were posting to SOLR.
>>>>>
>>>>> But due to which, we were getting very poor performance (almost 3 mins
>>>>> for
>>>>> 15,000 docs) per user. There are many parallel users committing to our
>>>>> SOLR.
>>>>>
>>>>> So we removed the commitWithin, and hence performance was much much
>>>>> better.
>>>>>
>>>>> But then we are getting this maxWarmingSearcher Error, because we are
>>>>> committing separately as a curl request after once entire doc is
>>>>> submitted
>>>>> for indexing.
>>>>>
>>>>> The question here is what is the difference between commitWithin and
>>>>> commit
>>>>> (apart from the fact that commit takes memory and processes and
>>>>> additional
>>>>> hardware usage)
>>>>>
>>>>> Why we want it to be visible as soon as possible, since we are
>>>>> applying
>>>>> many
>>>>> business rules on top of the results (older indexes as well as new
>>>>> one)
>>>>> and
>>>>> apply different filters.
>>>>>
>>>>> up to 5 mins is fine for us. but more than that we need to think then
>>>>> other
>>>>> optimizations.
>>>>>
>>>>> We will definitely try NRT. But please tell me other options which we
>>>>> can
>>>>> apply in order to optimize.?
>>>>>
>>>>> Thanks
>>>>> Naveen
>>>>>
>>>>>
>>>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>>>> Erickson <er...@gmail.com> wrote:
>>>>>
>>>>>> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>>>
>>>>>> Erick
>>>>>>
>>>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>>>>> wrote:
>>>>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>>>
>>>>>>>> You either have to go to near real time (NRT), which is under
>>>>>>>> development, but not committed to trunk yet
>>>>>>> NRT support is committed to trunk.
>>>>>>>
>>>>>>> - Mark Miller
>>>>>>> lucidimagination.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>
>>>
>
>
>


Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Bill Bell <bi...@gmail.com>.
I understand.

Have you looked at Mark's patch? From his performance tests, it looks
pretty good.

When would RA work better?

Bill


On 8/14/11 8:40 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
wrote:

>Bill:
>
>The technical details of the NRT implementation in Apache Solr with
>RankingAlgorithm (SOLR-RA) is available here:
>
>http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf
>
>(Some changes for Solr 3.x, but for most it is as above)
>
>Regarding support for 4.0 trunk, should happen sometime soon.
>
>Regards
>
>- Nagendra Nagarajayya
>http://solr-ra.tgels.org
>http://rankingalgorithm.tgels.org
>
>
>
>
>
>On 8/14/2011 7:11 PM, Bill Bell wrote:
>> OK,
>>
>> I'll ask the elephant in the room...
>>
>> What is the difference between the new UpdateHandler from Mark and the
>> SOLR-RA?
>>
>> The UpdateHandler works with 4.0 does SOLR-RA work with 4.0 trunk?
>>
>> Pros/Cons?
>>
>>
>> On 8/14/11 8:10 PM, "Nagendra
>>Nagarajayya"<nn...@transaxtions.com>
>> wrote:
>>
>>> Naveen:
>>>
>>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>>> document to become searchable. Any document that you add through update
>>> becomes immediately searchable, so there is no need to commit from within
>>> your update client code. Since there is no commit, the cache does not have
>>> to be cleared, the old searchers do not have to be closed, and new
>>> searchers do not have to be opened and warmed (the error that you are facing).
>>>
>>> Regards
>>>
>>> - Nagendra Nagarajayya
>>> http://solr-ra.tgels.org
>>> http://rankingalgorithm.tgels.org
>>>
>>>
>>>
>>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>>> Hi Mark/Erick/Nagendra,
>>>>
>>>> I was not very confident about NRT at that point in time, when we
>>>> started the project almost 1 year ago; I will definitely try NRT and
>>>> see the performance.
>>>>
>>>> The current requirement was working fine while we were using
>>>> commitWithin 10 millisecs in the XML document which we were posting
>>>> to SOLR.
>>>>
>>>> But because of that, we were getting very poor performance (almost 3
>>>> mins for 15,000 docs) per user. There are many parallel users
>>>> committing to our SOLR.
>>>>
>>>> So we removed the commitWithin, and hence performance was much
>>>> better.
>>>>
>>>> But then we are getting this maxWarmingSearchers error, because we
>>>> are committing separately as a curl request once the entire doc is
>>>> submitted for indexing.
>>>>
>>>> The question here is: what is the difference between commitWithin and
>>>> commit (apart from the fact that commit takes memory, processes, and
>>>> additional hardware usage)?
>>>>
>>>> Why we want it to be visible as soon as possible: we are applying
>>>> many business rules on top of the results (older indexes as well as
>>>> new ones) and applying different filters.
>>>>
>>>> Up to 5 mins is fine for us, but more than that and we need to think
>>>> about other optimizations.
>>>>
>>>> We will definitely try NRT. But please tell me other options which we
>>>> can apply in order to optimize?
>>>>
>>>> Thanks
>>>> Naveen
>>>>
>>>>
>>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>>> Erickson<er...@gmail.com>wrote:
>>>>
>>>>> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>>
>>>>> Erick
>>>>>
>>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>>>> wrote:
>>>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>>
>>>>>>> You either have to go to near real time (NRT), which is under
>>>>>>> development, but not committed to trunk yet
>>>>>> NRT support is committed to trunk.
>>>>>>
>>>>>> - Mark Miller
>>>>>> lucidimagination.com
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>
>>
>>
>



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Nagendra Nagarajayya <nn...@transaxtions.com>.
Bill:

The technical details of the NRT implementation in Apache Solr with
RankingAlgorithm (SOLR-RA) are available here:

http://solr-ra.tgels.com/papers/NRT_Solr_RankingAlgorithm.pdf

(Some changes for Solr 3.x, but for the most part it is as above)

Regarding support for the 4.0 trunk, it should happen sometime soon.

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org





On 8/14/2011 7:11 PM, Bill Bell wrote:
> OK,
>
> I'll ask the elephant in the room…
>
> What is the difference between the new UpdateHandler from Mark and the
> SOLR-RA?
>
> The UpdateHandler works with 4.0; does SOLR-RA work with 4.0 trunk?
>
> Pros/Cons?
>
>
> On 8/14/11 8:10 PM, "Nagendra Nagarajayya"<nn...@transaxtions.com>
> wrote:
>
>> Naveen:
>>
>> NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>> document to become searchable. Any document that you add through update
>> becomes immediately searchable, so there is no need to commit from within
>> your update client code. Since there is no commit, the cache does not have
>> to be cleared, the old searchers do not have to be closed, and new
>> searchers do not have to be opened and warmed (the error that you are facing).
>>
>> Regards
>>
>> - Nagendra Nagarajayya
>> http://solr-ra.tgels.org
>> http://rankingalgorithm.tgels.org
>>
>>
>>
>> On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>>> Hi Mark/Erick/Nagendra,
>>>
>>> I was not very confident about NRT at that point in time, when we
>>> started the project almost 1 year ago; I will definitely try NRT and
>>> see the performance.
>>>
>>> The current requirement was working fine while we were using
>>> commitWithin 10 millisecs in the XML document which we were posting
>>> to SOLR.
>>>
>>> But because of that, we were getting very poor performance (almost 3
>>> mins for 15,000 docs) per user. There are many parallel users
>>> committing to our SOLR.
>>>
>>> So we removed the commitWithin, and hence performance was much
>>> better.
>>>
>>> But then we are getting this maxWarmingSearchers error, because we
>>> are committing separately as a curl request once the entire doc is
>>> submitted for indexing.
>>>
>>> The question here is: what is the difference between commitWithin and
>>> commit (apart from the fact that commit takes memory, processes, and
>>> additional hardware usage)?
>>>
>>> Why we want it to be visible as soon as possible: we are applying
>>> many business rules on top of the results (older indexes as well as
>>> new ones) and applying different filters.
>>>
>>> Up to 5 mins is fine for us, but more than that and we need to think
>>> about other optimizations.
>>>
>>> We will definitely try NRT. But please tell me other options which we
>>> can apply in order to optimize?
>>>
>>> Thanks
>>> Naveen
>>>
>>>
>>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>> Erickson<er...@gmail.com>wrote:
>>>
>>>> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>>
>>>> Erick
>>>>
>>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>>> wrote:
>>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>>
>>>>>> You either have to go to near real time (NRT), which is under
>>>>>> development, but not committed to trunk yet
>>>>> NRT support is committed to trunk.
>>>>>
>>>>> - Mark Miller
>>>>> lucidimagination.com
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>
>
>


Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Bill Bell <bi...@gmail.com>.
OK,

I'll ask the elephant in the room…

What is the difference between the new UpdateHandler from Mark and the
SOLR-RA?

The UpdateHandler works with 4.0; does SOLR-RA work with 4.0 trunk?

Pros/Cons?


On 8/14/11 8:10 PM, "Nagendra Nagarajayya" <nn...@transaxtions.com>
wrote:

>Naveen:
>
>NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
>document to become searchable. Any document that you add through update
>becomes immediately searchable, so there is no need to commit from within
>your update client code. Since there is no commit, the cache does not have
>to be cleared, the old searchers do not have to be closed, and new
>searchers do not have to be opened and warmed (the error that you are facing).
>
>Regards
>
>- Nagendra Nagarajayya
>http://solr-ra.tgels.org
>http://rankingalgorithm.tgels.org
>
>
>
>On 8/14/2011 10:37 AM, Naveen Gupta wrote:
>> Hi Mark/Erick/Nagendra,
>>
>> I was not very confident about NRT at that point in time, when we
>> started the project almost 1 year ago; I will definitely try NRT and
>> see the performance.
>>
>> The current requirement was working fine while we were using
>> commitWithin 10 millisecs in the XML document which we were posting
>> to SOLR.
>>
>> But because of that, we were getting very poor performance (almost 3
>> mins for 15,000 docs) per user. There are many parallel users
>> committing to our SOLR.
>>
>> So we removed the commitWithin, and hence performance was much
>> better.
>>
>> But then we are getting this maxWarmingSearchers error, because we
>> are committing separately as a curl request once the entire doc is
>> submitted for indexing.
>>
>> The question here is: what is the difference between commitWithin and
>> commit (apart from the fact that commit takes memory, processes, and
>> additional hardware usage)?
>>
>> Why we want it to be visible as soon as possible: we are applying
>> many business rules on top of the results (older indexes as well as
>> new ones) and applying different filters.
>>
>> Up to 5 mins is fine for us, but more than that and we need to think
>> about other optimizations.
>>
>> We will definitely try NRT. But please tell me other options which we
>> can apply in order to optimize?
>>
>> Thanks
>> Naveen
>>
>>
>> On Sun, Aug 14, 2011 at 9:42 PM, Erick
>>Erickson<er...@gmail.com>wrote:
>>
>>> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>>
>>> Erick
>>>
>>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>>> wrote:
>>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>>
>>>>> You either have to go to near real time (NRT), which is under
>>>>> development, but not committed to trunk yet
>>>> NRT support is committed to trunk.
>>>>
>>>> - Mark Miller
>>>> lucidimagination.com
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>



Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Nagendra Nagarajayya <nn...@transaxtions.com>.
Naveen:

NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
document to become searchable. Any document that you add through update
becomes immediately searchable, so there is no need to commit from within
your update client code. Since there is no commit, the cache does not have
to be cleared, the old searchers do not have to be closed, and new
searchers do not have to be opened and warmed (the error that you are facing).

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org



On 8/14/2011 10:37 AM, Naveen Gupta wrote:
> Hi Mark/Erick/Nagendra,
>
> I was not very confident about NRT at that point in time, when we started
> the project almost 1 year ago; I will definitely try NRT and see the
> performance.
>
> The current requirement was working fine while we were using commitWithin
> 10 millisecs in the XML document which we were posting to SOLR.
>
> But because of that, we were getting very poor performance (almost 3 mins
> for 15,000 docs) per user. There are many parallel users committing to our
> SOLR.
>
> So we removed the commitWithin, and hence performance was much better.
>
> But then we are getting this maxWarmingSearchers error, because we are
> committing separately as a curl request once the entire doc is submitted
> for indexing.
>
> The question here is: what is the difference between commitWithin and
> commit (apart from the fact that commit takes memory, processes, and
> additional hardware usage)?
>
> Why we want it to be visible as soon as possible: we are applying many
> business rules on top of the results (older indexes as well as new ones)
> and applying different filters.
>
> Up to 5 mins is fine for us, but more than that and we need to think about
> other optimizations.
>
> We will definitely try NRT. But please tell me other options which we can
> apply in order to optimize?
>
> Thanks
> Naveen
>
>
> On Sun, Aug 14, 2011 at 9:42 PM, Erick Erickson<er...@gmail.com>wrote:
>
>> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>>
>> Erick
>>
>> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller<ma...@gmail.com>
>> wrote:
>>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>>>
>>>> You either have to go to near real time (NRT), which is under
>>>> development, but not committed to trunk yet
>>> NRT support is committed to trunk.
>>>
>>> - Mark Miller
>>> lucidimagination.com
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>


Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Naveen Gupta <nk...@gmail.com>.
Hi Mark/Erick/Nagendra,

I was not very confident about NRT at that point in time, when we started
the project almost 1 year ago; I will definitely try NRT and see the
performance.

The current requirement was working fine while we were using commitWithin
10 millisecs in the XML document which we were posting to SOLR.

But because of that, we were getting very poor performance (almost 3 mins
for 15,000 docs) per user. There are many parallel users committing to our
SOLR.

So we removed the commitWithin, and hence performance was much better.

But then we are getting this maxWarmingSearchers error, because we are
committing separately as a curl request once the entire doc is submitted
for indexing.

The question here is: what is the difference between commitWithin and
commit (apart from the fact that commit takes memory, processes, and
additional hardware usage)?

Why we want it to be visible as soon as possible: we are applying many
business rules on top of the results (older indexes as well as new ones)
and applying different filters.

Up to 5 mins is fine for us, but more than that and we need to think about
other optimizations.

We will definitely try NRT. But please tell me other options which we can
apply in order to optimize?

Thanks
Naveen
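The commitWithin-vs-explicit-commit distinction above can be sketched with curl. A minimal, hypothetical example follows: the core URL, document ids, and the 60-second window are assumptions rather than anything from this thread, and the curl calls are left commented out because they need a running Solr instance.

```shell
# Sketch only: SOLR_URL, ids, and the 60s window are placeholder assumptions.
SOLR_URL='http://localhost:8983/solr/update'

# Option 1: per-request commitWithin. Solr promises a commit within 60s
# (10 ms, as in the original setup, forces near-continuous warming).
cat > batch.xml <<'EOF'
<add commitWithin="60000">
  <doc><field name="id">user1-msg1</field></doc>
  <doc><field name="id">user1-msg2</field></doc>
</add>
EOF
# curl "$SOLR_URL" -H 'Content-Type: text/xml' --data-binary @batch.xml

# Option 2: one explicit commit, issued once after the whole batch:
# curl "$SOLR_URL" -H 'Content-Type: text/xml' --data-binary '<commit/>'

# Show the commitWithin window actually set in the payload:
grep -o 'commitWithin="[0-9]*"' batch.xml
```

Either way, each searcher-opening commit triggers warming, so the practical difference is how often commits happen, not which form triggers them.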


On Sun, Aug 14, 2011 at 9:42 PM, Erick Erickson <er...@gmail.com>wrote:

> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>
> Erick
>
> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller <ma...@gmail.com>
> wrote:
> >
> > On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
> >
> >> You either have to go to near real time (NRT), which is under
> >> development, but not committed to trunk yet
> >
> > NRT support is committed to trunk.
> >
> > - Mark Miller
> > lucidimagination.com
> >
> >
> >
> >
> >
> >
> >
> >
> >
>

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Mark Miller <ma...@gmail.com>.
It's somewhat confusing - I'll straighten it out though. I left the issue open to keep me from taking forever to doc it - hasn't helped much yet - but maybe later today...

On Aug 14, 2011, at 12:12 PM, Erick Erickson wrote:

> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
> 
> Erick
> 
> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller <ma...@gmail.com> wrote:
>> 
>> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>> 
>>> You either have to go to near real time (NRT), which is under
>>> development, but not committed to trunk yet
>> 
>> NRT support is committed to trunk.
>> 
>> - Mark Miller
>> lucidimagination.com
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 

- Mark Miller
lucidimagination.com









Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
Ah, thanks, Mark... I must have been looking at the wrong JIRAs.

Erick

On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller <ma...@gmail.com> wrote:
>
> On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:
>
>> You either have to go to near real time (NRT), which is under
>> development, but not committed to trunk yet
>
> NRT support is committed to trunk.
>
> - Mark Miller
> lucidimagination.com
>
>
>
>
>
>
>
>
>

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Mark Miller <ma...@gmail.com>.
On Aug 14, 2011, at 9:03 AM, Erick Erickson wrote:

> You either have to go to near real time (NRT), which is under
> development, but not committed to trunk yet 

NRT support is committed to trunk.

- Mark Miller
lucidimagination.com









Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Erick Erickson <er...@gmail.com>.
You either have to go to near real time (NRT), which is under
development but not committed to trunk yet, or just stop
warming up searchers and let the first user to open a searcher
pay the warmup penalty (useColdSearcher, as I remember).

Although I'd also ask whether this is a reasonable requirement,
that the messages be searchable within milliseconds. Is 1 minute
really too much time? 5 minutes? You can estimate the minimum time
you can get away with by looking at the warmup times on the admin/stats
page.
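The "stop warming up searchers" option corresponds to the <query> section of solrconfig.xml. A hedged sketch, with element names as in the stock example config (verify against your Solr version):

```xml
<!-- Sketch based on the stock example solrconfig.xml; check element
     names against your Solr version before relying on this. -->
<query>
  <!-- Let the first request use an unwarmed ("cold") searcher instead
       of blocking until warming finishes. -->
  <useColdSearcher>true</useColdSearcher>

  <!-- The default of 2 is what produces the error in this thread when
       commits outpace warming; raising it masks the problem rather
       than fixing the commit rate. -->
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
```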

Best
Erick

On Sat, Aug 13, 2011 at 9:47 PM, Naveen Gupta <nk...@gmail.com> wrote:
> Hi,
>
> Most of the settings are default.
>
> We have single node (Memory 1 GB, Index Size 4GB)
>
> We have a requirement where we are doing very fast commit. This is kind of
> real time requirement where we are polling many threads from third party and
> indexes into our system.
>
> We want these results to be available soon.
>
> We are committing for each user (a user may have 10k threads, and each
> thread may have 10 messages). So overall, documents per user will be
> around 0.1 million (100,000).
>
> Earlier we were using commitWithin as 10 milliseconds inside the document;
> that was slowing the indexing, but we were not getting any error.
>
> When we removed the commitWithin, indexing became very fast. But after
> that we started seeing the error below in the system.
>
> As I read in many forums, everybody said that this happens because of a
> very fast commit rate, but what is the solution for our problem?
>
> We are using curl to post the data and commit.
>
> Also, till now we have been using the default solrconfig.
>
> Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
> exceeded limit of maxWarmingSearchers=2, try again later.
>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
>        at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
>        at
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>        at
> org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
>        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
>        at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
>        at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>        at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>        at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>        at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>        at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>        at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>        at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>        at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>        at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>        at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
>        at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>        at
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>        at java.lang.Thread.run(Thread.java:662)
>

Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Nagendra Nagarajayya <nn...@transaxtions.com>.
Naveen:

You should try NRT with Apache Solr 3.3 and RankingAlgorithm. You can
update 10,000 documents/sec while also concurrently searching. You can
set the commit frequency to about 15 mins, or as desired. The 10,000
document update performance is with the MBArtists index on a dual-core
Linux system, so you may be able to see similar performance on your
system. You can get more details of the NRT implementation from here:

http://solr-ra.tgels.org/wiki/en/Near_Real_Time_Search_ver_3.x

You can download Apache Solr 3.3 with RankingAlgorithm from here:

http://solr-ra.tgels.org/

(There are no changes to your existing setup; everything should work as
before, except for adding the <realtime> tag to your solrconfig.xml.)

Regards

- Nagendra Nagarajayya
http://solr-ra.tgels.org
http://rankingalgorithm.tgels.org



On 8/13/2011 6:47 PM, Naveen Gupta wrote:
> Hi,
>
> Most of the settings are default.
>
> We have single node (Memory 1 GB, Index Size 4GB)
>
> We have a requirement where we are doing very fast commit. This is kind of
> real time requirement where we are polling many threads from third party and
> indexes into our system.
>
> We want these results to be available soon.
>
> We are committing for each user (a user may have 10k threads, and each
> thread may have 10 messages). So overall, documents per user will be
> around 0.1 million (100,000).
>
> Earlier we were using commitWithin as 10 milliseconds inside the document;
> that was slowing the indexing, but we were not getting any error.
>
> When we removed the commitWithin, indexing became very fast. But after
> that we started seeing the error below in the system.
>
> As I read in many forums, everybody said that this happens because of a
> very fast commit rate, but what is the solution for our problem?
>
> We are using curl to post the data and commit.
>
> Also, till now we have been using the default solrconfig.
>
> Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
> exceeded limit of maxWarmingSearchers=2, try again later.
>          at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
>          at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
>          at
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>          at
> org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
>          at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
>          at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
>          at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>          at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
>          at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>          at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>          at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>          at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>          at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>          at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>          at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>          at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>          at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>          at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>          at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
>          at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>          at
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>          at java.lang.Thread.run(Thread.java:662)
>


Re: exceeded limit of maxWarmingSearchers ERROR

Posted by Peter Sturge <pe...@gmail.com>.
It's worth noting that the fast commit rate is only an indirect part
of the issue you're seeing. As the error comes from cache warming - a
consequence of committing - it's not the fault of committing directly.
It's well worth having a close look at exactly what your caches
are doing when they are warmed, and trying as much as possible to
remove any unneeded facet/field caching etc.
The time it takes to repopulate the caches causes the error - if it's
slower than the commit rate, you'll get into the 'try again later'
spiral.

There are a number of ways to help mitigate this - NRT is certainly
the [hopefully near] future for this. Other strategies include
distributed search/cloud/ZK - splitting the index into logical
shards, so your commits and their associated caches are smaller and
more targeted. You can also use two Solr instances - one optimized for
writes/commits, one for reads (write commits are async of the 'read'
instance) - plus there are customized solutions like RankingAlgorithm,
Zoie, etc.
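A hedged sketch of what "remove any unneeded caching" can look like in solrconfig.xml, assuming the cache names from the stock example config (the sizes shown are placeholders, not recommendations):

```xml
<!-- Sketch of cache tuning for a fast-commit setup; class names follow
     the stock example solrconfig.xml, and sizes are placeholders. -->
<query>
  <!-- autowarmCount="0" means a new searcher opens without replaying
       old cache entries, so warming finishes quickly. -->
  <filterCache class="solr.FastLRUCache" size="512"
               initialSize="512" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="512"
                    initialSize="512" autowarmCount="0"/>

  <!-- Keep newSearcher warm-up queries minimal (or remove them):
       each one runs every time a commit opens a searcher. -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries"/>
  </listener>
</query>
```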


On Sun, Aug 14, 2011 at 2:47 AM, Naveen Gupta <nk...@gmail.com> wrote:
> Hi,
>
> Most of the settings are default.
>
> We have single node (Memory 1 GB, Index Size 4GB)
>
> We have a requirement where we are doing very fast commit. This is kind of
> real time requirement where we are polling many threads from third party and
> indexes into our system.
>
> We want these results to be available soon.
>
> We are committing for each user (a user may have 10k threads, and each
> thread may have 10 messages). So overall, documents per user will be
> around 0.1 million (100,000).
>
> Earlier we were using commitWithin as 10 milliseconds inside the document;
> that was slowing the indexing, but we were not getting any error.
>
> When we removed the commitWithin, indexing became very fast. But after
> that we started seeing the error below in the system.
>
> As I read in many forums, everybody said that this happens because of a
> very fast commit rate, but what is the solution for our problem?
>
> We are using curl to post the data and commit.
>
> Also, till now we have been using the default solrconfig.
>
> Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
> exceeded limit of maxWarmingSearchers=2, try again later.
>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
>        at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
>        at
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>        at
> org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
>        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
>        at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
>        at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>        at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>        at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>        at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>        at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>        at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>        at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>        at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>        at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>        at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
>        at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>        at
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>        at java.lang.Thread.run(Thread.java:662)
>