Posted to solr-user@lucene.apache.org by sundar shankar <su...@hotmail.com> on 2008/10/10 20:10:26 UTC

Best way to prevent max warmers error

Hi,
     We have an application with more than 2.5 million docs currently. It is hosted on a single box with 8 GB of memory. The number of warmers configured is 4, and a cold searcher is allowed too. The application is data-entry driven, and a commit happens as often as data is entered. We optimize every night. When lots of users access the application in parallel, we see it running out of warmers and throwing an exception.
Is there any suggestion on how we can handle this?

The number of concurrent users is currently about 8 and will grow to 40 soon, and more a little later than that. I remember a discussion where people advised against using more warmers and said 2-4 should be more than enough for applications of my size. I am not sure what has to be done. Please advise.
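
For reference, here is roughly how those two knobs look in our solrconfig.xml (a trimmed sketch of just the <query> section, with the values described above):

  <query>
    <!-- serve early requests from an unwarmed searcher instead of blocking
         while the first searcher warms up -->
    <useColdSearcher>true</useColdSearcher>

    <!-- how many searchers may be warming in the background at once; a commit
         that would open a searcher beyond this limit fails with the max warmers error -->
    <maxWarmingSearchers>4</maxWarmingSearchers>
  </query>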

Regards
Sundar


RE: Best way to prevent max warmers error

Posted by Chris Hostetter <ho...@fucit.org>.
: As far as our application goes, Commits and reads are done to the index 
: during the normal business hours. However, we observed the max warmers 
: error happening during a nightly job when the only operation is 4 
: parallel threads commits data to index and Optimizes it finally. We 
: increased the number of warmers from 2 to 4 after seeing the error 
: getting solved in QA.

i suspect that there is no real problem, just threads colliding with 
each other trying to do commits at the same time.  if you have 4 threads 
doing updates, and each thread does a commit when it is done (or when 
it is done with a "batch"), there's a lot of potential for collisions.

waiting for all of the threads to finish and then doing a single commit, 
or executing commits from a fifth thread at a regular interval (or just 
using the autocommit settings in solrconfig.xml), should make the data 
visible just as fast, but without the contention of multiple clients 
trying to force commits at the same time.
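
for example, the autocommit route is just a matter of enabling the autoCommit 
block in solrconfig.xml.  a sketch (the thresholds here are made up, tune them 
for your data and hardware):

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <!-- commit automatically after this many added docs ... -->
      <maxDocs>10000</maxDocs>
      <!-- ... or after this many milliseconds, whichever comes first -->
      <maxTime>60000</maxTime>
    </autoCommit>
  </updateHandler>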



-Hoss


RE: Best way to prevent max warmers error

Posted by sundar shankar <su...@hotmail.com>.
Thanks for the reply, Hoss.
As far as our application goes, commits and reads are done to the index during normal business hours. However, we observed the max warmers error happening during a nightly job whose only operation is 4 parallel threads committing data to the index and finally optimizing it. We increased the number of warmers from 2 to 4 after seeing that fix the error in QA.

I am not sure what could be wrong here. Like I said, we enabled the cold searcher option too, and I still seem to be getting this error.

To answer your questions:
1. We have about 30 searchers at the most and about 6 on average.
2. The data needs to be visible to searchers as soon as possible, which we ensure by committing to the index as soon as data is added. Deletes and external updates to the data (from other applications) are handled by the nightly cron.
3. The Solr server has 8 GB of RAM, with 4 GB for the JVM.

My question to you is:

When you said opening a new searcher and limiting the searchers, I am not sure I understand how exactly that can be done via solrj. I would appreciate it if you could help me out with that.

Regards
Sundar

P.S.: If my understanding of warmers is right, they are the threads that load up the index in memory, which is basically what the searcher accesses, isn't that right?

> Date: Tue, 21 Oct 2008 16:09:11 -0700
> From: hossman_lucene@fucit.org
> To: solr-user@lucene.apache.org
> Subject: Re: Best way to prevent max warmers error
> 
> : Subject: Best way to prevent max warmers error
> 
> Slightly old thread, but i haven't seen any replies...
> 
> :      We have an application with more than 2.5 million docs currently. It is 
> : hosted on a single box with 8 GB of memory. The number of warmers 
> : configured is 4, and a cold searcher is allowed too. The application is 
> 
> the current example configs suggest 2 ... i can't honestly think of any 
> good reason to have more than that.
> 
> There is a fairly fundamental trade-off question here. the knobs
> you can adjust are:
>   a) how often new searchers are opened
>   b) how much warming you want to do
>   c) how powerful your hardware is
> 
> if you are getting overlapping searchers, you can either do less warming (and make 
> the first users of those searchers pay an extra cost because of the empty 
> caches), or you can open new searchers less frequently so that the warm 
> time doesn't exceed the frequency, or you can throw hardware at the 
> problem and try to speed things up that way.
> 
> deciding between a, b, and c isn't really a technical question so much as a 
> business question.
> 
> FWIW: There may be a "d) make Solr more efficient" option, and by all 
> means if you find that knob, please let the rest of us know where it is so 
> we can all turn it :)
> 
> 
> -Hoss
> 


Re: Best way to prevent max warmers error

Posted by Chris Hostetter <ho...@fucit.org>.
: Subject: Best way to prevent max warmers error

Slightly old thread, but i haven't seen any replies...

:      We have an application with more than 2.5 million docs currently. It is 
: hosted on a single box with 8 GB of memory. The number of warmers 
: configured is 4, and a cold searcher is allowed too. The application is 

the current example configs suggest 2 ... i can't honestly think of any 
good reason to have more than that.

There is a fairly fundamental trade-off question here. the knobs
you can adjust are:
  a) how often new searchers are opened
  b) how much warming you want to do
  c) how powerful your hardware is

if you are getting overlapping searchers, you can either do less warming (and make 
the first users of those searchers pay an extra cost because of the empty 
caches), or you can open new searchers less frequently so that the warm 
time doesn't exceed the frequency, or you can throw hardware at the 
problem and try to speed things up that way.
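
as a rough sketch of where the "how much warming" knob lives: it is mostly the 
autowarmCount on the caches, plus any newSearcher listener queries, in 
solrconfig.xml (the numbers below are made up, just to show the shape):

  <query>
    <!-- smaller autowarmCount means less work per new searcher -->
    <filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>

    <!-- each query listed here runs against every new searcher before it starts 
         serving traffic; fewer or cheaper queries here means faster warming -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">*:*</str><str name="rows">10</str></lst>
      </arr>
    </listener>
  </query>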

deciding between a, b, and c isn't really a technical question so much as a 
business question.

FWIW: There may be a "d) make Solr more efficient" option, and by all 
means if you find that knob, please let the rest of us know where it is so 
we can all turn it :)


-Hoss