Posted to java-user@lucene.apache.org by Paul Smith <ps...@aconex.com> on 2004/07/01 00:31:38 UTC

RE: Running OutOfMemory while optimizing and searching

Wow, I have to say that those sorts of numbers are concerning to me... Now I
know 3.5 million documents is a lot, but still... What would cause a query to
require and hold that much memory?  I can understand that it would be doing a
lot of memory work, but why would it need to grab and hold onto that much
memory for the length of the query?  (This is getting into the internals a
bit, but it's always good to know what's going on under the hood from a
design point of view, and how I would have to structure an app to handle this
sort of load.)
Cheers,
Paul Smith

> 1 Byte * Number of fields in your query * Number of
> docs in your index
> 
> So if your query searches on all 50 fields of your 3.5
> Million document index then each search would take
> about 175MB.  If your 3-4 searches run concurrently
> then that's about 525MB to 700MB chewed up at once.
> 
> Also, if your queries use wildcards, the memory
> requirements could be much greater.
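
For the record, here's a quick back-of-the-envelope sketch of that arithmetic
in Java. The document count, field count, and concurrency are just the example
figures from this thread, and the one-byte-per-field-per-document cost is the
estimate quoted above, so treat it as illustrative rather than a measurement:

public class NormsMemoryEstimate {
    public static void main(String[] args) {
        // Example figures from this thread -- not values read from a real index.
        long numDocs = 3_500_000L;   // documents in the index
        int fieldsSearched = 50;     // fields touched by the query
        int concurrentSearches = 4;  // searches running at once

        // One byte per field per document, as per the estimate quoted above.
        long bytesPerSearch = numDocs * fieldsSearched;
        long peakBytes = bytesPerSearch * concurrentSearches;

        // Decimal megabytes, so the output lines up with the 175MB / 700MB
        // figures quoted in the previous message.
        System.out.println("Per search:  ~" + (bytesPerSearch / 1_000_000) + " MB");
        System.out.println("Peak (" + concurrentSearches + " concurrent): ~"
                + (peakBytes / 1_000_000) + " MB");
    }
}

Running that prints roughly 175 MB per search and 700 MB at peak, which is
where the numbers above come from.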

