Posted to solr-user@lucene.apache.org by Graham Stead <gs...@ieee.org> on 2007/02/06 20:39:10 UTC

Debugging Solr memory usage/heap problems

Hi everyone,
 
My Solr JVM runs out of heap space quite frequently. I'm trying to
understand Solr/Lucene's memory usage so I can address the problem
correctly. Otherwise, I feel I'm taking random shots in the dark.
 
I've tried previous troubleshooting suggestions. Here's what I've done:
 
1) Increased Tomcat's JVM heap space, e.g.:
    JAVA_OPTS='-Xmx1244m -Xms1244m -server'; # frequent heap space problems
    JAVA_OPTS='-XX:+AggressiveHeap -server'; # runs out of heap space at 2.0g
    JAVA_OPTS='-Xmx3072m -Xms3072m -server'; # jvm quickly hits 2.9g on 'top'
 
Solr is the only webapp deployed on this Tomcat instance.
 
2) I use Solr collection/distribution to separate indexing and searching.
The indexer is stable now and memory problems only occur when searching on
the Solr slave.
 
3) In solrconfig.xml, I reduced mergeFactor and maxBufferedDocs by 50%:
    <mergeFactor>5</mergeFactor>
    <maxBufferedDocs>500</maxBufferedDocs>
 
This helped the indexing server but not the Solr slave.
 
4) In solrconfig.xml, I set filterCache, queryResultCache, and documentCache
to 0.
 
Now for my index details: 
- To facilitate highlighting, I currently store doc contents in the index,
so the index consumes 24GB on disk.
- numDocs : 4,953,736 
  maxDoc : 4,953,736 (just optimized)
- Term files:
   logs # du -ksh ../solr/data/index/*.t??
   5.9M    ../solr/data/index/_1kjb.tii
   429M    ../solr/data/index/_1kjb.tis
- I have 22 fields and yes, they currently have norms.

Other info that may be helpful:
- My Solr is from 2006-11-15. We have a few mods, including one extra
fieldCache that stores ~40 bytes/doc.
- Thread counts from solr/admin/threaddump.jsp:
  Java HotSpot(TM) 64-Bit Server VM 1.5.0_08-b03
  Thread Count: current=37 deamon=34 peak=37
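A quick back-of-the-envelope check on that extra fieldCache (my arithmetic, using the maxDoc reported above):

```python
# Rough footprint of the custom fieldCache: ~40 bytes per document,
# one entry per doc across the whole index (as described above).
MAX_DOC = 4_953_736    # maxDoc from the index stats
BYTES_PER_DOC = 40     # size of the custom fieldCache entry

total_bytes = MAX_DOC * BYTES_PER_DOC
print(f"extra fieldCache ~= {total_bytes / 2**20:.0f} MB")  # ~189 MB
```

So that one modification alone accounts for nearly 200MB per open searcher.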
 
My machine has Gentoo Linux and 4gb RAM. 'top' indicates the JVM reaches
2.9g RAM (3472m virtual memory) after 10-20 searches and ~20 mins of use. It
seems just a matter of time before more searches or a snapinstaller 'commit'
will make it run out of heap space again.
 
I have flexibility in the changes we can make. E.g., I can omit norms for
most fields, or I can stop storing the doc contents in the index. But before
embarking on a new strategy, I need some assurance that the strategy will
work (crazy, I know). For example, it doesn't seem that removing norms would
save a great deal (I calculate saving 1 byte per norm per field on 21 fields
is ~99MB).
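The ~99MB figure checks out: norms cost one byte per document per field, so for 21 fields over this index the savings come to:

```python
# Norms are stored and loaded as 1 byte per document per field.
MAX_DOC = 4_953_736   # maxDoc from the index stats above
FIELDS = 21           # fields that would have norms removed

savings = MAX_DOC * FIELDS  # bytes saved in the loaded norms
print(f"norms savings ~= {savings / 2**20:.0f} MB")  # ~99 MB
```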
 
So...how do I deduce what's taking up so much memory? Any suggestions would
be very helpful to me (and hopefully to others, too).
 
many thanks,
-Graham

RE: Debugging Solr memory usage/heap problems

Posted by Graham Stead <gs...@ieee.org>.
Thanks, Chris. I will test with vanilla Solr to clarify whether the problem
occurs with it, or only in the version where we have made changes.

-Graham

> : To tweak our scoring, a custom hit collector in 
> SolrIndexSearcher creates 1
> : fieldCache and 3 ValueSources from 3 fields:
> : - an integer field with many unique values (order 10^4)
> : - another integer field with many unique values (order 10^4)
> : - an integer field with hundreds of unique values
> 
> so you customized SolrIndexSearcher? ... is it possible you 
> have a memory leak in that code?
> 
> If you have all of your cache sizes set to zero, you should 
> be able to start up the server, hit it with a bunch of 
> queries, then trigger a commit and see your heap usage drop 
> significantly. ... if you do that over and over again and see 
> the heap usage grow and grow, there may be something else 
> going on in those changes of yours.



RE: Debugging Solr memory usage/heap problems

Posted by Chris Hostetter <ho...@fucit.org>.
: To tweak our scoring, a custom hit collector in SolrIndexSearcher creates 1
: fieldCache and 3 ValueSources from 3 fields:
: - an integer field with many unique values (order 10^4)
: - another integer field with many unique values (order 10^4)
: - an integer field with hundreds of unique values

so you customized SolrIndexSearcher? ... is it possible you have a memory
leak in that code?

If you have all of your cache sizes set to zero, you should be able to
start up the server, hit it with a bunch of queries, then trigger a commit
and see your heap usage drop significantly. ... if you do that over and
over again and see the heap usage grow and grow, there may be something
else going on in those changes of yours.
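That test can be scripted; here is a sketch, assuming a stock Solr instance on localhost:8983 (the /select and /update endpoints are standard Solr URLs, but the host, port, and queries are placeholders):

```python
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr"   # assumption: local test instance

def query_url(q):
    # Standard Solr select handler; the query string is URL-encoded.
    return SOLR + "/select?q=" + urllib.parse.quote(q)

def hammer_then_commit(queries):
    # Hit the searcher with a batch of queries, then trigger a commit,
    # which opens a new searcher and should release the old one's caches.
    for q in queries:
        urllib.request.urlopen(query_url(q)).read()
    req = urllib.request.Request(SOLR + "/update", data=b"<commit/>",
                                 headers={"Content-Type": "text/xml"})
    urllib.request.urlopen(req).read()

# Run e.g. hammer_then_commit(["the OR at", "george OR bush"]) in a loop
# while watching the heap with jstat -gc <pid>; if usage ratchets up
# across commit cycles, something is holding references.
```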




-Hoss


RE: Debugging Solr memory usage/heap problems

Posted by Graham Stead <gs...@ieee.org>.
> > Our queries do not sort by any field. However, we do make use of 
> > FunctionQueries and a typical query is something like:
> >
> >         users_query AND (+linear_function_query +recip_function_query
> >         +language:english^0 -flags:spam^0)
> 
> Function queries often build fieldCaches--on how many fields 
> do you use function queries, and how big is the set of unique 
> values for those fields?

2 fields:
- date string with hundreds of unique values
- an integer field with < 250 unique values

To tweak our scoring, a custom hit collector in SolrIndexSearcher creates 1
fieldCache and 3 ValueSources from 3 fields:
- an integer field with many unique values (order 10^4)
- another integer field with many unique values (order 10^4)
- an integer field with hundreds of unique values

I thought a function query used ValueSource, so perhaps usage is similar in
both cases. Would a ValueSource load all values into memory, or just unique
ones?
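For what it's worth, Lucene's FieldCache for a numeric field holds one entry per document regardless of how many values are unique; only string fields pay extra for the unique terms. If the custom collector's caches are FieldCache-backed int arrays, their footprint can be estimated (my arithmetic, using the maxDoc from earlier in the thread):

```python
# Assumption: each integer ValueSource is backed by a FieldCache
# int[] with one 4-byte entry per document; the number of unique
# values does not change the array size.
MAX_DOC = 4_953_736
INT_FIELDS = 4  # 1 fieldCache + 3 ValueSources in the custom collector

per_field = MAX_DOC * 4        # bytes for one int[maxDoc]
total = per_field * INT_FIELDS
print(f"per field ~= {per_field / 2**20:.0f} MB, "
      f"all four ~= {total / 2**20:.0f} MB")  # ~19 MB each, ~76 MB total
```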

> Is user_query a string of keywords, or is it an arbitrary 
> query in lucene syntax?

It's whatever the user types into a search box (supports arbitrary lucene).
Some queries are intentionally harsh, like 'george OR bush' or 'the OR at'.
The latter matches virtually every document in the index.

Thanks again,
-Graham



Re: Debugging Solr memory usage/heap problems

Posted by Mike Klaas <mi...@gmail.com>.
On 2/6/07, Graham Stead <gs...@ieee.org> wrote:

> Our queries do not sort by any field. However, we do make use of
> FunctionQueries and a typical query is something like:
>
>         users_query AND (+linear_function_query +recip_function_query
> +language:english^0 -flags:spam^0)

Function queries often build fieldCaches--on how many fields do you
use function queries, and how big is the set of unique values for
those fields?

Is user_query a string of keywords, or is it an arbitrary query in
lucene syntax?

-Mike

RE: Debugging Solr memory usage/heap problems

Posted by Graham Stead <gs...@ieee.org>.
Mike, Yonik, thanks for the quick reply. 
 
> I think it is in your queries.  Are you sorting on many 
> fields?  What is a typical query?  I'm not a lucene expert, 
> but there are lucene experts on this list.

Our queries do not sort by any field. However, we do make use of
FunctionQueries and a typical query is something like:

	users_query AND (+linear_function_query +recip_function_query
	+language:english^0 -flags:spam^0)

> 2) If your stored fields are very large, try reducing the 
> size of the doc cache.

Is this what you mean? I'm testing with:
    <documentCache
      class="solr.LRUCache"
      size="0"
      initialSize="0"
      autowarmCount="0"/>

> During warming, there are *two* searchers open, so double the 
> number for things like the FieldCache.  If you can accept 
> slow first queries (like maybe in an offline query system) 
> then you can turn off all warming.

Good point. I already tried to eliminate warming problems like this:
    <filterCache
      class="solr.LRUCache"
      size="0"
      initialSize="0"
      autowarmCount="0"/>

    <queryResultCache
      class="solr.LRUCache"
      size="0"
      initialSize="0"
      autowarmCount="0"/>

I know these changes make things slow, but I'm trying to eliminate as many
variables as possible.

I agree with Mike that the problem must be searches -- after all, the Solr
master works fine and it doesn't host searches. Is there a rule of thumb to
guesstimate the SolrIndexSearcher memory requirements?

Thanks again,
-Graham



Re: Debugging Solr memory usage/heap problems

Posted by Mike Klaas <mi...@gmail.com>.
On 2/6/07, Graham Stead <gs...@ieee.org> wrote:
> Hi everyone,
>
> My Solr JVM runs out of heap space quite frequently. I'm trying to
> understand Solr/Lucene's memory usage so I can address the problem
> correctly. Otherwise, I feel I'm taking random shots in the dark.

<>

> 4) In solrconfig.xml, I set filterCache, queryResultCache, and documentCache
> to 0.

With this change, your memory consumption should be almost entirely on
the Lucene end of things: the types of queries, the
nature/distribution of your fields, etc.  I'd recommend not lowering
the size of the documentCache below the size required to collect docs
for a single query, for performance reasons (especially since you are
highlighting).
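For example, if queries return at most a few dozen docs and only a handful run concurrently, a small-but-nonzero setting along these lines (the sizes are illustrative, not a recommendation from this thread) keeps one query's docs cached without much overhead:

```xml
<documentCache
  class="solr.LRUCache"
  size="64"
  initialSize="64"
  autowarmCount="0"/>
```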

> Now for my index details:
> - To facilitate highlighting, I currently store doc contents in the index,
> so the index consumes 24GB on disk.
> - numDocs : 4,953,736
>   maxDoc : 4,953,736 (just optimized)
> - Term files:
>    logs # du -ksh ../solr/data/index/*.t??
>    5.9M    ../solr/data/index/_1kjb.tii
>    429M    ../solr/data/index/_1kjb.tis
> - I have 22 fields and yes, they currently have norms.

FWIW, I have several indices of approximately that size, also with the
contents stored for highlighting (using compressThreshold=200).  I
have 50-odd fields with norms.  Memory consumption is rather small
(~1G, though the heap size is larger).

> My machine has Gentoo Linux and 4gb RAM. 'top' indicates the JVM reaches
> 2.9g RAM (3472m virtual memory) after 10-20 searches and ~20 mins of use. It
> seems just a matter of time before more searches or a snapinstaller 'commit'
> will make it run out of heap space again.
>
> I have flexibility in the changes we can make. I.e., I can omit norms for
> most fields, or I can stop storing the doc contents in the index. But before
> embarking on a new strategy, I need some assurance that the strategy will
> work (crazy, I know). For example, it doesn't seem that removing norms would
> save a great deal (I calculate saving 1 byte per norm per field on 21 fields
> is ~99MB).
>
> So...how do I deduce what's taking up so much memory? Any suggestions would
> be very helpful to me (and hopefully to others, too).

I think it is in your queries.  Are you sorting on many fields?  What
is a typical query?  I'm not a lucene expert, but there are lucene
experts on this list.

-Mike

Re: Debugging Solr memory usage/heap problems

Posted by Yonik Seeley <yo...@apache.org>.
On 2/6/07, Graham Stead <gs...@ieee.org> wrote:
> Hi everyone,
>
> My Solr JVM runs out of heap space quite frequently. I'm trying to
> understand Solr/Lucene's memory usage so I can address the problem
> correctly. Otherwise, I feel I'm taking random shots in the dark.

<>

> So...how do I deduce what's taking up so much memory? Any suggestions would
> be very helpful to me (and hopefully to others, too).

1) Sorting on fields currently takes up a lot of memory... lucene
FieldCache info can be large (4 bytes per doc per field sorted on,
plus the unique strings).
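At this index size that rule of thumb works out to roughly the following (a sketch of the arithmetic; the unique-value count and average string length are assumptions, not numbers from this thread):

```python
# Rule of thumb above: a sort FieldCache costs ~4 bytes per doc per
# field, plus the unique string values for string fields.
MAX_DOC = 4_953_736
ords = MAX_DOC * 4                 # ord array: 4 bytes per doc
unique_strings = 100_000 * 20      # assumption: 100k uniques * ~20 bytes
total = ords + unique_strings
print(f"one sorted string field ~= {total / 2**20:.1f} MB per searcher")
# and that is held per open searcher, so it doubles while a warming
# searcher overlaps the old one
```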

2) If your stored fields are very large, try reducing the size of the doc cache.

During warming, there are *two* searchers open, so double the number
for things like the FieldCache.  If you can accept slow first queries
(like maybe in an offline query system) then you can turn off all
warming.

-Yonik