Posted to solr-user@lucene.apache.org by Erik Hatcher <er...@gmail.com> on 2014/07/16 23:24:12 UTC
Re: Memory leak for debugQuery?
Tom -
You could maybe isolate it a little further by using the "debug" parameter with values of timing|query|results
Erik
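A minimal sketch of Erik's suggestion, issuing the same query once per narrowed debug mode. The host, port, and select path are assumptions; adjust them for your install:

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; change host, port, and core path as needed.
SOLR_SELECT = "http://localhost:8983/solr/select"

def debug_url(mode):
    """Build Tom's query with one narrowed debug mode instead of debugQuery=on."""
    params = {
        "q": "Abraham Lincoln",
        "fl": "id,score",
        "wt": "json",
        "start": 0,
        "rows": 1000,
        "debug": mode,  # timing | query | results
    }
    return SOLR_SELECT + "?" + urlencode(params)

# Run the three variants separately to see which one drives the memory growth.
for mode in ("timing", "query", "results"):
    print(debug_url(mode))
```

If only the `debug=results` variant blows up, the explain/scoring output is the culprit rather than timing or query parsing.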
On May 15, 2014, at 5:50 PM, Tom Burton-West <tb...@umich.edu> wrote:
> Hello all,
>
> I'm trying to get relevance scoring information for each of 1,000 docs returned for each of 250 queries. If I run the query (appended below) without debugQuery=on, I have no problem with getting all the results with under 4GB of memory use. If I add the parameter &debugQuery=on, memory use goes up continuously and after about 20 queries (with 1,000 results each), memory use reaches about 29.1 GB and the garbage collector gives up:
>
> " org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded"
>
> I've attached a jmap -histo; excerpt below.
>
> Is this a known issue with debugQuery?
>
> Tom
> ----
> query:
>
> q=Abraham+Lincoln&fl=id,score&indent=on&wt=json&start=0&rows=1000&version=2.2&debugQuery=on
>
> without debugQuery=on:
>
> q=Abraham+Lincoln&fl=id,score&indent=on&wt=json&start=0&rows=1000&version=2.2
>
> num     #instances          #bytes  Class description
> --------------------------------------------------------------------------
>   1:       585,559  10,292,067,456  byte[]
>   2:       743,639  18,874,349,592  char[]
>   3:        53,821      91,936,328  long[]
>   4:        70,430      69,234,400  int[]
>   5:        51,348      27,111,744  org.apache.lucene.util.fst.FST$Arc[]
>   6:       286,357      20,617,704  org.apache.lucene.util.fst.FST$Arc
>   7:       715,364      17,168,736  java.lang.String
>   8:        79,561      12,547,792  * ConstMethodKlass
>   9:        18,909      11,404,696  short[]
>  10:       345,854      11,067,328  java.util.HashMap$Entry
>  11:         8,823      10,351,024  * ConstantPoolKlass
>  12:        79,561      10,193,328  * MethodKlass
>  13:       228,587       9,143,480  org.apache.lucene.document.FieldType
>  14:       228,584       9,143,360  org.apache.lucene.document.Field
>  15:       368,423       8,842,152  org.apache.lucene.util.BytesRef
>  16:       210,342       8,413,680  java.util.TreeMap$Entry
>  17:        81,576       8,204,648  java.util.HashMap$Entry[]
>  18:       107,921       7,770,312  org.apache.lucene.util.fst.FST$Arc
>  19:        13,020       6,874,560  org.apache.lucene.util.fst.FST$Arc[]
>
> <debugQuery_jmap.txt>
Re: Memory leak for debugQuery?
Posted by Umesh Prasad <um...@gmail.com>.
A histogram by itself isn't sufficient to root-cause a JVM heap issue.
We have found JVM heap memory issues multiple times in our system, and each
time it was due to a different reason. I would recommend taking heap
dumps at regular intervals (using jmap/VisualVM) and analyzing those heap
dumps. That will give a definite answer to the memory issue.
I have regularly analyzed heap dumps of size > 32 GB with Eclipse Memory
Analyzer. The Linux version comes with a command-line script,
ParseHeapDump.sh, inside the mat directory.
# Usage: ParseHeapDump.sh <path/to/dump.hprof> [report]*
#
# The leak report has the id org.eclipse.mat.api:suspects
# The top component report has the id org.eclipse.mat.api:top_components
Increase the memory available to the analyzer by setting the Xmx and Xms
params in MemoryAnalyzer.ini (in the same directory).
The leak suspects report is quite good. For checking detailed allocation
patterns etc., you can copy the index files generated from parsing and open
them in the GUI.
On 17 July 2014 05:36, Tomás Fernández Löbbe <to...@gmail.com> wrote:
> Also, is this trunk? Solr 4.x? Single shard, right?
>
>
> On Wed, Jul 16, 2014 at 2:24 PM, Erik Hatcher <er...@gmail.com>
> wrote:
>
> > Tom -
> >
> > You could maybe isolate it a little further by seeing using the “debug"
> > parameter with values of timing|query|results
> >
> > Erik
> >
> > On May 15, 2014, at 5:50 PM, Tom Burton-West <tb...@umich.edu> wrote:
> >
> > > Hello all,
> > >
> > > I'm trying to get relevance scoring information for each of 1,000 docs
> > returned for each of 250 queries. If I run the query (appended below)
> > without debugQuery=on, I have no problem with getting all the results
> with
> > under 4GB of memory use. If I add the parameter &debugQuery=on, memory
> use
> > goes up continuously and after about 20 queries (with 1,000 results
> each),
> > memory use reaches about 29.1 GB and the garbage collector gives up:
> > >
> > > " org.apache.solr.common.SolrException;
> null:java.lang.RuntimeException:
> > java.lang.OutOfMemoryError: GC overhead limit exceeded"
> > >
> > > I've attached a jmap -histo, exgerpt below.
> > >
> > > Is this a known issue with debugQuery?
> > >
> > > Tom
> > > ----
> > > query:
> > >
> > >
> >
> q=Abraham+Lincoln&fl=id,score&indent=on&wt=json&start=0&rows=1000&version=2.2&debugQuery=on
> > >
> > > without debugQuery=on:
> > >
> > >
> >
> q=Abraham+Lincoln&fl=id,score&indent=on&wt=json&start=0&rows=1000&version=2.2
> > >
> > > num #instances #bytes Class description
> > >
> >
> --------------------------------------------------------------------------
> > > 1: 585,559 10,292,067,456 byte[]
> > > 2: 743,639 18,874,349,592 char[]
> > > 3: 53,821 91,936,328 long[]
> > > 4: 70,430 69,234,400 int[]
> > > 5: 51,348 27,111,744
> > org.apache.lucene.util.fst.FST$Arc[]
> > > 6: 286,357 20,617,704
> > org.apache.lucene.util.fst.FST$Arc
> > > 7: 715,364 17,168,736 java.lang.String
> > > 8: 79,561 12,547,792 * ConstMethodKlass
> > > 9: 18,909 11,404,696 short[]
> > > 10: 345,854 11,067,328 java.util.HashMap$Entry
> > > 11: 8,823 10,351,024 * ConstantPoolKlass
> > > 12: 79,561 10,193,328 * MethodKlass
> > > 13: 228,587 9,143,480
> > org.apache.lucene.document.FieldType
> > > 14: 228,584 9,143,360
> org.apache.lucene.document.Field
> > > 15: 368,423 8,842,152 org.apache.lucene.util.BytesRef
> > > 16: 210,342 8,413,680 java.util.TreeMap$Entry
> > > 17: 81,576 8,204,648 java.util.HashMap$Entry[]
> > > 18: 107,921 7,770,312
> > org.apache.lucene.util.fst.FST$Arc
> > > 19: 13,020 6,874,560
> > org.apache.lucene.util.fst.FST$Arc[]
> > >
> > > <debugQuery_jmap.txt>
> >
> >
>
--
---
Thanks & Regards
Umesh Prasad
Re: Memory leak for debugQuery?
Posted by Tomás Fernández Löbbe <to...@gmail.com>.
Also, is this trunk? Solr 4.x? Single shard, right?
On Wed, Jul 16, 2014 at 2:24 PM, Erik Hatcher <er...@gmail.com>
wrote:
> Tom -
>
> You could maybe isolate it a little further by using the "debug"
> parameter with values of timing|query|results
>
> Erik
>