Posted to solr-user@lucene.apache.org by Pranav Prakash <pr...@gmail.com> on 2011/08/16 09:34:08 UTC

OOM due to JRE Issue (LUCENE-1566)

Hi,

This might probably have been discussed long time back, but I got this error
recently in one of my production slaves.

SEVERE: java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the
Sun VM Bug described in https://issues.apache.org/jira/browse/LUCENE-1566;
try calling FSDirectory.setReadChunkSize with a value smaller than the
current chunk size (2147483647)

I am currently using Solr 1.4. Going through the JIRA issue comments, I found
that this patch applies to Lucene 2.9 or above. We are also planning an upgrade
to Solr 3.3. Is this patch included in 3.3, so that I don't have to apply the
patch manually?

What are the other workarounds for this problem?
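For anyone hitting the same error, here is a minimal sketch of what the message
itself suggests, at the Lucene level (the index path is a placeholder; Solr 1.4
has no configuration hook for this, so it would have to be wired in through a
custom DirectoryFactory rather than called directly like this):

import java.io.File;
import org.apache.lucene.store.FSDirectory;

public class ReadChunkSizeExample {
    public static void main(String[] args) throws Exception {
        // "/path/to/index" is a placeholder for the actual index directory.
        FSDirectory dir = FSDirectory.open(new File("/path/to/index"));

        // On 64-bit JVMs the default chunk size is Integer.MAX_VALUE
        // (2147483647), which is the value reported in the error above.
        // A smaller value makes Lucene issue several smaller reads instead
        // of one huge read, sidestepping the Sun VM bug from LUCENE-1566.
        dir.setReadChunkSize(100 * 1024 * 1024); // e.g. 100 MB

        System.out.println("read chunk size: " + dir.getReadChunkSize());
        dir.close();
    }
}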

Thanks in adv.

*Pranav Prakash*

"temet nosce"

Twitter <http://twitter.com/pranavprakash> | Blog <http://blog.myblive.com> |
Google <http://www.google.com/profiles/pranny>

Re: OOM due to JRE Issue (LUCENE-1566)

Posted by Bill Bell <bi...@gmail.com>.
Send the GC log, and force a heap dump if you can when it happens.
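For example (the log and dump paths below are placeholders, not from the
original setup), starting the JVM with flags along these lines would write a GC
log and dump the heap automatically when the OOM occurs:

/usr/bin/java -d64 -Xms5000M -Xmx5000M -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -Xloggc:/var/log/solr/gc.log
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/solr/heap.hprof
-Dsolr.solr.home=multicore -Denable.slave=true -jar start.jar

A dump can also be forced on a running process with jmap, e.g.
jmap -dump:format=b,file=/tmp/solr-heap.hprof <pid>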

Bill Bell
Sent from mobile


On Aug 16, 2011, at 5:27 AM, Pranav Prakash <pr...@gmail.com> wrote:

>> 
>> 
>> AFAIK, Solr 1.4 is on Lucene 2.9.1, so this patch is already applied to
>> the version you are using.
>> Maybe you can provide the stack trace and more details about your
>> problem and report back?
>> 
> 
> Unfortunately, I have only this much information with me. However, the
> following are my specifications, in case they are helpful:
> 
> /usr/bin/java -d64 -Xms5000M -Xmx5000M -XX:+UseParallelGC -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$GC_LOGFILE
> -XX:+CMSPermGenSweepingEnabled -Dsolr.solr.home=multicore
> -Denable.slave=true -jar start.jar
> 
> 32GiB RAM
> 
> 
> Any thoughts? Will a switch to the concurrent GC help in any way?

Re: OOM due to JRE Issue (LUCENE-1566)

Posted by Pranav Prakash <pr...@gmail.com>.
>
>
> AFAIK, Solr 1.4 is on Lucene 2.9.1, so this patch is already applied to
> the version you are using.
> Maybe you can provide the stack trace and more details about your
> problem and report back?
>

Unfortunately, I have only this much information with me. However, the
following are my specifications, in case they are helpful:

/usr/bin/java -d64 -Xms5000M -Xmx5000M -XX:+UseParallelGC -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$GC_LOGFILE
-XX:+CMSPermGenSweepingEnabled -Dsolr.solr.home=multicore
 -Denable.slave=true -jar start.jar

32GiB RAM


Any thoughts? Will a switch to the concurrent GC help in any way?
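For reference, a rough sketch of what the same startup command might look like
with the concurrent collector enabled (the flag choices are an assumption, not
something tested on this setup; -XX:+CMSPermGenSweepingEnabled only takes
effect when CMS is actually in use, and a GC change alone would not fix an OOM
caused by the LUCENE-1566 VM bug):

/usr/bin/java -d64 -Xms5000M -Xmx5000M -XX:+UseConcMarkSweepGC
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$GC_LOGFILE
-Dsolr.solr.home=multicore -Denable.slave=true -jar start.jar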

Re: OOM due to JRE Issue (LUCENE-1566)

Posted by Simon Willnauer <si...@googlemail.com>.
hey,

On Tue, Aug 16, 2011 at 9:34 AM, Pranav Prakash <pr...@gmail.com> wrote:
> Hi,
>
> This might probably have been discussed long time back, but I got this error
> recently in one of my production slaves.
>
> SEVERE: java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the
> Sun VM Bug described in https://issues.apache.org/jira/browse/LUCENE-1566;
> try calling FSDirectory.setReadChunkSize with a value smaller than the
> current chunk size (2147483647)
>
> I am currently using Solr 1.4. Going through the JIRA issue comments, I found
> that this patch applies to Lucene 2.9 or above. We are also planning an upgrade
> to Solr 3.3. Is this patch included in 3.3, so that I don't have to apply the
> patch manually?
AFAIK, Solr 1.4 is on Lucene 2.9.1, so this patch is already applied to
the version you are using.
Maybe you can provide the stack trace and more details about your
problem and report back?

simon

>
> What are the other workarounds for this problem?
>
> Thanks in adv.
>
> *Pranav Prakash*
>
> "temet nosce"
>
> Twitter <http://twitter.com/pranavprakash> | Blog <http://blog.myblive.com> |
> Google <http://www.google.com/profiles/pranny>
>