Posted to dev@lucene.apache.org by Shawn Heisey <so...@elyograg.org> on 2012/12/18 04:57:29 UTC
OOM failures caused by java 1.7.0_09?
I have been seeing a bunch of random failures, both in Solr tests and my
own SolrJ programs. They all start with the following in the stacktrace:
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:691)
Common elements:
1) SolrJ/Solr 4.1-SNAPSHOT.
2) G1 garbage collector.
3) Built and run with Oracle JDK 1.7.0_09 on CentOS 6 x64, using RPMs
created with the following guide:
http://www.city-fan.org/tips/OracleJava7OnFedora
One of the programs consistently uses less than 25MB of heap, because it
uses idle moments to do garbage collection. I have watched the heap
with jconsole to verify this. It has been configured with a max heap of
1GB, so I am very sure that there is no actual memory pressure.
Since rebooting and upgrading to 1.7.0_10, I have not seen any further
OOM problems despite pounding on everything repeatedly. Has anyone else
seen anything similar?
Thanks,
Shawn
Re: OOM failures caused by java 1.7.0_09?
Posted by Shawn Heisey <so...@elyograg.org>.
On 12/21/2012 3:00 AM, Toke Eskildsen wrote:
> On Tue, 2012-12-18 at 04:57 +0100, Shawn Heisey wrote:
>> java.lang.OutOfMemoryError: unable to create new native thread
>> at java.lang.Thread.start0(Native Method)
>> at java.lang.Thread.start(Thread.java:691)
> [...]
>> 3) Built and run with oracle jdk 1.7.0_09 on CentOS 6 x64
> [...]
>> Since rebooting and upgrading to 1.7.0_10, I have not seen any further
>> OOM problems despite pounding on everything repeatedly. Has anyone
>> else seen anything similar?
>
> Yes, also under CentOS, but with Java 1.6. The cause was a low default
> limit for user-space threads (1024 AFAIR). Try calling 'ulimit -a' and
> check that "max user processes" is sufficiently large.
>
> If the limit is fairly low, your reboot might explain why switching to
> 1.7.0_10 seemed to be the solution, as you probably had fewer
> applications running after the reboot.
Thank you, that makes perfect sense. I have now added the following to
/etc/security/limits.conf, alongside the existing lines there that raise
the maximum number of open files:
ncindex hard nproc 6144
ncindex soft nproc 4096
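Changes to limits.conf only take effect for new login sessions, so it is
worth confirming them after logging back in. A quick sketch (assuming bash
on Linux, run as the ncindex user; the thread-count command assumes procps
ps is installed):

```shell
# Soft limit on user processes/threads for the current session;
# should report 4096 once the new limits.conf entry is active
ulimit -u

# Count the light-weight processes (threads) this user currently owns,
# to see how close the indexing programs get to the limit
ps -L -u "$(id -un)" -o lwp= | wc -l
```

If the first command still prints the old value, the session predates the
edit or a PAM configuration is overriding limits.conf.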
Thanks,
Shawn
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org
Re: OOM failures caused by java 1.7.0_09?
Posted by Toke Eskildsen <te...@statsbiblioteket.dk>.
On Tue, 2012-12-18 at 04:57 +0100, Shawn Heisey wrote:
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:691)
[...]
> 3) Built and run with oracle jdk 1.7.0_09 on CentOS 6 x64
[...]
> Since rebooting and upgrading to 1.7.0_10, I have not seen any further
> OOM problems despite pounding on everything repeatedly. Has anyone
> else seen anything similar?
Yes, also under CentOS, but with Java 1.6. The cause was a low default
limit for user-space threads (1024 AFAIR). Try calling 'ulimit -a' and
check that "max user processes" is sufficiently large.
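A sketch of that check (assuming bash on Linux, where ulimit is a shell
builtin; 1024 was a common distribution default for this limit):

```shell
# Print the full limits table and pull out the relevant line
ulimit -a | grep -i 'max user processes'

# Or query the same limit directly; a low value here is what produces
# "java.lang.OutOfMemoryError: unable to create new native thread"
ulimit -u
```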
If the limit is fairly low, your reboot might explain why switching to
1.7.0_10 seemed to be the solution, as you probably had fewer
applications running after the reboot.
- Toke Eskildsen