Posted to solr-user@lucene.apache.org by stockii <st...@googlemail.com> on 2012/11/16 09:29:24 UTC

Out Of Memory =( Too many cores on one server?

Hello.

If my server has been running for a while, I get OOM problems. I think the
problem is that I am running too many cores on one server, with too many
documents.

This is my server setup:
14 cores.
1 with 30 million docs
1 with 22 million docs
1 with 25 million docs and growing
1 with 67 million docs
and the other cores are each under 1 million docs.

All these cores run fine in one Jetty, searching is very fast, and
we are satisfied with this.
Yesterday we got an OOM.

Do you think that we should move the big cores out into another virtual
instance of the server, so that the JVMs do not share the memory and go OOM?
We start with: MEMORY_OPTIONS="-Xmx6g -Xms2G -Xmn1G"
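A minimal sketch of the split being considered: two Jetty instances, each with its own JVM heap. The paths, ports, and heap sizes below are illustrative assumptions, not values from this thread.

```shell
# Instance 1: the ~10 small cores, modest heap
SMALL_OPTS="-Xmx3g -Xms1g -Xmn512m"
java $SMALL_OPTS -Djetty.port=8983 \
     -Dsolr.solr.home=/opt/solr/small-cores -jar start.jar &

# Instance 2: the 4 large cores (30M/22M/25M/67M docs), larger heap
BIG_OPTS="-Xmx8g -Xms4g -Xmn2g"
java $BIG_OPTS -Djetty.port=8984 \
     -Dsolr.solr.home=/opt/solr/big-cores -jar start.jar &
```

With separate processes, an OOM in one instance no longer takes down the other cores.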



--
View this message in context: http://lucene.472066.n3.nabble.com/Out-Of-Memory-Too-many-cores-on-one-server-tp4020675.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: Out Of Memory =( Too many cores on one server?

Posted by stockii <st...@googlemail.com>.
I think the main problem may be this bug:
https://issues.apache.org/jira/browse/SOLR-1111

All our queries are sorted, and the heap keeps growing.
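The symptom described (every query sorted, heap growing steadily) points at the Lucene FieldCache: each field used for sorting gets an in-heap cache entry, held for the life of the index searcher. A hedged illustration — the core name, field name, and the Solr 3.x-era stats page are assumptions:

```shell
# A sorted query like this forces Lucene to build a FieldCache entry
# for the sort field, which lives in the JVM heap:
curl 'http://localhost:8983/solr/core1/select?q=*:*&sort=created_at+desc&rows=10'

# On Solr 3.x/4.0-era installs, cache entries show up on the stats page:
curl -s 'http://localhost:8983/solr/core1/admin/stats.jsp' | grep -i fieldCache
```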




Re: Out Of Memory =( Too many cores on one server?

Posted by stockii <st...@googlemail.com>.
Okay, thanks a lot. First I need to install Java 1.7, and then I will try it. :)




Re: Out Of Memory =( Too many cores on one server?

Posted by Mark Miller <ma...@gmail.com>.
> I have personally found that increasing the size of the young generation (Eden) is beneficial to Solr,

I've seen the same thing - I think it's because requests create a lot
of short-lived objects, and if the eden is not large enough, a lot of
those objects will make it to the tenured space, which is basically a
failure of the generational algorithm.

It's not a bad knob to tweak: if you just keep raising the heap, you
can wastefully keep giving more unnecessary RAM to the tenured space
when you might only want to give more to the eden space.

- Mark
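One hedged way to check for this kind of premature promotion is GC logging. The flags below are standard HotSpot options from the Java 6/7 era; the heap sizes match the poster's settings but are otherwise illustrative.

```shell
# Log each collection plus the age distribution of surviving objects;
# steadily rising old-gen occupancy after young collections suggests
# short-lived objects are being promoted because eden is too small.
java -Xmx6g -Xms2g -Xmn1g \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintTenuringDistribution \
     -Xloggc:gc.log \
     -jar start.jar
```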


On Wed, Nov 21, 2012 at 11:00 AM, Shawn Heisey <so...@elyograg.org> wrote:
> [quoted text snipped; Shawn's full message appears elsewhere in this thread]



-- 
- Mark

Re: Out Of Memory =( Too many cores on one server?

Posted by Shawn Heisey <so...@elyograg.org>.
On 11/21/2012 12:36 AM, stockii wrote:
> Okay, I will try out more RAM.
>
> I am not using much caching because of near-real-time search. In this
> case, is it better to increase Xmn, or only Xmx and Xms?

I have personally found that increasing the size of the young generation 
(Eden) is beneficial to Solr, at least if you are using the parallel GC 
options.  I theorize that the collector for the young generation is more 
efficient than the full GC, but that's just a guess.  When I started 
doing that, the amount of time my Solr JVM spent doing garbage 
collection went way down, even though the number of garbage collections 
went up.

Lately I have been increasing the Eden size by using -XX:NewRatio=1 
rather than an explicit value on -Xmn.  This has one advantage - if you 
change the min/max heap size, the same value for NewRatio will still work.
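As a quick sanity check on what NewRatio does: it is the ratio of old generation to young generation, so the young generation gets heap/(NewRatio+1). The arithmetic below just restates that for the 8 GB heap used here.

```shell
# With -Xmx8192M and -XX:NewRatio=1, young and old each get ~half the heap.
HEAP_MB=8192
NEWRATIO=1
YOUNG_MB=$((HEAP_MB / (NEWRATIO + 1)))
OLD_MB=$((HEAP_MB - YOUNG_MB))
echo "young=${YOUNG_MB}M old=${OLD_MB}M"   # young=4096M old=4096M
```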

Here are the options that I am currently using in production with Java6:

-Xms4096M
-Xmx8192M
-XX:NewRatio=1
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled

Here is what I am planning for the future with Solr4 and beyond with 
Java7,  including an environment variable for Xmx. Due to the 
experimental nature of the G1 collector, I would only trust it with the 
latest Java releases, especially for Java6.  The Unlock option is not 
required on Java7, only Java6.

-Xms256M
-Xmx${JMEM}
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC

Thanks,
Shawn
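One way the ${JMEM} environment variable above might be wired into a start script — the script shape is an assumption, not something from the thread:

```shell
# Default the heap cap if the environment does not provide one.
JMEM="${JMEM:-4096M}"
JAVA_OPTS="-Xms256M -Xmx${JMEM} -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC"
echo "$JAVA_OPTS"
# then launch with, e.g.: java $JAVA_OPTS -jar start.jar
```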


Re: Out Of Memory =( Too many cores on one server?

Posted by stockii <st...@googlemail.com>.
Okay, I will try out more RAM.

I am not using much caching because of near-real-time search. In this
case, is it better to increase Xmn, or only Xmx and Xms?




Re: Out Of Memory =( Too many cores on one server?

Posted by Vadim Kisselmann <v....@gmail.com>.
Hi,
your JVM need more RAM. My setup works well with 10 Cores, and 300mio.
docs, Xmx8GB Xms8GB, 16GB for OS.
But it's how Bernd mentioned, the memory consumption depends on the
number of fields and the fieldCache.
Best Regards
Vadim



2012/11/16 Bernd Fehling <be...@uni-bielefeld.de>:
> [quoted text snipped; Bernd's full message appears elsewhere in this thread]

Re: Out Of Memory =( Too many cores on one server?

Posted by Bernd Fehling <be...@uni-bielefeld.de>.
I guess you should give the JVM more memory.

To find a good value for -Xmx, I started by "oversizing": I set
it to Xmx20G and Xms20G, then monitored the system and saw that the JVM stayed
between 5G and 10G (Java 7 with the G1 GC).
It is now finally set to Xmx11G and Xms11G for my system with 1 core and 38 million docs.
But JVM memory depends pretty much on the number of fields in schema.xml
and on the fieldCache (sortable fields).
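The monitoring step described here can be done with jstat, which ships with the JDK; finding the process via jps assumes Jetty was started with start.jar, which is a guess about this setup.

```shell
# Find the Jetty/Solr JVM (started via start.jar) and sample its heap.
SOLR_PID=$(jps -l | awk '/start.jar/ {print $1}')
# Heap and GC utilization every 5 seconds, 12 samples (~1 minute):
jstat -gcutil "$SOLR_PID" 5000 12
```

If peak observed usage stays well under -Xmx across a full indexing/query cycle, the heap can likely be shrunk, as Bernd did from 20G to 11G.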

Regards
Bernd

On 2012/11/16 09:29, stockii wrote:
> [quoted text snipped; the original post appears at the top of this thread]