Posted to solr-user@lucene.apache.org by Trilok Prithvi <tr...@gmail.com> on 2014/12/16 17:55:09 UTC

OutOfMemoryError

We are getting OOME pretty often (every hour or so). We are restarting
nodes to keep up with it.

Here is our setup:
SolrCloud 4.10.2 (2 shards, 2 replicas) with 3 zookeepers.

Each node has:
16GB RAM
2GB JVM heap (-Xmx2048m, -Xms1024m)
~100 million documents (split across 2 shards, ~50M on each)
The Solr core is about 16GB of index data on each node.

*Physical Memory is almost always 99% full.*


The commit setup is as follows:

<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <autoCommit>
    <maxTime>300000</maxTime>
    <maxDocs>100000</maxDocs>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
The rest of the solrconfig.xml setup is all default. (That is: a hard
commit every 300 seconds or 100,000 documents, without opening a new
searcher, and a soft commit every 5 seconds.)

Some of the errors that we see in the Solr Admin logging are as follows:

java.lang.OutOfMemoryError: Java heap space

org.apache.solr.common.SolrException: no servers hosting shard:

org.apache.http.TruncatedChunkException: Truncated chunk ( expected
size: 8192; actual size: 7222)


Please let me know if you need any more information.


Thanks!

Re: OutOfMemoryError

Posted by Trilok Prithvi <tr...@gmail.com>.
Shawn, looks like the JVM bump did the trick. Thanks!

On Tue, Dec 16, 2014 at 10:39 AM, Trilok Prithvi <tr...@gmail.com>
wrote:
>
> Thanks Shawn. We will increase the JVM to 4GB and see how it performs.
>
> [rest of quoted message trimmed; the full message appears below]

Re: OutOfMemoryError

Posted by Trilok Prithvi <tr...@gmail.com>.
Thanks Shawn. We will increase the JVM to 4GB and see how it performs.

Alexandre,
Our queries are simple (with the strdist() function in almost all of
them). No facets or sorts.
But we do a lot of data loads. We index frequently, in batches ranging
from 10 to 100,000 documents, and we upload data throughout the day.
Basically, we are heavy on indexing and querying (simple queries) at the
same time.
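
For illustration, a typical request of ours looks roughly like this
(the field name and target string are placeholders, not our real
schema):

    # hypothetical example; name_s and the literal are made up
    q={!func}strdist("some target string", name_s, edit)&fl=id,score&rows=10

strdist() here computes an edit (Levenshtein) distance between the
literal and each document's field value, so it is evaluated per
matching document.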



On Tue, Dec 16, 2014 at 10:17 AM, Alexandre Rafalovitch <ar...@gmail.com>
wrote:
>
> What do your queries look like? Especially FQs, facets, sorts, etc. All
> of those things require caches of various sorts.
>
> Regards,
>    Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
> [quoted original message trimmed; see the original post at the top]

Re: OutOfMemoryError

Posted by Alexandre Rafalovitch <ar...@gmail.com>.
What do your queries look like? Especially FQs, facets, sorts, etc. All
of those things require caches of various sorts.
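
For instance, the stock solrconfig.xml defines caches like these, which
filter queries and query results go through (the sizes are just the
shipped defaults, not a recommendation):

    <filterCache class="solr.FastLRUCache"
                 size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"
                      size="512" initialSize="512" autowarmCount="0"/>

Sorting and faceting additionally build in-heap field-value structures,
which is often where the memory actually goes.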

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 16 December 2014 at 11:55, Trilok Prithvi <tr...@gmail.com> wrote:
> We are getting OOME pretty often (every hour or so). We are restarting
> nodes to keep up with it.
>
> [rest of quoted message trimmed; see the original post at the top]

Re: OutOfMemoryError

Posted by Shawn Heisey <ap...@elyograg.org>.
On 12/16/2014 9:55 AM, Trilok Prithvi wrote:
> We are getting OOME pretty often (every hour or so). We are restarting
> nodes to keep up with it.
>
> Here is our setup:
> SolrCloud 4.10.2 (2 shards, 2 replicas) with 3 zookeepers.
>
> Each node has:
> 16GB RAM
> 2GB JVM heap (-Xmx2048m, -Xms1024m)
> ~100 million documents (split across 2 shards, ~50M on each)
> The Solr core is about 16GB of index data on each node.
>
> *Physical Memory is almost always 99% full.*

I'm pretty sure that a 2GB heap will simply not be big enough for 100 
million documents.  The fact that you can get it to function for even an 
hour is pretty amazing.

If you can upgrade the memory beyond 16GB, you should ... and you'll 
need to increase your Java heap.  I would use 4GB as a starting point.
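
For example, if you launch Solr 4.x with the bundled Jetty from the
example directory, that would look something like this (adjust to
however you actually start Solr):

    # hypothetical launch command; adapt paths and flags to your deployment
    java -Xms4g -Xmx4g -jar start.jar

Setting -Xms equal to -Xmx just avoids heap-resizing pauses; the
important part is the 4GB -Xmx.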

http://wiki.apache.org/solr/SolrPerformanceProblems#How_much_heap_space_do_I_need.3F

It's completely normal for physical memory to be full.  The OS uses 
available memory for disk caching.

http://en.wikipedia.org/wiki/Page_cache
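
On Linux you can see this for yourself:

    free -m

The "cached" column is the page cache; the OS gives that memory back
automatically whenever applications ask for it, so "99% full" is
normal, not a problem.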

Thanks,
Shawn