Posted to solr-user@lucene.apache.org by 陈永龙 <cy...@gfire.cn> on 2017/08/25 01:10:23 UTC

Solr caching the index file makes the server stop serving

Hello,

ENV:  solrcloud 6.3  

3*dell server

128G 12cores 4.3T /server

3 solr node /server

20G /node (with parameter –m 20G)

10 billion documents total

Problem:

         When we start SolrCloud, the cached index pushes memory usage
to 98% or more. If we then continue indexing documents (committing in
batches of 10,000), one or more servers stop serving: we cannot log in
via SSH, and even the local console does not respond.
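
For context, our indexing side is roughly the sketch below (SolrJ; the
ZooKeeper address, collection name and field names are placeholders,
not our real ones):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.UUID;

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            // Placeholder ZooKeeper ensemble and collection name.
            CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("zk1:2181,zk2:2181,zk3:2181")
                    .build();
            client.setDefaultCollection("mycollection");

            // Build one batch of 10,000 documents.
            List<SolrInputDocument> batch = new ArrayList<>();
            for (int i = 0; i < 10000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", UUID.randomUUID().toString());
                doc.addField("body_txt", "example document " + i);
                batch.add(doc);
            }
            client.add(batch);   // send the whole batch in one request
            client.commit();     // explicit commit after every 10,000 docs
            client.close();
        }
    }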

So, how can I limit Solr's behavior of caching the index in memory?

Thanks to anyone who can help!


Re: Solr caching the index file makes the server stop serving

Posted by Erick Erickson <er...@gmail.com>.
10 billion documents on 12 cores is over 800M documents/shard at best.
This is _very_ aggressive for a shard. Could you give more information
about your setup?
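
(That figure is just the straight division, assuming docs are spread
evenly across 12 shards: 10,000,000,000 docs / 12 shards is about
833M docs per shard.)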

I've seen 250M docs fit in 12G memory. I've also seen 10M documents
strain 32G of memory. Details matter a lot. The only way I've been
able to determine what a reasonable number of docs is, with my queries
on my data, is to do "the sizing exercise", which I've outlined here:

https://lucidworks.com/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

While this was written over 5 years ago, it's still accurate.
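
Roughly, the exercise amounts to something like the sketch below: index
a realistic chunk of your data, fire representative queries, note the
response times, then add more data and repeat until performance falls
over. The ZooKeeper address, collection name and sample queries here
are made up; substitute queries pulled from your real logs.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class SizingProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder ZooKeeper address and test collection.
            CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("zk1:2181")
                    .build();
            client.setDefaultCollection("sizing_test");

            // Representative queries, ideally taken from real query logs.
            String[] sampleQueries = {
                "body_txt:foo",
                "body_txt:bar AND title_txt:baz"
            };

            // Re-run this probe after each round of indexing and watch
            // how the numbers change as the doc count grows.
            for (String q : sampleQueries) {
                SolrQuery query = new SolrQuery(q);
                query.setRows(10);
                long start = System.nanoTime();
                QueryResponse rsp = client.query(query);
                long wallMs = (System.nanoTime() - start) / 1000000;
                System.out.println(q + " hits=" + rsp.getResults().getNumFound()
                        + " qtime=" + rsp.getQTime() + "ms wall=" + wallMs + "ms");
            }
            client.close();
        }
    }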

Best,
Erick

On Thu, Aug 24, 2017 at 6:10 PM, 陈永龙 <cy...@gfire.cn> wrote:
> Hello,
>
> ENV:  solrcloud 6.3
>
> 3*dell server
>
> 128G 12cores 4.3T /server
>
> 3 solr node /server
>
> 20G /node (with parameter –m 20G)
>
> 10 billion documents total
>
> Problem:
>
>          When we start SolrCloud, the cached index pushes memory usage
> to 98% or more. If we then continue indexing documents (committing in
> batches of 10,000), one or more servers stop serving: we cannot log in
> via SSH, and even the local console does not respond.
>
> So, how can I limit Solr's behavior of caching the index in memory?
>
> Thanks to anyone who can help!
>