Posted to solr-user@lucene.apache.org by Kevin Van Lieshout <ke...@gmail.com> on 2020/10/09 21:45:02 UTC

Solr Memory

Hi,

I use Solr for distributed indexing in cloud mode, running in Kubernetes
on a 72-core, 256 GB server. In the work I'm doing, I benchmark index times,
so we are constantly indexing and then deleting the collection, etc., for
accurate benchmarks at certain sizes in GB. In theory this should not
cause a memory build-up, but it does. As we index more and more (create
collections) and then delete the collections, we still see the memory
percentages climb in the Kubernetes metric tracking for our server. We are
running Solr 7.6 and ZooKeeper 3.5.5. Is there any reason why collections are
"deleted" but data stays persistent on the shards, which do not release
memory? The build-up continues until the Solr shards crash with OOM errors,
even though they hold no collections or "data" after each delete.
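For reference, one cycle of the create/index/delete loop goes through the
Collections API, roughly like this (the host, collection name, and shard
counts below are placeholders, not our actual setup; the script just prints
the requests it would issue):

```shell
#!/bin/sh
# Sketch of one benchmark cycle via the Solr Collections API.
# SOLR and COLL are illustrative placeholders.
SOLR="http://localhost:8983/solr"
COLL="bench"

CREATE_URL="${SOLR}/admin/collections?action=CREATE&name=${COLL}&numShards=2&replicationFactor=2"
DELETE_URL="${SOLR}/admin/collections?action=DELETE&name=${COLL}"

# A real run would be:
#   curl "$CREATE_URL"
#   ... index the test corpus, record timings ...
#   curl "$DELETE_URL"
echo "$CREATE_URL"
echo "$DELETE_URL"
```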

Let me know if anyone has seen this. Thanks

Kevin

Re: Solr Memory

Posted by Erick Erickson <er...@gmail.com>.
_Have_ they crashed due to OOMs? It’s quite normal for Java to create
a sawtooth pattern of memory consumption. If you attach, say, jconsole
to the running Solr and hit the GC button, does the memory drop back?
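If you don't have jconsole handy, the same check can be done from the shell
with the JDK tools; something like the following (the PID is a placeholder,
and the script only prints the commands rather than running them):

```shell
#!/bin/sh
# Placeholder PID; on the Solr node, find the real one with: jps -l
PID=12345

# Equivalent of jconsole's "Perform GC" button:
GC_CMD="jcmd $PID GC.run"
# Then watch whether old-gen occupancy (the O column) drops back,
# sampling every 5 seconds:
WATCH_CMD="jstat -gcutil $PID 5000"

echo "$GC_CMD"
echo "$WATCH_CMD"
```

If the old-gen number falls back to roughly the same floor after each forced
GC, you're looking at the normal sawtooth rather than a leak.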

To answer your question, though: no, there's no reason memory should creep.
That said, the scenario you describe is not a "normal" one. In general,
collection creation/deletion is a fairly rare operation, so a memory leak
in that code wouldn't jump out at everyone the way, say, a memory leak
in searching would.

A heap dump would be useful, but do use something to force a global GC
first.
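One convenient way to get both at once: jmap's "live" option forces a full
GC before dumping, so only reachable objects land in the dump. A sketch
(PID and output path are placeholders; the script prints the command):

```shell
#!/bin/sh
# Placeholder PID; substitute the Solr JVM's actual pid.
PID=12345

# "live" triggers a full GC first, then dumps only reachable objects.
DUMP_CMD="jmap -dump:live,format=b,file=/tmp/solr-heap.hprof $PID"

# Or force the GC explicitly beforehand:
#   jcmd $PID GC.run
echo "$DUMP_CMD"
```

The resulting .hprof file can then be opened in Eclipse MAT or a similar
analyzer to see what's actually being retained.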

Best,
Erick

> On Oct 9, 2020, at 5:45 PM, Kevin Van Lieshout <ke...@gmail.com> wrote: