Posted to solr-user@lucene.apache.org by Luke Oak <lu...@gmail.com> on 2021/01/28 00:08:08 UTC

Solr 8.7.0 memory leak?

Hi, I am using Solr 8.7.0, CentOS 7, Java 8.

I just created a few collections with no data; memory keeps growing but never goes down, until I get an OOM and Solr is killed

Any reason?

Thanks

Sent from my iPhone

Re: Solr 8.7.0 memory leak?

Posted by Chris Hostetter <ho...@fucit.org>.
: There are not many OOM stack details printed in the Solr log file; it's
: just saying Not enough memory, and it's killed by ****oom.sh (Solr's script).

not many isn't the same as none ... can you tell us *ANYTHING* about what 
the logs look like? ... as I said: it's not just the details of the OOM 
that would be helpful: any details about what the Solr logs say Solr is 
doing while the memory is growing (before the OOM) would also be helpful.
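
Even without the full logs, polling the heap with the stock JDK tools while 
the memory grows would help. For example (assuming a standard JDK 8 install 
and that Solr is the usual Jetty start.jar process -- adjust the pid lookup 
to your setup):

    jstat -gcutil $(pgrep -f start.jar) 5s

...that prints old-gen occupancy and GC counts every 5 seconds, which would 
show whether full GCs are running and failing to reclaim anything.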

: My question (issue) is not whether it's an OOM or not; the issue is why JVM
: memory usage keeps growing but never goes down. That's not how Java programs
: work: a normal Java process can use a lot of memory, but it will release
: it after use instead of keeping references to it.

you're absolutely right -- that's how a Java program should behave, and 
that's what I'm seeing when I try to reproduce what you're describing with 
Solr 8.7.0 by running a few nodes, creating a collection and waiting.

In other words: I can't reproduce what you are seeing based on the 
information you've provided -- so the only thing I can do is to ask you 
for more information: what you see in the logs, what your configs are, the 
exact steps you take to trigger this situation, etc...

Please help us help you so we can figure out what is causing the 
behavior you are seeing and try to fix it....

: > Knowing exactly what your config looks like would help, knowing exactly
: > what you do before you see the OOM would help (are you really just creating
: > the collections, or is it actually necessary to index some docs into those
: > collections before you see this problem start to happen? what do the logs
: > say during the time when the heap usage is just growing w/o explanation?
: > what is the stack trace of the OOM? what does a heap analysis show in
: > terms of large/leaked objects? etc.
: >
: > You have to help us understand the minimally viable steps we need
: > to execute to see the behavior you see....
: >
: > https://cwiki.apache.org/confluence/display/SOLR/UsingMailingLists


-Hoss
http://www.lucidworks.com/

Re: Solr 8.7.0 memory leak?

Posted by Luke <lu...@gmail.com>.
Thanks Hoss and Shawn for helping.

There are not many OOM stack details printed in the Solr log file; it's
just saying Not enough memory, and it's killed by ****oom.sh (Solr's script).

My question (issue) is not whether it's an OOM or not; the issue is why JVM
memory usage keeps growing but never goes down. That's not how Java programs
work: a normal Java process can use a lot of memory, but it will release
it after use instead of keeping references to it.

After trying Solr 8.7.0 for one week, I went back to Solr 8.6.2 (config,
plugins and third-party libraries are all the same, Xmx=6G). Now I can see
JVM memory usage go up and down: it goes up while I am creating collections,
but it goes back down once the collection is created.

I think I should stick with 8.6.2 until I can find a proper config or a
stable version.

thanks again

Derrick

On Thu, Jan 28, 2021 at 11:43 PM Chris Hostetter <ho...@fucit.org>
wrote:

>
> : Could the config file be the issue? I am using a custom config instead
> : of _default; my config is from Solr 8.6.2 with a custom solrconfig.xml
>
> Well, it depends on what's *IN* the custom config ... maybe you are using
> some built-in functionality that has a bug but didn't get triggered by my
> simple test case -- or maybe you have custom components that have memory
> leaks.
>
> The point of the question was to try and understand where/how you are
> running into an OOM I can't reproduce.
>
> Knowing exactly what your config looks like would help, knowing exactly
> what you do before you see the OOM would help (are you really just creating
> the collections, or is it actually necessary to index some docs into those
> collections before you see this problem start to happen? what do the logs
> say during the time when the heap usage is just growing w/o explanation?
> what is the stack trace of the OOM? what does a heap analysis show in
> terms of large/leaked objects? etc.
>
> You have to help us understand the minimally viable steps we need
> to execute to see the behavior you see....
>
> https://cwiki.apache.org/confluence/display/SOLR/UsingMailingLists
>
> -Hoss
> http://www.lucidworks.com/
>
>

Re: Solr 8.7.0 memory leak?

Posted by Chris Hostetter <ho...@fucit.org>.
: Could the config file be the issue? I am using a custom config instead 
: of _default; my config is from Solr 8.6.2 with a custom solrconfig.xml

Well, it depends on what's *IN* the custom config ... maybe you are using 
some built-in functionality that has a bug but didn't get triggered by my 
simple test case -- or maybe you have custom components that have memory 
leaks.

The point of the question was to try and understand where/how you are 
running into an OOM I can't reproduce.

Knowing exactly what your config looks like would help, knowing exactly 
what you do before you see the OOM would help (are you really just creating 
the collections, or is it actually necessary to index some docs into those 
collections before you see this problem start to happen? what do the logs 
say during the time when the heap usage is just growing w/o explanation? 
what is the stack trace of the OOM? what does a heap analysis show in 
terms of large/leaked objects? etc.
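
If you're not sure how to get that heap analysis: with a stock JDK 8, 
something like

    jmap -histo:live <solr-pid> | head -n 30

taken while the heap is nearly full will at least show which classes are 
holding the memory (<solr-pid> here is a placeholder for your Solr process 
id); a full dump via

    jmap -dump:live,format=b,file=heap.hprof <solr-pid>

can then be opened in a heap analyzer such as Eclipse MAT to find 
large/leaked objects.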

You have to help us understand the minimally viable steps we need 
to execute to see the behavior you see....

https://cwiki.apache.org/confluence/display/SOLR/UsingMailingLists

-Hoss
http://www.lucidworks.com/


Re: Solr 8.7.0 memory leak?

Posted by Luke Oak <lu...@gmail.com>.
Thanks Chris,  

Could the config file be the issue? I am using a custom config instead of _default; my config is from Solr 8.6.2 with a custom solrconfig.xml

Derrick

Sent from my iPhone

> On Jan 28, 2021, at 2:48 PM, Chris Hostetter <ho...@fucit.org> wrote:
> 
> 
> FWIW, I just tried using 8.7.0 to run:
>    bin/solr -m 200m -e cloud -noprompt
> 
> And then set up the following bash one-liner to poll the heap metrics...
> 
> while : ; do date; echo "node 8983" && (curl -sS http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "node 7574" && (curl -sS http://localhost:7574/solr/admin/metrics | grep memory.heap) ; sleep 30; done
> 
> ...what I saw was about what I expected ... heap usage slowly grew on both 
> nodes as bits of garbage were generated (as expected considering the 
> metrics requests, let alone typical background threads) until eventually it 
> garbage collected back down to low usage w/o ever encountering an OOM or crash...
> 
> ....
> Thu Jan 28 12:38:47 MST 2021
> node 8983
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.7613688659667969,
>      "memory.heap.used":159670624,
> node 7574
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.7713688659667969,
>      "memory.heap.used":161767776,
> Thu Jan 28 12:39:17 MST 2021
> node 8983
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.7813688659667969,
>      "memory.heap.used":163864928,
> node 7574
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.7913688659667969,
>      "memory.heap.used":165962080,
> Thu Jan 28 12:39:47 MST 2021
> node 8983
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.8063688659667969,
>      "memory.heap.used":169107808,
> node 7574
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.8113688659667969,
>      "memory.heap.used":170156384,
> Thu Jan 28 12:40:17 MST 2021
> node 8983
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.3428504943847656,
>      "memory.heap.used":71900960,
> node 7574
>      "memory.heap.committed":209715200,
>      "memory.heap.init":209715200,
>      "memory.heap.max":209715200,
>      "memory.heap.usage":0.3528504943847656,
>      "memory.heap.used":73998112,
> 
> 
> 
> 
> 
> 
> -Hoss
> http://www.lucidworks.com/

Re: Solr 8.7.0 memory leak?

Posted by Chris Hostetter <ho...@fucit.org>.
FWIW, I just tried using 8.7.0 to run:
	bin/solr -m 200m -e cloud -noprompt

And then set up the following bash one-liner to poll the heap metrics...

while : ; do date; echo "node 8983" && (curl -sS http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "node 7574" && (curl -sS http://localhost:7574/solr/admin/metrics | grep memory.heap) ; sleep 30; done

...what I saw was about what I expected ... heap usage slowly grew on both 
nodes as bits of garbage were generated (as expected considering the 
metrics requests, let alone typical background threads) until eventually it 
garbage collected back down to low usage w/o ever encountering an OOM or crash...

....
Thu Jan 28 12:38:47 MST 2021
node 8983
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.7613688659667969,
      "memory.heap.used":159670624,
node 7574
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.7713688659667969,
      "memory.heap.used":161767776,
Thu Jan 28 12:39:17 MST 2021
node 8983
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.7813688659667969,
      "memory.heap.used":163864928,
node 7574
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.7913688659667969,
      "memory.heap.used":165962080,
Thu Jan 28 12:39:47 MST 2021
node 8983
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.8063688659667969,
      "memory.heap.used":169107808,
node 7574
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.8113688659667969,
      "memory.heap.used":170156384,
Thu Jan 28 12:40:17 MST 2021
node 8983
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.3428504943847656,
      "memory.heap.used":71900960,
node 7574
      "memory.heap.committed":209715200,
      "memory.heap.init":209715200,
      "memory.heap.max":209715200,
      "memory.heap.usage":0.3528504943847656,
      "memory.heap.used":73998112,






-Hoss
http://www.lucidworks.com/

Re: Solr 8.7.0 memory leak?

Posted by Chris Hostetter <ho...@fucit.org>.
: Hi, I am using Solr 8.7.0, CentOS 7, Java 8.
: 
: I just created a few collections with no data; memory keeps growing but 
: never goes down, until I get an OOM and Solr is killed

Are you using a custom config set, or just the _default configs?

If you start up this single node with something like -Xmx5g and create 
5 collections and do nothing else, how long does it take you to see the 
OOM?
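
Concretely, that test would just be something like this (the collection 
names here are arbitrary):

    bin/solr start -c -m 5g
    for i in 1 2 3 4 5; do
      curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=test$i&numShards=1&replicationFactor=1"
    done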



-Hoss
http://www.lucidworks.com/

Re: Solr 8.7.0 memory leak?

Posted by Luke <lu...@gmail.com>.
And here is the GC log when I create a collection (just creating the
collection, nothing else):

{Heap before GC invocations=1530 (full 412):
 garbage-first heap   total 10485760K, used 10483431K [0x0000000540000000,
0x0000000540405000, 0x00000007c0000000)
  region size 4096K, 0 young (0K), 0 survivors (0K)
 Metaspace       used 70694K, capacity 75070K, committed 75260K, reserved
1116160K
  class space    used 7674K, capacity 8836K, committed 8956K, reserved
1048576K
2021-01-28T21:24:18.396+0800: 34029.526: [GC pause (G1 Evacuation Pause)
(young)
Desired survivor size 33554432 bytes, new threshold 15 (max 15)
, 0.0034128 secs]
   [Parallel Time: 2.2 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 34029525.7, Avg: 34029526.1, Max:
34029527.3, Diff: 1.6]
      [Ext Root Scanning (ms): Min: 0.0, Avg: 1.0, Max: 1.4, Diff: 1.4,
Sum: 4.1]
      [Update RS (ms): Min: 0.3, Avg: 0.6, Max: 0.7, Diff: 0.4, Sum: 2.2]
         [Processed Buffers: Min: 2, Avg: 2.8, Max: 4, Diff: 2, Sum: 11]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0,
Sum: 0.0]
      [Object Copy (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.2]
      [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.3, Sum: 0.6]
         [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum:
0.0]
      [GC Worker Total (ms): Min: 0.6, Avg: 1.8, Max: 2.2, Diff: 1.6, Sum:
7.2]
      [GC Worker End (ms): Min: 34029527.9, Avg: 34029527.9, Max:
34029527.9, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.0 ms]
   [Other: 1.2 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.9 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.0 ms]
      [Humongous Register: 0.1 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.0 ms]
   [Eden: 0.0B(512.0M)->0.0B(512.0M) Survivors: 0.0B->0.0B Heap:
10237.7M(10240.0M)->10237.7M(10240.0M)]
Heap after GC invocations=1531 (full 412):
 garbage-first heap   total 10485760K, used 10483431K [0x0000000540000000,
0x0000000540405000, 0x00000007c0000000)
  region size 4096K, 0 young (0K), 0 survivors (0K)
 Metaspace       used 70694K, capacity 75070K, committed 75260K, reserved
1116160K
  class space    used 7674K, capacity 8836K, committed 8956K, reserved
1048576K
}
 [Times: user=0.01 sys=0.00, real=0.01 secs]
2021-01-28T21:24:18.400+0800: 34029.529: Total time for which application
threads were stopped: 0.0044183 seconds, Stopping threads took: 0.0000500
seconds
{Heap before GC invocations=1531 (full 412):
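
Reading the log above: the heap is essentially full before and after the 
pause (10237.7M of 10240.0M), the young generation is empty, and the 
collection reclaims nothing -- and the "full 412" counter shows 412 full 
GCs so far. One quick way to watch that pile up, using the gc log path from 
the JVM flags quoted below (rotated logs may carry numeric suffixes):

    grep -c "Full GC" /mnt/ume/logs/solr2/solr_gc.log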

On Thu, Jan 28, 2021 at 1:23 PM Luke <lu...@gmail.com> wrote:

> Mike,
>
> No, it's not Docker. It is just one Solr node (service) which connects to
> an external ZooKeeper; below are the JVM settings and memory usage.
>
> There are 25 collections with about 2000 documents in total. I am
> wondering why Solr uses so much memory.
>
> -XX:+AlwaysPreTouch -XX:+ExplicitGCInvokesConcurrent
> -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem
> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
> -XX:+PrintTenuringDistribution -XX:+UseG1GC -XX:+UseGCLogFileRotation
> -XX:+UseLargePages -XX:-OmitStackTraceInFastThrow -XX:GCLogFileSize=20M
> -XX:MaxGCPauseMillis=250 -XX:NumberOfGCLogFiles=9
> -XX:OnOutOfMemoryError=/mnt/ume/software/solr-8.7.0-3/bin/oom_solr.sh 8983 /mnt/ume/logs/solr2
> -Xloggc:/mnt/ume/logs/solr2/solr_gc.log -Xms6g -Xmx10g -Xss256k -verbose:gc
> [image: image.png]
>
> On Thu, Jan 28, 2021 at 4:40 AM Mike Drob <md...@mdrob.com> wrote:
>
>> Are you running these in docker containers?
>>
>> Also, I’m assuming this is a typo but just in case the setting is Xmx :)
>>
>> Can you share the OOM stack trace? It’s not always running out of memory;
>> sometimes Java throws OOM for file handles or threads.
>>
>> Mike
>>
>> On Wed, Jan 27, 2021 at 10:00 PM Luke <lu...@gmail.com> wrote:
>>
>> > Shawn,
>> >
>> > It's killed by an OOME exception. The problem is that I just created empty
>> > collections and the Solr JVM keeps growing and never goes down. There is no
>> > data at all. At the beginning, I set Xxm=6G, then 10G, now 15G; Solr 8.7
>> > always uses all of it and is killed by oom.sh once JVM usage
>> > reaches 100%.
>> >
>> > I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
>> > has over 100 collections; with Xxm = 6G, the JVM is always at 4-5G.
>> >
>> >
>> >
>> > On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey <ap...@elyograg.org>
>> wrote:
>> >
>> > > On 1/27/2021 5:08 PM, Luke Oak wrote:
>> > > > I just created a few collections with no data; memory keeps growing but
>> > > > never goes down, until I get an OOM and Solr is killed
>> > > >
>> > > > Any reason?
>> > >
>> > > Was Solr killed by the operating system's oom killer or did the death
>> > > start with a Java OutOfMemoryError exception?
>> > >
>> > > If it was the OS, then the entire system doesn't have enough memory for
>> > > the demands that are made on it.  The problem might be Solr, or it might
>> > > be something else.  You will need to either reduce the amount of memory
>> > > used or increase the memory in the system.
>> > >
>> > > If it was a Java OOME exception that led to Solr being killed, then some
>> > > resource (could be heap memory, but isn't always) will be too small and
>> > > will need to be increased.  To figure out what resource, you need to see
>> > > the exception text.  Such exceptions are not always recorded -- it may
>> > > occur in a section of code that has no logging.
>> > >
>> > > Thanks,
>> > > Shawn
>> > >
>> >
>>
>

Re: Solr 8.7.0 memory leak?

Posted by Luke <lu...@gmail.com>.
Mike,

No, it's not Docker. It is just one Solr node (service) which connects to
an external ZooKeeper; below are the JVM settings and memory usage.

There are 25 collections with about 2000 documents in total. I am
wondering why Solr uses so much memory.

-XX:+AlwaysPreTouch -XX:+ExplicitGCInvokesConcurrent
-XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution
-XX:+UseG1GC -XX:+UseGCLogFileRotation -XX:+UseLargePages
-XX:-OmitStackTraceInFastThrow -XX:GCLogFileSize=20M -XX:MaxGCPauseMillis=250
-XX:NumberOfGCLogFiles=9
-XX:OnOutOfMemoryError=/mnt/ume/software/solr-8.7.0-3/bin/oom_solr.sh 8983 /mnt/ume/logs/solr2
-Xloggc:/mnt/ume/logs/solr2/solr_gc.log -Xms6g -Xmx10g -Xss256k -verbose:gc
[image: image.png]

On Thu, Jan 28, 2021 at 4:40 AM Mike Drob <md...@mdrob.com> wrote:

> Are you running these in docker containers?
>
> Also, I’m assuming this is a typo but just in case the setting is Xmx :)
>
> Can you share the OOM stack trace? It’s not always running out of memory;
> sometimes Java throws OOM for file handles or threads.
>
> Mike
>
> On Wed, Jan 27, 2021 at 10:00 PM Luke <lu...@gmail.com> wrote:
>
> > Shawn,
> >
> > It's killed by an OOME exception. The problem is that I just created empty
> > collections and the Solr JVM keeps growing and never goes down. There is no
> > data at all. At the beginning, I set Xxm=6G, then 10G, now 15G; Solr 8.7
> > always uses all of it and is killed by oom.sh once JVM usage
> > reaches 100%.
> >
> > I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
> > has over 100 collections; with Xxm = 6G, the JVM is always at 4-5G.
> >
> >
> >
> > On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey <ap...@elyograg.org>
> wrote:
> >
> > > On 1/27/2021 5:08 PM, Luke Oak wrote:
> > > > I just created a few collections with no data; memory keeps growing but
> > > > never goes down, until I get an OOM and Solr is killed
> > > >
> > > > Any reason?
> > >
> > > Was Solr killed by the operating system's oom killer or did the death
> > > start with a Java OutOfMemoryError exception?
> > >
> > > If it was the OS, then the entire system doesn't have enough memory for
> > > the demands that are made on it.  The problem might be Solr, or it might
> > > be something else.  You will need to either reduce the amount of memory
> > > used or increase the memory in the system.
> > >
> > > If it was a Java OOME exception that led to Solr being killed, then some
> > > resource (could be heap memory, but isn't always) will be too small and
> > > will need to be increased.  To figure out what resource, you need to see
> > > the exception text.  Such exceptions are not always recorded -- it may
> > > occur in a section of code that has no logging.
> > >
> > > Thanks,
> > > Shawn
> > >
> >
>

Re: Solr 8.7.0 memory leak?

Posted by Mike Drob <md...@mdrob.com>.
Are you running these in docker containers?

Also, I’m assuming this is a typo but just in case the setting is Xmx :)

Can you share the OOM stack trace? It’s not always running out of memory;
sometimes Java throws OOM for file handles or threads.
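
For the file-handle and thread variants, a couple of quick checks on the 
Solr host may help (standard Linux tools; the pid lookup assumes Solr runs 
as the usual Jetty start.jar process):

    SOLR_PID=$(pgrep -f start.jar)
    grep -iE 'open files|processes' /proc/$SOLR_PID/limits
    ls /proc/$SOLR_PID/task | wc -l   # current thread count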

Mike

On Wed, Jan 27, 2021 at 10:00 PM Luke <lu...@gmail.com> wrote:

> Shawn,
>
> It's killed by an OOME exception. The problem is that I just created empty
> collections and the Solr JVM keeps growing and never goes down. There is no
> data at all. At the beginning, I set Xxm=6G, then 10G, now 15G; Solr 8.7
> always uses all of it and is killed by oom.sh once JVM usage
> reaches 100%.
>
> I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
> has over 100 collections; with Xxm = 6G, the JVM is always at 4-5G.
>
>
>
> On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey <ap...@elyograg.org> wrote:
>
> > On 1/27/2021 5:08 PM, Luke Oak wrote:
> > > I just created a few collections with no data; memory keeps growing but
> > > never goes down, until I get an OOM and Solr is killed
> > >
> > > Any reason?
> >
> > Was Solr killed by the operating system's oom killer or did the death
> > start with a Java OutOfMemoryError exception?
> >
> > If it was the OS, then the entire system doesn't have enough memory for
> > the demands that are made on it.  The problem might be Solr, or it might
> > be something else.  You will need to either reduce the amount of memory
> > used or increase the memory in the system.
> >
> > If it was a Java OOME exception that led to Solr being killed, then some
> > resource (could be heap memory, but isn't always) will be too small and
> > will need to be increased.  To figure out what resource, you need to see
> > the exception text.  Such exceptions are not always recorded -- it may
> > occur in a section of code that has no logging.
> >
> > Thanks,
> > Shawn
> >
>

Re: Solr 8.7.0 memory leak?

Posted by Shawn Heisey <ap...@elyograg.org>.
On 1/27/2021 9:00 PM, Luke wrote:
> It's killed by an OOME exception. The problem is that I just created empty
> collections and the Solr JVM keeps growing and never goes down. There is no
> data at all. At the beginning, I set Xxm=6G, then 10G, now 15G; Solr 8.7
> always uses all of it and is killed by oom.sh once JVM usage
> reaches 100%.

We are stuck until we know what resource is running out and causing the 
OOME.  To know that we will need to see the actual exception.
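
For example, something like the following should turn up the exception text 
if Solr managed to log it (log locations vary by install; these paths are 
just common defaults):

    grep -R -B2 -A8 "OutOfMemoryError" /var/solr/logs /opt/solr/server/logs 2>/dev/null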

Thanks,
Shawn

Re: Solr 8.7.0 memory leak?

Posted by Luke <lu...@gmail.com>.
Shawn,

It's killed by an OOME exception. The problem is that I just created empty
collections and the Solr JVM keeps growing and never goes down. There is no
data at all. At the beginning, I set Xxm=6G, then 10G, now 15G; Solr 8.7
always uses all of it and is killed by oom.sh once JVM usage
reaches 100%.

I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
has over 100 collections; with Xxm = 6G, the JVM is always at 4-5G.



On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey <ap...@elyograg.org> wrote:

> On 1/27/2021 5:08 PM, Luke Oak wrote:
> > I just created a few collections with no data; memory keeps growing but
> > never goes down, until I get an OOM and Solr is killed
> >
> > Any reason?
>
> Was Solr killed by the operating system's oom killer or did the death
> start with a Java OutOfMemoryError exception?
>
> If it was the OS, then the entire system doesn't have enough memory for
> the demands that are made on it.  The problem might be Solr, or it might
> be something else.  You will need to either reduce the amount of memory
> used or increase the memory in the system.
>
> If it was a Java OOME exception that led to Solr being killed, then some
> resource (could be heap memory, but isn't always) will be too small and
> will need to be increased.  To figure out what resource, you need to see
> the exception text.  Such exceptions are not always recorded -- it may
> occur in a section of code that has no logging.
>
> Thanks,
> Shawn
>

Re: Solr 8.7.0 memory leak?

Posted by Shawn Heisey <ap...@elyograg.org>.
On 1/27/2021 5:08 PM, Luke Oak wrote:
> I just created a few collections with no data; memory keeps growing but never goes down, until I get an OOM and Solr is killed
> 
> Any reason?

Was Solr killed by the operating system's oom killer or did the death 
start with a Java OutOfMemoryError exception?

If it was the OS, then the entire system doesn't have enough memory for 
the demands that are made on it.  The problem might be Solr, or it might 
be something else.  You will need to either reduce the amount of memory 
used or increase the memory in the system.
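
To check the first case, the kernel log will say whether the oom killer 
fired; on most Linux systems something like this works (exact wording 
varies by kernel version):

    dmesg -T | grep -iE 'out of memory|oom-killer'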

If it was a Java OOME exception that led to Solr being killed, then some 
resource (could be heap memory, but isn't always) will be too small and 
will need to be increased.  To figure out what resource, you need to see 
the exception text.  Such exceptions are not always recorded -- it may 
occur in a section of code that has no logging.

Thanks,
Shawn