Posted to users@tomcat.apache.org by Christopher Schultz <ch...@christopherschultz.net> on 2024/01/03 14:23:57 UTC
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Brian,
On 12/30/23 15:42, Brian Braun wrote:
> At the beginning, this was the problem: The OOM-killer (something that I
> never knew existed) killing Tomcat unexpectedly and without any
> explanation
The explanation is always the same: some application requests memory
from the kernel, which always grants the request(!). When the
application tries to use that memory, the kernel scrambles to physically
allocate the memory on-demand and, if all the memory is gone, it will
pick a process and kill it.
There are ways to prevent this from happening, but the best way is not to
over-commit your memory.
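Both halves of that story can be checked directly on the box. A generic Linux sketch (standard procfs paths and kernel-log messages, nothing Tomcat-specific):

```shell
# Inspect the kernel's overcommit policy:
# 0 = heuristic overcommit, 1 = always grant, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Look for past oom-killer activity in the kernel log
# (may require root; errors are ignored here)
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' || true
```

If the oom-killer fired, the log entry names the victim process and its RSS at the time it was killed.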
> Not knowing how much memory I would need to satisfy the JVM, and not
> willing to migrate to more expensive Amazon instances just because I
> don't know why this is happening. And not knowing if the memory
> requirement would keep growing and growing and growing.
It might. But if your symptom is Linux oom-killer and not JVM OOME, then
the better technique is to *reduce* your heap space in the JVM.
> Then I activated the SWAP file, and I discovered that this problem stops at
> 1.5GB of memory used by the JVM. At least I am not getting more crashes
> anymore. But I consider the SWAP file as a palliative and I really want to
> know what is the root of this problem. If I don't, then maybe I should
> consider another career. I don't enjoy giving up.
Using a swap file is probably going to kill your performance. What
happens if you make your heap smaller?
> Yes, the memory used by the JVM started to grow suddenly one day, after
> several years running fine. Since I had not made any changes to my app, I
> really don't know the reason. And I really think this should not be
> happening without an explanation.
>
> I don't have any Java OOME exceptions, so it is not that my objects don't
> fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
> don't think the Heap and non-heap usage is the problem. I have been
> inspecting those and their usage seems to be normal/modest and steady. I
> can see that using the Tomcat Manager as well as several other tools (New
> Relic, VisualVM, etc).
Okay, so what you've done then is to allow a very large heap that you
mostly don't need. If/when the heap grows a lot -- possibly suddenly --
the JVM is lazy and just takes more heap space from the OS and
ultimately you run out of main memory.
The solution is to reduce the heap size.
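As a sketch of what that looks like in practice, in `$CATALINA_BASE/bin/setenv.sh` (the 512m figure is illustrative, not a recommendation for this particular app; size it to what the GC logs show you actually use):

```shell
# Cap the heap well below physical RAM, leaving headroom for JVM overhead
# (metaspace, code cache, thread stacks) and native library allocations.
# On a 2GiB box, ~512MiB heap + ~500MiB JVM overhead still leaves room
# for the OS page cache.
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx512m"
```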
> Regarding the 1GB I am giving now to the -Xms parameter: I was giving just
> a few hundred MB and I already had the problem. Actually I think it is the
> same whether I give a few hundred MB or 1GB; the JVM still starts using
> more memory after 3-4 days of running until it takes 1.5GB. But during the
> first 1-4 days it uses just a few hundred MBs.
>
> My app has been "static" as you say, but probably I have upgraded Tomcat
> and/or Java recently. I don't really remember. Maybe one of those upgrades
> brought this issue as a result. Actually, if I knew that one of those
> upgrades caused this huge spike in memory consumption and there is no way to
> avoid it, then I would accept it as a fact of life and move on. But since I
> don't know, it really bugs me.
>
> I have the same amount of users and traffic as before. I also know how much
> memory a session takes and it is fine. I have also checked the HTTP(S)
> requests to see if somehow I am getting any attempts to hack my instance
> that could be the root of this problem. Yes, I get hacking attempts by
> those bots all the time, but I don't see anything relevant there. No news.
>
> I agree with what you say now regarding the GC. I should not need to use
> those switches since I understand it should work fine without using them.
> And I don't know how to use them. And since I have never cared about using
> them for about 15 years using Java+Tomcat, why should I start now?
>
> I have also checked all my long-lasting objects. I have optimized my DB
> queries recently as you suggest now, so they don't create huge amounts of
> objects in a short period of time that the GC would have to deal with. The
> same applies to my scheduled tasks. They all run very quickly and use
> modest amounts of memory. All the other default Tomcat threads create far
> more objects.
>
> I have already activated the GC log. Is there a tool that you would suggest
> to analyze it? I haven't even opened it. I suspect that the root of my
> problem comes from the GC process indeed.
The GC logs are just text, so you can eyeball them if you'd like, but to
really get a sense of what's happening you should use some kind of
visualization tool.
It's not pretty, but gcviewer (https://github.com/chewiebug/GCViewer)
gets the job done.
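On Java 11 (which this thread is running), GC logging goes through unified logging. A sketch for setenv.sh; the log path, rotation sizes, and the gcviewer jar version are assumptions:

```shell
# Rotating GC logs with timestamps and JVM uptime on each line (Java 11
# unified logging syntax).
CATALINA_OPTS="$CATALINA_OPTS -Xlog:gc*:file=/var/log/tomcat/gc.log:time,uptime:filecount=5,filesize=20M"

# Then, offline, open the resulting log in gcviewer:
# java -jar gcviewer-1.36.jar /var/log/tomcat/gc.log
```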
If you run with a 500MiB heap and everything looks good and you have no
crashes (Linux oom-killer or Java OOME), I'd stick with that. Remember
that your total OS memory requirements will be Java heap + JVM overhead
+ whatever native memory is required by native libraries.
In production, I have an application with a 2048MiB heap whose "resident
size" in `ps` shows as 2.4GiB. So nearly half a GiB is being used on top
of that 2GiB heap. gcviewer will not show anything about the native
memory being used, so you will only be seeing part of the picture.
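A quick way to see that gap between configured heap and actual process size is `ps`. Demonstrated here against the current shell so it is self-contained; in practice substitute the Tomcat PID:

```shell
# Show the resident set size (RSS) of a process, in KiB. RSS covers the Java
# heap plus metaspace, code cache, thread stacks, and native allocations --
# everything the GC tools do NOT show you.
# $$ is the current shell; for Tomcat use something like
# $(pgrep -f org.apache.catalina.startup.Bootstrap).
rss_kib=$(ps -o rss= -p $$)
echo "resident set size: ${rss_kib} KiB"
```

Comparing that number against -Xmx over a few days tells you whether the growth is heap or native.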
Tracking native memory usage can be tricky depending upon your
environment. I would only look into that if there were something very odd
going on, like your process memory space seems to be more than 50% taken
by non-java-heap memory.
-chris
> On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
> chris@christopherschultz.net> wrote:
>
>> Brian,
>>
>> On 12/29/23 20:48, Brian Braun wrote:
>>> Hello,
>>>
>>> First of all:
>>> Christopher Schultz: You answered an email from me 6 weeks ago. You
>> helped
>>> me a lot with your suggestions. I have done a lot of research and have
>>> learnt a lot since then, so I have been able to rule out a lot of
>> potential
>>> roots for my issue. Because of that I am able to post a new more specific
>>> email. Thanks a lot!!!
>>>
>>> Now, this is my stack:
>>>
>>> - Ubuntu 22.04.3 on x86/64 with 2GB of physical RAM that has been enough
>>> for years.
>>> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
>> 2023-08-24
>>> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
>>> ......")
>>> - My app, which I developed myself, and has been running without any
>>> problems for years
>>>
>>> Well, a couple of months ago my website/Tomcat/Java started eating more
>>> and more memory after about 4-7 days. Before that it uses just a
>>> few hundred MB and is very steady, but then after a few days the memory
>>> usage suddenly grows up to 1.5GB (and then stops growing at that point,
>>> which is interesting). Between these anomalies the RAM usage is fine and
>>> very steady (as it has been for years) and it uses just about 40-50% of
>> the
>>> "Max memory" (according to what the Tomcat Manager server status shows).
>>> The 3 components of G1GC heap memory are steady and low, before and after
>>> the usage grows to 1.5GB, so it is definitely not that the heap starts
>>> requiring more and more memory. I have been using several tools to
>> monitor
>>> that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
>>> memory usage by the heap is not the problem.
>>> The Non-heaps memory usage is not the problem either. Everything there is
>>> normal, the usage is humble and even more steady.
>>>
>>> And there are no leaks, I'm sure of that. I have inspected the JVM using
>>> several tools.
>>>
>>> There are no peaks in the number of threads either. The peak is the same
>>> when the memory usage is low and when it requires 1.5GB. It stays the
>> same
>>> all the time.
>>>
>>> I have also reviewed all the scheduled tasks in my app and lowered the
>>> amount of objects they create, which was nice and entertaining. But that
>> is
>>> not the problem, I have analyzed the object creation by all the threads
>>> (and there are many) and the threads created by my scheduled tasks are
>> very
>>> humble in their memory usage, compared to many other threads.
>>>
>>> And I haven't made any relevant changes to my app in the 6-12 months
>> before
>>> this problem started occurring. It is weird that I started having this
>>> problem. Could it be that I received an update in the java version or the
>>> Tomcat version that is causing this problem?
>>>
>>> If neither the heap memory or the Non-heaps memory is the source of the
>>> growth of the memory usage, what could it be? Clearly something is
>>> happening inside the JVM that raises the memory usage. And every time it
>>> grows, it doesn't decrease. It is as if something suddenly starts
>>> "pushing" the memory usage more and more, until it stops at 1.5GB.
>>>
>>> I think that maybe the source of the problem is the garbage collector. I
>>> haven't used any of the switches that we can use to optimize that,
>>> basically because I don't know what I should do there (if I should at
>> all).
>>> I have also activated the GC log, but I don't know how to analyze it.
>>>
>>> I have also increased and decreased the value of "-Xms" parameter and it
>> is
>>> useless.
>>>
>>> Finally, maybe I should add that I activated 4GB of SWAP memory in my
>>> Ubuntu instance so at least my JVM would not be killed by the OS anymore
>>> (since the real memory is just 1.8GB). That worked and now the memory
>> usage
>>> can grow up to 1.5GB without crashing, by using the much slower SWAP
>>> memory, but I still think that this is an abnormal situation.
>>>
>>> Thanks in advance for your suggestions!
>>
>> First of all: what is the problem? Are you just worried that the number
>> of bytes taken by your JVM process is larger than it was ... sometime in
>> the past? Or are you experiencing Java OOME or Linux oom-killer or
>> anything like that?
>>
>> Not all JVMs behave this way, but most of them do: once memory is
>> "appropriated" by the JVM from the OS, it will never be released. It's
>> just too expensive an operation to shrink the heap. Plus, you told
>> the JVM "feel free to use up to 1GiB of heap" so it's taking you at your
>> word. Obviously, the native heap plus stack space for every thread plus
>> native memory for any native libraries takes up more space than just the
>> 1GiB you gave for the heap, so ... things just take up space.
>>
>> Lowering the -Xms will never reduce the maximum memory the JVM ever
>> uses. Only lowering -Xmx can do that. I always recommend setting Xms ==
>> Xmx because otherwise you are lying to yourself about your needs.
>>
>> You say you've been running this application "for years". Has it been in
>> a static environment, or have you been doing things such as upgrading
>> Java and/or Tomcat during that time? There are things that Tomcat does
>> now that it did not do in the past that sometimes require more memory to
>> manage, sometimes only at startup and sometimes for the lifetime of the
>> server. There are some things that the JVM is doing that require more
>> memory than their previous versions.
>>
>> And then there is the usage of your web application. Do you have the
>> same number of users? I've told this (short) story a few times on this
>> list, but we had a web application that ran for 10 years with only 64MiB
>> of heap and one day we started getting OOMEs. At first we just bounced
>> the service and tried looking for bugs, leaks, etc. but the heap dumps
>> were telling us everything was fine.
>>
>> The problem was user load. We simply outgrew the heap we had allocated
>> because we had more simultaneous logged-in users than we did in the
>> past, and they all had sessions, etc. We had plenty of RAM available, we
>> were just being stingy with it.
>>
>> The G1 garbage collector doesn't have very many switches to mess-around
>> with it compared to older collectors. The whole point of G1 was to "make
>> garbage collection easy". Feel free to read 30 years of lies and
>> confusion about how to best configure Java garbage collectors. At the
>> end of the day, if you don't know exactly what you are doing and/or you
>> don't have a specific problem you are trying to solve, you are better
>> off leaving everything with default settings.
>>
>> If you want to reduce the amount of RAM your application uses, set a
>> lower heap size. If that causes OOMEs, audit your application for wasted
>> memory such as too-large caches (which presumably live a long time) or
>> too-large single-transactions such as loading 10k records all at once
>> from a database. Sometimes a single request can require a whole lot of
>> memory RIGHT NOW which is only used temporarily.
>>
>> I was tracking-down something in our own application like this recently:
>> a page-generation process was causing an OOME periodically, but the JVM
>> was otherwise very healthy. It turns out we had an administrative action
>> in our application that had no limits on the amount of data that could
>> be requested from the database at once. So naive administrators were
>> able to essentially cause a query to be run that returned a huge number
>> of rows from the db, then every row was being converted into a row in an
>> HTML table in a web page. Our page-generation process builds the whole
>> page in memory before returning it, instead of streaming it back out to
>> the user, which means a single request can use many MiBs of memory just
>> for in-memory strings/byte arrays.
>>
>> If something like that happens in your application, it can pressure the
>> heap to jump from e.g. 256MiB way up to 1.5GiB and -- as I said before
>> -- the JVM is never gonna give that memory back to the OS.
>>
>> So even though everything "looks good", your heap and native memory
>> spaces are very large until you terminate the JVM.
>>
>> If you haven't already done so, I would recommend that you enable GC
>> logging. How to do that is very dependent on your JVM, version, and
>> environment. This writes GC activity details to a series of files during
>> the JVM execution. There are freely-available tools you can use to view
>> those log files in a meaningful way and draw some conclusions. You might
>> even be able to see when that "memory event" took place that caused your
>> heap memory to shoot-up. (Or maybe it's your native memory, which isn't
>> logged by the GC logger.) If you are able to see when it happened, you
>> may be able to correlate that with your application log to see what
>> happened in your application. Maybe you need a fix.
>>
>> Then again, maybe everything is totally fine and there is nothing to
>> worry about.
>>
>> -chris
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
>>
>
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Brian Braun <ja...@gmail.com>.
Hello,
It has been a long time since I received the last suggestions to my issue
here on this support list. Since then I decided to stop asking for help and
to "do my homework". To read, to watch YouTube presentations, to spend time
on StackOverflow, etc. So I have spent a lot of time on this and I think I
have learned a lot which is nice.
This is what I have learned lately:
I definitely don't have a leak in my code (or in the libraries I am using,
as far as I understand). And my code is not creating a significant amount
of objects that would use too much memory.
The heap memory (the 3 G1s) and non-heap memory (3 CodeHeaps + compressed
class space + metaspace) together use just a few hundred MBs and
their usage is steady and normal.
I discovered the JCMD command to perform the native memory tracking. When
running it, after 3-4 days since I started Tomcat, I found out that the
compiler was using hundreds of MB and that is exactly why the Tomcat
process starts abusing the memory! This is what I saw when executing "sudo
jcmd <TomcatProcessID> VM.native_memory scale=MB":
Compiler (reserved=340MB, committed=340MB)
         (arena=340MB #10)
All the other categories (Class, Thread, Code, GC, Internal, Symbol, etc)
look normal since they use a low amount of memory and they don't grow.
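jcmd can also diff two NMT snapshots, which makes a slowly growing category like this one obvious. A sketch, assuming Tomcat was started with `-XX:NativeMemoryTracking=summary` and that `pgrep -f catalina` finds the right process:

```shell
# Record a baseline shortly after startup, while usage is still normal.
PID=$(pgrep -f catalina | head -n1)
jcmd "$PID" VM.native_memory baseline

# Days later, once the growth has happened, diff against the baseline.
# The category whose "+NNNMB" delta dominates is the culprit.
jcmd "$PID" VM.native_memory summary.diff scale=MB
```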
Then I discovered the Jemalloc tool (http://jemalloc.net) and its jeprof
tool, so I started launching Tomcat using it. Then, 3-4 days after
Tomcat started, I was able to create some GIF images from the dumps that
Jemalloc creates. The GIF files show the problem: 75-90% of the memory is
being used by some weird activity in the compiler! It seems that something
called "The C2 compile/JIT compiler" starts doing something after 3-4 days,
and that creates the leak. Why after 3-4 days and not sooner? I don't know.
I am attaching the GIF in this email.
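For anyone wanting to reproduce this setup: a sketch of launching Tomcat under jemalloc with allocation profiling enabled. The library path, the MALLOC_CONF values, and the jeprof invocation are assumptions for a typical Ubuntu install, not the exact commands used above:

```shell
# Interpose jemalloc for all native allocations in the Tomcat JVM.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
# Enable profiling; dump a .heap profile roughly every 1GiB (2^30 bytes)
# of cumulative allocation.
export MALLOC_CONF=prof:true,lg_prof_interval:30,prof_prefix:jeprof
"$CATALINA_HOME/bin/catalina.sh" run

# Later, render one of the dumps as a call-graph image:
# jeprof --show_bytes --gif "$(command -v java)" jeprof.*.heap > profile.gif
```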
Does anybody know how to deal with this? I have been struggling with this
issue already for 3 months. At least now I know that this is a native
memory leak, but at this point I feel lost.
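A heavily hedged sketch, not a confirmed fix: if NMT points at the Compiler category (C2 arena memory), these are knobs commonly used on JDK 11 to bound JIT-compiler memory. The values are illustrative assumptions, and each trades away some peak throughput:

```shell
# Fewer compiler threads means fewer (and collectively smaller) C2 arenas.
CATALINA_OPTS="$CATALINA_OPTS -XX:CICompilerCount=2"

# More drastic: stop tiered compilation at level 1 (C1 only), which avoids
# the C2 compiler entirely at a noticeable throughput cost.
# CATALINA_OPTS="$CATALINA_OPTS -XX:TieredStopAtLevel=1"
```

Whether either helps depends on whether the growth really is C2 arena usage rather than something merely attributed to it, so verify with an NMT diff before and after.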
By the way, I'm running my website using Tomcat 9.0.58, Java
"11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
using Eclipse and compiling my WAR file with a "Compiler compliance
level:11".
Thanks in advance!
Brian
On Mon, Jan 8, 2024 at 10:05 AM Christopher Schultz <
chris@christopherschultz.net> wrote:
> Brian,
>
> On 1/5/24 17:21, Brian Braun wrote:
> > Hello Chirstopher,
> >
> > First of all: thanks a lot for your responses!
> >
> > On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> >> Brian,
> >>
> >> On 12/30/23 15:42, Brian Braun wrote:
> >>> At the beginning, this was the problem: The OOM-killer (something that
> I
> >>> never knew existed) killing Tomcat unexpectedly and without any
> >>> explanation
> >>
> >> The explanation is always the same: some application requests memory
> >> from the kernel, which always grants the request(!). When the
> >> application tries to use that memory, the kernel scrambles to physically
> >> allocate the memory on-demand and, if all the memory is gone, it will
> >> pick a process and kill it.
> >
> > Yes, that was happening to me until I set up the SWAP file and now at
> least
> > the Tomcat process is not being killed anymore.
>
> Swap can get you out of a bind like this, but it will ruin your
> performance. If you care more about stability (and believe me, it's a
> reasonable decision), then leave the swap on. But swap will hurt (a)
> performance, (b) SSD lifetime, and (c) storage/transaction costs depending
> upon your environment. Besides, you either need the memory or you do
> not. It's rare to "sometimes" need the memory.
>
> >> Using a swap file is probably going to kill your performance. What
> >> happens if you make your heap smaller?
> >
> > Yes, in fact the performance is suffering and that is why I don't
> consider
> > the swap file as a solution.
>
> :D
>
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present (the Tomcat
> > process grows in memory usage up to 1.5GB combining real memory and swap
> > memory).
>
> Okay, that definitely indicates a problem that needs to be solved.
>
> I've seen things like native ZIP handling code leaking native memory,
> but I know that Tomcat does not leak like that. If you do anything in
> your application that might leave file handles open, it could be
> contributing to the problem.
>
> > As I have explained in another email recently, I think that neither heap
> > usage nor non-heap usage are the problem. I have been monitoring them and
> > their requirements have always stayed low enough, so I could leave the
> -Xms
> > parameter with about 300-400 MB and that would be enough.
>
> Well, between heap and non-heap, that's all the memory. There is no
> non-heap-non-non-heap memory to be counted. Technically stack space is
> the same as "native memory" but usually you experience other problems if
> you have too many threads and they are running out of stack space.
>
> > There is something else in the JVM that is using all that memory and I
> > still don't know what it is. And I think it doesn't care about the value
> I
> > give to -Xmx, it uses all the memory it wants. Doing what? I don't know.
>
> It might be time to start digging into those native memory-tracking tools.
>
> > Maybe I am not understanding your suggestion.
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present. In fact the
> > problem started with a low amount for -Xmx.
>
> No, you are understanding my suggestion(s). But if you are hitting Linux
> oom-killer with a 300MiB heap and a process size that is growing to 1.5G
> then getting killed... it's time to dig deeper.
>
> -chris
TOMCAT CERTIFICATE RENEWAL
Posted by "Ganesan, Prabu" <pr...@capgemini.com.INVALID>.
Hi Guys,
How do we renew the certificate in Tomcat? Our Tomcat certificate is about to expire next week. Can anyone provide the renewal steps?
Tomcat version : 8.5.5.0
Thanks & Regards,
_________________________________________________________
PrabuGanesan
Consultant|MS-Nordics
capgemini India Pvt. Ltd. | Bangalore
Contact: +91 8526554535
Email: prabhu.c.ganesan@capgemini.com
www.capgemini.com
People matter, results count.
__________________________________________________________
Connect with Capgemini:
Please consider the environment and do not print this email unless absolutely necessary.
Capgemini encourages environmental awareness.
-----Original Message-----
From: Christopher Schultz <ch...@christopherschultz.net>
Sent: Friday, February 16, 2024 8:56 PM
To: users@tomcat.apache.org
Subject: Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Christopher Schultz <ch...@christopherschultz.net>.
Chuck and Brian,
On 2/15/24 10:53, Chuck Caldarale wrote:
>
>> On Feb 15, 2024, at 09:04, Brian Braun <ja...@gmail.com> wrote:
>>
>> I discovered the JCMD command to perform the native memory tracking. When
>> running it, after 3-4 days since I started Tomcat, I found out that the
>> compiler was using hundreds of MB and that is exactly why the Tomcat
>> process starts abusing the memory! This is what I saw when executing "sudo jcmd <TomcatProcessID> VM.native_memory scale=MB":
>>
> >> Compiler (reserved=340MB, committed=340MB)
> >> (arena=340MB #10)
>>
>> Then I discovered the Jemalloc tool (http://jemalloc.net <http://jemalloc.net/>) and its jeprof
>> tool, so I started launching Tomcat using it. Then, after 3-4 days after
>> Tomcat starts I was able to create some GIF images from the dumps that
>> Jemalloc creates. The GIF files show the problem: 75-90% of the memory is
>> being used by some weird activity in the compiler! It seems that something
>> called "The C2 compile/JIT compiler" starts doing something after 3-4 days,
>> and that creates the leak. Why after 3-4 days and not sooner? I don't know.
>
>
> There have been numerous bugs filed with OpenJDK for C2 memory leaks over the past few years, mostly related to recompiling certain methods. The C2 compiler kicks in when fully optimizing methods, and it may recompile methods after internal instrumentation shows that additional performance can be obtained by doing so.
>
>
>> I am attaching the GIF in this email.
>
>
> Attachments are stripped on this mailing list.
:(
I'd love to see these.
>> Does anybody know how to deal with this?
>
>
> You could disable the C2 compiler temporarily, and just let C1 handle your code. Performance will be somewhat degraded, but may well still be acceptable. Add the following to the JVM options when you launch Tomcat:
>
> -XX:TieredStopAtLevel=1
>
>
>> By the way, I'm running my website using Tomcat 9.0.58, Java
>> "11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
>> using Eclipse and compiling my WAR file with a "Compiler compliance
>> level:11".
>
>
> You could try a more recent JVM version; JDK 11 was first released over 5 years ago, although it is still being maintained.
There is an 11.0.22 -- just a patch-release away from what you appear to
have. I'm not sure if it's offered through your package-manager, but you
could give it a try directly from e.g. Eclipse Adoptium / Temurin.
Honestly, if your code runs on Java 11, it's very likely that it will
run just fine on Java 17 or Java 21. Debian has packages for Java 17 for
sure, so I suspect Ubuntu will have them available as well.
Debian-based distros will allow you to install and run multiple
JDKs/JREs in parallel, so you can install Java 17 (or 21) without
cutting-off access to Java 11 if you still want it.
-chris
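On a Debian/Ubuntu system, the parallel install described above can look roughly like this (package names and paths are assumptions; check what your distro actually ships):

```shell
# Install Java 17 alongside the existing Java 11; Debian-based distros
# let both coexist without removing either.
sudo apt-get update
sudo apt-get install -y openjdk-17-jdk

# List the JDKs the alternatives system knows about.
update-java-alternatives --list

# Rather than switching the system default, point only Tomcat at the
# new JDK via $CATALINA_BASE/bin/setenv.sh (path is an assumption):
echo 'JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64' | \
    sudo tee -a /opt/tomcat/bin/setenv.sh
```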
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Chuck Caldarale <n8...@gmail.com>.
> On Feb 15, 2024, at 09:04, Brian Braun <ja...@gmail.com> wrote:
>
> I discovered the JCMD command to perform the native memory tracking. When
> running it, after 3-4 days since I started Tomcat, I found out that the
> compiler was using hundreds of MB and that is exactly why the Tomcat
> process starts abusing the memory! This is what I saw when executing "sudo jcmd <TomcatProcessID> VM.native_memory scale=MB":
>
> Compiler (reserved=340MB, committed=340MB)
> (arena=340MB #10)
>
> Then I discovered the Jemalloc tool (http://jemalloc.net <http://jemalloc.net/>) and its jeprof
> tool, so I started launching Tomcat using it. Then, after 3-4 days after
> Tomcat starts I was able to create some GIF images from the dumps that
> Jemalloc creates. The GIF files show the problem: 75-90% of the memory is
> being used by some weird activity in the compiler! It seems that something
> called "The C2 compile/JIT compiler" starts doing something after 3-4 days,
> and that creates the leak. Why after 3-4 days and not sooner? I don't know.
There have been numerous bugs filed with OpenJDK for C2 memory leaks over the past few years, mostly related to recompiling certain methods. The C2 compiler kicks in when fully optimizing methods, and it may recompile methods after internal instrumentation shows that additional performance can be obtained by doing so.
> I am attaching the GIF in this email.
Attachments are stripped on this mailing list.
> Does anybody know how to deal with this?
You could disable the C2 compiler temporarily, and just let C1 handle your code. Performance will be somewhat degraded, but may well still be acceptable. Add the following to the JVM options when you launch Tomcat:
-XX:TieredStopAtLevel=1
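In a standard Tomcat installation that flag can be set in setenv.sh, e.g. (the file path is an assumption; CATALINA_OPTS is sourced by catalina.sh on startup):

```shell
# $CATALINA_BASE/bin/setenv.sh -- created if it does not exist; Tomcat's
# startup scripts source it automatically.
# Stop tiered compilation at level 1: only the C1 compiler runs, so the
# C2 arenas that appear to be leaking are never allocated.
CATALINA_OPTS="$CATALINA_OPTS -XX:TieredStopAtLevel=1"
```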
> By the way, I'm running my website using Tomcat 9.0.58, Java
> "11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
> using Eclipse and compiling my WAR file with a "Compiler compliance
> level:11".
You could try a more recent JVM version; JDK 11 was first released over 5 years ago, although it is still being maintained.
- Chuck
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Brian Braun <ja...@gmail.com>.
Hello,
It has been a long time since I received the last suggestions to my issue
here on this support list. Since then I decided to stop asking for help and
to "do my homework". To read, to watch YouTube presentations, to spend time
on StackOverflow, etc. So I have spent a lot of time on this and I think I
have learned a lot which is nice.
This is what I have learned lately:
I definitely don't have a leak in my code (or in the libraries I am using,
as far as I understand). And my code is not creating a significant amount
of objects that would use too much memory.
The heap memory (the 3 G1 spaces) and non-heap memory (3 CodeHeaps +
compressed class space + metaspace) together use just a few hundred MB,
and their usage is steady and normal.
I discovered the jcmd command to perform native memory tracking. When
running it, 3-4 days after I started Tomcat, I found out that the
compiler was using hundreds of MB, and that is exactly why the Tomcat
process starts abusing the memory! This is what I saw when executing "sudo
jcmd <TomcatProcessID> VM.native_memory scale=MB":
Compiler (reserved=340MB, committed=340MB)
(arena=340MB #10)
All the other categories (Class, Thread, Code, GC, Internal, Symbol, etc)
look normal since they use a low amount of memory and they don't grow.
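For anyone wanting to reproduce this measurement: native memory tracking has to be enabled at JVM startup before jcmd can query it. A rough sketch (the flags and jcmd subcommands are standard HotSpot options; the setenv.sh location is an assumption):

```shell
# 1. Enable NMT at startup, e.g. in $CATALINA_BASE/bin/setenv.sh.
#    "summary" is cheaper; "detail" also records allocation call sites.
CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=detail"

# 2. Once Tomcat is running, record a baseline to diff against later:
sudo jcmd <TomcatProcessID> VM.native_memory baseline

# 3. Days later, show per-category growth relative to that baseline --
#    a steadily growing "Compiler" arena shows up here:
sudo jcmd <TomcatProcessID> VM.native_memory summary.diff scale=MB
```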
Then I discovered the Jemalloc tool (http://jemalloc.net) and its jeprof
tool, so I started launching Tomcat using it. Then, 3-4 days after Tomcat
starts, I was able to create some GIF images from the dumps that Jemalloc
creates. The GIF files show the problem: 75-90% of the memory is being
used by some weird activity in the compiler! It seems that something
called "the C2 compiler/JIT compiler" starts doing something after 3-4
days, and that creates the leak. Why after 3-4 days and not sooner? I don't know.
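For reference, the jemalloc profiling setup described above looks roughly like this (library path and dump interval are assumptions; note that jemalloc must be built with --enable-prof for heap profiling to work, which distro packages often are not):

```shell
# Preload jemalloc so it replaces glibc malloc for the Tomcat JVM, and
# dump a heap profile roughly every 2^30 bytes (1 GiB) allocated:
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
export MALLOC_CONF="prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof"
$CATALINA_HOME/bin/startup.sh

# Later, render the accumulated dumps as a call-graph image:
jeprof --show_bytes --gif "$(which java)" /tmp/jeprof.*.heap > profile.gif
```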
I am attaching the GIF in this email.
Does anybody know how to deal with this? I have been struggling with this
issue already for 3 months. At least now I know that this is a native
memory leak, but at this point I feel lost.
By the way, I'm running my website using Tomcat 9.0.58, Java
"11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
using Eclipse and compiling my WAR file with a "Compiler compliance
level:11".
Thanks in advance!
Brian
On Mon, Jan 8, 2024 at 10:05 AM Christopher Schultz <
chris@christopherschultz.net> wrote:
> Brian,
>
> On 1/5/24 17:21, Brian Braun wrote:
> > Hello Chirstopher,
> >
> > First of all: thanks a lot for your responses!
> >
> > On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> >> Brian,
> >>
> >> On 12/30/23 15:42, Brian Braun wrote:
> >>> At the beginning, this was the problem: The OOM-killer (something that
> I
> >>> never knew existed) killing Tomcat unexpectedly and without any
> >>> explanation
> >>
> >> The explanation is always the same: some application requests memory
> >> from the kernel, which always grants the request(!). When the
> >> application tries to use that memory, the kernel scrambles to physically
> >> allocate the memory on-demand and, if all the memory is gone, it will
> >> pick a process and kill it.
> >
> > Yes, that was happening to me until I set up the SWAP file and now at
> least
> > the Tomcat process is not being killed anymore.
>
> Swap can get you out of a bind like this, but it will ruin your
> performance. If you care more about stability (and believe me, it's a
> reasonable decision), then leave the swap on. But swap will kill (a)
> performance (b) SSD lifetime and (c) storage/transaction costs depending
> upon your environment. Besides, you either need the memory or you do
> not. It's rare to "sometimes" need the memory.
>
> >> Using a swap file is probably going to kill your performance. What
> >> happens if you make your heap smaller?
> >
> > Yes, in fact the performance is suffering and that is why I don't
> consider
> > the swap file as a solution.
>
> :D
>
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present (the Tomcat
> > process grows in memory usage up to 1.5GB combining real memory and swap
> > memory).
>
> Okay, that definitely indicates a problem that needs to be solved.
>
> I've seen things like native ZIP handling code leaking native memory,
> but I know that Tomcat does not leak like that. If you do anything in
> your application that might leave file handles open, it could be
> contributing to the problem.
>
> > As I have explained in another email recently, I think that neither heap
> > usage nor non-heap usage are the problem. I have been monitoring them and
> > their requirements have always stayed low enough, so I could leave the
> -Xms
> > parameter with about 300-400 MB and that would be enough.
>
> Well, between heap and non-heap, that's all the memory. There is no
> non-heap-non-non-heap memory to be counted. Technically stack space is
> the same as "native memory" but usually you experience other problems if
> you have too many threads and they are running out of stack space.
>
> > There is something else in the JVM that is using all that memory and I
> > still don't know what it is. And I think it doesn't care about the value
> I
> > give to -Xmx, it uses all the memory it wants. Doing what? I don't know.
>
> It might be time to start digging into those native memory-tracking tools.
>
> > Maybe I am not understanding your suggestion.
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present. In fact the
> > problem started with a low amount for -Xmx.
>
> No, you are understanding my suggestion(s). But if you are hitting Linux
> oom-killer with a 300MiB heap and a process size that is growing to 1.5G
> then getting killed... it's time to dig deeper.
>
> -chris
>
> >>> On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
> >>> chris@christopherschultz.net> wrote:
> >>>
> >>>> Brian,
> >>>>
> >>>> On 12/29/23 20:48, Brian Braun wrote:
> >>>>> Hello,
> >>>>>
> >>>>> First of all:
> >>>>> Christopher Schultz: You answered an email from me 6 weeks ago. You
> >>>> helped
> >>>>> me a lot with your suggestions. I have done a lot of research and
> have
> >>>>> learnt a lot since then, so I have been able to rule out a lot of
> >>>> potential
> >>>>> roots for my issue. Because of that I am able to post a new more
> >> specific
> >>>>> email. Thanks a lot!!!
> >>>>>
> >>>>> Now, this is my stack:
> >>>>>
> >>>>> - Ubuntu 22.04.3 on x86/64 with 2GB of physical RAM that has been
> >> enough
> >>>>> for years.
> >>>>> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
> >>>> 2023-08-24
> >>>>> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m
> >> -Xms1000m
> >>>>> ......")
> >>>>> - My app, which I developed myself, and has been running without any
> >>>>> problems for years
> >>>>>
> >>>>> Well, a couple of months ago my website/Tomcat/Java started eating
> more
> >>>> and
> >>>>> more memory about after about 4-7 days. The previous days it uses
> just
> >> a
> >>>>> few hundred MB and is very steady, but then after a few days the
> memory
> >>>>> usage suddenly grows up to 1.5GB (and then stops growing at that
> point,
> >>>>> which is interesting). Between these anomalies the RAM usage is fine
> >> and
> >>>>> very steady (as it has been for years) and it uses just about 40-50%
> of
> >>>> the
> >>>>> "Max memory" (according to what the Tomcat Manager server status
> >> shows).
> >>>>> The 3 components of G1GC heap memory are steady and low, before and
> >> after
> >>>>> the usage grows to 1.5GB, so it is definitely not that the heap
> starts
> >>>>> requiring more and more memory. I have been using several tools to
> >>>> monitor
> >>>>> that (New Relic, VisualVM and JDK Mission Control) so I'm sure that
> the
> >>>>> memory usage by the heap is not the problem.
> >>>>> The Non-heaps memory usage is not the problem either. Everything
> there
> >> is
> >>>>> normal, the usage is humble and even more steady.
> >>>>>
> >>>>> And there are no leaks, I'm sure of that. I have inspected the JVM
> >> using
> >>>>> several tools.
> >>>>>
> >>>>> There are no peaks in the number of threads either. The peak is the
> >> same
> >>>>> when the memory usage is low and when it requires 1.5GB. It stays the
> >>>> same
> >>>>> all the time.
> >>>>>
> >>>>> I have also reviewed all the scheduled tasks in my app and lowered
> the
> >>>>> amount of objects they create, which was nice and entertaining. But
> >> that
> >>>> is
> >>>>> not the problem, I have analyzed the object creation by all the
> threads
> >>>>> (and there are many) and the threads created by my scheduled tasks
> are
> >>>> very
> >>>>> humble in their memory usage, compared to many other threads.
> >>>>>
> >>>>> And I haven't made any relevant changes to my app in the 6-12 months
> >>>> before
> >>>>> this problem started occurring. It is weird that I started having
> this
> >>>>> problem. Could it be that I received an update in the java version or
> >> the
> >>>>> Tomcat version that is causing this problem?
> >>>>>
> >>>>> If neither the heap memory or the Non-heaps memory is the source of
> the
> >>>>> growth of the memory usage, what could it be? Clearly something is
> >>>>> happening inside the JVM that raises the memory usage. And everytime
> it
> >>>>> grows, it doesn't decrease. It is like if something suddenly starts
> >>>>> "pushing" the memory usage more and more, until it stops at 1.5GB.
> >>>>>
> >>>>> I think that maybe the source of the problem is the garbage
> collector.
> >> I
> >>>>> haven't used any of the switches that we can use to optimize that,
> >>>>> basically because I don't know what I should do there (if I should at
> >>>> all).
> >>>>> I have also activated the GC log, but I don't know how to analyze it.
> >>>>>
> >>>>> I have also increased and decreased the value of "-Xms" parameter and
> >> it
> >>>> is
> >>>>> useless.
> >>>>>
> >>>>> Finally, maybe I should add that I activated 4GB of SWAP memory in my
> >>>>> Ubuntu instance so at least my JVM would not be killed by the OS
> >> anymore
> >>>>> (since the real memory is just 1.8GB). That worked and now the memory
> >>>> usage
> >>>>> can grow up to 1.5GB without crashing, by using the much slower SWAP
> >>>>> memory, but I still think that this is an abnormal situation.
> >>>>>
> >>>>> Thanks in advance for your suggestions!
> >>>>
> >>>> First of all: what is the problem? Are you just worried that the
> number
> >>>> of bytes taken by your JVM process is larger than it was ... sometime
> in
> >>>> the past? Or are you experiencing Java OOME of Linux oom-killer or
> >>>> anything like that?
> >>>>
> >>>> Not all JVMs behave this way, most most of them do: once memory is
> >>>> "appropriated" by the JVM from the OS, it will never be released. It's
> >>>> just too expensive of an operation to shrink the heap; plus, you told
> >>>> the JVM "feel free to use up to 1GiB of heap" so it's taking you at
> your
> >>>> word. Obviously, the native heap plus stack space for every thread
> plus
> >>>> native memory for any native libraries takes up more space than just
> the
> >>>> 1GiB you gave for the heap, so ... things just take up space.
> >>>>
> >>>> Lowering the -Xms will never reduce the maximum memory the JVM ever
> >>>> uses. Only lowering -Xmx can do that. I always recommend setting Xms
> ==
> >>>> Xmx because otherwise you are lying to yourself about your needs.
> >>>>
> >>>> You say you've been running this application "for years". Has it been
> in
> >>>> a static environment, or have you been doing things such as upgrading
> >>>> Java and/or Tomcat during that time? There are things that Tomcat does
> >>>> now that it did not do in the past that sometimes require more memory
> to
> >>>> manage, sometimes only at startup and sometimes for the lifetime of
> the
> >>>> server. There are some things that the JVM is doing that require more
> >>>> memory than their previous versions.
> >>>>
> >>>> And then there is the usage of your web application. Do you have the
> >>>> same number of users? I've told this (short) story a few times on this
> this
> >>>> list, but we had a web application that ran for 10 years with only
> 64MiB
> >>>> of heap and one day we started getting OOMEs. At first we just bounced
> >>>> the service and tried looking for bugs, leaks, etc. but the heap dumps
> >>>> were telling us everything was fine.
> >>>>
> >>>> The problem was user load. We simply outgrew the heap we had allocated
> >>>> because we had more simultaneous logged-in users than we did in the
> >>>> past, and they all had sessions, etc. We had plenty of RAM available,
> we
> >>>> were just being stingy with it.
> >>>>
> >>>> The G1 garbage collector doesn't have very many switches to
> mess-around
> >>>> with it compared to older collectors. The whole point of G1 was to
> "make
> >>>> garbage collection easy". Feel free to read 30 years of lies and
> >>>> confusion about how to best configure Java garbage collectors. At the
> >>>> end of the day, if you don't know exactly what you are doing and/or
> you
> >>>> don't have a specific problem you are trying to solve, you are better
> >>>> off leaving everything with default settings.
> >>>>
> >>>> If you want to reduce the amount of RAM your application uses, set a
> >>>> lower heap size. If that causes OOMEs, audit your application for
> wasted
> >>>> memory such as too-large caches (which presumably live a long time) or
> >>>> too-large single-transactions such as loading 10k records all at once
> >>>> from a database. Sometimes a single request can require a whole lot of
> >>>> memory RIGHT NOW which is only used temporarily.
> >>>>
> >>>> I was tracking-down something in our own application like this
> recently:
> >>>> a page-generation process was causing an OOME periodically, but the
> JVM
> >>>> was otherwise very healthy. It turns out we had an administrative
> action
> >>>> in our application that had no limits on the amount of data that could
> >>>> be requested from the database at once. So naive administrators were
> >>>> able to essentially cause a query to be run that returned a huge
> number
> >>>> of rows from the db, then every row was being converted into a row in
> an
> >>>> HTML table in a web page. Our page-generation process builds the whole
> >>>> page in memory before returning it, instead of streaming it back out
> to
> >>>> the user, which means a single request can use many MiBs of memory
> just
> >>>> for in-memory strings/byte arrays.
> >>>>
> >>>> If something like that happens in your application, it can pressure
> the
> >>>> heap to jump from e.g. 256MiB way up to 1.5GiB and -- as I said before
> >>>> -- the JVM is never gonna give that memory back to the OS.
> >>>>
> >>>> So even though everything "looks good", your heap and native memory
> >>>> spaces are very large until you terminate the JVM.
> >>>>
> >>>> If you haven't already done so, I would recommend that you enable GC
> >>>> logging. How to do that is very dependent on your JVM, version, and
> >>>> environment. This writes GC activity details to a series of files
> during
> >>>> the JVM execution. There are freely-available tools you can use to
> view
> >>>> those log files in a meaningful way and draw some conclusions. You
> might
> >>>> even be able to see when that "memory event" took place that caused
> your
> >>>> heap memory to shoot-up. (Or maybe it's your native memory, which
> isn't
> >>>> logged by the GC logger.) If you are able to see when it happened, you
> >>>> may be able to correlate that with your application log to see what
> >>>> happened in your application. Maybe you need a fix.
> >>>>
> >>>> Then again, maybe everything is totally fine and there is nothing to
> >>>> worry about.
> >>>>
> >>>> -chris
> >>>>
> >>>> ---------------------------------------------------------------------
> >>>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> >>>> For additional commands, e-mail: users-help@tomcat.apache.org
> >>>>
> >>>>
> >>>
> >>
> >>
> >>
> >
>
>
>
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Christopher Schultz <ch...@christopherschultz.net>.
Brian,
On 1/5/24 17:21, Brian Braun wrote:
> Hello Christopher,
>
> First of all: thanks a lot for your responses!
>
> On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
> chris@christopherschultz.net> wrote:
>
>> Brian,
>>
>> On 12/30/23 15:42, Brian Braun wrote:
>>> At the beginning, this was the problem: The OOM-killer (something that I
>>> never knew existed) killing Tomcat unexpectedly and without any
>>> explanation
>>
>> The explanation is always the same: some application requests memory
>> from the kernel, which always grants the request(!). When the
>> application tries to use that memory, the kernel scrambles to physically
>> allocate the memory on-demand and, if all the memory is gone, it will
>> pick a process and kill it.
>
> Yes, that was happening to me until I set up the SWAP file and now at least
> the Tomcat process is not being killed anymore.
Swap can get you out of a bind like this, but it will ruin your
performance. If you care more about stability (and believe me, it's a
reasonable decision), then leave the swap on. But swap will hurt (a)
performance, (b) SSD lifetime, and (c) storage/transaction costs,
depending upon your environment. Besides, you either need the memory or
you do not. It's rare to "sometimes" need the memory.
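As a quick sketch (assuming a typical Linux box, reading straight from /proc), you can see how much of the box and of the Tomcat process is actually sitting in swap; the PID assignment below is a placeholder:

```shell
# Box-wide swap usage straight from the kernel
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Swap used by a single process: VmSwap in /proc/<pid>/status
PID=$$   # placeholder: substitute the Tomcat JVM's pid, e.g. from `pgrep -f Bootstrap`
grep VmSwap /proc/$PID/status
```

If VmSwap for the JVM stays near zero while the heap is sized sanely, swap is doing little for you and costing latency on every fault.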
>> Using a swap file is probably going to kill your performance. What
>> happens if you make your heap smaller?
>
> Yes, in fact the performance is suffering and that is why I don't consider
> the swap file as a solution.
:D
> I have assigned to -Xmx both small amounts (as small as 300MB) and high
> amounts (as high as 1GB) and the problem is still present (the Tomcat
> process grows in memory usage up to 1.5GB combining real memory and swap
> memory).
Okay, that definitely indicates a problem that needs to be solved.
I've seen things like native ZIP handling code leaking native memory,
but I know that Tomcat does not leak like that. If you do anything in
your application that might leave file handles open, it could be
contributing to the problem.
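One cheap check for leaked file handles on Linux (a sketch; $PID below is a placeholder for the Tomcat JVM's pid) is to count the process's open descriptors over time:

```shell
# Count open file descriptors for a process; a count that grows
# steadily across days suggests something is not being closed.
PID=$$   # placeholder: substitute the Tomcat JVM's pid
ls /proc/$PID/fd | wc -l

# Compare against the per-process limit
grep 'Max open files' /proc/$PID/limits
```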
> As I have explained in another email recently, I think that neither heap
> usage nor non-heap usage are the problem. I have been monitoring them and
> their requirements have always stayed low enough, so I could leave the -Xms
> parameter with about 300-400 MB and that would be enough.
Well, between heap and non-heap, that's all the memory the JVM itself
accounts for; there is no third category left to count. Technically
stack space is "native memory" as well, but usually you experience other
problems first if you have too many threads and they are running out of
stack space.
> There is something else in the JVM that is using all that memory and I
> still don't know what it is. And I think it doesn't care about the value I
> give to -Xmx, it uses all the memory it wants. Doing what? I don't know.
It might be time to start digging into those native memory-tracking tools.
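For example, the JVM's own native memory tracking (NMT) can be switched on and then sampled with jcmd. This is a sketch, not a prescription: the flag adds a small overhead, and `<pid>` is a placeholder for the Tomcat JVM's pid:

```shell
# 1. Start the JVM with NMT enabled (add to JAVA_OPTS / CATALINA_OPTS):
#      -XX:NativeMemoryTracking=summary    (or =detail for allocation sites)

# 2. While it runs, ask the JVM for a breakdown of its native allocations
#    (jcmd ships with the JDK):
jcmd <pid> VM.native_memory summary

# 3. Set a baseline now, then diff after the memory jump to see what grew:
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```

Note that NMT only covers memory the JVM allocates itself; native libraries calling malloc() directly still fall outside it.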
> Maybe I am not understanding your suggestion.
> I have assigned to -Xmx both small amounts (as small as 300MB) and high
> amounts (as high as 1GB) and the problem is still present. In fact the
> problem started with a low amount for -Xmx.
No, you are understanding my suggestion(s). But if you are hitting Linux
oom-killer with a 300MiB heap and a process size that is growing to 1.5G
then getting killed... it's time to dig deeper.
-chris
>>> On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
>>> chris@christopherschultz.net> wrote:
>>>
>>>> Brian,
>>>>
>>>> On 12/29/23 20:48, Brian Braun wrote:
>>>>> Hello,
>>>>>
>>>>> First of all:
>>>>> Christopher Schultz: You answered an email from me 6 weeks ago. You
>>>> helped
>>>>> me a lot with your suggestions. I have done a lot of research and have
>>>>> learnt a lot since then, so I have been able to rule out a lot of
>>>> potential
>>>>> roots for my issue. Because of that I am able to post a new more
>> specific
>>>>> email. Thanks a lot!!!
>>>>>
>>>>> Now, this is my stack:
>>>>>
>>>>> - Ubuntu 22.04.3 on x86/64 with 2GB of physical RAM that has been
>> enough
>>>>> for years.
>>>>> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
>>>> 2023-08-24
>>>>> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m
>> -Xms1000m
>>>>> ......")
>>>>> - My app, which I developed myself, and has been running without any
>>>>> problems for years
>>>>>
>>>>> Well, a couple of months ago my website/Tomcat/Java started eating more
>>>> and
>>>>> more memory after about 4-7 days. During the previous days it uses just
>> a
>>>>> few hundred MB and is very steady, but then after a few days the memory
>>>>> usage suddenly grows up to 1.5GB (and then stops growing at that point,
>>>>> which is interesting). Between these anomalies the RAM usage is fine
>> and
>>>>> very steady (as it has been for years) and it uses just about 40-50% of
>>>> the
>>>>> "Max memory" (according to what the Tomcat Manager server status
>> shows).
>>>>> The 3 components of G1GC heap memory are steady and low, before and
>> after
>>>>> the usage grows to 1.5GB, so it is definitely not that the heap starts
>>>>> requiring more and more memory. I have been using several tools to
>>>> monitor
>>>>> that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
>>>>> memory usage by the heap is not the problem.
>>>>> The Non-heaps memory usage is not the problem either. Everything there
>> is
>>>>> normal, the usage is humble and even more steady.
>>>>>
>>>>> And there are no leaks, I'm sure of that. I have inspected the JVM
>> using
>>>>> several tools.
>>>>>
>>>>> There are no peaks in the number of threads either. The peak is the
>> same
>>>>> when the memory usage is low and when it requires 1.5GB. It stays the
>>>> same
>>>>> all the time.
>>>>>
>>>>> I have also reviewed all the scheduled tasks in my app and lowered the
>>>>> amount of objects they create, which was nice and entertaining. But
>> that
>>>> is
>>>>> not the problem, I have analyzed the object creation by all the threads
>>>>> (and there are many) and the threads created by my scheduled tasks are
>>>> very
>>>>> humble in their memory usage, compared to many other threads.
>>>>>
>>>>> And I haven't made any relevant changes to my app in the 6-12 months
>>>> before
>>>>> this problem started occurring. It is weird that I started having this
>>>>> problem. Could it be that I received an update in the java version or
>> the
>>>>> Tomcat version that is causing this problem?
>>>>>
>>>>> If neither the heap memory or the Non-heaps memory is the source of the
>>>>> growth of the memory usage, what could it be? Clearly something is
>>>>> happening inside the JVM that raises the memory usage. And everytime it
>>>>> grows, it doesn't decrease. It is like if something suddenly starts
>>>>> "pushing" the memory usage more and more, until it stops at 1.5GB.
>>>>>
>>>>> I think that maybe the source of the problem is the garbage collector.
>> I
>>>>> haven't used any of the switches that we can use to optimize that,
>>>>> basically because I don't know what I should do there (if I should at
>>>> all).
>>>>> I have also activated the GC log, but I don't know how to analyze it.
>>>>>
>>>>> I have also increased and decreased the value of "-Xms" parameter and
>> it
>>>> is
>>>>> useless.
>>>>>
>>>>> Finally, maybe I should add that I activated 4GB of SWAP memory in my
>>>>> Ubuntu instance so at least my JVM would not be killed by the OS
>> anymore
>>>>> (since the real memory is just 1.8GB). That worked and now the memory
>>>> usage
>>>>> can grow up to 1.5GB without crashing, by using the much slower SWAP
>>>>> memory, but I still think that this is an abnormal situation.
>>>>>
>>>>> Thanks in advance for your suggestions!
>>>>
>>>> First of all: what is the problem? Are you just worried that the number
>>>> of bytes taken by your JVM process is larger than it was ... sometime in
>>>> the past? Or are you experiencing Java OOME of Linux oom-killer or
>>>> anything like that?
>>>>
>>>> Not all JVMs behave this way, but most of them do: once memory is
>>>> "appropriated" by the JVM from the OS, it will never be released. It's
>>>> just too expensive of an operation to shrink the heap. Plus, you told
>>>> the JVM "feel free to use up to 1GiB of heap" so it's taking you at your
>>>> word. Obviously, the native heap plus stack space for every thread plus
>>>> native memory for any native libraries takes up more space than just the
>>>> 1GiB you gave for the heap, so ... things just take up space.
>>>>
>>>> Lowering the -Xms will never reduce the maximum memory the JVM ever
>>>> uses. Only lowering -Xmx can do that. I always recommend setting Xms ==
>>>> Xmx because otherwise you are lying to yourself about your needs.
>>>>
>>>> You say you've been running this application "for years". Has it been in
>>>> a static environment, or have you been doing things such as upgrading
>>>> Java and/or Tomcat during that time? There are things that Tomcat does
>>>> now that it did not do in the past that sometimes require more memory to
>>>> manage, sometimes only at startup and sometimes for the lifetime of the
>>>> server. There are some things that the JVM is doing that require more
>>>> memory than their previous versions.
>>>>
>>>> And then there is the usage of your web application. Do you have the
>>>> same number of users? I've told this (short) story a few times on this
>>>> list, but we had a web application that ran for 10 years with only 64MiB
>>>> of heap and one day we started getting OOMEs. At first we just bounced
>>>> the service and tried looking for bugs, leaks, etc. but the heap dumps
>>>> were telling us everything was fine.
>>>>
>>>> The problem was user load. We simply outgrew the heap we had allocated
>>>> because we had more simultaneous logged-in users than we did in the
>>>> past, and they all had sessions, etc. We had plenty of RAM available, we
>>>> were just being stingy with it.
>>>>
>>>> The G1 garbage collector doesn't have very many switches to mess-around
>>>> with it compared to older collectors. The whole point of G1 was to "make
>>>> garbage collection easy". Feel free to read 30 years of lies and
>>>> confusion about how to best configure Java garbage collectors. At the
>>>> end of the day, if you don't know exactly what you are doing and/or you
>>>> don't have a specific problem you are trying to solve, you are better
>>>> off leaving everything with default settings.
>>>>
>>>> If you want to reduce the amount of RAM your application uses, set a
>>>> lower heap size. If that causes OOMEs, audit your application for wasted
>>>> memory such as too-large caches (which presumably live a long time) or
>>>> too-large single-transactions such as loading 10k records all at once
>>>> from a database. Sometimes a single request can require a whole lot of
>>>> memory RIGHT NOW which is only used temporarily.
>>>>
>>>> I was tracking-down something in our own application like this recently:
>>>> a page-generation process was causing an OOME periodically, but the JVM
>>>> was otherwise very healthy. It turns out we had an administrative action
>>>> in our application that had no limits on the amount of data that could
>>>> be requested from the database at once. So naive administrators were
>>>> able to essentially cause a query to be run that returned a huge number
>>>> of rows from the db, then every row was being converted into a row in an
>>>> HTML table in a web page. Our page-generation process builds the whole
>>>> page in memory before returning it, instead of streaming it back out to
>>>> the user, which means a single request can use many MiBs of memory just
>>>> for in-memory strings/byte arrays.
>>>>
>>>> If something like that happens in your application, it can pressure the
>>>> heap to jump from e.g. 256MiB way up to 1.5GiB and -- as I said before
>>>> -- the JVM is never gonna give that memory back to the OS.
>>>>
>>>> So even though everything "looks good", your heap and native memory
>>>> spaces are very large until you terminate the JVM.
>>>>
>>>> If you haven't already done so, I would recommend that you enable GC
>>>> logging. How to do that is very dependent on your JVM, version, and
>>>> environment. This writes GC activity details to a series of files during
>>>> the JVM execution. There are freely-available tools you can use to view
>>>> those log files in a meaningful way and draw some conclusions. You might
>>>> even be able to see when that "memory event" took place that caused your
>>>> heap memory to shoot-up. (Or maybe it's your native memory, which isn't
>>>> logged by the GC logger.) If you are able to see when it happened, you
>>>> may be able to correlate that with your application log to see what
>>>> happened in your application. Maybe you need a fix.
>>>>
>>>> Then again, maybe everything is totally fine and there is nothing to
>>>> worry about.
>>>>
>>>> -chris
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Chuck Caldarale <n8...@gmail.com>.
> On Jan 5, 2024, at 16:21, Brian Braun <ja...@gmail.com> wrote:
>>
>> Tracking native memory usage can be tricky depending upon your
>> environment. I would only look into that if there were somethng very odd
>> going on, like your process memory space seems to be more than 50% taken
>> by non-java-heap memory.
>>
> Well, actually that is my case. The heap memory (the 3 G1s) and non-heap
> memory (3 CodeHeaps + compressed class space + metaspace) together use just
> a few hundred MBs. I can see that using Tomcat Manager as well as the other
> monitoring tools. And the rest of the memory (about 1GB) is being used by
> the JVM but I don't know why or how, and that started 2 months ago. In your
> case you have just 20-25% extra memory used in a way that you don't
> understand, in my case it is about 200%.
The virtual map provided earlier doesn’t show any anomalies, but I really should have asked you to run the pmap utility on the active Tomcat process instead. The JVM heap that was active when you captured the data is this line:
c1800000-dda00000 rw-p 00000000 00:00 0
which works out to 115,200 pages or almost 472 MB. However, we don’t know how much of that virtual space was actually allocated in real memory. The pmap utility would have shown that, as seen below for Tomcat running with a 512M heap on my small Linux box. Having pmap output from your system, both before and after the high-memory event occurs, might provide some insight on what’s using up the real memory.
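Capturing the two snapshots might look like this (a sketch; the pid and file names are placeholders, and you'd run it as the tomcat user or root):

```shell
# Snapshot resident memory per mapping while things look normal;
# $PID is a placeholder for the Tomcat JVM's pid
PID=$$                      # substitute the real pid
pmap -x $PID > pmap-before.txt

# After the high-memory event, take a second snapshot and compare the
# Kbytes/RSS columns to see which mappings grew
pmap -x $PID > pmap-after.txt
diff pmap-before.txt pmap-after.txt
```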
Are you using the Tomcat manager app to show memory information? This is a quick way to display both maximum and used amounts of the various JVM memory pools.
Below is the sample pmap output for my test system; the Kbytes and RSS columns are of primary interest, notably the 527232 and 55092 for the JVM heap at address 00000000e0000000. Finding the actual offender won’t be easy, but having both before and after views may help.
- Chuck
26608: /usr/lib64/jvm/java-11-openjdk-11/bin/java -Djava.util.logging.config.file=/home/chuck/Downloads/apache-tomcat-9.0.84/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xms512M -Xmx512M -Dignore.endorsed.dirs= -classpath /home/chuck/Downloads/apache-tomcat-9.0.84/bin/bootstrap.jar:/home/chuck/Downloads/apache-tomca
Address Kbytes RSS PSS Dirty Swap Mode Mapping
00000000e0000000 527232 55092 55092 55092 0 rw-p- [ anon ]
00000001002e0000 1045632 0 0 0 0 ---p- [ anon ]
0000561efff1e000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/bin/java
0000561efff1f000 4 4 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/bin/java
0000561efff20000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/bin/java
0000561efff21000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/bin/java
0000561efff22000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/bin/java
0000561f0095c000 264 68 68 68 0 rw-p- [ anon ]
00007f45d0000000 132 36 36 36 0 rw-p- [ anon ]
00007f45d0021000 65404 0 0 0 0 ---p- [ anon ]
00007f45d4000000 132 16 16 16 0 rw-p- [ anon ]
00007f45d4021000 65404 0 0 0 0 ---p- [ anon ]
00007f45d8000000 132 40 40 40 0 rw-p- [ anon ]
00007f45d8021000 65404 0 0 0 0 ---p- [ anon ]
00007f45dc000000 132 84 84 84 0 rw-p- [ anon ]
00007f45dc021000 65404 0 0 0 0 ---p- [ anon ]
00007f45e0000000 132 16 16 16 0 rw-p- [ anon ]
00007f45e0021000 65404 0 0 0 0 ---p- [ anon ]
00007f45e4000000 132 16 16 16 0 rw-p- [ anon ]
00007f45e4021000 65404 0 0 0 0 ---p- [ anon ]
00007f45e8000000 1340 1276 1276 1276 0 rw-p- [ anon ]
00007f45e814f000 64196 0 0 0 0 ---p- [ anon ]
00007f45ec000000 132 32 32 32 0 rw-p- [ anon ]
00007f45ec021000 65404 0 0 0 0 ---p- [ anon ]
00007f45f0000000 132 44 44 44 0 rw-p- [ anon ]
00007f45f0021000 65404 0 0 0 0 ---p- [ anon ]
00007f45f4000000 132 52 52 52 0 rw-p- [ anon ]
00007f45f4021000 65404 0 0 0 0 ---p- [ anon ]
00007f45f8000000 132 72 72 72 0 rw-p- [ anon ]
00007f45f8021000 65404 0 0 0 0 ---p- [ anon ]
00007f45fc000000 132 52 52 52 0 rw-p- [ anon ]
00007f45fc021000 65404 0 0 0 0 ---p- [ anon ]
00007f4600000000 132 88 88 88 0 rw-p- [ anon ]
00007f4600021000 65404 0 0 0 0 ---p- [ anon ]
00007f4604000000 132 32 32 32 0 rw-p- [ anon ]
00007f4604021000 65404 0 0 0 0 ---p- [ anon ]
00007f4608000000 132 4 4 4 0 rw-p- [ anon ]
00007f4608021000 65404 0 0 0 0 ---p- [ anon ]
00007f460c000000 1080 1024 1024 1024 0 rw-p- [ anon ]
00007f460c10e000 64456 0 0 0 0 ---p- [ anon ]
00007f4610000000 132 4 4 4 0 rw-p- [ anon ]
00007f4610021000 65404 0 0 0 0 ---p- [ anon ]
00007f4614000000 132 8 8 8 0 rw-p- [ anon ]
00007f4614021000 65404 0 0 0 0 ---p- [ anon ]
00007f4618000000 14724 12568 12568 12568 0 rw-p- [ anon ]
00007f4618e61000 50812 0 0 0 0 ---p- [ anon ]
00007f461c000000 6584 6584 6584 6584 0 rw-p- [ anon ]
00007f461c66e000 58952 0 0 0 0 ---p- [ anon ]
00007f4620000000 132 4 4 4 0 rw-p- [ anon ]
00007f4620021000 65404 0 0 0 0 ---p- [ anon ]
00007f4624000000 132 4 4 4 0 rw-p- [ anon ]
00007f4624021000 65404 0 0 0 0 ---p- [ anon ]
00007f4628000000 132 44 44 44 0 rw-p- [ anon ]
00007f4628021000 65404 0 0 0 0 ---p- [ anon ]
00007f462c000000 132 12 12 12 0 rw-p- [ anon ]
00007f462c021000 65404 0 0 0 0 ---p- [ anon ]
00007f4630000000 132 4 4 4 0 rw-p- [ anon ]
00007f4630021000 65404 0 0 0 0 ---p- [ anon ]
00007f4634000000 1296 1296 1296 1296 0 rw-p- [ anon ]
00007f4634144000 64240 0 0 0 0 ---p- [ anon ]
00007f4638000000 132 8 8 8 0 rw-p- [ anon ]
00007f4638021000 65404 0 0 0 0 ---p- [ anon ]
00007f463c5fd000 1536 1340 1340 1340 0 rw-p- [ anon ]
00007f463c77d000 512 0 0 0 0 ---p- [ anon ]
00007f463c7fd000 2048 2048 2048 2048 0 rw-p- [ anon ]
00007f463c9fd000 16 0 0 0 0 ---p- [ anon ]
00007f463ca01000 1008 104 104 104 0 rw-p- [ anon ]
00007f463cafd000 16 0 0 0 0 ---p- [ anon ]
00007f463cb01000 1008 100 100 100 0 rw-p- [ anon ]
00007f463cbfd000 16 0 0 0 0 ---p- [ anon ]
00007f463cc01000 1008 88 88 88 0 rw-p- [ anon ]
00007f463ccfd000 16 0 0 0 0 ---p- [ anon ]
00007f463cd01000 1008 96 96 96 0 rw-p- [ anon ]
00007f463cdfd000 16 0 0 0 0 ---p- [ anon ]
00007f463ce01000 1008 96 96 96 0 rw-p- [ anon ]
00007f463cefd000 16 0 0 0 0 ---p- [ anon ]
00007f463cf01000 1008 96 96 96 0 rw-p- [ anon ]
00007f463cffd000 16 0 0 0 0 ---p- [ anon ]
00007f463d001000 1008 96 96 96 0 rw-p- [ anon ]
00007f463d0fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d101000 1008 96 96 96 0 rw-p- [ anon ]
00007f463d1fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d201000 1008 96 96 96 0 rw-p- [ anon ]
00007f463d2fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d301000 1008 100 100 100 0 rw-p- [ anon ]
00007f463d3fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d401000 1008 108 108 108 0 rw-p- [ anon ]
00007f463d4fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d501000 1008 140 140 140 0 rw-p- [ anon ]
00007f463d5fd000 16 0 0 0 0 ---p- [ anon ]
00007f463d601000 2800 1876 1876 1876 0 rw-p- [ anon ]
00007f463d8bd000 2304 2276 2276 2276 0 rw-p- [ anon ]
00007f463dafd000 4 0 0 0 0 ---p- [ anon ]
00007f463dafe000 1024 16 16 16 0 rw-p- [ anon ]
00007f463dbfe000 4 0 0 0 0 ---p- [ anon ]
00007f463dbff000 1024 16 16 16 0 rw-p- [ anon ]
00007f463dcff000 4 0 0 0 0 ---p- [ anon ]
00007f463dd00000 2816 1808 1808 1808 0 rw-p- [ anon ]
00007f463dfc0000 33156 264 264 264 0 rw-p- [ anon ]
00007f4640021000 65404 0 0 0 0 ---p- [ anon ]
00007f4644000000 132 24 24 24 0 rw-p- [ anon ]
00007f4644021000 65404 0 0 0 0 ---p- [ anon ]
00007f4648000000 132 40 40 40 0 rw-p- [ anon ]
00007f4648021000 65404 0 0 0 0 ---p- [ anon ]
00007f464c000000 16 0 0 0 0 ---p- [ anon ]
00007f464c004000 3056 2148 2148 2148 0 rw-p- [ anon ]
00007f464c300000 16 0 0 0 0 ---p- [ anon ]
00007f464c304000 2800 1904 1904 1904 0 rw-p- [ anon ]
00007f464c5c0000 2304 2280 2280 2280 0 rw-p- [ anon ]
00007f464c800000 2496 2048 2048 2048 0 rwxp- [ anon ]
00007f464ca70000 3200 0 0 0 0 ---p- [ anon ]
00007f464cd90000 6592 6536 6536 6536 0 rwxp- [ anon ]
00007f464d400000 113440 0 0 0 0 ---p- [ anon ]
00007f46542c8000 2496 1208 1208 1208 0 rwxp- [ anon ]
00007f4654538000 117536 0 0 0 0 ---p- [ anon ]
00007f465b800000 138908 3884 0 0 0 r--s- /usr/lib64/jvm/java-11-openjdk-11/lib/modules
00007f4664000000 11300 10312 10312 10312 0 rw-p- [ anon ]
00007f4664b09000 54236 0 0 0 0 ---p- [ anon ]
00007f4668054000 212 60 15 60 152 r--s- /run/nscd/dbYMrubY (deleted)
00007f4668089000 16 0 0 0 0 ---p- [ anon ]
00007f466808d000 1008 100 100 100 0 rw-p- [ anon ]
00007f4668189000 52 52 0 0 0 r--p- /usr/lib64/libnspr4.so
00007f4668196000 140 140 0 0 0 r-xp- /usr/lib64/libnspr4.so
00007f46681b9000 44 44 0 0 0 r--p- /usr/lib64/libnspr4.so
00007f46681c4000 8 8 8 8 0 r--p- /usr/lib64/libnspr4.so
00007f46681c6000 4 4 4 4 0 rw-p- /usr/lib64/libnspr4.so
00007f46681c7000 12 12 12 12 0 rw-p- [ anon ]
00007f46681ca000 112 112 0 0 0 r--p- /usr/lib64/libnss3.so
00007f46681e6000 880 128 0 0 0 r-xp- /usr/lib64/libnss3.so
00007f46682c2000 204 0 0 0 0 r--p- /usr/lib64/libnss3.so
00007f46682f5000 28 28 28 28 0 r--p- /usr/lib64/libnss3.so
00007f46682fc000 4 4 4 4 0 rw-p- /usr/lib64/libnss3.so
00007f46682fd000 8 0 0 0 0 rw-p- [ anon ]
00007f46682ff000 4 0 0 0 0 ---p- [ anon ]
00007f4668300000 1024 8 8 8 0 rw-p- [ anon ]
00007f4668400000 16 0 0 0 0 ---p- [ anon ]
00007f4668404000 1008 88 88 88 0 rw-p- [ anon ]
00007f4668500000 16 0 0 0 0 ---p- [ anon ]
00007f4668504000 1008 8 8 8 0 rw-p- [ anon ]
00007f4668600000 16 0 0 0 0 ---p- [ anon ]
00007f4668604000 1008 32 32 32 0 rw-p- [ anon ]
00007f4668700000 16 0 0 0 0 ---p- [ anon ]
00007f4668704000 1008 36 36 36 0 rw-p- [ anon ]
00007f4668800000 16 0 0 0 0 ---p- [ anon ]
00007f4668804000 1008 8 8 8 0 rw-p- [ anon ]
00007f4668900000 16 0 0 0 0 ---p- [ anon ]
00007f4668904000 1008 8 8 8 0 rw-p- [ anon ]
00007f4668a00000 2528 160 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_COLLATE
00007f4668c7b000 32 32 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsunec.so
00007f4668c83000 96 64 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libsunec.so
00007f4668c9b000 40 0 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsunec.so
00007f4668ca5000 16 16 16 16 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsunec.so
00007f4668ca9000 8 8 8 8 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsunec.so
00007f4668cab000 56 56 0 0 0 r--p- /usr/lib64/libnssutil3.so
00007f4668cb9000 68 64 0 0 0 r-xp- /usr/lib64/libnssutil3.so
00007f4668cca000 48 48 0 0 0 r--p- /usr/lib64/libnssutil3.so
00007f4668cd6000 28 28 28 28 0 r--p- /usr/lib64/libnssutil3.so
00007f4668cdd000 4 4 4 4 0 rw-p- /usr/lib64/libnssutil3.so
00007f4668cfd000 196 196 196 196 0 rw-p- [ anon ]
00007f4668d2e000 352 128 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_CTYPE
00007f4668d86000 16 0 0 0 0 ---p- [ anon ]
00007f4668d8a000 1008 88 88 88 0 rw-p- [ anon ]
00007f4668e86000 16 0 0 0 0 ---p- [ anon ]
00007f4668e8a000 1008 88 88 88 0 rw-p- [ anon ]
00007f4668f86000 4 0 0 0 0 ---p- [ anon ]
00007f4668f87000 9732 8728 8728 8728 0 rw-p- [ anon ]
00007f4669908000 4 0 0 0 0 ---p- [ anon ]
00007f4669909000 1024 8 8 8 0 rw-p- [ anon ]
00007f4669a09000 4 0 0 0 0 ---p- [ anon ]
00007f4669a0a000 5136 24 24 24 0 rw-p- [ anon ]
00007f4669f0e000 4 0 0 0 0 ---p- [ anon ]
00007f4669f0f000 1024 16 16 16 0 rw-p- [ anon ]
00007f466a00f000 4 0 0 0 0 ---p- [ anon ]
00007f466a010000 18428 10436 10436 10436 0 rw-p- [ anon ]
00007f466b20f000 2048 2048 2048 2048 0 rw-p- [ anon ]
00007f466b40f000 4116 88 88 88 0 rw-p- [ anon ]
00007f466b814000 4 0 0 0 0 ---p- [ anon ]
00007f466b815000 1044 36 36 36 0 rw-p- [ anon ]
00007f466b91a000 920 0 0 0 0 ---p- [ anon ]
00007f466ba00000 656 656 0 0 0 r--p- /usr/lib64/libstdc++.so.6.0.32
00007f466baa4000 1156 688 0 0 0 r-xp- /usr/lib64/libstdc++.so.6.0.32
00007f466bbc5000 460 64 0 0 0 r--p- /usr/lib64/libstdc++.so.6.0.32
00007f466bc38000 52 52 52 52 0 r--p- /usr/lib64/libstdc++.so.6.0.32
00007f466bc45000 4 4 4 4 0 rw-p- /usr/lib64/libstdc++.so.6.0.32
00007f466bc46000 16 12 12 12 0 rw-p- [ anon ]
00007f466bc5a000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libextnet.so
00007f466bc5b000 4 4 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libextnet.so
00007f466bc5c000 4 0 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libextnet.so
00007f466bc5d000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libextnet.so
00007f466bc5e000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libextnet.so
00007f466bc5f000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement_ext.so
00007f466bc61000 12 12 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement_ext.so
00007f466bc64000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement_ext.so
00007f466bc65000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement_ext.so
00007f466bc66000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement_ext.so
00007f466bc67000 16 16 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnet.so
00007f466bc6b000 56 56 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libnet.so
00007f466bc79000 16 16 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnet.so
00007f466bc7d000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnet.so
00007f466bc7e000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnet.so
00007f466bc7f000 28 28 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnio.so
00007f466bc86000 28 28 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libnio.so
00007f466bc8d000 12 12 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnio.so
00007f466bc90000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnio.so
00007f466bc91000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libnio.so
00007f466bc92000 576 576 576 576 0 rw-p- [ anon ]
00007f466bd22000 888 0 0 0 0 ---p- [ anon ]
00007f466be00000 2572 2572 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/server/libjvm.so
00007f466c083000 12944 11120 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/server/libjvm.so
00007f466cd27000 2408 740 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/server/libjvm.so
00007f466cf81000 784 784 784 784 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/server/libjvm.so
00007f466d045000 236 216 216 216 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/server/libjvm.so
00007f466d080000 348 320 320 320 0 rw-p- [ anon ]
00007f466d0d8000 12 12 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement.so
00007f466d0db000 4 4 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement.so
00007f466d0dc000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement.so
00007f466d0dd000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement.so
00007f466d0de000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libmanagement.so
00007f466d0df000 132 132 132 132 0 rw-p- [ anon ]
00007f466d100000 16 0 0 0 0 ---p- [ anon ]
00007f466d104000 1008 152 152 152 0 rw-p- [ anon ]
00007f466d200000 152 152 0 0 0 r--p- /usr/lib64/libc.so.6
00007f466d226000 1456 1372 0 0 0 r-xp- /usr/lib64/libc.so.6
00007f466d392000 340 184 0 0 0 r--p- /usr/lib64/libc.so.6
00007f466d3e7000 16 16 16 16 0 r--p- /usr/lib64/libc.so.6
00007f466d3eb000 32 32 32 32 0 rw-p- /usr/lib64/libc.so.6
00007f466d3f3000 56 24 24 24 0 rw-p- [ anon ]
00007f466d401000 4 4 0 0 0 r--p- /usr/lib64/libplds4.so
00007f466d402000 4 4 0 0 0 r-xp- /usr/lib64/libplds4.so
00007f466d403000 4 0 0 0 0 r--p- /usr/lib64/libplds4.so
00007f466d404000 4 4 4 4 0 r--p- /usr/lib64/libplds4.so
00007f466d405000 4 4 4 4 0 rw-p- /usr/lib64/libplds4.so
00007f466d406000 8 8 0 0 0 r--p- /usr/lib64/libplc4.so
00007f466d408000 8 8 0 0 0 r-xp- /usr/lib64/libplc4.so
00007f466d40a000 4 0 0 0 0 r--p- /usr/lib64/libplc4.so
00007f466d40b000 4 4 4 4 0 r--p- /usr/lib64/libplc4.so
00007f466d40c000 4 4 4 4 0 rw-p- /usr/lib64/libplc4.so
00007f466d40d000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsystemconf.so
00007f466d40e000 4 4 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libsystemconf.so
00007f466d40f000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsystemconf.so
00007f466d410000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsystemconf.so
00007f466d411000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libsystemconf.so
00007f466d412000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_NUMERIC
00007f466d413000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_TIME
00007f466d414000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_MONETARY
00007f466d415000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
00007f466d416000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_PAPER
00007f466d417000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_NAME
00007f466d418000 28 28 0 0 0 r--s- /usr/lib64/gconv/gconv-modules.cache
00007f466d41f000 224 220 220 220 0 rw-p- [ anon ]
00007f466d457000 28 0 0 0 0 ---p- [ anon ]
00007f466d45e000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so
00007f466d460000 16 16 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so
00007f466d464000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so
00007f466d466000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so
00007f466d467000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so
00007f466d468000 32 32 12 12 0 rw-s- /tmp/hsperfdata_chuck/26608
00007f466d470000 212 12 0 0 0 r--s- /var/lib/nscd/passwd
00007f466d4a5000 56 56 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjava.so
00007f466d4b3000 88 88 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libjava.so
00007f466d4c9000 28 28 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjava.so
00007f466d4d0000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjava.so
00007f466d4d1000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjava.so
00007f466d4d2000 4 4 4 4 0 rw-p- [ anon ]
00007f466d4d3000 4 4 0 0 0 r--p- /usr/lib64/librt.so.1
00007f466d4d4000 4 4 0 0 0 r-xp- /usr/lib64/librt.so.1
00007f466d4d5000 4 0 0 0 0 r--p- /usr/lib64/librt.so.1
00007f466d4d6000 4 4 4 4 0 r--p- /usr/lib64/librt.so.1
00007f466d4d7000 4 4 4 4 0 rw-p- /usr/lib64/librt.so.1
00007f466d4d8000 16 16 0 0 0 r--p- /usr/lib64/libgcc_s.so.1
00007f466d4dc000 108 64 0 0 0 r-xp- /usr/lib64/libgcc_s.so.1
00007f466d4f7000 16 16 0 0 0 r--p- /usr/lib64/libgcc_s.so.1
00007f466d4fb000 4 4 4 4 0 r--p- /usr/lib64/libgcc_s.so.1
00007f466d4fc000 4 4 4 4 0 rw-p- /usr/lib64/libgcc_s.so.1
00007f466d4fd000 64 64 0 0 0 r--p- /usr/lib64/libm.so.6
00007f466d50d000 488 280 0 0 0 r-xp- /usr/lib64/libm.so.6
00007f466d587000 360 128 0 0 0 r--p- /usr/lib64/libm.so.6
00007f466d5e1000 4 4 4 4 0 r--p- /usr/lib64/libm.so.6
00007f466d5e2000 8 8 8 8 0 rw-p- /usr/lib64/libm.so.6
00007f466d5e4000 8 8 8 8 0 rw-p- [ anon ]
00007f466d5e6000 12 12 0 0 0 r--p- /usr/lib64/glibc-hwcaps/x86-64-v3/libz.so.1.3
00007f466d5e9000 56 56 0 0 0 r-xp- /usr/lib64/glibc-hwcaps/x86-64-v3/libz.so.1.3
00007f466d5f7000 28 28 0 0 0 r--p- /usr/lib64/glibc-hwcaps/x86-64-v3/libz.so.1.3
00007f466d5fe000 4 4 4 4 0 r--p- /usr/lib64/glibc-hwcaps/x86-64-v3/libz.so.1.3
00007f466d5ff000 4 4 4 4 0 rw-p- /usr/lib64/glibc-hwcaps/x86-64-v3/libz.so.1.3
00007f466d600000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_ADDRESS
00007f466d601000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_TELEPHONE
00007f466d602000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
00007f466d603000 4 4 0 0 0 r--p- /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
00007f466d604000 4 0 0 0 0 ---p- [ anon ]
00007f466d605000 4 0 0 0 0 r--p- [ anon ]
00007f466d606000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjimage.so
00007f466d608000 12 12 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libjimage.so
00007f466d60b000 4 4 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjimage.so
00007f466d60c000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjimage.so
00007f466d60d000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libjimage.so
00007f466d60e000 20 20 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libverify.so
00007f466d613000 28 28 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/libverify.so
00007f466d61a000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libverify.so
00007f466d61c000 8 8 8 8 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libverify.so
00007f466d61e000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/libverify.so
00007f466d61f000 12 12 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/jli/libjli.so
00007f466d622000 40 40 0 0 0 r-xp- /usr/lib64/jvm/java-11-openjdk-11/lib/jli/libjli.so
00007f466d62c000 12 12 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/jli/libjli.so
00007f466d62f000 4 4 4 4 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/jli/libjli.so
00007f466d630000 4 4 4 4 0 rw-p- /usr/lib64/jvm/java-11-openjdk-11/lib/jli/libjli.so
00007f466d631000 8 8 8 8 0 rw-p- [ anon ]
00007f466d633000 4 4 0 0 0 r--p- /usr/lib64/ld-linux-x86-64.so.2
00007f466d634000 156 156 0 0 0 r-xp- /usr/lib64/ld-linux-x86-64.so.2
00007f466d65b000 40 40 0 0 0 r--p- /usr/lib64/ld-linux-x86-64.so.2
00007f466d665000 8 8 8 8 0 r--p- /usr/lib64/ld-linux-x86-64.so.2
00007f466d667000 8 8 8 8 0 rw-p- /usr/lib64/ld-linux-x86-64.so.2
00007ffe9210f000 136 40 40 40 0 rw-p- [ stack ]
00007ffe921da000 16 0 0 0 0 r--p- [ anon ]
00007ffe921de000 8 4 0 0 0 r-xp- [ anon ]
ffffffffff600000 4 0 0 0 0 --xp- [ anon ]
---------------- ------- ------- ------- ------- -------
total kB 4145136 164976 301201 140936 152
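(Editorial aside, not part of the original post: a memory-map dump like the one above is easier to reason about when the resident kilobytes are grouped into anonymous vs. file-backed mappings, since a JVM's mystery growth usually shows up as anonymous memory. A minimal sketch, assuming the column layout shown above: address, size-KB, RSS-KB, three more counters, mode, mapping.)

```python
# Sketch: summarize pmap-style output (as in the dump above) by grouping
# resident (RSS) kilobytes into anonymous vs. file-backed mappings.
# Assumed columns: address size rss dirty ... mode mapping
def summarize_pmap(lines):
    totals = {"anon": 0, "file": 0}
    for line in lines:
        parts = line.split()
        # Skip headers, separator rows and the "total kB" line.
        if len(parts) < 8 or not parts[1].isdigit():
            continue
        rss_kb = int(parts[2])
        mapping = " ".join(parts[7:])
        # pmap prints "[ anon ]", "[ stack ]" etc. for non-file mappings.
        key = "anon" if mapping.startswith("[") else "file"
        totals[key] += rss_kb
    return totals

sample = [
    "00007f466d41f000 224 220 220 220 0 rw-p- [ anon ]",
    "00007f466d45e000 8 8 0 0 0 r--p- /usr/lib64/jvm/java-11-openjdk-11/lib/libzip.so",
    "00007ffe9210f000 136 40 40 40 0 rw-p- [ stack ]",
]
print(summarize_pmap(sample))  # {'anon': 260, 'file': 8}
```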
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Stefan Mayr <st...@mayr-stefan.de>.
Hi,
On 05.01.2024 at 23:21, Brian Braun wrote:
>> Tracking native memory usage can be tricky depending upon your
>> environment. I would only look into that if there were something very odd
>> going on, like your process memory space seems to be more than 50% taken
>> by non-java-heap memory.
>>
>>
> Well, actually that is my case. The heap memory (the 3 G1s) and non-heap
> memory (3 CodeHeaps + compressed class space + metaspace) together use just
> a few hundred MBs. I can see that using Tomcat Manager as well as the other
> monitoring tools. And the rest of the memory (about 1GB) is being used by
> the JVM but I don't know why or how, and that started 2 months ago. In your
> case you have just 20-25% extra memory used in a way that you don't
> understand, in my case it is about 200%.
Have you tried limiting direct memory (-XX:MaxDirectMemorySize)? If not
set, this can be as large as your maximum heap size according to
https://github.com/openjdk/jdk/blob/ace010b38a83e0c9b43aeeb6bc5c92d0886dc53f/src/java.base/share/classes/jdk/internal/misc/VM.java#L130-L136
From what I know:
total memory ~ heap + metaspace + code cache + (#threads * thread stack
size) + direct memory
So if you set -Xmx to 1GB, this also allows up to 1GB of direct memory,
which may result in more than 2GB of memory used by the JVM.
Regards,
Stefan Mayr
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory
Posted by Brian Braun <ja...@gmail.com>.
Hello Christopher,
First of all: thanks a lot for your responses!
On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
chris@christopherschultz.net> wrote:
> Brian,
>
> On 12/30/23 15:42, Brian Braun wrote:
> > At the beginning, this was the problem: The OOM-killer (something that I
> > never knew existed) killing Tomcat unexpectedly and without any
> > explanation
>
> The explanation is always the same: some application requests memory
> from the kernel, which always grants the request(!). When the
> application tries to use that memory, the kernel scrambles to physically
> allocate the memory on-demand and, if all the memory is gone, it will
> pick a process and kill it.
>
>
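(Editorial sketch to make the overcommit behavior above concrete; the numbers are illustrative, not taken from this server.)

```python
# Toy model of Linux memory overcommit: allocation requests always succeed,
# but pressure only appears when pages are actually touched (first write).
# The OOM killer fires once touched pages exceed RAM plus swap.
def simulate(requests_mb, touched_mb, ram_mb, swap_mb=0):
    granted = sum(requests_mb)        # kernel grants every request
    in_use = sum(touched_mb)          # only touched pages need physical backing
    oom = in_use > ram_mb + swap_mb   # past this point, something gets killed
    return granted, in_use, oom

# Three processes reserve 2 GB each on a ~1.8 GB box, but touch only part:
print(simulate([2048, 2048, 2048], [500, 600, 400], ram_mb=1843))
# (6144, 1500, False) -- fine, despite 6 GB "granted"
print(simulate([2048, 2048, 2048], [900, 800, 400], ram_mb=1843))
# (6144, 2100, True)  -- now the OOM killer picks a victim
```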
Yes, that was happening to me until I set up the SWAP file and now at least
the Tomcat process is not being killed anymore.
> There are ways to prevent this from happening, but the best way is not
> to over-commit your memory.
>
> > Not knowing how much memory would I need to satisfy the JVM, and not
> > willing to migrate to more expensive Amazon instances just because I
> > don't know why this is happening. And not knowing if the memory
> > requirement would keep growing and growing and growing.
> It might. But if your symptom is Linux oom-killer and not JVM OOME, then
> the better technique is to *reduce* your heap space in the JVM.
>
> > Then I activated the SWAP file, and I discovered that this problem stops at
> > 1.5GB of memory used by the JVM. At least I am not getting more crashes
> > anymore. But I consider the SWAP file as a palliative and I really want to
> > know what is the root of this problem. If I don't, then maybe I should
> > consider another career. I don't enjoy giving up.
>
> Using a swap file is probably going to kill your performance. What
> happens if you make your heap smaller?
>
>
Yes, in fact the performance is suffering and that is why I don't consider
the swap file as a solution.
I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present (the Tomcat
process grows in memory usage up to 1.5GB combining real memory and swap
memory).
As I have explained in another email recently, I think that neither heap
usage nor non-heap usage are the problem. I have been monitoring them and
their requirements have always stayed low enough, so I could leave the -Xms
parameter with about 300-400 MB and that would be enough.
There is something else in the JVM that is using all that memory and I
still don't know what it is. And I think it doesn't care about the value I
give to -Xmx, it uses all the memory it wants. Doing what? I don't know.
> > Yes, the memory used by the JVM started to grow suddenly one day, after
> > several years running fine. Since I had not made any changes to my app, I
> > really don't know the reason. And I really think this should not be
> > happening without an explanation.
> >
> > I don't have any Java OOME exceptions, so it is not that my objects don't
> > fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
> > don't think the Heap and non-heap usage is the problem. I have been
> > inspecting those and their usage seems to be normal/modest and steady. I
> > can see that using the Tomcat Manager as well as several other tools (New
> > Relic, VisualVM, etc).
>
> Okay, so what you've done then is to allow a very large heap that you
> mostly don't need. If/when the heap grows a lot -- possibly suddenly --
> the JVM is lazy and just takes more heap space from the OS and
> ultimately you run out of main memory.
>
> The solution is to reduce the heap size.
>
>
Maybe I am not understanding your suggestion.
I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present. In fact the
problem started with a low amount for -Xmx.
> > Regarding the 1GB I am giving now to the -Xms parameter: I was giving just
> > a few hundred MBs and I already had the problem. Actually I think it is the
> > same if I give a few hundred MBs or 1GB, the JVM still starts using
> > more memory after 3-4 days of running until it takes 1.5GB. But during the
> > first 1-4 days it uses just a few hundred MBs.
> >
> > My app has been "static" as you say, but probably I have upgraded Tomcat
> > and/or Java recently. I don't really remember. Maybe one of those upgrades
> > brought this issue as a result. Actually, if I knew that one of those
> > upgrades caused this huge spike in memory consumption and there is no way to
> > avoid it, then I would accept it as a fact of life and move on. But since I
> > don't know, it really bugs me.
> >
> > I have the same number of users and traffic as before. I also know how much
> > memory a session takes and it is fine. I have also checked the HTTP(S)
> > requests to see if somehow I am getting any attempts to hack my instance
> > that could be the root of this problem. Yes, I get hacking attempts by
> > those bots all the time, but I don't see anything relevant there. No news.
> >
> > I agree with what you say now regarding the GC. I should not need to use
> > those switches since I understand it should work fine without using them.
> > And I don't know how to use them. And since I have never cared about using
> > them for about 15 years of using Java+Tomcat, why should I start now?
> >
> > I have also checked all my long-lasting objects. I have optimized my DB
> > queries recently as you suggest now, so they don't create huge amounts of
> > objects in a short period of time that the GC would have to deal with. The
> > same applies to my scheduled tasks. They all run very quickly and use
> > modest amounts of memory. All the other default Tomcat threads create far
> > more objects.
> >
> > I have already activated the GC log. Is there a tool that you would suggest
> > to analyze it? I haven't even opened it. I suspect that the root of my
> > problem comes from the GC process indeed.
>
> The GC logs are just text, so you can eyeball them if you'd like, but to
> really get a sense of what's happening you should use some kind of
> visualization tool.
>
> It's not pretty, but gcviewer (https://github.com/chewiebug/GCViewer)
> gets the job done.
>
>
Thanks a lot for the advice, I will use it! Hopefully I will find something
relevant in the log.
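(Editorial sketch of what is in those logs: with Java 11 unified logging, e.g. -Xlog:gc*:file=gc.log, each pause line carries heap occupancy in the form before->after(committed), which is what tools like gcviewer plot. A minimal extractor, assuming that line format; the sample line is illustrative, not from Brian's logs.)

```python
import re

# Extract heap before->after(committed) from a JDK 11 unified GC log line,
# e.g. "[0.164s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause)
#       23M->4M(256M) 1.875ms"
PAUSE = re.compile(r"(\d+)M->(\d+)M\((\d+)M\)")

def heap_change(line):
    m = PAUSE.search(line)
    if not m:
        return None  # not a pause line with heap figures
    before, after, committed = map(int, m.groups())
    return before, after, committed

line = ("[0.164s][info][gc] GC(0) Pause Young (Normal) "
        "(G1 Evacuation Pause) 23M->4M(256M) 1.875ms")
print(heap_change(line))  # (23, 4, 256)
```

A steadily climbing "committed" figure across such lines would point at the heap; if it stays flat while the process RSS grows, the growth is in native memory, which the GC log will not show.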
> If you run with a 500MiB heap and everything looks good and you have no
> crashes (Linux oom-killer or Java OOME), I'd stick with that. Remember
> that your total OS memory requirements will be Java heap + JVM overhead
> + whatever native memory is required by native libraries.
>
> In production, I have an application with a 2048MiB heap whose "resident
> size" in `ps` shows as 2.4GiB. So nearly half a GiB is being used on top
> of that 2GiB heap. gcviewer will not show anything about the native
> memory being used, so you will only be seeing part of the picture.
>
> Tracking native memory usage can be tricky depending upon your
> environment. I would only look into that if there were something very odd
> going on, like your process memory space seems to be more than 50% taken
> by non-java-heap memory.
>
>
Well, actually that is my case. The heap memory (the 3 G1s) and non-heap
memory (3 CodeHeaps + compressed class space + metaspace) together use just
a few hundred MBs. I can see that using Tomcat Manager as well as the other
monitoring tools. And the rest of the memory (about 1GB) is being used by
the JVM but I don't know why or how, and that started 2 months ago. In your
case you have just 20-25% extra memory used in a way that you don't
understand, in my case it is about 200%.
> -chris
>
> > On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> >> Brian,
> >>
> >> On 12/29/23 20:48, Brian Braun wrote:
> >>> Hello,
> >>>
> >>> First of all:
> >>> Christopher Schultz: You answered an email from me 6 weeks ago. You helped
> >>> me a lot with your suggestions. I have done a lot of research and have
> >>> learnt a lot since then, so I have been able to rule out a lot of potential
> >>> roots for my issue. Because of that I am able to post a new more specific
> >>> email. Thanks a lot!!!
> >>>
> >>> Now, this is my stack:
> >>>
> >>> - Ubuntu 22.04.3 on x86/64 with 2GB of physical RAM that has been enough
> >>> for years.
> >>> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1 2023-08-24
> >>> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
> >>> ......")
> >>> - My app, which I developed myself, and has been running without any
> >>> problems for years
> >>>
> >>> Well, a couple of months ago my website/Tomcat/Java started eating more and
> >>> more memory after about 4-7 days. The previous days it uses just a
> >>> few hundred MB and is very steady, but then after a few days the memory
> >>> usage suddenly grows up to 1.5GB (and then stops growing at that point,
> >>> which is interesting). Between these anomalies the RAM usage is fine and
> >>> very steady (as it has been for years) and it uses just about 40-50% of the
> >>> "Max memory" (according to what the Tomcat Manager server status shows).
> >>> The 3 components of G1GC heap memory are steady and low, before and after
> >>> the usage grows to 1.5GB, so it is definitely not that the heap starts
> >>> requiring more and more memory. I have been using several tools to monitor
> >>> that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
> >>> memory usage by the heap is not the problem.
> >>> The non-heap memory usage is not the problem either. Everything there is
> >>> normal, the usage is humble and even more steady.
> >>>
> >>> And there are no leaks, I'm sure of that. I have inspected the JVM using
> >>> several tools.
> >>>
> >>> There are no peaks in the number of threads either. The peak is the same
> >>> when the memory usage is low and when it requires 1.5GB. It stays the same
> >>> all the time.
> >>>
> >>> I have also reviewed all the scheduled tasks in my app and lowered the
> >>> amount of objects they create, which was nice and entertaining. But that is
> >>> not the problem; I have analyzed the object creation by all the threads
> >>> (and there are many) and the threads created by my scheduled tasks are very
> >>> humble in their memory usage, compared to many other threads.
> >>>
> >>> And I haven't made any relevant changes to my app in the 6-12 months before
> >>> this problem started occurring. It is weird that I started having this
> >>> problem. Could it be that I received an update in the Java version or the
> >>> Tomcat version that is causing this problem?
> >>>
> >>> If neither the heap memory nor the non-heap memory is the source of the
> >>> growth of the memory usage, what could it be? Clearly something is
> >>> happening inside the JVM that raises the memory usage. And every time it
> >>> grows, it doesn't decrease. It is as if something suddenly starts
> >>> "pushing" the memory usage more and more, until it stops at 1.5GB.
> >>>
> >>> I think that maybe the source of the problem is the garbage collector. I
> >>> haven't used any of the switches that we can use to optimize that,
> >>> basically because I don't know what I should do there (if I should at all).
> >>> I have also activated the GC log, but I don't know how to analyze it.
> >>>
> >>> I have also increased and decreased the value of the "-Xms" parameter and
> >>> it is useless.
> >>>
> >>> Finally, maybe I should add that I activated 4GB of SWAP memory in my
> >>> Ubuntu instance so at least my JVM would not be killed by the OS anymore
> >>> (since the real memory is just 1.8GB). That worked and now the memory usage
> >>> can grow up to 1.5GB without crashing, by using the much slower SWAP
> >>> memory, but I still think that this is an abnormal situation.
> >>>
> >>> Thanks in advance for your suggestions!
> >>
> >> First of all: what is the problem? Are you just worried that the number
> >> of bytes taken by your JVM process is larger than it was ... sometime in
> >> the past? Or are you experiencing Java OOME or Linux oom-killer or
> >> anything like that?
> >>
> >> Not all JVMs behave this way, but most of them do: once memory is
> >> "appropriated" by the JVM from the OS, it will never be released. It's
> >> just too expensive an operation to shrink the heap. Plus, you told
> >> the JVM "feel free to use up to 1GiB of heap" so it's taking you at your
> >> word. Obviously, the native heap plus stack space for every thread plus
> >> native memory for any native libraries takes up more space than just the
> >> 1GiB you gave for the heap, so ... things just take up space.
> >>
> >> Lowering the -Xms will never reduce the maximum memory the JVM ever
> >> uses. Only lowering -Xmx can do that. I always recommend setting Xms ==
> >> Xmx because otherwise you are lying to yourself about your needs.
> >>
> >> You say you've been running this application "for years". Has it been in
> >> a static environment, or have you been doing things such as upgrading
> >> Java and/or Tomcat during that time? There are things that Tomcat does
> >> now that it did not do in the past that sometimes require more memory to
> >> manage, sometimes only at startup and sometimes for the lifetime of the
> >> server. There are some things that the JVM is doing that require more
> >> memory than their previous versions.
> >>
> >> And then there is the usage of your web application. Do you have the
> >> same number of users? I've told this (short) story a few times on this
> >> list, but we had a web application that ran for 10 years with only 64MiB
> >> of heap and one day we started getting OOMEs. At first we just bounced
> >> the service and tried looking for bugs, leaks, etc. but the heap dumps
> >> were telling us everything was fine.
> >>
> >> The problem was user load. We simply outgrew the heap we had allocated
> >> because we had more simultaneous logged-in users than we did in the
> >> past, and they all had sessions, etc. We had plenty of RAM available, we
> >> were just being stingy with it.
> >>
> >> The G1 garbage collector doesn't have very many switches to mess-around
> >> with it compared to older collectors. The whole point of G1 was to "make
> >> garbage collection easy". Feel free to read 30 years of lies and
> >> confusion about how to best configure Java garbage collectors. At the
> >> end of the day, if you don't know exactly what you are doing and/or you
> >> don't have a specific problem you are trying to solve, you are better
> >> off leaving everything with default settings.
> >>
> >> If you want to reduce the amount of RAM your application uses, set a
> >> lower heap size. If that causes OOMEs, audit your application for wasted
> >> memory such as too-large caches (which presumably live a long time) or
> >> too-large single-transactions such as loading 10k records all at once
> >> from a database. Sometimes a single request can require a whole lot of
> >> memory RIGHT NOW which is only used temporarily.
> >>
> >> I was tracking-down something in our own application like this recently:
> >> a page-generation process was causing an OOME periodically, but the JVM
> >> was otherwise very healthy. It turns out we had an administrative action
> >> in our application that had no limits on the amount of data that could
> >> be requested from the database at once. So naive administrators were
> >> able to essentially cause a query to be run that returned a huge number
> >> of rows from the db, then every row was being converted into a row in an
> >> HTML table in a web page. Our page-generation process builds the whole
> >> page in memory before returning it, instead of streaming it back out to
> >> the user, which means a single request can use many MiBs of memory just
> >> for in-memory strings/byte arrays.
> >>
> >> If something like that happens in your application, it can pressure the
> >> heap to jump from e.g. 256MiB way up to 1.5GiB and -- as I said before
> >> -- the JVM is never gonna give that memory back to the OS.
> >>
> >> So even though everything "looks good", your heap and native memory
> >> spaces are very large until you terminate the JVM.
> >>
> >> If you haven't already done so, I would recommend that you enable GC
> >> logging. How to do that is very dependent on your JVM, version, and
> >> environment. This writes GC activity details to a series of files during
> >> the JVM execution. There are freely-available tools you can use to view
> >> those log files in a meaningful way and draw some conclusions. You might
> >> even be able to see when that "memory event" took place that caused your
> >> heap memory to shoot-up. (Or maybe it's your native memory, which isn't
> >> logged by the GC logger.) If you are able to see when it happened, you
> >> may be able to correlate that with your application log to see what
> >> happened in your application. Maybe you need a fix.
> >>
> >> Then again, maybe everything is totally fine and there is nothing to
> >> worry about.
> >>
> >> -chris
> >>