Posted to users@activemq.apache.org by Manuel Teira <mt...@tid.es> on 2007/06/20 16:14:00 UTC

Severe memory leak using temporary queues

After observing severe memory leaks in our production servers, 
eventually ending with the JVM GC running intensively and activemq 
dropping connections due to inactivity (and even out of memory errors), 
we have proceeded in the following way, trying to isolate the cause of 
the error:

1.-Create a single client able to reproduce the same effect against a 
standalone activemq broker (our servers actually embed a broker, but 
for the sake of clarity we thought it would be better to isolate the 
broker in a dedicated JVM). In the first test code, the client used a 
group of threads to create sessions, send a message through a producer 
created on each session, and expect a reply message on a consumer 
created on a temporary queue of that session, set as the JMSReplyTo of 
the message. Later we found out that we didn't need to send any message 
at all to produce the memory leak. I'm attaching the code of the 
client, which, in summary, does the following:
  * es.tid.planb.test.JMSBug <queue> <threads> <iters>
    - Creates a connection to the broker and starts it.
    - Creates <threads> threads. Each thread, <iters>/<threads> times:
        - Creates a session.
        - Creates a producer on <queue>, a temporary queue, and a 
          consumer on that temporary queue.
        - Closes the session and the temporary queue.
        (Note that no message needs to be sent for the leak to show up; 
        a minimal sketch of this loop follows the list.)
    - Waits for all the threads to finish, then closes the connection.
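
For reference, here is a minimal sketch of that per-thread loop (class 
and variable names are illustrative; the attached JMSBug.java is the 
authoritative version):

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TemporaryQueue;

    public class TempQueueLoop implements Runnable {
        private final Connection connection;
        private final String queueName;
        private final int iterations;

        TempQueueLoop(Connection connection, String queueName, int iterations) {
            this.connection = connection;
            this.queueName = queueName;
            this.iterations = iterations;
        }

        public void run() {
            try {
                for (int i = 0; i < iterations; i++) {
                    Session session =
                        connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer =
                        session.createProducer(session.createQueue(queueName));
                    TemporaryQueue replyQueue = session.createTemporaryQueue();
                    MessageConsumer consumer = session.createConsumer(replyQueue);
                    // No send or receive is needed for the leak to show up.
                    consumer.close();
                    replyQueue.delete();
                    session.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }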


2.-We tested the client against:
  activemq 4.1.1
  activemq 4.1-SNAPSHOT (apache-activemq-4.1-20070615.012351-63.tar.gz)
  activemq 4.2-SNAPSHOT (apache-activemq-4.2-20070607.230602-81.tar.gz)

  Using the Sun JVMs:
  1.5.0_07
  1.5.0_11
  1.6.0

running on Sun UltraSPARC hardware under Solaris 9.

 Using the configuration that I'm also attaching to this mail. It uses 
an OpenWire and a STOMP connector (just to resemble our failing 
scenario) and an Oracle datasource (we have reproduced the same leaks 
using the bundled Derby database as well).
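
In case it helps to read it alongside the XML, a rough Java equivalent 
of that setup using the embedded BrokerService API would be something 
like the sketch below (connector URIs and the datasource are 
placeholders; the attached activemq.xml is the authoritative 
configuration):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;

    public class EmbeddedBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            // One OpenWire and one STOMP connector, as in our failing scenario.
            broker.addConnector("tcp://0.0.0.0:61616");
            broker.addConnector("stomp://0.0.0.0:61613");
            // JDBC persistence; our Oracle DataSource would be plugged in here.
            // The leak also reproduces with the default Derby-backed store.
            JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
            // jdbc.setDataSource(oracleDataSource);
            broker.setPersistenceAdapter(jdbc);
            broker.start();
        }
    }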


3.-We observed the following facts:
  - Memory consumption grows and does not come back down, even after 
    the client finishes (and even after forcing the garbage collector 
    to run).
  - Looking at jmap heap histograms, it seems that a lot of objects are 
    never released, org.apache.activemq.broker.region.Topic instances 
    being among them.
  - Disabling the advisory topic support in the broker leads to a less 
    severe leak, but it is still unacceptable for our requirements.
  - Using the embedded Derby database, the heap grew to similar sizes.
  - activemq 4.2-SNAPSHOT seems to leak less memory, but the leak is 
    still severe enough.
  - Commenting out the creation of the temporary queue and the consumer 
    on it seems to avoid the leak.
  - The amount of memory leaked doesn't seem related to the number of 
    threads the client uses to do its job; the only difference is the 
    time involved, as you can see in the included charts.
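
For completeness, the advisory topic test was done by turning off 
advisory support before starting the broker; I believe the xbean 
equivalent is the advisorySupport attribute on the broker element. 
With the embedded API it would look roughly like this:

    import org.apache.activemq.broker.BrokerService;

    public class NoAdvisoryBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            // Turn off advisory topics
            // (xbean: <broker advisorySupport="false" ...>).
            broker.setAdvisorySupport(false);
            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
        }
    }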

4.-I'm sending you the following information:
 - The source code of the test program (JMSBug.java)
 - The activemq xbean configuration (activemq.xml)
 - The heap sizes and histograms for the following cases:
    * Just after starting activemq (histo-startup and heap-startup)
    * After starting, running JMSBug (with threads=1 iters=20000) and 
forcing the GC from the JMX console (histo-1-20000 and heap-1-20000).
    * After starting, running JMSBug (with threads=50 iters=20000) and 
forcing the GC from the JMX console (histo-50-20000 and heap-50-20000).
 - A pair of screenshots of the heap chart after running both tests 
   (jmx-heap-1-20000.png and jmx-heap-50-20000.png).
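
In case it matters for reproducing the numbers: the GC was forced from 
the JMX console. Programmatically, the equivalent would be something 
like the sketch below (the JMX service URL is just a typical default 
here; adjust it to whatever the broker's managementContext exposes):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ForceGC {
        public static void main(String[] args) throws Exception {
            // Adjust the URL to the broker's managementContext settings.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Same effect as pressing "Perform GC" in the JMX console.
                mbs.invoke(new ObjectName("java.lang:type=Memory"), "gc",
                        new Object[0], new String[0]);
            } finally {
                connector.close();
            }
        }
    }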

Any directions, hints or whatever else you could provide to help us fix 
this urgent problem will be welcome, even some idea of what code could 
be failing, or whether the example code is wrong in any way. We are not 
familiar with the activemq source code, so that would help us a lot.

Don't hesitate to ask me for any other information about the failing 
environment.


Best regards.