Posted to dev@juddi.apache.org by Jeremi Thebeau <j....@xceptance.de> on 2009/07/20 14:14:42 UTC
Bug: Heap Memory Usage
Hi all,
I ran a load test comprising 30 virtual users continuously executing the
following scenario against a jUDDI node:
- publish a business with a unique name to the node;
- publish a random number of services (>0 but <8) under that business;
- and search for the newly published business by name.
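For readers without the attached suite, the shape of one iteration can be sketched as below. Note that UddiClient and its three methods are hypothetical stand-ins for the actual UDDI publish/inquiry calls made in TRegisterBusinessWithServices.java, not real jUDDI client classes:

```java
import java.util.Random;
import java.util.UUID;

// Rough sketch of one load-test iteration; UddiClient and its methods are
// hypothetical placeholders for the real UDDI publish/inquiry calls.
public class ScenarioSketch {

    interface UddiClient {
        String publishBusiness(String name);              // returns the business key
        void publishService(String businessKey, String serviceName);
        boolean findBusiness(String name);                // true if the name is found
    }

    static void runIteration(UddiClient client, Random rnd) {
        // 1) publish a business with a unique name
        String name = "LoadTestBusiness-" + UUID.randomUUID();
        String key = client.publishBusiness(name);

        // 2) publish a random number of services (>0 but <8) under it
        int serviceCount = 1 + rnd.nextInt(7);
        for (int i = 0; i < serviceCount; i++) {
            client.publishService(key, name + "-service-" + i);
        }

        // 3) search for the newly published business by name
        if (!client.findBusiness(name)) {
            throw new IllegalStateException("published business not found: " + name);
        }
    }
}
```

Each virtual user simply runs runIteration in a loop for the duration of the test.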
This was supposed to run for two hours, but the application crashed after
1h 46m. As can be seen in the attached jconsole screenshot, heap memory
usage rose almost linearly during the load test until it hit the maximum
allotted memory, 1G (set via 'export JAVA_OPTS=-Xmx1024m' in
startup.sh). It then hovered around one gig while response times grew
longer and longer, as can be seen in the attached XLT report at about
11:25 (go to 'Requests' via the navigation drop-down in the top right
corner; this is most easily seen in the 'Averages' graphs). Eventually I
started getting 'java.lang.Exception: GC overhead limit exceeded' and
'java.lang.Exception: Java heap space' exceptions (stack traces can be
seen under 'Errors' in the XLT report) and the application crashed.
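As a side note for anyone reproducing this: the same JAVA_OPTS line in startup.sh can be extended with standard HotSpot flags to log GC activity and dump the heap automatically on OutOfMemoryError; the dump path below is only an example:

```shell
# Example diagnostic JAVA_OPTS for startup.sh (standard HotSpot flags;
# the heap dump path is just an example, adjust to taste):
export JAVA_OPTS="-Xmx1024m \
  -verbose:gc \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/juddi-heap.hprof"
```

The resulting .hprof file can then be opened in a heap analyzer to see which classes are accumulating.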
Also attached are the test case used (TRegisterBusinessWithServices.java)
and the actions called under it.
Jeremi Thebeau
QA Engineer/Consultant
Xceptance GmbH
RE: Bug: Heap Memory Usage
Posted by Jeff Faath <jf...@apache.org>.
I ran the load tests using two snapshot builds, one with OpenJPA and one
with Hibernate. I did this on a single machine with a single agent, using
Jeremi's test suite. I should note that I also had background apps open
(a browser, IM, etc.), but this was the same for both runs.
The results: the OpenJPA build bombed out after 7 minutes, and its request
times were noticeably longer from the beginning. The Hibernate build made
it through the whole 2 hours without a problem, and its request times were
much faster overall.
I've included the test reports from both runs.
Next step, I suppose, is to profile the OpenJPA build?
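Short of attaching a full profiler, heap growth can also be logged from inside the JVM using the standard java.lang.management API. This is a generic sketch, not code from the jUDDI build; it would have to be wired into a background thread or a context listener to log periodically:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal heap-usage probe via the standard JMX platform beans;
// these are the same numbers jconsole plots.
public class HeapProbe {
    public static String sample() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb  = heap.getMax()  / (1024 * 1024);  // -1 if the max is undefined
        return "heap used=" + usedMb + "MB max=" + maxMb + "MB";
    }
}
```

Logging sample() once a minute during a run would show whether the curve is linear, as in Jeremi's jconsole screenshot.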
-Jeff
Re: Bug: Heap Memory Usage
Posted by Kurt T Stam <ku...@gmail.com>.
Jeremi Thebeau wrote:
> Once you have XLT and the test suite it should be easy enough to
> reproduce.
Great
>
> We were running the juddi-portal-bundle-3.0.0.beta package on a Linux
> box and querying juddi remotely through our local network. We gave
> jUDDI 1G to play with but the heap usage still didn't level out. No
> other settings were changed from the downloadable zip file on your web
> site.
>
> Speaking of which, does the juddi-portal-bundle-3.0.0.beta.zip on the
> site get updated with every successful build?
No, 3.0.0.beta was a milestone release. Newer snapshots end up here:
http://people.apache.org/repo/m2-snapshot-repository/org/apache/juddi/juddi-portal-bundle/3.0.0.SNAPSHOT/
>
> Thanks
>
> Jeremi
>
> Kurt T Stam wrote:
>> Thanks Jeremi, this looks like a huge leak, so it should be easy to
>> find. I opened a bug for this
>> https://issues.apache.org/jira/browse/JUDDI-267
>>
>> Hopefully we will have a fix soon, and I will release a snapshot. I
>> saw all the attachments. What do you think is the easiest way to
>> reproduce it?
>>
>> Thx!
>>
>> --Kurt
>>
>>
Re: Bug: Heap Memory Usage
Posted by Jeremi Thebeau <j....@xceptance.de>.
Once you have XLT and the test suite it should be easy enough to reproduce.
We were running the juddi-portal-bundle-3.0.0.beta package on a Linux
box and querying jUDDI remotely through our local network. We gave jUDDI
1G to play with, but the heap usage still didn't level out. No other
settings were changed from the downloadable zip file on your web site.
Speaking of which, does the juddi-portal-bundle-3.0.0.beta.zip on the
site get updated with every successful build?
Thanks
Jeremi
Kurt T Stam wrote:
> Thanks Jeremi, this looks like a huge leak, so it should be easy to
> find. I opened a bug for this
> https://issues.apache.org/jira/browse/JUDDI-267
>
> Hopefully we will have a fix soon, and I will release a snapshot. I
> saw all the attachments. What do you think is the easiest way to
> reproduce it?
>
> Thx!
>
> --Kurt
>
>
Re: Bug: Heap Memory Usage
Posted by Kurt T Stam <ku...@gmail.com>.
Thanks Jeremi, this looks like a huge leak, so it should be easy to
find. I opened a bug for this
https://issues.apache.org/jira/browse/JUDDI-267
Hopefully we will have a fix soon, and I will release a snapshot. I saw
all the attachments. What do you think is the easiest way to reproduce it?
Thx!
--Kurt
Re: Bug: Heap Memory Usage
Posted by Jeremi Thebeau <j....@xceptance.de>.
Thanks Kurt,
I ran the same test on the Snapshot. Results are attached.
Basically, the memory usage seems to be much better (see
MemUsage10users.png).
Unfortunately, it took a while before I could run a decent-length load
test, because this version would lock up after a while, and faster under
bigger loads. I started at 30 users and kept lowering the number of users
and lengthening the ramp-up time. Finally I got 10 users with a 2-minute
ramp-up time to run for over an hour before jUDDI locked up (results
attached).
I took stack trace dumps throughout the run (attached in the stackOutputs
directory). Stack traces 12 and later are post-lockup. Before the system
locks up, the http-8080-?? threads seem to cycle through the RUNNABLE,
BLOCKED, TIMED_WAITING, and WAITING states (not necessarily in that
order). Afterwards, all threads are "WAITING (on object monitor)".
Next, I will run separate "register business" and "find business" tests
to see which one is locking up.
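The per-state breakdown above was read off the stack dumps by hand; the same tally can be taken programmatically via the standard ThreadMXBean API. This is a generic sketch, not part of the jUDDI code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

// Tally live threads by state (RUNNABLE, BLOCKED, WAITING, TIMED_WAITING...),
// the same information a kill -3 stack dump shows, via the platform MBean.
public class ThreadStateTally {
    public static Map<Thread.State, Integer> tally() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);
        // false, false: skip locked-monitor and synchronizer details for speed
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            if (info == null) continue;  // thread may have died mid-dump
            counts.merge(info.getThreadState(), 1, Integer::sum);
        }
        return counts;
    }
}
```

Sampled periodically, a sudden jump in the WAITING count like the one described would stand out immediately.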
Jeremi
Re: Bug: Heap Memory Usage
Posted by Kurt T Stam <ku...@gmail.com>.
Hi Jeremi,
I have uploaded a new portal bundle snapshot based on Hibernate. From
what Jeff could see, it does not seem to have the memory leak. We will
investigate further why OpenJPA exhibits the leak, but this build should
unblock you at least.
http://people.apache.org/repo/m2-snapshot-repository/org/apache/juddi/juddi-portal-bundle/3.0.0.SNAPSHOT/juddi-portal-bundle-3.0.0.20090723.201427-7.zip
Cheers,
--Kurt