Posted to issues@jmeter.apache.org by bu...@apache.org on 2012/02/26 17:22:10 UTC
DO NOT REPLY [Bug 52773] New: JMS Publisher test versus IBM Performance Harness for Java Message Service
https://issues.apache.org/bugzilla/show_bug.cgi?id=52773
Bug #: 52773
Summary: JMS Publisher test versus IBM Performance Harness for
Java Message Service
Product: JMeter
Version: 2.6
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P2
Component: Main
AssignedTo: issues@jmeter.apache.org
ReportedBy: bruno.fs.antunes@gmail.com
Classification: Unclassified
Created attachment 28384
--> https://issues.apache.org/bugzilla/attachment.cgi?id=28384
Sample project with the load test scenario. See readme.txt for a simple
explanation of how to use it.
I have performed a very simple test, using the JMS Publisher sampler in JMeter and
a "jms.r11.Sender" test case in Harness, with similar test conditions:
* No ramp-up, only one thread
* Same message size (1000 bytes)
* Same destination queue
* Same test duration (60 seconds)
* Running as fast as possible (no delay between samples)
From these tests, I observe that Harness generates almost 90% more messages than
JMeter, so we achieve greater throughput with one thread in a one-minute
test using Harness.
In order to get the same throughput with JMeter, we must configure more threads.
Tested with JMeter 2.6, and IBM Performance Harness for Java Message Service
1.2 (http://www.alphaworks.ibm.com/tech/perfharness)
Any idea why JMeter generates much less load? I have even run JMeter in
non-GUI mode, saving results in CSV format.
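For reference, a non-GUI JMeter run that writes CSV sample results typically looks like this (the test plan file name here is a placeholder, not the one from the attachment):

```shell
# -n: non-GUI mode, -t: test plan to load, -l: file to log sample results to
jmeter -n -t jms-publisher-test.jmx -l results.csv
```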
Attached is a sample Ant script with the load test scenario.
Tested this using Oracle WebLogic Server 10.3.5 and Apache ActiveMQ 5.5.1
See readme.txt for a simple explanation of how to use it.
--
Configure bugmail: https://issues.apache.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
Philippe Mouawad <p....@ubik-ingenierie.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends on| |52775
--- Comment #2 from Philippe Mouawad <p....@ubik-ingenierie.com> 2012-02-26 22:32:34 UTC ---
See Bug 52775.
Philippe Mouawad <p....@ubik-ingenierie.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |RESOLVED
                 CC|                            |p.mouawad@ubik-ingenierie.com
Resolution| |WORKSFORME
--- Comment #1 from Philippe Mouawad <p....@ubik-ingenierie.com> 2012-02-26 22:02:35 UTC ---
Hello,
In fact, the big difference comes from the fact that your JMeter test case
uses PERSISTENT messages, while your perfharness test uses NON_PERSISTENT
messages.
I made a test on my machine with a local AMQ 5.5.0 server:
JMETER : Generate Summary Results = 199630 in 60,1s = 3319,2/s Avg: 0 Min:
0 Max: 695 Err: 0 (0,00%)
Perfharness : totalIterations=219579,avgDuration=60,12,maxrateR=3933,03
Note that if I add a setUp Thread Group that runs one sample to warm up
(which is what Perfharness does, I think), I get:
Generate Summary Results = 210270 in 60,6s = 3469,2/s Avg: 0 Min: 0
Max: 841 Err: 0 (0,00%)
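For context, the relative gaps implied by the message counts above can be checked with a quick calculation (the class name is just for illustration; the numbers are the ones reported in this comment):

```java
// Percentage gap between the perfharness run and the two JMeter runs above.
public class ThroughputGap {
    static double gapPercent(int harness, int jmeter) {
        return 100.0 * (harness - jmeter) / jmeter;
    }

    public static void main(String[] args) {
        int jmeterCold = 199630;  // JMeter, persistent messages, no warm-up
        int jmeterWarm = 210270;  // JMeter with a one-sample setUp Thread Group
        int harness = 219579;     // perfharness, non-persistent messages

        System.out.printf("vs cold JMeter:   %.1f%%%n", gapPercent(harness, jmeterCold)); // ~10.0%
        System.out.printf("vs warmed JMeter: %.1f%%%n", gapPercent(harness, jmeterWarm)); // ~4.4%
    }
}
```

So on this local setup the gap is closer to 10% than the 90% seen by the reporter, and warming up the connection narrows it further.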
I think the documentation should be clearer about this PERSISTENT setting, and we
should add a new option to enable NON_PERSISTENT messages.
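For what it's worth, the setting in question maps to the standard JMS delivery mode on the message producer. A minimal sketch of a non-persistent publisher, assuming a javax.jms 1.1 provider on the classpath (session/queue setup is provider-specific and omitted; names are placeholders, not JMeter internals):

```java
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class NonPersistentPublisher {
    // 'session' and 'queue' would come from a provider-specific
    // ConnectionFactory / JNDI lookup, omitted here.
    static void publish(Session session, Queue queue, String payload) throws JMSException {
        MessageProducer producer = session.createProducer(queue);
        // The JMS default is PERSISTENT (the broker stores each message before
        // acknowledging it), which is what JMeter's publisher used in this test.
        // NON_PERSISTENT skips the store, trading durability for throughput.
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        TextMessage msg = session.createTextMessage(payload);
        producer.send(msg);
        producer.close();
    }
}
```

This is the difference the comment above points at: with defaults, the two tools were not measuring the same delivery guarantee.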