Posted to user@jmeter.apache.org by Praveen Kallakuri <pk...@gmail.com> on 2006/01/29 11:34:30 UTC

jmeter helper package -- JmeterMeter

http://wiki.apache.org/jakarta-jmeter/JmeterMeter

I have finally put up some code that has become indispensable to me
while using jmeter to run our load/stress tests.

The core idea behind this is that the server running jmeter should do
nothing except just that -- generate load. So aggregation of the various
performance measures is done by a single client program while there can
be one instance of the server program per jmeter instance gathering data
from the jmeter log.

Some measures as indicated on the wiki:

   0.  # of active/terminated threads
   1.  requests per minute (rpm)
   2.  average response time
   3.  average response time across last X% of requests
   4.  average response time across last X% of test duration
   5.  mean time between consecutive requests (a nice way to know when
       things are slowing down)
   6.  number of waiting requests (request sent but not received response
       yet)
   7.  grouping response times based on URL regex matching (this
       requires a text file called RequestClasses in the dir where the
       server program is invoked, with each line containing a PERL5
       regular expression to match specific URLs)
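For illustration, a RequestClasses file might look like the following; the patterns here are hypothetical examples, and any PERL5 regular expressions matching your own URLs will do:

```
# one PERL5 regular expression per line; response times of all URLs
# matching a pattern are grouped together
^/login\b
^/search\?
^/cart/.*
```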

The code currently works only on a Linux distro once the dependencies
have been installed, but anyone who knows a little Perl can port it to
other OSes.

Also, the analysis is currently restricted to HTTP samples, but even
that can easily be changed by editing a few lines in the server
process.

It is necessary to have the following two settings in jmeter.properties.

log_level.jmeter.protocol.http.sampler.HTTPSampler=DEBUG
log_level.jmeter.threads.JMeterThread=INFO

The root loglevel should be WARN to save IO overhead within jmeter.

Of course, if you are trying to gather metrics on other samplers
(JDBC, ..) you should set the corresponding class to DEBUG level.

Communication between the one or more server processes and the single
client process happens via an XML string streamed through a network
socket on each host.
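As a rough sketch of that client side (in Python rather than the package's Perl; the wire format, element names, and the assumption that the server closes the socket after writing are all invented for illustration, not JmeterMeter's actual protocol):

```python
import socket
import xml.etree.ElementTree as ET

# Client-side sketch. Each jmserver is assumed to write one XML stats
# string and then close the connection; the element names (<stats>,
# <rpm>, ...) are made up for this example.

def parse_stats(xml_text):
    """Turn one server's XML stats string into a {metric: value} dict."""
    root = ET.fromstring(xml_text)
    return {child.tag: float(child.text) for child in root}

def poll_server(host, port, timeout=5.0):
    """Connect to a jmserver and read the full XML string it streams."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closed the socket: message complete
                break
            chunks.append(data)
    return parse_stats(b"".join(chunks).decode("utf-8"))

sample = "<stats><rpm>120</rpm><avg_response_ms>350</avg_response_ms></stats>"
print(parse_stats(sample))
```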

The client program aggregates results across servers and can optionally
be made to invoke several child processes each of which can gather
measures from other sources. For example, I have a database statistics
collector that collects the number of active/idle Oracle sessions and
the memory used by them to monitor database performance; another program
that is invoked as a child that can collect system memory and CPU usage
of arbitrary hosts running perhaps the application/database server; and
so on. 

The client program then creates a hash of the results and passes the
hash to an object that is essentially a wrapper around the unix charting
tool, gnuplot. It dynamically creates a gnuplot script the first time it
is invoked with the results hash and from then on, keeps including the
new results to dynamically generate charts. 
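A minimal sketch of that kind of gnuplot wrapper step (Python here, not the original Perl; the metric names and file names are made up):

```python
# Build a gnuplot script that plots each metric as one column of a
# whitespace-separated data file: column 1 is elapsed time, columns
# 2..N are the metrics. Names below are hypothetical.

def gnuplot_script(datafile, metrics, output_png):
    """Return gnuplot commands plotting column i+2 of `datafile` per metric."""
    lines = [
        "set terminal png",
        f'set output "{output_png}"',
        'set xlabel "elapsed seconds"',
    ]
    plots = [
        f'"{datafile}" using 1:{i + 2} with lines title "{m}"'
        for i, m in enumerate(metrics)
    ]
    lines.append("plot " + ", ".join(plots))
    return "\n".join(lines) + "\n"

script = gnuplot_script("results.dat", ["rpm", "avg_response_ms"], "test1.png")
print(script)
```

Regenerating this script with a growing data file on each polling cycle gives the "keeps including the new results" behaviour described above.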

Finally, you can create a simple HTML page that shows the charts; I
create a table in the HTML with each column/row corresponding to a
test so that test results can be compared.
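That HTML step can be as simple as the following sketch (test names and PNG paths are hypothetical; Python instead of the original Perl):

```python
# One table row of chart images, one column per test, so charts from
# different test runs sit side by side for comparison.

def chart_table(tests):
    """tests: {test name: chart image path} -> HTML table string."""
    header = "".join(f"<th>{name}</th>" for name in tests)
    cells = "".join(f'<td><img src="{png}"></td>' for png in tests.values())
    return f"<table><tr>{header}</tr><tr>{cells}</tr></table>"

html = chart_table({"test1": "test1.png", "test2": "test2.png"})
print(html)
```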

The wiki page has all of the above and a few more usage details and the
source. Jmeter is an awesome tool... thanks a ton to sebb and everyone
else on the development team.

-- 
kp /  gpg DSA 0x63C97D6B
      key fingerprint: 0885 63B1 956A 29E3 E176  FBC5 6AD5 7D6D 63C9 7D6B

Re: jmeter helper package -- JmeterMeter

Posted by Praveen Kallakuri <pk...@gmail.com>.
On Sun, Jan 29, 2006 at 01:02:30PM +0000, sebb wrote:
> Looks very interesting - thanks for the contribution.
> I'll give it a try some time.
> 
> Just wondering why you use jmeter.log rather than a JTL file for
> extracting the information?  I've not looked at the scripts: is there
> perhaps some information you need that is not in the JTL file?
> 

You may be right. I should probably look at the jtl file. But if I
explain a little more about what I am doing in the server process, you
will understand why I went the log route.

- In jmserver, I create a hash with the key being the thread id (e.g.,
  'Thread Group 2-10') and the value being the last request that was
  sent by the thread. I maintain each thread in one of the following
  states:
   * LIVE: The thread has been instantiated
   * RECEIVE: A request has been sent at time X and the thread is
     waiting for a response.
   * SLEEP: No request has been sent and therefore no response is
     expected. The thread is simply waiting, probably for some timer to
     expire, before sending its next request.
   * DEAD: The thread has terminated

  RECEIVE and SLEEP are of course sub-states of LIVE.

  For each thread, I also store the time at which the current state was
  entered.

- Every 10 seconds or so (configurable, of course), a series of method
  calls ask for all the different types of statistics. The stats are
  serialized into XML and stored in a shared string.

- Whenever the jmserver thread that is listening on a socket receives a
  connection, it writes the shared string containing the XML to the
  socket.
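The state bookkeeping above could be sketched like this (Python instead of the package's Perl; the event names fed to on_event are stand-ins for whatever the parsed log lines actually report):

```python
# Per-thread state table: each thread id maps to its current state and
# the time that state was entered, as described in the post.

LIVE, RECEIVE, SLEEP, DEAD = "LIVE", "RECEIVE", "SLEEP", "DEAD"

class ThreadTable:
    def __init__(self):
        self.state = {}     # thread id -> (state, time state was entered)
        self.last_req = {}  # thread id -> last request sent

    def on_event(self, tid, event, when, url=None):
        if event == "started":
            self.state[tid] = (LIVE, when)
        elif event == "request_sent":       # enters RECEIVE: awaiting reply
            self.state[tid] = (RECEIVE, when)
            self.last_req[tid] = url
        elif event == "response_received":  # back to SLEEP until next timer
            self.state[tid] = (SLEEP, when)
        elif event == "finished":
            self.state[tid] = (DEAD, when)

    def count(self, wanted):
        """E.g. count(RECEIVE) = requests sent but not yet answered."""
        return sum(1 for s, _ in self.state.values() if s == wanted)

t = ThreadTable()
t.on_event("Thread Group 2-10", "started", 0.0)
t.on_event("Thread Group 2-10", "request_sent", 1.5, url="/login")
print(t.count(RECEIVE))
```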

When I first wrote the server process around this time last year, I
remember noticing that writes to the jtl would happen only after a
request had been responded to by the server or a timeout had been
reached. So I would have no way of knowing how many threads are in the
RECEIVE state. This is of course an important measure: it tells me,
from the client's point of view, the number of requests being queued
up by the server under test.

Secondly, and this may have been a performance issue on the machine
running jmeter or OS I/O delay, I found that there were times when the
jtl file would be only partially written, so that an XML parser would
barf on the incomplete XML.

Third, the performance overhead the jmserver process would incur by
having to read the entire XML every so often would by no means be less
than if it just tailed the log for changes to the already-maintained
state of each thread.
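The "tail only the changes" idea amounts to remembering a byte offset and reading just what was appended since the last poll; roughly (the demo file contents below are illustrative, not real jmeter.log lines):

```python
import tempfile

# Incremental log reading: each poll resumes from the previous offset,
# so only newly appended lines are parsed.

def read_new_lines(path, offset):
    """Return (lines appended past `offset`, new offset)."""
    with open(path, "r") as f:
        f.seek(offset)
        chunk = f.read()
        return chunk.splitlines(), f.tell()

# Demo against a scratch file standing in for jmeter.log.
path = tempfile.mkstemp()[1]
with open(path, "a") as f:
    f.write("jmeter started\n")
lines, offset = read_new_lines(path, 0)           # first poll
with open(path, "a") as f:
    f.write("Thread Group 1-1 sent request\n")
new_lines, offset = read_new_lines(path, offset)  # only the new line
print(new_lines)
```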

Finally, the jmserver process can also be made to "replay" what
transpired during a historical test. For example, since the times and
states are all relative to the time at which the test started (which
is indicated in the log file), a user could create charts of the
various metrics for any past test given only the log file.

> The problem with using DEBUG output is that there is no guarantee at
> all that it will stay the same from release to release; also we may
> add more DEBUG output at any time.

True. That is why all regular expressions used to read the log are
constants that can be changed by a user. Of course, this means that
every time the jmeter codebase is refreshed from your cvs server or a
new binary is installed, the user should check that all the regexes
are still valid.

> 
> You also mention setting the default log level to WARN. This is set to
> INFO by default. Are there some INFO messages that should perhaps be
> removed or made DEBUG?

I don't remember, but I think there was some noise that I did not need.
I will get back to you on this.

Thank you.


-- 
kp /  gpg DSA 0x63C97D6B
      key fingerprint: 0885 63B1 956A 29E3 E176  FBC5 6AD5 7D6D 63C9 7D6B

Re: jmeter helper package -- JmeterMeter

Posted by sebb <se...@gmail.com>.
Looks very interesting - thanks for the contribution.
I'll give it a try some time.

Just wondering why you use jmeter.log rather than a JTL file for
extracting the information?  I've not looked at the scripts: is there
perhaps some information you need that is not in the JTL file?

The problem with using DEBUG output is that there is no guarantee at
all that it will stay the same from release to release; also we may
add more DEBUG output at any time.

You also mention setting the default log level to WARN. This is set to
INFO by default. Are there some INFO messages that should perhaps be
removed or made DEBUG?

S.

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org