Posted to java-dev@axis.apache.org by Matt Seibert <ms...@us.ibm.com> on 2002/09/03 20:56:55 UTC

Suggestion: Performance Analysis test

Hey guys,

I've heard that there have been performance problems with AXIS in the past,
such as throughput regressions from version to version.  I would like to
propose a test that simply analyzes the output from a test run to look for
regressions in the time it takes for the tests to complete.

This, of course, would be optional, and would be dependent on setting some
property.  The results would be stored in a flat file, of known format, and
stored in the test tree, so that you could plot the times out all by
yourself.

How does this sound?  Anyone have an objection to doing this?  The main
changes would be:
      1) Creating the new test
      2) Writing the test output lines to some given alternate file (like
test/performance/<TEST_NAME>.perf)
      3) Comparing the old (most recent) test data with the new data
      4) Determining a delta (if any)
      5) Creating a ${test.reportdir}/TEST-test.report file which is an
HTML view of this data
            GREEN - performance improvement
            BLUE - performance the same
            RED - performance degradation
      6) Blowing away those intermediate test/performance/<TEST_NAME>.perf
files
      7) Committing the changed performance data file to CVS (if we want to
make this data public, static, and trackable)
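As a rough sketch of steps 3) through 5), the comparison and report generation could look something like this (the class name, the 2% "same" threshold, and the in-memory data layout are all invented for illustration; the real flat-file format is still to be decided):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of steps 3)-5): compare old vs. new timings, determine the
// delta, and pick a report color. Names and thresholds are placeholders.
public class PerfDelta {

    /** Classify a timing change: GREEN = faster, BLUE = about the same,
     *  RED = slower. "About the same" here means within 2% either way. */
    public static String classify(long oldMillis, long newMillis) {
        double delta = (double) (newMillis - oldMillis) / oldMillis;
        if (delta < -0.02) return "GREEN";
        if (delta >  0.02) return "RED";
        return "BLUE";
    }

    /** Build one HTML table row per test, for a TEST-test.report view.
     *  Each map value is {oldMillis, newMillis}. */
    public static String htmlReport(Map<String, long[]> timings) {
        StringBuilder html = new StringBuilder("<table>\n");
        for (Map.Entry<String, long[]> e : timings.entrySet()) {
            long oldMs = e.getValue()[0], newMs = e.getValue()[1];
            html.append("<tr><td>").append(e.getKey())
                .append("</td><td style=\"color:")
                .append(classify(oldMs, newMs))
                .append("\">").append(newMs - oldMs)
                .append(" ms</td></tr>\n");
        }
        return html.append("</table>").toString();
    }

    public static void main(String[] args) {
        Map<String, long[]> timings = new LinkedHashMap<>();
        timings.put("EchoTest", new long[] {1000, 900});      // faster
        timings.put("AttachmentTest", new long[] {500, 520}); // slower
        System.out.println(htmlReport(timings));
    }
}
```

The color threshold would need tuning in practice, since run-to-run noise on a loaded build machine can easily exceed 2%.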

Okay, now I solicit input from you.  Let me know....

Matt Seibert                                           mseibert@us.ibm.com
IBM        External:    (512) 838-3656      Internal:   678-3656


RE: Suggestion: Performance Analysis test

Posted by Mark Ericson <ma...@mindreef.com>.
This is a great use-case for the proposal I just made regarding
finer-grained handler hooks.  Instrumenting for timing at pre/post
serialization/deserialization points can provide valuable performance
tuning data.  

Testing can then occur over the network with real-life test cases while
still yielding useful tuning data.
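A minimal, framework-agnostic sketch of what such hook instrumentation could record (the class and checkpoint names are invented for illustration; this is not the actual Axis handler API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of pre/post hook instrumentation: each finer-grained handler
// hook records a named checkpoint, and phase durations are computed
// afterwards. Names here are invented, not the Axis handler API.
public class PhaseTimer {
    private final List<String> names = new ArrayList<>();
    private final List<Long> stamps = new ArrayList<>();

    /** Call from each hook point, e.g. "pre-serialize", "post-serialize". */
    public void checkpoint(String name, long nanos) {
        names.add(name);
        stamps.add(nanos);
    }

    /** Elapsed time between two recorded checkpoints, in nanoseconds. */
    public long between(String from, String to) {
        return stamps.get(names.indexOf(to)) - stamps.get(names.indexOf(from));
    }
}
```

In a real handler chain the checkpoint calls would live inside the proposed pre/post serialization hooks, so a test run could report per-phase costs rather than a single end-to-end number.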

- Mark

-----Original Message-----
From: Steve Loughran [mailto:steve_l@iseran.com] 
Sent: Tuesday, September 03, 2002 4:19 PM
To: axis-dev@xml.apache.org
Subject: Re: Suggestion: Performance Analysis test


----- Original Message -----
From: "Matt Seibert" <ms...@us.ibm.com>
To: <ax...@xml.apache.org>
Sent: Tuesday, September 03, 2002 11:56 AM
Subject: Suggestion: Performance Analysis test


> Hey guys,
>
> I've heard that there have been performance problems with AXIS in the
> past, such as throughput regressions from version to version.  I would
> like to propose a test that simply analyzes the output from a test run
> to look for regressions in the time it takes for the tests to complete.

you need to distinguish network-induced latencies from code perf, of
course; so localhost-only tests, right?

The other issue is that there is more than just latency to test; the
other big thing is max load before latency goes through the roof. That
takes more time to run (but Sam is doing that kind of thing...)

> This, of course, would be optional, and would be dependent on setting
> some property.  The results would be stored in a flat file, of known
> format, and stored in the test tree, so that you could plot the times
> out all by yourself.

an XML file, surely.

>
> How does this sound?  Anyone have an objection to doing this?  The
> main changes would be:
>       1) Creating the new test
>       2) Writing the test output lines to some given alternate file
> (like test/performance/<TEST_NAME>.perf)
>       3) Comparing the old (most recent) test data with the new data
>       4) Determining a delta (if any)
>       5) Creating a ${test.reportdir}/TEST-test.report file which is
> an HTML view of this data
>             GREEN - performance improvement
>             BLUE - performance the same
>             RED - performance degradation
>       6) Blowing away those intermediate
> test/performance/<TEST_NAME>.perf files
>       7) Committing the changed performance data file to CVS (if we
> want to make this data public, static, and trackable)
>
> Okay, now I solicit input from you.  Let me know....

We need some tests that are useful for measuring latency:

- echo System.currentTimeMillis() (I have a JNI library to get Pentium
clock ticks on Windows and Linux as an alternate option; it's one of the
antbook sample projects). This test would help calibrate round-trip
times.

- simple messages (string, int) and back
- simple message in, complex object back
- complex object in, simple object back
- complex both ways
- attachments
- same tests with various headers

It's good for bottleneck hunting if the response includes a breakdown of
time, obtained by timestamping at different points in the message
(receipt, after-parse, after-process, after-marshall); that lets you
know where to focus your energies.





Re: Suggestion: Performance Analysis test

Posted by Steve Loughran <st...@iseran.com>.
----- Original Message -----
From: "Matt Seibert" <ms...@us.ibm.com>
To: <ax...@xml.apache.org>
Sent: Tuesday, September 03, 2002 11:56 AM
Subject: Suggestion: Performance Analysis test


> Hey guys,
>
> I've heard that there have been performance problems with AXIS in the
> past, such as throughput regressions from version to version.  I would
> like to propose a test that simply analyzes the output from a test run to
> look for regressions in the time it takes for the tests to complete.

you need to distinguish network-induced latencies from code perf, of course;
so localhost-only tests, right?

The other issue is that there is more than just latency to test; the other
big thing is max load before latency goes through the roof. That takes more
time to run (but Sam is doing that kind of thing...)

> This, of course, would be optional, and would be dependent on setting some
> property.  The results would be stored in a flat file, of known format,
> and
> stored in the test tree, so that you could plot the times out all by
> yourself.

an XML file, surely.

>
> How does this sound?  Anyone have an objection to doing this?  The main
> changes would be:
>       1) Creating the new test
>       2) Writing the test output lines to some given alternate file (like
> test/performance/<TEST_NAME>.perf)
>       3) Comparing the old (most recent) test data with the new data
>       4) Determining a delta (if any)
>       5) Creating a ${test.reportdir}/TEST-test.report file which is an
> HTML view of this data
>             GREEN - performance improvement
>             BLUE - performance the same
>             RED - performance degradation
>       6) Blowing away those intermediate test/performance/<TEST_NAME>.perf
> files
>       7) Committing the changed performance data file to CVS (if we want
> to
>
> Okay, now I solicit input from you.  Let me know....

We need some tests that are useful for measuring latency:

- echo System.currentTimeMillis() (I have a JNI library to get Pentium clock
ticks on Windows and Linux as an alternate option; it's one of the antbook
sample projects). This test would help calibrate round-trip times.
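The calibration loop could be sketched roughly like this (the `Echo` interface is a stand-in for the actual deployed echo service, which is not defined here):

```java
// Sketch of round-trip calibration: call an echo service repeatedly and
// average the wall-clock time per call. The Echo interface is a stand-in
// for the real SOAP call, invented for illustration.
public class RoundTrip {
    interface Echo { long echo(long value); }

    /** Average wall-clock milliseconds per echo call over `calls` calls. */
    public static double averageMillis(Echo service, int calls) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < calls; i++) {
            service.echo(System.currentTimeMillis());
        }
        return (double) (System.currentTimeMillis() - start) / calls;
    }

    public static void main(String[] args) {
        // Loopback stand-in; a real test would call the deployed service.
        double avg = averageMillis(v -> v, 1000);
        System.out.println("avg round trip: " + avg + " ms");
    }
}
```

Averaging over many calls matters because currentTimeMillis() has coarse resolution on some platforms, which is presumably why the finer-grained clock-tick JNI option is mentioned.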

- simple messages (string, int) and back
- simple message in, complex object back
- complex object in, simple object back
- complex both ways
- attachments
- same tests with various headers

It's good for bottleneck hunting if the response includes a breakdown of
time, obtained by timestamping at different points in the message (receipt,
after-parse, after-process, after-marshall); that lets you know where to
focus your energies.
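That server-side breakdown could look something like this (a hypothetical sketch; the stage names follow the receipt/after-parse/after-process/after-marshall points above, but the class itself is made up):

```java
// Hypothetical sketch of the server-side timing breakdown: timestamp
// the message at each stage, then compute per-stage durations that
// could be echoed back in the response for bottleneck hunting.
public class MessageTimings {
    public long receipt, afterParse, afterProcess, afterMarshall;

    /** Per-stage durations in ms: {parse, process, marshall}. */
    public long[] breakdown() {
        return new long[] {
            afterParse - receipt,
            afterProcess - afterParse,
            afterMarshall - afterProcess
        };
    }

    public static void main(String[] args) {
        MessageTimings t = new MessageTimings();
        t.receipt = 0; t.afterParse = 12;
        t.afterProcess = 45; t.afterMarshall = 52;
        long[] b = t.breakdown();
        System.out.println("parse=" + b[0] + "ms process=" + b[1]
                + "ms marshall=" + b[2] + "ms");
    }
}
```

Returning the breakdown in the response (for instance, in a header) keeps client and server clocks out of the comparison, since each stage is measured against the same clock.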