Posted to proton@qpid.apache.org by "Ken Giusti (JIRA)" <ji...@apache.org> on 2013/02/05 16:39:12 UTC

[jira] [Created] (PROTON-220) Create a set of "glass box" tests to quantify the performance of the proton codebase.

Ken Giusti created PROTON-220:
---------------------------------

             Summary: Create a set of "glass box" tests to quantify the performance of the proton codebase.
                 Key: PROTON-220
                 URL: https://issues.apache.org/jira/browse/PROTON-220
             Project: Qpid Proton
          Issue Type: Test
          Components: proton-c, proton-j
            Reporter: Ken Giusti


The goal of these tests would be to detect any performance degradation inadvertently introduced during development. These tests would not be intended to provide metrics on the "real world" behavior of proton-based applications. Rather, they are targeted at proton developers, to help gauge the effect their code changes may have on performance.

These tests should require no special configuration or setup to run. It should be easy to run these tests as part of the development process. The intent would be for a developer to run the tests before making any code changes, record the metrics, and compare them against the results obtained after the changes.

As described by Rafi:

"I think it would be good to include some performance metrics that isolate
the various components of proton. For example having a metric that simply
repeatedly encodes/decodes a message would be quite useful in isolating the
message implementation. Setting up two engines in memory and using them to
blast zero sized messages back and forth as fast as possible would tell us
how much protocol overhead the engine is adding. Using the codec directly
to encode/decode data would also be a useful measure. Each of these would
probably want to have multiple profiles, different message content,
different acknowledgement/flow control patterns, and different kinds of
data.

I think breaking out the different dimensions of the implementation as
above would provide a very useful tool to run before/after any performance
sensitive changes to detect and isolate regressions, or to test potential
improvements."
