Posted to oak-dev@jackrabbit.apache.org by Tobias Bocanegra <tr...@apache.org> on 2013/10/22 19:34:15 UTC

Improving the benchmark suite

Hi,

I'd like to make the following changes to how the benchmarks work:

1. Add support for executing several benchmarks within the same
suite. Currently each benchmark has its own setUp() code, which can
be expensive to execute, e.g. importing a large content structure,
creating many nodes, etc.

2. Move the concurrency control into the suite as well, so that we
can execute the same tests at different concurrency levels. As with
the point above, this helps with tests that are expensive to set up;
see the sketch below.
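
For illustration, the suite could look roughly like this. All names
below (Benchmark, BenchmarkSuite, etc.) are made up for the sketch
and are not the actual Oak benchmark API:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A single micro-benchmark. setUp() is gone from here; the shared,
// potentially expensive fixture setup moves into the suite below.
interface Benchmark {
    void run() throws Exception;
}

// Runs several benchmarks against one shared fixture, repeating
// each one at the configured concurrency levels.
class BenchmarkSuite {

    private final Runnable setUp;            // e.g. a large content import
    private final List<Benchmark> benchmarks;
    private final int[] concurrencyLevels;   // e.g. {1, 2, 4, 8}

    BenchmarkSuite(Runnable setUp, List<Benchmark> benchmarks,
                   int... concurrencyLevels) {
        this.setUp = setUp;
        this.benchmarks = benchmarks;
        this.concurrencyLevels = concurrencyLevels;
    }

    void run() throws InterruptedException {
        setUp.run(); // executed once for the whole suite
        for (Benchmark benchmark : benchmarks) {
            for (int concurrency : concurrencyLevels) {
                ExecutorService pool =
                        Executors.newFixedThreadPool(concurrency);
                long start = System.nanoTime();
                for (int i = 0; i < concurrency; i++) {
                    pool.submit(() -> {
                        try {
                            benchmark.run();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
                long millis = TimeUnit.NANOSECONDS.toMillis(
                        System.nanoTime() - start);
                System.out.printf("benchmark at concurrency %d: %d ms%n",
                        concurrency, millis);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Two trivial stand-in benchmarks; real ones would perform
        // JCR operations against the shared fixture.
        BenchmarkSuite suite = new BenchmarkSuite(
                () -> System.out.println("expensive shared setUp"),
                Arrays.<Benchmark>asList(
                        () -> Thread.sleep(5),
                        () -> Thread.sleep(10)),
                1, 2, 4);
        suite.run();
    }
}

The point is that the expensive setUp() runs exactly once per suite,
while each benchmark is then repeated at every configured
concurrency level.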

WDYT?

Regards, Toby

Re: Improving the benchmark suite

Posted by Tobias Bocanegra <tr...@apache.org>.
On Tue, Oct 22, 2013 at 11:39 AM, Jukka Zitting <ju...@gmail.com> wrote:
> Hi,
>
> On Tue, Oct 22, 2013 at 1:34 PM, Tobias Bocanegra <tr...@apache.org> wrote:
>> WDYT?
>
> Any improvements are of course welcome.
>
> On the other hand I believe we are reaching the limits of what the
> benchmark suite was originally designed for, i.e. a quick and simple
> mechanism for running basic micro-benchmarks. While it's possible to
> incrementally extend the design, I'm afraid that without a clear
> roadmap or target architecture we'll end up with an overly complex
> solution that'll get increasingly difficult to use and that nobody
> outside our core team understands. Instead, it might be worth
> revisiting existing generic benchmarking tools like JMeter or
> JUnitPerf, which I looked at earlier but considered overkill at the
> time.

true :-)

>
> BR,
>
> Jukka Zitting

Re: Improving the benchmark suite

Posted by Jukka Zitting <ju...@gmail.com>.
Hi,

On Tue, Oct 22, 2013 at 1:34 PM, Tobias Bocanegra <tr...@apache.org> wrote:
> WDYT?

Any improvements are of course welcome.

On the other hand I believe we are reaching the limits of what the
benchmark suite was originally designed for, i.e. a quick and simple
mechanism for running basic micro-benchmarks. While it's possible to
incrementally extend the design, I'm afraid that without a clear
roadmap or target architecture we'll end up with an overly complex
solution that'll get increasingly difficult to use and that nobody
outside our core team understands. Instead, it might be worth
revisiting existing generic benchmarking tools like JMeter or
JUnitPerf, which I looked at earlier but considered overkill at the
time.

BR,

Jukka Zitting