Posted to dev@commons.apache.org by Gilles Sadowski <gi...@harfang.homelinux.org> on 2011/07/25 00:09:09 UTC

[Math] Simple benchmarking utility

Hello.

Finding myself repeatedly writing the same few lines when trying to figure
out which of several implementations of some functionality was running
faster, I wonder whether it would be interesting to add a little utility in
the "test" section of the source tree. Something like the following:
---CUT---
    /**
     * Timing.
     *
     * @param repeatChunk Each timing measurement will be done for that
     * number of repeats of the code.
     * @param repeatStat Timing will be averaged over that number of runs.
     * @param methods Code being timed.
     * @return for each of the given {@code methods}, the averaged time (in
     * milliseconds) taken by a call to {@code run}.
     */
    public static double[] time(int repeatChunk,
                                int repeatStat,
                                Runnable ... methods) {
        // Nanoseconds-to-milliseconds conversion factor. (In the actual
        // class this would be a private constant; it is inlined here so
        // that the snippet is self-contained.)
        final double NANO_TO_MILLI = 1e-6;

        final int numMethods = methods.length;
        final double[][] times = new double[numMethods][repeatStat];

        // Interleave the candidates: each pass of the outer loop times
        // every method once, so that all of them see roughly the same
        // JVM conditions (GC, JIT compilation, ...).
        long time;
        for (int k = 0; k < repeatStat; k++) {
            for (int j = 0; j < numMethods; j++) {
                Runnable r = methods[j];
                time = System.nanoTime();
                for (int i = 0; i < repeatChunk; i++) {
                    r.run();
                }
                times[j][k] = (System.nanoTime() - time) * NANO_TO_MILLI;
            }
        }

        // "collector" folds an array with the given binary operation, so
        // "acc.value(times[j])" is the sum of method j's measurements.
        final MultivariateRealFunction acc = FunctionUtils.collector(new Add(), 0);
        final double[] avgTimes = new double[numMethods];

        // Normalize by the total number of calls to "run" to get the
        // average time per call.
        final double normFactor = 1d / (repeatStat * repeatChunk);
        for (int j = 0; j < numMethods; j++) {
            avgTimes[j] = normFactor * acc.value(times[j]);
        }

        return avgTimes;
    }
---CUT---

The idea is to have "interleaved" calls to the candidate implementations, so
that (hopefully) they will be penalized (or benefit) in the same way by what
the JVM is doing (GC or JIT compilation or ...) while the benchmark is
running.
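
For instance, a call could look like this (a purely illustrative sketch,
assuming the snippet lives next to the "time" method above; the input data,
the repeat counts, and the choice of comparing Math.sqrt with FastMath.sqrt
are made up for the example, and the "sink" array only serves to keep the
JIT from discarding the computations as dead code):
---CUT---
    final double[] data = new double[1000];
    for (int i = 0; i < data.length; i++) {
        data[i] = i + 0.5;
    }
    // Accumulator that the runnables write into, so the computation
    // cannot be optimized away.
    final double[] sink = new double[1];

    final double[] avg = time(1000, 20,
                              new Runnable() {
                                  public void run() {
                                      for (double x : data) {
                                          sink[0] += Math.sqrt(x);
                                      }
                                  }
                              },
                              new Runnable() {
                                  public void run() {
                                      for (double x : data) {
                                          sink[0] += FastMath.sqrt(x);
                                      }
                                  }
                              });

    System.out.println("Math.sqrt:     " + avg[0] + " ms per call to run()");
    System.out.println("FastMath.sqrt: " + avg[1] + " ms per call to run()");
---CUT---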

Does this make sense?


Regards,
Gilles



Re: [Math] Simple benchmarking utility

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hi.

> >>I'm willing to help on this if you want.
> >What do you propose?
> You mentioned the need for people to review/try the piece of code
> you've posted. I haven't done so yet, but I'm happy to.

Yes, please try it, and report unexpected results. Thank you!
[I'll send you the Java file in a separate mail.]

> As for Japex being too heavy: I agree. I didn't realize it needed
> input files; I thought only annotations were required. Also,
> although everyone says that benchmarking must be done very
> carefully, I think that most people do "quick and dirty" timing...

Yes, and it seems that timings done even slightly differently lead to
contradictory results... That's why I thought of a simple utility that
would nevertheless put the benchmarked alternatives on equal ground with
respect to what might be happening in the JVM during the test.
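
As an aside, one could also run an explicit warmup phase before the timed
loops, so that JIT compilation of the candidates happens up front instead
of in the middle of a measurement. This is only a sketch of a common
precaution, not part of the code I posted; the helper name "warmUp" and
its parameter are made up:
---CUT---
    /**
     * Hypothetical helper: exercise each candidate before measuring,
     * so that the JIT has (most likely) already compiled the
     * benchmarked code when the timed loops start.
     */
    private static void warmUp(int warmupRepeat, Runnable ... methods) {
        for (int i = 0; i < warmupRepeat; i++) {
            for (Runnable r : methods) {
                r.run();
            }
        }
    }
---CUT---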

> Having said that, well-formatted reports can be useful for starting
> a discussion. But the class you propose is so concise that it
> probably wins over Japex and others...

Well, it would be nice to compare the results of this utility and of Japex
on some selected pieces of code. If you feel like doing it...


Best regards,
Gilles



Re: [Math] Simple benchmarking utility

Posted by Sébastien Brisard <se...@m4x.org>.
On 27/07/11 12:05, Gilles Sadowski wrote:
> Hello.
>
>> I'm willing to help on this if you want.
> What do you propose?
You mentioned the need for people to review/try the piece of code you've
posted. I haven't done so yet, but I'm happy to.
As for Japex being too heavy: I agree. I didn't realize it needed input
files; I thought only annotations were required. Also, although everyone
says that benchmarking must be done very carefully, I think that most
people do "quick and dirty" timing...
Having said that, well-formatted reports can be useful for starting a 
discussion. But the class you propose is so concise that it probably 
wins over Japex and others...
Sebastien
>> Meanwhile, have you had a look at existing frameworks, such as
>> Japex (http://japex.java.net/)?
>> [...]
> I hadn't; I have now. It looks nice. I didn't think of something as
> elaborate (charts, etc.) but rather a small utility for quick and dirty
> micro-benchmarking ;-). Sometimes, one doesn't want to depend on heavy
> tools like Maven and/or XML input files and a browser to look at the
> results...
>
> I'm wondering whether we can be reasonably happy with the little code
> I've posted here.
>
> I saw that there is a Maven plugin; installing it (if others agree) might
> be interesting in itself for when we want to produce nice-looking reports.
>
>> [...]
>
> Best,
> Gilles
>




Re: [Math] Simple benchmarking utility

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hello.

> I'm willing to help on this if you want.

What do you propose?

> Meanwhile, have you had a look at existing frameworks, such as
> Japex (http://japex.java.net/)?
> [...]

I hadn't; I have now. It looks nice. I didn't think of something as
elaborate (charts, etc.) but rather a small utility for quick and dirty
micro-benchmarking ;-). Sometimes, one doesn't want to depend on heavy
tools like Maven and/or XML input files and a browser to look at the
results...

I'm wondering whether we can be reasonably happy with the little code
I've posted here.

I saw that there is a Maven plugin; installing it (if others agree) might be
interesting in itself for when we want to produce nice-looking reports.

> [...]


Best,
Gilles



Re: [Math] Simple benchmarking utility

Posted by Sébastien Brisard <se...@m4x.org>.
Hi,
I'm willing to help on this if you want. Meanwhile, have you had a look
at existing frameworks, such as Japex (http://japex.java.net/)?
Also, there is some interesting stuff on the web:
http://www.ibm.com/developerworks/java/library/j-benchmark1/index.html
I have other electronic papers; I'll try to find them.
Best regards,
Sebastien

On 26/07/11 12:52, Gilles Sadowski wrote:
> Hello.
>
>>>>> [...]
>>>>>
>>>>> The idea is to have "interleaved" calls to the candidate
>>>>> implementations, so that (hopefully) they will be penalized (or
>>>>> benefit) in the same way by what the JVM is doing (GC or JIT
>>>>> compilation or ...) while the benchmark is running.
>>>>>
>>>>> Does this make sense?
>>>> Could it be merged with the FastMath performance tests Sebb set up?
>>> I don't think so. If you meant rewriting the "FastMathTestPerformance"
>>> tests using the proposed utility, I don't think that it is necessary.
>> This was what I meant.
>> If this feature is not used in any existing tests, perhaps it should go in
>> some other directory. Perhaps a new "utilities" or something like that, at
>> the same level as "main" and "test"?
>>
>> Anyway, if you feel it's useful to have this available around, don't hesitate.
>>
> Well, the first use I had in mind was to provide an agreed-on way to base a
> discussion for requests such as "CM's implementation of foo is not
> efficient", and to avoid wondering how the reporter got his results. [This
> problem occurred with the MATH-628 issue.] Then, when the reported problem
> is confirmed, the new implementation will replace the less efficient one in
> CM, so that there won't be any alternative implementation left to compare.
>
> If you agree with the idea of a "standard" benchmark, it would be essential
> that several people have a look at the code: it might be that my crude
> "methodology" is not right, or that there is a bug.
>
> If the code is accepted, then we'll decide where to put it. Even if,
> according to the above, its primary use will not be for long-lived unit
> tests, it might still be useful for comparing the efficiency of CM's
> algorithms, such as the various optimizers. These comparisons could be
> added as performance reports similar to "FastMathTestPerformance".
>
>
> Thanks,
> Gilles
>




Re: [Math] Simple benchmarking utility

Posted by Phil Steitz <ph...@gmail.com>.
On 7/29/11 3:23 AM, Gilles Sadowski wrote:
> Hello.
>
>>>>>>> [...]
>>>>>>>
>>>>>>> The idea is to have "interleaved" calls to the candidate
>>>>>>> implementations, so that (hopefully) they will be penalized (or
>>>>>>> benefit) in the same way by what the JVM is doing (GC or JIT
>>>>>>> compilation or ...) while the benchmark is running.
>>>>>>>
>>>>>>> Does this make sense?
>>>>>> Could it be merged with the FastMath performance tests Sebb set up?
>>>>> I don't think so. If you meant rewriting the "FastMathTestPerformance"
>>>>> tests using the proposed utility, I don't think that it is necessary.
>>>> This was what I meant.
>>>> If this feature is not used in any existing tests, perhaps it should go
>>>> in some other directory. Perhaps a new "utilities" or something like
>>>> that, at the same level as "main" and "test"?
>>>>
>>>> Anyway, if you feel it's useful to have this available around, don't
>>>> hesitate.
>>>>
>>> Well, the first use I had in mind was to provide an agreed-on way to base
>>> a discussion for requests such as "CM's implementation of foo is not
>>> efficient", and to avoid wondering how the reporter got his results. [This
>>> problem occurred with the MATH-628 issue.] Then, when the reported problem
>>> is confirmed, the new implementation will replace the less efficient one
>>> in CM, so that there won't be any alternative implementation left to
>>> compare.
>>>
>>> If you agree with the idea of a "standard" benchmark, it would be
>>> essential that several people have a look at the code: it might be that
>>> my crude "methodology" is not right, or that there is a bug.
>>>
>>> If the code is accepted, then we'll decide where to put it. Even if,
>>> according to the above, its primary use will not be for long-lived unit
>>> tests, it might still be useful for comparing the efficiency of CM's
>>> algorithms, such as the various optimizers. These comparisons could be
>>> added as performance reports similar to "FastMathTestPerformance".
>> +1 to include it.  I would say start by putting it in a top level
>> package of its own - say, "benchmark" in src/test/java.  That way we
>> can use it in test classes or experimentation that we do using test
>> classes to set up benchmarks.
> Isn't it "safer" to put it in package "o.a.c.m" (under "src/test/java")?
> I was thinking of "PerfTestUtils" for the class name.

What I meant was o.a.c.m.benchmark, but I would be fine including it
at the top level to start.

Phil
>
>> If it evolves into a generically
>> useful microbenchmark generator, we can talk about moving it to
>> src/main.  Thanks for doing this.
> I didn't think that this utility would ever move to "main", as it's just for
> internal testing.
>
>
> Regards,
> Gilles
>




Re: [Math] Simple benchmarking utility

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hello.

> >
> >>>>> [...]
> >>>>>
> >>>>> The idea is to have "interleaved" calls to the candidate
> >>>>> implementations, so that (hopefully) they will be penalized (or
> >>>>> benefit) in the same way by what the JVM is doing (GC or JIT
> >>>>> compilation or ...) while the benchmark is running.
> >>>>>
> >>>>> Does this make sense?
> >>>> Could it be merged with the FastMath performance tests Sebb set up?
> >>> I don't think so. If you meant rewriting the "FastMathTestPerformance"
> >>> tests using the proposed utility, I don't think that it is necessary.
> >> This was what I meant.
> >> If this feature is not used in any existing tests, perhaps it should go
> >> in some other directory. Perhaps a new "utilities" or something like
> >> that, at the same level as "main" and "test"?
> >>
> >> Anyway, if you feel it's useful to have this available around, don't
> >> hesitate.
> >>
> > Well, the first use I had in mind was to provide an agreed-on way to base
> > a discussion for requests such as "CM's implementation of foo is not
> > efficient", and to avoid wondering how the reporter got his results. [This
> > problem occurred with the MATH-628 issue.] Then, when the reported problem
> > is confirmed, the new implementation will replace the less efficient one
> > in CM, so that there won't be any alternative implementation left to
> > compare.
> >
> > If you agree with the idea of a "standard" benchmark, it would be
> > essential that several people have a look at the code: it might be that
> > my crude "methodology" is not right, or that there is a bug.
> >
> > If the code is accepted, then we'll decide where to put it. Even if,
> > according to the above, its primary use will not be for long-lived unit
> > tests, it might still be useful for comparing the efficiency of CM's
> > algorithms, such as the various optimizers. These comparisons could be
> > added as performance reports similar to "FastMathTestPerformance".
> 
> +1 to include it.  I would say start by putting it in a top level
> package of its own - say, "benchmark" in src/test/java.  That way we
> can use it in test classes or experimentation that we do using test
> classes to set up benchmarks.

Isn't it "safer" to put it in package "o.a.c.m" (under "src/test/java")?
I was thinking of "PerfTestUtils" for the class name.

> If it evolves into a generically
> useful microbenchmark generator, we can talk about moving it to
> src/main.  Thanks for doing this.

I didn't think that this utility would ever move to "main", as it's just for
internal testing.


Regards,
Gilles



Re: [Math] Simple benchmarking utility

Posted by Phil Steitz <ph...@gmail.com>.
On 7/26/11 3:52 AM, Gilles Sadowski wrote:
> Hello.
>
>>>>> [...]
>>>>>
>>>>> The idea is to have "interleaved" calls to the candidate
>>>>> implementations, so that (hopefully) they will be penalized (or
>>>>> benefit) in the same way by what the JVM is doing (GC or JIT
>>>>> compilation or ...) while the benchmark is running.
>>>>>
>>>>> Does this make sense?
>>>> Could it be merged with the FastMath performance tests Sebb set up?
>>> I don't think so. If you meant rewriting the "FastMathTestPerformance"
>>> tests using the proposed utility, I don't think that it is necessary.
>> This was what I meant.
>> If this feature is not used in any existing tests, perhaps it should go in
>> some other directory. Perhaps a new "utilities" or something like that, at
>> the same level as "main" and "test"?
>>
>> Anyway, if you feel it's useful to have this available around, don't
>> hesitate.
>>
> Well, the first use I had in mind was to provide an agreed-on way to base a
> discussion for requests such as "CM's implementation of foo is not
> efficient", and to avoid wondering how the reporter got his results. [This
> problem occurred with the MATH-628 issue.] Then, when the reported problem
> is confirmed, the new implementation will replace the less efficient one in
> CM, so that there won't be any alternative implementation left to compare.
>
> If you agree with the idea of a "standard" benchmark, it would be essential
> that several people have a look at the code: it might be that my crude
> "methodology" is not right, or that there is a bug.
>
> If the code is accepted, then we'll decide where to put it. Even if,
> according to the above, its primary use will not be for long-lived unit
> tests, it might still be useful for comparing the efficiency of CM's
> algorithms, such as the various optimizers. These comparisons could be
> added as performance reports similar to "FastMathTestPerformance".

+1 to include it.  I would say start by putting it in a top level
package of its own - say, "benchmark" in src/test/java.  That way we
can use it in test classes or experimentation that we do using test
classes to set up benchmarks.  If it evolves into a generically
useful microbenchmark generator, we can talk about moving it to
src/main.  Thanks for doing this.

Phil
>
>
> Thanks,
> Gilles
>




Re: [Math] Simple benchmarking utility

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hello.

> > > > [...]
> > > >
> > > > The idea is to have "interleaved" calls to the candidate
> > > > implementations, so that (hopefully) they will be penalized (or
> > > > benefit) in the same way by what the JVM is doing (GC or JIT
> > > > compilation or ...) while the benchmark is running.
> > > >
> > > > Does this make sense?
> > >
> > > Could it be merged with the FastMath performance tests Sebb set up?
> >
> > I don't think so. If you meant rewriting the "FastMathTestPerformance"
> > tests using the proposed utility, I don't think that it is necessary.
>
> This was what I meant.
> If this feature is not used in any existing tests, perhaps it should go in
> some other directory. Perhaps a new "utilities" or something like that, at
> the same level as "main" and "test"?
>
> Anyway, if you feel it's useful to have this available around, don't
> hesitate.
> 

Well, the first use I had in mind was to provide an agreed-on way to base a
discussion for requests such as "CM's implementation of foo is not
efficient", and to avoid wondering how the reporter got his results. [This
problem occurred with the MATH-628 issue.] Then, when the reported problem
is confirmed, the new implementation will replace the less efficient one in
CM, so that there won't be any alternative implementation left to compare.

If you agree with the idea of a "standard" benchmark, it would be essential
that several people have a look at the code: it might be that my crude
"methodology" is not right, or that there is a bug.

If the code is accepted, then we'll decide where to put it. Even if,
according to the above, its primary use will not be for long-lived unit
tests, it might still be useful for comparing the efficiency of CM's
algorithms, such as the various optimizers. These comparisons could be
added as performance reports similar to "FastMathTestPerformance".


Thanks,
Gilles



Re: [Math] Simple benchmarking utility

Posted by lu...@free.fr.
Hi Gilles,

----- Original Message -----
> Hello.
> 
> > > [...]
> > >
> > > The idea is to have "interleaved" calls to the candidate
> > > implementations, so that (hopefully) they will be penalized (or
> > > benefit) in the same way by what the JVM is doing (GC or JIT
> > > compilation or ...) while the benchmark is running.
> > >
> > > Does this make sense?
> >
> > Could it be merged with the FastMath performance tests Sebb set up?
> 
> I don't think so. If you meant rewriting the "FastMathTestPerformance"
> tests using the proposed utility, I don't think that it is necessary.

This was what I meant.
If this feature is not used in any existing tests, perhaps it should go in
some other directory. Perhaps a new "utilities" or something like that, at
the same level as "main" and "test"?

Anyway, if you feel it's useful to have this available around, don't hesitate.

Best regards,
Luc

> If you meant something else, I didn't get it. :-}
> 
> 
> Best,
> Gilles
> 



Re: [Math] Simple benchmarking utility

Posted by Gilles Sadowski <gi...@harfang.homelinux.org>.
Hello.

> > [...]
> >
> > The idea is to have "interleaved" calls to the candidate
> > implementations, so that (hopefully) they will be penalized (or
> > benefit) in the same way by what the JVM is doing (GC or JIT
> > compilation or ...) while the benchmark is running.
> >
> > Does this make sense?
>
> Could it be merged with the FastMath performance tests Sebb set up?

I don't think so. If you meant rewriting the "FastMathTestPerformance" tests
using the proposed utility, I don't think that it is necessary.
If you meant something else, I didn't get it. :-}


Best,
Gilles



Re: [Math] Simple benchmarking utility

Posted by lu...@free.fr.

----- Original Message -----
> Hello.

Hi Gilles,

> 
> Finding myself repeatedly writing the same few lines when trying to
> figure out which of several implementations of some functionality was
> running faster, I wonder whether it would be interesting to add a
> little utility in the "test" section of the source tree. Something
> like the following:
> ---CUT---
>     /**
>      * Timing.
>      *
>      * @param repeatChunk Each timing measurement will be done for that
>      * number of repeats of the code.
>      * @param repeatStat Timing will be averaged over that number of runs.
>      * @param methods Code being timed.
>      * @return for each of the given {@code methods}, the averaged time (in
>      * milliseconds) taken by a call to {@code run}.
>      */
>     public static double[] time(int repeatChunk,
>                                 int repeatStat,
>                                 Runnable ... methods) {
>         final int numMethods = methods.length;
>         final double[][] times = new double[numMethods][repeatStat];
> 
>         long time;
>         for (int k = 0; k < repeatStat; k++) {
>             for (int j = 0; j < numMethods; j++) {
>                 Runnable r = methods[j];
>                 time = System.nanoTime();
>                 for (int i = 0; i < repeatChunk; i++) {
>                     r.run();
>                 }
>                 times[j][k] = (System.nanoTime() - time) * NANO_TO_MILLI;
>             }
>         }
> 
>         final MultivariateRealFunction acc =
>             FunctionUtils.collector(new Add(), 0);
>         final double[] avgTimes = new double[numMethods];
> 
>         final double normFactor = 1d / (repeatStat * repeatChunk);
>         for (int j = 0; j < numMethods; j++) {
>             avgTimes[j] = normFactor * acc.value(times[j]);
>         }
> 
>         return avgTimes;
>     }
> 
> The idea is to have "interleaved" calls to the candidate
> implementations, so that (hopefully) they will be penalized (or
> benefit) in the same way by what the JVM is doing (GC or JIT
> compilation or ...) while the benchmark is running.
> 
> Does this make sense?

Could it be merged with the FastMath performance tests Sebb set up?

Best regards,
Luc

> 
> 
> Regards,
> Gilles
> 
