Posted to dev@river.apache.org by Peter <ji...@zeus.net.au> on 2011/03/01 06:07:13 UTC

Re: Benchmark organization

Could you call it FastList2? That avoids namespace conflicts, which makes testing easier.

Is JUnit sufficient for testing?

We could have a directory called snippets where we keep all the alternative implementations, each appended with a number. If a replacement supersedes the original, the original can be stored there as FastList0?

Peter.
----- Original message -----
> On 2/28/2011 1:38 PM, Peter wrote:
> > Hmm, valuable insight into the future based on past experience.
> >
> > Alternate code snippets sitting on the performance shelf need to be compilable
> > in their own right, don't they?
> >
> > But as you've mentioned, FastList and its replacement don't share a common
> > interface for testing.
> >
> > We must test using a common API somewhere, such as the JavaSpace API, but that
> > risks duplicating identical code that might diverge over time, causing
> > inaccurate test results. Divergence is expected as we evolve implementations.
> >
>
> I wrote a very simple wrapper interface that is implemented equally smoothly
> on top of either of the FastList interfaces. For example, it uses the visitor
> pattern, which is neutral between the old and new ways of scanning a list.
>
> I would not benchmark a base utility through something as big and
> complicated as the JavaSpace API. Especially on the relatively small
> system I have for benchmarking, other issues could mask differences in
> FastList speed. I also don't know enough about the performance
> characteristics of the way the QA tests are connected to their servers
> to include that in a benchmark of anything else.
>
> Patricia
>
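
A rough sketch of the kind of neutral wrapper Patricia describes above. Every
name here is invented for illustration and is not the actual River code; the
point is that a visitor-style scan lets a single benchmark driver traverse
either FastList implementation without depending on either one's native API:

    // Hypothetical neutral wrapper over the old and new FastList APIs.
    // All names are illustrative stand-ins, not River source.
    public interface ListUnderTest<T> {

        void add(T item);

        boolean remove(T item);

        // Visitor-style scan: the caller supplies the per-item logic, so each
        // adapter is free to traverse its FastList however that API requires.
        void scan(Visitor<T> visitor);

        interface Visitor<T> {
            // Return false to stop the scan early.
            boolean visit(T item);
        }
    }

Each FastList variant then gets a thin adapter implementing this interface,
and the benchmark driver is written once against the wrapper.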


Re: Benchmark organization

Posted by Peter <ji...@zeus.net.au>.
See databene.org/contiperf

It's a performance test suite; the syntax is similar to JUnit 4.

You can set the number of iterations, the number of threads, and the expected performance window using annotations.
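
A minimal sketch of what such a test might look like, assuming ContiPerf's
JUnit 4 rule; the queue here is only a stand-in for the list under test:

    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.databene.contiperf.PerfTest;
    import org.databene.contiperf.Required;
    import org.databene.contiperf.junit.ContiPerfRule;
    import org.junit.Rule;
    import org.junit.Test;

    public class FastListPerfTest {

        // ContiPerf hooks into JUnit 4 as a method rule.
        @Rule
        public ContiPerfRule rule = new ContiPerfRule();

        // Stand-in for the FastList implementation being measured.
        private final ConcurrentLinkedQueue<Integer> list =
                new ConcurrentLinkedQueue<Integer>();

        @Test
        @PerfTest(invocations = 10000, threads = 8) // iterations and thread count
        @Required(max = 50, average = 5)            // performance window, in ms
        public void addAndPoll() {
            list.add(42);
            list.poll();
        }
    }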

Peter.
----- Original message -----
> Yup, my original idea was to have a separate directory structure for
> this sort of thing, and rename classes in it for convenience. I was
> thinking of putting in a readme.txt describing the mapping to the real
> name for each source file that is in use in the trunk. For example, the
> recently checked in FastList implementation is "CLQFastList" in my test
> environment, because it is based on ConcurrentLinkedQueue.
>
> Of course, it would not just be one snippets directory. It would be more
> like an "experiments" directory with a separate sub-directory for each
> experiment.
>
> JUnit is fine for unit-level functional testing, especially with the
> advance to version 4. I have not tried using it for benchmarks, and I
> don't see any benefit for benchmarking over a simple application.
>
> Patricia


Re: Benchmark organization

Posted by Patricia Shanahan <pa...@acm.org>.
Yup, my original idea was to have a separate directory structure for 
this sort of thing, and rename classes in it for convenience. I was 
thinking of putting in a readme.txt describing the mapping to the real 
name for each source file that is in use in the trunk. For example, the 
recently checked in FastList implementation is "CLQFastList" in my test 
environment, because it is based on ConcurrentLinkedQueue.

Of course, it would not just be one snippets directory. It would be more 
like an "experiments" directory with a separate sub-directory for each 
experiment.
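
One possible layout, with purely illustrative names:

    experiments/
        fastlist-clq/
            readme.txt        (maps CLQFastList.java to FastList.java in trunk)
            CLQFastList.java  (the ConcurrentLinkedQueue-based implementation)
            FastList0.java    (the original implementation, kept for comparison)
        fastlist-visitor/
            readme.txt
            VisitorFastList.java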

JUnit is fine for unit-level functional testing, especially with the 
advance to version 4. I have not tried using it for benchmarks, and I 
don't see any benefit for benchmarking over a simple application.
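
A simple benchmark application of that sort might be no more than the
following sketch, where ConcurrentLinkedQueue again stands in for the list
under test:

    import java.util.concurrent.ConcurrentLinkedQueue;

    public class SimpleBench {

        public static void main(String[] args) {
            run(100000); // warm-up pass so the JIT compiles the hot paths first
            long start = System.nanoTime();
            long checksum = run(1000000);
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("1000000 add/scan cycles: " + elapsedMs
                    + " ms (checksum " + checksum + ")");
        }

        // Adds n items and then scans them all, returning a checksum so the
        // scan cannot be optimized away as dead code.
        private static long run(int n) {
            ConcurrentLinkedQueue<Integer> list =
                    new ConcurrentLinkedQueue<Integer>();
            for (int i = 0; i < n; i++) {
                list.add(i);
            }
            long sum = 0;
            for (Integer item : list) {
                sum += item;
            }
            return sum;
        }
    }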

Patricia

