Posted to dev@river.apache.org by Patricia Shanahan <pa...@acm.org> on 2010/11/19 22:51:33 UTC

ant run-tests does too much

I'm having a problem with "ant run-tests" doing too much, and taking too 
long to get to the test in question.

Even when run immediately after an all.build, so that all jars are up to
date, it goes through a lot of deleting and rebuilding of jars.

Can anything be done about this? Is there some alternative target I can 
use that does not force rebuilding? Quick running of a single test is 
very important for debug.

Thanks,

Patricia

Re: ant run-tests does too much

Posted by Patricia Shanahan <pa...@acm.org>.
Sim IJskes - QCG wrote:
> On 11/20/2010 02:03 PM, Patricia Shanahan wrote:
>> See http://www.patriciashanahan.com/debug/index.html for how I approach
>> debug. In the debug loop, I think of a theory about what is going wrong,
>> design an experiment to test it, and run the experiment.
> 
> I've scanned the mentioned URL. Do you consider the experiments you
> write about different from unit tests?

Different, though in some cases experiments may inspire unit tests.
Experiments may involve questions about internal behavior of methods.

During debug, I don't just care whether a method works or not. If it
does not work, I need to know exactly why not. During testing, I usually
look at a method as a black box, and just try to find out if it always
does what its documentation says it does.

> In my approach a unit test can be designed as soon as an error is 
> reproducible, and this unit test can be used to verify the fix of the 
> error. The other, already existing unit tests can be run to check for 
> regressions. Is it just a choice of words where we differ, or do you see 
> a real difference between experiments and unit tests? Or do I use the 
> term unit test too broadly?

My River debug efforts have all been in cases in which we have a test,
but it was not being run. In those cases I see no need to write a new
test, just make sure the existing tests get run.

If a bug comes in as a user report, there are at least two bugs, the bug
in the product code, and the bug in the test process that let it get out
in the field. Both need to be fixed, and usually that requires at least
one more test.

One problem that I'm having with River is bugs I can see by code
inspection, but have not yet been able to reproduce in a test. That is
always a difficult case.

Patricia

Re: ant run-tests does too much

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 11/20/2010 02:03 PM, Patricia Shanahan wrote:
> See http://www.patriciashanahan.com/debug/index.html for how I approach
> debug. In the debug loop, I think of a theory about what is going wrong,
> design an experiment to test it, and run the experiment.

I've scanned the mentioned URL. Do you consider the experiments you 
write about different from unit tests?

In my approach a unit test can be designed as soon as an error is 
reproducible, and this unit test can be used to verify the fix of the 
error. The other, already existing unit tests can be run to check for 
regressions. Is it just a choice of words where we differ, or do you see 
a real difference between experiments and unit tests? Or do I use the 
term unit test too broadly?

Gr. Sim

Re: ant run-tests does too much

Posted by Patricia Shanahan <pa...@acm.org>.
Sim IJskes - QCG wrote:
> On 11/20/2010 02:03 PM, Patricia Shanahan wrote:
>> I usually update to the latest version between debug efforts. I tend not
>> to update in the middle of a debug effort so that I don't confuse
>> matters by getting changes due to check-ins.
> 
> Very wise.
> 
>> The alternative is to make the rebuild really, really fast if there are
>> no source code changes. I know you have been making valuable progress
>> in that direction.
> 
> In the latest revision I've removed the dependency on the 
> harness-runtime, so you have to run 'ant harness-runtime run-tests' to 
> get a build.

I'll check out the latest revision ASAP.

Patricia

Re: ant run-tests does too much

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 11/20/2010 02:03 PM, Patricia Shanahan wrote:
> I usually update to the latest version between debug efforts. I tend not
> to update in the middle of a debug effort so that I don't confuse
> matters by getting changes due to check-ins.

Very wise.

> The alternative is to make the rebuild really, really fast if there are
> no source code changes. I know you have been making valuable progress
> in that direction.

In the latest revision I've removed the dependency on the 
harness-runtime, so you have to run 'ant harness-runtime run-tests' to 
get a build.

Gr. Sim
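The two-step invocation Sim mentions might look like this in a shell
session. This is only a sketch: `ant` is stubbed with a shell function so
the snippet is self-contained, and the target names are the ones
discussed in the thread.

```shell
# Stub standing in for the real Ant launcher, so this sketch runs anywhere.
ant() { echo "ant: running target(s): $*"; }

# One-time step: build the harness runtime, then run the tests.
ant harness-runtime run-tests

# Debug loop: re-run the tests without triggering a rebuild.
ant run-tests
```

The point of the split is that the expensive build step is opt-in, so a
debug loop can repeat only the second command.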

Re: ant run-tests does too much

Posted by Patricia Shanahan <pa...@acm.org>.
On 11/20/2010 12:47 AM, Sim IJskes - QCG wrote:
> On 11/19/2010 10:51 PM, Patricia Shanahan wrote:
>> I'm having a problem with "ant run-tests" doing too much, and taking too
>> long to get to the test in question.
>>
>> Even when run immediately after an all.build, so that all jars are up to
>> date, it goes through a lot of deleting and rebuilding of jars.
>>
>> Can anything be done about this? Is there some alternative target I can
>> use that does not force rebuilding? Quick running of a single test is
>> very important for debug.
>
> I'm assuming you work with the latest version. Do you want to keep
> running a single test in a loop? That's the only reason I can think of
> why you wouldn't want to build the harness-runtime and underlying
> river-runtime.

I usually update to the latest version between debug efforts. I tend not
to update in the middle of a debug effort so that I don't confuse
matters by getting changes due to check-ins.

See http://www.patriciashanahan.com/debug/index.html for how I approach
debug. In the debug loop, I think of a theory about what is going wrong,
design an experiment to test it, and run the experiment.

Running the experiment may require source code changes, but often only
needs changes in break point placement, or the logging configuration
file, or even just in the questions I intend to ask when the program
reaches an existing break point.

And, yes, there are times when I run the same tests repeatedly in a
"while true" loop. For example, I ran the tests that fail on Peter's
system but not mine a couple of hundred times, to see whether it was a
timing problem with a different failure frequency.
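A bounded version of such a loop can be sketched in the shell. Here
`run_one_test` is a hypothetical stand-in for the real single-test
invocation (for example, an ant command selecting the suspect test); the
stub simply succeeds so the sketch is self-contained.

```shell
# Hypothetical stand-in for the real single-test invocation,
# e.g. an "ant run-tests" command with the suspect test selected.
run_one_test() { true; }

# Run the test a couple of hundred times and count failures, to estimate
# the failure frequency of a suspected timing-dependent bug.
failures=0
for i in $(seq 1 200); do
    run_one_test > /dev/null 2>&1 || failures=$((failures + 1))
done
echo "failures: $failures/200"
```

A loop like this only pays off when each iteration runs just the test,
which is why a fast or skippable rebuild matters.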

The alternative is to make the rebuild really, really fast if there are
no source code changes. I know you have been making valuable progress
in that direction.

Patricia

Re: ant run-tests does too much

Posted by Peter Firmstone <ji...@zeus.net.au>.
Sim IJskes - QCG wrote:
> On 11/19/2010 10:51 PM, Patricia Shanahan wrote:
>> I'm having a problem with "ant run-tests" doing too much, and taking too
>> long to get to the test in question.
>>
>> Even when run immediately after an all.build, so that all jars are up to
>> date, it goes through a lot of deleting and rebuilding of jars.
>>
>> Can anything be done about this? Is there some alternative target I can
>> use that does not force rebuilding? Quick running of a single test is
>> very important for debug.
>
> I'm assuming you work with the latest version. Do you want to keep 
> running a single test in a loop? That's the only reason I can think of 
> why you wouldn't want to build the harness-runtime and underlying 
> river-runtime.
>
> If you are in a hurry, just remove the harness dependency in the 
> run-tests target.
>
> Shall I include a target run-tests-nodep?
>
> Gr. Sim
>
Or run-test?

Re: ant run-tests does too much

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 11/19/2010 10:51 PM, Patricia Shanahan wrote:
> I'm having a problem with "ant run-tests" doing too much, and taking too
> long to get to the test in question.
>
> Even when run immediately after an all.build, so that all jars are up to
> date, it goes through a lot of deleting and rebuilding of jars.
>
> Can anything be done about this? Is there some alternative target I can
> use that does not force rebuilding? Quick running of a single test is
> very important for debug.

I'm assuming you work with the latest version. Do you want to keep 
running a single test in a loop? That's the only reason I can think of 
why you wouldn't want to build the harness-runtime and underlying 
river-runtime.

If you are in a hurry, just remove the harness dependency in the 
run-tests target.

Shall I include a target run-tests-nodep?

Gr. Sim
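A run-tests-nodep target of the kind proposed here might be sketched in
build.xml as follows. This is only an illustration: the description text
and the elided target body are assumptions, not the actual River build
file.

```xml
<!-- Hypothetical sketch: same body as run-tests, but with no
     depends="harness-runtime" attribute, so nothing is rebuilt. -->
<target name="run-tests-nodep"
        description="Run the QA tests without rebuilding the runtimes">
    <!-- ...same test-harness invocation as the run-tests target... -->
</target>
```

With such a target, 'ant harness-runtime run-tests' would still give a
full build-and-test cycle, while 'ant run-tests-nodep' would skip
straight to the tests.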