Posted to dev@river.apache.org by Jonathan Costers <jo...@googlemail.com> on 2010/09/15 15:37:04 UTC

Status of QA test verification

This is what we have running in Hudson now:

[java] -----------------------------------------
[java]
[java] # of tests started   = 750
[java] # of tests completed = 750
[java] # of tests skipped   = 21
[java] # of tests passed    = 748
[java] # of tests failed    = 2
[java]
[java] -----------------------------------------

Total time: 598 minutes 7 seconds

That's almost 10 hours ... :-) And we seem to have about 50% coverage now
(given that there are roughly 1500 .td files available)

Here are the failing tests:

[java] -----------------------------------------
[java] com/sun/jini/test/impl/reggie/MultihomedClientTest.td
[java] Test Failed: Test Failed with an Unexpected Exception:
java.lang.RuntimeException: Error resolving key logging obtained from
source test description properties
[java]
[java] -----------------------------------------

-> fixed this one by pointing it to an existing logging config file (just
committed that fix)
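For context, the error above was the harness failing to resolve the "logging"
key in the test description. A sketch of the shape of that kind of fix (the
property value and path below are purely illustrative placeholders, not the
actual committed change):

```
# Illustrative only: point the test description's logging key at a
# java.util.logging configuration file that actually exists on the
# build machine. "qa.home" and the path are hypothetical placeholders.
logging=${qa.home}/harness/configs/default.logging
```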

[java] -----------------------------------------
[java] com/sun/jini/test/spec/lookupdiscovery/MulticastMonitorAllChange.td
[java] Test Failed: Test Failed:
com.sun.jini.qa.harness.TestException: change failed -- waited 870
seconds (14 minutes) -- 3 change event(s) expected, 0 change event(s)
received
[java]
[java] -----------------------------------------

-> still not sure why this is failing on Hudson ... It passes fine locally
...

Re: Status of QA test verification

Posted by Peter Firmstone <ji...@zeus.net.au>.
That's awesome Jonathan. I'd never realised we had so many tests; for 
some reason I'd gotten the impression that I was running everything and 
never investigated it further.

This is really going to improve development confidence.

Cheers,

Peter.

Jonathan Costers wrote:
> Currently on Hudson:
>
> [java] -----------------------------------------
> [java]
> [java] # of tests started   = 1022
> [java] # of tests completed = 1022
> [java] # of tests skipped   = 40
> [java] # of tests passed    = 1021
> [java] # of tests failed    = 1
> [java]
> [java] -----------------------------------------
> [java]
> [java]    Date finished:
> [java]       Thu Sep 16 08:10:04 UTC 2010
> [java]    Time elapsed:
> [java]       41467 seconds
>
> Full results:
> https://hudson.apache.org/hudson/view/River/job/River-trunk-QA/ws/jtsk/trunk/qa/result/index.html
>
> This is the test that fails:
> https://hudson.apache.org/hudson/view/River/job/River-trunk-QA/ws/jtsk/trunk/qa/result/com_sun_jini_test_impl_discoverymanager_RemoveGroupsLocsDiscard.td.txt
>
> As you can see, it fails because it tries to bind to some port that is
> already in use.
>
> Note that this time, the tests ran on the ubuntu1 instance, not solaris1.
>
>   


Re: Status of QA test verification

Posted by Jonathan Costers <jo...@googlemail.com>.
Currently on Hudson:

[java] -----------------------------------------
[java]
[java] # of tests started   = 1022
[java] # of tests completed = 1022
[java] # of tests skipped   = 40
[java] # of tests passed    = 1021
[java] # of tests failed    = 1
[java]
[java] -----------------------------------------
[java]
[java]    Date finished:
[java]       Thu Sep 16 08:10:04 UTC 2010
[java]    Time elapsed:
[java]       41467 seconds

Full results:
https://hudson.apache.org/hudson/view/River/job/River-trunk-QA/ws/jtsk/trunk/qa/result/index.html

This is the test that fails:
https://hudson.apache.org/hudson/view/River/job/River-trunk-QA/ws/jtsk/trunk/qa/result/com_sun_jini_test_impl_discoverymanager_RemoveGroupsLocsDiscard.td.txt

As you can see, it fails because it tries to bind to some port that is
already in use.
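Not River-specific, but one standard way to sidestep this class of failure is
to let the OS assign an ephemeral port instead of hard-coding one. A minimal
sketch, assuming the test is free to choose its own port:

```java
import java.net.ServerSocket;

// Sketch (not River code): new ServerSocket(0) asks the OS for any
// free port, so two test runs on the same machine cannot collide on
// a hard-coded port number.
public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket socket = new ServerSocket(0)) {
            // The OS-assigned port is guaranteed free at bind time.
            int port = socket.getLocalPort();
            System.out.println("bound to free port " + port);
        }
    }
}
```

Tests that must share a fixed port are harder; those need either per-host
coordination or a retry on bind failure.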

Note that this time, the tests ran on the ubuntu1 instance, not solaris1.

Re: Status of QA test verification

Posted by Jonathan Costers <jo...@googlemail.com>.
> Assuming approximate linearity, the full set would take 20 hours to run.

It must also be said that (almost) full logging is currently enabled on
Hudson.
Reducing that back to INFO level would probably also reduce the time the
run takes.
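With java.util.logging, that is a small change in the logging configuration
file; a minimal sketch of an INFO-level setup:

```
# java.util.logging configuration: root logger at INFO, console output.
handlers=java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.ConsoleHandler.level=INFO
```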

Re: Status of QA test verification

Posted by Jonathan Costers <jo...@googlemail.com>.
>
>
> Perhaps we should create a new category, orthogonal to the existing
> categories, that selects about a one hour regression test that one could run
> regularly during development? I don't think dropping complete existing
> categories is a good way to do it, because that leads to complete
> non-testing of significant components.
>

Note that the QA suite will run daily, but ONLY if there have been SVN
changes to trunk in the meantime.
But I do see value in a "reduced" set. The problem is how to identify which
tests should go in that set.



> Obviously, tests that have failed during the test expansion work are very
> strong candidates for inclusion in the reduced set - they test things that
> are affected by the changes you have backed out, and presumably
> re-implementing those changes is going to be an important part of the future
> work.
>

I agree.


> Incidentally, I've suspended my attempts to write more tests, and switched
> to TaskManager work, because I realized I don't know whether we have the
> tests I think are needed or not. I did know that I was not satisfied with
> the set that were being run, but that is a different matter.
>

Ideally, we should be focusing on writing more unit tests for such things as
the TaskManager.
It seems to me that the QA suite should remain fairly stable (I mean, not a
lot of tests should be added or changed), unless we change the Jini spec or
significantly change behaviour of the implementation.

Re: Status of QA test verification

Posted by Patricia Shanahan <pa...@acm.org>.
Jonathan Costers wrote:
> This is what we have running in Hudson now:
> 
> [java] -----------------------------------------
> [java]
> [java] # of tests started   = 750
> [java] # of tests completed = 750
> [java] # of tests skipped   = 21
> [java] # of tests passed    = 748
> [java] # of tests failed    = 2
> [java]
> [java] -----------------------------------------
> 
> Total time: 598 minutes 7 seconds
> 
> That's almost 10 hours ... :-) And we seem to have about 50% coverage now
> (given that there are roughly 1500 .td files available)

Assuming approximate linearity, the full set would take 20 hours to run. 
It is practical to do a 20-hour test once a day, but not once every few 
builds, and I like to test early, test often.
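The extrapolation checks out against the numbers quoted above; a quick
back-of-the-envelope that also sizes the proposed one-hour subset:

```java
// Numbers from the thread: 750 tests in 598 minutes 7 seconds,
// with roughly 1500 .td files available in total.
public class RuntimeEstimate {
    public static void main(String[] args) {
        long elapsedSeconds = 598 * 60 + 7;               // 35887 s for 750 tests
        double perTest = elapsedSeconds / 750.0;          // ~47.8 s per test
        double fullSetHours = perTest * 1500 / 3600.0;    // ~19.9 hours, i.e. "20 hours"
        long oneHourBudget = Math.round(3600 / perTest);  // ~75 tests fit in one hour
        System.out.println("full set: ~" + fullSetHours + " hours");
        System.out.println("one-hour subset: ~" + oneHourBudget + " tests");
    }
}
```

So a one-hour regression category would hold on the order of 75 tests, about
5% of the available .td files.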

Perhaps we should create a new category, orthogonal to the existing 
categories, that selects about a one hour regression test that one could 
run regularly during development? I don't think dropping complete 
existing categories is a good way to do it, because that leads to 
complete non-testing of significant components.

Obviously, tests that have failed during the test expansion work are 
very strong candidates for inclusion in the reduced set - they test 
things that are affected by the changes you have backed out, and 
presumably re-implementing those changes is going to be an important 
part of the future work.

Incidentally, I've suspended my attempts to write more tests, and 
switched to TaskManager work, because I realized I don't know whether we 
have the tests I think are needed or not. I did know that I was not 
satisfied with the set that were being run, but that is a different matter.

Many, many thanks for doing this work on getting more tests running. I 
think it is extremely important.

Patricia