Posted to dev@harmony.apache.org by Tim Ellison <t....@gmail.com> on 2006/07/03 12:30:02 UTC

Re: [classlib][testing] excluding the failed tests

Nathan Beyer wrote:
> How are other projects handling this? My opinion is that tests which are
> expected and known to pass should always be running, and if they fail and the
> failure can be independently recreated, then it's something to be posted on
> the list, if trivial (typo in build file?), or logged as a JIRA issue.

Agreed, the tests we have enabled are run on each build (hourly if
things are being committed), and failures are sent to the commit list.

> If it's broken for a significant amount of time (weeks, months), then rather
> than excluding the test, I would propose moving it to a "broken" or
> "possibly invalid" source folder that's out of the test path. If it doesn't
> already have JIRA issue, then one should be created.

Yes, though I'd be inclined to move it sooner -- tests should not stay
broken for more than a couple of days.

Recently our breakages have been invalid tests rather than broken
implementation, but they still need to be investigated/resolved.

> I've been living with consistently failing tests for a long time now.
> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> long file name [1] test failing for months.

IMHO you should be shouting about it!  The alternative is that we
tolerate a few broken windows and overall quality slips.

> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test layout
> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
> latter.
> 
> So I go back to my original suggestion, implement the testing proposal, then
> fix/move any excluded tests to where they work properly or determine that
> they are invalid and delete them.

Yes, the tests do need improvements too.

Regards,
Tim


> [1] https://issues.apache.org/jira/browse/HARMONY-619
> 


-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org


Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Yes Vladimir, nice job!

I have updated the data for the beans module. Since the reason for failure
of most of the excluded tests is not known yet, I just put their names
there without any comment on why they were excluded.

Thanks,

2006/7/14, Richard Liang <ri...@gmail.com>:
> Great job. Vladimir  ;-)
>
> Vladimir Ivanov wrote:
> > New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
> > (referred from http://wiki.apache.org/harmony/ClassLibrary).
> > It would be good if, before investigating a module's tests, one would
> > specify 'in progress, <Name>' near the module name, showing the
> > investigation is being done by <Name>.
> >
> > Thanks, Vladimir
> >
> > On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
> >
> >>
> >>
> >> Vladimir Ivanov wrote:
> >> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> >> ...
> >> >
> >> > Currently I'm looking on the excluded TestCases and it requires more
> >> time
> >> > than I expected.
> >> > I'll prepare a report/summary about excluded TestCases at the end of
> >> this
> >> > process.
> >> >
> >> Hello Vladimir,
> >>
> >> How about the progress of your report/summary?  ;-) As I'm implementing
> >> java.util.Formatter and java.util.Scanner, I'm also interested in the
> >> excluded tests in LUNI. Shall we create a wiki page to publish our
> >> status, so that other people in community can know what we're doing, and
> >> maybe we could attract more volunteers. ;-)
> >>
> >> Best regards,
> >> Richard.
> >>
> >> > Thanks, Vladimir
> >> >
> >> >
> >> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> >>
> >> >> Vladimir Ivanov wrote:
> >> >> > More details: it is
> >> >> >
> >> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> >> > test.
> >> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> >> > algorithm (no support for SHA1PRNG provider).
> >> >> > Looks like it is valid tests for non implemented functionality,
> >> >> but, I'm
> >> >>
> >> >> > not
> >> >> > sure what to do with such TestCase(s): comment these 2 tests or
> >> move
> >> >> them
> >> >> > into separate TestCase.
> >> >> > Ideas?
> >> >>
> >> >> I'd prefer that we only use one mechanism for excluding tests, and
> >> today
> >> >> that is the excludes clause in the ant script.  So I suggest that you
> >> do
> >> >> option (4) below.
> >> >>
> >> >> If there are really useful tests that are being unnecessarily
> >> excluded
> >> >> by being in the same *Test class, then you may want to consider
> >> moving
> >> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> >> the sound of it all SecureRandom tests will be failing.
> >> >>
> >> >> > By the way, probably, it worth reviewing *all* excluded TestCases
> >> and:
> >> >> > 1.      Unexclude if all tests pass.
> >> >> > 2.      Report bug and provide patch for test to make it passing if
> >> it
> >> >> > failed due to bug in test.
> >> >> > 3.      Report bug (and provide patch) for implementation to make
> >> >> tests
> >> >> > passing, if it was/is bug in implementation and no such issue in
> >> JIRA.
> >> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> >> make
> >> >> > further clean-up process easier.
> >> >> > 5.      Review results of this exclude list clean-up activity and
> >> then
> >> >> > decide what to do with the rest failing tests.
> >> >> >
> >> >> > I can do it starting next week. Do you think it worth doing?
> >> >> > Thanks, Vladimir
> >> >>
> >> >> Sounds great, thanks Vladimir.
> >> >>
> >> >> Regards,
> >> >> Tim
> >> >>
> >> >> --
> >> >>
> >> >> Tim Ellison ( t.p.ellison@gmail.com)
> >> >> IBM Java technology centre, UK.
> >> >>
> >> >> ---------------------------------------------------------------------
> >> >> Terms of use : http://incubator.apache.org/harmony/mailing.html
> >> >> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> >> >> For additional commands, e-mail:
> >> harmony-dev-help@incubator.apache.org
> >> >>
> >> >>
> >> >
> >>
> >> --
> >> Richard Liang
> >> China Software Development Lab, IBM
> >
>
> --
> Richard Liang
> China Software Development Lab, IBM


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Richard Liang <ri...@gmail.com>.
Great job. Vladimir  ;-)

Vladimir Ivanov wrote:
> New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
> (referred from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if, before investigating a module's tests, one would
> specify 'in progress, <Name>' near the module name, showing the
> investigation is being done by <Name>.
>
> Thanks, Vladimir
>
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
>
>>
>>
>> Vladimir Ivanov wrote:
>> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >> ...
>> >
>> > Currently I'm looking on the excluded TestCases and it requires more
>> time
>> > than I expected.
>> > I'll prepare a report/summary about excluded TestCases at the end of
>> this
>> > process.
>> >
>> Hello Vladimir,
>>
>> How about the progress of your report/summary?  ;-) As I'm implementing
>> java.util.Formatter and java.util.Scanner, I'm also interested in the
>> excluded tests in LUNI. Shall we create a wiki page to publish our
>> status, so that other people in community can know what we're doing, and
>> maybe we could attract more volunteers. ;-)
>>
>> Best regards,
>> Richard.
>>
>> > Thanks, Vladimir
>> >
>> >
>> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Vladimir Ivanov wrote:
>> >> > More details: it is
>> >> >
>> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> >> > test.
>> >> > At present time it has 2 failing tests with messages about SHA1PRNG
>> >> > algorithm (no support for SHA1PRNG provider).
>> >> > Looks like it is valid tests for non implemented functionality,
>> >> but, I'm
>> >>
>> >> > not
>> >> > sure what to do with such TestCase(s): comment these 2 tests or 
>> move
>> >> them
>> >> > into separate TestCase.
>> >> > Ideas?
>> >>
>> >> I'd prefer that we only use one mechanism for excluding tests, and
>> today
>> >> that is the excludes clause in the ant script.  So I suggest that you
>> do
>> >> option (4) below.
>> >>
>> >> If there are really useful tests that are being unnecessarily 
>> excluded
>> >> by being in the same *Test class, then you may want to consider 
>> moving
>> >> the failing tests into SecureRandom3Test and excluding that -- but by
>> >> the sound of it all SecureRandom tests will be failing.
>> >>
>> >> > By the way, probably, it worth reviewing *all* excluded TestCases
>> and:
>> >> > 1.      Unexclude if all tests pass.
>> >> > 2.      Report bug and provide patch for test to make it passing if
>> it
>> >> > failed due to bug in test.
>> >> > 3.      Report bug (and provide patch) for implementation to make
>> >> tests
>> >> > passing, if it was/is bug in implementation and no such issue in
>> JIRA.
>> >> > 4.      Specify reasons for excluding TestCases in exclude list to
>> >> make
>> >> > further clean-up process easier.
>> >> > 5.      Review results of this exclude list clean-up activity and
>> then
>> >> > decide what to do with the rest failing tests.
>> >> >
>> >> > I can do it starting next week. Do you think it worth doing?
>> >> > Thanks, Vladimir
>> >>
>> >> Sounds great, thanks Vladimir.
>> >>
>> >> Regards,
>> >> Tim
>> >>
>> >> --
>> >>
>> >> Tim Ellison ( t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>> >>
>> >> ---------------------------------------------------------------------
>> >> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> >> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> >> For additional commands, e-mail: 
>> harmony-dev-help@incubator.apache.org
>> >>
>> >>
>> >
>>
>> -- 
>> Richard Liang
>> China Software Development Lab, IBM
>>
>>
>>
>> ---------------------------------------------------------------------
>> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>>
>>
>

-- 
Richard Liang
China Software Development Lab, IBM 





Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Vladimir Ivanov wrote:
> New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
> (referred from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if, before investigating a module's tests, one would
> specify 'in progress, <Name>' near the module name, showing the
> investigation is being done by <Name>.

Nice - please add those as instructions at the top.  Also add to those
instructions that if someone does add their name to the list, they
should also send a quick note to the dev list letting people know.

geir

> 
> Thanks, Vladimir
> 
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
> 
>>
>>
>> Vladimir Ivanov wrote:
>> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >> ...
>> >
>> > Currently I'm looking on the excluded TestCases and it requires more
>> time
>> > than I expected.
>> > I'll prepare a report/summary about excluded TestCases at the end of
>> this
>> > process.
>> >
>> Hello Vladimir,
>>
>> How about the progress of your report/summary?  ;-) As I'm implementing
>> java.util.Formatter and java.util.Scanner, I'm also interested in the
>> excluded tests in LUNI. Shall we create a wiki page to publish our
>> status, so that other people in community can know what we're doing, and
>> maybe we could attract more volunteers. ;-)
>>
>> Best regards,
>> Richard.
>>
>> > Thanks, Vladimir
>> >
>> >
>> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Vladimir Ivanov wrote:
>> >> > More details: it is
>> >> >
>> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> >> > test.
>> >> > At present time it has 2 failing tests with messages about SHA1PRNG
>> >> > algorithm (no support for SHA1PRNG provider).
>> >> > Looks like it is valid tests for non implemented functionality,
>> >> but, I'm
>> >>
>> >> > not
>> >> > sure what to do with such TestCase(s): comment these 2 tests or move
>> >> them
>> >> > into separate TestCase.
>> >> > Ideas?
>> >>
>> >> I'd prefer that we only use one mechanism for excluding tests, and
>> today
>> >> that is the excludes clause in the ant script.  So I suggest that you
>> do
>> >> option (4) below.
>> >>
>> >> If there are really useful tests that are being unnecessarily excluded
>> >> by being in the same *Test class, then you may want to consider moving
>> >> the failing tests into SecureRandom3Test and excluding that -- but by
>> >> the sound of it all SecureRandom tests will be failing.
>> >>
>> >> > By the way, probably, it worth reviewing *all* excluded TestCases
>> and:
>> >> > 1.      Unexclude if all tests pass.
>> >> > 2.      Report bug and provide patch for test to make it passing if
>> it
>> >> > failed due to bug in test.
>> >> > 3.      Report bug (and provide patch) for implementation to make
>> >> tests
>> >> > passing, if it was/is bug in implementation and no such issue in
>> JIRA.
>> >> > 4.      Specify reasons for excluding TestCases in exclude list to
>> >> make
>> >> > further clean-up process easier.
>> >> > 5.      Review results of this exclude list clean-up activity and
>> then
>> >> > decide what to do with the rest failing tests.
>> >> >
>> >> > I can do it starting next week. Do you think it worth doing?
>> >> > Thanks, Vladimir
>> >>
>> >> Sounds great, thanks Vladimir.
>> >>
>> >> Regards,
>> >> Tim
>> >>
>> >> --
>> >>
>> >> Tim Ellison ( t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>> >>
>> >> ---------------------------------------------------------------------
>> >> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> >> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> >> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>> >>
>> >>
>> >
>>
>> -- 
>> Richard Liang
>> China Software Development Lab, IBM
>>
>>
>>
>> ---------------------------------------------------------------------
>> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>>
>>
> 



Re: [classlib][testing] excluding the failed tests

Posted by Andrew Zhang <zh...@gmail.com>.
Hi folks,

I'd like to investigate tests/api/java/net/DatagramSocketTest.java and
tests/api/java/net/DatagramSocketTest.java in the luni module. I have updated
the wiki page (http://wiki.apache.org/harmony/Excluded_tests). I also plan
to study the other excluded tests in the luni module when I finish these two
files. Please let me know if you're interested too. Thanks!


On 7/14/06, Vladimir Ivanov <iv...@gmail.com> wrote:
>
> New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
> (referred from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if, before investigating a module's tests, one would
> specify 'in progress, <Name>' near the module name, showing the
> investigation is being done by <Name>.
>
> Thanks, Vladimir
>
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
>
> >
> >
> > Vladimir Ivanov wrote:
> > >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> > >> ...
> > >
> > > Currently I'm looking on the excluded TestCases and it requires more
> > time
> > > than I expected.
> > > I'll prepare a report/summary about excluded TestCases at the end of
> > this
> > > process.
> > >
> > Hello Vladimir,
> >
> > How about the progress of your report/summary?  ;-) As I'm implementing
> > java.util.Formatter and java.util.Scanner, I'm also interested in the
> > excluded tests in LUNI. Shall we create a wiki page to publish our
> > status, so that other people in community can know what we're doing, and
> > maybe we could attract more volunteers. ;-)
> >
> > Best regards,
> > Richard.
> >
> > > Thanks, Vladimir
> > >
> > >
> > > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> > >>
> > >> Vladimir Ivanov wrote:
> > >> > More details: it is
> > >> >
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > >> > test.
> > >> > At present time it has 2 failing tests with messages about SHA1PRNG
> > >> > algorithm (no support for SHA1PRNG provider).
> > >> > Looks like it is valid tests for non implemented functionality,
> > >> but, I'm
> > >>
> > >> > not
> > >> > sure what to do with such TestCase(s): comment these 2 tests or
> move
> > >> them
> > >> > into separate TestCase.
> > >> > Ideas?
> > >>
> > >> I'd prefer that we only use one mechanism for excluding tests, and
> > today
> > >> that is the excludes clause in the ant script.  So I suggest that you
> > do
> > >> option (4) below.
> > >>
> > >> If there are really useful tests that are being unnecessarily
> excluded
> > >> by being in the same *Test class, then you may want to consider
> moving
> > >> the failing tests into SecureRandom3Test and excluding that -- but by
> > >> the sound of it all SecureRandom tests will be failing.
> > >>
> > >> > By the way, probably, it worth reviewing *all* excluded TestCases
> > and:
> > >> > 1.      Unexclude if all tests pass.
> > >> > 2.      Report bug and provide patch for test to make it passing if
> > it
> > >> > failed due to bug in test.
> > >> > 3.      Report bug (and provide patch) for implementation to make
> > >> tests
> > >> > passing, if it was/is bug in implementation and no such issue in
> > JIRA.
> > >> > 4.      Specify reasons for excluding TestCases in exclude list to
> > >> make
> > >> > further clean-up process easier.
> > >> > 5.      Review results of this exclude list clean-up activity and
> > then
> > >> > decide what to do with the rest failing tests.
> > >> >
> > >> > I can do it starting next week. Do you think it worth doing?
> > >> > Thanks, Vladimir
> > >>
> > >> Sounds great, thanks Vladimir.
> > >>
> > >> Regards,
> > >> Tim
> > >>
> > >> --
> > >>
> > >> Tim Ellison ( t.p.ellison@gmail.com)
> > >> IBM Java technology centre, UK.
> > >>
> > >> ---------------------------------------------------------------------
> > >> Terms of use : http://incubator.apache.org/harmony/mailing.html
> > >> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> > >> For additional commands, e-mail:
> harmony-dev-help@incubator.apache.org
> > >>
> > >>
> > >
> >
> > --
> > Richard Liang
> > China Software Development Lab, IBM
> >
> >
> >
> > ---------------------------------------------------------------------
> > Terms of use : http://incubator.apache.org/harmony/mailing.html
> > To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> > For additional commands, e-mail: harmony-dev-help@incubator.apache.org
> >
> >
>
>


-- 
Andrew Zhang
China Software Development Lab, IBM

Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
(referred from http://wiki.apache.org/harmony/ClassLibrary).
It would be good if, before investigating a module's tests, one would
specify 'in progress, <Name>' near the module name, showing the
investigation is being done by <Name>.

 Thanks, Vladimir

 On 7/14/06, Richard Liang <ri...@gmail.com> wrote:

>
>
> Vladimir Ivanov wrote:
> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> ...
> >
> > Currently I'm looking on the excluded TestCases and it requires more
> time
> > than I expected.
> > I'll prepare a report/summary about excluded TestCases at the end of
> this
> > process.
> >
> Hello Vladimir,
>
> How about the progress of your report/summary?  ;-) As I'm implementing
> java.util.Formatter and java.util.Scanner, I'm also interested in the
> excluded tests in LUNI. Shall we create a wiki page to publish our
> status, so that other people in community can know what we're doing, and
> maybe we could attract more volunteers. ;-)
>
> Best regards,
> Richard.
>
> > Thanks, Vladimir
> >
> >
> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >>
> >> Vladimir Ivanov wrote:
> >> > More details: it is
> >> >
> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> > test.
> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> > algorithm (no support for SHA1PRNG provider).
> >> > Looks like it is valid tests for non implemented functionality,
> >> but, I'm
> >>
> >> > not
> >> > sure what to do with such TestCase(s): comment these 2 tests or move
> >> them
> >> > into separate TestCase.
> >> > Ideas?
> >>
> >> I'd prefer that we only use one mechanism for excluding tests, and
> today
> >> that is the excludes clause in the ant script.  So I suggest that you
> do
> >> option (4) below.
> >>
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >>
> >> > By the way, probably, it worth reviewing *all* excluded TestCases
> and:
> >> > 1.      Unexclude if all tests pass.
> >> > 2.      Report bug and provide patch for test to make it passing if
> it
> >> > failed due to bug in test.
> >> > 3.      Report bug (and provide patch) for implementation to make
> >> tests
> >> > passing, if it was/is bug in implementation and no such issue in
> JIRA.
> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> make
> >> > further clean-up process easier.
> >> > 5.      Review results of this exclude list clean-up activity and
> then
> >> > decide what to do with the rest failing tests.
> >> >
> >> > I can do it starting next week. Do you think it worth doing?
> >> > Thanks, Vladimir
> >>
> >> Sounds great, thanks Vladimir.
> >>
> >> Regards,
> >> Tim
> >>
> >> --
> >>
> >> Tim Ellison ( t.p.ellison@gmail.com)
> >> IBM Java technology centre, UK.
> >>
> >> ---------------------------------------------------------------------
> >> Terms of use : http://incubator.apache.org/harmony/mailing.html
> >> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> >> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
> >>
> >>
> >
>
> --
> Richard Liang
> China Software Development Lab, IBM
>
>
>
> ---------------------------------------------------------------------
> Terms of use : http://incubator.apache.org/harmony/mailing.html
> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Richard Liang <ri...@gmail.com>.

Vladimir Ivanov wrote:
>> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> ...
>
> Currently I'm looking on the excluded TestCases and it requires more time
> than I expected.
> I'll prepare a report/summary about excluded TestCases at the end of this
> process.
>
Hello Vladimir,

How about the progress of your report/summary?  ;-) As I'm implementing 
java.util.Formatter and java.util.Scanner, I'm also interested in the 
excluded tests in LUNI. Shall we create a wiki page to publish our 
status, so that other people in community can know what we're doing, and 
maybe we could attract more volunteers. ;-)

Best regards,
Richard.

> Thanks, Vladimir
>
>
> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>>
>> Vladimir Ivanov wrote:
>> > More details: it is
>> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> > test.
>> > At present time it has 2 failing tests with messages about SHA1PRNG
>> > algorithm (no support for SHA1PRNG provider).
>> > Looks like it is valid tests for non implemented functionality, 
>> but, I'm
>>
>> > not
>> > sure what to do with such TestCase(s): comment these 2 tests or move
>> them
>> > into separate TestCase.
>> > Ideas?
>>
>> I'd prefer that we only use one mechanism for excluding tests, and today
>> that is the excludes clause in the ant script.  So I suggest that you do
>> option (4) below.
>>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>>
>> > By the way, probably, it worth reviewing *all* excluded TestCases and:
>> > 1.      Unexclude if all tests pass.
>> > 2.      Report bug and provide patch for test to make it passing if it
>> > failed due to bug in test.
>> > 3.      Report bug (and provide patch) for implementation to make 
>> tests
>> > passing, if it was/is bug in implementation and no such issue in JIRA.
>> > 4.      Specify reasons for excluding TestCases in exclude list to 
>> make
>> > further clean-up process easier.
>> > 5.      Review results of this exclude list clean-up activity and then
>> > decide what to do with the rest failing tests.
>> >
>> > I can do it starting next week. Do you think it worth doing?
>> > Thanks, Vladimir
>>
>> Sounds great, thanks Vladimir.
>>
>> Regards,
>> Tim
>>
>> -- 
>>
>> Tim Ellison ( t.p.ellison@gmail.com)
>> IBM Java technology centre, UK.
>>
>> ---------------------------------------------------------------------
>> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>>
>>
>

-- 
Richard Liang
China Software Development Lab, IBM 





Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
>On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>...

Currently I'm looking at the excluded TestCases, and it requires more time
than I expected.
I'll prepare a report/summary about the excluded TestCases at the end of this
process.

Thanks, Vladimir


On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>
> Vladimir Ivanov wrote:
> > More details: it is
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > test.
> > At present time it has 2 failing tests with messages about SHA1PRNG
> > algorithm (no support for SHA1PRNG provider).
> > Looks like it is valid tests for non implemented functionality, but, I'm
>
> > not
> > sure what to do with such TestCase(s): comment these 2 tests or move
> them
> > into separate TestCase.
> > Ideas?
>
> I'd prefer that we only use one mechanism for excluding tests, and today
> that is the excludes clause in the ant script.  So I suggest that you do
> option (4) below.
>
> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.
>
> > By the way, probably, it worth reviewing *all* excluded TestCases and:
> > 1.      Unexclude if all tests pass.
> > 2.      Report bug and provide patch for test to make it passing if it
> > failed due to bug in test.
> > 3.      Report bug (and provide patch) for implementation to make tests
> > passing, if it was/is bug in implementation and no such issue in JIRA.
> > 4.      Specify reasons for excluding TestCases in exclude list to make
> > further clean-up process easier.
> > 5.      Review results of this exclude list clean-up activity and then
> > decide what to do with the rest failing tests.
> >
> > I can do it starting next week. Do you think it worth doing?
> > Thanks, Vladimir
>
> Sounds great, thanks Vladimir.
>
> Regards,
> Tim
>
> --
>
> Tim Ellison ( t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>
> ---------------------------------------------------------------------
> Terms of use : http://incubator.apache.org/harmony/mailing.html
> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Alexei Zakharov wrote:
> Hi,
> 
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
> 
> I think it's a nice idea to do this at least for java.beans since
> there are hundreds of useful workable tests excluded. After quite a
> long time working with this module I have a strong wish to clean up
> the mess.
> 
> But probably we should define some naming pattern for classes to put
> excluded tests into. For example for XMLEncoderTest.java we can have
> XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> case we don't need to put extra "exclude" clause in the build.xml
> since such name doesn't match **/*Test.java pattern (current). Another
> variant is something like FAILED_XMLEncoderTest.java - matches the
> pattern and needs the clause. Thoughts?

I know that is a simple scheme, but let's not invent yet another way.
Either wait for the resolution of the testing layout thread George
refers to, or use the exclusion list for the moment.
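
For illustration, an exclusion entry in a module's Ant script could look
like the sketch below. The property names and file paths are hypothetical
(each module's build.xml may differ); the point is option (4) from earlier
in the thread -- keep the reason next to each excluded entry:

```xml
<batchtest todir="${tests.output}">
    <fileset dir="${tests.src.dir}">
        <include name="**/*Test.java" />
        <!-- Option (4): record why each test is excluded -->
        <!-- HARMONY-619: test believed invalid, under investigation -->
        <exclude name="**/SecureRandom2Test.java" />
    </fileset>
</batchtest>
```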

Regards,
Tim

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Thanks George & Tim, I was out during the last week and today was reading
the threads from oldest to newest. :)
I agree, a general solution using TestSuites or even TestNG is better
than my temporary one. However, defining a general approach can take a
long time. Anyway, let's move our discussion to that
thread.

2006/7/10, George Harley <ge...@googlemail.com>:
> Alexei Zakharov wrote:
> > Hi,
> >
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >
> > I think it's a nice idea to do this at least for java.beans since
> > there are hundreds of useful workable tests excluded. After quite a
> > long time working with this module I have a strong wish to clean up
> > the mess.
> >
> > But probably we should define some naming pattern for class to put
> > excluded tests into. For example for XMLEncoderTest.java we can have
> > XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> > case we don't need to put extra "exclude" clause in the build.xml
> > since such name doesn't match **/*Test.java pattern (current). Another
> > variant is something like FAILED_XMLEncoderTest.java - matches the
> > pattern and needs the clause. Thoughts?
>
> Hi Alexei,
>
> Have you seen the discussion thread related to configuring our tests
> using suites [1] ? If not, then it seems to me that there is potential
> there for a simpler/quicker way of excluding or including tests without
> recourse to creating new files or renaming existing ones. What do you
> think ?
>
> Best regards,
> George
>
> [1]
> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/%3c44ABB451.30806@googlemail.com%3e
>
>
> >
> >
> > 2006/7/6, Tim Ellison <t....@gmail.com>:
> >> Vladimir Ivanov wrote:
> >> > More details: it is
> >> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> > test.
> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> > algorithm (no support for SHA1PRNG provider).
> >> > Looks like it is valid tests for non implemented functionality,
> >> but, I'm
> >> > not
> >> > sure what to do with such TestCase(s): comment these 2 tests or
> >> move them
> >> > into separate TestCase.
> >> > Ideas?
> >>
> >> I'd prefer that we only use one mechanism for excluding tests, and today
> >> that is the excludes clause in the ant script.  So I suggest that you do
> >> option (4) below.
> >>
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >>
> >> > By the way, probably, it worth reviewing *all* excluded TestCases and:
> >> > 1.      Unexclude if all tests pass.
> >> > 2.      Report bug and provide patch for test to make it passing if it
> >> > failed due to bug in test.
> >> > 3.      Report bug (and provide patch) for implementation to make
> >> tests
> >> > passing, if it was/is bug in implementation and no such issue in JIRA.
> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> make
> >> > further clean-up process easier.
> >> > 5.      Review results of this exclude list clean-up activity and then
> >> > decide what to do with the rest failing tests.
> >> >
> >> > I can do it starting next week. Do you think it worth doing?
> >> > Thanks, Vladimir
> >>
> >> Sounds great, thanks Vladimir.
> >>
> >> Regards,
> >> Tim
> >
> >
>
>
>
>


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by George Harley <ge...@googlemail.com>.
Alexei Zakharov wrote:
> Hi,
>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>
> I think it's a nice idea to do this at least for java.beans since
> there are hundreds of useful workable tests excluded. After quite a
> long time working with this module I have a strong wish to clean up
> the mess.
>
> But probably we should define some naming pattern for class to put
> excluded tests into. For example for XMLEncoderTest.java we can have
> XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> case we don't need to put extra "exclude" clause in the build.xml
> since such name doesn't match **/*Test.java pattern (current). Another
> variant is something like FAILED_XMLEncoderTest.java - matches the
> pattern and needs the clause. Thoughts?

Hi Alexei,

Have you seen the discussion thread related to configuring our tests
using suites [1]? If not, then it seems to me that there is potential
there for a simpler/quicker way of excluding or including tests without
recourse to creating new files or renaming existing ones. What do you
think?

Best regards,
George

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/%3c44ABB451.30806@googlemail.com%3e
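In rough terms, the suite-based approach keeps the include/exclude decision in code rather than in file names or build-script clauses. A minimal plain-Java sketch of the mechanics (hypothetical names; the real proposal would use JUnit's TestSuite API rather than this stand-in):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SuiteSketch {
    // Stand-ins for test methods; in JUnit these would be TestCase classes.
    static final Map<String, Runnable> TESTS = new LinkedHashMap<String, Runnable>();

    // Known-failing tests stay listed here instead of being renamed or moved.
    static final Set<String> EXCLUDED =
            new HashSet<String>(Arrays.asList("testSha1PrngProvider"));

    static {
        TESTS.put("testNextBytes", new Runnable() { public void run() { } });
        TESTS.put("testSha1PrngProvider", new Runnable() { public void run() { } });
    }

    // Build the list of tests that will actually run: everything not excluded.
    static List<String> buildSuite() {
        List<String> suite = new ArrayList<String>();
        for (String name : TESTS.keySet()) {
            if (!EXCLUDED.contains(name)) {
                suite.add(name);
            }
        }
        return suite;
    }

    public static void main(String[] args) {
        System.out.println(buildSuite()); // prints [testNextBytes]
    }
}
```

The point of the design is that re-enabling a test is a one-line change to the exclusion set, with no file renames and no build.xml edits.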


>
>
> 2006/7/6, Tim Ellison <t....@gmail.com>:
>> Vladimir Ivanov wrote:
>> > More details: it is
>> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> > test.
>> > At present time it has 2 failing tests with messages about SHA1PRNG
>> > algorithm (no support for SHA1PRNG provider).
>> > Looks like it is valid tests for non implemented functionality, 
>> but, I'm
>> > not
>> > sure what to do with such TestCase(s): comment these 2 tests or 
>> move them
>> > into separate TestCase.
>> > Ideas?
>>
>> I'd prefer that we only use one mechanism for excluding tests, and today
>> that is the excludes clause in the ant script.  So I suggest that you do
>> option (4) below.
>>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>>
>> > By the way, probably, it worth reviewing *all* excluded TestCases and:
>> > 1.      Unexclude if all tests pass.
>> > 2.      Report bug and provide patch for test to make it passing if it
>> > failed due to bug in test.
>> > 3.      Report bug (and provide patch) for implementation to make 
>> tests
>> > passing, if it was/is bug in implementation and no such issue in JIRA.
>> > 4.      Specify reasons for excluding TestCases in exclude list to 
>> make
>> > further clean-up process easier.
>> > 5.      Review results of this exclude list clean-up activity and then
>> > decide what to do with the rest failing tests.
>> >
>> > I can do it starting next week. Do you think it worth doing?
>> > Thanks, Vladimir
>>
>> Sounds great, thanks Vladimir.
>>
>> Regards,
>> Tim
>
>




Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Hi,

> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.

I think it's a nice idea to do this, at least for java.beans, since
there are hundreds of useful, workable tests excluded. After quite a
long time working with this module I have a strong wish to clean up
the mess.

But probably we should define some naming pattern for the classes we
put excluded tests into. For example, for XMLEncoderTest.java we can have
XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
case we don't need to put an extra "exclude" clause in the build.xml,
since such a name doesn't match the current **/*Test.java pattern. Another
variant is something like FAILED_XMLEncoderTest.java - it matches the
pattern and needs the clause. Thoughts?
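For what it's worth, the rename trick works because the Ant include pattern **/*Test.java only matches file names ending in "Test.java". A tiny sketch of that check (a simplified, hypothetical helper, not Ant's actual pattern matcher):

```java
public class PatternCheck {
    // Simplified stand-in for Ant's **/*Test.java include pattern:
    // only the "ends with Test.java" part matters for the rename trick.
    static boolean matchesTestPattern(String fileName) {
        return fileName.endsWith("Test.java");
    }

    public static void main(String[] args) {
        // Matches the pattern, so it is picked up by the test run.
        System.out.println(matchesTestPattern("XMLEncoderTest.java"));        // true
        // Suffix breaks the match, so no exclude clause is needed.
        System.out.println(matchesTestPattern("XMLEncoderTest_Failed.java")); // false
        // Prefix leaves the suffix intact, so an exclude clause is still needed.
        System.out.println(matchesTestPattern("FAILED_XMLEncoderTest.java")); // true
    }
}
```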


2006/7/6, Tim Ellison <t....@gmail.com>:
> Vladimir Ivanov wrote:
> > More details: it is
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > test.
> > At present time it has 2 failing tests with messages about SHA1PRNG
> > algorithm (no support for SHA1PRNG provider).
> > Looks like it is valid tests for non implemented functionality, but, I'm
> > not
> > sure what to do with such TestCase(s): comment these 2 tests or move them
> > into separate TestCase.
> > Ideas?
>
> I'd prefer that we only use one mechanism for excluding tests, and today
> that is the excludes clause in the ant script.  So I suggest that you do
> option (4) below.
>
> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.
>
> > By the way, probably, it worth reviewing *all* excluded TestCases and:
> > 1.      Unexclude if all tests pass.
> > 2.      Report bug and provide patch for test to make it passing if it
> > failed due to bug in test.
> > 3.      Report bug (and provide patch) for implementation to make tests
> > passing, if it was/is bug in implementation and no such issue in JIRA.
> > 4.      Specify reasons for excluding TestCases in exclude list to make
> > further clean-up process easier.
> > 5.      Review results of this exclude list clean-up activity and then
> > decide what to do with the rest failing tests.
> >
> > I can do it starting next week. Do you think it worth doing?
> > Thanks, Vladimir
>
> Sounds great, thanks Vladimir.
>
> Regards,
> Tim


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Vladimir Ivanov wrote:
> More details: it is
> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> test.
> At present time it has 2 failing tests with messages about SHA1PRNG
> algorithm (no support for SHA1PRNG provider).
> Looks like it is valid tests for non implemented functionality, but, I'm
> not
> sure what to do with such TestCase(s): comment these 2 tests or move them
> into separate TestCase.
> Ideas?

I'd prefer that we only use one mechanism for excluding tests, and today
that is the excludes clause in the ant script.  So I suggest that you do
option (4) below.

If there are really useful tests that are being unnecessarily excluded
by being in the same *Test class, then you may want to consider moving
the failing tests into SecureRandom3Test and excluding that -- but by
the sound of it all SecureRandom tests will be failing.

> By the way, probably, it worth reviewing *all* excluded TestCases and:
> 1.      Unexclude if all tests pass.
> 2.      Report bug and provide patch for test to make it passing if it
> failed due to bug in test.
> 3.      Report bug (and provide patch) for implementation to make tests
> passing, if it was/is bug in implementation and no such issue in JIRA.
> 4.      Specify reasons for excluding TestCases in exclude list to make
> further clean-up process easier.
> 5.      Review results of this exclude list clean-up activity and then
> decide what to do with the rest failing tests.
> 
> I can do it starting next week. Do you think it worth doing?
> Thanks, Vladimir

Sounds great, thanks Vladimir.

Regards,
Tim

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
More details: it is the
org/apache/harmony/security/tests/java/security/SecureRandom2Test.java test.
At present it has 2 failing tests, with messages about the SHA1PRNG
algorithm (no support for a SHA1PRNG provider).
These look like valid tests for unimplemented functionality, but I'm not
sure what to do with such TestCase(s): comment out these 2 tests or move them
into a separate TestCase.
Ideas?
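For reference, those failing checks boil down to whether any installed security provider supplies the SHA1PRNG SecureRandom. A minimal probe along these lines (a sketch of the kind of lookup involved, not the actual test code):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class Sha1PrngProbe {
    // Returns true when some installed provider supplies the
    // SHA1PRNG SecureRandom algorithm.
    static boolean isAvailable() {
        try {
            SecureRandom.getInstance("SHA1PRNG");
            return true;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SHA1PRNG available: " + isAvailable());
    }
}
```

On a JRE whose provider lacks SHA1PRNG, getInstance throws NoSuchAlgorithmException, which is exactly the failure the tests report.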

By the way, it is probably worth reviewing *all* excluded TestCases and:
1.      Unexclude each one if all its tests pass.
2.      Report a bug and provide a patch for the test to make it pass, if it
failed due to a bug in the test.
3.      Report a bug (and provide a patch) for the implementation to make the tests
pass, if it was/is a bug in the implementation and there is no such issue in JIRA.
4.      Specify the reasons for excluding TestCases in the exclude list, to make
the further clean-up process easier.
5.      Review the results of this exclude-list clean-up activity and then
decide what to do with the rest of the failing tests.

I can do it starting next week. Do you think it is worth doing?
 Thanks, Vladimir


On 7/6/06, Nathan Beyer <nb...@kc.rr.com> wrote:
>
> Did the TestCase run without a failure? If it didn't, then I would ask for
> you to attempt to fix it and post the patch and the patch to enable it. If
>
> it did pass, then just post a patch to enable it or just submit the issue
> as
> ask it to be removed from the exclude list.
>
> If the test is failing because of a bug, then log an issue about the bug
> and
> try to fix the issue.
>
> -Nathan
>
> > -----Original Message-----
> > From: Vladimir Ivanov [mailto:ivavladimir@gmail.com]
> > Sent: Wednesday, July 05, 2006 12:41 AM
> > To: harmony-dev@incubator.apache.org
> > Subject: Re: [classlib][testing] excluding the failed tests
> >
> > Yesterday I tried to add a regression test to existing in security
> module
> > TestCase, but, found that the TestCase is in exclude list. I had to
> > un-exclude it, run, check my test passes and exclude the TestCase again
> -
> > it
> > was a little bit inconvenient, besides, my new valid (I believe)
> > regression
> > test will go directly to exclude list after integration...
> >
> > I see that we are near to decision what to do with failing tests.
> > Am I right that we are at the point of agreement on the following?:
> >
> > There could be two groups of failing tests:
> > *Tests that never passed.
> > *Tests that recently started failing.
> >
> > Test that never passed should be stored in TestCases with suffix "Fail"
> (
> > StringFailTest.java for example). They are subject for review and either
>
> > deletion or fixing or fixing implementation if they find a bug in API
> > implementation.
> > There should be 0 tests that recently started failing. If such test
> > appears
> > it should be fixed within 24h, otherwise, commit which introduced the
> > failure will be rolled back.
> > Right?
> >
> >  Thanks, Vladimir
> >
> > On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:
> >
> > > Nathan Beyer wrote:
> > > > Based on what I've seen of the excluded tests, category 1 is the
> > > predominate
> > > > case. This could be validated by looking at old revisions in SVN.
> > >
> > > I'm sure that is true, I'm just saying that the build system 'normal'
> > > state is that all enabled tests pass.  My concern was over your
> > > statement you have had failing tests for months.
> > >
> > > What is failing for you now?
> > >
> > > Regards,
> > > Tim
> > >
> > >
> > > >> -----Original Message-----
> > > >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> > > >>
> > > >> Is this the case where we have two 'categories'?
> > > >>
> > > >>   1) tests that never worked
> > > >>
> > > >>   2) tests that recently broke
> > > >>
> > > >> I think that a #2 should never persist for more than one build
> > > >> iteration, as either things get fixed or backed out.  I suppose
> then
> > we
> > >
> > > >> are really talking about category #1, and that we don't have the
> > > "broken
> > > >> window" problem as we never had the window there in the first
> place?
> > > >>
> > > >> I think it's important to understand this (if it's actually true).
> > > >>
> > > >> geir
> > > >>
> > > >>
> > > >> Tim Ellison wrote:
> > > >>> Nathan Beyer wrote:
> > > >>>> How are other projects handling this? My opinion is that tests,
> > which
> > >
> > > >> are
> > > >>>> expected and know to pass should always be running and if they
> fail
> > > and
> > > >> the
> > > >>>> failure can be independently recreated, then it's something to be
> > > >> posted on
> > > >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> > > issue.
> > > >>> Agreed, the tests we have enabled are run on each build (hourly if
> > > >>> things are being committed), and failures are sent to commit list.
> > > >>>
> > > >>>> If it's broken for a significant amount of time (weeks, months),
> > then
> > >
> > > >> rather
> > > >>>> than excluding the test, I would propose moving it to a "broken"
> or
> > > >>>> "possibly invalid" source folder that's out of the test path. If
> it
> > > >> doesn't
> > > >>>> already have JIRA issue, then one should be created.
> > > >>> Yes, though I'd be inclined to move it sooner -- tests should not
> > stay
> > >
> > > >>> broken for more than a couple of days.
> > > >>>
> > > >>> Recently our breakages have been invalid tests rather than broken
> > > >>> implementation, but they still need to be investigated/resolved.
> > > >>>
> > > >>>> I've been living with consistently failing tests for a long time
> > now.
> > >
> > > >>>> Recently it was the unstable Socket tests, but I've been seeing
> the
> > > >> WinXP
> > > >>>> long file name [1] test failing for months.
> > > >>> IMHO you should be shouting about it!  The alternative is that we
> > > >>> tolerate a few broken windows and overall quality slips.
> > > >>>
> > > >>>> I think we may be unnecessarily complicating some of this by
> > assuming
> > >
> > > >> that
> > > >>>> all of the donated tests that are currently excluded and failing
> > are
> > > >>>> completely valid. I believe that the currently excluded tests are
>
> > > >> either
> > > >>>> failing because they aren't isolated according to the suggested
> > test
> > > >> layout
> > > >>>> or they are invalid test; I suspect that HARMONY-619 [1] is a
> case
> > of
> > >
> > > >> the
> > > >>>> later.
> > > >>>>
> > > >>>> So I go back to my original suggestion, implement the testing
> > > proposal,
> > > >> then
> > > >>>> fix/move any excluded tests to where they work properly or
> > determine
> > > >> that
> > > >>>> they are invalid and delete them.
> > > >>> Yes, the tests do need improvements too.
> > > >>>
> > > >>> Regards,
> > > >>> Tim
> > > >>>
> > > >>>
> > > >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> > > >>>>
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > > --
> > >
> > > Tim Ellison ( t.p.ellison@gmail.com)
> > > IBM Java technology centre, UK.
> > >
> > >
> > >
>
>
>
>

RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
Did the TestCase run without a failure? If it didn't, then I would ask
you to attempt to fix it and post the patch, along with the patch to enable it.
If it did pass, then just post a patch to enable it, or submit an issue and
ask for it to be removed from the exclude list.

If the test is failing because of a bug, then log an issue about the bug and
try to fix the issue.

-Nathan

> -----Original Message-----
> From: Vladimir Ivanov [mailto:ivavladimir@gmail.com]
> Sent: Wednesday, July 05, 2006 12:41 AM
> To: harmony-dev@incubator.apache.org
> Subject: Re: [classlib][testing] excluding the failed tests
> 
> Yesterday I tried to add a regression test to existing in security module
> TestCase, but, found that the TestCase is in exclude list. I had to
> un-exclude it, run, check my test passes and exclude the TestCase again -
> it
> was a little bit inconvenient, besides, my new valid (I believe)
> regression
> test will go directly to exclude list after integration...
> 
> I see that we are near to decision what to do with failing tests.
> Am I right that we are at the point of agreement on the following?:
> 
> There could be two groups of failing tests:
> *Tests that never passed.
> *Tests that recently started failing.
> 
> Test that never passed should be stored in TestCases with suffix "Fail" (
> StringFailTest.java for example). They are subject for review and either
> deletion or fixing or fixing implementation if they find a bug in API
> implementation.
> There should be 0 tests that recently started failing. If such test
> appears
> it should be fixed within 24h, otherwise, commit which introduced the
> failure will be rolled back.
> Right?
> 
>  Thanks, Vladimir
> 
> On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:
> 
> > Nathan Beyer wrote:
> > > Based on what I've seen of the excluded tests, category 1 is the
> > predominate
> > > case. This could be validated by looking at old revisions in SVN.
> >
> > I'm sure that is true, I'm just saying that the build system 'normal'
> > state is that all enabled tests pass.  My concern was over your
> > statement you have had failing tests for months.
> >
> > What is failing for you now?
> >
> > Regards,
> > Tim
> >
> >
> > >> -----Original Message-----
> > >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> > >>
> > >> Is this the case where we have two 'categories'?
> > >>
> > >>   1) tests that never worked
> > >>
> > >>   2) tests that recently broke
> > >>
> > >> I think that a #2 should never persist for more than one build
> > >> iteration, as either things get fixed or backed out.  I suppose then
> we
> >
> > >> are really talking about category #1, and that we don't have the
> > "broken
> > >> window" problem as we never had the window there in the first place?
> > >>
> > >> I think it's important to understand this (if it's actually true).
> > >>
> > >> geir
> > >>
> > >>
> > >> Tim Ellison wrote:
> > >>> Nathan Beyer wrote:
> > >>>> How are other projects handling this? My opinion is that tests,
> which
> >
> > >> are
> > >>>> expected and know to pass should always be running and if they fail
> > and
> > >> the
> > >>>> failure can be independently recreated, then it's something to be
> > >> posted on
> > >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> > issue.
> > >>> Agreed, the tests we have enabled are run on each build (hourly if
> > >>> things are being committed), and failures are sent to commit list.
> > >>>
> > >>>> If it's broken for a significant amount of time (weeks, months),
> then
> >
> > >> rather
> > >>>> than excluding the test, I would propose moving it to a "broken" or
> > >>>> "possibly invalid" source folder that's out of the test path. If it
> > >> doesn't
> > >>>> already have JIRA issue, then one should be created.
> > >>> Yes, though I'd be inclined to move it sooner -- tests should not
> stay
> >
> > >>> broken for more than a couple of days.
> > >>>
> > >>> Recently our breakages have been invalid tests rather than broken
> > >>> implementation, but they still need to be investigated/resolved.
> > >>>
> > >>>> I've been living with consistently failing tests for a long time
> now.
> >
> > >>>> Recently it was the unstable Socket tests, but I've been seeing the
> > >> WinXP
> > >>>> long file name [1] test failing for months.
> > >>> IMHO you should be shouting about it!  The alternative is that we
> > >>> tolerate a few broken windows and overall quality slips.
> > >>>
> > >>>> I think we may be unnecessarily complicating some of this by
> assuming
> >
> > >> that
> > >>>> all of the donated tests that are currently excluded and failing
> are
> > >>>> completely valid. I believe that the currently excluded tests are
> > >> either
> > >>>> failing because they aren't isolated according to the suggested
> test
> > >> layout
> > >>>> or they are invalid test; I suspect that HARMONY-619 [1] is a case
> of
> >
> > >> the
> > >>>> later.
> > >>>>
> > >>>> So I go back to my original suggestion, implement the testing
> > proposal,
> > >> then
> > >>>> fix/move any excluded tests to where they work properly or
> determine
> > >> that
> > >>>> they are invalid and delete them.
> > >>> Yes, the tests do need improvements too.
> > >>>
> > >>> Regards,
> > >>> Tim
> > >>>
> > >>>
> > >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> > >>>>
> > >
> > >
> > >
> > >
> > >
> >
> > --
> >
> > Tim Ellison ( t.p.ellison@gmail.com)
> > IBM Java technology centre, UK.
> >
> >
> >




Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
Yesterday I tried to add a regression test to an existing TestCase in the
security module, but found that the TestCase is in the exclude list. I had to
un-exclude it, run it, check that my test passes, and exclude the TestCase
again – it was a little inconvenient; besides, my new valid (I believe)
regression test will go directly to the exclude list after integration...

I see that we are near to a decision on what to do with failing tests.
Am I right that we are at the point of agreement on the following?:

There could be two groups of failing tests:
*Tests that never passed.
*Tests that recently started failing.

Tests that never passed should be stored in TestCases with the suffix "Fail"
(StringFailTest.java, for example). They are subject to review and either
deletion, fixing, or fixing the implementation if they find a bug in the API
implementation.
There should be 0 tests that recently started failing. If such a test appears,
it should be fixed within 24h; otherwise, the commit which introduced the
failure will be rolled back.
Right?

 Thanks, Vladimir

On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:

> Nathan Beyer wrote:
> > Based on what I've seen of the excluded tests, category 1 is the
> predominate
> > case. This could be validated by looking at old revisions in SVN.
>
> I'm sure that is true, I'm just saying that the build system 'normal'
> state is that all enabled tests pass.  My concern was over your
> statement you have had failing tests for months.
>
> What is failing for you now?
>
> Regards,
> Tim
>
>
> >> -----Original Message-----
> >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> >>
> >> Is this the case where we have two 'categories'?
> >>
> >>   1) tests that never worked
> >>
> >>   2) tests that recently broke
> >>
> >> I think that a #2 should never persist for more than one build
> >> iteration, as either things get fixed or backed out.  I suppose then we
>
> >> are really talking about category #1, and that we don't have the
> "broken
> >> window" problem as we never had the window there in the first place?
> >>
> >> I think it's important to understand this (if it's actually true).
> >>
> >> geir
> >>
> >>
> >> Tim Ellison wrote:
> >>> Nathan Beyer wrote:
> >>>> How are other projects handling this? My opinion is that tests, which
>
> >> are
> >>>> expected and know to pass should always be running and if they fail
> and
> >> the
> >>>> failure can be independently recreated, then it's something to be
> >> posted on
> >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> issue.
> >>> Agreed, the tests we have enabled are run on each build (hourly if
> >>> things are being committed), and failures are sent to commit list.
> >>>
> >>>> If it's broken for a significant amount of time (weeks, months), then
>
> >> rather
> >>>> than excluding the test, I would propose moving it to a "broken" or
> >>>> "possibly invalid" source folder that's out of the test path. If it
> >> doesn't
> >>>> already have JIRA issue, then one should be created.
> >>> Yes, though I'd be inclined to move it sooner -- tests should not stay
>
> >>> broken for more than a couple of days.
> >>>
> >>> Recently our breakages have been invalid tests rather than broken
> >>> implementation, but they still need to be investigated/resolved.
> >>>
> >>>> I've been living with consistently failing tests for a long time now.
>
> >>>> Recently it was the unstable Socket tests, but I've been seeing the
> >> WinXP
> >>>> long file name [1] test failing for months.
> >>> IMHO you should be shouting about it!  The alternative is that we
> >>> tolerate a few broken windows and overall quality slips.
> >>>
> >>>> I think we may be unnecessarily complicating some of this by assuming
>
> >> that
> >>>> all of the donated tests that are currently excluded and failing are
> >>>> completely valid. I believe that the currently excluded tests are
> >> either
> >>>> failing because they aren't isolated according to the suggested test
> >> layout
> >>>> or they are invalid test; I suspect that HARMONY-619 [1] is a case of
>
> >> the
> >>>> later.
> >>>>
> >>>> So I go back to my original suggestion, implement the testing
> proposal,
> >> then
> >>>> fix/move any excluded tests to where they work properly or determine
> >> that
> >>>> they are invalid and delete them.
> >>> Yes, the tests do need improvements too.
> >>>
> >>> Regards,
> >>> Tim
> >>>
> >>>
> >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> >>>>
> >
> >
> >
> >
> >
>
> --
>
> Tim Ellison (t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Nathan Beyer wrote:
> Based on what I've seen of the excluded tests, category 1 is the predominant
> case. This could be validated by looking at old revisions in SVN.

I'm sure that is true; I'm just saying that the build system's 'normal'
state is that all enabled tests pass.  My concern was over your
statement that you have had failing tests for months.

What is failing for you now?

Regards,
Tim


>> -----Original Message-----
>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
>>
>> Is this the case where we have two 'categories'?
>>
>>   1) tests that never worked
>>
>>   2) tests that recently broke
>>
>> I think that a #2 should never persist for more than one build
>> iteration, as either things get fixed or backed out.  I suppose then we
>> are really talking about category #1, and that we don't have the "broken
>> window" problem as we never had the window there in the first place?
>>
>> I think it's important to understand this (if it's actually true).
>>
>> geir
>>
>>
>> Tim Ellison wrote:
>>> Nathan Beyer wrote:
>>>> How are other projects handling this? My opinion is that tests which are
>>>> expected and known to pass should always be running, and if they fail and
>>>> the failure can be independently recreated, then it's something to be
>>>> posted on the list, if trivial (typo in build file?), or logged as a JIRA
>>>> issue.
>>> Agreed, the tests we have enabled are run on each build (hourly if
>>> things are being committed), and failures are sent to commit list.
>>>
>>>> If it's broken for a significant amount of time (weeks, months), then
>>>> rather than excluding the test, I would propose moving it to a "broken" or
>>>> "possibly invalid" source folder that's out of the test path. If it
>>>> doesn't already have a JIRA issue, then one should be created.
>>> Yes, though I'd be inclined to move it sooner -- tests should not stay
>>> broken for more than a couple of days.
>>>
>>> Recently our breakages have been invalid tests rather than broken
>>> implementation, but they still need to be investigated/resolved.
>>>
>>>> I've been living with consistently failing tests for a long time now.
>>>> Recently it was the unstable Socket tests, but I've been seeing the WinXP
>>>> long file name [1] test failing for months.
>>> IMHO you should be shouting about it!  The alternative is that we
>>> tolerate a few broken windows and overall quality slips.
>>>
>>>> I think we may be unnecessarily complicating some of this by assuming that
>>>> all of the donated tests that are currently excluded and failing are
>>>> completely valid. I believe that the currently excluded tests are either
>>>> failing because they aren't isolated according to the suggested test
>>>> layout or they are invalid tests; I suspect that HARMONY-619 [1] is a case
>>>> of the latter.
>>>>
>>>> So I go back to my original suggestion, implement the testing proposal,
>>>> then fix/move any excluded tests to where they work properly or determine
>>>> that they are invalid and delete them.
>>> Yes, the tests do need improvements too.
>>>
>>> Regards,
>>> Tim
>>>
>>>
>>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
>>>>
> 
> 
> 
> 
> 

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
Based on what I've seen of the excluded tests, category 1 is the predominant
case. This could be validated by looking at old revisions in SVN.

-Nathan

> -----Original Message-----
> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> 
> Is this the case where we have two 'categories'?
> 
>   1) tests that never worked
> 
>   2) tests that recently broke
> 
> I think that a #2 should never persist for more than one build
> iteration, as either things get fixed or backed out.  I suppose then we
> are really talking about category #1, and that we don't have the "broken
> window" problem as we never had the window there in the first place?
> 
> I think it's important to understand this (if it's actually true).
> 
> geir
> 
> 
> Tim Ellison wrote:
> > Nathan Beyer wrote:
> >> How are other projects handling this? My opinion is that tests which are
> >> expected and known to pass should always be running, and if they fail and
> >> the failure can be independently recreated, then it's something to be
> >> posted on the list, if trivial (typo in build file?), or logged as a JIRA
> >> issue.
> >
> > Agreed, the tests we have enabled are run on each build (hourly if
> > things are being committed), and failures are sent to commit list.
> >
> >> If it's broken for a significant amount of time (weeks, months), then
> >> rather than excluding the test, I would propose moving it to a "broken" or
> >> "possibly invalid" source folder that's out of the test path. If it
> >> doesn't already have a JIRA issue, then one should be created.
> >
> > Yes, though I'd be inclined to move it sooner -- tests should not stay
> > broken for more than a couple of days.
> >
> > Recently our breakages have been invalid tests rather than broken
> > implementation, but they still need to be investigated/resolved.
> >
> >> I've been living with consistently failing tests for a long time now.
> >> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> >> long file name [1] test failing for months.
> >
> > IMHO you should be shouting about it!  The alternative is that we
> > tolerate a few broken windows and overall quality slips.
> >
> >> I think we may be unnecessarily complicating some of this by assuming that
> >> all of the donated tests that are currently excluded and failing are
> >> completely valid. I believe that the currently excluded tests are either
> >> failing because they aren't isolated according to the suggested test
> >> layout or they are invalid tests; I suspect that HARMONY-619 [1] is a case
> >> of the latter.
> >>
> >> So I go back to my original suggestion, implement the testing proposal,
> >> then fix/move any excluded tests to where they work properly or determine
> >> that they are invalid and delete them.
> >
> > Yes, the tests do need improvements too.
> >
> > Regards,
> > Tim
> >
> >
> >> [1] https://issues.apache.org/jira/browse/HARMONY-619
> >>
> >





Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.
Is this the case where we have two 'categories'?

  1) tests that never worked

  2) tests that recently broke

I think that a #2 should never persist for more than one build
iteration, as either things get fixed or backed out.  I suppose then we
are really talking about category #1, and that we don't have the "broken
window" problem as we never had the window there in the first place?

I think it's important to understand this (if it's actually true).

geir


Tim Ellison wrote:
> Nathan Beyer wrote:
>> How are other projects handling this? My opinion is that tests which are
>> expected and known to pass should always be running, and if they fail and the
>> failure can be independently recreated, then it's something to be posted on
>> the list, if trivial (typo in build file?), or logged as a JIRA issue.
> 
> Agreed, the tests we have enabled are run on each build (hourly if
> things are being committed), and failures are sent to commit list.
> 
>> If it's broken for a significant amount of time (weeks, months), then rather
>> than excluding the test, I would propose moving it to a "broken" or
>> "possibly invalid" source folder that's out of the test path. If it doesn't
>> already have a JIRA issue, then one should be created.
> 
> Yes, though I'd be inclined to move it sooner -- tests should not stay
> broken for more than a couple of days.
> 
> Recently our breakages have been invalid tests rather than broken
> implementation, but they still need to be investigated/resolved.
> 
>> I've been living with consistently failing tests for a long time now.
>> Recently it was the unstable Socket tests, but I've been seeing the WinXP
>> long file name [1] test failing for months.
> 
> IMHO you should be shouting about it!  The alternative is that we
> tolerate a few broken windows and overall quality slips.
> 
>> I think we may be unnecessarily complicating some of this by assuming that
>> all of the donated tests that are currently excluded and failing are
>> completely valid. I believe that the currently excluded tests are either
>> failing because they aren't isolated according to the suggested test layout
>> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
>> latter.
>>
>> So I go back to my original suggestion, implement the testing proposal, then
>> fix/move any excluded tests to where they work properly or determine that
>> they are invalid and delete them.
> 
> Yes, the tests do need improvements too.
> 
> Regards,
> Tim
> 
> 
>> [1] https://issues.apache.org/jira/browse/HARMONY-619
>>
> 
> 

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org
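
[Editor's note] The "excluding" discussed throughout the thread happens in the
Ant build: a list of known-broken tests is kept out of the JUnit run until each
is fixed or deleted. As a rough sketch only -- the directory layout, property
names and exclude-file name below are hypothetical, not Harmony's actual
build.xml -- the mechanism looks like:

```xml
<!-- Hypothetical sketch: run every test except those named in an
     exclude list; each excluded test should have a JIRA issue. -->
<junit printsummary="on" fork="yes" haltonfailure="no">
  <classpath refid="tests.classpath"/>
  <batchtest todir="${tests.report.dir}">
    <fileset dir="src/test/java">
      <include name="**/*Test.java"/>
      <!-- exclude.list holds one pattern per line,
           e.g. **/SocketTest.java -->
      <excludesfile name="exclude.list"/>
    </fileset>
  </batchtest>
</junit>
```

Moving a test to a "broken" source folder, as Nathan suggests, amounts to
taking it out of the fileset directory entirely, which makes the exclusion
visible in the source tree rather than buried in an exclude file.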