Posted to dev@harmony.apache.org by Vladimir Ivanov <iv...@gmail.com> on 2006/06/26 14:02:03 UTC

[classlib][testing] excluding the failed tests

Hi,
Working with tests I noticed that we are excluding some tests just because
several tests from a single TestCase fail.

For example, the TestCase 'tests.api.java.lang.StringTest' has 60 tests and
only 2 of them fail. But the build excludes the whole TestCase, and we simply
miss testing of the java.lang.String implementation.

Do we really need to exclude TestCases in the 'ant test' target?

My suggestion is: do not exclude any tests unless they crash the VM.
If somebody needs a list of tests that always pass, a separate target can
be added to the build.

Do you think we should add target 'test-all' to the build?
 Thanks, Vladimir
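
Option (4) in the list Vladimir gives below, comparing a test run against a
'list of known failures', is mostly bookkeeping. A minimal sketch in Java,
assuming hypothetical one-test-name-per-line files:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.TreeSet;

    // Sketch of the 'list of known failures' approach: run every TestCase,
    // then report only the differences against the maintained list.
    // The file names and the one-test-per-line format are assumptions.
    public class KnownFailuresDiff {
        public static void main(String[] args) throws IOException {
            Set known = readNames(new File("known-failures.txt"));
            Set actual = readNames(new File("current-failures.txt"));

            Set regressions = new TreeSet(actual);
            regressions.removeAll(known);   // newly failing tests
            Set nowPassing = new TreeSet(known);
            nowPassing.removeAll(actual);   // tests to prune from the list

            System.out.println("New failures (regressions): " + regressions);
            System.out.println("Now passing (prune the list): " + nowPassing);
        }

        private static Set readNames(File file) throws IOException {
            Set names = new HashSet();
            BufferedReader in = new BufferedReader(new FileReader(file));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.length() > 0) {
                        names.add(line);
                    }
                }
            } finally {
                in.close();
            }
            return names;
        }
    }

The exclude list for crashing tests would stay in the build; this list only
tracks expected failures.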

Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Tim Ellison wrote:
> There was a submission that enabled finer control of failing tests (even
> by platform etc.)
> 
> I may be wrong but commenting out tests usually means that they never
> get fixed; 

Yes, that was my concern as well.

> even putting them into exclude clauses in the ant script is
> too hidden for me -- I prefer to see the exclusions and failures made
> available very clearly.

Well, if we did refactor into

    TestFOO.java and TestFOO_Failures.java

and have an explicit include, people would clearly see the problematic
test cases and might be motivated to contribute by fixing them...

geir
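
A minimal sketch of the split described above, with hypothetical test
methods; each class would live in its own file, and the build's
include/exclude lists would name them explicitly:

    import junit.framework.TestCase;

    // TestFOO.java -- the passing tests stay here and keep running.
    public class TestFOO extends TestCase {
        public void testLength() {
            assertEquals(3, "abc".length());
        }
    }

    // TestFOO_Failures.java -- known-failing tests move here, explicitly
    // listed in the build, so the failures stay visible in the source tree
    // instead of being commented out.
    public class TestFOO_Failures extends TestCase {
        public void testKnownFailure() {
            // FIXME: fails against the current implementation
            fail("known failure, tracked rather than hidden");
        }
    }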

> 
> Regards,
> Tim
> 
> Alexei Zakharov wrote:
>> Hi,
>> +1 for (3), but I think it will be better to define suite() method and
>> enumerate passing tests there rather than to comment out the code.
>>
>> 2006/6/27, Richard Liang <ri...@gmail.com>:
>>> Hello Vladimir,
>>>
>>> +1 to option 3) . We shall comment the failed test cases out and add
>>> FIXME to remind us to diagnose the problems later. ;-)
>>>
>>> Vladimir Ivanov wrote:
>>>> I see your point.
>>>> But I feel that we can miss regressions in non-tested code if we exclude
>>>> TestCases.
>>>> Now, for example we miss testing of
>>> java.lang.Class/Process/Thread/String
>>>> and some other classes.
>>>>
>>>> While we have failing tests and don't want to pay attention to these
>>>> failures we can:
>>>> 1) Leave things as is – do not run TestCases with failing tests.
>>>> 2) Split passing/failing TestCase into separate "failing TestCase" and
>>>> "passing TestCase" and exclude "failing TestCases". When test or
>>>> implementation is fixed we move tests from failing TestCase to passing
>>>> TestCase.
>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
>>>> instead
>>>> of 0 for String.
>>>> 4) Run all TestCases, then, compare test run results with the 'list of
>>>> known
>>>> failures' and see whether new failures appeared. This, I think, is
>>> better
>>>> than 1, 2 and 3, but the overhead is that we support 2 lists - a list of
>>> known
>>>> failing tests and an exclude list where we put crashing tests.
>>>>
>>>> Thanks, Vladimir
>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>>>> Mikhail Loenko wrote:
>>>>>> Hi Vladimir,
>>>>>>
>>>>>> IMHO the tests are to verify that an update does not introduce any
>>>>>> regression. So there are two options: remember exactly which
>>> tests may
>>>>> fail
>>>>>> and remember that all tests must pass. I believe the latter one is
>>>>> a bit
>>>>>> easier and safer.
>>>>> +1
>>>>>
>>>>> Tim
>>>>>
>>>>>> Thanks,
>>>>>> Mikhail
>>>>>>
>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>>>>>>> Hi,
>>>>>>> Working with tests I noticed that we are excluding some tests just
>>>>>>> because
>>>>>>> several tests from single TestCase fail.
>>>>>>>
>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>>>>>>> tests and
>>>>>>> only 2 of them fails. But the build excludes the whole TestCase
>>>>> and we
>>>>>>> just
>>>>>>> miss testing of java.lang.String implementation.
>>>>>>>
>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
>>>>>>>
>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
>>>>>>> If somebody needs a list of tests that always passed a separated
>>>>>>> target can
>>>>>>> be added to build.
>>>>>>>
>>>>>>> Do you think we should add target 'test-all' to the build?
>>>>>>>  Thanks, Vladimir
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>>
>>>>> Tim Ellison (t.p.ellison@gmail.com)
>>>>> IBM Java technology centre, UK.
>>>>>
>>>>>
>>>>>
>>> -- 
>>> Richard Liang
>>> China Software Development Lab, IBM
>>
>>
> 



Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
There was a submission that enabled finer control of failing tests (even
by platform etc.)

I may be wrong but commenting out tests usually means that they never
get fixed; even putting them into exclude clauses in the ant script is
too hidden for me -- I prefer to see the exclusions and failures made
available very clearly.

Regards,
Tim
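
For reference, a run-time filter of the sort Tim mentions can be sketched in
a few lines of JUnit 3. The ExcludingSuite name and the ClassName#testName
key format here are assumptions, not the actual submission (see HARMONY-263,
referenced later in the thread):

    import java.util.Enumeration;
    import java.util.Set;
    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    // Builds a suite for testClass, skipping any test whose
    // "ClassName#testName" key appears in the excluded set.
    public class ExcludingSuite {
        public static Test suite(Class testClass, Set excluded) {
            TestSuite all = new TestSuite(testClass);
            TestSuite kept = new TestSuite(all.getName());
            for (Enumeration e = all.tests(); e.hasMoreElements();) {
                TestCase each = (TestCase) e.nextElement();
                String key = testClass.getName() + "#" + each.getName();
                if (!excluded.contains(key)) {
                    kept.addTest(each);   // only non-excluded tests run
                }
            }
            return kept;
        }
    }

A module's AllTests could then load the excluded names from a per-platform
file, keeping the exclusions visible in one place rather than buried in the
ant script.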

Alexei Zakharov wrote:
> Hi,
> +1 for (3), but I think it will be better to define suite() method and
> enumerate passing tests there rather than to comment out the code.
> 
> 2006/6/27, Richard Liang <ri...@gmail.com>:
>> Hello Vladimir,
>>
>> +1 to option 3) . We shall comment the failed test cases out and add
>> FIXME to remind us to diagnose the problems later. ;-)
>>
>> Vladimir Ivanov wrote:
>> > I see your point.
>> > But I feel that we can miss regression in non-tested code if we exclude
>> > TestCases.
>> > Now, for example we miss testing of
>> java.lang.Class/Process/Thread/String
>> > and some other classes.
>> >
>> > While we have failing tests and don't want to pay attention to these
>> > failures we can:
>> > 1) Leave things as is – do not run TestCases with failing tests.
>> > 2) Split passing/failing TestCase into separate "failing TestCase" and
>> > "passing TestCase" and exclude "failing TestCases". When test or
>> > implementation is fixed we move tests from failing TestCase to passing
>> > TestCase.
>> > 3) Comment failing tests in TestCases. It is better to run 58 tests
>> > instead
>> > of 0 for String.
>> > 4) Run all TestCases, then, compare test run results with the 'list of
>> > known
>> > failures' and see whether new failures appeared. This, I think, is
>> better
>> > then 1, 2 and 3, but, overhead is that we support 2 lists - list of
>> known
>> > failing tests and exclude list where we put crashing tests.
>> >
>> > Thanks, Vladimir
>> > On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Mikhail Loenko wrote:
>> >> > Hi Vladimir,
>> >> >
>> >> > IMHO the tests are to verify that an update does not introduce any
>> >> > regression. So there are two options: remember which exactly
>> tests may
>> >> fail
>> >> > and remember that all tests must pass. I believe the latter one is
>> >> a bit
>> >> > easier and safer.
>> >>
>> >> +1
>> >>
>> >> Tim
>> >>
>> >> > Thanks,
>> >> > Mikhail
>> >> >
>> >> > 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>> >> >> Hi,
>> >> >> Working with tests I noticed that we are excluding some tests just
>> >> >> because
>> >> >> several tests from single TestCase fail.
>> >> >>
>> >> >> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>> >> >> tests and
>> >> >> only 2 of them fails. But the build excludes the whole TestCase
>> >> and we
>> >> >> just
>> >> >> miss testing of java.lang.String implementation.
>> >> >>
>> >> >> Do we really need to exclude TestCases in 'ant test' target?
>> >> >>
>> >> >> My suggestion is: do not exclude any tests until it crashes VM.
>> >> >> If somebody needs a list of tests that always passed a separated
>> >> >> target can
>> >> >> be added to build.
>> >> >>
>> >> >> Do you think we should add target 'test-all' to the build?
>> >> >>  Thanks, Vladimir
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >> >
>> >> >
>> >>
>> >> --
>> >>
>> >> Tim Ellison (t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>> >>
>> >>
>> >>
>> >
>>
>> -- 
>> Richard Liang
>> China Software Development Lab, IBM
> 
> 
> 

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Hi Nathan,

> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test layout
> or they are invalid tests

I will give a concrete example. Currently for java.beans we have more
than a thousand tests in 50 classes, and about 30% of them fail. These
are not invalid tests; they just came from a different origin than the
java.beans implementation currently in svn. They mostly test the
compatibility with the RI that the current implementation has problems
with.

Now I am working on enabling these 30%, but it is not such an easy
task. It will take time (need to refactor internal stuff etc.). And it
is a standard situation for a test class to have, for example, 30 passing
tests and 9 failing. Since there are failures, the whole test class is
excluded. As a result we currently have only 22 test classes enabled
with just 130 tests inside. So about a thousand (!) passing tests are
thrown overboard. IMHO this is not a normal situation and we need to
find some solution, at least for the period while these 30% are being
fixed.
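
The suite() approach suggested earlier in the thread, enumerating only the
passing tests, could look like this minimal sketch; the method names are
hypothetical placeholders:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class XMLEncoderTest extends TestCase {

        public XMLEncoderTest(String name) {
            super(name);
        }

        // Enumerate only the tests known to pass; the failing ones stay
        // compiled and visible in the source but are not added to the suite.
        public static Test suite() {
            TestSuite suite = new TestSuite("XMLEncoderTest (passing subset)");
            suite.addTest(new XMLEncoderTest("testWriteObject"));
            suite.addTest(new XMLEncoderTest("testFlush"));
            // not added: testWriteExpression, which fails against the
            // current implementation and would be tracked in JIRA
            return suite;
        }

        public void testWriteObject() { /* passes */ }
        public void testFlush() { /* passes */ }
        public void testWriteExpression() { fail("known failure"); }
    }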

2006/6/30, Nathan Beyer <nb...@kc.rr.com>:
>
> > -----Original Message-----
> > From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > George Harley wrote:
> > > Nathan Beyer wrote:
> > >> Two suggestions:
> > >> 1. Approve the testing strategy [1] and implement/rework the modules
> > >> appropriately.
> > >> 2. Fix the tests!
> > >>
> > >> -Nathan
> > >>
> > >> [1]
> > >>
> > http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
> > >>
> > >>
> > >
> > > Hi Nathan,
> > >
> > > What are your thoughts on running or not running test cases containing
> > > problematic test methods while those methods are being investigated and
> > > fixed up ?
> > >
> >
> > That's exactly the problem.  We need a clear way to maintain and track
> > this stuff.
> >
> > geir
>
> How are other projects handling this? My opinion is that tests which are
> expected and known to pass should always be running, and if they fail and the
> failure can be independently recreated, then it's something to be posted on
> the list if trivial (a typo in the build file?), or logged as a JIRA issue.
>
> If it's broken for a significant amount of time (weeks, months), then rather
> than excluding the test, I would propose moving it to a "broken" or
> "possibly invalid" source folder that's out of the test path. If it doesn't
> already have a JIRA issue, then one should be created.
>
> I've been living with consistently failing tests for a long time now.
> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> long file name [1] test failing for months.
>
> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test layout
> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
> latter.
>
> So I go back to my original suggestion: implement the testing proposal, then
> fix/move any excluded tests to where they work properly or determine that
> they are invalid and delete them.
>
> [1] https://issues.apache.org/jira/browse/HARMONY-619
>
> >
> > >
> > > Best regards,
> > > George
> > >
> > >
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > >>> Sent: Tuesday, June 27, 2006 12:09 PM
> > >>> To: harmony-dev@incubator.apache.org
> > >>> Subject: Re: [classlib][testing] excluding the failed tests
> > >>>
> > >>>
> > >>>
> > >>> George Harley wrote:
> > >>>
> > >>>> Hi Geir,
> > >>>>
> > >>>> As you may recall, a while back I floated the idea and supplied some
> > >>>> seed code to define all known failing test methods in an XML
> > file
> > >>>> (an "exclusions list") that could be used by JUnit at test run time
> > to
> > >>>> skip over them while allowing the rest of the test methods in a
> > >>>> class to
> > >>>> run [1]. Obviously I thought about that when catching up with this
> > >>>> thread but, more importantly, your comment about being reluctant to
> > >>>> have
> > >>>> more dependencies on JUnit also motivated me to go off and read some
> > >>>> more about TestNG [2].
> > >>>>
> > >>>> It was news to me that TestNG provides out-of-the-box support for
> > >>>> excluding specific test methods as well as groups of methods (where
> > the
> > >>>> groups are declared in source file annotations or Javadoc comments).
> > >>>> Even better, it can do this on existing JUnit test code provided that
> > >>>> the necessary meta-data is present (annotations if compiling to a 1.5
> > >>>> target; Javadoc comments if targeting 1.4 like we currently are). There is a
> > >>>> utility available in the TestNG download and also in the Eclipse
> > >>>> support
> > >>>> plug-in that helps migrate directories of existing JUnit tests to
> > >>>> TestNG
> > >>>> by adding in the basic meta-data (although for me the Eclipse version
> > >>>> also tried to break the test class inheritance from
> > >>>> junit.framework.TestCase which was definitely not what was required).
> > >>>>
> > >>>> Perhaps ... just perhaps ... we should be looking at something like
> > >>>> TestNG (or my wonderful "exclusions list" :-) ) to provide the
> > >>>> granularity of test configuration that we need.
> > >>>>
> > >>>> Just a thought.
> > >>>>
> > >>> How 'bout that ;)
> > >>>
> > >>> geir
> > >>>
> > >>>
> > >>>> Best regards,
> > >>>> George
> > >>>>
> > >>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
> > >>>> [2] http://testng.org
> > >>>>
> > >>>>
> > >>>>
> > >>>> Geir Magnusson Jr wrote:
> > >>>>
> > >>>>> Alexei Zakharov wrote:
> > >>>>>
> > >>>>>
> > >>>>>> Hi,
> > >>>>>> +1 for (3), but I think it will be better to define suite() method
> > >>>>>> and
> > >>>>>> enumerate passing tests there rather than to comment out the code.
> > >>>>>>
> > >>>>>>
> > >>>>> I'm reluctant to see more dependencies on JUnit when we could
> > control
> > >>>>>
> > >>> at
> > >>>
> > >>>>> a level higher in the build system.
> > >>>>>
> > >>>>> Hard to explain, I guess, but if our exclusions are buried in .java,
> > I
> > >>>>> would think that reporting and tracking over time is going to be
> > much
> > >>>>> harder.
> > >>>>>
> > >>>>> geir
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>>> 2006/6/27, Richard Liang <ri...@gmail.com>:
> > >>>>>>
> > >>>>>>
> > >>>>>>> Hello Vladimir,
> > >>>>>>>
> > >>>>>>> +1 to option 3) . We shall comment the failed test cases out and
> > add
> > >>>>>>> FIXME to remind us to diagnose the problems later. ;-)
> > >>>>>>>
> > >>>>>>> Vladimir Ivanov wrote:
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> I see your point.
> > >>>>>>>> But I feel that we can miss regression in non-tested code if we
> > >>>>>>>> exclude
> > >>>>>>>> TestCases.
> > >>>>>>>> Now, for example we miss testing of
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>> java.lang.Class/Process/Thread/String
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> and some other classes.
> > >>>>>>>>
> > >>>>>>>> While we have failing tests and don't want to pay attention to
> > >>>>>>>> these
> > >>>>>>>> failures we can:
> > >>>>>>>> 1) Leave things as is - do not run TestCases with failing tests.
> > >>>>>>>> 2) Split passing/failing TestCase into separate "failing
> > TestCase"
> > >>>>>>>>
> > >>> and
> > >>>
> > >>>>>>>> "passing TestCase" and exclude "failing TestCases". When test or
> > >>>>>>>> implementation is fixed we move tests from failing TestCase to
> > >>>>>>>>
> > >>> passing
> > >>>
> > >>>>>>>> TestCase.
> > >>>>>>>> 3) Comment failing tests in TestCases. It is better to run 58
> > tests
> > >>>>>>>> instead
> > >>>>>>>> of 0 for String.
> > >>>>>>>> 4) Run all TestCases, then, compare test run results with the
> > 'list
> > >>>>>>>>
> > >>> of
> > >>>
> > >>>>>>>> known
> > >>>>>>>> failures' and see whether new failures appeared. This, I think,
> > is
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>> better
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> then 1, 2 and 3, but, overhead is that we support 2 lists - list
> > of
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>> known
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> failing tests and exclude list where we put crashing tests.
> > >>>>>>>>
> > >>>>>>>> Thanks, Vladimir
> > >>>>>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>> Mikhail Loenko wrote:
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> Hi Vladimir,
> > >>>>>>>>>>
> > >>>>>>>>>> IMHO the tests are to verify that an update does not introduce
> > >>>>>>>>>> any
> > >>>>>>>>>> regression. So there are two options: remember which exactly
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>> tests may
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>>> fail
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> and remember that all tests must pass. I believe the latter
> > >>>>>>>>>> one is
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>> a bit
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> easier and safer.
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>> +1
> > >>>>>>>>>
> > >>>>>>>>> Tim
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> Thanks,
> > >>>>>>>>>> Mikhail
> > >>>>>>>>>>
> > >>>>>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>> Hi,
> > >>>>>>>>>>> Working with tests I noticed that we are excluding some tests
> > >>>>>>>>>>>
> > >>> just
> > >>>
> > >>>>>>>>>>> because
> > >>>>>>>>>>> several tests from single TestCase fail.
> > >>>>>>>>>>>
> > >>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest'
> > >>>>>>>>>>> has 60
> > >>>>>>>>>>> tests and
> > >>>>>>>>>>> only 2 of them fails. But the build excludes the whole
> > TestCase
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>> and we
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>>> just
> > >>>>>>>>>>> miss testing of java.lang.String implementation.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
> > >>>>>>>>>>>
> > >>>>>>>>>>> My suggestion is: do not exclude any tests until it crashes
> > VM.
> > >>>>>>>>>>> If somebody needs a list of tests that always passed a
> > separated
> > >>>>>>>>>>> target can
> > >>>>>>>>>>> be added to build.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do you think we should add target 'test-all' to the build?
> > >>>>>>>>>>>  Thanks, Vladimir
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
> > >>>>>>>>> IBM Java technology centre, UK.
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>> --
> > >>>>>>> Richard Liang
> > >>>>>>> China Software Development Lab, IBM

-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Yes Vladimir, nice job!

I have updated the data for the beans module. Since the reason for the
failures of most of the excluded tests is not known yet, I just put their
names there without any comment on why they were excluded.

Thanks,

2006/7/14, Richard Liang <ri...@gmail.com>:
> Great job. Vladimir  ;-)
>
> Vladimir Ivanov wrote:
> > New page http://wiki.apache.org/harmony/Excluded_tests was added to WIKI
> > (refered from http://wiki.apache.org/harmony/ClassLibrary).
> > It would be good if before test investigation one would specify 'in
> > progress, <Name>' near module name, showing it is under investigation
> > being
> > done by <Name>.
> >
> > Thanks, Vladimir
> >
> > On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
> >
> >>
> >>
> >> Vladimir Ivanov wrote:
> >> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> >> ...
> >> >
> >> > Currently I'm looking on the excluded TestCases and it requires more
> >> time
> >> > than I expected.
> >> > I'll prepare a report/summary about excluded TestCases at the end of
> >> this
> >> > process.
> >> >
> >> Hello Vladimir,
> >>
> >> How about the progress of your report/summary?  ;-) As I'm implementing
> >> java.util.Formatter and java.util.Scanner, I'm also interested in the
> >> excluded tests in LUNI. Shall we create a wiki page to publish our
> >> status, so that other people in community can know what we're doing, and
> >> maybe we could attract more volunteers. ;-)
> >>
> >> Best regards,
> >> Richard.
> >>
> >> > Thanks, Vladimir
> >> >
> >> >
> >> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> >>
> >> >> Vladimir Ivanov wrote:
> >> >> > More details: it is
> >> >> >
> >> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> >> > test.
> >> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> >> > algorithm (no support for SHA1PRNG provider).
> >> >> > Looks like it is valid tests for non implemented functionality,
> >> >> but, I'm
> >> >>
> >> >> > not
> >> >> > sure what to do with such TestCase(s): comment these 2 tests or
> >> move
> >> >> them
> >> >> > into separate TestCase.
> >> >> > Ideas?
> >> >>
> >> >> I'd prefer that we only use one mechanism for excluding tests, and
> >> today
> >> >> that is the excludes clause in the ant script.  So I suggest that you
> >> do
> >> >> option (4) below.
> >> >>
> >> >> If there are really useful tests that are being unnecessarily
> >> excluded
> >> >> by being in the same *Test class, then you may want to consider
> >> moving
> >> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> >> the sound of it all SecureRandom tests will be failing.
> >> >>
> >> >> > By the way, probably, it worth reviewing *all* excluded TestCases
> >> and:
> >> >> > 1.      Unexclude if all tests pass.
> >> >> > 2.      Report bug and provide patch for test to make it passing if
> >> it
> >> >> > failed due to bug in test.
> >> >> > 3.      Report bug (and provide patch) for implementation to make
> >> >> tests
> >> >> > passing, if it was/is bug in implementation and no such issue in
> >> JIRA.
> >> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> >> make
> >> >> > further clean-up process easier.
> >> >> > 5.      Review results of this exclude list clean-up activity and
> >> then
> >> >> > decide what to do with the rest failing tests.
> >> >> >
> >> >> > I can do it starting next week. Do you think it worth doing?
> >> >> > Thanks, Vladimir
> >> >>
> >> >> Sounds great, thanks Vladimir.
> >> >>
> >> >> Regards,
> >> >> Tim
> >> >>
> >> >> --
> >> >>
> >> >> Tim Ellison ( t.p.ellison@gmail.com)
> >> >> IBM Java technology centre, UK.
> >> >>
> >> >>
> >> >>
> >> >
> >>
> >> --
> >> Richard Liang
> >> China Software Development Lab, IBM
> >
>
> --
> Richard Liang
> China Software Development Lab, IBM


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Richard Liang <ri...@gmail.com>.
Great job, Vladimir!  ;-)

Vladimir Ivanov wrote:
> New page http://wiki.apache.org/harmony/Excluded_tests was added to WIKI
> (refered from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if before test investigation one would specify 'in
> progress, <Name>' near module name, showing it is under investigation 
> being
> done by <Name>.
>
> Thanks, Vladimir
>
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
>
>>
>>
>> Vladimir Ivanov wrote:
>> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >> ...
>> >
>> > Currently I'm looking on the excluded TestCases and it requires more
>> time
>> > than I expected.
>> > I'll prepare a report/summary about excluded TestCases at the end of
>> this
>> > process.
>> >
>> Hello Vladimir,
>>
>> How about the progress of your report/summary?  ;-) As I'm implementing
>> java.util.Formatter and java.util.Scanner, I'm also interested in the
>> excluded tests in LUNI. Shall we create a wiki page to publish our
>> status, so that other people in community can know what we're doing, and
>> maybe we could attract more volunteers. ;-)
>>
>> Best regards,
>> Richard.
>>
>> > Thanks, Vladimir
>> >
>> >
>> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Vladimir Ivanov wrote:
>> >> > More details: it is
>> >> >
>> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> >> > test.
>> >> > At present time it has 2 failing tests with messages about SHA1PRNG
>> >> > algorithm (no support for SHA1PRNG provider).
>> >> > Looks like it is valid tests for non implemented functionality,
>> >> but, I'm
>> >>
>> >> > not
>> >> > sure what to do with such TestCase(s): comment these 2 tests or 
>> move
>> >> them
>> >> > into separate TestCase.
>> >> > Ideas?
>> >>
>> >> I'd prefer that we only use one mechanism for excluding tests, and
>> today
>> >> that is the excludes clause in the ant script.  So I suggest that you
>> do
>> >> option (4) below.
>> >>
>> >> If there are really useful tests that are being unnecessarily 
>> excluded
>> >> by being in the same *Test class, then you may want to consider 
>> moving
>> >> the failing tests into SecureRandom3Test and excluding that -- but by
>> >> the sound of it all SecureRandom tests will be failing.
>> >>
>> >> > By the way, probably, it worth reviewing *all* excluded TestCases
>> and:
>> >> > 1.      Unexclude if all tests pass.
>> >> > 2.      Report bug and provide patch for test to make it passing if
>> it
>> >> > failed due to bug in test.
>> >> > 3.      Report bug (and provide patch) for implementation to make
>> >> tests
>> >> > passing, if it was/is bug in implementation and no such issue in
>> JIRA.
>> >> > 4.      Specify reasons for excluding TestCases in exclude list to
>> >> make
>> >> > further clean-up process easier.
>> >> > 5.      Review results of this exclude list clean-up activity and
>> then
>> >> > decide what to do with the rest failing tests.
>> >> >
>> >> > I can do it starting next week. Do you think it worth doing?
>> >> > Thanks, Vladimir
>> >>
>> >> Sounds great, thanks Vladimir.
>> >>
>> >> Regards,
>> >> Tim
>> >>
>> >> --
>> >>
>> >> Tim Ellison ( t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>> >>
>> >>
>> >>
>> >
>>
>> -- 
>> Richard Liang
>> China Software Development Lab, IBM
>>
>>
>>
>>
>>
>

-- 
Richard Liang
China Software Development Lab, IBM 





Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Vladimir Ivanov wrote:
> New page http://wiki.apache.org/harmony/Excluded_tests was added to WIKI
> (refered from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if before test investigation one would specify 'in
> progress, <Name>' near module name, showing it is under investigation being
> done by <Name>.

Nice - please add those as instructions at the top.  Also add to those
instructions that if someone does add their name to the list, they
should also send a quick note to the dev list letting people know.

geir

> 
> Thanks, Vladimir
> 
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
> 
>>
>>
>> Vladimir Ivanov wrote:
>> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >> ...
>> >
>> > Currently I'm looking on the excluded TestCases and it requires more
>> time
>> > than I expected.
>> > I'll prepare a report/summary about excluded TestCases at the end of
>> this
>> > process.
>> >
>> Hello Vladimir,
>>
>> How about the progress of your report/summary?  ;-) As I'm implementing
>> java.util.Formatter and java.util.Scanner, I'm also interested in the
>> excluded tests in LUNI. Shall we create a wiki page to publish our
>> status, so that other people in community can know what we're doing, and
>> maybe we could attract more volunteers. ;-)
>>
>> Best regards,
>> Richard.
>>
>> > Thanks, Vladimir
>> >
>> >
>> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Vladimir Ivanov wrote:
>> >> > More details: it is
>> >> >
>> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> >> > test.
>> >> > At present time it has 2 failing tests with messages about SHA1PRNG
>> >> > algorithm (no support for SHA1PRNG provider).
>> >> > Looks like it is valid tests for non implemented functionality,
>> >> but, I'm
>> >>
>> >> > not
>> >> > sure what to do with such TestCase(s): comment these 2 tests or move
>> >> them
>> >> > into separate TestCase.
>> >> > Ideas?
>> >>
>> >> I'd prefer that we only use one mechanism for excluding tests, and
>> today
>> >> that is the excludes clause in the ant script.  So I suggest that you
>> do
>> >> option (4) below.
>> >>
>> >> If there are really useful tests that are being unnecessarily excluded
>> >> by being in the same *Test class, then you may want to consider moving
>> >> the failing tests into SecureRandom3Test and excluding that -- but by
>> >> the sound of it all SecureRandom tests will be failing.
>> >>
>> >> > By the way, probably, it worth reviewing *all* excluded TestCases
>> and:
>> >> > 1.      Unexclude if all tests pass.
>> >> > 2.      Report bug and provide patch for test to make it passing if
>> it
>> >> > failed due to bug in test.
>> >> > 3.      Report bug (and provide patch) for implementation to make
>> >> tests
>> >> > passing, if it was/is bug in implementation and no such issue in
>> JIRA.
>> >> > 4.      Specify reasons for excluding TestCases in exclude list to
>> >> make
>> >> > further clean-up process easier.
>> >> > 5.      Review results of this exclude list clean-up activity and
>> then
>> >> > decide what to do with the rest failing tests.
>> >> >
>> >> > I can do it starting next week. Do you think it worth doing?
>> >> > Thanks, Vladimir
>> >>
>> >> Sounds great, thanks Vladimir.
>> >>
>> >> Regards,
>> >> Tim
>> >>
>> >> --
>> >>
>> >> Tim Ellison ( t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>> >>
>> >>
>> >>
>> >
>>
>> -- 
>> Richard Liang
>> China Software Development Lab, IBM
>>
>>
>>
>>
>>
> 



Re: [classlib][testing] excluding the failed tests

Posted by Andrew Zhang <zh...@gmail.com>.
Hi folks,

I'd like to investigate tests/api/java/net/DatagramSocketTest.java and
tests/api/java/net/DatagramSocketTest.java in the luni module. I have updated
the wiki page (http://wiki.apache.org/harmony/Excluded_tests). I also plan
to study the other excluded tests in the luni module when I finish these two
files. Please let me know if you're interested too. Thanks!


On 7/14/06, Vladimir Ivanov <iv...@gmail.com> wrote:
>
> New page http://wiki.apache.org/harmony/Excluded_tests was added to WIKI
> (refered from http://wiki.apache.org/harmony/ClassLibrary).
> It would be good if before test investigation one would specify 'in
> progress, <Name>' near module name, showing it is under investigation
> being
> done by <Name>.
>
> Thanks, Vladimir
>
> On 7/14/06, Richard Liang <ri...@gmail.com> wrote:
>
> >
> >
> > Vladimir Ivanov wrote:
> > >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> > >> ...
> > >
> > > Currently I'm looking on the excluded TestCases and it requires more
> > time
> > > than I expected.
> > > I'll prepare a report/summary about excluded TestCases at the end of
> > this
> > > process.
> > >
> > Hello Vladimir,
> >
> > How about the progress of your report/summary?  ;-) As I'm implementing
> > java.util.Formatter and java.util.Scanner, I'm also interested in the
> > excluded tests in LUNI. Shall we create a wiki page to publish our
> > status, so that other people in community can know what we're doing, and
> > maybe we could attract more volunteers. ;-)
> >
> > Best regards,
> > Richard.
> >
> > > Thanks, Vladimir
> > >
> > >
> > > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> > >>
> > >> Vladimir Ivanov wrote:
> > >> > More details: it is
> > >> >
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > >> > test.
> > >> > At present time it has 2 failing tests with messages about SHA1PRNG
> > >> > algorithm (no support for SHA1PRNG provider).
> > >> > Looks like it is valid tests for non implemented functionality,
> > >> but, I'm
> > >>
> > >> > not
> > >> > sure what to do with such TestCase(s): comment these 2 tests or
> move
> > >> them
> > >> > into separate TestCase.
> > >> > Ideas?
> > >>
> > >> I'd prefer that we only use one mechanism for excluding tests, and
> > today
> > >> that is the excludes clause in the ant script.  So I suggest that you
> > do
> > >> option (4) below.
> > >>
> > >> If there are really useful tests that are being unnecessarily
> excluded
> > >> by being in the same *Test class, then you may want to consider
> moving
> > >> the failing tests into SecureRandom3Test and excluding that -- but by
> > >> the sound of it all SecureRandom tests will be failing.
> > >>
> > >> > By the way, probably, it worth reviewing *all* excluded TestCases
> > and:
> > >> > 1.      Unexclude if all tests pass.
> > >> > 2.      Report bug and provide patch for test to make it passing if
> > it
> > >> > failed due to bug in test.
> > >> > 3.      Report bug (and provide patch) for implementation to make
> > >> tests
> > >> > passing, if it was/is bug in implementation and no such issue in
> > JIRA.
> > >> > 4.      Specify reasons for excluding TestCases in exclude list to
> > >> make
> > >> > further clean-up process easier.
> > >> > 5.      Review results of this exclude list clean-up activity and
> > then
> > >> > decide what to do with the rest failing tests.
> > >> >
> > >> > I can do it starting next week. Do you think it worth doing?
> > >> > Thanks, Vladimir
> > >>
> > >> Sounds great, thanks Vladimir.
> > >>
> > >> Regards,
> > >> Tim
> > >>
> > >> --
> > >>
> > >> Tim Ellison ( t.p.ellison@gmail.com)
> > >> IBM Java technology centre, UK.
> > >>
> > >>
> > >>
> > >
> >
> > --
> > Richard Liang
> > China Software Development Lab, IBM
> >
> >
> >
> >
> >
>
>


-- 
Andrew Zhang
China Software Development Lab, IBM

Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
(referred from http://wiki.apache.org/harmony/ClassLibrary).
It would be good if, before investigating a module's tests, one would specify
'in progress, <Name>' near the module name, showing that the investigation is
being done by <Name>.

 Thanks, Vladimir

 On 7/14/06, Richard Liang <ri...@gmail.com> wrote:

>
>
> Vladimir Ivanov wrote:
> >> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >> ...
> >
> > Currently I'm looking on the excluded TestCases and it requires more
> time
> > than I expected.
> > I'll prepare a report/summary about excluded TestCases at the end of
> this
> > process.
> >
> Hello Vladimir,
>
> How about the progress of your report/summary?  ;-) As I'm implementing
> java.util.Formatter and java.util.Scanner, I'm also interested in the
> excluded tests in LUNI. Shall we create a wiki page to publish our
> status, so that other people in community can know what we're doing, and
> maybe we could attract more volunteers. ;-)
>
> Best regards,
> Richard.
>
> > Thanks, Vladimir
> >
> >
> > On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
> >>
> >> Vladimir Ivanov wrote:
> >> > More details: it is
> >> >
> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> > test.
> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> > algorithm (no support for SHA1PRNG provider).
> >> > Looks like it is valid tests for non implemented functionality,
> >> but, I'm
> >>
> >> > not
> >> > sure what to do with such TestCase(s): comment these 2 tests or move
> >> them
> >> > into separate TestCase.
> >> > Ideas?
> >>
> >> I'd prefer that we only use one mechanism for excluding tests, and
> today
> >> that is the excludes clause in the ant script.  So I suggest that you
> do
> >> option (4) below.
> >>
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >>
> >> > By the way, probably, it worth reviewing *all* excluded TestCases
> and:
> >> > 1.      Unexclude if all tests pass.
> >> > 2.      Report bug and provide patch for test to make it passing if
> it
> >> > failed due to bug in test.
> >> > 3.      Report bug (and provide patch) for implementation to make
> >> tests
> >> > passing, if it was/is bug in implementation and no such issue in
> JIRA.
> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> make
> >> > further clean-up process easier.
> >> > 5.      Review results of this exclude list clean-up activity and
> then
> >> > decide what to do with the rest failing tests.
> >> >
> >> > I can do it starting next week. Do you think it worth doing?
> >> > Thanks, Vladimir
> >>
> >> Sounds great, thanks Vladimir.
> >>
> >> Regards,
> >> Tim
> >>
> >> --
> >>
> >> Tim Ellison ( t.p.ellison@gmail.com)
> >> IBM Java technology centre, UK.
> >>
> >>
> >>
> >
>
> --
> Richard Liang
> China Software Development Lab, IBM
>
>
>
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Richard Liang <ri...@gmail.com>.

Vladimir Ivanov wrote:
>> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>> ...
>
> Currently I'm looking on the excluded TestCases and it requires more time
> than I expected.
> I'll prepare a report/summary about excluded TestCases at the end of this
> process.
>
Hello Vladimir,

How about the progress of your report/summary?  ;-) As I'm implementing
java.util.Formatter and java.util.Scanner, I'm also interested in the
excluded tests in LUNI. Shall we create a wiki page to publish our
status, so that other people in the community can know what we're doing,
and maybe we could attract more volunteers? ;-)

Best regards,
Richard.

> Thanks, Vladimir
>
>
> On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>>
>> Vladimir Ivanov wrote:
>> > More details: it is
>> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> > test.
>> > At present time it has 2 failing tests with messages about SHA1PRNG
>> > algorithm (no support for SHA1PRNG provider).
>> > Looks like it is valid tests for non implemented functionality, 
>> but, I'm
>>
>> > not
>> > sure what to do with such TestCase(s): comment these 2 tests or move
>> them
>> > into separate TestCase.
>> > Ideas?
>>
>> I'd prefer that we only use one mechanism for excluding tests, and today
>> that is the excludes clause in the ant script.  So I suggest that you do
>> option (4) below.
>>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>>
>> > By the way, probably, it worth reviewing *all* excluded TestCases and:
>> > 1.      Unexclude if all tests pass.
>> > 2.      Report bug and provide patch for test to make it passing if it
>> > failed due to bug in test.
>> > 3.      Report bug (and provide patch) for implementation to make 
>> tests
>> > passing, if it was/is bug in implementation and no such issue in JIRA.
>> > 4.      Specify reasons for excluding TestCases in exclude list to 
>> make
>> > further clean-up process easier.
>> > 5.      Review results of this exclude list clean-up activity and then
>> > decide what to do with the rest failing tests.
>> >
>> > I can do it starting next week. Do you think it worth doing?
>> > Thanks, Vladimir
>>
>> Sounds great, thanks Vladimir.
>>
>> Regards,
>> Tim
>>
>> -- 
>>
>> Tim Ellison ( t.p.ellison@gmail.com)
>> IBM Java technology centre, UK.
>>
>>
>>
>

-- 
Richard Liang
China Software Development Lab, IBM 





Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
>On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>...

Currently I'm looking at the excluded TestCases, and it requires more time
than I expected.
I'll prepare a report/summary about the excluded TestCases at the end of
this process.

Thanks, Vladimir


On 7/7/06, Tim Ellison <t....@gmail.com> wrote:
>
> Vladimir Ivanov wrote:
> > More details: it is
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > test.
> > At present time it has 2 failing tests with messages about SHA1PRNG
> > algorithm (no support for SHA1PRNG provider).
> > Looks like it is valid tests for non implemented functionality, but, I'm
>
> > not
> > sure what to do with such TestCase(s): comment these 2 tests or move
> them
> > into separate TestCase.
> > Ideas?
>
> I'd prefer that we only use one mechanism for excluding tests, and today
> that is the excludes clause in the ant script.  So I suggest that you do
> option (4) below.
>
> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.
>
> > By the way, probably, it worth reviewing *all* excluded TestCases and:
> > 1.      Unexclude if all tests pass.
> > 2.      Report bug and provide patch for test to make it passing if it
> > failed due to bug in test.
> > 3.      Report bug (and provide patch) for implementation to make tests
> > passing, if it was/is bug in implementation and no such issue in JIRA.
> > 4.      Specify reasons for excluding TestCases in exclude list to make
> > further clean-up process easier.
> > 5.      Review results of this exclude list clean-up activity and then
> > decide what to do with the rest failing tests.
> >
> > I can do it starting next week. Do you think it worth doing?
> > Thanks, Vladimir
>
> Sounds great, thanks Vladimir.
>
> Regards,
> Tim
>
> --
>
> Tim Ellison ( t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Alexei Zakharov wrote:
> Hi,
> 
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
> 
> I think it's a nice idea to do this at least for java.beans since
> there are hundreds of useful workable tests excluded. After quite a
> long time working with this module I have a strong wish to clean up
> the mess.
> 
> But probably we should define some naming pattern for class to put
> excluded tests into. For example for XMLEncoderTest.java we can have
> XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> case we don't need to put extra "exclude" clause in the build.xml
> since such name doesn't match **/*Test.java pattern (current). Another
> variant is something like FAILED_XMLEncoderTest.java - matches the
> pattern and needs the clause. Thoughts?

I know that is a simple scheme, but let's not invent yet another way.
Either wait for the resolution of the testing layout thread George
refers to, or use the exclusion list for the moment.

Regards,
Tim

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Thanks George & Tim, I was out during the last week and today was reading
the threads from oldest to newest. :)
I agree, a general solution using TestSuites or even TestNG is better
than my temporary one. However, defining a general approach can take a
long time. Anyway, let's move our discussion to that thread.
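
For reference, a minimal sketch of the TestNG variant discussed above,
assuming a 1.5 target; the class, method, and 'broken' group names are all
hypothetical, and the actual group exclusion would live in the TestNG
configuration or on the command line:

    import org.testng.annotations.Test;

    public class StringBrokenGroupTest {
        @Test
        public void testLength() {
            if ("abc".length() != 3) {
                throw new AssertionError("unexpected length");
            }
        }

        // Tagged instead of commented out: a run configured to exclude the
        // "broken" group skips this method, but it stays compiled, visible,
        // and easy to re-enable once the implementation is fixed.
        @Test(groups = { "broken" })
        public void testKnownFailure() {
            throw new AssertionError("fails against the current implementation");
        }
    }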

2006/7/10, George Harley <ge...@googlemail.com>:
> Alexei Zakharov wrote:
> > Hi,
> >
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >
> > I think it's a nice idea to do this at least for java.beans since
> > there are hundreds of useful workable tests excluded. After quite a
> > long time working with this module I have a strong wish to clean up
> > the mess.
> >
> > But probably we should define some naming pattern for class to put
> > excluded tests into. For example for XMLEncoderTest.java we can have
> > XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> > case we don't need to put extra "exclude" clause in the build.xml
> > since such name doesn't match **/*Test.java pattern (current). Another
> > variant is something like FAILED_XMLEncoderTest.java - matches the
> > pattern and needs the clause. Thoughts?
>
> Hi Alexei,
>
> Have you seen the discussion thread related to configuring our tests
> using suites [1] ? If not, then it seems to me that there is potential
> there for a simpler/quicker way of excluding or including tests without
> recourse to creating new files or renaming existing ones. What do you
> think ?
>
> Best regards,
> George
>
> [1]
> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/%3c44ABB451.30806@googlemail.com%3e
>
>
> >
> >
> > 2006/7/6, Tim Ellison <t....@gmail.com>:
> >> Vladimir Ivanov wrote:
> >> > More details: it is
> >> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> >> > test.
> >> > At present time it has 2 failing tests with messages about SHA1PRNG
> >> > algorithm (no support for SHA1PRNG provider).
> >> > Looks like it is valid tests for non implemented functionality,
> >> but, I'm
> >> > not
> >> > sure what to do with such TestCase(s): comment these 2 tests or
> >> move them
> >> > into separate TestCase.
> >> > Ideas?
> >>
> >> I'd prefer that we only use one mechanism for excluding tests, and today
> >> that is the excludes clause in the ant script.  So I suggest that you do
> >> option (4) below.
> >>
> >> If there are really useful tests that are being unnecessarily excluded
> >> by being in the same *Test class, then you may want to consider moving
> >> the failing tests into SecureRandom3Test and excluding that -- but by
> >> the sound of it all SecureRandom tests will be failing.
> >>
> >> > By the way, probably, it worth reviewing *all* excluded TestCases and:
> >> > 1.      Unexclude if all tests pass.
> >> > 2.      Report bug and provide patch for test to make it passing if it
> >> > failed due to bug in test.
> >> > 3.      Report bug (and provide patch) for implementation to make
> >> tests
> >> > passing, if it was/is bug in implementation and no such issue in JIRA.
> >> > 4.      Specify reasons for excluding TestCases in exclude list to
> >> make
> >> > further clean-up process easier.
> >> > 5.      Review results of this exclude list clean-up activity and then
> >> > decide what to do with the rest failing tests.
> >> >
> >> > I can do it starting next week. Do you think it worth doing?
> >> > Thanks, Vladimir
> >>
> >> Sounds great, thanks Vladimir.
> >>
> >> Regards,
> >> Tim
> >
> >
>
>
>
>


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by George Harley <ge...@googlemail.com>.
Alexei Zakharov wrote:
> Hi,
>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>
> I think it's a nice idea to do this at least for java.beans since
> there are hundreds of useful workable tests excluded. After quite a
> long time working with this module I have a strong wish to clean up
> the mess.
>
> But probably we should define some naming pattern for class to put
> excluded tests into. For example for XMLEncoderTest.java we can have
> XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this
> case we don't need to put extra "exclude" clause in the build.xml
> since such name doesn't match **/*Test.java pattern (current). Another
> variant is something like FAILED_XMLEncoderTest.java - matches the
> pattern and needs the clause. Thoughts?

Hi Alexei,

Have you seen the discussion thread related to configuring our tests
using suites [1]? If not, then it seems to me that there is potential
there for a simpler/quicker way of excluding or including tests without
recourse to creating new files or renaming existing ones. What do you
think?

Best regards,
George

[1] 
http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/%3c44ABB451.30806@googlemail.com%3e


>
>
> 2006/7/6, Tim Ellison <t....@gmail.com>:
>> Vladimir Ivanov wrote:
>> > More details: it is
>> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
>> > test.
>> > At present time it has 2 failing tests with messages about SHA1PRNG
>> > algorithm (no support for SHA1PRNG provider).
>> > Looks like it is valid tests for non implemented functionality, 
>> but, I'm
>> > not
>> > sure what to do with such TestCase(s): comment these 2 tests or 
>> move them
>> > into separate TestCase.
>> > Ideas?
>>
>> I'd prefer that we only use one mechanism for excluding tests, and today
>> that is the excludes clause in the ant script.  So I suggest that you do
>> option (4) below.
>>
>> If there are really useful tests that are being unnecessarily excluded
>> by being in the same *Test class, then you may want to consider moving
>> the failing tests into SecureRandom3Test and excluding that -- but by
>> the sound of it all SecureRandom tests will be failing.
>>
> >> > By the way, it's probably worth reviewing *all* excluded TestCases and:
> >> > 1.      Unexclude the TestCase if all its tests pass.
> >> > 2.      Report a bug and provide a patch for the test to make it pass,
> >> > if it failed due to a bug in the test.
> >> > 3.      Report a bug (and provide a patch) for the implementation to
> >> > make the tests pass, if it was/is a bug in the implementation and there
> >> > is no such issue in JIRA.
> >> > 4.      Specify the reasons for excluding TestCases in the exclude
> >> > list, to make the further clean-up process easier.
> >> > 5.      Review the results of this exclude-list clean-up activity and
> >> > then decide what to do with the remaining failing tests.
> >> >
> >> > I can do it starting next week. Do you think it's worth doing?
> >> > Thanks, Vladimir
>>
>> Sounds great, thanks Vladimir.
>>
>> Regards,
>> Tim
>
>




Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Hi,

> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.

I think it's a nice idea to do this at least for java.beans since
there are hundreds of useful workable tests excluded. After quite a
long time working with this module I have a strong wish to clean up
the mess.

But probably we should define some naming pattern for classes to put
excluded tests into. For example, for XMLEncoderTest.java we can have
XMLEncoderTest_Disabled.java or XMLEncoderTest_Failed.java. In this case we
don't need to put an extra "exclude" clause in build.xml, since such a name
doesn't match the current **/*Test.java pattern. Another variant is
something like FAILED_XMLEncoderTest.java, which matches the pattern and
needs the clause. Thoughts?
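To make it concrete, a rough build.xml fragment (directory and property
names are illustrative only, not our real scripts):

    <batchtest todir="${tests.output}">
      <fileset dir="src/test/java">
        <!-- current pattern: picks up every *Test.java -->
        <include name="**/*Test.java"/>
        <!-- XMLEncoderTest_Disabled.java never matches the include above,
             so it needs no exclude entry at all -->
        <!-- FAILED_XMLEncoderTest.java still matches, so it would need: -->
        <exclude name="**/FAILED_*Test.java"/>
      </fileset>
    </batchtest>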


2006/7/6, Tim Ellison <t....@gmail.com>:
> Vladimir Ivanov wrote:
> > More details: it is
> > org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> > test.
> > At present it has 2 failing tests, with messages about the SHA1PRNG
> > algorithm (no support for the SHA1PRNG provider).
> > Looks like these are valid tests for unimplemented functionality, but
> > I'm not sure what to do with such TestCase(s): comment these 2 tests out
> > or move them into a separate TestCase.
> > Ideas?
>
> I'd prefer that we only use one mechanism for excluding tests, and today
> that is the excludes clause in the ant script.  So I suggest that you do
> option (4) below.
>
> If there are really useful tests that are being unnecessarily excluded
> by being in the same *Test class, then you may want to consider moving
> the failing tests into SecureRandom3Test and excluding that -- but by
> the sound of it all SecureRandom tests will be failing.
>
> > By the way, it's probably worth reviewing *all* excluded TestCases and:
> > 1.      Unexclude the TestCase if all its tests pass.
> > 2.      Report a bug and provide a patch for the test to make it pass, if
> > it failed due to a bug in the test.
> > 3.      Report a bug (and provide a patch) for the implementation to make
> > the tests pass, if it was/is a bug in the implementation and there is no
> > such issue in JIRA.
> > 4.      Specify the reasons for excluding TestCases in the exclude list,
> > to make the further clean-up process easier.
> > 5.      Review the results of this exclude-list clean-up activity and then
> > decide what to do with the remaining failing tests.
> >
> > I can do it starting next week. Do you think it's worth doing?
> > Thanks, Vladimir
>
> Sounds great, thanks Vladimir.
>
> Regards,
> Tim


-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Vladimir Ivanov wrote:
> More details: it is
> org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
> test.
> At present it has 2 failing tests, with messages about the SHA1PRNG
> algorithm (no support for the SHA1PRNG provider).
> Looks like these are valid tests for unimplemented functionality, but I'm
> not sure what to do with such TestCase(s): comment these 2 tests out or
> move them into a separate TestCase.
> Ideas?

I'd prefer that we only use one mechanism for excluding tests, and today
that is the excludes clause in the ant script.  So I suggest that you do
option (4) below.

If there are really useful tests that are being unnecessarily excluded
by being in the same *Test class, then you may want to consider moving
the failing tests into SecureRandom3Test and excluding that -- but by
the sound of it all SecureRandom tests will be failing.
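As a sketch, the split could look like this (the method name is invented for
illustration, not the actual Harmony source):

    // SecureRandom3Test.java -- holds only the known-failing tests, so just
    // this class goes on the exclude list while SecureRandom2Test keeps running.
    import java.security.SecureRandom;
    import junit.framework.TestCase;

    public class SecureRandom3Test extends TestCase {
        // fails today: no SHA1PRNG provider in the classlib yet
        public void test_getInstance_SHA1PRNG() throws Exception {
            SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
            assertNotNull(sr.getProvider());
        }
    }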

> By the way, it's probably worth reviewing *all* excluded TestCases and:
> 1.      Unexclude the TestCase if all its tests pass.
> 2.      Report a bug and provide a patch for the test to make it pass, if it
> failed due to a bug in the test.
> 3.      Report a bug (and provide a patch) for the implementation to make the
> tests pass, if it was/is a bug in the implementation and there is no such
> issue in JIRA.
> 4.      Specify the reasons for excluding TestCases in the exclude list, to
> make the further clean-up process easier.
> 5.      Review the results of this exclude-list clean-up activity and then
> decide what to do with the remaining failing tests.
> 
> I can do it starting next week. Do you think it's worth doing?
> Thanks, Vladimir

Sounds great, thanks Vladimir.

Regards,
Tim

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
More details: it is
org/apache/harmony/security/tests/java/security/SecureRandom2Test.java test.
At present it has 2 failing tests, with messages about the SHA1PRNG
algorithm (no support for the SHA1PRNG provider).
Looks like these are valid tests for unimplemented functionality, but I'm
not sure what to do with such TestCase(s): comment these 2 tests out or
move them into a separate TestCase.
Ideas?

By the way, it's probably worth reviewing *all* excluded TestCases and:
1.      Unexclude the TestCase if all its tests pass.
2.      Report a bug and provide a patch for the test to make it pass, if it
failed due to a bug in the test.
3.      Report a bug (and provide a patch) for the implementation to make the
tests pass, if it was/is a bug in the implementation and there is no such
issue in JIRA.
4.      Specify the reasons for excluding TestCases in the exclude list, to
make the further clean-up process easier (see the example below).
5.      Review the results of this exclude-list clean-up activity and then
decide what to do with the remaining failing tests.
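For item 4, each entry in the exclude list could carry its reason inline,
for example (the issue number is just a placeholder):

    <!-- 2 failures: no SHA1PRNG provider yet, see HARMONY-nnnn -->
    <exclude name="org/apache/harmony/security/tests/java/security/SecureRandom2Test.java"/>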

I can do it starting next week. Do you think it's worth doing?
 Thanks, Vladimir


On 7/6/06, Nathan Beyer <nb...@kc.rr.com> wrote:
>
> Did the TestCase run without a failure? If it didn't, then I would ask you
> to attempt to fix it and post the patch, plus the patch to enable it. If it
> did pass, then just post a patch to enable it, or just submit the issue and
> ask for it to be removed from the exclude list.
>
> If the test is failing because of a bug, then log an issue about the bug
> and try to fix the issue.
>
> -Nathan
>
> > -----Original Message-----
> > From: Vladimir Ivanov [mailto:ivavladimir@gmail.com]
> > Sent: Wednesday, July 05, 2006 12:41 AM
> > To: harmony-dev@incubator.apache.org
> > Subject: Re: [classlib][testing] excluding the failed tests
> >
> > Yesterday I tried to add a regression test to an existing TestCase in the
> > security module, but found that the TestCase is in the exclude list. I
> > had to un-exclude it, run it, check that my test passes and exclude the
> > TestCase again - it was a little bit inconvenient; besides, my new valid
> > (I believe) regression test will go directly to the exclude list after
> > integration...
> >
> > I see that we are near a decision on what to do with failing tests.
> > Am I right that we are at the point of agreement on the following?:
> >
> > There could be two groups of failing tests:
> > *Tests that never passed.
> > *Tests that recently started failing.
> >
> > Tests that never passed should be stored in TestCases with the suffix
> > "Fail" (StringFailTest.java, for example). They are subject to review and
> > either deletion, fixing, or fixing the implementation if they reveal a
> > bug in the API implementation.
> > There should be 0 tests that recently started failing. If such a test
> > appears it should be fixed within 24h; otherwise, the commit which
> > introduced the failure will be rolled back.
> > Right?
> > Right?
> >
> >  Thanks, Vladimir
> >
> > On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:
> >
> > > Nathan Beyer wrote:
> > > > Based on what I've seen of the excluded tests, category 1 is the
> > > predominant
> > > > case. This could be validated by looking at old revisions in SVN.
> > >
> > > I'm sure that is true, I'm just saying that the build system 'normal'
> > > state is that all enabled tests pass.  My concern was over your
> > > statement you have had failing tests for months.
> > >
> > > What is failing for you now?
> > >
> > > Regards,
> > > Tim
> > >
> > >
> > > >> -----Original Message-----
> > > >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> > > >>
> > > >> Is this the case where we have two 'categories'?
> > > >>
> > > >>   1) tests that never worked
> > > >>
> > > >>   2) tests that recently broke
> > > >>
> > > >> I think that a #2 should never persist for more than one build
> > > >> iteration, as either things get fixed or backed out.  I suppose
> then
> > we
> > >
> > > >> are really talking about category #1, and that we don't have the
> > > "broken
> > > >> window" problem as we never had the window there in the first
> place?
> > > >>
> > > >> I think it's important to understand this (if it's actually true).
> > > >>
> > > >> geir
> > > >>
> > > >>
> > > >> Tim Ellison wrote:
> > > >>> Nathan Beyer wrote:
> > > >>>> How are other projects handling this? My opinion is that tests,
> > which
> > >
> > > >> are
> > > >>>> expected and know to pass should always be running and if they
> fail
> > > and
> > > >> the
> > > >>>> failure can be independently recreated, then it's something to be
> > > >> posted on
> > > >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> > > issue.
> > > >>> Agreed, the tests we have enabled are run on each build (hourly if
> > > >>> things are being committed), and failures are sent to commit list.
> > > >>>
> > > >>>> If it's broken for a significant amount of time (weeks, months),
> > then
> > >
> > > >> rather
> > > >>>> than excluding the test, I would propose moving it to a "broken"
> or
> > > >>>> "possibly invalid" source folder that's out of the test path. If
> it
> > > >> doesn't
> > >>>> already have a JIRA issue, then one should be created.
> > > >>> Yes, though I'd be inclined to move it sooner -- tests should not
> > stay
> > >
> > > >>> broken for more than a couple of days.
> > > >>>
> > > >>> Recently our breakages have been invalid tests rather than broken
> > > >>> implementation, but they still need to be investigated/resolved.
> > > >>>
> > > >>>> I've been living with consistently failing tests for a long time
> > now.
> > >
> > > >>>> Recently it was the unstable Socket tests, but I've been seeing
> the
> > > >> WinXP
> > > >>>> long file name [1] test failing for months.
> > > >>> IMHO you should be shouting about it!  The alternative is that we
> > > >>> tolerate a few broken windows and overall quality slips.
> > > >>>
> > > >>>> I think we may be unnecessarily complicating some of this by
> > assuming
> > >
> > > >> that
> > > >>>> all of the donated tests that are currently excluded and failing
> > are
> > > >>>> completely valid. I believe that the currently excluded tests are
>
> > > >> either
> > > >>>> failing because they aren't isolated according to the suggested
> > test
> > > >> layout
> > > >>>> or they are invalid test; I suspect that HARMONY-619 [1] is a
> case
> > of
> > >
> > > >> the
> > >>>> latter.
> > > >>>>
> > > >>>> So I go back to my original suggestion, implement the testing
> > > proposal,
> > > >> then
> > > >>>> fix/move any excluded tests to where they work properly or
> > determine
> > > >> that
> > > >>>> they are invalid and delete them.
> > > >>> Yes, the tests do need improvements too.
> > > >>>
> > > >>> Regards,
> > > >>> Tim
> > > >>>
> > > >>>
> > > >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> > > >>>>
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > > --
> > >
> > > Tim Ellison ( t.p.ellison@gmail.com)
> > > IBM Java technology centre, UK.
> > >
> > >
> > >
>
>
>
>

RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
Did the TestCase run without a failure? If it didn't, then I would ask you
to attempt to fix it and post the patch, plus the patch to enable it. If it
did pass, then just post a patch to enable it, or just submit the issue and
ask for it to be removed from the exclude list.

If the test is failing because of a bug, then log an issue about the bug and
try to fix the issue.

-Nathan

> -----Original Message-----
> From: Vladimir Ivanov [mailto:ivavladimir@gmail.com]
> Sent: Wednesday, July 05, 2006 12:41 AM
> To: harmony-dev@incubator.apache.org
> Subject: Re: [classlib][testing] excluding the failed tests
> 
> Yesterday I tried to add a regression test to an existing TestCase in the
> security module, but found that the TestCase is in the exclude list. I had
> to un-exclude it, run it, check that my test passes and exclude the
> TestCase again - it was a little bit inconvenient; besides, my new valid
> (I believe) regression test will go directly to the exclude list after
> integration...
>
> I see that we are near a decision on what to do with failing tests.
> Am I right that we are at the point of agreement on the following?:
>
> There could be two groups of failing tests:
> *Tests that never passed.
> *Tests that recently started failing.
>
> Tests that never passed should be stored in TestCases with the suffix
> "Fail" (StringFailTest.java, for example). They are subject to review and
> either deletion, fixing, or fixing the implementation if they reveal a bug
> in the API implementation.
> There should be 0 tests that recently started failing. If such a test
> appears it should be fixed within 24h; otherwise, the commit which
> introduced the failure will be rolled back.
> Right?
> 
>  Thanks, Vladimir
> 
> On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:
> 
> > Nathan Beyer wrote:
> > > Based on what I've seen of the excluded tests, category 1 is the
> > predominant
> > > case. This could be validated by looking at old revisions in SVN.
> >
> > I'm sure that is true, I'm just saying that the build system 'normal'
> > state is that all enabled tests pass.  My concern was over your
> > statement you have had failing tests for months.
> >
> > What is failing for you now?
> >
> > Regards,
> > Tim
> >
> >
> > >> -----Original Message-----
> > >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> > >>
> > >> Is this the case where we have two 'categories'?
> > >>
> > >>   1) tests that never worked
> > >>
> > >>   2) tests that recently broke
> > >>
> > >> I think that a #2 should never persist for more than one build
> > >> iteration, as either things get fixed or backed out.  I suppose then
> we
> >
> > >> are really talking about category #1, and that we don't have the
> > "broken
> > >> window" problem as we never had the window there in the first place?
> > >>
> > >> I think it's important to understand this (if it's actually true).
> > >>
> > >> geir
> > >>
> > >>
> > >> Tim Ellison wrote:
> > >>> Nathan Beyer wrote:
> > >>>> How are other projects handling this? My opinion is that tests,
> which
> >
> > >> are
> > >>>> expected and known to pass should always be running and if they fail
> > and
> > >> the
> > >>>> failure can be independently recreated, then it's something to be
> > >> posted on
> > >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> > issue.
> > >>> Agreed, the tests we have enabled are run on each build (hourly if
> > >>> things are being committed), and failures are sent to commit list.
> > >>>
> > >>>> If it's broken for a significant amount of time (weeks, months),
> then
> >
> > >> rather
> > >>>> than excluding the test, I would propose moving it to a "broken" or
> > >>>> "possibly invalid" source folder that's out of the test path. If it
> > >> doesn't
> > >>>> already have a JIRA issue, then one should be created.
> > >>> Yes, though I'd be inclined to move it sooner -- tests should not
> stay
> >
> > >>> broken for more than a couple of days.
> > >>>
> > >>> Recently our breakages have been invalid tests rather than broken
> > >>> implementation, but they still need to be investigated/resolved.
> > >>>
> > >>>> I've been living with consistently failing tests for a long time
> now.
> >
> > >>>> Recently it was the unstable Socket tests, but I've been seeing the
> > >> WinXP
> > >>>> long file name [1] test failing for months.
> > >>> IMHO you should be shouting about it!  The alternative is that we
> > >>> tolerate a few broken windows and overall quality slips.
> > >>>
> > >>>> I think we may be unnecessarily complicating some of this by
> assuming
> >
> > >> that
> > >>>> all of the donated tests that are currently excluded and failing
> are
> > >>>> completely valid. I believe that the currently excluded tests are
> > >> either
> > >>>> failing because they aren't isolated according to the suggested
> test
> > >> layout
> > >>>> or they are invalid test; I suspect that HARMONY-619 [1] is a case
> of
> >
> > >> the
> > >>>> latter.
> > >>>>
> > >>>> So I go back to my original suggestion, implement the testing
> > proposal,
> > >> then
> > >>>> fix/move any excluded tests to where they work properly or
> determine
> > >> that
> > >>>> they are invalid and delete them.
> > >>> Yes, the tests do need improvements too.
> > >>>
> > >>> Regards,
> > >>> Tim
> > >>>
> > >>>
> > >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> > >>>>
> > >
> > >
> > >
> > >
> > >
> >
> > --
> >
> > Tim Ellison ( t.p.ellison@gmail.com)
> > IBM Java technology centre, UK.
> >
> >
> >




Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
Yesterday I tried to add a regression test to an existing TestCase in the
security module, but found that the TestCase is in the exclude list. I had to
un-exclude it, run it, check that my test passes and exclude the TestCase
again - it was a little bit inconvenient; besides, my new valid (I believe)
regression test will go directly to the exclude list after integration...

I see that we are near a decision on what to do with failing tests.
Am I right that we are at the point of agreement on the following?:

There could be two groups of failing tests:
*Tests that never passed.
*Tests that recently started failing.

Tests that never passed should be stored in TestCases with the suffix "Fail"
(StringFailTest.java, for example). They are subject to review and either
deletion, fixing, or fixing the implementation if they reveal a bug in the
API implementation.
There should be 0 tests that recently started failing. If such a test appears
it should be fixed within 24h; otherwise, the commit which introduced the
failure will be rolled back.
Right?
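To illustrate the "Fail" suffix convention, a minimal sketch (the class is
hypothetical and the test body is just a placeholder, not a real String
failure):

    import junit.framework.TestCase;

    // StringFailTest.java -- holds the String tests that never passed, so
    // StringTest itself can stay off the exclude list.
    public class StringFailTest extends TestCase {
        // FIXME: reference the JIRA issue; move back to StringTest once fixed
        public void test_knownFailure() {
            fail("never passed against the current implementation");
        }
    }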

 Thanks, Vladimir

On 7/4/06, Tim Ellison <t.p.ellison@gmail.com > wrote:

> Nathan Beyer wrote:
> > Based on what I've seen of the excluded tests, category 1 is the
> predominant
> > case. This could be validated by looking at old revisions in SVN.
>
> I'm sure that is true, I'm just saying that the build system 'normal'
> state is that all enabled tests pass.  My concern was over your
> statement you have had failing tests for months.
>
> What is failing for you now?
>
> Regards,
> Tim
>
>
> >> -----Original Message-----
> >> From: Geir Magnusson Jr [mailto: geir@pobox.com]
> >>
> >> Is this the case where we have two 'categories'?
> >>
> >>   1) tests that never worked
> >>
> >>   2) tests that recently broke
> >>
> >> I think that a #2 should never persist for more than one build
> >> iteration, as either things get fixed or backed out.  I suppose then we
>
> >> are really talking about category #1, and that we don't have the
> "broken
> >> window" problem as we never had the window there in the first place?
> >>
> >> I think it's important to understand this (if it's actually true).
> >>
> >> geir
> >>
> >>
> >> Tim Ellison wrote:
> >>> Nathan Beyer wrote:
> >>>> How are other projects handling this? My opinion is that tests, which
>
> >> are
> >>>> expected and known to pass should always be running and if they fail
> and
> >> the
> >>>> failure can be independently recreated, then it's something to be
> >> posted on
> >>>> the list, if trivial (typo in build file?), or logged as a JIRA
> issue.
> >>> Agreed, the tests we have enabled are run on each build (hourly if
> >>> things are being committed), and failures are sent to commit list.
> >>>
> >>>> If it's broken for a significant amount of time (weeks, months), then
>
> >> rather
> >>>> than excluding the test, I would propose moving it to a "broken" or
> >>>> "possibly invalid" source folder that's out of the test path. If it
> >> doesn't
> >>>> already have a JIRA issue, then one should be created.
> >>> Yes, though I'd be inclined to move it sooner -- tests should not stay
>
> >>> broken for more than a couple of days.
> >>>
> >>> Recently our breakages have been invalid tests rather than broken
> >>> implementation, but they still need to be investigated/resolved.
> >>>
> >>>> I've been living with consistently failing tests for a long time now.
>
> >>>> Recently it was the unstable Socket tests, but I've been seeing the
> >> WinXP
> >>>> long file name [1] test failing for months.
> >>> IMHO you should be shouting about it!  The alternative is that we
> >>> tolerate a few broken windows and overall quality slips.
> >>>
> >>>> I think we may be unnecessarily complicating some of this by assuming
>
> >> that
> >>>> all of the donated tests that are currently excluded and failing are
> >>>> completely valid. I believe that the currently excluded tests are
> >> either
> >>>> failing because they aren't isolated according to the suggested test
> >> layout
> >>>> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of
>
> >> the
> >>>> latter.
> >>>>
> >>>> So I go back to my original suggestion, implement the testing
> proposal,
> >> then
> >>>> fix/move any excluded tests to where they work properly or determine
> >> that
> >>>> they are invalid and delete them.
> >>> Yes, the tests do need improvements too.
> >>>
> >>> Regards,
> >>> Tim
> >>>
> >>>
> >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> >>>>
> >
> >
> >
> >
> >
>
> --
>
> Tim Ellison ( t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>
>
>

Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Nathan Beyer wrote:
> Based on what I've seen of the excluded tests, category 1 is the predominant
> case. This could be validated by looking at old revisions in SVN.

I'm sure that is true, I'm just saying that the build system 'normal'
state is that all enabled tests pass.  My concern was over your
statement you have had failing tests for months.

What is failing for you now?

Regards,
Tim


>> -----Original Message-----
>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
>>
>> Is this the case where we have two 'categories'?
>>
>>   1) tests that never worked
>>
>>   2) tests that recently broke
>>
>> I think that a #2 should never persist for more than one build
>> iteration, as either things get fixed or backed out.  I suppose then we
>> are really talking about category #1, and that we don't have the "broken
>> window" problem as we never had the window there in the first place?
>>
>> I think it's important to understand this (if it's actually true).
>>
>> geir
>>
>>
>> Tim Ellison wrote:
>>> Nathan Beyer wrote:
>>>> How are other projects handling this? My opinion is that tests, which
>> are
>>>> expected and known to pass should always be running and if they fail and
>> the
>>>> failure can be independently recreated, then it's something to be
>> posted on
>>>> the list, if trivial (typo in build file?), or logged as a JIRA issue.
>>> Agreed, the tests we have enabled are run on each build (hourly if
>>> things are being committed), and failures are sent to commit list.
>>>
>>>> If it's broken for a significant amount of time (weeks, months), then
>> rather
>>>> than excluding the test, I would propose moving it to a "broken" or
>>>> "possibly invalid" source folder that's out of the test path. If it
>> doesn't
>>>> already have a JIRA issue, then one should be created.
>>> Yes, though I'd be inclined to move it sooner -- tests should not stay
>>> broken for more than a couple of days.
>>>
>>> Recently our breakages have been invalid tests rather than broken
>>> implementation, but they still need to be investigated/resolved.
>>>
>>>> I've been living with consistently failing tests for a long time now.
>>>> Recently it was the unstable Socket tests, but I've been seeing the
>> WinXP
>>>> long file name [1] test failing for months.
>>> IMHO you should be shouting about it!  The alternative is that we
>>> tolerate a few broken windows and overall quality slips.
>>>
>>>> I think we may be unnecessarily complicating some of this by assuming
>> that
>>>> all of the donated tests that are currently excluded and failing are
>>>> completely valid. I believe that the currently excluded tests are
>> either
>>>> failing because they aren't isolated according to the suggested test
>> layout
>>>> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of
>> the
>>>> latter.
>>>>
>>>> So I go back to my original suggestion, implement the testing proposal,
>> then
>>>> fix/move any excluded tests to where they work properly or determine
>> that
>>>> they are invalid and delete them.
>>> Yes, the tests do need improvements too.
>>>
>>> Regards,
>>> Tim
>>>
>>>
>>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
>>>>
> 
> 
> 
> 
> 

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
Based on what I've seen of the excluded tests, category 1 is the predominant
case. This could be validated by looking at old revisions in SVN.

-Nathan

> -----Original Message-----
> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> 
> Is this the case where we have two 'categories'?
> 
>   1) tests that never worked
> 
>   2) tests that recently broke
> 
> I think that a #2 should never persist for more than one build
> iteration, as either things get fixed or backed out.  I suppose then we
> are really talking about category #1, and that we don't have the "broken
> window" problem as we never had the window there in the first place?
> 
> I think it's important to understand this (if it's actually true).
> 
> geir
> 
> 
> Tim Ellison wrote:
> > Nathan Beyer wrote:
> >> How are other projects handling this? My opinion is that tests, which
> are
> >> expected and known to pass should always be running and if they fail and
> the
> >> failure can be independently recreated, then it's something to be
> posted on
> >> the list, if trivial (typo in build file?), or logged as a JIRA issue.
> >
> > Agreed, the tests we have enabled are run on each build (hourly if
> > things are being committed), and failures are sent to commit list.
> >
> >> If it's broken for a significant amount of time (weeks, months), then
> rather
> >> than excluding the test, I would propose moving it to a "broken" or
> >> "possibly invalid" source folder that's out of the test path. If it
> doesn't
> >> already have a JIRA issue, then one should be created.
> >
> > Yes, though I'd be inclined to move it sooner -- tests should not stay
> > broken for more than a couple of days.
> >
> > Recently our breakages have been invalid tests rather than broken
> > implementation, but they still need to be investigated/resolved.
> >
> >> I've been living with consistently failing tests for a long time now.
> >> Recently it was the unstable Socket tests, but I've been seeing the
> WinXP
> >> long file name [1] test failing for months.
> >
> > IMHO you should be shouting about it!  The alternative is that we
> > tolerate a few broken windows and overall quality slips.
> >
> >> I think we may be unnecessarily complicating some of this by assuming
> that
> >> all of the donated tests that are currently excluded and failing are
> >> completely valid. I believe that the currently excluded tests are
> either
> >> failing because they aren't isolated according to the suggested test
> layout
> >> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of
> the
> >> latter.
> >>
> >> So I go back to my original suggestion, implement the testing proposal,
> then
> >> fix/move any excluded tests to where they work properly or determine
> that
> >> they are invalid and delete them.
> >
> > Yes, the tests do need improvements too.
> >
> > Regards,
> > Tim
> >
> >
> >> [1] https://issues.apache.org/jira/browse/HARMONY-619
> >>
> >





Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.
Is this the case where we have two 'categories'?

  1) tests that never worked

  2) tests that recently broke

I think that a #2 should never persist for more than one build
iteration, as either things get fixed or backed out.  I suppose then we
are really talking about category #1, and that we don't have the "broken
window" problem as we never had the window there in the first place?

I think it's important to understand this (if it's actually true).

geir


Tim Ellison wrote:
> Nathan Beyer wrote:
>> How are other projects handling this? My opinion is that tests, which are
>> expected and known to pass should always be running and if they fail and the
>> failure can be independently recreated, then it's something to be posted on
>> the list, if trivial (typo in build file?), or logged as a JIRA issue.
> 
> Agreed, the tests we have enabled are run on each build (hourly if
> things are being committed), and failures are sent to commit list.
> 
>> If it's broken for a significant amount of time (weeks, months), then rather
>> than excluding the test, I would propose moving it to a "broken" or
>> "possibly invalid" source folder that's out of the test path. If it doesn't
>> already have a JIRA issue, then one should be created.
> 
> Yes, though I'd be inclined to move it sooner -- tests should not stay
> broken for more than a couple of days.
> 
> Recently our breakages have been invalid tests rather than broken
> implementation, but they still need to be investigated/resolved.
> 
>> I've been living with consistently failing tests for a long time now.
>> Recently it was the unstable Socket tests, but I've been seeing the WinXP
>> long file name [1] test failing for months.
> 
> IMHO you should be shouting about it!  The alternative is that we
> tolerate a few broken windows and overall quality slips.
> 
>> I think we may be unnecessarily complicating some of this by assuming that
>> all of the donated tests that are currently excluded and failing are
>> completely valid. I believe that the currently excluded tests are either
>> failing because they aren't isolated according to the suggested test layout
>> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
>> latter.
>>
>> So I go back to my original suggestion, implement the testing proposal, then
>> fix/move any excluded tests to where they work properly or determine that
>> they are invalid and delete them.
> 
> Yes, the tests do need improvements too.
> 
> Regards,
> Tim
> 
> 
>> [1] https://issues.apache.org/jira/browse/HARMONY-619
>>
> 
> 



Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Nathan Beyer wrote:
> How are other projects handling this? My opinion is that tests, which are
> expected and known to pass should always be running and if they fail and the
> failure can be independently recreated, then it's something to be posted on
> the list, if trivial (typo in build file?), or logged as a JIRA issue.

Agreed, the tests we have enabled are run on each build (hourly if
things are being committed), and failures are sent to commit list.

> If it's broken for a significant amount of time (weeks, months), then rather
> than excluding the test, I would propose moving it to a "broken" or
> "possibly invalid" source folder that's out of the test path. If it doesn't
> already have a JIRA issue, then one should be created.

Yes, though I'd be inclined to move it sooner -- tests should not stay
broken for more than a couple of days.

Recently our breakages have been invalid tests rather than broken
implementation, but they still need to be investigated/resolved.

> I've been living with consistently failing tests for a long time now.
> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> long file name [1] test failing for months.

IMHO you should be shouting about it!  The alternative is that we
tolerate a few broken windows and overall quality slips.

> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test layout
> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
> latter.
> 
> So I go back to my original suggestion, implement the testing proposal, then
> fix/move any excluded tests to where they work properly or determine that
> they are invalid and delete them.

Yes, the tests do need improvements too.

Regards,
Tim


> [1] https://issues.apache.org/jira/browse/HARMONY-619
> 


-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
> -----Original Message-----
> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> George Harley wrote:
> > Nathan Beyer wrote:
> >> Two suggestions:
> >> 1. Approve the testing strategy [1] and implement/rework the modules
> >> appropriately.
> >> 2. Fix the tests!
> >>
> >> -Nathan
> >>
> >> [1]
> >>
> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.htm
> l
> >>
> >>
> >
> > Hi Nathan,
> >
> > What are your thoughts on running or not running test cases containing
> > problematic test methods while those methods are being investigated and
> > fixed up?
> >
> 
> That's exactly the problem.  We need a clear way to maintain and track
> this stuff.
> 
> geir

How are other projects handling this? My opinion is that tests, which are
expected and known to pass should always be running and if they fail and the
failure can be independently recreated, then it's something to be posted on
the list, if trivial (typo in build file?), or logged as a JIRA issue.

If it's broken for a significant amount of time (weeks, months), then rather
than excluding the test, I would propose moving it to a "broken" or
"possibly invalid" source folder that's out of the test path. If it doesn't
already have a JIRA issue, then one should be created.
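For example, in a module's build file that could look something like this
(the paths are illustrative only):

    <!-- tests are compiled and run from src/test/java only -->
    <javac srcdir="src/test/java" destdir="bin/test" source="1.4"/>
    <!-- anything moved under src/test/broken is off the test path, so it
         neither compiles nor runs until it is fixed and moved back -->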

I've been living with consistently failing tests for a long time now.
Recently it was the unstable Socket tests, but I've been seeing the WinXP
long file name [1] test failing for months.

I think we may be unnecessarily complicating some of this by assuming that
all of the donated tests that are currently excluded and failing are
completely valid. I believe that the currently excluded tests are either
failing because they aren't isolated according to the suggested test layout
or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
latter.

So I go back to my original suggestion, implement the testing proposal, then
fix/move any excluded tests to where they work properly or determine that
they are invalid and delete them.

[1] https://issues.apache.org/jira/browse/HARMONY-619

> 
> >
> > Best regards,
> > George
> >
> >
> >>
> >>
> >>> -----Original Message-----
> >>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> >>> Sent: Tuesday, June 27, 2006 12:09 PM
> >>> To: harmony-dev@incubator.apache.org
> >>> Subject: Re: [classlib][testing] excluding the failed tests
> >>>
> >>>
> >>>
> >>> George Harley wrote:
> >>>
> >>>> Hi Geir,
> >>>>
> >>>> As you may recall, a while back I floated the idea and supplied some
> >>>> seed code to define all known failing test methods in an XML
> file
> >>>> (an "exclusions list") that could be used by JUnit at test run time
> to
> >>>> skip over them while allowing the rest of the test methods in a
> >>>> class to
> >>>> run [1]. Obviously I thought about that when catching up with this
> >>>> thread but, more importantly, your comment about being reluctant to
> >>>> have
> >>>> more dependencies on JUnit also motivated me to go off and read some
> >>>> more about TestNG [2].
> >>>>
> >>>> It was news to me that TestNG provides out-of-the-box support for
> >>>> excluding specific test methods as well as groups of methods (where
> the
> >>>> groups are declared in source file annotations or Javadoc comments).
> >>>> Even better, it can do this on existing JUnit test code provided that
> >>>> the necessary meta-data (annotations if compiling to a 1.5 target;
> >>>> Javadoc comments if targeting 1.4 like we currently are) is present. There is a
> >>>> utility available in the TestNG download and also in the Eclipse
> >>>> support
> >>>> plug-in that helps migrate directories of existing JUnit tests to
> >>>> TestNG
> >>>> by adding in the basic meta-data (although for me the Eclipse version
> >>>> also tried to break the test class inheritance from
> >>>> junit.framework.TestCase which was definitely not what was required).
> >>>>
> >>>> Perhaps ... just perhaps ... we should be looking at something like
> >>>> TestNG (or my wonderful "exclusions list" :-) ) to provide the
> >>>> granularity of test configuration that we need.
> >>>>
> >>>> Just a thought.
> >>>>
> >>> How 'bout that ;)
> >>>
> >>> geir
> >>>
> >>>
> >>>> Best regards,
> >>>> George
> >>>>
> >>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
> >>>> [2] http://testng.org
> >>>>
> >>>>
> >>>>
> >>>> Geir Magnusson Jr wrote:
> >>>>
> >>>>> Alexei Zakharov wrote:
> >>>>>
> >>>>>
> >>>>>> Hi,
> >>>>>> +1 for (3), but I think it will be better to define suite() method
> >>>>>> and
> >>>>>> enumerate passing tests there rather than to comment out the code.
> >>>>>>
> >>>>>>
> >>>>> I'm reluctant to see more dependencies on JUnit when we could
> control
> >>>>>
> >>> at
> >>>
> >>>>> a level higher in the build system.
> >>>>>
> >>>>> Hard to explain, I guess, but if our exclusions are buried in .java,
> I
> >>>>> would think that reporting and tracking over time is going to be
> much
> >>>>> harder.
> >>>>>
> >>>>> geir
> >>>>>
> >>>>>
> >>>>>
> >>>>>> 2006/6/27, Richard Liang <ri...@gmail.com>:
> >>>>>>
> >>>>>>
> >>>>>>> Hello Vladimir,
> >>>>>>>
> >>>>>>> +1 to option 3) . We shall comment the failed test cases out and
> add
> >>>>>>> FIXME to remind us to diagnose the problems later. ;-)
> >>>>>>>
> >>>>>>> Vladimir Ivanov wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>> I see your point.
> >>>>>>>> But I feel that we can miss regression in non-tested code if we
> >>>>>>>> exclude
> >>>>>>>> TestCases.
> >>>>>>>> Now, for example we miss testing of
> >>>>>>>>
> >>>>>>>>
> >>>>>>> java.lang.Class/Process/Thread/String
> >>>>>>>
> >>>>>>>
> >>>>>>>> and some other classes.
> >>>>>>>>
> >>>>>>>> While we have failing tests and don't want to pay attention to
> >>>>>>>> these
> >>>>>>>> failures we can:
> >>>>>>>> 1) Leave things as is - do not run TestCases with failing tests.
> >>>>>>>> 2) Split passing/failing TestCase into separate "failing
> TestCase"
> >>>>>>>>
> >>> and
> >>>
> >>>>>>>> "passing TestCase" and exclude "failing TestCases". When test or
> >>>>>>>> implementation is fixed we move tests from failing TestCase to
> >>>>>>>>
> >>> passing
> >>>
> >>>>>>>> TestCase.
> >>>>>>>> 3) Comment failing tests in TestCases. It is better to run 58
> tests
> >>>>>>>> instead
> >>>>>>>> of 0 for String.
> >>>>>>>> 4) Run all TestCases, then, compare test run results with the
> 'list
> >>>>>>>>
> >>> of
> >>>
> >>>>>>>> known
> >>>>>>>> failures' and see whether new failures appeared. This, I think,
> is
> >>>>>>>>
> >>>>>>>>
> >>>>>>> better
> >>>>>>>
> >>>>>>>
> >>>>>>>> then 1, 2 and 3, but, overhead is that we support 2 lists - list
> of
> >>>>>>>>
> >>>>>>>>
> >>>>>>> known
> >>>>>>>
> >>>>>>>
> >>>>>>>> failing tests and exclude list where we put crashing tests.
> >>>>>>>>
> >>>>>>>> Thanks, Vladimir
> >>>>>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> Mikhail Loenko wrote:
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> Hi Vladimir,
> >>>>>>>>>>
> >>>>>>>>>> IMHO the tests are to verify that an update does not introduce
> >>>>>>>>>> any
> >>>>>>>>>> regression. So there are two options: remember which exactly
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>> tests may
> >>>>>>>
> >>>>>>>
> >>>>>>>>> fail
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> and remember that all tests must pass. I believe the latter
> >>>>>>>>>> one is
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>> a bit
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> easier and safer.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>> +1
> >>>>>>>>>
> >>>>>>>>> Tim
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> Thanks,
> >>>>>>>>>> Mikhail
> >>>>>>>>>>
> >>>>>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>> Hi,
> >>>>>>>>>>> Working with tests I noticed that we are excluding some tests
> >>>>>>>>>>>
> >>> just
> >>>
> >>>>>>>>>>> because
> >>>>>>>>>>> several tests from single TestCase fail.
> >>>>>>>>>>>
> >>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest'
> >>>>>>>>>>> has 60
> >>>>>>>>>>> tests and
> >>>>>>>>>>> only 2 of them fails. But the build excludes the whole
> TestCase
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>> and we
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>>> just
> >>>>>>>>>>> miss testing of java.lang.String implementation.
> >>>>>>>>>>>
> >>>>>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
> >>>>>>>>>>>
> >>>>>>>>>>> My suggestion is: do not exclude any tests until it crashes
> VM.
> >>>>>>>>>>> If somebody needs a list of tests that always passed a
> separated
> >>>>>>>>>>> target can
> >>>>>>>>>>> be added to build.
> >>>>>>>>>>>
> >>>>>>>>>>> Do you think we should add target 'test-all' to the build?
> >>>>>>>>>>>  Thanks, Vladimir
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
> >>>>>>>>> IBM Java technology centre, UK.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>> --
> >>>>>>> Richard Liang
> >>>>>>> China Software Development Lab, IBM
> >>>>>>>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >
> >
> >
> >
> 




Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

George Harley wrote:
> Nathan Beyer wrote:
>> Two suggestions:
>> 1. Approve the testing strategy [1] and implement/rework the modules
>> appropriately.
>> 2. Fix the tests!
>>
>> -Nathan
>>
>> [1]
>> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
>>
>>   
> 
> Hi Nathan,
> 
> What are your thoughts on running or not running test cases containing
> problematic test methods while those methods are being investigated and
> fixed up?
> 

That's exactly the problem.  We need a clear way to maintain and track
this stuff.

geir

> 
> Best regards,
> George
> 
> 
>>
>>  
>>> -----Original Message-----
>>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
>>> Sent: Tuesday, June 27, 2006 12:09 PM
>>> To: harmony-dev@incubator.apache.org
>>> Subject: Re: [classlib][testing] excluding the failed tests
>>>
>>>
>>>
>>> George Harley wrote:
>>>    
>>>> Hi Geir,
>>>>
>>>> As you may recall, a while back I floated the idea and supplied some
>>>> seed code to define all known failing test methods in an XML file
>>>> (an "exclusions list") that could be used by JUnit at test run time to
>>>> skip over them while allowing the rest of the test methods in a
>>>> class to
>>>> run [1]. Obviously I thought about that when catching up with this
>>>> thread but, more importantly, your comment about being reluctant to
>>>> have
>>>> more dependencies on JUnit also motivated me to go off and read some
>>>> more about TestNG [2].
>>>>
>>>> It was news to me that TestNG provides out-of-the-box support for
>>>> excluding specific test methods as well as groups of methods (where the
>>>> groups are declared in source file annotations or Javadoc comments).
>>>> Even better, it can do this on existing JUnit test code provided that
>>>> the necessary meta-data (annotations if compiling to a 1.5 target;
>>>> Javadoc comments if targeting 1.4 like we currently are) is present. There is a
>>>> utility available in the TestNG download and also in the Eclipse
>>>> support
>>>> plug-in that helps migrate directories of existing JUnit tests to
>>>> TestNG
>>>> by adding in the basic meta-data (although for me the Eclipse version
>>>> also tried to break the test class inheritance from
>>>> junit.framework.TestCase which was definitely not what was required).
>>>>
>>>> Perhaps ... just perhaps ... we should be looking at something like
>>>> TestNG (or my wonderful "exclusions list" :-) ) to provide the
>>>> granularity of test configuration that we need.
>>>>
>>>> Just a thought.
>>>>       
>>> How 'bout that ;)
>>>
>>> geir
>>>
>>>    
>>>> Best regards,
>>>> George
>>>>
>>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
>>>> [2] http://testng.org
>>>>
>>>>
>>>>
>>>> Geir Magnusson Jr wrote:
>>>>      
>>>>> Alexei Zakharov wrote:
>>>>>
>>>>>        
>>>>>> Hi,
>>>>>> +1 for (3), but I think it will be better to define suite() method
>>>>>> and
>>>>>> enumerate passing tests there rather than to comment out the code.
>>>>>>
>>>>>>           
>>>>> I'm reluctant to see more dependencies on JUnit when we could control
>>>>>         
>>> at
>>>    
>>>>> a level higher in the build system.
>>>>>
>>>>> Hard to explain, I guess, but if our exclusions are buried in .java, I
>>>>> would think that reporting and tracking over time is going to be much
>>>>> harder.
>>>>>
>>>>> geir
>>>>>
>>>>>
>>>>>        
>>>>>> 2006/6/27, Richard Liang <ri...@gmail.com>:
>>>>>>
>>>>>>          
>>>>>>> Hello Vladimir,
>>>>>>>
>>>>>>> +1 to option 3) . We shall comment the failed test cases out and add
>>>>>>> FIXME to remind us to diagnose the problems later. ;-)
>>>>>>>
>>>>>>> Vladimir Ivanov wrote:
>>>>>>>
>>>>>>>            
>>>>>>>> I see your point.
>>>>>>>> But I feel that we can miss regression in non-tested code if we
>>>>>>>> exclude
>>>>>>>> TestCases.
>>>>>>>> Now, for example we miss testing of
>>>>>>>>
>>>>>>>>               
>>>>>>> java.lang.Class/Process/Thread/String
>>>>>>>
>>>>>>>            
>>>>>>>> and some other classes.
>>>>>>>>
>>>>>>>> While we have failing tests and don't want to pay attention to
>>>>>>>> these
>>>>>>>> failures we can:
>>>>>>>> 1) Leave things as is - do not run TestCases with failing tests.
>>>>>>>> 2) Split passing/failing TestCase into separate "failing TestCase"
>>>>>>>>               
>>> and
>>>    
>>>>>>>> "passing TestCase" and exclude "failing TestCases". When test or
>>>>>>>> implementation is fixed we move tests from failing TestCase to
>>>>>>>>               
>>> passing
>>>    
>>>>>>>> TestCase.
>>>>>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
>>>>>>>> instead
>>>>>>>> of 0 for String.
>>>>>>>> 4) Run all TestCases, then, compare test run results with the 'list
>>>>>>>>               
>>> of
>>>    
>>>>>>>> known
>>>>>>>> failures' and see whether new failures appeared. This, I think, is
>>>>>>>>
>>>>>>>>               
>>>>>>> better
>>>>>>>
>>>>>>>            
>>>>>>>> then 1, 2 and 3, but, overhead is that we support 2 lists - list of
>>>>>>>>
>>>>>>>>               
>>>>>>> known
>>>>>>>
>>>>>>>            
>>>>>>>> failing tests and exclude list where we put crashing tests.
>>>>>>>>
>>>>>>>> Thanks, Vladimir
>>>>>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>>>>>>>
>>>>>>>>              
>>>>>>>>> Mikhail Loenko wrote:
>>>>>>>>>
>>>>>>>>>                
>>>>>>>>>> Hi Vladimir,
>>>>>>>>>>
>>>>>>>>>> IMHO the tests are to verify that an update does not introduce
>>>>>>>>>> any
>>>>>>>>>> regression. So there are two options: remember which exactly
>>>>>>>>>>
>>>>>>>>>>                   
>>>>>>> tests may
>>>>>>>
>>>>>>>            
>>>>>>>>> fail
>>>>>>>>>
>>>>>>>>>                
>>>>>>>>>> and remember that all tests must pass. I believe the latter
>>>>>>>>>> one is
>>>>>>>>>>
>>>>>>>>>>                   
>>>>>>>>> a bit
>>>>>>>>>
>>>>>>>>>                
>>>>>>>>>> easier and safer.
>>>>>>>>>>
>>>>>>>>>>                   
>>>>>>>>> +1
>>>>>>>>>
>>>>>>>>> Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>                
>>>>>>>>>> Thanks,
>>>>>>>>>> Mikhail
>>>>>>>>>>
>>>>>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>>>>>>>>>>
>>>>>>>>>>                  
>>>>>>>>>>> Hi,
>>>>>>>>>>> Working with tests I noticed that we are excluding some tests
>>>>>>>>>>>                     
>>> just
>>>    
>>>>>>>>>>> because
>>>>>>>>>>> several tests from single TestCase fail.
>>>>>>>>>>>
>>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest'
>>>>>>>>>>> has 60
>>>>>>>>>>> tests and
>>>>>>>>>>> only 2 of them fails. But the build excludes the whole TestCase
>>>>>>>>>>>
>>>>>>>>>>>                     
>>>>>>>>> and we
>>>>>>>>>
>>>>>>>>>                
>>>>>>>>>>> just
>>>>>>>>>>> miss testing of java.lang.String implementation.
>>>>>>>>>>>
>>>>>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
>>>>>>>>>>>
>>>>>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
>>>>>>>>>>> If somebody needs a list of tests that always passed a separated
>>>>>>>>>>> target can
>>>>>>>>>>> be added to build.
>>>>>>>>>>>
>>>>>>>>>>> Do you think we should add target 'test-all' to the build?
>>>>>>>>>>>  Thanks, Vladimir
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>                     
>>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
>>>>>>>>> IBM Java technology centre, UK.
>>>>>>>>>
>>>>>>>>>                 
>>>>>>> -- 
>>>>>>> Richard Liang
>>>>>>> China Software Development Lab, IBM
>>>>>>>             
>>
>>
>>
>>
>>
>>   
> 
> 
> 
> 
> 

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org


Re: [classlib][testing] excluding the failed tests

Posted by George Harley <ge...@googlemail.com>.
Nathan Beyer wrote:
> Two suggestions:
> 1. Approve the testing strategy [1] and implement/rework the modules
> appropriately.
> 2. Fix the tests!
>
> -Nathan
>
> [1]
> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
>   

Hi Nathan,

What are your thoughts on running or not running test cases containing 
problematic test methods while those methods are being investigated and 
fixed up?


Best regards,
George


>
>> -----Original Message-----
>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
>> Sent: Tuesday, June 27, 2006 12:09 PM
>> To: harmony-dev@incubator.apache.org
>> Subject: Re: [classlib][testing] excluding the failed tests
>>
>> George Harley wrote:
>>> Hi Geir,
>>>
>>> As you may recall, a while back I floated the idea and supplied some
>>> seed code to define all known failing test methods in an XML file
>>> (an "exclusions list") that could be used by JUnit at test run time to
>>> skip over them while allowing the rest of the test methods in a class to
>>> run [1]. Obviously I thought about that when catching up with this
>>> thread but, more importantly, your comment about being reluctant to have
>>> more dependencies on JUnit also motivated me to go off and read some
>>> more about TestNG [2].
>>>
>>> It was news to me that TestNG provides out-of-the-box support for
>>> excluding specific test methods as well as groups of methods (where the
>>> groups are declared in source file annotations or Javadoc comments).
>>> Even better, it can do this on existing JUnit test code provided that
>>> the necessary meta-data is present (annotations if compiling to a 1.5
>>> target; Javadoc comments if targeting 1.4 like we currently are). There
>>> is a utility available in the TestNG download and also in the Eclipse
>>> support plug-in that helps migrate directories of existing JUnit tests
>>> to TestNG by adding in the basic meta-data (although for me the Eclipse
>>> version also tried to break the test class inheritance from
>>> junit.framework.TestCase which was definitely not what was required).
>>>
>>> Perhaps ... just perhaps ... we should be looking at something like
>>> TestNG (or my wonderful "exclusions list" :-) ) to provide the
>>> granularity of test configuration that we need.
>>>
>>> Just a thought.
>>
>> How 'bout that ;)
>>
>> geir
>>
>>> Best regards,
>>> George
>>>
>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
>>> [2] http://testng.org
>>>
>>> Geir Magnusson Jr wrote:
>>>> Alexei Zakharov wrote:
>>>>> Hi,
>>>>> +1 for (3), but I think it will be better to define suite() method and
>>>>> enumerate passing tests there rather than to comment out the code.
>>>>
>>>> I'm reluctant to see more dependencies on JUnit when we could control at
>>>> a level higher in the build system.
>>>>
>>>> Hard to explain, I guess, but if our exclusions are buried in .java, I
>>>> would think that reporting and tracking over time is going to be much
>>>> harder.
>>>>
>>>> geir
>>>>
>>>>> 2006/6/27, Richard Liang <ri...@gmail.com>:
>>>>>> Hello Vladimir,
>>>>>>
>>>>>> +1 to option 3) . We shall comment the failed test cases out and add
>>>>>> FIXME to remind us to diagnose the problems later. ;-)
>>>>>>
>>>>>> Vladimir Ivanov wrote:
>>>>>>> I see your point.
>>>>>>> But I feel that we can miss regression in non-tested code if we
>>>>>>> exclude TestCases.
>>>>>>> Now, for example we miss testing of
>>>>>>> java.lang.Class/Process/Thread/String and some other classes.
>>>>>>>
>>>>>>> While we have failing tests and don't want to pay attention to these
>>>>>>> failures we can:
>>>>>>> 1) Leave things as is - do not run TestCases with failing tests.
>>>>>>> 2) Split passing/failing TestCase into separate "failing TestCase" and
>>>>>>> "passing TestCase" and exclude "failing TestCases". When test or
>>>>>>> implementation is fixed we move tests from failing TestCase to passing
>>>>>>> TestCase.
>>>>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
>>>>>>> instead of 0 for String.
>>>>>>> 4) Run all TestCases, then, compare test run results with the 'list of
>>>>>>> known failures' and see whether new failures appeared. This, I think,
>>>>>>> is better than 1, 2 and 3, but, overhead is that we support 2 lists -
>>>>>>> list of known failing tests and exclude list where we put crashing
>>>>>>> tests.
>>>>>>>
>>>>>>> Thanks, Vladimir
>>>>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>>>>>>> Mikhail Loenko wrote:
>>>>>>>>> Hi Vladimir,
>>>>>>>>>
>>>>>>>>> IMHO the tests are to verify that an update does not introduce any
>>>>>>>>> regression. So there are two options: remember which exactly tests
>>>>>>>>> may fail and remember that all tests must pass. I believe the latter
>>>>>>>>> one is a bit easier and safer.
>>>>>>>> +1
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Mikhail
>>>>>>>>>
>>>>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>>>>>>>>>> Hi,
>>>>>>>>>> Working with tests I noticed that we are excluding some tests just
>>>>>>>>>> because several tests from single TestCase fail.
>>>>>>>>>>
>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>>>>>>>>>> tests and only 2 of them fails. But the build excludes the whole
>>>>>>>>>> TestCase and we just miss testing of java.lang.String
>>>>>>>>>> implementation.
>>>>>>>>>>
>>>>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
>>>>>>>>>>
>>>>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
>>>>>>>>>> If somebody needs a list of tests that always passed a separated
>>>>>>>>>> target can be added to build.
>>>>>>>>>>
>>>>>>>>>> Do you think we should add target 'test-all' to the build?
>>>>>>>>>>  Thanks, Vladimir
>>>>>>>>
>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
>>>>>>>> IBM Java technology centre, UK.
>>>>>> --
>>>>>> Richard Liang
>>>>>> China Software Development Lab, IBM


RE: [classlib][testing] excluding the failed tests

Posted by Nathan Beyer <nb...@kc.rr.com>.
Two suggestions:
1. Approve the testing strategy [1] and implement/rework the modules
appropriately.
2. Fix the tests!

-Nathan

[1]
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html


> -----Original Message-----
> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> Sent: Tuesday, June 27, 2006 12:09 PM
> To: harmony-dev@incubator.apache.org
> Subject: Re: [classlib][testing] excluding the failed tests
>
> George Harley wrote:
> > Hi Geir,
> >
> > As you may recall, a while back I floated the idea and supplied some
> > seed code to define all known failing test methods in an XML file
> > (an "exclusions list") that could be used by JUnit at test run time to
> > skip over them while allowing the rest of the test methods in a class to
> > run [1]. Obviously I thought about that when catching up with this
> > thread but, more importantly, your comment about being reluctant to have
> > more dependencies on JUnit also motivated me to go off and read some
> > more about TestNG [2].
> >
> > It was news to me that TestNG provides out-of-the-box support for
> > excluding specific test methods as well as groups of methods (where the
> > groups are declared in source file annotations or Javadoc comments).
> > Even better, it can do this on existing JUnit test code provided that
> > the necessary meta-data is present (annotations if compiling to a 1.5
> > target; Javadoc comments if targeting 1.4 like we currently are). There
> > is a utility available in the TestNG download and also in the Eclipse
> > support plug-in that helps migrate directories of existing JUnit tests
> > to TestNG by adding in the basic meta-data (although for me the Eclipse
> > version also tried to break the test class inheritance from
> > junit.framework.TestCase which was definitely not what was required).
> >
> > Perhaps ... just perhaps ... we should be looking at something like
> > TestNG (or my wonderful "exclusions list" :-) ) to provide the
> > granularity of test configuration that we need.
> >
> > Just a thought.
>
> How 'bout that ;)
>
> geir
>
> >
> > Best regards,
> > George
> >
> > [1] http://issues.apache.org/jira/browse/HARMONY-263
> > [2] http://testng.org
> >
> > Geir Magnusson Jr wrote:
> >> Alexei Zakharov wrote:
> >>> Hi,
> >>> +1 for (3), but I think it will be better to define suite() method and
> >>> enumerate passing tests there rather than to comment out the code.
> >>
> >> I'm reluctant to see more dependencies on JUnit when we could control at
> >> a level higher in the build system.
> >>
> >> Hard to explain, I guess, but if our exclusions are buried in .java, I
> >> would think that reporting and tracking over time is going to be much
> >> harder.
> >>
> >> geir
> >>
> >>> 2006/6/27, Richard Liang <ri...@gmail.com>:
> >>>> Hello Vladimir,
> >>>>
> >>>> +1 to option 3) . We shall comment the failed test cases out and add
> >>>> FIXME to remind us to diagnose the problems later. ;-)
> >>>>
> >>>> Vladimir Ivanov wrote:
> >>>>> I see your point.
> >>>>> But I feel that we can miss regression in non-tested code if we
> >>>>> exclude TestCases.
> >>>>> Now, for example we miss testing of
> >>>>> java.lang.Class/Process/Thread/String and some other classes.
> >>>>>
> >>>>> While we have failing tests and don't want to pay attention to these
> >>>>> failures we can:
> >>>>> 1) Leave things as is - do not run TestCases with failing tests.
> >>>>> 2) Split passing/failing TestCase into separate "failing TestCase" and
> >>>>> "passing TestCase" and exclude "failing TestCases". When test or
> >>>>> implementation is fixed we move tests from failing TestCase to passing
> >>>>> TestCase.
> >>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
> >>>>> instead of 0 for String.
> >>>>> 4) Run all TestCases, then, compare test run results with the 'list of
> >>>>> known failures' and see whether new failures appeared. This, I think,
> >>>>> is better than 1, 2 and 3, but, overhead is that we support 2 lists -
> >>>>> list of known failing tests and exclude list where we put crashing
> >>>>> tests.
> >>>>>
> >>>>> Thanks, Vladimir
> >>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
> >>>>>> Mikhail Loenko wrote:
> >>>>>>> Hi Vladimir,
> >>>>>>>
> >>>>>>> IMHO the tests are to verify that an update does not introduce any
> >>>>>>> regression. So there are two options: remember which exactly tests
> >>>>>>> may fail and remember that all tests must pass. I believe the latter
> >>>>>>> one is a bit easier and safer.
> >>>>>>
> >>>>>> +1
> >>>>>>
> >>>>>> Tim
> >>>>>>
> >>>>>>> Thanks,
> >>>>>>> Mikhail
> >>>>>>>
> >>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> >>>>>>>> Hi,
> >>>>>>>> Working with tests I noticed that we are excluding some tests just
> >>>>>>>> because several tests from single TestCase fail.
> >>>>>>>>
> >>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
> >>>>>>>> tests and only 2 of them fails. But the build excludes the whole
> >>>>>>>> TestCase and we just miss testing of java.lang.String
> >>>>>>>> implementation.
> >>>>>>>>
> >>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
> >>>>>>>>
> >>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
> >>>>>>>> If somebody needs a list of tests that always passed a separated
> >>>>>>>> target can be added to build.
> >>>>>>>>
> >>>>>>>> Do you think we should add target 'test-all' to the build?
> >>>>>>>>  Thanks, Vladimir
> >>>>>>
> >>>>>> Tim Ellison (t.p.ellison@gmail.com)
> >>>>>> IBM Java technology centre, UK.
> >>>> --
> >>>> Richard Liang
> >>>> China Software Development Lab, IBM





Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

George Harley wrote:
> Hi Geir,
> 
> As you may recall, a while back I floated the idea and supplied some
> seed code to define all known failing test methods in an XML file
> (an "exclusions list") that could be used by JUnit at test run time to
> skip over them while allowing the rest of the test methods in a class to
> run [1]. Obviously I thought about that when catching up with this
> thread but, more importantly, your comment about being reluctant to have
> more dependencies on JUnit also motivated me to go off and read some
> more about TestNG [2].
> 
> It was news to me that TestNG provides out-of-the-box support for
> excluding specific test methods as well as groups of methods (where the
> groups are declared in source file annotations or Javadoc comments).
> Even better, it can do this on existing JUnit test code provided that
> the necessary meta-data is present (annotations if compiling to a 1.5 target;
> Javadoc comments if targeting 1.4 like we currently are). There is a
> utility available in the TestNG download and also in the Eclipse support
> plug-in that helps migrate directories of existing JUnit tests to TestNG
> by adding in the basic meta-data (although for me the Eclipse version
> also tried to break the test class inheritance from
> junit.framework.TestCase which was definitely not what was required).
> 
> Perhaps ... just perhaps ... we should be looking at something like
> TestNG (or my wonderful "exclusions list" :-) ) to provide the
> granularity of test configuration that we need.
> 
> Just a thought.

How 'bout that ;)

geir

> 
> Best regards,
> George
> 
> [1] http://issues.apache.org/jira/browse/HARMONY-263
> [2] http://testng.org
> 
> 
> 
> Geir Magnusson Jr wrote:
>> Alexei Zakharov wrote:
>>> Hi,
>>> +1 for (3), but I think it will be better to define suite() method and
>>> enumerate passing tests there rather than to comment out the code.
>>
>> I'm reluctant to see more dependencies on JUnit when we could control at
>> a level higher in the build system.
>>
>> Hard to explain, I guess, but if our exclusions are buried in .java, I
>> would think that reporting and tracking over time is going to be much
>> harder.
>>
>> geir
>>
>>> 2006/6/27, Richard Liang <ri...@gmail.com>:
>>>> Hello Vladimir,
>>>>
>>>> +1 to option 3) . We shall comment the failed test cases out and add
>>>> FIXME to remind us to diagnose the problems later. ;-)
>>>>
>>>> Vladimir Ivanov wrote:
>>>>> I see your point.
>>>>> But I feel that we can miss regression in non-tested code if we
>>>>> exclude TestCases.
>>>>> Now, for example we miss testing of
>>>>> java.lang.Class/Process/Thread/String and some other classes.
>>>>>
>>>>> While we have failing tests and don't want to pay attention to these
>>>>> failures we can:
>>>>> 1) Leave things as is – do not run TestCases with failing tests.
>>>>> 2) Split passing/failing TestCase into separate "failing TestCase" and
>>>>> "passing TestCase" and exclude "failing TestCases". When test or
>>>>> implementation is fixed we move tests from failing TestCase to passing
>>>>> TestCase.
>>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
>>>>> instead of 0 for String.
>>>>> 4) Run all TestCases, then, compare test run results with the 'list of
>>>>> known failures' and see whether new failures appeared. This, I think,
>>>>> is better than 1, 2 and 3, but, overhead is that we support 2 lists -
>>>>> list of known failing tests and exclude list where we put crashing
>>>>> tests.
>>>>>
>>>>> Thanks, Vladimir
>>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>>>>> Mikhail Loenko wrote:
>>>>>>> Hi Vladimir,
>>>>>>>
>>>>>>> IMHO the tests are to verify that an update does not introduce any
>>>>>>> regression. So there are two options: remember which exactly tests
>>>>>>> may fail and remember that all tests must pass. I believe the latter
>>>>>>> one is a bit easier and safer.
>>>>>>
>>>>>> +1
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>> Thanks,
>>>>>>> Mikhail
>>>>>>>
>>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>>>>>>>> Hi,
>>>>>>>> Working with tests I noticed that we are excluding some tests just
>>>>>>>> because several tests from single TestCase fail.
>>>>>>>>
>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>>>>>>>> tests and only 2 of them fails. But the build excludes the whole
>>>>>>>> TestCase and we just miss testing of java.lang.String
>>>>>>>> implementation.
>>>>>>>>
>>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
>>>>>>>>
>>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
>>>>>>>> If somebody needs a list of tests that always passed a separated
>>>>>>>> target can be added to build.
>>>>>>>>
>>>>>>>> Do you think we should add target 'test-all' to the build?
>>>>>>>>  Thanks, Vladimir
>>>>>>
>>>>>> Tim Ellison (t.p.ellison@gmail.com)
>>>>>> IBM Java technology centre, UK.
>>>> --
>>>> Richard Liang
>>>> China Software Development Lab, IBM


Re: [classlib][testing] excluding the failed tests

Posted by George Harley <ge...@googlemail.com>.
Hi Geir,

As you may recall, a while back I floated the idea and supplied some 
seed code to define all known failing test methods in an XML file
(an "exclusions list") that could be used by JUnit at test run time to 
skip over them while allowing the rest of the test methods in a class to 
run [1]. Obviously I thought about that when catching up with this 
thread but, more importantly, your comment about being reluctant to have 
more dependencies on JUnit also motivated me to go off and read some 
more about TestNG [2].
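
To make the idea concrete, here is a loose sketch of such a run-time filter
(this is not the actual HARMONY-263 seed code; the class name and the idea of
passing in a ready-made set of excluded names are assumptions for
illustration). It builds a JUnit 3.x suite from a TestCase class and drops
any method named in the exclusions list:

import java.util.Enumeration;
import java.util.Set;

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// Hypothetical helper: the excludedNames set would be populated by parsing
// the XML exclusions list before the tests are run.
public class ExcludingSuiteBuilder {
    public static Test suite(Class testClass, Set excludedNames) {
        TestSuite all = new TestSuite(testClass);
        TestSuite filtered = new TestSuite(all.getName());
        for (Enumeration e = all.tests(); e.hasMoreElements();) {
            TestCase test = (TestCase) e.nextElement();
            // skip methods named in the exclusions list, keep the rest
            if (!excludedNames.contains(test.getName())) {
                filtered.addTest(test);
            }
        }
        return filtered;
    }
}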

It was news to me that TestNG provides out-of-the-box support for 
excluding specific test methods as well as groups of methods (where the 
groups are declared in source file annotations or Javadoc comments). 
Even better, it can do this on existing JUnit test code provided that 
the necessary meta-data is present (annotations if compiling to a 1.5 target;
Javadoc comments if targeting 1.4 like we currently are). There is a 
utility available in the TestNG download and also in the Eclipse support 
plug-in that helps migrate directories of existing JUnit tests to TestNG 
by adding in the basic meta-data (although for me the Eclipse version 
also tried to break the test class inheritance from 
junit.framework.TestCase which was definitely not what was required).
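
For illustration, the Javadoc-style meta-data on a 1.4 target could look
something like the sketch below (a hypothetical test class; the exact tag
syntax should be checked against the TestNG docs). The "broken" group can
then be excluded in the run configuration without touching the source again:

import junit.framework.TestCase;

public class StringConcatTest extends TestCase {

    /**
     * @testng.test
     */
    public void testConcat() {
        assertEquals("ab", "a".concat("b"));
    }

    /**
     * Known failure, parked in a group that the TestNG run excludes.
     *
     * @testng.test groups = "broken"
     */
    public void testKnownFailure() {
        // under investigation; runs again once "broken" is not excluded
    }
}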

Perhaps ... just perhaps ... we should be looking at something like 
TestNG (or my wonderful "exclusions list" :-) ) to provide the 
granularity of test configuration that we need.

Just a thought.

Best regards,
George

[1] http://issues.apache.org/jira/browse/HARMONY-263
[2] http://testng.org



Geir Magnusson Jr wrote:
> Alexei Zakharov wrote:
>> Hi,
>> +1 for (3), but I think it will be better to define suite() method and
>> enumerate passing tests there rather than to comment out the code.
>
> I'm reluctant to see more dependencies on JUnit when we could control at
> a level higher in the build system.
>
> Hard to explain, I guess, but if our exclusions are buried in .java, I
> would think that reporting and tracking over time is going to be much
> harder.
>
> geir
>
>> 2006/6/27, Richard Liang <ri...@gmail.com>:
>>> Hello Vladimir,
>>>
>>> +1 to option 3) . We shall comment the failed test cases out and add
>>> FIXME to remind us to diagnose the problems later. ;-)
>>>
>>> Vladimir Ivanov wrote:
>>>> I see your point.
>>>> But I feel that we can miss regression in non-tested code if we exclude
>>>> TestCases.
>>>> Now, for example we miss testing of
>>>> java.lang.Class/Process/Thread/String and some other classes.
>>>>
>>>> While we have failing tests and don't want to pay attention to these
>>>> failures we can:
>>>> 1) Leave things as is – do not run TestCases with failing tests.
>>>> 2) Split passing/failing TestCase into separate "failing TestCase" and
>>>> "passing TestCase" and exclude "failing TestCases". When test or
>>>> implementation is fixed we move tests from failing TestCase to passing
>>>> TestCase.
>>>> 3) Comment failing tests in TestCases. It is better to run 58 tests
>>>> instead of 0 for String.
>>>> 4) Run all TestCases, then, compare test run results with the 'list of
>>>> known failures' and see whether new failures appeared. This, I think,
>>>> is better than 1, 2 and 3, but, overhead is that we support 2 lists -
>>>> list of known failing tests and exclude list where we put crashing
>>>> tests.
>>>>
>>>> Thanks, Vladimir
>>>> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>>>> Mikhail Loenko wrote:
>>>>>> Hi Vladimir,
>>>>>>
>>>>>> IMHO the tests are to verify that an update does not introduce any
>>>>>> regression. So there are two options: remember which exactly tests
>>>>>> may fail and remember that all tests must pass. I believe the latter
>>>>>> one is a bit easier and safer.
>>>>>
>>>>> +1
>>>>>
>>>>> Tim
>>>>>
>>>>>> Thanks,
>>>>>> Mikhail
>>>>>>
>>>>>> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>>>>>>> Hi,
>>>>>>> Working with tests I noticed that we are excluding some tests just
>>>>>>> because several tests from single TestCase fail.
>>>>>>>
>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>>>>>>> tests and only 2 of them fails. But the build excludes the whole
>>>>>>> TestCase and we just miss testing of java.lang.String implementation.
>>>>>>>
>>>>>>> Do we really need to exclude TestCases in 'ant test' target?
>>>>>>>
>>>>>>> My suggestion is: do not exclude any tests until it crashes VM.
>>>>>>> If somebody needs a list of tests that always passed a separated
>>>>>>> target can be added to build.
>>>>>>>
>>>>>>> Do you think we should add target 'test-all' to the build?
>>>>>>>  Thanks, Vladimir
>>>>>
>>>>> Tim Ellison (t.p.ellison@gmail.com)
>>>>> IBM Java technology centre, UK.
>>> --
>>> Richard Liang
>>> China Software Development Lab, IBM


Re: [classlib][testing] excluding the failed tests

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Alexei Zakharov wrote:
> Hi,
> +1 for (3), but I think it will be better to define suite() method and
> enumerate passing tests there rather than to comment out the code.

I'm reluctant to see more dependencies on JUnit when we could control at
a level higher in the build system.

Hard to explain, I guess, but if our exclusions are buried in .java, I
would think that reporting and tracking over time is going to be much
harder.

geir

>
> 2006/6/27, Richard Liang <ri...@gmail.com>:
>> Hello Vladimir,
>>
>> +1 to option 3) . We shall comment the failed test cases out and add
>> FIXME to remind us to diagnose the problems later. ;-)
>>
>> Vladimir Ivanov wrote:
>> > I see your point.
>> > But I feel that we can miss regression in non-tested code if we exclude
>> > TestCases.
>> > Now, for example we miss testing of
>> > java.lang.Class/Process/Thread/String
>> > and some other classes.
>> >
>> > While we have failing tests and don't want to pay attention to these
>> > failures we can:
>> > 1) Leave things as is – do not run TestCases with failing tests.
>> > 2) Split passing/failing TestCase into separate "failing TestCase" and
>> > "passing TestCase" and exclude "failing TestCases". When test or
>> > implementation is fixed we move tests from failing TestCase to passing
>> > TestCase.
>> > 3) Comment failing tests in TestCases. It is better to run 58 tests
>> > instead of 0 for String.
>> > 4) Run all TestCases, then, compare test run results with the 'list of
>> > known failures' and see whether new failures appeared. This, I think,
>> > is better than 1, 2 and 3, but, overhead is that we support 2 lists -
>> > list of known failing tests and exclude list where we put crashing
>> > tests.
>> >
>> > Thanks, Vladimir
>> > On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>> >>
>> >> Mikhail Loenko wrote:
>> >> > Hi Vladimir,
>> >> >
>> >> > IMHO the tests are to verify that an update does not introduce any
>> >> > regression. So there are two options: remember which exactly tests
>> >> > may fail and remember that all tests must pass. I believe the latter
>> >> > one is a bit easier and safer.
>> >>
>> >> +1
>> >>
>> >> Tim
>> >>
>> >> > Thanks,
>> >> > Mikhail
>> >> >
>> >> > 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>> >> >> Hi,
>> >> >> Working with tests I noticed that we are excluding some tests just
>> >> >> because several tests from single TestCase fail.
>> >> >>
>> >> >> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>> >> >> tests and only 2 of them fails. But the build excludes the whole
>> >> >> TestCase and we just miss testing of java.lang.String
>> >> >> implementation.
>> >> >>
>> >> >> Do we really need to exclude TestCases in 'ant test' target?
>> >> >>
>> >> >> My suggestion is: do not exclude any tests until it crashes VM.
>> >> >> If somebody needs a list of tests that always passed a separated
>> >> >> target can be added to build.
>> >> >>
>> >> >> Do you think we should add target 'test-all' to the build?
>> >> >>  Thanks, Vladimir
>> >>
>> >> --
>> >>
>> >> Tim Ellison (t.p.ellison@gmail.com)
>> >> IBM Java technology centre, UK.
>>
>> --
>> Richard Liang
>> China Software Development Lab, IBM



Re: [classlib][testing] excluding the failed tests

Posted by Alexei Zakharov <al...@gmail.com>.
Hi,
+1 for (3), but I think it will be better to define suite() method and
enumerate passing tests there rather than to comment out the code.
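
A minimal sketch of that approach (the test names here are hypothetical, and
StringTest is assumed to have the usual JUnit 3.x String-name constructor):

import junit.framework.Test;
import junit.framework.TestSuite;

public class StringTestSuite {
    // Enumerate only the tests known to pass; the known failures are simply
    // not added, so the other tests still run.
    public static Test suite() {
        TestSuite suite = new TestSuite("StringTest passing subset");
        suite.addTest(new StringTest("test_concat"));
        suite.addTest(new StringTest("test_indexOf"));
        // ... add the remaining passing tests ...
        return suite;
    }
}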

2006/6/27, Richard Liang <ri...@gmail.com>:
> Hello Vladimir,
>
> +1 to option 3) . We shall comment the failed test cases out and add
> FIXME to remind us to diagnose the problems later. ;-)
>
> Vladimir Ivanov wrote:
> > I see your point.
> > But I feel that we can miss regression in non-tested code if we exclude
> > TestCases.
> > Now, for example we miss testing of java.lang.Class/Process/Thread/String
> > and some other classes.
> >
> > While we have failing tests and don't want to pay attention to these
> > failures we can:
> > 1) Leave things as is – do not run TestCases with failing tests.
> > 2) Split passing/failing TestCase into separate "failing TestCase" and
> > "passing TestCase" and exclude "failing TestCases". When test or
> > implementation is fixed we move tests from failing TestCase to passing
> > TestCase.
> > 3) Comment failing tests in TestCases. It is better to run 58 tests
> > instead of 0 for String.
> > 4) Run all TestCases, then, compare test run results with the 'list of
> > known failures' and see whether new failures appeared. This, I think, is
> > better than 1, 2 and 3, but, overhead is that we support 2 lists - list
> > of known failing tests and exclude list where we put crashing tests.
> >
> > Thanks, Vladimir
> > On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
> >>
> >> Mikhail Loenko wrote:
> >> > Hi Vladimir,
> >> >
> >> > IMHO the tests are to verify that an update does not introduce any
> >> > regression. So there are two options: remember which exactly tests may
> >> > fail and remember that all tests must pass. I believe the latter one is
> >> > a bit easier and safer.
> >>
> >> +1
> >>
> >> Tim
> >>
> >> > Thanks,
> >> > Mikhail
> >> >
> >> > 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> >> >> Hi,
> >> >> Working with tests I noticed that we are excluding some tests just
> >> >> because several tests from single TestCase fail.
> >> >>
> >> >> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
> >> >> tests and only 2 of them fails. But the build excludes the whole
> >> >> TestCase and we just miss testing of java.lang.String implementation.
> >> >>
> >> >> Do we really need to exclude TestCases in 'ant test' target?
> >> >>
> >> >> My suggestion is: do not exclude any tests until it crashes VM.
> >> >> If somebody needs a list of tests that always passed a separated
> >> >> target can be added to build.
> >> >>
> >> >> Do you think we should add target 'test-all' to the build?
> >> >>  Thanks, Vladimir
> >>
> >> --
> >>
> >> Tim Ellison (t.p.ellison@gmail.com)
> >> IBM Java technology centre, UK.
>
> --
> Richard Liang
> China Software Development Lab, IBM



-- 
Alexei Zakharov,
Intel Middleware Product Division



Re: [classlib][testing] excluding the failed tests

Posted by Richard Liang <ri...@gmail.com>.
Hello Vladimir,

+1 to option 3) . We shall comment the failed test cases out and add 
FIXME to remind us to diagnose the problems later. ;-)
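
In source that might look like the sketch below (the method name and JIRA id
are made up):

public class StringTest extends junit.framework.TestCase {

    public void test_substring() {
        // passing test, runs as usual
    }

    // FIXME: fails against the current java.lang.String implementation,
    // see HARMONY-XXXX; re-enable once the problem is diagnosed.
    // public void test_toUpperCase() {
    //     assertEquals("ABC", "abc".toUpperCase());
    // }
}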

Vladimir Ivanov wrote:
> I see your point.
> But I feel that we can miss regression in non-tested code if we exclude
> TestCases.
> Now, for example we miss testing of java.lang.Class/Process/Thread/String
> and some other classes.
>
> While we have failing tests and don't want to pay attention to these
> failures we can:
> 1) Leave things as is – do not run TestCases with failing tests.
> 2) Split passing/failing TestCase into separate "failing TestCase" and
> "passing TestCase" and exclude "failing TestCases". When test or
> implementation is fixed we move tests from failing TestCase to passing
> TestCase.
> 3) Comment failing tests in TestCases. It is better to run 58 tests
> instead of 0 for String.
> 4) Run all TestCases, then, compare test run results with the 'list of
> known failures' and see whether new failures appeared. This, I think, is
> better than 1, 2 and 3, but, overhead is that we support 2 lists - list
> of known failing tests and exclude list where we put crashing tests.
>
> Thanks, Vladimir
> On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>>
>> Mikhail Loenko wrote:
>> > Hi Vladimir,
>> >
>> > IMHO the tests are to verify that an update does not introduce any
>> > regression. So there are two options: remember which exactly tests may
>> > fail and remember that all tests must pass. I believe the latter one is
>> > a bit easier and safer.
>>
>> +1
>>
>> Tim
>>
>> > Thanks,
>> > Mikhail
>> >
>> > 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>> >> Hi,
>> >> Working with tests I noticed that we are excluding some tests just
>> >> because several tests from single TestCase fail.
>> >>
>> >> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>> >> tests and only 2 of them fails. But the build excludes the whole
>> >> TestCase and we just miss testing of java.lang.String implementation.
>> >>
>> >> Do we really need to exclude TestCases in 'ant test' target?
>> >>
>> >> My suggestion is: do not exclude any tests until it crashes VM.
>> >> If somebody needs a list of tests that always passed a separated
>> >> target can be added to build.
>> >>
>> >> Do you think we should add target 'test-all' to the build?
>> >>  Thanks, Vladimir
>>
>> --
>>
>> Tim Ellison (t.p.ellison@gmail.com)
>> IBM Java technology centre, UK.

-- 
Richard Liang
China Software Development Lab, IBM 





Re: [classlib][testing] excluding the failed tests

Posted by Vladimir Ivanov <iv...@gmail.com>.
I see your point.
But I feel that we can miss regression in non-tested code if we exclude
TestCases.
Now, for example we miss testing of java.lang.Class/Process/Thread/String
and some other classes.

While we have failing tests and don't want to pay attention to these
failures we can:
1) Leave things as is – do not run TestCases with failing tests.
2) Split passing/failing TestCase into separate "failing TestCase" and
"passing TestCase" and exclude "failing TestCases". When test or
implementation is fixed we move tests from failing TestCase to passing
TestCase.
3) Comment failing tests in TestCases. It is better to run 58 tests instead
of 0 for String.
4) Run all TestCases, then, compare test run results with the 'list of known
failures' and see whether new failures appeared. This, I think, is better
than 1, 2 and 3, but, overhead is that we support 2 lists - list of known
failing tests and exclude list where we put crashing tests. (A rough sketch
of this option follows just below.)
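
For illustration (the file name, its one-test-id-per-line format, and the
suite being run are assumptions, not an existing Harmony artifact):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;

import junit.framework.TestFailure;
import junit.framework.TestResult;

public class KnownFailuresCheck {
    public static void main(String[] args) throws IOException {
        // load the list of known failing tests, one id per line
        Set known = new HashSet();
        BufferedReader in = new BufferedReader(new FileReader("known-failures.txt"));
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            known.add(line.trim());
        }
        in.close();

        // run the full suite (StringTestSuite.suite() is a placeholder)
        TestResult result = junit.textui.TestRunner.run(StringTestSuite.suite());

        // collect the ids of everything that failed or errored in this run
        Set current = new HashSet();
        for (Enumeration e = result.failures(); e.hasMoreElements();) {
            current.add(((TestFailure) e.nextElement()).failedTest().toString());
        }
        for (Enumeration e = result.errors(); e.hasMoreElements();) {
            current.add(((TestFailure) e.nextElement()).failedTest().toString());
        }

        // only failures that are not on the known list break the run
        current.removeAll(known);
        if (!current.isEmpty()) {
            System.err.println("New failures: " + current);
            System.exit(1);
        }
    }
}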

 Thanks, Vladimir
On 6/26/06, Tim Ellison <t....@gmail.com> wrote:
>
> Mikhail Loenko wrote:
> > Hi Vladimir,
> >
> > IMHO the tests are to verify that an update does not introduce any
> > regression. So there are two options: remember which exactly tests may
> > fail
> > and remember that all tests must pass. I believe the latter one is a bit
> > easier and safer.
>
> +1
>
> Tim
>
> > Thanks,
> > Mikhail
> >
> > 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> >> Hi,
> >> Working with tests I noticed that we are excluding some tests just
> >> because
> >> several tests from single TestCase fail.
> >>
> >> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
> >> tests and
> >> only 2 of them fails. But the build excludes the whole TestCase and we
> >> just
> >> miss testing of java.lang.String implementation.
> >>
> >> Do we really need to exclude TestCases in 'ant test' target?
> >>
> >> My suggestion is: do not exclude any tests until it crashes VM.
> >> If somebody needs a list of tests that always passed a separated
> >> target can
> >> be added to build.
> >>
> >> Do you think we should add target 'test-all' to the build?
> >>  Thanks, Vladimir
> >>
> >>
>
> --
>
> Tim Ellison (t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>

Re: [classlib][testing] excluding the failed tests

Posted by Tim Ellison <t....@gmail.com>.
Mikhail Loenko wrote:
> Hi Vladimir,
> 
> IMHO the tests are to verify that an update does not introduce any
> regression. So there are two options: remember which exactly tests may fail
> and remember that all tests must pass. I believe the latter one is a bit
> easier and safer.

+1

Tim

> Thanks,
> Mikhail
> 
> 2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
>> Hi,
>> Working with tests I noticed that we are excluding some tests just
>> because
>> several tests from single TestCase fail.
>>
>> For example, the TestCase 'tests.api.java.lang.StringTest' has 60
>> tests and
>> only 2 of them fails. But the build excludes the whole TestCase and we
>> just
>> miss testing of java.lang.String implementation.
>>
>> Do we really need to exclude TestCases in 'ant test' target?
>>
>> My suggestion is: do not exclude any tests until it crashes VM.
>> If somebody needs a list of tests that always passed a separated
>> target can
>> be added to build.
>>
>> Do you think we should add target 'test-all' to the build?
>>  Thanks, Vladimir
>>
>>

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [classlib][testing] excluding the failed tests

Posted by Mikhail Loenko <ml...@gmail.com>.
Hi Vladimir,

IMHO the tests are to verify that an update does not introduce any
regression. So there are two options: remember which exactly tests may fail
and remember that all tests must pass. I believe the latter one is a bit
easier and safer.

Thanks,
Mikhail

2006/6/26, Vladimir Ivanov <iv...@gmail.com>:
> Hi,
> Working with tests I noticed that we are excluding some tests just because
> several tests from single TestCase fail.
>
> For example, the TestCase 'tests.api.java.lang.StringTest' has 60 tests and
> only 2 of them fails. But the build excludes the whole TestCase and we just
> miss testing of java.lang.String implementation.
>
> Do we really need to exclude TestCases in 'ant test' target?
>
> My suggestion is: do not exclude any tests until it crashes VM.
> If somebody needs a list of tests that always passed a separated target can
> be added to build.
>
> Do you think we should add target 'test-all' to the build?
>  Thanks, Vladimir
>
>
