Posted to dev@harmony.apache.org by Mark Hindess <ma...@googlemail.com> on 2008/10/09 16:55:28 UTC

Re: API tests failing on the RI

In message <c3...@mail.gmail.com>,
"Alexey Petrenko" writes:
>
> 2008/9/30 Mark Hindess <ma...@googlemail.com>:
> >
> > In order to fix this bug I had to fix a number of invalid API tests.
> > I think it would be a good idea to:
> >
> > 1) Run the API tests against the RI
> >
> > 2) Create exclude lists - with references to the relevant JIRA - for
> > non-bug differences so the tests can be regularly run on the RI and
> > expected to pass cleanly
> >
> > 3) Fix the non-non-bug (!) differences.
>
> This job really needs to be done...

I had a quick look at how much work this might be, but immediately hit an
issue that I think is best discussed first.  The luni test in:

  api/common/org/apache/harmony/luni/tests/java/io/FileTest.java

has 52 asserts.  One (on line 2153) fails on the RI (because of a fix in
HARMONY-3207 for which no non-bug difference jira was created AFAIK).

Does the exclude list need to exclude the entire test - which would seem
to be a waste of potentially useful tests?  Or is there a better way
with junit 4?  Or do we just start splitting out tests into separate
source files - like FileNonBugDifferenceTest.java - for reference in
exclude lists?
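One alternative to class-level splitting would be an exclude list at method granularity, so the other 51 asserts still run. As a minimal, self-contained sketch of the idea (no JUnit dependency; the method names and the mapping to HARMONY-3207 are hypothetical, for illustration only):

```java
import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.Set;

// Sketch of method-level exclusion: entries name individual test
// methods, so excluding one failing method keeps the rest of the
// class running.  Method names here are hypothetical.
public class Main {

    // Exclude-list entries, "Class#method", each tied to a JIRA.
    static final Set<String> EXCLUDED = new HashSet<String>();
    static {
        EXCLUDED.add("FileTest#test_getCanonicalPath"); // e.g. HARMONY-3207
    }

    public static class FileTest {
        public void test_exists() { /* valid on Harmony and the RI */ }
        public void test_getCanonicalPath() { /* non-bug difference */ }
    }

    public static void main(String[] args) throws Exception {
        FileTest suite = new FileTest();
        for (Method m : FileTest.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            String key = "FileTest#" + m.getName();
            if (EXCLUDED.contains(key)) {
                System.out.println("SKIP " + key);
            } else {
                m.invoke(suite);   // run the test; a real runner would
                System.out.println("PASS " + key);  // report failures too
            }
        }
    }
}
```

A real runner would of course sit on top of JUnit rather than raw reflection, but the granularity question is the same either way.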

I know we've discussed this many times before (along with repeatedly
discounting testng) but I'd like to resolve this once and for all so we
can use the tests to their full potential.

This is a concrete example.  How should we resolve this?

I should stress that I have no strong opinion about testng or junit, but
I do have a strong opinion about the need to understand the differences
between the behaviour of our code and the RI particularly given the
continuing absence of a TCK.  To me this means running as many tests as
possible on the RI to confirm that the tests are valid and documenting
(close to the code if possible) or fixing every case where our behaviour
doesn't match the RI.

Regards,
 Mark.



Re: API tests failing on the RI

Posted by Alexei Fedotov <al...@gmail.com>.
I would not vote for method-level exclusion, because it would increase
the number of objects to manage. I like what Sian says. Perhaps I would
prefer keeping the whole name of the class under test in the test class
name, e.g. "NonBugDifferencesFileTest.java", because I believe
inaccuracies tend to cluster at the class level; and I would use a
shorter name prefix, e.g. "NonSpec", "Ambig" or "Loose".

Thanks for an interesting discussion!


On Fri, Oct 10, 2008 at 1:44 PM, Sian January
<si...@googlemail.com> wrote:
> I think it would be good to have a way of running
>
> 1. All API tests
> 2. All API tests except non-bug differences (TCK-style)
>
> on any VM.
>
> So where one method tests both common behaviour and non-bug
> differences it should be split in two so that we don't lose the tests
> for common behaviour when running the TCK-style suite.  I don't really
> have a strong opinion about whether we then identify non-bug
> difference test cases by annotations or by moving to a different class
> or otherwise, but HARMONY-263 looks like a huge amount of XML and a
> lot of work to implement for the whole of Harmony compared to doing
> something like having one "NonBugDifferencesTest.java" class per
> module.
>
> I also think there's another advantage in clearly stating which test
> cases are for non-bug differences, which is that if someone makes a
> change that causes one of these tests to fail it's easy to find the
> corresponding JIRA or mailing list discussion, rather than just assume
> the test is wrong  (and vice-versa we may currently have some tests
> that are wrong that we're assuming are non-bug differences).
>
>
> 2008/10/10 Regis <xu...@gmail.com>:
>>
>> Mark Hindess wrote:
>>>
>>> In message <c3...@mail.gmail.com>,
>>> "Alexey Petrenko" writes:
>>>>
>>>> 2008/9/30 Mark Hindess <ma...@googlemail.com>:
>>>>>
>>>>> In order to fix this bug I had to fix a number of invalid API tests.
>>>>> I think it would be a good idea to:
>>>>>
>>>>> 1) Run the API tests against the RI
>>>>>
>>>>> 2) Create exclude lists - with references to the relevant JIRA - for
>>>>> non-bug differences so the tests can be regularly run on the RI and
>>>>> expected to pass cleanly
>>>>>
>>>>> 3) Fix the non-non-bug (!) differences.
>>>>
>>>> This job really needs to be done...
>>>
>>> I had a quick look at how much work this might be, but immediately hit an
>>> issue that I think is best discussed first.  The luni test in:
>>>
>>>  api/common/org/apache/harmony/luni/tests/java/io/FileTest.java
>>>
>>> has 52 asserts.  One (on line 2153) fails on the RI (because of a fix in
>>> HARMONY-3207 for which no non-bug difference jira was created AFAIK).
>>>
>>> Does the exclude list need to exclude the entire test - which would seem
>>
>> FileTest can pass on Harmony; I have always thought of the exclude list as
>> a place to hold only the tests that break on Harmony.
>>>
>>> to be a waste of potentially useful tests?  Or is there a better way
>>> with junit 4?  Or do we just start splitting out tests into separate
>>> source files - like FileNonBugDifferenceTest.java - for reference in
>>> exclude lists?
>>
>> I agree it wastes useful tests, and I don't think splitting out tests is
>> the best way. FileTest is just a simple example; as requirements become
>> more complex we would have to split out more test files, which may
>> overlap, making them hard to manage and maintain. I think we need a way
>> to group and control tests at the method level. I found [1], which
>> extends JUnit to support excluding tests at the method level via XML;
>> it may be helpful.
>> [1] https://issues.apache.org/jira/browse/HARMONY-263
>>>
>>> I know we've discussed this many times before (along with repeatedly
>>> discounting testng) but I'd like to resolve this once and for all so we
>>> can use the tests to their full potential.
>>>
>>> This is a concrete example.  How should we resolve this?
>>>
>>> I should stress that I have no strong opinion about testng or junit, but
>>> I do have a strong opinion about the need to understand the differences
>>> between the behaviour of our code and the RI particularly given the
>>> continuing absence of a TCK.  To me this means running as many tests as
>>> possible on the RI to confirm that the tests are valid and documenting
>>> (close to the code if possible) or fixing every case where our behaviour
>>> doesn't match the RI.
>>>
>>> Regards,
>>>  Mark.
>>>
>>>
>>>
>>
>> Best Regards,
>> Regis.
>>
>
>
>
> --
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>



-- 
With best regards,
Alexei

Re: API tests failing on the RI

Posted by Sian January <si...@googlemail.com>.
I think it would be good to have a way of running

1. All API tests
2. All API tests except non-bug differences (TCK-style)

on any VM.

So where one method tests both common behaviour and non-bug
differences it should be split in two so that we don't lose the tests
for common behaviour when running the TCK-style suite.  I don't really
have a strong opinion about whether we then identify non-bug
difference test cases by annotations or by moving to a different class
or otherwise, but HARMONY-263 looks like a huge amount of XML and a
lot of work to implement for the whole of Harmony compared to doing
something like having one "NonBugDifferencesTest.java" class per
module.

I also think there's another advantage in clearly stating which test
cases are for non-bug differences, which is that if someone makes a
change that causes one of these tests to fail it's easy to find the
corresponding JIRA or mailing list discussion, rather than just assume
the test is wrong  (and vice-versa we may currently have some tests
that are wrong that we're assuming are non-bug differences).
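The two run modes could hinge on a simple method-level marker. A self-contained sketch of the annotation option, assuming a hypothetical @NonBugDifference annotation (not an existing Harmony or JUnit annotation) that carries the JIRA id so the TCK-style run can skip those methods while the full run keeps them:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical marker annotation for non-bug-difference tests: the
// JIRA id travels with the test method, so a failure is easy to trace.
public class Main {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface NonBugDifference {
        String jira();
    }

    public static class FileTest {
        public void test_exists() { /* common behaviour */ }

        @NonBugDifference(jira = "HARMONY-3207")
        public void test_getCanonicalPath() { /* differs on the RI */ }
    }

    public static void main(String[] args) throws Exception {
        boolean tckStyle = true; // mode 2: skip non-bug differences
        FileTest suite = new FileTest();
        for (Method m : FileTest.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            NonBugDifference nbd = m.getAnnotation(NonBugDifference.class);
            if (tckStyle && nbd != null) {
                System.out.println("SKIP " + m.getName()
                        + " (" + nbd.jira() + ")");
            } else {
                m.invoke(suite);
                System.out.println("PASS " + m.getName());
            }
        }
    }
}
```

With JUnit 4 the same marker could be wired into a custom runner, so no per-module class split and no XML are needed.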


2008/10/10 Regis <xu...@gmail.com>:
>
> Mark Hindess wrote:
>>
>> In message <c3...@mail.gmail.com>,
>> "Alexey Petrenko" writes:
>>>
>>> 2008/9/30 Mark Hindess <ma...@googlemail.com>:
>>>>
>>>> In order to fix this bug I had to fix a number of invalid API tests.
>>>> I think it would be a good idea to:
>>>>
>>>> 1) Run the API tests against the RI
>>>>
>>>> 2) Create exclude lists - with references to the relevant JIRA - for
>>>> non-bug differences so the tests can be regularly run on the RI and
>>>> expected to pass cleanly
>>>>
>>>> 3) Fix the non-non-bug (!) differences.
>>>
>>> This job really needs to be done...
>>
>> I had a quick look at how much work this might be, but immediately hit an
>> issue that I think is best discussed first.  The luni test in:
>>
>>  api/common/org/apache/harmony/luni/tests/java/io/FileTest.java
>>
>> has 52 asserts.  One (on line 2153) fails on the RI (because of a fix in
>> HARMONY-3207 for which no non-bug difference jira was created AFAIK).
>>
>> Does the exclude list need to exclude the entire test - which would seem
>
> FileTest can pass on Harmony; I have always thought of the exclude list as
> a place to hold only the tests that break on Harmony.
>>
>> to be a waste of potentially useful tests?  Or is there a better way
>> with junit 4?  Or do we just start splitting out tests into separate
>> source files - like FileNonBugDifferenceTest.java - for reference in
>> exclude lists?
>
> I agree it wastes useful tests, and I don't think splitting out tests is
> the best way. FileTest is just a simple example; as requirements become
> more complex we would have to split out more test files, which may
> overlap, making them hard to manage and maintain. I think we need a way
> to group and control tests at the method level. I found [1], which
> extends JUnit to support excluding tests at the method level via XML;
> it may be helpful.
> [1] https://issues.apache.org/jira/browse/HARMONY-263
>>
>> I know we've discussed this many times before (along with repeatedly
>> discounting testng) but I'd like to resolve this once and for all so we
>> can use the tests to their full potential.
>>
>> This is a concrete example.  How should we resolve this?
>>
>> I should stress that I have no strong opinion about testng or junit, but
>> I do have a strong opinion about the need to understand the differences
>> between the behaviour of our code and the RI particularly given the
>> continuing absence of a TCK.  To me this means running as many tests as
>> possible on the RI to confirm that the tests are valid and documenting
>> (close to the code if possible) or fixing every case where our behaviour
>> doesn't match the RI.
>>
>> Regards,
>>  Mark.
>>
>>
>>
>
> Best Regards,
> Regis.
>



-- 
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

Re: API tests failing on the RI

Posted by Regis <xu...@gmail.com>.
Mark Hindess wrote:
> In message <c3...@mail.gmail.com>,
> "Alexey Petrenko" writes:
>> 2008/9/30 Mark Hindess <ma...@googlemail.com>:
>>> In order to fix this bug I had to fix a number of invalid API tests.
>>> I think it would be a good idea to:
>>>
>>> 1) Run the API tests against the RI
>>>
>>> 2) Create exclude lists - with references to the relevant JIRA - for
>>> non-bug differences so the tests can be regularly run on the RI and
>>> expected to pass cleanly
>>>
>>> 3) Fix the non-non-bug (!) differences.
>> This job really needs to be done...
> 
> I had a quick look at how much work this might be, but immediately hit an
> issue that I think is best discussed first.  The luni test in:
> 
>   api/common/org/apache/harmony/luni/tests/java/io/FileTest.java
> 
> has 52 asserts.  One (on line 2153) fails on the RI (because of a fix in
> HARMONY-3207 for which no non-bug difference jira was created AFAIK).
> 
> Does the exclude list need to exclude the entire test - which would seem
FileTest can pass on Harmony; I have always thought of the exclude list
as a place to hold only the tests that break on Harmony.
> to be a waste of potentially useful tests?  Or is there a better way
> with junit 4?  Or do we just start splitting out tests into separate
> source files - like FileNonBugDifferenceTest.java - for reference in
> exclude lists?
I agree it wastes useful tests, and I don't think splitting out tests
is the best way. FileTest is just a simple example; as requirements
become more complex we would have to split out more test files, which
may overlap, making them hard to manage and maintain. I think we need
a way to group and control tests at the method level. I found [1],
which extends JUnit to support excluding tests at the method level via
XML; it may be helpful.
[1] https://issues.apache.org/jira/browse/HARMONY-263
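Such a method-level XML exclude entry might look like this (the element
and attribute names here are illustrative only, not the actual
HARMONY-263 schema):

```xml
<!-- Hypothetical method-level exclude entry; illustrative only. -->
<exclude-list>
  <test class="org.apache.harmony.luni.tests.java.io.FileTest">
    <method name="test_getCanonicalPath"
            reason="non-bug difference, see HARMONY-3207"/>
  </test>
</exclude-list>
```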
> 
> I know we've discussed this many times before (along with repeatedly
> discounting testng) but I'd like to resolve this once and for all so we
> can use the tests to their full potential.
> 
> This is a concrete example.  How should we resolve this?
> 
> I should stress that I have no strong opinion about testng or junit, but
> I do have a strong opinion about the need to understand the differences
> between the behaviour of our code and the RI particularly given the
> continuing absence of a TCK.  To me this means running as many tests as
> possible on the RI to confirm that the tests are valid and documenting
> (close to the code if possible) or fixing every case where our behaviour
> doesn't match the RI.
> 
> Regards,
>  Mark.
> 
> 
> 

Best Regards,
Regis.

Re: API tests failing on the RI

Posted by Nathan Beyer <nd...@apache.org>.
Perhaps we should have an isolated TCK-like test set, which can be run
without any special setup or build - just a JRE. Perhaps this would
create some dual-maintenance, but I agree with Mark's sentiment and
concern, and I think some amount of sacrifice may be necessary. A
couple of benefits of this isolation: it would eliminate the complex
per-project API/impl test split and make everything impl tests, and
failing tests would never be excluded - they are either correct or
not, so a failure means a Harmony bug.

-Nathan

On Thu, Oct 9, 2008 at 9:55 AM, Mark Hindess
<ma...@googlemail.com> wrote:
>
> In message <c3...@mail.gmail.com>,
> "Alexey Petrenko" writes:
>>
>> 2008/9/30 Mark Hindess <ma...@googlemail.com>:
>> >
>> > In order to fix this bug I had to fix a number of invalid API tests.
>> > I think it would be a good idea to:
>> >
>> > 1) Run the API tests against the RI
>> >
>> > 2) Create exclude lists - with references to the relevant JIRA - for
>> > non-bug differences so the tests can be regularly run on the RI and
>> > expected to pass cleanly
>> >
>> > 3) Fix the non-non-bug (!) differences.
>>
>> This job really needs to be done...
>
> I had a quick look at how much work this might be, but immediately hit an
> issue that I think is best discussed first.  The luni test in:
>
>  api/common/org/apache/harmony/luni/tests/java/io/FileTest.java
>
> has 52 asserts.  One (on line 2153) fails on the RI (because of a fix in
> HARMONY-3207 for which no non-bug difference jira was created AFAIK).
>
> Does the exclude list need to exclude the entire test - which would seem
> to be a waste of potentially useful tests?  Or is there a better way
> with junit 4?  Or do we just start splitting out tests into separate
> source files - like FileNonBugDifferenceTest.java - for reference in
> exclude lists?
>
> I know we've discussed this many times before (along with repeatedly
> discounting testng) but I'd like to resolve this once and for all so we
> can use the tests to their full potential.
>
> This is a concrete example.  How should we resolve this?
>
> I should stress that I have no strong opinion about testng or junit, but
> I do have a strong opinion about the need to understand the differences
> between the behaviour of our code and the RI particularly given the
> continuing absence of a TCK.  To me this means running as many tests as
> possible on the RI to confirm that the tests are valid and documenting
> (close to the code if possible) or fixing every case where our behaviour
> doesn't match the RI.
>
> Regards,
>  Mark.
>
>
>