Posted to dev@harmony.apache.org by Nathan Beyer <nb...@kc.rr.com> on 2006/07/11 03:01:22 UTC

[classlib] TestNG v. JUnit (was: RE: [classlib] Testing conventions - a proposal)

Not to add another fire to this topic, but with all things being relative,
so far this topic has been a comparison of TestNG and JUnit v3.8. From
what I understand, the latest JUnit v4.1 provides many of the same
annotation features that TestNG does, as well as guaranteed compatibility with
JUnit v3-based tests.

If we were to compare moving to TestNG with upgrading to JUnit 4.1, would
there still be as much value in the proposition to move to TestNG?

-Nathan
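[Editorial sketch] Since the thread keeps returning to what annotations actually buy a test harness, here is a minimal, self-contained sketch of annotation-driven test discovery. The @Test annotation and all class/method names below are local stand-ins invented for illustration, not the real org.junit or org.testng types, so the sketch compiles and runs without either library:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationDiscoverySketch {

    // Stand-in for JUnit 4's @Test / TestNG's @Test; declared locally so the
    // sketch has no dependency on either framework.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test { }

    // A test class that neither extends junit.framework.TestCase nor names
    // its methods with a 'test' prefix -- the runner finds them by annotation.
    public static class StringChecks {
        @Test public void emptyStringHasLengthZero() {
            if ("".length() != 0) throw new AssertionError();
        }
        @Test public void concatenationWorks() {
            if (!"ab".equals("a" + "b")) throw new AssertionError();
        }
        public void helperNotATest() { /* ignored: no @Test annotation */ }
    }

    // Minimal runner: reflectively invoke every @Test method, return the count.
    public static int runTests(Class<?> testClass) throws Exception {
        Object instance = testClass.getDeclaredConstructor().newInstance();
        int run = 0;
        for (Method m : testClass.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(instance);
                run++;
            }
        }
        return run;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTests(StringChecks.class)); // prints 2
    }
}
```

Both JUnit 4.1 and TestNG work on this principle; the question in the thread is only which framework's surrounding features (groups, migration path, Java 1.4 support) matter more.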

> -----Original Message-----
> From: George Harley [mailto:george.c.harley@googlemail.com]
> Sent: Monday, July 10, 2006 3:57 PM
> To: harmony-dev@incubator.apache.org
> Subject: Re: [classlib] Testing conventions - a proposal
> 
> Alexei Zakharov wrote:
> > Hi George,
> >
> >> For the purposes of this discussion it would be fascinating to find out
> >> why you refer to TestNG as being an "unstable" test harness. What is
> >> that statement based on ?
> >
> > My exact statement was referring to TestNG as "probably unstable"
> > rather than simply "unstable". ;)  This statement was based on posts
> > from Richard Liang about the bug in the TestNG migration tool and on
> > common sense. If the project has such an obvious bug in one place it
> > may well have other bugs in other places. JUnit is a quite famous
> > and widely used toolkit that has proved to be stable enough. TestNG is
> > neither famous nor widely used. And IMHO it makes sense to be careful
> > with new exciting tools until we *really* need their innovative
> > functionality.
> >
> 
> Hi Alexei,
> 
> Last I heard, Richard posted saying that there was no bug in the
> migration tool [1]. The command line tool is designed to locate JUnit
> tests under a specified location and add the TestNG annotations to them.
> That's what it does.
> 
> You are right to say that it makes sense to be careful in this matter.
> Nobody wants to do anything that affects Harmony in an adverse way.
> 
> Best regards,
> George
> 
> [1]
> http://mail-archives.apache.org/mod_mbox/incubator-harmony-
> dev/200607.mbox/%3c44B1C084.3020408@gmail.com%3e
> 
> 
> >
> > 2006/7/10, George Harley <ge...@googlemail.com>:
> >> Alexei Zakharov wrote:
> >> >> Actually, there's a very valid benefit for using TestNG markers (=
> >> >> annotations/JavaDoc) for grouping tests; the directory structure is a
> >> >> tree, whereas the markers can form any slice of tests, and the sets
> >> >
> >> > Concerning TestNG vs JUnit: I'd just like to draw your attention to the
> >> > fact that it is possible to achieve the same level of test
> >> > grouping/slicing with JUnit TestSuites. You may define any number of
> >> > intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
> >> > XXXWinSpecificSuite or whatever. Without the necessity of migrating to
> >> > a new (probably unstable) test harness.
> >> > Just my two cents.
> >> >
> >> >
> >>
> >> Hi Alexei,
> >>
> >> You are quite correct that JUnit test suites are another alternative
> >> here. If I recall correctly, their use was discussed in the very early
> >> days of this project but it came to nothing and we instead went down
> the
> >> route of using exclusion filters in the Ant JUnit task. That approach
> >> does not offer much in the way of fine grain control and relies on us
> >> pushing stuff around the repository. Hence the kicking off of this
> >> thread.
> >>
> >> For the purposes of this discussion it would be fascinating to find out
> >> why you refer to TestNG as being an "unstable" test harness. What is
> >> that statement based on ?
> >>
> >> Best regards,
> >> George
> >>
> >>
> >> > 2006/7/8, Alex Blewitt <al...@gmail.com>:
> >> >> On 08/07/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
> >> >> >
> >> >> > So while I like the annotations, and expect we can use them
> >> >> > effectively, I have an instinctive skepticism of annotations right
> >> >> > now because in general (in general in Java), I'm not convinced
> >> >> > we've used them enough to grok good design patterns.
> >> >>
> >> >> There's really no reason to get hung up on the annotations. TestNG
> >> >> works just as well with JavaDoc source comments; annotations are only
> >> >> another means to that end. (They're probably a better one for the
> >> >> future, but it's just an implementation detail.)
> >> >>
> >> >> > Now since I still haven't read the thread fully, I'm jumping to
> >> >> > conclusions, taking it to the extreme, etc etc, but my thinking in
> >> >> > writing the above is that if we bury everything about our test
> >> >> > 'parameter space' in annotations, some of the visible organization we
> >> >> > have now w/ on-disk layout becomes invisible, and the readable
> >> >> > "summaries" of aspects of testing that we'd have in an XML metadata
> >> >> > document (or whatever) also are hard because you need to scan the
> >> >> > sources to find all instances of annotation "X".
> >> >>
> >> >> I'm hoping that this would be just as applicable to using JavaDoc
> >> >> variants, and that the problem's not with annotations per se.
> >> >>
> >> >> In either case, both are grokkable with tools -- either
> >> >> annotation-savvy readers or a JavaDoc tag processor, and it wouldn't be
> >> >> hard to configure one of those to periodically scan the codebase to
> >> >> generate reports. Furthermore, as long as the annotation X is well
> >> >> defined, *you* don't have to scan it -- you leave it up to TestNG to
> >> >> figure it out.
> >> >>
> >> >> Actually, there's a very valid benefit for using TestNG markers (=
> >> >> annotations/JavaDoc) for grouping tests; the directory structure is a
> >> >> tree, whereas the markers can form any slice of tests, and the sets
> >> >> don't need to be strict subsets (with a tree, everything has to be a
> >> >> strict subset of its parents). That means that it's possible to define
> >> >> a marker IO to run all the IO tests, or a marker Win32 to run all the
> >> >> Win32 tests, and both of those will contain IO-specific Win32 tests.
> >> >> You can't do that in a tree structure without duplicating content
> >> >> somewhere along the line (e.g. /win/io or /io/win). Neither of these
> >> >> scales well, and every time you add a new dimension, you're doubling
> >> >> the structure of the directory, but merely adding a new marker with
> >> >> TestNG. So if you wanted to have (say) boot classpath tests vs api
> >> >> tests, then you'd have to have /api/win/io and /boot/win/io (or
> >> >> various permutations as applicable).
> >> >>
> >> >> Most of the directory-based arguments seem to be along the lines of
> >> >> "/api/win/io is better! No, /win/io/api is better!". Just have an
> >> >> 'api', 'win', 'io' TestNG marker, and then let TestNG figure out which
> >> >> ones to run. You can then even get specific, and only run the Windows
> >> >> IO API tests, if you really want -- but if you don't, you get the
> >> >> benefit of being able to run all IO tests (both API and boot).
> >> >>
> >> >> There doesn't seem to be any benefit to having a strict tree-like
> >> >> structure to the tests when it's possible to have a multi-dimensional
> >> >> matrix of all possible combinations that's managed by the tool.
> >> >>
> >> >> Alex.
> >
> >
> 
> 
> ---------------------------------------------------------------------
> Terms of use : http://incubator.apache.org/harmony/mailing.html
> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> For additional commands, e-mail: harmony-dev-help@incubator.apache.org




Re: [classlib] TestNG v. JUnit

Posted by George Harley <ge...@googlemail.com>.
Hi Richard,

Thank you for that great summary. Another thing that may be of interest 
to people is that TestNG has been around for a couple of years now, while 
JUnit 4 was released approximately four months ago.

Best regards,
George



Richard Liang wrote:
>
>
> Nathan Beyer wrote:
>> Not to add another fire to this topic, but with all things being relative,
>> so far this topic has been a comparison of TestNG and JUnit v3.8. From
>> what I understand, the latest JUnit v4.1 provides many of the same
>> annotation features that TestNG does, as well as guaranteed
>> compatibility with JUnit v3-based tests.
>> If we were to compare moving to TestNG with upgrading to JUnit 4.1, 
>> would
>> there still be as much value in the proposition to move to TestNG?
>>
>>   
> It's hard to give an exhaustive comparison of JUnit4 and TestNG. There 
> is an existing presentation "Comparison TestNG / JUnit 4"[1]; however, 
> it's in German. Not sure if there are German speakers here. ;-)
> I'll just try to list some items (correct me if I'm wrong):
>
> Both JUnit4 and TestNG:
> 1) Test classes do not have to extend junit.framework.TestCase.
> 2) Test methods do not have to be prefixed with 'test'.
> 3) Use @Test annotations to mark a method as a test case.
> 4) Use @Before and @After annotations to identify set up and tear 
> down. (TestNG uses @Configuration.beforeTestMethod and 
> @Configuration.afterTestMethod )
> 5) Use @BeforeClass and @AfterClass annotations to identify one time 
> set up and one time tear down. (TestNG uses 
> @Configuration.beforeTestClass and @Configuration.afterTestClass )
> 6) @Test.timeout to specify the maximum time to execute
> 7) @Test.expected to specify the expected exception to be thrown 
> (TestNG uses @ExpectedExceptions)
> 8) Can execute JUnit 3.8 test cases.
>
> *Differences*:
> 1) JUnit4 requires Java 5.0, while TestNG can work with Java 1.4 and 
> Java 5.0.
> 2) TestNG provides more annotations to facilitate testing 
> configuration[2]
> 3) TestNG "groups" are more sophisticated than JUnit test suites[3]
> 4) TestNG makes it easy to rerun failed tests[4]
> ....
>
> 1. http://www.qaware.de/downloads/to1-adersberger.pdf
> 2. http://testng.org/doc/documentation-main.html#annotations
> 3. http://testng.org/doc/documentation-main.html#test-groups
> 4. http://testng.org/doc/documentation-main.html#rerunning
>
> Best regards,
> Richard.
>
>> -Nathan
>>
>>  
>>> -----Original Message-----
>>> From: George Harley [mailto:george.c.harley@googlemail.com]
>>> Sent: Monday, July 10, 2006 3:57 PM
>>> To: harmony-dev@incubator.apache.org
>>> Subject: Re: [classlib] Testing conventions - a proposal
>>>
>>> Alexei Zakharov wrote:
>>>    
>>>> Hi George,
>>>>
>>>>      
>>>>> For the purposes of this discussion it would be fascinating to 
>>>>> find out
>>>>> why you refer to TestNG as being an "unstable" test harness. What is
>>>>> that statement based on ?
>>>>>         
>>>> My exact statement was referring to TestNG as "probably unstable"
>>>> rather than simply "unstable". ;)  This statement was based on posts
>>>> from Richard Liang about the bug in the TestNG migration tool and on
>>>> common sense. If the project has such an obvious bug in one place it
>>>> may well have other bugs in other places. JUnit is a quite famous
>>>> and widely used toolkit that has proved to be stable enough. TestNG is
>>>> neither famous nor widely used. And IMHO it makes sense to be careful
>>>> with new exciting tools until we *really* need their innovative
>>>> functionality.
>>>>
>>>>       
>>> Hi Alexei,
>>>
>>> Last I heard, Richard posted saying that there was no bug in the
>>> migration tool [1]. The command line tool is designed to locate JUnit
>>> tests under a specified location and add the TestNG annotations to 
>>> them.
>>> That's what it does.
>>>
>>> You are right to say that it makes sense to be careful in this matter.
>>> Nobody wants to do anything that affects Harmony in an adverse way.
>>>
>>> Best regards,
>>> George
>>>
>>> [1]
>>> http://mail-archives.apache.org/mod_mbox/incubator-harmony-
>>> dev/200607.mbox/%3c44B1C084.3020408@gmail.com%3e
>>>
>>>
>>>    
>>>> 2006/7/10, George Harley <ge...@googlemail.com>:
>>>>      
>>>>> Alexei Zakharov wrote:
>>>>>        
>>>>>>> Actually, there's a very valid benefit for using TestNG markers (=
>>>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
>>>>>>>             
>>> a
>>>    
>>>>>>> tree, whereas the markers can form any slice of tests, and the sets
>>>>>>>             
>>>>>> Concerning TestNG vs JUnit: I'd just like to draw your attention to the
>>>>>> fact that it is possible to achieve the same level of test
>>>>>> grouping/slicing with JUnit TestSuites. You may define any number of
>>>>>> intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
>>>>>> XXXWinSpecificSuite or whatever. Without the necessity of migrating to
>>>>>> a new (probably unstable) test harness.
>>>>>> Just my two cents.
>>>>>>
>>>>>>
>>>>>>           
>>>>> Hi Alexei,
>>>>>
>>>>> You are quite correct that JUnit test suites are another alternative
>>>>> here. If I recall correctly, their use was discussed in the very 
>>>>> early
>>>>> days of this project but it came to nothing and we instead went down
>>>>>         
>>> the
>>>    
>>>>> route of using exclusion filters in the Ant JUnit task. That approach
>>>>> does not offer much in the way of fine grain control and relies on us
>>>>> pushing stuff around the repository. Hence the kicking off of this
>>>>> thread.
>>>>>
>>>>> For the purposes of this discussion it would be fascinating to 
>>>>> find out
>>>>> why you refer to TestNG as being an "unstable" test harness. What is
>>>>> that statement based on ?
>>>>>
>>>>> Best regards,
>>>>> George
>>>>>
>>>>>
>>>>>        
>>>>>> 2006/7/8, Alex Blewitt <al...@gmail.com>:
>>>>>>          
>>>>>>> On 08/07/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>>>>>>>            
>>>>>>>> So while I like the annotations, and expect we can use them
>>>>>>>>               
>>>>>>> effectively,
>>>>>>>            
>>>>>>>> I have an instinctive skepticism of annotations right now
>>>>>>>>               
>>>>> because in
>>>>>        
>>>>>>>> general (in general in Java), I'm not convinced we've used them
>>>>>>>>               
>>>>> enough
>>>>>        
>>>>>>>> to grok good design patterns.
>>>>>>>>               
>>>>>>> There's really no reason to get hung up on the annotations. TestNG
>>>>>>> works just as well with JavaDoc source comments; annotations are
>>>>>>>             
>>> only
>>>    
>>>>>>> another means to that end. (They're probably a better one for the
>>>>>>> future, but it's just an implementation detail.)
>>>>>>>
>>>>>>>            
>>>>>>>> Now since I still haven't read the thread fully, I'm jumping to
>>>>>>>> conclusions, taking it to the extreme, etc etc, but my thinking in
>>>>>>>> writing the above is that if we bury everything about our test
>>>>>>>> 'parameter space' in annotations, some of the visible
>>>>>>>>               
>>>>> organization we
>>>>>        
>>>>>>>> have now w/ on-disk layout becomes invisible, and the readable
>>>>>>>> "summaries" of aspects of testing that we'd have in an XML
>>>>>>>>               
>>> metadata
>>>    
>>>>>>>> document (or whatever) also are hard because you need to scan the
>>>>>>>> sources to find all instances of annotation "X".
>>>>>>>>               
>>>>>>> I'm hoping that this would be just as applicable to using JavaDoc
>>>>>>> variants, and that the problem's not with annotations per se.
>>>>>>>
>>>>>>> In either case, both are grokkable with tools -- either
>>>>>>> annotation-savvy readers or a JavaDoc tag processor, and it
>>>>>>>             
>>>>> wouldn't be
>>>>>        
>>>>>>> hard to configure one of those to periodically scan the codebase to
>>>>>>> generate reports. Furthermore, as long as the annotation X is well
>>>>>>> defined, *you* don't have to scan it -- you leave it up to 
>>>>>>> TestNG to
>>>>>>> figure it out.
>>>>>>>
>>>>>>> Actually, there's a very valid benefit for using TestNG markers (=
>>>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
>>>>>>>             
>>> a
>>>    
>>>>>>> tree, whereas the markers can form any slice of tests, and the sets
>>>>>>> don't need to be strict subsets (with a tree, everything has to 
>>>>>>> be a
>>>>>>> strict subset of its parents). That means that it's possible to
>>>>>>>             
>>>>> define
>>>>>        
>>>>>>> a marker IO to run all the IO tests, or a marker Win32 to run all
>>>>>>>             
>>> the
>>>    
>>>>>>> Win32 tests, and both of those will contain IO-specific Win32 
>>>>>>> tests.
>>>>>>> You can't do that in a tree structure without duplicating content
>>>>>>> somewhere along the line (e.g. /win/io or /io/win). Neither of 
>>>>>>> these
>>>>>>> scale well, and every time you add a new dimension, you're doubling
>>>>>>> the structure of the directory, but merely adding a new marker with
>>>>>>> TestNG. So if you wanted to have (say) boot classpath tests vs api
>>>>>>> tests, then you'd have to have /api/win/io and /boot/win/io (or
>>>>>>> various permutations as applicable).
>>>>>>>
>>>>>>> Most of the directory-based arguments seem to be along the lines of
>>>>>>> "/api/win/io is better! No, /win/io/api is better!". Just have an
>>>>>>> 'api', 'win', 'io' TestNG marker, and then let TestNG figure out
>>>>>>>             
>>>>> which
>>>>>        
>>>>>>> ones to run. You can then even get specific, and only run the
>>>>>>>             
>>> Windows
>>>    
>>>>>>> IO API tests, if you really want -- but if you don't, you get the
>>>>>>> benefit of being able to run all IO tests (both API and boot).
>>>>>>>
>>>>>>> There doesn't seem to be any benefit to having a strict tree-like
>>>>>>> structure to the tests when it's possible to have a multi-
>>>>>>>             
>>> dimensional
>>>    
>>>>>>> matrix of all possible combinations that's managed by the tool.
>>>>>>>
>>>>>>> Alex.
>>>>>>>             
>>
>>
>>
>>
>>   
>




Re: [classlib] TestNG v. JUnit

Posted by Andrew Zhang <zh...@gmail.com>.
Whoa, good summaries!

On 7/11/06, Richard Liang <ri...@gmail.com> wrote:
>
>
>
> Nathan Beyer wrote:
> > Not to add another fire to this topic, but with all things being relative,
> > so far this topic has been a comparison of TestNG and JUnit v3.8. From
> > what I understand, the latest JUnit v4.1 provides many of the same
> > annotation features that TestNG does, as well as guaranteed compatibility
> > with JUnit v3-based tests.
> >
> > If we were to compare moving to TestNG with upgrading to JUnit 4.1, would
> > there still be as much value in the proposition to move to TestNG?
> >
> >
> It's hard to give an exhaustive comparison of JUnit4 and TestNG. There
> is an existing presentation "Comparison TestNG / JUnit 4"[1]; however,
> it's in German. Not sure if there are German speakers here. ;-)
>
> I'll just try to list some items (correct me if I'm wrong):
>
> Both JUnit4 and TestNG:
> 1) Test classes do not have to extend junit.framework.TestCase.
> 2) Test methods do not have to be prefixed with 'test'.
> 3) Use @Test annotations to mark a method as a test case.
> 4) Use @Before and @After annotations to identify set up and tear down.
> (TestNG uses @Configuration.beforeTestMethod and
> @Configuration.afterTestMethod )
> 5) Use @BeforeClass and @AfterClass annotations to identify one time set
> up and one time tear down. (TestNG uses @Configuration.beforeTestClass
> and @Configuration.afterTestClass )
> 6) @Test.timeout to specify the maximum time to execute
> 7) @Test.expected to specify the expected exception to be thrown (TestNG
> uses @ExpectedExceptions)
> 8) Can execute JUnit 3.8 test cases.
>
> *Differences*:
> 1) JUnit4 requires Java 5.0, while TestNG can work with Java 1.4 and
> Java 5.0.
> 2) TestNG provides more annotations to facilitate testing configuration[2]
> 3) TestNG "groups" are more sophisticated than JUnit test suites[3]


This is a key difference. At least, the TestNG "groups" concept solves our
platform-dependent and exclude-list problems;
e.g. "win", "linux", and "broken" groups could easily tell TestNG which tests
should be run or excluded on Windows or Linux.
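[Editorial sketch] To make that concrete, here is a small self-contained sketch of the include/exclude group model. The test names and group tags are invented for illustration, mirroring the "win"/"linux"/"broken" idea above; this models what TestNG's group selection does rather than calling the TestNG API:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class GroupSlicingSketch {

    // Hypothetical test registry: test name -> the groups it carries.
    static final Map<String, Set<String>> TESTS = new LinkedHashMap<>();
    static {
        TESTS.put("FileChannelWinTest",  Set.of("io", "win"));
        TESTS.put("FileChannelUnixTest", Set.of("io", "linux"));
        TESTS.put("SocketTest",          Set.of("net", "win", "linux"));
        TESTS.put("FlakySerialTest",     Set.of("io", "broken"));
    }

    // Select tests carrying at least one included group and no excluded one --
    // the same include/exclude model TestNG applies to its groups.
    static Set<String> select(Set<String> include, Set<String> exclude) {
        Set<String> selected = new LinkedHashSet<>();
        for (Map.Entry<String, Set<String>> e : TESTS.entrySet()) {
            boolean included = !Collections.disjoint(e.getValue(), include);
            boolean excluded = !Collections.disjoint(e.getValue(), exclude);
            if (included && !excluded) selected.add(e.getKey());
        }
        return selected;
    }

    public static void main(String[] args) {
        // On Windows: run everything tagged "win", skipping "broken" tests.
        System.out.println(select(Set.of("win"), Set.of("broken")));
        // prints [FileChannelWinTest, SocketTest]

        // Slicing by "io" cuts across the platform dimension, with no
        // /win/io vs /io/win directory duplication.
        System.out.println(select(Set.of("io"), Set.of("broken")));
        // prints [FileChannelWinTest, FileChannelUnixTest]
    }
}
```

The markers form arbitrary, overlapping slices, which is exactly what a tree of directories or a flat exclude list cannot express without duplication.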


> 4) TestNG makes it easy to rerun failed tests[4]
> ....
>
> 1. http://www.qaware.de/downloads/to1-adersberger.pdf
> 2. http://testng.org/doc/documentation-main.html#annotations
> 3. http://testng.org/doc/documentation-main.html#test-groups
> 4. http://testng.org/doc/documentation-main.html#rerunning
>
> Best regards,
> Richard.
>
> > -Nathan
> >
> >
> >> -----Original Message-----
> >> From: George Harley [mailto:george.c.harley@googlemail.com]
> >> Sent: Monday, July 10, 2006 3:57 PM
> >> To: harmony-dev@incubator.apache.org
> >> Subject: Re: [classlib] Testing conventions - a proposal
> >>
> >> Alexei Zakharov wrote:
> >>
> >>> Hi George,
> >>>
> >>>
> >>>> For the purposes of this discussion it would be fascinating to find
> out
> >>>> why you refer to TestNG as being an "unstable" test harness. What is
> >>>> that statement based on ?
> >>>>
> >>> My exact statement was referring to TestNG as "probably unstable"
> >>> rather than simply "unstable". ;)  This statement was based on posts
> >>> from Richard Liang about the bug in the TestNG migration tool and on
> >>> common sense. If the project has such an obvious bug in one place it
> >>> may well have other bugs in other places. JUnit is a quite famous
> >>> and widely used toolkit that has proved to be stable enough. TestNG is
> >>> neither famous nor widely used. And IMHO it makes sense to be careful
> >>> with new exciting tools until we *really* need their innovative
> >>> functionality.
> >>>
> >>>
> >> Hi Alexei,
> >>
> >> Last I heard, Richard posted saying that there was no bug in the
> >> migration tool [1]. The command line tool is designed to locate JUnit
> >> tests under a specified location and add the TestNG annotations to
> them.
> >> That's what it does.
> >>
> >> You are right to say that it makes sense to be careful in this matter.
> >> Nobody wants to do anything that affects Harmony in an adverse way.
> >>
> >> Best regards,
> >> George
> >>
> >> [1]
> >> http://mail-archives.apache.org/mod_mbox/incubator-harmony-
> >> dev/200607.mbox/%3c44B1C084.3020408@gmail.com%3e
> >>
> >>
> >>
> >>> 2006/7/10, George Harley <ge...@googlemail.com>:
> >>>
> >>>> Alexei Zakharov wrote:
> >>>>
> >>>>>> Actually, there's a very valid benefit for using TestNG markers (=
> >>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
> >>>>>>
> >> a
> >>
> >>>>>> tree, whereas the markers can form any slice of tests, and the sets
> >>>>>>
> >>>>> Concerning TestNG vs JUnit: I'd just like to draw your attention to the
> >>>>> fact that it is possible to achieve the same level of test
> >>>>> grouping/slicing with JUnit TestSuites. You may define any number of
> >>>>> intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
> >>>>> XXXWinSpecificSuite or whatever. Without the necessity of migrating to
> >>>>> a new (probably unstable) test harness.
> >>>>> Just my two cents.
> >>>>>
> >>>>>
> >>>>>
> >>>> Hi Alexei,
> >>>>
> >>>> You are quite correct that JUnit test suites are another alternative
> >>>> here. If I recall correctly, their use was discussed in the very
> early
> >>>> days of this project but it came to nothing and we instead went down
> >>>>
> >> the
> >>
> >>>> route of using exclusion filters in the Ant JUnit task. That approach
> >>>> does not offer much in the way of fine grain control and relies on us
> >>>> pushing stuff around the repository. Hence the kicking off of this
> >>>> thread.
> >>>>
> >>>> For the purposes of this discussion it would be fascinating to find
> out
> >>>> why you refer to TestNG as being an "unstable" test harness. What is
> >>>> that statement based on ?
> >>>>
> >>>> Best regards,
> >>>> George
> >>>>
> >>>>
> >>>>
> >>>>> 2006/7/8, Alex Blewitt <al...@gmail.com>:
> >>>>>
> >>>>>> On 08/07/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
> >>>>>>
> >>>>>>> So while I like the annotations, and expect we can use them
> >>>>>>>
> >>>>>> effectively,
> >>>>>>
> >>>>>>> I have an instinctive skepticism of annotations right now
> >>>>>>>
> >>>> because in
> >>>>
> >>>>>>> general (in general in Java), I'm not convinced we've used them
> >>>>>>>
> >>>> enough
> >>>>
> >>>>>>> to grok good design patterns.
> >>>>>>>
> >>>>>> There's really no reason to get hung up on the annotations. TestNG
> >>>>>> works just as well with JavaDoc source comments; annotations are
> >>>>>>
> >> only
> >>
> >>>>>> another means to that end. (They're probably a better one for the
> >>>>>> future, but it's just an implementation detail.)
> >>>>>>
> >>>>>>
> >>>>>>> Now since I still haven't read the thread fully, I'm jumping to
> >>>>>>> conclusions, taking it to the extreme, etc etc, but my thinking in
> >>>>>>> writing the above is that if we bury everything about our test
> >>>>>>> 'parameter space' in annotations, some of the visible
> >>>>>>>
> >>>> organization we
> >>>>
> >>>>>>> have now w/ on-disk layout becomes invisible, and the readable
> >>>>>>> "summaries" of aspects of testing that we'd have in an XML
> >>>>>>>
> >> metadata
> >>
> >>>>>>> document (or whatever) also are hard because you need to scan the
> >>>>>>> sources to find all instances of annotation "X".
> >>>>>>>
> >>>>>> I'm hoping that this would be just as applicable to using JavaDoc
> >>>>>> variants, and that the problem's not with annotations per se.
> >>>>>>
> >>>>>> In either case, both are grokkable with tools -- either
> >>>>>> annotation-savvy readers or a JavaDoc tag processor, and it
> >>>>>>
> >>>> wouldn't be
> >>>>
> >>>>>> hard to configure one of those to periodically scan the codebase to
> >>>>>> generate reports. Furthermore, as long as the annotation X is well
> >>>>>> defined, *you* don't have to scan it -- you leave it up to TestNG
> to
> >>>>>> figure it out.
> >>>>>>
> >>>>>> Actually, there's a very valid benefit for using TestNG markers (=
> >>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
> >>>>>>
> >> a
> >>
> >>>>>> tree, whereas the markers can form any slice of tests, and the sets
> >>>>>> don't need to be strict subsets (with a tree, everything has to be
> a
> >>>>>> strict subset of its parents). That means that it's possible to
> >>>>>>
> >>>> define
> >>>>
> >>>>>> a marker IO to run all the IO tests, or a marker Win32 to run all
> >>>>>>
> >> the
> >>
> >>>>>> Win32 tests, and both of those will contain IO-specific Win32
> tests.
> >>>>>> You can't do that in a tree structure without duplicating content
> >>>>>> somewhere along the line (e.g. /win/io or /io/win). Neither of
> these
> >>>>>> scale well, and every time you add a new dimension, you're doubling
> >>>>>> the structure of the directory, but merely adding a new marker with
> >>>>>> TestNG. So if you wanted to have (say) boot classpath tests vs api
> >>>>>> tests, then you'd have to have /api/win/io and /boot/win/io (or
> >>>>>> various permutations as applicable).
> >>>>>>
> >>>>>> Most of the directory-based arguments seem to be along the lines of
> >>>>>> "/api/win/io is better! No, /win/io/api is better!". Just have an
> >>>>>> 'api', 'win', 'io' TestNG marker, and then let TestNG figure out
> >>>>>>
> >>>> which
> >>>>
> >>>>>> ones to run. You can then even get specific, and only run the
> >>>>>>
> >> Windows
> >>
> >>>>>> IO API tests, if you really want -- but if you don't, you get the
> >>>>>> benefit of being able to run all IO tests (both API and boot).
> >>>>>>
> >>>>>> There doesn't seem to be any benefit to having a strict tree-like
> >>>>>> structure to the tests when it's possible to have a multi-
> >>>>>>
> >> dimensional
> >>
> >>>>>> matrix of all possible combinations that's managed by the tool.
> >>>>>>
> >>>>>> Alex.
> >>>>>>
> >>>
> >>
> >
> >
> >
> >
> >
>
> --
> Richard Liang
> China Software Development Lab, IBM
>
>
>


-- 
Andrew Zhang
China Software Development Lab, IBM

Re: [classlib] TestNG v. JUnit

Posted by Richard Liang <ri...@gmail.com>.

Nathan Beyer wrote:
> Not to add another fire to this topic, but with all things being relative,
> so far this topic has been a comparison of TestNG and JUnit v3.8. From
> what I understand, the latest JUnit v4.1 provides many of the same
> annotation features that TestNG does, as well as guaranteed compatibility with
> JUnit v3-based tests.
>
> If we were to compare moving to TestNG with upgrading to JUnit 4.1, would
> there still be as much value in the proposition to move to TestNG?
>
>   
It's hard to give an exhaustive comparison of JUnit4 and TestNG. There 
is an existing presentation "Comparison TestNG / JUnit 4"[1]; however, 
it's in German. Not sure if there are German speakers here. ;-)

I'll just try to list some items (correct me if I'm wrong):

Both JUnit4 and TestNG:
1) Test classes do not have to extend junit.framework.TestCase.
2) Test methods do not have to be prefixed with 'test'.
3) Use @Test annotations to mark a method as a test case.
4) Use @Before and @After annotations to identify set up and tear down. 
(TestNG uses @Configuration.beforeTestMethod and 
@Configuration.afterTestMethod )
5) Use @BeforeClass and @AfterClass annotations to identify one time set 
up and one time tear down. (TestNG uses @Configuration.beforeTestClass 
and @Configuration.afterTestClass )
6) @Test.timeout to specify the maximum time to execute
7) @Test.expected to specify the expected exception to be thrown (TestNG 
uses @ExpectedExceptions)
8) Can execute JUnit 3.8 test cases.
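[Editorial sketch] Items 4) and 5) are easiest to see in code. The sketch below is self-contained: the @Before/@After/@Test annotations are local stand-ins declared in the file (the real types are org.junit.Before and org.junit.After in JUnit 4, and @Configuration attributes in TestNG), and the toy runner shows the wrap-around semantics both frameworks implement:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {

    // Local stand-ins for the lifecycle annotations listed above; declared
    // here only so the sketch runs without JUnit 4 or TestNG on the classpath.
    @Retention(RetentionPolicy.RUNTIME) @interface Before { }
    @Retention(RetentionPolicy.RUNTIME) @interface After { }
    @Retention(RetentionPolicy.RUNTIME) @interface Test { }

    public static class ListTests {
        static final List<String> LOG = new ArrayList<>();
        @Before public void setUp()    { LOG.add("setUp"); }
        @Test   public void addOne()   { LOG.add("addOne"); }
        @After  public void tearDown() { LOG.add("tearDown"); }
    }

    // Run each @Test method wrapped by every @Before and @After method,
    // mirroring the per-test-method lifecycle of JUnit 4 and TestNG.
    public static void run(Class<?> testClass) throws Exception {
        Object instance = testClass.getDeclaredConstructor().newInstance();
        List<Method> before = new ArrayList<>();
        List<Method> tests = new ArrayList<>();
        List<Method> after = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Before.class)) before.add(m);
            if (m.isAnnotationPresent(Test.class))   tests.add(m);
            if (m.isAnnotationPresent(After.class))  after.add(m);
        }
        for (Method test : tests) {
            for (Method b : before) b.invoke(instance);
            test.invoke(instance);
            for (Method a : after)  a.invoke(instance);
        }
    }

    public static void main(String[] args) throws Exception {
        run(ListTests.class);
        System.out.println(ListTests.LOG); // prints [setUp, addOne, tearDown]
    }
}
```

The per-class variants (item 5) differ only in running their methods once around the whole class instead of around each test method.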

*Differences*:
1) JUnit4 requires Java 5.0, while TestNG can work with Java 1.4 and 
Java 5.0.
2) TestNG provides more annotations to facilitate testing configuration[2]
3) TestNG "groups" are more sophisticated than JUnit test suites[3]
4) TestNG makes it easy to rerun failed tests[4]
....

1. http://www.qaware.de/downloads/to1-adersberger.pdf
2. http://testng.org/doc/documentation-main.html#annotations
3. http://testng.org/doc/documentation-main.html#test-groups
4. http://testng.org/doc/documentation-main.html#rerunning

Best regards,
Richard.

> -Nathan
>
>   
>> -----Original Message-----
>> From: George Harley [mailto:george.c.harley@googlemail.com]
>> Sent: Monday, July 10, 2006 3:57 PM
>> To: harmony-dev@incubator.apache.org
>> Subject: Re: [classlib] Testing conventions - a proposal
>>
>> Alexei Zakharov wrote:
>>     
>>> Hi George,
>>>
>>>       
>>>> For the purposes of this discussion it would be fascinating to find out
>>>> why you refer to TestNG as being an "unstable" test harness. What is
>>>> that statement based on ?
>>>>         
>>> My exact statement was referring to TestNG as "probably unstable"
>>> rather than simply "unstable". ;)  This statement was based on posts
>>> from Richard Liang about the bug in the TestNG migration tool and on
>>> common sense. If the project has such an obvious bug in one place it
>>> may probably have other bugs in other places. JUnit is quite famous
>>> and widely used toolkit that proved to be stable enough. TestNG is
>>> neither famous nor widely used. And IMHO it makes sense to be careful
>>> with new exciting tools until we *really* need their innovative
>>> functionality.
>>>
>>>       
>> Hi Alexei,
>>
>> Last I heard, Richard posted saying that there was no bug in the
>> migration tool [1]. The command line tool is designed to locate JUnit
>> tests under a specified location and add the TestNG annotations to them.
>> That's what it does.
>>
>> You are right to say that it makes sense to be careful in this matter.
>> Nobody wants to do anything that affects Harmony in an adverse way.
>>
>> Best regards,
>> George
>>
>> [1]
>> http://mail-archives.apache.org/mod_mbox/incubator-harmony-
>> dev/200607.mbox/%3c44B1C084.3020408@gmail.com%3e
>>
>>
>>     
>>> 2006/7/10, George Harley <ge...@googlemail.com>:
>>>       
>>>> Alexei Zakharov wrote:
>>>>         
>>>>>> Actually, there's a very valid benefit for using TestNG markers (=
>>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
>>>>>>             
>> a
>>     
>>>>>> tree, whereas the markers can form any slice of tests, and the sets
>>>>>>             
>>>>> Concerning TestNG vs JUnit. I just like to pay your attention on the
>>>>> fact what it is possible to achieve the same level of test
>>>>> grouping/slicing with JUnit TestSuites. You may define any number of
>>>>> intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
>>>>> XXXWinSpecificSuite or whatever. Without necessity of migrating to
>>>>>           
>> new
>>     
>>>>> (probably unstable) test harness.
>>>>> Just my two cents.
>>>>>
>>>>>
>>>>>           
>>>> Hi Alexei,
>>>>
>>>> You are quite correct that JUnit test suites are another alternative
>>>> here. If I recall correctly, their use was discussed in the very early
>>>> days of this project but it came to nothing and we instead went down
>>>>         
>> the
>>     
>>>> route of using exclusion filters in the Ant JUnit task. That approach
>>>> does not offer much in the way of fine grain control and relies on us
>>>> pushing stuff around the repository. Hence the kicking off of this
>>>> thread.
>>>>
>>>> For the purposes of this discussion it would be fascinating to find out
>>>> why you refer to TestNG as being an "unstable" test harness. What is
>>>> that statement based on ?
>>>>
>>>> Best regards,
>>>> George
>>>>
>>>>
>>>>         
>>>>> 2006/7/8, Alex Blewitt <al...@gmail.com>:
>>>>>           
>>>>>> On 08/07/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>>>>>>             
>>>>>>> So while I like the annotations, and expect we can use them
>>>>>>>               
>>>>>> effectively,
>>>>>>             
>>>>>>> I have an instinctive skepticism of annotations right now
>>>>>>>               
>>>> because in
>>>>         
>>>>>>> general (in general in Java), I'm not convinced we've used them
>>>>>>>               
>>>> enough
>>>>         
>>>>>>> to grok good design patterns.
>>>>>>>               
>>>>>> There's really no reason to get hung up on the annotations. TestNG
>>>>>> works just as well with JavaDoc source comments; annotations are
>>>>>>             
>> only
>>     
>>>>>> another means to that end. (They're probably a better one for the
>>>>>> future, but it's just an implementation detail.)
>>>>>>
>>>>>>             
>>>>>>> Now since I still haven't read the thread fully, I'm jumping to
>>>>>>> conclusions, taking it to the extreme, etc etc, but my thinking in
>>>>>>> writing the above is that if we bury everything about our test
>>>>>>> 'parameter space' in annotations, some of the visible
>>>>>>>               
>>>> organization we
>>>>         
>>>>>>> have now w/ on-disk layout becomes invisible, and the readable
>>>>>>> "summaries" of aspects of testing that we'd have in an XML
>>>>>>>               
>> metadata
>>     
>>>>>>> document (or whatever) also are hard because you need to scan the
>>>>>>> sources to find all instances of annotation "X".
>>>>>>>               
>>>>>> I'm hoping that this would be just as applicable to using JavaDoc
>>>>>> variants, and that the problem's not with annotations per se.
>>>>>>
>>>>>> In either case, both are grokkable with tools -- either
>>>>>> annotation-savy readers or a JavaDoc tag processor, and it
>>>>>>             
>>>> wouldn't be
>>>>         
>>>>>> hard to configure one of those to periodically scan the codebase to
>>>>>> generate reports. Furthermore, as long as the annotation X is well
>>>>>> defined, *you* don't have to scan it -- you leave it up to TestNG to
>>>>>> figure it out.
>>>>>>
>>>>>> Actually, there's a very valid benefit for using TestNG markers (=
>>>>>> annotations/JavaDoc) for grouping tests; the directory structure is
>>>>>>             
>> a
>>     
>>>>>> tree, whereas the markers can form any slice of tests, and the sets
>>>>>> don't need to be strict subsets (with a tree, everything has to be a
>>>>>> strict subset of its parents). That means that it's possible to
>>>>>>             
>>>> define
>>>>         
>>>>>> a marker IO to run all the IO tests, or a marker Win32 to run all
>>>>>>             
>> the
>>     
>>>>>> Win32 tests, and both of those will contain IO-specific Win32 tests.
>>>>>> You can't do that in a tree structure without duplicating content
>>>>>> somewhere along the line (e.g. /win/io or /io/win). Neither of these
>>>>>> scale well, and every time you add a new dimension, you're doubling
>>>>>> the structure of the directory, but merely adding a new marker with
>>>>>> TestNG. So if you wanted to have (say) boot classpath tests vs api
>>>>>> tests, then you'd ahve to have /api/win/io and /boot/win/io (or
>>>>>> various permutations as applicable).
>>>>>>
>>>>>> Most of the directory-based arguments seem to be along the lines of
>>>>>> "/api/win/io is better! No, /win/io/api is better!". Just have an
>>>>>> 'api', 'win', 'io' TestNG marker, and then let TestNG figure out
>>>>>>             
>>>> which
>>>>         
>>>>>> ones to run. You can then even get specific, and only run the
>>>>>>             
>> Windows
>>     
>>>>>> IO API tests, if you really want -- but if you don't, you get the
>>>>>> benefit of being able to run all IO tests (both API and boot).
>>>>>>
>>>>>> There doesn't seem to be any benefit to having a strict tree-like
>>>>>> structure to the tests when it's possible to have a multi-
>>>>>>             
>> dimensional
>>     
>>>>>> matrix of all possible combinations that's managed by the tool.
>>>>>>
>>>>>> Alex.
>>>>>>             
>>>       
>> ---------------------------------------------------------------------
>> Terms of use : http://incubator.apache.org/harmony/mailing.html
>> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
>> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>>     
>
>
>
>
>   

-- 
Richard Liang
China Software Development Lab, IBM