Posted to dev@subversion.apache.org by Karl Fogel <kf...@newton.ch.collab.net> on 2002/08/22 04:28:38 UTC

expected failures shouldn't raise alarms

Brane, thanks much for the XFAIL stuff.  I've just one issue with it:
when "make check" is printing out the summary results, it treats
expected failures as "FAILURE"s.  This is needlessly alarming -- the
whole *point* of the all-caps word "FAILURE" is to stand out and let
the programmer know that something's wrong, so don't commit now.

Tests that generate expected failures should print "success".  They
behaved as expected, so they succeeded.

As for printing a list of the expected failures *after* the summary
(see example below), I'm -0.5 on that.  After all, we don't depend on
the test suite to tell us what needs fixing, we depend on the issue
tracker.  Seeing a full list of XFAILs at the end of the test run is
just more noise for the programmer to sort through, because the
programmer is interested in *at most* 1 of those failures -- all the
rest are just in the way.  (And more often, 0 of the expected failures
will be interesting, because the programmer is working on something
else entirely.)

In summary:

   * The test suite is for telling you whether you've fixed a bug or
     not, and/or whether you broke something else in the process.

   * The issue tracker is for remembering what needs to get fixed, and
     for scheduling when we plan to fix it.

:-)

Here's the new "make check" output that just caused me to jump:

   Running all tests in hashdump-test...success
   Running all tests in path-test...success
   [...]
   Running all tests in basic_tests.py...success
   Running all tests in commit_tests.py...FAILURE
   Running all tests in update_tests.py...success
   Running all tests in switch_tests.py...success
   Running all tests in prop_tests.py...success
   Running all tests in schedule_tests.py...FAILURE
   Running all tests in log_tests.py...success
   [...]
   Running all tests in stat_tests.py...success
   Running all tests in trans_tests.py...FAILURE
   Running all tests in svnadmin_tests.py...success
   
   Summary results from /home/kfogel/src/subversion/tests.log
   number of passed tests: 240
   number of failed tests: 0
   number of expected failures: 9
   number of unexpected passes: 0
   
   Expected failures:
   XFAIL: commit_tests.py 13: hook testing.
   XFAIL: schedule_tests.py 11: commit: add some files
   XFAIL: schedule_tests.py 12: commit: add some directories
   XFAIL: schedule_tests.py 13: commit: add some nested files and directories
   XFAIL: schedule_tests.py 14: commit: delete some files
   XFAIL: schedule_tests.py 15: commit: delete some directories
   XFAIL: trans_tests.py 2: enable translation, check status, commit
   XFAIL: trans_tests.py 3: checkout files that have translation enabled
   XFAIL: trans_tests.py 4: disable translation, check status, commit
   make: *** [check] Error 1

I'm not objecting to the "Summary results from..." section.  It's the
final list of "Expected failures" that bothers me, because it's noise,
and it will only get longer and longer, and drive useful information
off the screen.

-Karl


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Branko Čibej wrote:

> Perhaps. This isn't just about issue tracking, it's also about 
> enhancing the test suite. There were 9 tests in there that were 
> disabled, and most of those were not even implemented -- which means 
> somebody thought they'd be a good idea, then forgot about them. 
> There's nothing in the issue tracker about those tests. :-)

Ah, I take that back, seeing issue 877. :-)


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Branko Čibej <br...@xbc.nu> writes:
>  
>
>>At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
>>Failed tests:
>>FAIL:  getopt_tests.py 7: run svn help bogus-cmd
>>Unexpected passes:
>>XPASS: getopt_tests.py 1: run svn with no arguments
>>  vs.
>>
>>At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
>>XPASS: getopt_tests.py 1: run svn with no arguments
>>FAIL:  getopt_tests.py 7: run svn help bogus-cmd
>>  I'll only commit one of them. I'm not going to change the output
>>based on test `id -u` = "kfogel". :-)
>>    
>>
>
>Oh, I prefer the latter, but only mildly.  If XPASS is really a
>tradition, that means there's less point explaining it in the output.
>
>But if you prefer the other one, that's fine too.
>

O.K., r3016 is your friend.

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Karl Fogel <kf...@newton.ch.collab.net>.
Branko Čibej <br...@xbc.nu> writes:
> At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
> Failed tests:
> FAIL:  getopt_tests.py 7: run svn help bogus-cmd
> Unexpected passes:
> XPASS: getopt_tests.py 1: run svn with no arguments
>   vs.
> 
> At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
> XPASS: getopt_tests.py 1: run svn with no arguments
> FAIL:  getopt_tests.py 7: run svn help bogus-cmd
>   I'll only commit one of them. I'm not going to change the output
> based on test `id -u` = "kfogel". :-)

Oh, I prefer the latter, but only mildly.  If XPASS is really a
tradition, that means there's less point explaining it in the output.

But if you prefer the other one, that's fine too.


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Branko Čibej <br...@xbc.nu> writes:
>  
>
>>>Oooooh.  I find that a bit confusing, because I think of the "X"
>>>prefix as standing for "eXpected", whereas in XPASS it stands for
>>>"uneXpected".
>>>      
>>>
>>Yup, it's confusing, but it's more or less standard (IIRC it's even a
>>POSIX standard for test suites, have to look it up -- the dejagnu
>>documentation says something about that).
>>    
>>
>
>Okay, I bow to the wisdom of the ages.
>
>  
>
>>Right. I'm on it.
>>    
>>
>
>Thanks, 'preciate it.
>

Right then, pick the output you prefer:

At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
Failed tests:
FAIL:  getopt_tests.py 7: run svn help bogus-cmd
Unexpected passes:
XPASS: getopt_tests.py 1: run svn with no arguments
  

vs.

At least one test FAILED, checking c:\Home\brane\src\svn\repo\tests.log
XPASS: getopt_tests.py 1: run svn with no arguments
FAIL:  getopt_tests.py 7: run svn help bogus-cmd
  


I'll only commit one of them. I'm not going to change the output based 
on test `id -u` = "kfogel". :-)

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Karl Fogel <kf...@newton.ch.collab.net>.
Branko Čibej <br...@xbc.nu> writes:
> >Oooooh.  I find that a bit confusing, because I think of the "X"
> >prefix as standing for "eXpected", whereas in XPASS it stands for
> >"uneXpected".
>
> Yup, it's confusing, but it's more or less standard (IIRC it's even a
> POSIX standard for test suites, have to look it up -- the dejagnu
> documentation says something about that).

Okay, I bow to the wisdom of the ages.

> Right. I'm on it.

Thanks, 'preciate it.


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Branko Čibej <br...@xbc.nu> writes:
>  
>
>>You don't mark tests as XPASS. Here's what happens:
>>
>>    You write a test for a new bug, and mark it XFAIL. Later on, you fix
>>    the bug, and the test passes. BUT you forget to un-XFAIL the test.
>>    So, the test is expected to fail, but unexpectedly passes -- hence
>>    XPASS instead of PASS.
>>
>>So, XPASS is a sort of reminder. It's useful because the bug may have
>>been fixed inadvertently, or maybe the test infrastructure was changed
>>such that the XFAIL test suddenly passes even though the bug is still
>>present.
>>
>>XPASS can only happen to tests that have been marked as XFAIL.
>>    
>>
>
>Oooooh.  I find that a bit confusing, because I think of the "X"
>prefix as standing for "eXpected", whereas in XPASS it stands for
>"uneXpected".
>
Yup, it's confusing, but it's more or less standard (IIRC it's even a 
POSIX standard for test suites, have to look it up -- the dejagnu 
documentation says something about that).

>But whatever.  Let's make XPASS exit with non-zero, so it counts as a
>failure as far as the final result of the test run goes, for the
>reason I gave earlier:
>
>   > Btw, yes, I think an unexpected pass should be treated as a kind
>   > of breakage.  If we used to have a bug, and now we don't, then we
>   > want to notice if it reappears, because that's a regression.
>   > Therefore, we need to be alerted to the unexpected pass, so we
>   > can go change the test suite to expect the pass in the future, in
>   > turn so we'll detect any regression.
>  
>
Right. I'm on it.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Karl Fogel <kf...@newton.ch.collab.net>.
Branko Čibej <br...@xbc.nu> writes:
> You don't mark tests as XPASS. Here's what happens:
> 
>     You write a test for a new bug, and mark it XFAIL. Later on, you fix
>     the bug, and the test passes. BUT you forget to un-XFAIL the test.
>     So, the test is expected to fail, but unexpectedly passes -- hence
>     XPASS instead of PASS.
> 
> So, XPASS is a sort of reminder. It's useful because the bug may have
> been fixed inadvertently, or maybe the test infrastructure was changed
> such that the XFAIL test suddenly passes even though the bug is still
> present.
> 
> XPASS can only happen to tests that have been marked as XFAIL.

Oooooh.  I find that a bit confusing, because I think of the "X"
prefix as standing for "eXpected", whereas in XPASS it stands for
"uneXpected".

But whatever.  Let's make XPASS exit with non-zero, so it counts as a
failure as far as the final result of the test run goes, for the
reason I gave earlier:

   > Btw, yes, I think an unexpected pass should be treated as a kind
   > of breakage.  If we used to have a bug, and now we don't, then we
   > want to notice if it reappears, because that's a regression.
   > Therefore, we need to be alerted to the unexpected pass, so we
   > can go change the test suite to expect the pass in the future, in
   > turn so we'll detect any regression.

-K


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Branko Čibej <br...@xbc.nu> writes:
>  
>
>>Let me try to recapitulate what you want:
>>
>>    * Lose the summary section
>>    * List only unexpected results (FAIL and XPASS)
>>
>>Right?
>>    
>>
>
>Right.  (I probably should have included a concise summary in my
>mail.)
>
>  
>
>>One more detail: Does an XPASS mean the tests should fail (i.e., exit
>>with non-zero)?
>>    
>>
>
>Hmmm.  I don't really understand the purpose of XPASS, perhaps you can
>help?
>
>If the idea is this:
>
>   A bug is present in Subversion, and we've written an XPASS test
>   such that as long as the bug is *present*, the test returns
>   success, but as soon as the bug is fixed, the test will "fail".
>
>...then I'd say we should simply reverse the sense of the test's
>return value and make it an XFAIL instead.  But maybe XPASS is about
>something else?
>

You don't mark tests as XPASS. Here's what happens:

    You write a test for a new bug, and mark it XFAIL. Later on, you fix
    the bug, and the test passes. BUT you forget to un-XFAIL the test.
    So, the test is expected to fail, but unexpectedly passes -- hence
    XPASS instead of PASS.

So, XPASS is a sort of reminder. It's useful because the bug may have 
been fixed inadvertently, or maybe the test infrastructure was changed 
such that the XFAIL test suddenly passes even though the bug is still 
present.

XPASS can only happen to tests that have been marked as XFAIL.
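
To make that concrete, here is a minimal sketch in Python (illustrative
only, not the actual svntest code) of how a driver could combine a
test's XFAIL marking with its actual outcome to produce the four labels:

   # Illustrative only: map (XFAIL marking, actual outcome) to a label.
   def classify(marked_xfail, passed):
       if marked_xfail and passed:
           return "XPASS"      # expected to fail, but passed -- the reminder
       if marked_xfail:
           return "XFAIL"      # expected to fail, and did
       if passed:
           return "PASS"
       return "FAIL"

   # A test marked XFAIL that starts passing after a bug fix shows up as XPASS.
   assert classify(marked_xfail=True, passed=True) == "XPASS"

So XPASS is never assigned by hand; it falls out of an XFAIL marking
plus an unexpected pass.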


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Karl Fogel <kf...@newton.ch.collab.net>.
Branko Čibej <br...@xbc.nu> writes:
> Let me try to recapitulate what you want:
> 
>     * Lose the summary section
>     * List only unexpected results (FAIL and XPASS)
> 
> Right?

Right.  (I probably should have included a concise summary in my
mail.)

> One more detail: Does an XPASS mean the tests should fail (i.e., exit
> with non-zero)?

Hmmm.  I don't really understand the purpose of XPASS, perhaps you can
help?

If the idea is this:

   A bug is present in Subversion, and we've written an XPASS test
   such that as long as the bug is *present*, the test returns
   success, but as soon as the bug is fixed, the test will "fail".

...then I'd say we should simply reverse the sense of the test's
return value and make it an XFAIL instead.  But maybe XPASS is about
something else?

> I'll bow to the majority opinion (strange, you seem to be a one-man
> majority :-), and fix things as soon as I get these answers. Then I'll
> add an XFAIL mechanism to the C tests, just to be consistent.

Thank you.  (Maybe adjust subversion/tests/README while at it?)

I actually wasn't making the argument based on majority (perceived or
otherwise), but on the basis that we already made this decision and
never reached consensus on changing it.  I'd be fine with people arguing
against my veto -- and would bow to majority opinion of developers,
since the whole point here is to be useful to people working on the
code.  But based on the list discussion, it appears to be a simple
1-to-1 right now, between you and me, unless I missed a post.

However, SINCE you bring it up :-), I'll point out that Mike Pilato
concurred verbally with me, so if you want to count, it's really
2-to-1.  I also feel pretty certain, based on many talks about the
test suite, that Ben Collins-Sussman would make that 3-to-1, but perhaps
I'm getting ahead of myself, and anyway, I don't want to set up a
Chicago mafia here... except when it serves my own nefarious purposes!

-K


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Branko Čibej <br...@xbc.nu> writes:
>  
>
>>I say, if that list will get longer and longer, then I should print it
>>in blinking mode. It's there as an incentive for fixing the test suite.
>>    
>>
>
>No :-).  Seriously, -1, although I understand where you're coming from.
>

Let me try to recapitulate what you want:

    * Lose the summary section
    * List only unexpected results (FAIL and XPASS)

Right?

One more detail: Does an XPASS mean the tests should fail (i.e., exit 
with non-zero)?

I'll bow to the majority opinion (strange, you seem to be a one-man 
majority :-), and fix things as soon as I get these answers. Then I'll 
add an XFAIL mechanism to the C tests, just to be consistent.
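
As a rough sketch (illustrative Python only, not the real test driver),
that recap would boil down to listing just the unexpected results and
letting the exit status follow from whether any exist:

   import sys

   # results: list of (label, name) pairs; label is PASS/FAIL/XFAIL/XPASS.
   def report(results):
       unexpected = [(label, name) for label, name in results
                     if label in ("FAIL", "XPASS")]
       for label, name in unexpected:
           print("%-6s %s" % (label + ":", name))
       if unexpected:
           return 1             # a FAIL or XPASS makes the run fail
       return 0

   sys.exit(report([("XFAIL", "commit_tests.py 13: hook testing."),
                    ("FAIL", "getopt_tests.py 7: run svn help bogus-cmd")]))

Whether an XPASS really should make the whole run exit non-zero is
exactly the open question above.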

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Karl Fogel <kf...@newton.ch.collab.net>.
Branko Čibej <br...@xbc.nu> writes:
> I say, if that list will get longer and longer, then I should print it
> in blinking mode. It's there as an incentive for fixing the test suite.

No :-).  Seriously, -1, although I understand where you're coming from.

We discussed the "make check" output thoroughly when we were
originally designing it, and quite consciously decided that the point
of the output was to tell the programmer if he'd changed anything that
he didn't intend to change.  Hence the lower-case "success" vs
upper-case "FAILURE", and the list of any *unexpected* failures
following the foo.py summary list.

This was helpful, because it meant that you could glance at the
results and see immediately if you have to dig any deeper.  Everything
looks okay?  Good, then commit.  Something looks wrong?  Fine, then
dig deeper, glance over the failure summary, maybe look in tests.log,
go run the failing test(s) by hand, whatever.

The important feature is that the common case, success, could be
distinguished instantaneously from the messy failure case.

Now you've gone and changed all that, by printing an arbitrary number
of expected/unexpected foo's at the end:

   Expected failures:
   XFAIL: commit_tests.py 13: hook testing.
   XFAIL: schedule_tests.py 11: commit: add some files
   XFAIL: schedule_tests.py 12: commit: add some directories
   XFAIL: schedule_tests.py 13: commit: add some nested files and directories
   XFAIL: schedule_tests.py 14: commit: delete some files
   XFAIL: schedule_tests.py 15: commit: delete some directories
   XFAIL: trans_tests.py 2: enable translation, check status, commit
   XFAIL: trans_tests.py 3: checkout files that have translation enabled
   XFAIL: trans_tests.py 4: disable translation, check status, commit

What is the justification for this section?  That it will somehow
remind developers what needs fixing?  Again, the issue tracker is
better for this (and note that just because you don't see the xfail
test listed in an issue summary doesn't mean it's not mentioned in the
issue's description!).  Seeing a list of XFAILs is not likely to
prompt anyone to suddenly decide to work on schedule_tests.py 14, out
of the blue.  There's so much information missing here, about severity
and urgency and expected difficulty of fixing -- all of which is
probably included in the issue tracker, or should be (i.e., if it's
not, then we should put it there).

These new sections are just going to get longer and longer, and make
it impossible to parse the "make check" output at a glance.  Heck,
they've already made it impossible to parse at a glance, but at least
we're only up to two glances so far.

Yuck.  Ick.  Blecch.  Please, change it back to the way it was.

Also, Mike Pilato and I were discussing this section

   Summary results from /home/kfogel/src/subversion/tests.log
   number of passed tests: 240
   number of failed tests: 0
   number of expected failures: 9
   number of unexpected passes: 0

and we frankly can't see what useful information it gives.  Why does
it matter how many total tests were in each category?  The only
categories that matter up there are the "failed tests" and the
"unexpected passes", and we'd want to signal such events in a much
more attention-getting way if they happen, not as part of routine
summary results.

(Btw, yes, I think an unexpected pass should be treated as a kind of
breakage.  If we used to have a bug, and now we don't, then we want to
notice if it reappears, because that's a regression.  Therefore, we
need to be alerted to the unexpected pass, so we can go change the
test suite to expect the pass in the future, in turn so we'll detect
any regression.)

My understanding of the purpose of the XFAIL stuff is that it gives
volunteers a really convenient way to take on a bug.  The process is

   1) Someone finds a bug.
   2) Someone writes an XFAIL regression test for it.
   3) We file an issue, mentioning the test by name.
   4) Someone fixes the bug, looking to make the test "unexpectedly pass".
   5) After they've fixed it, the last step is to change the test
      suite to expect success for that test, then commit.
   6) Close the issue.

This is already easier than the system we have now, since merely
fixing the bug will cause an interesting change to the output of
"make check" (an expected failure will become an unexpected pass).
That's all we need.
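
For illustration only, steps 2 and 5 could look something like the
following in a Python harness; the XFail wrapper and test_list name
here are invented for this sketch, not the actual test framework:

   # Hypothetical marking mechanism, not the real svntest API.
   class XFail:
       def __init__(self, test_func):
           self.test_func = test_func
           self.expected_to_fail = True

   def commit_hook_test():
       raise AssertionError("hook was not run")   # fails until the bug is fixed

   # Step 2: register the new test wrapped as an expected failure.
   test_list = [XFail(commit_hook_test)]

   # Step 5: once the bug is fixed and the run reports XPASS, drop the
   # wrapper -- test_list = [commit_hook_test] -- so the suite expects
   # this test to pass from then on.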

But showing _all_ the expected failures with every test run is just
going to overwhelm everyone, because it presents an unorganized mass
of information from which no informed decision can be made.  The issue
tracker presents the same information, but organized and prioritized,
making it easier for people to decide what to work on.

-Karl


Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Karl Fogel wrote:

>Brane, thanks much for the XFAIL stuff.  I've just one issue with it:
>when "make check" is printing out the summary results, it treats
>expected failures as "FAILURE"s.  This is needlessly alarming -- the
>whole *point* of the all-caps word "FAILURE" is to stand out and let
>the programmer know that something's wrong, so don't commit now.
>
>Tests that generate expected failures should print "success".  They
>behaved as expected, so they succeeded.
>
Perhaps. This isn't just about issue tracking, it's also about enhancing 
the test suite. There were 9 tests in there that were disabled, and most 
of those were not even implemented -- which means somebody thought 
they'd be a good idea, then forgot about them. There's nothing in the 
issue tracker about those tests. :-)

>As for printing a list of the expected failures *after* the summary
>(see example below), I'm -0.5 on that.  After all, we don't depend on
>the test suite to tell us what needs fixing, we depend on the issue
>tracker.  Seeing a full list of XFAILs at the end of the test run is
>just more noise for the programmer to sort through, because the
>programmer is interested in *at most* 1 of those failures -- all the
>rest are just in the way.  (And more often, 0 of the expected failures
>will be interesting, because the programmer is working on something
>else entirely.)
>
I don't print just the XFAILs. I print the FAILs and XPASSes, too. I 
think all of those are important. Perhaps it would make sense to print 
the XFAIL, XPASS, and FAIL lists first, and the summary afterwards? I 
don't regard the "Running all tests in ...." stuff as important 
information, only the summary and what comes after it.


>In summary:
>
>   * The test suite is for telling you whether you've fixed a bug or
>     not, and/or whether you broke something else in the process.
>
and whether the test suite is still broken. :-)

>   * The issue tracker is for remembering what needs to get fixed, and
>     for scheduling when we plan to fix it.
>
But only if you create a new issue for each XFAILed test. There isn't 
one now.

>Here's the new "make check" output that just caused me to jump:
>
>   Running all tests in hashdump-test...success
>   Running all tests in path-test...success
>   [...]
>   Running all tests in basic_tests.py...success
>   Running all tests in commit_tests.py...FAILURE
>   Running all tests in update_tests.py...success
>   Running all tests in switch_tests.py...success
>   Running all tests in prop_tests.py...success
>   Running all tests in schedule_tests.py...FAILURE
>   Running all tests in log_tests.py...success
>   [...]
>   Running all tests in stat_tests.py...success
>   Running all tests in trans_tests.py...FAILURE
>   Running all tests in svnadmin_tests.py...success
>   
>   Summary results from /home/kfogel/src/subversion/tests.log
>   number of passed tests: 240
>   number of failed tests: 0
>   number of expected failures: 9
>   number of unexpected passes: 0
>   
>   Expected failures:
>   XFAIL: commit_tests.py 13: hook testing.
>   XFAIL: schedule_tests.py 11: commit: add some files
>   XFAIL: schedule_tests.py 12: commit: add some directories
>   XFAIL: schedule_tests.py 13: commit: add some nested files and directories
>   XFAIL: schedule_tests.py 14: commit: delete some files
>   XFAIL: schedule_tests.py 15: commit: delete some directories
>   XFAIL: trans_tests.py 2: enable translation, check status, commit
>   XFAIL: trans_tests.py 3: checkout files that have translation enabled
>   XFAIL: trans_tests.py 4: disable translation, check status, commit
>   make: *** [check] Error 1
>
I think only the last line is wrong: If there are only expected 
failures, the test driver shouldn't be exiting with a non-zero code. 
I'll fix that.

>I'm not objecting to the "Summary results from..." section.  It's the
>final list of "Expected failures" that bothers me, because it's noise,
>and it will only get longer and longer, and drive useful information
>off the screen.
>  
>
I say, if that list will get longer and longer, then I should print it 
in blinking mode. It's there as an incentive for fixing the test suite.

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Branko Čibej <br...@xbc.nu>.
Colin Putney wrote:

>
> On Wednesday, August 21, 2002, at 09:28  PM, Karl Fogel wrote:
>
>> Brane, thanks much for the XFAIL stuff.  I've just one issue with it:
>> when "make check" is printing out the summary results, it treats
>> expected failures as "FAILURE"s.  This is needlessly alarming -- the
>> whole *point* of the all-caps word "FAILURE" is to stand out and let
>> the programmer know that something's wrong, so don't commit now.
>>
>> Tests that generate expected failures should print "success".  They
>> behaved as expected, so they succeeded.
>
>
> Yes, quite so. But this does raise another question. What happens when 
> a test that is expected to fail doesn't?


After the test run, I print out a summary, then list the FAILs, XFAILs,
and XPASSes explicitly. So you do see the tests that passed unexpectedly.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: expected failures shouldn't raise alarms

Posted by Colin Putney <cp...@whistler.net>.
On Wednesday, August 21, 2002, at 09:28  PM, Karl Fogel wrote:

> Brane, thanks much for the XFAIL stuff.  I've just one issue with it:
> when "make check" is printing out the summary results, it treats
> expected failures as "FAILURE"s.  This is needlessly alarming -- the
> whole *point* of the all-caps word "FAILURE" is to stand out and let
> the programmer know that something's wrong, so don't commit now.
>
> Tests that generate expected failures should print "success".  They
> behaved as expected, so they succeeded.

Yes, quite so. But this does raise another question. What happens when a 
test that is expected to fail doesn't?

One way to handle this might be to print the words success or failure to 
indicate whether or not the test passed, and to use all caps to indicate 
that this was unexpected:

success:   test passed, all is well.
failure:   test failed, as expected.
SUCCESS:   test passed, huh?
FAILURE:   test failed, you broke something.
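
A tiny sketch of that rule in Python (illustrative only): the case
carries the "was this expected" bit, while the word itself says what
actually happened:

   # Lower case = what we expected; upper case = a surprise.
   def label(passed, expected_to_pass):
       if passed:
           if expected_to_pass:
               return "success"     # test passed, all is well
           return "SUCCESS"         # test passed, huh?
       if expected_to_pass:
           return "FAILURE"         # test failed, you broke something
       return "failure"             # test failed, as expected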

Cheers,

Colin

