Posted to dev@xalan.apache.org by Sh...@lotus.com on 2001/05/16 16:08:57 UTC

Re: [RFC] Moving error/exception case conformance tests to new - test organization commentary

Hey! Someone(s) was still reading at the end of my mail!  8-)
Actually I'm still waiting for David Marston and Paul Dick to weigh in,
since they've done most of the work on organizing our test cases so far.

---- Tom Amiro <To...@Sun.COM> wrote ----
> I like the idea of separating the negative tests from the positive tests.
> While negative tests are very valuable, they are not as high a
> priority as positive tests, so being able to distinguish the two is
> very useful.
Good point.  In the past (in the Lotus team, before we donated stuff to
Apache) we did a lot of work on ways to categorize and store tests.  While
the tests have been checked into Apache, we may not have gotten around to
sharing our whiteboard exercises...

The 'conf' directory stands for conformance - specifically, conformance in
a fairly provable and rigorous way to the XSLT and XPath specs.  (Not
necessarily 'rigorous' in the mathematical proof sense, but certainly very
specific in the traditional software testing sense.)  Each conf test should
be a fairly small, conceptually 'atomic' test of a particular bit of the
spec.  We even have ideas for an XPath-like way for each test to point to
the specific sentence or phrase in the spec that it checks conformance
against.
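
Purely to illustrate that idea (none of this syntax is settled, and this
pointer is made up), such a reference might look something like

    xslt-19991116/section-5.2/para-3/sentence-2

naming the spec version, section, paragraph, and sentence that one atomic
test exercises.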

At that point, we put these *err tests in subdirectories of each of the
conf directories.  That way, all the axes tests stay together: conf/axes
holds positive conformance tests about axes, and conf/axes/err holds error
tests about axes.
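
To sketch that layout (the file names here are just illustrative):

    conf/
      axes/               positive conformance tests for axes
        axes01.xsl
        axes01.xml
        err/              error tests for axes
          axes01err.xsl
          axes01err.xml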

Now that we have a larger body of tests and experience using/automating
them over time, it feels like it'll be easier both to manage the tests and
to get the community to understand the tests if we split them more at the
top level: positive conformance vs. error tests vs. api tests, etc.  It
also turns out that storing them this way is simpler to automate and (IMO)
cleaner in terms of the TestDriver design.  (Note: ConformanceTest.java is
being deprecated; please see StylesheetTestletDriver.java,
StylesheetTestlet.java, and StylesheetErrorTestlet.java if you're
interested.)
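
If you're curious how the testlet split works, here's a minimal sketch of
the pattern.  The names and signatures below are my own simplification for
illustration, not the actual API of those classes:

    import java.io.File;

    // Hypothetical simplification: a testlet is one small,
    // single-purpose check, and the driver owns directory
    // walking and reporting.
    interface Testlet {
        // Return true if this stylesheet/xml pair behaved as expected.
        boolean execute(File xsl, File xml);
    }

    class TestletDriver {
        // Apply one testlet to every .xsl file found in a directory.
        int runAll(File dir, Testlet testlet) {
            int failures = 0;
            File[] files = dir.listFiles();
            for (int i = 0; i < files.length; i++) {
                String name = files[i].getName();
                if (!name.endsWith(".xsl")) {
                    continue;
                }
                // Assumed convention: foo.xsl pairs with foo.xml.
                File xml = new File(dir,
                        name.substring(0, name.length() - 4) + ".xml");
                if (!testlet.execute(files[i], xml)) {
                    failures++;
                }
            }
            return failures;
        }
    }

The point is that swapping in an error-flavored testlet changes what
'pass' means without touching the driver at all.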

> This may be trivial, but naming the directory "confneg" rather than
> "conferr" might be better. At least to me, "neg" more intuitively
> implies "negative test" than "err" does. One might think that errors
> thrown by positive tests get put in this "conferr" directory.
>
> Tom
Yes, there is definitely a difference in common perception between
negative, error, boundary, etc. cases (any QE/QA/test person would be happy
to fill you in on charts like this 8-)
But: the current definition that we had been using (perhaps not
communicated well enough to the community yet) is that these 'conferr'
tests are stylesheets that are expected to stop processing and throw some
exception.  Basically we started with what was simplest to automate:
stylesheets that, either while being built or during execution, will
report an error or fatalError, halt processing, and end up throwing a Java
Throwable or some derivative.  (We haven't updated these for the C
processor yet, although hopefully we can use the same tests with different
ExpectedException: values.)
Oh, also: currently, we do not validate the *.out output files from these
tests.  We can work on another category of tests later where we validate
both that a specific message/error is displayed and that the output files
are correct.
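
To make the 'expected to throw' idea concrete, here's a minimal sketch of
that check using the standard JAXP/TrAX API (my illustration of the
concept, not code from the actual drivers; the class name is made up):

    import java.io.File;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerConfigurationException;
    import javax.xml.transform.TransformerException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class ConfErrCheck {
        public static void main(String[] args) {
            File xsl = new File(args[0]);  // e.g. some conf/*/err/*err.xsl
            File xml = new File(args[1]);
            try {
                // A bad stylesheet should fail while being built...
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xsl));
                // ...or halt with an error during execution.
                t.transform(new StreamSource(xml),
                            new StreamResult(new StringWriter()));
                System.out.println("FAIL: no exception was thrown");
            } catch (TransformerConfigurationException tce) {
                System.out.println("PASS: threw while building stylesheet");
            } catch (TransformerException te) {
                System.out.println("PASS: threw during execution");
            }
        }
    }

Note the catch order: TransformerConfigurationException is a subclass of
TransformerException, so it has to be caught first to tell build-time
failures apart from run-time ones.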

We can argue about negative vs. error, but in this case (these tests
specifically should cause/throw an error) I'd lean towards error.  (Also
the existing test file names all end in *err.xsl.)

- Shane