Posted to dev@lucene.apache.org by Shai Erera <se...@gmail.com> on 2013/10/14 14:16:49 UTC

Should build fail if test does not exist?

Hi

If you run something like "ant test -Dtestcase=TestFoo" where there's no
TestFoo.java, or "ant test -Dtestcase=TestBar -Dtests.method=testFoo" where
there's TestBar.java but no testFoo() method, the build currently passes as
SUCCESSFUL, though the report says "0 suites" or "1 suite, 0 tests".

I wonder if it should be a build failure? The problem I have is that you
can e.g. make a very stupid typo (usually around plurals as in
testSomethingBadOnIOException vs testSomethingBadOnIOExceptions) and get a
false SUCCESSFUL notification.

If we want to fix/change it, where does it belong - build scripts,
LuceneTestCase or randomizedrunner?

Shai

Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> I get an error

An error? Or no executed tests? It should be no-matching tests, really.

> Is it possible that the runner will add that '*' for me so that I just specify the test method name?

In short, no, it's not possible to do so automatically. There is no
such thing as a "method name" in JUnit; there's only a string
description, and it can be anything -- it doesn't have to follow any
rules or conventions. The runner would have to guess what you had in
mind.
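
(A minimal illustration of that point, using JUnit 4's Description API --
the seed-suffixed name below is only an assumed example of what a repeated
test's description might look like, not the runner's exact format:)

    import org.junit.runner.Description;

    public class NameIsJustAString {
      public static void main(String[] args) {
        // A repeated test case gets an arbitrary display name; JUnit treats
        // it as an opaque string, not as "class + method".
        Description d = Description.createTestDescription(
            NameIsJustAString.class, "testFoo {#1 seed=[DEADBEEF]}");

        // An exact match on the raw method name therefore finds nothing...
        System.out.println("testFoo".equals(d.getMethodName()));      // false
        // ...while a glob such as testFoo* would match.
        System.out.println(d.getMethodName().startsWith("testFoo"));  // true
      }
    }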


Dawid

On Tue, Oct 15, 2013 at 1:38 PM, Shai Erera <se...@gmail.com> wrote:
>> Then when you want to rerun your test and limit to that
>> particular method you need to glob somehow because these descriptions
>> no longer correspond to raw method names
>
>
> I think it is related but not sure, yet it bugs me. When I want to run same
> method with iterations I need to specify "-Dtests.method=testFoo*
> -Dtests.iters=10". If I just specify testFoo, I get an error. Is it possible
> that the runner will add that '*' for me so that I just specify the test
> method name? Or is this also related to JUnit limitation?
>
> Shai
>
>
> On Tue, Oct 15, 2013 at 2:18 PM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>>
>> On Tue, Oct 15, 2013 at 6:56 AM, Dawid Weiss
>> <da...@cs.put.poznan.pl> wrote:
>> >> Hmm, you couldn't just enter the unqualified class name, and then it'd
>> >> match any package containing that class name?  LIke "**/TestFooBar"
>> >> patternset pattern I think?
>> >
>> > This isn't different to your "duplicated test match" scenario. Same
>> > class could be present in multiple packages -- what then?  :)
>>
>> Just run both tests in that case, I think?  User can fully qualify if
>> they really want to run a specific one?
>>
>> That's a much more benign error than saying BUILD SUCCESSFUL when no
>> test actually ran.
>>
>> >> Hmm, so globs are a convenient way to work around a JUnit limitation?
>> >
>> > I'm not talking about globs, I'm talking about the fact that every
>> > test needs a unique string description so if you want to re-run the
>> > same method multiple times in JUnit you need to differentiate them
>> > somehow. Then when you want to rerun your test and limit to that
>> > particular method you need to glob somehow because these descriptions
>> > no longer correspond to raw method names... eh, it's a longer
>> > discussion.
>>
>> OK, hairy :)
>>
>> >> I also did not remember -Dtests.method=XXX was different.... I believe
>> >> you've explained this many times already ;)
>> >>
>> >> I'll [hopefully remember to] use -Dtests.method from now on!
>> >
>> > See the top of 'ant test-help':
>>
>> Thanks.
>>
>> > You're right -- testmethod is a prefix match and testcase is a suffix/
>> > class match. The runner only "understands" tests.class and
>> > tests.method -- if you're running from Eclipse or Idea, for example,
>> > only these will be picked up.
>>
>> OK.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>
>



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@gmail.com>.
Such things are easier when you don't have the JUnit legacy and all the user
interfaces (Eclipse, IDEA, ant) to take into the equation. JUnit simply assumes
a single-JVM model and needs all test descriptions in advance. There are
other frameworks out there that would be more suitable for forking tests
into separate JVMs, but I bet there would be voices of criticism that you can
no longer run tests from your IDE... There are always pros and cons. I
will look into it soon and reevaluate the options, promise. D.
On Oct 15, 2013 3:38 PM, "Michael McCandless" <lu...@mikemccandless.com>
wrote:

> With the Python runner (repeatLuceneTest.py in luceneutil), -iters N
> translates into -Dtests.iters=N, while -jvms M translates into running
> M JVMs concurrently (for better beasting), so you can combine them to
> get lots of beasting.
>
> The runner exits once any of the JVMs hit a failure...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Tue, Oct 15, 2013 at 9:32 AM, Shai Erera <se...@gmail.com> wrote:
> > That's what I've seen too -- it picks the master seed once and then all
> > iters pick their own derivative seeds. So if the test is random at the
> > before/after/test level, it usually was enough to find bugs after many
> > iterations.
> >
> > Shai
> >
> >
> > On Tue, Oct 15, 2013 at 4:26 PM, Dawid Weiss <
> dawid.weiss@cs.put.poznan.pl>
> > wrote:
> >>
> >> >> What do you mean? Doesn't it execute the test many times, picking
> >> >> different
> >> >> seeds each time?
> >>
> >> Only at the test (method) level, at @Before @After hooks and at @Rule
> >> blocks. @BeforeClass, @AfterClass and class rules are ran with an
> >> identical seed (because you'd have to effectively reload the class
> >> under a different class loader or rerun under a different JVM).
> >>
> >> This has been long on my list of things to fix, but it's not as
> >> trivial as it sounds to change it.
> >>
> >> Dawid
> >>
> >> >>
> >> >
> >> > No it does not!!!!!!
> >> >
> >> > ---------------------------------------------------------------------
> >> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> >> > For additional commands, e-mail: dev-help@lucene.apache.org
> >> >
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> >> For additional commands, e-mail: dev-help@lucene.apache.org
> >>
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
With the Python runner (repeatLuceneTest.py in luceneutil), -iters N
translates into -Dtests.iters=N, while -jvms M translates into running
M JVMs concurrently (for better beasting), so you can combine them to
get lots of beasting.

The runner exits once any of the JVMs hits a failure...

Mike McCandless

http://blog.mikemccandless.com


On Tue, Oct 15, 2013 at 9:32 AM, Shai Erera <se...@gmail.com> wrote:
> That's what I've seen too -- it picks the master seed once and then all
> iters pick their own derivative seeds. So if the test is random at the
> before/after/test level, it usually was enough to find bugs after many
> iterations.
>
> Shai
>
>
> On Tue, Oct 15, 2013 at 4:26 PM, Dawid Weiss <da...@cs.put.poznan.pl>
> wrote:
>>
>> >> What do you mean? Doesn't it execute the test many times, picking
>> >> different
>> >> seeds each time?
>>
>> Only at the test (method) level, at @Before @After hooks and at @Rule
>> blocks. @BeforeClass, @AfterClass and class rules are ran with an
>> identical seed (because you'd have to effectively reload the class
>> under a different class loader or rerun under a different JVM).
>>
>> This has been long on my list of things to fix, but it's not as
>> trivial as it sounds to change it.
>>
>> Dawid
>>
>> >>
>> >
>> > No it does not!!!!!!
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> > For additional commands, e-mail: dev-help@lucene.apache.org
>> >
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>
>



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
This is really dangerous: if there is a bug with the XYZ codec, using this
approach will not find it.

Nor will it find bugs if, e.g., the index is constructed in a @BeforeClass method.

I strongly recommend not using this option; I *really* think it should
be removed. This has come up from time to time before.

On Tue, Oct 15, 2013 at 6:32 AM, Shai Erera <se...@gmail.com> wrote:
> That's what I've seen too -- it picks the master seed once and then all
> iters pick their own derivative seeds. So if the test is random at the
> before/after/test level, it usually was enough to find bugs after many
> iterations.
>
> Shai
>
>
> On Tue, Oct 15, 2013 at 4:26 PM, Dawid Weiss <da...@cs.put.poznan.pl>
> wrote:
>>
>> >> What do you mean? Doesn't it execute the test many times, picking
>> >> different
>> >> seeds each time?
>>
>> Only at the test (method) level, at @Before @After hooks and at @Rule
>> blocks. @BeforeClass, @AfterClass and class rules are ran with an
>> identical seed (because you'd have to effectively reload the class
>> under a different class loader or rerun under a different JVM).
>>
>> This has been long on my list of things to fix, but it's not as
>> trivial as it sounds to change it.
>>
>> Dawid
>>
>> >>
>> >
>> > No it does not!!!!!!
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> > For additional commands, e-mail: dev-help@lucene.apache.org
>> >
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>
>



Re: Should build fail if test does not exist?

Posted by Shai Erera <se...@gmail.com>.
That's what I've seen too -- it picks the master seed once and then all
iters pick their own derived seeds. So if the test is random at the
before/after/test level, it was usually enough to find bugs after many
iterations.

Shai


On Tue, Oct 15, 2013 at 4:26 PM, Dawid Weiss
<da...@cs.put.poznan.pl>wrote:

> >> What do you mean? Doesn't it execute the test many times, picking
> different
> >> seeds each time?
>
> Only at the test (method) level, at @Before @After hooks and at @Rule
> blocks. @BeforeClass, @AfterClass and class rules are ran with an
> identical seed (because you'd have to effectively reload the class
> under a different class loader or rerun under a different JVM).
>
> This has been long on my list of things to fix, but it's not as
> trivial as it sounds to change it.
>
> Dawid
>
> >>
> >
> > No it does not!!!!!!
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> > For additional commands, e-mail: dev-help@lucene.apache.org
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
>> What do you mean? Doesn't it execute the test many times, picking different
>> seeds each time?

Only at the test (method) level, at @Before/@After hooks and at @Rule
blocks. @BeforeClass, @AfterClass and class rules are run with an
identical seed (because you'd have to effectively reload the class
under a different class loader or rerun under a different JVM).

This has long been on my list of things to fix, but it's not as
trivial as it sounds to change it.
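
(A rough illustration of the seed model only -- this is not
RandomizedRunner's actual code; it just assumes per-method seeds derived
from one master seed, while class-level setup only ever sees the master:)

    import java.util.Random;

    public class SeedDerivationSketch {
      public static void main(String[] args) {
        long masterSeed = 0xDEADBEEFL;  // picked once per suite run

        // @BeforeClass / class rules: same randomness on every iteration.
        Random classLevel = new Random(masterSeed);
        System.out.println("class-level pick: " + classLevel.nextInt(100));

        // Each test iteration gets its own seed derived from the master,
        // so @Before/@After/@Test see different randomness per iteration.
        for (int iter = 0; iter < 3; iter++) {
          long methodSeed = masterSeed ^ (0x9E3779B97F4A7C15L * (iter + 1));
          Random perIteration = new Random(methodSeed);
          System.out.println("iter " + iter + ": " + perIteration.nextInt(100));
        }
      }
    }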

Dawid

>>
>
> No it does not!!!!!!
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Tue, Oct 15, 2013 at 6:20 AM, Shai Erera <se...@gmail.com> wrote:
>> please dont use -Dtests.iters! It gives a false sense of security.
>
>
> What do you mean? Doesn't it execute the test many times, picking different
> seeds each time?
>

No it does not!!!!!!



Re: Should build fail if test does not exist?

Posted by Shai Erera <se...@gmail.com>.
>
> please dont use -Dtests.iters! It gives a false sense of security.
>

What do you mean? Doesn't it execute the test many times, picking different
seeds each time?

Actually, since I managed to run luceneutil/repeatLuceneTest.py on my
Windows machine, I almost don't use iters anymore, as repeatLuceneTest.py does what
tests.iters and tests.dups do together -- multiple JVMs, with each JVM getting
to pick its own seed.

Shai


On Tue, Oct 15, 2013 at 3:06 PM, Robert Muir <rc...@gmail.com> wrote:

> On Tue, Oct 15, 2013 at 7:38 AM, Shai Erera <se...@gmail.com> wrote:
>
> >
> > I think it is related but not sure, yet it bugs me. When I want to run
> same
> > method with iterations I need to specify "-Dtests.method=testFoo*
> > -Dtests.iters=10". If I just specify testFoo, I get an error. Is it
> possible
> > that the runner will add that '*' for me so that I just specify the test
> > method name? Or is this also related to JUnit limitation?
> >
>
> please dont use -Dtests.iters! It gives a false sense of security.
>
> I would like to remove this option.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> please dont use -Dtests.iters! It gives a false sense of security.

Just so that people don't freak out -- what tests.iters does is
duplicate the tests, but the "suite" will only run once. Since
LuceneTestCase picks most random components at the static level,
tests.iters=X really runs just one combination of statically bound
components, hence Robert's warning.
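
(A hedged sketch of the trap in plain JUnit 4 -- the codec pick below
merely stands in for whatever LuceneTestCase actually randomizes in its
static setup:)

    import java.util.Random;

    import org.junit.BeforeClass;
    import org.junit.Test;

    public class StaticRandomizationSketch {
      static String codec;  // chosen once per suite, NOT once per iteration

      @BeforeClass
      public static void pickCodec() {
        // With -Dtests.iters=N only the test method is duplicated; this
        // runs once, so every iteration sees the same randomly chosen codec.
        String[] codecs = { "Lucene46", "SimpleText", "Asserting" };
        codec = codecs[new Random().nextInt(codecs.length)];
      }

      @Test
      public void testSomething() {
        // Per-iteration randomness varies here, but 'codec' never changes --
        // hence the "false sense of security" warning.
        System.out.println("running against codec: " + codec);
      }
    }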

Dawid



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Tue, Oct 15, 2013 at 7:38 AM, Shai Erera <se...@gmail.com> wrote:

>
> I think it is related but not sure, yet it bugs me. When I want to run same
> method with iterations I need to specify "-Dtests.method=testFoo*
> -Dtests.iters=10". If I just specify testFoo, I get an error. Is it possible
> that the runner will add that '*' for me so that I just specify the test
> method name? Or is this also related to JUnit limitation?
>

please dont use -Dtests.iters! It gives a false sense of security.

I would like to remove this option.



Re: Should build fail if test does not exist?

Posted by Shai Erera <se...@gmail.com>.
>
> Then when you want to rerun your test and limit to that
> particular method you need to glob somehow because these descriptions
> no longer correspond to raw method names
>

I think it is related, but I'm not sure; either way it bugs me. When I want to run the same
method with iterations I need to specify "-Dtests.method=testFoo*
-Dtests.iters=10". If I just specify testFoo, I get an error. Is it
possible that the runner will add that '*' for me so that I just specify
the test method name? Or is this also related to a JUnit limitation?

Shai


On Tue, Oct 15, 2013 at 2:18 PM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> On Tue, Oct 15, 2013 at 6:56 AM, Dawid Weiss
> <da...@cs.put.poznan.pl> wrote:
> >> Hmm, you couldn't just enter the unqualified class name, and then it'd
> >> match any package containing that class name?  LIke "**/TestFooBar"
> >> patternset pattern I think?
> >
> > This isn't different to your "duplicated test match" scenario. Same
> > class could be present in multiple packages -- what then?  :)
>
> Just run both tests in that case, I think?  User can fully qualify if
> they really want to run a specific one?
>
> That's a much more benign error than saying BUILD SUCCESSFUL when no
> test actually ran.
>
> >> Hmm, so globs are a convenient way to work around a JUnit limitation?
> >
> > I'm not talking about globs, I'm talking about the fact that every
> > test needs a unique string description so if you want to re-run the
> > same method multiple times in JUnit you need to differentiate them
> > somehow. Then when you want to rerun your test and limit to that
> > particular method you need to glob somehow because these descriptions
> > no longer correspond to raw method names... eh, it's a longer
> > discussion.
>
> OK, hairy :)
>
> >> I also did not remember -Dtests.method=XXX was different.... I believe
> >> you've explained this many times already ;)
> >>
> >> I'll [hopefully remember to] use -Dtests.method from now on!
> >
> > See the top of 'ant test-help':
>
> Thanks.
>
> > You're right -- testmethod is a prefix match and testcase is a suffix/
> > class match. The runner only "understands" tests.class and
> > tests.method -- if you're running from Eclipse or Idea, for example,
> > only these will be picked up.
>
> OK.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Tue, Oct 15, 2013 at 6:56 AM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
>> Hmm, you couldn't just enter the unqualified class name, and then it'd
>> match any package containing that class name?  LIke "**/TestFooBar"
>> patternset pattern I think?
>
> This isn't different to your "duplicated test match" scenario. Same
> class could be present in multiple packages -- what then?  :)

Just run both tests in that case, I think?  User can fully qualify if
they really want to run a specific one?

That's a much more benign error than saying BUILD SUCCESSFUL when no
test actually ran.

>> Hmm, so globs are a convenient way to work around a JUnit limitation?
>
> I'm not talking about globs, I'm talking about the fact that every
> test needs a unique string description so if you want to re-run the
> same method multiple times in JUnit you need to differentiate them
> somehow. Then when you want to rerun your test and limit to that
> particular method you need to glob somehow because these descriptions
> no longer correspond to raw method names... eh, it's a longer
> discussion.

OK, hairy :)

>> I also did not remember -Dtests.method=XXX was different.... I believe
>> you've explained this many times already ;)
>>
>> I'll [hopefully remember to] use -Dtests.method from now on!
>
> See the top of 'ant test-help':

Thanks.

> You're right -- testmethod is a prefix match and testcase is a suffix/
> class match. The runner only "understands" tests.class and
> tests.method -- if you're running from Eclipse or Idea, for example,
> only these will be picked up.

OK.

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> Hmm, you couldn't just enter the unqualified class name, and then it'd
> match any package containing that class name?  LIke "**/TestFooBar"
> patternset pattern I think?

This isn't different to your "duplicated test match" scenario. Same
class could be present in multiple packages -- what then?  :)

> Hmm, so globs are a convenient way to work around a JUnit limitation?

I'm not talking about globs; I'm talking about the fact that every
test needs a unique string description, so if you want to re-run the
same method multiple times in JUnit you need to differentiate them
somehow. Then, when you want to rerun your test and limit it to that
particular method, you need to glob somehow because these descriptions
no longer correspond to raw method names... eh, it's a longer
discussion.

> I also did not remember -Dtests.method=XXX was different.... I believe
> you've explained this many times already ;)
>
> I'll [hopefully remember to] use -Dtests.method from now on!

See the top of 'ant test-help':

#
# Test case filtering. --------------------------------------------
#
# - 'tests.class' is a class-filtering shell-like glob pattern,
#   'testcase' is an alias of "tests.class=*.${testcase}"
# - 'tests.method' is a method-filtering glob pattern.
#   'testmethod' is an alias of "tests.method=${testmethod}*"
#

You're right -- testmethod is a prefix match and testcase is a suffix/
class match. The runner only "understands" tests.class and
tests.method -- if you're running from Eclipse or Idea, for example,
only these will be picked up.
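
(A small sketch of how those two aliases behave; the glob matcher below is
a throwaway illustration, not the build's or the runner's real matching
code:)

    public class AliasExpansionSketch {
      // Translate a shell-like glob into a regex ('*' matches anything).
      static boolean globMatches(String glob, String input) {
        return input.matches(glob.replace(".", "\\.").replace("*", ".*"));
      }

      public static void main(String[] args) {
        // -Dtestcase=TestFoo becomes the class filter "*.TestFoo" (suffix match).
        System.out.println(globMatches("*.TestFoo", "org.apache.lucene.TestFoo"));  // true

        // -Dtestmethod=testBar becomes the method filter "testBar*" (prefix
        // match), which also catches iteration-expanded names...
        System.out.println(globMatches("testBar*", "testBar {#2 seed=[ABC]}"));     // true
        // ...and any other method that happens to share the prefix.
        System.out.println(globMatches("testBar*", "testBarDeletes"));              // true
      }
    }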

Dawid



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Tue, Oct 15, 2013 at 4:11 AM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
>> Is this (the "globbing") really an important feature?  Is it somehow
>> necessary in the implementation for other reasons?
>
> There are a few reasons for that:
>
> - without globbing you'd have to provide a fully qualified class name
> (and test name);

Hmm, you couldn't just enter the unqualified class name, and then it'd
match any package containing that class name?  Like a "**/TestFooBar"
patternset pattern, I think?

> - Lucene doesn't use it but if we did have one test-covers-all type of
> execution then you could run all of a given module's tests by giving
> its package prefix,

Hmm ok.

> - remember that the test case "names" are not necessarily the method
> name -- when repeat annotation (or sysprop) is used each test method
> expands into multiple tests (the name then includes the seed and
> repetition number if that's not enough to distinguish individual test
> cases). These things stem from JUnit limitations. And globbing is a
> simple way to run all test cases expanded from a single method.

Hmm, so globs are a convenient way to work around a JUnit limitation?

> I personally do use globbing to restrict to a package (when I wish to
> test a single module), so it's not a dead feature :)

Fair enough :)

I feel you are putting the cart before the horse here, i.e. you are
"implementation blind": you justify the current behavior (saying BUILD
SUCCESSFUL when no test actually ran) because of the current
implementation, i.e. "it's the glob pattern that matched nothing, and
that's not an error in general".  Whereas what I see is a nasty trap:
the user makes a typo (e.g. javac Foo*.java fails when there is
no match), an assumption is hit, etc.  But at this point I think
we simply disagree ...

>> Another (third) trap I've hit is trying to run a single method, but
>> accidentally running two because the first method is a prefix of the
>> second one ... when this happens, I go and rename the methods so there
>> is no prefix anymore.
>
> This is because in Lucene's ANT code there are backcompat settings
> that expand testcase/testmethod into any substring match. If you used
> tests.method and tests.class you could specify these accurately.

Oh, I hadn't realized/remembered -Dtestmethod=XXX previously did
substring (did you mean prefix?) expansion too!

I also did not remember -Dtests.method=XXX was different.... I believe
you've explained this many times already ;)

I'll [hopefully remember to] use -Dtests.method from now on!

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> Is this (the "globbing") really an important feature?  Is it somehow
> necessary in the implementation for other reasons?

There are a few reasons for that:

- without globbing you'd have to provide a fully qualified class name
(and test name);
- Lucene doesn't use it, but if we did have one test-covers-all type of
execution then you could run all of a given module's tests by giving
its package prefix;
- remember that the test case "names" are not necessarily the method
name -- when repeat annotation (or sysprop) is used each test method
expands into multiple tests (the name then includes the seed and
repetition number if that's not enough to distinguish individual test
cases). These things stem from JUnit limitations. And globbing is a
simple way to run all test cases expanded from a single method.
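
(For example -- a sketch assuming the randomizedtesting @Repeat annotation;
the exact display-name format is an assumption for illustration:)

    import org.junit.Test;
    import org.junit.runner.RunWith;

    import com.carrotsearch.randomizedtesting.RandomizedRunner;
    import com.carrotsearch.randomizedtesting.annotations.Repeat;

    @RunWith(RandomizedRunner.class)
    public class RepeatedNamesSketch {
      // Expands into several test cases whose names carry the seed and
      // repetition, e.g. "testDelete {#0 seed=[...]}", "testDelete {#1 ...}".
      @Test
      @Repeat(iterations = 5)
      public void testDelete() {
        // An exact filter of tests.method=testDelete won't match those
        // expanded names; the glob tests.method=testDelete* matches them all.
      }
    }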

I personally do use globbing to restrict to a package (when I wish to
test a single module), so it's not a dead feature :)

> Another (third) trap I've hit is trying to run a single method, but
> accidentally running two because the first method is a prefix of the
> second one ... when this happens, I go and rename the methods so there
> is no prefix anymore.

This is because in Lucene's ANT code there are backcompat settings
that expand testcase/testmethod into any substring match. If you used
tests.method and tests.class you could specify these accurately.

Dawid



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 14, 2013 at 3:38 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:

> You need to know your tools -- we should make it more explicit that
> these filters are glob patterns rather than hide this even deeper.

Actually I find it very strange that they are glob patterns, and I
never (intentionally!) take advantage of that.

When I specify a test case and test method, it's always because I'm
trying to specify precisely one.

Is this (the "globbing") really an important feature?  Is it somehow
necessary in the implementation for other reasons?

Another (third) trap I've hit is trying to run a single method, but
accidentally running two because the first method is a prefix of the
second one ... when this happens, I go and rename the methods so there
is no prefix anymore.

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> I just wish more devs were able to help out on our test infra ... it
> shouldn't have to be you always responding to our crazy feature
> requests!

I don't mind that at all. I am just resistant to adding things that
will complicate the ant build even more than it already is...

I still don't think your comparison to javac is correct. It's closer
to adding a special-case exception/warning/confirmation to the "rm"
command just for the particular scenario of it receiving the combination
of "-rf /" arguments -- the effects are annoying, but the scenario is
rare and the code would be mostly dead.

You need to know your tools -- we should make it more explicit that
these filters are glob patterns rather than hide this even deeper.

And now - back to work, comrades!

Dawid



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 14, 2013 at 2:22 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
>> Wait, I think it is an error :)  Yes, a hard to fix error (our test
>> infra is complex), but still an error.
>
> It's a mistake in the filter declaration, not an error in its application.

The difference is really an implementation detail :)

I just want to run one test; that this is a 2-step process (append
wildcard & apply filter, execute tests that matched that filter) ...
is not important to the user.

>> It's like "javac foo.java" returning successfully when foo.java doesn't exist.
>
> I don't think this is particularly matching since filters are pattern
> matching. It's more like:
>
> grep "nonexistent" < input
>
> returning an error just because a pattern didn't occur in input.

Well, javac foo*.java does result in an error, if no files match
foo*.java.  Ie javac is angry that it had nothing to compile.

> Like I said -- I'm really indifferent to this, I can add it. I just
> don't think we should cater for all the possible typos and errors one
> can make - this would be insane.

OK, thanks for opening the "wish" issue.

I just wish more devs were able to help out on our test infra ... it
shouldn't have to be you always responding to our crazy feature
requests!

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
Anyway, I filed LUCENE-5283 as a "wish". :) I'll add it in a spare
cycle -- it'll work at the module level, and you can override your
local settings if you want zero-test executions to fail your build.

Dawid

On Mon, Oct 14, 2013 at 8:22 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
>> Wait, I think it is an error :)  Yes, a hard to fix error (our test
>> infra is complex), but still an error.
>
> It's a mistake in the filter declaration, not an error in its application.
>
>> It's like "javac foo.java" returning successfully when foo.java doesn't exist.
>
> I don't think this is particularly matching since filters are pattern
> matching. It's more like:
>
> grep "nonexistent" < input
>
> returning an error just because a pattern didn't occur in input.
>
> Like I said -- I'm really indifferent to this, I can add it. I just
> don't think we should cater for all the possible typos and errors one
> can make - this would be insane.
>
> D.



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> Wait, I think it is an error :)  Yes, a hard to fix error (our test
> infra is complex), but still an error.

It's a mistake in the filter declaration, not an error in its application.

> It's like "javac foo.java" returning successfully when foo.java doesn't exist.

I don't think that comparison particularly fits, since filters are pattern
matching. It's more like:

grep "nonexistent" < input

returning an error just because a pattern didn't occur in input.

Like I said -- I'm really indifferent to this, I can add it. I just
don't think we should cater for all the possible typos and errors one
can make - this would be insane.

D.



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Mon, Oct 14, 2013 at 2:36 PM, Michael McCandless
<lu...@mikemccandless.com> wrote:

> In fact, this would close another test trap I've hit, where I run a
> single test (spelled correctly!), yet the test hit an Assume, appears
> to pass (BUILD SUCCESSFUL) but actually did not run anything, I
> commit, test then fails in jenkins for a stupid reason and I'm like
> "WTF?  I swear I tested it".
>

This is the complexity I don't like, though. Now assume() sometimes
fails the build?
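
(For context, a minimal JUnit 4 sketch of the assumption case being
discussed: a failed assumption silently skips the method, so a single-test
run can execute nothing and still end in BUILD SUCCESSFUL:)

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class AssumeSkipSketch {
      @Test
      public void testNeedsSpecialCodec() {
        // If the randomly picked configuration is unsuitable, the test is
        // skipped, not failed -- the build sees nothing wrong.
        boolean codecSupported = false;  // stands in for a real capability check
        assumeTrue(codecSupported);

        // never reached when the assumption does not hold
        throw new AssertionError("real test body would run here");
      }
    }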



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
We differ in opinion on this one, but I don't see any point in
arguing -- I'll implement it; whoever wants to use it, that's their free
will.

> Net/net I think your simple solution would be great?
> Really I just want a "run this exact test name, for sure!" test target.

Mind you, it's not exactly like that. The condition here will be "at
least one test was executed"; it's still a pattern, so if you make a
typo that still matches and runs something, it will pass as successful.
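
(Roughly the check being described, sketched with plain JUnit's Result for
illustration -- the real change lives in the runner/ant integration, not in
standalone code like this:)

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class FailOnZeroTestsSketch {
      public static void main(String[] args) throws ClassNotFoundException {
        // Run whatever the filter matched (here: a single class by name).
        Result result = new JUnitCore().run(Class.forName(args[0]));

        // "At least one test was executed" -- if the filter matched nothing,
        // treat it as a failure instead of a silent success.
        if (result.getRunCount() == 0) {
          System.err.println("No tests executed -- failing.");
          System.exit(1);
        }
        System.exit(result.wasSuccessful() ? 0 : 1);
      }
    }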

Dawid



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 14, 2013 at 2:20 PM, Robert Muir <rc...@gmail.com> wrote:
> On Mon, Oct 14, 2013 at 11:11 AM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>>
>> Maybe a simple compromise: could we make a new ant target,
>> "run-specific-test-for-certain", whose purpose was to run that test
>> and fail if no such test was not found?  I think it's fine if this
>> only works at the "module" level, if ant makes it too hairy to
>> recurse.
>>
>
> But is that really a compromise or will it only lead to increased
> complexity? I can see how i would implement this now,
> e.g. i'd just create a fileset of build/*pattern.class and fail if it
> was empty, and then invoke 'test' target.
>
> But I'm worried the simple compromise would lead us down a slippery
> slope, once we opened the can of worms, someone would eventually want
> it to validate that you didn't typo the -Dtestmethod too right?

I'm less concerned about that -- if you typo that, then all tests run,
right?  That quickly becomes an obvious user error :)

I think the other direction (BUILD SUCCESSFUL when the test did not in
fact run) is much worse: it's a non-obvious user error.  You go away
gleeful that your test passed.

> And someone might be upset that my simple solution fails always if
> they run clean first (since class file doesnt exist), or that it
> doesnt fail if the test is @Ignored, or @Nightly and they forgot to
> supply -Dtests.nightly, or @BadApples or @Slow, or throws an
> assumption always because it suppresses Lucene3xCodec and you supply
> -Dtests.codec=Lucene3x, or ... (it goes on and on and on). And one
> could argue tehse are all traps and its trappy if we dont fix it.  :)

Actually I think these would be good failures, i.e. if it was a
nightly test and I did not specify -Dtests.nightly then it should
fail, so I know something went wrong when I tried to run "the one
test".

I think the restriction of "you must be in the module's directory" is
acceptable.

In fact, this would close another test trap I've hit, where I run a
single test (spelled correctly!), yet the test hits an Assume and appears
to pass (BUILD SUCCESSFUL) but actually did not run anything; I
commit, the test then fails in Jenkins for a stupid reason, and I'm like
"WTF?  I swear I tested it".

Net/net I think your simple solution would be great?

Really I just want a "run this exact test name, for sure!" test target.

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Mon, Oct 14, 2013 at 11:11 AM, Michael McCandless
<lu...@mikemccandless.com> wrote:
>
> Maybe a simple compromise: could we make a new ant target,
> "run-specific-test-for-certain", whose purpose was to run that test
> and fail if no such test was not found?  I think it's fine if this
> only works at the "module" level, if ant makes it too hairy to
> recurse.
>

But is that really a compromise, or will it only lead to increased
complexity? I can see how I would implement this now:
e.g. I'd just create a fileset of build/*pattern.class and fail if it
was empty, and then invoke the 'test' target.
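
(A rough standalone sketch of that check -- the real thing would be a few
lines of ant, not Java; the 'build' directory name and the glob are
assumptions for illustration:)

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class MatchingTestClassCheck {
      public static void main(String[] args) throws IOException {
        String pattern = args.length > 0 ? args[0] : "TestIndexWriter";
        PathMatcher matcher = FileSystems.getDefault()
            .getPathMatcher("glob:**/*" + pattern + "*.class");

        // Fail fast if no compiled test class matches the requested pattern.
        try (Stream<Path> files = Files.walk(Paths.get("build"))) {
          if (files.noneMatch(matcher::matches)) {
            System.err.println("No test class matches '" + pattern + "'");
            System.exit(1);
          }
        }
        // ...otherwise go on and invoke the normal 'test' target.
      }
    }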

But I'm worried the simple compromise would lead us down a slippery
slope: once we opened the can of worms, someone would eventually want
it to validate that you didn't typo the -Dtestmethod too, right?

And someone might be upset that my simple solution always fails if
they run clean first (since the class file doesn't exist), or that it
doesn't fail if the test is @Ignored, or @Nightly and they forgot to
supply -Dtests.nightly, or @BadApples or @Slow, or always throws an
assumption because it suppresses Lucene3xCodec and you supply
-Dtests.codec=Lucene3x, or ... (it goes on and on and on). And one
could argue these are all traps and it's trappy if we don't fix it.  :)



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 14, 2013 at 12:46 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:

> This isn't an error condition (to me).

Wait, I think it is an error :)  Yes, a hard to fix error (our test
infra is complex), but still an error.

Ie, if I ask "ant test" to run a specific test, and it finds no such
test, yet declares "BUILD SUCCESSFUL" in the end, how can such lenient
error checking be good?  It's trappy.

It's like "javac foo.java" returning successfully when foo.java doesn't exist.

I'm not using wildcards when I ask it to run a test :)  Sure, maybe
wildcards are used under the hood, but this is an implementation
detail.

Maybe a simple compromise: could we make a new ant target,
"run-specific-test-for-certain", whose purpose was to run that test
and fail if no such test was not found?  I think it's fine if this
only works at the "module" level, if ant makes it too hairy to
recurse.

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
Exactly. If you pass a wildcard that filters out everything, then you
end up with no tests. This isn't an error condition (to me).

I can add an option to the runner to fail on zero executed test cases,
but then Robert's example won't work. I don't think there is any
sensible way to do it across "modules" -- aggregation would be very
difficult because every ant call is pretty much isolated and doesn't
know anything about the past/future. It'd be extremely hacky.

If you need to filter to a single test case in a particular module, cd to
that module and run your tests there -- then your command-line output will
end with the number of test cases actually run.
the top level then filtering just shouldn't fail -- it's very likely
that one module or another won't contain any tests matching your
filter. You can always resort to Mike's pythacks and spawn your tests
from there.

Dawid

On Mon, Oct 14, 2013 at 5:01 PM, Robert Muir <rc...@gmail.com> wrote:
> I now understand why Dawid tried to make it clear that this stuff is
> wildcard matching.
>
>   <!-- Aliases for tests filters -->
>   <condition property="tests.class" value="*.${testcase}">
>     <isset property="testcase" />
>   </condition>
>
> Its sorta like shell expansion on the unix prompt: 'echo *' shouldnt
> return non-zero because there are no files in the current directory.
> thats because its very general and has a lot of use cases. On the
> other hand, it makes sense that 'ls *' returns 1 in this case, because
> its sole purpose is listing files.
>
> The same can be said for your python test-repeater
>
>
> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>> This has actually bit me before too ...
>>
>> I mean, sure, I do eventually notice that it ran too quickly and so it
>> was not in fact really SUCCESSFUL.
>>
>> Why would Rob's example fail?  In that case, it would have in fact run
>> TestIndexWriter, right?  (Sure, other modules didn't have such a test,
>> but the fact that one of the visited modules did have the test should
>> mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
>> with ant to make this logic be "across modules"?
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Mon, Oct 14, 2013 at 9:41 AM, Shai Erera <se...@gmail.com> wrote:
>>> I see, didn't think about that usecase. Ok so let's not do it.
>>>
>>> Shai
>>>
>>>
>>> On Mon, Oct 14, 2013 at 4:27 PM, Robert Muir <rc...@gmail.com> wrote:
>>>>
>>>> On Mon, Oct 14, 2013 at 9:11 AM, Shai Erera <se...@gmail.com> wrote:
>>>> >
>>>> > What's the harm of failing the build in that case?
>>>> >
>>>>
>>>> because i should be able to do this and for it to pass:
>>>>
>>>> cd lucene/
>>>> ant test -Dtestcase=TestIndexWriter
>>>>
>>>> So please, don't make this change. it would totally screw everything up
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>>>> For additional commands, e-mail: dev-help@lucene.apache.org
>>>>
>>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
I now understand why Dawid tried to make it clear that this stuff is
wildcard matching.

  <!-- Aliases for tests filters -->
  <condition property="tests.class" value="*.${testcase}">
    <isset property="testcase" />
  </condition>

It's sorta like shell expansion at the unix prompt: 'echo *' shouldn't
return non-zero just because there are no files in the current directory;
that's because it's very general and has a lot of use cases. On the
other hand, it makes sense that 'ls *' returns 1 in this case, because
its sole purpose is listing files.

The same can be said for your python test-repeater.


On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
<lu...@mikemccandless.com> wrote:
> This has actually bit me before too ...
>
> I mean, sure, I do eventually notice that it ran too quickly and so it
> was not in fact really SUCCESSFUL.
>
> Why would Rob's example fail?  In that case, it would have in fact run
> TestIndexWriter, right?  (Sure, other modules didn't have such a test,
> but the fact that one of the visited modules did have the test should
> mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
> with ant to make this logic be "across modules"?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Oct 14, 2013 at 9:41 AM, Shai Erera <se...@gmail.com> wrote:
>> I see, didn't think about that usecase. Ok so let's not do it.
>>
>> Shai
>>
>>
>> On Mon, Oct 14, 2013 at 4:27 PM, Robert Muir <rc...@gmail.com> wrote:
>>>
>>> On Mon, Oct 14, 2013 at 9:11 AM, Shai Erera <se...@gmail.com> wrote:
>>> >
>>> > What's the harm of failing the build in that case?
>>> >
>>>
>>> because i should be able to do this and for it to pass:
>>>
>>> cd lucene/
>>> ant test -Dtestcase=TestIndexWriter
>>>
>>> So please, don't make this change. it would totally screw everything up
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>>> For additional commands, e-mail: dev-help@lucene.apache.org
>>>
>>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
I went ahead and pushed it to Sonatype. A patch is attached to
LUCENE-5283. Let me know if this helps at all.

Dawid

On Tue, Oct 15, 2013 at 8:07 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
> Oh, stop moaning. Just be more careful about the filters when you type them ;)
>
> I've committed a fix; this will only work from module level -- like I
> said, it may be very tricky to aggregate this across modules.
>
> https://github.com/carrotsearch/randomizedtesting/commit/78f40f506f933a5ec8dad38e9505fbf2aa6f7974
>
> I'll make a release tomorrow and integrate it with Lucene/ Solr builds.
>
> Dawid
>
>
> On Tue, Oct 15, 2013 at 5:59 PM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>> On Tue, Oct 15, 2013 at 11:47 AM, Yonik Seeley <ys...@gmail.com> wrote:
>>> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
>>> <lu...@mikemccandless.com> wrote:
>>>> This has actually bit me before too ...
>>>
>>> Me too.
>>> I'm never sure if I may have made a typo... the only way to be sure is
>>> to tee the output to a file and go back through and look for the test
>>> output.
>>> I've also run things in a loop for an hour before I realized that they
>>> were running slightly too fast.
>>>
>>> I guess one answer is for me to create my own test script (not just
>>> for running in a loop, but even running a test a single time) that
>>> ensures (via grep) that at least one test was run.
>>
>> The python runner takes a "-once" argument to run the test only once
>> ... it's sort of backwards, but I originally created this runner in
>> order to beast a single test across multiple JVMs ... so when I want
>> to run a single test I do this:
>>
>>   python repeatLuceneTest.py TestFoo.testBar -once -nolog
>>
>> (-nolog sends all output to the console)
>>
>> And it will (now!) fail if you have a typo in your test case ...
>>
>> But ... I don't think it sets the classpath to run Solr tests now ...
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
> Some of us do not type pefectly!

it'f nefer thoo latte two try!

> Thanks!

Check out the patch; if it's all right then feel free to commit it --
it's getting late and I'm out of the office tomorrow, so I won't be able to
do it.

Dawid



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Tue, Oct 15, 2013 at 2:07 PM, Dawid Weiss
<da...@cs.put.poznan.pl> wrote:
> Oh, stop moaning.

:)

> Just be more careful about the filters when you type them ;)

Some of us do not type pefectly!

> I've committed a fix; this will only work from module level -- like I
> said, it may be very tricky to aggregate this across modules.
>
> https://github.com/carrotsearch/randomizedtesting/commit/78f40f506f933a5ec8dad38e9505fbf2aa6f7974
>
> I'll make a release tomorrow and integrate it with Lucene/ Solr builds.

Thanks!

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@cs.put.poznan.pl>.
Oh, stop moaning. Just be more careful about the filters when you type them ;)

I've committed a fix; this will only work from module level -- like I
said, it may be very tricky to aggregate this across modules.

https://github.com/carrotsearch/randomizedtesting/commit/78f40f506f933a5ec8dad38e9505fbf2aa6f7974

I'll make a release tomorrow and integrate it with Lucene/ Solr builds.

Dawid


On Tue, Oct 15, 2013 at 5:59 PM, Michael McCandless
<lu...@mikemccandless.com> wrote:
> On Tue, Oct 15, 2013 at 11:47 AM, Yonik Seeley <ys...@gmail.com> wrote:
>> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
>> <lu...@mikemccandless.com> wrote:
>>> This has actually bit me before too ...
>>
>> Me too.
>> I'm never sure if I may have made a typo... the only way to be sure is
>> to tee the output to a file and go back through and look for the test
>> output.
>> I've also run things in a loop for an hour before I realized that they
>> were running slightly too fast.
>>
>> I guess one answer is for me to create my own test script (not just
>> for running in a loop, but even running a test a single time) that
>> ensures (via grep) that at least one test was run.
>
> The python runner takes a "-once" argument to run the test only once
> ... it's sort of backwards, but I originally created this runner in
> order to beast a single test across multiple JVMs ... so when I want
> to run a single test I do this:
>
>   python repeatLuceneTest.py TestFoo.testBar -once -nolog
>
> (-nolog sends all output to the console)
>
> And it will (now!) fail if you have a typo in your test case ...
>
> But ... I don't think it sets the classpath to run Solr tests now ...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Tue, Oct 15, 2013 at 11:47 AM, Yonik Seeley <ys...@gmail.com> wrote:
> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>> This has actually bit me before too ...
>
> Me too.
> I'm never sure if I may have made a typo... the only way to be sure is
> to tee the output to a file and go back through and look for the test
> output.
> I've also run things in a loop for an hour before I realized that they
> were running slightly too fast.
>
> I guess one answer is for me to create my own test script (not just
> for running in a loop, but even running a test a single time) that
> ensures (via grep) that at least one test was run.

The python runner takes a "-once" argument to run the test only once
... it's sort of backwards, but I originally created this runner in
order to beast a single test across multiple JVMs ... so when I want
to run a single test I do this:

  python repeatLuceneTest.py TestFoo.testBar -once -nolog

(-nolog sends all output to the console)

And it will (now!) fail if you have a typo in your test case ...

But ... I don't think it sets the classpath to run Solr tests now ...

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Yonik Seeley <ys...@gmail.com>.
On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
<lu...@mikemccandless.com> wrote:
> This has actually bit me before too ...

Me too.
I'm never sure if I may have made a typo... the only way to be sure is
to tee the output to a file and go back through and look for the test
output.
I've also run things in a loop for an hour before I realized that they
were running slightly too fast.

I guess one answer is for me to create my own test script (not just
for running in a loop, but even running a test a single time) that
ensures (via grep) that at least one test was run.

-Yonik



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
On Mon, Oct 14, 2013 at 10:48 AM, Robert Muir <rc...@gmail.com> wrote:
> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
> <lu...@mikemccandless.com> wrote:
>> This has actually bit me before too ...
>>
>> I mean, sure, I do eventually notice that it ran too quickly and so it
>> was not in fact really SUCCESSFUL.
>>
>> Why would Rob's example fail?  In that case, it would have in fact run
>> TestIndexWriter, right?  (Sure, other modules didn't have such a test,
>> but the fact that one of the visited modules did have the test should
>> mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
>> with ant to make this logic be "across modules"?
>>
>
> 'ant test' needs to do a lot more than the specialized python script
> you have to repeat one test.

Right, I agree this is hard to fix, because of ant / randomizedtesting
/ our build scripts limitations.

But I still think it's wrong that "ant test -Dtestcase=foo
-Dtestmethod=bar" finishes with "BUILD SUCCESSFUL" when you have an
accidental typo and in fact nothing ran.

It's like javac declaring success when you mis-typed one of your java
source files.

I know and agree this is really, really hard for us to fix, but I
still think it's wrong: it's so trappy.  Maybe we need a new "ant
run-this-test-for-certain" target or something.

> so I think you should modify the latter instead of trying to make the
> whole build system complicated.

Yeah, I fixed luceneutil ... it's of course hackity, since I peek in
the stdout for "OK (0 tests)" and then call that a failure.
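
The actual check is trivial -- roughly this (illustrative only, not the real
luceneutil code, and the function name is made up):

  def check_something_ran(test_output: str) -> None:
      # Per the hack above: the captured stdout ends up containing
      # "OK (0 tests)" when nothing actually ran, so call that a failure.
      if "OK (0 tests)" in test_output:
          raise RuntimeError("0 tests ran -- probable typo in the test name")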

Also, luceneutil "cheats" since this particular beasting tool
(repeatLuceneTest.py) only runs one module (you have to cd to that
directory first).  The distributed beasting tool (runRemoteTests.py)
runs all modules though ...

Mike McCandless

http://blog.mikemccandless.com



Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
<lu...@mikemccandless.com> wrote:
> This has actually bit me before too ...
>
> I mean, sure, I do eventually notice that it ran too quickly and so it
> was not in fact really SUCCESSFUL.
>
> Why would Rob's example fail?  In that case, it would have in fact run
> TestIndexWriter, right?  (Sure, other modules didn't have such a test,
> but the fact that one of the visited modules did have the test should
> mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
> with ant to make this logic be "across modules"?
>

'ant test' needs to do a lot more than the specialized python script
you have for repeating one test.

so I think you should modify the latter instead of trying to make the
whole build system complicated.



Re: Should build fail if test does not exist?

Posted by Michael McCandless <lu...@mikemccandless.com>.
This has actually bit me before too ...

I mean, sure, I do eventually notice that it ran too quickly and so it
was not in fact really SUCCESSFUL.

Why would Rob's example fail?  In that case, it would have in fact run
TestIndexWriter, right?  (Sure, other modules didn't have such a test,
but the fact that one of the visited modules did have the test should
mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
with ant to make this logic be "across modules"?

Mike McCandless

http://blog.mikemccandless.com


On Mon, Oct 14, 2013 at 9:41 AM, Shai Erera <se...@gmail.com> wrote:
> I see, didn't think about that usecase. Ok so let's not do it.
>
> Shai
>
>
> On Mon, Oct 14, 2013 at 4:27 PM, Robert Muir <rc...@gmail.com> wrote:
>>
>> On Mon, Oct 14, 2013 at 9:11 AM, Shai Erera <se...@gmail.com> wrote:
>> >
>> > What's the harm of failing the build in that case?
>> >
>>
>> because i should be able to do this and for it to pass:
>>
>> cd lucene/
>> ant test -Dtestcase=TestIndexWriter
>>
>> So please, don't make this change. it would totally screw everything up
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-help@lucene.apache.org
>>
>



Re: Should build fail if test does not exist?

Posted by Shai Erera <se...@gmail.com>.
I see, didn't think about that usecase. Ok so let's not do it.

Shai


On Mon, Oct 14, 2013 at 4:27 PM, Robert Muir <rc...@gmail.com> wrote:

> On Mon, Oct 14, 2013 at 9:11 AM, Shai Erera <se...@gmail.com> wrote:
> >
> > What's the harm of failing the build in that case?
> >
>
> because i should be able to do this and for it to pass:
>
> cd lucene/
> ant test -Dtestcase=TestIndexWriter
>
> So please, don't make this change. it would totally screw everything up
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-help@lucene.apache.org
>
>

Re: Should build fail if test does not exist?

Posted by Robert Muir <rc...@gmail.com>.
On Mon, Oct 14, 2013 at 9:11 AM, Shai Erera <se...@gmail.com> wrote:
>
> What's the harm of failing the build in that case?
>

because i should be able to do this and for it to pass:

cd lucene/
ant test -Dtestcase=TestIndexWriter

So please, don't make this change. it would totally screw everything up



Re: Should build fail if test does not exist?

Posted by Shai Erera <se...@gmail.com>.
I don't always read the entire log -- if a build is SUCCESSFUL, I assume
it's successful.

I hit this while using luceneutil/repeatLuceneTest.py -- it happily ran a
couple of hundred iterations until I suspected something was wrong because
the iterations finished too quickly.

Indeed, if I run 'ant test -Dtestcase=TestFoo* -Dtests.iters=1000', nothing
is run, and that is easy to notice in the console because the 1000 OK lines
are missing.

Maybe it should be fixed in luceneutil, I'm not sure. Fixing it there would
mean parsing the test log for "0 suites" or "0 tests", which is both fragile
and only possible if you choose to log the output to a file (if it's only
logged to the console there's nothing you can do, I believe).

So I thought that if the runner failed the build, it would help detect such
errors more easily. Why rely on someone to notice, among the hundreds of
characters the framework spits at you, that no tests were actually run?

What's the harm of failing the build in that case?

Shai


On Mon, Oct 14, 2013 at 4:00 PM, Dawid Weiss <da...@gmail.com> wrote:

>
> I don't think this should be an error. If you make a typo you will see that
> no tests were executed. I don't see the benefit of failing the build.
>
> In any case, this would belong to the runner.
>
> D.
> On Oct 14, 2013 2:17 PM, "Shai Erera" <se...@gmail.com> wrote:
>
>> Hi
>>
>> If you run something like "ant test -Dtestcase=TestFoo" where there's no
>> TestFoo.java, or "ant test -Dtestcase=TestBar -Dtests.method=testFoo" where
>> there's TestBar.java but no testFoo() method, the build currently passes as
>> SUCCESSFUL. Though, the report says "0 suits" or "1 suits, 0 tests".
>>
>> I wonder if it should be a build failure? The problem I have is that you
>> can e.g. make a very stupid typo (usually around plurals as in
>> testSomethingBadOnIOException vs testSomethingBadOnIOExceptions) and get a
>> false SUCCESSFUL notification.
>>
>> If we want to fix/change it, where does it belong - build scripts,
>> LuceneTestCase or randomizedrunner?
>>
>> Shai
>>
>

Re: Should build fail if test does not exist?

Posted by Dawid Weiss <da...@gmail.com>.
I don't think this should be an error. If you make a typo you will see that
no tests were executed. I don't see the benefit of failing the build.

In any case, this would belong to the runner.

D.
On Oct 14, 2013 2:17 PM, "Shai Erera" <se...@gmail.com> wrote:

> Hi
>
> If you run something like "ant test -Dtestcase=TestFoo" where there's no
> TestFoo.java, or "ant test -Dtestcase=TestBar -Dtests.method=testFoo" where
> there's TestBar.java but no testFoo() method, the build currently passes as
> SUCCESSFUL. Though, the report says "0 suits" or "1 suits, 0 tests".
>
> I wonder if it should be a build failure? The problem I have is that you
> can e.g. make a very stupid typo (usually around plurals as in
> testSomethingBadOnIOException vs testSomethingBadOnIOExceptions) and get a
> false SUCCESSFUL notification.
>
> If we want to fix/change it, where does it belong - build scripts,
> LuceneTestCase or randomizedrunner?
>
> Shai
>