Posted to dev@mahout.apache.org by Grant Ingersoll <gs...@apache.org> on 2011/08/06 21:34:40 UTC

Test Execution Time

Granted, I'm on a slow machine, but our tests take forever to run.  On a 2-core MBP, it takes well over an hour to run all the tests (I did just order a new MBP, so it will be faster, but that doesn't lend itself to a good OOTB experience for people).

One idea would be to add parallel test execution in Maven.  I think this requires Maven 3, but I am not sure.  Another is to take a look at our tests, especially the slow ones, and see if we can speed them up.

When I try adding in parallel tests to Maven, I get a bunch of failures in the tests.

I was using: 
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>once</forkMode>
    <argLine>-Xms256m -Xmx512m</argLine>
    <testFailureIgnore>false</testFailureIgnore>
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
    <parallel>classes</parallel>
    <threadCount>5</threadCount>
  </configuration>
</plugin>

Anyone played around with this stuff?  I suspect the failures are due to tests stomping on each other, but I am still digging in.

-Grant
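
[Editor's note: one common way tests stomp on each other under parallel execution is by sharing a hard-coded scratch directory. A minimal sketch of per-test isolation, assuming JUnit 4.7+; the class name and the runJob call are hypothetical, not existing Mahout code:]

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class SomeJobTest {

  // A fresh directory is created before each test method and deleted after it,
  // so classes running in parallel never write to the same path.
  @Rule
  public TemporaryFolder tmp = new TemporaryFolder();

  @Test
  public void writesOutputToItsOwnDirectory() throws Exception {
    File output = tmp.newFolder("output");
    // Run the code under test against 'output' rather than a shared
    // location such as "target/test-output":
    // runJob(output);   // hypothetical call to the code under test
  }
}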

Re: Test Execution Time

Posted by Dhruv Kumar <dk...@ecs.umass.edu>.
Thankfully, the tests for MAHOUT-627 (complete patch coming today) are
taking 57 seconds to run.

On Tue, Aug 16, 2011 at 2:19 PM, Sean Owen <sr...@gmail.com> wrote:

> (I disabled it. I had already tried to squeeze down its size without making
> it fail. I will write a note on the back of my hand to make sure to enable
> it before, say, releasing again.)

Re: Test Execution Time

Posted by Sean Owen <sr...@gmail.com>.
(I disabled it. I had already tried to squeeze down its size without making
it fail. I will write a note on the back of my hand to make sure to enable
it before, say, releasing again.)

On Tue, Aug 16, 2011 at 7:16 PM, Jake Mannix <ja...@gmail.com> wrote:

> As long as nobody's changing the logic going into Lanczos, disabling it
> should be fine.  But I agree with Grant, that we should have it running
> somewhere (continuous integration?).
>
> I can try and squeeze it's size down, but you don't get very good
> convergence
> on a small matrix when using these iterative matrix multiplier-style
> algorithms,
> which makes it hard to see that it's still working as expected.
>
>  -jake

Re: Test Execution Time

Posted by Jake Mannix <ja...@gmail.com>.
As long as nobody's changing the logic going into Lanczos, disabling it
should be fine.  But I agree with Grant that we should have it running
somewhere (continuous integration?).

I can try and squeeze its size down, but you don't get very good
convergence on a small matrix when using these iterative matrix
multiplier-style algorithms, which makes it hard to see that it's still
working as expected.

  -jake

On Wed, Aug 10, 2011 at 11:54 PM, Sean Owen <sr...@gmail.com> wrote:

> That would reduce memory requirements unilaterally? no, don't think so. I
> have not run into memory problems.
> The issue here is execution time and it's a big problem indeed.
>
> Would anyone object to disabling this test for now? it's getting costly in
> the dev cycle.

Re: Test Execution Time

Posted by Lance Norskog <go...@gmail.com>.
Oops, different context: I meant automated testing of the Mahout
multi-stage jobs.  If one of the stages gets unruly in its memory use,
that is a regression in software quality that is hard to spot.

On Wed, Aug 10, 2011 at 11:54 PM, Sean Owen <sr...@gmail.com> wrote:
> That would reduce memory requirements unilaterally? no, don't think so. I
> have not run into memory problems.
> The issue here is execution time and it's a big problem indeed.
>
> Would anyone object to disabling this test for now? it's getting costly in
> the dev cycle.
>



-- 
Lance Norskog
goksron@gmail.com

Re: Test Execution Time

Posted by Grant Ingersoll <gs...@apache.org>.
It would be nice if it could still run somewhere, even if it isn't run every time locally.

-Grant

On Aug 11, 2011, at 2:54 AM, Sean Owen wrote:

> That would reduce memory requirements unilaterally? no, don't think so. I
> have not run into memory problems.
> The issue here is execution time and it's a big problem indeed.
> 
> Would anyone object to disabling this test for now? it's getting costly in
> the dev cycle.
> 
--------------------------------------------
Grant Ingersoll



Re: Test Execution Time

Posted by Sean Owen <sr...@gmail.com>.
That would reduce memory requirements unilaterally? No, I don't think so. I
have not run into memory problems.
The issue here is execution time, and it's a big problem indeed.

Would anyone object to disabling this test for now? It's getting costly in
the dev cycle.

On Thu, Aug 11, 2011 at 5:52 AM, Lance Norskog <go...@gmail.com> wrote:

> Another aspect of testing Map/Reduce jobs is memory bounds. An M/R job
> should be able to run in a fairly constant amount of ram per JVM from
> beginning to end, to avoid blowing up late in the game. Is there any
> harness around that would do this?
>
> Lance

Re: Test Execution Time

Posted by Lance Norskog <go...@gmail.com>.
Another aspect of testing Map/Reduce jobs is memory bounds. An M/R job
should be able to run in a fairly constant amount of RAM per JVM from
beginning to end, to avoid blowing up late in the game. Is there any
harness around that would do this?

Lance
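
[Editor's note: a crude sketch of the idea in plain JUnit. Heap measurements after System.gc() are only approximate, and runAllJobStages plus the byte budget are hypothetical placeholders, not an existing harness:]

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class MemoryBoundRegressionTest {

  // Hypothetical budget; a real harness would record a baseline per job stage.
  private static final long MAX_HEAP_GROWTH_BYTES = 256L * 1024 * 1024;

  @Test
  public void multiStageJobStaysWithinHeapBudget() throws Exception {
    Runtime rt = Runtime.getRuntime();
    System.gc();
    long before = rt.totalMemory() - rt.freeMemory();

    // runAllJobStages();   // hypothetical: run the multi-stage job end to end

    System.gc();
    long after = rt.totalMemory() - rt.freeMemory();
    assertTrue("heap grew by " + (after - before) + " bytes",
        after - before < MAX_HEAP_GROWTH_BYTES);
  }
}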

On Sun, Aug 7, 2011 at 9:04 PM, Ted Dunning <te...@gmail.com> wrote:
> We have used testNg at work for just this segregation of fast and slow tests and it works pretty well. It is also useful for stripping out cascading failures. This means that integration tests can be executed only if the underlying unit tests succeed. Another nice thing is that testNg says how many tests were skipped.
>
> Junit has recently been adding similar capabilities but I haven't kept up.
>
> Sent from my iPhone
>



-- 
Lance Norskog
goksron@gmail.com

Re: Test Execution Time

Posted by Ted Dunning <te...@gmail.com>.
We have used TestNG at work for just this segregation of fast and slow tests, and it works pretty well. It is also useful for stripping out cascading failures. This means that integration tests can be executed only if the underlying unit tests succeed. Another nice thing is that TestNG says how many tests were skipped.

JUnit has recently been adding similar capabilities, but I haven't kept up.

Sent from my iPhone
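
[Editor's note: a minimal sketch of that segregation in TestNG; the class and method names are hypothetical. The dependsOnGroups attribute is what makes the integration test show up as skipped, rather than failed, when the unit group fails:]

import org.testng.annotations.Test;

public class VectorMathTest {

  // Fast check; runs in every build.
  @Test(groups = "unit")
  public void dotProductOfSmallVectors() {
    // ...
  }

  // Slow end-to-end check; only attempted if the "unit" group passed,
  // otherwise TestNG reports it as skipped rather than failed.
  @Test(groups = "integration", dependsOnGroups = "unit")
  public void fullJobOnSampleData() {
    // ...
  }
}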

On Aug 7, 2011, at 19:18, Grant Ingersoll <gs...@apache.org> wrote:

> It would likely help if we could get them to run in parallel, perhaps.  Also, seems like TestNG might have some better features on paper for this kind of stuff (I think you can annotate some things as "slow" or "fast" and then choose to run them separately).  I haven't explored much yet in this way.  Has anyone else used TestNG?
> 
> -Grant
> 

Re: Test Execution Time

Posted by Grant Ingersoll <gs...@apache.org>.
It would likely help if we could get them to run in parallel, perhaps.  Also, it seems like TestNG might have some better features on paper for this kind of stuff (I think you can annotate some tests as "slow" or "fast" and then choose to run them separately).  I haven't explored much in this direction yet.  Has anyone else used TestNG?

-Grant
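
[Editor's note: the JUnit counterpart to TestNG groups is categories, available since JUnit 4.8. A sketch, with each class in its own file and all names hypothetical:]

// SlowTests.java -- empty marker interface used only as a category label
public interface SlowTests {}

// LanczosSolverTest.java -- hypothetical test class mixing fast and slow tests
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class LanczosSolverTest {

  @Test
  @Category(SlowTests.class)
  public void convergesOnLargerMatrix() { /* slow, end-to-end */ }

  @Test
  public void rejectsEmptyInput() { /* fast */ }
}

// FastSuite.java -- runs the listed classes but skips anything tagged SlowTests
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses(LanczosSolverTest.class)
public class FastSuite {}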

On Aug 7, 2011, at 9:12 PM, Ted Dunning wrote:

> I really don't think that this distinction needs to be made.  The
> distinction between unit and integration test is important from a technical
> point of view, but as an organizing principle the topic and target of the
> test is probably a better idea than whether the test is a functional or unit
> test or whether it has randomized initial conditions or whether it has for
> loops in it.  Tests should be organized by what they test.
> 

--------------------------------------------
Grant Ingersoll



Re: Test Execution Time

Posted by Ted Dunning <te...@gmail.com>.
I really don't think that this distinction needs to be made.  The
distinction between unit and integration tests is important from a
technical point of view, but as an organizing principle, the topic and
target of the test is probably a better guide than whether the test is a
functional or unit test, whether it has randomized initial conditions, or
whether it has for loops in it.  Tests should be organized by what they test.

On Sun, Aug 7, 2011 at 5:08 PM, Lance Norskog <go...@gmail.com> wrote:

> Sure! Perhaps the long-running ones can move to a new 'regression'
> area? examples/ is partly what these are, so examples/regression makes
> sense.
>
> On Sun, Aug 7, 2011 at 11:11 AM, Sean Owen <sr...@gmail.com> wrote:
> > This test is indeed by far the culprit. I already reduced its test input
> > size to hurry it up, but it's gone slow again.
> >
> > Lance, indeed, these are not all unit tests -- nobody said they were. The
> > test is useful.
> >
> > I do suggest, however, we comment it out. Jake suggested it could be made
> > faster but I don't think he followed up.
> >
> > Sean
> >
> > On Sun, Aug 7, 2011 at 12:13 AM, Lance Norskog <go...@gmail.com>
> wrote:
> >
> >> Comment out DistributedLanczosWhatsit. Zing!
> >>
> >> A unit test takes a bit of code X and checks that code path A goes
> >> "tick" and code path B goes "tock" and bogus input C throws an
> >> exception. There's no such thing as a "unit test" that runs twelve M/R
> >> jobs in a row.
> >>
> >> There's MRUnit, which seems trapped in the Hadoop 0.20/0.21/0.22/0.23
> >> morass. This is a squib about how to do unit testing of mappers and
> >> reducers with Mockito:
> >>
> >> http://nubetech.co/testing-hadoop-map-reduce-jobs
> >>
> >> What the Mahout jobs want is more of a regression test, which would
> >> have two purposes:
> >> 1) does the whole orchestration still work, and
> >> 2) does it still acquire the information it is supposed to acquire?
> >> 2a) this requires some amount of real data and a "gold standard"
> >> output to match against.
> >>
> >> On Sat, Aug 6, 2011 at 12:34 PM, Grant Ingersoll <gs...@apache.org>
> >> wrote:
> >> > Granted, I'm on a slow machine, but our tests take forever to run.  On
> an
> >> 2 core MBP, it takes well over an hour to run all the tests (I did just
> >> order a new MBP, so it will be faster, but it doesn't lend itself to a
> good
> >> OOTB experience for people)
> >> >
> >> > One idea would be to add in parallel test execution in Maven.  I think
> >> this requires Mvn 3, but I am not sure.  Another is to take a look at
> our
> >> tests, especially the slow ones and see if we can speed them up.
> >> >
> >> > When I try adding in parallel tests to Maven, I get a bunch of
> failures
> >> in the tests.
> >> >
> >> > I was using:
> >> > <plugin>
> >> >        <groupId>org.apache.maven.plugins</groupId>
> >> >        <artifactId>maven-surefire-plugin</artifactId>
> >> >        <configuration>
> >> >          <forkMode>once</forkMode>
> >> >          <argLine>-Xms256m -Xmx512m</argLine>
> >> >          <testFailureIgnore>false</testFailureIgnore>
> >> >          <redirectTestOutputToFile>true</redirectTestOutputToFile>
> >> >          <parallel>classes</parallel>
> >> >          <threadCount>5</threadCount>
> >> >        </configuration>
> >> >      </plugin>
> >> >
> >> > Anyone played around with this stuff?  I suspect the failures are due
> to
> >> tests stomping on each other, but I am still digging in.
> >> >
> >> > -Grant
> >>
> >>
> >>
> >> --
> >> Lance Norskog
> >> goksron@gmail.com
> >>
> >
>
>
>
> --
> Lance Norskog
> goksron@gmail.com
>

Re: Test Execution Time

Posted by Lance Norskog <go...@gmail.com>.
Sure! Perhaps the long-running ones can move to a new 'regression'
area? examples/ is partly what these are, so examples/regression makes
sense.

On Sun, Aug 7, 2011 at 11:11 AM, Sean Owen <sr...@gmail.com> wrote:
> This test is indeed by far the culprit. I already reduced its test input
> size to hurry it up, but it's gone slow again.
>
> Lance, indeed, these are not all unit tests -- nobody said they were. The
> test is useful.
>
> I do suggest, however, we comment it out. Jake suggested it could be made
> faster but I don't think he followed up.
>
> Sean
>
> On Sun, Aug 7, 2011 at 12:13 AM, Lance Norskog <go...@gmail.com> wrote:
>
>> Comment out DistributedLanczosWhatsit. Zing!
>>
>> A unit test takes a bit of code X and checks that code path A goes
>> "tick" and code path B goes "tock" and bogus input C throws an
>> exception. There's no such thing as a "unit test" that runs twelve M/R
>> jobs in a row.
>>
>> There's MRUnit, which seems trapped in the Hadoop 0.20/0.21/0.22/0.23
>> morass. This is a squib about how to do unit testing of mappers and
>> reducers with Mockito:
>>
>> http://nubetech.co/testing-hadoop-map-reduce-jobs
>>
>> What the Mahout jobs want is more of a regression test, which would
>> have two purposes:
>> 1) does the whole orchestration still work, and
>> 2) does it still acquire the information it is supposed to acquire?
>> 2a) this requires some amount of real data and a "gold standard"
>> output to match against.
>>
>> On Sat, Aug 6, 2011 at 12:34 PM, Grant Ingersoll <gs...@apache.org>
>> wrote:
>> > Granted, I'm on a slow machine, but our tests take forever to run.  On an
>> 2 core MBP, it takes well over an hour to run all the tests (I did just
>> order a new MBP, so it will be faster, but it doesn't lend itself to a good
>> OOTB experience for people)
>> >
>> > One idea would be to add in parallel test execution in Maven.  I think
>> this requires Mvn 3, but I am not sure.  Another is to take a look at our
>> tests, especially the slow ones and see if we can speed them up.
>> >
>> > When I try adding in parallel tests to Maven, I get a bunch of failures
>> in the tests.
>> >
>> > I was using:
>> > <plugin>
>> >        <groupId>org.apache.maven.plugins</groupId>
>> >        <artifactId>maven-surefire-plugin</artifactId>
>> >        <configuration>
>> >          <forkMode>once</forkMode>
>> >          <argLine>-Xms256m -Xmx512m</argLine>
>> >          <testFailureIgnore>false</testFailureIgnore>
>> >          <redirectTestOutputToFile>true</redirectTestOutputToFile>
>> >          <parallel>classes</parallel>
>> >          <threadCount>5</threadCount>
>> >        </configuration>
>> >      </plugin>
>> >
>> > Anyone played around with this stuff?  I suspect the failures are due to
>> tests stomping on each other, but I am still digging in.
>> >
>> > -Grant
>>
>>
>>
>> --
>> Lance Norskog
>> goksron@gmail.com
>>
>



-- 
Lance Norskog
goksron@gmail.com

Re: Test Execution Time

Posted by Sean Owen <sr...@gmail.com>.
This test is indeed by far the culprit. I already reduced its test input
size to hurry it up, but it's gone slow again.

Lance, indeed, these are not all unit tests -- nobody said they were. The
test is useful.

I do suggest, however, we comment it out. Jake suggested it could be made
faster but I don't think he followed up.

Sean

On Sun, Aug 7, 2011 at 12:13 AM, Lance Norskog <go...@gmail.com> wrote:

> Comment out DistributedLanczosWhatsit. Zing!
>
> A unit test takes a bit of code X and checks that code path A goes
> "tick" and code path B goes "tock" and bogus input C throws an
> exception. There's no such thing as a "unit test" that runs twelve M/R
> jobs in a row.
>
> There's MRUnit, which seems trapped in the Hadoop 0.20/0.21/0.22/0.23
> morass. This is a squib about how to do unit testing of mappers and
> reducers with Mockito:
>
> http://nubetech.co/testing-hadoop-map-reduce-jobs
>
> What the Mahout jobs want is more of a regression test, which would
> have two purposes:
> 1) does the whole orchestration still work, and
> 2) does it still acquire the information it is supposed to acquire?
> 2a) this requires some amount of real data and a "gold standard"
> output to match against.
>
> On Sat, Aug 6, 2011 at 12:34 PM, Grant Ingersoll <gs...@apache.org>
> wrote:
> > Granted, I'm on a slow machine, but our tests take forever to run.  On an
> 2 core MBP, it takes well over an hour to run all the tests (I did just
> order a new MBP, so it will be faster, but it doesn't lend itself to a good
> OOTB experience for people)
> >
> > One idea would be to add in parallel test execution in Maven.  I think
> this requires Mvn 3, but I am not sure.  Another is to take a look at our
> tests, especially the slow ones and see if we can speed them up.
> >
> > When I try adding in parallel tests to Maven, I get a bunch of failures
> in the tests.
> >
> > I was using:
> > <plugin>
> >        <groupId>org.apache.maven.plugins</groupId>
> >        <artifactId>maven-surefire-plugin</artifactId>
> >        <configuration>
> >          <forkMode>once</forkMode>
> >          <argLine>-Xms256m -Xmx512m</argLine>
> >          <testFailureIgnore>false</testFailureIgnore>
> >          <redirectTestOutputToFile>true</redirectTestOutputToFile>
> >          <parallel>classes</parallel>
> >          <threadCount>5</threadCount>
> >        </configuration>
> >      </plugin>
> >
> > Anyone played around with this stuff?  I suspect the failures are due to
> tests stomping on each other, but I am still digging in.
> >
> > -Grant
>
>
>
> --
> Lance Norskog
> goksron@gmail.com
>

Re: Test Execution Time

Posted by Lance Norskog <go...@gmail.com>.
Comment out DistributedLanczosWhatsit. Zing!

A unit test takes a bit of code X and checks that code path A goes
"tick" and code path B goes "tock" and bogus input C throws an
exception. There's no such thing as a "unit test" that runs twelve M/R
jobs in a row.

There's MRUnit, which seems trapped in the Hadoop 0.20/0.21/0.22/0.23
morass. This is a squib about how to do unit testing of mappers and
reducers with Mockito:

http://nubetech.co/testing-hadoop-map-reduce-jobs
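
A minimal sketch of the style that squib describes, assuming roughly 0.20-era Hadoop's new API plus JUnit 4 and Mockito on the classpath; the TokenCountMapper here is a made-up example, not Mahout code. The idea is to mock the Mapper.Context and verify what the mapper writes:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.junit.Test;

public class TokenCountMapperTest {

  // Hypothetical mapper under test (not Mahout code): emits (token, 1) per token.
  static class TokenCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        context.write(new Text(token), ONE);
      }
    }
  }

  @Test
  public void mapperEmitsOneCountPerToken() throws Exception {
    TokenCountMapper mapper = new TokenCountMapper();
    // Mock the Context rather than running a job: no cluster, no temp dirs,
    // and the test finishes in milliseconds.
    @SuppressWarnings("unchecked")
    Mapper<LongWritable, Text, Text, IntWritable>.Context context =
        mock(Mapper.Context.class);

    mapper.map(new LongWritable(0), new Text("tick tock"), context);

    // The mapper's observable behavior is just what it wrote to the context.
    verify(context).write(new Text("tick"), new IntWritable(1));
    verify(context).write(new Text("tock"), new IntWritable(1));
  }
}

A reducer can be exercised the same way by mocking its Context and verifying the reduced output.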

What the Mahout jobs want is more of a regression test, which would
have two purposes:
1) does the whole orchestration still work, and
2) does it still acquire the information it is supposed to acquire?
2a) this requires some amount of real data and a "gold standard"
output to match against.

On Sat, Aug 6, 2011 at 12:34 PM, Grant Ingersoll <gs...@apache.org> wrote:
> Granted, I'm on a slow machine, but our tests take forever to run.  On an 2 core MBP, it takes well over an hour to run all the tests (I did just order a new MBP, so it will be faster, but it doesn't lend itself to a good OOTB experience for people)
>
> One idea would be to add in parallel test execution in Maven.  I think this requires Mvn 3, but I am not sure.  Another is to take a look at our tests, especially the slow ones and see if we can speed them up.
>
> When I try adding in parallel tests to Maven, I get a bunch of failures in the tests.
>
> I was using:
> <plugin>
>        <groupId>org.apache.maven.plugins</groupId>
>        <artifactId>maven-surefire-plugin</artifactId>
>        <configuration>
>          <forkMode>once</forkMode>
>          <argLine>-Xms256m -Xmx512m</argLine>
>          <testFailureIgnore>false</testFailureIgnore>
>          <redirectTestOutputToFile>true</redirectTestOutputToFile>
>          <parallel>classes</parallel>
>          <threadCount>5</threadCount>
>        </configuration>
>      </plugin>
>
> Anyone played around with this stuff?  I suspect the failures are due to tests stomping on each other, but I am still digging in.
>
> -Grant



-- 
Lance Norskog
goksron@gmail.com

Re: Test Execution Time

Posted by Lance Norskog <go...@gmail.com>.
From the Sonar page today (August 10, 2011):

https://analysis.apache.org/drilldown/measures/63921?metric=test_failures&rids[]=63933#

	testLanczosSolver: 129993 ms
Lanczos taking too long! Are you in the debugger? :)
java.lang.AssertionError: Lanczos taking too long! Are you in the debugger? :)
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.mahout.math.decomposer.lanczos.TestLanczosSolver.testLanczosSolver(TestLanczosSolver.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
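
Judging from that stack trace, the failing check is a wall-clock budget assertion of roughly this shape. The sketch below is a reconstruction for illustration only, not the actual TestLanczosSolver source; the budget constant and the placeholder work are invented:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TimeBudgetSketchTest {

  // Hypothetical budget; the real limit in TestLanczosSolver is not shown in this thread.
  private static final long TIME_LIMIT_MS = 60 * 1000L;

  @Test
  public void expensiveStepStaysUnderBudget() {
    long start = System.currentTimeMillis();
    doExpensiveWork(); // stand-in for the Lanczos solve
    long elapsed = System.currentTimeMillis() - start;
    assertTrue("Lanczos taking too long! Are you in the debugger? :)",
        elapsed < TIME_LIMIT_MS);
  }

  private static void doExpensiveWork() {
    // Placeholder computation so the sketch is self-contained and runnable.
    double sum = 0.0;
    for (int i = 0; i < 10000000; i++) {
      sum += Math.sqrt(i);
    }
    if (sum < 0.0) {
      throw new IllegalStateException("unreachable; keeps the loop from being optimized away");
    }
  }
}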

On Mon, Aug 8, 2011 at 11:11 AM, Grant Ingersoll <gs...@apache.org> wrote:
> Will do.
>
> On Aug 8, 2011, at 1:08 PM, Sean Owen wrote:
>
>> <forkMode>always</forkMode> works for me. I think that it's also good to run
>> with <parallel>classes</parallel> so that it only spawns a JVM per class. Try
>> that?
>>
>> On Mon, Aug 8, 2011 at 12:18 PM, Grant Ingersoll <gs...@apache.org>wrote:
>>
>>> I'll have to run again, but perhaps it was just my machine.  Try the config
>>> below if you get a chance and see if you get failures.  I too was wondering
>>> about the forkMode as the culprit, but didn't have time to test it just yet.
>>>
>>> -Grant
>>>
>>> On Aug 8, 2011, at 3:01 AM, Sean Owen wrote:
>>>
>>>> What failures do you see?
>>>> The tests ought to be isolated as they (should) reserve unique temp
>>>> directories in which to work.
>>>> Does forkMode = once mean there's one JVM? That could be the problem, due
>>> to
>>>> RNG differences. It really needs a JVM per thread.
>>>>
>>>> On Sat, Aug 6, 2011 at 8:34 PM, Grant Ingersoll <gs...@apache.org>
>>> wrote:
>>>>
>>>>> Granted, I'm on a slow machine, but our tests take forever to run.  On
>>> an 2
>>>>> core MBP, it takes well over an hour to run all the tests (I did just
>>> order
>>>>> a new MBP, so it will be faster, but it doesn't lend itself to a good
>>> OOTB
>>>>> experience for people)
>>>>>
>>>>> One idea would be to add in parallel test execution in Maven.  I think
>>> this
>>>>> requires Mvn 3, but I am not sure.  Another is to take a look at our
>>> tests,
>>>>> especially the slow ones and see if we can speed them up.
>>>>>
>>>>> When I try adding in parallel tests to Maven, I get a bunch of failures
>>> in
>>>>> the tests.
>>>>>
>>>>> I was using:
>>>>> <plugin>
>>>>>      <groupId>org.apache.maven.plugins</groupId>
>>>>>      <artifactId>maven-surefire-plugin</artifactId>
>>>>>      <configuration>
>>>>>        <forkMode>once</forkMode>
>>>>>        <argLine>-Xms256m -Xmx512m</argLine>
>>>>>        <testFailureIgnore>false</testFailureIgnore>
>>>>>        <redirectTestOutputToFile>true</redirectTestOutputToFile>
>>>>>        <parallel>classes</parallel>
>>>>>        <threadCount>5</threadCount>
>>>>>      </configuration>
>>>>>    </plugin>
>>>>>
>>>>> Anyone played around with this stuff?  I suspect the failures are due to
>>>>> tests stomping on each other, but I am still digging in.
>>>>>
>>>>> -Grant
>>>
>>> --------------------------------------------
>>> Grant Ingersoll
>>>
>>>
>>>
>
> --------------------------
> Grant Ingersoll
>
>
>
>



-- 
Lance Norskog
goksron@gmail.com

Re: Test Execution Time

Posted by Grant Ingersoll <gs...@apache.org>.
Will do.

On Aug 8, 2011, at 1:08 PM, Sean Owen wrote:

> <forkMode>always</forkMode> works for me. I think that it's also good to run
> with <parallel>classes</parallel> so that it only spawns a JVM per class. Try
> that?
> 
> On Mon, Aug 8, 2011 at 12:18 PM, Grant Ingersoll <gs...@apache.org>wrote:
> 
>> I'll have to run again, but perhaps it was just my machine.  Try the config
>> below if you get a chance and see if you get failures.  I too was wondering
>> about the forkMode as the culprit, but didn't have time to test it just yet.
>> 
>> -Grant
>> 
>> On Aug 8, 2011, at 3:01 AM, Sean Owen wrote:
>> 
>>> What failures do you see?
>>> The tests ought to be isolated as they (should) reserve unique temp
>>> directories in which to work.
>>> Does forkMode = once mean there's one JVM? That could be the problem, due
>> to
>>> RNG differences. It really needs a JVM per thread.
>>> 
>>> On Sat, Aug 6, 2011 at 8:34 PM, Grant Ingersoll <gs...@apache.org>
>> wrote:
>>> 
>>>> Granted, I'm on a slow machine, but our tests take forever to run.  On
>> an 2
>>>> core MBP, it takes well over an hour to run all the tests (I did just
>> order
>>>> a new MBP, so it will be faster, but it doesn't lend itself to a good
>> OOTB
>>>> experience for people)
>>>> 
>>>> One idea would be to add in parallel test execution in Maven.  I think
>> this
>>>> requires Mvn 3, but I am not sure.  Another is to take a look at our
>> tests,
>>>> especially the slow ones and see if we can speed them up.
>>>> 
>>>> When I try adding in parallel tests to Maven, I get a bunch of failures
>> in
>>>> the tests.
>>>> 
>>>> I was using:
>>>> <plugin>
>>>>      <groupId>org.apache.maven.plugins</groupId>
>>>>      <artifactId>maven-surefire-plugin</artifactId>
>>>>      <configuration>
>>>>        <forkMode>once</forkMode>
>>>>        <argLine>-Xms256m -Xmx512m</argLine>
>>>>        <testFailureIgnore>false</testFailureIgnore>
>>>>        <redirectTestOutputToFile>true</redirectTestOutputToFile>
>>>>        <parallel>classes</parallel>
>>>>        <threadCount>5</threadCount>
>>>>      </configuration>
>>>>    </plugin>
>>>> 
>>>> Anyone played around with this stuff?  I suspect the failures are due to
>>>> tests stomping on each other, but I am still digging in.
>>>> 
>>>> -Grant
>> 
>> --------------------------------------------
>> Grant Ingersoll
>> 
>> 
>> 

--------------------------
Grant Ingersoll




Re: Test Execution Time

Posted by Sean Owen <sr...@gmail.com>.
<forkMode>always</forkMode> works for me. I think that it's also good to run
with <parallel>classes</parallel> so that it only spawns a JVM per class. Try
that?
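
For concreteness, combining that suggestion with the snippet quoted below amounts to roughly this configuration (a sketch, not a verified setup; only the forkMode value changes from the earlier snippet):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- fork a fresh JVM per test class instead of sharing one -->
    <forkMode>always</forkMode>
    <argLine>-Xms256m -Xmx512m</argLine>
    <testFailureIgnore>false</testFailureIgnore>
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
    <parallel>classes</parallel>
    <threadCount>5</threadCount>
  </configuration>
</plugin>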

On Mon, Aug 8, 2011 at 12:18 PM, Grant Ingersoll <gs...@apache.org>wrote:

> I'll have to run again, but perhaps it was just my machine.  Try the config
> below if you get a chance and see if you get failures.  I too was wondering
> about the forkMode as the culprit, but didn't have time to test it just yet.
>
> -Grant
>
> On Aug 8, 2011, at 3:01 AM, Sean Owen wrote:
>
> > What failures do you see?
> > The tests ought to be isolated as they (should) reserve unique temp
> > directories in which to work.
> > Does forkMode = once mean there's one JVM? That could be the problem, due
> to
> > RNG differences. It really needs a JVM per thread.
> >
> > On Sat, Aug 6, 2011 at 8:34 PM, Grant Ingersoll <gs...@apache.org>
> wrote:
> >
> >> Granted, I'm on a slow machine, but our tests take forever to run.  On
> an 2
> >> core MBP, it takes well over an hour to run all the tests (I did just
> order
> >> a new MBP, so it will be faster, but it doesn't lend itself to a good
> OOTB
> >> experience for people)
> >>
> >> One idea would be to add in parallel test execution in Maven.  I think
> this
> >> requires Mvn 3, but I am not sure.  Another is to take a look at our
> tests,
> >> especially the slow ones and see if we can speed them up.
> >>
> >> When I try adding in parallel tests to Maven, I get a bunch of failures
> in
> >> the tests.
> >>
> >> I was using:
> >> <plugin>
> >>       <groupId>org.apache.maven.plugins</groupId>
> >>       <artifactId>maven-surefire-plugin</artifactId>
> >>       <configuration>
> >>         <forkMode>once</forkMode>
> >>         <argLine>-Xms256m -Xmx512m</argLine>
> >>         <testFailureIgnore>false</testFailureIgnore>
> >>         <redirectTestOutputToFile>true</redirectTestOutputToFile>
> >>         <parallel>classes</parallel>
> >>         <threadCount>5</threadCount>
> >>       </configuration>
> >>     </plugin>
> >>
> >> Anyone played around with this stuff?  I suspect the failures are due to
> >> tests stomping on each other, but I am still digging in.
> >>
> >> -Grant
>
> --------------------------------------------
> Grant Ingersoll
>
>
>

Re: Test Execution Time

Posted by Grant Ingersoll <gs...@apache.org>.
I'll have to run again, but perhaps it was just my machine.  Try the config below if you get a chance and see if you get failures.  I too was wondering about the forkMode as the culprit, but didn't have time to test it just yet.

-Grant

On Aug 8, 2011, at 3:01 AM, Sean Owen wrote:

> What failures do you see?
> The tests ought to be isolated as they (should) reserve unique temp
> directories in which to work.
> Does forkMode = once mean there's one JVM? That could be the problem, due to
> RNG differences. It really needs a JVM per thread.
> 
> On Sat, Aug 6, 2011 at 8:34 PM, Grant Ingersoll <gs...@apache.org> wrote:
> 
>> Granted, I'm on a slow machine, but our tests take forever to run.  On an 2
>> core MBP, it takes well over an hour to run all the tests (I did just order
>> a new MBP, so it will be faster, but it doesn't lend itself to a good OOTB
>> experience for people)
>> 
>> One idea would be to add in parallel test execution in Maven.  I think this
>> requires Mvn 3, but I am not sure.  Another is to take a look at our tests,
>> especially the slow ones and see if we can speed them up.
>> 
>> When I try adding in parallel tests to Maven, I get a bunch of failures in
>> the tests.
>> 
>> I was using:
>> <plugin>
>>       <groupId>org.apache.maven.plugins</groupId>
>>       <artifactId>maven-surefire-plugin</artifactId>
>>       <configuration>
>>         <forkMode>once</forkMode>
>>         <argLine>-Xms256m -Xmx512m</argLine>
>>         <testFailureIgnore>false</testFailureIgnore>
>>         <redirectTestOutputToFile>true</redirectTestOutputToFile>
>>         <parallel>classes</parallel>
>>         <threadCount>5</threadCount>
>>       </configuration>
>>     </plugin>
>> 
>> Anyone played around with this stuff?  I suspect the failures are due to
>> tests stomping on each other, but I am still digging in.
>> 
>> -Grant

--------------------------------------------
Grant Ingersoll



Re: Test Execution Time

Posted by Sean Owen <sr...@gmail.com>.
What failures do you see?
The tests ought to be isolated as they (should) reserve unique temp
directories in which to work.
Does forkMode = once mean there's one JVM? That could be the problem, due to
RNG differences. It really needs a JVM per thread.
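
If a test cannot use a shared temp-directory helper, one way to get that per-test isolation is JUnit's TemporaryFolder rule (available since JUnit 4.7), which hands every test method its own directory. A minimal sketch, with the job run itself stubbed out by a hypothetical placeholder:

import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class IsolatedOutputTest {

  // JUnit creates a fresh directory before each test method and deletes it
  // afterwards, so tests running in parallel never share an output path.
  @Rule
  public TemporaryFolder tmp = new TemporaryFolder();

  @Test
  public void jobWritesIntoItsOwnDirectory() throws Exception {
    File output = tmp.newFolder("output");
    // Hypothetical stand-in for running a job; a real test would hand
    // output.getAbsolutePath() to the job as its output path argument.
    File part = new File(output, "part-00000");
    assertTrue(part.createNewFile());
    assertTrue(part.exists());
  }
}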

On Sat, Aug 6, 2011 at 8:34 PM, Grant Ingersoll <gs...@apache.org> wrote:

> Granted, I'm on a slow machine, but our tests take forever to run.  On an 2
> core MBP, it takes well over an hour to run all the tests (I did just order
> a new MBP, so it will be faster, but it doesn't lend itself to a good OOTB
> experience for people)
>
> One idea would be to add in parallel test execution in Maven.  I think this
> requires Mvn 3, but I am not sure.  Another is to take a look at our tests,
> especially the slow ones and see if we can speed them up.
>
> When I try adding in parallel tests to Maven, I get a bunch of failures in
> the tests.
>
> I was using:
> <plugin>
>        <groupId>org.apache.maven.plugins</groupId>
>        <artifactId>maven-surefire-plugin</artifactId>
>        <configuration>
>          <forkMode>once</forkMode>
>          <argLine>-Xms256m -Xmx512m</argLine>
>          <testFailureIgnore>false</testFailureIgnore>
>          <redirectTestOutputToFile>true</redirectTestOutputToFile>
>          <parallel>classes</parallel>
>          <threadCount>5</threadCount>
>        </configuration>
>      </plugin>
>
> Anyone played around with this stuff?  I suspect the failures are due to
> tests stomping on each other, but I am still digging in.
>
> -Grant