Posted to dev@maven.apache.org by Kristian Rosenvold <kr...@gmail.com> on 2012/11/07 15:38:41 UTC

In-progress patch for reusable surefire forks (SUREFIRE-751)

Andreas Gudian sent me a private email about this, but since this is
some of the awesomest stuff I've seen in a long while, I simply have
to share and invite the rest of you to participate in this discussion
;)

The patch provides reusable forks for surefire (we already support
parallel forks; now you can add more tests to each fork on-the-fly
too, possibly dynamically scheduling to the first free fork). I have
for the last year or two been refactoring surefire to reduce
complexity enough to make this an option, and I'm really happy someone
just implemented this without further ado ;)

His work is currently located at
https://github.com/agudian/maven-surefire/tree/fm-onceperthread

There is at least one thing I'd like to discuss with all of you, and
this regards concurrency ;)

The current code views each forked testrunner process as a "server",
and the surefire plugin has a single ForkClient for each server that I
originally intended to be a front-end for all comms to that server.

The thing I wonder most about is that Andreas's code also maintains a
queue of tests within each forked server (=test runner). When the
queue runs empty, it requests more tests, which are provided from the
ForkClient within the plugin. In my mind I had always envisioned that
the server would run a single test and respond back to the plugin, and
the plugin would send a new test or "quit" back to the fork. (As a side
note, this feature could probably be adapted to re-use the forked
process across modules in a reactor build too, which is a really
interesting optimization, especially if you can amend the classpath
when possible.)
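To make the two designs concrete, here is a rough sketch of the synchronous one-test-at-a-time protocol described above, seen from the plugin side. The class and method names are purely illustrative and are not actual Surefire API:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch: the fork runs one test, reports back, and the
// plugin replies with the next test name or "quit" when none are left.
public class SyncForkProtocol {
    private final Queue<String> pending = new ArrayDeque<>();

    public SyncForkProtocol(Iterable<String> tests) {
        for (String t : tests) pending.add(t);
    }

    // Called when the fork reports a finished test; returns the next
    // test to run, or "quit" once the queue is drained.
    public String nextCommand() {
        String next = pending.poll();
        return next != null ? next : "quit";
    }

    public static void main(String[] args) {
        SyncForkProtocol plugin =
                new SyncForkProtocol(List.of("FooTest", "BarTest"));
        System.out.println(plugin.nextCommand()); // FooTest
        System.out.println(plugin.nextCommand()); // BarTest
        System.out.println(plugin.nextCommand()); // quit
    }
}
```

In Andreas's variant the queue would instead live inside the fork, which pre-fetches batches of tests and therefore needs its own reader thread.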

Now, the thing is, from a classical textbook perspective my solution is
far inferior to what Andreas has implemented. But this is one of those
areas where I believe threading blows classical wisdom right out of the
water, and the round-trip cost becomes irrelevant for most practical
use cases. I'm wondering if Andreas has tried to see if there's any
measurable difference between having the queue inside the fork and not.

Additionally I would have /enjoyed/ making the "controller" concept a
bit more explicit, so we could have a clear pattern of a single
ForkController object owning one or more ForkClients and some tasks to
be done. I'm thinking that the "ForkStarter" class could probably
instantiate a "ForkController" and hand all the created ForkClients
over to it. I'm still chewing on the impressions from your excellent
code, but I have this feeling that would improve the awesomeness even
further.

It's definitely the awesomest I've seen for quite some time. The patch
is still going to take me some time to absorb and there might be a few
details to sort out, so I sincerely hope others will look at it too ;)

Kristian

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@maven.apache.org
For additional commands, e-mail: dev-help@maven.apache.org


Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Kristian Rosenvold <kr...@gmail.com>.
2012/11/7 Kristian Rosenvold <kr...@gmail.com>:
> The thing I wonder most about is that Andreas's code also maintains a
> queue of tests within each forked server (=test runner). When the
> queue runs empty, it requests more tests, which are provided from the
> ForkClient within the plugin. In my mind I had always envisioned that
> the server would run a single test, respond back to the plugin. The
> plugin would send a new test or "quit" back to the fork. (As a side
> note, this feature could probably be adapted to re-use the forked
> process across modules in a reactor build too, which is a really
> interesting optimization, especially if you can amend the classpath
> when possible.)

It got lost somewhere, but the reason /why/ I dislike the queue is that
it makes the forked server asynchronous and multithreaded, whereas
I originally figured it could be single-threaded (and synchronous) like
today. Of course, if it proves to be significantly better I'm all cool,
but the increased cost in complexity needs justification, since I
generally find this kind of latency optimization to be useless in
threaded environments.

Kristian



Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Dawid Weiss <da...@gmail.com>.
> If you have tests like that you end up having to use a new process for each
> test class. That's like having to decide between forkMode once and always
> in the single-threaded configuration. But yes, I've seen such tests as well
> :).

I agree that running a suite-per-jvm is the ultimate isolation but it
just adds so much overhead... But you're right -- it's a possibility,
sure.

> Gladly, I didn't have to do anything there - it pretty much was there
> already. The current code uses pipes (stdin/stdout/stderr) to communicate
> between the processes, with some specific escaping and op-code mechanism
> that allows to sort out test results and events, and stuff written to
> stdout / stderr within the tests.

This just doesn't work around the corner cases of a child VM crashing.
Believe it or not, this is pretty frequent in Lucene/Solr tests, as
folks run with bleeding-edge releases and on exotic platforms. What
happens is that different VMs report crashes to different output
descriptors (stderr, stdout); this bypasses any Java code and can
basically happen at any time, so it's kind of difficult to parse on
the receiver side... I initially had it done this way because it's a
natural pipe, but it didn't work on more than one occasion, hence the
decision to switch to a separate file for propagating events back to
the master/controller node. Another upside is that this event file is
fully asynchronous (unlike the pipe, which blocks once it reaches
a system limit), and the VM just flushes all the events without ever
waiting for the master to even read them. This matters for folks with
beefy hardware (24 cores running, etc.), where the master had trouble
keeping up flushing I/O buffers and was effectively becoming the
bottleneck.
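The event-file idea can be sketched in a few lines: the forked VM appends events to a file (a non-blocking write from its point of view), and the master tails that file at its own pace. This is only an illustration of the principle, not the actual randomizedtesting implementation; a real tailer would track a read offset instead of re-reading, and the file name is made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class EventFileTail {
    public static void main(String[] args) throws IOException {
        Path events = Files.createTempFile("fork-events", ".log");

        // Child side: append an event. This returns immediately even if
        // the master is slow, unlike a write into a full pipe.
        Files.writeString(events, "SUITE_STARTED org.example.FooTest\n",
                StandardOpenOption.APPEND);

        // Master side: read whatever has been flushed so far.
        List<String> seen = Files.readAllLines(events);
        System.out.println(seen.get(0));

        Files.delete(events);
    }
}
```

The key property is the decoupling: the child never waits on the master, so a slow master cannot back-pressure the tests.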

> I know exactly what you mean - I have to deal with tests that require
> database access (they set up test data, do some JPA, and perform a
> rollback). For such scenarios, I have added a feature that replaces the
> string ${surefire.threadNumber} in the system properties and in the argLine
> with the number of the executing thread, ranging [1..threadCount]. For me,

My idealistic idea was to have tests running in parallel out of the
box. In cases like yours it's obviously not going to happen. I like
the system property idea.

Dawid



Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Andreas Gudian <an...@gmail.com>.
Kristian Rosenvold:

> Andreas;
>
> We have somewhat of a tradition of staying backward compatible on plugin
> options. So normally we would add any alternate parameters in addition to
> the existing ones (causing more code and duplication).
>
> I know I am probably massively disqualified because of my total lack of
> a "newcomer" perspective, but it seems to me like the proposed changes are
> fairly minor in both gain and simplification. Additionally I see no future
> needs that will make the existing scheme break in any fundamental way.
>
> There are strong relationships between all the fork/parallel settings and
> I'm not sure increasing the symmetry on 'forkMode' is worth the cost.
>
> Kristian
>

Fair enough. Perhaps we can pimp the online docs a bit, maybe by adding
a new example page that covers the whole parallel / forkMode topic,
explaining what works together and what doesn't. I'm not shy and would
step up to make a draft if that's welcome... :).

Andreas

Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Kristian Rosenvold <kr...@gmail.com>.
Andreas;

We have somewhat of a tradition of staying backward compatible on plugin
options. So normally we would add any alternate parameters in addition to
the existing ones (causing more code and duplication).

I know I am probably massively disqualified because of my total lack of
a "newcomer" perspective, but it seems to me like the proposed changes
are fairly minor in both gain and simplification. Additionally, I see no
future needs that will make the existing scheme break in any fundamental
way.

There are strong relationships between all the fork/parallel settings and
I'm not sure increasing the symmetry on 'forkMode' is worth the cost.

Kristian

Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Andreas Gudian <an...@gmail.com>.
2012/11/7 Kristian Rosenvold <kr...@gmail.com>

> Having given this some more thought and correlated it with Dawid's
> mail, I think the main concern with free-running forks is in the
> logging/reporting bits. I'm still a firm believer that *more* threads
> never make things simpler, and I think you should clean up the patch
> without the threading; we can always add that later. I still think
> that latency has lost much of its relevance with lots of threads ;)
>

Done :)


>
> >
> > I guess another thing needs to be sorted out before deciding on whether
> to
> > drop the extra threading: and that is the TestNG provider. I don't yet
> have
> > an idea which way to go there. The current implementation requires all
> test
> > classes to be known upfront, in order for the provider to decide where to
> > store the test results: in case both JUnit and TestNG tests are present,
> > distinct directories will be created. Otherwise, they just land directly
> > test result directory. So it might have to be necessary to further
> > pre-process the test set in the parent process (which would require some
> > more API changes for the providers); to always hand over all the tests
> > known and to let each forked process somehow select a subset for the
> actual
> > execution; or maybe to decide that TestNG always uses special
> > sub-directories for JUnit results and TestNG result, regardless of
> whether
> > both types of test classes are present or not. (That currently unused
> > attribute multipleTestsForFork in ProviderConfiguration was meant to
> > somehow address the TestNG problem, but there's still some stuff to think
> > about first.)
>
> I wonder why we can't just always split them....?
> Did you actually try just running multiple times over & over ?
>

I think always splitting them would be the best option, if that
actually is an option, as it might confuse users. But then again... if
you actually run "mvn test -Dtest=..." multiple times today, it
generates the output into one directory, regardless of the type of test
I'm specifying. So in case mixing both output types is an issue, we
would have that issue today already. Does anybody know why exactly the
output directories are different when both test types are in the test
set?


>
> > And, one more thing, the current code does not support the concurrent
> > execution of test classes within one process (the parallel option). But I
> > guess that threaded handling of the input in LazyTestsToRun would not
> make
> > a difference there, either. ;) However, it might go in a direction
> similar
> > to the TestNG thing. But I'd prefer to leave out combining the parallel
> > option with the *forkperthread options for now... :)
>
> I think we can live with that. Achieving full symmetry of options would
> be cool, but this stuff is also quite hard sometimes, and the end-user
> gains are questionable ;)
>

Then, while we're at it, how about cleaning up the forkMode options? As
Kristian already suggested, there could be a new attribute "reuseFork"
(boolean, default: true). We could eliminate forkMode and just have "fork"
(boolean, default: false). Depending on the value of threadCount (default:
1), that would give us:
* threadCount=1:
  + fork=false, reuseFork=<does not matter> --> today's forkMode=never
  + fork=true, reuseFork=true --> today's forkMode=once
  + fork=true, reuseFork=false --> today's forkMode=always
* threadCount>1:
  + fork=true, reuseFork=false --> today's forkMode=perthread
  + fork=true, reuseFork=true --> forkMode=onceperthread
  + fork=false --> requires JUnit47 or TestNG and some setting of the
parallel attribute?
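The mapping above could be sketched as a small function. This is purely illustrative of the proposal; no such method exists in Surefire, and the names are made up:

```java
// Hypothetical mapping from the proposed (fork, reuseFork, threadCount)
// settings back to today's forkMode values, as listed above.
public class ForkModeMapping {
    static String legacyForkMode(boolean fork, boolean reuseFork, int threadCount) {
        if (!fork) {
            // With threadCount>1 this would require JUnit47/TestNG parallel.
            return "never";
        }
        if (threadCount <= 1) {
            return reuseFork ? "once" : "always";
        }
        return reuseFork ? "onceperthread" : "perthread";
    }

    public static void main(String[] args) {
        System.out.println(legacyForkMode(true, true, 1));  // once
        System.out.println(legacyForkMode(true, false, 4)); // perthread
        System.out.println(legacyForkMode(true, true, 4));  // onceperthread
    }
}
```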

Or, we could leave forkMode=[never|once|always] (remove "perthread") and
use the strategies perthread or onceperthread implicitly in case of
threadCount>1 and forkMode=always or once (respectively).
What do you think?

Andreas

Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Kristian Rosenvold <kr...@gmail.com>.
2012/11/7 Andreas Gudian <an...@gmail.com>:
> Hey guys,
>
> thanks for your input. Kristian, I'd like to start with your concerns about
> the threading in the forked process. With the code as it is now, you are
> totally right, that threading in this LazyTestsToRun class is not really
> required. It's a left-over from a previous attempt where the forked servers
> did not /ask/ to receive more work, but were that was pushed to them
> implicitly, with a special RunListener reacting on finished tests. The only
> advantage now left is that with the decoupled threads we can react on a not
> responding (hanging) main process, as I can easily observe a timeout with
> different threads. Being stuck in an InputStream#read() method might get us
> into trouble. But perhaps it's just paranoia and everything will unravel
> once the parent process gets killed.

Having given this some more thought and correlated it with Dawid's
mail, I think the main concern with free-running forks is in the
logging/reporting bits. I'm still a firm believer that *more* threads
never make things simpler, and I think you should clean up the patch
without the threading; we can always add that later. I still think
that latency has lost much of its relevance with lots of threads ;)

>
> I guess another thing needs to be sorted out before deciding on whether to
> drop the extra threading: and that is the TestNG provider. I don't yet have
> an idea which way to go there. The current implementation requires all test
> classes to be known upfront, in order for the provider to decide where to
> store the test results: in case both JUnit and TestNG tests are present,
> distinct directories will be created. Otherwise, they just land directly
> test result directory. So it might have to be necessary to further
> pre-process the test set in the parent process (which would require some
> more API changes for the providers); to always hand over all the tests
> known and to let each forked process somehow select a subset for the actual
> execution; or maybe to decide that TestNG always uses special
> sub-directories for JUnit results and TestNG result, regardless of whether
> both types of test classes are present or not. (That currently unused
> attribute multipleTestsForFork in ProviderConfiguration was meant to
> somehow address the TestNG problem, but there's still some stuff to think
> about first.)

I wonder why we can't just always split them....?
Did you actually try just running multiple times over & over ?


> And, one more thing, the current code does not support the concurrent
> execution of test classes within one process (the parallel option). But I
> guess that threaded handling of the input in LazyTestsToRun would not make
> a difference there, either. ;) However, it might go in a direction similar
> to the TestNG thing. But I'd prefer to leave out combining the parallel
> option with the *forkperthread options for now... :)

I think we can live with that. Achieving full symmetry of options would
be cool, but this stuff is also quite hard sometimes, and the end-user
gains are questionable ;)


Kristian



Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Andreas Gudian <an...@gmail.com>.
Hey guys,

thanks for your input. Kristian, I'd like to start with your concerns
about the threading in the forked process. With the code as it is now,
you are totally right that the threading in this LazyTestsToRun class
is not really required. It's a left-over from a previous attempt where
the forked servers did not /ask/ to receive more work, but where work
was pushed to them implicitly, with a special RunListener reacting on
finished tests. The only advantage left now is that with the decoupled
threads we can react to a non-responding (hanging) main process, as I
can easily observe a timeout with separate threads. Being stuck in an
InputStream#read() method might get us into trouble. But perhaps it's
just paranoia and everything will unravel once the parent process gets
killed.
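The timeout argument can be illustrated with a small sketch: a blocking InputStream#read() cannot observe a deadline by itself, but a separate reader thread can feed a queue that the worker polls with a timeout, treating prolonged silence as a hung parent. The names and the timeout value are illustrative, not Surefire code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimeoutReader {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> fromParent = new LinkedBlockingQueue<>();

        // A dedicated reader thread would normally block on stdin and
        // feed this queue; here we enqueue a command directly.
        fromParent.put("run org.example.FooTest");

        // Worker side: wait with a deadline instead of blocking forever.
        String cmd = fromParent.poll(2, TimeUnit.SECONDS);
        if (cmd == null) {
            System.out.println("parent seems to hang, shutting down");
        } else {
            System.out.println("received: " + cmd);
        }
    }
}
```

With a single synchronous thread, by contrast, the fork simply sits in read() and relies on the pipe closing when the parent dies.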

I guess another thing needs to be sorted out before deciding on whether
to drop the extra threading: the TestNG provider. I don't yet have an
idea which way to go there. The current implementation requires all
test classes to be known upfront, in order for the provider to decide
where to store the test results: in case both JUnit and TestNG tests
are present, distinct directories will be created; otherwise, they just
land directly in the test result directory. So it might be necessary to
further pre-process the test set in the parent process (which would
require some more API changes for the providers); to always hand over
all the known tests and let each forked process somehow select a subset
for the actual execution; or maybe to decide that TestNG always uses
special sub-directories for JUnit results and TestNG results,
regardless of whether both types of test classes are present or not.
(That currently unused attribute multipleTestsForFork in
ProviderConfiguration was meant to somehow address the TestNG problem,
but there's still some stuff to think about first.)

And, one more thing: the current code does not support the concurrent
execution of test classes within one process (the parallel option). But
I guess that threaded handling of the input in LazyTestsToRun would not
make a difference there, either. ;) However, it might go in a direction
similar to the TestNG thing. But I'd prefer to leave out combining the
parallel option with the *forkperthread options for now... :)

1) Lots of hard to fix bugs will occur for users once you split
> execution across forked JVMs if there are dependencies between suite
> classes (pollution of static fields, left-over threads etc.). These
> are hard to debug enough if you have a predictable suite assignment
> (which can be done, see the runner's static assignment policy) but if
> you have dynamic job stealing the order is pretty much a race
> condition and the information which suites ran on which JVM (and in
> which order) is crucial.
>

If you have tests like that you end up having to use a new process for each
test class. That's like having to decide between forkMode once and always
in the single-threaded configuration. But yes, I've seen such tests as well
:).


>
> 2) Didn't look into Andreas's code and don't know how he solved the
> problem of communicating between JVMs. This is quite tricky if you
> want to detect hung/crashed JVMs and capture the output they dump to
> stdout/stderr descriptors directly (bypassing any System.*
> replacements). We tail event files manually, this turned simple and
> robust.
>

Gladly, I didn't have to do anything there - it pretty much was there
already. The current code uses pipes (stdin/stdout/stderr) to
communicate between the processes, with a specific escaping and
op-code mechanism that makes it possible to sort out test results,
events, and stuff written to stdout/stderr within the tests.


> 4) Many test suites implicitly rely on the fact that they're executed
> in isolation. This may result in unexpected failures as in two test
> suites try to write to the same file in a current working directory,
> for example. We opted for running each forked JVM in a separate CWD so
> that they kind of get their own scratch space to work with.
>

I know exactly what you mean - I have to deal with tests that require
database access (they set up test data, do some JPA, and perform a
rollback). For such scenarios, I have added a feature that replaces the
string ${surefire.threadNumber} in the system properties and in the
argLine with the number of the executing thread, ranging from 1 to
threadCount. For me, that did the trick. It's actually something
separate from this forking feature (it works with all the other
forkModes as well), but I hacked it together with this one in an
offline, non-git copy of the source and I didn't want to scratch it out
before pushing. :)
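The substitution itself is essentially a string replacement per fork. A minimal sketch of the idea, with an assumed method name (the real patch does this while building the fork's configuration):

```java
// Hypothetical sketch: expand ${surefire.threadNumber} in an argLine
// with the index of the executing fork, in [1..threadCount], so each
// fork can get e.g. its own database schema.
public class ThreadNumberPlaceholder {
    static String expand(String argLine, int threadNumber) {
        return argLine.replace("${surefire.threadNumber}",
                Integer.toString(threadNumber));
    }

    public static void main(String[] args) {
        String argLine = "-Ddb.schema=test_${surefire.threadNumber}";
        System.out.println(expand(argLine, 3)); // -Ddb.schema=test_3
    }
}
```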

Andreas

Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Dawid Weiss <da...@gmail.com>.
> I'll take  a look at that. The only thing new to surefire in this patch
> is really dynamic allocation of tests to forked processes, since all
> the other stuff you mention is already supported.

Well, it's a lot more than just that ;) But I agree most of this stuff
(randomization, thread leak detection etc.) is beyond Surefire's scope
which is well defined.

> Right; there is little chance of actually tracking what went wrong
> in the current logging regime; it would need to be beefed up
> with which vm and when things happened.

It's really tricky. I'm not even saying we got to the point where it's
good, but there have been quite a few attempts... If you peek here:

https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3366/consoleText

it's one of the possible reports, it just lists suites that executed
and VMs they executed on, as in:

[junit4:junit4] Suite: org.apache.solr.cloud.OverseerTest
[junit4:junit4] Completed on J1 in 36.72s, 8 tests

If a suite ends with a failure it'll have an indented output
(interleaved stdout 1> and stderr 2>), as in:

[junit4:junit4] Suite: org.apache.solr.TestSolrCoreProperties
[junit4:junit4]   2> 3615 T571 oejs.Server.doStart jetty-8.1.2.v20120308
[junit4:junit4]   2> 3855 T571 oejs.AbstractConnector.doStart Started
SelectChannelConnector@0.0.0.0:29098
...

here it's not even a particular test that caused the failure but an
afterclass hook.

[junit4:junit4] ERROR   0.00s J1 | TestSolrCoreProperties (suite) <<<
[junit4:junit4]    > Throwable #1:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked
from SUITE scope at org.apache.solr.TestSolrCoreProperties:

The number of places where things can start logging or where they can
go bonkers is absolutely astonishing -- the static initializers, etc.
you mentioned, but also hanging shutdown hooks, finalizers, or threads
stuck in native code (preventing JVM exit).

> This is basically unchanged in his patch; we capture all process output.

Ok.

> Yuck. Andreas's patch sets distinct environment variable within each forked vm,
> which is about as far as I think it's /nice/ to go. Separate cwd
> sounds very un-maven like;
> I wouldn't really expect maven users to have this problem ;)

It's about the tests, not about Maven or Ant or whatever else. I can
bet a lot of people will have conflicts like the ones I mentioned; it's
just life -- a lot of badly written tests just happen. I recalled
another reason we split CWDs -- once you get a JVM crash it's easier
to tell which one it was (and J9 produces several files in the CWD,
for example).

> We're not adding support for starting vmware images ;)

Everything in the right time ;)

D.



Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Kristian Rosenvold <kr...@gmail.com>.
2012/11/7 Dawid Weiss <da...@gmail.com>:
> http://labs.carrotsearch.com/randomizedtesting.html
> https://github.com/carrotsearch/randomizedtesting

I'll take a look at that. The only thing really new to surefire in this
patch is the dynamic allocation of tests to forked processes, since all
the other stuff you mention is already supported.

>
> Some things to watch for, based on this experience:
>
> 1) Lots of hard to fix bugs will occur for users once you split
> execution across forked JVMs if there are dependencies between suite
> classes (pollution of static fields, left-over threads etc.). These
> are hard to debug enough if you have a predictable suite assignment
> (which can be done, see the runner's static assignment policy) but if
> you have dynamic job stealing the order is pretty much a race
> condition and the information which suites ran on which JVM (and in
> which order) is crucial.

Right; there is little chance of actually tracking what went wrong
with the current logging regime; it would need to be beefed up to
record which VM things happened on and when.

>
> 2) Didn't look into Andreas's code and don't know how he solved the
> problem of communicating between JVMs. This is quite tricky if you
> want to detect hung/crashed JVMs and capture the output they dump to
> stdout/stderr descriptors directly (bypassing any System.*
> replacements). We tail event files manually, this turned simple and
> robust.

This is basically unchanged in his patch; we capture all process output.

>
> 3) Most folks in Lucene were very attached to seeing plain console
> output, even if concurrent JVMs are executing tests. I don't need to
> mention this is again tricky and requires some coordination. Instead
> of synchronizing slave VMs with the master (possibly delaying the
> tests) we again went with tailing -- the forked VMs emit console
> events much like they do anything else and once the master spots a
> suite/ test was completed it pipes any output it produced to user's
> console. There are many different scenarios here and all of them have
> had both supporters and haters -- I'm not going to delve into it.

This was also solved in surefire around version 2.6 or so and is
unchanged in his patch; we also buffer until test completion. Output
from static initializers/beforeclass when running concurrently within a
single VM is the only glitch I am aware of in this logic.

>
> 4) Many test suites implicitly rely on the fact that they're executed
> in isolation. This may result in unexpected failures as in two test
> suites try to write to the same file in a current working directory,
> for example. We opted for running each forked JVM in a separate CWD so
> that they kind of get their own scratch space to work with. You could
> agree that this is catering for badly designed tests but the reality
> is many people will just have such tests in their code and won't know
> why it's not a good idea to use cwd/ the same socket/ the same
> resource across different suites.

Yuck. Andreas's patch sets a distinct environment variable within each
forked VM, which is about as far as I think it's /nice/ to go. A
separate CWD sounds very un-maven-like; I wouldn't really expect maven
users to have this problem ;)

As for all the other concurrent modes, there are tons of reasons why it
won't work for any "random" code you try to run in parallel. I can live
with that. We're not adding support for starting vmware images ;)


Kristian



Re: In-progress patch for reusable surefire forks (SUREFIRE-751)

Posted by Dawid Weiss <da...@gmail.com>.
> some of the awesomest stuff I've seen in a long while, I simply have

I implemented such stuff for Apache Lucene/Solr a while back, and it's
been running for quite a while now. It's not technically Maven (it's
an Ant task), but a Maven plugin wrapper for it is also provided and
does an equivalent thing -- load-balancing of test suites across
workers based on previous execution statistics, and job-stealing to
minimize the execution time when times are unknown (or vary across
runs).

http://labs.carrotsearch.com/randomizedtesting.html
https://github.com/carrotsearch/randomizedtesting

Some things to watch for, based on this experience:

1) Lots of hard-to-fix bugs will occur for users once you split
execution across forked JVMs if there are dependencies between suite
classes (pollution of static fields, left-over threads, etc.). These
are hard enough to debug if you have a predictable suite assignment
(which can be done, see the runner's static assignment policy), but if
you have dynamic job stealing the order is pretty much a race
condition, and the information about which suites ran on which JVM
(and in which order) is crucial.
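A contrived example of the static-state pollution described in point 1 (all names are made up): with one JVM per suite each class sees fresh statics, but once a fork is reused, whichever suite ran first leaks state into the next.

```java
public class StaticPollution {
    // Shared mutable static state -- fine in a fresh JVM per suite,
    // a trap once forks are reused across suites.
    static class SharedState { static int counter = 0; }

    static class FirstSuite {
        static void test() { SharedState.counter++; }
    }

    static class SecondSuite {
        // Passes in a fresh JVM; fails if FirstSuite ran in the same fork.
        static boolean test() { return SharedState.counter == 0; }
    }

    public static void main(String[] args) {
        FirstSuite.test();
        System.out.println("SecondSuite ok: " + SecondSuite.test()); // false
    }
}
```

With job stealing, which suite "ran first" in a given fork varies between builds, which is exactly why knowing the suite-to-JVM assignment matters for debugging.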

2) I didn't look into Andreas's code, so I don't know how he solved the
problem of communicating between JVMs. This is quite tricky if you
want to detect hung/crashed JVMs and capture the output they dump to
the stdout/stderr descriptors directly (bypassing any System.*
replacements). We tail event files manually; this turned out to be
simple and robust.

3) Most folks in Lucene were very attached to seeing plain console
output, even if concurrent JVMs are executing tests. I don't need to
mention this is again tricky and requires some coordination. Instead
of synchronizing slave VMs with the master (possibly delaying the
tests), we again went with tailing -- the forked VMs emit console
events much like they do anything else, and once the master spots that
a suite/test was completed, it pipes any output it produced to the
user's console. There are many different scenarios here and all of
them have had both supporters and haters -- I'm not going to delve
into it.

4) Many test suites implicitly rely on the fact that they're executed
in isolation. This may result in unexpected failures, as when two test
suites try to write to the same file in the current working directory,
for example. We opted for running each forked JVM in a separate CWD so
that they kind of get their own scratch space to work with. You could
argue that this is catering for badly designed tests, but the reality
is many people will just have such tests in their code and won't know
why it's not a good idea to use the CWD / the same socket / the same
resource across different suites.

I don't know if any of the code in that github account will be of use
to you -- RandomizedRunner is designed with some specific goals in
mind -- but I do have some experience in the matter of running JUnit
tests concurrently, so if this can be helpful, ping me.

Dawid
