Posted to dev@qpid.apache.org by Alan Conway <ac...@redhat.com> on 2007/03/02 21:42:01 UTC

Re: Draft Interop Testing Spec - Please Read

On Tue, 2007-02-27 at 10:23 +0000, Rupert Smith wrote:
> On 2/27/07, Alan Conway <ac...@redhat.com> wrote:
snip
> > I disagree. <snip> The actual components involved in a given test
> > run should be determined at runtime by the controller, not baked into
> > the tests. 

> What I was imagining is that each client would hear the declarations
> of the other clients; when each client declares itself it declares
> its name, and it is these declared names that would be used to name
> the test outputs.

Apologies for the irrelevant rant - including language info in test
reports is good; I got the wrong end of the stick.

> It might even be
> advantageous to get the broker type in there too somewhere?
It might indeed. We'll have to play with some real reporting output to
figure out the right level of detail.

> Originally, I was thinking that each client would be responsible for
> writing out the results of the tests where it is the sending part, in
> the JUnit XML format. When Gordon suggested a more centralized
> approach, I liked the idea because only the coordinator is going to do
> the result logging, saving us the trouble of writing it in each
> implementation language. So now I'm thinking that the coordinator
> sends out an invite for test case X; "Java-32765" and "Cpp-21364"
> reply to it; it sets up one with the sender role and one with the
> receiver role, runs test case X (through broker Y), and so on for
> all the other permutations. So the coordinator knows that this is a
> Java to Cpp test for case X through broker Y and can name the test
> results appropriately. If the coordinator is written in Java, I know
> that it is definitely possible to make it use JUnit to dynamically
> create and name test cases like this; it may require writing a special
> test decorator or test case implementation, but it can be done.
> 

I like it - each interop test is a runtime composite of a selection of
"compatible" JUnit/CppUnit/pythonunit/rubyunit tests. We write the
per-language tests once and get the harness to generate the
combinations we want to test.
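
To make that concrete, here's a rough sketch of the JUnit 3 side of
such a harness - everything in it (the Coordinator hook, the client
name strings, the naming scheme) is made up for illustration, not a
settled API:

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// Hypothetical hook to whatever actually drives the clients through
// the broker; the real thing would send invites and collect replies.
interface Coordinator {
    // Runs one pairing; true means both participants passed.
    boolean run(String testCase, String broker, String sender, String receiver);
}

public class InteropSuiteBuilder {
    // One JUnit test per sender/receiver pairing, named so the report
    // identifies language, direction and broker.
    public static Test build(final Coordinator coord, final String testCase,
                             final String broker, String[] clients) {
        TestSuite suite = new TestSuite("interop." + testCase);
        for (int i = 0; i < clients.length; i++) {
            for (int j = 0; j < clients.length; j++) {
                final String sender = clients[i];
                final String receiver = clients[j];
                // The test's name is what shows up in the JUnit XML report.
                String name = testCase + "." + sender + "->" + receiver
                              + ".via." + broker;
                suite.addTest(new TestCase(name) {
                    protected void runTest() {
                        assertTrue(getName() + " failed",
                                   coord.run(testCase, broker, sender, receiver));
                    }
                });
            }
        }
        return suite;
    }
}

Feed it the declared client names ("Java-32765", "Cpp-21364", ...) and
the report names fall straight out of the declarations, which is
exactly what we want.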

On results: regardless of who produces the final report, we do have to
collect results from all participants. I'd be inclined to go lo-tech:
everybody dumps their assertions to the file system and we scrape it all
up afterwards, or use some simple non-qpid protocol like syslog to
gather results. If we use XML output we can stitch it all back together
in nice HTML pages at the end. It is tempting to use qpid to gather the
reports, but then qpid failures could hide their own tracks.
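
For the scraping, something as dumb as this would do to start with -
the directory layout and file naming here are just assumptions:

import java.io.*;

// Gathers the per-participant result files dumped under one directory
// into a single file, for stitching into HTML later.
public class ResultScraper {
    public static void main(String[] args) throws IOException {
        File dir = new File(args.length > 0 ? args[0] : "interop-results");
        File[] reports = dir.listFiles(new FilenameFilter() {
            public boolean accept(File d, String name) {
                return name.endsWith(".xml");
            }
        });
        PrintWriter out = new PrintWriter(
            new FileWriter(new File(dir, "all-results.txt")));
        for (int i = 0; reports != null && i < reports.length; i++) {
            out.println("==== " + reports[i].getName() + " ====");
            BufferedReader in = new BufferedReader(new FileReader(reports[i]));
            String line;
            while ((line = in.readLine()) != null)
                out.println(line);
            in.close();
        }
        out.close();
    }
}

No qpid anywhere in that loop, so a broker failure can't cover its own
tracks.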

> > I want to be able to do something like this:
> >
> > svn co https://blah/qpid
> > cd qpid/interop/bin
> > build_everything
> > run_interop_tests
> 
> What I'm thinking is that you will have to do a little bit more than
> this. 
Only the first time; after that I'll write scripts :) Seriously - let's
get it working, then we can make it easier to use. We'll need the finer
granularity anyway for investigating specific problems.

A couple of caveats:
 - No script that starts background processes returns until all such
processes are fully initialized (see QPID-304; one way to do this is
sketched below).
 - Scripts return 0 exit status if and only if everything really is OK.
 - Scripts finish in finite time even in the event of failures.

Once we have that we can automate to our heart's content. Without those
guarantees reliable automation is impossible. (See the unreliable
automation in Qpid C++ for proof :)
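
For the first and third caveats, each start script could block on
something like this before returning - host, port and timeout are
illustrative only, and "port accepts connections" is just a stand-in
for whatever "fully initialized" really means for the broker:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Polls the broker's port until it accepts a connection, so a start
// script only returns once the broker is really up. Exits 0 on
// success, 1 on timeout - and always in finite time.
public class WaitForBroker {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        int port = 5672; // default AMQP port
        long deadline = System.currentTimeMillis() + 30 * 1000L;
        while (System.currentTimeMillis() < deadline) {
            try {
                Socket s = new Socket();
                s.connect(new InetSocketAddress(host, port), 1000);
                s.close();
                System.exit(0); // broker is up
            } catch (IOException e) {
                try { Thread.sleep(500); } catch (InterruptedException ie) {}
            }
        }
        System.err.println("broker not up within timeout");
        System.exit(1); // failed, but we didn't hang forever
    }
}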

> Is this an acceptable approach? The build scripts for each client can
> inject whatever paths and environment variables they need into their
> start scripts during their builds?

Absolutely. I think the overall ideas are sound, and we can iron out the
wrinkles as we go.  Thanks for putting this together.

Cheers,
Alan.