Posted to dev@qpid.apache.org by Gordon Sim <gs...@redhat.com> on 2007/05/31 14:05:08 UTC

[python] run-tests: unit tests v. broker tests

Some excellent new unit tests have been added to the python code on 
trunk. Not all of these pass at present.

Brokers using the python run-tests script to test themselves will pick 
these new tests up and will report failures. One option is to add the 
failures to the list of expected failures for each broker.

However, as these new tests don't even open a connection to a broker, I 
wondered whether it would be more sensible to start partitioning the 
tests into unit tests for the python code itself and tests used to 
verify broker behaviour. That then seemed worth raising as a question 
for the group... thoughts?
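
As an illustration of the expected-failures option, a run-tests style
script could filter a broker's known failures at collection time roughly
like this (the file name, format and helpers below are assumptions made
for the example, not the actual run-tests mechanics):

    import unittest

    def load_expected_failures(path):
        # One fully-qualified test id per line; blank lines and comment
        # lines starting with '#' are ignored.  The format is made up
        # for this sketch.
        names = set()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    names.add(line)
        return names

    def partition(tests, expected):
        # Split a flat list of TestCase instances into tests that are
        # still expected to pass and tests already known to fail
        # against this particular broker.
        known, remaining = [], []
        for test in tests:
            (known if test.id() in expected else remaining).append(test)
        return unittest.TestSuite(remaining), unittest.TestSuite(known)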



Re: [python] run-tests: unit tests v. broker tests

Posted by Alan Conway <ac...@redhat.com>.
On Thu, 2007-05-31 at 09:55 -0400, Rafael Schloming wrote:
> 
> Robert Godfrey wrote:
> > I just think that the test pack for testing brokers isn't related to
> > the "python-ness" of the python tests... In fact they are designed for
> > testing *any* AMQP implementation.  This is fundamentally different to
> > the concept of unit testing the Qpid Python Library...
> 
> Agreed
> 
> > Looking at it the other way, for your unit tests, you shouldn't be
> > needing to run a broker :-)
> 
> The tests only connect to the broker on demand, so you don't have to run 
> the broker if you just run unit tests, but I agree that these are pretty 
> distinct kinds of tests.
> 
> What exactly are you suggesting, completely separate packaging for the 
> broker tests, a separate directory, ...?
> 
> I still think it's reasonable for an AMQP broker test suite to do some 
> amount of self testing before announcing that the broker is in error.
> 
I'm inclined to agree. It makes it easier to distinguish "python tests
failing because broker is broken" from "python tests failing because
python client is broken" and could save much wasted time looking for
non-existent broker bugs.

Unless they take a long time to run (doubtful!), I see no reason not to
run them along with the broker tests - the fact that they're failing now
is just a temporary quirk; they should be subject to the same "never
fail" rules as all tests.

I do agree that you need to be able to clearly distinguish which type of
test is failing; that could be done with just naming conventions for
test classes.

Cheers,
Alan.
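
Alan's naming-convention idea could be as simple as classifying tests by
their class name; the classes and helper here are hypothetical, just to
show the shape of it:

    import unittest

    class CodecTest(unittest.TestCase):
        # Client-side unit test: never opens a connection to a broker.
        def test_roundtrip(self):
            self.assertEqual(1 + 1, 2)

    class QueueBrokerTest(unittest.TestCase):
        # Broker-facing test: the "BrokerTest" suffix marks it as such.
        def test_declare(self):
            self.assertTrue(True)

    def is_broker_test(test):
        # Classify purely by class name, so a failure report shows
        # immediately which kind of test broke.
        return type(test).__name__.endswith("BrokerTest")

    if __name__ == "__main__":
        loader = unittest.TestLoader()
        tests = list(loader.loadTestsFromTestCase(CodecTest)) + \
                list(loader.loadTestsFromTestCase(QueueBrokerTest))
        broker = [t for t in tests if is_broker_test(t)]
        unit = [t for t in tests if not is_broker_test(t)]
        print("broker tests: %d, unit tests: %d" % (len(broker), len(unit)))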


Re: [python] run-tests: unit tests v. broker tests

Posted by Rafael Schloming <ra...@redhat.com>.

Robert Godfrey wrote:
> I just think that the test pack for testing brokers isn't related to
> the "python-ness" of the python tests... In fact they are designed for
> testing *any* AMQP implementation.  This is fundamentally different to
> the concept of unit testing the Qpid Python Library...

Agreed

> Looking at it the other way, for your unit tests, you shouldn't be
> needing to run a broker :-)

The tests only connect to the broker on demand, so you don't have to run 
the broker if you just run unit tests, but I agree that these are pretty 
distinct kinds of tests.

What exactly are you suggesting: completely separate packaging for the 
broker tests, a separate directory, ...?

I still think it's reasonable for an AMQP broker test suite to do some 
amount of self-testing before announcing that the broker is in error.

--Rafael
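
The "connect on demand" behaviour Rafael describes can be sketched as a
base class that only opens a connection the first time a test asks for
one (the names and the connect() body are placeholders, not the real
client API):

    import unittest

    class TestBase(unittest.TestCase):
        def setUp(self):
            self._connection = None

        def connect(self):
            # Placeholder: in a real suite this would open a
            # connection to the broker under test.
            raise NotImplementedError("open a broker connection here")

        @property
        def connection(self):
            # Lazily created, so unit tests that never read
            # self.connection run fine with no broker at all.
            if self._connection is None:
                self._connection = self.connect()
            return self._connection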

> 
> -- Rob
> 
> On 31/05/07, Rafael Schloming <ra...@redhat.com> wrote:
>> This is my fault, I accidentally checked in the codec tests before
>> applying the fixes. I've disabled them for the moment.
>>
>> I did create some more partitioning by adding them in the "tests"
>> directory rather than the "tests_0-8" or "tests_0-9" directories. I
>> modified the test-runner to run "tests" regardless of the version in
>> use. This isn't quite the same as the internal/protocol distinction
>> since there are probably useful protocol tests that are not version
>> specific. It would be easy enough to add something like an "internal"
>> directory to make it clear which failures are protocol related and which
>> aren't. I'd prefer still running all tests by default as the internal
>> tests are quite fast and should normally be a good sanity check as to
>> whether protocol failures indicate a broker issue or a python client 
>> issue.
>>
>> --Rafael
>>
>> Robert Godfrey wrote:
>> > +1
>> >
>> > I think there definitely needs to be a distinction between tests which
>> > happen to be written in Python, which are testing the broker; and
>> > tests which are testing the Python code.
>> >
>> > -- Rob
>> >
>> >
>> >
>> > On 31/05/07, Gordon Sim <gs...@redhat.com> wrote:
>> >> Some excellent new unit tests have been added to the python code on
>> >> trunk. Not all of these pass at present.
>> >>
>> >> Brokers using the python run-tests script to test themselves will pick
>> >> these new tests up and will report failures. One option is to add the
>> >> failures to the list of expected failures for each broker.
>> >>
>> >> However as these new tests don't even open a connection to a broker, I
>> >> wondered whether it would be more sensible to start partitioning the
>> >> tests into unit tests for the python code itself and tests used to
>> >> verify broker behaviour. That then seemed worth raising as a question
>> >> for the group... thoughts?
>> >>
>> >>
>> >>
>>

Re: [python] run-tests: unit tests v. broker tests

Posted by Robert Godfrey <ro...@gmail.com>.
I just think that the test pack for testing brokers isn't related to
the "python-ness" of the python tests... In fact, they are designed for
testing *any* AMQP implementation.  This is fundamentally different to
the concept of unit testing the Qpid Python Library...

Looking at it the other way, for your unit tests, you shouldn't be
needing to run a broker :-)

-- Rob

On 31/05/07, Rafael Schloming <ra...@redhat.com> wrote:
> This is my fault, I accidentally checked in the codec tests before
> applying the fixes. I've disabled them for the moment.
>
> I did create some more partitioning by adding them in the "tests"
> directory rather than the "tests_0-8" or "tests_0-9" directories. I
> modified the test-runner to run "tests" regardless of the version in
> use. This isn't quite the same as the internal/protocol distinction
> since there are probably useful protocol tests that are not version
> specific. It would be easy enough to add something like an "internal"
> directory to make it clear which failures are protocol related and which
> aren't. I'd prefer still running all tests by default as the internal
> tests are quite fast and should normally be a good sanity check as to
> whether protocol failures indicate a broker issue or a python client issue.
>
> --Rafael
>
> Robert Godfrey wrote:
> > +1
> >
> > I think there definitely needs to be a distinction between tests which
> > happen to be written in Python, which are testing the broker; and
> > tests which are testing the Python code.
> >
> > -- Rob
> >
> >
> >
> > On 31/05/07, Gordon Sim <gs...@redhat.com> wrote:
> >> Some excellent new unit tests have been added to the python code on
> >> trunk. Not all of these pass at present.
> >>
> >> Brokers using the python run-tests script to test themselves will pick
> >> these new tests up and will report failures. One option is to add the
> >> failures to the list of expected failures for each broker.
> >>
> >> However as these new tests don't even open a connection to a broker, I
> >> wondered whether it would be more sensible to start partitioning the
> >> tests into unit tests for the python code itself and tests used to
> >> verify broker behaviour. That then seemed worth raising as a question
> >> for the group... thoughts?
> >>
> >>
> >>
>

Re: [python] run-tests: unit tests v. broker tests

Posted by Rafael Schloming <ra...@redhat.com>.
This is my fault; I accidentally checked in the codec tests before 
applying the fixes. I've disabled them for the moment.

I did create some more partitioning by adding them to the "tests" 
directory rather than the "tests_0-8" or "tests_0-9" directories. I 
modified the test-runner to run "tests" regardless of the version in 
use. This isn't quite the same as the internal/protocol distinction, 
since there are probably useful protocol tests that are not 
version-specific. It would be easy enough to add something like an 
"internal" directory to make it clear which failures are protocol-related 
and which aren't. I'd still prefer running all tests by default, as the 
internal tests are quite fast and should normally be a good sanity check 
as to whether protocol failures indicate a broker issue or a python 
client issue.

--Rafael
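
As an illustration, collecting test modules the way Rafael describes
might look roughly like the following; the directory names come from the
thread, while the discovery details are assumptions for the sketch:

    import os
    import unittest

    def collect(version, root="."):
        # "tests" is always included; the version-specific directory
        # ("tests_0-8" or "tests_0-9") is layered on top of it, and an
        # "internal" directory could be added here in the same way.
        dirs = ["tests", "tests_%s" % version]
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for d in dirs:
            path = os.path.join(root, d)
            if os.path.isdir(path):
                suite.addTests(loader.discover(path, pattern="*.py"))
        return suite

    # e.g. unittest.TextTestRunner().run(collect("0-9"))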

Robert Godfrey wrote:
> +1
> 
> I think there definitely needs to be a distinction between tests which
> happen to be written in Python, which are testing the broker; and
> tests which are testing the Python code.
> 
> -- Rob
> 
> 
> 
> On 31/05/07, Gordon Sim <gs...@redhat.com> wrote:
>> Some excellent new unit tests have been added to the python code on
>> trunk. Not all of these pass at present.
>>
>> Brokers using the python run-tests script to test themselves will pick
>> these new tests up and will report failures. One option is to add the
>> failures to the list of expected failures for each broker.
>>
>> However as these new tests don't even open a connection to a broker, I
>> wondered whether it would be more sensible to start partitioning the
>> tests into unit tests for the python code itself and tests used to
>> verify broker behaviour. That then seemed worth raising as a question
>> for the group... thoughts?
>>
>>
>>

Re: [python] run-tests: unit tests v. broker tests

Posted by Robert Godfrey <ro...@gmail.com>.
+1

I think there definitely needs to be a distinction between tests which
happen to be written in Python but are testing the broker, and tests
which are testing the Python code itself.

-- Rob



On 31/05/07, Gordon Sim <gs...@redhat.com> wrote:
> Some excellent new unit tests have been added to the python code on
> trunk. Not all of these pass at present.
>
> Brokers using the python run-tests script to test themselves will pick
> these new tests up and will report failures. One option is to add the
> failures to the list of expected failures for each broker.
>
> However as these new tests don't even open a connection to a broker, I
> wondered whether it would be more sensible to start partitioning the
> tests into unit tests for the python code itself and tests used to
> verify broker behaviour. That then seemed worth raising as a question
> for the group... thoughts?
>
>
>