Posted to users@qpid.apache.org by Kim van der Riet <ki...@redhat.com> on 2015/05/13 16:52:50 UTC

Create qpid-interop sub-project

I would like to propose and start a vote on asking infra@ to create a 
Qpid subproject called qpid-interop. The requested infrastructure will 
be limited to a git repo "qpid-interop" and a JIRA. All mailing and 
other communication would be performed through regular users@ and dev@ 
mailing lists.

What is qpid-interop?

This project is a test framework for interoperability between various 
AMQP 1.0 clients. This includes, but is not limited to, Qpid clients such 
as qpid-jms, Proton Python, C, C++, etc.  The tests will run against any 
running AMQP broker, and should include:

* AMQP types
* AMQP functionality
* Transactions

The idea is to have a top-level control program which directs specific 
clients to send or receive data through a "shim": a light-weight 
send-and-receive client, driven from the command line, which performs 
narrow, specific tasks.  The process for editing and adding both 
clients and tests will be well documented so that anyone in the 
community can add to them.

I already have some code in Python and docs to share which focus on 
the first category above, the AMQP types. This is intended to illustrate 
the idea and open the way for others to contribute both other type tests 
and other clients. These need a landing place, and I am proposing to 
start by asking infra to create a git repo and a JIRA, and I'll check in 
the code and docs I have so far.

I would welcome your comments and ideas, and hope to start a vote in a 
day or so.

Kim van der Riet

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org


Re: Create qpid-interop sub-project

Posted by Alan Conway <ac...@redhat.com>.
I would suggest we handle dependencies and multi-version using normal
installation conventions.

# Single configuration test
 
We have a bunch of components (qpid c++, java, proton, dispatch etc.)
and we pick a version of each that we think should inter-operate. We
build and install each component into $HOME/test1 say, in dependency
order. E.g. first proton, then qpid using proton from $HOME/test1 etc.

Now we can build and run the interop suite with 

    cmake -DCMAKE_INSTALL_PREFIX=$HOME/test1 && make install
    $HOME/test1/bin/interop_dostuff

The interop cmake looks for its dependencies in PREFIX, auto-detecting
the ones that are there. It builds all the tests for the dependencies it
has found and installs them to PREFIX and you run them. 

# Multi configuration tests

We need to cross-test different versions of components. Say we have a
new version of proton. We install it to a new tree $HOME/test2. We
create a new build directory and configuration for interop with PREFIX=
$HOME/test2, then build and install. CMake allows (encourages) multiple
build directories for multiple configurations - use it.

Now the interop tests are a mix of test programs built against component
libraries and python scripts to run those programs. By default those
scripts run the executables from their own tree. BUT they can also be
configured (config file, command line args etc.) to run executables from
a different tree.

# Install gives isolation

The install tree is a way to have multiple configurations of different
component versions on a single host. We can burn the install location
into binaries and libraries (via rpath), and into config files, scripts
etc. (via cmake substitution) so we don't rely on PATH, LD_LIBRARY_PATH,
PYTHONPATH etc. Running test1/bin/testclient will always use libraries
etc. from the test1 tree. 

The scripts don't look *across* trees until they are explicitly
configured to do so: since we don't rely on PATH we can write a config
file for $HOME/test1/bin/interop_foo.py that points to
$HOME/test2/bin/someclient without getting into a mess about which
proton lib someclient links - it's the test2 lib.
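The default-to-own-tree rule with an explicit cross-tree override can be
sketched in a few lines (the function and mapping names are made up for
illustration; only the prefix/bin layout is taken from above):

```python
import os

def resolve_executable(name, own_prefix, overrides=None):
    """Resolve an executable to a path inside an install tree.

    By default only this configuration's own tree is consulted, so we
    never rely on PATH or LD_LIBRARY_PATH; a cross-tree test opts in
    explicitly by mapping an executable name to a different prefix.
    """
    prefix = (overrides or {}).get(name, own_prefix)
    return os.path.join(prefix, "bin", name)
```

So a script in test1 always runs test1 binaries unless its config
explicitly says something like {"someclient": "$HOME/test2"}.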

This sounds complicated but it's a lot simpler than the alternatives ;)

The install tree layout may not be perfect but it is definitely Good
Enough (I've used it). It would be a *massive* waste of effort trying to
come up with a better layout, and the result wouldn't be much better. It saves us
from inventing conventions about where to put binaries, libraries,
python libraries, ruby/go/blah libraries etc - just put them in the
standard place they would install to relative to PREFIX.

# Windows

I don't know windows install conventions well but I assume make install
will put stuff under the prefix, and that's the starting point. We
should use python for all scripting (not bash) and text or python files
for configuration. There may be win/unix specific env settings somewhere
but they should be very small and executed from a uniform python
wrapper. CMake and python already abstract almost everything for us.

# NOTES

We can avoid burning-in everything by having a few burnt-in entry-point
scripts that override environment to suit the local tree before running
anything else - that's a detail for implementation.

We may need to tinker with cmake RPATH settings to get isolation to work
properly; it's easy enough, so shout if you need help.






Re: Create qpid-interop sub-project

Posted by Kim van der Riet <ki...@redhat.com>.
Hi Rafi,

Thanks for your questions and comments. As there are several issues 
brought up, I'll try to answer them in-line.

On 05/14/2015 08:14 AM, Rafael Schloming wrote:
> Hi Kim,
>
> How exactly are you picturing the dependencies working here? Is this
> project going to be an assemblage of other projects, i.e. an integration
> test suite, or do you intend other projects to depend on this one and pull
> it in as part of their standard test suite?

I foresee that the first option would be the most practical, but we need 
to minimize the degree of dependency. The overall idea is to have a set 
of shims that depend on an installed instance of a client lib, and on a 
running broker. The test suite would then need to be given the following 
info:

    qpid-interop broker-url client-1 client-1-location client-2 client-2-location

but whether this is fully practical, I don't yet know.  Initially, the 
test will be built against the client dev environment, and whatever 
version that represents is the version that will be tested.
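As a sketch only, the invocation above maps naturally onto a handful of
positional arguments; nothing here is a committed interface:

```python
import argparse

def parse_args(argv):
    """Parse the hypothetical qpid-interop command line sketched above."""
    parser = argparse.ArgumentParser(prog="qpid-interop")
    parser.add_argument("broker_url", help="URL of the running AMQP broker")
    parser.add_argument("client1", help="name of the first client under test")
    parser.add_argument("client1_location", help="install location of the first client")
    parser.add_argument("client2", help="name of the second client under test")
    parser.add_argument("client2_location", help="install location of the second client")
    return parser.parse_args(argv)
```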

Testing against various versions of the same client is certainly 
possible, and with a little effort could be done, provided that there is 
a way for these clients to co-exist on the same test box. 
Alternatively, the various client versions could be run from different 
remote boxes, but my current dev effort has not allowed for this 
possibility. I would prefer to start out by keeping the test framework 
as simple as possible.
>
> For some context here, the recent sasl changes on proton illustrate some of
> the challenges of maintaining interop. The changes worked fine on master.
> All the test suites passed because the changes introduced compensating
> behaviors on the client and server ends. However they did break interop
> with previous versions of proton, and this was not apparent from our test
> coverage because our tests are generally constructed from two live proton
> endpoints talking to each other.
>
> This same issue is relevant to the approach you describe, because even
> though you are assembling a variety of different endpoints, many/most use
> proton underneath which means the same kind of compensating change would go
> undetected unless you start introducing different versions of each client
> into the mix. Once you have a project that has lots of different versions
> of lots of different clients though, it's hard to imagine being able to
> introduce an actual dependency on your project without a big mutual
> dependency mess.

I think this is a very valid issue and should be addressed. IIUC, it 
would reduce to being able to handle different versions of the same 
client as independent clients under test (CUT) in the test framework.

That being said, the initial goal of the project was to provide some 
kind of interop testing between _current_ clients, as we don't have a 
lot of this kind of testing right now. If the project is initially 
limited to the most recent version of each client, that will still 
perform a useful function. I would rather have this more limited 
functionality and a simple test framework than one that is more complex 
and difficult to add to or to contribute to.

>
> In the end, I was able to extend the proton test suite to provide interop
> testing against the sasl cases that broke by using a simple but potentially
> powerful technique. After encountering the interop issues "in the wild",
> rather than trying to set up a live version of current proton running
> against a live version of older protons, I was simply able to capture the
> raw byte sequences from the older versions of proton and construct several
> tests that used these raw byte sequences to drive the current proton
> endpoint. In other words instead of doing this (which is nigh impossible
> to do in an automated way from inside the proton test suite):
>
>      proton-new <-------> proton-old
>
> I was able to do this:
>
>      proton-new <-------> dumb-mimic
>
> Where dumb-mimic is constructed by simply pumping out the same bytes
> observed from proton-old "in the wild."
>
> The benefit of this approach is that the dumb-mimic will always do whatever
> the old version of proton did, regardless of whatever compensating changes
> are introduced in the tree, but it doesn't actually depend on having old
> proton around, so it's really simple and easy to integrate into the
> standard test suite and run all the time.

The idea of using wire-level mimics was considered, but as the idea is 
to get a snapshot of current interop status and get a quick warning of 
breakages, this idea is too convoluted and complex for this goal. I'm 
not saying for a moment that there is no value in this approach, but I 
suspect if it is pursued, it will be in the form of a different test 
which is more centered on wire conformity.  In the framework proposed 
here, even if the wire protocol is ignored or flouted, the clients will 
still pass so long as they agree with each other closely enough that 
interoperability is guaranteed.

To illustrate this, I notice that if the proton python client sends a 
message body containing an AMQP null type, it results in totally empty 
message content, rather than the AMQP null type as I might expect. 
According to my interpretation of the AMQP 1.0 spec, this should not be 
allowed, as every message needs to have one of three possible payloads. 
But if all our clients follow this convention, the tests for this type 
will still pass in this framework.
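The difference is visible at the byte level. If my reading of the AMQP 
1.0 encodings is right, an amqp-value body section holding null is the 
described-type descriptor 0x00 0x53 0x77 followed by the null 
constructor 0x40, whereas the behaviour observed above produces no body 
bytes at all:

```python
# Explicit AMQP null payload: described type (descriptor smallulong 0x77,
# amqp:amqp-value:*) followed by the null type constructor 0x40.
AMQP_VALUE_NULL = b"\x00\x53\x77\x40"
EMPTY_PAYLOAD = b""  # what the client was observed to emit instead

def classify_body(payload):
    """Distinguish an explicit AMQP null body from a missing body."""
    if payload == AMQP_VALUE_NULL:
        return "amqp-value section containing null"
    if payload == EMPTY_PAYLOAD:
        return "no body section at all"
    return "other"
```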

In short, I want this framework to be an end-to-end test with as much 
simplicity as possible.  Interop and integration testing is by its 
nature a difficult and complex area, and often results in large test 
coverage holes because of this.  I hope to create something that is easy 
to add to and for others to contribute to.

>
> I guess the upshot is that I can see how an integration framework that
> makes it easy to assemble and test live endpoints could be useful, but I
> don't think it actually buys you interop testing without a fair amount of
> manual work, i.e. you would need to stand the thing up somewhere in a CI
> system and run it constantly in a configuration that tests current versions
> against prior ones. To be clear, I'm not saying this is a bad thing, I'm
> just trying to understand what your expectations are. Such a system could
> be a nice source of frame captures to feed into the "dumb-mimic" approach.

I agree that such a test framework can be a source of wire-level packets 
for a "dumb-mimic" approach, and I'm sure it would not be too difficult 
to have the wire monitored by wireshark (or something similar) to 
capture the AMQP traffic.  However, at least initially, I don't see it 
being a part of this test suite.

>
> Sorry for the long/meandering post, and apologies for quibbling over
> "interop" vs "integration", I'm just trying to be precise in order to
> better understand your intent/goals.
>
> --Rafael

On the question of "integration" vs. "interop", I still see this as the 
latter... at least from a client viewpoint. The running broker is an 
almost invisible convenience to the clients, whose sole concern is to 
communicate reliably with each other. Their ability to send data and to 
make sure they understand the data they send each other is what is at 
stake here, and to me that is more of an interop issue.

Thanks for your questions, I hope that my answers have given you a 
better idea of what I am thinking at this point.  I don't think I have 
anticipated every complexity or potential difficulty, so having others 
comment and give ideas is very helpful. Other problems will no doubt 
emerge as the project progresses.



Re: Create qpid-interop sub-project

Posted by Alan Conway <ac...@redhat.com>.
On Thu, 2015-05-14 at 08:14 -0400, Rafael Schloming wrote:
> Hi Kim,
> 
> How exactly are you picturing the dependencies working here? Is this
> project going to be an assemblage of other projects, i.e. an integration
> test suite, or do you intend other projects to depend on this one and pull
> it in as part of their standard test suite?

I sent a detailed email about this - I think the trick is to install the
projects you want to test and build/run interop against them. By using
multiple install trees you can do cross-tests between different
versions.

>     proton-new <-------> proton-old
> 
> I was able to do this:
> 
>     proton-new <-------> dumb-mimic
> 
> Where dumb-mimic is constructed by simply pumping out the same bytes
> observed from proton-old "in the wild."

Dumb-mimic is very useful for in-project testing (I did this in
proton/tests/interop too). I think the new interop tests are for
inter-project and inter-version testing. Problems found by interop
testing could be fed back as component regression tests using dumb-mimic
as you describe.

> I guess the upshot is that I can see how an integration framework that
> makes it easy to assemble and test live endpoints could be useful, but I
> don't think it actually buys you interop testing without a fair amount of
> manual work, i.e. you would need to stand the thing up somewhere in a CI
> system and run it constantly in a configuration that tests current versions
> against prior ones.

Single version proton-proton tests do not warrant a new test framework,
but we have a bunch of other projects and they need to talk to each
other.  A complete cross-version matrix is not feasible but just testing
all the project trunks against each other and against all the last
releases would go a long way.
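Enumerating that reduced matrix is cheap; a sketch with placeholder
project names:

```python
import itertools

def build_matrix(trunks, releases):
    """Return (sender, receiver) pairs: trunk x trunk, plus trunk vs release."""
    pairs = set()
    # All project trunks against each other (both directions, no self-pairs).
    for a, b in itertools.product(trunks, trunks):
        if a != b:
            pairs.add((a, b))
    # Every trunk against every last release, in both directions.
    for t, r in itertools.product(trunks, releases):
        pairs.add((t, r))
        pairs.add((r, t))
    return sorted(pairs)
```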

> --Rafael




Re: Create qpid-interop sub-project

Posted by Rafael Schloming <rh...@alum.mit.edu>.
Hi Kim,

How exactly are you picturing the dependencies working here? Is this
project going to be an assemblage of other projects, i.e. an integration
test suite, or do you intend other projects to depend on this one and pull
it in as part of their standard test suite?

For some context here, the recent sasl changes on proton illustrate some of
the challenges of maintaining interop. The changes worked fine on master.
All the test suites passed because the changes introduced compensating
behaviors on the client and server ends. However they did break interop
with previous versions of proton, and this was not apparent from our test
coverage because our tests are generally constructed from two live proton
endpoints talking to each other.

This same issue is relevant to the approach you describe, because even
though you are assembling a variety of different endpoints, many/most use
proton underneath which means the same kind of compensating change would go
undetected unless you start introducing different versions of each client
into the mix. Once you have a project that has lots of different versions
of lots of different clients though, it's hard to imagine being able to
introduce an actual dependency on your project without a big mutual
dependency mess.

In the end, I was able to extend the proton test suite to provide interop
testing against the sasl cases that broke by using a simple but potentially
powerful technique. After encountering the interop issues "in the wild",
rather than trying to set up a live version of current proton running
against a live version of older protons, I was simply able to capture the
raw byte sequences from the older versions of proton and construct several
tests that used these raw byte sequences to drive the current proton
endpoint. In other words instead of doing this (which is nigh impossible
to do in an automated way from inside the proton test suite):

    proton-new <-------> proton-old

I was able to do this:

    proton-new <-------> dumb-mimic

Where dumb-mimic is constructed by simply pumping out the same bytes
observed from proton-old "in the wild."

The benefit of this approach is that the dumb-mimic will always do whatever
the old version of proton did, regardless of whatever compensating changes
are introduced in the tree, but it doesn't actually depend on having old
proton around, so it's really simple and easy to integrate into the
standard test suite and run all the time.
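The mimic itself can be a few lines: bind a socket, accept one peer,
replay the captured bytes, with no protocol logic at all. Here the
"capture" is just the AMQP 1.0 protocol header as a stand-in for real
recorded frames:

```python
import socket
import threading

def start_mimic(captured_bytes):
    """Start a dumb-mimic on a free local port; replay bytes to the first peer."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        conn.sendall(captured_bytes)  # no protocol logic, just replay
        conn.close()
        server.close()

    t = threading.Thread(target=serve)
    t.daemon = True
    t.start()
    return port, t
```

The endpoint under test then connects to that port exactly as it would
to a live old-version peer.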

I guess the upshot is that I can see how an integration framework that
makes it easy to assemble and test live endpoints could be useful, but I
don't think it actually buys you interop testing without a fair amount of
manual work, i.e. you would need to stand the thing up somewhere in a CI
system and run it constantly in a configuration that tests current versions
against prior ones. To be clear, I'm not saying this is a bad thing, I'm
just trying to understand what your expectations are. Such a system could
be a nice source of frame captures to feed into the "dumb-mimic" approach.

Sorry for the long/meandering post, and apologies for quibbling over
"interop" vs "integration", I'm just trying to be precise in order to
better understand your intent/goals.

--Rafael

