Posted to dev@subversion.apache.org by Nathan Hartman <ha...@gmail.com> on 2019/10/13 04:20:33 UTC

Make check with different client and server versions

Recently, in another thread ("PMCs: any Hackathon requests? (deadline 11
October)")...

On Thu, Oct 10, 2019 at 4:54 PM Daniel Shahaf <d....@daniel.shahaf.name>
wrote:

> Something in the test harness.  For example, make it easier to run «make
> check»
> with client version ≠ server version, to actually test on-the-wire
> compatibility.


Testing with different client and server versions has been mentioned here
several times recently.

I've been giving this some thought. I think this is important, given that
mixed client and server versions are common in real-world svn deployments.

How would we do this?

I assume it would be something along these lines:

A test "driver" program would contain a list of versions to be tested. That
program would download, configure, make, make check, and make install each
listed version into its own prefix. Then it would iterate through all
permutations of client and server versions (except equal client/server
versions, since those are already tested by "make check") and run the tests.

Actually, testing all permutations sounds like overkill and would take an
unreasonable length of time. We would probably test the latest client
against all listed server versions, and the latest server against all
listed client versions.
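
To make this concrete, here is a rough sketch of what I have in mind
(Python; the version list, paths, and especially the way the test suite
is pointed at a foreign server are all placeholders, not working code):

   #!/usr/bin/env python3
   # Hypothetical cross-version test driver -- a sketch only.
   import os
   import subprocess

   VERSIONS = ["1.10.6", "1.13.0"]            # placeholder version list
   BUILD_ROOT = os.path.expanduser("~/svn-cross")

   def build(version):
       """Configure, build, test and install one version into its own prefix."""
       src = os.path.join(BUILD_ROOT, "src", version)  # assume already unpacked
       prefix = os.path.join(BUILD_ROOT, "install", version)
       subprocess.run(["./configure", "--prefix=" + prefix], cwd=src, check=True)
       subprocess.run(["make"], cwd=src, check=True)
       subprocess.run(["make", "check"], cwd=src, check=True)
       subprocess.run(["make", "install"], cwd=src, check=True)
       return src, prefix

   def cross_test(client_src, server_prefix):
       # Run the client version's own test suite against the server
       # installed under server_prefix.  How the suite is told to use that
       # server is exactly the open question; the variable below is made
       # up just to mark the gap.
       env = dict(os.environ, SVN_CROSS_SERVER_PREFIX=server_prefix)
       subprocess.run(["make", "check"], cwd=client_src, env=env, check=True)

   builds = {v: build(v) for v in VERSIONS}
   latest = VERSIONS[-1]
   for older in VERSIONS[:-1]:
       cross_test(builds[latest][0], builds[older][1])  # newest client, older server
       cross_test(builds[older][0], builds[latest][1])  # older client, newest server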

Is this a reasonable initial concept?

If so, answers / solutions are needed for the following:

(1) Which versions are we interested in cross-testing in this manner?

Do we want to limit ourselves to only cross-testing currently supported
versions?

Do we want to test unsupported versions that are likely to be in reasonably
widespread use today, including 1.8.x and 1.9.x? [1]

Do we want to go as far back as some antique version like 1.5 (e.g. test a
1.13 client against a 1.5 server; test a 1.5 client against a 1.13 server)?

Do we want to go for ultimate flexibility and allow testing any two trunk
and/or branch revisions against each other (which is different from, say,
testing released or RC code from tarballs)?

Do we want this to be configurable, i.e. the tester could choose a
"shallow" or "deep" test?

(2) How do we handle differences between versions?

For example, newer versions probably contain more features (with their
associated tests), and more tests in general, than older versions.

Is the test driver program supposed to contain knowledge of these
differences and prevent some of the tests from running under certain
combinations of client and server versions?

(3) How do we handle dependencies? For example, IIRC, until some recent
version we couldn't build against APR 1.7.0; now we can. Do we try to find
a least common denominator version of each dependency and build all
versions with that? Or is it better to build each version with the
dependency versions as listed in get-deps.sh?

Am I on the right track?

Nathan

Notes:
[1] I'm basing that on what's in a certain popular OS's package manager and
recent messages to users@.

Re: Make check with different client and server versions

Posted by Julian Foad <ju...@apache.org>.
+1. I'd like to support us doing this.

Nathan Hartman wrote:
> How would we do this?

As a starting point, for each client version to be tested, the new 
multi-combination test driver should:

   * Run the regression test suite that is supplied with that client 
version.  (This will be easiest because most variation is associated 
with client-side changes.)

   * Tell the test suite ("make check") which server version to expect.

This is partly done: the Python and C tests take an argument

   --server-minor-version
     Set the minor version for the server ('3'..'14')

   or, for the C tests (whose docs apparently haven't been updated recently):
     set the minor version for the server ('3', '4', '5', or '6')

I'm not sure exactly how one sets up the server appropriately, before 
running with that option, for various kinds of server.  Maybe 'make 
check' and/or 'make svnserveautocheck' and/or 'make davautocheck' have 
ways to specify how to find and run the desired server version.  We'll 
probably need to check and update that.
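
For illustration, a driver could pass that option straight through to one
of the cmdline test scripts, something like this (the path and version
number here are just examples):

   import subprocess
   # Tell basic_tests.py to expect a 1.10 server.  Actually starting or
   # selecting the matching server binaries is the part that still needs
   # wiring up, as noted above.
   subprocess.run(["./basic_tests.py", "--server-minor-version=10"],
                  cwd="subversion/tests/cmdline", check=True)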


Ideally, later, the tests should also be divided or tagged so we can 
select sets of tests:
   - client-server tests
   - client-only tests
   - server-only tests
because we could then eliminate running redundant sets.  Initially, that 
isn't critical.


> (1) Which versions are we interested in cross-testing in this manner?

Start with a simple fixed set, such as

   - (client: trunk, server: 1.10)
   - (client: 1.10, server: trunk)

Review later, once that's working.

(I suggested 1.10 there because it's the most recent LTS version, but 
the important thing is just to start with something.)

> (2) How do we handle differences between versions?
[...]
> Is the test driver program supposed to contain knowledge of these 
> differences and prevent some of the tests from running under certain 
> combinations of client and server versions?

Annotate the tests according to what server versions they require.

This is at least partly done.  The test suite already uses conditionals like

   if svntest.main.options.server_minor_version < 9:

I'm not sure if this is already done everywhere it needs to be.  I would 
expect to see some of the Python "decorators" such as

   @SkipUnless(server_authz_has_aliases)

using "server_minor_version" but I don't see any.


> (3) How do we handle dependencies?

Initially: whatever works.  Probably neatest to install the dependencies 
for each built version of Subversion into a location dedicated to that 
version of Subversion, so they don't affect each other and don't affect 
the rest of the system.
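
For instance, roughly like this (a sketch; check the exact flag spellings
against "./configure --help"):

   # Point each version's build at dependencies installed under that
   # version's own prefix, so e.g. the APR built for one version cannot
   # bleed into another version's build or into the rest of the system.
   prefix = "/opt/svn-cross/install/1.10.6"     # per-version prefix (example)
   configure_cmd = ["./configure",
                    "--prefix=" + prefix,
                    "--with-apr=" + prefix,
                    "--with-apr-util=" + prefix]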

- Julian