Posted to dev@subversion.apache.org by Lee Burgess <le...@red-bean.com> on 2001/03/07 18:57:34 UTC

subversion client test suite

So I spent some time Tuesday talking to Ben and Karl about the client
test suite.  Basically what is needed is something that is fully
automated rather than partially automated.

What we have is two shell scripts that invoke the client for various
operations (checkout, checkin, update, etc.).  This is satisfactory in
that the return value for each invocation is checked.  That is, as
long as each client operation does not fail, the test passes.  This is
also somewhat useful because some tests are dependent on the success
of previous tests.

The test suite is not satisfactory in at least two ways:

* The real, qualitative result of each client operation is not
  checked; only the return value of the client process is checked and
  we take for granted that things are really working like we expect
  them to.  If I do a checkout, a checkin and then update, I want to
  know positively that the checkin generated a delta with the correct
  information; likewise, the update should be verified to have
  correctly updated my "working copy".

* Each test should have the exact same result whether it is run alone
  or as part of the suite; tests should not be dependent on other
  tests.  Like, I should not have to run test 1 and test 3 to get the
  necessary state for test 5 to succeed.  Tests 1, 3 and 5 should
  really call the same functions from a library (see the sketch below).
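
To make that concrete, here is a minimal sketch in Python.  All the
names here (run_client, fresh_working_copy, the file `iota') are
invented for illustration; this is a sketch of the idea, not a design:

import os
import subprocess
import tempfile

def run_client(*args):
    """Run the svn client; return (exit status, captured output)."""
    result = subprocess.run(["svn", *args], capture_output=True, text=True)
    return result.returncode, result.stdout

def fresh_working_copy(repo_url):
    """Give each test its own pristine working copy in a temp dir."""
    wc = tempfile.mkdtemp()
    status, _ = run_client("checkout", repo_url, wc)
    assert status == 0, "checkout failed"
    return wc

def test_update(repo_url):
    # Builds all of its own state; no dependence on other tests.
    wc = fresh_working_copy(repo_url)
    status, _ = run_client("update", wc)
    assert status == 0, "update failed"
    # Verify the qualitative result, not just the exit status.
    assert os.path.exists(os.path.join(wc, "iota")), "expected file missing"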

Now, I have volunteered to take on the task of cleaning up the client
test suite.  As mentioned, the tests are run by two shell scripts.
Note that the client test suite does not utilize the test framework
used by the rest of Subversion.  Since the client is a separate
program, it needs to be tested on a per-invocation basis.  And the
real testing is actually looking at the files affected by that
invocation to make sure they were modified in the correct manner.

I think this could be done in Bash, but it might be more trouble than
it's worth (harder to maintain and extend?).  This could also be done
in C, but why bother?  I think there are better tools than C for
file/string testing, grepping and manipulation, especially for this
particular task.  You all see where I am going with this.

The way I see it, I have two choices: Perl or Python.  I am more
fluent in Perl, but I like Python more.  I would just as soon use
Python, but I wanted to put it to the list before acting. So I am
looking for *constructive* feedback regarding what other people think
is the Right Tool For This Job.

By "constructive feedback", I mean I am asking for concise, objective
statements about how the use of Perl or Python will better serve
Subversion.  I am NOT looking for Perl or Python advocacy.  I already
know both languages to the extent that I know what each does and what
each does better or worse than the other.  Anything any of you can say
about either language, we have all heard already.  So please, save the
evangelization for some other time and place.

A couple of issues to think about are portability and maintenance.
Portability is probably paramount.  A close second is the combination
of clarity, maintainability and the ability to easily extend what will
be written.  Most of that is basically good programming practice and
largely independent of language.  This new test suite is very
important, so let's not make typical assumptions about one language or
the other.  

Do not think of this as some sort of competition between Perl and
Python.  What we want is what wins for Subversion.  If you think that
is Bash or C or something else, then say so and back it up.

Thank you.

-- 
Lee P. W. Burgess  <<!>>  Manipulate eternity. Power is a symphony:
Programmer         <<!>>  elaborate, enormous, essential.
Red Bean Software  <<!>>  Dream the moment with a fiddle in summer 
lefty@red-bean.com <<!>>  and a knife in winter.

Re: subversion client test suite

Posted by Karl Fogel <kf...@galois.collab.net>.
Greg Stein <gs...@lyra.org> writes:
> I would like to see the same test suite / mechanism applied to both the
> client *and* the libraries. Using two mechanisms feels like a non-starter.
> Certainly, I could see setting up a suite for one part, and migrating the
> other over time. But long term? I'd think "one".

The trouble is that some tests test an executable (say, the client
binary), and others test C interfaces.  Trying to use one mechanism
for both may be very difficult.

> Please explain this one. I don't see an issue with a bash script that runs
> the client in interesting ways, then compares the resulting output against a
> snapshot/template of the "correct" output. What more is there?
> 
> diff, diff -r, and/or cmp can be used to compare output. sed can be used to
> replace (changing) timestamps with a fixed value before comparison. etc
> 
> You obviously have some kind of function in mind that /bin/sh and some other
> tools can't handle. What were you thinking of?
> 
> Note that I would also suggest "awk" for another card in your test suite
> deck. Awk is probably more common than Perl/Python/Tcl. I also know there
> are handy prebuilt awk.exe files floating around (we use it in Apache 2.0
> httpd for some stuff on Windows installation).

I agree that Bourne shell is functionally fine, and that a good test
suite could be written in it.  The problem with Bourne is portability
to non-Unix systems, not the shell language itself.

> > A couple of issues to think about are portability and maintenance.
> > Portability is probably paramount.
> 
> Shell tools, Perl, and Python are all quite portable. The question that came
> up over the past day is "availability" :-) All are quite available for "all"
> platforms, even though they may not be installed by default. I agree with
> the general sentiment of "it is optional end-user functionality, so we don't
> have to work overly hard to make it available for people." In that sense, a
> scripting language is fine with me, but I think that may be overkill
> relative to shell tools.
> 
> Note that if you're talking about testing a separate executable, it is
> actually quite a bit easier / clearer to use /bin/sh rather than P*. For
> example:

If a compatible Bourne shell is available for Windows and Mac systems,
without requiring Cygwin, then I think we should stick with Bourne.
However, I wasn't aware of such availability... (?)

-K

Re: subversion client test suite

Posted by Lee Burgess <le...@red-bean.com>.
+1, Ben.

Ben Collins-Sussman writes:
 > Greg Stein <gs...@lyra.org> writes:
 >  
 > > I would like to see the same test suite / mechanism applied to both the
 > > client *and* the libraries. Using two mechanisms feels like a non-starter.
 > 
 > The problem, Greg, is that our current C test framework is ideal for
 > testing *library* routines, which is exactly how we're using it. 
 > 
 > The client test suite needs to test the svn binary from the
 > *outside*... hence our need for a testing system written in an
 > interpreted scripting language.
 > 
 > > Certainly, I could see setting up a suite for one part, and migrating the
 > > other over time. But long term? I'd think "one".
 > 
 > Migrating the other?  This means either using C to make system() calls
 > to svn and then grepping through the SVN/ dirs (yuck), or it means
 > using a scripting language to test internal library routines. (The
 > latter would mean tossing the perfectly good C framework we have and
 > then writing script wrappers around each library.  What a crazy waste
 > of work!)
 > 
 > I'm really not following you here.  C is ideal for internal testing,
 > and scripts are ideal for external testing.  Why mix apples and
 > oranges?
 > 
 >  
 > > Please explain this one. I don't see an issue with a bash script that runs
 > > the client in interesting ways, then compares the resulting output against a
 > > snapshot/template of the "correct" output. What more is there?
 > > 
 > > diff, diff -r, and/or cmp can be used to compare output. sed can be used to
 > > replace (changing) timestamps with a fixed value before comparison. etc
 > > 
 > > You obviously have some kind of function in mind that /bin/sh and some other
 > > tools can't handle. What were you thinking of?
 > 
 > Perl (or Python) is just a happier, more friendly, more integrated
 > environment for doing all the things that {sh, diff, cmp, awk, sed,
 > ..} do.  
 > 
 > Here's an example:
 > 
 >   After telling svn to commit to XML, and then telling svn to update a
 > -different- working copy from that same XML file, we should have two
 > identical working copies.  Of course, we need to do a lot of work to
 > verify that; we'd want to read each `entries' file into a hash and then
 > compare hashes.  Isn't Perl or Python better suited for this than
 > bash?
 > 

-- 
Lee P. W. Burgess  <<!>>  Manipulate eternity. Power is a symphony:
Programmer         <<!>>  elaborate, enormous, essential.
Red Bean Software  <<!>>  Dream the moment with a fiddle in summer 
lefty@red-bean.com <<!>>  and a knife in winter.

Re: subversion client test suite

Posted by Ben Collins-Sussman <su...@newton.ch.collab.net>.
Greg Stein <gs...@lyra.org> writes:
 
> I would like to see the same test suite / mechanism applied to both the
> client *and* the libraries. Using two mechanisms feels like a non-starter.

The problem, Greg, is that our current C test framework is ideal for
testing *library* routines, which is exactly how we're using it. 

The client test suite needs to test the svn binary from the
*outside*... hence our need for a testing system written in an
interpreted scripting language.

> Certainly, I could see setting up a suite for one part, and migrating the
> other over time. But long term? I'd think "one".

Migrating the other?  This means either using C to make system() calls
to svn and then grepping through the SVN/ dirs (yuck), or it means
using a scripting language to test internal library routines. (The
latter would mean tossing the perfectly good C framework we have and
then writing script wrappers around each library.  What a crazy waste
of work!)

I'm really not following you here.  C is ideal for internal testing,
and scripts are ideal for external testing.  Why mix apples and
oranges?

 
> Please explain this one. I don't see an issue with a bash script that runs
> the client in interesting ways, then compares the resulting output against a
> snapshot/template of the "correct" output. What more is there?
> 
> diff, diff -r, and/or cmp can be used to compare output. sed can be used to
> replace (changing) timestamps with a fixed value before comparison. etc
> 
> You obviously have some kind of function in mind that /bin/sh and some other
> tools can't handle. What were you thinking of?

Perl (or Python) is just a happier, more friendly, more integrated
environment for doing all the things that {sh, diff, cmp, awk, sed,
..} do.  

Here's an example:

  After telling svn to commit to XML, and then telling svn to update a
-different- working copy from that same XML file, we should have two
identical working copies.  Of course, we need to do a lot of work to
verify that; we'd want to read each `entries' file into a hash and then
compare hashes.  Isn't Perl or Python better suited for this than
bash?
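
As a rough sketch in Python (the `entries' format here is invented
just for illustration; assume one "name revision" pair per line):

def read_entries(path):
    # Parse an entries file into a dict of name -> revision.
    entries = {}
    for line in open(path):
        fields = line.split()
        if len(fields) == 2:
            name, revision = fields
            entries[name] = revision
    return entries

# The two working copies match if their entries hashes compare equal.
same = read_entries("wc1/SVN/entries") == read_entries("wc2/SVN/entries")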

Re: subversion client test suite

Posted by Greg Stein <gs...@lyra.org>.
On Wed, Mar 07, 2001 at 12:57:34PM -0600, Lee Burgess wrote:
> 
> So I spent some time Tuesday talking to Ben and Karl about the client
> test suite.  Basically what is needed is something that is fully
> automated rather than partially automated.

I would like to see the same test suite / mechanism applied to both the
client *and* the libraries. Using two mechanisms feels like a non-starter.
Certainly, I could see setting up a suite for one part, and migrating the
other over time. But long term? I'd think "one".

>...
> I think this could be done in Bash, but it might be more trouble than
> it's worth (harder to maintain and extend?).  

Please explain this one. I don't see an issue with a bash script that runs
the client in interesting ways, then compares the resulting output against a
snapshot/template of the "correct" output. What more is there?

diff, diff -r, and/or cmp can be used to compare output. sed can be used to
replace (changing) timestamps with a fixed value before comparison. etc
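
To illustrate that normalize-then-compare step in script form (the
timestamp pattern and file names are invented for the sketch):

import re

def normalize(text):
    # Rewrite variable timestamps to a fixed token before comparing.
    return re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "TIMESTAMP", text)

def matches_template(output_file, template_file):
    return (normalize(open(output_file).read())
            == normalize(open(template_file).read()))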

You obviously have some kind of function in mind that /bin/sh and some other
tools can't handle. What were you thinking of?

Note that I would also suggest "awk" for another card in your test suite
deck. Awk is probably more common than Perl/Python/Tcl. I also know there
are handy prebuilt awk.exe files floating around (we use it in Apache 2.0
httpd for some stuff on Windows installation).

>...
> The way I see it, I have two choices: Perl or Python.  I am more
> fluent in Perl, but I like Python more.  I would just as soon use
> Python, but I wanted to put it to the list before acting. So I am
> looking for *constructive* feedback regarding what other people think
> is the Right Tool For This Job.

Both can do the task; I'd say Python is more approachable for building and
maintaining test cases. Do I have quantitative/objective reasons for stating
that? Not really. A Perl expert is going to find heaps of Perl code quite
approachable. I'd just believe that you do need a (medium?) experienced
Perl person to do these right, whereas Python is probably not so demanding.
But I have nothing concrete to refer to or back that up other than long
experience.

In any case, I'd rather explore /bin/sh, sed, diff, and awk before pulling
out full-on scripting languages.

There is also something to be said for looking at other possible test
suites, rather than rolling our own. I have no references for this, though,
as I'm not hardcore about test suites (so I've never bothered to look).

>...
> A couple of issues to think about are portability and maintenance.
> Portability is probably paramount.

Shell tools, Perl, and Python are all quite portable. The question that came
up over the past day is "availability" :-) All are quite available for "all"
platforms, even though they may not be installed by default. I agree with
the general sentiment of "it is optional end-user functionality, so we don't
have to work overly hard to make it available for people." In that sense, a
scripting language is fine with me, but I think that may be overkill
relative to shell tools.

Note that if you're talking about testing a separate executable, it is
actually quite a bit easier / clearer to use /bin/sh rather than P*. For
example:

--- test-foo.sh
./client blah blah > output

--- test-foo.py
os.system("./client blah blah > output")

(dunno the Perl, but it won't be as clear as .sh)

Oh, I'm sure you can build up nifty utility functions and whatnot, but that
is just hiding the basic issue of inherent clarity.

> A close second is the combination
> of clarity, maintainability and the ability to easily extend what will
> be written.  Most of that is basically good programming practice and
> largely independent of language.

Agreed in general, although I'd tend to (subjectively) disagree with the
"largely" charactization. I definitely think that the .sh example above is
clearer than the .py example, and then my earlier comments about Perl vs
Python.

>...
> Do not think of this as some sort of competition between Perl and
> Python.  What we want is what wins for Subversion.  If you think that
> is Bash or C or something else, then say so and back it up.

Sure :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

Re: subversion client test suite

Posted by Lee Burgess <le...@red-bean.com>.
Greg, well said and duly noted.  Exactly the kind of feedback I am
looking for.

Greg Hudson writes:
 > > I think this could be done in Bash, but it might be more trouble than
 > > it's worth (harder to maintain and extend?).
 > [...]
 > > The way I see it, I have two choices: Perl or Python.
 > 
 > As a site integrator, I find it obnoxious when packages rely on perl
 > for any part of the build or regression test procedure.  (Regression
 > tests are valuable to site integrators as well as developers.)  Python
 > would be much much worse, since it is much less universal than perl.
 > 
 > So, I strongly advocate sticking to bourne shell and C and make, as we
 > do for the rest of the build system.  This has nothing to do with my
 > like or dislike of perl and python as languages.

-- 
Lee P. W. Burgess  <<!>>  Manipulate eternity. Power is a symphony:
Programmer         <<!>>  elaborate, enormous, essential.
Red Bean Software  <<!>>  Dream the moment with a fiddle in summer 
lefty@red-bean.com <<!>>  and a knife in winter.

Re: subversion client test suite

Posted by Mo DeJong <md...@cygnus.com>.
Lee Burgess Wrote:

> So I spent some time Tuesday talking to Ben and Karl about the client
> test suite.  Basically what is needed is something that is fully
> automated rather than partially automated.
>
> The test suite is not satisfactory in at least two ways:
>
> * The real, qualitative result of each client operation is not
>  checked;

...

> The way I see it, I have two choices: Perl or Python.  I am more
> fluent in Perl, but I like Python more.


Daniel Stenberg Wrote:

> Is portability an issue? I mean, are there any plans of ever bringing this to
> something like windows and is that then an issue when selecting language?


Greg Hudson <gh...@MIT.EDU>:

> As a site integrator, I find it obnoxious when packages rely on perl
> for any part of the build or regression test procedure.


Hi all.

It looks like keeping this thread from degenerating into
an all-out language advocacy war is going to be very
interesting.

About 8 months ago I sat down and started writing
a regression test system that does much of what
you describe here. My focus was on testing Java
compilers, but the mechanics are basically the
same as what subversion needs. A high level
overview of the results of that work can be
found here:

http://www-106.ibm.com/developerworks/library/l-jacks/?dwzone=linux

Since the Jacks project was begun, a large number
of tests have been added. There are currently
about 1600 individual tests. As the number of test
cases increased, there were a number of
"scalability" issues that came up.

I found that solving these problems as they
came up was made significantly easier because
the test suite was implemented in Tcl
(vs a compiled language like C, Java, ...)

Now, at this point the more reactionary elements
will be thinking, "Noooo! we can't use Tcl,
both RMS and ESR said it was bad!" I am
going to try to avoid emotional issues
and focus on the actual problems and how
solving them with a scripting language
like Tcl was the right solution for this
problem set.

First the "scalability issue". When you have
a small number of tests, the mechanics of
how the test runs, how you examine results,
and how new tests are integrated into the
suite are not that critical. When you
have to deal with a couple of hundred test
cases, the mechanics become really important.

Let me provide one quick example of a
"scalability" problem and how it
was solved in the Jacks regression test
suite.

A Jacks test involves sending some known
input to a compiler and then checking
for the return status of the compiler
and possibly the output (a .class file).

Early on, we were saving the test case
in a .java file and then writing a test
case that would compile that given
.java file and check the result.

(assume that One.java is on the filesystem)

test example-1 { compile One.java } {
    compile One.java
} PASS

Looks simple, right? What kind of
"scalability problem" could this test have?
Well, there are quite a few, but let's just
focus on the actual input to the test case
for right now.

The One.java file needs to exist on the
filesystem for this test to work. That
means you as the developer need to keep track
of One.java. Of course, you need to create
it, then you need to add it to CVS, possibly
modify a ChangeLog, and so on. Not too hard for
1 file, but it becomes a big deal when you want
to add 50 new tests. You also run into an ugly
"lookup" problem here. The mapping from test
"example-1" to source code "One.java" exists
only in the test case. When it comes time to review
test cases for correctness or diagnose a failure, you
end up with a bunch of files open in an editor.
Believe me, it is quite a pain and can be very
error prone.

Things get a lot easier if you combine the
test input and expected output. In this
example, the source code is saved and then
compiled by the test case:

test example-1 { compile example } {
    saveas One.java {
class One {}
    }

    compile One.java
} PASS


That one change means you no longer have to deal
with another file that stores the test input.
The next step is to store more than one test case
in the same file; it seems simple but it is
quite important that the system provide the
ability to do this. I cannot say this strongly
enough: a system that depends on a 1 test
case to 1 file mapping is doomed! Tests
need to be grouped by function. When regressions
show up during development (and they will),
simply knowing the general location of the
set of tests that are having problems can be
half the battle.




Now let's talk about the really hard problem.
The regression test system needs to provide
consistent results and the results need to
be interpreted automatically. The expected
test results must not change from one run
to the next. This is critical since we
need the test system to examine results
and inform us when there is a problem.

Let's be honest here: nobody likes to
be blamed for adding a bug to the system.
Developers like to fix bugs, not add them.
Most of the time, bugs are added accidentally.
Changes in one part of the code broke
something else in another part of the
code and the developer did not know
about it. This is one of the main
things the test system needs to
help us avoid. To do that, a developer
really needs to be able to press a button
and then wait for the system to tell him
if there were any regressions. The
system must not require anything of
the developer at this stage. If the
developer needs to examine test results
and compare them to previous runs,
we are just asking for trouble. Yes,
someone could do all this by hand,
but they could also just bust out an
abacus and avoid the middleman!


Here is a quick example of the kind
of test results and logging provided
by the Jacks test suite. This snip
is from the logging/changes file
that is automatically generated
after a full test run.

2001-02-17 {Passed {1448 1450} Failed {160 158}} {
15.18.1-2 {FAILED PASSED}
15.18.1-3 {FAILED PASSED}
15.18.1-7 {FAILED PASSED}
15.28-null-4 {PASSED FAILED}
}

This shows that on 2001-02-17, bugs
that caused 3 test cases to fail were
fixed. In the process, an unrelated
test case regressed.
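
The comparison behind that report is mechanical.  A sketch in Python
(Jacks itself is Tcl; this just shows the idea, with the result data
taken from the snippet above and the function name invented):

def report_changes(old, new):
    # Map each test whose status changed to its (old, new) statuses.
    return {name: (old[name], new[name])
            for name in old
            if name in new and old[name] != new[name]}

old_run = {"15.18.1-2": "FAILED", "15.28-null-4": "PASSED"}
new_run = {"15.18.1-2": "PASSED", "15.28-null-4": "FAILED"}
print(report_changes(old_run, new_run))
# {'15.18.1-2': ('FAILED', 'PASSED'), '15.28-null-4': ('PASSED', 'FAILED')}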

When the system provides you concrete
data like this, it actually becomes
hard to break something without
noticing. As the number of test cases
increases, the chances of accidentally
breaking something also decrease.

This also greatly simplifies porting
to a new system. Obviously, there
are some details I am glossing
over here. I have not really talked
about the ability to restart tests
that have crashed or about restarting
the test suite itself after a crash.
I have also avoided the issue of
in-process testing vs an exec
of a subprocess (this is a big
deal on Mac OS since exec is not
really supported on Mac OS classic).

Before I sign off on this brain
dump, I just want to point out
a couple of other really nice
Jacks features that make actually
using the regression testing system
easy.


Earlier, I presented an example of
a test case that suffered a regression:

15.28-null-4

Looking up this test case is very
easy in Jacks since the suite
automatically generates test case
documentation from the test
cases themselves. Try it out
for yourself; go to:

http://oss.software.ibm.com/developerworks/opensource/cvs/jikes/~checkout~/jacks/docs/tests.html

The test case is located in section
15.28; scroll down to the link for
section 15.28 and click on it. You
can now scroll down the list and
click on the link for 15.28-null-4.

Neat eh?



I would like to implement this same
testing framework for subversion.
In fact, I have already started
working on it. At this point, I
have only written tests for the
svn client front end. I had
hoped to get it 95% working
before letting people try it out,
but since folks are talking about
the issue now it seemed like a
good time to mention it.


Here are a couple of quick examples
from the tests I have written for
the svn client. The check_err command
just execs svn with the given args
and returns a pair (list) containing
the exit status and the output of
the svn command.

set help_add {add (ad, new): Add new files and directories to version control.
usage: add [TARGETS]
}

test client-help-add-1 { print help text on help add } {
    check_err {svn help add}
} [list 0 $help_add]

test client-help-add-2 { print help text on add without args } {
    check_err {svn add}
} [list 1 $help_add]



I don't see any major problems incorporating
this test infrastructure into subversion.
Initially, we would want to simply exec
the existing test cases written in C
and check the expected results. Once
the initial framework is done, I
would like to work on a Tcl API
to the subversion libraries themselves.
That would make it easy to convert the
C test cases over to scripts (test
cases that do not need to be compiled
are a real benefit). It would also
be easy to make use of the Tcl API
to Berkeley DB to write tests
that examined the database directly.
(no dbdump needed)


What do you folks think? Does that
sound good? I know what needs to
be done and I am willing to do
the work. It seems like this
"language advocacy" thing is
the only thing that folks
might object to. Thing is,
I have already written and
debugged the tools needed
to implement this so I
am not really too interested
in rewriting them in another
language to make some
language advocates happy.
Don't take that the wrong
way, I am just lazy :)

Mo DeJong
Red Hat Inc

Re: subversion client test suite

Posted by Daniel Stenberg <da...@haxx.se>.
On Wed, 7 Mar 2001, Lee Burgess wrote:

>  >  I say we add some kind of very simple memory tracing functions.
>
> I would like to see what Karl and Ben think about this.  Offhand, it
> sounds to me like you are really talking about functions internal to the
> client to be called by a command line (--mem-debug or --mem-trace)
> switch?

Well, since they would probably be dependent on a special define, they don't
even have to be controlled by command line options but instead do things by
default or be controlled by environment variables. It doesn't really matter
how it is controlled. My point is the functionality and that it is only there
when SVN_RESOURCE_DEBUG is defined. Of course, we should decide once and for
all how things like this are best added (if at all).

I've personally written this kind of stuff for a different open source
library I tend to hack on, and it has helped me a lot.

> So part of the test suite I will be writing can certainly parse the
> output log files of this debug mode.  On the other hand, the actual
> tracing functions need to be coded and included in the client itself, not
> the test suite.
>
> Am I understanding you correctly, Daniel?

You certainly do.

-- 
      Daniel Stenberg - http://daniel.haxx.se - +46-705-44 31 77
   ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

Re: subversion client test suite

Posted by Lee Burgess <le...@red-bean.com>.
Daniel Stenberg writes:
 > 
 > Without being a bigot of either camp, I consider both languages quite capable
 > of the task. We've already heard both Perl and Python advocates speak up so I
 > don't think there's a lack of either kind here.
 > 
 > Is portability an issue? I mean, are there any plans of ever bringing this to
 > something like windows and is that then an issue when selecting language?

There are certainly plans to build and test the command line
client on Windows.  Portability can be an issue if the selected
language does not run natively on a given platform.  As far as I know,
both Python and Perl run natively on Windows.  I don't know about Mac.

 > I have two minor (totally unrelated to the above) ideas for the test suite:
 > 
 > 1.
 >  I say we add some kind of very simple memory tracing functions. So when we
 > compile with some weirdo debug define, all malloc/free (and other resource
 > using functions) log their activity to a file. When the client is done, a
 > script analyzes the resource usage. This is neat for measuring memory usage,
 > but most of all it helps us track memory/resource leaks at a very early stage
 > at a very low cost. We do make libraries that hopefully will be used by other
 > applications; we want them to be nice.

I would like to see what Karl and Ben think about this.  Offhand, it
sounds to me like you are really talking about functions internal to
the client to be called by a command line (--mem-debug or --mem-trace)
switch?

So part of the test suite I will be writing can certainly parse the
output log files of this debug mode.  On the other hand, the actual
tracing functions need to be coded and included in the client itself,
not the test suite.

Am I understanding you correctly, Daniel?

 > 2.
 >  An even more minor detail: we add a switch so that when running a single
 > test case, we can have the client (or server) run with gdb using the same
 > options the test case otherwise has (by generating a gdb command file that
 > sets command args). It makes it very convenient when a test case
 > fails/crashes and you wanna rerun it with a debugger.

This is outside of the scope of the client test suite, I think.  But I
don't want to discourage the idea.

Anyone?

 > Just my thoughts.
 > 
 > -- 
 >       Daniel Stenberg - http://daniel.haxx.se - +46-705-44 31 77
 >    ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
 > 

-- 
Lee P. W. Burgess  <<!>>  Manipulate eternity. Power is a symphony:
Programmer         <<!>>  elaborate, enormous, essential.
Red Bean Software  <<!>>  Dream the moment with a fiddle in summer 
lefty@red-bean.com <<!>>  and a knife in winter.

Re: subversion client test suite

Posted by Daniel Stenberg <da...@haxx.se>.
On Wed, 7 Mar 2001, Lee Burgess wrote:

> Now, I have volunteered to take on the task of cleaning up the client
> test suite.

> The way I see it, I have two choices: Perl or Python.  I am more fluent
> in Perl, but I like Python more.  I would just as soon use Python, but I
> wanted to put it to the list before acting. So I am looking for
> *constructive* feedback regarding what other people think is the Right
> Tool For This Job.

Without being a bigot of either camp, I consider both languages quite capable
of the task. We've already heard both Perl and Python advocates speak up so I
don't think there's a lack of either kind here.

Is portability an issue? I mean, are there any plans of ever bringing this to
something like windows and is that then an issue when selecting language?

I have two minor (totally unrelated to the above) ideas for the test suite:

1.
 I say we add some kind of very simple memory tracing functions. So when we
compile with some weirdo debug define, all malloc/free (and other resource
using functions) log their activity to a file. When the client is done, a
script analyzes the resource usage. This is neat for measuring memory usage,
but most of all it helps us track memory/resource leaks at a very early stage
at a very low cost. We do make libraries that hopefully will be used by other
applications; we want them to be nice.
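
The analysis script could then be a few lines of script code.  A
sketch in Python, assuming a completely made-up log format of
"malloc <addr> <size>" and "free <addr>" lines:

def find_leaks(trace_path):
    live = {}
    for line in open(trace_path):
        fields = line.split()
        if fields and fields[0] == "malloc":
            live[fields[1]] = int(fields[2])   # addr -> size
        elif fields and fields[0] == "free":
            live.pop(fields[1], None)          # freed, forget it
    return live                                # never freed: leaked

for addr, size in find_leaks("svn-mem.trace").items():
    print("leak: %d bytes at %s" % (size, addr))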

2.
 An even more minor detail: we add a switch so that when running a single
test case, we can have the client (or server) run with gdb using the same
options the test case otherwise has (by generating a gdb command file that
sets command args). It makes it very convenient when a test case
fails/crashes and you wanna rerun it with a debugger.
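
The command-file generation is tiny.  A sketch in Python (file names
and arguments invented):

def write_gdb_script(path, client_args):
    # gdb will read these commands via: gdb -x <path> ./svn
    with open(path, "w") as f:
        f.write("set args %s\n" % " ".join(client_args))
        f.write("run\n")

write_gdb_script("rerun.gdb", ["checkout", "http://some.repo/", "wc"])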

Just my thoughts.

-- 
      Daniel Stenberg - http://daniel.haxx.se - +46-705-44 31 77
   ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol