Posted to dev@subversion.apache.org by Brandon Ehle <az...@yahoo.com> on 2002/11/12 23:05:02 UTC

Subversion Performance Tests

First off, does anyone know of a way to get the maximum memory used by a 
process?  I'd like to add that to these results.

Fixed my suite of performance tests and reran them, and added a couple 
more performance tests.  These tests compare ra_local against a local 
CVS repository.  The results are averages over 3 runs.  The perf test 
code is available at:
http://fishbowl.digitalbytes.net:81/svn/scripts/reposbench.py

Tests run on:
Dual 1.0GHz P3, 1GB RAM
ext3 fs on an IDE hard drive, ~36 MB/s max (hdparm -t)
Redhat 8.0
subversion-2002111212-0 daily redhat RPM (3758)

nullimport = import of empty directory
new = import
checkout = checkout

* These are all run on a wc with no changes vs the repository
localstatus = cvs -qn up vs svn st (CVS doesn't really have a 
localstatus AFAIK)
serverstatus = cvs -qn up vs svn st -u
emptyupdate = update

======================================== Results
========== nullimport
CVS new (real 0.00866667s  user 0.00266667s  sys 0.00333333s)
SVN new (real 0.048s  user 0.0193333s  sys 0.00933333s)
========== smalltextfileimport100
CVS new (real 0.119s  user 0.0106667s  sys 0.0196667s)
CVS checkout (real 0.714333s  user 0.037s  sys 0.0343333s)
CVS localstatus (real 0.019s  user 0.006s  sys 0.014s)
CVS serverstatus (real 0.019s  user 0.006s  sys 0.014s)
CVS emptyupdate (real 0.021s  user 0.014s  sys 0.008s)
SVN new (real 1.57s  user 0.310667s  sys 0.054s)
SVN checkout (real 3.311s  user 1.674s  sys 0.399s)
SVN localstatus (real 0.062s  user 0.043s  sys 0.014s)
SVN serverstatus (real 0.167s  user 0.096s  sys 0.018s)
SVN emptyupdate (real 1.144s  user 0.074s  sys 0.02s)
========== smalltextfileimport1000
CVS new (real 1.23633s  user 0.0683333s  sys 0.196s)
CVS checkout (real 1.75s  user 0.271333s  sys 0.354s)
CVS localstatus (real 0.146s  user 0.072s  sys 0.064s)
CVS serverstatus (real 0.15s  user 0.072s  sys 0.055s)
CVS emptyupdate (real 0.139s  user 0.076s  sys 0.057s)
SVN new (real 21.174s  user 10.8327s  sys 1.01333s)
SVN checkout (real 153.586s  user 134.762s  sys 16.3013s)
SVN localstatus (real 0.392s  user 0.285s  sys 0.105s)
SVN serverstatus (real 3.441s  user 3.27s  sys 0.133s)
SVN emptyupdate (real 4.28s  user 3.186s  sys 0.047s)
========== smallbinaryfileimport100
CVS new (real 1.18233s  user 0.214667s  sys 0.86s)
CVS checkout (real 6.742s  user 3.54667s  sys 1.043s)
CVS localstatus (real 0.018s  user 0.014s  sys 0.004s)
CVS serverstatus (real 0.017s  user 0.006s  sys 0.012s)
CVS emptyupdate (real 0.018s  user 0.012s  sys 0.006s)
SVN new (real 19.3353s  user 5.974s  sys 2.462s)
SVN checkout (real 14.9313s  user 7.11133s  sys 4.379s)
SVN localstatus (real 0.057s  user 0.037s  sys 0.021s)
SVN serverstatus (real 0.863s  user 0.094s  sys 0.018s)
SVN emptyupdate (real 1.136s  user 0.082s  sys 0.012s)

As you can see, CVS is still faster by a large margin.



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by Philip Martin <ph...@codematters.co.uk>.
Brandon Ehle <az...@yahoo.com> writes:

> Fixed my suite of performance tests and reran them.  Added a couple
> more performance tests.  These tests are ra_local vs local cvs dir.
> The results are the averages over 3 runs.  The perf test code is
> available at:

[snip]

> As you can see, CVS is still faster by a large margin.

Have you done any profiling to identify the problem?  I've run
oprofile on httpd in the past, and it was internal BDB mutex functions
that took the vast majority (>90%) of the time.

-- 
Philip Martin

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by Greg Stein <gs...@lyra.org>.
On Wed, Nov 13, 2002 at 12:27:40AM +0000, Philip Martin wrote:
> Brandon Ehle <az...@yahoo.com> writes:
> > >--- reposbench.py.orig  Tue Nov 12 23:52:52 2002
> > >+++ reposbench.py       Tue Nov 12 23:51:28 2002
> > >@@ -36,7 +36,7 @@
> > >     err=p.childerr.read()
> > >     if p.poll()<>0:
> > >         print err
> > >-        assert(p.poll()==0)
> > >+        #assert(p.poll()==0)
> > >     m=re_collect.search(err)
> > >     assert(m)

As an aside, note that assert is a *statement* in Python rather than a
function. And the syntax is:

  assert expr [, expr]

(or something like that)  Specifically, the parens are superfluous:

  assert p.poll() == 0
  assert m

  etc
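As a quick illustration of the bare-statement form (poll_ok is just a
made-up name for demonstration, not anything in reposbench.py):

```python
def poll_ok(status):
    # assert is a statement; the optional second expression after the
    # comma is the failure message, so no parentheses are needed.
    assert status == 0, "child process exited with an error"
    return True

print(poll_ok(0))
```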

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by Philip Martin <ph...@codematters.co.uk>.
Brandon Ehle <az...@yahoo.com> writes:

> >--- reposbench.py.orig  Tue Nov 12 23:52:52 2002
> >+++ reposbench.py       Tue Nov 12 23:51:28 2002
> >@@ -36,7 +36,7 @@
> >     err=p.childerr.read()
> >     if p.poll()<>0:
> >         print err
> >-        assert(p.poll()==0)
> >+        #assert(p.poll()==0)
> >     m=re_collect.search(err)
> >     assert(m)
> >     return map(strToSec, m.groups())
> >
> That means a command returned an error code.  The results are probably
> not any good.

I've just realised that the problem below is what caused the problem
above :)

> >@@ -115,7 +115,7 @@
> >     def __init__(self):
> >         self.dir=os.path.join(os.getcwd(), 'svnrepo')
> >         self.wcdir=os.path.join(os.getcwd(), 'svnwc')
> >-        self.url='file://'+os.path.join(os.getcwd(), 'svn')
> >+        self.url='file://'+os.path.join(os.getcwd(), 'svnrepo')
> >     def name(self):
> >         return 'SVN'
> >     def create(self):

-- 
Philip Martin

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by Brandon Ehle <az...@yahoo.com>.
> 
>
>It doesn't run :-(
>
>With this change it runs, I don't know if this makes the results
>invalid
>
>--- reposbench.py.orig  Tue Nov 12 23:52:52 2002
>+++ reposbench.py       Tue Nov 12 23:51:28 2002
>@@ -36,7 +36,7 @@
>     err=p.childerr.read()
>     if p.poll()<>0:
>         print err
>-        assert(p.poll()==0)
>+        #assert(p.poll()==0)
>     m=re_collect.search(err)
>     assert(m)
>     return map(strToSec, m.groups())
>
That means a command returned an error code.  The results are probably 
not any good.

>@@ -115,7 +115,7 @@
>     def __init__(self):
>         self.dir=os.path.join(os.getcwd(), 'svnrepo')
>         self.wcdir=os.path.join(os.getcwd(), 'svnwc')
>-        self.url='file://'+os.path.join(os.getcwd(), 'svn')
>+        self.url='file://'+os.path.join(os.getcwd(), 'svnrepo')
>     def name(self):
>         return 'SVN'
>     def create(self):
>  
>
This one I've checked in, it was a typo on my part.

>
>On my system, running the CVS tests produces a low level of disk
>activity, much like when I compile; I suspect the CVS filesystem is
>cached in RAM and flushed periodically.  When running the SVN test
>there is intense disk activity, which I suspect is the BDB log files
>being written.  Does your test simply indicate that the OS's native
>filesystem is faster than Subversion's BDB filesystem?
>  
>
Not sure about that; I've been meaning to hook atsar into the process to 
show I/O usage.  In the meantime I just want to start tracking Subversion 
performance so that any performance enhancements will be visible.


> Have you done any profiling to identify the problem?  I've run
> oprofile on httpd in the past, and it was internal BDB mutex functions
> that took the vast majority (>90%) of the time.


I've done so in the past and added an issue in the tracker for the 
biggest problem that I've identified, but that issue makes it somewhat 
difficult to track down the rest, because everything else is so small 
compared to it.  See issue 913 in the tracker.



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by Philip Martin <ph...@codematters.co.uk>.
Brandon Ehle <az...@yahoo.com> writes:

> The results are the averages over 3 runs.  The perf test code is
> available at:
> 
> http://fishbowl.digitalbytes.net:81/svn/scripts/reposbench.py

It doesn't run :-(

With this change it runs, I don't know if this makes the results
invalid

--- reposbench.py.orig  Tue Nov 12 23:52:52 2002
+++ reposbench.py       Tue Nov 12 23:51:28 2002
@@ -36,7 +36,7 @@
     err=p.childerr.read()
     if p.poll()<>0:
         print err
-        assert(p.poll()==0)
+        #assert(p.poll()==0)
     m=re_collect.search(err)
     assert(m)
     return map(strToSec, m.groups())
@@ -115,7 +115,7 @@
     def __init__(self):
         self.dir=os.path.join(os.getcwd(), 'svnrepo')
         self.wcdir=os.path.join(os.getcwd(), 'svnwc')
-        self.url='file://'+os.path.join(os.getcwd(), 'svn')
+        self.url='file://'+os.path.join(os.getcwd(), 'svnrepo')
     def name(self):
         return 'SVN'
     def create(self):


On my system, running the CVS tests produces a low level of disk
activity, much like when I compile; I suspect the CVS filesystem is
cached in RAM and flushed periodically.  When running the SVN test
there is intense disk activity, which I suspect is the BDB log files
being written.  Does your test simply indicate that the OS's native
filesystem is faster than Subversion's BDB filesystem?

-- 
Philip Martin

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Subversion Performance Tests

Posted by kb...@gte.net.
Brandon Ehle wrote:
> First off, does anyone know of a way to get the maximum memory used by a 
> process?  I'd like to add that to these results.
...
> Tests run on:
> Redhat 8.0

If you know where the process ends, add code like:

system("ps auxww > /tmp/ps.out");

just before it exits, and look at the VSZ and RSS columns for your process 
in the ps.out file.  If you don't know where the code ends, create a 
function that does the above and, at the top of main(), pass a pointer to 
that "ps function" to atexit().  That's the quickest way I can think of.
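Since reposbench.py is Python, the same atexit trick could be sketched
like this (dump_ps is just an illustrative name):

```python
import atexit
import os

def dump_ps():
    # Snapshot the process table just before exit; afterwards look up
    # the VSZ and RSS columns for this PID in /tmp/ps.out.
    os.system("ps auxww > /tmp/ps.out")

# Registered handlers run when the interpreter exits normally.
atexit.register(dump_ps)
```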

The only other way I can quickly think of is to open the right file in 
/proc, but that takes root privileges (not impossible, just not desirable. 
:-)  But that is where ps gets its data from, so you might as well just 
use ps.

There's a function called getrusage() that might work for you, but I've never 
used it, so I don't really know how helpful it will be (but the man page makes 
it look good).
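For what it's worth, Python wraps getrusage() in its standard resource
module, so the benchmark script could record peak memory directly.  A
minimal sketch (note that on Linux, ru_maxrss is reported in kilobytes):

```python
import resource

# Peak resident set size of the current process so far.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak > 0)
```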

HTH,
Kevin


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org