Posted to dev@subversion.apache.org by Mark Phippard <ma...@gmail.com> on 2011/03/25 18:33:25 UTC

Performance benchmarks

Hi,

I have been working on a framework for writing tests to record
performance.  I have something good enough to share:

https://ctf.open.collab.net/sf/projects/csvn

It is pretty easy to add new tests if you have ideas on more tests you
think we should add.  I think I have pretty good coverage of the major
functions.  The wiki on the site I linked to above has details on how
I have constructed the current tests.  I am going to put out a call to
users for feedback and try to get more people to run the tests and
record results.

I am not claiming these are anything definitive or even that we will
use them to help us make the release decision, but I think it is a
start on coming up with some reproducible tests that people can run
easily.  If after people look at and run the tests they think they are
useful or can be tweaked to be useful, then great.  If not, then at
least I got to write some code for a change :)

The tests are written in Java because that is what I know and it gives
me good cross platform coverage.  However, the Java just drives the
command line so all you need to do is have the svn command line in
your PATH and that is what it uses for all the work.
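Roughly, each test just shells out to the svn binary and times how long
the command takes.  A minimal sketch of that idea (simplified, not the
actual framework code; the class name and URL are made up):

    import java.io.InputStream;

    public class SvnTimer {
        /** Run an svn command (svn must be on PATH), return elapsed millis. */
        public static long timeCommand(String... cmd) throws Exception {
            long start = System.currentTimeMillis();
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true);        // merge stderr into stdout
            Process p = pb.start();
            InputStream out = p.getInputStream();
            byte[] buf = new byte[8192];
            while (out.read(buf) != -1) {
                // drain output so the child never blocks on a full pipe
            }
            int exit = p.waitFor();
            if (exit != 0) {
                throw new RuntimeException("svn exited with code " + exit);
            }
            return System.currentTimeMillis() - start;
        }

        public static void main(String[] args) throws Exception {
            long millis = timeCommand("svn", "checkout",
                    "http://example.com/svn/repo/trunk", "wc");
            System.out.println("Checkout: " + millis + " ms");
        }
    }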

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Stefan Sperling <st...@elego.de>.
On Sun, Mar 27, 2011 at 05:30:53PM -0400, Mark Phippard wrote:
> On Sun, Mar 27, 2011 at 2:54 PM, Stefan Sperling <st...@elego.de> wrote:
> 
> > I've run these tests on OpenBSD 4.9 (amd64) and got the following results:
> 
> Thanks.  I added your results to the wiki
> 
> https://ctf.open.collab.net/sf/wiki/do/viewPage/projects.csvn/wiki/HomePage

New results with current trunk, same system:

=================== TEST RESULTS ==================
SVN Version: 1.7.0-dev


Tests: Basic Tests
                Action       Time    Millis
            ----------  --------- ---------
              Checkout:  0:30.401     30401
                Update:  1:02.236     62236
                Switch:  0:02.076      2076
              Proplist:  0:01.078      1078
                Status:  0:00.212       212
                Commit:  0:02.701      2701
            svnversion:  0:00.143       143

Tests: Merge Tests
                Action       Time    Millis
            ----------  --------- ---------
             Merge-all:  0:13.008     13008
          Merge-revert:  0:03.715      3715
           Merge-synch:  0:06.428      6428
     Merge-reintegrate:  0:07.642      7642

Tests: Folder Tests
                Action       Time    Millis
            ----------  --------- ---------
             Folder-co:  6:52.888    412888
             Folder-st:  0:01.221      1221
             Folder-ci:  0:12.875     12875
             Folder-up:  0:05.152      5152
            svnversion:  0:01.698      1698

Tests: Binaries Tests
                Action       Time    Millis
            ----------  --------- ---------
                Bin-co:  3:16.865    196865
            Bin-up-r25:  0:10.058     10058
                Bin-sw:  1:50.304    110304
           Bin-cleanup:  0:00.173       173
                Bin-rm:  0:22.459     22459
                Bin-st:  0:00.413       413
            Bin-commit:  0:04.443      4443
                Bin-mv:  0:15.467     15467
             Bin-st-mv:  0:00.415       415
            Bin-commit:  0:07.940      7940
            svnversion:  0:00.414       414

===================  END RESULTS ==================
  Total execution time: 17:58.256   1078256

Results in wiki format:

Basic Tests:
| 1.7.0-dev | r1088692 | 0:30.401 | 1:02.236 | 0:02.076 | 0:01.078 | 0:00.212 | 0:02.701 | 0:00.143

Merge Tests:
| 1.7.0-dev | r1088692 | 0:13.008 | 0:03.715 | 0:06.428 | 0:07.642

Folder Tests:
| 1.7.0-dev | r1088692 | 6:52.888 | 0:01.221 | 0:12.875 | 0:05.152 | 0:01.698

Binaries Tests:
| 1.7.0-dev | r1088692 | 3:16.865 | 0:10.058 | 1:50.304 | 0:00.173 | 0:22.459 | 0:00.413 | 0:04.443 | 0:15.467 | 0:00.415 | 0:07.940 | 0:00.414

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Sun, Mar 27, 2011 at 2:54 PM, Stefan Sperling <st...@elego.de> wrote:

> I've run these tests on OpenBSD 4.9 (amd64) and got the following results:

Thanks.  I added your results to the wiki

https://ctf.open.collab.net/sf/wiki/do/viewPage/projects.csvn/wiki/HomePage

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Stefan Sperling <st...@elego.de>.
On Fri, Mar 25, 2011 at 01:33:25PM -0400, Mark Phippard wrote:
> Hi,
> 
> I have been working on a framework for writing tests to record
> performance.  I have something good enough to share:
> 
> https://ctf.open.collab.net/sf/projects/csvn
> 
> It is pretty easy to add new tests if you have ideas on more tests you
> think we should add.  I think I have pretty good coverage of the major
> functions.  The wiki on the site I linked to above has details on how
> I have constructed the current tests.  I am going to put out a call to
> users for feedback and try to get more people to run the tests and
> record results.
> 
> I am not claiming these are anything definitive or even that we will
> use them to help us make the release decision, but I think it is a
> start on coming up with some reproducible tests that people can run
> easily.  If after people look at and run the tests they think they are
> useful or can be tweaked to be useful, then great.  If not, then at
> least I got to write some code for a change :)
> 
> The tests are written in Java because that is what I know and it gives
> me good cross platform coverage.  However, the Java just drives the
> command line so all you need to do is have the svn command line in
> your PATH and that is what it uses for all the work.

I've run these tests on OpenBSD 4.9 (amd64) and got the following results:

=================== TEST RESULTS ==================
SVN Version: 1.6.17-dev


Tests: Basic Tests
                Action       Time    Millis
            ----------  --------- ---------
              Checkout:  0:21.232     21232
                Update:  1:07.533     67533
                Switch:  0:16.122     16122
              Proplist:  0:00.226       226
                Status:  0:00.174       174
                Commit:  0:05.883      5883
            svnversion:  0:00.104       104

Tests: Merge Tests
                Action       Time    Millis
            ----------  --------- ---------
             Merge-all:  0:27.993     27993
          Merge-revert:  0:12.054     12054
           Merge-synch:  0:15.082     15082
     Merge-reintegrate:  0:13.653     13653

Tests: Folder Tests
                Action       Time    Millis
            ----------  --------- ---------
             Folder-co: 10:02.645    602645
             Folder-st:  0:00.872       872
             Folder-ci:  1:16.709     76709
             Folder-up:  3:26.989    206989
            svnversion:  0:00.768       768

Tests: Binaries Tests
                Action       Time    Millis
            ----------  --------- ---------
                Bin-co:  4:09.777    249777
            Bin-up-r25:  2:49.036    169036
                Bin-sw:  3:46.945    226945
           Bin-cleanup:  0:01.069      1069
                Bin-rm:  0:03.232      3232
                Bin-st:  0:00.270       270
            Bin-commit:  0:07.356      7356
                Bin-mv:  0:05.662      5662
             Bin-st-mv:  0:00.275       275
            Bin-commit:  0:03.844      3844
            svnversion:  0:00.166       166

===================  END RESULTS ==================
  Total execution time: 34:44.375   2084375

Results in wiki format:

Basic Tests:
| 1.6.17-dev | r1085946 | 0:21.232 | 1:07.533 | 0:16.122 | 0:00.226 | 0:00.174 | 0:05.883 | 0:00.104

Merge Tests:
| 1.6.17-dev | r1085946 | 0:27.993 | 0:12.054 | 0:15.082 | 0:13.653

Folder Tests:
| 1.6.17-dev | r1085946 | 10:02.645 | 0:00.872 | 1:16.709 | 3:26.989 | 0:00.768

Binaries Tests:
| 1.6.17-dev | r1085946 | 4:09.777 | 2:49.036 | 3:46.945 | 0:01.069 | 0:03.232 | 0:00.270 | 0:07.356 | 0:05.662 | 0:00.275 | 0:03.844 | 0:00.166


=================== TEST RESULTS ==================
SVN Version: 1.7.0-dev


Tests: Basic Tests
                Action       Time    Millis
            ----------  --------- ---------
              Checkout:  0:36.024     36024
                Update:  1:01.197     61197
                Switch:  0:02.004      2004
              Proplist:  0:00.198       198
                Status:  0:00.215       215
                Commit:  0:01.358      1358
            svnversion:  0:00.208       208

Tests: Merge Tests
                Action       Time    Millis
            ----------  --------- ---------
             Merge-all:  0:05.227      5227
          Merge-revert:  0:05.827      5827
           Merge-synch:  0:05.393      5393
     Merge-reintegrate:  0:11.169     11169

Tests: Folder Tests
                Action       Time    Millis
            ----------  --------- ---------
             Folder-co:  7:19.267    439267
             Folder-st:  0:01.209      1209
             Folder-ci:  0:10.676     10676
             Folder-up:  0:04.626      4626
            svnversion:  0:01.705      1705

Tests: Binaries Tests
                Action       Time    Millis
            ----------  --------- ---------
                Bin-co:  3:15.021    195021
            Bin-up-r25:  0:09.250      9250
                Bin-sw:  1:35.125     95125
           Bin-cleanup:  0:00.181       181
                Bin-rm:  0:42.754     42754
                Bin-st:  0:00.422       422
            Bin-commit:  0:04.432      4432
                Bin-mv:  0:12.726     12726
             Bin-st-mv:  0:00.415       415
            Bin-commit:  0:08.612      8612
            svnversion:  0:00.419       419

===================  END RESULTS ==================
  Total execution time: 18:38.683   1118683

Results in wiki format:

Basic Tests:
| 1.7.0-dev | r1085943 | 0:36.024 | 1:01.197 | 0:02.004 | 0:00.198 | 0:00.215 | 0:01.358 | 0:00.208

Merge Tests:
| 1.7.0-dev | r1085943 | 0:05.227 | 0:05.827 | 0:05.393 | 0:11.169

Folder Tests:
| 1.7.0-dev | r1085943 | 7:19.267 | 0:01.209 | 0:10.676 | 0:04.626 | 0:01.705

Binaries Tests:
| 1.7.0-dev | r1085943 | 3:15.021 | 0:09.250 | 1:35.125 | 0:00.181 | 0:42.754 | 0:00.422 | 0:04.432 | 0:12.726 | 0:00.415 | 0:08.612 | 0:00.419

Re: Performance benchmarks

Posted by Daniel Shahaf <d....@daniel.shahaf.name>.
Mark Phippard wrote on Mon, Mar 28, 2011 at 13:08:05 -0400:
> On Mon, Mar 28, 2011 at 11:28 AM, Arwin Arni <ar...@collab.net> wrote:
> 
> > I'm running Ubuntu 10.04 on an Intel Pentium 4 CPU 2.26GHz with 2GiB of RAM.
> >
> > Here are the benchmark results for svn 1.6.6 (provided by Canonical for my
> > OS) and svn trunk (r1086245).
> >
> > Trunk is taking nearly twice as long as 1.6.6... Am I doing something
> > wrong... is it because of enable-maintainer-mode...
> 
> Thanks, I added your results to the wiki.  AFAIK,
> enable-maintainer-mode does not impact performance.

Inaccurate: we do have a few places where debug checks are enabled only
in maintainer mode.  (For example, enabling the FOREIGN_KEYS pragma and
a stat in the pristines code.)

Oh, and compile-time optimizations are disabled by default in maintainer mode.
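
Purely to illustrate the pragma (a throwaway JDBC sketch assuming the
Xerial sqlite-jdbc driver; this is not SVN code): with FOREIGN_KEYS on,
SQLite validates referential integrity on every write, which is not free.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PragmaDemo {
        public static void main(String[] args) throws Exception {
            Class.forName("org.sqlite.JDBC");   // sqlite-jdbc driver
            Connection c = DriverManager.getConnection("jdbc:sqlite::memory:");
            Statement s = c.createStatement();
            s.execute("PRAGMA foreign_keys = ON");  // what maintainer mode enables
            s.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)");
            s.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))");
            s.execute("INSERT INTO parent VALUES (1)");
            s.execute("INSERT INTO child VALUES (1)");  // checked against parent
            // s.execute("INSERT INTO child VALUES (2)") would now fail
            c.close();
        }
    }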

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Mon, Mar 28, 2011 at 11:28 AM, Arwin Arni <ar...@collab.net> wrote:

> I'm running Ubuntu 10.04 on an Intel Pentium 4 CPU 2.26GHz with 2GiB of RAM.
>
> Here are the benchmark results for svn 1.6.6 (provided by Canonical for my
> OS) and svn trunk (r1086245).
>
> Trunk is taking nearly twice as long as 1.6.6... Am I doing something
> wrong... is it because of enable-maintainer-mode...

Thanks, I added your results to the wiki.  AFAIK,
enable-maintainer-mode does not impact performance.  Your results were
somewhat similar to what Mike Pilato posted for Ubuntu.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Arwin Arni <ar...@collab.net>.
Hi Mark,

I'm running Ubuntu 10.04 on an Intel Pentium 4 CPU 2.26GHz with 2GiB of RAM.

Here are the benchmark results for svn 1.6.6 (provided by Canonical for 
my OS) and svn trunk (r1086245).

Trunk is taking nearly twice as long as 1.6.6... Am I doing something 
wrong... is it because of enable-maintainer-mode...

Regards,
Arwin Arni

Re: Performance benchmarks

Posted by vijay <vi...@collab.net>.
Hi,

I have run these tests on Ubuntu 10.10 with svn 1.6.12 and svn 
1.7.0-dev (r1086476).

Repository access: file://

The results are attached.

Thanks & Regards,
Vijayaguru


On Friday 25 March 2011 11:03 PM, Mark Phippard wrote:
> Hi,
>
> I have been working on a framework for writing tests to record
> performance.  I have something good enough to share:
>
> https://ctf.open.collab.net/sf/projects/csvn
>
> It is pretty easy to add new tests if you have ideas on more tests you
> think we should add.  I think I have pretty good coverage of the major
> functions.  The wiki on the site I linked to above has details on how
> I have constructed the current tests.  I am going to put out a call to
> users for feedback and try to get more people to run the tests and
> record results.
>
> I am not claiming these are anything definitive or even that we will
> use them to help us make the release decision, but I think it is a
> start on coming up with some reproducible tests that people can run
> easily.  If after people look at and run the tests they think they are
> useful or can be tweaked to be useful, then great.  If not, then at
> least I got to write some code for a change :)
>
> The tests are written in Java because that is what I know and it gives
> me good cross platform coverage.  However, the Java just drives the
> command line so all you need to do is have the svn command line in
> your PATH and that is what it uses for all the work.
>


Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Mon, Mar 28, 2011 at 1:12 PM, Hyrum K Wright <hy...@hyrumwright.org> wrote:

> Very cool to see something which will hopefully give us some
> quantitative measure of performance.
>
> I've seen people submit reports based on particular revisions.  Would
> it be possible to run the same suite of tools across a number of
> different revisions to give us some sense of change over time?  It'd
> be nice to know if we're getting better or worse, or how particular
> changes impacted performance, etc.
>
> Just a thought.

Not sure if you have looked at the wiki but I have been posting that
info with the stats for that reason.  I figured many of the same
people would just post new results for 1.7 over time and we would
start seeing a pattern of improvement.

I did not go back to some of the older 1.7 revisions to show some of
the recent performance work, but that is something I have considered.

I am currently running the tests against an HTTP server and switching
the server between 1.6 and 1.7.  I will post the results when all the
combinations (serf/neon, 1.6/1.7) are done.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Hyrum K Wright <hy...@hyrumwright.org>.
On Fri, Mar 25, 2011 at 12:33 PM, Mark Phippard <ma...@gmail.com> wrote:
> Hi,
>
> I have been working on a framework for writing tests to record
> performance.  I have something good enough to share:
>
> https://ctf.open.collab.net/sf/projects/csvn
>
> It is pretty easy to add new tests if you have ideas on more tests you
> think we should add.  I think I have pretty good coverage of the major
> functions.  The wiki on the site I linked to above has details on how
> I have constructed the current tests.  I am going to put out a call to
> users for feedback and try to get more people to run the tests and
> record results.
>
> I am not claiming these are anything definitive or even that we will
> use them to help us make the release decision, but I think it is a
> start on coming up with some reproducible tests that people can run
> easily.  If after people look at and run the tests they think they are
> useful or can be tweaked to be useful, then great.  If not, then at
> least I got to write some code for a change :)
>
> The tests are written in Java because that is what I know and it gives
> me good cross platform coverage.  However, the Java just drives the
> command line so all you need to do is have the svn command line in
> your PATH and that is what it uses for all the work.

Very cool to see something which will hopefully give us some
quantitative measure of performance.

I've seen people submit reports based on particular revisions.  Would
it be possible to run the same suite of tools across a number of
different revisions to give us some sense of change over time?  It'd
be nice to know if we're getting better or worse, or how particular
changes impacted performance, etc.

Just a thought.

-Hyrum

Re: Performance benchmarks

Posted by Johan Corveleyn <jc...@gmail.com>.
Forgot to add: this was with the repository served via svnserve on localhost.

Johan

On Tue, Mar 29, 2011 at 12:16 AM, Johan Corveleyn <jc...@gmail.com> wrote:
> Hi Mark,
>
> Here is another data point, for my old (t)rusty Windows XP (32-bit)
> this time, on a system with a pretty slow hard disk (5.4k rpm), 1.83
> GHz Intel T2400 cpu, 3 GB RAM.
>
> I must say the results look very good for 1.7 (r1086021) compared to
> 1.6.16 on this system. Especially for the "Folder tests" and the
> "Binaries tests" (rather: you can see how much it hurts sometimes to
> use 1.6.x on this kind of system, when working with working copies
> with lots of folders). Apart from "Bin-commit" and "Bin-rm", I can't
> see a significant performance degradation within your test-suite on my
> system.
>
> Both test runs were run with antivirus enabled (AVG free). I also did
> a run of 1.7 with AV disabled, but it made no significant difference.
> I rebooted between each test run just to be sure about disk caching
> effects.
>
> Oh, and I made my 1.7 build with "Release" configuration. At least on
> Windows, I know there is a significant performance difference between
> "Debug" builds and "Release" builds (that became clear during my
> perf-work on "svn diff"). Best to compare release builds with release
> builds, I think...
>
> Cheers,
> --
> Johan
>

Re: Performance benchmarks

Posted by Johan Corveleyn <jc...@gmail.com>.
Hi Mark,

Here is another data point, for my old (t)rusty Windows XP (32-bit)
this time, on a system with a pretty slow hard disk (5.4k rpm), 1.83
GHz Intel T2400 cpu, 3 GB RAM.

I must say the results look very good for 1.7 (r1086021) compared to
1.6.16 on this system. Especially for the "Folder tests" and the
"Binaries tests" (rather: you can see how much it hurts sometimes to
use 1.6.x on this kind of system, when working with working copies
with lots of folders). Apart from "Bin-commit" and "Bin-rm", I can't
see a significant performance degradation within your test-suite on my
system.

Both test runs were run with antivirus enabled (AVG free). I also did
a run of 1.7 with AV disabled, but it made no significant difference.
I rebooted between each test run just to be sure about disk caching
effects.

Oh, and I made my 1.7 build with "Release" configuration. At least on
Windows, I know there is a significant performance difference between
"Debug" builds and "Release" builds (that became clear during my
perf-work on "svn diff"). Best to compare release builds with release
builds, I think...

Cheers,
-- 
Johan

Re: Performance benchmarks

Posted by John Beranek <jo...@redux.org.uk>.
On 28/03/2011 23:00, Mark Phippard wrote:
> On Mon, Mar 28, 2011 at 5:42 PM, John Beranek <jo...@redux.org.uk> wrote:
>> On 25/03/2011 17:33, Mark Phippard wrote:
>>> Hi,
>>>
>>> I have been working on a framework for writing tests to record
>>> performance.  I have something good enough to share:
>>
>> May I make an observation about these benchmarks...?
>>
>> When I provided some benchmarks that included 'checkout' tests I was
>> specifically asked to make tests that separate WC and RA functionality.
>>
>> I did this, released results, and the (portable) benchmark code.
> 
> If your point is why I didn't use your code, it is because it is in
> Perl and I do not know Perl.  I also did not see any conversation
> happening around your benchmarks (or else I would not have bothered to
> try to get things going again).

Don't get me wrong, I'm not trying to put down your efforts, just
restating some things from the previous discussion that did seem to make
sense to me.

For reference, the thread I'm talking about was entitled "Subversion
trunk (r1078338) HTTP(/WC?) performance problems?"
<http://news.gmane.org/find-root.php?group=gmane.comp.version-control.subversion.devel&article=126473>.

> I have tried to make it clear that this is just something I decided to
> work on to help.  Whether it means anything or not or whether we use
> these benchmarks to make decisions remains to be seen.  Feel free to
> try to revive discussion around the tests you wrote, I will not be
> offended.

My benchmark script certainly can't be considered "finished", but I had
added extra individual and averaged tests to it. The tests that I got
around to implementing were mostly RA-related, but I could certainly
look at adding some more tests on the WC side. I was somewhat shot down
for showing WC performance problems, because of a "We know about that"
sentiment.

>> Now Mark has released a new set of benchmarks, which don't separate WC
>> and RA functionality. No one has (yet) noted this fact. ;)
> 
> I focused my tests on WC functions.  I am not sure what you mean by RA
> functionality.  Some of our biggest problems are in walking the tree
> during things like update and commit.

Well, as I understand it, a 'checkout' over HTTP is affected both by RA
performance _and_ WC performance. So, my benchmarks were modified to do
'export' instead, to separate out the RA component. Equally if you
wanted to separate out the WC component, you'd do 'checkout' operations
with ra_local, or (as in your tests) other WC operations like
'proplist', 'status' etc.
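
To make that concrete: a driver could time both operations against the
same URL and attribute the difference to the WC layer.  A sketch only
(the URL and helper are illustrative, not my actual script, which is in
Perl):

    public class RaVsWc {
        static long time(String... cmd) throws Exception {
            long start = System.currentTimeMillis();
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            byte[] buf = new byte[8192];
            while (p.getInputStream().read(buf) != -1) {
                // drain output
            }
            p.waitFor();
            return System.currentTimeMillis() - start;
        }

        public static void main(String[] args) throws Exception {
            String url = "http://example.com/svn/repo/trunk";
            // 'svn export' writes plain files only: (mostly) RA performance.
            long exportMs = time("svn", "export", url, "export-dir");
            // 'svn checkout' also builds working-copy metadata: RA + WC.
            long checkoutMs = time("svn", "checkout", url, "wc-dir");
            System.out.println("RA (export):        " + exportMs + " ms");
            System.out.println("RA + WC (checkout): " + checkoutMs + " ms");
        }
    }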

> Anyway, I was not trying to offend you.  I just wanted to help, and I
> have no desire to learn Perl (or even Python, which obviously would
> have been preferred).  I was going to find the email where you posted
> your tests, but since I did not recall anyone else running or
> discussing them, I did not see the benefit in doing so, and it would
> not have accomplished my goal of helping.

No offence taken, I too am just trying to help the testing effort.

John.

-- 
John Beranek                         To generalise is to be an idiot.
http://redux.org.uk/                                 -- William Blake

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Mon, Mar 28, 2011 at 5:42 PM, John Beranek <jo...@redux.org.uk> wrote:
> On 25/03/2011 17:33, Mark Phippard wrote:
>> Hi,
>>
>> I have been working on a framework for writing tests to record
>> performance.  I have something good enough to share:
>
> May I make an observation about these benchmarks...?
>
> When I provided some benchmarks that included 'checkout' tests I was
> specifically asked to make tests that separate WC and RA functionality.
>
> I did this, released results, and the (portable) benchmark code.

If your point is why I didn't use your code, it is because it is in
Perl and I do not know Perl.  I also did not see any conversation
happening around your benchmarks (or else I would not have bothered to
try to get things going again).

I have tried to make it clear that this is just something I decided to
work on to help.  Whether it means anything or not or whether we use
these benchmarks to make decisions remains to be seen.  Feel free to
try to revive discussion around the tests you wrote, I will not be
offended.

> Now Mark has released a new set of benchmarks, which don't separate WC
> and RA functionality. No one has (yet) noted this fact. ;)

I focused my tests on WC functions.  I am not sure what you mean by RA
functionality.  Some of our biggest problems are in walking the tree
during things like update and commit.  So we have to run those
commands in order to see the performance issues.  I have created WCs
with three different "shapes" to show different areas where there
might be problems.  I have avoided all purely RA functions like log,
and I also do not bother to repeatedly show the results for commands
like checkout.  I generally show it once for each working-copy shape
and then just skip reporting how long it took in subsequent tests.

Anyway, I was not trying to offend you.  I just wanted to help, and I
have no desire to learn Perl (or even Python, which obviously would
have been preferred).  I was going to find the email where you posted
your tests, but since I did not recall anyone else running or
discussing them, I did not see the benefit in doing so, and it would
not have accomplished my goal of helping.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Mon, Mar 28, 2011 at 6:45 PM, Greg Stein <gs...@gmail.com> wrote:

> I think your benchmarks are going to be more helpful for us to locate
> hotspots and get them fixed. Mark's seem more high-level, for
> policy-making rather than coding.

From what I can see, both are just driving the command line.  The main
difference will likely be the repositories we provide for the tests.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by John Beranek <jo...@redux.org.uk>.
On 29/03/11 01:33, Greg Stein wrote:
> On Mon, Mar 28, 2011 at 18:51, John Beranek <jo...@redux.org.uk> wrote:
>> On 28/03/2011 23:45, Greg Stein wrote:
>>> On Mon, Mar 28, 2011 at 17:42, John Beranek <jo...@redux.org.uk> wrote:
>>>> On 25/03/2011 17:33, Mark Phippard wrote:
>>>>> Hi,
>>>>>
>>>>> I have been working on a framework for writing tests to record
>>>>> performance.  I have something good enough to share:
>>>>
>>>> May I make an observation about these benchmarks...?
>>>>
>>>> When I provided some benchmarks that included 'checkout' tests I was
>>>> specifically asked to make tests that separate WC and RA functionality.
>>>>
>>>> I did this, released results, and the (portable) benchmark code.
>>>>
>>>> Now Mark has released a new set of benchmarks, which don't separate WC
>>>> and RA functionality. No one has (yet) noted this fact. ;)
>>>
>>> I think your benchmarks are going to be more helpful for us to locate
>>> hotspots and get them fixed. Mark's seem more high-level, for
>>> policy-making rather than coding.
>>>
>>> Did your benchmark scripts get checked in? (I've been out a couple
>>> weeks and may have missed that) And whether they did or not, would you
>>> want commit access to get them committed, and/or continue work on them
>>> within the svn repository?
>>
>> I checked them into a Git repository, both for ease of repository
>> creation, and for ease of cloning. Of course, hosting SVN tools in Git
>> may be seen as sacrilegious by some... ;)
> 
> Heh. I certainly don't mind. They just aren't going to get a lot of
> attention outside of our own repository, I think.
> 
> At a minimum, what's the URL?

https://github.com/jberanek/svn_scripts

I'm certainly not averse to putting the script into the SVN repository
if anyone thinks it's worthwhile though. I think I'd need to at least
add some usage information before doing so.

Cheers,

John.

-- 
John Beranek                         To generalise is to be an idiot.
http://redux.org.uk/                                 -- William Blake


Re: Performance benchmarks

Posted by Greg Stein <gs...@gmail.com>.
On Mon, Mar 28, 2011 at 18:51, John Beranek <jo...@redux.org.uk> wrote:
> On 28/03/2011 23:45, Greg Stein wrote:
>> On Mon, Mar 28, 2011 at 17:42, John Beranek <jo...@redux.org.uk> wrote:
>>> On 25/03/2011 17:33, Mark Phippard wrote:
>>>> Hi,
>>>>
>>>> I have been working on a framework for writing tests to record
>>>> performance.  I have something good enough to share:
>>>
>>> May I make an observation about these benchmarks...?
>>>
>>> When I provided some benchmarks that included 'checkout' tests I was
>>> specifically asked to make tests that separate WC and RA functionality.
>>>
>>> I did this, released results, and the (portable) benchmark code.
>>>
>>> Now Mark has released a new set of benchmarks, which don't separate WC
>>> and RA functionality. No one has (yet) noted this fact. ;)
>>
>> I think your benchmarks are going to be more helpful for us to locate
>> hotspots and get them fixed. Mark's seem more high-level, for
>> policy-making rather than coding.
>>
>> Did your benchmark scripts get checked in? (I've been out a couple
>> weeks and may have missed that) And whether they did or not, would you
>> want commit access to get them committed, and/or continue work on them
>> within the svn repository?
>
> I checked them into a Git repository, both for ease of repository
> creation, and for ease of cloning. Of course, hosting SVN tools in Git
> may be seen as sacrilegious by some... ;)

Heh. I certainly don't mind. They just aren't going to get a lot of
attention outside of our own repository, I think.

At a minimum, what's the URL?

Cheers,
-g

Re: Performance benchmarks

Posted by John Beranek <jo...@redux.org.uk>.
On 28/03/2011 23:45, Greg Stein wrote:
> On Mon, Mar 28, 2011 at 17:42, John Beranek <jo...@redux.org.uk> wrote:
>> On 25/03/2011 17:33, Mark Phippard wrote:
>>> Hi,
>>>
>>> I have been working on a framework for writing tests to record
>>> performance.  I have something good enough to share:
>>
>> May I make an observation about these benchmarks...?
>>
>> When I provided some benchmarks that included 'checkout' tests I was
>> specifically asked to make tests that separate WC and RA functionality.
>>
>> I did this, released results, and the (portable) benchmark code.
>>
>> Now Mark has released a new set of benchmarks, which don't separate WC
>> and RA functionality. No one has (yet) noted this fact. ;)
> 
> I think your benchmarks are going to be more helpful for us to locate
> hotspots and get them fixed. Mark's seem more high-level, for
> policy-making rather than coding.
> 
> Did your benchmark scripts get checked in? (I've been out a couple
> weeks and may have missed that) And whether they did or not, would you
> want commit access to get them committed, and/or continue work on them
> within the svn repository?

I checked them into a Git repository, both for ease of repository
creation, and for ease of cloning. Of course, hosting SVN tools in Git
may be seen as sacrilegious by some... ;)

John.

-- 
John Beranek                         To generalise is to be an idiot.
http://redux.org.uk/                                 -- William Blake


Re: Performance benchmarks

Posted by Greg Stein <gs...@gmail.com>.
On Mon, Mar 28, 2011 at 17:42, John Beranek <jo...@redux.org.uk> wrote:
> On 25/03/2011 17:33, Mark Phippard wrote:
>> Hi,
>>
>> I have been working on a framework for writing tests to record
>> performance.  I have something good enough to share:
>
> May I make an observation about these benchmarks...?
>
> When I provided some benchmarks that included 'checkout' tests I was
> specifically asked to make tests that separate WC and RA functionality.
>
> I did this, released results, and the (portable) benchmark code.
>
> Now Mark has released a new set of benchmarks, which don't separate WC
> and RA functionality. No one has (yet) noted this fact. ;)

I think your benchmarks are going to be more helpful for us to locate
hotspots and get them fixed. Mark's seem more high-level, for
policy-making rather than coding.

Did your benchmark scripts get checked in? (I've been out a couple
weeks and may have missed that) And whether they did or not, would you
want commit access to get them committed, and/or continue work on them
within the svn repository?

Cheers,
-g

Re: Performance benchmarks

Posted by John Beranek <jo...@redux.org.uk>.
On 25/03/2011 17:33, Mark Phippard wrote:
> Hi,
> 
> I have been working on a framework for writing tests to record
> performance.  I have something good enough to share:

May I make an observation about these benchmarks...?

When I provided some benchmarks that included 'checkout' tests I was
specifically asked to make tests that separate WC and RA functionality.

I did this, released results, and the (portable) benchmark code.

Now Mark has released a new set of benchmarks, which don't separate WC
and RA functionality. No one has (yet) noted this fact. ;)

Cheers,

John.

-- 
John Beranek                         To generalise is to be an idiot.
http://redux.org.uk/                                 -- William Blake

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
I thought I did.  Sorry, I will do so now.

You can also click the button and request to join the project and edit
the wiki yourself (this applies to anyone).


On Tue, Mar 29, 2011 at 3:40 PM, Johan Corveleyn <jc...@gmail.com> wrote:
> On Tue, Mar 29, 2011 at 6:08 PM, Mark Phippard <ma...@gmail.com> wrote:
>> Thanks!  All results have been added to the wiki.
>
> Hi Mark,
>
> Can you add my WinXP results as well? (sent 21 hours ago, according to
> gmail :-)).
>
> Or do you have enough benchmark data from Windowses for now (I think
> they are all 64 bit)?
>
> Cheers,
> --
> Johan
>



-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by Johan Corveleyn <jc...@gmail.com>.
On Tue, Mar 29, 2011 at 6:08 PM, Mark Phippard <ma...@gmail.com> wrote:
> Thanks!  All results have been added to the wiki.

Hi Mark,

Can you add my WinXP results as well? (sent 21 hours ago, according to
gmail :-)).

Or do you have enough benchmark data from Windowses for now (I think
they are all 64 bit)?

Cheers,
-- 
Johan

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
Thanks!  All results have been added to the wiki.



On Tue, Mar 29, 2011 at 11:36 AM, vijay <vi...@collab.net> wrote:
> On Sunday 27 March 2011 12:21 AM, Mark Phippard wrote:
>>
>> I would love to see someone do some tests with the WC on local disk vs
>> network mount (1.6 and 1.7).  I tried to do it using some virtual
>> machines I have access to at CollabNet.  The problem is that the
>> connection of these boxes to the NetApp with our home folders is too
>> slow.  Some of the checkouts (even using 1.6) were running for an hour
>> and I finally killed the test.
>>
>
> I have run the tests on RHEL-5.3 x86_64 with Subversion 1.6.9 and
> 1.7.0-dev (r1086490). It covers both cases: *WC on local disk vs WC on
> network mount*.
>
> Repository access layer: file://
>
> The results are attached.
>
> Thanks & Regards,
> Vijayaguru
>
>
>



-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: Performance benchmarks

Posted by vijay <vi...@collab.net>.
On Sunday 27 March 2011 12:21 AM, Mark Phippard wrote:
> I would love to see someone do some tests with the WC on local disk vs
> network mount (1.6 and 1.7).  I tried to do it using some virtual
> machines I have access to at CollabNet.  The problem is that the
> connection of these boxes to the NetApp with our home folders is too
> slow.  Some of the checkouts (even using 1.6) were running for an hour
> and I finally killed the test.
>

I have run the tests on RHEL-5.3 x86_64 with Subversion 1.6.9 and 
1.7.0-dev (r1086490). It covers both cases: *WC on local disk vs WC on 
network mount*.

Repository access layer: file://

The results are attached.

Thanks & Regards,
Vijayaguru



Re: Performance benchmarks

Posted by Daniel Becroft <dj...@gmail.com>.
On Sun, Mar 27, 2011 at 4:51 AM, Mark Phippard <ma...@gmail.com> wrote:

> On Fri, Mar 25, 2011 at 1:33 PM, Mark Phippard <ma...@gmail.com> wrote:
>
> > I have been working on a framework for writing tests to record
> > performance.  I have something good enough to share:
> >
> > https://ctf.open.collab.net/sf/projects/csvn
> >
> > It is pretty easy to add new tests if you have ideas on more tests you
> > think we should add.  I think I have pretty good coverage of the major
> > functions.  The wiki on the site I linked to above has details on how
> > I have constructed the current tests.  I am going to put out a call to
> > users for feedback and try to get more people to run the tests and
> > record results.
> >
> > I am not claiming these are anything definitive or even that we will
> > use them to help us make the release decision, but I think it is a
> > start on coming up with some reproducible tests that people can run
> > easily.  If after people look at and run the tests they think they are
> > useful or can be tweaked to be useful, then great.  If not, then at
> > least I got to write some code for a change :)
> >
> > The tests are written in Java because that is what I know and it gives
> > me good cross platform coverage.  However, the Java just drives the
> > command line so all you need to do is have the svn command line in
> > your PATH and that is what it uses for all the work.
>
> These tests are showing some interesting results.
>
> * There are a number of cases where 1.7 shows large performance gains
> over 1.6.  For example, a WC with a lot of folders on Windows:
>
>
> https://ctf.open.collab.net/sf/wiki/do/viewPage/projects.csvn/wiki/FolderTests
>
> * Checkout seems to be the biggest remaining hotspot where 1.7 tends
> to be slower, although commit seems to still be slower too.
>
> * Delete and move are slower than I would have expected.  Is this
> because we can actually delete the folder now, as opposed to waiting
> until commit time?  I actually thought that would make it faster,
> as I thought the folder used to get deleted and then put back (maybe
> just in IDEs).
>
> * There is something seriously bad going on with Anti-Virus on my
> Windows laptop (Symantec Endpoint Protection).  I noticed the times
> for commit were way out of whack so I configured my A/V to ignore the
> folder where the benchmarks were running.  When I ran the tests again,
> the performance of 1.7 was waaay better.  I am about 95% certain the
> A/V is the problem but I need to make sure.  While it was great to see
> the good performance, it is still a big concern to see that A/V might
> be having an even bigger impact than it has had before.  1.6 also
> showed performance improvements, but they were less dramatic than
> those for some of the 1.7 commands.
>
> Stefan Küng asked me to add svnversion to the benchmark and I have
> done so in the latest release.
>
> I would love to see someone do some tests with the WC on local disk vs
> network mount (1.6 and 1.7).  I tried to do it using some virtual
> machines I have access to at CollabNet.  The problem is that the
> connection of these boxes to the NetApp with our home folders is too
> slow.  Some of the checkouts (even using 1.6) were running for an hour
> and I finally killed the test.
>
> Anyway, overall I am encouraged by the results I am seeing so far.  I
> look forward to more people running the tests and sharing the results.
>
>
Attached are my results from the benchmark suite. I've provided two versions
of the 1.7.0 run. One was an older build (prior to the optimizations), and
the other was a recent build.

These were all run over file:///. I'll try and see if I can re-run it over
svn://.

Cheers,
Daniel B.

Re: Performance benchmarks

Posted by Philip Martin <ph...@wandisco.com>.
Mark Phippard <ma...@gmail.com> writes:

> * Delete and move are slower than I would have expected.

These are slow because delete has no recursive optimisation.
Non-recursive delete of a node with all children already deleted is the
most basic operation, and recursive delete can be implemented in terms
of this operation.  That's what we have, since we only need to implement
the one fundamental operation.  However, it would be more efficient to
implement some recursive delete optimisations.
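
Schematically (made-up types, not the real wc_db code):

    import java.util.ArrayList;
    import java.util.List;

    class Node {
        final String name;
        final List<Node> children = new ArrayList<Node>();
        Node(String name) { this.name = name; }
    }

    public class Deleter {
        /** The one fundamental operation: delete a node with no children. */
        static void deleteChildless(Node n) {
            if (!n.children.isEmpty())
                throw new IllegalStateException(n.name + " still has children");
            System.out.println("deleted " + n.name);  // stand-in for the row delete
        }

        /** Recursive delete in terms of the primitive: a post-order walk,
         *  one primitive call per descendant -- correct, but with no
         *  recursive optimisation the cost grows with every node. */
        static void deleteRecursively(Node n) {
            for (Node child : n.children) {
                deleteRecursively(child);
            }
            n.children.clear();   // all children gone; the primitive applies
            deleteChildless(n);
        }

        public static void main(String[] args) {
            Node trunk = new Node("trunk");
            Node dir = new Node("trunk/dir");
            dir.children.add(new Node("trunk/dir/file"));
            trunk.children.add(dir);
            deleteRecursively(trunk);
        }
    }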

-- 
Philip

Re: Performance benchmarks

Posted by Mark Phippard <ma...@gmail.com>.
On Fri, Mar 25, 2011 at 1:33 PM, Mark Phippard <ma...@gmail.com> wrote:

> I have been working on a framework for writing tests to record
> performance.  I have something good enough to share:
>
> https://ctf.open.collab.net/sf/projects/csvn
>
> It is pretty easy to add new tests if you have ideas on more tests you
> think we should add.  I think I have pretty good coverage of the major
> functions.  The wiki on the site I linked to above has details on how
> I have constructed the current tests.  I am going to put out a call to
> users for feedback and try to get more people to run the tests and
> record results.
>
> I am not claiming these are anything definitive or even that we will
> use them to help us make the release decision, but I think it is a
> start on coming up with some reproducible tests that people can run
> easily.  If after people look at and run the tests they think they are
> useful or can be tweaked to be useful, then great.  If not, then at
> least I got to write some code for a change :)
>
> The tests are written in Java because that is what I know and it gives
> me good cross platform coverage.  However, the Java just drives the
> command line so all you need to do is have the svn command line in
> your PATH and that is what it uses for all the work.

These tests are showing some interesting results.

* There are a number of cases where 1.7 shows large performance gains
over 1.6.  For example, a WC with a lot of folders on Windows:

 https://ctf.open.collab.net/sf/wiki/do/viewPage/projects.csvn/wiki/FolderTests

* Checkout seems to be the biggest remaining hotspot where 1.7 tends
to be slower, although commit seems to still be slower too.

* Delete and move are slower than I would have expected.  Is this
because we can actually delete the folder now, as opposed to waiting
until commit time?  I actually thought that would make it faster,
as I thought the folder used to get deleted and then put back (maybe
just in IDEs).

* There is something seriously bad going on with Anti-Virus on my
Windows laptop (Symantec Endpoint Protection).  I noticed the times
for commit were way out of whack so I configured my A/V to ignore the
folder where the benchmarks were running.  When I ran the tests again,
the performance of 1.7 was waaay better.  I am about 95% certain the
A/V is the problem but I need to make sure.  While it was great to see
the good performance, it is still a big concern to see that A/V might
be having an even bigger impact than it has had before.  1.6 also
showed performance improvements, but they were less dramatic than
those for some of the 1.7 commands.

Stefan Küng asked me to add svnversion to the benchmark and I have
done so in the latest release.

I would love to see someone do some tests with the WC on local disk vs
network mount (1.6 and 1.7).  I tried to do it using some virtual
machines I have access to at CollabNet.  The problem is that the
connection of these boxes to the NetApp with our home folders is too
slow.  Some of the checkouts (even using 1.6) were running for an hour
and I finally killed the test.

Anyway, overall I am encouraged by the results I am seeing so far.  I
look forward to more people running the tests and sharing the results.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/