Posted to users@subversion.apache.org by Yves Martin <ym...@free.fr> on 2010/07/14 16:24:37 UTC

Re: Poor performance for large software repositories downloading to CIFS shares

On Tue, 2010-07-13 at 20:40 -0400, Nico Kadel-Garcia wrote:

> Well, yes, except that updating an "export" can't be done, since it
> will lack the .svn information. The point is that they can
> download an up-to-date working copy directly, rather than through
> the poorly performing CIFS share.

So why are your users unable to access the Subversion repository
directly, either with the http(s) or svn protocols?
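For example, a checkout straight from the server (URLs hypothetical)
bypasses the CIFS share entirely:

  $ svn checkout https://svn.example.com/repos/project/trunk project
  $ svn checkout svn://svn.example.com/project/trunk project

Subsequent "svn update" runs then transfer only the deltas, not the
whole tree.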

> > I have seen a 1 GB working copy properly checked out on a local disk.
> > When the working copy is there, just use "update" and "switch" to limit
> > transfer and disk writes... Why do a new checkout each time?
> 
> And that actually works. But there are problems with this approach:
> the local disk is inaccessible from other working systems without
> serious cross-mounting craziness, it is not workable for
> high-availability services, and any local modifications that haven't
> been checked in are lost when switching to another system.

Am I right to guess that you are trying to prevent the loss of a
day's work with such a complex system? I think it is cheaper and more
comfortable to set up RAID-1 disks on the workstations...

If you want your users to commit to the repository regularly (twice a
day, for instance, even when the code does not compile), one option is
to have them commit their work to individual branches which are merged
back once the job is done.
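
A sketch of that workflow (repository layout and branch name
hypothetical), using the 1.6-style ^/ URL shorthand and reintegrate
merge:

  # create a private branch and switch the working copy to it
  $ svn copy ^/trunk ^/branches/private-work -m "Create private work branch"
  $ svn switch ^/branches/private-work

  # commit as often as you like, even with broken code
  $ svn commit -m "Work in progress"

  # when the job is done, merge back from a trunk working copy
  $ svn switch ^/trunk
  $ svn merge --reintegrate ^/branches/private-work
  $ svn commit -m "Merge private work branch back to trunk"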