Posted to dev@subversion.apache.org by Erik Scrafford <er...@scrafford.org> on 2006/06/02 03:00:22 UTC

Total failure in dav layer when >1024 files already open

I've run into a strange problem where the dav layer completely fails
when more than 1024 files are open. Every request ends up returning
a PROPFIND error (when linked against Subversion 1.3; 1.2 seems to
fail in a different manner). svn:// and file:// URLs still seem to
work fine in this case; only http:// and https:// URLs fail. I've
created a repro case that I'm attaching to this message. I'm on
Mac OS X 10.4.6, and I have no idea if this test code will work on
other systems. Below is the output when running my repro program. Why
do I have that many files open? ZigVersion monitors its working
copies using kqueue/kevents, which ends up keeping a file handle
open on each file/directory. So it's easy to see how I quickly exceed
1024 open files in many cases.

[eriks@eriks-g5:~/files/svntest] ./svn-broken-http svn://dev-server.local
Before file handles used
------------------------
UUID for url: ca633d2f-06ef-0310-9b47-e91f935bd6fd

number of the last handle opened: 1024
After file handles used
-----------------------
UUID for url: ca633d2f-06ef-0310-9b47-e91f935bd6fd
[eriks@eriks-g5:~/files/svntest] ./svn-broken-http http://dev-server.local/svn
Before file handles used
------------------------
UUID for url: ca633d2f-06ef-0310-9b47-e91f935bd6fd

number of the last handle opened: 1024
After file handles used
-----------------------
SVN Error Code: 175002
PROPFIND request failed on '/svn'
PROPFIND of '/svn': could not connect to server (http://dev-server.local)
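
For context, a minimal sketch (not ZigVersion's actual code; the event
flags and helper are illustrative) of the kqueue pattern described
above, showing why every watched path pins one open descriptor:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Watching a path with kqueue requires keeping its descriptor
     * open for the lifetime of the watch, so N watched files pin
     * N descriptors. */
    static int watch_path(int kq, const char *path)
    {
        int fd = open(path, O_RDONLY);   /* stays open while watching */
        if (fd < 0)
            return -1;

        struct kevent ev;
        EV_SET(&ev, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
               NOTE_WRITE | NOTE_DELETE | NOTE_RENAME, 0, NULL);
        if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* caller must keep this fd open to keep watching */
    }

With kq obtained from kqueue(), calling watch_path() once per file and
directory in a large working copy exceeds 1024 descriptors quickly.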




Erik Scrafford
ZigZig Software - http://zigzig.com/



Re: Total failure in dav layer when >1024 files already open

Posted by Jonathan Gilbert <o2...@sneakemail.com>.
At 01:37 PM 05/06/2006 -0400, Michael Sweet wrote:
[snip]
>For select(), you can just loop through your state data and use the
>FD_ISSET macro to test the corresponding fd_set.  That is O(n).
>
>The situation is somewhat similar for fd_set vs. pollfd array
>management.  Adding and/or removing a file descriptor in a dynamic
>pollfd array requires O(n log n) for every addition or removal and
>O(log n) for update.  Doing so for fd_set is O(1) (constant time)
>for all of the UNIX fd_set implementations I have worked with.

You say 'UNIX' here explicitly, so I suspect you may already know what
I'm about to say. :-)

I just wanted to point out that on Windows, FD_SET, FD_CLR and FD_ISSET are
O(n), not O(1). FD_ISSET even involves a function call. This is the natural
trade-off between the efficiency of a bitset and the ability to use
arbitrarily-large file descriptors. UNIX systems chose efficiency, while
Microsoft chose to limit only the total number of file descriptors and not
their actual values.

UNIX systems were so interested in efficiency, in fact, that they didn't
even bother to check for errors. Thus, if you get a file descriptor that is
too large and pass it into FD_SET or FD_CLR, what you get is stack or heap
corruption. With Microsoft's version, when you try to add too many file
descriptors, the ones which are over the limit are silently ignored. Not a
perfect solution either, but it is at least detectable. In any event, the
fd_set interface is too simplified, too generalized, for code to figure out
what will work and what won't without assuming certain internal
implementation details.

On Windows, FD_SETSIZE is precisely the number of fds you can place into an
fd_set, while on UNIX systems, it is typically one greater than the maximum
file descriptor value that can be set. Some UNIX systems pay attention if
you set FD_SETSIZE before including the requisite headers, while others
(like an ancient linux box of mine, for instance -- so crazed for
efficiency that it implements the various FD_* macros in inline assembler!)
define everything about fd_sets with two underscores first in "bits"
headers, and then typedef/#define them over. Clearly, changing the value of
FD_SETSIZE won't work in that situation (though changing __FD_SETSIZE
might). Incidentally, Windows is counted among those operating systems
which *do* allow the value to be changed by the user :-)

At least there's one thing that Windows' version does more efficiently than
the traditional UNIX bitset approach: FD_ZERO is O(1), because all it has
to do is reset the count of items in the list to 0. :-)
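
For reference, this is (paraphrased) how <winsock2.h> declares fd_set:
a counted array of sockets rather than a bitset, which is exactly why
FD_ZERO only has to reset the count while FD_SET/FD_CLR/FD_ISSET walk
the array:

    /* Paraphrased from <winsock2.h>.  FD_SETSIZE bounds how many
     * sockets fit in the set, not their values. */
    typedef struct fd_set {
        u_int  fd_count;               /* sockets currently in the set */
        SOCKET fd_array[FD_SETSIZE];   /* the sockets themselves */
    } fd_set;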

Jonathan Gilbert


Re: Total failure in dav layer when >1024 files already open

Posted by Michael Sweet <mi...@easysw.com>.
Martin v. Löwis wrote:
> Michael Sweet wrote:
>> The select() method is O(n).  The poll() method is O(n log n) unless
>> you manage a large array that maps from FD to the corresponding
>> state data for that FD, which can eat up a LOT of memory...
> 
> You are assuming an implementation strategy here.

Not of select() or poll(), but certainly for the application.

For a single file descriptor, there is no difference for an
application - use fd_set, use pollfd, both are simple and
constant time.

However, for any non-trivial application managing multiple file 
descriptors, poll() adds overhead in the form of pollfd array
management, no matter how else you implement things in your
code.

> poll() is
> O(number-of-polled-fds) on Linux. Yes, there is an array of all
> files, per process, but that does not consume a LOT of memory.
> Instead, it consumes one pointer per file number, up to some
> dynamically-determined maximum.
> 
> OTOH, select is O(maximum-fd-value). Assuming you always pass
> only a few file descriptors to select/poll, poll is more
> efficient if the process has many open files.

Again, I'm not talking about the implementation complexity of
either interface, but the application code complexity to use
those interfaces.  Regardless, O(maximum-fd-value) is equivalent
to O(number-of-polled-fds) in terms of algorithmic complexity
[O(n) == O(n)], and very likely in practice (clock time) as well.

In the case of CUPS, the number of file descriptors is often the
same as the number of files we are doing a select() on, so for
us poll() is a performance loser.

On the application side, with poll() you need to either loop
through the pollfd array to determine which file descriptors are
active, or loop through the (separately maintained) state array
which references those file descriptors.  That is O(n).  Then you
have to look up the state data for that file descriptor; for a
sparse array (index != fd), the best you can do is O(log n),
making the total (application) code complexity O(n log n).
Similarly, if you loop through your state data [O(n)] and lookup
the file descriptors in a sorted pollfd array [O(log n)], you
still have O(n log n).  The fastest implementation would be to
use an array that maps file descriptors (or pollfd array indices)
to your state data, which reduces the total complexity to O(n).

For select(), you can just loop through your state data and use the
FD_ISSET macro to test the corresponding fd_set.  That is O(n).

The situation is somewhat similar for fd_set vs. pollfd array
management.  Adding and/or removing a file descriptor in a dynamic
pollfd array requires O(n log n) for every addition or removal and
O(log n) for update.  Doing so for fd_set is O(1) (constant time)
for all of the UNIX fd_set implementations I have worked with.
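
A minimal sketch of the O(n) select() pattern being described (conn_t
and the state array are hypothetical): build the set from your state
data, then walk the same state data with FD_ISSET.  It assumes all
descriptors are below FD_SETSIZE, which is the very limit this thread
is about.

    #include <sys/select.h>

    typedef struct { int fd; /* ... per-connection state ... */ } conn_t;

    static void service_connections(conn_t *conns, int nconns)
    {
        fd_set readfds;
        int i, maxfd = -1;

        FD_ZERO(&readfds);
        for (i = 0; i < nconns; i++) {        /* O(n) to build the set */
            FD_SET(conns[i].fd, &readfds);
            if (conns[i].fd > maxfd)
                maxfd = conns[i].fd;
        }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
            return;

        for (i = 0; i < nconns; i++)          /* O(n): state -> FD_ISSET */
            if (FD_ISSET(conns[i].fd, &readfds))
                ;  /* handle I/O on conns[i] here */
    }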

-- 
______________________________________________________________________
Michael Sweet, Easy Software Products           mike at easysw dot com
Internet Printing and Document Software          http://www.easysw.com



Re: Total failure in dav layer when >1024 files already open

Posted by "Martin v. Löwis" <Ma...@hpi.uni-potsdam.de>.
Michael Sweet wrote:
> The select() method is O(n).  The poll() method is O(n log n) unless
> you manage a large array that maps from FD to the corresponding
> state data for that FD, which can eat up a LOT of memory...

You are assuming an implementation strategy here. poll() is
O(number-of-polled-fds) on Linux. Yes, there is an array of all
files, per process, but that does not consume a LOT of memory.
Instead, it consumes one pointer per file number, up to some
dynamically-determined maximum.

OTOH, select is O(maximum-fd-value). Assuming you always pass
only a few file descriptors to select/poll, poll is more
efficient if the process has many open files.

Regards,
Martin


Re: Total failure in dav layer when >1024 files already open

Posted by Michael Sweet <mi...@easysw.com>.
Greg Hudson wrote:
> ...
>> Every OS other than Linux allows select() to work with an arbitrary
>> number of file descriptors; even the Linux kernel allows it, just not
>> glibc.
> 
> I believe you're simply mistaken.  Solaris has a default limit of 1024
> (or 65536 for 64-bit code).  NetBSD has a default limit of 256.  It's
> extremely unlikely that you can find even one OS which tries to do a
> dynamically-sized fd_set.

Yes, each OS has a default limit, but the point is that you can
allocate your own fd_set to get a larger number of FDs.  CUPS has
been doing this for years, and it wasn't until very recently, when
glibc disabled that particular feature, that this became a
problem.

FWIW, Windows implements fd_set quite differently from the UNIX
world, essentially providing a dynamically-sized poll array.

> The Linux kernel can allow an arbitrary number of file descriptors
> because it isn't responsible for the particular semantics of FD_SET and
> friends.

It is an artificial limit.  The "fd_set" type may be fixed-size, but
there is no reason to limit select() when other operating systems
don't and the kernel itself doesn't.

>> Try managing an array of thousands of poll entries sometime, and
>> then compare the efficiency of a bit test vs. scanning an array
>> after the fact...
> 
> Both select() and poll() require you to scan the fd set structure after
> the call to find out what I/O events actually happened.  No difference
> there.  This is not normally an issue, as performing a few thousand
> memory accesses is generally cheap compared to a single I/O operation,
> but there have been some stabs at solving the theoretical scaling
> problem, such as Linux's epoll(), Solaris's /dev/poll, and the like.

Sure, with select() you need to scan *one* array - your active
connections (or whatever it is that you are using select() for),
but with poll() you need to scan the poll array *and* then do a
lookup in your active connections array (or vice versa).

The select() method is O(n).  The poll() method is O(n log n) unless
you manage a large array that maps from FD to the corresponding
state data for that FD, which can eat up a LOT of memory...

-- 
______________________________________________________________________
Michael Sweet, Easy Software Products           mike at easysw dot com
Internet Printing and Document Software          http://www.easysw.com


Re: Total failure in dav layer when >1024 files already open

Posted by Greg Hudson <gh...@MIT.EDU>.
On Sat, 2006-06-03 at 08:12 -0400, Michael Sweet wrote:
> > fd_set cannot be a dynamically allocated structure because you have to
> > be able to copy it with assignment
> 
> Why?  No code I have ever seen does this...

Because select() modifies its input arguments, it's natural to keep a
master copy of the fds you want to track, and copy it before invoking
select().  Since there is no copy method, structure assignment has to
suffice.  A lot of code sets up the fd_set from scratch every time, but
there certainly exists code which does not.

At any rate, there's no deallocation function.  If FD_SET() and friends
were to dynamically allocate memory pointed to by the fd_set structure,
that memory would be leaked when the lifetime of the fd_set structure
ends.

(There's also no failure return for FD_SET to handle out of memory
conditions.)
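
A minimal sketch of the master-copy idiom being described: select()
clobbers its arguments, so the natural pattern is a master fd_set
copied by structure assignment before each call (names hypothetical).

    #include <sys/select.h>

    static fd_set master;            /* fds we always want to watch */

    static int wait_for_io(int maxfd)
    {
        fd_set ready = master;       /* structure assignment: the copy */
        return select(maxfd + 1, &ready, NULL, NULL, NULL);
    }

A dynamically allocated fd_set couldn't be copied this way without a
copy function, which is the point being made here.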

> Every OS other than Linux allows select() to work with an arbitrary
> number of file descriptors; even the Linux kernel allows it, just not
> glibc.

I believe you're simply mistaken.  Solaris has a default limit of 1024
(or 65536 for 64-bit code).  NetBSD has a default limit of 256.  It's
extremely unlikely that you can find even one OS which tries to do a
dynamically-sized fd_set.

The Linux kernel can allow an arbitrary number of file descriptors
because it isn't responsible for the particular semantics of FD_SET and
friends.

> Try managing an array of thousands of poll entries sometime, and
> then compare the efficiency of a bit test vs. scanning an array
> after the fact...

Both select() and poll() require you to scan the fd set structure after
the call to find out what I/O events actually happened.  No difference
there.  This is not normally an issue, as performing a few thousand
memory accesses is generally cheap compared to a single I/O operation,
but there have been some stabs at solving the theoretical scaling
problem, such as Linux's epoll(), Solaris's /dev/poll, and the like.
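
For comparison, a minimal sketch of the post-call scan with poll();
the revents walk is the same O(n) pass that select() users do with
FD_ISSET:

    #include <poll.h>

    static void service_pollfds(struct pollfd *pfds, nfds_t npfds)
    {
        nfds_t i;

        if (poll(pfds, npfds, -1) <= 0)
            return;

        for (i = 0; i < npfds; i++)   /* scan the set after the call */
            if (pfds[i].revents & (POLLIN | POLLERR | POLLHUP))
                ;  /* look up and service the owner of pfds[i].fd */
    }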


Re: Total failure in dav layer when >1024 files already open

Posted by Michael Sweet <mi...@easysw.com>.
On Jun 2, 2006, at 12:29 PM, Greg Hudson wrote:

> On Fri, 2006-06-02 at 10:35 -0400, Michael Sweet wrote:
>> Recent versions of glibc (>= 2.3.2 IIRC) *do* limit FD_SETSIZE to
>> 1024, even though the kernel interface has no such limit.  The
>> response I've been given from the glibc folks is that "we should be
>> using poll() instead"...
>
> select has to have a fixed limit because of the nature of its  
> interface.
> fd_set cannot be a dynamically allocated structure because you have to
> be able to copy it with assignment

Why?  No code I have ever seen does this...

> (plus, there's no deallocation
> method), so it must have a fixed size.  A limit of 1024 means each
> fd_set takes up 128 bytes; if that limit were upped by default to
> something larger like 32K, each fd_set would take up 4K of space,
> probably on the stack, which would start to cause problems in
> multi-threaded programs.

Every OS other than Linux allows select() to work with an arbitrary
number of file descriptors; even the Linux kernel allows it, just not
glibc.

> poll has a saner interface, and isn't subject to this problem.

That's really just a matter of opinion, and not one that I share.

Try managing an array of thousands of poll entries sometime, and
then compare the efficiency of a bit test vs. scanning an array
after the fact...  You already have to track your connections
separately, but with poll you need to be able to look up those
connections via file descriptor *fast*.  Manage several different
types of connections and you add yet another complication.  Finally,
some systems don't support or work well with poll(), so you need to
support both select() and poll() now...

This is just an arbitrary change to glibc for a bug that didn't
exist - glibc is artificially limiting select() for no good reason.

______________________________________________________________________
Michael Sweet, Easy Software Products           mike at easysw dot com
Internet Printing and Document Software          http://www.easysw.com





Re: Total failure in dav layer when >1024 files already open

Posted by Greg Hudson <gh...@MIT.EDU>.
On Fri, 2006-06-02 at 10:35 -0400, Michael Sweet wrote:
> Recent versions of glibc (>= 2.3.2 IIRC) *do* limit FD_SETSIZE to
> 1024, even though the kernel interface has no such limit.  The
> response I've been given from the glibc folks is that "we should be
> using poll() instead"...

select has to have a fixed limit because of the nature of its interface.
fd_set cannot be a dynamically allocated structure because you have to
be able to copy it with assignment (plus, there's no deallocation
method), so it must have a fixed size.  A limit of 1024 means each
fd_set takes up 128 bytes; if that limit were upped by default to
something larger like 32K, each fd_set would take up 4K of space,
probably on the stack, which would start to cause problems in
multi-threaded programs.
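
The size arithmetic is easy to check: an fd_set is FD_SETSIZE bits,
so 1024/8 = 128 bytes and 32768/8 = 4096 bytes.  A two-line probe:

    #include <sys/select.h>
    #include <stdio.h>

    int main(void)
    {
        printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
               FD_SETSIZE, sizeof(fd_set));
        return 0;
    }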

poll has a saner interface, and isn't subject to this problem.


Re: Total failure in dav layer when >1024 files already open

Posted by Michael Sweet <mi...@easysw.com>.
Peter N. Lundblad wrote:
> Garrett Rooney writes:
>  > On 6/2/06, Malcolm Rowe <ma...@farside.org.uk> wrote:
>  > > On Fri, Jun 02, 2006 at 09:37:36AM -0400, Garrett Rooney wrote:
>  > > > It's not exactly strange that it would fail, you've hit the per-user
>  > > > file descriptor limit.
>  > >
>  > > Exactly what I thought, except that the first thing he does is to bump
>  > > the ulimit to 32k, which should be plenty.  I wonder if Neon has a
>  > > problem with fds > 1024?
>  > 
>  > Hmm.  Interesting.  Maybe something OS X specific?
>  > 
> 
> I wonder if this is related to FD_SETSIZE and select.  On Linux, this
> seems to be 1024 and that obviously doesn't change when increasing the
> limit.

Recent versions of glibc (>= 2.3.2 IIRC) *do* limit FD_SETSIZE to
1024, even though the kernel interface has no such limit.  The
response I've been given from the glibc folks is that "we should be
using poll() instead"...

I don't think Mac OS X duplicates this madness, and Solaris and other
commercial UNIXes have no problem with more than 1024 file descriptors
as long as you allocate your own fd_sets.
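
A sketch of the allocate-your-own-fd_set trick being referred to,
using the traditional BSD NFDBITS/fd_mask names (spellings vary by
system, and this only helps where select() itself accepts descriptors
above FD_SETSIZE):

    #include <sys/select.h>
    #include <stdlib.h>

    /* Allocate a zeroed bit array big enough for descriptors
     * 0..maxfd, instead of relying on the fixed-size fd_set type. */
    static fd_set *alloc_fd_set(int maxfd)
    {
        size_t nwords = (size_t)maxfd / NFDBITS + 1;
        return (fd_set *)calloc(nwords, sizeof(fd_mask));
    }

Note that the FD_* macros may still assume FD_SETSIZE internally; the
glibc change being complained about here is exactly that.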

-- 
______________________________________________________________________
Michael Sweet, Easy Software Products           mike at easysw dot com
Internet Printing and Document Software          http://www.easysw.com


Re: Total failure in dav layer when >1024 files already open

Posted by "Peter N. Lundblad" <pe...@famlundblad.se>.
Garrett Rooney writes:
 > On 6/2/06, Malcolm Rowe <ma...@farside.org.uk> wrote:
 > > On Fri, Jun 02, 2006 at 09:37:36AM -0400, Garrett Rooney wrote:
 > > > It's not exactly strange that it would fail, you've hit the per-user
 > > > file descriptor limit.
 > >
 > > Exactly what I thought, except that the first thing he does is to bump
 > > the ulimit to 32k, which should be plenty.  I wonder if Neon has a
 > > problem with fds > 1024?
 > 
 > Hmm.  Interesting.  Maybe something OS X specific?
 > 

I wonder if this is related to FD_SETSIZE and select.  On Linux, this
seems to be 1024 and that obviously doesn't change when increasing the
limit.

Regards,
//Peter


Re: Total failure in dav layer when >1024 files already open

Posted by Malcolm Rowe <ma...@farside.org.uk>.
Oops, missed the reference:

On Fri, Jun 02, 2006 at 05:07:20PM +0100, Malcolm Rowe wrote:
> Yes, that'll be it.  According to [1], "the default size FD_SETSIZE

[1] Darwin's select(2) man page
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man2/select.2.html

Regards,
Malcolm


Re: Total failure in dav layer when >1024 files already open

Posted by Erik Scrafford <er...@scrafford.org>.
Thanks everyone for your help: setting FD_SETSIZE in CFLAGS and
recompiling neon/subversion fixed the problem.

erik

On Jun 2, 2006, at 9:07 AM, Malcolm Rowe wrote:

> On Fri, Jun 02, 2006 at 06:53:54PM +0300, Kalle Olavi Niemitalo wrote:
>> If Neon cannot use poll(), then it must use select(), which has
>> an intrinsic FD_SETSIZE limit.  GNU libc on Linux defines
>> FD_SETSIZE as 1024; perhaps it is the same on OS X.
>
> Yes, that'll be it.  According to [1], "the default size FD_SETSIZE
> (currently 1024) is somewhat smaller than the current kernel limit to
> the number of open files".
>
> It looks like you can probably override this by defining FD_SETSIZE
> yourself, in which case CFLAGS='-DFD_SETSIZE=8192' make (or similar)
> should fix the problem.
>
> Regards,
> Malcolm


Re: Total failure in dav layer when >1024 files already open

Posted by Malcolm Rowe <ma...@farside.org.uk>.
On Fri, Jun 02, 2006 at 06:53:54PM +0300, Kalle Olavi Niemitalo wrote:
> If Neon cannot use poll(), then it must use select(), which has
> an intrinsic FD_SETSIZE limit.  GNU libc on Linux defines
> FD_SETSIZE as 1024; perhaps it is the same on OS X.

Yes, that'll be it.  According to [1], "the default size FD_SETSIZE
(currently 1024) is somewhat smaller than the current kernel limit to
the number of open files".

It looks like you can probably override this by defining FD_SETSIZE
yourself, in which case CFLAGS='-DFD_SETSIZE=8192' make (or similar)
should fix the problem.
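
The same override in source form, for systems whose headers honor it
(see Jonathan Gilbert's caveat elsewhere in the thread about headers
where only __FD_SETSIZE matters): the define must come before the
first header that defines fd_set.

    #define FD_SETSIZE 8192       /* must precede the system headers */
    #include <sys/select.h>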

Regards,
Malcolm


Re: Total failure in dav layer when >1024 files already open

Posted by Kalle Olavi Niemitalo <ko...@iki.fi>.
Garrett Rooney <ro...@electricjellyfish.net> writes:

> On 6/2/06, Malcolm Rowe <ma...@farside.org.uk> wrote:
>> On Fri, Jun 02, 2006 at 09:37:36AM -0400, Garrett Rooney wrote:
>> > It's not exactly strange that it would fail, you've hit the per-user
>> > file descriptor limit.
>>
>> Exactly what I thought, except that the first thing he does is to bump
>> the ulimit to 32k, which should be plenty.  I wonder if Neon has a
>> problem with fds > 1024?
>
> Hmm.  Interesting.  Maybe something OS X specific?

neon-0.25.5/NEWS:

| Changes in release 0.25.3:
| * ne_lock() and ne_unlock(): fix cases where NE_ERROR would be returned
|   instead of e.g. NE_AUTH on auth failure.
| * Prevent use of poll() on Darwin.
| * Fix gethostbyname-based resolver on LP64 platforms (Matthew Sanderson).

If Neon cannot use poll(), then it must use select(), which has
an intrinsic FD_SETSIZE limit.  GNU libc on Linux defines
FD_SETSIZE as 1024; perhaps it is the same on OS X.

Re: Total failure in dav layer when >1024 files already open

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 6/2/06, Malcolm Rowe <ma...@farside.org.uk> wrote:
> On Fri, Jun 02, 2006 at 09:37:36AM -0400, Garrett Rooney wrote:
> > It's not exactly strange that it would fail, you've hit the per-user
> > file descriptor limit.
>
> Exactly what I thought, except that the first thing he does is to bump
> the ulimit to 32k, which should be plenty.  I wonder if Neon has a
> problem with fds > 1024?

Hmm.  Interesting.  Maybe something OS X specific?

-garrett


Re: Total failure in dav layer when >1024 files already open

Posted by Malcolm Rowe <ma...@farside.org.uk>.
On Fri, Jun 02, 2006 at 09:37:36AM -0400, Garrett Rooney wrote:
> It's not exactly strange that it would fail, you've hit the per-user
> file descriptor limit.

Exactly what I thought, except that the first thing he does is to bump
the ulimit to 32k, which should be plenty.  I wonder if Neon has a
problem with fds > 1024?

Regards,
Malcolm


Re: Total failure in dav layer when >1024 files already open

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 6/1/06, Erik Scrafford <er...@scrafford.org> wrote:
> I've run into a strange problem where the dav layer completely fails
> when more than 1024 files are open. Every request ends up returning
> a PROPFIND error (when linked against Subversion 1.3; 1.2 seems to
> fail in a different manner). svn:// and file:// URLs still seem to
> work fine in this case; only http:// and https:// URLs fail. I've
> created a repro case that I'm attaching to this message. I'm on
> Mac OS X 10.4.6, and I have no idea if this test code will work on
> other systems. Below is the output when running my repro program. Why
> do I have that many files open? ZigVersion monitors its working
> copies using kqueue/kevents, which ends up keeping a file handle
> open on each file/directory. So it's easy to see how I quickly exceed
> 1024 open files in many cases.

It's not exactly strange that it would fail, you've hit the per-user
file descriptor limit.  While it's certainly possible that you could
make svn open fewer files, there's always going to be a limit.  You can
bump the limit up via ulimit -n.  The fact that it fails over
http/https and not svn or file implies that a place to look for files
being held open might be the wcprops, as the other ra layers don't
actually use them.
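
For reference, ulimit -n is the shell front end for setrlimit(); a
minimal sketch of the same bump from inside a program (32768 matches
the 32k mentioned elsewhere in the thread):

    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return 1;
        rl.rlim_cur = 32768;               /* desired soft limit */
        if (rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;     /* clamp to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("open-file limit now %lu\n", (unsigned long)rl.rlim_cur);
        return 0;
    }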

-garrett
