Posted to dev@apr.apache.org by Cliff Woolley <cl...@yahoo.com> on 2000/12/09 17:26:07 UTC

Re: cvs commit: apr-util STATUS

--- gstein@locus.apache.org wrote:
>   +    * turn ap_bucket_split_any() into ap_brigade_split().
>   +      Message-ID: ???
>   +      Status: Greg +1
>   +      Note: awaiting code from Cliff Woolley for this

No sweat... I'll do this this afternoon.  One question, though... should
ap_bucket_copy_any() also change to ap_brigade_copy()?  The same issues apply to it
as apply to _split_any().

--Cliff


Re: cvs commit: apr-util STATUS

Posted by rb...@covalent.net.
> No sweat... I'll do this this afternoon.  One question, though... should
> ap_bucket_copy_any() also change to ap_brigade_copy()?  The same issues apply to it
> as apply to _split_any().

No, ap_bucket_copy_any should not be renamed ap_brigade_copy.  The whole
idea behind a brigade copy function is inherently flawed.  There are even
more error conditions involved in copying a brigade than in splitting
one, and we haven't solved half the problems with splitting a brigade
yet.  With copy_any we are not trying to copy a brigade; we are trying to
copy the data pointed to by a single bucket.  In reality, copy_any as it
is currently implemented is bogus and unusable, because it only allows
copying of full buckets.  Take a look at the one case that we currently
care about for copying buckets: byte-range requests.

In the byte-range filter, we find the block of data that we want to copy
and just copy that bucket.  Of course, for the bucket types where the
copy function isn't implemented, the length is -1, so we can't isolate
the data we want into a range of buckets, because we don't know how much
data each bucket represents.  To be able to use copy_any at all, we would
first have to read the data to find the limits of what we want to copy,
and then call copy_any.  But at that point we already know we have a
bucket that can be copied, so we are better off just using the bucket's
copy function directly.
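
Concretely, the byte-range case already has to do something like the
following for each bucket it sees.  This is an untested sketch, I'm
writing the call and type names from memory, and maybe_copy_bucket and
its arguments are just made-up placeholders ("seen" is the count of
bytes delivered before this bucket):

    static void maybe_copy_bucket(ap_bucket *e, apr_off_t seen,
                                  apr_off_t range_start, apr_off_t range_end)
    {
        const char *str;
        apr_size_t len;
        ap_bucket *copy;

        /* Until we read the bucket, its length may be -1 ("unknown"). */
        ap_bucket_read(e, &str, &len, AP_BLOCK_READ);

        /* Only after the read do we know whether this bucket overlaps
         * the requested range at all. */
        if (seen + len > range_start && seen < range_end) {
            /* We now know the bucket can be read, so its own copy
             * function (the thing copy_any would dispatch to anyway)
             * is all we need. */
            ap_bucket_copy(e, &copy);
            /* ... trim copy to the range edges and append it to the
             * output brigade ... */
        }
    }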

The only way to make this function useful for the byte-range filter is to
add an offset parameter so that we know how much data to copy.  But then
it becomes a brigade function rather than a bucket function, and we are
faced with all of the usual problems.  If I am reading from a pipe and
the data isn't there yet, when does the copy function return?  If only
half the data is on the pipe when we call copy_any, should we copy what
we have and return, or wait for the rest of the data or for the pipe to
close?  If we wait, we remove any hope of streaming responses.  If we
don't wait, we have to keep state someplace.  And what about error
conditions?
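
And that is exactly the choice every caller already has to make
explicitly on the read itself; there is no answer a copy_any could pick
on everyone's behalf (names approximate):

    const char *str;
    apr_size_t len;

    /* Wait for the pipe: we get all of the data, but streaming is gone. */
    ap_bucket_read(e, &str, &len, AP_BLOCK_READ);

    /* Or take what is there right now: we return early, and copy_any
     * would somehow have to remember where it stopped for next time. */
    ap_bucket_read(e, &str, &len, AP_NONBLOCK_READ);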

This function is bogus IMHO.  It should be removed, and the functions
that want to copy data should do the read themselves.  The exact same can
be said of the split_any function, which can and should be implemented by
just reading until we hit the location to split and then calling
ap_brigade_split.  IMHO, that belongs in the function that wants the
split, not in some bogus utility function.  Notice that mod_include
already needs this functionality, but as somebody who has spent some time
in mod_include, I can also tell you that using ap_brigade_split_offset
(ap_bucket_split_any) wouldn't work there, because mod_include needs to
do other operations between reading the data and actually splitting it,
which I believe is what 99% of the modules will need to do.
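
To spell it out, everything split_any would do for us is just this loop,
and doing it in the caller means the caller can look at the data in the
middle of it.  Untested sketch, names and argument lists approximate;
bb is the brigade being split and split_point is a placeholder offset:

    ap_bucket_brigade *tail = NULL;  /* everything after the split point */
    ap_bucket *e;
    apr_off_t seen = 0;              /* bytes before the current bucket  */
    const char *str;
    apr_size_t len;

    for (e = bb->head; e != NULL; e = e->next) {
        /* Reading is what gives an "unknown length" bucket a length. */
        ap_bucket_read(e, &str, &len, AP_BLOCK_READ);

        /* ... mod_include would inspect str right here, between the
         * read and the split, which is exactly what split_any cannot
         * let it do ... */

        if (seen + len > split_point) {
            if (split_point > seen) {
                /* The split point is inside this bucket: cut it in two
                 * and let the second half start the new brigade. */
                ap_bucket_split(e, split_point - seen);
                e = e->next;
            }
            tail = ap_brigade_split(bb, e);
            break;
        }
        seen += len;
    }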

Please provide an example of where these functions are useful, because I
don't see it.  I would like to just remove them unless we are actually
going to use them.  If we aren't going to use them, then we are just
making the API more complex for no reason at all.

Ryan
_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------