Posted to dev@subversion.apache.org by Philip Martin <ph...@wandisco.com> on 2014/10/01 12:48:38 UTC

serf errors on responses bigger than 4GB

Andreas Stieger <an...@gmx.de> writes:

> I will once again point to the serf issues below and httpd/network
> config.
> https://code.google.com/p/serf/issues/detail?id=152
> https://code.google.com/p/serf/source/detail?r=2419

Andreas identified a bug in serf that causes decompression to fail when
the compressed size is bigger than 4GB. This bug has been fixed on trunk
but not in any release.  This bug does not affect commit but does affect
checkout/update.
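
The failure mode is what you get when a stream byte count lives in a
32-bit variable and wraps at 2^32.  A minimal illustration of the wrap
(plain C, illustrative only; this is not serf's actual code, whose fix
is r2419):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint32_t narrow = 0;               /* wraps once past 4GB */
      uint64_t wide = 0;                 /* wide enough for any stream */
      const uint64_t chunk = 65536;
      const uint64_t five_gb = 5ULL << 30;
      uint64_t i;

      for (i = 0; i < five_gb / chunk; i++) {
          narrow += (uint32_t)chunk;     /* count silently truncated */
          wide += chunk;
      }

      /* Prints narrow=1073741824 wide=5368709120: the 32-bit count
         claims 1GB after a 5GB stream, so any consistency check
         against it fails. */
      printf("narrow=%u wide=%llu\n", narrow, (unsigned long long)wide);
      return 0;
  }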

In my testing a commit of a 5GB /dev/urandom file over HTTP using serf
1.3.x works with compression both disabled and enabled.  A checkout over
HTTP using serf 1.3.x fails:

  svn: E120104: ra_serf: An error occurred during decompression

I also tried the checkout with compression disabled by the client and
saw the error:

  svn: E120106: ra_serf: The server sent a truncated HTTP response body.

but this turned out to be the known mod_deflate memory leak causing the
server to abort.  With compression disabled on the server the
uncompressed checkout works.
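
For reference, "compression disabled by the client" above means the
standard runtime option in ~/.subversion/servers:

  [global]
  http-compression = no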

Doing a search, I see users reporting both of the above serf errors.
The way to fix the decompression error is to disable compression.
This can be done on the client if the server is a recent httpd 2.4,
as it is not affected by the mod_deflate bug.  If the server is older
then a client disabling compression will probably cause the truncated
error, and the fix is to disable mod_deflate on the server or to
revert to a 1.7/neon client.
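
For the server side, one way to keep mod_deflate away from the
repository location is the no-gzip environment variable that
mod_deflate honours.  A sketch, assuming a typical mod_dav_svn setup
(the location and path below are made up):

  <Location /repos>
    DAV svn
    SVNParentPath /var/svn
    # mod_deflate skips responses when the no-gzip variable is set.
    SetEnv no-gzip 1
  </Location>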

I merged r2419 to my 1.3.x build and it fixes the compressed checkout.
Are there any plans for a serf release that includes this fix?
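
(For anyone else carrying the patch, the backport is a single
cherry-pick in a serf 1.3.x working copy, along the lines of

  svn merge -c 2419 ^/trunk .

assuming the usual trunk/branches layout in the serf repository.)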

-- 
Philip Martin | Subversion Committer
WANdisco // *Non-Stop Data*

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Mark Phippard <ma...@gmail.com>.
On Wed, Oct 1, 2014 at 11:36 AM, Philip Martin <ph...@wandisco.com>
wrote:

> Mark Phippard <ma...@gmail.com> writes:
>
> > On Wed, Oct 1, 2014 at 10:03 AM, Philip Martin <
> philip.martin@wandisco.com>
> > wrote:
> >>
> >> I can trigger the decompression error on a 5GB REPORT by setting
> >> http-bulk-updates=yes on the client side.
> >>
> >>
> > This does not really answer the question.
> >
> > Was your REPORT 5GB because it had a single file > 4GB or because it had
> > tens of thousands of small files?  Mike's question is about the latter.
> >
> > Does Serf only fail when decompressing a single large file, or also if
> the
> > entire REPORT response happens to be > 4 GB?  The latter probably would
> be
> > a much more common problem to run into if it can happen.
>
> I don't think it makes a difference; serf will generate the error in
> both cases.
>
> Serf is decompressing the HTTP body of the REPORT.  At the Subversion
> level the body is an XML <S:update-report> but as far as serf is
> concerned it is just a block of data that has to be decompressed.  Serf
> doesn't look to see whether the uncompressed data really is XML so it
> certainly doesn't care whether there is one <S:txdelta> or many.
>


Understood.


-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Philip Martin <ph...@wandisco.com>.
Mark Phippard <ma...@gmail.com> writes:

> On Wed, Oct 1, 2014 at 10:03 AM, Philip Martin <ph...@wandisco.com>
> wrote:
>>
>> I can trigger the decompression error on a 5GB REPORT by setting
>> http-bulk-updates=yes on the client side.
>>
>>
> This does not really answer the question.
>
> Was your REPORT 5GB because it had a single file > 4GB or because it had
> tens of thousands of small files?  Mike's question is about the latter.
>
> Does Serf only fail when decompressing a single large file, or also if the
> entire REPORT response happens to be > 4 GB?  The latter probably would be
> a much more common problem to run into if it can happen.

I don't think it makes a difference; serf will generate the error in
both cases.

Serf is decompressing the HTTP body of the REPORT.  At the Subversion
level the body is an XML <S:update-report> but as far as serf is
concerned it is just a block of data that has to be decompressed.  Serf
doesn't look to see whether the uncompressed data really is XML so it
certainly doesn't care whether there is one <S:txdelta> or many.
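
To make the layering concrete: the decompression happens in serf's
deflate bucket, which sits on top of the raw response body, and the
reader above it only ever sees plain bytes.  A simplified fragment
using serf's public bucket API (illustrative, not ra_serf's actual
code):

  /* Fragment: assumes <serf.h>, an open connection, and that body is
     the raw HTTP response body bucket with allocator alloc. */
  serf_bucket_t *decompressed =
      serf_bucket_deflate_create(body, alloc, SERF_DEFLATE_GZIP);

  /* The reader sees only bytes; parsing the <S:update-report> XML
     happens in a layer above. */
  const char *data;
  apr_size_t len;
  apr_status_t status =
      serf_bucket_read(decompressed, SERF_READ_ALL_AVAIL, &data, &len);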

-- 
Philip Martin | Subversion Committer
WANdisco // *Non-Stop Data*

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Mark Phippard <ma...@gmail.com>.
On Wed, Oct 1, 2014 at 10:03 AM, Philip Martin <ph...@wandisco.com>
wrote:

> "C. Michael Pilato" <cm...@gmail.com> writes:
>
> > The log message for r2419 mentions "files" larger than 4GB, and leads me
> > to believe that this problem only affects GETs.  But here, Philip avoids
> > the term "files" and talks about the "compressed size".  Does the bug
> > fixed in r2419 manifest on any response > 4GB, such as a bulk-mode
> > REPORT carrying a whole Subversion tree that's larger than 4GB?
>
> I can trigger the decompression error on a 5GB REPORT by setting
> http-bulk-updates=yes on the client side.
>
>
This does not really answer the question.

Was your REPORT 5GB because it had a single file > 4GB or because it had
tens of thousands of small files?  Mike's question is about the latter.

Does Serf only fail when decompressing a single large file, or also if the
entire REPORT response happens to be > 4 GB?  The latter probably would be
a much more common problem to run into if it can happen.

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by "C. Michael Pilato" <cm...@gmail.com>.
On 10/01/2014 10:16 AM, Philip Martin wrote:
> Philip Martin <ph...@wandisco.com> writes:
>
>> "C. Michael Pilato" <cm...@gmail.com> writes:
>>
>>> The log message for r2419 mentions "files" larger than 4GB, and leads me
>>> to believe that this problem only affects GETs.  But here, Philip avoids
>>> the term "files" and talks about the "compressed size".  Does the bug
>>> fixed in r2419 manifest on any response > 4GB, such as a bulk-mode
>>> REPORT carrying a whole Subversion tree that's larger than 4GB?
>> I can trigger the decompression error on a 5GB REPORT by setting
>> http-bulk-updates=yes on the client side.
> When the error is produced for a REPORT, the client has successfully
> produced a working copy containing the large file in the correct
> place.  I suppose this means that the error triggers after
> reading/parsing the XML that makes up the report.
>
> When the error occurs on a GET, the large file has also been
> successfully downloaded, but it is left as a temporary file in
> .svn/tmp and not installed in the pristine store or the working copy.
>

Thanks, Philip!

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Philip Martin <ph...@codematters.co.uk>.
Philip Martin <ph...@wandisco.com> writes:

> "C. Michael Pilato" <cm...@gmail.com> writes:
>
>> The log message for r2419 mentions "files" larger than 4GB, and leads me
>> to believe that this problem only affects GETs.  But here, Philip avoids
>> the term "files" and talks about the "compressed size".  Does the bug
>> fixed in r2419 manifest on any response > 4GB, such as a bulk-mode
>> REPORT carrying a whole Subversion tree that's larger than 4GB?
>
> I can trigger the decompression error on a 5GB REPORT by setting
> http-bulk-updates=yes on the client side.

When the error is produced for a REPORT, the client has successfully
produced a working copy containing the large file in the correct place.
I suppose this means that the error triggers after reading/parsing the
XML that makes up the report.

When the error occurs on a GET, the large file has also been
successfully downloaded, but it is left as a temporary file in .svn/tmp
and not installed in the pristine store or the working copy.

-- 
Philip

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Philip Martin <ph...@wandisco.com>.
"C. Michael Pilato" <cm...@gmail.com> writes:

> The log message for r2419 mentions "files" larger than 4GB, and leads me
> to believe that this problem only affects GETs.  But here, Philip avoids
> the term "files" and talks about the "compressed size".  Does the bug
> fixed in r2419 manifest on any response > 4GB, such as a bulk-mode
> REPORT carrying a whole Subversion tree that's larger than 4GB?

I can trigger the decompression error on a 5GB REPORT by setting
http-bulk-updates=yes on the client side.
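
(That is the standard client knob in ~/.subversion/servers:

  [global]
  http-bulk-updates = yes

which asks the server for the single, everything-inline REPORT response
instead of per-file GETs.)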

-- 
Philip Martin | Subversion Committer
WANdisco // *Non-Stop Data*

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by "C. Michael Pilato" <cm...@gmail.com>.
On 10/01/2014 06:48 AM, Philip Martin wrote:
> Andreas Stieger <an...@gmx.de> writes:
>
>> I will once again point to the serf issues below and httpd/network
>> config.
>> https://code.google.com/p/serf/issues/detail?id=152
>> https://code.google.com/p/serf/source/detail?r=2419
> Andreas identified a bug in serf that causes decompression to fail when
> the compressed size is bigger than 4GB. This bug has been fixed on trunk
> but not in any release.  This bug does not affect commit but does affect
> checkout/update.

The log message for r2419 mentions "files" larger than 4GB, and leads me
to believe that this problem only affects GETs.  But here, Philip avoids
the term "files" and talks about the "compressed size".  Does the bug
fixed in r2419 manifest on any response > 4GB, such as a bulk-mode
REPORT carrying a whole Subversion tree that's larger than 4GB?

Re: [serf-dev] serf errors on responses bigger than 4GB

Posted by Lieven Govaerts <lg...@mobsol.be>.
Hi,

On Wed, Oct 1, 2014 at 12:48 PM, Philip Martin
<ph...@wandisco.com> wrote:
> Andreas Stieger <an...@gmx.de> writes:
>
>> I will once again point to the serf issues below and httpd/network
>> config.
>> https://code.google.com/p/serf/issues/detail?id=152
>> https://code.google.com/p/serf/source/detail?r=2419
>
> Andreas identified a bug in serf that causes decompression to fail when
> the compressed size is bigger than 4GB. This bug has been fixed on trunk
> but not in any release.  This bug does not affect commit but does affect
> checkout/update.
>
> In my testing a commit of a 5GB /dev/urandom file over HTTP using serf
> 1.3.x works with compression both disabled and enabled.  A checkout over
> HTTP using serf 1.3.x fails:
>
>   svn: E120104: ra_serf: An error occurred during decompression
>
> I also tried the checkout with compression disabled by the client and
> saw the error:
>
>   svn: E120106: ra_serf: The server sent a truncated HTTP response body.
>
> but this turned out to be the known mod_deflate memory leak causing the
> server to abort.  With compression disabled on the server the
> uncompressed checkout works.
>
> Doing a search, I see users reporting both of the above serf errors.
> The way to fix the decompression error is to disable compression.
> This can be done on the client if the server is a recent httpd 2.4,
> as it is not affected by the mod_deflate bug.  If the server is older
> then a client disabling compression will probably cause the truncated
> error, and the fix is to disable mod_deflate on the server or to
> revert to a 1.7/neon client.
>
> I merged r2419 to my 1.3.x build and it fixes the compressed checkout.
> Are there any plans for a serf release that includes this fix?

I've learned from earlier releases that (most) packagers won't upgrade
serf unless there's an svn release.

As a result, I plan a serf (patch) release right before an svn (patch)
release, but not earlier.

regards,

Lieven

> --
> Philip Martin | Subversion Committer
> WANdisco // *Non-Stop Data*
>