Posted to users@subversion.apache.org by Justin Johnson <ju...@gmail.com> on 2011/11/18 22:27:40 UTC

Could not read chunk size: connection was closed by server

Hi,

We are running Subversion 1.6.16 and Apache 2.2.17 on Solaris 10, with
1.6.12 clients connecting from Windows and various flavors of UNIX.
For as long as I can remember, users on Windows and UNIX clients have
been getting the following error every once in a while, typically
during a checkout.

  Could not read chunk size: connection was closed by server

In the server logs the following errors appear around the same time.

  Provider encountered an error while streaming a REPORT response.  [500, #0]
  A failure occurred while driving the update report editor  [500, #130]
  Error writing base64 data: Software caused connection abort  [500, #130]

When the error happens you can generally just do an update in the
working copy and it will pick up where it left off.

The frequency of this error has been increasing lately, so I'm trying
again to determine the cause.  Numerous people have posted about it
online, but none of the solutions have seemed applicable to my
situation.  For example, there are no signs of too many open file
descriptors, and I have verified that the repositories are not corrupt.
The time it takes for a checkout to fail with this error shows no
correlation with our Apache timeout settings as far as I can tell;
sometimes it fails after about 20 seconds, other times after 180 or so.
For what it's worth, the only two timeout settings we have are:

TimeOut 1800
KeepAliveTimeout 10
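For context, here is how those directives sit in httpd.conf. Note the
KeepAlive lines other than KeepAliveTimeout are assumed stock defaults,
not taken from our actual config:

```apache
# Global connection settings (httpd.conf)
TimeOut 1800              # max seconds Apache waits on a blocked read/write
KeepAlive On              # assumed; without it, KeepAliveTimeout has no effect
MaxKeepAliveRequests 100  # assumed stock default
KeepAliveTimeout 10       # seconds an idle persistent connection is held open
```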

This page seemed potentially related, but I'm not sure.  Our Solaris
box is actually a Sun zone, so there's some virtualization involved.

http://pve.proxmox.com/pipermail/pve-user/2009-December/001087.html

Does anyone have any suggestions?

Thanks.
Justin

Re: Could not read chunk size: connection was closed by server

Posted by Justin Johnson <ju...@gmail.com>.
On Wed, Nov 23, 2011 at 11:46 AM, Justin Johnson
<ju...@gmail.com> wrote:
>>
>>> during a checkout.
>>>
>>>   Could not read chunk size: connection was closed by server
>>>
>>> In the server logs the following errors appear around the same time.
>>>
>>>   Provider encountered an error while streaming a REPORT response.  [500, #0]
>>>   A failure occurred while driving the update report editor  [500, #130]
>>>   Error writing base64 data: Software caused connection abort  [500, #130]
>>
>> The server failed to write to the client and the client failed to read
>> from the server.  Looks like a network problem caused the connection to
>> be shut down.  To diagnose it you probably need to capture a network
>> trace of some sort.
>>
>
> What we've seen is that normal behavior is to have numerous TCP Zero
> Window flags occur during a checkout.
>
> http://wiki.wireshark.org/TCP%20ZeroWindow
>
> Occasionally we get the errors above and the Wireshark capture
> indicates the Subversion server eventually just terminates the
> connection.  I can gather more details if it would be helpful, but I
> won't be able to include all of the capture details on this mailing
> list.

For the record, we resolved the chunk size error by reconfiguring
Apache to use the prefork MPM instead of worker.
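In case it helps anyone else hitting this: you can confirm which MPM a
2.2 httpd was built with before deciding to switch. This is just a
sketch; the configure step applies to source builds, while package-based
systems swap packages instead (e.g. apache2-mpm-prefork on Debian-style
distros):

```shell
# Report the MPM compiled into the binary (Apache 2.2 fixes it at build time)
httpd -V | grep -i "server mpm"

# For a source build, select prefork when configuring, then rebuild:
./configure --with-mpm=prefork
make && make install
```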

The base64 errors appear to be unrelated and only show up in log files.

Re: Could not read chunk size: connection was closed by server

Posted by Justin Johnson <ju...@gmail.com>.
>
>> during a checkout.
>>
>>   Could not read chunk size: connection was closed by server
>>
>> In the server logs the following errors appear around the same time.
>>
>>   Provider encountered an error while streaming a REPORT response.  [500, #0]
>>   A failure occurred while driving the update report editor  [500, #130]
>>   Error writing base64 data: Software caused connection abort  [500, #130]
>
> The server failed to write to the client and the client failed to read
> from the server.  Looks like a network problem caused the connection to
> be shut down.  To diagnose it you probably need to capture a network
> trace of some sort.
>

What we've seen is that normal behavior is to have numerous TCP Zero
Window flags occur during a checkout.

http://wiki.wireshark.org/TCP%20ZeroWindow
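If it helps anyone reproduce the analysis, the zero-window frames can be
pulled out of a saved capture from the command line as well as in the
Wireshark GUI. The capture filename here is a placeholder:

```shell
# List frames where one side advertised a zero receive window.
# On tshark 1.x the read-filter flag is -R; newer releases use -Y.
tshark -r checkout.pcap -R "tcp.analysis.zero_window"

# Also worth checking which side tore the connection down:
tshark -r checkout.pcap -R "tcp.flags.reset == 1"
```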

Occasionally we get the errors above and the Wireshark capture
indicates the Subversion server eventually just terminates the
connection.  I can gather more details if it would be helpful, but I
won't be able to include all of the capture details on this mailing
list.

Re: Could not read chunk size: connection was closed by server

Posted by Philip Martin <ph...@wandisco.com>.
Justin Johnson <ju...@gmail.com> writes:

> during a checkout.
>
>   Could not read chunk size: connection was closed by server
>
> In the server logs the following errors appear around the same time.
>
>   Provider encountered an error while streaming a REPORT response.  [500, #0]
>   A failure occurred while driving the update report editor  [500, #130]
>   Error writing base64 data: Software caused connection abort  [500, #130]

The server failed to write to the client and the client failed to read
from the server.  Looks like a network problem caused the connection to
be shut down.  To diagnose it you probably need to capture a network
trace of some sort.
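A sketch of capturing such a trace on the server side; the interface
name, client hostname, and port are placeholders for your setup:

```shell
# Capture full packets between the server and one affected client
tcpdump -i e1000g0 -s 0 -w svn-checkout.pcap host client.example.com and port 80

# Solaris 10 ships snoop rather than tcpdump; the rough equivalent:
snoop -d e1000g0 -o svn-checkout.snoop host client.example.com port 80
```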

-- 
Philip

Re: Could not read chunk size: connection was closed by server

Posted by Justin Johnson <ju...@gmail.com>.
On Fri, Nov 18, 2011 at 3:27 PM, Justin Johnson
<ju...@gmail.com> wrote:
> Hi,
>
> We are running Subversion 1.6.16 and Apache 2.2.17 on Solaris 10, with
> 1.6.12 clients connecting from Windows and various flavors of UNIX.
> For as long as I can remember, users on Windows and UNIX clients have
> been getting the following error every once in a while, typically
> during a checkout.
>
>  Could not read chunk size: connection was closed by server
>
> In the server logs the following errors appear around the same time.
>
>  Provider encountered an error while streaming a REPORT response.  [500, #0]
>  A failure occurred while driving the update report editor  [500, #130]
>  Error writing base64 data: Software caused connection abort  [500, #130]
>
> When the error happens you can generally just do an update in the
> working copy and it will pick up where it left off.
>
> The frequency of this error has been increasing lately, so I'm trying
> again to determine the cause.  Numerous people have posted about it
> online, but none of the solutions have seemed applicable to my
> situation.  For example, there are no signs of too many open file
> descriptors, and I have verified that the repositories are not corrupt.
> The time it takes for a checkout to fail with this error shows no
> correlation with our Apache timeout settings as far as I can tell;
> sometimes it fails after about 20 seconds, other times after 180 or so.
> For what it's worth, the only two timeout settings we have are:
>
> TimeOut 1800
> KeepAliveTimeout 10
>
> This page seemed potentially related, but I'm not sure.  Our Solaris
> box is actually a Sun zone, so there's some virtualization involved.
>
> http://pve.proxmox.com/pipermail/pve-user/2009-December/001087.html
>
> Does anyone have any suggestions?
>
> Thanks.
> Justin
>

While researching this problem further I came across
http://stackoverflow.com/questions/772894/updating-from-svn-repository-returns-could-not-read-chunk-size-error,
where one user said the following.

   I eventually solved the problem by using the apache2-mpm-prefork package
   rather than the apache2-mpm-worker package

I also found http://serverfault.com/questions/194233/use-prefork-or-worker-in-apache-configuration,
where one user said the following.

   Are you doing things with Apache modules (SVN, PHP are two examples)?
   In that case prefork might be safer.

Is it true that the worker MPM is not safe for use with Subversion?
Could that possibly be related to this problem?  I see no mention of
this in the svn book or in the README or INSTALL files that come with
Subversion.