Posted to dev@httpd.apache.org by Dirk-Willem van Gulik <di...@webweaving.org> on 2002/08/27 12:22:40 UTC

Re: Segmentation fault when downloading large files

This looks like a filter issue I've seen before but never could quite
reproduce. You may want to take this to dev@httpd.apache.org, as this is
most likely related to the filters in Apache and not proxy specific.

Dw.

On Tue, 27 Aug 2002, Peter Van Biesen wrote:

> Hello,
>
> I'm using an apache 2.0.39 on a HPUX 11.0 system as a webserver/proxy.
> When I try to download large files through the proxy, I get the
> following error :
>
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(109): proxy: HTTP:
> canonicalising URL
> //download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> [Tue Aug 27 11:44:08 2002] [debug] mod_proxy.c(442): Trying to run
> scheme_handler against proxy
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(1051): proxy: HTTP:
> serving URL
> http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(221): proxy: HTTP
> connecting
> http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> to download.microsoft.com:80
> [Tue Aug 27 11:44:08 2002] [debug] proxy_util.c(1164): proxy: HTTP: fam
> 2 socket created to connect to vlafo3.vlafo.be
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(370): proxy: socket is
> connected
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(404): proxy: connection
> complete to 193.190.145.66:80 (vlafo3.vlafo.be)
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Date: Tue, 27 Aug 2002 09:44:09 GMT
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Server: Microsoft-IIS/5.0
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Content-Type: application/octet-stream
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Accept-Ranges: bytes
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Last-Modified: Tue, 23 Jul 2002 16:23:09 GMT
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = ETag: "f2138b3b6532c21:8f9"
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Via: 1.1 download.microsoft.com
> [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> = Transfer-Encoding: chunked
> [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(893): proxy: start body
> send
> [Tue Aug 27 11:57:45 2002] [notice] child pid 7099 exit signal
> Segmentation fault (11)
>
> I'm sorry for the example ... ;-))
>
> Anyway, I've tried on several machines that are configured differently (
> swap, memory ), but the download always stops around 70 MB. Does anybody
> have an idea what's wrong ? Is there a core file I could gdb ( I didn't
> find any ) ?
>
> Thanks !
>
> Peter.
>


Re: Segmentation fault when downloading large files

Posted by Cliff Woolley <jw...@virginia.edu>.
On Tue, 27 Aug 2002, Graham Leggett wrote:

> The filter code behaves differently depending on where the data is
> coming from, eg an area in memory, or a file on a disk. As a result it
> is quite possible that a large file from disk works and a large file
> from proxy does not.

APR's concept of a "large file" (which is the concept used by file
buckets, btw) is >2GB.

--Cliff


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> However, downloading a large file from the server itself ( not
> using the proxy ) works fine ... so it's either a problem in the proxy
> or a timeout somewhere ( locally it is a lot faster ).

The proxy is very "dumb" code; it relies almost exclusively on the 
filter code to do everything. As a result it's very unlikely this 
problem is in the proxy.

The filter code behaves differently depending on where the data is 
coming from, e.g. an area in memory or a file on disk. As a result it 
is quite possible that a large file from disk works and a large file 
from the proxy does not.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> Program received signal SIGSEGV, Segmentation fault.
> 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> (gdb) where
> #0  0xc1bfb06c in apr_bucket_alloc () from
> /opt/httpd/lib/libaprutil.sl.0

> The resources used by the process increase linearly until the maximum
> per process is reached, after which the crash occurs. Did we do an alloc
> without a free ?

It looks like each bucket is being created but never freed, which 
eventually causes a segfault when buckets can no longer be created.

This might be the bucket code leaking, or it could be the proxy code not 
freeing buckets after the buckets are sent to the client.

Anyone know how you free buckets?
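
For reference, a single bucket is freed with apr_bucket_delete(), and a
whole brigade with apr_brigade_cleanup() or apr_brigade_destroy(). A
minimal sketch, assuming the apr-util bucket API:

#include "apr_buckets.h"

/* Drain a brigade bucket by bucket (sketch). */
static void drain_brigade(apr_bucket_brigade *bb)
{
    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *e = APR_BRIGADE_FIRST(bb);
        /* ... consume e's data here ... */
        apr_bucket_delete(e);   /* unlink from bb and destroy */
    }
    /* equivalently, apr_brigade_cleanup(bb) destroys everything
     * still left in bb in one call */
}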

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Brian Pane wrote:

> But the memory involved here ought to be in buckets (which can
> be freed long before the entire request is done).
> 
> In 2.0.39 and 2.0.40, the content-length filter's habit of
> buffering the entire response would keep the httpd from freeing
> buckets incrementally during the request.  That particular
> problem is gone in the latest 2.0.41-dev CVS head.  If the
> segfault problem still exists in 2.0.41-dev, we need to take
> a look at whether there's any buffering in the proxy code that
> can be similarly fixed.

The proxy code doesn't buffer anything; it basically goes "get a bucket 
from backend stack, put the bucket to frontend stack, cleanup bucket, 
repeat".

There are some filters (like include, I think) that "put away" buckets as 
the response is handled; it is possible one of these filters is also 
causing a "leak".

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


The best hook ?

Posted by Estrade Matthieu <es...@ifrance.com>.
Hi,

I would like to know the best hook to register my module init function.
In this function, my aim is:

1- Open files
2- Read the data
3- Put this data in structures (allocating memory)
4- Make all children able to read/modify all this data.


I did my hook with ap_hook_post_config.
Do you think it's the best way ?
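
For reference, a minimal sketch of such a registration (all names here
are placeholders):

#include "httpd.h"
#include "http_config.h"

static int example_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                               apr_pool_t *ptemp, server_rec *s)
{
    /* Runs in the parent process after the config is read; data
     * allocated from pconf here is inherited by every child.  Note
     * that per-child setup belongs in a child_init hook instead. */
    return OK;
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_post_config(example_post_config, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_module = {
    STANDARD20_MODULE_STUFF,
    NULL,                       /* per-directory config creator */
    NULL,                       /* per-directory config merger */
    NULL,                       /* per-server config creator */
    NULL,                       /* per-server config merger */
    NULL,                       /* command table */
    example_register_hooks
};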

When I set MaxRequestsPerChild to 10000, my children restart.

I allocate memory for the structures in my init function with calloc,
because I am unable to use the apr_pool_t in subfunctions.

When a child restarts, does it call the post_config function ?
I ask because I see debug messages like when Apache initializes the
module.

Best regards

Estrade Matthieu






Re: Segmentation fault when downloading large files

Posted by Brian Pane <br...@cnet.com>.
Graham Leggett wrote:

> Peter Van Biesen wrote:
>
>> I now have a reproducable error, a httpd which I can recompile ( it's
>> till a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
>> I've seen in the code of ap_proxy_http_request that the variable e is
>> used many times but I can't seem to find a free somewhere ...
>
>
> This may be part of the problem. In apr memory is allocated from a 
> pool of memory, and is then freed in one go. In this case, there is 
> one pool per request, which is only freed when the request is 
> complete. But during the request, 100MB of data is transfered, 
> resulting buckets which are allocated, but not freed (yet). The 
> machine runs out of memory and that process segfaults. 


But the memory involved here ought to be in buckets (which can
be freed long before the entire request is done).

In 2.0.39 and 2.0.40, the content-length filter's habit of
buffering the entire response would keep the httpd from freeing
buckets incrementally during the request.  That particular
problem is gone in the latest 2.0.41-dev CVS head.  If the
segfault problem still exists in 2.0.41-dev, we need to take
a look at whether there's any buffering in the proxy code that
can be similarly fixed.
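
An illustrative sketch of that buffering (not the actual httpd filter;
the names are placeholders): a filter that wants to emit a
Content-Length has to hold every bucket until it sees EOS.

/* Illustrative only: why computing Content-Length forces buffering. */
static apr_status_t buffering_length_filter(ap_filter_t *f,
                                            apr_bucket_brigade *bb)
{
    apr_bucket_brigade *saved = f->ctx;
    apr_off_t len = 0;

    if (saved == NULL) {
        saved = f->ctx = apr_brigade_create(f->r->pool,
                                            f->c->bucket_alloc);
    }
    APR_BRIGADE_CONCAT(saved, bb);           /* set aside everything ... */
    if (APR_BRIGADE_EMPTY(saved)
        || !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(saved))) {
        return APR_SUCCESS;                  /* ... until the response ends */
    }
    apr_brigade_length(saved, 1, &len);      /* now the size is knowable */
    ap_set_content_length(f->r, len);
    return ap_pass_brigade(f->next, saved);
}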

Brian




Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> I now have a reproducable error, a httpd which I can recompile ( it's
> till a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> I've seen in the code of ap_proxy_http_request that the variable e is
> used many times but I can't seem to find a free somewhere ...

This may be part of the problem. In apr memory is allocated from a pool 
of memory, and is then freed in one go. In this case, there is one pool 
per request, which is only freed when the request is complete. But 
during the request, 100MB of data is transfered, resulting buckets which 
are allocated, but not freed (yet). The machine runs out of memory and 
that process segfaults.
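
In sketch form (apr_pool API; apr_initialize() and error checking
omitted):

/* One pool per request: nothing allocated from it is released
 * until the pool itself is destroyed. */
apr_pool_t *rpool;
apr_pool_create(&rpool, NULL);        /* created when the request starts */

char *buf = apr_palloc(rpool, 8192);  /* there is no matching free();   */
                                      /* a 100MB transfer just keeps    */
                                      /* allocating                     */

apr_pool_destroy(rpool);              /* only here is everything freed  */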

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Justin Erenkrantz <je...@apache.org>.
On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> I now have a reproducable error, a httpd which I can recompile ( it's
> till a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,

Can you upgrade to at least .40 or better yet the latest CVS
version?  -- justin

Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
I now have a reproducible error, an httpd which I can recompile ( it's
still a 2.0.39 ), so if anyone wants me to test something, shoot ! Btw,
I've seen in the code of ap_proxy_http_request that the variable e is
used many times but I can't seem to find a free anywhere ...

I'm sorry I'm not trying to find the error myself, but I haven't got the
time to familiarize myself with the apr code ...

Peter.

"William A. Rowe, Jr." wrote:
> 
> At 07:06 AM 8/28/2002, Graham Leggett wrote:
> >Peter Van Biesen wrote:
> >
> >>>>Program received signal SIGSEGV, Segmentation fault.
> >>>>0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> >>>>(gdb) where
> >>>>#0  0xc1bfb06c in apr_bucket_alloc () from
> >>>>/opt/httpd/lib/libaprutil.sl.0
> >>>>#1  0xc1bf8d18 in socket_bucket_read () from
> >>>>/opt/httpd/lib/libaprutil.sl.0
> >>>>#2  0x00129ffc in core_input_filter ()
> >>>>#3  0x0011a630 in ap_get_brigade ()
> >>>>#4  0x000bb26c in ap_http_filter ()
> >>>>#5  0x0011a630 in ap_get_brigade ()
> >>>>#6  0x0012999c in net_time_filter ()
> >>>>#7  0x0011a630 in ap_get_brigade ()
> >
> >The ap_get_brigade() is followed by a ap_pass_brigade(), then a
> >apr_brigade_cleanup(bb).
> >
> >What could be happening is that either:
> >
> >a) brigade cleanup is hosed or leaks
> >b) one of the filters is leaking along the way
> 
> Or it simply tries to slurp all 100's of MBs of this huge download.
> 
> As I guessed, we are out of memory.
> 
> Someone asked why I asserted that input filtering still sucks.  Heh.
> 
> Bill

Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> Recompiled and tested, the problem remains ... :
> 
> [Wed Sep 04 13:22:27 2002] [info] Server: Apache/2.0.41-dev, Interface:
> mod_ssl/2.0.41-dev, Library: OpenSSL/0.9.6c
> [Wed Sep 04 13:22:27 2002] [notice] Apache/2.0.41-dev (Unix)
> mod_ssl/2.0.41-dev OpenSSL/0.9.6c DAV/2 configured -- resuming normal
> operations
> [Wed Sep 04 13:22:27 2002] [info] Server built: Sep  3 2002 16:31:17
> [Wed Sep 04 13:38:28 2002] [notice] child pid 29748 exit signal
> Segmentation fault (11)
> 
> Crash after 71 Mb . When I have the time, I'll investigate further !

Can you try and configure apache with no modules installed whatsoever, 
to see if just mod_proxy plus core has this problem...?

I have a feeling one of the filters in the stack is leaking buckets 
(leaking in the sense that the buckets are only freed at the end of the 
request, which is too late). If we can remove all the filters we can 
(specifically mod_include), we might be able to more clearly isolate 
where the problem lies.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Recompiled and tested, the problem remains ... :

[Wed Sep 04 13:22:27 2002] [info] Server: Apache/2.0.41-dev, Interface:
mod_ssl/2.0.41-dev, Library: OpenSSL/0.9.6c
[Wed Sep 04 13:22:27 2002] [notice] Apache/2.0.41-dev (Unix)
mod_ssl/2.0.41-dev OpenSSL/0.9.6c DAV/2 configured -- resuming normal
operations
[Wed Sep 04 13:22:27 2002] [info] Server built: Sep  3 2002 16:31:17
[Wed Sep 04 13:38:28 2002] [notice] child pid 29748 exit signal
Segmentation fault (11)

Crash after 71 MB. When I have the time, I'll investigate further !

Peter.

Graham Leggett wrote:
> 
> Peter Van Biesen wrote:
> 
> > I've checked out the latest version from CVS, but I see there's no
> > configure script in there. How do I get/generate it ? Do I need it to
> > compile ?
> 
> Pull the following three from cvs:
> 
> - httpd-2.0
> - apr
> - apr-util
> 
> Copy both the apr and apr-util directories to httpd-2.0/srclib, like so:
> 
> [root@jessica srclib]# pwd
> /home/minfrin/src/apache/sandbox/proxy/httpd-2.0/srclib
> [root@jessica srclib]# ls -al
> total 40
> drwxr-xr-x    7 minfrin  minfrin      4096 Sep  2 10:31 .
> drwxr-xr-x   14 minfrin  minfrin      4096 Sep  2 10:39 ..
> drwxr-xr-x   32 minfrin  minfrin      4096 Sep  2 10:34 apr
> drwxr-xr-x   21 minfrin  minfrin      4096 Sep  2 10:35 apr-util
> drwxr-xr-x    2 minfrin  minfrin      4096 Jun 20 00:01 CVS
> -rw-r--r--    1 minfrin  minfrin        32 Jun 20 00:01 .cvsignore
> -rw-r--r--    1 root     root            0 Sep  2 10:31 .deps
> drwxr-xr-x    3 minfrin  minfrin      4096 Dec  6  2001 expat-lite
> -rw-r--r--    1 root     root          477 Sep  2 10:31 Makefile
> -rw-r--r--    1 minfrin  minfrin       136 May 20 10:53 Makefile.in
> drwxr-xr-x    7 minfrin  minfrin      4096 Sep  2 10:36 pcre
> 
> In the httpd-2.0 directory, run the following to create the ./configure
> scripts:
> 
> ./buildconf
> 
> Then run ./configure as you normally would.
> 
> Regards,
> Graham
> --
> -----------------------------------------
> minfrin@sharp.fm
>         "There's a moon
>                                         over Bourbon Street
>                                                 tonight..."

Re: Vote: mod_jk connector in /experimental

Posted by Henning Brauer <hb...@bsws.de>.
On Tue, Sep 03, 2002 at 01:15:43PM +0200, Peter Van Biesen wrote:
> servlets, most apaches will use mod_jk anyway.

I beg to differ.

Re: Vote: mod_jk connector in /experimental

Posted by Peter Van Biesen <pe...@vlafo.be>.
Mladen Turk wrote:
> There is no need to take that personal. You should post that question to
> the tomcat-dev@jakarta.apache.org first. No one is pushing you out, and
> all your ideas and thoughts will be highly appreciated.
OK
> 
> Second, if you are asking for a vote then IMO there should be some sort
> of discussion prior to that?
I don't remember a discussion prior to the vote on putting mod_auth_ldap
into the core, but I may be mistaken ... Anyway, shouldn't I have a
discussion prior to a vote on tomcat-dev then too ? No discussion, no
vote; no vote, no discussion ?

Let's leave it at no vote, no vote ;-)
> 
> Read the:
> http://www.tuxedo.org/~esr/faqs/smart-questions.html
Especially the "dealing with rudeness" part ... ;-)
> 
> MT.
Peter.

RE: Vote: mod_jk connector in /experimental

Posted by Mladen Turk <mt...@mappingsoft.com>.
> From Peter Van Biesen

> Anyway, I gathered that apache was a organization that 
> promoted public initiative. Apparently, it is not 
> appreciated. I hope your attitude will get you far in your 
> carreer ( probably a management position, I'm sure ... ).
>

There is no need to take that personal. You should post that question to
the tomcat-dev@jakarta.apache.org first. No one is pushing you out, and
all your ideas and thoughts will be highly appreciated.

Second, if you are asking for a vote then IMO there should be some sort
of discussion prior to that?

Read the:
http://www.tuxedo.org/~esr/faqs/smart-questions.html

MT.


Re: Vote: mod_jk connector in /experimental

Posted by Peter Van Biesen <pe...@vlafo.be>.
Mladen Turk wrote:
> 
> > I'd like to start a vote to get mod_jk in the apache core
> > distribution.
> 
> The jk is not in the TC distribution, but rather in the
> jakarta-tomcat-connectors.
My mistake.

> > It seems silly to me to leave it in the tomcat distribution,
> 
> That's your opinion, and you should first ask the question to the right
> dev group.
Both groups are involved, the httpd AND the tomcat group. I was just
asking, no pressure ... ( calm down, please 8-| )
> 
> > what if an other container implements the protocol ?
> 
> Yes, what if?
Then they should always come to tomcat to get an interface to ajp ...
d'oh. Proving that it actually belongs in the httpd distribution.
> 
> > most apaches will use mod_jk anyway.
> 
> How did you get to this statement? By experiment?
Did we get up on the wrong side of the bed ? Coffee cold ? Yes, by
experiment; I'm not in a position to do an exhaustive search, you know
...
> 
> MT.
Anyway, I gathered that apache was an organization that promoted public
initiative. Apparently, it is not appreciated. I hope your attitude will
get you far in your career ( probably a management position, I'm sure
... ).

The vote has ended.

Peter.

RE: Vote: mod_jk connector in /experimental

Posted by Mladen Turk <mt...@mappingsoft.com>.
> I'd like to start a vote to get mod_jk in the apache core 
> distribution.

The jk is not in the TC distribution, but rather in the
jakarta-tomcat-connectors.

> It seems silly to me to leave it in the tomcat distribution, 

That's your opinion, and you should first ask the question to the right
dev group.

> what if an other container implements the protocol ?

Yes, what if?
 
> most apaches will use mod_jk anyway.

How did you get to this statement? By experiment?

MT.



Re: Vote: mod_jk connector in /experimental

Posted by "Jess M. Holle" <je...@ptc.com>.
alex@foogod.com wrote:

>On Tue, Sep 03, 2002 at 09:51:20AM -0500, Jess M. Holle wrote:
>  
>
>>It would be nicest of all to have builds of each version of the core for 
>>each platform -- and pluggable binaries of all the extra modules for 
>>each version/platform as well.
>>
>Eergh.. this sounds like a maintenance nightmare.
>
Why?

If the builds are automated, then there's no maintenance in producing 
new binaries.  If the builds don't work, then the releases should not be 
done.

>>This could be cranked out by automated 
>>scripts as a release criteria/requirement, i.e. it's not a release until 
>>everything builds on all platforms with the automated scripts (and 
>>ideally passes some basic tests on all of them too).
>>
>
>I can almost guarantee you this will translate to "we will never again have a
>release."
>
>There are still several significant official apache distribution modules from
>1.3 which do not yet work under the current 2.0 line.
>
I was not referring to modules from 1.3 that don't work with 2.0.
Rather, I was talking about modules which ostensibly work against 1.3.x
or 2.0.x respectively.

>Considering that we're
>talking about creating a repository which presumably will be containing not
>only all of this stuff but lots of third-party modules with various levels of
>maintenance and stability, requiring that they all compile and work before
>releasing a new version of httpd is, frankly, insane.
>
Actually, you raise a good point.  Third party modules should be 
referenced by hyperlink and the party involved should be e-mailed to 
notify them when a new build label is produced, but the Apache group 
cannot take responsibility for 3rd-party modules.  They can, however, 
provide:

   1. Something like http://modules.apache.org/, but with links direct
      to download directories wherever possible.
   2. Minimalistic coordination with such 3rd-parties to allow/encourage
      them to rebuild with each Apache build.

Note that I am assuming a DSO-based distribution.

>Personally, what I would like to see is something along the following lines:
>
>1.  A core Apache distribution containing a minimal server.  This would contain
>the core code and the few modules required to get the basic HTTPD behavior
>everybody expects from any server (serve files off a disk, run CGIs, and not
>much else).  This would be useful for those wanting a "lean and mean" httpd
>server, or for those who want to build everything custom from the module
>repository.  It would also make it easy to release core updates in a timely
>fashion, as new releases of this package could be made with a minimum of
>modules needing to be updated/tested.
>
>2.  An "enhanced" Apache distribution, containing everything from the minimal
>distribution, plus a bunch of really commonly used modules.  This would be
>equivalent to what generally gets distributed now.  Criteria for what modules
>get bundled into this should be based primarily on demand (only modules that
>lots of people out in the real world need and use), and of course there would
>be a requirement that any included modules must have people willing and able to
>actively develop and debug them in a timely fashion, so that if something
>breaks, it doesn't seriously slow down the release schedule (without good
>reason).  It would be nice if releases of this package corresponded roughly to
>releases of the core package, but if a core change was made which required
>updating a lot of stuff, the core package could be released first, while work
>is still going on on updating all the other modules in this package to work
>with the new core before the enhanced package goes out the door.
>
>3.  A repository of all apache modules (including all the ones from the
>enhanced distribution, and from everybody else out there in the world) in a
>consistent, well-defined form with a modular build system for the core which
>you can just drop them into.  Ideally, I would like to be able to download one
>of the above two distributions, unpack the source, cd into the source
>directory, and then unpack mod_foo.tar.gz and mod_bar.tar.gz (obtained from the
>repository), run configure/make, and get a server which includes the foo and
>bar modules just as if they'd been part of the initial distribution.  With a
>well-defined module distribution file format and a build system which
>automagically supported modular-inclusions, this shouldn't be too hard to
>achieve.
>
I agree up until the point where you say configure/make.  I have little 
trouble with this personally, but after you watch the uninitiated do 
this for a while -- especially given some esoteric misconfiguration in 
their build support software (e.g. gcc) -- you come to appreciate 
*binary* distributions.

--
Jess Holle


Re: Vote: mod_jk connector in /experimental

Posted by al...@foogod.com.
On Tue, Sep 03, 2002 at 09:51:20AM -0500, Jess M. Holle wrote:
> It would be nicest of all to have builds of each version of the core for 
> each platform -- and pluggable binaries of all the extra modules for 
> each version/platform as well.

Eergh.. this sounds like a maintenance nightmare.

> This could be cranked out by automated 
> scripts as a release criteria/requirement, i.e. it's not a release until 
> everything builds on all platforms with the automated scripts (and 
> ideally passes some basic tests on all of them too).

I can almost guarantee you this will translate to "we will never again have a
release."

There are still several significant official apache distribution modules from
1.3 which do not yet work under the current 2.0 line.  Considering that we're
talking about creating a repository which presumably will be containing not
only all of this stuff but lots of third-party modules with various levels of
maintenance and stability, requiring that they all compile and work before
releasing a new version of httpd is, frankly, insane.

Personally, what I would like to see is something along the following lines:

1.  A core Apache distribution containing a minimal server.  This would contain
the core code and the few modules required to get the basic HTTPD behavior
everybody expects from any server (serve files off a disk, run CGIs, and not
much else).  This would be useful for those wanting a "lean and mean" httpd
server, or for those who want to build everything custom from the module
repository.  It would also make it easy to release core updates in a timely
fashion, as new releases of this package could be made with a minimum of
modules needing to be updated/tested.

2.  An "enhanced" Apache distribution, containing everything from the minimal
distribution, plus a bunch of really commonly used modules.  This would be
equivalent to what generally gets distributed now.  Criteria for what modules
get bundled into this should be based primarily on demand (only modules that
lots of people out in the real world need and use), and of course there would
be a requirement that any included modules must have people willing and able to
actively develop and debug them in a timely fashion, so that if something
breaks, it doesn't seriously slow down the release schedule (without good
reason).  It would be nice if releases of this package corresponded roughly to
releases of the core package, but if a core change was made which required
updating a lot of stuff, the core package could be released first, while work
is still going on on updating all the other modules in this package to work
with the new core before the enhanced package goes out the door.

3.  A repository of all apache modules (including all the ones from the
enhanced distribution, and from everybody else out there in the world) in a
consistent, well-defined form with a modular build system for the core which
you can just drop them into.  Ideally, I would like to be able to download one
of the above two distributions, unpack the source, cd into the source
directory, and then unpack mod_foo.tar.gz and mod_bar.tar.gz (obtained from the
repository), run configure/make, and get a server which includes the foo and
bar modules just as if they'd been part of the initial distribution.  With a
well-defined module distribution file format and a build system which
automagically supported modular-inclusions, this shouldn't be too hard to
achieve.

I don't think it's worth trying to do a global binary module repository
(officially).  Those responsible for building binary distributions for any
given platform can obtain and build in all the modules from the repository
which make sense and are well enough maintained to be feasible.  Obviously, it
would be good to compile things in such a way that third-party developers could
also distribute their own binary modules, but I think any
repositories/collections for that sort of thing would best be done on an
as-needed, per-platform basis.

-alex

Re: Vote: mod_jk connector in /experimental

Posted by "Jess M. Holle" <je...@ptc.com>.
It would be nicest of all to have builds of each version of the core for 
each platform -- and pluggable binaries of all the extra modules for 
each version/platform as well. This could be cranked out by automated 
scripts as a release criteria/requirement, i.e. it's not a release until 
everything builds on all platforms with the automated scripts (and 
ideally passes some basic tests on all of them too).

That way folk could piece together just what they want without having to 
be Apache build gurus.

--
Jess Holle

Peter Van Biesen wrote:

>Point taken. I didn't think about that. The problem is that it is not at
>all clear what should get in. Indeed, a repository would be a better
>idea, with an apache distribution with no modules ( or only the core
>ones ).
>
>Peter.
>
>Dirk-Willem van Gulik wrote:
>  
>
>>Aye ! Well said.
>>
>>Dw.
>>
>>On Tue, 3 Sep 2002, John K. Sterling wrote:
>>
>>    
>>
>>>Here we go.....
>>>
>>>kitchen sink come on - we let a module into experimental (auth_ldap) and
>>>suddenly experimental will become the CPAN of apache.
>>>
>>>I think this is a silly idea personally.  More cruft to maintain and to
>>>hold back releases, etc. etc. etc.  Until Aaron's (et. al) idea of a module
>>>registry/repository becomes reality, jk should stay where it is.
>>>
>>>sterling
>>>
>>>      
>>>
>>>>-- Original Message --
>>>>Reply-To: dev@httpd.apache.org
>>>>Date: Tue, 03 Sep 2002 13:15:43 +0200
>>>>From: Peter Van Biesen <pe...@vlafo.be>
>>>>To: dev@httpd.apache.org
>>>>Subject: Vote: mod_jk connector in /experimental
>>>>
>>>>Hello,
>>>>
>>>>I'd like to start a vote to get mod_jk in the apache core distribution.
>>>>It seems silly to me to leave it in the tomcat distribution, what if an
>>>>other container implements the protocol ? Moreover, the mod_jk is of no
>>>>use to other webservers than apache and with the increased use of
>>>>servlets, most apaches will use mod_jk anyway.
>>>>
>>>>Anyhow, let me know what you think !
>>>>        
>>>>
>>>
>>>
>>>      
>>>
>
>  
>



Re: Vote: mod_jk connector in /experimental

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.

On Wed, 4 Sep 2002, Peter Van Biesen wrote:

> how do you see this ? A core server with a bunch of .so's or hooks in
> the build process to statically link optional modules ?

Check out FreeBSD ports; basically a set of simple make files like:

ls /usr/ports/www/mod_*

mod_access_identd       mod_backhand            mod_fastcgi             mod_mysqluserdir        mod_sed
mod_access_referer      mod_bf                  mod_frontpage           mod_pcgi2               mod_sequester
mod_auth_any            mod_blowchunks          mod_gzip                mod_perl                mod_snake
mod_auth_external       mod_cgi_debug           mod_hosts_access        mod_php3                mod_sqlinclude
mod_auth_kerb           mod_color               mod_index_rss           mod_php4                mod_throttle
mod_auth_mysql          mod_csacek              mod_jk                  mod_proxy_add_forward   mod_ticket
mod_auth_mysql_another  mod_cvs                 mod_layout              mod_put                 mod_trigger
mod_auth_pam            mod_dav                 mod_log_mysql           mod_python              mod_tsunami
mod_auth_pgsql          mod_dtcl                mod_mp3                 mod_roaming             mod_watch
mod_auth_pwcheck        mod_extract_forwarded   mod_mylo                mod_ruby                mod_zap

And each then has a makefile:

# New ports collection makefile for:    mod_mp3
# Date created:                         7 April 2001
# Whom:                                 will
#
# $FreeBSD: ports/www/mod_mp3/Makefile,v 1.17 2002/03/18 01:34:24 anders Exp $
#

PORTNAME=       mod_mp3
PORTVERSION=    0.35
CATEGORIES=     www audio
MASTER_SITES=   http://software.tangent.org/download/ \
                ftp://ftp.tangent.org/pub/apache/ \
                http://atreides.freenix.no/~anders/

MAINTAINER=     ports@FreeBSD.org

BUILD_DEPENDS=  ${APXS}:${PORTSDIR}/www/apache13
RUN_DEPENDS=    ${APXS}:${PORTSDIR}/www/apache13

HAS_CONFIGURE=  yes
MAKE_ARGS+=     APXS="${APXS}"

APXS?=          ${LOCALBASE}/sbin/apxs
DOCS=           ChangeLog README TODO faq.html

do-install:
        ${APXS} -i -A -n mp3 ${WRKSRC}/src/mod_mp3.so
.if !defined(NOPORTDOCS)
        @${INSTALL} -d -m 0755 ${PREFIX}/share/doc/mod_mp3
.for f in ${DOCS}
        ${INSTALL_DATA} ${WRKSRC}/${f} ${PREFIX}/share/doc/mod_mp3/
.endfor
.endif
        ${CAT} ${PKGMESSAGE}

.include <bsd.port.mk>

all you do is cd into the directory and do a make, make install.

If you look at 'fink' you see a more cross-platform sort of approach. Both
work well.

Dw


Re: Vote: mod_jk connector in /experimental

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hi,

how do you see this ? A core server with a bunch of .so's or hooks in
the build process to statically link optional modules ?

Peter.

"John K. Sterling" wrote:
> 
> >-- Original Message --
> >Reply-To: dev@httpd.apache.org
> >Date: Tue, 03 Sep 2002 16:24:01 +0200
> >From: Peter Van Biesen <pe...@vlafo.be>
> >To: dev@httpd.apache.org
> >Subject: Re: Vote: mod_jk connector in /experimental
> >
> >
> >Point taken. I didn't think about that. The problem is that it is not at
> >all clear what should get in. Indeed, a repository would be a better
> >idea, with an apache distribution with no modules ( or only the core
> >ones ).
> >
> 
> As I stated, many people expressed interest in this a few weeks ago.  I
> would really LOVE to see some folks get together (i'll volunteer to help
> out, but a member should lead the charge) and come up with an architecture
> for this - there are many obvious systems we could copy.  As a module author,
> it would be nice to have tighter integration with the core, without having
> to become a part of the core :)
> 
> sterling

Re: Vote: mod_jk connector in /experimental

Posted by "John K. Sterling" <jo...@sterls.com>.
>-- Original Message --
>Reply-To: dev@httpd.apache.org
>Date: Tue, 03 Sep 2002 16:24:01 +0200
>From: Peter Van Biesen <pe...@vlafo.be>
>To: dev@httpd.apache.org
>Subject: Re: Vote: mod_jk connector in /experimental
>
>
>Point taken. I didn't think about that. The problem is that it is not at
>all clear what should get in. Indeed, a repository would be a better
>idea, with an apache distribution with no modules ( or only the core
>ones ).
>

As I stated, many people expressed interest in this a few weeks ago.  I
would really LOVE to see some folks get together (i'll volunteer to help
out, but a member should lead the charge) and come up with an architecture
for this - there are many obvious systems we could copy.  As a module author,
it would be nice to have tighter integration with the core, without having
to become a part of the core :)

sterling


Re: Vote: mod_jk connector in /experimental

Posted by Peter Van Biesen <pe...@vlafo.be>.
Point taken. I didn't think about that. The problem is that it is not at
all clear what should get in. Indeed, a repository would be a better
idea, with an apache distribution with no modules ( or only the core
ones ).

Peter.

Dirk-Willem van Gulik wrote:
> 
> Aye ! Well said.
> 
> Dw.
> 
> On Tue, 3 Sep 2002, John K. Sterling wrote:
> 
> > Here we go.....
> >
> > kitchen sink come on - we let a module into experimental (auth_ldap) and
> > suddenly experimental will become the CPAN of apache.
> >
> > I think this is a silly idea personally.  More cruft to maintain and to
> > hold back releases, etc. etc. etc.  Until Aaron's (et. al) idea of a module
> > registry/repository becomes reality, jk should stay where it is.
> >
> > sterling
> >
> > >-- Original Message --
> > >Reply-To: dev@httpd.apache.org
> > >Date: Tue, 03 Sep 2002 13:15:43 +0200
> > >From: Peter Van Biesen <pe...@vlafo.be>
> > >To: dev@httpd.apache.org
> > >Subject: Vote: mod_jk connector in /experimental
> > >
> > >Hello,
> > >
> > >I'd like to start a vote to get mod_jk in the apache core distribution.
> > >It seems silly to me to leave it in the tomcat distribution, what if an
> > >other container implements the protocol ? Moreover, the mod_jk is of no
> > >use to other webservers than apache and with the increased use of
> > >servlets, most apaches will use mod_jk anyway.
> > >
> > >Anyhow, let me know what you think !
> >
> >
> >
> >

RE: Vote: mod_jk connector in /experimental

Posted by Dirk-Willem van Gulik <di...@webweaving.org>.
Aye ! Well said.

Dw.

On Tue, 3 Sep 2002, John K. Sterling wrote:

> Here we go.....
>
> kitchen sink come on - we let a module into experimental (auth_ldap) and
> suddenly experimental will become the CPAN of apache.
>
> I think this is a silly idea personally.  More cruft to maintain and to
> hold back releases, etc. etc. etc.  Until Aaron's (et. al) idea of a module
> registry/repository becomes reality, jk should stay where it is.
>
> sterling
>
> >-- Original Message --
> >Reply-To: dev@httpd.apache.org
> >Date: Tue, 03 Sep 2002 13:15:43 +0200
> >From: Peter Van Biesen <pe...@vlafo.be>
> >To: dev@httpd.apache.org
> >Subject: Vote: mod_jk connector in /experimental
> >
> >Hello,
> >
> >I'd like to start a vote to get mod_jk in the apache core distribution.
> >It seems silly to me to leave it in the tomcat distribution, what if an
> >other container implements the protocol ? Moreover, the mod_jk is of no
> >use to other webservers than apache and with the increased use of
> >servlets, most apaches will use mod_jk anyway.
> >
> >Anyhow, let me know what you think !
>
>
>
>


RE: Vote: mod_jk connector in /experimental

Posted by "John K. Sterling" <jo...@sterls.com>.
Here we go.....

kitchen sink come on - we let a module into experimental (auth_ldap) and
suddenly experimental will become the CPAN of apache.

I think this is a silly idea personally.  More cruft to maintain and to
hold back releases, etc. etc. etc.  Until Aaron's (et. al) idea of a module
registry/repository becomes reality, jk should stay where it is.

sterling

>-- Original Message --
>Reply-To: dev@httpd.apache.org
>Date: Tue, 03 Sep 2002 13:15:43 +0200
>From: Peter Van Biesen <pe...@vlafo.be>
>To: dev@httpd.apache.org
>Subject: Vote: mod_jk connector in /experimental
>
>Hello,
>
>I'd like to start a vote to get mod_jk in the apache core distribution.
>It seems silly to me to leave it in the tomcat distribution, what if an
>other container implements the protocol ? Moreover, the mod_jk is of no
>use to other webservers than apache and with the increased use of
>servlets, most apaches will use mod_jk anyway.
>
>Anyhow, let me know what you think !



Vote: mod_jk connector in /experimental

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hello,

I'd like to start a vote to get mod_jk into the apache core distribution.
It seems silly to me to leave it in the tomcat distribution; what if
another container implements the protocol ? Moreover, mod_jk is of no
use to webservers other than apache, and with the increased use of
servlets, most apaches will use mod_jk anyway.

Anyhow, let me know what you think !

Peter.

Re: Segmentation fault when downloading large files

Posted by Brian Pane <br...@apache.org>.
Joe Schaefer wrote:

>There's also a refcount problem in http_protocol.c wrt chunked
>transfer codings. The problem is that the bucket holding the
>chunk size isn't ever freed, so the corresponding data block
>is overcounted.
>
>Here's a diff for http_protocol.c against current anon-cvs (which
>doesn't seem to have any of the newer changes to ap_get_client_block).
>Light testing seems to indicate that this fixes the refcount problem.
>

Thanks, I'll commit this now.  I've applied a slightly simplified
form of this change: clearing the brigade right after the
apr_brigade_flatten regardless of the return code.
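
I.e. (reconstructed from the diff context below, not a verbatim excerpt
of the commit):

    rv = apr_brigade_flatten(bb, line, &len);
    apr_brigade_cleanup(bb);      /* drop the chunk-size bucket either way */
    if (rv == APR_SUCCESS) {
        ctx->remaining = get_chunk_size(line);
    }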

I think there may be a couple of other bucket leaks in that same
file; I'll scan through it and fix any other leaks I can find.

Brian

>diff -u -r1.454 http_protocol.c
>--- http_protocol.c     13 Aug 2002 14:27:39 -0000      1.454
>+++ http_protocol.c     5 Sep 2002 17:15:12 -0000
>@@ -901,6 +901,7 @@
>             if (rv == APR_SUCCESS) {
>                 rv = apr_brigade_flatten(bb, line, &len);
>                 if (rv == APR_SUCCESS) {
>+                    apr_brigade_cleanup(bb);
>                     ctx->remaining = get_chunk_size(line);
>                 }
>             }
>@@ -966,6 +967,7 @@
>                     if (rv == APR_SUCCESS) {
>                         rv = apr_brigade_flatten(bb, line, &len);
>                         if (rv == APR_SUCCESS) {
>+                            apr_brigade_cleanup(bb);
>                             ctx->remaining = get_chunk_size(line);
>                         }
>                     }
>  
>




Re: Segmentation fault when downloading large files

Posted by Joe Schaefer <jo...@sunstarsys.com>.
Graham Leggett <mi...@sharp.fm> writes:

> Peter Van Biesen wrote:
> 
> > Does anybody have another idea for me to try ?
> 
> Have you tried the latest fix for the client_block stuff, I think I saw 
> a very recent CVS checkin...?
> 
> There could of course be more than one leak, and we'll only fix the 
> problem once all of them are found...


There's also a refcount problem in http_protocol.c wrt chunked
transfer codings. The problem is that the bucket holding the
chunk size isn't ever freed, so the corresponding data block
is overcounted.

Here's a diff for http_protocol.c against current anon-cvs (which
doesn't seem to have any of the newer changes to ap_get_client_block).
Light testing seems to indicate that this fixes the refcount problem.

diff -u -r1.454 http_protocol.c
--- http_protocol.c     13 Aug 2002 14:27:39 -0000      1.454
+++ http_protocol.c     5 Sep 2002 17:15:12 -0000
@@ -901,6 +901,7 @@
             if (rv == APR_SUCCESS) {
                 rv = apr_brigade_flatten(bb, line, &len);
                 if (rv == APR_SUCCESS) {
+                    apr_brigade_cleanup(bb);
                     ctx->remaining = get_chunk_size(line);
                 }
             }
@@ -966,6 +967,7 @@
                     if (rv == APR_SUCCESS) {
                         rv = apr_brigade_flatten(bb, line, &len);
                         if (rv == APR_SUCCESS) {
+                            apr_brigade_cleanup(bb);
                             ctx->remaining = get_chunk_size(line);
                         }
                     }


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> Does anybody have another idea for me to try ?

Have you tried the latest fix for the client_block stuff? I think I saw 
a very recent CVS checkin...

There could of course be more than one leak, and we'll only fix the 
problem once all of them are found...

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Brian Pane <br...@apache.org>.
Peter Van Biesen wrote:

>Hi,
>
>I've recompiled the server with only the proxy and the core modules :
>

>But the problem remains : 
>

>Does anybody have another idea for me to try ?
>

There was a fix for http_protocol.c last night that addressed
at least one problem that could cause the proxy to run out of
memory on a large request.  I recommend trying the latest code
in cvs.

Brian



Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hi,

I've recompiled the server with only the proxy and the core modules :

Compiled in modules:
  core.c
  mod_proxy.c
  proxy_connect.c
  proxy_ftp.c
  proxy_http.c
  prefork.c
  http_core.c

But the problem remains : 

[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:04:55 2002] [info] (32)Broken pipe: core_output_filter:
writing data to the network
[Thu Sep 05 10:07:39 2002] [notice] child pid 17923 exit signal
Segmentation fault (11)

Does anybody have another idea for me to try ?

Thanx,

Peter.

Re: Segmentation fault when downloading large files

Posted by Brian Pane <br...@apache.org>.
Peter Van Biesen wrote:

>I've continued to investigate the problem, maybe you know what could
>cause it.
>
>I'm using a proxy chain, a proxy running internally and forwarding all
>requests to an other proxy in the DMZ. Both proxies are identical. It is
>always the internal proxy that crashes; the external proxy has no
>problem downloading large files ( I haven't tested the memory usage yet
>). Therefor, when the proxy connects directly to the site, the memory is
>freed, but when it forwards the request to another proxy, it is not. 
>
>Anyhow, I'll wait until the 2.0.41 will be released, maybe this will
>solve the problem. Does anybody know when this will be ?
>

There's no specific date planned for 2.0.41 yet.  My own thinking
is that we should release 2.0.41 "soon," because it contains a few
important performance and reliability fixes (mostly related to cases
where 2.0.40 and prior releases were trying to buffer unreasonably
large amounts of data).  In the meantime, if you have time, can you
try your proxy test case against the current CVS head?  I ran some
reverse-proxy tests successfully today using the latest 2.0.41-dev
code, and it properly streamed large responses without buffering,
but I'm not certain that my test case covered all the code paths
involved in your two-proxy setup.

Thanks,
Brian

>
>Peter.
>
>Graham Leggett wrote:
>  
>
>>Brian Pane wrote:
>>
>>    
>>
>>>But the memory involved here ought to be in buckets (which can
>>>be freed long before the entire request is done).
>>>
>>>In 2.0.39 and 2.0.40, the content-length filter's habit of
>>>buffering the entire response would keep the httpd from freeing
>>>buckets incrementally during the request.  That particular
>>>problem is gone in the latest 2.0.41-dev CVS head.  If the
>>>segfault problem still exists in 2.0.41-dev, we need to take
>>>a look at whether there's any buffering in the proxy code that
>>>can be similarly fixed.
>>>      
>>>
>>The proxy code doesn't buffer anything, it basically goes "get a bucket
>>from backend stack, put the bucket to frontend stack, cleanup bucket,
>>repeat".
>>
>>There are some filters (like include I think) that "put away" buckets as
>>the response is handled, it is possible one of these filters is also
>>causing a "leak".
>>
>>Regards,
>>Graham
>>--
>>-----------------------------------------
>>minfrin@sharp.fm
>>        "There's a moon
>>                                        over Bourbon Street
>>                                                tonight..."
>>    
>>




Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> I'm using a proxy chain, a proxy running internally and forwarding all
> requests to an other proxy in the DMZ. Both proxies are identical. It is
> always the internal proxy that crashes; the external proxy has no
> problem downloading large files ( I haven't tested the memory usage yet
> ). Therefor, when the proxy connects directly to the site, the memory is
> freed, but when it forwards the request to another proxy, it is not. 

Not necessarily. Your outer proxy that doesn't crash might have more RAM 
in the machine, or the inner proxy has already crashed, not allowing the 
outer proxy the opportunity to have a request large enough to crash it.

Have you tested the outer proxy with a large file on its own, i.e. a file 
greater in size than the box's RAM+swap?

> Anyhow, I'll wait until the 2.0.41 will be released, maybe this will
> solve the problem. Does anybody know when this will be ?

Pull the latest HEAD from cvs and give it a try to see if it is fixed.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
I've continued to investigate the problem, maybe you know what could
cause it.

I'm using a proxy chain: a proxy running internally and forwarding all
requests to another proxy in the DMZ. Both proxies are identical. It is
always the internal proxy that crashes; the external proxy has no
problem downloading large files ( I haven't tested the memory usage yet
). Therefore, when the proxy connects directly to the site, the memory is
freed, but when it forwards the request to another proxy, it is not.

Anyhow, I'll wait until 2.0.41 is released; maybe this will solve the
problem. Does anybody know when this will be ?

Peter.

Graham Leggett wrote:
> 
> Brian Pane wrote:
> 
> > But the memory involved here ought to be in buckets (which can
> > be freed long before the entire request is done).
> >
> > In 2.0.39 and 2.0.40, the content-length filter's habit of
> > buffering the entire response would keep the httpd from freeing
> > buckets incrementally during the request.  That particular
> > problem is gone in the latest 2.0.41-dev CVS head.  If the
> > segfault problem still exists in 2.0.41-dev, we need to take
> > a look at whether there's any buffering in the proxy code that
> > can be similarly fixed.
> 
> The proxy code doesn't buffer anything, it basically goes "get a bucket
> from backend stack, put the bucket to frontend stack, cleanup bucket,
> repeat".
> 
> There are some filters (like include I think) that "put away" buckets as
> the response is handled, it is possible one of these filters is also
> causing a "leak".
> 
> Regards,
> Graham
> --
> -----------------------------------------
> minfrin@sharp.fm
>         "There's a moon
>                                         over Bourbon Street
>                                                 tonight..."

Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

> I've checked out the latest version from CVS, but I see there's no
> configure script in there. How do I get/generate it ? Do I need it to
> compile ?

Pull the following three from cvs:

- httpd-2.0
- apr
- apr-util

Copy both the apr and apr-util directories to httpd-2.0/srclib, like so:

[root@jessica srclib]# pwd
/home/minfrin/src/apache/sandbox/proxy/httpd-2.0/srclib
[root@jessica srclib]# ls -al
total 40
drwxr-xr-x    7 minfrin  minfrin      4096 Sep  2 10:31 .
drwxr-xr-x   14 minfrin  minfrin      4096 Sep  2 10:39 ..
drwxr-xr-x   32 minfrin  minfrin      4096 Sep  2 10:34 apr
drwxr-xr-x   21 minfrin  minfrin      4096 Sep  2 10:35 apr-util
drwxr-xr-x    2 minfrin  minfrin      4096 Jun 20 00:01 CVS
-rw-r--r--    1 minfrin  minfrin        32 Jun 20 00:01 .cvsignore
-rw-r--r--    1 root     root            0 Sep  2 10:31 .deps
drwxr-xr-x    3 minfrin  minfrin      4096 Dec  6  2001 expat-lite
-rw-r--r--    1 root     root          477 Sep  2 10:31 Makefile
-rw-r--r--    1 minfrin  minfrin       136 May 20 10:53 Makefile.in
drwxr-xr-x    7 minfrin  minfrin      4096 Sep  2 10:36 pcre

In the httpd-2.0 directory, run the following to create the ./configure 
scripts:

./buildconf

Then run ./configure as you normally would.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hi,

I've checked out the latest version from CVS, but I see there's no
configure script in there. How do I get/generate it ? Do I need it to
compile ?

Thanx,

Peter.

Brian Pane wrote:
> 
> Peter Van Biesen wrote:
> 
> >I've continued to investigate the problem, maybe you know what could
> >cause it.
> >
> >I'm using a proxy chain, a proxy running internally and forwarding all
> >requests to an other proxy in the DMZ. Both proxies are identical. It is
> >always the internal proxy that crashes; the external proxy has no
> >problem downloading large files ( I haven't tested the memory usage yet
> >). Therefor, when the proxy connects directly to the site, the memory is
> >freed, but when it forwards the request to another proxy, it is not.
> >
> >Anyhow, I'll wait until the 2.0.41 will be released, maybe this will
> >solve the problem. Does anybody know when this will be ?
> >
> 
> There's no specific date planned for 2.0.41 yet.  My own thinking
> is that we should release 2.0.41 "soon," because it contains a few
> important performance and reliability fixes (mostly related to cases
> where 2.0.40 and prior releases were trying to buffer unreasonably
> large amounts of data).  In the meantime, if you have time, can you
> try your proxy test case against the current CVS head?  I ran some
> reverse-proxy tests successfully today using the latest 2.0.41-dev
> code, and it properly streamed large responses without buffering,
> but I'm not certain that my test case covered all the code paths
> involved in your two-proxy setup.
> 
> Thanks,
> Brian
> 
> >
> >Peter.
> >
> >Graham Leggett wrote:
> >
> >
> >>Brian Pane wrote:
> >>
> >>
> >>
> >>>But the memory involved here ought to be in buckets (which can
> >>>be freed long before the entire request is done).
> >>>
> >>>In 2.0.39 and 2.0.40, the content-length filter's habit of
> >>>buffering the entire response would keep the httpd from freeing
> >>>buckets incrementally during the request.  That particular
> >>>problem is gone in the latest 2.0.41-dev CVS head.  If the
> >>>segfault problem still exists in 2.0.41-dev, we need to take
> >>>a look at whether there's any buffering in the proxy code that
> >>>can be similarly fixed.
> >>>
> >>>
> >>The proxy code doesn't buffer anything, it basically goes "get a bucket
> >>from backend stack, put the bucket to frontend stack, cleanup bucket,
> >>repeat".
> >>
> >>There are some filters (like include I think) that "put away" buckets as
> >>the response is handled, it is possible one of these filters is also
> >>causing a "leak".
> >>
> >>Regards,
> >>Graham
> >>--
> >>-----------------------------------------
> >>minfrin@sharp.fm
> >>        "There's a moon
> >>                                        over Bourbon Street
> >>                                                tonight..."
> >>
> >>

Re: Segmentation fault when downloading large files

Posted by "Jess M. Holle" <je...@ptc.com>.
Really?

I've built mod_jk v1.2.0 (i.e. from jtc 4.0.4 sources) against 2.0.40 on 
Windows, Solaris, and AIX (and HP provides one for 2.0.39 on HPUX, but 
hasn't gotten to 2.0.40 last I saw) -- though on AIX I had crashes until 
Jeff Trawick helped me navigate the insanity of AIX linking (which the 
2.0.40 build process did not reliably do out-of-the-box).

--
Jess Holle

Peter Van Biesen wrote:

>That's a bit of a problem for the moment, I've compiled 2.0.40, but
>httpd complains at runtime about mod_jk, apparently something has
>changed in the module api ... I'm using the last version of the
>connectors ( 4.0.4 ). Or is there a newer version somewhere ?
>
>Peter.
>
>Justin Erenkrantz wrote:
>  
>
>>On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
>>    
>>
>>>I now have a reproducable error, a httpd which I can recompile ( it's
>>>till a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
>>>      
>>>
>>Can you upgrade to at least .40 or better yet the latest CVS
>>version?  -- justin
>>    
>>
>
>  
>



Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
That's a bit of a problem for the moment: I've compiled 2.0.40, but
httpd complains at runtime about mod_jk; apparently something has
changed in the module API ... I'm using the latest version of the
connectors ( 4.0.4 ). Or is there a newer version somewhere ?

Peter.

Justin Erenkrantz wrote:
> 
> On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> > I now have a reproducable error, a httpd which I can recompile ( it's
> > till a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> 
> Can you upgrade to at least .40 or better yet the latest CVS
> version?  -- justin

Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
I give up, I can't find what's wrong ... 

Peter.

Peter Van Biesen wrote:
> 
> That's a bit of a problem for the moment: I've compiled 2.0.40, but
> httpd complains at runtime about mod_jk; apparently something has
> changed in the module API ... I'm using the latest version of the
> connectors ( 4.0.4 ). Or is there a newer version somewhere ?
> 
> Peter.
> 
> Justin Erenkrantz wrote:
> >
> > On Wed, Aug 28, 2002 at 02:43:08PM +0200, Peter Van Biesen wrote:
> > > I now have a reproducible error, a httpd which I can recompile ( it's
> > > still a 2.0.39 ), so, if anyone wants me to test something, shoot ! Btw,
> >
> > Can you upgrade to at least .40 or better yet the latest CVS
> > version?  -- justin

Re: Segmentation fault when downloading large files

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
At 07:06 AM 8/28/2002, Graham Leggett wrote:
>Peter Van Biesen wrote:
>
>>>>Program received signal SIGSEGV, Segmentation fault.
>>>>0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
>>>>(gdb) where
>>>>#0  0xc1bfb06c in apr_bucket_alloc () from
>>>>/opt/httpd/lib/libaprutil.sl.0
>>>>#1  0xc1bf8d18 in socket_bucket_read () from
>>>>/opt/httpd/lib/libaprutil.sl.0
>>>>#2  0x00129ffc in core_input_filter ()
>>>>#3  0x0011a630 in ap_get_brigade ()
>>>>#4  0x000bb26c in ap_http_filter ()
>>>>#5  0x0011a630 in ap_get_brigade ()
>>>>#6  0x0012999c in net_time_filter ()
>>>>#7  0x0011a630 in ap_get_brigade ()
>
>The ap_get_brigade() is followed by an ap_pass_brigade(), then an
>apr_brigade_cleanup(bb).
>
>What could be happening is that either:
>
>a) brigade cleanup is hosed or leaks
>b) one of the filters is leaking along the way

Or it simply tries to slurp all the hundreds of MBs of this huge download
into memory.

As I guessed, we are out of memory.

Someone asked why I asserted that input filtering still sucks.  Heh.

Bill


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Peter Van Biesen wrote:

>>>Program received signal SIGSEGV, Segmentation fault.
>>>0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
>>>(gdb) where
>>>#0  0xc1bfb06c in apr_bucket_alloc () from
>>>/opt/httpd/lib/libaprutil.sl.0
>>>#1  0xc1bf8d18 in socket_bucket_read () from
>>>/opt/httpd/lib/libaprutil.sl.0
>>>#2  0x00129ffc in core_input_filter ()
>>>#3  0x0011a630 in ap_get_brigade ()
>>>#4  0x000bb26c in ap_http_filter ()
>>>#5  0x0011a630 in ap_get_brigade ()
>>>#6  0x0012999c in net_time_filter ()
>>>#7  0x0011a630 in ap_get_brigade ()

The ap_get_brigade() is followed by an ap_pass_brigade(), then an
apr_brigade_cleanup(bb).
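
In outline that pattern looks something like the sketch below -- a minimal
illustration against the 2.0 filter API, not the actual mod_proxy code
(relay_body and backend_filters are made-up names):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Relay a backend response to the client: read a brigade, pass it on,
 * destroy it, repeat. */
static apr_status_t relay_body(ap_filter_t *backend_filters, request_rec *r)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);
    apr_status_t rv = APR_SUCCESS;
    int seen_eos = 0;

    while (!seen_eos) {
        rv = ap_get_brigade(backend_filters, bb, AP_MODE_READBYTES,
                            APR_BLOCK_READ, AP_IOBUFSIZE);
        if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
            break;
        }
        seen_eos = APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb));
        rv = ap_pass_brigade(r->output_filters, bb);  /* stream to client */
        apr_brigade_cleanup(bb);                      /* destroy the buckets */
        if (rv != APR_SUCCESS) {
            break;
        }
    }
    return rv;
}

If apr_brigade_cleanup() really destroys every bucket on each pass, the
per-request memory should stay flat no matter how big the response is.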

What could be happening is that either:

a) brigade cleanup is hosed or leaks
b) one of the filters is leaking along the way

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Euh, I meant in the function apr_bucket_heap_make. Sorry.
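
Looking a bit further myself: as far as I can tell, the free_func stored by
apr_bucket_heap_make() is not called from the read path at all; it is
invoked from the bucket's destroy callback, once the last reference to the
shared buffer is dropped. Roughly like this (paraphrased from memory of the
apr-util sources, not copied verbatim):

static void heap_bucket_destroy(void *data)
{
    apr_bucket_heap *h = data;

    /* apr_bucket_shared_destroy() returns non-zero only when the
     * refcount on the shared buffer drops to zero */
    if (apr_bucket_shared_destroy(h)) {
        (*h->free_func)(h->base);   /* release the payload */
        apr_bucket_free(h);         /* release the bucket structure */
    }
}

So the memory only comes back when something actually destroys the buckets
-- which would fit a consumer that forgets to clean up its brigades.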

Peter Van Biesen wrote:
> 
> Hi,
> 
> Can anybody look into apr_buckets_heap.c ? I'm not familiar with the APR
> code, but I don't see the free_func called anywhere ( which frees up the
> memory ), or am I mistaken ?
> 
> Thanks !
> 
> Peter.
> 
> Peter Van Biesen wrote:
> >
> > Hi,
> >
> > I started my server with MaxClients=1, started the download and attached
> > to the process with gdb. The process crashed; this is the trace :
> >
> > vfsi3>gdb httpd 7840
> > GNU gdb 5.2.1
> > Copyright 2002 Free Software Foundation, Inc.
> > GDB is free software, covered by the GNU General Public License, and you
> > are
> > welcome to change it and/or distribute copies of it under certain
> > conditions.
> > Type "show copying" to see the conditions.
> > There is absolutely no warranty for GDB.  Type "show warranty" for
> > details.
> > This GDB was configured as "hppa2.0n-hp-hpux11.00"...
> > Attaching to program: /opt/httpd/bin/httpd, process 7840
> >
> > warning: The shared libraries were not privately mapped; setting a
> > breakpoint in a shared library will not work until you rerun the
> > program.
> >
> > Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
> > Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
> > Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
> > Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
> > Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
> > Reading symbols from /usr/lib/libnsl.1...done.
> > Reading symbols from /usr/lib/libxti.2...done.
> > Reading symbols from /usr/lib/libpthread.1...done.
> > Reading symbols from /usr/lib/libc.2...done.
> > Reading symbols from /usr/lib/libdld.2...done.
> > Reading symbols from /usr/lib/libnss_files.1...done.
> > Reading symbols from /usr/lib/libnss_nis.1...done.
> > Reading symbols from /usr/lib/libnss_dns.1...done.
> > 0xc0115b68 in _select_sys () from /usr/lib/libc.2
> > (gdb) continue
> > Continuing.
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> > (gdb) where
> > #0  0xc1bfb06c in apr_bucket_alloc () from
> > /opt/httpd/lib/libaprutil.sl.0
> > #1  0xc1bf8d18 in socket_bucket_read () from
> > /opt/httpd/lib/libaprutil.sl.0
> > #2  0x00129ffc in core_input_filter ()
> > #3  0x0011a630 in ap_get_brigade ()
> > #4  0x000bb26c in ap_http_filter ()
> > #5  0x0011a630 in ap_get_brigade ()
> > #6  0x0012999c in net_time_filter ()
> > #7  0x0011a630 in ap_get_brigade ()
> > #8  0x00092f3c in ap_proxy_http_process_response ()
> > #9  0x000935e0 in ap_proxy_http_handler ()
> > #10 0x0008484c in proxy_run_scheme_handler ()
> > #11 0x0008259c in proxy_handler ()
> > #12 0x000fdc40 in ap_run_handler ()
> > #13 0x000fea04 in ap_invoke_handler ()
> > #14 0x000c0d9c in ap_process_request ()
> > #15 0x000b8348 in ap_process_http_connection ()
> > #16 0x00115a00 in ap_run_process_connection ()
> > #17 0x001160c0 in ap_process_connection ()
> > #18 0x000fae00 in child_main ()
> > #19 0x000fb0ac in make_child ()
> > #20 0x000fb47c in perform_idle_server_maintenance ()
> > #21 0x000fbc88 in ap_mpm_run ()
> > #22 0x001079f0 in main ()
> > (gdb)
> >
> > The resources used by the process increase linearly until the maximum
> > per process is reached, after which the crash occurs. Did we do an alloc
> > without a free ?
> >
> > Peter.

Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hi,

Can anybody look into apr_buckets_heap.c ? I'm not familiar with the APR
code, but I don't see the free_func called anywhere ( which frees up the
memory ), or am I mistaken ?

Thanks !

Peter.

Peter Van Biesen wrote:
> 
> Hi,
> 
> I started my server with MaxClients=1, started the download and attached
> to the process with gdb. The process crashed; this is the trace :
> 
> vfsi3>gdb httpd 7840
> GNU gdb 5.2.1
> Copyright 2002 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you
> are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB.  Type "show warranty" for
> details.
> This GDB was configured as "hppa2.0n-hp-hpux11.00"...
> Attaching to program: /opt/httpd/bin/httpd, process 7840
> 
> warning: The shared libraries were not privately mapped; setting a
> breakpoint in a shared library will not work until you rerun the
> program.
> 
> Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
> Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
> Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
> Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
> Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
> Reading symbols from /usr/lib/libnsl.1...done.
> Reading symbols from /usr/lib/libxti.2...done.
> Reading symbols from /usr/lib/libpthread.1...done.
> Reading symbols from /usr/lib/libc.2...done.
> Reading symbols from /usr/lib/libdld.2...done.
> Reading symbols from /usr/lib/libnss_files.1...done.
> Reading symbols from /usr/lib/libnss_nis.1...done.
> Reading symbols from /usr/lib/libnss_dns.1...done.
> 0xc0115b68 in _select_sys () from /usr/lib/libc.2
> (gdb) continue
> Continuing.
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
> (gdb) where
> #0  0xc1bfb06c in apr_bucket_alloc () from
> /opt/httpd/lib/libaprutil.sl.0
> #1  0xc1bf8d18 in socket_bucket_read () from
> /opt/httpd/lib/libaprutil.sl.0
> #2  0x00129ffc in core_input_filter ()
> #3  0x0011a630 in ap_get_brigade ()
> #4  0x000bb26c in ap_http_filter ()
> #5  0x0011a630 in ap_get_brigade ()
> #6  0x0012999c in net_time_filter ()
> #7  0x0011a630 in ap_get_brigade ()
> #8  0x00092f3c in ap_proxy_http_process_response ()
> #9  0x000935e0 in ap_proxy_http_handler ()
> #10 0x0008484c in proxy_run_scheme_handler ()
> #11 0x0008259c in proxy_handler ()
> #12 0x000fdc40 in ap_run_handler ()
> #13 0x000fea04 in ap_invoke_handler ()
> #14 0x000c0d9c in ap_process_request ()
> #15 0x000b8348 in ap_process_http_connection ()
> #16 0x00115a00 in ap_run_process_connection ()
> #17 0x001160c0 in ap_process_connection ()
> #18 0x000fae00 in child_main ()
> #19 0x000fb0ac in make_child ()
> #20 0x000fb47c in perform_idle_server_maintenance ()
> #21 0x000fbc88 in ap_mpm_run ()
> #22 0x001079f0 in main ()
> (gdb)
> 
> The resources used by the process increase linearly until the maximum
> per process is reached, after which the crash occurs. Did we do an alloc
> without a free ?
> 
> Peter.

Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
Hi,

I started my server with MaxClients=1, started the download and attached
to the process with gdb. The process crashed; this is the trace :


vfsi3>gdb httpd 7840
GNU gdb 5.2.1
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you
are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "hppa2.0n-hp-hpux11.00"...
Attaching to program: /opt/httpd/bin/httpd, process 7840

warning: The shared libraries were not privately mapped; setting a
breakpoint in a shared library will not work until you rerun the
program.

Reading symbols from /opt/openssl/lib/libssl.sl.0.9.6...done.
Reading symbols from /opt/openssl/lib/libcrypto.sl.0.9.6...done.
Reading symbols from /opt/httpd/lib/libaprutil.sl.0...done.
Reading symbols from /opt/httpd/lib/libexpat.sl.1...done.
Reading symbols from /opt/httpd/lib/libapr.sl.0...done.
Reading symbols from /usr/lib/libnsl.1...done.
Reading symbols from /usr/lib/libxti.2...done.
Reading symbols from /usr/lib/libpthread.1...done.
Reading symbols from /usr/lib/libc.2...done.
Reading symbols from /usr/lib/libdld.2...done.
Reading symbols from /usr/lib/libnss_files.1...done.
Reading symbols from /usr/lib/libnss_nis.1...done.
Reading symbols from /usr/lib/libnss_dns.1...done.
0xc0115b68 in _select_sys () from /usr/lib/libc.2
(gdb) continue
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0xc1bfb06c in apr_bucket_alloc () from /opt/httpd/lib/libaprutil.sl.0
(gdb) where
#0  0xc1bfb06c in apr_bucket_alloc () from
/opt/httpd/lib/libaprutil.sl.0
#1  0xc1bf8d18 in socket_bucket_read () from
/opt/httpd/lib/libaprutil.sl.0
#2  0x00129ffc in core_input_filter ()
#3  0x0011a630 in ap_get_brigade ()
#4  0x000bb26c in ap_http_filter ()
#5  0x0011a630 in ap_get_brigade ()
#6  0x0012999c in net_time_filter ()
#7  0x0011a630 in ap_get_brigade ()
#8  0x00092f3c in ap_proxy_http_process_response ()
#9  0x000935e0 in ap_proxy_http_handler ()
#10 0x0008484c in proxy_run_scheme_handler ()
#11 0x0008259c in proxy_handler ()
#12 0x000fdc40 in ap_run_handler ()
#13 0x000fea04 in ap_invoke_handler ()
#14 0x000c0d9c in ap_process_request ()
#15 0x000b8348 in ap_process_http_connection ()
#16 0x00115a00 in ap_run_process_connection ()
#17 0x001160c0 in ap_process_connection ()
#18 0x000fae00 in child_main ()
#19 0x000fb0ac in make_child ()
#20 0x000fb47c in perform_idle_server_maintenance ()
#21 0x000fbc88 in ap_mpm_run ()
#22 0x001079f0 in main ()
(gdb)

The resources used by the process increase linearly until the maximum
per process is reached, after which the crash occurs. Did we do an alloc
without a free ?

Peter.
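
PS: if it really is an alloc without a free, the usual fix on the consuming
side is to delete each bucket as soon as its data has been used, instead of
letting them pile up for the lifetime of the request. A minimal sketch (not
taken from mod_proxy; bb is a brigade the caller has just filled):

/* Consume a brigade bucket-by-bucket, freeing as we go. */
apr_bucket *e;
const char *data;
apr_size_t len;

while (!APR_BRIGADE_EMPTY(bb)) {
    e = APR_BRIGADE_FIRST(bb);
    if (APR_BUCKET_IS_EOS(e)) {
        break;
    }
    if (apr_bucket_read(e, &data, &len, APR_BLOCK_READ) != APR_SUCCESS) {
        break;
    }
    /* ... forward or copy data[0..len) somewhere ... */
    apr_bucket_delete(e);   /* unlink and destroy: for a heap bucket this
                               drops the refcount and, at zero, calls the
                               free_func on the shared buffer */
}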

Re: Segmentation fault when downloading large files

Posted by Peter Van Biesen <pe...@vlafo.be>.
However, downloading a large file from the server itself ( not using the
proxy ) works fine ... so it's either a problem in the proxy or a timeout
somewhere ( locally it is a lot faster ).

Peter.

Dirk-Willem van Gulik wrote:
> 
> This looks like a filter issue I've seen before, but I could never quite
> reproduce it. You may want to take this to dev@httpd.apache.org, as this is
> most likely related to the filters in Apache, and not proxy specific.
> 
> Dw.
> 
> On Tue, 27 Aug 2002, Peter Van Biesen wrote:
> 
> > Hello,
> >
> > I'm using an Apache 2.0.39 on an HP-UX 11.0 system as a webserver/proxy.
> > When I try to download large files through the proxy, I get the
> > following error :
> >
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(109): proxy: HTTP:
> > canonicalising URL
> > //download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > [Tue Aug 27 11:44:08 2002] [debug] mod_proxy.c(442): Trying to run
> > scheme_handler against proxy
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(1051): proxy: HTTP:
> > serving URL
> > http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_http.c(221): proxy: HTTP
> > connecting
> > http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
> > to download.microsoft.com:80
> > [Tue Aug 27 11:44:08 2002] [debug] proxy_util.c(1164): proxy: HTTP: fam
> > 2 socket created to connect to vlafo3.vlafo.be
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(370): proxy: socket is
> > connected
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(404): proxy: connection
> > complete to 193.190.145.66:80 (vlafo3.vlafo.be)
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Date: Tue, 27 Aug 2002 09:44:09 GMT
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Server: Microsoft-IIS/5.0
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Content-Type: application/octet-stream
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Accept-Ranges: bytes
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Last-Modified: Tue, 23 Jul 2002 16:23:09 GMT
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = ETag: "f2138b3b6532c21:8f9"
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Via: 1.1 download.microsoft.com
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_util.c(444): proxy: headerline
> > = Transfer-Encoding: chunked
> > [Tue Aug 27 11:44:09 2002] [debug] proxy_http.c(893): proxy: start body
> > send
> > [Tue Aug 27 11:57:45 2002] [notice] child pid 7099 exit signal
> > Segmentation fault (11)
> >
> > I'm sorry for the example ... ;-))
> >
> > Anyway, I've tried on several machines that are configured differently (
> > swap, memory ), but the download always stops around 70 MB. Does anybody
> > have an idea what's wrong ? Is there a core I could gdb ( I didn't find
> > any ) ?
> >
> > Thanks !
> >
> > Peter.
> >