Posted to dev@httpd.apache.org by Graham Leggett <mi...@sharp.fm> on 2002/09/01 18:03:19 UTC

Re: Segmentation fault when downloading large files

Peter Van Biesen wrote:

> I now have a reproducible error, an httpd which I can recompile ( it's
> still a 2.0.39 ), so, if anyone wants me to test something, shoot! Btw,
> I've seen in the code of ap_proxy_http_request that the variable e is
> used many times but I can't seem to find a free somewhere ...

This may be part of the problem. In APR, memory is allocated from a pool 
of memory and then freed in one go. In this case there is one pool per 
request, which is only freed when the request is complete. But during 
the request, 100MB of data is transferred, resulting in buckets that are 
allocated but not freed (yet). The machine runs out of memory and that 
process segfaults.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."


Re: Segmentation fault when downloading large files

Posted by Graham Leggett <mi...@sharp.fm>.
Brian Pane wrote:

> But the memory involved here ought to be in buckets (which can
> be freed long before the entire request is done).
> 
> In 2.0.39 and 2.0.40, the content-length filter's habit of
> buffering the entire response would keep the httpd from freeing
> buckets incrementally during the request.  That particular
> problem is gone in the latest 2.0.41-dev CVS head.  If the
> segfault problem still exists in 2.0.41-dev, we need to take
> a look at whether there's any buffering in the proxy code that
> can be similarly fixed.

The proxy code doesn't buffer anything; it basically goes "get a bucket 
from the backend stack, put the bucket to the frontend stack, clean up 
the bucket, repeat".

There are some filters (like include, I think) that "put away" buckets as 
the response is handled; it is possible one of these filters is also 
causing a "leak".

Regards,
Graham


The best hook ?

Posted by Estrade Matthieu <es...@ifrance.com>.
Hi,

I would like to know the best hook with which to register my module's 
init function. In this function, my aim is to:

1- Open files
2- Read the data
3- Put this data in structures (allocating memory)
4- Make all child processes able to read/modify all this data.


I registered my hook with ap_hook_post_config.
Do you think that's the best way?

When I set MaxRequestsPerChild 10000, my child processes restart.

I allocate memory for the structures in my init function with calloc, 
because I am unable to use the apr_pool_t in subfunctions.

When a child restarts, does it call the post_config function? I ask 
because I see debug messages like the ones printed when Apache 
initialises the module.


Best regards

Estrade Matthieu






Re: Segmentation fault when downloading large files

Posted by Brian Pane <br...@cnet.com>.
Graham Leggett wrote:

> Peter Van Biesen wrote:
>
>> I now have a reproducible error, an httpd which I can recompile ( it's
>> still a 2.0.39 ), so, if anyone wants me to test something, shoot! Btw,
>> I've seen in the code of ap_proxy_http_request that the variable e is
>> used many times but I can't seem to find a free somewhere ...
>
>
> This may be part of the problem. In APR, memory is allocated from a 
> pool of memory and then freed in one go. In this case there is one 
> pool per request, which is only freed when the request is complete. 
> But during the request, 100MB of data is transferred, resulting in 
> buckets that are allocated but not freed (yet). The machine runs out 
> of memory and that process segfaults. 


But the memory involved here ought to be in buckets (which can
be freed long before the entire request is done).

In 2.0.39 and 2.0.40, the content-length filter's habit of
buffering the entire response would keep the httpd from freeing
buckets incrementally during the request.  That particular
problem is gone in the latest 2.0.41-dev CVS head.  If the
segfault problem still exists in 2.0.41-dev, we need to take
a look at whether there's any buffering in the proxy code that
can be similarly fixed.

Brian