Posted to dev@httpd.apache.org by George Carrette <Ge...@iacnet.com> on 1997/07/09 13:53:31 UTC

Re: memory allocations stuff (question)

stanleyg@cs.bu.edu (Stanley Gambarin)  asks:
>Second: would it be reasonable to provide a limit on the maximum amount
>of memory that the server may allocate (this may prevent infinite loops
>from taking the whole machine down with them)?  The server could just exit
>when the maximum amount of memory is reached (configurable at runtime).

If you apply the patches at http://cpartner.iacnet.com/apache/
then the RLimitMEM configuration parameter, used in the global
context of httpd.conf, takes effect for all children of the initial httpd
process.

Currently the RLimitMEM parameter applies only to CGI scripts
and server-side-include exec statements.

When a process reaches its RLimitMEM limit, malloc returns NULL.
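
For illustration, this is roughly what such a limit amounts to at the OS
level (a minimal sketch, not the patch itself; RLIMIT_DATA and the 8 MB
figure are placeholders, and which rlimit actually constrains malloc
varies by platform):

    /* Minimal sketch, not the actual patch: set a memory rlimit and
     * watch malloc start returning NULL once it is exceeded. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        char *p;

        rl.rlim_cur = 8 * 1024 * 1024;   /* soft limit, in bytes (placeholder) */
        rl.rlim_max = 8 * 1024 * 1024;   /* hard limit */
        if (setrlimit(RLIMIT_DATA, &rl) < 0) {
            perror("setrlimit");
            return 1;
        }

        /* The process is not killed when it hits the limit; malloc just
         * returns NULL, so every caller has to check for that. */
        p = malloc(16 * 1024 * 1024);
        if (p == NULL)
            fprintf(stderr, "allocation refused: over the configured limit\n");
        else
            free(p);
        return 0;
    }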

I've been asked to invent new config parameter names before these
patches can be applied to the Apache sources.  I haven't been able to
think of a good name, and nobody has suggested one; my own preference
would be to clean up the documentation so that it is more carefully
worded and accurate.




Re: memory allocations stuff (question)

Posted by Dean Gaudet <dg...@arctic.org>.
Limits set that way are global; what the code below is trying to do is
institute a per-request CPU limit.

Dean

On Wed, 9 Jul 1997, Brian Behlendorf wrote:

> 
> In the interests of avoiding having the code do what the OS can...
> 
> If we had a good "start_apache" shell script (like what most of us with
> SVR4 systems have done by hand anyway for /etc/rc.d/), could we not put a
> few good "set limit ...." calls into it?  Among some other things I can
> think of... is there a reason this wouldn't be a good idea?
> 
> 	Brian
> 
> At 10:15 AM 7/9/97 -0700, you wrote:
> >This is what I was hoping for:
> >
> >    for(;;) {
> >	begin new request (read_request_line)
> >	getrusage();
> >	calculate usage + max_usage_per_request
> >	setrlimit();
> >	do the request
> >    }
> >
> >That really only applies to the CPU usage limit though.  A memory limit
> >doesn't need to be recalculated per request.
> >
> >As for directive names... how about ServerRLimitXXX.
> >
> >It's too bad that there's no way to catch a signal when the soft limits
> >are reached and respond with a 5XX error.
> 
> --=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
> "Why not?" - TL           brian@organic.com - hyperreal.org - apache.org
> 


Re: memory allocations stuff (question)

Posted by Brian Behlendorf <br...@organic.com>.
In the interests of avoiding having the code do what the OS can...

If we had a good "start_apache" shell script (like what most of us with
SVR4 systems have done by hand anyway for /etc/rc.d/), could we not put a
few good "set limit ...." calls into it?  Among some other things I can
think of... is there a reason this wouldn't be a good idea?
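
For concreteness, here is the equivalent of those limit calls written as
a tiny C wrapper instead of a shell script (the httpd path and the 32 MB
figure are placeholders); whatever limits are in force when httpd starts
are inherited by every child it forks:

    /* Illustration only: what the "limit ..." lines in a start_apache
     * script boil down to: set the limits, then exec httpd so that
     * every child inherits them. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        rl.rlim_cur = rl.rlim_max = 32 * 1024 * 1024;  /* 32 MB data segment */
        if (setrlimit(RLIMIT_DATA, &rl) < 0)
            perror("setrlimit(RLIMIT_DATA)");

        execl("/usr/local/etc/httpd/httpd", "httpd", (char *)NULL);
        perror("execl");                               /* reached only on failure */
        return 1;
    }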

	Brian

At 10:15 AM 7/9/97 -0700, you wrote:
>This is what I was hoping for:
>
>    for(;;) {
>	begin new request (read_request_line)
>	getrusage();
>	calculate usage + max_usage_per_request
>	setrlimit();
>	do the request
>    }
>
>That really only applies to the CPU usage limit though.  A memory limit
>doesn't need to be recalculated per request.
>
>As for directive names... how about ServerRLimitXXX.
>
>It's too bad that there's no way to catch a signal when the soft limits
>are reached and respond with a 5XX error.

--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
"Why not?" - TL           brian@organic.com - hyperreal.org - apache.org

Re: memory allocations stuff (question)

Posted by Dean Gaudet <dg...@arctic.org>.
This is what I was hoping for:

    for(;;) {
	begin new request (read_request_line)
	getrusage();
	calculate usage + max_usage_per_request
	setrlimit();
	do the request
    }

That really only applies to the CPU usage limit though.  A memory limit
doesn't need to be recalculated per request.
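
Fleshed out a little, the per-request cap might look something like this
(a rough sketch, not working server code; MAX_CPU_PER_REQUEST is a
placeholder, and the function would be called right after reading each
request line):

    /* Rough sketch of the per-request CPU cap from the loop above. */
    #include <sys/time.h>
    #include <sys/resource.h>

    #define MAX_CPU_PER_REQUEST 30              /* placeholder: CPU seconds */

    static void cap_cpu_for_next_request(void)
    {
        struct rusage ru;
        struct rlimit rl;
        long used;

        getrusage(RUSAGE_SELF, &ru);            /* CPU consumed so far */
        used = ru.ru_utime.tv_sec + ru.ru_stime.tv_sec;

        getrlimit(RLIMIT_CPU, &rl);
        rl.rlim_cur = (rlim_t)used + MAX_CPU_PER_REQUEST;  /* new soft limit */
        if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;          /* stay under the hard limit */
        setrlimit(RLIMIT_CPU, &rl);
    }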

As for directive names... how about ServerRLimitXXX.

It's too bad that there's no way to catch a signal when the soft limits
are reached and respond with a 5XX error.

Dean

On Wed, 9 Jul 1997, George Carrette wrote:

> stanleyg@cs.bu.edu (Stanley Gambarin)  asks:
> >Second: would it be reasonable to provide a limit on the maximum amount
> >of memory that the server may allocate (this may prevent infinite loops
> >from taking the whole machine down with them)?  The server could just exit
> >when the maximum amount of memory is reached (configurable at runtime).
> 
> If you apply the patches at http://cpartner.iacnet.com/apache/
> then the RLimitMEM configuration parameter, used in the global
> context of httpd.conf, takes effect for all children of the initial httpd
> process.
> 
> Currently the RLimitMEM parameter applies only to CGI scripts
> and server-side-include exec statements.
> 
> When a process reaches its RLimitMEM limit, malloc returns NULL.
> 
> I've been asked to invent new config parameter names before these
> patches can be applied to the Apache sources.  I haven't been able to
> think of a good name, and nobody has suggested one; my own preference
> would be to clean up the documentation so that it is more carefully
> worded and accurate.
> 
> 
> 
>