Posted to dev@httpd.apache.org by Marc Slemko <ma...@znep.com> on 1999/11/10 20:03:24 UTC

Re: cvs commit: apache-1.3/conf highperformance.conf-dist httpd.conf-dist

On 20 Apr 1999 jim@hyperreal.org wrote:

>   1.41      +8 -2      apache-1.3/conf/httpd.conf-dist
>   
>   Index: httpd.conf-dist
>   ===================================================================
>   RCS file: /export/home/cvs/apache-1.3/conf/httpd.conf-dist,v
>   retrieving revision 1.40
>   retrieving revision 1.41
>   diff -u -r1.40 -r1.41
>   --- httpd.conf-dist	1999/04/20 18:03:09	1.40
>   +++ httpd.conf-dist	1999/04/20 21:40:59	1.41
>   @@ -158,9 +158,15 @@
>    # as to avoid problems after prolonged use when Apache (and maybe the
>    # libraries it uses) leak memory or other resources.  On most systems, this
>    # isn't really needed, but a few (such as Solaris) do have notable leaks
>   -# in the libraries.
>   +# in the libraries. For these platforms, set to something like 10000
>   +# or so; a setting of 0 means unlimited.
>    #
>   -MaxRequestsPerChild 30
>   +# NOTE: This value does not include keepalive requests after the initial
>   +#       request per connection. For example, if a child process handles
>   +#       an initial request and 10 subsequent "keptalive" requests, it
>   +#       would only count as 1 request towards this limit.
>   +#
>   +MaxRequestsPerChild 0

-1

Yeah, it has been a while, but I never noticed this before, and I really
really really do not like it.  There is absolutely no gain from setting it
to unlimited.

Sure, 30 is too low.  But there are lots of platforms and situations where
setting it to unlimited is asking for trouble.  Over the past few months,
I have seen a significantly increased number of situations where people
were complaining about httpds eating memory on them; when investigated, it
turned out they had MaxRequestsPerChild set to 0 so they weren't getting
killed ever and were causing problems due to some bug somewhere, likely in
the OS or in third party libraries.

From a performance standpoint, the difference between having it set to a
reasonably high number (say 1000) and having it set to 0 is negligible.
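A minimal httpd.conf sketch of that middle ground, assuming 1000 as the
"reasonably high" value (the exact number here is only an illustration):

    # Recycle each child after a bounded number of requests so that slow
    # leaks in the OS or in third-party libraries cannot accumulate forever.
    # 0 (unlimited) only hides such leaks until the machine runs out of memory.
    MaxRequestsPerChild 1000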

From a usability standpoint, this is the sort of tuning that makes Apache less
"drop in and work" and contributes to a lot of problems that end up
requiring special tuning to keep the system from falling over.

Also, I really don't think that the config file is the place to explain
semantic things like "oh, this is counted in this particular manner,
etc.".  That should be in the documentation, and shouldn't be cluttering
up the config file.  The config file should be readable without being full
of stuff that is extraneous to 99.9% of the users.

The only reason I noticed it now was when I started looking into why all
these people were having problems with httpds growing to horrendous
sizes.  This is a real problem that people are having.


Re: cvs commit: apache-1.3/conf highperformance.conf-dist httpd.conf-dist

Posted by Peter Galbavy <Pe...@knowledge.com>.
On Fri, Nov 12, 1999 at 08:07:22AM +0000, Peter Galbavy wrote:
> Is there any metrix rather than "number of requests" that could be

I seem to be in a daze this morning. Metrix was a great product in
its day, but I of course meant "metrics" throughout.

-- 
Peter Galbavy
Knowledge Matters Ltd
http://www.knowledge.com/

Re: cvs commit: apache-1.3/conf highperformance.conf-dist httpd.conf-dist

Posted by Peter Galbavy <Pe...@knowledge.com>.
On Fri, Nov 12, 1999 at 02:01:57AM -0600, Manoj Kasichainula wrote:
> +1 on making MaxRequestsPerChild default to something like 1000 or
> 10000.

Is there any metrix rather than "number of requests" that could be
used in a future version of Apache - such as rusage limits? CPU time,
data-size etc. 

Maybe the semantics should be "if any one of these limits is exceeded,
die after end of next request". With enough logging and such, an admin
could, after a while, easily tune the limits to local requirements.

As a mod_perl user, for example, I find memory usage a very bad metrix
to use, but CPU usage may be a good one. I don't know.

?? "MaxResourcePerChild data-size-mb cpu-seconds requests" ??

Regards,
-- 
Peter Galbavy
Knowledge Matters Ltd
http://www.knowledge.com/

Re: cvs commit: apache-1.3/conf highperformance.conf-dist httpd.conf-dist

Posted by Manoj Kasichainula <ma...@io.com>.
On Wed, Nov 10, 1999 at 12:03:24PM -0700, Marc Slemko wrote:
> Sure, 30 is too low.  But there are lots of platforms and situations where
> setting it to unlimited is asking for trouble.  Over the past few months,
> I have seen a significantly increased number of situations where people
> were complaining about httpds eating memory on them; when investigated, it
> turned out they had MaxRequestsPerChild set to 0 so they weren't getting
> killed ever and were causing problems due to some bug somewhere, likely in
> the OS or in third party libraries.
> 
> From a performance standpoint, the difference between having it set to a
> reasonably high number (say 1000) and having it set to 0 is negligible.

+1 on making MaxRequestsPerChild default to something like 1000 or
10000.

-- 
Manoj Kasichainula - manojk at io dot com - http://www.io.com/~manojk/
"if it ain't broke, break it." -- Ray Brune