Posted to modperl@perl.apache.org by Joshua Chamas <jo...@chamas.com> on 2001/01/09 08:41:48 UTC

Apache::SizeLimit for unshared RAM ???

Hey,

I like the idea of Apache::SizeLimit, to no longer worry about
setting MaxRequestsPerChild.  That just seems smart, and might
get maximum usage out of each Apache child.
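
For reference, the usual setup is just a few lines - sizes are in KB, and
the variable names here are from the SizeLimit docs of this era, so
double-check them against your installed version:

    # startup.pl
    use Apache::SizeLimit;
    $Apache::SizeLimit::MAX_PROCESS_SIZE       = 12000;  # kill past ~12 MB
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;      # check every 2nd request

    # httpd.conf:
    # PerlFixupHandler Apache::SizeLimit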

What I would like to see though is instead of killing the 
child based on VmRSS on Linux, which seems to be the apparent
size of the process in virtual memory RAM, I would like to
kill it based on the amount of unshared RAM, which is ultimately
what we care about.
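
On Linux that number is sitting right there in /proc.  A minimal sketch
of where it would come from (assuming 4 KB pages; see proc(5) for the
statm fields):

    sub unshared_kb {
        open my $fh, '<', "/proc/$$/statm" or return;
        # fields are in pages: size resident shared text lib data dt
        my ($size, $resident, $shared) = split ' ', scalar <$fh>;
        close $fh;
        return ($resident - $shared) * 4;   # pages -> KB on 4 KB pages
    }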

Here's why: any time I add a new module to the code base, 
I am going to grow the RAM of all processes when I preload
them with PerlModule or use in startup.pl, but I DON'T CARE
about those, because they are shared, right?  Problem is
I do care, because I have to retweak the Apache::SizeLimit
setting every time my code base grows, since the RAM of each
process just grew at the post-fork baseline.

I guess you could say, SO WHAT!, get over it, but it seems
like there should be a better way.  * Dreamy *

-- Josh

Re: Apache::SizeLimit for unshared RAM ???

Posted by Perrin Harkins <pe...@primenet.com>.
On Tue, 9 Jan 2001, Joshua Chamas wrote:

> Perrin Harkins wrote:
> > 
> > We added that in, but haven't contributed a patch back because our hack only
> > works on Linux.  It's actually pretty simple, since the data is already
> > there on Linux and you don't need to do any special tricks with remembering
> > the child init size.  If you think it would help, I'll try to get an okay to
> > release a patch for it.
> > 
> > This is definitely a better way to do it than by setting max size or min
> > shared size.  We had a dramatic improvement in process lifespan after
> > changing it.
> > 
> 
> I would like to see this, but how is it better than the min 
> shared size of Apache::GTopLimit

It's like this:
What you want to control is the maximum REAL memory that each process will
take.  That's not max size or min shared, it's max unshared.  If you try
to control this using the traditional max size and min shared settings,
processes often get killed too soon because it's hard to predict how much
of the max size will be shared in any given child.

Doing it this way also means you never have to adjust the settings when
you add or remove modules.  The thing you care about - how much actual
RAM is used per process - is constant.
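
To make that concrete, a check like this is all it would take on Linux.
This is a sketch reconstructed from the description, not our actual
patch; it assumes 4 KB pages and a made-up 10 MB budget:

    package My::UnsharedLimit;
    use Apache::Constants qw(OK);

    my $MAX_UNSHARED_KB = 10_000;   # assumed per-child budget

    sub handler {
        my $r = shift;
        open my $fh, '<', "/proc/$$/statm" or return OK;
        my ($size, $resident, $shared) = split ' ', scalar <$fh>;
        close $fh;
        # statm counts pages; unshared is resident minus shared
        if (($resident - $shared) * 4 > $MAX_UNSHARED_KB) {
            $r->child_terminate;   # exit after this request, not mid-request
        }
        return OK;
    }
    1;

Install it as a PerlFixupHandler and the child dies cleanly at the end
of whatever request pushes it over the budget.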

> On the other hand, it seems nice to NOT HAVE to install libgtop for
> this feature, as Apache::SizeLimit is just a raw perl module.  

That's the main drawback to GTopLimit.

- Perrin

Re: Apache::SizeLimit for unshared RAM ???

Posted by Joshua Chamas <jo...@chamas.com>.
Perrin Harkins wrote:
> 
> We added that in, but haven't contributed a patch back because our hack only
> works on Linux.  It's actually pretty simple, since the data is already
> there on Linux and you don't need to do any special tricks with remembering
> the child init size.  If you think it would help, I'll try to get an okay to
> release a patch for it.
> 
> This is definitely a better way to do it than by setting max size or min
> shared size.  We had a dramatic improvement in process lifespan after
> changing it.
> 

I would like to see this, but how is it better than the min 
shared size of Apache::GTopLimit ... I'm feeling a bit slow
to be missing this point.  On the other hand, it seems nice
to NOT HAVE to install libgtop for this feature, as 
Apache::SizeLimit is just a raw perl module.  Sometimes 
when you are trying to get things right, the less new code
the better!

-- Josh

Re: Apache::SizeLimit for unshared RAM ???

Posted by Perrin Harkins <pe...@primenet.com>.
> What I would like to see though is instead of killing the
> child based on VmRSS on Linux, which seems to be the apparent
> size of the process in virtual memory RAM, I would like to
> kill it based on the amount of unshared RAM, which is ultimately
> what we care about.

We added that in, but haven't contributed a patch back because our hack only
works on Linux.  It's actually pretty simple, since the data is already
there on Linux and you don't need to do any special tricks with remembering
the child init size.  If you think it would help, I'll try to get an okay to
release a patch for it.

This is definitely a better way to do it than by setting max size or min
shared size.  We had a dramatic improvement in process lifespan after
changing it.

- Perrin


Re: Apache::SizeLimit for unshared RAM ???

Posted by Buddy Lee Haystack <ha...@email.rentzone.org>.
IMHO, he has a point. I'd also benefit from limiting memory usage against
a whole-server threshold like that, rather than tuning it per process. It's
a good idea...



Rob Bloodgood wrote:
> I have a machine w/ 512MB of ram.
> unload the webserver, see that I have, say, 450MB free.
> So I would like to tell apache that it is allowed to use at most 425MB.
> L8r,
> Rob

-- 
www.RentZone.org

RE: Apache::SizeLimit for unshared RAM ???

Posted by Stas Bekman <st...@stason.org>.
On Tue, 9 Jan 2001, Rob Bloodgood wrote:

> > > I like the idea of Apache::SizeLimit, to no longer worry about
> > > setting MaxRequestsPerChild.  That just seems smart, and might
> > > get maximum usage out of each Apache child.
> > >
> > > What I would like to see though is instead of killing the
> > > child based on VmRSS on Linux, which seems to be the apparent
> > > size of the process in virtual memory RAM, I would like to
> > > kill it based on the amount of unshared RAM, which is ultimately
> > > what we care about.
> >
> > It has existed for a long time: Apache::GTopLimit - if you have GTop,
> > of course.  And it's covered in the guide, including all the calculations
> > of the real memory used (the same ones Apache::VMonitor uses).
>
> So, forgive me for not "getting it," but is there a way to do this without
> endless retries and experimentation?  It seems to me that limiting on
> per-child size usage is silly (even tho I'm sure it's what is available at
> the programming level).
>
> I mean,
> I have a machine w/ 512MB of ram.
> unload the webserver, see that I have, say, 450MB free.
> So I would like to tell apache that it is allowed to use at most 425MB.
>
> It's not out there as far as I can find.
>
> So far all I've been able to find is:
> Run your service for awhile.
> Do some math and guesswork about size/totals/available.
> Run it again.
> Recheck your math.
> Use (per-process limiting module).
> Pray that your processes never grow because of rarely used
> functionality/peak usage/larger than usual queries ...
>
> because then all of your hard work before goes RIGHT out the window, and I'm
> talking about a 10-15 MB difference between JUST FINE and DEATH SPIRAL,
> because we've now just crossed that horrible, horrible threshold of (say it
> quietly now) swapping! <shudder>
>
> Have I jumped to the wrong conclusion?  Is there a module (or usage) I've
> missed?  Somehow I doubt I'm the only one who sees the problem in these
> terms... has anybody seen the SOLUTION in these terms??

it's all explained here:
http://perl.apache.org/guide/performance.html#Choosing_MaxClients

Using GTopLimit you set the upper and lower memory boundaries, which
allows you to calculate the optimal MaxClients for a given amount of
memory.  And it'll never go over this size (well, maybe by a few MB for a
few seconds while a process finishes its current request, since the
killing happens at the end of the request).
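
The back-of-envelope version, using Rob's 425 MB figure and made-up
per-child numbers (substitute your own measurements from GTop or
Apache::VMonitor):

    my $ram_for_apache = 425 * 1024;  # KB Rob will give to Apache
    my $shared_base    =  20 * 1024;  # KB of the shared copy, counted once
    my $max_unshared   =  10 * 1024;  # KB unshared cap enforced per child
    my $max_clients    = int( ($ram_for_apache - $shared_base) / $max_unshared );
    print "MaxClients $max_clients\n";  # => 40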


_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



RE: Apache::SizeLimit for unshared RAM ???

Posted by Perrin Harkins <pe...@primenet.com>.
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> > It's not a hard limit, and I actually only have it check on every other
> > request.  We do use hard limits with BSD::Resource to set maximums on CPU
> > and RAM, in case something goes totally out of control.  That's just a
> > safety though.
> 
> <chokes> JUST a safety, huh? :-)

Why is that surprising?  We had a dev server get into a tight loop once
and use up all the CPU.  We fixed that problem, but wanted to be sure that
a similar problem couldn't take down a production server.

> since I never saw a worthwhile resolution to the thread "the edge of chaos,"

The problem of how to get a web server to still provide some service when
it's overwhelmed by traffic is pretty universal.  It's not exactly a
mod_perl problem.  Ultimately you can't fit 10 pounds of traffic in a 5
pound web server, so you have to improve performance or deny service to
some users.

> In a VERY busy mod_perl environment (and I'm taking 12.1M hits/mo right
> now), which has the potential to melt VERY badly if something hiccups (like,
> the DB gets locked into a transaction that holds up all MaxClient httpd
> processes, and YES it's happened more than once in the last couple of
> weeks),
> 
> What specific modules/checks/balances would you install into your webserver
> to prevent such a melt from killing a box?

The things I already mentioned prevent the box from running out of memory.  
Your web service can still become unresponsive if it depends on a shared
resource and that resource becomes unavailable (database down, etc.).  
You can put timers on your calls to those resources so that mod_perl will
continue if they're hung, but it's still useless if you've got to have the
database.  If there's a particularly flaky resource that is only used in
part of your application, you could segregate that on its own mod_perl
server so that it won't bring anything else down with it, but the
usefulness of this approach depends a lot on the situation.
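
A timer around a resource call might look like this - a sketch only: the
DSN and query are placeholders, and alarm() can't interrupt every DBI
driver mid-call, so test it against yours:

    use DBI;

    my $dbh = DBI->connect('dbi:Oracle:prod', 'user', 'pass',
                           { RaiseError => 1 });
    my $rows = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm 10;                                  # 10-second budget
        my $r = $dbh->selectall_arrayref('SELECT ...');
        alarm 0;                                   # cancel the timer
        $r;
    };
    alarm 0;                                       # in case eval died early
    if (!defined $rows) {
        die $@ unless $@ eq "timeout\n";
        # the resource hung: log it and serve a degraded page
    }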

- Perrin


RE: Apache::SizeLimit for unshared RAM ???

Posted by Rob Bloodgood <ro...@empire2.com>.
> On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> > OK, so my next question about per-process size limits is this:
> > Is it a hard limit???
> >
> > As in,
> > what if I alloc 10MB per process and every now & then one of my
> > processes spikes to a (not unreasonable) 11MB?  Will it be nuked in
> > mid-process?  Or just instructed to die at the end of the current request?
>
> It's not a hard limit, and I actually only have it check on every other
> request.  We do use hard limits with BSD::Resource to set maximums on CPU
> and RAM, in case something goes totally out of control.  That's just a
> safety though.

<chokes> JUST a safety, huh? :-)
Alright, then to you and the mod_perl community in general,
since I never saw a worthwhile resolution to the thread "the edge of chaos,"

In a VERY busy mod_perl environment (and I'm taking 12.1M hits/mo right
now), which has the potential to melt VERY badly if something hiccups (like,
the DB gets locked into a transaction that holds up all MaxClient httpd
processes, and YES it's happened more than once in the last couple of
weeks),

What specific modules/checks/balances would you install into your webserver
to prevent such a melt from killing a box?

Red Hat Linux release 6.1 (Cartman)
Kernel 2.2.16-3smp on an i686
login: Out of memory for httpd

Out of memory for httpd

Out of memory for httpd

Out of memory for httpd
root

Out of memory for mingetty

Out of memory for httpd

Out of memory for httpd
<sigh>
<reset>

...and before the comments about client/server/DBA/caching/proxy/loadbalance
design start flying, I *know*!  I'm working on it right now, but for right
now I have what I have and I'm trying to keep it alive for just a little
longer until the real fix is done. :-)

TIA!

L8r,
Rob


RE: Apache::SizeLimit for unshared RAM ???

Posted by Perrin Harkins <pe...@primenet.com>.
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> OK, so my next question about per-process size limits is this:
> Is it a hard limit???
> 
> As in,
> what if I alloc 10MB per process and every now & then one of my processes
> spikes to a (not unreasonable) 11MB?  Will it be nuked in mid-process?  Or just
> instructed to die at the end of the current request?

It's not a hard limit, and I actually only have it check on every other
request.  We do use hard limits with BSD::Resource to set maximums on CPU
and RAM, in case something goes totally out of control.  That's just a
safety though.
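
The safety itself is just a couple of setrlimit() calls from
BSD::Resource, run once per child.  The numbers here are made up, not our
production values, and RLIMIT_AS availability depends on your platform:

    # e.g. from a PerlChildInitHandler
    use BSD::Resource;
    setrlimit(RLIMIT_CPU, 360, 360)                 # CPU seconds, then SIGXCPU
        or warn "couldn't set RLIMIT_CPU: $!";
    setrlimit(RLIMIT_AS, 64 * 2**20, 64 * 2**20)    # 64 MB address space
        or warn "couldn't set RLIMIT_AS: $!";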

- Perrin


RE: Apache::SizeLimit for unshared RAM ???

Posted by Rob Bloodgood <ro...@empire2.com>.
> > because then all of your hard work before goes RIGHT out the window,
> > and I'm talking about a 10-15 MB difference between JUST FINE and
> > DEATH SPIRAL, because we've now just crossed that horrible, horrible
> > threshold of (say it quietly now) swapping! <shudder>
>
> That won't happen if you use a size limit and MaxClients.  The worst that
> can happen is processes will be killed too quickly, which will drive
> the load up.  Yes, that would be bad, but probably not as bad as swapping.

OK, so my next question about per-process size limits is this:
Is it a hard limit???

As in,
what if I alloc 10MB per process and every now & then one of my processes
spikes to a (not unreasonable) 11MB?  Will it be nuked in mid-process?  Or just
instructed to die at the end of the current request?


RE: Apache::SizeLimit for unshared RAM ???

Posted by Perrin Harkins <pe...@primenet.com>.
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> I have a machine w/ 512MB of ram.
> unload the webserver, see that I have, say, 450MB free.
> So I would like to tell apache that it is allowed to use at most 425MB.

I was thinking about that at some point too.  The catch is, different
applications have different startup costs per child.  If, for example,
each child ends up caching a bunch of stuff in RAM, compiling some
templates, etc. you may get better performance by running a lower
MaxClients and letting each child use more unshared RAM, so that they will
live longer.  On the other hand, some apps have very low ramp up per
child, and don't cache much of anything except the RAM allocated for
lexical variables.  Those might scale better by running more clients and
keeping them smaller.  You kind of have to try it to know.

The only drawback of per-process limiting is that your server could be
performing better when fewer than MaxClients processes are running.  It
will be killing off child processes when it isn't really necessary because
you're miles from MaxClients.  Not that big of a deal, but unfortunate.

> because then all of your hard work before goes RIGHT out the window,
> and I'm talking about a 10-15 MB difference between JUST FINE and
> DEATH SPIRAL, because we've now just crossed that horrible, horrible
> threshold of (say it quietly now) swapping! <shudder>

That won't happen if you use a size limit and MaxClients.  The worst that
can happen is processes will be killed too quickly, which will drive
the load up.  Yes, that would be bad, but probably not as bad as swapping.

- Perrin


RE: Apache::SizeLimit for unshared RAM ???

Posted by Rob Bloodgood <ro...@empire2.com>.
> > I like the idea of Apache::SizeLimit, to no longer worry about
> > setting MaxRequestsPerChild.  That just seems smart, and might
> > get maximum usage out of each Apache child.
> >
> > What I would like to see though is instead of killing the
> > child based on VmRSS on Linux, which seems to be the apparent
> > size of the process in virtual memory RAM, I would like to
> > kill it based on the amount of unshared RAM, which is ultimately
> > what we care about.
>
> It has existed for a long time: Apache::GTopLimit - if you have GTop,
> of course.  And it's covered in the guide, including all the calculations
> of the real memory used (the same ones Apache::VMonitor uses).

So, forgive me for not "getting it," but is there a way to do this without
endless retries and experimentation?  It seems to me that limiting on
per-child size usage is silly (even tho I'm sure it's what is available at
the programming level).

I mean,
I have a machine w/ 512MB of ram.
unload the webserver, see that I have, say, 450MB free.
So I would like to tell apache that it is allowed to use at most 425MB.

It's not out there as far as I can find.

So far all I've been able to find is:
Run your service for awhile.
Do some math and guesswork about size/totals/available.
Run it again.
Recheck your math.
Use (per-process limiting module).
Pray that your processes never grow because of rarely used
functionality/peak usage/larger than usual queries ...

because then all of your hard work before goes RIGHT out the window, and I'm
talking about a 10-15 MB difference between JUST FINE and DEATH SPIRAL,
because we've now just crossed that horrible, horrible threshold of (say it
quietly now) swapping! <shudder>

Have I jumped to the wrong conclusion?  Is there a module (or usage) I've
missed?  Somehow I doubt I'm the only one who sees the problem in these
terms... has anybody seen the SOLUTION in these terms??

L8r,
Rob


Re: Apache::SizeLimit for unshared RAM ???

Posted by Stas Bekman <st...@stason.org>.
On Mon, 8 Jan 2001, Joshua Chamas wrote:

> Hey,
>
> I like the idea of Apache::SizeLimit, to no longer worry about
> setting MaxRequestsPerChild.  That just seems smart, and might
> get maximum usage out of each Apache child.
>
> What I would like to see though is instead of killing the
> child based on VmRSS on Linux, which seems to be the apparent
> size of the process in virtual memory RAM, I would like to
> kill it based on the amount of unshared RAM, which is ultimately
> what we care about.

It has existed for a long time: Apache::GTopLimit - if you have GTop,
of course.  And it's covered in the guide, including all the calculations
of the real memory used (the same ones Apache::VMonitor uses).
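
A sketch of the setup, using the knobs the guide documents (sizes in KB;
the numbers are examples, not recommendations - verify the variable names
against your Apache::GTopLimit version):

    # startup.pl
    use Apache::GTopLimit;
    $Apache::GTopLimit::MAX_PROCESS_SIZE        = 20000; # kill past 20 MB total
    $Apache::GTopLimit::MIN_PROCESS_SHARED_SIZE = 8000;  # kill if sharing drops below 8 MB

    # httpd.conf:
    # PerlFixupHandler Apache::GTopLimit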

> Here's why: any time I add a new module to the code base,
> I am going to grow the RAM of all processes when I preload
> them with PerlModule or use in startup.pl, but I DON'T CARE
> about those, because they are shared, right?  Problem is
> I do care, because I have to retweak the Apache::SizeLimit
> setting every time my code base grows, since the RAM of each
> process just grew at the post-fork baseline.
>
> I guess you could say, SO WHAT!, get over it, but it seems
> like there should be a better way.  * Dreamy *
>
> -- Josh
>



_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/