Posted to dev@httpd.apache.org by Jeffrey Burgoyne <bu...@keenuh.com> on 2005/03/01 00:24:47 UTC

Re: Puzzling News

I can go even one step further. 255 servers, 2.5 GB of RAM, huge config
(200 virtual hosts, 1500 redirect rules, 2000 rewrite rules, 300 proxy
rules), and I never go into swap using prefork.

Mind you, no PHP, and that helps significantly.
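For reference, that kind of setup comes down to a handful of prefork
directives. A rough sketch, with purely illustrative values except the
255 process ceiling mentioned above:

    # prefork: one child process per concurrent client
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       50
    # hard cap on simultaneous children (and thus clients)
    MaxClients           255
    # recycle children periodically to keep per-process growth in check
    MaxRequestsPerChild 5000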

I'll max out on CPU long before memory in all likelihood, so worker offers
no advantages. I'd say in a large enterprise system with maybe 8 CPUs,
where RAM is expensive, worker may make sense. For smaller scale-out
installations, there's no need.


Jeffrey Burgoyne

Chief Technology Architect
KCSI Keenuh Consulting Services Inc
burgoyne@keenuh.com

On Mon, 28 Feb 2005, Paul A. Houle wrote:

> On Mon, 28 Feb 2005 21:31:19 +0000, Wayne S. Frazee <wf...@wynweb.net>
> wrote:
>
> >
> > Correct me if I am wrong, but I have seen much that would purport the
> > worker MPM to deliver gains in terms of capacity handling and
> > capacity-burst-handling as well as slimming down the resource footprint
> > of the Apache 2 server on a running system under normal load conditions.
>
> 	Well,  our big production machine has 6G of RAM and never gets close to
> running out even in testing when we stacked it up to the (compiled in)
> limit of 255 processes.  Under normal operations we have 50 running,
> mostly because of keep-alive (helps a lot with the performance of our
> cookie-based authentication system) and people downloading moderately big
> (>100k) files.
>
> 	Even though RAM is pretty cheap,  there probably are people who are more
> constrained.
>
> > I would also like to point out I too have seen inconclusive evidence on
> > MPM "advantage".  I think that is part of the problem... without a clear
> > business-case-defendable advantage to the features implemented in Apache
> > 2... why upgrade?
>
> 	Altruism.  If people don't use Apache 2,  then Apache development will
> keep going sideways forever.
>
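Re the keep-alive point quoted above: that behaviour comes down to three
directives. The stock values look roughly like this (shown only for
reference; a child stays tied to the connection between requests, which
is why idle-looking processes pile up under prefork):

    # allow multiple requests per TCP connection
    KeepAlive On
    MaxKeepAliveRequests 100
    # how long a child waits for the next request before giving up
    KeepAliveTimeout 15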

Re: Puzzling News

Posted by Paul Querna <ch...@force-elite.com>.
Jeffrey Burgoyne wrote:
> All true, but we are running a 100K (Canadian) blade center, and at 255
> Apache processes per server and 10 blades, that's ~2500 concurrent users.
> You have to have a pretty honking Sun box to manage that, certainly within
> the same price range, and another 15K buys me 40% more power.

One white-box machine (say 2 GHz) with 2 GB of RAM, running the Worker
or Event MPM, can easily handle ~2500 concurrent clients.
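A worker configuration sized for that ballpark might look something like
this (numbers purely illustrative, assuming 2.0's worker MPM):

    <IfModule worker.c>
        # 50 processes x 50 threads = 2500 concurrent connections
        ServerLimit          50
        ThreadsPerChild      50
        MaxClients         2500
        StartServers          4
        MinSpareThreads      75
        MaxSpareThreads     250
        # 0 = never recycle children
        MaxRequestsPerChild   0
    </IfModule>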

Buy more machines for redundancy.

-Paul

Re: Puzzling News

Posted by Jeffrey Burgoyne <bu...@keenuh.com>.
All true, but we are running a 100K (Canadian) blade center, and at 255
Apache processes per server and 10 blades, that's ~2500 concurrent users.
You have to have a pretty honking Sun box to manage that, certainly within
the same price range, and another 15K buys me 40% more power.

I have come to the conclusion that Apache is ideally suited to the scale-out
rather than scale-up model, and in that case prefork is probably on par with
worker in terms of performance and flexibility.

Jeffrey Burgoyne

Chief Technology Architect
KCSI Keenuh Consulting Services Inc
burgoyne@keenuh.com

On Mon, 28 Feb 2005, Justin Erenkrantz wrote:

> --On Monday, February 28, 2005 6:24 PM -0500 Jeffrey Burgoyne
> <bu...@keenuh.com> wrote:
>
> > I can go even one step further. 255 servers, 2.5 GB of RAM, huge config
> > (200 virtual hosts, 1500 redirect rules, 2000 rewrite rules, 300 proxy
> > rules), and I never go into swap using prefork.
>
> I believe 255 concurrent clients is really low nowadays for high-end
> production servers.  Heck, 2.x's hard limit is 200,000, not 256.  (Reason
> #1001 why 2.x is better than 1.3.)  =)
>
> It's when you start to get into several thousand concurrent connections
> that I've found that the memory model of prefork starts to get painful.
> And memory usage also depends on whether your OS does optimistic or
> pessimistic memory allocation.  It's impossible to run a high MaxClients
> with prefork on, say, Solaris without dedicating large amounts of swap.
> -- justin
>

Re: Puzzling News

Posted by Jeffrey Burgoyne <bu...@keenuh.com>.
But how many people really need 10,000+ concurrent connections?

Obviously CNN does. I'll bet Amazon does. Let's add eBay. Those are
power users.

The web site I manage does about 5 million hits per day (not including
graphics, style sheets, etc., which are served by a different server), 80%
of which fall in a ten-hour window. That's 400,000 per hour, roughly 6,700
per minute, or about 110 hits per second. Average delivery time is running
about 1 second per hit, so that's roughly 110 requests in flight at any
moment, and we see a need to run about 150 prefork children during peak
times.

Now what percentage of installations see more than 5 million hits per day?
I'd dare say it is pretty small.

I'd also wager a good cold ale that the larger sites have a decent level
of expertise to tune their whole system to better handle their load. In my
case I realized that the URI translation phase was causing problems, and
one week and one Apache module later I had reduced the number of prefork
children required by 70% and cut latency by 60%. Both numbers are beyond
what 2.0 would have bought me, and with less work.
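The module itself was site-specific, so it isn't shown here; but purely as
a config-level illustration of the same idea (collapsing a long chain of
per-URL rules into a single table lookup during URI translation), a
RewriteMap does something similar. The paths and names below are invented:

    RewriteEngine On
    # one dbm map replaces a long chain of individual redirect rules,
    # so URI translation does a single lookup per matching request
    RewriteMap redirmap dbm:/usr/local/apache/conf/redirects.dbm
    # look the tail of the path up in the map; fall back to a 404 page
    RewriteRule ^/old/(.*)$ ${redirmap:$1|/notfound.html} [R=301,L]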

It wouldn't surprise me if many sites are in the same boat. Those with
enough hits to justify a move to 2.0 likely have a higher level of
expertise that allows for a better understanding of Apache and better
tuning to provide maximum performance.


Jeffrey Burgoyne

Chief Technology Architect
KCSI Keenuh Consulting Services Inc
burgoyne@keenuh.com

On Tue, 1 Mar 2005, Brian Akins wrote:

> Justin Erenkrantz wrote:
> > --On Monday, February 28, 2005 6:24 PM -0500 Jeffrey Burgoyne
>
> > I believe 255 concurrent clients is really low nowadays for high-end
> > production servers.
> > It's when you start to get into several thousand concurrent connections
> > that I've found that the memory model of prefork starts to get painful.
>
> We have run 10,000+ threads on our webservers routinely.  Can't do that
> with 1.x.
>
>
> --
> Brian Akins
> Lead Systems Engineer
> CNN Internet Technologies
>

Re: Puzzling News

Posted by Brian Akins <ba...@web.turner.com>.
Justin Erenkrantz wrote:
> --On Monday, February 28, 2005 6:24 PM -0500 Jeffrey Burgoyne 

> I believe 255 concurrent clients is really low nowadays for high-end
> production servers.
> It's when you start to get into several thousand concurrent connections 
> that I've found that the memory model of prefork starts to get painful. 

We have run 10,000+ threads on our webservers routinely.  Can't do that
with 1.x.


-- 
Brian Akins
Lead Systems Engineer
CNN Internet Technologies

Re: Puzzling News

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
--On Monday, February 28, 2005 6:24 PM -0500 Jeffrey Burgoyne 
<bu...@keenuh.com> wrote:

> I can go even one step further. 255 servers, 2.5 GB of RAM, huge config
> (200 virtual hosts, 1500 redirect rules, 2000 rewrite rules, 300 proxy
> rules), and I never go into swap using prefork.

I believe 255 concurrent clients is really low nowadays for high-end
production servers.  Heck, 2.x's hard limit is 200,000, not 256.  (Reason
#1001 why 2.x is better than 1.3.)  =)

It's when you start to get into several thousand concurrent connections 
that I've found that the memory model of prefork starts to get painful. 
And memory usage also depends on whether your OS does optimistic or 
pessimistic memory allocation.  It's impossible to run a high MaxClients
with prefork on, say, Solaris without dedicating large amounts of swap.
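For what it's worth, going past the 256 default with 2.0's prefork is just
a matter of raising ServerLimit along with MaxClients (values below are
illustrative; the real ceiling is the compiled-in 200,000 mentioned above):

    <IfModule prefork.c>
        # every slot here is a full child process, which is why
        # thousands of them translate directly into swap reservation
        # on an OS that allocates memory pessimistically
        ServerLimit 2048
        MaxClients  2048
    </IfModule>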
-- justin