Posted to dev@httpd.apache.org by Stanislav Rost <pr...@cs.bu.edu> on 2001/03/22 01:44:03 UTC

Behavior Under Linux, Benchmarking Curves and Cry for Help

Dear Apache developers,

I had been conducting high-stress experiments involving Apache on the
latest Linux 2.2.x kernels, in order to produce the usual "decline in
throughput and growth in latency with load" graphs for my research
paper.  To my dismay, I was unable to produce such graphs due to weird
behavior under high loads.  Namely, the benchmarking program that I was
using (BU's own SURGE) would frequently report a "Connection reset" error
during higher-load test runs (oftentimes upon a call to read(), so after 
the connection establishment), totally decimating any chance for obtaining
clean data points for the [majority of] test runs.  The Apache error log
produces no entries corresponding to such resets.

I was hoping some of the Apache developers may know the reason/fix to this
problem.

A little about the setup:  two P2-400's stressing another P2-400 with
>400-500 concurrent ongoing downloads of large files  
at any given instant of time (so high concurrency levels are
implicit).  Apache is directed to create at least that many threads on
startup.
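
For reference, Apache 1.x on Linux preforks processes rather than threads;
pre-creating the children means something like the following httpd.conf
settings, where the directive names are the real 1.x ones but the values
are only illustrative and must stay under the compile-time child limit:

StartServers     512
MinSpareServers  512
MaxSpareServers  512
MaxClients       512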

My other question is for anyone who has ever successfully
benchmarked Apache under Linux and produced nice-looking graphs:  what
system and web server parameters did you tweak to obtain
high-stress results?  What was your setup?

Thank you very much, your responses are appreciated.

Stan Rost



Re: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by Lee Chee Kean <ch...@internetappliance.com>.
I found the following while doing my own tests on a PIII 800 machine with
256M RAM, Linux kernel 2.2.18, and Apache 1.3.17 (using httperf on another
machine).

Setting 1:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
I found that setting MaxClients to 256 made no difference, because the CPU
was already saturated; the number of httpd processes reached 256 during the
test. It was even worse after setting MaxClients to 512 (after changing
httpd.h and raising max_user_process); the process count reached 512 during
the test. I suspect the forking of new httpd processes also slows down
performance.
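
The httpd.h change mentioned above is presumably the compile-time ceiling
on children, which defaults to 256 in 1.3.x; a hypothetical edit (the
constant name is real, the new value is only an example):

/* src/include/httpd.h (Apache 1.3.x): raise the hard cap on child processes */
#define HARD_SERVER_LIMIT 512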

Setting 2:
StartServers 200
MinSpareServers 200
MaxSpareServers 200
This keeps 200 httpd processes running at all times, so there is no forking
after the initial 200 start up. It gave the best connection time and
throughput, shown below. I suspect it could do better still if I tuned the
settings under /proc/sys/net (any clue here?).
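
As a starting point for the /proc/sys/net question, here is a sketch of the
sort of tunables people adjusted on 2.2 kernels; the paths exist on 2.2,
but the values below are only illustrative:

# widen the ephemeral port range and shorten FIN_WAIT_2 lingering
echo "1024 61000" > /proc/sys/net/ipv4/ip_local_port_range
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
# deepen the SYN backlog to absorb connection bursts
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog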

Setting 3:
StartServers 256/200
MinSpareServers 256
MaxSpareServers 256
Connection time and throughput were no better than with Setting 2; in fact
they were worse.

And of course, the more modules are added to httpd, the worse its
performance gets.
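
A hypothetical 1.3-style trim of the module list (ClearModuleList and
AddModule are real 1.3 directives; the modules kept here are only an
example, and the list must match what your build actually needs):

ClearModuleList
AddModule mod_so.c
AddModule mod_mime.c
AddModule mod_log_config.c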

The best result I've got:
httperf --hog --timeout=30 --client=0/1 --server=www.k.com --port=80 \
  --uri=/vvv --rate=400 --send-buffer=4096 --recv-buffer=16384 \
  --num-conns=16000 --num-calls=1 --wset=4050,1.000
Maximum connect burst length: 1

Total: connections 16000 requests 16000 replies 16000 test-duration 40.004 s

Connection rate: 400.0 conn/s (2.5 ms/conn, <=14 concurrent connections)
Connection time [ms]: min 2.8 avg 6.1 max 154.4 median 5.5 stddev 4.8
Connection time [ms]: connect 0.8
Connection length [replies/conn]: 1.000

Request rate: 400.0 req/s (2.5 ms/req)
Request size [B]: 76.0

Reply rate [replies/s]: min 399.6 avg 400.0 max 400.2 stddev 0.2 (8 samples)
Reply time [ms]: response 3.3 transfer 2.0
Reply size [B]: header 262.0 content 12344.0 footer 0.0 (total 12606.0)
Reply status: 1xx=0 2xx=16000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 7.40 system 32.61 (user 18.5% system 81.5% total 100.0%)
Net I/O: 4954.0 KB/s (40.6*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0



----- Original Message -----
From: "Stanislav Rost" <pr...@cs.bu.edu>
To: <ne...@apache.org>
Cc: <cu...@apache.org>
Sent: Friday, March 23, 2001 3:41 AM
Subject: Re: Behavior Under Linux, Benchmarking Curves and Cry for Help


> Blue and many others,
>
> Thanks for the response.  You are right, I need to provide a few more
> details.
>
> The files are 10 MBs each, the Apache version is 1.2.12 (should I
> upgrade?  were there scalability enhancements since then?), the Linux
> kernel version is 2.2.19pre16 (unfortunately, I cannot go above 2.2.x for
> the purposes of this test).
>
> The following limits were increased in the Linux kernel:  NR_TASKS (max
> processes) to 4000, FD_SETSIZE (max open files) to 4096, various TCP
> parameters in /proc/sys/net/ipv4, etc.  Am I forgetting something?  Is
> there a good online guide to tweaking your Linux to handle higher loads
> that I am perhaps not aware of?
>
> Actually, the problem that I am having is not so much with Apache waiting.
> It's with the resets.
>
> Another thing that I was not aware of is the need to bump up MAX_PROCESSES
> in http_main.c manually.  I always sort of assumed changing
> process-related parameters in the httpd.conf would do the trick.  So I
> will try increasing that now...  Are there any other limits
> defined in the source-code that may need to be increased?
>
> Thank you very much.
>
> Stan Rost
>
> On Thu, 22 Mar 2001, Blue Lang wrote:
>
> > On Wed, 21 Mar 2001, Stanislav Rost wrote:
> >
> > > A little about the setup:  two P2-400's stressing another P2-400 with
> > > >400-500 concurrent ongoing downloads of large files at any given
> > > instant of time (so high concurrency levels are implicit).  Apache is
> > > directed to create at least that many threads on startup.
> >
> > how large are the files? what version of apache are you using? why are
> > you using http to transfer large files? it's stinky at it.
> >
> > if it's a 1MB file, you're out of bandwidth in .002 seconds (on a 100Mb
> > link) and apache will queue up the rest of your downloads and wait,
> > wait, wait.
> >
> > > My other question is for anyone who has ever successfully benchmarked
> > > Apache under Linux and produced nice-looking graphs:  what system and
> > > web server parameters did you tweak to obtain high-stress results?
> > > What was your setup?
> >
> > i've been able to maintain load avgs of around 230 on a late 2.3 kernel
> > on a celeron 433, pushing about 1300 reqs/second over a 100Mb switch. i
> > was using ab and a smallish (~3k) index.html. the only things i did were
> > turn off logging and set max clients to 255. i was experimenting with
> > serving files from RAM disks and loopback mounted file systems at the
> > time, it was nothing scientific.
> >
> > anyways, that's 1,000,000 small requests every 10 minutes or so.  this
> > was probably with apache 1.3.12 or so.
> >
> > on a really nice switch with a dual proc sun Netra, i was able to get
> > really close to 11MB/sec from an apache install tuned pretty much the
> > same way with very, very low load on the web server.
> >
> > --
> >    Blue Lang                                    http://www.gator.net/~blue
> >    2315 McMullan Circle, Raleigh, North Carolina, USA         919 835 1540
> >
> >
>
>
> --------------------------------------------------
>   Stanislav "Stan" Rost                         /
>   A.K.A. The Progressor                        /
>                                               /
>   http://cs-people.bu.edu/prgrssor           /
> --------------------------------------------'
>
>
>


Re: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by Stanislav Rost <pr...@cs.bu.edu>.
Blue and many others,

Thanks for the response.  You are right, I need to provide a few more
details.

The files are 10 MBs each, the Apache version is 1.2.12 (should I
upgrade?  were there scalability enhancements since then?), the Linux
kernel version is 2.2.19pre16 (unfortunately, I cannot go above 2.2.x for
the purposes of this test).

The following limits were increased in the Linux kernel:  NR_TASKS (max
processes) to 4000, FD_SETSIZE (max open files) to 4096, various TCP
parameters in /proc/sys/net/ipv4, etc.  Am I forgetting something?  Is
there a good online guide to tweaking your Linux to handle higher loads
that I am perhaps not aware of?
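
One commonly forgotten pair on 2.2 is the system-wide file handle table
and the per-process descriptor limit; a sketch with illustrative values:

# system-wide handle and inode ceilings (2.2.x /proc paths)
echo 16384 > /proc/sys/fs/file-max
echo 32768 > /proc/sys/fs/inode-max
# per-process descriptor limit in the shell that launches the server/clients
ulimit -n 4096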

Actually, the problem that I am having is not so much with Apache waiting.
It's with the resets.
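
One way to see which end sends the RSTs is to capture them on the wire; a
hypothetical tcpdump filter that matches only packets with the RST flag
set (tcp[13] is the TCP flags byte, and 0x04 is RST):

tcpdump -n 'port 80 and tcp[13] & 4 != 0'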

Another thing that I was not aware of is the need to bump up MAX_PROCESSES
in http_main.c manually.  I always sort of assumed changing
process-related parameters in the httpd.conf would do the trick.  So I
will try increasing that now...  Are there any other limits
defined in the source-code that may need to be increased?

Thank you very much.

Stan Rost

On Thu, 22 Mar 2001, Blue Lang wrote:

> On Wed, 21 Mar 2001, Stanislav Rost wrote:
> 
> > A little about the setup:  two P2-400's stressing another P2-400 with
> > >400-500 concurrent ongoing downloads of large files at any given
> > instant of time (so high concurrency levels are implicit).  Apache is
> > directed to create at least that many threads on startup.
> 
> how large are the files? what version of apache are you using? why are you
> using http to transfer large files? it's stinky at it.
> 
> if it's a 1MB file, you're out of bandwidth in .002 seconds (on a 100Mb
> link) and apache will queue up the rest of your downloads and wait, wait,
> wait.
> 
> > My other question is for anyone who has ever successfully benchmarked
> > Apache under Linux and produced nice-looking graphs:  what system and
> > web server parameters did you tweak to obtain high-stress results?
> > What was your setup?
> 
> i've been able to maintain load avgs of around 230 on a late 2.3 kernel on
> a celeron 433, pushing about 1300 reqs/second over a 100Mb switch. i was
> using ab and a smallish (~3k) index.html. the only things i did were turn
> off logging and set max clients to 255. i was experimenting with serving
> files from RAM disks and loopback mounted file systems at the time, it was
> nothing scientific.
> 
> anyways, that's 1,000,000 small requests every 10 minutes or so.  this was
> probably with apache 1.3.12 or so.
> 
> on a really nice switch with a dual proc sun Netra, i was able to get
> really close to 11MB/sec from an apache install tuned pretty much the same
> way with very, very low load on the web server.
> 
> -- 
>    Blue Lang                                    http://www.gator.net/~blue
>    2315 McMullan Circle, Raleigh, North Carolina, USA         919 835 1540
> 
> 


--------------------------------------------------
  Stanislav "Stan" Rost                         /
  A.K.A. The Progressor                        / 
                                              /
  http://cs-people.bu.edu/prgrssor           / 
--------------------------------------------' 





[OT] RE: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by Blue Lang <bl...@gator.net>.
On Thu, 22 Mar 2001, Peter J. Cranstone wrote:

> Any chance of someone running this same test using mod_gzip and compressing
> the data in real time for each request?
>
> If it's a 1MB *text* file we should compress it down by roughly 75% - 80%
> and then your bandwidth figure will change.

eh.. lies, damned lies, and benchmarks. it's useless data.

is there a general discussion apache list outside of the newsgroups? i'd
like to jaw about this kind of stuff without using nntp and without my
comments becoming someone else's IP (*cough*questionexchange*cough*).

Dean, wanna sponsor an apache-users or apache-tuning list? :)

-- 
   Blue Lang                                    http://www.gator.net/~blue
   2315 McMullan Circle, Raleigh, North Carolina, USA         919 835 1540


RE: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by "Peter J. Cranstone" <Cr...@remotecommunications.com>.
Any chance of someone running this same test using mod_gzip and compressing
the data in real time for each request?

If it's a 1MB *text* file we should compress it down by roughly 75% - 80%
and then your bandwidth figure will change.

Two issues to think about here: do we compress the 1MB file every time
(which we can do), or do we compress it once and then transmit the .gz
file in response to the stress test?
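
The compress-once option is easy to stage by hand; a sketch, assuming a
hypothetical 1MB page named page.html:

# compress once at maximum level, then serve the .gz variant during the test
gzip -9 -c page.html > page.html.gz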

Anyway, some testing would really be appreciated.

Thanks


Peter
mod_gzip can be found at
http://www.remotecommunications.com/apache/mod_gzip/

-----Original Message-----
From: Blue Lang [mailto:blue@gator.net]
Sent: Thursday, March 22, 2001 11:21 AM
To: new-httpd@apache.org
Cc: current-testers@apache.org
Subject: Re: Behavior Under Linux, Benchmarking Curves and Cry for Help


On Wed, 21 Mar 2001, Stanislav Rost wrote:

> A little about the setup:  two P2-400's stressing another P2-400 with
> >400-500 concurrent ongoing downloads of large files at any given
> instant of time (so high concurrency levels are implicit).  Apache is
> directed to create at least that many threads on startup.

how large are the files? what version of apache are you using? why are you
using http to transfer large files? it's stinky at it.

if it's a 1MB file, you're out of bandwidth in .002 seconds (on a 100Mb
link) and apache will queue up the rest of your downloads and wait, wait,
wait.

> My other question is for anyone who has ever successfully benchmarked
> Apache under Linux and produced nice-looking graphs:  what system and
> web server parameters did you tweak to obtain high-stress results?
> What was your setup?

i've been able to maintain load avgs of around 230 on a late 2.3 kernel on
a celeron 433, pushing about 1300 reqs/second over a 100Mb switch. i was
using ab and a smallish (~3k) index.html. the only things i did were turn
off logging and set max clients to 255. i was experimenting with serving
files from RAM disks and loopback mounted file systems at the time, it was
nothing scientific.

anyways, that's 1,000,000 small requests every 10 minutes or so.  this was
probably with apache 1.3.12 or so.

on a really nice switch with a dual proc sun Netra, i was able to get
really close to 11MB/sec from an apache install tuned pretty much the same
way with very, very low load on the web server.

--
   Blue Lang                                    http://www.gator.net/~blue
   2315 McMullan Circle, Raleigh, North Carolina, USA         919 835 1540


Re: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by Blue Lang <bl...@gator.net>.
On Wed, 21 Mar 2001, Stanislav Rost wrote:

> A little about the setup:  two P2-400's stressing another P2-400 with
> >400-500 concurrent ongoing downloads of large files at any given
> instant of time (so high concurrency levels are implicit).  Apache is
> directed to create at least that many threads on startup.

how large are the files? what version of apache are you using? why are you
using http to transfer large files? it's stinky at it.

if it's a 1MB file, you're out of bandwidth in .002 seconds (on a 100Mb
link) and apache will queue up the rest of your downloads and wait, wait,
wait.

> My other question is for anyone who has ever successfully benchmarked
> Apache under Linux and produced nice-looking graphs:  what system and
> web server parameters did you tweak to obtain high-stress results?
> What was your setup?

i've been able to maintain load avgs of around 230 on a late 2.3 kernel on
a celeron 433, pushing about 1300 reqs/second over a 100Mb switch. i was
using ab and a smallish (~3k) index.html. the only things i did were turn
off logging and set max clients to 255. i was experimenting with serving
files from RAM disks and loopback mounted file systems at the time, it was
nothing scientific.

anyways, that's 1,000,000 small requests every 10 minutes or so.  this was
probably with apache 1.3.12 or so.
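
For reference, an ab invocation along the lines described above, with a
hypothetical hostname (-n and -c are real ab options):

# a million small requests at concurrency 255 against a small static page
ab -n 1000000 -c 255 http://testbox/index.html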

on a really nice switch with a dual proc sun Netra, i was able to get
really close to 11MB/sec from an apache install tuned pretty much the same
way with very, very low load on the web server.

-- 
   Blue Lang                                    http://www.gator.net/~blue
   2315 McMullan Circle, Raleigh, North Carolina, USA         919 835 1540


Re: Behavior Under Linux, Benchmarking Curves and Cry for Help

Posted by Austin Gonyou <au...@coremetrics.com>.
I have done such testing rather extensively. One of the things I had to do
was test 20 million hits in a day against a single Apache server. It
actually handled it quite well, even on the SSL side. I made sure that I
set max children to about 5000-10,000 and max processes to 4096; you have
to change that parameter in httpd.c. I also made sure the OS could handle
it. I tested RH 6.0, 6.2, and 7.0 in these respects; all performed very
similarly. The newest Apache I've tested this way is only 1.3.14, however.

-- 
Austin Gonyou
Systems Architect
Coremetrics, Inc.
Phone: 512-796-9023
email: austin@coremetrics.com

On Wed, 21 Mar 2001, Stanislav Rost wrote:

> Dear Apache developers,
>
> I had been conducting high-stress experiments involving Apache on the
> latest Linux 2.2.x kernels, in order to produce the usual "decline in
> throughput and growth in latency with load" graphs for my research
> paper.  To my dismay, I was unable to produce such graphs due to weird
> behavior under high loads.  Namely, the benchmarking program that I was
> using (BU's own SURGE) would frequently report a "Connection reset" error
> during higher-load test runs (oftentimes upon a call to read(), so after
> the connection establishment), totally decimating any chance for obtaining
> clean data points for the [majority of] test runs.  The Apache error log
> produces no entries corresponding to such resets.
>
> I was hoping some of the Apache developers may know the reason/fix to this
> problem.
>
> A little about the setup:  two P2-400's stressing another P2-400 with
> >400-500 concurrent ongoing downloads of large files
> at any given instant of time (so high concurrency levels are
> implicit).  Apache is directed to create at least that many threads on
> startup.
>
> My other question is for anyone who has ever successfully
> benchmarked Apache under Linux and produced nice-looking graphs:  what
> system and web server parameters did you tweak to obtain
> high-stress results?  What was your setup?
>
> Thank you very much, your responses are appreciated.
>
> Stan Rost
>


