Posted to users@httpd.apache.org by George Adams <g_...@hotmail.com> on 2005/08/09 17:05:49 UTC

[users@httpd] Why does Apache use up all my memory?

I read an earlier thread on memory consumption (http://tinyurl.com/bly4d), 
which may be related to my problem... but because of some differences, I'm 
not so sure.  Any help would be appreciated!

I have an Apache 2.0.54 server on a Gentoo Linux (2.6.11) box which has 1Gig 
RAM and an additional 1Gig swap space.  The server handles a lot of people 
downloading sermons from our church website (which are no larger than 18Meg 
MP3 files), but I can't figure out how to keep the server from running out 
of memory.

Here's my Apache2 prefork configuration:
------------------------------------------------------------------------
<IfModule prefork.c>
    StartServers         5
    MinSpareServers      5
    MaxSpareServers     10
    MaxClients          20
    MaxRequestsPerChild  5000
</IfModule>


And here's what the Apache "/server-status" URL showed earlier today (I had 
just restarted the server, but it immediately filled up with download 
requests, all from the same guy, apparently using a download accelerator 
judging by the duplicate requests):
------------------------------------------------------------------------
Srv      PID     M     CPU            Req     Request
0-0    15822    W     0.48             0    GET /out/181.mp3 HTTP/1.1
1-0    15823    W     0.00    1742573500    GET /out/388.mp3 HTTP/1.1
2-0    15824    W     0.00    1742573499    GET /out/238.mp3 HTTP/1.1
3-0    15825    W     0.00    1742573499    GET /out/504.mp3 HTTP/1.1
4-0    15826    W     0.00    1742573496    GET /out/388.mp3 HTTP/1.1
5-0    15832    W     0.00    1742572495    GET /out/801.mp3 HTTP/1.1
6-0    15834    W     0.00    1742571493    GET /out/504.mp3 HTTP/1.1
7-0    15835    W     0.00    1742571489    GET /out/504.mp3 HTTP/1.1
8-0    15838    W     0.00    1742570476    GET /out/388.mp3 HTTP/1.1
9-0    15839    W     0.00    1742570484    GET /out/504.mp3 HTTP/1.1
10-0    15840    W     0.60             0    GET /out/238.mp3 HTTP/1.1
11-0    15841    W     0.00    1742570477    GET /out/388.mp3 HTTP/1.1
12-0    15846    W     0.25             0    GET /out/181.mp3 HTTP/1.1
13-0    15847    W     0.00    1742569347    GET /out/181.mp3 HTTP/1.1
14-0    15848    W     0.00    1742568761    GET /out/801.mp3 HTTP/1.1
15-0    15849    W     0.00    1742568761    GET /out/801.mp3 HTTP/1.1
16-0    15852    W     0.19             0    GET /out/181.mp3 HTTP/1.1
17-0    15853    W     0.17             0    GET /out/801.mp3 HTTP/1.1
18-0    15854    W     0.22             0    GET /out/504.mp3 HTTP/1.1
19-0    15855    W     0.28             0    GET /server-status HTTP/1.1


And here's a portion of what "top" showed at the same time:
------------------------------------------------------------------------
top - 18:09:59 up 64 days,  7:08,  3 users, load avg: 21.62, 10.57, 4.70
Tasks: 154 total,   1 running, 143 sleeping,   1 stopped,   9 zombie
Cpu(s):  0.8% us,  2.3% sy, 0.0% ni, 0.0% id, 96.3% wa, 0.3% hi, 0.2% si
Mem:   1034276k total,  1021772k used,    12504k free,     6004k buffers
Swap:  1030316k total,   985832k used,    44484k free,    83812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15846 apache    16   0  132m  89m 1968 S  0.3  8.9   0:01.46 apache2
15840 apache    17   0  130m  83m 2008 D  0.0  8.3   0:00.90 apache2
15849 apache    16   0  120m  82m 1968 S  0.3  8.1   0:01.02 apache2
15852 apache    16   0  120m  81m 1968 S  0.3  8.1   0:00.91 apache2
15848 apache    16   0  109m  73m 2008 S  0.3  7.2   0:00.85 apache2
15855 apache    16   0  107m  70m 2076 D  0.3  7.0   0:00.76 apache2
15822 apache    17   0  179m  55m 1968 D  0.3  5.5   0:00.88 apache2
15854 apache    16   0 98024  55m 1968 D  0.0  5.5   0:00.58 apache2
15853 apache    18   0 98.9m  53m 2000 S  0.0  5.3   0:00.51 apache2
15847 apache    17   0 86884  52m 1968 D  0.0  5.2   0:00.42 apache2
15841 apache    17   0  110m  36m 1964 D  0.3  3.6   0:00.64 apache2
15826 apache    17   0  173m  20m 1968 D  0.0  2.0   0:00.57 apache2
15825 apache    16   0 97.7m  19m 1968 D  0.0  1.9   0:00.36 apache2
15834 apache    16   0  117m  14m 1968 D  0.3  1.5   0:00.42 apache2
15839 apache    17   0  115m  12m 1968 D  0.0  1.2   0:00.40 apache2
15838 apache    15   0  182m  12m 1968 D  0.0  1.2   0:00.59 apache2
15823 apache    16   0  180m  11m 1968 D  0.0  1.1   0:00.65 apache2
15824 apache    15   0  103m 9980 1968 D  0.0  1.0   0:00.27 apache2
15832 apache    16   0  116m 9112 1968 D  0.0  0.9   0:00.29 apache2
15835 apache    16   0  162m 8844 1968 D  0.0  0.9   0:00.41 apache2
(everything else listed on "top" below this was less than 0.5 for %MEM)


The memory usage swelled very fast as the download requests came in, and 
based on previous experience, the server would have slowed to a crawl and 
possibly crashed as it tried to save itself if I hadn't run "killall 
apache2" at this point.

So it seems like this guy's 19 download requests are enough to pretty much 
exhaust my 1 Gig of physical RAM and 1 Gig of swap space.  That just doesn't 
seem right.  EVEN IF something weird was happening where every Apache child 
loaded an entire MP3 file into RAM before serving it, that still only 
accounts for 20 servers * 18Meg files = 360Meg RAM - a lot, but nowhere near 
2 Gig.  Yet these 20 processes have consumed almost 2 Gig.

What am I doing wrong that so few download requests can bring the server to 
its knees?  How can I fix this configuration?

Thanks to anyone who can help!



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Why does Apache use up all my memory?

Posted by Joe Orton <jo...@redhat.com>.
On Thu, Aug 18, 2005 at 02:48:26PM -0400, George Adams wrote:
> Joe, I just wanted to thank you again.  The byterange patch you gave me 
> worked just beautifully.

Great, thanks for the feedback.  I've proposed this for backport to the 
2.0.x branch now so it should show up in a 2.0.x release eventually, 
pending review.

joe



Re: [users@httpd] Why does Apache use up all my memory?

Posted by Joe Orton <jo...@redhat.com>.
On Wed, Aug 17, 2005 at 12:12:05PM -0400, George Adams wrote:
> >> Joe> Are these all simple static files, or is /out/ handled by some CGI
> >> Joe> script etc?
> >>
> >> Joe, you're right - they do get passed through a Perl script for
> >> processing.  However, unless I'm mistaken, I don't THINK the following
> >> code would produce the kind of problems I'm seeing:
> >
> >OK, no, it's not your code at fault, it's a bug in httpd.  You can apply
> >this patch: http://people.apache.org/~jorton/ap_byterange.diff and I
> >guess I should really submit this for backport to 2.0.x.
> 
> 
> Joe, thanks for the patch.  I'll apply it and see if it helps.
> 
> One last followup question, though.  It seems like there must be tons of 
> sites in the world doing what I'm doing - serving a high volume of 
> downloads.  And probably most of those sites are running Apache, and 
> probably a lot of them are using Apache 2.0.x.  How is it that they don't 
> seem to have the same problem?  If this bug has survived in Apache 2 this 
> long, it must be fairly obscure.  Is there some unique set of circumstances 
> that is causing this bug to affect only me and a few others, and not a 
> large number of other Apache servers?

The bug only triggers with:

- a CGI/... script which generates a large response
- a user pointing a download accelerator (or suchlike) at said script.

and it has been reported twice on this list in as many weeks - so not 
that uncommon, I guess.  (Roughly speaking: for a plain static file httpd 
can serve the requested byteranges straight off the disk, but for CGI 
output the byterange filter ends up buffering the generated response in 
memory while it picks the ranges out - which is where the memory goes.)

joe



Re: [users@httpd] Why does Apache use up all my memory?

Posted by George Adams <g_...@hotmail.com>.
> > Joe> Are these all simple static files, or is /out/ handled by some CGI
> > Joe> script etc?
> >
> > Joe, you're right - they do get passed through a Perl script for
> > processing.  However, unless I'm mistaken, I don't THINK the following
> > code would produce the kind of problems I'm seeing:
>
>OK, no, it's not your code at fault, it's a bug in httpd.  You can apply
>this patch: http://people.apache.org/~jorton/ap_byterange.diff and I
>guess I should really submit this for backport to 2.0.x.


Joe, thanks for the patch.  I'll apply it and see if it helps.

One last followup question, though.  It seems like there must be tons of 
sites in the world doing what I'm doing - serving a high volume of 
downloads.  And probably most of those sites are running Apache, and 
probably a lot of them are using Apache 2.0.x.  How is it that they don't 
seem to have the same problem?  If this bug has survived in Apache 2 this 
long, it must be fairly obscure.  Is there some unique set of circumstances 
that is causing this bug to affect only me and a few others, and not a 
large number of other Apache servers?

Thanks again.





Re: [users@httpd] Why does Apache use up all my memory?

Posted by George Adams <g_...@hotmail.com>.
Joe, I just wanted to thank you again.  The byterange patch you gave me 
worked just beautifully.

Once I understood what the problem was, I was able to test it more 
thoroughly.  I took a copy of Star Downloader and configured it to split up 
a single file into 10 chunks for faster downloading.  Then I took a 
freshly-started Apache 2.0.54 server and told Star Downloader to begin 
downloading a single file.  It immediately broke the file into chunks and 
made 10 requests of the Apache server, each with a different byterange.
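
For reference, each of those 10 requests looked something like this (the 
Host header and byte offsets are made up for illustration - an 18Meg 
file split 10 ways means ranges of roughly 1.8Meg each):

    GET /out/181.mp3 HTTP/1.1
    Host: www.example.org
    Range: bytes=3686400-5529599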

The effects on my server were dramatic, as you can see from this "vmstat 1" 
output (every line represents one second):

procs -----------memory---------- ---swap-- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so   in    cs us sy id wa
2  0 128368 728244  12044 105432    0    0 1358  9823  6 14 50 29
0  0 128368 626192  12048 108288    0    0 1303 13096  8 19 63  9
10  0 128368 470044  12048 108288    0    0 1283 49044 18 38 45  0
12  0 128368 169204  12048 108548    0    0 1228 106594 29 71  0  0
8  4 127664  12260   5496  71652    0  204 1298 73347 22 69  0  8
0 17 178472  11996   5544  37556   32 51780 1380  7073  8 43  0 49
0 16 179200  12368   5564  38704   32 1712 1319  1277  1  3  0 96
0 14 180056  12244   5612  41284  568  968 1332  4709  2  4  0 94
0 14 180056  12120   5636  41884  696 1484 1394  1692  1  4  0 96
0 14 182116  11988   5652  43704  176 4772 1488  2841  2  5  0 93
0 16 198168  12236   5676  44920  128 17276 1466   873  0  4  0 95
0 17 218620  12416   5680  45568  260 22432 1455   694  1  5  0 95
0 19 221760  12108   5688  47040  224 5560 1396  3476  2  6  0 92
0 18 224692  11988   5724  48012  612 3744 1403  1796  0  3  0 96
0 20 233944  11980   5752  48544    0 10452 1355   811  1  3  0 97
0 17 234612  12096   5808  50344  204 1456 1356  2878  2  4  0 94
0 16 234876  12096   5868  51816  300 1524 1377  1578  1  3  0 97
0 19 271892  12092   5896  51888  384 39996 1518  2744  2 11  0 88
0 17 314812  12156   5920  51572  752 43876 1467   959  0  9  0 90

Within 5 seconds, free RAM had dropped from 728,244 KB to only 12,260 KB 
(vmstat's memory columns are in kilobytes).  Immediately after that, the 
swap usage began to rise at an alarming rate.  All that just from a SINGLE 
download request split up into 10 smaller requests!  No wonder the server 
was choking so often.

I then restarted the server and tested the same process with Star 
Downloader, but this time requesting the file directly rather than going 
through a download CGI script.  This time Apache didn't blink an eye - it 
handled all 10 split-up requests with barely a flicker in memory usage.  So 
your theory was right - the problem only occurred when a file was requested 
via a download accelerator and only when an intermediary CGI was handling 
the file transfer on the server side.

Next, I applied the patch, rebuilt and restarted Apache, and made sure Star 
Downloader could still download the file directly (without the CGI download 
script).  Everything was still fine - the request was split into 10 requests 
and Apache handled them all easily.

Finally, the big test.  I again requested the file with Star Downloader 
using the server's CGI download script.  This time, though, the request was 
not split into 10 smaller requests.  Instead, Star Downloader had to content 
itself with a single downloading process, and Apache's memory usage held 
steady.

So, hooray!  Thanks again, and if you have any influence to get that patch 
backported to the Apache 2.0.x branch, that'd be wonderful.





Re: [users@httpd] Why does Apache use up all my memory?

Posted by Joe Orton <jo...@redhat.com>.
On Mon, Aug 15, 2005 at 11:00:02AM -0400, George Adams wrote:
> Thanks, Joe and Jon for your helpful thoughts regarding my Apache
> memory problem.  Here's some more information:
> 
> Joe> > 1-0    15823    W     0.00    1742573500    GET /out/388.mp3
> Joe> > 2-0    15824    W     0.00    1742573499    GET /out/238.mp3
> Joe>
> Joe> Are these all simple static files, or is /out/ handled by some CGI
> Joe> script etc?
> 
> Joe, you're right - they do get passed through a Perl script for
> processing.  However, unless I'm mistaken, I don't THINK the following
> code would produce the kind of problems I'm seeing:

OK, no, it's not your code at fault, it's a bug in httpd.  You can apply 
this patch: http://people.apache.org/~jorton/ap_byterange.diff and I 
guess I should really submit this for backport to 2.0.x.

joe



Re: [users@httpd] Why does Apache use up all my memory?

Posted by George Adams <g_...@hotmail.com>.
Thanks, Joe and Jon for your helpful thoughts regarding my Apache
memory problem.  Here's some more information:

Joe> > 1-0    15823    W     0.00    1742573500    GET /out/388.mp3
Joe> > 2-0    15824    W     0.00    1742573499    GET /out/238.mp3
Joe>
Joe> Are these all simple static files, or is /out/ handled by some CGI
Joe> script etc?

Joe, you're right - they do get passed through a Perl script for
processing.  However, unless I'm mistaken, I don't THINK the following
code would produce the kind of problems I'm seeing:

# $sermonfile and $sermonfile_short are set earlier in the script.
my $filesize = (stat($sermonfile))[7];
print "Content-Disposition: inline;filename=$sermonfile_short\n";
print "Content-Length: $filesize\n";
print "Content-Type: application/octet-stream\n\n";
open(SERMONFILE, '<', $sermonfile) or die "can't open $sermonfile: $!";
binmode(SERMONFILE);   # raw bytes in...
binmode(STDOUT);       # ...and raw bytes out
# Stream the file in 1K chunks - only one chunk in memory at a time.
my $chunk;
while (read(SERMONFILE, $chunk, 1024)) {
    print $chunk;
}
close SERMONFILE;

But even in the worst case, where a bad script reads the entire 18M MP3
file into memory, that STILL wouldn't seem to account for my 2Gig
memory loss...

------------------------------------

Jon> If your clients are downloading 18Mb files over slow links they
Jon> may keep trying the connection, breaking the original therefore
Jon> leaving you with multiple connections to the same file from the
Jon> same client.

Jon, in this particular case I don't think that's happening.  I
generally don't have very long to test stuff because once the memory is
exhausted by Apache processes, my SSH connections to the server slow to
a crawl, and even logging in at the console becomes nearly impossible.

I was fortunate enough to catch this occurrence before the memory was
completely used up.  As the RAM got dangerously low, I decided to shut
down all my Apache processes and waited about 2 minutes just to see
what the memory looked like.  (Without Apache running, I was back to
about 1.6 Gig free.)  Then, while keeping an eye on "top" and the
/server-status page, I started Apache again.

Immediately the visitor's download accelerator program began hitting my
site again.  Within seconds all 20 Apache processes were in use (by
the one guy), and before 2 minutes had elapsed, I was forced to shut
down all Apache processes as my total free memory fell below 50M.  I was
able to quickly grab those "top" and "/server-status" shots just before
I killed all the Apache clients.

So in my case, I don't think the problem is half-closed connections or
timeouts.  Under the right circumstances of heavy downloading, a virgin
Apache server can exhaust my 2Gig of memory in less than 2 minutes.

Jon> Your 20 concurrent connections are limited by MaxClients.  I assume
Jon> you are keeping this small because of the size the processes are
Jon> growing to; off the top of my head you should be able to get to
Jon> approx. 175-200 using prefork with 1Gb of memory.  I would have
Jon> thought this would max out pretty quickly with many 18Mb downloads,
Jon> as they take time.

Yes, normally I would have MaxClients set to something larger.  Setting
it to 20 has been something of a desperation measure, trying to keep
the memory usage under control.  Apparently it's not working.





Re: [users@httpd] Why does Apache use up all my memory?

Posted by Joe Orton <jo...@redhat.com>.
On Tue, Aug 09, 2005 at 11:05:49AM -0400, George Adams wrote:
> I have an Apache 2.0.54 server on a Gentoo Linux (2.6.11) box which has 
> 1Gig RAM and an additional 1Gig swap space.  The server handles a lot of 
> people downloading sermons from our church website (which are no larger 
> than 18Meg MP3 files), but I can't figure out how to keep the server from 
> running out of memory.
...
> And here's what the Apache "/server-status" URL showed earlier today (I had 
> just restarted the server, but it immediately filled up with download 
> requests, all from the same guy, apparently using a download accelerator 
> judging by the duplicate requests):
> ------------------------------------------------------------------------
> Srv      PID     M     CPU            Req     Request
> 0-0    15822    W     0.48             0    GET /out/181.mp3 HTTP/1.1
> 1-0    15823    W     0.00    1742573500    GET /out/388.mp3 HTTP/1.1
> 2-0    15824    W     0.00    1742573499    GET /out/238.mp3 HTTP/1.1

Are these all simple static files, or is /out/ handled by some CGI 
script etc?

...
> 15853 apache    18   0 98.9m  53m 2000 S  0.0  5.3   0:00.51 apache2

If, when this happens, you can capture the output of e.g. "strace -p 
15853" as root, that might help.

joe



Re: [users@httpd] Why does Apache use up all my memory?

Posted by Jon Snow <js...@gatesec.net>.
George,

I have something similar...

I have been debugging an issue where I have seen processes grow to 800Mb 
on a forward proxy configuration using the worker model.  Perhaps 
interestingly, on reverse proxy configurations I occasionally get 100% CPU 
states as well.  What I have noticed is that these are almost always 
associated with the TCP state condition of CLOSE_WAIT.  What is netstat 
saying on your system when the problems occur?
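
For example, something along these lines (run while the problem is 
happening) will count how many connections are stuck in that state:

    netstat -tan | grep CLOSE_WAIT | wc -l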

Just today I had a win: I was able to truss a process on a forward proxy 
that was reading data from a server but had nowhere to write it, as the 
socket to the client was closed/half-closed.  The error returned (EPIPE) 
was not being caught by Apache, so there was a continual cycle of reads 
followed by failed writes.  While this did not result in a large increase 
in memory or CPU or CLOSE_WAIT states, it did verify something I had 
suspected for a long time: the Apache code is not checking its socket 
calls at some point (in this case writev), and/or does not catch a close 
on the socket.  This trace was on an ftp connection, so the results may 
be different for an http connection, e.g. memory usage.
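
To make that concrete, here is a minimal sketch in Perl (not httpd 
source - just an illustration of the pattern I believe I saw under 
truss; the socket handles are assumed to exist) of a relay loop that 
never checks its writes:

use strict;
use warnings;

# Sketch only: relay data from one socket to another, ignoring the
# result of every write - the bug pattern described above.
sub relay {
    my ($server, $client) = @_;    # already-connected socket handles
    local $SIG{PIPE} = 'IGNORE';   # dead client => EPIPE, not SIGPIPE
    while (sysread($server, my $buf, 8192)) {
        my $ok = syswrite($client, $buf);
        # BUG: $ok is never checked.  After the client half-closes,
        # every write fails with EPIPE, yet the loop keeps reading the
        # rest of the transfer from the server.  The missing check:
        #     last unless defined $ok;   # $! would be EPIPE here
    }
}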

I will be discussing these issues on the dev mailing list in the near 
future, but it would be good to see if we are seeing anything in common 
first.

My hunch (admittedly a long shot) has been for a while now that when the 
client breaks the connection, the process somehow misses the close and 
continues to read but cannot write, as verified by the above.  It appears 
that in some instances the memory consumption may be caused by the 
buffering of data, or by bad memory management brought on by socket 
issues; i.e. I have noticed that the larger the download (such as an iso 
image), the larger the process grows.  As the processes handle additional 
connections the memory is not freed, and they keep growing.  Note that at 
this point this is speculation on my part, but I consistently see these 
CLOSE_WAIT states in conjunction with high CPU or memory usage, so 
something is going on.  I have noticed that processes will always creep 
up in size with time, so I believe there may be memory leaks, but the 
large memory consumption may be triggered by weird socket states.

If your clients are downloading 18Mb files over slow links, they may keep 
retrying the connection, breaking the original and therefore leaving you 
with multiple connections to the same file from the same client.  If there 
is a memory leak under half-close conditions, your processes grow, and 
each one will handle 5000 requests before it is cycled, as per your 
MaxRequestsPerChild setting.

But why does this not appear to affect many other people?  Firewalls, 
perhaps: do you have any network or host-based firewalls which may be 
preventing proper shutdown of connections?  If so, do they drop or reject 
packets?  Which firewalls are they?  I work in an Internet gateway 
environment, so I have firewalls all over the place and have added them 
as a variable to my list of possibilities.

Your 20 concurrent connections are limited by MaxClients.  I assume you 
are keeping this small because of the size the processes are growing to; 
off the top of my head you should be able to get to approx. 175-200 using 
prefork with 1Gb of memory.  I would have thought this would max out 
pretty quickly with many 18Mb downloads, as they take time.

As a workaround you may try lowering MaxRequestsPerChild to turn over 
processes which may be affected by memory leakage, and raising MaxClients 
to handle more concurrent connections - say, initially, MaxClients 150 
and MaxRequestsPerChild 100, or more aggressively 10 (see the sketch 
below).  This will produce more CPU overhead from forking processes, but 
modern CPUs are pretty fast.  Or go for a threaded model such as worker, 
and you should be able to get 10-15 times as many concurrent connections 
(based on proxy configurations - I have never used Apache as a web 
server).  But another model may simply have the same issues if the 
problem is socket-related.  Funnily enough, I was considering going to a 
prefork model to eliminate the possibility of threading and mutex issues 
- I won't be doing that for a while.
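
To illustrate, the first of those suggestions would look something like 
this (the MaxClients/MaxRequestsPerChild numbers come straight from the 
paragraph above; the spare-server settings are just your existing ones - 
tune everything to your own load):

<IfModule prefork.c>
    StartServers         5
    MinSpareServers      5
    MaxSpareServers     10
    MaxClients         150
    MaxRequestsPerChild 100
</IfModule>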

This may not help, but I would be interested in whether there are 
similarities to what I have in network states, hardware, etc.

Regards,
Jon

On Wednesday 10 August 2005 01:05, George Adams wrote:
> [George's original message, quoted in full, snipped - see the top of 
> the thread]

