Posted to dev@httpd.apache.org by David Burry <db...@tagnet.org> on 2003/01/01 20:26:03 UTC

mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Apache 2.0.43, Solaris 8, Sun E220R, 4 gig memory, gig ethernet.  We tried
both the Sun Forte and gcc compilers.  The problem was that mod_mem_cache was
just way too resource intensive when pounding on a machine that hard, trying
to see if everything would fit into the cache... cpu/mutex contention was
very high, and memory especially was out of control (we had many very large
files, ranging from half a dozen to two dozen megs, and the most popular of
those were what we really wanted cached), while we were running several
hundred concurrent connections at once.  Maybe a new cache
loading/hit/removal algorithm that works better for many hits to very large
files would solve it, I dunno.

We finally settled on listing some of the most popular files in the
httpd.conf file for mod_file_cache, but that presents a management problem,
since what's most popular changes.  It would have been nicer if apache could
auto-sense the most popular files.  Also, it seems mod_file_cache has a file
size limit, but at least we got enough in there that the disk wasn't
bottlenecking anymore...

Dave

----- Original Message -----
From: "Bill Stoddard" <bi...@wstoddard.com>
To: <de...@httpd.apache.org>
Sent: Wednesday, January 01, 2003 6:38 AM
Subject: RE: [PATCH] remove some mutex locks in the worker MPM


> > it may also have to do with caching we were doing (mod_mem_cache crashed
> > and burned,
> What version were you running?  What was the failure? If you can give me
> enough info to debug the problem, I'll work on it.
>
> Bill
>


Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Brian Pane <br...@cnet.com>.
On Wed, 2003-01-01 at 11:26, David Burry wrote:
> Apache 2.0.43, Solaris 8, Sun E220R, 4 gig memory, gig ethernet.  We tried
> both Sun forte and gcc compilers.  The problem was mod_mem_cache was just
> way too resource intensive when pounding on a machine that hard, trying to
> see if everything would fit into the cache... cpu/mutexes were very high,
> especially memory was out of control (we had many very large files, ranging
> from half dozen to two dozen megs, the most popular of those were what we
> really wanted cached), and we were running several hundred concurrent
> connections at once.  Maybe a new cache loading/hit/removal algorithm that
> works better for many hits to very large files would solve it I dunno.

I know of a couple of things that cause mutex contention in
mod_mem_cache:

* Too many malloc/free calls

  This may be easy to improve.  Currently, mod_mem_cache does
  many mallocs for strings and nested objects within a cache object.
  We could probably malloc one big buffer containing enough space
  to hold all those objects.

* Global lock around the hash table and priority queue

  This will be difficult to fix.  It's straightforward to provide
  thread-safe, highly-concurrent access to a hash table (either use
  a separate lock for each hash bucket, or use atomic-CAS based
  pointer swapping when traversing the hash chains).  The problem
  is that we need to read/update the priority queue as part of the
  same transaction in which we read/update the hash table, which
  leaves us stuck with a big global lock.

  If we could modify the mod_mem_cache design to not require the
  priority queue operations and the hash table operations to be
  done as part of the same critical region, I think that would
  open up the door to some major concurrency improvements.  But
  I'm not sure whether that's actually possible.
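
  The first of these, consolidating the mallocs, is easy to sketch.  As a
  non-authoritative illustration (the struct and function names below are
  invented, not mod_mem_cache's actual types): compute the total size up
  front and lay the strings out inside a single allocation, so each cache
  object costs one malloc/free pair instead of several:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical flattened cache object: the struct, the key string, and
 * the header blob all live in one allocation.  One malloc/free pair per
 * object means far fewer trips through the allocator's mutex. */
typedef struct cache_object {
    char   *key;       /* points into the same allocation */
    char   *hdrs;      /* ditto */
    size_t  body_len;
} cache_object;

static cache_object *cache_object_make(const char *key, const char *hdrs)
{
    size_t klen = strlen(key) + 1;
    size_t hlen = strlen(hdrs) + 1;
    /* single allocation: struct first, then key, then headers */
    cache_object *obj = malloc(sizeof(*obj) + klen + hlen);
    if (!obj)
        return NULL;
    obj->key  = (char *)(obj + 1);
    obj->hdrs = obj->key + klen;
    memcpy(obj->key, key, klen);
    memcpy(obj->hdrs, hdrs, hlen);
    obj->body_len = 0;
    return obj;
}
```

  A single free(obj) then releases the whole thing at once.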
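
  For what it's worth, the per-bucket-lock idea for the hash-table half can
  be sketched like this (illustrative only: invented names, plain pthreads
  rather than APR, and it deliberately ignores the priority-queue coupling
  described above, which is exactly the hard part):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

typedef struct entry {
    struct entry *next;
    char *key;
    void *val;
} entry;

/* One mutex per bucket: threads working on different buckets never
 * contend, unlike a single global lock over the whole table. */
typedef struct {
    pthread_mutex_t lock;
    entry *head;
} bucket;

static bucket table[NBUCKETS];

static void cache_init(void)
{
    for (int i = 0; i < NBUCKETS; i++) {
        pthread_mutex_init(&table[i].lock, NULL);
        table[i].head = NULL;
    }
}

static unsigned hash_key(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % NBUCKETS;
}

static void cache_put(const char *key, void *val)
{
    bucket *b = &table[hash_key(key)];
    entry *e = malloc(sizeof(*e));
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->val = val;
    pthread_mutex_lock(&b->lock);      /* lock just this bucket */
    e->next = b->head;
    b->head = e;
    pthread_mutex_unlock(&b->lock);
}

static void *cache_get(const char *key)
{
    bucket *b = &table[hash_key(key)];
    void *val = NULL;
    pthread_mutex_lock(&b->lock);
    for (entry *e = b->head; e != NULL; e = e->next) {
        if (strcmp(e->key, key) == 0) {
            val = e->val;
            break;
        }
    }
    pthread_mutex_unlock(&b->lock);
    return val;
}
```

  (Compile with -pthread.  A real replacement would still have to handle
  eviction, which is where the global lock comes back in.)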

Brian



Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Graham Leggett <mi...@sharp.fm>.
Bill Stoddard wrote:

> - It's probably worth noting in the doc that -each- child process will cache up
> to MCacheSize KBytes.  If you have 10 child processes, then you need
> 10xMCacheSize Kbytes RAM available just for the cache (the same files could be
> cached in each process). I wonder if we should, at startup, allocate MCacheSize
> KB of shared storage and have mod_mem_cache allocate out of the shared pool.
> Each child process would have it's own unique reference to the object, but the
> object itself would only be cached once for all processes to access.

The idea originally was to have a separate module called mod_shmem_cache 
that did this for systems that needed it, or we can make mod_mem_cache 
cleverer. I prefer the separate module though.

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm		"There's a moon
					over Bourbon Street
						tonight..."


Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Glenn <gs...@gluelogic.com>.
On Thu, Jan 02, 2003 at 09:54:58PM -0800, David Burry wrote:
> interesting... so then why did using mod_file_cache to specify caching a
> couple dozen known-most-often-accessed files decrease disk io significantly?
> I'll try the test you mention next time I get a chance.

Out of curiosity, what are your mount options?
Are atime updates enabled?  Try mounting noatime.
  (man mount_ufs)
Are disk quotas enabled?  Try disabling them if they are not needed.
  (man quotaoff)

If you have memory to burn, you can try creating a huge tmpfs partition
  in memory and copying the entire document repository there.
  (man tmpfs; man mount_tmpfs)
  (man mount_cachefs; man cfsadmin)
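
For the record, those suggestions might look something like this on Solaris;
the device names and paths below are made up, so check the man pages above
before copying anything:

```
# /etc/vfstab: mount the docroot filesystem without atime updates
/dev/dsk/c0t0d0s5  /dev/rdsk/c0t0d0s5  /export/www  ufs  2  yes  noatime,logging

# or a RAM-backed copy of the document tree:
swap  -  /tmp/docs  tmpfs  -  yes  size=512m
# then:  cp -rp /export/www/downloads /tmp/docs/
```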

What sort of resource limits are on the httpd or apache user (under which
the webserver runs)?  (man ulimit)  Think you might be bumping into any
arbitrary limits?  What about at the system level? (man sysdef)
  Run 'sysdef' and look for the "Tunable Parameters" section.

Are you using sac (man sac) and what environment is it setting?

(I'm by no means an expert on Solaris, but I hope this helps)

Aha!  Try this link:
  http://ultra.litpixel.com:84/articles/ftat/frameset.html
Fun with noatime, logging, and forcedirectio options.

-Glenn

Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by David Burry <db...@tagnet.org>.
interesting... so then why did using mod_file_cache to specify caching a
couple dozen known-most-often-accessed files decrease disk io significantly?
I'll try the test you mention next time I get a chance.

Dave

----- Original Message -----
From: "Brian Pane" <br...@cnet.com>
To: <de...@httpd.apache.org>
Sent: Thursday, January 02, 2003 9:43 PM
Subject: Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some
mutex locks in the worker MPM)


> On Thu, 2003-01-02 at 21:21, David Burry wrote:
> > ----- Original Message -----
> > From: "Brian Pane" <br...@cnet.com>
> > Sent: Thursday, January 02, 2003 2:19 PM
> > >
> > > For large files, I'd anticipate that mod_cache wouldn't provide much
> > > benefit at all.  If you characterize the cost of delivering a file as
> > >
> > >    time_to_stat_and_open_and_close +
> > >    time_to_transfer_from_memory_to_network
> > >
> > > mod_mem_cache can help reduce the first term but not the second.  For
> > > small files, the first term is significant, so it makes sense to try to
> > > optimize away the stat/open/close with an in-httpd cache.  But for large
> > > files, where the second term is much larger than the first, mod_mem_cache
> > > doesn't necessarily have an advantage.
> >
> > Unless... of course, you're requesting the same file dozens of times per
> > second (i.e. high hundreds of concurrent downloads per machine, because it
> > takes a few minutes for most people to get the file).... then caching it in
> > memory can help, because your disk drive would sit there thrashing
> > otherwise.  If you don't have gig ethernet don't even worry you won't see
> > the problem really, ethernet will be your bottleneck.  What we're trying to
> > do is get close to maxing out a gig ethernet with these large files without
> > the machine dying...
>
> Definitely, caching the file in memory will help in this scenario.
> But that's happening already; the filesystem cache is sitting
> between the httpd and the disk, so you're getting the benefits
> of block caching for oft-used files by default.
>
>
> > > What sort of results do you get if you bypass mod_cache and just rely on
> > > the Unix filesystem cache to keep large files in memory?
> >
> > Not sure how to configure that so that it will use a few hundred megs to
> > cache often-accessed large files... but I could ask around here to more
> > solaris-knowledgable people...
>
> In my experience with Solaris, the OS is pretty proactive about
> using all available memory for the filesystem cache by default.
> One low-tech way you could check is:
>   - Reboot
>   - Run something to monitor free memory (top works fine)
>   - Run something to read a bunch of your large files
>     (e.g., "cksum [file]").
> In the third step, you should see the free memory decrease by
> roughly the total size of the files you've read.
>
> Brian
>
>


Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Brian Pane <br...@cnet.com>.
On Thu, 2003-01-02 at 21:21, David Burry wrote:
> ----- Original Message -----
> From: "Brian Pane" <br...@cnet.com>
> Sent: Thursday, January 02, 2003 2:19 PM
> >
> > For large files, I'd anticipate that mod_cache wouldn't provide much
> benefit
> > at all.  If you characterize the cost of delivering a file as
> >
> >    time_to_stat_and_open_and_close +
> time_to_transfer_from_memory_to_network
> >
> > mod_mem_cache can help reduce the first term but not the second.  For
> small
> > files, the first term is significant, so it makes sense to try to optimize
> > away the stat/open/close with an in-httpd cache.  But for large files,
> where
> > the second term is much larger than the first, mod_mem_cache doesn't
> > necessarily
> > have an advantage.
> 
> Unless... of course, you're requesting the same file dozens of times per
> second (i.e. high hundreds of concurrent downloads per machine, because it
> takes a few minutes for most people to get the file).... then caching it in
> memory can help, because your disk drive would sit there thrashing
> otherwise.  If you don't have gig ethernet don't even worry you won't see
> the problem really, ethernet will be your bottleneck.  What we're trying to
> do is get close to maxing out a gig ethernet with these large files without
> the machine dying...

Definitely, caching the file in memory will help in this scenario.
But that's happening already; the filesystem cache is sitting
between the httpd and the disk, so you're getting the benefits
of block caching for oft-used files by default.


> > What sort of results do you get if you bypass mod_cache and just rely on
> > the Unix filesystem cache to keep large files in memory?
> 
> Not sure how to configure that so that it will use a few hundred megs to
> cache often-accessed large files... but I could ask around here to more
> solaris-knowledgable people...

In my experience with Solaris, the OS is pretty proactive about
using all available memory for the filesystem cache by default.
One low-tech way you could check is:
  - Reboot
  - Run something to monitor free memory (top works fine)
  - Run something to read a bunch of your large files
    (e.g., "cksum [file]").
In the third step, you should see the free memory decrease by
roughly the total size of the files you've read.
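
That check is easy to script.  A sketch (a small throwaway file stands in
here; in practice you'd point FILE at one of the real large downloads):

```shell
# Create a 64 KB stand-in file (substitute a real large file in practice).
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=1024 count=64 2>/dev/null

# Reading the file end-to-end pulls its blocks into the filesystem cache;
# free memory (watch top, or vmstat on Solaris) should drop by about this much.
BYTES=$(cksum "$FILE" | awk '{print $2}')
echo "read $BYTES bytes into the cache"

rm -f "$FILE"
```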

Brian



Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by David Burry <db...@tagnet.org>.
----- Original Message -----
From: "Brian Pane" <br...@cnet.com>
Sent: Thursday, January 02, 2003 2:19 PM
>
> For large files, I'd anticipate that mod_cache wouldn't provide much
> benefit at all.  If you characterize the cost of delivering a file as
>
>    time_to_stat_and_open_and_close +
>    time_to_transfer_from_memory_to_network
>
> mod_mem_cache can help reduce the first term but not the second.  For
> small files, the first term is significant, so it makes sense to try to
> optimize away the stat/open/close with an in-httpd cache.  But for large
> files, where the second term is much larger than the first, mod_mem_cache
> doesn't necessarily have an advantage.

Unless... of course, you're requesting the same file dozens of times per
second (i.e. high hundreds of concurrent downloads per machine, because it
takes a few minutes for most people to get the file).... then caching it in
memory can help, because your disk drive would sit there thrashing
otherwise.  If you don't have gig ethernet, don't even worry, you won't
really see the problem; ethernet will be your bottleneck.  What we're trying
to do is get close to maxing out a gig ethernet with these large files
without the machine dying...

>  And it has at least three disadvantages that I can
> think of:
>   1. With mod_mem_cache, you can't use sendfile(2) to send the content.
>      If your kernel does zero-copy on sendfile but not on writev, it
>      could be faster to deliver a file instead of a cached copy.

Memory is always faster than a spinning disk.  It should be possible to make
a memory cache that's faster than the disk, and uses the same amount of
space as disk too.

>   2. And as long as mod_mem_cache maintains a separate cache per worker
>      process, it will use memory less efficiently than the filesystem
>      cache.

Yes, that is definitely a problem.  It's good that mod_file_cache doesn't
have this problem, but it has other file-list maintainability problems.

>   3. On a cache miss, mod_mem_cache needs to read the file in order to
>      cache it.  By default, it uses mmap/munmap to do this.  We've seen
>      mutex contention problems in munmap on high-volume Solaris servers.

sounds familiar...

> What sort of results do you get if you bypass mod_cache and just rely on
> the Unix filesystem cache to keep large files in memory?

Not sure how to configure that so that it will use a few hundred megs to
cache often-accessed large files... but I could ask around here among more
Solaris-knowledgeable people...

Dave


RE: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Brian Pane <br...@cnet.com>.
On Fri, 2003-01-10 at 12:40, Bill Stoddard wrote:
> I was meaning to respond to this, but forgot until I saw the blurb in ApacheWeek
> :-)

We all really need to find time to write some code, so that
ApacheWeek will have something to cover besides design debates. :-)


> > For large files, I'd anticipate that mod_cache wouldn't provide much benefit
> > at all.  If you characterize the cost of delivering a file as
> >
> >    time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network
> >
> > mod_mem_cache can help reduce the first term but not the second.  For small
> > files, the first term is significant, so it makes sense to try to optimize
> > away the stat/open/close with an in-httpd cache.  But for large files, where
> > the second term is much larger than the first, mod_mem_cache doesn't
> > necessarily
> > have an advantage.
> 
> The read can be expensive over NFS. Yes, one would hope the file system cache
> would cover this. And perhaps it does in most cases. 

Yeah, in practice I've found that most of the load on our
NFS servers is in the form of file and attribute lookup requests
(in support of stat and cache coherency checks) rather than
actual reads, due to the effects of client side caching.
Your mileage may vary, of course.

> Generally I agree with the
> analysis. The big expenses are in the stat/open/close.
> 
> > And it has at least three disadvantages that I can
> > think of:
> >   1. With mod_mem_cache, you can't use sendfile(2) to send the content.
> >      If your kernel does zero-copy on sendfile but not on writev, it
> >      could be faster to deliver a file instead of a cached copy.
> 
> mod_mem_cache can cache open fds (CacheEnable fd /). Works really nicely on
> Windows. I have not seen much benefit testing on AIX and I don't know if there
> are other performance implications on *ix with maintaining a large number of
> open fds.
> 
> >   2. And as long as mod_mem_cache maintains a separate cache per worker
> >      process, it will use memory less efficiently than the filesystem
> >      cache.
> 
> Yep. Not a big deal if you are caching open fds though.

Definitely, caching fds is in some ways an ideal solution: it lets
the OS manage a single cache image per file, but we still get to
eliminate the stat/open/close.

> >   3. On a cache miss, mod_mem_cache needs to read the file in order to
> >      cache it.  By default, it uses mmap/munmap to do this.  We've seen
> >      mutex contention problems in munmap on high-volume Solaris servers.
> 
> This is a result of mod_mem_cache using the bucket code (apr_buckets_file). I
> think we could extract the fd from the bucket and then do a read rather than
> an mmap. Should I work on a fix for this?

I think the "EnableMMAP off" directive will turn mod_mem_cache's
mmap into a read.  It works by setting a flag in the file bucket
that tells the bucket's read function whether or not it's allowed
to use mmap.
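
If that's right, the configuration would be something like this in
httpd.conf (the URL prefix is only an example):

```apacheconf
# Cache /downloads in memory, but populate the cache with read()
# instead of mmap()
CacheEnable mem /downloads
EnableMMAP off
```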

Brian



RE: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Bill Stoddard <bi...@wstoddard.com>.
I was meaning to respond to this, but forgot until I saw the blurb in ApacheWeek
:-)

> David Burry wrote:
>
> >> Random thoughts:
> >> - Did the content have short expiration times (or recent change dates
> >> which would result in the cache making agressive expiration estimates).
> >> That could churn the cache.
> >
> > No.  files literally never change, when updates appear they are always new
> > files, web pages just point to new ones each update.  In this application
> > these are all executable downloadable files, think FTP repository over HTTP.
> >
>
> For large files, I'd anticipate that mod_cache wouldn't provide much benefit
> at all.  If you characterize the cost of delivering a file as
>
>    time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network
>
> mod_mem_cache can help reduce the first term but not the second.  For small
> files, the first term is significant, so it makes sense to try to optimize
> away the stat/open/close with an in-httpd cache.  But for large files, where
> the second term is much larger than the first, mod_mem_cache doesn't
> necessarily
> have an advantage.

The read can be expensive over NFS. Yes, one would hope the file system cache
would cover this. And perhaps it does in most cases. Generally I agree with the
analysis. The big expenses are in the stat/open/close.

> And it has at least three disadvantages that I can
> think of:
>   1. With mod_mem_cache, you can't use sendfile(2) to send the content.
>      If your kernel does zero-copy on sendfile but not on writev, it
>      could be faster to deliver a file instead of a cached copy.

mod_mem_cache can cache open fds (CacheEnable fd /). Works really nicely on
Windows. I have not seen much benefit testing on AIX and I don't know if there
are other performance implications on *ix with maintaining a large number of
open fds.
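
For reference, the fd-caching setup is configured roughly like this (module
paths and the object-count limit are illustrative):

```apacheconf
LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

# Cache open file descriptors (not file contents) for everything under /
CacheEnable fd /
MCacheMaxObjectCount 1000
```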

>   2. And as long as mod_mem_cache maintains a separate cache per worker
>      process, it will use memory less efficiently than the filesystem
>      cache.

Yep. Not a big deal if you are caching open fds though.

>   3. On a cache miss, mod_mem_cache needs to read the file in order to
>      cache it.  By default, it uses mmap/munmap to do this.  We've seen
>      mutex contention problems in munmap on high-volume Solaris servers.

This is a result of mod_mem_cache using the bucket code (apr_buckets_file). I
think we could extract the fd from the bucket and then do a read rather than
an mmap. Should I work on a fix for this?

Bill


Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Brian Pane <br...@cnet.com>.
David Burry wrote:

>> Random thoughts:
>> - Did the content have short expiration times (or recent change dates
>> which would result in the cache making agressive expiration estimates).
>> That could churn the cache.
>
> No.  files literally never change, when updates appear they are always new
> files, web pages just point to new ones each update.  In this application
> these are all executable downloadable files, think FTP repository over HTTP.
>

For large files, I'd anticipate that mod_cache wouldn't provide much benefit
at all.  If you characterize the cost of delivering a file as

   time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network

mod_mem_cache can help reduce the first term but not the second.  For small
files, the first term is significant, so it makes sense to try to optimize
away the stat/open/close with an in-httpd cache.  But for large files, where
the second term is much larger than the first, mod_mem_cache doesn't 
necessarily
have an advantage.  And it has at least three disadvantages that I can 
think of:
  1. With mod_mem_cache, you can't use sendfile(2) to send the content.
     If your kernel does zero-copy on sendfile but not on writev, it
     could be faster to deliver a file instead of a cached copy.
  2. And as long as mod_mem_cache maintains a separate cache per worker
     process, it will use memory less efficiently than the filesystem
     cache.
  3. On a cache miss, mod_mem_cache needs to read the file in order to
     cache it.  By default, it uses mmap/munmap to do this.  We've seen
     mutex contention problems in munmap on high-volume Solaris servers.

What sort of results do you get if you bypass mod_cache and just rely on
the Unix filesystem cache to keep large files in memory?

Brian


Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by David Burry <db...@tagnet.org>.
> Random thoughts:
> - Did the content have short expiration times (or recent change dates which
> would result in the cache making agressive expiration estimates). That
> could churn the cache.

No.  files literally never change, when updates appear they are always new
files, web pages just point to new ones each update.  In this application
these are all executable downloadable files, think FTP repository over HTTP.

> - Was CacheMaxStreamingBuffer set appopropriately? (it may not be needed at
> all if the content length header is included on all replies).

C-L is included (SSIs and all similar dynamic stuff is disabled), not sure
about CacheMaxStreamingBuffer, I'd need to go check.

> - Did you try caching open file descriptors? I am rather curious if caching
> open fds will be useful/practicle on Unix systems.  Oh..., but this probably
> will not help your disk throughput... nevermind.

:-D

> - It's probably worth noting in the doc that -each- child process will cache
> up to MCacheSize KBytes.  If you have 10 child processes, then you need
> 10xMCacheSize Kbytes RAM available just for the cache (the same files could
> be cached in each process). I wonder if we should, at startup, allocate
> MCacheSize KB of shared storage and have mod_mem_cache allocate out of the
> shared pool. Each child process would have it's own unique reference to the
> object, but the object itself would only be cached once for all processes
> to access.

I suspected that's where our memory running out was coming from, but it
would have been helpful to have confirmation of my suspicion in the docs,
yes.  The problem is that our cache needed to be quite large to cache very
many of those large files, and we needed to run a lot of processes due to
the mutex contention with too many threads in one process (see the "[PATCH]
remove some mutex locks in the worker MPM" thread)... so we kind of gave up
on mod_mem_cache.  This is how this discussion branched off of that thread;
sorry I didn't state that clearly earlier.

It would be nice if there were some kind of cache shared between processes.
With large files like this, the cache only needs to be read once it's primed
with the most popular files... and which files are most popular doesn't
change that often, since we only make new releases every couple of months.
mod_file_cache works ok for this, but we need to develop something that
"guesses" what will be most popular, generates the httpd.conf list, and
restarts apache before each new release is publicly linked on web pages (but
after the files are put live), to avoid our servers falling over with each
new release... it's quite a pain, and quite scary what will happen if those
steps aren't followed correctly.  That's why I was hoping Apache could
manage it automatically with mod_mem_cache.

Dave

>
> Bill
>
> > Apache 2.0.43, Solaris 8, Sun E220R, 4 gig memory, gig ethernet.  We
> > tried both Sun forte and gcc compilers.  The problem was mod_mem_cache
> > was just way too resource intensive when pounding on a machine that hard,
> > trying to see if everything would fit into the cache... cpu/mutexes were
> > very high, especially memory was out of control (we had many very large
> > files, ranging from half dozen to two dozen megs, the most popular of
> > those were what we really wanted cached), and we were running several
> > hundred concurrent connections at once.  Maybe a new cache
> > loading/hit/removal algorithm that works better for many hits to very
> > large files would solve it I dunno.
> >
> > We finally settled on listing out some of the most popular files out in
> > the httpd.conf file for mod_file_cache, but that presents a management
> > problem as what's most popular changes.  It would have been nicer if
> > apache could auto-sense the most popular files.  Also it seems
> > mod_file_cache has a file size limit but at least we got enough in there
> > the disk wasn't bottlenecking anymore...
> >
> > Dave
> >
> > ----- Original Message -----
> > From: "Bill Stoddard" <bi...@wstoddard.com>
> > To: <de...@httpd.apache.org>
> > Sent: Wednesday, January 01, 2003 6:38 AM
> > Subject: RE: [PATCH] remove some mutex locks in the worker MPM
> >
> >
> > > > it may also have to do with caching we were doing (mod_mem_cache
> > > > crashed and burned,
> > > What version were you running?  What was the failure? If you can give
> > > me enough info to debug the problem, I'll work on it.
> > >
> > > Bill
> > >
>


RE: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)

Posted by Bill Stoddard <bi...@wstoddard.com>.
Random thoughts:
- Did the content have short expiration times (or recent change dates, which
would result in the cache making aggressive expiration estimates)? That could
churn the cache.
- Was CacheMaxStreamingBuffer set appropriately? (It may not be needed at all
if the content length header is included on all replies.)
- Did you try caching open file descriptors? I am rather curious if caching
open fds will be useful/practical on Unix systems.  Oh..., but this probably
will not help your disk throughput... nevermind.
- It's probably worth noting in the doc that -each- child process will cache
up to MCacheSize KBytes.  If you have 10 child processes, then you need
10xMCacheSize KBytes of RAM available just for the cache (the same files
could be cached in each process). I wonder if we should, at startup, allocate
MCacheSize KB of shared storage and have mod_mem_cache allocate out of the
shared pool. Each child process would have its own unique reference to the
object, but the object itself would only be cached once for all processes to
access.
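
Concretely, with invented numbers:

```apacheconf
# httpd.conf
MCacheSize 307200     # 300 MB cache limit *per child process*

# With 10 child processes the worst case is
#   10 x 300 MB = 3 GB of RAM for the cache alone,
# since each child may cache its own copy of the same files.
```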

Bill

> Apache 2.0.43, Solaris 8, Sun E220R, 4 gig memory, gig ethernet.  We tried
> both Sun forte and gcc compilers.  The problem was mod_mem_cache was just
> way too resource intensive when pounding on a machine that hard, trying to
> see if everything would fit into the cache... cpu/mutexes were very high,
> especially memory was out of control (we had many very large files, ranging
> from half dozen to two dozen megs, the most popular of those were what we
> really wanted cached), and we were running several hundred concurrent
> connections at once.  Maybe a new cache loading/hit/removal algorithm that
> works better for many hits to very large files would solve it I dunno.
>
> We finally settled on listing out some of the most popular files out in the
> httpd.conf file for mod_file_cache, but that presents a management problem
> as what's most popular changes.  It would have been nicer if apache could
> auto-sense the most popular files.  Also it seems mod_file_cache has a file
> size limit but at least we got enough in there the disk wasn't bottlenecking
> anymore...
>
> Dave
>
> ----- Original Message -----
> From: "Bill Stoddard" <bi...@wstoddard.com>
> To: <de...@httpd.apache.org>
> Sent: Wednesday, January 01, 2003 6:38 AM
> Subject: RE: [PATCH] remove some mutex locks in the worker MPM
>
>
> > > it may also have to do with caching we were doing (mod_mem_cache crashed
> > > and burned,
> > What version were you running?  What was the failure? If you can give me
> > enough info to debug the problem, I'll work on it.
> >
> > Bill
> >