Posted to dev@httpd.apache.org by Paul Querna <ch...@force-elite.com> on 2004/10/26 03:49:06 UTC
Event MPM w/ multiple processes
Brian Akins wrote:
> Greg Ames wrote:
>
>> one thread per connection with an active http request, plus the
>> listener/event thread who owns all the connections in keepalive. I
>> believe Paul is saying set ThreadsPerChild to 1200 to handle the worst
>> case behavior - 100% of the connections are doing real work at some
>> instant and none are in keepalive timeouts.
>
>
> Can you still have multiple processes? We use 10k plus threads per box
> with worker.
The updated patch for today adds multiple processes. (same directives as
the worker MPM):
http://www.apache.org/~pquerna/event-mpm/event-mpm-2004-10-25.patch
However, the big thing it doesn't use is accept serialization.
This means all event threads are listening for incoming clients. The
first one to process the incoming connection gets it. This does not
block the other event threads, since they set the listening socket to
non-blocking before starting their loop.
This seems to work fine on my tests. It has the sucky side effect of
waking up threads sometimes when they are not needed, but on a busy
server, trying to accept() will likely be fine, as there will be a
backlog of clients to accept().
-Paul Querna
Re: Event MPM w/ multiple processes
Posted by Paul Querna <ch...@force-elite.com>.
Greg Ames wrote:
>> However, the big thing it doesn't use is accept serialization.
>
> hmmm, that would be challenging with a merged listener/event thread. If
> the event thread is blocked waiting for its turn to accept(), it can't
> react to a poll popping due to an older connection becoming readable.
Yup. I am thinking about different ways of passing the listening sockets
around. Both EPoll and KQueue support methods to cheaply disable an FD
in their pollset. It just needs exposure in the APR API.
>> This means all event threads are listening for incoming clients. The
>> first one to process the incoming connection gets it. This does not
>> block the other event threads, since they set the listening socket to
>> non-blocking before starting their loop.
>
>
>> This seems to work fine on my tests. It has the sucky side effect of
>> waking up threads sometimes when they are not needed, but on a busy
>> server, trying to accept() will likely be fine, as there will be a
>> backlog of clients to accept().
>
>
> short war story: we had a bug a couple of years ago where whenever we
> tried putting the latest httpd into production on daedalus, the load
> average spiked way up. Brian B and Manoj would get paged. It was
> caused by using unserialized poll()s rather than unserialized accept()s
> in the prefork mpm.
>
> But that was 200-300 unthreaded processes each using plain ol' vanilla
> poll() on one or two fd's. I'm thinking we would want to tune for 2-3
> processes with the event mpm so this shouldn't be the same situation.
That was my feeling as well. The 'thundering herd' problem isn't as
significant with a relatively small number of Event MPM processes,
compared to 1000+ prefork children.
-Paul
Re: Event MPM w/ multiple processes
Posted by Greg Ames <gr...@remulak.net>.
Paul Querna wrote:
> The updated patch for today adds multiple processes.
cool!
> However, the big thing it doesn't use is accept serialization.
hmmm, that would be challenging with a merged listener/event thread. If the
event thread is blocked waiting for its turn to accept(), it can't react to a
poll popping due to an older connection becoming readable.
> This means all event threads are listening for incoming clients. The
> first one to process the incoming connection gets it. This does not
> block the other event threads, since they set the listening socket to
> non-blocking before starting their loop.
>
> This seems to work fine on my tests. It has the sucky side effect of
> waking up threads sometimes when they are not needed, but on a busy
> server, trying to accept() will likely be fine, as there will be a
> backlog of clients to accept().
short war story: we had a bug a couple of years ago where whenever we tried
putting the latest httpd into production on daedalus, the load average spiked
way up. Brian B and Manoj would get paged. It was caused by using unserialized
poll()s rather than unserialized accept()s in the prefork mpm.
But that was 200-300 unthreaded processes each using plain ol' vanilla poll() on
one or two fd's. I'm thinking we would want to tune for 2-3 processes with the
event mpm so this shouldn't be the same situation.
off to see how a 2.6 kernel gets along with a Pentium Pro/study diffs/etc.
Greg