Posted to dev@httpd.apache.org by "Rose, Billy" <wr...@loislaw.com> on 2002/04/26 18:13:55 UTC

RE: [PATCH] Possible fix for worker MPM performance problem (Updated patch)

As per a previous email, I was going to create a new MPM that had a
"dispatcher" sitting between the listener and the workers that would handle
all of the queueing problems via signaling. I will have to defer that
project at present due to workload at my job. However, in the current
discussion on the worker MPM, how about having an overflow buffer that the
listener stuffs connections into? Once threads wake up (or are created),
they check that buffer first for any other work before sleeping. A mutex
would regulate access to the queue. Comments?

Billy Rose 
wrose@loislaw.com

> -----Original Message-----
> From: Bill Stoddard [mailto:bill@wstoddard.com]
> Sent: Friday, April 26, 2002 11:06 AM
> To: dev@httpd.apache.org
> Subject: Re: [PATCH] Possible fix for worker MPM performance problem
> (Updated patch)
> 
> 
> 
> 
> > On Fri, Apr 26, 2002 at 11:32:19AM -0400, Paul J. Reder wrote:
> > > In my tests, this patch allows existing worker threads to continue
> > > processing requests while the new threads are started.
> > >
> > > In the previous code the server would pause while new threads were
> > > being created. The new threads started accepting work immediately,
> > > causing the existing threads to starve even though there was only a
> > > small (but growing) number of new threads.
> > >
> > > This patch allows the server to maintain a higher level of
> > > responsiveness during the ramp-up time.
> >
> > I don't quite understand what you are saying here. AIUI the worker MPM
> > creates all threads as soon as it is started, and as an optimization it
> > creates the listener thread as soon as there is at least one worker
> > thread available. By delaying the startup of the listener thread we're
> > merely increasing the amount of time it takes to start a new child and
> > start accepting connections.
> 
> By deferring the start-up of the listener, we are decreasing the amount
> of time it takes to start the new process. My speculation in creating
> the patch was that we could save the time spent context switching
> between a few active workers and the listen thread and use that time to
> start up the new threads. More speculation: context switching may be
> particularly expensive when threads are starting, or conversely, thread
> starting may be really expensive when lots of context switches are
> happening in the process. What is interesting is that, at least by
> Paul's measurements, the patch does make a difference.
> 
> I think Jeff's comment was close to on target as well. If the listener
> thread can efficiently defer accepting connections when there are no
> workers available, that would probably accomplish much the same.
> 
> Bill
> 
> > Please correct me if I'm missing something.
> >
> > The reason I think you were seeing a pause while new threads were
> > being created, as Jeff points out, was because our listener thread was
> > able to accept far more connections than we had available workers or
> > would have available workers. In the worst case, since we create the
> > listener as soon as there is 1 worker, it is possible to have a queue
> > filled with ap_threads_per_child accept()ed connections and only 1
> > worker. As soon as the next worker is created the listener is able to
> > accept() yet another connection and stuff that into the queue.
> >
> > And I think I've just realized something else. Since the scoreboard
> > is not updated until a worker thread pulls the connection off of the
> > queue, the parent is not going to create another child in accordance
> > with how many connections are accept()ed. This means that we are able
> > to accept up to 2*ThreadsPerChild*number_of_children connections while
> > the parent will only count us as having 1/2 that amount of
> > concurrency, and therefore will not match the demand. This is another
> > bug in the worker MPM that would be fixed if we prevented the listener
> > from accepting more connections than workers.
> 
> Yep, and that is closely related to another problem Paul is tracking
> down: idle server maintenance is thrashing a bit when a load spike comes
> in (i.e., processes are actually being told to shut down in the midst of
> a load spike).
> 
> Bill
> 
> >
> > -aaron
> >
> 

Re: [PATCH] Possible fix for worker MPM performance problem (Updated patch)

Posted by Aaron Bannert <aa...@clove.org>.
On Fri, Apr 26, 2002 at 11:13:55AM -0500, Rose, Billy wrote:
> As per a previous email, I was going to create a new MPM that had a
> "dispatcher" sitting between the listener and the workers that would handle
> all of the queueing problems via signaling. I will have to defer that
> project at present due to workload at my job. However, in the current
> discussion on the worker MPM, how about having an overflow buffer that the
> listener stuffs connections into. Once threads wake up (or are created),
> they check that buffer first for any other work and before sleeping. A mutex
> would regulate access to the queue. Comments???

That is what we have now, and having the "overflow" portion is the essence
of our problem.

-aaron