Posted to dev@httpd.apache.org by "Paul J. Reder" <re...@raleigh.ibm.com> on 2001/08/03 04:26:28 UTC

[Patch]: Scoreboard as linked list.

Ok, I have finally finished this version of the scoreboard redesign. The
basic idea is to implement the scoreboard as a linked list. The design,
test results, benefits, and patch are at http://24.25.12.102
Please check it out and give it a try. It performs very well.
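
To give a feel for the structure without downloading the patch, here is a minimal
sketch of a linked-list scoreboard. The struct and field names (sb_process,
sb_worker, and so on) are illustrative rather than the ones in the patch, and in
the real server the nodes would have to live in shared memory:

    #include <sys/types.h>   /* pid_t */

    /* Illustrative only: child processes and their workers kept as linked
     * lists, plus free lists so exited nodes can be reused rather than
     * growing the shared segment. */
    typedef struct sb_worker sb_worker;
    struct sb_worker {
        sb_worker *next;            /* next worker in this process, or next free node */
        int status;                 /* SERVER_READY, SERVER_BUSY_READ, ... */
        unsigned long access_count;
        unsigned long bytes_served;
    };

    typedef struct sb_process sb_process;
    struct sb_process {
        sb_process *next;           /* next child process */
        pid_t pid;
        int generation;
        sb_worker *workers;         /* head of this process's worker list */
    };

    typedef struct {
        sb_process *processes;      /* active child processes */
        sb_process *free_procs;     /* recycled process nodes */
        sb_worker *free_workers;    /* recycled worker nodes */
    } scoreboard;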

The brief performance results for this patch are:
   Current Time: Thursday, 02-Aug-2001 13:32:45 EDT 
   Restart Time: Thursday, 02-Aug-2001 11:02:44 EDT 
   Parent Server Generation: 0 
   Server uptime: 2 hours 30 minutes 
   Total accesses: 2456532 - Total Traffic: 69.3 GB 
   CPU Usage: u42.52 s92.34 cu.29 cs2.35 - 1.53% CPU load 
   273 requests/sec - 7.9 MB/second - 29.6 kB/request 
   327 requests currently being processed, 172 idle workers

compared to the current cvs code (reported yesterday):
   Current Time: Wednesday, 01-Aug-2001 10:48:54 EDT 
   Restart Time: Wednesday, 01-Aug-2001 08:09:19 EDT 
   Parent Server Generation: 0 
   Server uptime: 2 hours 39 minutes 35 seconds 
   Total accesses: 2259384 - Total Traffic: 63.1 GB 
   CPU Usage: u31.79 s96.78 cu0 cs.06 - 1.34% CPU load 
   236 requests/sec - 6.8 MB/second - 29.3 kB/request 
   190 requests currently being processed, 0 idle workers

There is a lot more info about the design, benefits, and test results
at that web page so rather than take up any more bandwidth, please
check it out. The patch applied cleanly to cvs head as of Thursday
at about 12:00 noon east coast time.

Also, please let me know if you have any problems getting at the page.

Thanks.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein

Re: [Patch]: Scoreboard as linked list.

Posted by Jeff Trawick <tr...@attglobal.net>.
Ryan Bloom <rb...@covalent.net> writes:

> -1.  As I have stated multiple times, if this uses a mutex to lock the list whenever something
> walks the scoreboard, I can't accept it.  It will kill the performance for modules that I have.

Why not a reader/writer lock?
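
For the record, the shape of that idea using the apr_thread_rwlock_* calls as they
exist in APR today (whether APR offered them in this form at the time is an
assumption, and a scoreboard shared across processes would really need a
process-shared equivalent; the sb_walk/sb_remove_node names are placeholders):

    #include "apr_pools.h"
    #include "apr_thread_rwlock.h"

    static apr_thread_rwlock_t *sb_lock;

    /* One-time setup.  apr_thread_rwlock_t is thread-scoped, so this only
     * covers threads within one child; a cross-process scoreboard would
     * need a process-shared lock instead. */
    static apr_status_t sb_lock_init(apr_pool_t *p)
    {
        return apr_thread_rwlock_create(&sb_lock, p);
    }

    /* Readers (e.g. a mod_status walk) can hold the lock concurrently. */
    static void sb_walk(void)
    {
        apr_thread_rwlock_rdlock(sb_lock);
        /* ... traverse the process/worker lists ... */
        apr_thread_rwlock_unlock(sb_lock);
    }

    /* Only structural changes (adding or removing a node) need exclusion. */
    static void sb_remove_node(void)
    {
        apr_thread_rwlock_wrlock(sb_lock);
        /* ... unlink the node and push it onto the free list ... */
        apr_thread_rwlock_unlock(sb_lock);
    }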

-- 
Jeff Trawick | trawick@attglobal.net | PGP public key at web site:
       http://www.geocities.com/SiliconValley/Park/9289/
             Born in Roswell... married an alien...

Re: [Patch]: Scoreboard as linked list.

Posted by Aaron Bannert <aa...@ebuilt.com>.
On Thu, Aug 02, 2001 at 10:19:09PM -0700, Ian Holsman wrote:
> 
> 
> Ryan Bloom wrote:
> 
> >My modules are walking the scoreboard on every request to gather information
> >that is only stored there.  Any locking will kill the performance of those modules.
> >
> that sounds kinda ugly performance-wise anyway,
> just out of interest, why does your module need scoreboard info on each
> request?

I agree. To me, the scoreboard (being by definition a shared resource)
just seems like one of those places where you have to bite the bullet
and do locking, and try not to put too much of the work in the critical
sections. But that's just me.

> couldn't we use a rw_lock/spin lock  instead of a mutex? that wouldn't 
> be as big a hit as a mutex

Although I'm a big rwlock fan, after seeing those numbers you (Ian) gave
me from your 8-way box on rwlocks, I'm afraid to use them anymore :)

"pthread_mutex all the way!"

-aaron


Re: [Patch]: Scoreboard as linked list.

Posted by Jeff Trawick <tr...@attglobal.net>.
Justin Erenkrantz <je...@ebuilt.com> writes:

> FWIW, is Jeff's b publicly available?  

I just put a stable version at http://www.apache.org/~trawick/b.c.

>                                        If so, would it be worth it to 
> commit b to the httpd-test repository?

dunno...  unclear how many people would find it useful... 

-- 
Jeff Trawick | trawick@attglobal.net | PGP public key at web site:
       http://www.geocities.com/SiliconValley/Park/9289/
             Born in Roswell... married an alien...

Re: [Patch]: Scoreboard as linked list.

Posted by Justin Erenkrantz <je...@ebuilt.com>.
On Fri, Aug 03, 2001 at 09:24:12AM -0400, Paul J. Reder wrote:
> Yes, please, benchmark this. Show me where my results were flawed. I didn't see the 
> problems in real life that some of you are seeing in theory. Show me the bottlenecks
> and we can see if they can be addressed.

FWIW, is Jeff's b publicly available?  If so, would it be worth it to 
commit b to the httpd-test repository?

Have you checked out flood in the httpd-test repository?  It's meant to
be much more sophisticated than ab (all XML configured).  If you get a
chance, you might like what you see.  If you don't, help us 
improve it (all httpd committers have access to httpd-test).  =-)

(I'm guessing that b is similar to ab...)  -- justin


Re: [Patch]: Scoreboard as linked list.

Posted by "Paul J. Reder" <re...@raleigh.ibm.com>.
Bill Stoddard wrote:
> 
> >
> > Ryan Bloom wrote:
> >
> > >My modules are walking the scoreboard on every request to gather information
> > >that is only stored there.  Any locking will kill the performance of those modules.
> > >
> > that sounds kinda ugly performance-wise anyway,
> > just out of interest, why does your module need scoreboard info on each
> > request?
> >
> > couldn't we use a rw_lock/spin lock  instead of a mutex? that wouldn't
> > be as big a hit as a mutex
> >
> > ..Ian
> 
> There are two things impacting performance in this patch. The first is the overhead of
> following pointers.  If you do that on each request, it can add up if you have large numbers
> of concurrent clients. I don't have a feel for the overhead relative to the rest of the
> server though. The additional overhead of following 10,000 pointers may be noise in the
> server. Or maybe not.

The proposed patch does not incur a noticeable amount of extra overhead due to the pointers.
The worker is accessed directly via a pointer in the conn_rec, so no walking or extra dereferencing
is required. The old code had to compute indexes (and internally convert those indexes, via
array dereferences, to an actual address). It works out to about the same cost.

For SB walks, the current code loops through row/col indexes and computes addresses. The
proposed patch just follows process/worker pointer chains. It works out about the same.
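
Schematically, the two walks compare like this. The fixed-table side uses the stock
scoreboard names (HARD_SERVER_LIMIT, HARD_THREAD_LIMIT, ap_scoreboard_image,
worker_score) only as stand-ins for the old layout, and the pointer side reuses the
illustrative sb_process/sb_worker types from the sketch at the top of the thread,
so neither is the exact code:

    /* Old fixed-table walk: compute each slot from row/column indexes. */
    static int busy_by_index(void)
    {
        int i, j, busy = 0;
        for (i = 0; i < HARD_SERVER_LIMIT; i++) {
            for (j = 0; j < HARD_THREAD_LIMIT; j++) {
                worker_score *ws = &ap_scoreboard_image->servers[i][j];
                if (ws->status != SERVER_DEAD)
                    busy++;
            }
        }
        return busy;
    }

    /* Linked-list walk: follow the process and worker pointer chains.
     * Per worker visited, the memory traffic is about the same. */
    static int busy_by_pointer(const scoreboard *sb)
    {
        const sb_process *p;
        const sb_worker *w;
        int busy = 0;

        for (p = sb->processes; p != NULL; p = p->next)
            for (w = p->workers; w != NULL; w = w->next)
                if (w->status != SERVER_DEAD)
                    busy++;
        return busy;
    }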

> 
> The second performance issue is lock contention. Acquiring a lock with no contention is
> fast. If the lock has contention, the performance goes to hell fast. So the suggestion of
> using a rw_lock (spin on multi cpu systems) sounds just right since accesses during normal
> HTTP requests can just acquire the reader lock.

Normal HTTP requests don't need a lock at all. Updating the worker counts is done without a
lock based on the fact that if the worker is handling the request, it cannot be in the process
of being returned to the free list. The update follows the current pattern of behavior, allowing
the workers to be updated even during a mod_status walk. The worst that can happen is the
status report might be slightly inaccurate for that precise moment.
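
A minimal sketch of that request-path update, again with the illustrative field
names rather than the patch's (SERVER_BUSY_WRITE is the stock status constant):

    /* The worker updates only its own node, reached via a pointer kept in
     * the conn_rec, so no lock is taken on the request path. */
    static void update_own_worker(sb_worker *me, unsigned long bytes)
    {
        /* Safe without a lock: a worker that is busy handling a request
         * cannot simultaneously be returned to the free list.  A concurrent
         * mod_status walk may read a value that is a moment out of date,
         * which matches the existing scoreboard's behavior. */
        me->status = SERVER_BUSY_WRITE;
        me->access_count++;
        me->bytes_served += bytes;
    }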

As I said in my first response to Ryan, even under very heavy abuse with the pathological
MRPC = 3000, lock contention was low. If someone can prove to me that contention is a problem
then we can discuss which of the several alternative optimizations would be best.

> 
> According to Paul's testing, his patch tends to manage the processes a bit better. At any
> point in time, he has fewer processes active. Need to think about this some to determine
> why that is. If the observation holds up, this is a performance mark in favor of Paul's
> patch as fewer processes means less memory and that's goodness.

My code *does* still reach the user-defined maximum number of processes. It just takes longer
and happens less frequently than with the current code.

> 
> Paul's patch also lets us eliminate HARD_THREAD_LIMIT and HARD_SERVER_LIMIT which is cool
> IMO. It also lets us not allocate scoreboard for mod_status if mod_status is not loaded
> (not implemented yet, but the design enables it).
> 
> Benchmarking will tell us what we need to know on the performance front.

Yes, please, benchmark this. Show me where my results were flawed. I didn't see the 
problems in real life that some of you are seeing in theory. Show me the bottlenecks
and we can see if they can be addressed.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein

Re: [Patch]: Scoreboard as linked list.

Posted by Bill Stoddard <bi...@wstoddard.com>.
>
> Ryan Bloom wrote:
>
> >My modules are walking the scoreboard on every request to gather information
> >that is only stored there.  Any locking will kill the performance of those modules.
> >
> that sounds kinda ugly performance-wise anyway,
> just out of interest, why does your module need scoreboard info on each
> request?
>
> couldn't we use a rw_lock/spin lock  instead of a mutex? that wouldn't
> be as big a hit as a mutex
>
> ..Ian

There are two things impacting performance in this patch. The first is the overhead of
following pointers.  If you do that on each request, it can add up if you have large numbers
of concurrent clients. I don't have a feel for the overhead relative to the rest of the
server though. The additional overhead of following 10,000 pointers may be noise in the
server. Or maybe not.

The second performance issue is lock contention. Acquiring a lock with no contention is
fast. If the lock has contention, the performance goes to hell fast. So the suggestion of
using a rw_lock (spin on multi cpu systems) sounds just right since accesses during normal
HTTP requests can just acquire the reader lock.

According to Paul's testing, his patch tends to manage the processes a bit better. At any
point in time, he has fewer processes active. Need to think about this some to determine
why that is. If the observation holds up, this is a performance mark in favor of Paul's
patch as fewer processes means less memory and that's goodness.

Paul's patch also lets us eliminate HARD_THREAD_LIMIT and HARD_SERVER_LIMIT which is cool
IMO. It also lets us not allocate scoreboard for mod_status if mod_status is not loaded
(not implemented yet, but the design enables it).

Benchmarking will tell us what we need to know on the performance front.

Bill


Re: [Patch]: Scoreboard as linked list.

Posted by Ian Holsman <ia...@cnet.com>.

Ryan Bloom wrote:

>My modules are walking the scoreboard on every request to gather information
>that is only stored there.  Any locking will kill the performance of those modules.
>
that sounds kinda ugly performance-wise anyway,
just out of interest, why does your module need scoreboard info on each
request?

couldn't we use a rw_lock/spin lock  instead of a mutex? that wouldn't 
be as big a hit as a mutex

..Ian

>
>Ryan
>
>On Thursday 02 August 2001 20:49, Brian Pane wrote:
>
>>Ryan Bloom wrote:
>>
>>>-1.  As I have stated multiple times, if this uses a mutex to lock the
>>>list whenever something walks the scoreboard, I can't accept it.  It will
>>>kill the performance for modules that I have.
>>>
>>I'm not convinced that you actually have to lock the whole list
>>during a scoreboard traversal.  In fact, if a node's contents
>>are left intact when it's 'deleted' and put back on the free
>>list, it may even be possible to add/remove nodes without using
>>locks (assuming that only one thread can add/remove nodes at a
>>time and the amount of time that a deleted node spends on the
>>free list is long enough for a scoreboard-walking reader that
>>happens to have a pointer to that node to finish reading from
>>that node before the node is reallocated).  Also, the documentation
>>that Paul posted mentions the option of using per-process or
>>per-worker locking; that might offer sufficiently small granularity,
>>depending on what specifically your modules are doing with the
>>scoreboard.
>>--Brian
>>
>>>Ryan
>>>
>>>On Thursday 02 August 2001 19:26, Paul J. Reder wrote:
>>>
>>>>Ok, I have finally finished this version of the scoreboard redesign. The
>>>>basic idea is to implement the scoreboard as a linked list. The design,
>>>>test results, benefits, and patch are at http://24.25.12.102
>>>>Please check it out and give it a try. It performs very well.
>>>>
>>>>The brief performance results for this patch are:
>>>>  Current Time: Thursday, 02-Aug-2001 13:32:45 EDT
>>>>  Restart Time: Thursday, 02-Aug-2001 11:02:44 EDT
>>>>  Parent Server Generation: 0
>>>>  Server uptime: 2 hours 30 minutes
>>>>  Total accesses: 2456532 - Total Traffic: 69.3 GB
>>>>  CPU Usage: u42.52 s92.34 cu.29 cs2.35 - 1.53% CPU load
>>>>  273 requests/sec - 7.9 MB/second - 29.6 kB/request
>>>>  327 requests currently being processed, 172 idle workers
>>>>
>>>>compared to the current cvs code (reported yesterday):
>>>>  Current Time: Wednesday, 01-Aug-2001 10:48:54 EDT
>>>>  Restart Time: Wednesday, 01-Aug-2001 08:09:19 EDT
>>>>  Parent Server Generation: 0
>>>>  Server uptime: 2 hours 39 minutes 35 seconds
>>>>  Total accesses: 2259384 - Total Traffic: 63.1 GB
>>>>  CPU Usage: u31.79 s96.78 cu0 cs.06 - 1.34% CPU load
>>>>  236 requests/sec - 6.8 MB/second - 29.3 kB/request
>>>>  190 requests currently being processed, 0 idle workers
>>>>
>>>>There is a lot more info about the design, benefits, and test results
>>>>at that web page so rather than take up any more bandwidth, please
>>>>check it out. The patch applied cleanly to cvs head as of Thursday
>>>>at about 12:00 noon east coast time.
>>>>
>>>>Also, please let me know if you have any problems getting at the page.
>>>>
>>>>Thanks.
>>>>
>




Re: [Patch]: Scoreboard as linked list.

Posted by "Paul J. Reder" <re...@raleigh.ibm.com>.
Ryan Bloom wrote:
> 
> My modules are walking the scoreboard on every request to gather information
> that is only stored there.  Any locking will kill the performance of those modules.

Only if there is contention. If there is no contention, then you pay the price
of some function call overhead. As demonstrated by my test results and explained in
my previously posted response, even under heavy abuse with a pathological config,
contention is low. 

If you can prove that the current locking is too heavy, there are solutions. Please
show me results that prove the patch causes you problems that can't be optimized.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein

Re: [Patch]: Scoreboard as linked list.

Posted by "Paul J. Reder" <re...@raleigh.ibm.com>.
Ryan Bloom wrote:
> My modules are walking the scoreboard on every request to gather information
> that is only stored there.  Any locking will kill the performance of those modules.

Since you have not offered any details about these modules, allow me to make some
uneducated guesses and ask some questions.

I can only assume, based on the limited information, that you are collecting data for
some kind of real-time reporting: either to compute a single set of statistics
(e.g. average X, or total Y), or to report individual results/problems
(e.g. SNMP alerts, or real-time worker stats).

I can certainly see that if you need to walk the entire SB every time that locking
would be bad, causing all requests to be serialized by the SB lock.

But do you *really* need to walk the whole SB? What I mean is that there are only two
reasons that the information in any given worker changes: the worker processes a
request or the worker goes away. Actually, both of these can be lumped into one: the
worker experiences a status change. It seems to me that you are walking the entire
SB each time even though, at any given time from the request's perspective, only the
worker processing the current request has changed. Any changes you happen to
pick up for workers processing other requests will be incomplete until they have
arrived at their next stable state, at which time you will walk the SB as a result
of their request.

What if we set up a "state change hook" that would provide your modules with a hashable
value and access to the worker? You would never need to walk the whole SB. Your
function could use the current info to update just the part of your stats related
to the worker that changed. It seems to me that something like this would provide
your module with much better performance, as well as eliminate any problems with
locking.
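
Purely as a sketch of that proposal (nothing like it exists in the tree; the hook
name, its arguments, and the sb_worker type are all made up here), using httpd's
usual hook machinery:

    #include "httpd.h"
    #include "http_config.h"
    #include "scoreboard.h"                   /* SERVER_NUM_STATUS */

    typedef struct sb_worker sb_worker;       /* illustrative worker node type */

    /* Hypothetical hook, run by the scoreboard code whenever a worker's
     * entry changes state. */
    AP_DECLARE_HOOK(void, worker_state_change, (sb_worker *w, int new_status))

    /* In a stats module: fold only the changed worker into running totals
     * instead of walking the whole scoreboard on every request. */
    static unsigned long transitions[SERVER_NUM_STATUS];

    static void stats_on_worker_change(sb_worker *w, int new_status)
    {
        transitions[new_status]++;
        /* ... update any averages/totals that involve this worker ... */
    }

    static void register_hooks(apr_pool_t *p)
    {
        /* ap_hook_worker_state_change would be generated by the usual
         * AP_IMPLEMENT_HOOK_* machinery, omitted here. */
        ap_hook_worker_state_change(stats_on_worker_change, NULL, NULL,
                                    APR_HOOK_MIDDLE);
    }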

Of course, due to my lack of knowledge of your code, this may not make any sense,
but I'm always open to education. Enlighten me as to why walking the whole SB on each
request is required when only the worker processing the request will be updated.

My suggestion seems to reduce the processing to a single discrete event (i.e., worker
X has changed state) instead of having potentially multiple overlapping walks of
the SB happening, picking up bits and pieces of other state changes in progress. You
will never run into the case where a single worker has multiple state changes
happening simultaneously (unless your module code is really slow, which I doubt),
so locking should not be an issue and serialization should never be a
concern. Access to the worker is guaranteed since the workers remain in existence
(either active or on the free list) until Apache has been restarted and all
workers have quiesced (hence no state changes...).

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein

Re: [Patch]: Scoreboard as linked list.

Posted by Ryan Bloom <rb...@covalent.net>.
My modules are walking the scoreboard on every request to gather information
that is only stored there.  Any locking will kill the performance of those modules.

Ryan

On Thursday 02 August 2001 20:49, Brian Pane wrote:
> Ryan Bloom wrote:
> >-1.  As I have stated multiple times, if this uses a mutex to lock the
> > list whenever something walks the scoreboard, I can't accept it.  It will
> > kill the performance for modules that I have.
>
> I'm not convinced that you actually have to lock the whole list
> during a scoreboard traversal.  In fact, if a node's contents
> are left intact when it's 'deleted' and put back on the free
> list, it may even be possible to add/remove nodes without using
> locks (assuming that only one thread can add/remove nodes at a
> time and the amount of time that a deleted node spends on the
> free list is long enough for a scoreboard-walking reader that
> happens to have a pointer to that node to finish reading from
> that node before the node is reallocated).  Also, the documentation
> that Paul posted mentions the option of using per-process or
> per-worker locking; that might offer sufficiently small granularity,
> depending on what specifically your modules are doing with the
> scoreboard.
> --Brian
>
> >Ryan
> >
> >On Thursday 02 August 2001 19:26, Paul J. Reder wrote:
> >>Ok, I have finally finished this version of the scoreboard redesign. The
> >>basic idea is to implement the scoreboard as a linked list. The design,
> >>test results, benefits, and patch are at http://24.25.12.102
> >>Please check it out and give it a try. It performs very well.
> >>
> >>The brief performance results for this patch are:
> >>   Current Time: Thursday, 02-Aug-2001 13:32:45 EDT
> >>   Restart Time: Thursday, 02-Aug-2001 11:02:44 EDT
> >>   Parent Server Generation: 0
> >>   Server uptime: 2 hours 30 minutes
> >>   Total accesses: 2456532 - Total Traffic: 69.3 GB
> >>   CPU Usage: u42.52 s92.34 cu.29 cs2.35 - 1.53% CPU load
> >>   273 requests/sec - 7.9 MB/second - 29.6 kB/request
> >>   327 requests currently being processed, 172 idle workers
> >>
> >>compared to the current cvs code (reported yesterday):
> >>   Current Time: Wednesday, 01-Aug-2001 10:48:54 EDT
> >>   Restart Time: Wednesday, 01-Aug-2001 08:09:19 EDT
> >>   Parent Server Generation: 0
> >>   Server uptime: 2 hours 39 minutes 35 seconds
> >>   Total accesses: 2259384 - Total Traffic: 63.1 GB
> >>   CPU Usage: u31.79 s96.78 cu0 cs.06 - 1.34% CPU load
> >>   236 requests/sec - 6.8 MB/second - 29.3 kB/request
> >>   190 requests currently being processed, 0 idle workers
> >>
> >>There is a lot more info about the design, benefits, and test results
> >>at that web page so rather than take up any more bandwidth, please
> >>check it out. The patch applied cleanly to cvs head as of Thursday
> >>at about 12:00 noon east coast time.
> >>
> >>Also, please let me know if you have any problems getting at the page.
> >>
> >>Thanks.

-- 

_____________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
Covalent Technologies			rbb@covalent.net
-----------------------------------------------------------------------------

Re: [Patch]: Scoreboard as linked list.

Posted by "Roy T. Fielding" <fi...@ebuilt.com>.
> Understand, this isn't a theoretical concern for me.   I have modules that
> walk the scoreboard on every request.  They are looking to determine what
> each of the other workers is doing.
> Requiring any locking to walk the scoreboard is a non-starter.

Well, that's bizarre.  Doing that in a worker doesn't make any sense. I could
understand the parent process doing a walk per second for stats collection,
which would also obviate the need for locking, but having every child walk
every other child on every request is going to make for a sucky server
whether the scoreboard locks or not.
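
A sketch of that alternative, reusing the illustrative list types from earlier in
the thread (the summary struct and function name are made up, the status constants
are the stock ones, and the cached summary would have to live in shared memory so
children can read it):

    typedef struct {
        int busy;                    /* workers currently serving requests */
        int idle;
    } sb_summary;

    static sb_summary cached;        /* single writer: the parent process */

    /* Called roughly once per second from the parent's existing maintenance
     * loop: walk the lists and publish the totals so request-path modules
     * read a cached summary instead of walking the scoreboard themselves. */
    static void parent_stats_tick(const scoreboard *sb)
    {
        sb_summary s = { 0, 0 };
        const sb_process *p;
        const sb_worker *w;

        for (p = sb->processes; p != NULL; p = p->next) {
            for (w = p->workers; w != NULL; w = w->next) {
                if (w->status == SERVER_READY)
                    s.idle++;
                else if (w->status != SERVER_DEAD)
                    s.busy++;
            }
        }
        cached = s;                  /* readers tolerate a stale snapshot */
    }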

Anyway, I think this discussion is pointless.  We need a server that is
very fast and another server that is very extensible and yet another server
that is highly managed, so I guess y'all will have to write three MPMs for
every process model.  Just do me a favor and choose names that differentiate
one MPM from another rather than names that are common to many MPMs
(threaded, worker) or completely meaningless (dexter).

....Roy


Re: [Patch]: Scoreboard as linked list.

Posted by Marc Slemko <ma...@znep.com>.
On Sat, 4 Aug 2001, Ryan Bloom wrote:

> Understand, this isn't a theoretical concern for me.   I have modules that walk the scoreboard
> on every request.  They are looking to determine what each of the other workers is doing.
> Requiring any locking to walk the scoreboard is a non-starter.

I'm not sure that the fact that you have some unnamed module doing some 
unnamed function that for some unnamed reason "wants to" (not needs, 
since as you admitted there are other ways to do it) walk the scoreboard
on every request is really something that is a very good basis for 
objecting to a change...

I'm not saying that there isn't a valid reason to oppose using a
linked list and/or this particular implementation, but I think it
is important to be careful to separate "I have some non-Apache
code that would be impacted by this" from "the design is
bad / this makes a common and reasonable task much harder / there
are implementation bugs"...




Re: [Patch]: Scoreboard as linked list.

Posted by Ryan Bloom <rb...@covalent.net>.
On Saturday 04 August 2001 12:57, Brian Pane wrote:
> Paul J. Reder wrote:
> >Brian Pane wrote:
>
> [...]
>
> >>Also, the documentation
> >>that Paul posted mentions the option of using per-process or
> >>per-worker locking; that might offer sufficiently small granularity,
> >>depending on what specifically your modules are doing with the
> >>scoreboard.
> >>--Brian
> >
> >Again, this is a possibility, *if* performance requires it. Using
> >finer granularity locking adds complexity to the code. I would
> >discourage moving to this unless the current scheme proves to be
> >a problem.
>
> I'm fundamentally in agreement.  My point was not that finer-grained
> locking is inherently necessary, but rather that the ability of your
> design to support finer-grained locking refutes Ryan's theoretical
> concern about locking being a fundamental bottleneck.

Understand, this isn't a theoretical concern for me.   I have modules that walk the scoreboard
on every request.  They are looking to determine what each of the other workers is doing.
Requiring any locking to walk the scoreboard is a non-starter.

Are there other ways to handle what I need to do?  Yes.  They add greatly to the complexity of
my code.  I have also posted two other problems with the current patch.  One is a simple bug that
Paul is fixing, the other is a fundamental design flaw that also requires I give the patch a -1.

Ryan

_____________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
Covalent Technologies			rbb@covalent.net
-----------------------------------------------------------------------------

Re: [Patch]: Scoreboard as linked list.

Posted by Brian Pane <bp...@pacbell.net>.
Paul J. Reder wrote:

>Brian Pane wrote:
>
[...]

>>Also, the documentation
>>that Paul posted mentions the option of using per-process or
>>per-worker locking; that might offer sufficiently small granularity,
>>depending on what specifically your modules are doing with the
>>scoreboard.
>>--Brian
>>
>
>Again, this is a possibility, *if* performance requires it. Using
>finer granularity locking adds complexity to the code. I would 
>discourage moving to this unless the current scheme proves to be
>a problem.
>
I'm fundamentally in agreement.  My point was not that finer-grained
locking is inherently necessary, but rather that the ability of your
design to support finer-grained locking refutes Ryan's theoretical
concern about locking being a fundamental bottleneck.

--Brian



Re: [Patch]: Scoreboard as linked list.

Posted by "Paul J. Reder" <re...@raleigh.ibm.com>.
Brian Pane wrote:
> 
> Ryan Bloom wrote:
> 
> >-1.  As I have stated multiple times, if this uses a mutex to lock the list whenever something
> >walks the scoreboard, I can't accept it.  It will kill the performance for modules that I have.
> >
> I'm not convinced that you actually have to lock the whole list
> during a scoreboard traversal.

True. Finer granularity locking could be used *if* needed.

>                                 In fact, if a node's contents
> are left intact when it's 'deleted' and put back on the free
> list, it may even be possible to add/remove nodes without using
> locks (assuming that only one thread can add/remove nodes at a
> time

This is probably a bad assumption since each process and worker
returns itself to the free list as it exits, so there can be
multiple returns happening at one time...

>      and the amount of time that a deleted node spends on the
> free list is long enough for a scoreboard-walking reader that
> happens to have a pointer to that node to finish reading from
> that node before the node is reallocated).

Since the goal, under heavy load, is to make the best possible use
of workers, we want to minimize the amount of time workers spend
on the free list. We can't assume that the worker spends much time
on the free list, and we certainly don't want to extend that time.

To that end, however, I could alter the routines to put returned
nodes at the end of the free list and take them off the head. This
would provide the longest possible time on the free list without
artificially adding delay.
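
A sketch of that ordering change, with the illustrative sb_worker type again rather
than the patch's code: nodes keep their contents when returned, go on at the tail,
and come off at the head, so a reader still holding a pointer to a just-freed node
gets the longest possible window before the node is handed out again:

    /* FIFO free list: return at the tail, reuse from the head. */
    typedef struct {
        sb_worker *head;
        sb_worker *tail;
    } sb_free_list;

    /* Called when a worker exits; the node's other fields are deliberately
     * left intact so a late reader still sees plausible data. */
    static void free_worker(sb_free_list *fl, sb_worker *w)
    {
        w->next = NULL;
        if (fl->tail)
            fl->tail->next = w;
        else
            fl->head = w;
        fl->tail = w;
    }

    /* Called when a new worker is needed; the oldest free node is reused. */
    static sb_worker *alloc_worker(sb_free_list *fl)
    {
        sb_worker *w = fl->head;
        if (w) {
            fl->head = w->next;
            if (fl->head == NULL)
                fl->tail = NULL;
            w->next = NULL;
        }
        return w;    /* NULL means the caller must create a fresh node */
    }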

>                                            Also, the documentation
> that Paul posted mentions the option of using per-process or
> per-worker locking; that might offer sufficiently small granularity,
> depending on what specifically your modules are doing with the
> scoreboard.
> --Brian

Again, this is a possibility, *if* performance requires it. Using
finer granularity locking adds complexity to the code. I would 
discourage moving to this unless the current scheme proves to be
a problem. According to my testing it isn't currently a problem.
Please prove me wrong and we can change it.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein

Re: [Patch]: Scoreboard as linked list.

Posted by Brian Pane <bp...@pacbell.net>.
Ryan Bloom wrote:

>-1.  As I have stated multiple times, if this uses a mutex to lock the list whenever something
>walks the scoreboard, I can't accept it.  It will kill the performance for modules that I have.
>
I'm not convinced that you actually have to lock the whole list
during a scoreboard traversal.  In fact, if a node's contents
are left intact when it's 'deleted' and put back on the free
list, it may even be possible to add/remove nodes without using
locks (assuming that only one thread can add/remove nodes at a
time and the amount of time that a deleted node spends on the
free list is long enough for a scoreboard-walking reader that
happens to have a pointer to that node to finish reading from
that node before the node is reallocated).  Also, the documentation
that Paul posted mentions the option of using per-process or
per-worker locking; that might offer sufficiently small granularity,
depending on what specifically your modules are doing with the
scoreboard.
--Brian

>
>
>Ryan
>
>On Thursday 02 August 2001 19:26, Paul J. Reder wrote:
>
>>Ok, I have finally finished this version of the scoreboard redesign. The
>>basic idea is to implement the scoreboard as a linked list. The design,
>>test results, benefits, and patch are at http://24.25.12.102
>>Please check it out and give it a try. It performs very well.
>>
>>The brief performance results for this patch are:
>>   Current Time: Thursday, 02-Aug-2001 13:32:45 EDT
>>   Restart Time: Thursday, 02-Aug-2001 11:02:44 EDT
>>   Parent Server Generation: 0
>>   Server uptime: 2 hours 30 minutes
>>   Total accesses: 2456532 - Total Traffic: 69.3 GB
>>   CPU Usage: u42.52 s92.34 cu.29 cs2.35 - 1.53% CPU load
>>   273 requests/sec - 7.9 MB/second - 29.6 kB/request
>>   327 requests currently being processed, 172 idle workers
>>
>>compared to the current cvs code (reported yesterday):
>>   Current Time: Wednesday, 01-Aug-2001 10:48:54 EDT
>>   Restart Time: Wednesday, 01-Aug-2001 08:09:19 EDT
>>   Parent Server Generation: 0
>>   Server uptime: 2 hours 39 minutes 35 seconds
>>   Total accesses: 2259384 - Total Traffic: 63.1 GB
>>   CPU Usage: u31.79 s96.78 cu0 cs.06 - 1.34% CPU load
>>   236 requests/sec - 6.8 MB/second - 29.3 kB/request
>>   190 requests currently being processed, 0 idle workers
>>
>>There is a lot more info about the design, benefits, and test results
>>at that web page so rather than take up any more bandwidth, please
>>check it out. The patch applied cleanly to cvs head as of Thursday
>>at about 12:00 noon east coast time.
>>
>>Also, please let me know if you have any problems getting at the page.
>>
>>Thanks.
>>
>




Re: [Patch]: Scoreboard as linked list.

Posted by Ryan Bloom <rb...@covalent.net>.
-1.  As I have stated multiple times, if this uses a mutex to lock the list whenever something
walks the scoreboard, I can't accept it.  It will kill the performance for modules that I have.

Ryan

On Thursday 02 August 2001 19:26, Paul J. Reder wrote:
> Ok, I have finally finished this version of the scoreboard redesign. The
> basic idea is to implement the scoreboard as a linked list. The design,
> test results, benefits, and patch are at http://24.25.12.102
> Please check it out and give it a try. It performs very well.
>
> The brief performance results for this patch are:
>    Current Time: Thursday, 02-Aug-2001 13:32:45 EDT
>    Restart Time: Thursday, 02-Aug-2001 11:02:44 EDT
>    Parent Server Generation: 0
>    Server uptime: 2 hours 30 minutes
>    Total accesses: 2456532 - Total Traffic: 69.3 GB
>    CPU Usage: u42.52 s92.34 cu.29 cs2.35 - 1.53% CPU load
>    273 requests/sec - 7.9 MB/second - 29.6 kB/request
>    327 requests currently being processed, 172 idle workers
>
> compared to the current cvs code (reported yesterday):
>    Current Time: Wednesday, 01-Aug-2001 10:48:54 EDT
>    Restart Time: Wednesday, 01-Aug-2001 08:09:19 EDT
>    Parent Server Generation: 0
>    Server uptime: 2 hours 39 minutes 35 seconds
>    Total accesses: 2259384 - Total Traffic: 63.1 GB
>    CPU Usage: u31.79 s96.78 cu0 cs.06 - 1.34% CPU load
>    236 requests/sec - 6.8 MB/second - 29.3 kB/request
>    190 requests currently being processed, 0 idle workers
>
> There is a lot more info about the design, benefits, and test results
> at that web page so rather than take up any more bandwidth, please
> check it out. The patch applied cleanly to cvs head as of Thursday
> at about 12:00 noon east coast time.
>
> Also, please let me know if you have any problems getting at the page.
>
> Thanks.

-- 

_____________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
Covalent Technologies			rbb@covalent.net
-----------------------------------------------------------------------------