Posted to dev@httpd.apache.org by Ben Hyde <bh...@gensym.com> on 1997/12/04 17:47:53 UTC

mutex in palloc

Is the critical region in palloc so narrow
because allocation in a given pool is never
done by more than a single thread?

 - ben h.

Re: mutex in palloc

Posted by Martin Kraemer <Ma...@mch.sni.de>.
On Wed, Dec 10, 1997 at 07:02:43AM -0400, Ben Hyde wrote:
> >> 
> >> This untested, uncompiled rewrite illustrates my concern.
> >
Yes, that looks ok and sensible to me (after unpacking and doing a
regular "diff -ub"). An (untested) +1.

    Martin
-- 
| S I E M E N S |  <Ma...@mch.sni.de>  |      Siemens Nixdorf
| ------------- |   Voice: +49-89-636-46021     |  Informationssysteme AG
| N I X D O R F |   FAX:   +49-89-636-44994     |   81730 Munich, Germany
~~~~~~~~~~~~~~~~My opinions only, of course; pgp key available on request

Re: mutex in palloc

Posted by Ben Laurie <be...@algroup.co.uk>.
Ben Hyde wrote:
> 
> > >> >Ben Hyde wrote:
> > >> >> Is the critical region in palloc so narrow
> > >> >> because allocation in a given pool is never
> > >> >> done by more than a single thread?
> 
> Ok so Dean's answer to my original question is:   yes
> Ben L's answer is:  "seems like a rash assumption".

I'm quite happy to go along with Dean, but my reply still stands. If we
are going to do this, it must be made clear that it's what we are doing.
Assuming that people will guess correctly still seems rash to me.

> I asked since it tells me a lot about the overall structure
> of things.  Not that my opinion counts for much hereabouts,
> but I was expecting Dean's answer, since I currently
> think the pool is THE thread-specific data structure and
> any attempt to share one confuses me.
> 
> Those wild and crazy module authors.  An assert that
> the "expected" thread is doing the allocate comes to mind.

Sounds like a good idea. Budding NT patch authors should note, though,
that the obvious way of doing this gets you a handle that is the same
for every thread (really).

> I don't pretend to be any kind of NT expert, but
> EnterCriticalSection et al. appears to be a better substrate
> for acquire_mutex et al.

You are probably right - I haven't looked at the low-level stuff in
detail, yet.

Cheers,

Ben.

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache

Re: mutex in palloc

Posted by Ben Hyde <bh...@gensym.com>.
> >> >Ben Hyde wrote:
> >> >> Is the critical region in palloc so narrow
> >> >> because allocation in a given pool is never
> >> >> done by more than a single thread?

Ok so Dean's answer to my original question is:   yes
Ben L's answer is:  "seems like a rash assumption".

I asked since it tells me a lot about the overall structure
of things.  Not that my opinion counts for much hereabouts,
but I was expecting Dean's answer, since I currently
think the pool is THE thread-specific data structure and
any attempt to share one confuses me.

Those wild and crazy module authors.  An assert that
the "expected" thread is doing the allocate comes to mind.


I don't pretend to be any kind of NT expert, but
EnterCriticalSection et al. appears to be a better substrate
for acquire_mutex et al.

 - ben

Re: mutex in palloc

Posted by Dean Gaudet <dg...@arctic.org>.
I don't agree with this.  I don't agree that two threads should be using
the same pool without doing their own protection from each other.  The
path that's protected right now is the path which requires access to the
shared pool of free blocks (or malloc).

My reason:  speed.  If we have to take a mutex on every allocation, we
kill performance on the most common server paths; you'll note we do
allocations left and right all over the place.  There's no need in the
existing server code for this change either: we never have two threads
using the same pool (if we do, then that's the bug, not palloc).

Here's an example where multiple threads can be used in a single request: 
mod_cgi can be rewritten to use three threads.  One thread to shuffle bits
from the client to the CGI (doing dechunking as necessary, and logging it
into the script log as necessary).  One thread to shuffle bits from the
CGI to the client (chunk, log, whatever).  And the third thread to shuffle
bits from the CGI's stderr to the error_log while prefixing the lines with
a useful token (e.g. a timestamp and UNIQUE_ID).

It should be possible to arrange things such that three subpools are used
so that each thread still has private access to a pool.  You can ensure
that none of the pools are cleaned up until after all the threads have
completed -- so the threads can share data from their private pools (using
whatever synchronization they need or don't need). 

Can you think of an example where subpools can't be used in this way? 

Dean

On Wed, 10 Dec 1997, Ben Hyde wrote:

> Ben Laurie wrote:
> >Ben Hyde wrote:
> >> Ben Laurie wrote:
> >> >Ben Hyde wrote:
> >> >> Is the critical region in palloc so narrow
> >> >> because allocation in a given pool is never
> >> >> done by more than a single thread?
> >> >I haven't checked the code, but that seems like a rash assumption, if
> >> >true.
> >> 
> >> This untested, uncompiled rewrite illustrates my concern.
> >
> >Perhaps I need new glasses, but I can't actually see the difference
> >between the two...
> your glasses ok, my fingers not - ben h.
> 
> --- Before ---
> API_EXPORT(void *) palloc(struct pool *a, int reqsize)
> {
> #ifdef ALLOC_USE_MALLOC
> ...
> #else
> 
>     /* Round up requested size to an even number of alignment units (core clicks)
>      */
> 
>     int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
>     int size = nclicks * CLICK_SZ;
> 
>     /* First, see if we have space in the block most recently
>      * allocated to this pool
>      */
> 
>     union block_hdr *blok = a->last;
>     char *first_avail = blok->h.first_avail;
>     char *new_first_avail;
> 
>     if (reqsize <= 0)
> 	return NULL;
> 
>     new_first_avail = first_avail + size;
> 
>     if (new_first_avail <= blok->h.endp) {
> 	debug_verify_filled(first_avail, blok->h.endp,
> 	    "Ouch!  Someone trounced past the end of their allocation!\n");
> 	blok->h.first_avail = new_first_avail;
> 	return (void *) first_avail;
>     }
> 
>     /* Nope --- get a new one that's guaranteed to be big enough */
> 
>     block_alarms();
> 
>     (void) acquire_mutex(alloc_mutex);
> 
>     blok = new_block(size);
>     a->last->h.next = blok;
>     a->last = blok;
> 
>     (void) release_mutex(alloc_mutex);
> 
>     unblock_alarms();
> 
>     first_avail = blok->h.first_avail;
>     blok->h.first_avail += size;
> 
>     return (void *) first_avail;
> #endif
> }
> --- After ---
> API_EXPORT(void *) palloc(struct pool *a, int reqsize)
> {
> #ifdef ALLOC_USE_MALLOC
> ...
> #else
>     int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
>     int size = nclicks * CLICK_SZ; /* Alignable */
>     char *result;
> 
>     if (reqsize <= 0)
>       return NULL;
>     block_alarms();
>     (void)acquire_mutex(alloc_mutex);
>     {
>       union block_hdr *blok = a->last;
>       char *first_avail = blok->h.first_avail;
>       char *new_first_avail = first_avail + size;
> 
>       if (new_first_avail <= blok->h.endp) {
> 	debug_verify_filled(first_avail, blok->h.endp,
> 			    "Ouch!  Someone trounced past the end of their allocation!\n");
> 	blok->h.first_avail = new_first_avail;
> 	result = first_avail;
>       } else {
> 	blok = new_block(size);
> 	a->last->h.next = blok;
> 	a->last = blok;
> 	result = blok->h.first_avail;
> 	blok->h.first_avail += size;
>       }
>     }
>     (void)release_mutex(alloc_mutex);
>     unblock_alarms();
> 
>     return (void *)result;
> #endif
> }
> ---
> 


Re: mutex in palloc

Posted by Ben Hyde <bh...@gensym.com>.
Ben Laurie wrote:
>Ben Hyde wrote:
>> Ben Laurie wrote:
>> >Ben Hyde wrote:
>> >> Is the critical region in palloc so narrow
>> >> because allocation in a given pool is never
>> >> done by more than a single thread?
>> >I haven't checked the code, but that seems like a rash assumption, if
>> >true.
>> 
>> This untested, uncompiled rewrite illustrates my concern.
>
>Perhaps I need new glasses, but I can't actually see the difference
>between the two...
your glasses ok, my fingers not - ben h.

--- Before ---
API_EXPORT(void *) palloc(struct pool *a, int reqsize)
{
#ifdef ALLOC_USE_MALLOC
...
#else

    /* Round up requested size to an even number of alignment units (core clicks)
     */

    int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
    int size = nclicks * CLICK_SZ;

    /* First, see if we have space in the block most recently
     * allocated to this pool
     */

    union block_hdr *blok = a->last;
    char *first_avail = blok->h.first_avail;
    char *new_first_avail;

    if (reqsize <= 0)
	return NULL;

    new_first_avail = first_avail + size;

    if (new_first_avail <= blok->h.endp) {
	debug_verify_filled(first_avail, blok->h.endp,
	    "Ouch!  Someone trounced past the end of their allocation!\n");
	blok->h.first_avail = new_first_avail;
	return (void *) first_avail;
    }

    /* Nope --- get a new one that's guaranteed to be big enough */

    block_alarms();

    (void) acquire_mutex(alloc_mutex);

    blok = new_block(size);
    a->last->h.next = blok;
    a->last = blok;

    (void) release_mutex(alloc_mutex);

    unblock_alarms();

    first_avail = blok->h.first_avail;
    blok->h.first_avail += size;

    return (void *) first_avail;
#endif
}
--- After ---
API_EXPORT(void *) palloc(struct pool *a, int reqsize)
{
#ifdef ALLOC_USE_MALLOC
...
#else
    int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
    int size = nclicks * CLICK_SZ; /* Alignable */
    char *result;

    if (reqsize <= 0)
      return NULL;
    block_alarms();
    (void)acquire_mutex(alloc_mutex);
    {
      union block_hdr *blok = a->last;
      char *first_avail = blok->h.first_avail;
      char *new_first_avail = first_avail + size;

      if (new_first_avail <= blok->h.endp) {
	debug_verify_filled(first_avail, blok->h.endp,
			    "Ouch!  Someone trounced past the end of their allocation!\n");
	blok->h.first_avail = new_first_avail;
	result = first_avail;
      } else {
	blok = new_block(size);
	a->last->h.next = blok;
	a->last = blok;
	result = blok->h.first_avail;
	blok->h.first_avail += size;
      }
    }
    (void)release_mutex(alloc_mutex);
    unblock_alarms();

    return (void *)result;
#endif
}
---

Re: mutex in palloc

Posted by Ben Laurie <be...@algroup.co.uk>.
Ben Hyde wrote:
> 
> Ben Laurie wrote:
> >Ben Hyde wrote:
> >> Is the critical region in palloc so narrow
> >> because allocation in a given pool is never
> >> done by more than a single thread?
> >I haven't checked the code, but that seems like a rash assumption, if
> >true.
> 
> This untested, uncompiled rewrite illustrates my concern.

Perhaps I need new glasses, but I can't actually see the difference
between the two...

Cheers,

Ben.

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache

Re: mutex in palloc

Posted by Ben Hyde <bh...@gensym.com>.
Ben Laurie wrote:
>Ben Hyde wrote:
>> Is the critical region in palloc so narrow
>> because allocation in a given pool is never
>> done by more than a single thread?
>I haven't checked the code, but that seems like a rash assumption, if
>true.

This untested, uncompiled rewrite illustrates my concern.

  --- BEFORE  ---

API_EXPORT(void *) palloc(struct pool *a, int reqsize)
{
#ifdef ALLOC_USE_MALLOC
  ...
#else

    /* Round up requested size to an even number of alignment units (core clicks)
     */

    int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
    int size = nclicks * CLICK_SZ;
    char *result;

    if (reqsize <= 0)
      return NULL;
    block_alarms();
    (void)acquire_mutex(alloc_mutex);
    {
      /* First, see if we have space in the block most recently
       * allocated to this pool
       */
      union block_hdr *blok = a->last;
      char *first_avail = blok->h.first_avail;
      char *new_first_avail = first_avail + size;

      if (new_first_avail <= blok->h.endp) {
	debug_verify_filled(first_avail, blok->h.endp,
			    "Ouch!  Someone trounced past the end of their allocation!\n");
	blok->h.first_avail = new_first_avail;
	result = first_avail;
      } else {
	/* Nope --- get a new one that's guaranteed to be big enough */
	blok = new_block(size);
	a->last->h.next = blok;
	a->last = blok;
	result = blok->h.first_avail;
	blok->h.first_avail += size;
      }
    }
    (void)release_mutex(alloc_mutex);
    unblock_alarms();

    return (void *)result;
#endif
}

  --- AFTER ---

API_EXPORT(void *) palloc(struct pool *a, int reqsize)
{
#ifdef ALLOC_USE_MALLOC
   ...
#else
    int nclicks = 1 + ((reqsize - 1) / CLICK_SZ);
    int size = nclicks * CLICK_SZ;  /* Align size */
    char *result;

    if (reqsize <= 0)
      return NULL;
    block_alarms();
    (void)acquire_mutex(alloc_mutex);
    {
      union block_hdr *blok = a->last;
      char *first_avail = blok->h.first_avail;
      char *new_first_avail = first_avail + size;

      if (new_first_avail <= blok->h.endp) {
	debug_verify_filled(first_avail, blok->h.endp,
			    "Ouch!  Someone trounced past the end of their allocation!\n");
	blok->h.first_avail = new_first_avail;
	result = first_avail;
      } else {
	blok = new_block(size);
	a->last->h.next = blok;
	a->last = blok;
	result = blok->h.first_avail;
	blok->h.first_avail += size;
      }
    }
    (void)release_mutex(alloc_mutex);
    unblock_alarms();

    return (void *)result;
#endif
}

Re: mutex in palloc

Posted by Paul Sutton <pa...@eu.c2.net>.
On Mon, 8 Dec 1997, Ben Laurie wrote:
> Ben Hyde wrote:
> > Is the critical region in palloc so narrow
> > because allocation in a given pool is never
> > done by more than a single thread?
> 
> I haven't checked the code, but that seems like a rash assumption, if
> true.

No, it is true and valid. Each thread has its own transaction pool (called
pchild in the current source, ptrans after my recent win32-fixup patch),
so this pool is local to each thread (or fiber, in later versions).  This
pool is cleared after each connection (like pchild in the Unix
child_main). ptrans is local to child_sub_main.

The only issue is modules which make their own threads. They are then
responsible for synchronisation or using sub-pools, as Dean explained. 

I don't think it is a good idea to expand the mutual exclusion area in
alloc.c.

//pcs



Re: mutex in palloc

Posted by Ben Laurie <be...@algroup.co.uk>.
Ben Hyde wrote:
> 
> Is the critical region in palloc so narrow
> because allocation in a given pool is never
> done by more than a single thread?

I haven't checked the code, but that seems like a rash assumption, if
true.

Cheers,

Ben.

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache