Posted to dev@httpd.apache.org by Chris Darroch <ch...@pearsoncmg.com> on 2009/05/14 06:37:25 UTC

slotmem API notes

Hi --

   Well, I should be working on mod_fcgid but I wanted to first
follow up on a long-ago promise to review the slotmem API by developing
a "test harness" framework and a slotmem provider that worked with
ZooKeeper as an experiment.

   To that end, mod_shmap now offers a simple "RESTful" interface to the
slotmem API, and mod_slotmem_zk and mod_zk together provide data storage
using ZooKeeper.  See http://people.apache.org/~chrisd/projects/shared_map/
and http://people.apache.org/~chrisd/projects/zookeeper/ for the
gory details.  (Note that the current version of mod_sharedmem in
trunk won't fully work with mod_shmap because of the work-in-progress
on the inuse flag array.  A slightly older version such as r773977
should work OK.)


   I have a few thoughts and concerns about the slotmem API as it
stands.  I'll try to divvy them up into sections below.  First, some
ideas about how to make the existing interface cleaner.

   Remove mod_slotmem (server/slotmem.c) and the associated ap_slotmem_*()
"wrapper" functions.  They do nothing that I can see except add an extra
layer of complexity.  I had no problems using the standard provider
interface to the slotmem providers.  Why would anyone not just do this?

   The wrappers also add confusion because the meanings of common
terms are changed.  For example, providers become "methods" (i.e.,
"storage methods"); the familiar ap_list_provider_names() becomes
ap_slotmem_methods().  But "methods" are what I expect the provider
to provide.  I suggest removing or renaming away any references
to "storage methods" in favour of conventional providers, instances, etc.

   So, define AP_SLOTMEM_PROVIDER_GROUP as "slotmem" and
AP_SLOTMEM_PROVIDER_VERSION as "0".  Remove AP_SLOTMEM_STORAGE.

   Rename mod_sharedmem to mod_slotmem_shm and mod_plainmem to
mod_slotmem_plain.

   Rename ap_slotmem_storage_method to ap_slotmem_provider_t.

   Rename ap_slotmem_t to ap_slotmem_instance_t.

   Remove the "slotmem_" prefix from method names (i.e., the names
of the functions provided by a provider).  If I've got a slotmem
provider, it's just extra typing to have to call
slotmem->slotmem_create() instead of slotmem->create().
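
   Pulling those renames together, the provider structure would look
roughly like the sketch below (member names and signatures are my
guesses here, not what's in the current header):

    /* sketch only: the proposed names, not the current slotmem.h */
    typedef struct ap_slotmem_instance_t ap_slotmem_instance_t;

    typedef struct {
        const char *name;      /* provider name, e.g. "shm" or "plain" */
        apr_status_t (*create)(ap_slotmem_instance_t **new, const char *name,
                               apr_size_t item_size, unsigned int item_num,
                               apr_pool_t *pool);
        apr_status_t (*get)(ap_slotmem_instance_t *s, unsigned int id,
                            unsigned char *dest, apr_size_t dest_len);
        apr_status_t (*put)(ap_slotmem_instance_t *s, unsigned int id,
                            unsigned char *src, apr_size_t src_len);
        /* ... the remaining methods, minus the slotmem_ prefix ... */
    } ap_slotmem_provider_t;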


   Next, there are several problems I see with the apslotmem_type
enum at the moment.  For one thing, there's only one enumerated value,
SLOTMEM_PERSIST, so there's no way to specify "don't persist".
I hacked around that by passing 1 in mod_shmap.

   Also, if we retain this type, I'd suggest renaming it to a
slightly more conventional looking ap_slotmem_type_t.

   But my personal preference would be to remove it and instead
add an unsigned int flags field to ap_slotmem_provider_t, and
define some flag values, such as AP_SLOTMEM_FLAG_NOTMPSAFE = 0x0001
(read "not MP-safe") and AP_SLOTMEM_FLAG_PERSIST = 0x0002.
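
   In header terms, that's just something like this (the values are
only examples):

    /* sketch: replace the apslotmem_type enum with bit flags */
    #define AP_SLOTMEM_FLAG_NOTMPSAFE  0x0001  /* caller must serialize access  */
    #define AP_SLOTMEM_FLAG_PERSIST    0x0002  /* data persists across restarts */

    /* ... plus an "unsigned int flags;" member in ap_slotmem_provider_t */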

   While enumerated types are clean if you've got a long list of
mutually exclusive values, a flags field follows the existing socache
framework, and allows for a number of mutually non-exclusive flags to
be set.  This is actually closer to what I suspect future development
might bring.


   The mention of the socache API brings me to my next set of
points, which proceed from the experience of programming with both
APIs and from a strong feeling that the two have a great deal in
common.  That's where I'm starting from -- specifically a sense
that the two APIs should mirror each other closely.  (Obviously
not everyone might agree with that.)

   To that end, having an AP_SLOTMEM_FLAG_NOTMPSAFE flag would
make the two APIs align on the issue of determining when a caller
needs to take special care in a multi-processing context.

   It would also allow us to remove the global mutex from
mod_sharedmem.  That would simplify maintenance a good deal.

   It would also clear up some things I find odd at the moment;
for instance, why is there locking in the do() method but not
the put() method?  (I'm also not 100% sure the locking in the
create() method will be very valuable.  For example, when
APR_USE_SHMGET is true, apr_shm_remove() will call shmctl()
to mark the segment for destruction once all processes have
detached from it.  So locking around apr_shm_remove() and
apr_shm_create() doesn't help the case where another process,
which has already attached to the old segment, just keeps on
using that old segment after the lock is released.)

   At any rate, moving responsibility for locking up to the
caller level, as the socache API does, I think makes a lot of
sense.  It means that a caller running in a single-process,
single-threaded context can simply choose not to add the overhead
of a global lock.  Other callers can make their own decisions
about what kind of locking they need.
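
   A caller that does need serialization would then simply wrap the
calls itself, along these lines (a sketch; the mutex and the instance
handle are the caller's own):

    /* sketch: caller-side locking, only where the caller decides it matters */
    apr_global_mutex_lock(my_mutex);
    rv = slotmem->put(instance, id, (unsigned char *)&item, sizeof(item));
    apr_global_mutex_unlock(my_mutex);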


   Getting to the end now ... next up, a few comments on the
provider methods and their arguments.

   It would be great to pass server_rec* and apr_pool_t* arguments
to all the methods (except maybe num_slots() and slot_size()).
Some providers may need to report error messages, and to do that
they really need a server_rec*.  They may also need to allocate
data from a temporary pool.  The most obvious use case is that
r->pool would be passed to a method like get() or put().

   I'd also love to see create() split into two methods, create()
and init().  The create() method is then responsible for allocating
the ap_slotmem_instance_t structure and filling in the arguments;
it's also assumed that it will do any argument checks that are
required.  To this end, the typical use case is to call it in
the *first* invocation of the post_config hook, and pass the
pconf memory pool.  Note that this implies that create() takes
two apr_pool_t arguments, one for allocating "permanent" structures
which will last the lifetime of the instance, and one for temporary
allocations.  One would normally pass pconf and ptemp from the
post_config hook.

   The init() method then does all the work of actually setting
up the provider's data storage.  The typical use case would
be to call this from the *second* or subsequent invocations of
the post_config hook.  This is how the mod_socache_shmcb provider
functions, and, like mod_sharedmem, it allocates shared memory segments.
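
   In a module, the two-pass usage would look roughly like the sketch
below (assuming the split API; "slotmem", "instance", my_item_t and
MY_NUM_SLOTS are made-up module statics):

    /* sketch: create() on the first post_config pass, init() on later ones */
    static int my_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                              apr_pool_t *ptemp, server_rec *s)
    {
        void *seen = NULL;
        apr_pool_userdata_get(&seen, "my_mod_init", s->process->pool);
        if (!seen) {    /* first pass: check arguments, allocate the handle */
            apr_pool_userdata_set((void *)1, "my_mod_init",
                                  apr_pool_cleanup_null, s->process->pool);
            return (slotmem->create(&instance, "my_slots", sizeof(my_item_t),
                                    MY_NUM_SLOTS, pconf, ptemp) == APR_SUCCESS)
                       ? OK : HTTP_INTERNAL_SERVER_ERROR;
        }
        /* second and later passes: actually set up the backing store */
        return (slotmem->init(instance, pconf, ptemp) == APR_SUCCESS)
                   ? OK : HTTP_INTERNAL_SERVER_ERROR;
    }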

   I'd suggest removing both the mem() and attach() methods.

   The mem() method assumes that all providers will return a
slab of memory, and that writes to this memory are equivalent to
a put().  That's not going to be possible unless all providers
deal with memory -- in which case, the only two possible providers
that I can see are the "plain" and shared memory providers we have,
and then the purpose of a whole API and provider framework is lost
on me.  The mod_slotmem_zk provider I wrote, for example, just has
to return APR_ENOTIMPL for mem() because there's really nothing else
it can do.  Similarly any provider which uses a DB file or similar
backing store isn't going to be able to make mem() available.
Better, I think, to drop the method since it's so deeply tied to
an in-memory provider model.
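
   (For what it's worth, mod_slotmem_zk's mem() boils down to
essentially this -- sketched from memory, not pasted from the module:)

    /* sketch: all a non-in-memory provider can offer for mem() */
    static apr_status_t slotmem_zk_mem(ap_slotmem_instance_t *slot,
                                       unsigned int id, void **mem)
    {
        return APR_ENOTIMPL;   /* there is no memory slab to hand back */
    }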

   The attach() method is similarly tied to an in-memory provider
model, and in fact, even more tied to the notion of shared memory,
I think.  From the code in mod_plainmem and mod_sharedmem, moreover,
it looks to me like just calling create() will have virtually the
same effect as attach() -- if an existing segment is available with
the same name, it'll be found and used.  If we really need the
ability to call create() but have it stop short of actually creating
a new segment if no pre-existing one is found, let's add an argument
to create() that specifies this behaviour.  But my guess would be
that there will be few uses for attach() that aren't satisfied by
create() as it stands now.

   The ap_slotmem_callback_fn_t function type should, I think, be
modified not to take a void *mem argument in its signature, but
instead a slot id (i.e., an unsigned int).  Either that or we should
remove the do() method.  This is again because the existing design
assumes that all providers can provide a slab of memory which
the function can operate on.  (I suppose there is a third option
here, in which non-in-memory providers implement do() by allocating
a temporary slab of memory and then, for each in-use slot, performing
a get(), the callback function, and a put().  This obviously would
require an apr_pool_t to be passed to do(), but I'd recommend that
anyway.)
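
   (The third option would amount to roughly this inside such a
provider -- a sketch, with made-up helper names:)

    /* sketch: do() for a provider with no memory slab; helpers are made up */
    static apr_status_t zk_doall(ap_slotmem_instance_t *slot,
                                 ap_slotmem_callback_fn_t *func,
                                 void *data, apr_pool_t *pool)
    {
        unsigned int id;
        void *buf = apr_palloc(pool, zk_slot_size(slot));   /* temporary slab */
        for (id = 0; id < zk_num_slots(slot); id++) {
            if (!zk_slot_in_use(slot, id))
                continue;
            zk_get_slot(slot, id, buf);    /* fetch a copy of the slot   */
            func(buf, data, pool);         /* run the caller's callback  */
            zk_put_slot(slot, id, buf);    /* write any changes back     */
        }
        return APR_SUCCESS;
    }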

   I'd also suggest putting num_items and item_size elements
into the ap_slotmem_provider_t structure, and then we can remove
the num_items() and item_size() "getter" methods.  That seems
equally clean and slightly simpler to me.

   I'd add a destroy() method that tears down an instance, and
expect callers to use it appropriately.  That means the caller
likely needs to register a cleanup function whenever a successful
create() is performed on a slotmem provider.  This cleanup function
then should call destroy() on the returned slotmem instance.  The
advantage here is that the providers don't need to register anything,
which simplifies their code, and it gives the caller full control
over when the slotmem instance should be destroyed.
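
   On the caller side that's just something like this (a sketch;
destroy() being the proposed method):

    /* sketch: the caller owns teardown via its own pool cleanup */
    static apr_status_t my_slotmem_cleanup(void *data)
    {
        return slotmem->destroy((ap_slotmem_instance_t *)data);
    }

    /* ... then, right after a successful create() in post_config: */
    apr_pool_cleanup_register(pconf, instance, my_slotmem_cleanup,
                              apr_pool_cleanup_null);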

   Last but not least on the method front, I'd suggest adding
set() and reset() methods.  The set() method would work roughly like
memset() -- that is, it would accept a byte value and write that
into every byte in the slot.

   The reset() method would do the same as set() with a zero
byte value (i.e., blank the memory) but would also "release"
the slot.  The proposed grab() and return() methods would be
unnecessary.  A provider which wanted to track which slots were
"in use" would simply set the in-use flag when put(), set(), or
get() was called, and reset the flag when reset() was called.
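
   (For an in-memory provider the pair would reduce to roughly this --
again a sketch, with made-up helpers:)

    /* sketch: semantics for an in-memory provider; the helpers are made up */
    static apr_status_t shm_set(ap_slotmem_instance_t *s, unsigned int id,
                                unsigned char value)
    {
        memset(slot_address(s, id), value, slot_size(s));
        mark_in_use(s, id);               /* any write marks the slot used */
        return APR_SUCCESS;
    }

    static apr_status_t shm_reset(ap_slotmem_instance_t *s, unsigned int id)
    {
        memset(slot_address(s, id), 0, slot_size(s));
        mark_unused(s, id);               /* and "release" the slot */
        return APR_SUCCESS;
    }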


   Finally, just in terms of code simplicity, I noticed that
the mod_plainmem and mod_sharedmem modules both have loops like this:

        if (next) {
            for (;;) {
                ...
                if (!next->next)
                    break;
                next = next->next;
            }
        }

which I think might be reduced to:

        while (next) {
            ...
            next = next->next;
        }

   I'd also be inclined to remove the sharedmem_getstorage(),
sharedmem_initgpool(), and sharedmem_initialize_cleanup() functions,
and just in-line their contents in the places they are used.
All are used just one time each that I can see.

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
jean-frederic clere wrote:

>>   It would be great to pass server_rec* and apr_pool_t* arguments
>> to all the methods (except maybe num_slots() and slot_size()).
>> Some providers may need to report error messages, and to do that
>> they really need a server_rec*.  They may also need to allocate
>> data from a temporary pool.  The most obvious use case is that
>> r->pool would be passed to a method like get() or put().
> 
> I am -1 for passing a server_rec* because I think we should be able to 
> use the provider (via a wrapper) in something other than httpd.

   I think I saw two -1s on this (at least the server_rec* part).
I can live with the provider returning an apr_status_t; that's fine,
but again we have a consistency issue with the socache API, which
does its own logging.  I'd like to see that inconsistency resolved
one way or the other.

   I'm still definitely a +1 on passing a "temp" pool to most of
the methods, and both a "conf" and a "temp" pool to create().
Otherwise it's really painful for providers which just want to
do a little temporary allocation, for whatever reason, to have
to create their own sub-pools way up at config time and then
never clear them out.  Much nicer if, in the typical case,
r->pool can be passed to get(), put(), etc.

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 7:41 PM, Chris Darroch wrote:

> Jim Jagielski wrote:
>
>> On May 14, 2009, at 3:36 AM, jean-frederic clere wrote:
>
>> Yeah... when a do is done, we want to ensure that
>> none of the slots change since we are touching all slots.
>> In general, we assume that with get and put, only one thread is
>> touching any particular slot at one time.
>
>  What about multiple processes performing a put() on the same
> slot?  It sounds like the current use case tends to be strictly
> intra-process, but the API doesn't require that, I believe.
>

Well, my assumption is that only 1 thread would be working
on a slot at a given time (thinking of scoreboard as an
example), but agree that that might be short-sighted :)



Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 7:41 PM, Chris Darroch wrote:

> Jim Jagielski wrote:
>
>> On May 14, 2009, at 3:36 AM, jean-frederic clere wrote:
>
>> Yeah... when a do is done, we want to ensure that
>> none of the slots change since we are touching all slots.
>> In general, we assume that with get and put, only one thread is
>> touching any particular slot at one time.
>
>  What about multiple processes performing a put() on the same
> slot?  It sounds like the current use case tends to be strictly
> intra-process, but the API doesn't require that, I believe.
>
>  This is why I'd either pull all the locking out and let
> the user decide (none, intra-process [thread mutexes] only,
> inter-process [global mutexes], or some custom thing).  The
> provider just doesn't know enough to be certain and we end up
> doing expensive global locking, or having to provide a Swiss
> army knife set of locking options and them implementing them
> all in each provider (slotmem and socache).  But that's just
> my two cents.
>

No, I agree it's a compelling argument... How about this as a
compromise: The create/init accepts 2 additional arguments,
a locking function pointer and an unlock function pointer.
So the end user needs to determine what sort of locking they
want when using the slotmem. The slotmem then mutexes all
access to the slots with this mutex. Additionally, each
slotmem function also accepts a flag that says "use provided
locking or force no-locking", so that even if a mutex is provided,
the end-user can decide "No, I know this is safe and so I want
to do this without any locking" for an optimization.
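
Roughly something like this, just as a sketch (names made up on the spot):

    /* sketch: create() takes the caller's lock/unlock functions */
    typedef apr_status_t (ap_slotmem_lock_fn)(void *lock_ctx);

    apr_status_t (*create)(ap_slotmem_instance_t **new, const char *name,
                           apr_size_t item_size, unsigned int item_num,
                           ap_slotmem_lock_fn *lock, ap_slotmem_lock_fn *unlock,
                           void *lock_ctx, apr_pool_t *pool);

    /* and every op takes a flag, e.g. AP_SLOTMEM_NOLOCK to skip the mutex */
    apr_status_t (*put)(ap_slotmem_instance_t *s, unsigned int id,
                        unsigned char *src, apr_size_t src_len, int flags);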

I think what I'm trying to avoid is code that always has


	LOCK_SLOT();
	slotmem_put(.....);
	UNLOCK_SLOT();

All over the place... 

Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Jim Jagielski wrote:

> On May 14, 2009, at 3:36 AM, jean-frederic clere wrote:

> Yeah... when a do is done, we want to ensure that
> none of the slots change since we are touching all slots.
> In general, we assume that with get and put, only one thread is
> touching any particular slot at one time.

   What about multiple processes performing a put() on the same
slot?  It sounds like the current use case tends to be strictly
intra-process, but the API doesn't require that, I believe.

   This is why I'd either pull all the locking out and let
the user decide (none, intra-process [thread mutexes] only,
inter-process [global mutexes], or some custom thing).  The
provider just doesn't know enough to be certain and we end up
doing expensive global locking, or having to provide a Swiss
army knife set of locking options and them implementing them
all in each provider (slotmem and socache).  But that's just
my two cents.


>>>  The init() method then does all the work of actually setting
>>> up the provider's data storage.  The typical use case would
>>> be to call this from the *second* or subsequent invocations of
>>> the post_config hook.  This is how the mod_socache_shmcb provider
>>> functions, and, like mod_sharedmem, it allocates shared memory segments.
> 
> I'm still not clear on the advantages of splitting...

   The primary reason is that you can call create() in a check_config
or post_config hook and report back to the administrator if there
are problems with the arguments passed to the provider.  However,
at this early stage there's no need to be allocating big slabs of
memory -- you just want a sanity check, and a handle allocated.

   Then init() does the actual work in the second post_config
phase, when the server is really getting started (and logs are
up and running, so errors don't go to the console anymore, so there's
no nice way to report configuration errors).

>> Well, if we want to use it for the scoreboard, mem() avoids a memcpy().
> 
> I also thought that removing _mem made sense...

   OK, I can see the utility of mem(), but callers should be
warned they might get an APR_ENOTIMPL, I think.


>> we have:
>> grab/return
>> put/get
>> you want:
>> put/set/get/reset.
>> I don't see a big difference, except I prefer the first one.
> 
> Yeah... I don't see the advantage for set/reset when we have get/put
> since the end-user can do a set/reset via what they 'put'
> 
> Also, reset is 'release' now... we just need to "null-out" the
> data (if really required...)

   I could easily live without set(); it was just an idea to save
a get()/put() cycle for certain use cases.  Probably better to
drop the idea.

   It does seem to me that instead of adding another pair of
methods (grab() and return()) having just a single reset() or
release() method would be tidier.  For providers which need to track
per-slot usage, it's a way to say "this index now unused".  Any
other action on the slot (get(), mem(), etc.) is effectively a
"this index now used" message.  The three methods then also parallel
the store()/retrieve()/remove() calls of socache.

   My two cents, anyway.  Thanks for listening!

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 3:36 AM, jean-frederic clere wrote:
>
>>  It would also clear up some things I find odd at the moment;
>> for instance, why is there locking in the do() method but not
>> the put() method?
>
> The do() operates on all slots, put()/get() only on one... Maybe a flag in
> the create()?  Something like ALLOW_LAZY_LOCKING?

Yeah... when a do is done, we want to ensure that
none of the slots change since we are touching all slots.
In general, we assume that with get and put, only one thread is
touching any particular slot at one time.

>>  It would be great to pass server_rec* and apr_pool_t* arguments
>> to all the methods (except maybe num_slots() and slot_size()).
>> Some providers may need to report error messages, and to do that
>> they really need a server_rec*.  They may also need to allocate
>> data from a temporary pool.  The most obvious use case is that
>> r->pool would be passed to a method like get() or put().
>
> I am -1 for passing a server_rec* because I think we should be able  
> to use the provider (via a wrapper) in something other than httpd.

+1 on the -1. The provider should report an error and the caller
should do whatever is required to report that error. Having the
provider log is weird. For a temp pool, we do have a pool
for the provider, we could create a subpool.

>
>
>>  I'd also love to see create() split into two methods, create()
>> and init().  The create() method is then responsible for allocating
>> the ap_slotmem_instance_t structure and filling in the arguments;
>> it's also assumed that it will do any argument checks that are
>> required.  To this end, the typical use case is to call it in
>> the *first* invocation of the post_config hook, and pass the
>> pconf memory pool.  Note that this implies that create() takes
>> two apr_pool_t arguments, one for allocating "permanent" structures
>> which will last the lifetime of the instance, and one for temporary
>> allocations.  One would normally pass pconf and ptemp from the
>> post_config hook.
>
> +1

>
>
>>  The init() method then does all the work of actually setting
>> up the provider's data storage.  The typical use case would
>> be to call this from the *second* or subsequent invocations of
>> the post_config hook.  This is how the mod_socache_shmcb provider
>> functions, and, like mod_sharedmem, it allocates shared memory segments.

I'm still not clear on the advantages of splitting...

>>
>>  I'd suggest removing both the mem() and attach() methods.
>>  The mem() method assumes that all providers will return a
>> slab of memory, and that writes to this memory are equivalent to
>> a put().  That's not going to be possible unless all providers
>> deal with memory -- in which case, the only two possible providers
>> that I can see are the "plain" and shared memory providers we have,
>> and then the purpose of a whole API and provider framework is lost
>> on me.  The mod_slotmem_zk provider I wrote, for example, just has
>> to return APR_ENOTIMPL for mem() because there's really nothing else
>> it can do.  Similarly any provider which uses a DB file or similar
>> backing store isn't going to be able to make mem() available.
>> Better, I think, to drop the method since it's so deeply tied to
>> an in-memory provider model.
>
> Well, if we want to use it for the scoreboard, mem() avoids a memcpy().

I also thought that removing _mem made sense...

>>  Last but not least on the method front, I'd suggest adding
>> set() and reset() methods.  The set() method would work roughly like
>> memset() -- that is, it would accept a byte value and write that
>> into every byte in the slot.
>>  The reset() method would do the same as set() with a zero
>> byte value (i.e., blank the memory) but would also "release"
>> the slot.  The proposed grab() and return() methods would be
>> unnecessary.  A provider which wanted to track which slots were
>> "in use" would simply set the in-use flag when put(), set(), or
>> get() was called, and reset the flag when reset() was called.
>
> we have:
> grab/return
> put/get
> you want:
> put/set/get/reset.
> I don't see a big difference, except I prefer the first one.

Yeah... I don't see the advantage for set/reset when we have get/put
since the end-user can do a set/reset via what they 'put'

Also, reset is 'release' now... we just need to "null-out" the
data (if really required...)

Re: slotmem API notes

Posted by jean-frederic clere <jf...@gmail.com>.
Chris Darroch wrote:
> Hi --
> 
>   Well, I should be working on mod_fcgid but I wanted to first
> follow up on a long-ago promise to review the slotmem API by developing
> a "test harness" framework and a slotmem provider that worked with
> ZooKeeper as an experiment.
> 
>   To that end, mod_shmap now offers a simple "RESTful" interface to the
> slotmem API, and mod_slotmem_zk and mod_zk together provide data storage
> using ZooKeeper.  See http://people.apache.org/~chrisd/projects/shared_map/
> and http://people.apache.org/~chrisd/projects/zookeeper/ for the
> gory details.  (Note that the current version of mod_sharedmem in
> trunk won't fully work with mod_shmap because of the work-in-progress
> on the inuse flag array.  A slightly older version such as r773977
> should work OK.)
> 
> 
>   I have a few thoughts and concerns about the slotmem API as it
> stands.  I'll try to divvy them up into sections below.  First, some
> ideas about how to make the existing interface cleaner.
> 
>   Remove mod_slotmem (server/slotmem.c) and the associated ap_slotmem_*()
> "wrapper" functions.  They do nothing that I can see except add an extra
> layer of complexity.  I had no problems using the standard provider
> interface to the slotmem providers.  Why would anyone not just do this?

In mod_cluster I am not using it (yet).

> 
>   The wrappers also add confusion because the meanings of common
> terms are changed.  For example, providers become "methods" (i.e.,
> "storage methods"); the familiar ap_list_provider_names() becomes
> ap_slotmem_methods().  But "methods" are what I expect the provider
> to provide.  I suggest removing or renaming away any references
> to "storage methods" in favour of conventional providers, instances, etc.
> 
>   So, define AP_SLOTMEM_PROVIDER_GROUP as "slotmem" and
> AP_SLOTMEM_PROVIDER_VERSION as "0".  Remove AP_SLOTMEM_STORAGE.
> 
>   Rename mod_sharedmem to mod_slotmem_shm and mod_plainmem to
> mod_slotmem_plain.
> 
>   Rename ap_slotmem_storage_method to ap_slotmem_provider_t.
> 
>   Rename ap_slotmem_t to ap_slotmem_instance_t.
> 
>   Remove the "slotmem_" prefix from method names (i.e., the names
> of the functions provided by a provider).  If I've got a slotmem
> provider, it's just extra typing to have to call
> slotmem->slotmem_create() instead of slotmem->create().

My bad... I am bad at giving names to things, but with Eclipse renaming is
just a click.

> 
> 
>   Next, there are several problems I see with the apslotmem_type
> enum at the moment.  For one thing, there's only one enumerated value,
> SLOTMEM_PERSIST, so there's no way to specify "don't persist".
> I hacked around that by passing 1 in mod_shmap.

Looks like a bug :-/

> 
>   Also, if we retain this type, I'd suggest renaming it to a
> slightly more conventional looking ap_slotmem_type_t.
> 
>   But my personal preference would be to remove it and instead
> add an unsigned int flags field to ap_slotmem_provider_t, and
> define some flag values, such as AP_SLOTMEM_FLAG_NOTMPSAFE = 0x0001
> (read "not MP-safe") and AP_SLOTMEM_FLAG_PERSIST = 0x0002.

Well, yes, I want to use it as a flag (for example, to make the grab/return
etc. logic switchable).

> 
>   While enumerated types are clean if you've got a long list of
> mutually exclusive values, a flags field follows the existing socache
> framework, and allows for a number of mutually non-exclusive flags to
> be set.  This is actually closer to what I suspect future development
> might bring.
> 
> 
>   The mention of the socache API brings me to my next set of
> points, which proceed from the experience of programming with both
> APIs and from a strong feeling that the two have a great deal in
> common.  That's where I'm starting from -- specifically a sense
> that the two APIs should mirror each other closely.  (Obviously
> not everyone might agree with that.)
> 
>   To that end, having an AP_SLOTMEM_FLAG_NOTMPSAFE flag would
> make the two APIs align on the issue of determining when a caller
> needs to take special care in a multi-processing context.
> 
>   It would also allow us to remove the global mutex from
> mod_sharedmem.  That would simplify maintenance a good deal.

I am not sure... because the type of lock depends on the type of memory
you are using: a simple mutex in plainmem, a file-based mutex in sharedmem, and
something else if the memory is in a cluster.

> 
>   It would also clear up some things I find odd at the moment;
> for instance, why is there locking in the do() method but not
> the put() method?

The do() operates on all slots, put()/get() only on one... Maybe a flag in the
create()?  Something like ALLOW_LAZY_LOCKING?

>  (I'm also not 100% sure the locking in the
> create() method will be very valuable.  For example, when
> APR_USE_SHMGET is true, apr_shm_remove() will call shmctl()
> to mark the segment for destruction once all processes have
> detached from it.  So locking around apr_shm_remove() and
> apr_shm_create() doesn't help the case where another process,
> which has already attached to the old segment, just keeps on
> using that old segment after the lock is released.)

The idea is to remove segments "forgotten" by dead processes.

> 
>   At any rate, moving responsibility for locking up to the
> caller level, as the socache API does, I think makes a lot of
> sense.  It means that a caller running in a single-process,
> single-threaded context can simply choose not to add the overhead
> of a global lock.  Other callers can make their own decisions
> about what kind of locking they need.
> 
> 
>   Getting to the end now ... next up, a few comments on the
> provider methods and their arguments.
> 
>   It would be great to pass server_rec* and apr_pool_t* arguments
> to all the methods (except maybe num_slots() and slot_size()).
> Some providers may need to report error messages, and to do that
> they really need a server_rec*.  They may also need to allocate
> data from a temporary pool.  The most obvious use case is that
> r->pool would be passed to a method like get() or put().

I am -1 for passing a server_rec* because I think we should be able to 
use the provider (via a wrapper) in something other than httpd.

> 
>   I'd also love to see create() split into two methods, create()
> and init().  The create() method is then responsible for allocating
> the ap_slotmem_instance_t structure and filling in the arguments;
> it's also assumed that it will do any argument checks that are
> required.  To this end, the typical use case is to call it in
> the *first* invocation of the post_config hook, and pass the
> pconf memory pool.  Note that this implies that create() takes
> two apr_pool_t arguments, one for allocating "permanent" structures
> which will last the lifetime of the instance, and one for temporary
> allocations.  One would normally pass pconf and ptemp from the
> post_config hook.

+1

> 
>   The init() method then does all the work of actually setting
> up the provider's data storage.  The typical use case would
> be to call this from the *second* or subsequent invocations of
> the post_config hook.  This is how the mod_socache_shmcb provider
> functions, and, like mod_sharedmem, it allocates shared memory segments.
> 
>   I'd suggest removing both the mem() and attach() methods.
> 
>   The mem() method assumes that all providers will return a
> slab of memory, and that writes to this memory are equivalent to
> a put().  That's not going to be possible unless all providers
> deal with memory -- in which case, the only two possible providers
> that I can see are the "plain" and shared memory providers we have,
> and then the purpose of a whole API and provider framework is lost
> on me.  The mod_slotmem_zk provider I wrote, for example, just has
> to return APR_ENOTIMPL for mem() because there's really nothing else
> it can do.  Similarly any provider which uses a DB file or similar
> backing store isn't going to be able to make mem() available.
> Better, I think, to drop the method since it's so deeply tied to
> an in-memory provider model.

Well, if we want to use it for the scoreboard, mem() avoids a memcpy().

> 
>   The attach() method is similarly tied to an in-memory provider
> model, and in fact, even more tied to the notion of shared memory,
> I think.  From the code in mod_plainmem and mod_sharedmem, moreover,
> it looks to me like just calling create() will have virtually the
> same effect as attach() -- if an existing segment is available with
> the same name, it'll be found and used.  If we really need the
> ability to call create() but have it stop short of actually creating
> a new segment if no pre-existing one is found, let's add an argument
> to create() that specifies this behaviour.  But my guess would be
> that there will be few uses for attach() that aren't satisfied by
> create() as it stands now.

Yep, a flag to attach() to get the create() behaviour?

> 
>   The ap_slotmem_callback_fn_t function type should, I think, be
> modified not to take a void *mem argument in its signature, but
> instead a slot id (i.e., an unsigned int).  Either that or we should
> remove the do() method.  This is again because the existing design
> assumes that all providers can provide a slab of memory which
> the function can operate on.  (I suppose there is a third option
> here, in which non-in-memory providers implement do() by allocating
> a temporary slab of memory and then, for each in-use slot, performing
> a get(), the callback function, and a put().  This obviously would
> require an apr_pool_t to be passed to do(), but I'd recommend that
> anyway.)

Ok for the apr_pool_t.

> 
>   I'd also suggest putting num_items and item_size elements
> into the ap_slotmem_provider_t structure, and then we can remove
> the num_items() and item_size() "getter" methods.  That seems
> equally clean and slightly simpler to me.

I plan to add ap_slotmem_get_used(), which belongs to the same kind of
methods.

> 
>   I'd add a destroy() method that tears down an instance, and
> expect callers to use it appropriately.  That means the caller
> likely needs to register a cleanup function whenever a successful
> create() is performed on a slotmem provider.  This cleanup function
> then should call destroy() on the returned slotmem instance.  The
> advantage here is that the providers don't need to register anything,
> which simplifies their code, and it gives the caller full control
> over when the slotmem instance should be destroyed.

Well destroy() is complex to do: you have to wait until everyone stops 
using the stuff, no?

> 
>   Last but not least on the method front, I'd suggest adding
> set() and reset() methods.  The set() method would work roughly like
> memset() -- that is, it would accept a byte value and write that
> into every byte in the slot.
> 
>   The reset() method would do the same as set() with a zero
> byte value (i.e., blank the memory) but would also "release"
> the slot.  The proposed grab() and return() methods would be
> unnecessary.  A provider which wanted to track which slots were
> "in use" would simply set the in-use flag when put(), set(), or
> get() was called, and reset the flag when reset() was called.

we have:
grab/return
put/get
you want:
put/set/get/reset.
I don't see a big difference, except I prefer the first one.

> 
> 
>   Finally, just in terms of code simplicity, I noticed that
> the mod_plainmem and mod_sharedmem modules both have loops like this:
> 
>        if (next) {
>            for (;;) {
>                ...
>                if (!next->next)
>                    break;
>                next = next->next;
>            }
>        }
> 
> which I think might be reduced to:
> 
>        while (next) {
>            ...
>            next = next->next;
>        }

Oops my bad.

> 
>   I'd also be inclined to remove the sharedmem_getstorage(),
> sharedmem_initgpool(), and sharedmem_initialize_cleanup() functions,
> and just in-line their contents in the places they are used.
> All are used just one time each that I can see.

-0

Cheers

Jean-Frederic

Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 15, 2009, at 10:15 AM, Jim Jagielski wrote:
>
> I'm going to start some of this now...
>

This is pretty complete now... for some reason, I kept on getting
"foo is still in conflict" errors when renaming/moving files, so
this took more steps than it should have...

Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 15, 2009, at 9:02 AM, Jim Jagielski wrote:

>
> On May 14, 2009, at 7:14 PM, Chris Darroch wrote:
>
>> Jim Jagielski wrote:
>>
>>>> Remove mod_slotmem (server/slotmem.c) and the associated   
>>>> ap_slotmem_*() "wrapper" functions.
>>
>>> It's just an additional abstraction, agreed. I'm fine with  
>>> removing it
>>> but got the impression that people *wanted* that abstraction.
>>
>> jean-frederic clere wrote:
>>
>>> In mod_cluster I am not using it (yet).
>>
>> I'd suggest that we take it out for now, then, and do some
>> associated renaming/cleanups.  If at some future point it becomes
>> important to add another abstraction layer, it can of course be
>> added back in again.
>>
>> Unless someone shouts out with a -1 I think I'll try to do this
>> tomorrow or early next week.
>>
>
> No +1... When do you think you might work on this? I have some cycles
> today and would like to start the migration.
>

I'm going to start some of this now...

Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 15, 2009, at 2:05 PM, Chris Darroch wrote:

> Jim Jagielski wrote:
>
>> No +1... When do you think you might work on this? I have some cycles
>> today and would like to start the migration.
>
>  Thanks for tackling this ... I see much stuff going on!  I fear
> the time gap between idea and implementation seems to grow ever larger
> these days.  I was hoping to jump in later today or Monday, but
> since you're blasting ahead I figure I'll just stay clear unless
> there's something specific you'd like me to deal with.  Thanks again,
>

I'm done for now... next is basically to add a flag to the required
funcs to bypass locking and then then lock/unlock fptrs... unless you
have time, I'll tackle on monday.


Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Jim Jagielski wrote:

> No +1... When do you think you might work on this? I have some cycles
> today and would like to start the migration.

   Thanks for tackling this ... I see much stuff going on!  I fear
the time gap between idea and implementation seems to grow ever larger
these days.  I was hoping to jump in later today or Monday, but
since you're blasting ahead I figure I'll just stay clear unless
there's something specific you'd like me to deal with.  Thanks again,

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 7:14 PM, Chris Darroch wrote:

> Jim Jagielski wrote:
>
>>> Remove mod_slotmem (server/slotmem.c) and the associated   
>>> ap_slotmem_*() "wrapper" functions.
>
>> It's just an additional abstraction, agreed. I'm fine with removing  
>> it
>> but got the impression that people *wanted* that abstraction.
>
> jean-frederic clere wrote:
>
>> In mod_cluster I am not using it (yet).
>
>  I'd suggest that we take it out for now, then, and do some
> associated renaming/cleanups.  If at some future point it becomes
> important to add another abstraction layer, it can of course be
> added back in again.
>
>  Unless someone shouts out with a -1 I think I'll try to do this
> tomorrow or early next week.
>

No +1... When do you think you might work on this? I have some cycles
today and would like to start the migration.


Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Jim Jagielski wrote:

>>  Remove mod_slotmem (server/slotmem.c) and the associated  
>> ap_slotmem_*() "wrapper" functions.

> It's just an additional abstraction, agreed. I'm fine with removing it
> but got the impression that people *wanted* that abstraction.

jean-frederic clere wrote:

> In mod_cluster I am not using it (yet).

   I'd suggest that we take it out for now, then, and do some
associated renaming/cleanups.  If at some future point it becomes
important to add another abstraction layer, it can of course be
added back in again.

   Unless someone shouts out with a -1 I think I'll try to do this
tomorrow or early next week.

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 12:37 AM, Chris Darroch wrote:

>
>  Remove mod_slotmem (server/slotmem.c) and the associated  
> ap_slotmem_*()
> "wrapper" functions.  They do nothing that I can see except add an  
> extra
> layer of complexity.  I had no problems using the standard provider
> interface to the slotmem providers.  Why would anyone not just do  
> this?
>

It's just an additional abstraction, agreed. I'm fine with removing it
but got the impression that people *wanted* that abstraction.

Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Jim Jagielski wrote:

>>  But my personal preference would be to remove it and instead
>> add an unsigned int flags field to ap_slotmem_provider_t, and
>> define some flag values, such as AP_SLOTMEM_FLAG_NOTMPSAFE = 0x0001
>> (read "not MP-safe") and AP_SLOTMEM_FLAG_PERSIST = 0x0002.

> Again, +1 for this change.

jean-frederic clere wrote:

> Well, yes, I want to use it as a flag (for example, to make the grab/return
> etc. logic switchable).

   OK, that sounds sort of like two +1s to me, so I think I'll also
try to tackle this one in the next few days (unless someone beats
me to it; I'm fairly slow due to work-related tasks).

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by "Plüm, Rüdiger, VF-Group" <ru...@vodafone.com>.
 

> -----Original Message-----
> From: Jim Jagielski 
> Sent: Thursday, May 14, 2009 14:46
> To: dev@httpd.apache.org
> Subject: Re: slotmem API notes

> 
> In general, I think that a provider should know if it is thread-safe
> or not and simply "work" without the user needing to know or worry.
> I think if the provider knows that threads are available AND the MPM
> is threaded, it should by default mutex as required instead of having
> users all need to recreate that externally. If the user says "I don't
> care what MPM I have, don't worry about locking", then I am +1 on a
> flag of some sort telling the provider to bypass that.
> 
> Even so, then, I think that we should expose lock/unlock if we make
> this a user-land "you decide if you want to do it or not"... I don't
> think it makes sense having the user/caller have to worry about
> re-inventing the wheel as far as mutexes are concerned just to
> use a provider... A provider should provide the full set of functions
> the end-user should need in order to use it. (imo)

+1

Regards

Rüdiger

Re: slotmem API notes

Posted by jean-frederic clere <jf...@gmail.com>.
On 09/03/2009 04:30 PM, Jim Jagielski wrote:
>
> On May 15, 2009, at 3:33 PM, Chris Darroch wrote:
>
>> Joe Orton wrote:
>>
>>> w.r.t. locking, my take for socache was: you either duplicate the
>>> code in every provider, or you duplicate the code in every API
>>> consumer, so it wasn't obvious what was best. I expected the latter
>>> would be both simpler and more flexible, so went for that.
>>
>> It looks like Jim has been busy pulling out the mutex in slotmem_shm,
>> so at a minimum we're in parallel at the moment, and the providers'
>> code is as simple as possible (at the expense of more work for their
>> callers).
>>
>>> I've wondered before whether some central API could simplify this,
>>> such that with little extra module code, you could get a mutex which
>>> could be configured via e.g.:
>>> Mutex slotmem fcntl:/var/run/slotmem.lck
>>> and if not configured, sensible defaults are used. But I'm not sure
>>> whether it could be done to really improve the server and avoid being
>>> an abstraction-fest code-bloat effort.
>>
>>
>
> After thinking and using slotmem, I'm no longer convinced that pulling
> out the lock/unlock was such a good idea... or at least completely.
>
> Consider this use case: Apache starts off and uses the shared memory
> slotmem. It uses its own locking and all access to the data from Apache
> or any child process can inherit that lock and all is good.
>
> But consider an independent process that wants to attach to that
> shared memory segment... at this point, we need a global lock that
> both Apache and this ind process will honor. So somehow we need to
> store that lock *in* the shared memory segment, so that all processes
> that attach to it can use it.
>
> Ideas or comments?

I like the idea; I made prototypes, but in the end it was easier to
write an httpd handler to expose the contents of the slotmem.  Maybe we
can have a module that provides locks and use 2 providers (one for
the slotmem and one for the lock)?

Cheers

Jean-Frederic

>


Re: slotmem API notes

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
Jim Jagielski wrote:
> 
> On Sep 3, 2009, at 1:31 PM, Akins, Brian wrote:
> 
>> On 9/3/09 10:30 AM, "Jim Jagielski" <ji...@jaguNET.com> wrote:
>>
>>> But consider an independent process that wants to attach to that
>>> shared memory segment... at this point, we need a global lock that
>>> both Apache and this ind process will honor. So somehow we need to
>>> store that lock *in* the shared memory segment, so that all processes
>>> that attach to it can use it.
>>
>> Use a semaphore? The external process already has to know certain
>> things to
>> attach to the memory anyway.
> 
> Yeah, but this means that the external process needs to know that
> Apache is using a semaphore, or the name of the file that is being
> used for the mutex lock, etc... Something needs to be exposed, and
> we need some way of doing that...

Sounds like a process management and platform portability problem.

Note that on Win32 for example, where we have a single process, the
ideal mutex is a critical section.  But that isn't shared with or
accessible by other processes.

Of course there is the possibility that on some platforms, the shm
itself [or segments thereof] can be locked.  That would sure simplify
this part of the discussion.

>> Also, could just make external process use HTTP.
> 
> +1 But that really uglies up Apache, imo... basically we're using
> it as a mutex server. Talk about multi-protocol :)

No, not to serve the mutex?  That's nonsensical.  The slotmem should
just offer its getter/setter to specific clients (e.g. localhost,
trusted local subnet boxes).  And if HTTP is the transport, where do
you come up with multi-protocol?  No different than some soap service
hosted through httpd.

The disadvantage is that if the resources are congested, an external
DoS would impact internal services as well.  In some cases, that might
even be the desired behavior, but in others it renders the slotmem
useless.

I'm wondering if we aren't spending way too many cycles describing
what memcached and other providers already provide more efficiently
than httpd could?

Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Sep 3, 2009, at 1:31 PM, Akins, Brian wrote:

> On 9/3/09 10:30 AM, "Jim Jagielski" <ji...@jaguNET.com> wrote:
>
>> But consider an independent process that wants to attach to that
>> shared memory segment... at this point, we need a global lock that
>> both Apache and this ind process will honor. So somehow we need to
>> store that lock *in* the shared memory segment, so that all processes
>> that attach to it can use it.
>
> Use a semaphore? The external process already has to know certain  
> things to
> attach to the memory anyway.
>

Yeah, but this means that the external process needs to know that
Apache is using a semaphore, or the name of the file that is being
used for the mutex lock, etc... Something needs to be exposed, and
we need some way of doing that...

> Also, could just make external process use HTTP.

+1 But that really uglies up Apache, imo... basically we're using
it as a mutex server. Talk about multi-protocol :)

Re: slotmem API notes

Posted by "Akins, Brian" <Br...@turner.com>.
On 9/3/09 10:30 AM, "Jim Jagielski" <ji...@jaguNET.com> wrote:

> But consider an independent process that wants to attach to that
> shared memory segment... at this point, we need a global lock that
> both Apache and this ind process will honor. So somehow we need to
> store that lock *in* the shared memory segment, so that all processes
> that attach to it can use it.

Use a semaphore? The external process already has to know certain things to
attach to the memory anyway.

Also, could just make external process use HTTP.

-- 
Brian Akins


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 15, 2009, at 3:33 PM, Chris Darroch wrote:

> Joe Orton wrote:
>
>> w.r.t. locking, my take for socache was: you either duplicate the  
>> code in every provider, or you duplicate the code in every API  
>> consumer, so it wasn't obvious what was best.  I expected the  
>> latter would be both simpler and more flexible, so went for that.
>
>  It looks like Jim has been busy pulling out the mutex in slotmem_shm,
> so at a minimum we're in parallel at the moment, and the providers'
> code is as simple as possible (at the expense of more work for their
> callers).
>
>> I've wondered before whether some central API could simplify this,  
>> such that with little extra module code, you could get a mutex  
>> which could be configured via e.g.:
>>  Mutex slotmem fcntl:/var/run/slotmem.lck
>> and if not configured, sensible defaults are used.  But I'm not  
>> sure whether it could be done to really improve the server and  
>> avoid being an abstraction-fest code-bloat effort.
>
>

After thinking and using slotmem, I'm no longer convinced that pulling
out the lock/unlock was such a good idea... or at least completely.

Consider this use case: Apache starts off and uses the shared memory
slotmem. It uses its own locking and all access to the data from Apache
or any child process can inherit that lock and all is good.

But consider an independent process that wants to attach to that
shared memory segment... at this point, we need a global lock that
both Apache and this ind process will honor. So somehow we need to
store that lock *in* the shared memory segment, so that all processes
that attach to it can use it.

Ideas or comments?

Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Joe Orton wrote:

> w.r.t. locking, my take for socache was: you either duplicate the code 
> in every provider, or you duplicate the code in every API consumer, so 
> it wasn't obvious what was best.  I expected the latter would be both 
> simpler and more flexible, so went for that.

   It looks like Jim has been busy pulling out the mutex in slotmem_shm,
so at a minimum we're in parallel at the moment, and the providers'
code is as simple as possible (at the expense of more work for their
callers).

> I've wondered before whether some central API 
> could simplify this, such that with little extra module code, you could 
> get a mutex which could be configured via e.g.:
> 
>   Mutex slotmem fcntl:/var/run/slotmem.lck
> 
> and if not configured, sensible defaults are used.  But I'm not sure 
> whether it could be done to really improve the server and avoid being an 
> abstraction-fest code-bloat effort.


Jim Jagielski wrote:

> No, I agree it's a compelling argument... How about this as a
> compromise: The create/init accepts 2 additional arguments,
> a locking function pointer and an unlock function pointer.
> So the end user needs to determine what sort of locking they
> want when using the slotmem. The slotmem then mutexes all
> access to the slots with this mutex. Additionally, each
> slotmem function also accepts a flag that says "use provided
> locking or force no-locking", so that even if a mutex is provided,
> the end-user can decide "No, I know this is safe and so I want
> to do this without any locking" for an optimization.

   I think both ideas could be combined in an interesting and
relatively bloat-free way.  You'd want a way to be able to
use a single provider (e.g., mod_slotmem_shm) with different
locking (or none) for each instance of the provider -- a
thread-local usage might use none, while a server-wide one would
need a global mutex.
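
   Very roughly, I'm imagining the locking being chosen per instance
at create() time -- a pure sketch, with invented names:

    /* sketch: one provider, different locking per instance */
    slotmem->create(&global_map, "balancer", item_size, item_num,
                    AP_SLOTMEM_LOCK_GLOBAL, pconf, ptemp);  /* server-wide */
    slotmem->create(&local_map, "scratch", item_size, item_num,
                    AP_SLOTMEM_LOCK_NONE, pconf, ptemp);    /* no locking  */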

   I hesitate to start proposing APIs that I know I'm not going
to have time to develop, though. (But it's so hard to resist.  :-)

   Personally I'm going to probably put this on the back burner and
try to spend any available spare cycles on keeping mod_slotmem_zk
in sync with trunk and finally getting into the mod_fcgid build
stuff (borrowing the framework from mod_ftpd, per wrowe's
suggestion).  I'm certainly keen to hear opinions on the issue
of locking on httpd (and trying to be lock-free where
feasible) though.

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Joe Orton <jo...@redhat.com>.
On Thu, May 14, 2009 at 04:23:22PM -0700, Chris Darroch wrote:
>   However, note that any choices we make here also, I believe,
> impacts the socache API, which has identical issues around data
> consistency in multi-process/multi-thread contexts.  Personally
> I'd love to see these two APIs be as congruent as possible
> since they both do very similar things.  (Joe, any comments?)

w.r.t. locking, my take for socache was: you either duplicate the code 
in every provider, or you duplicate the code in every API consumer, so 
it wasn't obvious what was best.  I expected the latter would be both 
simpler and more flexible, so went for that.

There's a general problem that we have mutex code duplicated poorly 
across the server:

1) config hooks allowing the admin to specify location and maybe lock 
type,
2) initialization, mostly unified via _set_global_mutex_perms,
3) lock/unlock calls which maybe log errors, and maybe fail correctly

which is a bit of a mess.  I've wondered before whether some central API 
could simplify this, such that with little extra module code, you could 
get a mutex which could be configured via e.g.:

  Mutex slotmem fcntl:/var/run/slotmem.lck

and if not configured, sensible defaults are used.  But I'm not sure 
whether it could be done to really improve the server and avoid being an 
abstraction-fest code-bloat effort.

Regards, Joe




Re: slotmem API notes

Posted by Chris Darroch <ch...@pearsoncmg.com>.
Jim Jagielski wrote:

>>  At any rate, moving responsibility for locking up to the
>> caller level, as the socache API does, I think makes a lot of
>> sense.  It means that a caller running in a single-process,
>> single-threaded context can simply choose not to add the overhead
>> of a global lock.  Other callers can make their own decisions
>> about what kind of locking they need.

> In general, I think that a provider should know if it is thread-safe
> or not and simply "work" without the user needing to know or worry.
> I think if the provider knows that threads are available AND the MPM
> is threaded, it should by default mutex as required instead of having
> users all need to recreate that externally. If the user says "I don't
> care what MPM I have, don't worry about locking", then I am +1 on a
> flag of some sort telling the provider to bypass that.
> 
> Even so, then, I think that we should expose lock/unlock if we make
> this a user-land "you decide if you want to do it or not"... I don't
> think it makes sense having the user/caller have to worry about
> re-inventing the wheel as far as mutexes are concerned just to
> use a provider... A provider should provide the full set of functions
> the end-user should need in order to use it. (imo)

   I saw at least one +1 on this, which is OK by me -- that is,
implementing locks in the providers but then offering a
"use/don't use" flag.

   However, note that any choices we make here also, I believe,
impacts the socache API, which has identical issues around data
consistency in multi-process/multi-thread contexts.  Personally
I'd love to see these two APIs be as congruent as possible
since they both do very similar things.  (Joe, any comments?)

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B


Re: slotmem API notes

Posted by Jim Jagielski <ji...@jaguNET.com>.
On May 14, 2009, at 12:37 AM, Chris Darroch wrote:

> Hi --
>  So, define AP_SLOTMEM_PROVIDER_GROUP as "slotmem" and
> AP_SLOTMEM_PROVIDER_VERSION as "0".  Remove AP_SLOTMEM_STORAGE.
>
>  Rename mod_sharedmem to mod_slotmem_shm and mod_plainmem to
> mod_slotmem_plain.
>
>  Rename ap_slotmem_storage_method to ap_slotmem_provider_t.
>
>  Rename ap_slotmem_t to ap_slotmem_instance_t.
>
>  Remove the "slotmem_" prefix from method names (i.e., the names
> of the functions provided by a provider).  If I've got a slotmem
> provider, it's just extra typing to have to call
> slotmem->slotmem_create() instead of slotmem->create().
>

Assuming we remove the abstraction, +1.

>
>  Next, there are several problems I see with the apslotmem_type
> enum at the moment.  For one thing, there's only one enumerated value,
> SLOTMEM_PERSIST, so there's no way to specify "don't persist".
> I hacked around that by passing 1 in mod_shmap.
>
>  Also, if we retain this type, I'd suggest renaming it to a
> slightly more conventional looking ap_slotmem_type_t.
>
>  But my personal preference would be to remove it and instead
> add an unsigned int flags field to ap_slotmem_provider_t, and
> define some flag values, such as AP_SLOTMEM_FLAG_NOTMPSAFE = 0x0001
> (read "not MP-safe") and AP_SLOTMEM_FLAG_PERSIST = 0x0002.
>
>  While enumerated types are clean if you've got a long list of
> mutually exclusive values, a flags field follows the existing socache
> framework, and allows for a number of mutually non-exclusive flags to
> be set.  This is actually closer to what I suspect future development
> might bring.
>

Again, +1 for this change.

>
>  At any rate, moving responsibility for locking up to the
> caller level, as the socache API does, I think makes a lot of
> sense.  It means that a caller running in a single-process,
> single-threaded context can simply choose not to add the overhead
> of a global lock.  Other callers can make their own decisions
> about what kind of locking they need.
>

In general, I think that a provider should know if it is thread-safe
or not and simply "work" without the user needing to know or worry.
I think if the provider knows that threads are available AND the MPM
is threaded, it should by default mutex as required instead of having
users all need to recreate that externally. If the user says "I don't
care what MPM I have, don't worry about locking", then I am +1 on a
flag of some sort telling the provider to bypass that.

Even so, then, I think that we should expose lock/unlock if we make
this a user-land "you decide if you want to do it or not"... I don't
think it makes sense having the user/caller have to worry about
re-inventing the wheel as far as mutexes are concerned just to
use a provider... A provider should provide the full set of functions
the end-user should need in order to use it. (imo)
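
Something like the following, say (a sketch only):

    /* sketch: the provider exposes its own lock/unlock for callers that want it */
    apr_status_t (*lock)(ap_slotmem_instance_t *s);
    apr_status_t (*unlock)(ap_slotmem_instance_t *s);

    /* caller, only when it decides serialization matters: */
    slotmem->lock(instance);
    slotmem->put(instance, id, (unsigned char *)&item, sizeof(item));
    slotmem->unlock(instance);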