Posted to dev@httpd.apache.org by Aaron Bannert <aa...@clove.org> on 2001/09/17 20:12:27 UTC

[PATCH] worker MPM patch: "short-and-sweet"

After working with my two proposed worker MPM models, I've become more
confident in the simple model. I'll continue benchmarking both designs,
but I wanted to get this one out to fix what's in CVS right now, and
so I can provide some more tweaks I've been working on (turn the LIFO
queue to a FIFO, fix the scoreboard states, setconcurrency, etc...).

Again, I've tested this patch extensively on solaris and linux with
favorable results.

Patch description:
- Don't reuse transaction pools; instead, create one for each new
  connection request. This seems to incur less overhead than trying
  to shuffle the pools around for reuse. (A minimal sketch of this
  lifecycle follows the description.)
- Get rid of one of the fd_queue condition variables. We don't need
  to wait on the queue if it's full; a full queue means the caller
  didn't allocate enough entries, so it is now reported as an error
  rather than waited on. This relieves some overhead from
  signal/condition management.
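
For illustration only (this program is not part of the patch; the helper
function and the stubbed-out accept/processing steps are made up), here is
a minimal standalone sketch of the per-connection pool lifecycle, using the
same APR calls the patch uses:

#include <stdio.h>
#include "apr_general.h"
#include "apr_pools.h"

/* Stands in for one worker iteration: the thread pool is the parent,
 * and each "connection" gets a fresh subpool that is destroyed when
 * processing is done. */
static void handle_one_connection(apr_pool_t *tpool, int conn_num)
{
    apr_pool_t *ptrans = NULL;
    char *scratch;

    /* create a new transaction pool for each accepted socket */
    apr_pool_create(&ptrans, tpool);

    /* apr_accept() and request processing would allocate from ptrans
     * here; a scratch buffer is used as a stand-in */
    scratch = apr_palloc(ptrans, 64);
    snprintf(scratch, 64, "handled connection %d", conn_num);
    printf("%s\n", scratch);

    /* instead of clearing and recycling the pool, throw it away */
    apr_pool_destroy(ptrans);
}

int main(void)
{
    apr_pool_t *tpool = NULL;
    int i;

    apr_initialize();
    apr_pool_create(&tpool, NULL); /* stands in for the worker's tpool */

    for (i = 0; i < 3; i++) {
        handle_one_connection(tpool, i);
    }

    apr_pool_destroy(tpool);
    apr_terminate();
    return 0;
}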

-aaron


Index: server/mpm/worker/worker.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/mpm/worker/worker.c,v
retrieving revision 1.21
diff -u -r1.21 worker.c
--- server/mpm/worker/worker.c	2001/08/30 20:50:06	1.21
+++ server/mpm/worker/worker.c	2001/09/17 02:46:01
@@ -123,7 +123,6 @@
 static int num_listensocks = 0;
 static apr_socket_t **listensocks;
 static fd_queue_t *worker_queue;
-static fd_queue_t *pool_queue; /* a resource pool of context pools */
 
 /* The structure used to pass unique initialization info to each thread */
 typedef struct {
@@ -204,7 +203,6 @@
     /* XXX: This will happen naturally on a graceful, and we don't care otherwise.
     ap_queue_signal_all_wakeup(worker_queue); */
     ap_queue_interrupt_all(worker_queue);
-    ap_queue_interrupt_all(pool_queue);
 }
 
 AP_DECLARE(apr_status_t) ap_mpm_query(int query_code, int *result)
@@ -558,7 +556,6 @@
     int thread_slot = ti->tid;
     apr_pool_t *tpool = apr_thread_pool_get(thd);
     apr_socket_t *csd = NULL;
-    apr_socket_t *dummycsd = NULL;
     apr_pool_t *ptrans;		/* Pool for per-transaction stuff */
     apr_socket_t *sd = NULL;
     int n;
@@ -644,20 +641,9 @@
         }
     got_fd:
         if (!workers_may_exit) {
+            /* create a new transaction pool for each accepted socket */
+            apr_pool_create(&ptrans, tpool);
 
-            /* pull the next available transaction pool from the queue */
-            if ((rv = ap_queue_pop(pool_queue, &dummycsd, &ptrans))
-                != FD_QUEUE_SUCCESS) {
-                if (rv == FD_QUEUE_EINTR) {
-                    goto got_fd;
-                }
-                else { /* got some error in the queue */
-                    csd = NULL;
-                    ap_log_error(APLOG_MARK, APLOG_ERR, rv, ap_server_conf, 
-                        "ap_queue_pop");
-                }
-            }
-
             if ((rv = apr_accept(&csd, sd, ptrans)) != APR_SUCCESS) {
                 csd = NULL;
                 ap_log_error(APLOG_MARK, APLOG_ERR, rv, ap_server_conf, 
@@ -692,7 +678,9 @@
     ap_scoreboard_image->parent[process_slot].quiescing = 1;
     kill(ap_my_pid, SIGTERM);
 
+/* Unsure if this can be safely uncommented. -aaron
     apr_thread_exit(thd, APR_SUCCESS);
+*/
     return NULL;
 }
 
@@ -718,12 +706,7 @@
         }
         process_socket(ptrans, csd, process_slot, thread_slot);
         requests_this_child--; /* FIXME: should be synchronized - aaron */
-
-        /* get this transaction pool ready for the next request */
-        apr_pool_clear(ptrans);
-        /* don't bother checking if we were interrupted in ap_queue_push,
-         * because we're going to check workers_may_exit right now anyway. */
-        ap_queue_push(pool_queue, NULL, ptrans);
+        apr_pool_destroy(ptrans);
     }
 
     ap_update_child_status(process_slot, thread_slot,
@@ -758,23 +741,11 @@
     int i = 0;
     int threads_created = 0;
     apr_thread_t *listener;
-    apr_pool_t *ptrans;
-    apr_socket_t *dummycsd = NULL;
 
     /* We must create the fd queues before we start up the listener
      * and worker threads. */
-    worker_queue = apr_pcalloc(pchild, sizeof(fd_queue_t));
+    worker_queue = apr_pcalloc(pchild, sizeof(*worker_queue));
     ap_queue_init(worker_queue, ap_threads_per_child, pchild);
-
-    /* create the resource pool of available transaction pools */
-    pool_queue = apr_pcalloc(pchild, sizeof(fd_queue_t));
-    ap_queue_init(pool_queue, ap_threads_per_child, pchild);
-    /* fill the pool_queue with real pools */
-    for (i = 0; i < ap_threads_per_child; i++) {
-        ptrans = NULL;
-        apr_pool_create(&ptrans, pchild);
-        ap_queue_push(pool_queue, dummycsd, ptrans);
-    }
 
     my_info = (proc_info *)malloc(sizeof(proc_info));
     my_info->pid = my_child_num;
Index: server/mpm/worker/fdqueue.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/mpm/worker/fdqueue.c,v
retrieving revision 1.5
diff -u -r1.5 fdqueue.c
--- server/mpm/worker/fdqueue.c	2001/08/27 23:50:12	1.5
+++ server/mpm/worker/fdqueue.c	2001/09/17 02:46:02
@@ -89,7 +89,6 @@
      * XXX: We should at least try to signal an error here, it is
      * indicative of a programmer error. -aaron */
     pthread_cond_destroy(&queue->not_empty);
-    pthread_cond_destroy(&queue->not_full);
     pthread_mutex_destroy(&queue->one_big_mutex);
 
     return FD_QUEUE_SUCCESS;
@@ -107,8 +106,6 @@
         return FD_QUEUE_FAILURE;
     if (pthread_cond_init(&queue->not_empty, NULL) != 0)
         return FD_QUEUE_FAILURE;
-    if (pthread_cond_init(&queue->not_full, NULL) != 0)
-        return FD_QUEUE_FAILURE;
 
     bounds = queue_capacity + 1;
     queue->head = queue->tail = 0;
@@ -136,9 +133,13 @@
         return FD_QUEUE_FAILURE;
     }
 
-    /* Keep waiting until we wake up and find that the queue is not full. */
-    while (ap_queue_full(queue)) {
-        pthread_cond_wait(&queue->not_full, &queue->one_big_mutex);
+    /* If the caller didn't allocate enough slots and tries to push
+     * too many, too bad. */
+    if (ap_queue_full(queue)) {
+        if (pthread_mutex_unlock(&queue->one_big_mutex) != 0) {
+            return FD_QUEUE_FAILURE;
+        }
+        return FD_QUEUE_OVERFLOW;
     }
 
     queue->data[queue->tail].sd = sd;
@@ -187,9 +188,6 @@
         queue->head = (queue->head + 1) % queue->bounds;
     }
     queue->blanks++;
-
-    /* we just consumed a slot, so we're no longer full */
-    pthread_cond_signal(&queue->not_full);
 
     if (pthread_mutex_unlock(&queue->one_big_mutex) != 0) {
         return FD_QUEUE_FAILURE;
Index: server/mpm/worker/fdqueue.h
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/mpm/worker/fdqueue.h,v
retrieving revision 1.5
diff -u -r1.5 fdqueue.h
--- server/mpm/worker/fdqueue.h	2001/08/24 16:49:39	1.5
+++ server/mpm/worker/fdqueue.h	2001/09/17 02:46:02
@@ -72,6 +72,7 @@
 #define FD_QUEUE_FAILURE -1 /* Needs to be an invalid file descriptor because
                                of queue_pop semantics */
 #define FD_QUEUE_EINTR APR_EINTR
+#define FD_QUEUE_OVERFLOW -2
 
 struct fd_queue_elem_t {
     apr_socket_t      *sd;
@@ -87,7 +88,6 @@
     int                blanks;
     pthread_mutex_t    one_big_mutex;
     pthread_cond_t     not_empty;
-    pthread_cond_t     not_full;
     int                cancel_state;
 };
 typedef struct fd_queue_t fd_queue_t;
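
For completeness, a small standalone illustration of the new push behavior
(this program is not part of the patch; it assumes it is compiled together
with fdqueue.c and linked against APR, and the queue size, loop, and NULL
socket entries are made up): with the not_full condition variable gone,
pushing into a full queue returns FD_QUEUE_OVERFLOW immediately instead of
blocking the caller.

#include <stdio.h>
#include "apr_general.h"
#include "apr_pools.h"
#include "fdqueue.h"

int main(void)
{
    apr_pool_t *pool = NULL;
    fd_queue_t *queue;
    int i, rv;

    apr_initialize();
    apr_pool_create(&pool, NULL);

    /* a tiny queue with room for only 2 entries, allocated the same
     * way worker.c allocates its queue */
    queue = apr_pcalloc(pool, sizeof(*queue));
    ap_queue_init(queue, 2, pool);

    for (i = 0; i < 3; i++) {
        /* NULL socket entries are fine for this demonstration */
        rv = ap_queue_push(queue, NULL, pool);
        if (rv == FD_QUEUE_OVERFLOW) {
            printf("push %d: overflow -- the caller under-sized the queue\n", i);
        }
        else if (rv != FD_QUEUE_SUCCESS) {
            printf("push %d: failed\n", i);
        }
        else {
            printf("push %d: ok\n", i);
        }
    }

    /* queue cleanup (mutex/condvar destruction) omitted for brevity */
    apr_pool_destroy(pool);
    apr_terminate();
    return 0;
}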

Re: [PATCH] worker MPM patch: "short-and-sweet"

Posted by Aaron Bannert <aa...@clove.org>.
On Mon, Sep 17, 2001 at 04:07:06PM -0700, Brian Pane wrote:
> Aaron Bannert wrote:
> 
> >After working with my two proposed worker MPM models, I've become more
> >confident in the simple model. I'll continue benchmarking both designs,
> >but I wanted to get this one out to fix what's in CVS right now, and
> >so I can provide some more tweaks I've been working on (turn the LIFO
> >queue to a FIFO, fix the scoreboard states, setconcurrency, etc...).
> >
> >Again, I've tested this patch extensively on solaris and linux with
> >favorable results.
> >
> >Patch description:
> >- Don't reuse transaction pools, instead create one for each new
> >  connection request. This seems to incur less overhead than trying
> >  to shuffle the pools around for reuse.
> >- Get rid of one of the fd_queue condition variables. We don't need
> >  to wait on the queue if it's full, that means the caller didn't
> >  allocate enough entries in the queue. This relieves some overhead
> >  from signal/condition management.
> >
> Looks reasonable to me.  I liked the more complicated "time-space tradeoff"
> design better because it's theoretically more scalable, but if the "short
> and sweet" design can outperform prefork in multiprocessor tests (e.g.,
> Ian's 8-CPU mstone benchmark) then I'm in favor of going with the simple
> approach.

I will continue to work on the more elaborate model, but for now this patch
is an incremental improvement over what we have in CVS (faster, much cleaner,
a partial rollback to an older model). The LIFO patch is ready and waiting
for this to be committed. :)

-aaron

Re: [PATCH] worker MPM patch: "short-and-sweet"

Posted by Brian Pane <bp...@pacbell.net>.
Aaron Bannert wrote:

>After working with my two proposed worker MPM models, I've become more
>confident in the simple model. I'll continue benchmarking both designs,
>but I wanted to get this one out to fix what's in CVS right now, and
>so I can provide some more tweaks I've been working on (turn the LIFO
>queue to a FIFO, fix the scoreboard states, setconcurrency, etc...).
>
>Again, I've tested this patch extensively on solaris and linux with
>favorable results.
>
>Patch description:
>- Don't reuse transaction pools, instead create one for each new
>  connection request. This seems to incur less overhead than trying
>  to shuffle the pools around for reuse.
>- Get rid of one of the fd_queue condition variables. We don't need
>  to wait on the queue if it's full, that means the caller didn't
>  allocate enough entries in the queue. This relieves some overhead
>  from signal/condition management.
>
Looks reasonable to me.  I liked the more complicated "time-space tradeoff"
design better because it's theoretically more scalable, but if the "short
and sweet" design can outperform prefork in multiprocessor tests (e.g.,
Ian's 8-CPU mstone benchmark) then I'm in favor of going with the simple
approach.

--Brian




Re: [PATCH] worker MPM patch: "short-and-sweet"

Posted by Aaron Bannert <aa...@clove.org>.
On Tue, Sep 18, 2001 at 02:21:33PM -0700, Ryan Bloom wrote:
> I finally had time to review and commit this.  Keep 'em coming.

Cool. Thanks for the quick turnaround. I just posted another improvement,
and I'll be posting a third in the next few minutes :)

-aaron

Re: [PATCH] worker MPM patch: "short-and-sweet"

Posted by Ryan Bloom <rb...@covalent.net>.
On Monday 17 September 2001 11:12 am, Aaron Bannert wrote:
> After working with my two proposed worker MPM models, I've become more
> confident in the simple model. I'll continue benchmarking both designs,
> but I wanted to get this one out to fix what's in CVS right now, and
> so I can provide some more tweaks I've been working on (turn the LIFO
> queue to a FIFO, fix the scoreboard states, setconcurrency, etc...).
>
> Again, I've tested this patch extensively on solaris and linux with
> favorable results.
>
> Patch description:
> - Don't reuse transaction pools, instead create one for each new
>   connection request. This seems to incur less overhead than trying
>   to shuffle the pools around for reuse.
> - Get rid of one of the fd_queue condition variables. We don't need
>   to wait on the queue if it's full, that means the caller didn't
>   allocate enough entries in the queue. This relieves some overhead
>   from signal/condition management.

I finally had time to review and commit this.  Keep 'em coming.

Ryan
______________________________________________________________
Ryan Bloom				rbb@apache.org
Covalent Technologies			rbb@covalent.net
--------------------------------------------------------------