Posted to dev@httpd.apache.org by Jeff Trawick <tr...@gmail.com> on 2007/10/24 01:34:45 UTC

1.3 pid table changes vs. uptime?

With 1.3.39, typically 16 bytes are lost forever in the parent process
at child process creation with the ap_table_set(). Did anyone work
through a rationalization of this?

Perhaps we could say that 8MB is the amount by which the size of the
parent could grow in three months before causing undue interest (i.e.,
wasting investigation time)...  That allows around 500,000 process
creations to leak 16 bytes each in the three months, which is around
5000 process creations a day.

5000 process creations a day doesn't seem at all out of range for a
busy 1.3 server on Unix.
8MB doesn't seem out of range for growth for a closely watched server.
3 months doesn't seem at all longer than a desirable uptime.

I don't see any hashing of keys for speedy lookups in 1.3
ap_table_get() to potentially justify the memory overhead.

Alternative opinions?
-- 
Born in Roswell... married an alien...

Re: 1.3 pid table changes vs. uptime?

Posted by Jeff Trawick <tr...@gmail.com>.
On 10/24/07, Jeff Trawick <tr...@gmail.com> wrote:
> On 10/24/07, Jim Jagielski <ji...@jagunet.com> wrote:
> >
> > On Oct 24, 2007, at 10:20 AM, Jim Jagielski wrote:
> >
> > > Looking at the below... testing as we speak:
> > >
> >
> > Testing passed; I placed it on a test server which gets
> > hit with a goodly amount of traffic. So far, so good :)
>
> The patch looks fine to me, but I'll try to do some gdb-ing through it
> tonight anyway.
>

I ran for a while with

MaxRequestsPerChild 1
ProxyPass / http://www.ibm.com/
ProxyPassReverse / http://www.ibm.com/

and jumped around that site for a while, then did apachectl restart and got

[Wed Oct 24 21:52:40 2007] [error] Bad pid (1707) in scoreboard slot 4
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1702) in scoreboard slot 5
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1708) in scoreboard slot 6
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1712) in scoreboard slot 7
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1713) in scoreboard slot 8
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1714) in scoreboard slot 9
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1715) in scoreboard slot 10
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1716) in scoreboard slot 11
[Wed Oct 24 21:52:40 2007] [error] Bad pid (1717) in scoreboard slot 12
[Wed Oct 24 21:52:40 2007] [notice] SIGHUP received.  Attempting to restart

I can't yet reproduce it at will, but will try again tomorrow.  I
doubt this has anything to do with your current patch.

Re: 1.3 pid table changes vs. uptime?

Posted by Jeff Trawick <tr...@gmail.com>.
On 10/24/07, Jim Jagielski <ji...@jagunet.com> wrote:
>
> On Oct 24, 2007, at 10:20 AM, Jim Jagielski wrote:
>
> > Looking at the below... testing as we speak:
> >
>
> Testing passed; I placed it on a test server which gets
> hit with a goodly amount of traffic. So far, so good :)

The patch looks fine to me, but I'll try to do some gdb-ing through it
tonight anyway.

Re: 1.3 pid table changes vs. uptime?

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Oct 24, 2007, at 10:20 AM, Jim Jagielski wrote:

> Looking at the below... testing as we speak:
>

Testing passed; I placed it on a test server which gets
hit with a goodly amount of traffic. So far, so good :)

Will give 24hrs and commit.


Re: 1.3 pid table changes vs. uptime?

Posted by Jim Jagielski <ji...@jaguNET.com>.
Looking at the below... testing as we speak:

Index: main/http_main.c
===================================================================
--- main/http_main.c    (revision 587509)
+++ main/http_main.c    (working copy)
@@ -362,7 +362,7 @@
/*
   * Parent process local storage of child pids
   */
-static table *pid_table;
+static int pid_table[HARD_SERVER_LIMIT];
/*
   * Pieces for managing the contents of the Server response header
@@ -384,26 +384,34 @@
   */
static int in_pid_table(int pid) {
-    char apid[64];      /* WAY generous! */
-    const char *spid;
-    ap_snprintf(apid, sizeof(apid), "%d", pid);
-    spid = ap_table_get(pid_table, apid);
-    if (spid && spid[0] == '1' && spid[1] == '\0')
-        return 1;
-    else
-        return 0;
+    int i;
+    for (i = 0; i < HARD_SERVER_LIMIT; i++) {
+        if (pid_table[i] == pid) {
+            return 1;
+        }
+    }
+    return 0;
}
static void set_pid_table(int pid) {
-    char apid[64];
-    ap_snprintf(apid, sizeof(apid), "%d", pid);
-    ap_table_set(pid_table, apid, "1");
+    int i;
+    for (i = 0; i < HARD_SERVER_LIMIT; i++) {
+        if (pid_table[i] == 0) {
+            pid_table[i] = pid;
+            break;
+        }
+    }
+    /* NOTE: Error detection?? */
}
static void unset_pid_table(int pid) {
-    char apid[64];
-    ap_snprintf(apid, sizeof(apid), "%d", pid);
-    ap_table_unset(pid_table, apid);
+    int i;
+    for (i = 0; i < HARD_SERVER_LIMIT; i++) {
+        if (pid_table[i] == pid) {
+            pid_table[i] = 0;
+            break;
+        }
+    }
}
/*
@@ -4370,6 +4378,7 @@
   */
static void common_init(void)
{
+    int i;
      INIT_SIGLIST()
#ifdef AUX3
      (void) set42sig();
@@ -4395,7 +4404,10 @@
     ap_server_pre_read_config  = ap_make_array(pcommands, 1, sizeof(char *));
     ap_server_post_read_config = ap_make_array(pcommands, 1, sizeof(char *));
     ap_server_config_defines   = ap_make_array(pcommands, 1, sizeof(char *));
-    pid_table                  = ap_make_table(pglobal, HARD_SERVER_LIMIT);
+    /* overkill since static */
+    for (i = 0; i < HARD_SERVER_LIMIT; i++) {
+        pid_table[i] = 0;
+    }
}
#ifndef MULTITHREAD


Re: 1.3 pid table changes vs. uptime?

Posted by Jim Jagielski <ji...@apache.org>.
On Oct 24, 2007, at 9:49 AM, Jeff Trawick wrote:

>>
>> Should I look at something like the above?
>
> please ;)
>

I did a quick and dirty profile and we do save space (of
course, plus it's static space, as in non-growing) and
speed as well, even in the worst case.


Re: 1.3 pid table changes vs. uptime?

Posted by Jeff Trawick <tr...@gmail.com>.
On 10/24/07, Jim Jagielski <ji...@jagunet.com> wrote:
>
> On Oct 24, 2007, at 8:54 AM, Jim Jagielski wrote:
>
> >
> > On Oct 23, 2007, at 7:34 PM, Jeff Trawick wrote:
> >
> >>
> >> Alternative opinions?
> >
> > Alternative implementations are welcomed.
> >
>
> Certainly one trade-off would be speed over space; having
> pid_table an actual (C) array of pids.

current space before creating any child processes:

. a small bit of ap_table overhead
. an array of HARD_SERVER_LIMIT * sizeof(table_entry), where
table_entry is a couple of char *
. and more storage allocated/lost as we add entries to the table

future space before and after creating any child processes:

. an array of HARD_SERVER_LIMIT * sizeof(pid_t)

(though I see we pass pid around as an int so maybe sizeof(int) is a
more accurate way to describe it)

we just won the space trade-off

>                       When "setting" we would
> sequentially scan through that array for an "empty"
> space and tuck the pid in there; when removing we would
> scan through until we found our pid and unset it (0).

current lookup:
ap_snprintf(apid, sizeof(apid), "%d", pid);
for (i = 0; i < HARD_SERVER_LIMIT; i++) {
    if (!strcasecmp(pid_table[i].key, apid)) {
        break;
    }
}

future lookup:

for (i = 0; i < HARD_SERVER_LIMIT; i++) {
    if (pid_table[i] == pid) {
        break;
    }
}

we just won the speed trade-off

currently, when we actually modify the table to add or remove a pid,
we do this sort of logic:

	    else {		/* delete an extraneous element */
		for (j = i, k = i + 1; k < t->a.nelts; ++j, ++k) {
		    elts[j].key = elts[k].key;
		    elts[j].val = elts[k].val;
		}
		--t->a.nelts;
	    }

instead of simply setting the array element to 0

> I haven't profiled this so it actually might be better
> than the overhead of using ap_tables...
>
> Should I look at something like the above?

please ;)

(I need to get back to analyzing the proposed 1.3 proxy change to
ensure that differences between 1.3 utility routines and apr versions
don't make that a bad idea, as well as actually test it with more than
a cut-n-paste of the time string example in the asctime man page)

Re: 1.3 pid table changes vs. uptime?

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Oct 24, 2007, at 8:54 AM, Jim Jagielski wrote:

>
> On Oct 23, 2007, at 7:34 PM, Jeff Trawick wrote:
>
>>
>> Alternative opinions?
>
> Alternative implementations are welcomed.
>

Certainly one trade-off would be speed over space; having
pid_table an actual (C) array of pids. When "setting" we would
sequentially scan through that array for an "empty"
space and tuck the pid in there; when removing we would
scan through until we found our pid and unset it (0).
I haven't profiled this so it actually might be better
than the overhead of using ap_tables...

Should I look at something like the above?

Re: 1.3 pid table changes vs. uptime?

Posted by Jim Jagielski <ji...@jaguNET.com>.
On Oct 23, 2007, at 7:34 PM, Jeff Trawick wrote:

>
> Alternative opinions?

Alternative implementations are welcomed.