Posted to modperl@perl.apache.org by "Richard F. Rebel" <rr...@whenu.com> on 2005/01/17 19:59:14 UTC
Re: mp2 + worker mpm + threads + threads::shared + PerlChildInitHandler
Another good idea... :)
But I am transfixed by this problem... I can't seem to get each forked
apache server to have both a shared global hash between all cloned
interpreters, *and* one thread in each process that runs in the
background doing housekeeping. I can think of numerous things that this
would be useful for.
I know I am close, but I can't quite grasp what I am missing. I
thought PerlChildInitHandlers were called for each forked child from its
first/main interpreter (the one that all the others are cloned from).
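The shape I'm after, reduced to plain Perl outside mod_perl (a sketch only; the names are illustrative, and unlike the mod_perl version below I join the housekeeping thread so the script can exit cleanly):

```perl
#!/usr/bin/perl
# One hash shared between all threads, plus one background
# "housekeeping" thread that periodically inspects it.
use strict;
use warnings;
use threads;
use threads::shared;

my %SHARED  :shared;
my $running :shared = 1;
$SHARED{count} = 0;

# Background housekeeping thread (the mod_perl version detaches this;
# here we keep the handle so we can join it before exiting).
my $housekeeper = threads->create(sub {
    while ($running) {
        {
            lock(%SHARED);
            print "housekeeping sees count = $SHARED{count}\n";
        }
        sleep 1;
    }
});

# Simulated "request" threads bump the counter under the lock.
my @workers = map {
    threads->create(sub {
        for (1 .. 100) {
            lock(%SHARED);
            $SHARED{count}++;
        }
    });
} 1 .. 4;
$_->join for @workers;

# Tell the housekeeper to stop, then wait for it.
{ lock($running); $running = 0; }
$housekeeper->join;

print "final count = $SHARED{count}\n";  # final count = 400
```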
On Mon, 2005-01-17 at 13:59 -0500, Perrin Harkins wrote:
> On Mon, 2005-01-17 at 11:25 -0500, Richard F. Rebel wrote:
> > Unfortunately, the volume is high enough that it's no longer possible to
> > keep these counters in the database updated in real time (updates are
> > on the order of thousands per second).
>
> I would just use BerkeleyDB for this, which can easily keep up, rather
> than messing with threads, but I'm interested in seeing if your
> threading idea will work well.
>
> > * An overseer/manager thread that wakes up once every so often and
> > updates the MySQL database with the contents of the global shared hash.
>
> Rather than doing that, why not just update it from a cleanup handler
> every time the counter goes up by 10000 or so? Seems much easier to me.
>
> - Perrin
>
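Perrin's threshold idea above can be sketched like this (plain Perl; flush_to_db is a hypothetical stand-in for the real DBI update that a PerlCleanupHandler would perform):

```perl
# Flush an in-memory counter to the database only once per
# FLUSH_EVERY increments, instead of on every request.
use strict;
use warnings;

my $FLUSH_EVERY = 10_000;
my $count   = 0;   # per-process counter
my $pending = 0;   # increments not yet written out
my $flushed = 0;   # what the "database" has seen so far

sub flush_to_db {          # placeholder for a real DBI UPDATE
    my ($n) = @_;
    $flushed += $n;
}

# This would run once per request from a cleanup handler:
sub bump_counter {
    $count++;
    if (++$pending >= $FLUSH_EVERY) {
        flush_to_db($pending);
        $pending = 0;
    }
}

bump_counter() for 1 .. 25_000;
print "count=$count flushed=$flushed pending=$pending\n";
# count=25000 flushed=20000 pending=5000
```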
--
Richard F. Rebel
cat /dev/null > `tty`
Re: [SOLVED] mp2 + worker mpm + threads + threads::shared + PerlChildInitHandler
Posted by "Richard F. Rebel" <rr...@whenu.com>.
Hi,
Well, the problem was my fault. :/ I had a bug in a generic base class
I use that makes it easier to build classes that work both inside and
outside mod_perl.
For those of you who are interested, this solution works well. By
using a PerlChildInitHandler to create a thread that maintains a shared
global hash of hashes, holding a small portion of a database plus some
information acquired via XML::RPC, my systems perform much better.
We were having problems with the availability and performance of the
external data sources (e.g. MySQL or remote XML::RPC servers), which
would cause our Apache instances to sit waiting for timeouts on each request.
Yay.
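One threads::shared detail the module below relies on, shown standalone (a small sketch): you cannot store a plain hashref inside a shared hash; the inner hash must itself be shared, via &share({}) (or shared_clone on newer Perls).

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my %SHARED :shared;

# This would die with "Invalid value for shared scalar":
# $SHARED{test} = {};

$SHARED{test} = &share({});      # shared inner hash, as in the module
$SHARED{test}{count} = 1;

# Another thread sees and updates the same nested hash.
my $t = threads->create(sub {
    lock(%{ $SHARED{test} });
    $SHARED{test}{count}++;
});
$t->join;

print "count = $SHARED{test}{count}\n";  # count = 2
```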
Here is a demonstration module for those interested:
package TestPerlChildInit;

use strict;
use lib '/opt/whenu/lib/whenu-perl-lib';

use threads;
use threads::shared;

use vars qw(@ISA @EXPORT @EXPORT_OK %EXPORT_TAGS);
use vars qw($DEBUG *DBG $CFG);
use vars qw(%SHARED);

@ISA         = qw(Exporter);
@EXPORT      = qw();
@EXPORT_OK   = qw();
%EXPORT_TAGS = (':DEFAULT' => [qw()],
                ':handler' => [qw()]);

BEGIN {
    use mod_perl;
    use Apache2;
    use Apache::Const qw(:common :http);
    use Apache::RequestRec qw();
    use Apache::RequestIO qw();
    use Apache::Connection qw();
    use Apache::ServerUtil qw();
    use Apache::Module qw();
    use Apache::Util qw();
    use Apache::URI qw();
    use Apache::Log qw();
    use APR::OS qw();
    use APR::Table qw();

    ## Share the hash before cloning; the inner hash must itself be
    ## shared (a plain {} cannot be stored in a shared hash).
    share(%SHARED);
    $SHARED{'test'} = &share({});
    $SHARED{'test'}->{'count'} = 1;

    my $res = Apache->server->push_handlers(
        PerlChildInitHandler => \&mod_perl_ChildInitHandler);
    print STDERR "Testing[$$]: Installed ChildInitHandler result '$res'\n";
}

sub mod_perl_ChildInitHandler {
    print STDERR "mod_perl_ChildInitHandler\n";

    ## Start a detached watchdog thread that (re)starts the overseer
    ## thread if it ever exits.
    threads->new(sub {
        while (1) {
            my $ovs = threads->new(\&overseer);
            print STDERR "Testing[$$]: Started overseer thread\n";
            $ovs->join();
            print STDERR "Testing[$$]: Joined overseer thread (probably bad)\n";
            ## Add backoff for spawning too quickly etc.
        }
    })->detach;

    return &Apache::OK;
}

sub overseer {
    print STDERR "Testing[$$]->", threads->self->tid, " Overseer Startup...\n";
    while (sleep 2) {
        lock(%{$SHARED{'test'}});
        print STDERR "Testing[$$]->", threads->self->tid,
            ": \$SHARED{'test'}->{'count'} = $SHARED{'test'}->{'count'}\n";
        ## Here is where you can do more interesting things, such as
        ## fetch data from databases or external sources, or update them.
    }
}

sub handler : method {
    my $class = shift;
    my $r     = shift;

    ## Scope the lock so it is released before we build the response.
    {
        lock(%{$SHARED{'test'}});
        $SHARED{'test'}->{'count'}++;
    }

    $r->no_cache();
    $r->err_headers_out->{"Expires"}       = "Sat, 1 Jan 2000 00:00:00 GMT";
    $r->err_headers_out->{"Pragma"}        = "no-cache";
    $r->err_headers_out->{"Cache-Control"} = "no-cache";
    $r->err_headers_out->{"Location"}      = 'http://www.google.com';
    $r->status(&Apache::REDIRECT);
    $r->rflush();

    return &Apache::OK;
}

1;
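For completeness, one way to wire the module up; this is a configuration sketch, not from the original thread, and the URL path is an assumption (the library path matches the `use lib` line above):

```apache
# httpd.conf fragment (mp2 pre-1.0 Apache:: API, as in the module)
PerlSwitches -I/opt/whenu/lib/whenu-perl-lib
PerlModule TestPerlChildInit

<Location /test-childinit>
    SetHandler perl-script
    PerlResponseHandler TestPerlChildInit
</Location>
```

Loading the module with PerlModule runs its BEGIN block at server startup, which is what installs the PerlChildInitHandler before the children fork.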
On Mon, 2005-01-17 at 13:59 -0500, Richard F. Rebel wrote:
> Another good idea... :)
>
> But I am transfixed by this problem... I can't seem to get each forked
> apache server to have both a shared global hash between all cloned
> interpreters, *and* one thread in each process that runs in the
> background doing housekeeping. I can think of numerous things that this
> would be useful for.
>
> I know I am close, but I can't quite grasp what I am missing. I
> thought PerlChildInitHandlers were called for each forked child from its
> first/main interpreter (the one that all the others are cloned from).
>
>
> On Mon, 2005-01-17 at 13:59 -0500, Perrin Harkins wrote:
> > On Mon, 2005-01-17 at 11:25 -0500, Richard F. Rebel wrote:
> > > Unfortunately, the volume is high enough that it's no longer possible to
> > > keep these counters in the database updated in real time (updates are
> > > on the order of thousands per second).
> >
> > I would just use BerkeleyDB for this, which can easily keep up, rather
> > than messing with threads, but I'm interested in seeing if your
> > threading idea will work well.
> >
> > > * An overseer/manager thread that wakes up once every so often and
> > > updates the MySQL database with the contents of the global shared hash.
> >
> > Rather than doing that, why not just update it from a cleanup handler
> > every time the counter goes up by 10000 or so? Seems much easier to me.
> >
> > - Perrin
> >
--
Richard F. Rebel
cat /dev/null > `tty`