Posted to modperl@perl.apache.org by Peter Pilsl <pi...@goldfisch.at> on 2001/11/13 23:18:42 UTC

cgi-object not cacheable

One run of my script takes about 2 seconds. This includes a lot of
database queries, calculations and so on.  About 0.3 seconds are used
just for one command: $query=new CGI;

I tried to cache the retrieved object between several requests by
storing it in a persistent variable to avoid this cost, but it is
not cacheable (meaning: operations on a cached CGI object
simply produce nothing).

This is not a problem with my persistent variables, because this works
with many other objects like DB handles (I can't use Apache::DBI because
it keeps too many handles open, so I need to cache and pool on my
own), filehandles etc.

Any ideas?

thnx,
peter



-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: pilsl@goldfisch.at
sms  : pilsl@max.mail.at

pgp-key available

RE: cgi-object not cacheable

Posted by Andy Sharp <as...@nector.com>.
> That's usually pretty accurate, so I guess it really takes 
> that long on your system.  Try Apache::Request!  Or even one 
> of the lighter CGI modules like CGI_Lite.
> 
> > In my case it means up to 4 connections per process, because in fact
> > it is not one module but two (input and output) and each needs to
> > handle two different connections.
> 
> If you could reduce that, it would certainly help your 
> application's performance.

You should be able to reduce that a fair pile.  Unless you're connecting to
multiple boxes via multiple DBI->connect statements, you should be able to
piggyback all requests to a given DB server down one connection.

SELECT fields FROM database.tablename ...  I use that a fair bit to avoid
the overhead of keeping an extra connection in memory.
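
For illustration, a rough sketch of what I mean (untested; the DSN, user,
password, database and table names are all made up, and it assumes a
MySQL-style server that understands database.table qualification):

  use strict;
  use DBI;

  # one handle, connected to the server rather than to a single database
  my $dbh = DBI->connect('dbi:mysql:database=input_db;host=dbhost',
                         'appuser', 'secret', { RaiseError => 1 });

  # queries against two different databases ride down the same connection
  my $in  = $dbh->selectall_arrayref('SELECT id, body FROM input_db.messages');
  my $out = $dbh->selectall_arrayref('SELECT id, body FROM output_db.messages');

  $dbh->disconnect;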

If it's a function of user IDs, you should work with your DBA to ensure that
one user account can do what's needed from the pages.

I really can't imagine that you _need_ to connect to four different data
sources in most pages.


> > This is something I would actually prefer to Apache::DBI. I don't know
> > if it's possible, but I'll try.  Such a thing would be very important,
> > especially on slow servers with little RAM, where Apache::DBI opens too
> > many connections at peak times and leaves the system in a bad condition
> > ('too many open filehandles').
> 
> I still think you'd be better off just limiting the total 
> number of servers with MaxClients.  Put a reverse proxy in 
> front and you'll offload all the work that doesn't require a 
> database handle.
> 

I agree with Perrin here.  On my systems, the mod_perl processes never have
a "peak time" where they start opening more connections, since I just
configure Apache to start 110 processes and never launch more than 110
processes.  The database always has 110 connections to it.
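
In httpd.conf terms that is just pinning the Apache 1.3 process directives,
something like this (only the 110 comes from my setup, the other values are
illustrative):

  StartServers        110
  MinSpareServers     110
  MaxSpareServers     110
  MaxClients          110
  MaxRequestsPerChild 10000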

Just about the only time you *can't* use Apache::DBI is if you're
deliberately re-connecting on page load.  Typically the only reason you'd do
this is if you were changing the authentication parameters of the connection
based on the request data.  In that case, Apache::DBI would prove no aid.

If you're getting 'too many open filehandles' it sounds like there's some
additional tuning to do at the OS level.  FreeBSD needs to be tuned to run
mod_perl properly; I suspect you may need to increase the Maxfiles and
Maxfilesperproc settings (or whatever the OS equivalent is) on your system.

As Perrin says, just limit the number of httpd/mod_perl processes and offload
any and all text/images that are not request-driven to a proxy system.

--A

Re: cgi-object not cacheable

Posted by Perrin Harkins <pe...@elem.com>.
> > If it was running under CGI, it would be compiling CGI.pm on each
> > request, which I've seen take .3 seconds.  Taking that long just to
> > create the new CGI instance seems unusual.  How did you time it?  Are
> > you using Apache::DProf?
>
> Wouldn't it be compiled at the use statement?

Yes, but when running under CGI (the protocol, not the module) that use
statement is executed every time.

> I timed it using a
> module-internal logging function which uses Time::HiRes.

That's usually pretty accurate, so I guess it really takes that long on your
system.  Try Apache::Request!  Or even one of the lighter CGI modules like
CGI_Lite.
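
For the record, a minimal (untested) sketch of the Apache::Request route in a
mod_perl 1 handler; the package name and the 'name' parameter are just
placeholders:

  package My::Handler;
  use strict;
  use Apache::Request ();
  use Apache::Constants qw(OK);

  sub handler {
      my $r = shift;
      # libapreq parses the request in C, which is much cheaper than
      # building a full CGI.pm object on every request
      my $apr  = Apache::Request->new($r);
      my $name = $apr->param('name');    # example parameter

      $r->send_http_header('text/plain');
      $r->print("hello ", defined $name ? $name : "anonymous", "\n");
      return OK;
  }
  1;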

> In my case it means up to 4 connections per process, because in fact it
> is not one module but two (input and output) and each needs to handle
> two different connections.

If you could reduce that, it would certainly help your application's
performance.

> I hope to share database handles via IPC. One has to make sure that only
> one process writes to a handle at a time!!

IPC::Shareable, and most of the other options, use Storable to serialize
data structures.  Storable can't serialize an open socket.  You *CAN* share
sockets, but you'd have to write some custom C code to do it.  You might
look at the Sybase thing that was posted here recently.  (I haven't looked
at it yet, but it sounded interesting.)

> if the maximum number is
> reached - return 0. The calling application can then decide to print
> an apology to the user ('we are so popular we can't serve you
> :)') or create and destroy a temporary handle to process the request.

Even with temporary handles, you have the possibility of all servers being
busy at once and thus using all 4 handles.

> This is something I would actually prefer to Apache::DBI. I don't know
> if it's possible, but I'll try.  Such a thing would be very important,
> especially on slow servers with little RAM, where Apache::DBI opens too
> many connections at peak times and leaves the system in a bad condition
> ('too many open filehandles').

I still think you'd be better off just limiting the total number of servers
with MaxClients.  Put a reverse proxy in front and you'll offload all the
work that doesn't require a database handle.
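
Something like this on a thin front-end Apache is all it takes (hostname and
port are placeholders, and it assumes mod_proxy is available):

  # mod_proxy on the lightweight front end; only dynamic URLs hit mod_perl
  ProxyPass        /perl/ http://backend.example.com:8080/perl/
  ProxyPassReverse /perl/ http://backend.example.com:8080/perl/
  # images and other static files are served directly by this front end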

> PS: in case anyone is interested: today I was very happy to be wearing a
> helmet when I crashed my bike ;)

I guess my mother was right about that.  Keep your helmet on!

Glad you're not dead,
Perrin


Re: cgi-object not cacheable

Posted by Peter Pilsl <pi...@goldfisch.at>.
On Wed, Nov 14, 2001 at 10:39:36AM -0500, Perrin Harkins wrote:
> > It's definitely running under mod_perl. But IMHO the time it takes to
> > create a new CGI object should not depend much on whether it's running
> > under mod_perl or not, because the CGI module is loaded beforehand. (In
> > fact I load it in httpd.conf using the PerlModule directive.)
> 
> If it was running under CGI, it would be compiling CGI.pm on each request,
> which I've seen take .3 seconds.  Taking that long just to create the new
> CGI instance seems unusual.  How did you time it?  Are you using
> Apache::DProf?
>

Wouldn't it be compiled at the use statement? I timed it using a
module-internal logging function which uses Time::HiRes.
 
> > This makes a lot of sense. Apache::DBI does not limit the number of
> > persistent connections. It just keeps all the connections open per
> > Apache process.
> 
> That should mean one connection per process if you're connecting with the
> same parameters every time.
> 

In my case it means up to 4 connections per process, because in fact it
is not one module but two (input and output) and each needs to handle two
different connections.

> >  if (exists($ptr->{global}->{dbhandles}->{_some_id_string}))
> 
> You know that this is only for one process, right?  If you limit this cache
> to 20 connections, you may get hundreds of connections.
> 

Yes, that's why I limit it to 1 or even 0.

> > I would prefer to handle this in a special pooling module
> > like Apache::DBI, but one where you can specify a maximum number of
> > open connections and a timeout per connection (the connection is
> > terminated after it has not been used for a specified amount of time).
> 
> You can just set a timeout in your database server.  If a connection times
> out and then needs to be used, the ping will fail and Apache::DBI will
> re-connect.

That's an interesting idea. I experienced crashes on pinging dead
connections under DBD::Pg, but this is worth checking.

> 
> > As soon
> > as I get IPC::Shareable to work I'll consider writing such a thingy.
> 
> You can't share database handles over IPC::Shareable, but you could share a
> global number tracking how many total database handles exist.  However, I
> think you'd be better off using Apache::DBI and limiting the number of
> Apache children to the number of connections your database can deal with.
> 

I hope to share database handles via IPC. One has to make sure that only
one process writes to a handle at a time!! (I hope I'm right
here.) This would offer the possibility of creating a pool of handles with
a limited maximum number and client-side timeouts. If a process requests a
handle and there is one cached in the pool, it will give this handle
back. Otherwise it will create a new handle or - if the maximum number is
reached - return 0. The calling application can then decide to print
an apology to the user ('we are so popular we can't serve you
:)') or create and destroy a temporary handle to process the request.
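
Roughly what I have in mind, as a per-process sketch (untested; a real
cross-process pool would need the IPC sharing above, and the names, the
limit and the connect details are only placeholders):

  use strict;
  use DBI;

  our %pool;                # cached handles, persistent across requests
  my  $max_handles = 1;     # arbitrary limit for illustration

  sub get_handle {
      my ($key, $dsn, $user, $pass) = @_;

      if (my $dbh = $pool{$key}) {
          return $dbh if $dbh->ping;            # reuse a live cached handle
          delete $pool{$key};                   # throw away a dead one
      }
      return 0 if keys(%pool) >= $max_handles;  # pool full: caller decides

      my $dbh = DBI->connect($dsn, $user, $pass, { AutoCommit => 0 })
          or return 0;
      return $pool{$key} = $dbh;
  }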

This is something I would actually prefer to Apache::DBI. I don't know
if it's possible, but I'll try.  Such a thing would be very important,
especially on slow servers with little RAM, where Apache::DBI opens too
many connections at peak times and leaves the system in a bad condition
('too many open filehandles').

peter

PS: in case anyone is interested: today I was very happy to be wearing a
helmet when I crashed my bike ;) At least I can write these lines after
my head touched the road. (Well, it hurts in the arms when typing
fast ;)


-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: pilsl@goldfisch.at
sms  : pilsl@max.mail.at

pgp-key available

Re: cgi-object not cacheable

Posted by Perrin Harkins <pe...@elem.com>.
> It's definitely running under mod_perl. But IMHO the time it takes to
> create a new CGI object should not depend much on whether it's running
> under mod_perl or not, because the CGI module is loaded beforehand. (In
> fact I load it in httpd.conf using the PerlModule directive.)

If it was running under CGI, it would be compiling CGI.pm on each request,
which I've seen take .3 seconds.  Taking that long just to create the new
CGI instance seems unusual.  How did you time it?  Are you using
Apache::DProf?

> This makes a lot of sense. Apache::DBI does not limit the number of
> persistent connections. It just keeps all the connections open per
> Apache process.

That should mean one connection per process if you're connecting with the
same parameters every time.

>  if (exists($ptr->{global}->{dbhandles}->{_some_id_string}))

You know that this is only for one process, right?  If you limit this cache
to 20 connections, you may get hundreds of connections.

> I would prefer to handle this in a special pooling module
> like Apache::DBI, but one where you can specify a maximum number of
> open connections and a timeout per connection (the connection is
> terminated after it has not been used for a specified amount of time).

You can just set a timeout in your database server.  If a connection times
out and then needs to be used, the ping will fail and Apache::DBI will
re-connect.
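
The per-request check amounts to something like this (sketch only; the DSN
and credentials are placeholders):

  use strict;
  use DBI;

  our $dbh;    # persists across requests in a mod_perl child

  # Reuse the cached handle only if it still answers a ping, otherwise
  # reconnect.  This is roughly the check Apache::DBI does for you.
  my ($dsn, $user, $pass) = ('dbi:Pg:dbname=appdb', 'appuser', 'secret');
  unless ($dbh && eval { $dbh->ping }) {
      $dbh = DBI->connect($dsn, $user, $pass,
                          { RaiseError => 1, AutoCommit => 0 });
  }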

> As soon
> as I get IPC::Shareable to work I'll consider writing such a thingy.

You can't share database handles over IPC::Shareable, but you could share a
global number tracking how many total database handles exist.  However, I
think you'd be better off using Apache::DBI and limiting the number of
Apache children to the number of connections your database can deal with.
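
If you do go the IPC::Shareable route for a counter, a rough (untested)
sketch; the glue key and the limit of 20 are arbitrary:

  use strict;
  use IPC::Shareable;

  # one plain scalar shared across all Apache children;
  # 'dbct' is an arbitrary glue key
  tie my $open_handles, 'IPC::Shareable', 'dbct', { create => 1 };

  my $limit    = 20;      # arbitrary limit for illustration
  my $got_slot = 0;

  (tied $open_handles)->shlock;
  $open_handles ||= 0;                 # first child to get here initialises it
  if ($open_handles < $limit) {
      $open_handles++;
      $got_slot = 1;
  }
  (tied $open_handles)->shunlock;

  # connect with DBI only if $got_slot is true, and decrement the counter
  # again (under shlock/shunlock) when the handle is finally closed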

- Perrin


apache restart messages question

Posted by John Michael <jo...@acadiacom.net>.
I am getting these error messages when I restart Apache on a new mod_perl
install.

Starting httpd: Subroutine export redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 35.
Subroutine name redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 46.
Subroutine Apache::Constants::__AUTOLOAD redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 14.
Subroutine Apache::Constants::SERVER_VERSION redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 14.
Subroutine Apache::Constants::SERVER_BUILT redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 14.
Subroutine Apache::Constants::DECLINE_CMD redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 14.
Subroutine Apache::Constants::DIR_MAGIC_TYPE redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 14.
Constant subroutine OK redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/mod_perl.pm line 65535.
Constant subroutine DECLINED redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Constant subroutine DONE redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Constant subroutine NOT_FOUND redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Constant subroutine FORBIDDEN redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Constant subroutine AUTH_REQUIRED redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Constant subroutine SERVER_ERROR redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 65535.
Subroutine AUTOLOAD redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 30.
Constant subroutine AUTH_REQUIRED redefined at /usr/lib/perl5/5.6.0/Carp.pm
line 4
Constant subroutine DONE redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Constant subroutine SERVER_ERROR redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Constant subroutine NOT_FOUND redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Constant subroutine FORBIDDEN redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Constant subroutine OK redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Constant subroutine DECLINED redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Constants.pm line 4
Subroutine handler redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Registry.pm line 27.
Subroutine compile redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Registry.pm line 174.
Subroutine parse_cmdline redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Registry.pm line 190.
Subroutine DESTROY redefined at
/usr/lib/perl5/site_perl/5.6.0/i386-linux/Apache/Registry.pm line 214.
[  OK  ]

Here is my perl conf file.
Alias /perl/ /home/www/dvdsex4u/perl/
PerlModule Apache::Registry
<Location /perl>
 SetHandler perl-script
 PerlHandler Apache::Registry
 Options ExecCGI
 allow from all
 PerlSendHeader On
</Location>

# This bit of code causes Apache to exit gracefully on any request for
# Microsoft/IIS-style files (Nimda fighter):
PerlModule Apache::Constants
<LocationMatch "\.(ida|exe|dll|asp)$">
SetHandler perl-script
PerlInitHandler Apache::Constants::DONE
</LocationMatch>

PerlRequire  /etc/httpd/conf/perl_conf/startup.pl
PerlFreshRestart On
PerlWarn On


I have this same setup on another server and it starts and works fine.
The server appears to have started.
Should I just disregard these messages?

thanks
John michael


----- Original Message -----
From: "Peter Pilsl" <pi...@goldfisch.at>
To: "Perrin Harkins" <pe...@elem.com>
Cc: <mo...@apache.org>
Sent: Wednesday, November 14, 2001 3:03 AM
Subject: Re: cgi-object not cacheable


> On Tue, Nov 13, 2001 at 09:18:04PM -0500, Perrin Harkins wrote:
> > > One run of my script takes about 2 seconds. This includes a lot of
> > > database queries, calculations and so on.  About 0.3 seconds are used
> > > just for one command: $query=new CGI;
> >
> > That's really awfully slow.  Are you positive it's running under
> > mod_perl?  Have you considered using Apache::Request instead of CGI.pm?
> >
>
> It's definitely running under mod_perl. But IMHO the time it takes to
> create a new CGI object should not depend much on whether it's running
> under mod_perl or not, because the CGI module is loaded beforehand. (In
> fact I load it in httpd.conf using the PerlModule directive.)
>
> > > This is not a problem with my persistent variables, because this works
> > > with many other objects like DB handles (I can't use Apache::DBI because
> > > it keeps too many handles open, so I need to cache and pool on my
> > > own), filehandles etc.
> >
> > Whoa, you can't use Apache::DBI but you can cache database handles
> > yourself?  That doesn't make any sense.  What are you doing in your
> > caching that's different from what Apache::DBI does?
>
> This makes a lot of sense. Apache::DBI does not limit the number of
> persistent connections. It just keeps all the connections open per
> Apache process. This will sum up to about 20 open
> database connections, each with one postgres client running 'idle in
> transaction', and my old small server system is going weak.  So I can't
> cache all connections, only a limited number, and so I cache on my
> own :)  Besides, it is done with a few lines of code, so it wasn't much
> work either:
>
>  if (exists($ptr->{global}->{dbhandles}->{_some_id_string}))
>  {
>     $dbh = $ptr->{global}->{dbhandles}->{_some_id_string};
>     $dbh or err($ptr,19);  # there must have been something wrong internally
>     if (not $dbh->ping) { $connect = 1; $r = 'reconnect' }  # we just reconnect ..
>     $dbh and $dbh->rollback;  # this issues a new begin-transaction and avoids several
>                               # problems with 'current_timestamp', which delivers the
>                               # time at the beginning of the transaction, even if that
>                               # was hours ago. see TROUBLEREPORT1
>     $r = "stored" if $r eq '-';
>   } else { $connect = 1; }
>   if ($connect)
>   {
>     $dbh = DBI->connect(connectinformation);
>     ....
>   }
>
> and on exit I just disconnect all handles, keeping a specified
> number.  I would prefer to handle this in a special pooling module
> like Apache::DBI, but one where you can specify a maximum number of
> open connections and a timeout per connection (the connection is
> terminated after it has not been used for a specified amount of time).  As soon
> as I get IPC::Shareable to work I'll consider writing such a thingy.
>
> best,
> peter
>
>
> --
> mag. peter pilsl
>
> phone: +43 676 3574035
> fax  : +43 676 3546512
> email: pilsl@goldfisch.at
> sms  : pilsl@max.mail.at
>
> pgp-key available


Re: cgi-object not cacheable

Posted by Peter Pilsl <pi...@goldfisch.at>.
On Tue, Nov 13, 2001 at 09:18:04PM -0500, Perrin Harkins wrote:
> > One run of my script takes about 2 seconds. This includes a lot of
> > database queries, calculations and so on.  About 0.3 seconds are used
> > just for one command: $query=new CGI;
> 
> That's really awfully slow.  Are you positive it's running under mod_perl?
> Have you considered using Apache::Request instead of CGI.pm?
> 

It's definitely running under mod_perl. But IMHO the time it takes to
create a new CGI object should not depend much on whether it's running
under mod_perl or not, because the CGI module is loaded beforehand. (In
fact I load it in httpd.conf using the PerlModule directive.)

> > This is not a problem with my persistent variables, because this works
> > with many other objects like DB handles (I can't use Apache::DBI because
> > it keeps too many handles open, so I need to cache and pool on my
> > own), filehandles etc.
> 
> Whoa, you can't use Apache::DBI but you can cache database handles yourself?
> That doesn't make any sense.  What are you doing in your caching that's
> different from what Apache::DBI does?

This makes a lot of sense. Apache::DBI does not limit the number of
persistent connections. It just keeps all the connections open per
Apache process. This will sum up to about 20 open
database connections, each with one postgres client running 'idle in
transaction', and my old small server system is going weak.  So I can't
cache all connections, only a limited number, and so I cache on my
own :)  Besides, it is done with a few lines of code, so it wasn't much
work either:

 if (exists($ptr->{global}->{dbhandles}->{_some_id_string}))
 {
    $dbh = $ptr->{global}->{dbhandles}->{_some_id_string};
    $dbh or err($ptr,19);  # there must have been something wrong internally
    if (not $dbh->ping) { $connect = 1; $r = 'reconnect' }  # we just reconnect ..
    $dbh and $dbh->rollback;  # this issues a new begin-transaction and avoids several
                              # problems with 'current_timestamp', which delivers the
                              # time at the beginning of the transaction, even if that
                              # was hours ago. see TROUBLEREPORT1
    $r = "stored" if $r eq '-';
  } else { $connect = 1; }
  if ($connect)
  {
    $dbh = DBI->connect(connectinformation);
    ....
  }

and on exit I just disconnect all handles, keeping a specified
number.  I would prefer to handle this in a special pooling module
like Apache::DBI, but one where you can specify a maximum number of
open connections and a timeout per connection (the connection is
terminated after it has not been used for a specified amount of time).  As soon
as I get IPC::Shareable to work I'll consider writing such a thingy.

best,
peter


-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: pilsl@goldfisch.at
sms  : pilsl@max.mail.at

pgp-key available

Re: cgi-object not cacheable

Posted by Perrin Harkins <pe...@elem.com>.
> One run of my script takes about 2 seconds. This includes a lot of
> database queries, calculations and so on.  About 0.3 seconds are used
> just for one command: $query=new CGI;

That's really awfully slow.  Are you positive it's running under mod_perl?
Have you considered using Apache::Request instead of CGI.pm?

> This is not a problem with my persistent variables, because this works
> with many other objects like DB handles (I can't use Apache::DBI because
> it keeps too many handles open, so I need to cache and pool on my
> own), filehandles etc.

Whoa, you can't use Apache::DBI but you can cache database handles yourself?
That doesn't make any sense.  What are you doing in your caching that's
different from what Apache::DBI does?

- Perrin


RE: cgi-object not cacheable

Posted by simran <si...@cse.unsw.edu.au>.
One of the reasons you should probably not have a persistent/global CGI
object is that upon a "new" the CGI module reads in numerous environment
variables and sets up its internal structures for that particular query.
If $q (in $q=new CGI) were persistent/global, you could end up with the
wrong internal data in $q for the current request.
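
In other words, the safe pattern is simply to build the object inside the
request handling code (quick sketch; the parameter name is just an example):

  use strict;
  use CGI ();

  # built inside the per-request code path, so it parses the environment
  # and input of *this* request
  my $q    = CGI->new;
  my $name = $q->param('name');

  # a $q built once at server startup and stashed in a global would keep
  # carrying the data of whatever request it was first built for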

-----Original Message-----
From: Peter Pilsl [mailto:pilsl@goldfisch.at]
Sent: Wednesday, 14 November 2001 9:19 AM
To: modperl@apache.org
Subject: cgi-object not cacheable


One run of my script takes about 2 seconds. This includes a lot of
database queries, calculations and so on.  About 0.3 seconds are used
just for one command: $query=new CGI;

I tried to cache the retrieved object between several requests by
storing it in a persistent variable to avoid this cost, but it is
not cacheable (meaning: operations on a cached CGI object
simply produce nothing).

This is not a problem with my persistent variables, because this works
with many other objects like DB handles (I can't use Apache::DBI because
it keeps too many handles open, so I need to cache and pool on my
own), filehandles etc.

Any ideas?

thnx,
peter



--
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: pilsl@goldfisch.at
sms  : pilsl@max.mail.at

pgp-key available