Posted to modperl@perl.apache.org by Trevor Phillips <ph...@central.murdoch.edu.au> on 2001/06/18 07:17:47 UTC

Advanced daemon allocation

Is there any way to control which daemon handles a certain request with apache
1.x?

eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
application to 10 specific daemons would improve the efficiency of data cached
in those processes.

If this is impossible in Apache 1.x, will it be possible in 2.x? I can really
see a more advanced model for allocation improving efficiency and performance.
Even if it isn't a hard-limit, but a preferential arrangement where, for
example, hits to a particular URL tend to go to the same daemon(s), this would
improve the efficiency of data cached within the daemon.

I suppose I could do this now by having a front-end proxy, and mini-Apache
configs for each "group" I want, but that seems to be going too far (at this
stage), especially if the functionality already exists to do this within the
one server.
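(For concreteness, each "group" would be little more than a stripped-down
httpd.conf capping the daemon count -- a rough sketch, with the port and
handler module names invented for the example:

    # httpd.group-a.conf -- back-end Apache for one application "group"
    Port 8001
    User nobody
    Group nobody
    # the "10 specific daemons" for this group
    MaxClients      10
    MinSpareServers  2
    MaxSpareServers  5

    # hypothetical mod_perl handler for this group
    PerlModule MyApp::GroupA
    <Location /group-a>
        SetHandler perl-script
        PerlHandler MyApp::GroupA
    </Location>

plus a front-end proxy mapping /group-a/ URLs to port 8001.)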

-- 
. Trevor Phillips             -           http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au : 
| IT Services                       -               Murdoch University | 
 >------------------- Member of the #SAS# & #CFC# --------------------<
| On nights such as this, evil deeds are done. And good deeds, of     /
| course. But mostly evil, on the whole.                             /
 \      -- (Terry Pratchett, Wyrd Sisters)                          /

Re: Advanced daemon allocation

Posted by "Keith G. Murphy" <ke...@mindspring.com>.
Matthew Byng-Maddick wrote:
> 
> On Mon, Jun 18, 2001 at 10:41:50AM -0500, Keith G. Murphy wrote:
> >Trevor Phillips wrote:
> >>
> >>Is there any way to control which daemon handles a certain request with apache
> >>1.x?
> >>
> >>eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> >>application to 10 specific daemons would improve the efficiency of data cached
> >>in those processes.
> >Making sure the browser supports HTTP 1.1 (persistent connections) will
> >get you a lot better performance in many cases, since a particular user
> >will tend to keep hitting the same daemon, so that helps if they're
> >hitting the same or a related script over and over.
> 
> This only works within the keepalive timeout. (default configuration 15s)

Yes, this can have negative implications, as Stas explained.  It's a
really good point, and one I wasn't fully aware of.  It worked well *in
my situation*.
> 
> >In one case, I was seeing really bad performance from an app, but it
> >seemed acceptable to the users, who were all running IE, where I was
> >running Netscape, which still doesn't support 1.1 in version 4
> >browsers.  :-(  Dunno about 6, Mozilla, etc.
> 
> This is only true if you're serving images off the mod_perl server which
> is crazy unless you're generating them.
> 
Well, it certainly also seemed to be true for rapid, *subsequent*
invocations of a script.  No images involved.

Re: Advanced daemon allocation

Posted by "Keith G. Murphy" <ke...@mindspring.com>.
Stas Bekman wrote:
> 
> On Tue, 19 Jun 2001, Keith G. Murphy wrote:
> 
> > Matthew Byng-Maddick wrote:
> > >
> > > On Mon, Jun 18, 2001 at 10:41:50AM -0500, Keith G. Murphy wrote:
> 
> > > This is only true if you're serving images off the mod_perl server which
> > > is crazy unless you're generating them.
> > >
> > No images involved, but I was seeing a performance improvement under
> > HTTP 1.1. What happened was that the user kept getting the same daemon
> > for each invocation of the Apache::CGI script, which seemed to be due to
> > HTTP 1.1's persistent connections.
> 
> Do you mind if I ask how many users were using the service?
> 
> Because if there were just a few, then it's true.
> 
Yes, it was very few indeed: one or two!

You and Matthew Byng-Maddick have made me realize that mine was probably
the *only* situation in which the KeepAlive technique would have been
very useful:

Rapid reinvocation of a script; extremely limited system memory; very
few users.

Almost sorry I brought it up, but it's been an informative discussion.

Re: Advanced daemon allocation

Posted by Stas Bekman <st...@stason.org>.
On Tue, 19 Jun 2001, Keith G. Murphy wrote:

> Matthew Byng-Maddick wrote:
> >
> > On Mon, Jun 18, 2001 at 10:41:50AM -0500, Keith G. Murphy wrote:

> > This is only true if you're serving images off the mod_perl server which
> > is crazy unless you're generating them.
> >
> No images involved, but I was seeing a performance improvement under
> HTTP 1.1. What happened was that the user kept getting the same daemon
> for each invocation of the Apache::CGI script, which seemed to be due to
> HTTP 1.1's persistent connections.

Do you mind if I ask how many users were using the service?

Because if there were just a few, then it's true.


_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: Advanced daemon allocation

Posted by "Keith G. Murphy" <ke...@mindspring.com>.
Matthew Byng-Maddick wrote:
> 
> On Mon, Jun 18, 2001 at 10:41:50AM -0500, Keith G. Murphy wrote:
> >Trevor Phillips wrote:
> >>
> >>Is there any way to control which daemon handles a certain request with apache
> >>1.x?
> >>
> >>eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> >>application to 10 specific daemons would improve the efficiency of data cached
> >>in those processes.
> >Making sure the browser supports HTTP 1.1 (persistent connections) will
> >get you a lot better performance in many cases, since a particular user
> >will tend to keep hitting the same daemon, so that helps if they're
> >hitting the same or a related script over and over.
> 
> This only works within the keepalive timeout. (default configuration 15s)

Yes, that is what I was using.
> 
> >In one case, I was seeing really bad performance from an app, but it
> >seemed acceptable to the users, who were all running IE, where I was
> >running Netscape, which still doesn't support 1.1 in version 4
> >browsers.  :-(  Dunno about 6, Mozilla, etc.
> 
> This is only true if you're serving images off the mod_perl server which
> is crazy unless you're generating them.
> 
No images involved, but I was seeing a performance improvement under
HTTP 1.1. What happened was that the user kept getting the same daemon
for each invocation of the Apache::CGI script, which seemed to be due to
HTTP 1.1's persistent connections.

Re: Advanced daemon allocation

Posted by Stas Bekman <st...@stason.org>.
> Although: Stas:
>   "Since keepalive connections will not incur the additional three-way TCP
>    handshake, turning it off will be kinder to the network."
> erm....???? Surely if you turn it *on* you'll be kinder to the network,
> because you're not reinitiating the handshake?

[it] refers to [handshake]. I've rephrased this sentence to make it more
clear :) thanks!

_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: Advanced daemon allocation

Posted by Matthew Byng-Maddick <mo...@lists.colondot.net>.
On Mon, Jun 18, 2001 at 10:41:50AM -0500, Keith G. Murphy wrote:
>Trevor Phillips wrote:
>> 
>>Is there any way to control which daemon handles a certain request with apache
>>1.x?
>> 
>>eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
>>application to 10 specific daemons would improve the efficiency of data cached
>>in those processes.
>Making sure the browser supports HTTP 1.1 (persistent connections) will
>get you a lot better performance in many cases, since a particular user
>will tend to keep hitting the same daemon, so that helps if they're
>hitting the same or a related script over and over.

This only works within the keepalive timeout. (default configuration 15s)
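The directives in question, with (I think) the stock Apache 1.3 defaults:

    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15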

>In one case, I was seeing really bad performance from an app, but it
>seemed acceptable to the users, who were all running IE, where I was
>running Netscape, which still doesn't support 1.1 in version 4
>browsers.  :-(  Dunno about 6, Mozilla, etc.

This is only true if you're serving images off the mod_perl server which
is crazy unless you're generating them.

Anyway: the point of this post was:
  http://perl.apache.org/guide/performance.html#KeepAlive

Sorry.

Although: Stas:
  "Since keepalive connections will not incur the additional three-way TCP
   handshake, turning it off will be kinder to the network."
erm....???? Surely if you turn it *on* you'll be kinder to the network,
because you're not reinitiating the handshake?

MBM

-- 
Matthew Byng-Maddick         <mb...@colondot.net>           http://colondot.net/

Re: Advanced daemon allocation

Posted by "Keith G. Murphy" <ke...@mindspring.com>.
Stas Bekman wrote:
> 
> On Mon, 18 Jun 2001, Keith G. Murphy wrote:
> 
> > Trevor Phillips wrote:
> > >
> > > Is there any way to control which daemon handles a certain request with apache
> > > 1.x?
> > >
> > > eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> > > application to 10 specific daemons would improve the efficiency of data cached
> > > in those processes.
> > >
> > Making sure the browser supports HTTP 1.1 (persistent connections) will
> > get you a lot better performance in many cases, since a particular user
> > will tend to keep hitting the same daemon, so that helps if they're
> > hitting the same or a related script over and over.
> 
> I beg your pardon Keith, but probably in most cases this is a very bad
> suggestion. 

No offense taken.  Notice I said "many" cases, not "most" cases.  ;-) 
But your comment is appreciated:  I certainly wouldn't want to mislead
anyone.  It would be a very bad idea in a busy internet situation.

> By leaving the KeepAlive's on (I guess that's what you refer
> to by persistent connections) 

I say "persistent connections" because in my case, I saw a difference
coming into play between Netscape and IE browsers - and it was because
IE supported HTTP 1.1, thus persistent connections.  Yes, KeepAlive had
to be on to see the effect.

> you tie a server to a user, which makes your
> service very unscalable. Given that you can afford X server processes
> running, once X users have their persistent connections open, your
> service becomes closed to any other users.
>
Of course.  For me it was really very much a poor man's alternative to
setting aside a certain number of daemons for mod_perl.  I really didn't
have enough system resources to do that!
> 
> Your solution is good, though, if you know that you will have at most X
> users over a long time span, which is usually the case on intranet
> servers in small companies.

Which was exactly my situation.  :-)

Re: Advanced daemon allocation

Posted by Stas Bekman <st...@stason.org>.
On Mon, 18 Jun 2001, Keith G. Murphy wrote:

> Trevor Phillips wrote:
> >
> > Is there any way to control which daemon handles a certain request with apache
> > 1.x?
> >
> > eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> > application to 10 specific daemons would improve the efficiency of data cached
> > in those processes.
> >
> Making sure the browser supports HTTP 1.1 (persistent connections) will
> get you a lot better performance in many cases, since a particular user
> will tend to keep hitting the same daemon, so that helps if they're
> hitting the same or a related script over and over.

I beg your pardon Keith, but probably in most cases this is a very bad
suggestion. By leaving KeepAlive on (I guess that's what you refer to by
persistent connections) you tie a server to a user, which makes your
service very unscalable. Given that you can afford X server processes
running, once X users have their persistent connections open, your
service becomes closed to any other users.

Using KeepAlive is good mainly for static requests.

Hold on... here is the story:
http://thingy.kcilink.com/modperlguide/performance/KeepAlive.html

Your solution is good, though, if you know that you will have at most X
users over a long time span, which is usually the case on intranet
servers in small companies.
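If you do split static and dynamic servers, the usual arrangement is roughly
this (a sketch, Apache 1.3 directive names):

    # front-end / static-content server: keepalives are cheap here
    KeepAlive On
    KeepAliveTimeout 15

    # mod_perl back-end: don't let one client hold a heavy process idle
    KeepAlive Off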

_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: Advanced daemon allocation

Posted by "Keith G. Murphy" <ke...@mindspring.com>.
Trevor Phillips wrote:
> 
> Is there any way to control which daemon handles a certain request with apache
> 1.x?
> 
> eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> application to 10 specific daemons would improve the efficiency of data cached
> in those processes.
> 
Making sure the browser supports HTTP 1.1 (persistent connections) will
get you a lot better performance in many cases, since a particular user
will tend to keep hitting the same daemon, so that helps if they're
hitting the same or a related script over and over.

In one case, I was seeing really bad performance from an app, but it
seemed acceptable to the users, who were all running IE, where I was
running Netscape, which still doesn't support 1.1 in version 4
browsers.  :-(  Dunno about 6, Mozilla, etc.

Noticed you were running Netscape on Linux; what are your users
running?  ;-)

Re: Advanced daemon allocation

Posted by Stas Bekman <st...@stason.org>.
On Mon, 18 Jun 2001, Trevor Phillips wrote:

> Is there any way to control which daemon handles a certain request with apache
> 1.x?

http://perl.apache.org/guide/strategy.html#Running_More_than_One_mod_perl_S

> eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
> application to 10 specific daemons would improve the efficiency of data cached
> in those processes.
>
> If this is impossible in Apache 1.x, will it be possible in 2.x? I can really
> see a more advanced model for allocation improving efficiency and performance.
> Even if it isn't a hard-limit, but a preferential arrangement where, for
> example, hits to a particular URL tend to go to the same daemon(s), this would
> improve the efficiency of data cached within the daemon.
>
> I suppose I could do this now by having a front-end proxy, and mini-Apache
> configs for each "group" I want, but that seems to be going too far (at this
> stage), especially if the functionality already exists to do this within the
> one server.
>
> --
> . Trevor Phillips             -           http://jurai.murdoch.edu.au/ .
> : CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au :
> | IT Services                       -               Murdoch University |
>  >------------------- Member of the #SAS# & #CFC# --------------------<
> | On nights such as this, evil deeds are done. And good deeds, of     /
> | course. But mostly evil, on the whole.                             /
>  \      -- (Terry Pratchett, Wyrd Sisters)                          /
>



_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:stas@stason.org   http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: Advanced daemon allocation

Posted by Gunther Birznieks <gu...@extropia.com>.
At 02:29 PM 6/18/2001 +0800, Trevor Phillips wrote:
>Gunther Birznieks wrote:

> > >I suppose I could do this now by having a front-end proxy, and mini-Apache
> > >configs for each "group" I want, but that seems to be going too far 
> (at this
> > >stage), especially if the functionality already exists to do this 
> within the
> > >one server.
>
>To me, this isn't very ideal. Even sharing most of an apache configuration
>file, what is the overhead of running a separate server? And can multiple

I think this is covered in the guide.

>Apache servers share writing to the same log files?

Why would you need to? The front end can write the log file. Then don't 
bother logging the mod_perl servers. Or make them all log to syslog or some 
other shared logging mechanism.
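For example, something along these lines on the back-end servers (a sketch;
syslog ErrorLogs and piped CustomLogs are both standard Apache 1.3 features,
but check your build):

    # send errors to syslog rather than a private file
    ErrorLog syslog:local7
    # and pipe access logs through a shared logger process
    CustomLog "|/usr/bin/logger -t httpd-backend" common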

>It also doesn't help if I have dozens of possible groupings - running 
>dozens of
>slightly different Apache's doesn't seem a clean solution. Hence me asking if
>it was possible within the one Apache server to prioritise the allocation to
>specific daemons, based on some criteria, which would be a more efficient and
>dynamic solution, if it's possible.

It's not ideal, but it's also not possible to do what you say until 
mod_perl 2.0.

You might also consider using Speedy::CGI if you aren't using handlers, as
it makes the multiple-configs issue much simpler to administer, but you
still get a pretty good speed-up.
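(If memory serves, with SpeedyCGI the switch is usually just the shebang line
of an otherwise ordinary CGI script -- treat this as a sketch:

    #!/usr/bin/speedy
    # a normal CGI script; the speedy front-end keeps the perl
    # interpreter (and anything it has loaded) alive between hits
    use strict;
    print "Content-type: text/plain\n\n";
    print "served by persistent interpreter, pid $$\n";

)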



Re: Advanced daemon allocation

Posted by Trevor Phillips <ph...@central.murdoch.edu.au>.
Matthew Byng-Maddick wrote:
> 
> This is (in my mind) currently the most broken bit of modperl, because of
> the hacks you have to do to make it work. With a proper API for content
> filtering (apache2), it will be fantastically clean, but at the moment... :-(

The hacks are getting neater, but yes, proper content filtering support will be
wonderful. ^_^

> The fastcgi can run in a different apache again, potentially, it doesn't
> matter (unless I'm misunderstanding something you wrote)

I'm not sure you understand how FastCGI works.
Apache has "mod_fcgi" (or was it mod_fastcgi?), which is a lightweight
dispatcher - it interfaces to FastCGI applications. The FastCGI applications
are separate processes, running as daemons. The FastCGI daemons can be managed
statically (e.g. "run 5 instances of this app") or dynamically (the number of
daemons is increased/decreased automatically based on load).

So, if I have an Apache server with a hefty Perl app that takes up 10MB of RAM,
then an Apache server with 50 daemons would take 500MB. Having that app as a
FastCGI and limiting it to 5 daemons would mean only 50MB of RAM is required.
Only 5 of the 50 Apache daemons could access the application at a time, but
the other daemons can do other things, like dish up static content, access
other FastCGIs, etc...
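With mod_fastcgi the static case looks roughly like this (a sketch from
memory -- the path is made up, and the options should be checked against the
mod_fastcgi docs):

    # run exactly 5 copies of the heavy app as external FastCGI daemons
    FastCgiServer /usr/local/apache/fcgi/heavy-app.fcgi -processes 5
    # hand .fcgi requests to mod_fastcgi
    AddHandler fastcgi-script .fcgi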

What's more, you can host FastCGI apps on different hosts, and there's a common
protocol between webserver and CGI, so you can write the CGIs in any language,
and have them work with any webserver with FastCGI support.

IMHO, FastCGIs are a better way of doing applications, but don't have the
versatility mod_perl has of digging into Apache internals. Don't get me wrong,
most of what I do is in mod_perl, but part of that is because it's harder to
layer content from multiple FastCGIs.

-- 
. Trevor Phillips             -           http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au : 
| IT Services                       -               Murdoch University | 
 >------------------- Member of the #SAS# & #CFC# --------------------<
| On nights such as this, evil deeds are done. And good deeds, of     /
| course. But mostly evil, on the whole.                             /
 \      -- (Terry Pratchett, Wyrd Sisters)                          /

Re: Advanced daemon allocation

Posted by Trevor Phillips <ph...@central.murdoch.edu.au>.
Matthew Byng-Maddick wrote:
> 
[useful description snipped]
> Obviously, if your modperl is URL dependent, then you can't determine what
> URL they are going to ask for at the time you have to call accept. The only
> alternative way of doing what you're asking for is to use file descriptor
> passing, which is still about *the* topmost unportable bit of UNIX. :-(
> It is also quite complicated to get right.

Aah! Ok.

> It isn't, because otherwise there'd be even more context-switching, (which is
> slow). The clean solution, in this case, would be to have the one apache that
> actually accepts, does a bit of work on the URL, and then delegates to
> children (probably by passing the fd), but then you still have to do rather
> too much work on the URL before you can do anything about it.

Is this how Apache 2 works, then?

> It isn't as unclean as you might think, though.
> 
> Hope this helps

No, but it explains things a bit better. ^_^
Thanks!

I suppose another way to do it is to go the way of the "application server",
where a light apache daemon then talks to a separate, dedicated, server for the
application. I do use FastCGI for some applications, which runs an app as a
separate process (and can support multiple processes, even on remote machines),
but I like mod_perl's ability to layer multiple content handlers.

I suppose there isn't a mod_perl implementation of FastCGI, is there? (To allow
mixing FastCGI application processing with other mod_perl content handlers) (Or
layer multiple FastCGIs?).

I haven't seen much support for FastCGI as I'd expect. Is there something
similar that's better that everyone's using and not telling me about? ^_^

-- 
. Trevor Phillips             -           http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au : 
| IT Services                       -               Murdoch University | 
 >------------------- Member of the #SAS# & #CFC# --------------------<
| On nights such as this, evil deeds are done. And good deeds, of     /
| course. But mostly evil, on the whole.                             /
 \      -- (Terry Pratchett, Wyrd Sisters)                          /

Re: Advanced daemon allocation

Posted by Matthew Byng-Maddick <mo...@lists.colondot.net>.
On Mon, Jun 18, 2001 at 02:29:18PM +0800, Trevor Phillips wrote:
>Gunther Birznieks wrote:
[>>Trevor wrote:]
>>Yeah, just use the mod_proxy model and then proxy to different mod_perl
>>backend servers based on the URL itself.
>Isn't this pretty much what I said is *a* solution?

Yes, and the only one.

>>>I suppose I could do this now by having a front-end proxy, and mini-Apache
>>>configs for each "group" I want, but that seems to be going too far (at this
>>>stage), especially if the functionality already exists to do this within the
>>>one server.
> To me, this isn't very ideal. Even sharing most of an apache configuration
> file, what is the overhead of running a separate server? And can multiple
> Apache servers share writing to the same log files?

No. The multiple-process model that Apache uses works because of the way
that sockets work:

Parent process runs as root, calls:
 socket() (create the socket)
 bind()   (bind to our local sockaddr_in structure - ip/port)
 listen() (set the socket to listen mode)

Now, normally, it would then call accept() and sit there blocking while it
waits for a connection to be made. Instead, what it does is rather more
cunning.

It fork()s (several times) to create the children, which immediately setuid()
to drop their root privs. However, the bit that needs the root privs is the
bind() call above, and because of the way that fork() works, the children
inherit the listening socket from the parent.

These *children* then call accept(). And they all block.

When a connection comes in on that socket, whichever child is currently in the
scheduler's queue will return from the accept() system call and handle the
request. It is, however, up to the kernel which child's accept() returns.
accept() returns a *new* file descriptor, which is the one for the *stream*
(as opposed to the listening socket).
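The same dance in miniature, in plain Perl rather than Apache's C (just a
sketch of the inherit-then-accept trick, not production code):

    #!/usr/bin/perl
    # minimal pre-forking server: the parent binds, the children accept()
    use strict;
    use Socket;

    socket(my $listener, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
        or die "socket: $!";
    setsockopt($listener, SOL_SOCKET, SO_REUSEADDR, 1);
    bind($listener, sockaddr_in(8080, INADDR_ANY)) or die "bind: $!";
    listen($listener, SOMAXCONN)                   or die "listen: $!";

    for (1 .. 5) {              # pre-fork five children
        next if fork();         # parent just keeps forking
        # child: a real server would setuid() away from root here;
        # then it blocks in accept() -- the kernel picks which child
        # wakes up for any given connection
        while (1) {
            accept(my $client, $listener) or next;
            print {$client} "HTTP/1.0 200 OK\r\n\r\n",
                            "handled by child $$\r\n";
            close $client;
        }
    }
    1 while wait() != -1;       # parent reaps children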

Obviously, if your modperl is URL dependent, then you can't determine what
URL they are going to ask for at the time you have to call accept. The only
alternative way of doing what you're asking for is to use file descriptor
passing, which is still about *the* topmost unportable bit of UNIX. :-(
It is also quite complicated to get right.

>It also doesn't help if I have dozens of possible groupings - running dozens of
>slightly different Apache's doesn't seem a clean solution. Hence me asking if
>it was possible within the one Apache server to prioritise the allocation to
>specific daemons, based on some criteria, which would be a more efficient and
>dynamic solution, if it's possible.

It isn't, because otherwise there'd be even more context-switching, (which is
slow). The clean solution, in this case, would be to have the one apache that
actually accepts, does a bit of work on the URL, and then delegates to
children (probably by passing the fd), but then you still have to do rather
too much work on the URL before you can do anything about it.

It isn't as unclean as you might think, though.

Hope this helps

MBM

-- 
Matthew Byng-Maddick         <mb...@colondot.net>           http://colondot.net/

Re: Advanced daemon allocation

Posted by Trevor Phillips <ph...@central.murdoch.edu.au>.
Gunther Birznieks wrote:
> 
> Yeah, just use the mod_proxy model and then proxy to different mod_perl
> backend servers based on the URL itself.

Isn't this pretty much what I said is *a* solution?

> >I suppose I could do this now by having a front-end proxy, and mini-Apache
> >configs for each "group" I want, but that seems to be going too far (at this
> >stage), especially if the functionality already exists to do this within the
> >one server.

To me, this isn't ideal. Even sharing most of an Apache configuration
file, what is the overhead of running a separate server? And can multiple
Apache servers share writing to the same log files?

It also doesn't help if I have dozens of possible groupings - running dozens of
slightly different Apaches doesn't seem a clean solution. Hence my asking if
it was possible within the one Apache server to prioritise the allocation to
specific daemons, based on some criteria, which would be a more efficient and
dynamic solution, if it's possible.

-- 
. Trevor Phillips             -           http://jurai.murdoch.edu.au/ . 
: CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au : 
| IT Services                       -               Murdoch University | 
 >------------------- Member of the #SAS# & #CFC# --------------------<
| On nights such as this, evil deeds are done. And good deeds, of     /
| course. But mostly evil, on the whole.                             /
 \      -- (Terry Pratchett, Wyrd Sisters)                          /

Re: Advanced daemon allocation

Posted by Gunther Birznieks <gu...@extropia.com>.
Yeah, just use the mod_proxy model and then proxy to different mod_perl 
backend servers based on the URL itself.
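i.e. on the front-end server something like this (a sketch; ports and URL
prefixes invented for the example):

    # front-end (no mod_perl): route each URL group to its own back-end
    ProxyPass        /app-a/  http://localhost:8001/app-a/
    ProxyPassReverse /app-a/  http://localhost:8001/app-a/
    ProxyPass        /app-b/  http://localhost:8002/app-b/
    ProxyPassReverse /app-b/  http://localhost:8002/app-b/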

At 01:17 PM 6/18/2001 +0800, Trevor Phillips wrote:
>Is there any way to control which daemon handles a certain request with apache
>1.x?
>
>eg; Out of a pool of 50 daemons, restricting accesses to a certain mod_perl
>application to 10 specific daemons would improve the efficiency of data cached
>in those processes.
>
>If this is impossible in Apache 1.x, will it be possible in 2.x? I can really
>see a more advanced model for allocation improving efficiency and performance.
>Even if it isn't a hard-limit, but a preferential arrangement where, for
>example, hits to a particular URL tend to go to the same daemon(s), this would
>improve the efficiency of data cached within the daemon.
>
>I suppose I could do this now by having a front-end proxy, and mini-Apache
>configs for each "group" I want, but that seems to be going too far (at this
>stage), especially if the functionality already exists to do this within the
>one server.
>
>--
>. Trevor Phillips             -           http://jurai.murdoch.edu.au/ .
>: CWIS Systems Administrator     -           T.Phillips@murdoch.edu.au :
>| IT Services                       -               Murdoch University |
>  >------------------- Member of the #SAS# & #CFC# --------------------<
>| On nights such as this, evil deeds are done. And good deeds, of     /
>| course. But mostly evil, on the whole.                             /
>  \      -- (Terry Pratchett, Wyrd Sisters)                          /

__________________________________________________
Gunther Birznieks (gunther.birznieks@eXtropia.com)
eXtropia - The Open Web Technology Company
http://www.eXtropia.com/