Posted to modperl@perl.apache.org by Bill Moseley <mo...@hank.org> on 2000/08/04 21:21:03 UTC

Template caches

I briefly asked about this in a previous post, but I wanted to follow up.

I'm curious but not very experienced, so any comments would be welcome...

I have a home-built template cache system that is limited by size.  I was
logging cache misses (each of which forces a template to be reloaded from
disk) and was surprised at how often they were happening. So...

I like the idea of converting templates to subroutines and then caching
those on disk.  I really like the idea of then bringing those into the
server, compiling them, and keeping a cache of the compiled subroutines
in memory.  Having the templates in the cache seems like a big win, so one
would want to keep as many templates in the cache as possible.

This in-memory cache would have to be limited and controlled to some degree,
I would expect, especially since the compiled subroutines cannot be shared
across mod_perl server children to save memory.
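
To make it concrete, here's roughly the kind of per-child cache of compiled
template subroutines I have in mind.  It's only a sketch: the [% name %]
placeholder syntax, compile_template(), and %TEMPLATE_CACHE are made up for
illustration, not taken from any existing template system.

    package My::TemplateCache;
    use strict;

    my %TEMPLATE_CACHE;    # lives for the life of the mod_perl child

    sub fetch {
        my ($file) = @_;
        my $mtime = (stat $file)[9];
        my $entry = $TEMPLATE_CACHE{$file};

        # Recompile if we've never seen this file, or if it changed on disk.
        if (!$entry or $entry->{mtime} != $mtime) {
            $entry = $TEMPLATE_CACHE{$file} = {
                mtime => $mtime,
                code  => compile_template($file),
            };
        }
        return $entry->{code};
    }

    sub compile_template {
        my ($file) = @_;
        open my $fh, '<', $file or die "can't open $file: $!";
        my $text = do { local $/; <$fh> };

        # Turn the template into a sub that concatenates literal chunks
        # and [% name %] placeholder lookups (toy syntax only).
        my @parts;
        for my $chunk (split /(\[\%\s*\w+\s*\%\])/, $text) {
            if ($chunk =~ /^\[\%\s*(\w+)\s*\%\]$/) {
                push @parts, "\$args->{'$1'}";
            } elsif (length $chunk) {
                $chunk =~ s/(['\\])/\\$1/g;    # escape for single quotes
                push @parts, "'$chunk'";
            }
        }
        my $body = @parts ? join(' . ', @parts) : "''";
        my $code = eval "sub { my \$args = shift; return $body }";
        die $@ if $@;
        return $code;
    }

    1;

A handler would just call My::TemplateCache::fetch($file)->(\%args) to get
the page, and after the first request for a template the child only touches
the disk for the stat().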

Now, let's say we have a limited amount of memory, a very large number of
templates, and templates that are very large and mostly plain old text.  In
other words, the compiled templates are basically big print statements with
only a small part being variable interpolation.  Since the templates are
large, I can imagine a situation where the compiled template cache is
thrashed, or at least the templates don't live in the cache very long.

As someone who used to program on machines with 8K, all that plain old text
in non-shared memory bugs me a bit.

Maybe this is already being done, but I was wondering whether it would make
sense to separate templates into text and code segments, so that more
templates (the template code, that is) could be kept in the running
server's cache at any given time.

It would be slower to generate a page, of course, but that might be offset
by a higher cache hit rate and less reloading and recompiling of templates.
And the text segments could be shared by all server processes using
IPC::Shareable -- or maybe it would be fast enough to let the OS filesystem
cache buffer the commonly used templates, or to load the text segments from
a database.
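
Using the same toy [% name %] syntax as the sketch above, the split might
look something like this -- again purely hypothetical (segment_template(),
render(), and the .seg files are all made up): the in-memory structure keeps
only placeholder names and pointers to segment files, while the bulky
literal text is written out once and re-read at render time, where the OS
buffer cache (shared by every child) keeps the popular segments warm.

    package My::SplitTemplate;
    use strict;
    use File::Basename qw(basename);

    my $SEG_DIR = '/var/cache/templates/segments';   # hypothetical location

    # Split a template into a small op list: [ var => $name ] for
    # placeholders, [ text => $segment_file ] for literal runs.
    sub segment_template {
        my ($file) = @_;
        open my $fh, '<', $file or die "can't open $file: $!";
        my $text = do { local $/; <$fh> };

        my (@ops, $n);
        for my $chunk (split /(\[\%\s*\w+\s*\%\])/, $text) {
            if ($chunk =~ /^\[\%\s*(\w+)\s*\%\]$/) {
                push @ops, [ var => $1 ];
            } elsif (length $chunk) {
                my $seg = sprintf '%s/%s.%d.seg',
                                  $SEG_DIR, basename($file), ++$n;
                open my $out, '>', $seg or die "can't write $seg: $!";
                print $out $chunk;
                close $out;
                push @ops, [ text => $seg ];
            }
        }
        return \@ops;   # this small structure is all that stays in memory
    }

    # Reassemble the page, pulling literal runs back off the disk.
    sub render {
        my ($ops, $args) = @_;
        my $page = '';
        for my $op (@$ops) {
            my ($kind, $val) = @$op;
            if ($kind eq 'var') {
                $page .= $args->{$val};
            } else {
                open my $fh, '<', $val or die "can't open $val: $!";
                local $/;
                $page .= <$fh>;   # the OS buffer cache makes repeats cheap
            }
        }
        return $page;
    }

    1;

The point is that what each child caches per template is a few small
hashrefs and arrayrefs rather than the whole body of text.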

So my question is simply: would such a system make sense, or is memory so
inexpensive and templates normally so small that there wouldn't be any
benefit?



Bill Moseley
mailto:moseley@hank.org

Re: Template caches

Posted by Matt Sergeant <ma...@sergeant.org>.
On Fri, 4 Aug 2000, Bill Moseley wrote:

> Now, let's say we have a limited amount of memory, a very large number of
> templates, and templates that are very large and mostly plain old text.  In
> other words, the compiled templates are basically big print statements with
> only a small part being variable interpolation.  Since the templates are
> large, I can imagine a situation where the compiled template cache is
> thrashed, or at least the templates don't live in the cache very long.

There are two situations in which I can realistically see this being a problem:

1. You haven't bought enough memory for your server considering your data
set and other issues.

2. You are an ISP running one of these template systems, with multiple
virtual hosts.

I actually don't have any solutions to this, and it's exactly the reason I
want to bring Mason and AxKit together, because AxKit sure does use a lot
of memory.

The only real benefit is where you've got static files in the cache. You
can effectively stop the server (clearing the in-memory cache), restart it,
and still deliver from the on-disk cache. Provided nothing changes, the
templates will never be loaded again.

-- 
<Matt/>

Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org


Re: Template caches

Posted by Perrin Harkins <pe...@primenet.com>.
On Fri, 4 Aug 2000, Bill Moseley wrote:
> Now, let's say we have a limited amount of memory, a very large number of
> templates, and templates that are very large and mostly plain old text.  In
> other words, the compiled templates are basically big print statements with
> only a small part being variable interpolation.  Since the templates are
> large, I can imagine a situation where the compiled template cache is
> thrashed, or at least the templates don't live in the cache very long.

Mason tries to address this by checking the amount of text in a template
and turning large plain-text sections into subroutine calls that read the
text from disk.  I think this is a good solution that other systems would
do well to emulate.  (Of course, you can adjust what counts as "a large
amount of text" according to your available free RAM.)  Mason also has a
very nice size-limiting scheme for the cache, using a
least-frequently-used + aging algorithm as its removal policy.
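
For anyone curious what such a policy looks like, here's a rough sketch --
this is not Mason's actual code, just an illustration of a size-limited
cache whose removal policy combines use counts with aging:

    package My::LFUCache;
    use strict;

    sub new {
        my ($class, %opt) = @_;
        return bless {
            max_entries => $opt{max_entries} || 100,
            decay       => $opt{decay}       || 0.5,   # aging factor
            entries     => {},
        }, $class;
    }

    sub get {
        my ($self, $key) = @_;
        my $e = $self->{entries}{$key} or return undef;
        $e->{count}++;
        return $e->{value};
    }

    sub set {
        my ($self, $key, $value) = @_;
        $self->_evict if keys %{ $self->{entries} } >= $self->{max_entries};
        $self->{entries}{$key} = { value => $value, count => 1 };
    }

    sub _evict {
        my ($self) = @_;
        my $entries = $self->{entries};

        # Age everything, then drop the least-frequently-used entry.
        $_->{count} *= $self->{decay} for values %$entries;
        my ($victim) = sort { $entries->{$a}{count} <=> $entries->{$b}{count} }
                       keys %$entries;
        delete $entries->{$victim} if defined $victim;
    }

    1;

The aging step means an entry that was hot last week but is idle now
eventually loses out to the entries being used today.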

> And the text segments could be shared by all server processes using
> IPC::Shareable -- or maybe it would be fast enough to let the OS filesystem
> cache buffer the commonly used templates, or to load the text segments from
> a database.

Just let the filesystem do it.  It's much simpler and it lets the OS
handle things that the OS is good at.

- Perrin