Posted to dev@cocoon.apache.org by Giacomo Pati <gi...@apache.org> on 2000/11/25 00:28:12 UTC

Re: [RT] Cocoon in cacheland

The mass of replies to this topic seems to suggest that nobody needs a
cache for Cocoon 2.

Ok, here are some measurements I made with Cocoon 1 & 2 some months ago
(prior to the Xalan2 integration).

The scenario was a separate machine on a 100 Mbit network hit by another
machine running ab (the ApacheBench tool). The URL chosen for the test
delivers a static XML document without any transformation, only
generation (DOM or SAX respectively) and serialisation. The comparison
is as follows:

The core of C2 is about 2.5 times faster than the core of C1 with the
cache disabled. With the cache enabled in C1, it's about 3 times faster
than C2.

If you all want to use C2 in a highly dynamic environment where caching
is unnecessary, we don't need it. We can make C2 proxy friendly and will
reach C1 performance that way. But it would be nice if someone can take
this and contribute a cache system for C2.

Giacomo


Stefano Mazzocchi wrote:
> 
> If Java was fast, web sites served only a few pages a day,
> transformations were cheap and managers less money-hungry, Cocoon would
> have a cache and I would stop here.
> 
> [If you think all of the above are true for you, exit here: you don't
> need a cache!]
> 
> Too bad all of the above are wrong for 99.9% of the server side
> population, so here I am talking about how to make such an awfully
> complex yet elegantly designed beast into something that you can use and
> show your managers with pride without asking for an enterprise 4500 or a
> Cray.
> 
> So, let's start with NOTE #1: caching is all about performance, nothing
> else.
> 
> Caching doesn't add elegance, doesn't improve separation of concerns,
> doesn't help you when developing your site (sometimes it even gets in
> the way!), but when everything is set, without a cache you are dead.
> 
> So, let's see how caching should happen:
> 
> 1) the fastest request to handle is the request that others get :)
> 
> This is the idea behind proxies: let your friend the proxy handle
> everything it can. There are some problems for this:
> 
>  a) many proxies do not implement HTTP/1.1 correctly
>  b) proxies work using fixed ergodic periods (expiration times)
> 
> Cocoon1 didn't provide an elegant way for producers to output response
> headers, C2 will provide cache specific hooks for generators and
> transformers so that you can be as proxy friendly as possible.
> 
> Also, the Cocoon2 sitemap will allow you to avoid providing different
> views of the same resource based on user agent, if proxies do not
> implement content negotiation correctly.
> 
> 2) if unfortunately we get the request, the fastest way to process it
> would be to output something that we already have prepared, either
> preprocessed or cached out of a previous request.
> 
> Let us suppose we have something like this
> 
>  [client] ===>  [server] ===> [client]
> 
> where ===> is an ordered octet stream, the best way to cache this is to
> apply something like this
> 
>            +---------------------+
>            |        cache        |
>            |   +-------------+   |
>  [client] =|   |=> [server] =|   |=> [client]
>            |   +-------------+   |
>            |                     |
>            +---------------------+
> 
> where the cache "simulates" the server by placing a "fake" content into
> the response to the client.
> 
> PROBLEM #1: how does the cache connect the cached content to the
> incoming request?
> 
> Normally, this is done through URI matching, but C2 sitemaps allow all
> types of matching and the above cache doesn't have a way to talk to the
> server to discover these things.
> 
> How is this solved? well, for sure, the cache must connect to the server
> to find out.
> 
> The "server" is what receives the request and generates the response
> based on request parameters and environment state. This means that
> 
>  response = server(request, state)
> 
> where server() is a function defined in the "server" component. If we
> define the tuple (request,state) as
> 
>  context := (request,state)
> 
> we have that
> 
>  response = server.service(context)
> 
> in order to optimize performance (since memory reads are faster than
> normal response generation in almost all cases), we want to store the
> response in a hashtable associated with the context, so the cache lookup
> function should do
> 
>  response = cache.lookup(context)
> 
> but in order to understand if the cached resource is still valid, the
> cache must contact the server using another function (normally faster)
> that just has to identify the ergodic validity of the resource. In order
> to do this, the server must be aware of all the information used for
> resource creation, so
> 
>  valid = server.hasChanged(context)
> 
> Another problem is the creation of a valid hashcode for the context;
> since the cache doesn't know the caching logic, the server must provide
> this as well, so
> 
>  hashcode = server.hash(context)
> 
> So the algorithm is the following:
> 
>  request comes
>  if the server implements cacheable
>     call hasChanged(context)
>     if resource has changed
>        generate response
>     else
>        call server.hash(context)
>        call cache.lookup(hashcode)
>  else
>     generate response
> 
> where:
> 
>  generate response
>    call server.service(context)
>    call server.hash(context)
>    cache the response with the given hashcode
> 
> This algorithm extends C1's but works only on serialized resources; in
> fact, it deals with finished responses.
> 
> Now we have to dive deeper into how the server is structured and see
> where caching should take place.
> 
>                               -------------- o ------------
> 
> Ok, now we have a more complex picture
> 
>  [client] ===>  [g --> t --> s] ===> [client]
> 
> where
> 
>   g := generator
>   t := transformer
>   s := serializer
> 
> and
> 
>   ---> is a SAX event stream
>   ===> is an ordered octet stream
> 
> where also each generator or transformer might reference other
> subpipelines
> 
>  [client] ===>  [g --> t --> s] ===> [client]
>                  |     |
>                  t     t
>                  |     |
>                  g     g
> 
> [this is mostly done using XInclude or internal redirection]
> 
> The difference here is the nature of the things to be cached: SAX events
> rather than octet streams... but if we apply SAX compilation and turn
> SAX events into octet streams, we can cache those even in the middle of
> the pipeline... for example
> 
>  [client] ===>  [g -(*)-> t -{*}-> s] ===> [client]
>                  |        |
>                 (*)      (*)
>                  |        |
>                  t        t
>                  |        |
>                  g        g
> 
> which might show a situation where an XSP page generates some content
> on its own and aggregates some content from a subpipeline, also creating
> dynamic XInclude code that the XInclude transformer aggregates from
> another internal resource.
> 
> Content aggregation should take place at generation level when the
> structure is fixed (stylebook layout, for example), while it should take
> place at transformation level when the structure is dynamic (the jetspeed
> case, for example, where you select the page layout dynamically).
> 
> Having a SAX event cache that is completely transparent eases
> implementation (you are not aware of the fact that the SAX events down
> the road are "real" or "cached") and creates huge performance
> improvements, especially in cases where content is rarely changed but
> takes very long to generate (examples such as content syndication or
> database extraction).
> 
> NOTE: since the serializers should have infinite ergodicity (not change
> depending on state, but only on what comes in from the pipeline), the
> curly cache {*} is useless and can be omitted if the wrapping cache is
> present.
> 
> So, the big picture is something like this
> 
>            +---------------------+
>            |        cache        |
>            |   +-------------+   |
>  [client] =|   |=> [server] =|   |=> [client]
>            |   +-------------+   |
>            |                     |
>            +---------------------+
> 
> where
> 
>  [server] :=    [g -(*)-> t --> s]
>                  |        |
>                 (*)      (*)
>                  |        |
>                  t        t
>                  |        |
>                  g        g
> 
> Ok, enough for starting off a discussion on this.
> 
> Comments welcome.
> 
> --
> Stefano Mazzocchi      One must still have chaos in oneself to be
>                           able to give birth to a dancing star.
> <st...@apache.org>                             Friedrich Nietzsche
> --------------------------------------------------------------------

Re: [RT] Cocoon in cacheland

Posted by MJ Ray <ma...@luminas.co.uk>.
Paul Russell <pa...@luminas.co.uk> writes:

> [Apologies in advance for the terrible ASCII art]

Your apology for ASCII art is noted and will be held against you.  ;-)

[...]
> We should ensure that Cocoon2 uses 'path-info' type requests
> wherever it is semantically justifiable. For example, a news
                 ^^^^^^^^^^^^^^^^^^^^^^^^
Here we have ammunition for an entire bunfight in just two words.  For
example, is it semantically justifiable to have search URLs of the
form:
  http://www.mynewsservice.com/search/and/science/wombles
?

On the one hand, you could argue yes, as the meaning is clear and you
want it to be cacheable.  Even if the result changes quickly, you
could set the caching parameters to make it quite short-lived and
assume caches will obey your instructions.

On the other, it's going to be a pig to generate that sort of URL in
the current browsers and in practice I would hope that "query" URLs
are cached at least as well (if not better) by modern caches (although
I know my own one screws it up sufficiently often for me not to cache
queries in it).

Personally, the day I see a URL like that above, I go hunting the
technical director with a vaxstation to drop on his toe ;-)

[...]
>  4) How should we store the cache? It's potentially 'rather
>     big', but it's crucial we have it fast. I'd be tempted
>     to use a two layer cache - first layer in ram, and second
>     layer on backing store. When something is used, it's loaded
>     from disk, and when ram gets full, we stick it back on disk.
[snip!]

Just a thought and I'm quite likely miles off-target here, but do you
know whether your storage is in ram or on disk?  Or are we just
talking about some cache parameters to play with to limit the space
taken up by the first level cache?

-- 
MJR
Luminas Ltd, Internet Applications Development
</delurk> <!-- who wants valid XML? -->

Re: [RT] Cocoon in cacheland

Posted by Sylvain Wallez <sy...@anyware-tech.com>.

Paul Russell a écrit :
> 
> [Apologies in advance for the terrible ASCII art]
> 
> On Mon, Nov 27, 2000 at 12:46:47PM +0100, Sylvain Wallez wrote:
> > Giacomo Pati a écrit :
> > > The mass of replies to this topic seems to suggest that nobody needs
> > > a cache for Cocoon 2.
> > Don't be sarcastic: caching is a *must have* if Cocoon wants to be
> > able to compare in terms of speed with other technologies (JSP, PHP and
> > others). The lack of response on this subject is probably caused by the
> > current stage of C2 : features are still being defined. As optimization
> > is not directly visible to the Cocoon user, it's not the main concern
> > today... but it will be as soon as C2 will be used in production
> > environments.
> 
> Indeed. I'm currently working on optimising certain aspects of
> C2, since I'm getting close to the point where I need to use it
> live. In the sights immediately are component pooling and the
> XSP code generation. The next stage for me is caching.
> 
> I think *now* is the time to start a discussion on it however,
> because before anyone (be it me or anyone else) starts implementing
> the caching architecture, we need to have worked out some of
> the technical details.
>
<large-snip/> 
>
> Questions to ponder:
> 
>  1) Does *any* of that make sense?

Sure :)

>  2) Does it cover all the eventualities you can think of?

We should also consider using the HTTP HEAD method. For now, Cocoon
always generates content. But if we have a clean separation between
validity checking and data production, we can easily handle HEAD.

Random thought, but thinking back to the old days of the IDEF0
specification method, I was wondering if a sitemap component could be
considered to have an input, an output, but also controllers which
represent the working environment of the component. For example, an
XalanTransformer has a controller which is the XSL file. This one, like
many others, produces the same output from a given input if the
controllers haven't changed. Since controllers change asynchronously from
requests, we can imagine having all cached outputs of these components
flushed automatically when the controllers change (file monitor,
database trigger, etc). This would reduce cache memory consumption.
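
In Java, this controller idea might be sketched as follows (a hypothetical
ControllerCache; all names are assumptions, not C2 API): cached outputs
register the controller they depend on, and a file monitor or database
trigger flushes every dependent entry at once:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a cache whose entries are grouped by the
// "controller" (e.g. an XSL file) they were produced with, so a
// change notification can flush all dependent outputs in one call.
public class ControllerCache {
    private final Map byController = new HashMap(); // controller -> List of keys
    private final Map entries = new HashMap();      // key -> cached output

    public void put(Object key, Object output, Object controller) {
        entries.put(key, output);
        List keys = (List) byController.get(controller);
        if (keys == null) {
            keys = new ArrayList();
            byController.put(controller, keys);
        }
        keys.add(key);
    }

    public Object get(Object key) {
        return entries.get(key);
    }

    // Called by a file monitor / database trigger when a controller changes:
    // drop every cached output produced with that controller.
    public void controllerChanged(Object controller) {
        List keys = (List) byController.remove(controller);
        if (keys == null) return;
        for (Iterator i = keys.iterator(); i.hasNext(); ) {
            entries.remove(i.next());
        }
    }
}
```
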

>  3) How are we going to handle sub-pipelines (Giacomo:
>     is there anything in the sitemap architecture for this
>     yet? We need it for content aggregation, too :/)

Couldn't we add a "cache point" instruction in the sitemap? This would
leave the responsibility of caching to the web site architect, but would
avoid an "automagic" cache that may be inappropriate and/or difficult to
debug. This manual approach would also allow the site architect to
fine-tune the site by heavily caching frequently used or
long-to-generate parts while disabling the cache on fast or less-used
parts of the site to save memory/storage. Combine that with a sitemap
component profiler and you have, IMO, an efficient solution.

>  4) How should we store the cache? It's potentially 'rather
>     big', but it's crucial we have it fast. I'd be tempted
>     to use a two layer cache - first layer in ram, and second
>     layer on backing store. When something is used, it's loaded
>     from disk, and when ram gets full, we stick it back on disk.
>     Anyone a wizz with finalizers? I guess we could use the
>     finalizer to persist the object back to stable store (is that
>     allowed?) and use WeakReferences to keep track of them while
>     they're in RAM...
> 

There was a post recently on the Avalon list about a chained Store that
would back a memory store with files or a DB. For the memory cache, we
should use SoftReferences so that the GC allows these objects to live
until it needs the memory.
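
As a sketch of that SoftReference approach (a hypothetical class, not
Avalon's Store): entries are held through SoftReferences, so the GC may
reclaim them under memory pressure, unlike WeakReferences, which are
cleared much more eagerly:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal memory cache holding values through SoftReferences: the GC
// keeps softly reachable objects alive until it actually needs memory.
public class SoftCache {
    private final Map store = new HashMap();

    public void put(Object key, Object value) {
        store.put(key, new SoftReference(value));
    }

    public Object get(Object key) {
        SoftReference ref = (SoftReference) store.get(key);
        if (ref == null) return null;
        Object value = ref.get();          // null once the GC reclaimed it
        if (value == null) {
            store.remove(key);             // drop the dead entry
        }
        return value;
    }
}
```
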

For the SAX event cache, we should take as input one of Stefano's last
gifts before his departure: the XML bytecode he intended to use for disk
storage, which could also be used for memory storage (maybe with some
optimizations regarding memory consumption/allocation in that case).

-- 
Sylvain Wallez
Anyware Technologies

C2: FOP problems

Posted by Matthew Langham <ml...@sundn.de>.
Hi,

After generating several (around 8) PDF documents with the current fop
version we get an "out of memory" error on the server. We are trying to work
out whether this is an fop problem or a problem caused by the
Apache/Cocoon/fop combo.

Has anyone encountered this before?

Matthew

--
Open Source Group               sunShine - Lighting up e:Business
=================================================================
Matthew Langham, S&N AG, Klingenderstrasse 5, D-33100 Paderborn
Tel: +49-5251-1581-30   [mlangham@sundn.de - http://www.sundn.de]
=================================================================



Re: [RT] Cocoon in cacheland

Posted by Paul Russell <pa...@luminas.co.uk>.
[Apologies in advance for the terrible ASCII art]

On Mon, Nov 27, 2000 at 12:46:47PM +0100, Sylvain Wallez wrote:
> Giacomo Pati a écrit :
> > The mass of replies to this topic seems to suggest that nobody needs
> > a cache for Cocoon 2.
> Don't be sarcastic: caching is a *must have* if Cocoon wants to be
> able to compare in terms of speed with other technologies (JSP, PHP and
> others). The lack of response on this subject is probably caused by the
> current stage of C2 : features are still being defined. As optimization
> is not directly visible to the Cocoon user, it's not the main concern
> today... but it will be as soon as C2 will be used in production
> environments.

Indeed. I'm currently working on optimising certain aspects of
C2, since I'm getting close to the point where I need to use it
live. In the sights immediately are component pooling and the
XSP code generation. The next stage for me is caching.

I think *now* is the time to start a discussion on it however,
because before anyone (be it me or anyone else) starts implementing
the caching architecture, we need to have worked out some of
the technical details.

As Stefano said in his original RT on this topic, there are
two sides to the C2 cache architecture.

 * HTTP 1.1 compliant cache headers.
 * Internal caching of both byte & SAX streams.

Both of these affect the performance of the engine in different
ways, so I'm going to look at them individually.

HTTP 1.1 cache headers
======================

Cocoon2 should support HTTP/1.1 cache headers for a dead simple
reason: The best way to improve the overall performance of a
site is not to increase the speed of individual request
processing, but to reduce the number of requests that reach the
server at all.

In some (but by no means all) cocoon2 sites, a large proportion
of the streams served by the engine will not change often.
If this is the case, we should do our best to offload these
requests to intervening proxy servers between the client and
our server.

For those who aren't familiar with the HTTP/1.1 caching model,
there is a (fairly shallow, admittedly) hierarchy of caches at
most ISPs and bandwidth providers. Many modern dialup accounts
use transparent proxying to ensure that *all* requests from
inside their bounds enter the caching hierarchy.

Servers can control the operation of caches by using the various
methods defined in the HTTP/1.1 specification:

 * Expiration
   Servers can specify a time beyond which the cached information
   becomes 'stale'. That is, beyond which the cache should
   revalidate the information with the server.
   
   Servers can also specify a Last-modified header containing
   the date and time the requested object was last modified.
   This value can be used by caching proxy servers en route to
   heuristically determine an expiry time where no explicit
   expiry time is provided. This is not encouraged, however,
   as it may lead to caches making incorrect assumptions about
   the validity of content.

 * Validation
   When a cache detects that an entry in its cache is stale,
   that is, it has passed its assigned expiration time, it must
   revalidate the entry by sending a conditional GET request
   to the origin server supplying 'validators' (usually a
   Last-Modified or ETag header).
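
As a small illustration of the expiration side (a sketch; HttpDates is a
hypothetical helper, not Cocoon code), a generator that knows its output
is valid for five minutes could emit an Expires header in the RFC 1123
date form that HTTP/1.1 requires:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Hypothetical helper: formats a timestamp in the RFC 1123 form that
// HTTP/1.1 mandates for Expires and Last-Modified header values.
public class HttpDates {
    public static String toHttpDate(long millis) {
        SimpleDateFormat fmt = new SimpleDateFormat(
            "EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(new Date(millis));
    }

    public static void main(String[] args) {
        // A resource valid for five minutes from now:
        long now = System.currentTimeMillis();
        System.out.println("Expires: " + toHttpDate(now + 5 * 60 * 1000L));
    }
}
```
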

We should ensure that Cocoon2 uses 'path-info' type requests
wherever it is semantically justifiable. For example, a news
server should reference the stories by URL, e.g.:

  http://www.mynewsservice.com/news/science/2000/11/27/1

rather than

  http://www.mynewsservice.com/news?section=science&date=20001127&index=1

Using the former URL syntax allows caching proxy servers on the
request route to cache individual stories, whereas the latter will
cause most caches to simply pass the request straight to the origin
server (repeat after me: This Is Bad).

Let's leave external caching for a bit, now that we understand how it
works (hopefully!), and look at the internal side.

Internal caching
================

There are two types of caching which can go on inside the
Cocoon engine. One is the caching of result byte streams (that
is, the results of the serialization process of a request),
and one is the caching of SAX streams inside the pipelines.

SAX Caching
-----------

At present, the cocoon engine blindly executes the entire
pipeline for every request:


 +---+
 | C |
 | l |              +------------------------------------+
 | i | <== http ==> |           Cocoon Servlet           |
 | e |              +------------------------------------+
 | n |                  |        | Serialization  |
 | t |                  |        +----------------+
 +---+                  |               /|\
                        |                |
                        |               sax 
                        |                |
                 fire --|        +----------------+
                        |        | Transformation |
                        |        +----------------+
                        |               /|\
                        |                |
                        |               sax 
                        |                | 
                        |        +----------------+
                        \------->|   Generation   |
                                 +----------------+

For different sections of the pipeline, different parts of the
Environment of the request are important for determining (a)
the content of a request, and (b) the validity of any existing
cached content. For example, for a FileGenerator, the only thing
of any importance for determining the content and validity of
any existing cached events is the URI of the source file on
disk. Anything else is academic (the request URI, the time, the
date, the colour of the author's goldfish...) and makes no difference
to the returned content.

Similarly, the only thing that makes a difference to an
XalanTransformer (other than the input to its stage of the
pipeline) is the source URI for its template.

So, if we ask each component of the pipeline to create an object
(a RequestKey) representing what's important to *them* about the
request, we can use the set of all of these objects before any
point in the pipeline to represent a particular result at that
level in the pipeline. For example, in a pipeline with one
generator and a transformer, a RequestKey from the generator alone
is enough to uniquely identify the result from that generation
stage. The RequestKeys from both the generator and the transformer
are required to uniquely identify the result of the transformation
stage.
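
A minimal sketch of such a RequestKey (only the class name comes from the
text; the fields and the composite hashing are assumptions): value
equality and a stable hashCode are what let the set of keys up to a stage
act as the cache key for that stage's result:

```java
import java.util.Arrays;

// Sketch: each pipeline component contributes the one part of the
// request that matters to it (e.g. a FileGenerator contributes its
// source URI). The composite of keys for stages 1..N identifies the
// result at stage N.
public final class RequestKey {
    private final String component;    // e.g. "FileGenerator"
    private final String discriminant; // e.g. the source file URI

    public RequestKey(String component, String discriminant) {
        this.component = component;
        this.discriminant = discriminant;
    }

    public boolean equals(Object o) {
        if (!(o instanceof RequestKey)) return false;
        RequestKey k = (RequestKey) o;
        return component.equals(k.component)
            && discriminant.equals(k.discriminant);
    }

    public int hashCode() {
        return component.hashCode() * 31 + discriminant.hashCode();
    }

    // Cache key for stage N: a hash over the keys of stages 1..N.
    public static int pipelineHash(RequestKey[] stages) {
        return Arrays.asList(stages).hashCode();
    }
}
```
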

Once we have RequestKey objects from all our cache aware pipeline
components, we can implement a simple multilayer cache:

 +-----------+   +-----------+  +-------------+
 | Generated |   | Generator |  | Transformer |  
 |  Sitemap  |   |           |  |             |
 +-----------+   +-----------+  +-------------+
       |               |               |
 req   |               |               |
 ---> +-+  get reqkey  |               |
      | |------------>+-+              |
      | |             | |              |
      | |    reqkey   | |              |
      | |<------------+-+              |
      | |              |   get reqkey  |
      | |---------------------------->+-+
      | |              |              | |
      | |              |     reqkey   | |
      | |<----------------------------+-+
      | |              |               |
      | | gen content  |               |
      | |------------>+-+  Sax events  |
      | |             | |------------>+-+
      | |             +-+             +-+
      | |              |               |
      | |              |               |
 <--- +-+              |               |
       |               |               |
       X               X               X

If the cache has the content for a subset of the
RequestKeys already, we skip the relevant steps
(piping the SAX events from the cache into the first
uncached step).

This leaves two problems:

 1) How do we get the SAX events into the cache?
 2) How do we check that a cached result is valid?

I would suggest we solve the first by using a
SAX multicaster:

public class SAXMulticaster implements XMLConsumer {
    private final List targets = new ArrayList();

    public SAXMulticaster() {}

    public void addTarget(XMLConsumer target) {
        targets.add(target);
    }

    /* SAX event handlers simply multicast each event to all
     * registered targets; startElement is shown as an example,
     * the remaining ContentHandler events are forwarded likewise.
     */
    public void startElement(String uri, String local, String raw,
                             Attributes atts) throws SAXException {
        for (Iterator i = targets.iterator(); i.hasNext(); ) {
            ((XMLConsumer) i.next()).startElement(uri, local, raw, atts);
        }
    }
}

and then:

public class XMLCacheEntry implements XMLConsumer, XMLProducer {
    // XMLConsumer methods: record each incoming SAX event.
    // XMLProducer methods: expose the recorded stream.

    /** Replay the cached event stream to the
     *  registered consumer.
     */
    public void serialize() {
        // Fire the recorded events at the consumer.
    }
}

We can then cache a stream by doing:

XMLCacheEntry cacheEntry = new XMLCacheEntry();
SAXMulticaster multicaster = new SAXMulticaster();
multicaster.addTarget(cacheEntry);
multicaster.addTarget(nextPipelineComponent);
previousPipelineComponent.setConsumer(multicaster);
generator.generate();
cache.cache(requestKey, cacheEntry);

To solve the second problem, we can take a leaf out of
HTTP/1.1's book. Each cache entry could have a 'validator'
object which acts as a set of credentials for validating an
entry in the cache. When a cache realises it has an entry
corresponding to a certain RequestKey, it should call the
yet-to-be-defined validate method on the pipeline component
with the Validator as an argument. The pipeline component
then does whatever is necessary to check that the cached
results are still valid. In the case of the FileGenerator,
the Validator would contain the modification date of the
file, and the FileGenerator would check that the file has
not been modified since. If any of the validators fail,
then everything after that point is regenerated and
recached.
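
That validation contract might look like this in plain Java (the
interface and class names are assumptions, not C2 code):

```java
import java.io.File;

// Hypothetical contract: the cache stores one of these with each entry;
// the component that produced the entry decides whether it still holds.
interface Validator {
    boolean isValid();
}

// What a FileGenerator might hand to the cache: the file and its
// modification time at generation time. The cached events stay valid
// as long as the file has not been modified since.
class FileValidator implements Validator {
    private final File file;
    private final long lastModified;

    FileValidator(File file) {
        this.file = file;
        this.lastModified = file.lastModified();
    }

    public boolean isValid() {
        return file.lastModified() == lastModified;
    }
}
```
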

Byte stream caching would work in a very similar way, but
obviously with bytes rather than sax events.

To round the whole thing off, we could use a hash of all
the Validators as an ETag for use with the HTTP/1.1 caching
architecture talked about at the top of this document, meaning
that when HTTP cache entries expire, we can use the ETag
presented in the conditional GET request to potentially avoid
regenerating the response if the cached entry is still valid.
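
A sketch of that ETag derivation (hypothetical helper; the mixing
function is arbitrary): combine the hash codes of all validators
guarding a response, so the tag stays stable while none of the
underlying resources change:

```java
// Hypothetical helper: derive an entity tag from the hash codes of the
// validators guarding a cached response. A stable ETag lets a
// conditional GET be answered with 304 Not Modified.
public class ETags {
    public static String fromValidatorHashes(int[] hashes) {
        int h = 17;
        for (int i = 0; i < hashes.length; i++) {
            h = h * 31 + hashes[i];   // simple order-sensitive mix
        }
        return "\"" + Integer.toHexString(h) + "\"";
    }
}
```
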

I'm going to quit while I can still type now, but I'd really
appreciate your thoughts on all of this - it's a big and
potentially very important issue for Cocoon2. We need to get
this bit right.

Questions to ponder:

 1) Does *any* of that make sense?
 2) Does it cover all the eventualities you can think of?
 3) How are we going to handle sub-pipelines (Giacomo:
    is there anything in the sitemap architecture for this
    yet? We need it for content aggregation, too :/)
 4) How should we store the cache? It's potentially 'rather
    big', but it's crucial we have it fast. I'd be tempted
    to use a two layer cache - first layer in ram, and second
    layer on backing store. When something is used, it's loaded
    from disk, and when ram gets full, we stick it back on disk.
    Anyone a wizz with finalizers? I guess we could use the
    finalizer to persist the object back to stable store (is that
    allowed?) and use WeakReferences to keep track of them while
    they're in RAM...

Over to you guys!



P.
-- 
Paul Russell                               <pa...@luminas.co.uk>
Technical Director,                   http://www.luminas.co.uk
Luminas Ltd.

Re: [RT] Cocoon in cacheland

Posted by Andy Lewis <an...@veritas.com>.
I'm basically a lurker now, but have used Cocoon 1.3 through 1.8 and
anxiously await Cocoon 2. For what it is worth, caching is a MUST. In
fact, I was rather fond of the discussion that originated from Stefano
that proposed a cache at virtually every step of the pipeline. This
allows only the changed elements of a complex output to be rebuilt.

Peter Verhage wrote:

> Sylvain Wallez wrote:
> 
>> Don't be sarcastic: caching is a *must have* if Cocoon wants to be
>> able to compare in terms of speed with other technologies (JSP, PHP and
>> others). The lack of response on this subject is probably caused by the
>> current stage of C2 : features are still being defined. As optimization
>> is not directly visible to the Cocoon user, it's not the main concern
>> today... but it will be as soon as C2 will be used in production
>> environments.
> 
> I agree. Cache is a must-have, because without caching Cocoon ain't that
> fast...
> 
> Peter
> 




Re: [RT] Cocoon in cacheland

Posted by Peter Verhage <pe...@ibuildings.nl>.
Sylvain Wallez wrote:
> Don't be sarcastic: caching is a *must have* if Cocoon wants to be
> able to compare in terms of speed with other technologies (JSP, PHP and
> others). The lack of response on this subject is probably caused by the
> current stage of C2 : features are still being defined. As optimization
> is not directly visible to the Cocoon user, it's not the main concern
> today... but it will be as soon as C2 will be used in production
> environments.

I agree. Cache is a must-have, because without caching Cocoon ain't that
fast...

Peter

-- 
Peter Verhage       <pe...@ibuildings.nl>
ibuildings.nl BV - information technology
http://www.ibuildings.nl -  0118 41 50 54

Re: [RT] Cocoon in cacheland

Posted by Sylvain Wallez <sy...@anyware-tech.com>.
Giacomo Pati a écrit :
> 
> The mass of replies to this topic seems to suggest that nobody needs a
> cache for Cocoon 2.
> 

Don't be sarcastic: caching is a *must have* if Cocoon wants to be
able to compare in terms of speed with other technologies (JSP, PHP and
others). The lack of response on this subject is probably caused by the
current stage of C2 : features are still being defined. As optimization
is not directly visible to the Cocoon user, it's not the main concern
today... but it will be as soon as C2 will be used in production
environments.

--
Sylvain Wallez
Anyware Technologies

Re: [RT] Cocoon in cacheland

Posted by Paul Russell <pa...@luminas.co.uk>.
On Sat, Nov 25, 2000 at 12:28:12AM +0100, Giacomo Pati wrote:
> If you all want to use C2 in a highly dynamic environment where caching
> is unnecessary, we don't need it. We can make C2 proxy friendly and will
> reach C1 performance that way. But it would be nice if someone can take
> this and contribute a cache system for C2.

I don't think there is any doubt that a caching framework is a
must for Cocoon2. As always, the problem comes with finding time
to work on it. Got 25hr/day job syndrome going on here. Is anyone
else using Cocoon2 enough to make it worth their while? I'm happy
to have a look, but it's likely to be weeks before I get a chance,
sadly...


P.

-- 
Paul Russell                               <pa...@luminas.co.uk>
Technical Director,                   http://www.luminas.co.uk
Luminas Ltd.