Posted to user@velocity.apache.org by Bradley Wagner <br...@hannonhill.com> on 2012/07/18 23:42:00 UTC

Macro caching and other caching

Hi,

We recently made some changes to our software to use just a single
VelocityEngine as per recommendations on this group.

We ran into an issue where macros were suddenly being shared across
template renders because we had not specified
velocimacro.permissions.allow.inline.local.scope = true. However, we had
also never turned on caching in our props file with
class.resource.loader.cache = true.

Does this mean that macros are cached separately from whatever is being
cached by class.resource.loader.cache? Is there any way to control that
caching, or is setting velocimacro.permissions.allow.inline.local.scope =
true the only option?

One side effect of our recent changes is that the app seems to have an
increased memory footprint. We're not *sure* it can be attributed to
Velocity, but I was trying to see what kinds of things Velocity could be
hanging on to and how much memory they might be taking up.

Thanks!

Re: Macro caching and other caching

Posted by Nathan Bubna <nb...@gmail.com>.
I dunno.  I keep casting about for possible explanations, but my
current rustiness with the Velocity codebase (I haven't hacked on it
much in a couple of years) is hampering me.  At this point, all I can
say is that I'm not sure where the cached nodes are from, but I suspect
either your few ClasspathResourceLoader templates (probably unlikely
to hit 5 million tokens unless they're huge) or evaluated macros (though
I'm not sure why they're being cached).

In any case, the proper fix would be resolving
https://issues.apache.org/jira/browse/VELOCITY-797 (the way it should
have been done in the first place) but my work schedule makes it
unlikely i'll get to it anytime soon.  Sigh.

On Tue, Jul 31, 2012 at 8:37 AM, Bradley Wagner
<br...@hannonhill.com> wrote:
> Whoops, was misreading the API. It's actually that tempTemplateName
> variable.
>
>
> On Tue, Jul 31, 2012 at 11:21 AM, Bradley Wagner
> <br...@hannonhill.com> wrote:
>>
>> A StringWriter:
>>
>> String template = ... the string containing the dynamic template to
>> generate ...
>> // Get a template as stream.
>> StringWriter writer = new StringWriter();
>> StringReader reader = new StringReader(template);
>> // create a temporary template name
>> String tempTemplateName = "velocityTransform-" +
>> System.currentTimeMillis();
>>
>> // ask Velocity to evaluate it.
>> VelocityEngine engine = getEngine();
>> boolean success = engine.evaluate(context, writer, tempTemplateName,
>> reader);
>>
>> if (!success)
>> {
>>     LOG.debug("Velocity could not evaluate template with content: \n" +
>> template);
>>     return null;
>> }
>> LOG.debug("Velocity successfully evaluated template with content: \n" +
>> template);
>> String strResult = writer.getBuffer().toString();
>> return strResult;
>>
>> On Tue, Jul 31, 2012 at 11:10 AM, Nathan Bubna <nb...@gmail.com> wrote:
>>>
>>> What do you use for logTag (template name) when you are using evaluate()?
>>>
>>> On Tue, Jul 31, 2012 at 8:01 AM, Bradley Wagner
>>> <br...@hannonhill.com> wrote:
>>> > Doing both. In the other case we're using a classpath resource loader
>>> > to
>>> > evaluate templates like this:
>>> >
>>> > VelocityContext context = ... a context that we're building each time ...
>>> > VelocityEngine engine = ... our single engine ...
>>> > Template template = engine.getTemplate(templatePath);
>>> > StringWriter writer = new StringWriter();
>>> > template.merge(context, writer);
>>> >
>>> > However, we only have 7 of those static templates in our whole system.
>>> >
>>> > On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nb...@gmail.com>
>>> > wrote:
>>> >>
>>> >> And you're sure you're only using VelocityEngine.evaluate?  Not
>>> >> loading templates through the resource loader?  Or are you doing both?
>>> >>
>>> >> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
>>> >> <br...@hannonhill.com> wrote:
>>> >> > Nathan,
>>> >> >
>>> >> > Tokens are referenced by
>>> >> > org.apache.velocity.runtime.parser.node.ASTReference which seem to
>>> >> > be
>>> >> > referenced by arrays of
>>> >> > org.apache.velocity.runtime.parser.node.Nodes.
>>> >> > Most
>>> >> > of the classes referencing these things are AST classes in the
>>> >> > org.apache.velocity.runtime.parser.node package.
>>> >> >
>>> >> > Here's our properties file:
>>> >> >
>>> >> > runtime.log.logsystem.class = org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
>>> >> >
>>> >> > runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
>>> >> >
>>> >> > runtime.log.error.stacktrace = false
>>> >> > runtime.log.warn.stacktrace = false
>>> >> > runtime.log.info.stacktrace = false
>>> >> > runtime.log.invalid.reference = true
>>> >> >
>>> >> > input.encoding=UTF-8
>>> >> > output.encoding=UTF-8
>>> >> >
>>> >> > directive.foreach.counter.name = velocityCount
>>> >> > directive.foreach.counter.initial.value = 1
>>> >> >
>>> >> > resource.loader = class
>>> >> >
>>> >> > class.resource.loader.description = Velocity Classpath Resource
>>> >> > Loader
>>> >> > class.resource.loader.class =
>>> >> > org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
>>> >> >
>>> >> > velocimacro.permissions.allow.inline.local.scope = true
>>> >> >
>>> >> > Thanks!
>>> >> > Bradley
>>> >> >
>>> >> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com>
>>> >> > wrote:
>>> >> >>
>>> >> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
>>> >> >> <br...@hannonhill.com> wrote:
>>> >> >> > Thanks for the input.
>>> >> >> >
>>> >> >> > What we're seeing is that Velocity seems to be holding on to a
>>> >> >> > lot
>>> >> >> > of org.apache.velocity.runtime.parser.Token objects (around 5
>>> >> >> > million).
>>> >> >> > We
>>> >> >> > allow people to write arbitrary Velocity templates in our system
>>> >> >> > and
>>> >> >> > are
>>> >> >> > evaluating them with:
>>> >> >> >
>>> >> >> > VelocityEngine.evaluate(Context context, Writer writer, String
>>> >> >> > logTag,
>>> >> >> > Reader reader)
>>> >> >> >
>>> >> >> > I was under the impression that Templates evaluated this way are
>>> >> >> > inherently
>>> >> >> > not cacheable. Is that the case? If that's not true, is there a
>>> >> >> > way
>>> >> >> > to
>>> >> >> > control the cache Velocity is using for these?
>>> >> >>
>>> >> >> Me too.  Just out of curiosity, what properties are you using for
>>> >> >> configuration?  And can you tell me any more about what class is
>>> >> >> holding onto those Tokens?
>>> >> >>
>>> >> >> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com>
>>> >> >> > wrote:
>>> >> >> >
>>> >> >> >> I think that Velocity has one global hash table for macros from
>>> >> >> >> the
>>> >> >> >> *.vm
>>> >> >> >> libraries and that is more or less static for the life time of
>>> >> >> >> the
>>> >> >> >> Velocity
>>> >> >> >> engine.
>>> >> >> >>
>>> >> >> >> I wish there was a mechanism to control the list of *.vm files
>>> >> >> >> and their order of lookup for each individual merge (thread).
>>> >> >> >> This would facilitate macro overloads based on the context.
>>> >> >> >> Unfortunately, this feature is not available.
>>> >> >> >>
>>> >> >> >> I think the 1.7 behavior is (more or less):
>>> >> >> >>
>>> >> >> >> When a template reference is found (e.g. #parse("x")), it is
>>> >> >> >> looked up in the resource cache, and if found there (with all
>>> >> >> >> the expiration checks, etc.) the parsed AST tree is used.
>>> >> >> >> If not found, the template is loaded from the file, actually
>>> >> >> >> parsed, and put into the cache. During the actual parsing
>>> >> >> >> process, the macros that are defined in the template are put
>>> >> >> >> into the macro manager cache, which is organized as:
>>> >> >> >> "defining template name (name space)" => "macro name" => AST macro code
>>> >> >> >> The AST is then rendered in the current context running #parse.
>>> >> >> >>
>>> >> >> >> When the time comes to call a macro there is a lookup process
>>> >> >> >> which
>>> >> >> >> can
>>> >> >> >> be
>>> >> >> >> influenced by some props, but the most general case is:
>>> >> >> >>
>>> >> >> >> 1. Lookup in the global *.vm files, if found use that.
>>> >> >> >> 2. Lookup in the same "name space" that calls the macro, if
>>> >> >> >> found
>>> >> >> >> use
>>> >> >> >> that.
>>> >> >> >> 3. Going back through the "list" of the #parse-d templates
>>> >> >> >> lookup in
>>> >> >> >> each
>>> >> >> >> name space on the stack.
>>> >> >> >>
>>> >> >> >> The stack can actually get very long too, for example:
>>> >> >> >>
>>> >> >> >> #foreach($templ in [1..5])
>>> >> >> >>   #parse("${templ}.vtl")
>>> >> >> >> #end
>>> >> >> >>
>>> >> >> >> #mymacro()
>>> >> >> >>
>>> >> >> >> The lookup list here would contain:
>>> >> >> >>
>>> >> >> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
>>> >> >> >>
>>> >> >> >> This is true even for cases where the name is the same:
>>> >> >> >>
>>> >> >> >> #foreach($item in [1..5])
>>> >> >> >>   #parse('item.vtl')
>>> >> >> >> #end
>>> >> >> >>
>>> >> >> >> The lookup list here would contain:
>>> >> >> >>
>>> >> >> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
>>> >> >> >>
>>> >> >> >> There is no attempt to optimize the lookup list and collapse the
>>> >> >> >> duplicates.
>>> >> >> >>
>>> >> >> >> Unfortunately, 1.7 also had some nasty concurrency bugs there,
>>> >> >> >> having to do with clearing the name space of all its macros and
>>> >> >> >> repopulating it again on each parse, which did not work at all
>>> >> >> >> with multiple threads. One thread could clear the name space
>>> >> >> >> while another was doing a lookup, etc.
>>> >> >> >>
>>> >> >> >> I think there was an effort to redesign that part in 2.0, but I
>>> >> >> >> have
>>> >> >> >> not
>>> >> >> >> looked at that yet.
>>> >> >> >>
>>> >> >> >> Alex
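The two-level macro cache and lookup order Alex describes can be sketched as a toy simulation. Note this is an illustration only: the class and method names below are invented for the sketch, and the strings stand in for parsed AST nodes; none of this is Velocity's actual API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough model of the macro manager cache:
// "defining template name (name space)" => "macro name" => AST macro code
public class MacroLookupSketch {
    static final Map<String, Map<String, String>> namespaces = new HashMap<>();

    // Lookup order: global *.vm libraries first, then the calling template's
    // own namespace, then each #parse-d template on the stack (duplicates
    // are kept, as in the item.vtl example above).
    static String lookup(String macroName, String callingNamespace, List<String> parseStack) {
        List<String> order = new ArrayList<>();
        order.add("VM_global_library.vm");
        order.add(callingNamespace);
        order.addAll(parseStack);
        for (String ns : order) {
            Map<String, String> macros = namespaces.get(ns);
            if (macros != null && macros.containsKey(macroName)) {
                return macros.get(macroName);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        namespaces.put("item.vtl", Map.of("mymacro", "AST for #mymacro"));
        // The stack keeps one entry per #parse, even for the same name.
        List<String> stack = List.of("item.vtl", "item.vtl", "item.vtl", "item.vtl", "item.vtl");
        System.out.println(lookup("mymacro", "caller.vm", stack)); // prints "AST for #mymacro"
    }
}
```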
>>> >> >> >>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@velocity.apache.org
For additional commands, e-mail: user-help@velocity.apache.org


Re: Macro caching and other caching

Posted by Nathan Bubna <nb...@gmail.com>.
On Tue, Jul 31, 2012 at 10:48 AM, Bradley Wagner
<br...@hannonhill.com> wrote:
> Yea, we're starting to figure that out. It seems like it's the macro cache
> that's growing and we have no way of clearing that cache or disabling it
> altogether.
>
> Any ideas there?

Hmm.  I haven't confirmed this and am rusty on the codebase, but
evaluate() uses the logTag as the template name for lack of anything
else, and VelocimacroFactory uses template names as keys for local macro
namespaces.  By generating a new logTag for every evaluate() call, you
appear to be accumulating innumerable macro namespaces.  Ideally, these
should be dumped at the end of evaluate(), but they aren't.  That's a
bug and probably deserves its own issue (want to report it?).
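That accumulation can be illustrated with a toy model (invented names, not Velocity's real classes): keying namespaces by a logTag that is unique per call makes the map grow without bound, while a stable logTag caps it at one entry.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of VelocimacroFactory keying local macro namespaces by
// template name, which for evaluate() is the caller-supplied logTag.
public class NamespaceGrowthSketch {
    static final Map<String, Map<String, String>> namespaces = new HashMap<>();

    // Mimics evaluate(): a namespace is created for the logTag and never dumped.
    static void evaluate(String logTag) {
        namespaces.computeIfAbsent(logTag, k -> new HashMap<>());
    }

    public static void main(String[] args) {
        // Unique tag per call, like "velocityTransform-" + System.currentTimeMillis():
        for (int i = 0; i < 1000; i++) {
            evaluate("velocityTransform-" + i);
        }
        System.out.println(namespaces.size()); // prints 1000: one namespace per call

        namespaces.clear();
        // A stable tag keeps the map at a single entry no matter how often it runs:
        for (int i = 0; i < 1000; i++) {
            evaluate("velocityTransform");
        }
        System.out.println(namespaces.size()); // prints 1
    }
}
```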

So, I see three options:

1) accept a certain amount of useless-to-you inline macro caching, and
adjust your logTag creation to be user-associated or something like
that, so it doesn't amount to an infinite number of distinct template
names (as your current scheme would appear to).

2) go for the quick, partial fix and change
RuntimeInstance.evaluate(context, writer, logTag, reader) to call
dumpVMNamespace(logTag) after rendering but before returning.  I'm
somewhat sure that should do the trick, and probably not cause side
effects.  :)

3) do what I haven't yet and try out Jarkko's patch for VELOCITY-797.
It should (if he did it as I expect) store inline macros with the
"template" and thus tie their lifecycle to the template instead of the
VelocimacroFactory.

> On Tue, Jul 31, 2012 at 1:39 PM, Alex Fedotov <al...@kayak.com> wrote:
>>
>> It's not really a Velocity-specific suggestion - I would just dump the
>> heap and trace the instances to the garbage collection roots. Eclipse MAT
>> or YourKit can do it, as can probably a lot of other Java tools.



Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
Yea, we're starting to figure that out. It seems like it's the macro cache
that's growing and we have no way of clearing that cache or disabling it
altogether.

Any ideas there?

>> >> >> >> >>
>> >> >> >> >> There is no attempt to optimize the lookup list and collapse
>> the
>> >> >> >> >> duplicates.
>> >> >> >> >>
>> >> >> >> >> Unfortunately 1.7 also had some nasty concurrency bugs there
>> that
>> >> >> >> >> had
>> >> >> >> >> to do
>> >> >> >> >> with clearing the name space of all the macros and
>> repopulating
>> >> it
>> >> >> >> >> again on
>> >> >> >> >> each parse which did not work at all with multiple threads.
>> >> >> >> >> One thread could clear the name space while another was doing
>> a
>> >> >> >> >> lookup,
>> >> >> >> >> etc.
>> >> >> >> >>
>> >> >> >> >> I think there was an effort to redesign that part in 2.0, but
>> I
>> >> have
>> >> >> >> >> not
>> >> >> >> >> looked at that yet.
>> >> >> >> >>
>> >> >> >> >> Alex
>> >> >> >> >>
>> >> >> >> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
>> >> >> >> >> bradley.wagner@hannonhill.com> wrote:
>> >> >> >> >>
>> >> >> >> >> > Hi,
>> >> >> >> >> >
>> >> >> >> >> > We recently made some changes to our software to use just a
>> >> single
>> >> >> >> >> > VelocityEngine as per recommendations on this group.
>> >> >> >> >> >
>> >> >> >> >> > We ran into an issue where macros were all of the sudden
>> being
>> >> >> >> >> > shared
>> >> >> >> >> > across template renders because we had not
>> >> >> >> >> > specified: velocimacro.permissions.allow.inline.local.scope
>> =
>> >> >> >> >> > true.
>> >> >> >> >> > However, we also had not ever turned on caching in our props
>> >> file
>> >> >> >> >> > with: class.resource.loader.cache = true.
>> >> >> >> >> >
>> >> >> >> >> > Does this mean that macros are cached separately from
>> whatever
>> >> is
>> >> >> >> >> > being
>> >> >> >> >> > cached in the class.resource.loader.cache cache? Is there
>> any
>> >> way
>> >> >> >> >> > to
>> >> >> >> >> > control that caching or is just using this property the
>> >> >> >> >> > way: velocimacro.permissions.allow.inline.local.scope = true
>> >> >> >> >> >
>> >> >> >> >> > One side effect of our recent changes is that the app seems
>> to
>> >> >> >> >> > have
>> >> >> >> >> > an
>> >> >> >> >> > increased mem footprint. We're not *sure* it can be
>> attributed
>> >> to
>> >> >> >> >> velocity
>> >> >> >> >> > but I was trying to see what kinds of things Velocity could
>> be
>> >> >> >> >> > hanging on
>> >> >> >> >> > to and how much memory they might be taking up.
>> >> >> >> >> >
>> >> >> >> >> > Thanks!
>> >> >> >> >> >
>> >> >> >> >>
>> >> >> >
>> >> >> >
>> >> >
>> >> >
>> >>
>> >
>> >
>>
>
>

Re: Macro caching and other caching

Posted by Alex Fedotov <al...@kayak.com>.
It's not really a Velocity-specific suggestion - I would just dump the heap
and trace the instances to the garbage collection roots. Eclipse MAT or
YourKit can do it, as can many other Java tools.
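[For reference, the dump itself can also be triggered from inside the app. A minimal sketch, assuming a HotSpot JVM; the class name, output path, and the liveObjectsOnly flag name here are illustrative, but HotSpotDiagnosticMXBean.dumpHeap is the standard diagnostic API:]

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    // Writes an .hprof snapshot of the running JVM; open it in Eclipse MAT
    // or YourKit and trace the Token instances back to their GC roots.
    public static void dump(String path, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // dumpHeap fails if the target file already exists
        mxBean.dumpHeap(path, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        dump("velocity-heap.hprof", true); // liveObjectsOnly forces a GC first
    }
}
```

[Passing true for the second argument dumps only reachable objects, which is what you want when hunting a retention leak rather than garbage awaiting collection.]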

On Tue, Jul 31, 2012 at 11:37 AM, Bradley Wagner <
bradley.wagner@hannonhill.com> wrote:

> Whoops, was misreading the API. It's actually that tempTemplateName
> variable.
>
> On Tue, Jul 31, 2012 at 11:21 AM, Bradley Wagner <
> bradley.wagner@hannonhill.com> wrote:
>
> > A StringWriter:
> >
> > String template = ... the string containing the dynamic template to
> > generate ...
> > // Get a template as stream.
> > StringWriter writer = new StringWriter();
> > StringReader reader = new StringReader(template);
> > // create a temporary template name
> > String tempTemplateName = "velocityTransform-" +
> > System.currentTimeMillis();
> >
> > // ask Velocity to evaluate it.
> > VelocityEngine engine = getEngine();
> > boolean success = engine.evaluate(context, writer, tempTemplateName,
> > reader);
> >
> > if (!success)
> > {
> >     LOG.debug("Velocity could not evaluate template with content: \n" +
> > template);
> >     return null;
> > }
> > LOG.debug("Velocity successfully evaluated template with content: \n" +
> > template);
> > String strResult = writer.getBuffer().toString();
> > return strResult;
> >
> > On Tue, Jul 31, 2012 at 11:10 AM, Nathan Bubna <nb...@gmail.com> wrote:
> >
> >> What do you use for logTag (template name) when you are using
> evaluate()?
> >>
> >> On Tue, Jul 31, 2012 at 8:01 AM, Bradley Wagner
> >> <br...@hannonhill.com> wrote:
> >> > Doing both. In the other case we're using a classpath resource loader
> to
> >> > evaluate templates like this:
> >> >
> >> > VelocityContext = ... a context that we're building each time ...
> >> > VelocityEngine engine = ... our single engine ...
> >> > Template template = engine.getTemplate(templatePath);
> >> > StringWriter writer = new StringWriter();
> >> > template.merge(context, writer);
> >> >
> >> > However, we only have 7 of those static templates in our whole system.
> >> >
> >> > On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nb...@gmail.com>
> >> wrote:
> >> >>
> >> >> And you're sure you're only using VelocityEngine.evaluate?  Not
> >> >> loading templates through the resource loader?  Or are you doing
> both?
> >> >>
> >> >> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
> >> >> <br...@hannonhill.com> wrote:
> >> >> > Nathan,
> >> >> >
> >> >> > Tokens are referenced by
> >> >> > org.apache.velocity.runtime.parser.node.ASTReference which seem to
> be
> >> >> > referenced by arrays of
> >> org.apache.velocity.runtime.parser.node.Nodes.
> >> >> > Most
> >> >> > of the classes referencing these things are AST classes in the
> >> >> > org.apache.velocity.runtime.parser.node package.
> >> >> >
> >> >> > Here's our properties file:
> >> >> >
> >> >> > runtime.log.logsystem.class =
> >> >> >
> >> >> >
> >>
> org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
> >> >> >
> >> >> >
> >>
> runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
> >> >> >
> >> >> > runtime.log.error.stacktrace = false
> >> >> > runtime.log.warn.stacktrace = false
> >> >> > runtime.log.info.stacktrace = false
> >> >> > runtime.log.invalid.reference = true
> >> >> >
> >> >> > input.encoding=UTF-8
> >> >> > output.encoding=UTF-8
> >> >> >
> >> >> > directive.foreach.counter.name = velocityCount
> >> >> > directive.foreach.counter.initial.value = 1
> >> >> >
> >> >> > resource.loader = class
> >> >> >
> >> >> > class.resource.loader.description = Velocity Classpath Resource
> >> Loader
> >> >> > class.resource.loader.class =
> >> >> > org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
> >> >> >
> >> >> > velocimacro.permissions.allow.inline.local.scope = true
> >> >> >
> >> >> > Thanks!
> >> >> > Bradley
> >> >> >
> >> >> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com>
> >> wrote:
> >> >> >>
> >> >> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
> >> >> >> <br...@hannonhill.com> wrote:
> >> >> >> > Thanks for the input.
> >> >> >> >
> >> >> >> > What we're seeing is that Velocity seems to be holding on to a
> lot
> >> >> >> > of org.apache.velocity.runtime.parser.Token objects (around 5
> >> >> >> > million).
> >> >> >> > We
> >> >> >> > allow people to write arbitrary Velocity templates in our system
> >> and
> >> >> >> > are
> >> >> >> > evaluating them with:
> >> >> >> >
> >> >> >> > VelocityEngine.evaluate(Context context, Writer writer, String
> >> >> >> > logTag,
> >> >> >> > Reader reader)
> >> >> >> >
> >> >> >> > I was under the impression that Templates evaluated this way are
> >> >> >> > inherently
> >> >> >> > not cacheable. Is that the case? If that's not true, is there a
> >> way
> >> >> >> > to
> >> >> >> > control the cache Velocity is using for these?
> >> >> >>
> >> >> >> me too.  just out of curiosity, what properties are you using for
> >> >> >> configuration?  and can you tell any more about what class is
> >> holding
> >> >> >> onto those Tokens?
> >> >> >>

Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
Whoops, was misreading the API. It's actually that tempTemplateName
variable.


Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
A StringWriter:

String template = ... the string containing the dynamic template to
generate ...
// Get a template as stream.
StringWriter writer = new StringWriter();
StringReader reader = new StringReader(template);
// create a temporary template name
String tempTemplateName = "velocityTransform-" + System.currentTimeMillis();

// ask Velocity to evaluate it.
VelocityEngine engine = getEngine();
boolean success = engine.evaluate(context, writer, tempTemplateName,
reader);

if (!success)
{
    LOG.debug("Velocity could not evaluate template with content: \n" +
template);
    return null;
}
LOG.debug("Velocity successfully evaluated template with content: \n" +
template);
String strResult = writer.getBuffer().toString();
return strResult;
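[Nathan's question about logTag is pointed: if parsed-macro state is cached per template name, then a unique name per evaluate() call means that cache can only grow. A stdlib-only illustration of that suspected retention pattern - this is NOT Velocity code, the map merely stands in for a macro-manager table keyed by template name:]

```java
import java.util.HashMap;
import java.util.Map;

public class NamespaceGrowth {
    // Stand-in for a macro-manager table keyed by template name ("namespace").
    static final Map<String, Object> namespaces = new HashMap<>();

    static void evaluate(String logTag) {
        // each previously unseen logTag allocates and retains a new entry
        namespaces.computeIfAbsent(logTag, k -> new Object());
    }

    public static void main(String[] args) {
        // unique name per call, like "velocityTransform-" + System.currentTimeMillis()
        for (int i = 0; i < 1000; i++) {
            evaluate("velocityTransform-" + i);
        }
        System.out.println(namespaces.size()); // 1000 entries retained

        namespaces.clear();
        // one stable, reusable name
        for (int i = 0; i < 1000; i++) {
            evaluate("velocityTransform");
        }
        System.out.println(namespaces.size()); // stays at 1
    }
}
```

[If the suspicion holds, reusing a single constant logTag would cap that growth; the proper fix on the Velocity side is the one tracked in VELOCITY-797.]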

On Tue, Jul 31, 2012 at 11:10 AM, Nathan Bubna <nb...@gmail.com> wrote:

> What do you use for logTag (template name) when you are using evaluate()?
>
> On Tue, Jul 31, 2012 at 8:01 AM, Bradley Wagner
> <br...@hannonhill.com> wrote:
> > Doing both. In the other case we're using a classpath resource loader to
> > evaluate templates like this:
> >
> > VelocityContext = ... a context that we're building each time ...
> > VelocityEngine engine = ... our single engine ...
> > Template template = engine.getTemplate(templatePath);
> > StringWriter writer = new StringWriter();
> > template.merge(context, writer);
> >
> > However, we only have 7 of those static templates in our whole system.
> >
> > On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nb...@gmail.com> wrote:
> >>
> >> And you're sure you're only using VelocityEngine.evaluate?  Not
> >> loading templates through the resource loader?  Or are you doing both?
> >>
> >> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
> >> <br...@hannonhill.com> wrote:
> >> > Nathan,
> >> >
> >> > Tokens are referenced by
> >> > org.apache.velocity.runtime.parser.node.ASTReference which seem to be
> >> > referenced by arrays of org.apache.velocity.runtime.parser.node.Nodes.
> >> > Most
> >> > of the classes referencing these things are AST classes in the
> >> > org.apache.velocity.runtime.parser.node package.
> >> >
> >> > Here's our properties file:
> >> >
> >> > runtime.log.logsystem.class =
> >> >
> >> >
> org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
> >> >
> >> >
> runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
> >> >
> >> > runtime.log.error.stacktrace = false
> >> > runtime.log.warn.stacktrace = false
> >> > runtime.log.info.stacktrace = false
> >> > runtime.log.invalid.reference = true
> >> >
> >> > input.encoding=UTF-8
> >> > output.encoding=UTF-8
> >> >
> >> > directive.foreach.counter.name = velocityCount
> >> > directive.foreach.counter.initial.value = 1
> >> >
> >> > resource.loader = class
> >> >
> >> > class.resource.loader.description = Velocity Classpath Resource Loader
> >> > class.resource.loader.class =
> >> > org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
> >> >
> >> > velocimacro.permissions.allow.inline.local.scope = true
> >> >
> >> > Thanks!
> >> > Bradley
> >> >
> >> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com>
> wrote:
> >> >>
> >> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
> >> >> <br...@hannonhill.com> wrote:
> >> >> > Thanks for the input.
> >> >> >
> >> >> > What we're seeing is that Velocity seems to be holding on to a lot
> >> >> > of org.apache.velocity.runtime.parser.Token objects (around 5
> >> >> > million).
> >> >> > We
> >> >> > allow people to write arbitrary Velocity templates in our system
> and
> >> >> > are
> >> >> > evaluating them with:
> >> >> >
> >> >> > VelocityEngine.evaluate(Context context, Writer writer, String
> >> >> > logTag,
> >> >> > Reader reader)
> >> >> >
> >> >> > I was under the impression that Templates evaluated this way are
> >> >> > inherently
> >> >> > not cacheable. Is that the case? If that's not true, is there a way
> >> >> > to
> >> >> > control the cache Velocity is using for these?
> >> >>
> >> >> me too.  just out of curiosity, what properties are you using for
> >> >> configuration?  and can you tell any more about what class is holding
> >> >> onto those Tokens?
> >> >>
> >> >> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com>
> >> >> > wrote:
> >> >> >
> >> >> >> I think that Velocity has one global hash table for macros from
> the
> >> >> >> *.vm
> >> >> >> libraries and that is more or less static for the life time of the
> >> >> >> Velocity
> >> >> >> engine.
> >> >> >>
> >> >> >> I wish there there was a mechanism to control the list of the *.vm
> >> >> >> files
> >> >> >> and their order of lookup for each individual merge (thread). This
> >> >> >> would
> >> >> >> facilitate macro overloads based on the context.
> >> >> >> Unfortunately this feature is not available.
> >> >> >>
> >> >> >> I think the 1.7 behavior is (more or less):
> >> >> >>
> >> >> >> When template reference is found (i.e. #parse("x")) it is
> looked-up
> >> >> >> in
> >> >> >> the
> >> >> >> resource cache and if found there (with all the expiration checks,
> >> >> >> etc.)
> >> >> >> the parsed AST tree is used.
> >> >> >> If not found the template is loaded from the file, actually parsed
> >> >> >> and
> >> >> >> put
> >> >> >> into the cache. During the actual parsing process the macros that
> >> >> >> are
> >> >> >> defined in the template are put into the macro manager cache which
> >> >> >> is
> >> >> >> organized as:
> >> >> >> "defining template name (name space)" => "macro name" => AST macro
> >> >> >> code
> >> >> >> The AST is then rendered in the current context running #parse.
> >> >> >>
> >> >> >> When the time comes to call a macro there is a lookup process
> which
> >> >> >> can
> >> >> >> be
> >> >> >> influenced by some props, but the most general case is:
> >> >> >>
> >> >> >> 1. Lookup in the global *.vm files, if found use that.
> >> >> >> 2. Lookup in the same "name space" that calls the macro, if found
> >> >> >> use
> >> >> >> that.
> >> >> >> 3. Going back through the "list" of the #parse-d templates lookup
> in
> >> >> >> each
> >> >> >> name space on the stack.
> >> >> >>
> >> >> >> The stack can be actually very long too, for example
> >> >> >>
> >> >> >> #foreach($templ in [1..5])
> >> >> >>   #parse("${templ}.vtl")
> >> >> >> #end
> >> >> >>
> >> >> >> #mymacro()
> >> >> >>
> >> >> >> The lookup list here would contain:
> >> >> >>
> >> >> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
> >> >> >>
> >> >> >> This is true even for cases where the name is the same:
> >> >> >>
> >> >> >> #foreach($item in [1..5])
> >> >> >>   #parse('item.vtl')
> >> >> >> #end
> >> >> >>
> >> >> >> The lookup list here would contain:
> >> >> >>
> >> >> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
> >> >> >>
> >> >> >> There is no attempt to optimize the lookup list and collapse the
> >> >> >> duplicates.
> >> >> >>
> >> >> >> Unfortunately 1.7 also had some nasty concurrency bugs there that
> >> >> >> had
> >> >> >> to do
> >> >> >> with clearing the name space of all the macros and repopulating it
> >> >> >> again on
> >> >> >> each parse which did not work at all with multiple threads.
> >> >> >> One thread could clear the name space while another was doing a
> >> >> >> lookup,
> >> >> >> etc.
> >> >> >>
> >> >> >> I think there was an effort to redesign that part in 2.0, but I
> have
> >> >> >> not
> >> >> >> looked at that yet.
> >> >> >>
> >> >> >> Alex
> >> >> >>
> >> >> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
> >> >> >> bradley.wagner@hannonhill.com> wrote:
> >> >> >>
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > We recently made some changes to our software to use just a
> single
> >> >> >> > VelocityEngine as per recommendations on this group.
> >> >> >> >
> >> >> >> > We ran into an issue where macros were all of a sudden being
> >> >> >> > shared
> >> >> >> > across template renders because we had not
> >> >> >> > specified: velocimacro.permissions.allow.inline.local.scope =
> >> >> >> > true.
> >> >> >> > However, we also had not ever turned on caching in our props
> file
> >> >> >> > with: class.resource.loader.cache = true.
> >> >> >> >
> >> >> >> > Does this mean that macros are cached separately from whatever
> is
> >> >> >> > being
> >> >> >> > cached in the class.resource.loader.cache cache? Is there any
> way
> >> >> >> > to
> >> >> >> > control that caching or is just using this property the
> >> >> >> > way: velocimacro.permissions.allow.inline.local.scope = true
> >> >> >> >
> >> >> >> > One side effect of our recent changes is that the app seems to
> >> >> >> > have
> >> >> >> > an
> >> >> >> > increased mem footprint. We're not *sure* it can be attributed
> to
> >> >> >> velocity
> >> >> >> > but I was trying to see what kinds of things Velocity could be
> >> >> >> > hanging on
> >> >> >> > to and how much memory they might be taking up.
> >> >> >> >
> >> >> >> > Thanks!
> >> >> >> >
> >> >> >>
> >> >
> >> >
> >
> >
>

Re: Macro caching and other caching

Posted by Nathan Bubna <nb...@gmail.com>.
What do you use for logTag (template name) when you are using evaluate()?

On Tue, Jul 31, 2012 at 8:01 AM, Bradley Wagner
<br...@hannonhill.com> wrote:
> Doing both. In the other case we're using a classpath resource loader to
> evaluate templates like this:
>
> VelocityContext context = ... a context that we're building each time ...
> VelocityEngine engine = ... our single engine ...
> Template template = engine.getTemplate(templatePath);
> StringWriter writer = new StringWriter();
> template.merge(context, writer);
>
> However, we only have 7 of those static templates in our whole system.
>
> On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nb...@gmail.com> wrote:
>>
>> And you're sure you're only using VelocityEngine.evaluate?  Not
>> loading templates through the resource loader?  Or are you doing both?
>>
>> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
>> <br...@hannonhill.com> wrote:
>> > Nathan,
>> >
>> > Tokens are referenced by
>> > org.apache.velocity.runtime.parser.node.ASTReference which seem to be
>> > referenced by arrays of org.apache.velocity.runtime.parser.node.Nodes.
>> > Most
>> > of the classes referencing these things are AST classes in the
>> > org.apache.velocity.runtime.parser.node package.
>> >
>> > Here's our properties file:
>> >
>> > runtime.log.logsystem.class =
>> >
>> > org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
>> >
>> > runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
>> >
>> > runtime.log.error.stacktrace = false
>> > runtime.log.warn.stacktrace = false
>> > runtime.log.info.stacktrace = false
>> > runtime.log.invalid.reference = true
>> >
>> > input.encoding=UTF-8
>> > output.encoding=UTF-8
>> >
>> > directive.foreach.counter.name = velocityCount
>> > directive.foreach.counter.initial.value = 1
>> >
>> > resource.loader = class
>> >
>> > class.resource.loader.description = Velocity Classpath Resource Loader
>> > class.resource.loader.class =
>> > org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
>> >
>> > velocimacro.permissions.allow.inline.local.scope = true
>> >
>> > Thanks!
>> > Bradley
>> >
>> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com> wrote:
>> >>
>> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
>> >> <br...@hannonhill.com> wrote:
>> >> > Thanks for the input.
>> >> >
>> >> > What we're seeing is that Velocity seems to be holding on to a lot
>> >> > of org.apache.velocity.runtime.parser.Token objects (around 5
>> >> > million).
>> >> > We
>> >> > allow people to write arbitrary Velocity templates in our system and
>> >> > are
>> >> > evaluating them with:
>> >> >
>> >> > VelocityEngine.evaluate(Context context, Writer writer, String
>> >> > logTag,
>> >> > Reader reader)
>> >> >
>> >> > I was under the impression that Templates evaluated this way are
>> >> > inherently
>> >> > not cacheable. Is that the case? If that's not true, is there a way
>> >> > to
>> >> > control the cache Velocity is using for these?
>> >>
>> >> me too.  just out of curiosity, what properties are you using for
>> >> configuration?  and can you tell any more about what class is holding
>> >> onto those Tokens?
>> >>
>> >> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com>
>> >> > wrote:
>> >> >
>> >> >> I think that Velocity has one global hash table for macros from the
>> >> >> *.vm
>> >> >> libraries and that is more or less static for the life time of the
>> >> >> Velocity
>> >> >> engine.
>> >> >>
> >> >> >> I wish there was a mechanism to control the list of the *.vm
>> >> >> files
>> >> >> and their order of lookup for each individual merge (thread). This
>> >> >> would
>> >> >> facilitate macro overloads based on the context.
>> >> >> Unfortunately this feature is not available.
>> >> >>
>> >> >> I think the 1.7 behavior is (more or less):
>> >> >>
>> >> >> When template reference is found (i.e. #parse("x")) it is looked-up
>> >> >> in
>> >> >> the
>> >> >> resource cache and if found there (with all the expiration checks,
>> >> >> etc.)
>> >> >> the parsed AST tree is used.
>> >> >> If not found the template is loaded from the file, actually parsed
>> >> >> and
>> >> >> put
>> >> >> into the cache. During the actual parsing process the macros that
>> >> >> are
>> >> >> defined in the template are put into the macro manager cache which
>> >> >> is
>> >> >> organized as:
>> >> >> "defining template name (name space)" => "macro name" => AST macro
>> >> >> code
>> >> >> The AST is then rendered in the current context running #parse.
>> >> >>
>> >> >> When the time comes to call a macro there is a lookup process which
>> >> >> can
>> >> >> be
>> >> >> influenced by some props, but the most general case is:
>> >> >>
>> >> >> 1. Lookup in the global *.vm files, if found use that.
>> >> >> 2. Lookup in the same "name space" that calls the macro, if found
>> >> >> use
>> >> >> that.
>> >> >> 3. Going back through the "list" of the #parse-d templates lookup in
>> >> >> each
>> >> >> name space on the stack.
>> >> >>
>> >> >> The stack can be actually very long too, for example
>> >> >>
>> >> >> #foreach($templ in [1..5])
>> >> >>   #parse("${templ}.vtl")
>> >> >> #end
>> >> >>
>> >> >> #mymacro()
>> >> >>
>> >> >> The lookup list here would contain:
>> >> >>
>> >> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
>> >> >>
>> >> >> This is true even for cases where the name is the same:
>> >> >>
>> >> >> #foreach($item in [1..5])
>> >> >>   #parse('item.vtl')
>> >> >> #end
>> >> >>
>> >> >> The lookup list here would contain:
>> >> >>
>> >> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
>> >> >>
>> >> >> There is no attempt to optimize the lookup list and collapse the
>> >> >> duplicates.
>> >> >>
>> >> >> Unfortunately 1.7 also had some nasty concurrency bugs there that
>> >> >> had
>> >> >> to do
>> >> >> with clearing the name space of all the macros and repopulating it
>> >> >> again on
>> >> >> each parse which did not work at all with multiple threads.
>> >> >> One thread could clear the name space while another was doing a
>> >> >> lookup,
>> >> >> etc.
>> >> >>
>> >> >> I think there was an effort to redesign that part in 2.0, but I have
>> >> >> not
>> >> >> looked at that yet.
>> >> >>
>> >> >> Alex
>> >> >>
>> >> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
>> >> >> bradley.wagner@hannonhill.com> wrote:
>> >> >>
>> >> >> > Hi,
>> >> >> >
>> >> >> > We recently made some changes to our software to use just a single
>> >> >> > VelocityEngine as per recommendations on this group.
>> >> >> >
>> >> >> > We ran into an issue where macros were all of a sudden being
>> >> >> > shared
>> >> >> > across template renders because we had not
>> >> >> > specified: velocimacro.permissions.allow.inline.local.scope =
>> >> >> > true.
>> >> >> > However, we also had not ever turned on caching in our props file
>> >> >> > with: class.resource.loader.cache = true.
>> >> >> >
>> >> >> > Does this mean that macros are cached separately from whatever is
>> >> >> > being
>> >> >> > cached in the class.resource.loader.cache cache? Is there any way
>> >> >> > to
>> >> >> > control that caching or is just using this property the
>> >> >> > way: velocimacro.permissions.allow.inline.local.scope = true
>> >> >> >
>> >> >> > One side effect of our recent changes is that the app seems to
>> >> >> > have
>> >> >> > an
>> >> >> > increased mem footprint. We're not *sure* it can be attributed to
>> >> >> velocity
>> >> >> > but I was trying to see what kinds of things Velocity could be
>> >> >> > hanging on
>> >> >> > to and how much memory they might be taking up.
>> >> >> >
>> >> >> > Thanks!
>> >> >> >
>> >> >>
>> >
>> >
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@velocity.apache.org
For additional commands, e-mail: user-help@velocity.apache.org


Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
Doing both. In the other case we're using a classpath resource loader to
load and merge templates like this:

VelocityContext context = ... a context that we're building each time ...
VelocityEngine engine = ... our single engine ...
Template template = engine.getTemplate(templatePath);
StringWriter writer = new StringWriter();
template.merge(context, writer);

However, we only have 7 of those static templates in our whole system.
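(Aside: when the resource cache is size-limited, parsed templates get evicted in least-recently-used order. A stdlib-only sketch of that eviction behavior, using only `LinkedHashMap` -- the class name here is illustrative, not Velocity's real cache implementation:)

```java
import java.util.*;

/**
 * Illustrative LRU cache, similar in spirit to a size-limited
 * template resource cache (not Velocity's actual class).
 */
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCacheSketch(int maxEntries) {
        super(16, 0.75f, true); // access-order: gets refresh recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used parsed template once over capacity.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCacheSketch<String, String> cache = new LruCacheSketch<>(2);
        cache.put("a.vtl", "AST-a");
        cache.put("b.vtl", "AST-b");
        cache.put("c.vtl", "AST-c"); // evicts a.vtl
        System.out.println(cache.keySet()); // prints [b.vtl, c.vtl]
    }
}
```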

On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nb...@gmail.com> wrote:

> And you're sure you're only using VelocityEngine.evaluate?  Not
> loading templates through the resource loader?  Or are you doing both?
>
> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
> <br...@hannonhill.com> wrote:
> > Nathan,
> >
> > Tokens are referenced by
> > org.apache.velocity.runtime.parser.node.ASTReference which seem to be
> > referenced by arrays of org.apache.velocity.runtime.parser.node.Nodes.
>  Most
> > of the classes referencing these things are AST classes in the
> > org.apache.velocity.runtime.parser.node package.
> >
> > Here's our properties file:
> >
> > runtime.log.logsystem.class =
> >
> org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
> >
> runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
> >
> > runtime.log.error.stacktrace = false
> > runtime.log.warn.stacktrace = false
> > runtime.log.info.stacktrace = false
> > runtime.log.invalid.reference = true
> >
> > input.encoding=UTF-8
> > output.encoding=UTF-8
> >
> > directive.foreach.counter.name = velocityCount
> > directive.foreach.counter.initial.value = 1
> >
> > resource.loader = class
> >
> > class.resource.loader.description = Velocity Classpath Resource Loader
> > class.resource.loader.class =
> > org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
> >
> > velocimacro.permissions.allow.inline.local.scope = true
> >
> > Thanks!
> > Bradley
> >
> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com> wrote:
> >>
> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
> >> <br...@hannonhill.com> wrote:
> >> > Thanks for the input.
> >> >
> >> > What we're seeing is that Velocity seems to be holding on to a lot
> >> > of org.apache.velocity.runtime.parser.Token objects (around 5
> million).
> >> > We
> >> > allow people to write arbitrary Velocity templates in our system and
> are
> >> > evaluating them with:
> >> >
> >> > VelocityEngine.evaluate(Context context, Writer writer, String logTag,
> >> > Reader reader)
> >> >
> >> > I was under the impression that Templates evaluated this way are
> >> > inherently
> >> > not cacheable. Is that the case? If that's not true, is there a way to
> >> > control the cache Velocity is using for these?
> >>
> >> me too.  just out of curiosity, what properties are you using for
> >> configuration?  and can you tell any more about what class is holding
> >> onto those Tokens?
> >>
> >> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com>
> wrote:
> >> >
> >> >> I think that Velocity has one global hash table for macros from the
> >> >> *.vm
> >> >> libraries and that is more or less static for the life time of the
> >> >> Velocity
> >> >> engine.
> >> >>
> >> >> I wish there was a mechanism to control the list of the *.vm
> >> >> files
> >> >> and their order of lookup for each individual merge (thread). This
> >> >> would
> >> >> facilitate macro overloads based on the context.
> >> >> Unfortunately this feature is not available.
> >> >>
> >> >> I think the 1.7 behavior is (more or less):
> >> >>
> >> >> When template reference is found (i.e. #parse("x")) it is looked-up
> in
> >> >> the
> >> >> resource cache and if found there (with all the expiration checks,
> >> >> etc.)
> >> >> the parsed AST tree is used.
> >> >> If not found the template is loaded from the file, actually parsed
> and
> >> >> put
> >> >> into the cache. During the actual parsing process the macros that are
> >> >> defined in the template are put into the macro manager cache which is
> >> >> organized as:
> >> >> "defining template name (name space)" => "macro name" => AST macro
> code
> >> >> The AST is then rendered in the current context running #parse.
> >> >>
> >> >> When the time comes to call a macro there is a lookup process which
> can
> >> >> be
> >> >> influenced by some props, but the most general case is:
> >> >>
> >> >> 1. Lookup in the global *.vm files, if found use that.
> >> >> 2. Lookup in the same "name space" that calls the macro, if found use
> >> >> that.
> >> >> 3. Going back through the "list" of the #parse-d templates lookup in
> >> >> each
> >> >> name space on the stack.
> >> >>
> >> >> The stack can be actually very long too, for example
> >> >>
> >> >> #foreach($templ in [1..5])
> >> >>   #parse("${templ}.vtl")
> >> >> #end
> >> >>
> >> >> #mymacro()
> >> >>
> >> >> The lookup list here would contain:
> >> >>
> >> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
> >> >>
> >> >> This is true even for cases where the name is the same:
> >> >>
> >> >> #foreach($item in [1..5])
> >> >>   #parse('item.vtl')
> >> >> #end
> >> >>
> >> >> The lookup list here would contain:
> >> >>
> >> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
> >> >>
> >> >> There is no attempt to optimize the lookup list and collapse the
> >> >> duplicates.
> >> >>
> >> >> Unfortunately 1.7 also had some nasty concurrency bugs there that had
> >> >> to do
> >> >> with clearing the name space of all the macros and repopulating it
> >> >> again on
> >> >> each parse which did not work at all with multiple threads.
> >> >> One thread could clear the name space while another was doing a
> lookup,
> >> >> etc.
> >> >>
> >> >> I think there was an effort to redesign that part in 2.0, but I have
> >> >> not
> >> >> looked at that yet.
> >> >>
> >> >> Alex
> >> >>
> >> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
> >> >> bradley.wagner@hannonhill.com> wrote:
> >> >>
> >> >> > Hi,
> >> >> >
> >> >> > We recently made some changes to our software to use just a single
> >> >> > VelocityEngine as per recommendations on this group.
> >> >> >
> >> >> > We ran into an issue where macros were all of a sudden being
> shared
> >> >> > across template renders because we had not
> >> >> > specified: velocimacro.permissions.allow.inline.local.scope = true.
> >> >> > However, we also had not ever turned on caching in our props file
> >> >> > with: class.resource.loader.cache = true.
> >> >> >
> >> >> > Does this mean that macros are cached separately from whatever is
> >> >> > being
> >> >> > cached in the class.resource.loader.cache cache? Is there any way
> to
> >> >> > control that caching or is just using this property the
> >> >> > way: velocimacro.permissions.allow.inline.local.scope = true
> >> >> >
> >> >> > One side effect of our recent changes is that the app seems to have
> >> >> > an
> >> >> > increased mem footprint. We're not *sure* it can be attributed to
> >> >> velocity
> >> >> > but I was trying to see what kinds of things Velocity could be
> >> >> > hanging on
> >> >> > to and how much memory they might be taking up.
> >> >> >
> >> >> > Thanks!
> >> >> >
> >> >>
> >
> >
>

Re: Macro caching and other caching

Posted by Nathan Bubna <nb...@gmail.com>.
And you're sure you're only using VelocityEngine.evaluate?  Not
loading templates through the resource loader?  Or are you doing both?

On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
<br...@hannonhill.com> wrote:
> Nathan,
>
> Tokens are referenced by
> org.apache.velocity.runtime.parser.node.ASTReference which seem to be
> referenced by arrays of org.apache.velocity.runtime.parser.node.Nodes.  Most
> of the classes referencing these things are AST classes in the
> org.apache.velocity.runtime.parser.node package.
>
> Here's our properties file:
>
> runtime.log.logsystem.class =
> org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
> runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
>
> runtime.log.error.stacktrace = false
> runtime.log.warn.stacktrace = false
> runtime.log.info.stacktrace = false
> runtime.log.invalid.reference = true
>
> input.encoding=UTF-8
> output.encoding=UTF-8
>
> directive.foreach.counter.name = velocityCount
> directive.foreach.counter.initial.value = 1
>
> resource.loader = class
>
> class.resource.loader.description = Velocity Classpath Resource Loader
> class.resource.loader.class =
> org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
>
> velocimacro.permissions.allow.inline.local.scope = true
>
> Thanks!
> Bradley
>
> On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com> wrote:
>>
>> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
>> <br...@hannonhill.com> wrote:
>> > Thanks for the input.
>> >
>> > What we're seeing is that Velocity seems to be holding on to a lot
>> > of org.apache.velocity.runtime.parser.Token objects (around 5 million).
>> > We
>> > allow people to write arbitrary Velocity templates in our system and are
>> > evaluating them with:
>> >
>> > VelocityEngine.evaluate(Context context, Writer writer, String logTag,
>> > Reader reader)
>> >
>> > I was under the impression that Templates evaluated this way are
>> > inherently
>> > not cacheable. Is that the case? If that's not true, is there a way to
>> > control the cache Velocity is using for these?
>>
>> me too.  just out of curiosity, what properties are you using for
>> configuration?  and can you tell any more about what class is holding
>> onto those Tokens?
>>
>> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com> wrote:
>> >
>> >> I think that Velocity has one global hash table for macros from the
>> >> *.vm
>> >> libraries and that is more or less static for the life time of the
>> >> Velocity
>> >> engine.
>> >>
>> >> I wish there was a mechanism to control the list of the *.vm
>> >> files
>> >> and their order of lookup for each individual merge (thread). This
>> >> would
>> >> facilitate macro overloads based on the context.
>> >> Unfortunately this feature is not available.
>> >>
>> >> I think the 1.7 behavior is (more or less):
>> >>
>> >> When template reference is found (i.e. #parse("x")) it is looked-up in
>> >> the
>> >> resource cache and if found there (with all the expiration checks,
>> >> etc.)
>> >> the parsed AST tree is used.
>> >> If not found the template is loaded from the file, actually parsed and
>> >> put
>> >> into the cache. During the actual parsing process the macros that are
>> >> defined in the template are put into the macro manager cache which is
>> >> organized as:
>> >> "defining template name (name space)" => "macro name" => AST macro code
>> >> The AST is then rendered in the current context running #parse.
>> >>
>> >> When the time comes to call a macro there is a lookup process which can
>> >> be
>> >> influenced by some props, but the most general case is:
>> >>
>> >> 1. Lookup in the global *.vm files, if found use that.
>> >> 2. Lookup in the same "name space" that calls the macro, if found use
>> >> that.
>> >> 3. Going back through the "list" of the #parse-d templates lookup in
>> >> each
>> >> name space on the stack.
>> >>
>> >> The stack can be actually very long too, for example
>> >>
>> >> #foreach($templ in [1..5])
>> >>   #parse("${templ}.vtl")
>> >> #end
>> >>
>> >> #mymacro()
>> >>
>> >> The lookup list here would contain:
>> >>
>> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
>> >>
>> >> This is true even for cases where the name is the same:
>> >>
>> >> #foreach($item in [1..5])
>> >>   #parse('item.vtl')
>> >> #end
>> >>
>> >> The lookup list here would contain:
>> >>
>> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
>> >>
>> >> There is no attempt to optimize the lookup list and collapse the
>> >> duplicates.
>> >>
>> >> Unfortunately 1.7 also had some nasty concurrency bugs there that had
>> >> to do
>> >> with clearing the name space of all the macros and repopulating it
>> >> again on
>> >> each parse which did not work at all with multiple threads.
>> >> One thread could clear the name space while another was doing a lookup,
>> >> etc.
>> >>
>> >> I think there was an effort to redesign that part in 2.0, but I have
>> >> not
>> >> looked at that yet.
>> >>
>> >> Alex
>> >>
>> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
>> >> bradley.wagner@hannonhill.com> wrote:
>> >>
>> >> > Hi,
>> >> >
>> >> > We recently made some changes to our software to use just a single
>> >> > VelocityEngine as per recommendations on this group.
>> >> >
>> >> > We ran into an issue where macros were all of a sudden being shared
>> >> > across template renders because we had not
>> >> > specified: velocimacro.permissions.allow.inline.local.scope = true.
>> >> > However, we also had not ever turned on caching in our props file
>> >> > with: class.resource.loader.cache = true.
>> >> >
>> >> > Does this mean that macros are cached separately from whatever is
>> >> > being
>> >> > cached in the class.resource.loader.cache cache? Is there any way to
>> >> > control that caching or is just using this property the
>> >> > way: velocimacro.permissions.allow.inline.local.scope = true
>> >> >
>> >> > One side effect of our recent changes is that the app seems to have
>> >> > an
>> >> > increased mem footprint. We're not *sure* it can be attributed to
>> >> velocity
>> >> > but I was trying to see what kinds of things Velocity could be
>> >> > hanging on
>> >> > to and how much memory they might be taking up.
>> >> >
>> >> > Thanks!
>> >> >
>> >>
>
>



Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
Nathan,

Tokens are referenced by org.apache.velocity.runtime.parser.node.ASTReference
objects, which in turn seem to be referenced by arrays of
org.apache.velocity.runtime.parser.node.Node objects.
Most of the classes referencing these things are AST classes in the
org.apache.velocity.runtime.parser.node package.

Here's our properties file:

runtime.log.logsystem.class =
org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine

runtime.log.error.stacktrace = false
runtime.log.warn.stacktrace = false
runtime.log.info.stacktrace = false
runtime.log.invalid.reference = true

input.encoding=UTF-8
output.encoding=UTF-8

directive.foreach.counter.name = velocityCount
directive.foreach.counter.initial.value = 1

resource.loader = class

class.resource.loader.description = Velocity Classpath Resource Loader
class.resource.loader.class =
org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader

velocimacro.permissions.allow.inline.local.scope = true

Thanks!
Bradley
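(For reference: if the concern is memory held by parsed templates, the
resource cache can also be turned on with an explicit size bound. The
property names below are from the Velocity 1.x configuration docs; worth
verifying against the exact version in use:)

```properties
class.resource.loader.cache = true
resource.manager.defaultcache.size = 89
```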

On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nb...@gmail.com> wrote:

> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
> <br...@hannonhill.com> wrote:
> > Thanks for the input.
> >
> > What we're seeing is that Velocity seems to be holding on to a lot
> > of org.apache.velocity.runtime.parser.Token objects (around 5 million).
> We
> > allow people to write arbitrary Velocity templates in our system and are
> > evaluating them with:
> >
> > VelocityEngine.evaluate(Context context, Writer writer, String logTag,
> > Reader reader)
> >
> > I was under the impression that Templates evaluated this way are
> inherently
> > not cacheable. Is that the case? If that's not true, is there a way to
> > control the cache Velocity is using for these?
>
> me too.  just out of curiosity, what properties are you using for
> configuration?  and can you tell any more about what class is holding
> onto those Tokens?
>
> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com> wrote:
> >
> >> I think that Velocity has one global hash table for macros from the *.vm
> >> libraries and that is more or less static for the life time of the
> Velocity
> >> engine.
> >>
> >> I wish there was a mechanism to control the list of the *.vm files
> >> and their order of lookup for each individual merge (thread). This would
> >> facilitate macro overloads based on the context.
> >> Unfortunately this feature is not available.
> >>
> >> I think the 1.7 behavior is (more or less):
> >>
> >> When template reference is found (i.e. #parse("x")) it is looked-up in
> the
> >> resource cache and if found there (with all the expiration checks, etc.)
> >> the parsed AST tree is used.
> >> If not found the template is loaded from the file, actually parsed and
> put
> >> into the cache. During the actual parsing process the macros that are
> >> defined in the template are put into the macro manager cache which is
> >> organized as:
> >> "defining template name (name space)" => "macro name" => AST macro code
> >> The AST is then rendered in the current context running #parse.
> >>
> >> When the time comes to call a macro there is a lookup process which can
> be
> >> influenced by some props, but the most general case is:
> >>
> >> 1. Lookup in the global *.vm files, if found use that.
> >> 2. Lookup in the same "name space" that calls the macro, if found use
> that.
> >> 3. Going back through the "list" of the #parse-d templates lookup in
> each
> >> name space on the stack.
> >>
> >> The stack can be actually very long too, for example
> >>
> >> #foreach($templ in [1..5])
> >>   #parse("${templ}.vtl")
> >> #end
> >>
> >> #mymacro()
> >>
> >> The lookup list here would contain:
> >>
> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
> >>
> >> This is true even for cases where the name is the same:
> >>
> >> #foreach($item in [1..5])
> >>   #parse('item.vtl')
> >> #end
> >>
> >> The lookup list here would contain:
> >>
> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
> >>
> >> There is no attempt to optimize the lookup list and collapse the
> >> duplicates.
> >>
> >> Unfortunately 1.7 also had some nasty concurrency bugs there that had
> to do
> >> with clearing the name space of all the macros and repopulating it
> again on
> >> each parse which did not work at all with multiple threads.
> >> One thread could clear the name space while another was doing a lookup,
> >> etc.
> >>
> >> I think there was an effort to redesign that part in 2.0, but I have not
> >> looked at that yet.
> >>
> >> Alex
> >>
> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
> >> bradley.wagner@hannonhill.com> wrote:
> >>
> >> > Hi,
> >> >
> >> > We recently made some changes to our software to use just a single
> >> > VelocityEngine as per recommendations on this group.
> >> >
> >> > We ran into an issue where macros were all of a sudden being shared
> >> > across template renders because we had not
> >> > specified: velocimacro.permissions.allow.inline.local.scope = true.
> >> > However, we also had not ever turned on caching in our props file
> >> > with: class.resource.loader.cache = true.
> >> >
> >> > Does this mean that macros are cached separately from whatever is
> being
> >> > cached in the class.resource.loader.cache cache? Is there any way to
> >> > control that caching or is just using this property the
> >> > way: velocimacro.permissions.allow.inline.local.scope = true
> >> >
> >> > One side effect of our recent changes is that the app seems to have an
> >> > increased mem footprint. We're not *sure* it can be attributed to
> >> velocity
> >> > but I was trying to see what kinds of things Velocity could be
> hanging on
> >> > to and how much memory they might be taking up.
> >> >
> >> > Thanks!
> >> >
> >>
>

Re: Macro caching and other caching

Posted by Nathan Bubna <nb...@gmail.com>.
On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
<br...@hannonhill.com> wrote:
> Thanks for the input.
>
> What we're seeing is that Velocity seems to be holding on to a lot
> of org.apache.velocity.runtime.parser.Token objects (around 5 million). We
> allow people to write arbitrary Velocity templates in our system and are
> evaluating them with:
>
> VelocityEngine.evaluate(Context context, Writer writer, String logTag,
> Reader reader)
>
> I was under the impression that Templates evaluated this way are inherently
> not cacheable. Is that the case? If that's not true, is there a way to
> control the cache Velocity is using for these?

Me too. Just out of curiosity, what properties are you using for
configuration? And can you tell any more about what class is holding
onto those Tokens?
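(To make the namespace lookup described below concrete: namespaces map
"defining template name" to "macro name" to macro body, and lookup walks
the global library, the caller's namespace, then the #parse stack. A
stdlib-only sketch; all class and file names here are hypothetical, not
Velocity internals:)

```java
import java.util.*;

/** Sketch of the macro lookup order Alex describes (not Velocity's API). */
public class MacroLookupSketch {
    // "defining template name (name space)" => "macro name" => macro body
    static final Map<String, Map<String, String>> namespaces = new HashMap<>();

    static void define(String ns, String macro, String body) {
        namespaces.computeIfAbsent(ns, k -> new HashMap<>()).put(macro, body);
    }

    /** 1. global library, 2. caller's namespace, 3. the #parse stack. */
    static String lookup(String macro, String caller, List<String> parseStack) {
        List<String> order = new ArrayList<>();
        order.add("VM_global_library.vm"); // global *.vm library first
        order.add(caller);                 // then the calling namespace
        order.addAll(parseStack);          // then the stack, duplicates and all
        for (String ns : order) {
            Map<String, String> macros = namespaces.get(ns);
            if (macros != null && macros.containsKey(macro)) {
                return macros.get(macro);
            }
        }
        return null; // not found anywhere
    }

    public static void main(String[] args) {
        define("item.vtl", "mymacro", "from item.vtl");
        // One stack entry per #parse, so the same name can repeat:
        List<String> stack = Arrays.asList("item.vtl", "item.vtl", "item.vtl");
        System.out.println(lookup("mymacro", "caller.vtl", stack)); // prints "from item.vtl"
    }
}
```

Note the stack is searched in order with no de-duplication, which matches the repeated item.vtl lookup list below.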

> On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com> wrote:
>
>> I think that Velocity has one global hash table for macros from the *.vm
>> libraries and that is more or less static for the life time of the Velocity
>> engine.
>>
>> I wish there was a mechanism to control the list of the *.vm files
>> and their order of lookup for each individual merge (thread). This would
>> facilitate macro overloads based on the context.
>> Unfortunately this feature is not available.
>>
>> I think the 1.7 behavior is (more or less):
>>
>> When template reference is found (i.e. #parse("x")) it is looked-up in the
>> resource cache and if found there (with all the expiration checks, etc.)
>> the parsed AST tree is used.
>> If not found the template is loaded from the file, actually parsed and put
>> into the cache. During the actual parsing process the macros that are
>> defined in the template are put into the macro manager cache which is
>> organized as:
>> "defining template name (name space)" => "macro name" => AST macro code
>> The AST is then rendered in the current context running #parse.
>>
>> When the time comes to call a macro there is a lookup process which can be
>> influenced by some props, but the most general case is:
>>
>> 1. Lookup in the global *.vm files, if found use that.
>> 2. Lookup in the same "name space" that calls the macro, if found use that.
>> 3. Going back through the "list" of the #parse-d templates lookup in each
>> name space on the stack.
>>
>> The stack can actually be very long too; for example:
>>
>> #foreach($templ in [1..5])
>>   #parse("${templ}.vtl")
>> #end
>>
>> #mymacro()
>>
>> The lookup list here would contain:
>>
>> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
>>
>> This is true even for cases where the name is the same:
>>
>> #foreach($item in [1..5])
>>   #parse('item.vtl')
>> #end
>>
>> The lookup list here would contain:
>>
>> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
>>
>> There is no attempt to optimize the lookup list and collapse the
>> duplicates.
>>
>> Unfortunately 1.7 also had some nasty concurrency bugs there that had to do
>> with clearing the name space of all the macros and repopulating it again on
>> each parse which did not work at all with multiple threads.
>> One thread could clear the name space while another was doing a lookup,
>> etc.
>>
>> I think there was an effort to redesign that part in 2.0, but I have not
>> looked at that yet.
>>
>> Alex
>>
>> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
>> bradley.wagner@hannonhill.com> wrote:
>>
>> > Hi,
>> >
>> > We recently made some changes to our software to use just a single
>> > VelocityEngine as per recommendations on this group.
>> >
>> > We ran into an issue where macros were all of a sudden being shared
>> > across template renders because we had not
>> > specified: velocimacro.permissions.allow.inline.local.scope = true.
>> > However, we also had not ever turned on caching in our props file
>> > with: class.resource.loader.cache = true.
>> >
>> > Does this mean that macros are cached separately from whatever is being
>> > cached in the class.resource.loader.cache cache? Is there any way to
>> > control that caching, or is using this property the only
>> > way: velocimacro.permissions.allow.inline.local.scope = true?
>> >
>> > One side effect of our recent changes is that the app seems to have an
>> > increased mem footprint. We're not *sure* it can be attributed to
>> velocity
>> > but I was trying to see what kinds of things Velocity could be hanging on
>> > to and how much memory they might be taking up.
>> >
>> > Thanks!
>> >
>>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@velocity.apache.org
For additional commands, e-mail: user-help@velocity.apache.org


Re: Macro caching and other caching

Posted by Bradley Wagner <br...@hannonhill.com>.
Thanks for the input.

What we're seeing is that Velocity seems to be holding on to a lot
of org.apache.velocity.runtime.parser.Token objects (around 5 million). We
allow people to write arbitrary Velocity templates in our system and are
evaluating them with:

VelocityEngine.evaluate(Context context, Writer writer, String logTag,
Reader reader)

I was under the impression that Templates evaluated this way are inherently
not cacheable. Is that the case? If that's not true, is there a way to
control the cache Velocity is using for these?

On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <al...@kayak.com> wrote:

> I think that Velocity has one global hash table for macros from the *.vm
> libraries and that is more or less static for the lifetime of the Velocity
> engine.
>
> I wish there were a mechanism to control the list of the *.vm files
> and their order of lookup for each individual merge (thread). This would
> facilitate macro overloads based on the context.
> Unfortunately this feature is not available.
>
> I think the 1.7 behavior is (more or less):
>
> When a template reference is found (e.g. #parse("x")), it is looked up in the
> resource cache and if found there (with all the expiration checks, etc.)
> the parsed AST tree is used.
> If not found, the template is loaded from the file, parsed, and put
> into the cache. During the actual parsing process the macros that are
> defined in the template are put into the macro manager cache which is
> organized as:
> "defining template name (name space)" => "macro name" => AST macro code
> The AST is then rendered in the current context running #parse.
>
> When the time comes to call a macro there is a lookup process which can be
> influenced by some props, but the most general case is:
>
> 1. Lookup in the global *.vm files, if found use that.
> 2. Lookup in the same "name space" that calls the macro, if found use that.
> 3. Going back through the "list" of #parse-d templates, look up in each
> name space on the stack.
>
> The stack can actually be very long too, for example:
>
> #foreach($templ in [1..5])
>   #parse("${templ}.vtl")
> #end
>
> #mymacro()
>
> The lookup list here would contain:
>
> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
>
> This is true even for cases where the name is the same:
>
> #foreach($item in [1..5])
>   #parse('item.vtl')
> #end
>
> The lookup list here would contain:
>
> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
>
> There is no attempt to optimize the lookup list and collapse the
> duplicates.
>
> Unfortunately 1.7 also had some nasty concurrency bugs there that had to do
> with clearing the name space of all the macros and repopulating it again on
> each parse which did not work at all with multiple threads.
> One thread could clear the name space while another was doing a lookup,
> etc.
>
> I think there was an effort to redesign that part in 2.0, but I have not
> looked at that yet.
>
> Alex
>
> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
> bradley.wagner@hannonhill.com> wrote:
>
> > Hi,
> >
> > We recently made some changes to our software to use just a single
> > VelocityEngine as per recommendations on this group.
> >
> > We ran into an issue where macros were all of a sudden being shared
> > across template renders because we had not
> > specified: velocimacro.permissions.allow.inline.local.scope = true.
> > However, we also had not ever turned on caching in our props file
> > with: class.resource.loader.cache = true.
> >
> > Does this mean that macros are cached separately from whatever is being
> > cached in the class.resource.loader.cache cache? Is there any way to
> > control that caching, or is using this property the only
> > way: velocimacro.permissions.allow.inline.local.scope = true?
> >
> > One side effect of our recent changes is that the app seems to have an
> > increased mem footprint. We're not *sure* it can be attributed to
> velocity
> > but I was trying to see what kinds of things Velocity could be hanging on
> > to and how much memory they might be taking up.
> >
> > Thanks!
> >
>

Re: Macro caching and other caching

Posted by Alex Fedotov <al...@kayak.com>.
I think that Velocity has one global hash table for macros from the *.vm
libraries and that is more or less static for the lifetime of the Velocity
engine.

I wish there were a mechanism to control the list of the *.vm files
and their order of lookup for each individual merge (thread). This would
facilitate macro overloads based on the context.
Unfortunately this feature is not available.

I think the 1.7 behavior is (more or less):

When a template reference is found (e.g. #parse("x")), it is looked up in the
resource cache and if found there (with all the expiration checks, etc.)
the parsed AST tree is used.
If not found, the template is loaded from the file, parsed, and put
into the cache. During the actual parsing process the macros that are
defined in the template are put into the macro manager cache which is
organized as:
"defining template name (name space)" => "macro name" => AST macro code
The AST is then rendered in the current context running #parse.
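That two-level organization can be pictured as nested maps. The sketch below is illustrative Java only; `MacroCacheSketch` and `MacroAst` are stand-in names, not Velocity's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the macro manager cache described above:
// "defining template name (namespace)" -> "macro name" -> macro AST.
// MacroAst is a stand-in for Velocity's parsed macro body.
class MacroCacheSketch {
    static final class MacroAst {
        final String body;
        MacroAst(String body) { this.body = body; }
    }

    private final Map<String, Map<String, MacroAst>> byNamespace = new HashMap<>();

    // Called during parsing, when a #macro definition is encountered.
    void define(String namespace, String macroName, MacroAst ast) {
        byNamespace.computeIfAbsent(namespace, k -> new HashMap<>())
                   .put(macroName, ast);
    }

    // Lookup within a single namespace; returns null if absent.
    MacroAst lookup(String namespace, String macroName) {
        Map<String, MacroAst> ns = byNamespace.get(namespace);
        return ns == null ? null : ns.get(macroName);
    }
}
```

Note that because the outer key is the defining template's name, macros defined in one namespace are invisible to a direct lookup in another, which is why the lookup walk described next exists.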

When the time comes to call a macro there is a lookup process which can be
influenced by some props, but the most general case is:

1. Lookup in the global *.vm files, if found use that.
2. Lookup in the same "name space" that calls the macro, if found use that.
3. Going back through the "list" of #parse-d templates, look up in each
name space on the stack.

The stack can actually be very long too, for example:

#foreach($templ in [1..5])
  #parse("${templ}.vtl")
#end

#mymacro()

The lookup list here would contain:

1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl

This is true even for cases where the name is the same:

#foreach($item in [1..5])
  #parse('item.vtl')
#end

The lookup list here would contain:

item.vtl, item.vtl, item.vtl, item.vtl, item.vtl

There is no attempt to optimize the lookup list and collapse the duplicates.
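The three-step lookup order above can be sketched as a plain Java walk (again, illustrative names only, not the Velocity API):

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of the macro lookup order described above:
// 1. global *.vm libraries, 2. the calling namespace, 3. each
// namespace on the #parse stack, in order (duplicates and all).
class MacroLookupSketch {
    static String resolve(String macroName,
                          Map<String, String> globalLibraries,
                          String callingNamespace,
                          Map<String, Map<String, String>> namespaces,
                          List<String> parseStack) {
        // 1. Global *.vm libraries win first.
        if (globalLibraries.containsKey(macroName)) {
            return globalLibraries.get(macroName);
        }
        // 2. Then the namespace that contains the call itself.
        Map<String, String> own = namespaces.get(callingNamespace);
        if (own != null && own.containsKey(macroName)) {
            return own.get(macroName);
        }
        // 3. Finally, walk the (possibly duplicate-laden) #parse stack.
        for (String ns : parseStack) {
            Map<String, String> defs = namespaces.get(ns);
            if (defs != null && defs.containsKey(macroName)) {
                return defs.get(macroName);
            }
        }
        return null; // not found anywhere
    }
}
```

Under this sketch the `item.vtl` example above simply scans the same namespace five times, which illustrates why an unoptimized stack can make lookups slower than expected.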

Unfortunately 1.7 also had some nasty concurrency bugs there that had to do
with clearing the name space of all the macros and repopulating it again on
each parse which did not work at all with multiple threads.
One thread could clear the name space while another was doing a lookup, etc.

I think there was an effort to redesign that part in 2.0, but I have not
looked at that yet.

Alex

On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
bradley.wagner@hannonhill.com> wrote:

> Hi,
>
> We recently made some changes to our software to use just a single
> VelocityEngine as per recommendations on this group.
>
> We ran into an issue where macros were all of a sudden being shared
> across template renders because we had not
> specified: velocimacro.permissions.allow.inline.local.scope = true.
> However, we also had not ever turned on caching in our props file
> with: class.resource.loader.cache = true.
>
> Does this mean that macros are cached separately from whatever is being
> cached in the class.resource.loader.cache cache? Is there any way to
> control that caching, or is using this property the only
> way: velocimacro.permissions.allow.inline.local.scope = true?
>
> One side effect of our recent changes is that the app seems to have an
> increased mem footprint. We're not *sure* it can be attributed to velocity
> but I was trying to see what kinds of things Velocity could be hanging on
> to and how much memory they might be taking up.
>
> Thanks!
>