Posted to dev@velocity.apache.org by Jose Alberto Fernandez <JF...@viquity.com> on 2000/12/17 12:39:12 UTC

RE: [PATCH] Local contexts and forward macros [LONG BUT IMPORTANT ]

> From: Geir Magnusson Jr. [mailto:geirm@optonline.net]
> 
> 
> 
> 0) There is a huge problem with arg passing-by-value for VMs.  I will
> explain later.  This is something I too have been working on to solve
> some recursion issues using an approach suggested in this 
> patch, and my
> work today with Jose's patch as well as my own version shows me
> conclusively that unless we wish to limit the functionality of VMs we
> need to continue to search for another solution.  (Of course, the
> performance numbers beg the question "solution to what?", but more on
> that later.)
> 
> 1) the majority of the performance improvement came from simply by
> removing the call containsKey() before get() in
> ASTReference.getVariableValue(), so that goes in.  Good one, 
> Jose!  Bad
> one, us! :)
> 
> More detail inline.
> 
> Jose Alberto Fernandez wrote:
> > 
> > Hi,
> > 
> > This patch tries to solve several things:
> > 
> > - Eliminates the multiple parsing of Macros by passing 
> macro parameters by
> > value.
> 
> <shaking head> :)  Ok.  Consider that strawman dead.
> 
> In the interest of honesty, I will point out that the above is
> conditionally true : the VM was parsed once per use per template, just
> like everything else in the template.  After that, it was cached like
> everything else in the template.  So the whole issue of 
> repeated parsing
> and patching in a given template is nonexistant if caching was turned
> on, and rather mimimal if not.
> 

OK, someone will have to explain to me what this "turned on caching" thing is.
I have not found any switch, or any part of the code that takes such a switch
into account, for caching. I do not know all the code yet, so any pointers
would be helpful.

With respect to parsing, my reading of the code was that, for example, in the
case of a recursive macro, every expansion of the macro will be parsed again.
Since these expansions are data-driven, I do not see how the code could be
caching them.

> Now, I was interested in proving my assertion, so I changed 
> ASTReference
> to not search with containsKey(), but simply return the 
> output of get()
> (as suggested by Jose's patch), and this helped quite a bit. 
> Performance of the test program, 5 iterations of the killer processing
> (the actual numbers reveal the cruddyness of my box compared 
> to Jose's,
> but that doesn't matter) :
> 
> per iteration, no template caching (template parsed each time) :
> 71cvs : ave 13.988 sec
> 71new : ave 13.545 sec
> patch : ave 13.529 sec
> 
> with template caching :
> 71new : ave 13.072 sec
> patch : ave 12.939 sec
> 
> So there is a big improvement in the ASTReference fix.  The full patch
> has a slight advantage, but its very small.
> 

Well, if you want to save even more, take a look at ASTStringLiteral.java:
the code tests whether to interpolate strings, on every single string, by
calling Runtime.getBoolean(), and that function creates several object
instances every time it is called. That is really, really expensive, and it
certainly could be cached somewhere.
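A rough Java sketch of the caching idea: read the flag once at construction
time instead of calling Runtime.getBoolean() on every render. All class and
property names here are illustrative, not Velocity's actual internals.

```java
import java.util.Properties;

// Sketch: the interpolation flag is resolved once, at init time,
// instead of being re-read from the configuration on every render.
public class StringLiteralNode {
    // cached once; avoids re-parsing the property per string literal
    private final boolean interpolate;

    public StringLiteralNode(Properties config) {
        this.interpolate = Boolean.parseBoolean(
            config.getProperty("runtime.interpolate.string.literals", "true"));
    }

    public String render(String literal) {
        return interpolate ? doInterpolation(literal) : literal;
    }

    private String doInterpolation(String literal) {
        // real interpolation would evaluate embedded $references;
        // here we just tag the string to show which branch was taken
        return "[interpolated]" + literal;
    }
}
```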

> Again, I am testing the patch versus the current CVS tree with a small
> change to ASTReference - 71new is still using the 'dreaded'
> patch-and-parse approach to VMs.
> 
> The conclusion I draw, to spell it out, is that there isn't a
> performance problem with the current VM implementation. (Although I
> agree its a bit yecchy.)  But its necessary to support
> pass-by-reference.
> 

Just to make things precise, what we currently have is called pass-by-name,
which means that at every occurrence of the formal parameter in the macro,
the entire expression used as the actual argument is re-evaluated. In a
programming language:

call proc(arr[aVeryComplicatedFunctionReturning2asResult()])

with a declaration:

procedure proc(int p);

If p is by value, p is the value contained in arr[2].
If p is by reference, p corresponds to the entry arr[2], allowing arr[2] to
be modified.
If p is by name, p is arr[aVeryComplicatedFunctionReturning2asResult()],
and every time you mention p you re-evaluate the whole thing.

Very cool, but so expensive in practice that no modern programming language
supports pass-by-name. Some support function references (i.e.
continuations), but that is a completely different beast.
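The difference can be sketched in Java by simulating pass-by-name with a
deferred expression; the counter shows how often the argument is evaluated.
All names here are invented for illustration:

```java
import java.util.function.Supplier;

// Contrast of calling conventions using the proc(arr[f()]) example above.
// A pass-by-name argument is re-evaluated on every use, so f() runs again
// each time the parameter is mentioned.
public class PassingModes {
    static int calls = 0;

    static int f() { calls++; return 2; }  // "aVeryComplicatedFunction..."

    // by value: the expression is evaluated once, before the call
    static int byValue(int p) { return p + p; }

    // by name: every mention of p re-evaluates the whole argument expression
    static int byName(Supplier<Integer> p) { return p.get() + p.get(); }

    public static void main(String[] args) {
        int[] arr = {10, 20, 30};

        calls = 0;
        byValue(arr[f()]);           // f() evaluated once, in the argument
        System.out.println(calls);   // 1

        calls = 0;
        byName(() -> arr[f()]);      // f() evaluated at each use of p
        System.out.println(calls);   // 2
    }
}
```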

But in any case, I will follow up with my answers later.

> > - It solves the problem of forward references for macros, 
> or macros used
> > before they are defined in the template. This is achieved 
> by a two pass lazy
> > compilation of macros.
> 
> No, actually, it doesn't.  Try the following :
> 
> #woogie("foo")
> 
> #macro( woogie $i)
>   Hello from woogie! >$i<
> #end
> 

You are right. The case I had in mind is when you have multiple macros
calling each other. Macros are reparsed the first time they are actually
called, and that solves that case. Unfortunately, the top-level template
itself is not reparsed in my solution.

> I have had a fix in mind for a while, and will hopefully get it done
> this weekend.  If you have a fix, please let me know so I don't waste
> any [more] time.  But I will hint that a totally different
> approach must be taken.
> 

The main problem is with the current grammar, which bails out of the
directives if the velocimacro is not found. If that were not the case, this
would be easy to solve. But as with other things, I was not sure why the
code bails out in the first place, so I decided not to touch it.
Can you please explain this feature? If it is just for error reporting,
I think that can be handled by the velocimacro itself.

> I tested it, and the timings were consistant with not supporting the
> nodal introspection caching.  I guess you might have saved 2 
> days if we
> corresponded and collaborated a little on this project since 
> I was able
> to guess the problem in about 0.1 sec (before morning coffee even...)
> after I saw the timing ratios :) but each to his own, I guess.
> 
> Once I threw that in, the performance returned to a touch slower than
> the not using the MacroContext.
> 

OK, you caught me here: what the heck is "nodal introspection caching"?
This code seems to have all kinds of things buried inside for no one to see.
Is there a document or something that explains all the caching techniques
and other things to take into consideration when collaborating on the
project? I have not seen anything on the web pages, but I may have been
looking in the wrong place.

> > 
> > Here is the patch as well as the modified files for your 
> study, I had to
> > change a couple of method signatures in Context for 
> everything to hold
> > toghether, but no big deal.
> 
> That actually is a big deal of sorts, as you can't just drop 
> in the jar
> anymore to compiled code and have things work, I think because of the
> change in signature to Context.put().  However, I think we 
> should do it
> anyway for the future, as it's useful.  We piss off fewer people now. 
> Comments please.
> 

There are alternatives to changing the signature of Context.put(), such as
adding a new method "Object replace(name, val)", if we think the cost of
changing put() is too high. The advantage of having such a signature is that
one can modify a key and learn its previous value in one operation instead
of two (without requiring a Context.containsKey() beforehand). The advantage
of modifying the signature of Context.put() instead is that it brings it
more in line with Hashtable.put(), which is the pattern we seem to follow.
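A minimal sketch of the Hashtable.put()-style signature (illustrative, not
the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: put() returns the previous value, like Hashtable.put(), so a
// caller can save and later restore a key in one operation instead of
// containsKey() + get() + put().
public class SimpleContext {
    private final Map<String, Object> map = new HashMap<>();

    public Object put(String name, Object value) {
        return map.put(name, value);  // previous value, or null if absent
    }

    public Object get(String name) {
        return map.get(name);
    }
}
```

Typical save-and-restore use would be `Object old = ctx.put("p", arg);` at
macro entry and `ctx.put("p", old);` at exit. Note the subtlety: restoring a
null "old" value re-binds the key to null rather than removing it, which is
exactly why the remove-on-null handling discussed elsewhere in this thread
is needed.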

> Anyway, regarding the issue with losing pass-by-reference :
> 
> Imagine a you have an object with a method that 'does 
> something', rather
> than just return data.  Suppose it increments an internal counter and
> returns the value :
> 
> class Foo {
>   private int i = 0;
>   public int another() { return ++i; }
> }
> 
> and you stuff it into your context : context.put( "provider", 
> new Foo()
> )
> 
> and a template :
> 
> #macro( foo $i $count )
>   $i
>   #set($count = $count - 1 )
>   #if( $count > 0)
>     #foo( $i $count )
>   #end
> #end
> 
> by reference :
> #set($count = 5)
> #set( $blargh = "a")
> #foo( $provider.another()  $count)
> 
> by value : 
> #set( $count = 5 )
> #set( $val = $provider.another() )
> #foo( $val $count )
> 
.....
> 
> So you can see, we lose something valuable, with little gain 
> performance
> wise.
> 

Well, this is exactly what I was trying to accomplish. One of the problems
with pass-by-name (a.k.a. pass-by-ref in this discussion) is that it is
very expensive globally, for the rare cases in which it would be useful.

First, I do not think it is at all obvious to naive users that when you have
something like:

#macro(a $p)#b($p.mb) $p#end
#macro(b $p)#c($p.mc) $p#end
#macro(c $p)#d($p.md) $p#end
#macro(d $p)$p #end
#a($var.ma)

Assume they are ordered the other way around, due to forward references if
you want. I do not think it is clear to users that this will execute as:

 $var.ma.mb.mc.md  $var.ma.mb.mc $var.ma.mb $var.ma

without any caching of the intermediate common expressions. With pass by
value, caching is implicit: a reduction from 10 evaluations to 4. I think we
should be able to agree that this re-evaluation of expressions is a major
performance hit. So, for example, when exploring a tree structure, you will
be walking from the root for every leaf you want to print, unless you use
#set to keep intermediate values in the context. But the context is global,
so it will be very difficult, if not impossible, to use correctly in
recursive macros, which is exactly what one needs for generating output
from things like JDOM.
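The 10-versus-4 count can be checked with a small Java model of the macro
chain above, where each property access bumps a counter (purely
illustrative, not Velocity code):

```java
// Models the #a($var.ma) chain: every ".m*" access is a method call on a
// node that counts how often it is touched.
public class EvalCount {
    static int accesses = 0;

    static class Node {
        Node m() { accesses++; return this; }
    }

    // pass-by-name: each macro level re-evaluates its argument expression,
    // so d's $p costs 4 accesses, c's 3, b's 2, a's 1 -> 10 total
    static void byName(Node var) {
        var.m().m().m().m();  // d's $p
        var.m().m().m();      // c's $p
        var.m().m();          // b's $p
        var.m();              // a's $p
    }

    // pass-by-value: each level evaluates its argument once -> 4 total
    static void byValue(Node var) {
        Node a = var.m();
        Node b = a.m();
        Node c = b.m();
        c.m();
    }

    public static void main(String[] args) {
        accesses = 0; byName(new Node());  System.out.println(accesses); // 10
        accesses = 0; byValue(new Node()); System.out.println(accesses); // 4
    }
}
```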



> The second VM feature that breaks is that you can pass a reference
> through to the VM even if it doesn't have a value. As a 
> matter of fact,
> Velocity with this patch won't pass the testbed because of this.
> (Passing the testbed is a good thing for patches :)  Ex :
> 
> #macro (woogie $i)
>  >$i<
> #end
> 
> #woogie( $not-a-reference )
> 
> Under the current VM implementation, the output would (should) be :
> 
> >$not-a-reference<
> 
> where the pass-by-value approach results in  :
> 
> >$i<
> 
> This is valuable, I think, in that in debugging, you can see where and
> what is wrong right in the output, rather than divining from the log,
> because the output will show you what reference isn't right.
> 

Sorry, but I do not buy this particular argument. This is an error, and
errors belong in the log file, where we should do a better job of indicating
what went wrong when evaluating the reference.

> I tried a few things to get around this, but to no avail. 
> 
> There is one further problem with pass by value - I think any solution
> that solves the above issue must make sure it doesn't break the
> following behavior :
> 
> #macro( iffy $i )
>   #if($i)
>     valid!
>   #else
>     invalid!
>   #end
> #end
> 
> #set($valid = "valid")
> #iffy( $valid )
> #iffy( $invalid )
> 
> If the solution is one that places something into the context 
> such that
> it will properly render an invalid reference, we will lose the
> functionality of #if() - no longer is that reference invalid (there is
> something in the context, after all), so #if returns true, 
> when actually
> it should return false.
> 

I think both implementations I provided work correctly with respect to #if;
I do not think my tests would have succeeded otherwise. Notice that the
implementation makes sure that parameters having NULL values are effectively
removed from the context, if they were already there, and restored to their
old value on exit from the call.
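In Java pseudocode, that save-and-restore discipline looks roughly like this
(a sketch of the idea, not the patch's actual code; all names invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: before a macro call, the parameter's old binding is saved; a null
// argument removes the key entirely so #if($param) still sees an invalid
// reference; on exit the old binding (or absence) is restored.
public class MacroCall {
    public static Object invoke(Map<String, Object> ctx, String param,
                                Object arg,
                                Function<Map<String, Object>, Object> body) {
        boolean existed = ctx.containsKey(param);
        Object saved = ctx.get(param);
        if (arg == null) {
            ctx.remove(param);        // keep #if($param) false inside
        } else {
            ctx.put(param, arg);
        }
        try {
            return body.apply(ctx);   // "render" the macro body
        } finally {
            if (existed) ctx.put(param, saved);  // restore old binding
            else ctx.remove(param);
        }
    }
}
```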

> Anyhow, I think that the observation about ASTReference was right on,
> and we got some good performance improvement from it.  But the other
> stuff simply breaks existing features (the fact that it fails the
> testbed should have hinted at that), and I wouldn't want to 
> do that.  I
> personally like the pass-by-reference feature, because I can have both
> pass-by-ref and pass-by-val.  With only pass-by-val, users lose
> something.  Now, we could abandon those features, but I am 
> not sure what
> we are getting in return.  Certainly not a tremendous 
> preformance boost,
> as far as I can tell. It does help with one specific 
> recursion bug, but
> there is an easy workaround for that, and I hope to have an 
> internal fix
> for that later.
> 

Any template that calls a macro with a time-consuming operation, and then
uses the parameter several times throughout the macro body, will have a
hidden slowdown that is not easy to find by users who are not programmers
and have no idea that each occurrence re-evaluates the entire expression.

So for example if I just use:

 ...#quietnull($db.getFromDBtheMiddleNameOfPerson())...

it turns out I will be going to the database twice! Now, how is the user to
know that such a thing is happening? He definitely does not need to know
about the implementation of #quietnull:

#macro(quietnull $a)
#if($a)$a#end
#end

It should have been:

#macro(quietnull $a)
#set($quietnullvar = $a)
#if($quietnullvar)$quietnullvar#end
#end

But wouldn't the #set operation produce an error if $a evaluates to null?
Hmm, how can #quietnull be re-written?
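The double evaluation behind #quietnull can be modeled in Java with a
counting stub standing in for the database call (names are invented for
illustration):

```java
import java.util.function.Supplier;

// With pass-by-name, the macro body touches $a twice (once in #if, once to
// render it), so the expensive call runs twice; binding the result to a
// local first runs it once. "fetchMiddleName" is a counting stub.
public class QuietNull {
    static int dbCalls = 0;

    static String fetchMiddleName() { dbCalls++; return "Q"; }

    // original macro: #if($a)$a#end, with $a re-evaluated per mention
    static String quietNullByName(Supplier<String> a) {
        return (a.get() != null) ? a.get() : "";
    }

    // rewritten macro: #set($quietnullvar = $a) first, then use the local
    static String quietNullCached(Supplier<String> a) {
        String v = a.get();
        return (v != null) ? v : "";
    }

    public static void main(String[] args) {
        dbCalls = 0;
        quietNullByName(QuietNull::fetchMiddleName);
        System.out.println(dbCalls);  // 2 database hits

        dbCalls = 0;
        quietNullCached(QuietNull::fetchMiddleName);
        System.out.println(dbCalls);  // 1 database hit
    }
}
```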

Jose Alberto

Re: [PATCH] Local contexts and forward macros [LONG BUT IMPORTANT]

Posted by "Geir Magnusson Jr." <ge...@optonline.net>.
Jose Alberto Fernandez wrote:

> [snip]
>
> > > - Eliminates the multiple parsing of Macros by passing
> > macro parameters by
> > > value.
> >
> > <shaking head> :)  Ok.  Consider that strawman dead.
> >
> > In the interest of honesty, I will point out that the above is
> > conditionally true : the VM was parsed once per use per template, just
> > like everything else in the template.  After that, it was cached like
> > everything else in the template.  So the whole issue of
> > repeated parsing
> > and patching in a given template is nonexistant if caching was turned
> > on, and rather mimimal if not.
> >
> 
> OK, someone will have to explain me what is this "turned on caching" thing.
> I have not found any switch or part of the code that takes into account some
> switch for caching. I do not know all the code yet so any pointers will be
> helpful.

There's a property template.loader.File.cache or template.loader.1.cache
that does this.  Come on...

Look in runtime/defaults/velocity.properties for (most of) the
defaults.  And it didn't make much of a difference anyway, as parsing is so
cheap compared to the processing.

> 
> With respect to parsing, my reading of the code was that for example in the
> case of a recursive macro, every expansion of the macro will be parsed
> again. Now since this expansions are data driven, I do not see how is the
> code caching them.

Yes, that's true, once.  I agree that for every 'use-instance' of a VM,
it's patched and parsed.  I am not disagreeing with that.  After that,
the cached AST keeps it.

> 
> > Now, I was interested in proving my assertion, so I changed
> > ASTReference
> > to not search with containsKey(), but simply return the
> > output of get()
> > (as suggested by Jose's patch), and this helped quite a bit.
> > Performance of the test program, 5 iterations of the killer processing
> > (the actual numbers reveal the cruddyness of my box compared
> > to Jose's,
> > but that doesn't matter) :
> >
> > per iteration, no template caching (template parsed each time) :
> > 71cvs : ave 13.988 sec
> > 71new : ave 13.545 sec
> > patch : ave 13.529 sec
> >
> > with template caching :
> > 71new : ave 13.072 sec
> > patch : ave 12.939 sec
> >
> > So there is a big improvement in the ASTReference fix.  The full patch
> > has a slight advantage, but its very small.
> >
> 
> Well, if you want to save even more, take a look at ASTStringLiteral.java
> the code is going and testing whether to interpolate strings on every string
> by using Runtime.getBoolean() that function creates several object instances
> every time is called, really, really expensive. It certaintly could be
> cached somewhere.

Great!  Good call!  We'll fix that.

But that has nothing to do with the above.  My point is to figure out
the true performance issues, to kinda cut through the fog of our
perceptions, and try to actually measure. That's all.


> > Again, I am testing the patch versus the current CVS tree with a small
> > change to ASTReference - 71new is still using the 'dreaded'
> > patch-and-parse approach to VMs.
> >
> > The conclusion I draw, to spell it out, is that there isn't a
> > performance problem with the current VM implementation. (Although I
> > agree its a bit yecchy.)  But its necessary to support
> > pass-by-reference.
> >
> 
> Just to make things precise, what we currently have is called pass-by-name,
> which is defined as that in every ocurrence of the formal parameter in the
> macro, the entire expression used as the actual argument will be
> re-evaluated. In a programming language:

Yes.  Fine.  'Pass-by-name' it shall be referred to henceforth.  Thank
you.

> call proc(arr[aVeryComplicatedFunctionReturning2asResult()])
> 
> with a declaration:
> 
> procedure proc(int p);
> 
> If p is by value, p is the value contained in arr[2].
> If p is by ref, p correspond to the entry arr[2], allowing arr[2] to be
> modified.
> If p is by name, p is arr[aVeryComplicatedFunctionReturning2asResult()]
> and every time you mention p you re-evaluate the whole thing.
> 
> Very cool, but so expensive in reality that no modern prog lang
> supports pass-by-name. They may support function references (i.e.
> continuations) but that is a completely different beast.

We aren't trying to write a programming language here.  #macros() are
not #subroutines().  We are trying to write a template engine, with a
simple template language to help non-programmer people create dynamic
output.  The original intent I think is web pages, but it's good for
other things, as you have definitely proven.

> [snip]
>
> > I have had a fix in mind for a while, and will hopefully get it done
> > this weekend.  If you have a fix, please let me know so I don't waste
> > any [more] time.  But I will hint that a totally different
> > approach must be taken.
> >
> 
> The main problem is with the current grammar, which bails out of the
> directives if the velocimacro is not found. If this was not the case, it
> would be easy to solve. But as with other stuff, I was not sure why is the
> code bailing out in the first place, so I dicided not to touch it.
> Can you please explain this feature? If it is just for the error reporting
> I think that can be solved by the velocimacro itself.

No, that is done to stop the parser from parsing what is really text
into a VM or PD.  There are two approaches I can think of offhand: the
first is to let the parser go and fix it all at runtime, deciding then
whether you are a PD or VM or not, and rendering appropriately.  The
problem there seems to be that escape logic has to propagate out of the
parser into the nodes, which I think should be avoided.  The other
solution is simply to examine the template at parse time, and parse it
twice.  This is of course intellectually offensive, but it gets things
done, and if you use that magical caching thing...

> 
> > I tested it, and the timings were consistant with not supporting the
> > nodal introspection caching.  I guess you might have saved 2
> > days if we
> > corresponded and collaborated a little on this project since
> > I was able
> > to guess the problem in about 0.1 sec (before morning coffee even...)
> > after I saw the timing ratios :) but each to his own, I guess.
> >
> > Once I threw that in, the performance returned to a touch slower than
> > the not using the MacroContext.
> >
> 
> OK, you cought me here, what the heck is "nodal instrospection caching"?

The stuff I worked on to get your performance problem to go away after I
introduced it by moving the introspection work to runtime rather than
init() time, to make us thread- and context-safe.  Not only did the
commit messages get posted to the list, of course, but I think I mailed
you personally asking you to test things out to make sure all was cool.
You did that, if I remember, and reported that things were pretty much
back to normal.

> This code seem to have all kinds of things burried inside for no one to see.
> Is there a document or something that explains all the caching techniques
> and other things to take into consideration when collaborating on the
> project? I have not seen anything on the web pages, but I may have been
> looking in the wrong place.

What?  It's there for all to see.  You get the source with the distro, you
can get it with anonymous CVS, and you can even browse the code via the web.
Well, we could make Velocity one big huge method, which would solve that
whole burying issue, but it would make maintenance, comprehension, and
collaboration challenging.  I don't want to just say 'read the code',
but we don't have those docs yet.  It's still a work in progress, as you
know, so documenting the internals might not be the best use of scarce
resources at this moment.

> 
> > >
> > > Here is the patch as well as the modified files for your
> > study, I had to
> > > change a couple of method signatures in Context for
> > everything to hold
> > > toghether, but no big deal.
> >
> > That actually is a big deal of sorts, as you can't just drop
> > in the jar
> > anymore to compiled code and have things work, I think because of the
> > change in signature to Context.put().  However, I think we
> > should do it
> > anyway for the future, as it's useful.  We piss off fewer people now.
> > Comments please.
> >
> 
> There are alternatives to changing the signature of Context.put(), like
> adding a new method "Object replace(name, val)", if we think the cost of
> changing put() is too high. 

I don't think it's too high right now.  It's just that we have to make
sure all the usual suspects know about it so they can rebuild (the 3
T's: TDK, Torque, Turbine).

> The advantage of having such a signature is that
> one can modify and test if the impact of the change in one operation instead
> of two (by requiring a Context.containsKey()) before hand. The advantage of
> modifying the signature of Context.put() is that it will put it more in line
> with Hashtable.put() which is the pattern that we seem to follow.

That's why I liked it.

> > Anyway, regarding the issue with losing pass-by-reference :
> >
> > Imagine a you have an object with a method that 'does
> > something', rather
> > than just return data.  Suppose it increments an internal counter and
> > returns the value :
> >
> > class Foo {
> >   private int i = 0;
> >   public int another() { return ++i; }
> > }
> >
> > and you stuff it into your context : context.put( "provider",
> > new Foo()
> > )
> >
> > and a template :
> >
> > #macro( foo $i $count )
> >   $i
> >   #set($count = $count - 1 )
> >   #if( $count > 0)
> >     #foo( $i $count )
> >   #end
> > #end
> >
> > by reference :
> > #set($count = 5)
> > #set( $blargh = "a")
> > #foo( $provider.another()  $count)
> >
> > by value :
> > #set( $count = 5 )
> > #set( $val = $provider.another() )
> > #foo( $val $count )
> >
> .....
> >
> > So you can see, we lose something valuable, with little gain
> > performance
> > wise.
> >
> 
> Well, this was exactly what I was trying to acomplish, one of the problems
> with pass-by-name (a.k.a pass-by-ref in this discussion) is that it is
> very expensive globally, for the rare cases in which it would be useful.

But quite frankly, I don't think the expense has been demonstrated.  And
I don't think this is some sort of contrived example that would be
useful only in 'rare cases'.  Suppose you had a little object that generated
alternating colors in a table?  Wouldn't losing pass-by-name completely
remove the usefulness of context tools?  (I am pre-coffee again...)
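Something like this little sketch is what I mean (a hypothetical example,
not an existing Velocity tool):

```java
// A stateful context tool: each evaluation of $color.next returns the other
// color, so a template can alternate table-row colors. Under pass-by-value
// the argument would be evaluated once and the alternation would be lost.
public class AlternatingColor {
    private final String[] colors = {"#eeeeee", "#cccccc"};
    private int i = 0;

    public String getNext() {
        String c = colors[i];
        i = (i + 1) % colors.length;
        return c;
    }
}
```

In a template this would be used as `context.put("color", new
AlternatingColor())` in Java, then `$color.next` once per row in the VTL.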

> First I do not think it is too obvious for naive users to understand that
> when you have somethig like:
> 
> #macro(a $p)#b($p.mb) $p#end
> #macro(b $p)#c($p.mc) $p#end
> #macro(c $p)#d($p.md) $p#end
> #macro(d $p)$p #end
> #a($var.ma)
> 
> Assume they are ordered the other way arround, due to forward references if
> you want. I do not think it is clear to users that this will execute as:
> 
>  $var.ma.mb.mc.md  $var.ma.mb.mc $var.ma.mb $var.ma

Yes, there are clearly issues with VMs.  I recognize that.  You see the
same thing in recursion. 

> without any caching of the intermediate common expressions. With pass by
> value, caching is implicit a reduction from 10 to 4. I think that we should
> be able to agree that this re-evaluation of expression is a major performace
> hit.

I don't want to 'agree'.  I want to be 'shown'.  I tried for a long, long
time yesterday to show myself that there is a 'major' performance hit,
and I couldn't.  No one else has either, I think.  We all just look at
the code, harrumph, and say, "No, that can't be good."

Please, prove me wrong.  I want to be wrong here.

> So, for example, in the case of exploring a tree structure, you will be
> going from the root on every leaf you want to print, unless you use #set to
> keep intermediate values in the context. But, the context is global, so it
> will be very dificult if not impossible to use correctly on recursive
> macros. Which is what one needs for generating out of things like JDOM.

Do you want an example that walks a JDOM tree?  I have one.  That is
what showed me the problems in VMs that I want to fix, and that I have been
working on.  But I am not going to break features.  I can easily be
outvoted, though, so if we wish to toss this functionality, great.  I
don't, to be frank.

> > The second VM feature that breaks is that you can pass a reference
> > through to the VM even if it doesn't have a value. As a
> > matter of fact,
> > Velocity with this patch won't pass the testbed because of this.
> > (Passing the testbed is a good thing for patches :)  Ex :
> >
> > #macro (woogie $i)
> >  >$i<
> > #end
> >
> > #woogie( $not-a-reference )
> >
> > Under the current VM implementation, the output would (should) be :
> >
> > >$not-a-reference<
> >
> > where the pass-by-value approach results in  :
> >
> > >$i<
> >
> > This is valuable, I think, in that in debugging, you can see where and
> > what is wrong right in the output, rather than divining from the log,
> > because the output will show you what reference isn't right.
> >
> 
> Sorry, but I do not buy this particular argument. This is an error, and
> errors belong on the log file, where it should do a better job on indicating
> what went wrong when evaluating the reference.

Whatever.  It's not an argument.  It's a demonstration, and an
explanation of why the patch didn't pass the testbed.  You may or may
not have noticed that.  If you did, it sure would have been great to
have a little note explaining it so I didn't have to go chase it down.

> > I tried a few things to get around this, but to no avail.
> >
> > There is one further problem with pass by value - I think any solution
> > that solves the above issue must make sure it doesn't break the
> > following behavior :
> >
> > #macro( iffy $i )
> >   #if($i)
> >     valid!
> >   #else
> >     invalid!
> >   #end
> > #end
> >
> > #set($valid = "valid")
> > #iffy( $valid )
> > #iffy( $invalid )
> >
> > If the solution is one that places something into the context
> > such that
> > it will properly render an invalid reference, we will lose the
> > functionality of #if() - no longer is that reference invalid (there is
> > something in the context, after all), so #if returns true,
> > when actually
> > it should return false.
> >
> 
> I think both implementations I provided work correctly with respect to #if.
> I do not think otherwise my tests would have succeded. Notice that the
> implementation makes sure that parameters having NULL values are effectively
> removed from the context, if they were already there, and restored to their
> old value on exit from the call.

Read what I said again.  I am referring to the invalid-reference case.  Yes,
your solution works correctly with #if, because every arg is guaranteed
to have something in the context if #if() would return true.  So by
construction, it ensures that #if() works.  My observation is that
I cannot see how it would not break if you try to keep the current
functionality, which you declared above to be an error.

A bit of history : VMs were motivated by trying to solve the problem
about how to 'quiet' a reference : some of us accept that there may not
be a reference in the context, and it isn't an error.  So $!<name>
renders the value of name in the context to the output, or nothing if
not.  What got me going with VMs was the observation that you could
simply do it as a VM :

#macro( isnull $i ) #if($i) $i #end  #end

and here we are today...

> > Anyhow, I think that the observation about ASTReference was right on,
> > and we got some good performance improvement from it.  But the other
> > stuff simply breaks existing features (the fact that it fails the
> > testbed should have hinted at that), and I wouldn't want to
> > do that.  I
> > personally like the pass-by-reference feature, because I can have both
> > pass-by-ref and pass-by-val.  With only pass-by-val, users lose
> > something.  Now, we could abandon those features, but I am
> > not sure what
> > we are getting in return.  Certainly not a tremendous
> > preformance boost,
> > as far as I can tell. It does help with one specific
> > recursion bug, but
> > there is an easy workaround for that, and I hope to have an
> > internal fix
> > for that later.
> >
> 
> Any template that calls a macro with a time consumming operation
> and then uses the parameter several times all over, will have a hidden
> slowdown not easy to find by users which are not programmers and have
> no idea each occurrence is actually re-evaluating the entire thing.
> 
> So for example if I just use:
> 
>  ...#quietnull($db.getFromDBtheMiddleNameOfPerson())...
> 
> it turns out I will be going to the database twice! Now how is the user
> to know that such a thing is happening, he definetly does not need to know
> about the implementation of #quietnull:
> 
> #macro(quietnull $a)
> #if($a)$a#end
> #end

That's true -> if a person writes a macro that accesses the arg twice,
you get it accessed twice.  The thing I want to emphasize is that it's
#macro(), not #subroutine().
 
> It should have been:
> 
> #macro(quietnull $a)
> #set($quietnullvar = $a)
> #if($quietnullvar)$quietnullvar#end
> #end

Yep.
 
> But, wouldn't the #set operation produce an error if $a evaluates to null?

Is that true?  If a method returns null, I think it is rendered as ""
in ASTMethod.

> Humm, how can #quietnull be re-written?
> 
> Jose Alberto

-- 
Geir Magnusson Jr.                               geirm@optonline.com
Velocity : it's not just a good idea. It should be the law.
http://jakarta.apache.org/velocity