Posted to dev@freemarker.apache.org by "Pedro M. Zamboni" <za...@gmail.com> on 2017/03/04 18:19:09 UTC

Re: [FM3] improve “null” handling

> In FTL2 if you give something a default value (like `foo.y!0`), then you potentially unwillingly hide mistakes in the name (there was never an "y"). But because `foo` is not just a `Map`, we could do better. We know that `y` is not a valid name. So we could throw exception even for `foo.y!0`.

Well, if you always throw an exception when a value is absent, I don’t
think it’s that bad, I just wouldn’t want to end up with a `null` and
an `undefined` value like in Javascript. It *could* be a little
confusing since generally nulls are not tolerated in Freemarker, but I
*think* people would learn the differences soon enough for it to not
be a big problem.

> My guess is that Ceylon goes too far there. It's very unlikely that someone who is able to comprehend something written in Ceylon would find that things like `var` or `val` make reading the code harder (surely you already know what those mean). So writing `variable` and `value` hardly has any practical value. Yes, it's consistent, I get that. But certainly many will dislike or even be annoyed by `variable` and `value`. So to me it doesn't look like the right balance. I prefer it if the *very* common things have a short syntax; after all, you will very quickly learn them if you do any real work with the language.

Well, as it turns out, `value` is only two keystrokes away from `val`.
What is believed is that the time spent designing the structure of
your module, and actually writing its logic will take a much greater
amount of time than actually occasionally typing those two characters
throughout your module, so the time lost typing out “`value`” is
insignificant to the overall development time of a module. As a
consequence, you get a much more elegant‐looking code that reads much
more nicely.

And even then, `value` still comes as a much shorter way to declare
values. Instead of writing `ArrayList<Map<String, Integer>>`, you can
just write `value`.

We rarely ever use `variable` in Ceylon, so it’s okay for it to be a
little bit more verbose.

> So it's not just an assignment as in Java (where `foo = bar` does an assignment and is also an expression with the value of `bar`). […] Ah, so the assignment is part of the `exists` syntax...

Yeah, sorry if I didn’t make that clear. The assignment is part of the
`exists` syntax, which in turn is part of the `if` syntax.

> […] or just allow assignments as the operands of a top-level && in the case of #if exclusively... which is kind of a hack […].

Gavin (the creator of Ceylon) said that he did consider using `&&`
instead of `,` for separating expressions in a condition list. The
problem he faced was that the assignment operator should bind closer
than the separator, however `&&` binds closer than `=`. I’m not sure
if this is a problem in Freemarker, since the `=` is part of the
`assign` directive syntax (so `&&` could be made to bind more loosely
than `=` in an `if`), but *at least I* would be weirded out by `<#if
exists foo = bar && baz>`.
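
For what it's worth, here's a hypothetical sketch of the kind of
condition list where that precedence question comes up (the syntax is
made up along the lines discussed above, it's not existing FreeMarker):

```
<#-- Hypothetical sketch; assumes `=` (or rather the whole `exists ... = ...`
     form) binds tighter than `&&`: -->
<#if exists x = a.b.x && exists y = a.b.y>
  ${x} ${y}
</#if>
```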

> […] In FTL3 I plan to replace #assign/#local/#global with #var/#val and #set.

To be honest, I think you should only have two directives. One to mean
“set” and one to mean “declare locally”. No differentiation would be
made for constants/variables. I think the perfect directive names to
use are `set` and `let`.
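
A hypothetical sketch of how those two could look (only the names
`set` and `let` are from the proposal above, the rest is just
illustration):

```
<#-- Hypothetical FM3 sketch, not existing syntax: -->
<#let greeting = "Hello">  <#-- declare a new variable in the current scope -->
<#set greeting = "Hi">     <#-- assign to an already existing, visible variable -->
${greeting}
```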

> Note that "behave slightly differently" in practice often just means pushing the null requirement on your operand expression.

I don’t understand. It seems to me that most expressions will ignore
their null policy when evaluating their operands by asking for
non‐null.

For example, consider this expression: `foo.bar`. It seems to me that
`.bar` will evaluate `foo` by asking for non‐null regardless of
whether it has a `!` appended after it or not. Only if it’s asked to
*really* not return null, it will pass that restriction down the
evaluation chain, but for the other two types of evaluation, it’d
evaluate its “subordinate” expression the same way.

> Again, this is just the implementation. It's not how you explain the language rules.

If I understand correctly from the rest of your message, then this
would be explained like this:

There are two types of null: a “good null” and a “bad null”. Most
expressions (like `${}`, `.foo`, etc) can’t tolerate bad nulls but are
okay with good nulls. Some expressions (arithmetic operations, and
occasionally others) can’t tolerate either type of null; a few
expressions (`!` and `!:`) can tolerate both types of null.
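
A hypothetical sketch of those rules in action (assuming `missing`
starts out as a “bad null”):

```
<#-- Hypothetical sketch of the rules above; `missing` is a "bad null" initially: -->
${missing}         <#-- error: ${} doesn't tolerate bad nulls -->
${missing!}        <#-- OK: `!` turns the bad null into a good null; ${} presumably prints nothing -->
${missing!:"N/A"}  <#-- OK: `!:` tolerates both kinds of null and substitutes "N/A" -->
${1 + missing!}    <#-- error: arithmetic tolerates neither kind of null -->
```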

Okay, then. That addresses my main concern with this approach: that it
would be hard to understand. More advanced users could investigate how
Freemarker actually implements this feature under the covers (more
attentive users could get curious upon seeing better error messages than they
would expect), but the average user wouldn’t have to think about it
too much.

> […] [N]amespaces accessed with colon (like in XML) are better than those accessed with dot (as in FTL2) […].

I agree.

-----

About the whole built‐ins thing (using `?`): the more I think about
it, the more it feels to me like it shouldn’t be a thing. What is so
good about writing `foo?bar` compared to `bar(foo)`? I think the
argument would be for writing `foo?bar` instead of *`core:bar(foo)`*,
but I think it’d be a better solution overall to simply have a
different syntax for language variables (like `#uppercase("hello")`
instead of `.uppercase("hello")` or `core:uppercase("hello")` or
`"hello"?uppercase`).

I’m not completely sure about that, though. People might be too used
to their postfix function (“method”) call to be able to give it up.
For the sake of understandability, I’ll keep using `foo?bar` to mean a
similar thing to today for the rest of this message.

> The null bypassing thing addresses a quite common problem.

What I said is that we *would* have “null bypassing”, but for every
function and every parameter. I was surprised that you couldn’t store
nulls in variables (and parameters) in Freemarker 2, but that doesn’t
mean I don’t like it.

The rules would be simple:

For regular users: a function call throws if any argument is a bad
null, and returns a good null without executing the function if any
argument is a good null.

For advanced users: a function call returns null if one of its
parameters is null, but it evaluates its parameter expressions by
asking for non‐null.
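
A hypothetical sketch of that rule (with `trim` standing in for an
arbitrary function, and `missing` being null):

```
<#-- Hypothetical sketch; `trim` stands in for any function: -->
${trim(missing)}   <#-- error: a bad null argument makes the call throw -->
${trim(missing!)}  <#-- the argument is now a good null, so the function isn't executed
                        and the call returns a good null; ${} presumably prints nothing -->
```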

> […] So let's say you are naive and write ${x!'N/A'?transformLikeThis?transformLikeThat}. First, after some hair loss you realize that you got precedence problem there, and after you have fixed that you end up with ${(x!'N/A')?transformLikeThis?transformLikeThat}. […]

That’s another thing that gets fixed by preferring the `#builtin`
syntax. Instead of writing
`(x!:"N/A")?transformLikeThis?transformLikeThat`, one would write
`transformLikeThat(transformLikeThis(x!:"N/A"))`.

> […] Like, x is a number or date, so you format it with the transform, but 'N/A' is not a number or date, it's just substitute for the whole interpolated value. So you want to put it at the end, like this: ${x?transformLikeThis?transformLikeThat!'N/A'}. […]

Well, with my approach you’d be able to do the same thing. Suppose we
keep the `?` syntax (as opposed to the `#` syntax I suggested), you’d
do it like this:

```
${x!?transformLikeThis?transformLikeThat!:"N/A"}
```

> […] We are shooting for the ${What How OrElseWhat} order for aesthetic reasons too. […]

Right, that’s the main concern I have with the `#` syntax I proposed:
it’s regular and nice, but it might not look as good.

> […] From the user point of view, ?upperCase etc, allows null, while ${} doesn't. If a null touches it, it explodes. But you want FM to shut up, so you apply a `!` on the null to tell FreeMarker not to freak out because of that null. […]

So, what you are saying is that, from the point of view of a regular
user, function call expressions whose function is a “null bypasser”
return “bad nulls” if one of their arguments is a good null, and `${}`
only handles good nulls.

> From the implementation perspective, `${}` does handle null, but it asks for a non-null value […]. In `${maybeMissing?upperCase?trim?upperCase}` (no `!` anywhere), it's the `maybeMissing` that will explode because it obeys the "don't dare to return null" command. […]

So, from the point of view of an advanced user, call expressions whose
function is a “null bypasser” would pass their “nullability” down to
their argument expressions.

This approach has an important flaw: the call expressions would have
to know if the function is a null bypasser. This wouldn’t be possible
if the user did something like this:

```
<#assign x = fun> <#-- or let/var/val/whatever -->
${x(thisIsNull)}
```

The call expression (the parentheses) wouldn’t be able to know if `x`
is a null bypasser or not, so it wouldn’t know if it should call
`thisIsNull` by allowing null or not.

However, if, instead, all functions were null bypassers (like I
suggested), it’d always evaluate its argument expressions by asking
for non‐nulls. (Unless they were asked to really not return nulls, in
which case they’d evaluate their argument expressions by asking them
to really not return null).

That is, in regular user terms, functions would accept good nulls (and
return a good null back while not executing their bodies), but
wouldn’t accept bad nulls.

> `!:` […] looks less cute. […]

Well, I think it looks adorable!

But seriously now, I’ve written this part of the message in a
different day than the rest (to be honest, I’ve written this message
over the span of a couple days, but that’s unimportant), and I’ve
recently had an idea: what if `:` was its own operator?

In regular user terms, it’d explode on bad nulls, but it’d accept good
nulls and return the right‐hand‐side in that case.

In advanced user terms, it’d evaluate its left‐hand‐side by asking for
non‐null, but if null is returned anyways, it’d evaluate its
right‐hand‐side.

It wouldn’t always need a `!` before it. For example, consider this:

```
${maybeNull!?foo?bar:"xxx"}
<#-- or ${#bar(#foo(maybeNull!)):"xxx"} -->

${maybeNull!.foo.bar:"xxx"}
```

Now, whenever `maybeNull` is null, `"xxx"` will be shown. But
whenever, for example, `.bar` is null, it will throw.

-----

By the way, sorry for not responding sooner.

Re: [FM3] improve “null” handling

Posted by Daniel Dekany <dd...@freemail.hu>.
Saturday, March 4, 2017, 11:26:04 PM, Daniel Dekany wrote:

> Saturday, March 4, 2017, 7:19:09 PM, Pedro M. Zamboni wrote:
[snip]
>> But seriously now, I’ve written this part of the message in a
>> different day than the rest (to be honest, I’ve written this message
>> over the span of a couple days, but that’s unimportant), and I’ve
>> recently had an idea: what if `:` was its own operator?
>>
>> In regular user terms, it’d explode on bad nulls, but it’d accept good
>> nulls and return the right‐hand‐side in that case.
[snip]
>> ${maybeNull!.foo.bar:"xxx"}
>> ```
>>
>> Now, whenever `maybeNull` is null, `"xxx"` will be shown. But
>> whenever, for example, `.bar` is null, it will throw.
>
> That's a good idea, and I especially like that now we don't suppress
> the null problem at `.bar`. I wish we had `?` instead of `!`, and then
> we'd have not only managed to "generalize" `?.`, but also the `?:` operator.
> (Such a shame that some 14 years ago the role of `?` and `!` was
> selected the other way around...)
>
> But... it has some problems, because of which I think we better resist
> the temptation. At least in the primary (<#>-ish) language. I was here
> earlier BTW (not in this thread, but months ago and alone). Not sure
> the syntax/semantic was the same, but I wanted to prevent suppressing
> the null at `.bar`, and that resulted in an extra symbol to be used
> after the `!` (just like here, the `:`), and the resulting problems
> were the same:
>
> - I wanted to use `:` for namespace prefix separator... now it's
>   taken. What to do? I can use something like `|`, but it's less ideal
>   maybe...

Actually, if we step back a little and ignore FreeMarker tradition,
there's another combination... (Though ignoring tradition is
politically problematic, as I have already noted in this thread.)

The main reason we want `:` for the namespace prefix separator is so that
you can write `foo?my:f`, where `:` has higher precedence (is more
sticky) than `?` or `.`. As `?` and `.` have the same precedence,
`foo?my.f` wouldn't work because it means `(foo?my).f`. But there's
another way around that: decreasing the precedence of `?`. Then it
starts to behave more like the pipe operator you may be familiar with
from some other languages. So let's use `|` instead of `?`, and now
instead of `x?upperCase?trim`, you write `x|upperCase|trim`. And now
`x|my.f` (or `x|my.f|trim|my.2`) just naturally works. We have still
given up two things with this, but if we ignore tradition, this is
maybe still the best compromise you can have:

- We won't have separate namespace for namespace prefixes (or we will
  have some strange syntax for it... but let's say we won't have for
  now). That is, if you have `#import '/my.ftl' as my`, then the `my`
  variable it produces might shadow `my` in the data-model, and a
  local `my` variable might shadow the namespace prefix (if the
  parser allows such bad practice at all). This can become a
  maintainability/readability problem with auto-imports (because
  you can break templates by adding an auto-import to the Configuration,
  etc.). But it might as well be manageable in practice. It's pretty much
  like when you make up some global rules regarding your data-model,
  like, that in your framework it always contains the "spring"
  variable (as it is in the FM2 Spring integration). Actually, if we don't
  need `:` for namespace prefixes that badly anymore (because
  `foo|my.bar` has solved the "call a custom function with postfix
  syntax" problem), we might as well be better off with the simpler
  looking "just use `.` everywhere" approach (which FM2 follows too).
  That has a quite substantial advantage to balance out its known
  problems: sometimes it's not obvious whether a group of methods/variables
  should be just part of an object in the data-model, in which case
  you write `my.f` in templates, or it should be orthogonal to the
  data-model and solved with an auto-import, in which case you would
  write `my:f` if we don't hijack `:` for null handling. But, you might
  not want the template authors to be affected by such fine details
  (whether you decided to put your "library" into the data-model or
  opted for using the auto-import feature instead). Most of them
  won't even understand the difference, and it just comes through as
  an inconsistency (that sometimes you have to write `stuff.aMember`,
  and other times `stuff:aMember`).

- In FTL2 you can do this: `x?foo.bar`, which means `(x?foo).bar`.
  With the pipe syntax you'd have to write `(x|foo).bar`. It's certainly
  not that big a deal, as you hardly ever access the members of the
  result of a built-in. Except, there's `?api`, where you do exactly
  that. Like `myMap?api.myAppSpecMethod()`. Well, we can introduce
  `|.` if that's often a problem, like `myMap|api|.myAppSpecMethod()`.
  Or, we can attack the problem of `?api` outside built-ins, and say
  that when you write `foo.bar`, there are possibly multiple
  namespaces *inside* `foo` where "bar" can be searched. (In Java for
  example, there's a field namespace and a method namespace - you can
  have both a field and method called "x", and they won't clash, and
  the syntax of the member access tells Java which namespace should be
  searched.) And so we can come up with a syntax for that, like
  `foo.bar@inThisNamespace`, and so in the last example we had
  `myMap.myAppSpecMethod@api()`. It's a bit like e-mail addresses.
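
(Putting the two alternatives above side by side, as a hypothetical
sketch:)

```
<#-- Hypothetical sketches of the two alternatives described above: -->
${myMap|api|.myAppSpecMethod()}   <#-- pipe syntax plus the `|.` member access -->
${myMap.myAppSpecMethod@api()}    <#-- `@` picks the namespace *inside* myMap to search -->
```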

Better yet, with this you now have freed up `?` for what it should
really mean: a marker on something that is optional (something that
produces "good null"-s when it produces null-s). So we can replace `!`
with `?`. And so you can write `?.` and `?:`, which is more familiar
(like from Groovy, and almost from Java too - see Project Coin), and
`varName?` (as in `${foo?}`) can be familiar too from Ceylon, Kotlin,
C#, etc. That we don't have an alien-looking syntax is a big
"political" advantage. There was always some hate generated towards FM
because of that. By the time the user realizes (if he does) that those
aren't really the familiar operators, but are just some common
applications of the FM-specific `?` and `:` operators, it's too late,
they already have written some templates... <evil-grin/>.
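
A hypothetical sketch of how that would read (`user` and `userName`
are made-up data-model variables):

```
<#-- Hypothetical sketch with `?` as the "optional" marker instead of `!`: -->
${userName?}            <#-- userName may be null; presumably nothing is printed then -->
${user?.name ?: "N/A"}  <#-- null-safe member access plus a default, as in Groovy -->
```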

So the big question is, if I haven't missed something above, and the guys
around here like the new syntax too, do we dare to introduce such a
striking change and still call the result FreeMarker? Well, I'm brave
and all, but I think then the honorable way is calling the result
FreeMarker NG 1.x.x, rather than FreeMarker 3.x.x. It's based on the
FM2 code base, keeps the core FM2 principles, but has a too different
look-and-feel.

> - It's not how you did it in FM2 (breaks tradition... possible but hurts)
> - It's one more symbol for specifying a default, which is meant to be
>   a basic templating operation.

(Here I meant `x!default` VS `x!:default`. Although if you write it as
`x?default` and `x?:default`, it's more familiar, at least if someone
has used Groovy, and so the extra symbol is more acceptable psychologically.)

> - It's yet again something that's kind of difficult to grasp for the
>   average user. I mean, users will keep writing ${x:'-'}, which
>   *never* works. So we can catch it during parsing and tell in the
>   error message why it's wrong, but still.

>> -----
>>
>> By the way, sorry for not responding sooner.
>>
>

-- 
Thanks,
 Daniel Dekany


Re: [FM3] improve “null” handling

Posted by Daniel Dekany <dd...@freemail.hu>.
Saturday, March 4, 2017, 7:19:09 PM, Pedro M. Zamboni wrote:

>> In FTL2 if you give something a default value (like `foo.y!0`), then you potentially unwillingly hide mistakes in the name (there was never an "y"). But because `foo` is not just a `Map`, we could do better. We know that `y` is not a valid name. So we could throw exception even for `foo.y!0`.
>
> Well, if you always throw an exception when a value is absent, I don’t
> think it’s that bad,

Not sure what you mean by absent. With the FM2 logic we always throw an
exception when something is null or undefined (these two aren't
differentiated in FM2), except if it's covered by an `!` or `??` or
such. What I'm considering is that if something is undefined, we might
as well throw an exception regardless of whether it's covered by an `!`,
`??`, etc. (What counts as undefined though depends on the
ObjectWrapper or on the MOP implementation.)

> I just wouldn’t want to end up with a `null` and
> an `undefined` value like in Javascript.

Definitely not like in JavaScript... that undefined is quite
different.

> It *could* be a little confusing since generally nulls are not
> tolerated in Freemarker,

That wouldn't change. Undefined is tolerated even less. See the
example of `foo.x`. `${foo.x}` where x is null is still an error,
but `${foo.x!}` isn't, just as in FM2. However, `${foo.y!}` is an
error, if foo is a bean without a getY() method.
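
(Spelled out as a hypothetical sketch, assuming foo is a bean with a
getX() that may return null and no getY() at all:)

```
<#-- Hypothetical FM3 sketch; foo has getX() (possibly returning null) but no getY(): -->
${foo.x}   <#-- error, as in FM2: x is null -->
${foo.x!}  <#-- OK, as in FM2: the null is defaulted away -->
${foo.y!}  <#-- error in FM3: "y" is undefined on foo, so `!` no longer hides the problem -->
```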

> but I *think* people would learn the differences soon enough for it
> to not be a big problem.
>
>> My guess is that Ceylon goes too far there. It's very unlikely that someone who is able to comprehend something written in Ceylon would find that things like `var` or `val` make reading the code harder (surely you already know what those mean). So writing `variable` and `value` hardly has any practical value. Yes, it's consistent, I get that. But certainly many will dislike or even be annoyed by `variable` and `value`. So to me it doesn't look like the right balance. I prefer it if the *very* common things have a short syntax; after all, you will very quickly learn them if you do any real work with the language.
>
> Well, as it turns out, `value` is only two keystrokes away from `val`.
> What is believed is that the time spent designing the structure of
> your module, and actually writing its logic will take a much greater
> amount of time than actually occasionally typing those two characters
> throughout your module, so the time lost typing out “`value`” is
> insignificant to the overall development time of a module. As a
> consequence, you get a much more elegant‐looking code that reads much
> more nicely.
>
> And even then, `value` still comes as a much shorter way to declare
> values. Instead of writing `ArrayList<Map<String, Integer>>`, you can
> just write `value`.

It's not a question (to me) that Ceylon beats poor old Java when it
comes to design and other technical merits.

> We rarely ever use `variable` in Ceylon, so it’s okay for it to be a
> little bit more verbose.

People's brains function differently. To me, `val`/`var` is easier to
spot visually. You don't actually read keywords after all; very
quickly they basically become ideograms. And then, shorter
ideograms are easier to recognize. Anyway, I'm not the kind of person
who rejects a language because of such details. But I know many are
put off by such things, while I guess almost nobody would have a problem
with var/val.

>> So it's not just an assignment as in Java (where `foo = bar` does an assignment and is also an expression with the value of `bar`). […] Ah, so the assignment is part of the `exists` syntax...
>
> Yeah, sorry if I didn’t make that clear. The assignment is part of the
> `exists` syntax, which in turn is part of the `if` syntax.
>
>> […] or just allow assignments as the operands of a top-level && in the case of #if exclusively... which is kind of a hack […].
>
> Gavin (the creator of Ceylon) said that he did consider using `&&`
> instead of `,` for separating expressions in a condition list. The
> problem he faced was that the assignment operator should bind closer
> than the separator, however `&&` binds closer than `=`. I’m not sure
> if this is a problem in Freemarker, since the `=` is part of the
> `assign` directive syntax (so `&&` could be made to bind more loosely
> than `=` in an `if`), but *at least I* would be weirded out by `<#if
> exists foo = bar && baz>`.

`&&` would bind closer than assignment `=`, so you'd have to use
parentheses:

  <#if (val x = a.b.x)?? && (val y = a.b.y)??>

In practice, even without `&&` you'd have to use parentheses, just as
you have to in Java most of the time, such as in:

  while ((bytesRead = r.read(buff)) != -1) { ... }

>> […] In FTL3 I plan to replace #assign/#local/#global with #var/#val and #set.
>
> To be honest, I think you should only have two directives. One to mean
> “set” and one to mean “declare locally”. No differentiation would be
> made for constants/variables. I think the perfect directive names to
> use are `set` and `let`.

As for differentiating var from val, maybe that's overly sophisticated
for a template language indeed.

As for `var` VS `let`: because we have `function` and `macro` (or... as
far as we have those), `var` might be the more logical choice. OTOH
for block-scoped declarations they use `let` in modern
JavaScript, while they also have `var` but with a different meaning.
So considering the influence of JavaScript, and to avoid confusion
because of that, `let` could be the winner.

>> Note that "behave slightly differently" in practice often just means pushing the null requirement on your operand expression.
>
> I don’t understand. It seems to me that most expressions will ignore
> their null policy when evaluating their operands by asking for
> non‐null.
>
> For example, consider this expression: `foo.bar`. It seems to me that
> `.bar` will evaluate `foo` by asking for non‐null regardless of
> whether it has a `!` appended after it or not.

It's an implementation detail, but no, it doesn't. But it's only
because if there's going to be an error, then we want it to explode as deep in
the syntax tree as possible, so that we can give a better error
message.

> Only if it’s asked to *really* not return null, it will pass that
> restriction down the evaluation chain, but for the other two types
> of evaluation, it’d evaluate its “subordinate” expression the same
> way.
>
>> Again, this is just the implementation. It's not how you explain the language rules.
>
> If I understand correctly from the rest of your message, then this
> would be explained like this:
>
> There are two types of null: a “good null” and a “bad null”. Most
> expressions (like `${}`, `.foo`, etc) can’t tolerate bad nulls but are
> okay with good nulls.

Yes. (I wonder what the terminology should be. "bad null" and "good
null" sounds a bit too informal.)

> Some expressions (arithmetic operations, and
> occasionally others) can’t tolerate either type of null;

Yes. (Sometimes there's just no obvious way to continue, like in the
case of `1/thisIsNull!`.)

> a few expressions (`!` and `!:`) can tolerate both types of null.

Yes. Though for `exp!` it's perhaps better to say that it changes a
"bad null" to a "good null".

> Okay, then. That addresses my main concern with this approach: that it
> would be hard to understand. More advanced users could investigate how
> Freemarker actually implements this feature under the covers (more
> attentive users could get curious upon seeing better error messages than they
> would expect), but the average user wouldn’t have to think about it
> too much.
>
>> […] [N]amespaces accessed with colon (like in XML) are better than those accessed with dot (as in FTL2) […].
>
> I agree.
>
> -----
>
> About the whole built‐ins thing (using `?`): the more I think about
> it, the more it feels to me like it shouldn’t be a thing. What is so
> good about writing `foo?bar` compared to `bar(foo)`? I think the
> argument would be for writing `foo?bar` instead of *`core:bar(foo)`*,
> but I think it’d be a better solution overall to simply have a
> different syntax for language variables (like `#uppercase("hello")`
> instead of `.uppercase("hello")` or `core:uppercase("hello")` or
> `"hello"?uppercase`).
>
> I’m not completely sure about that, though. People might be too used
> to their postfix function (“method”) call to be able to give it up.
> For the sake of understandability, I’ll keep using `foo?bar` to mean a
> similar thing to today for the rest of this message.


While tradition is an important factor, FM2's foo?bar syntax serves
multiple purposes:

- The obvious one is avoiding name clashes with the other variables.
  As you said, #upperCase("hello") or c:upperCase("hello") could be
  another solution for that.

- It allows you to keep a What How order. The What is usually what's
  interesting, the How is less interesting. Like, in ${title?upperCase}
  the important thing is that you print the title. It's a secondary
  thing that you want it to be shown in upper case.

- Most of the built-ins have no parameters, except the LHO itself, so
  the syntax allows avoiding the parentheses. Because formatting
  inserted values is a goal of a template language, ${foo?bar} VS
  ${foo?bar()} matters. (Some template languages even avoid the {}-s.
  How terse it is to insert and format is important.)

- Chaining (pipelining) transformations is much more readable.
  Compare `#c(#b(#a(x)))` and `x?a?b?c`. Note the ordering.

If we ignore tradition, I would probably go for `"hello".#upperCase`,
as it shows clearly that this is basically an extension method, or
maybe `"hello"#upperCase`. But I don't think ignoring the tradition
would be wise, as we want to use the FreeMarker "trademark".

>> The null bypassing thing addresses a quite common problem.
>
> What I said is that we *would* have “null bypassing”, but for every
> function and every parameter. I was surprised that you couldn’t store
> nulls in variables (and parameters) in Freemarker 2, but that doesn’t
> mean I don’t like it.
>
> The rules would be simple:
>
> For regular users: a function call throws if any argument is a bad
> null, and returns a good null without executing the function if any
> argument is a good null.

Some functions may want to return non-null for a "good null" input
though. So while some functions work like that, a hard and fast rule
like that won't fit some. The hardest cases where such rules can fall
apart are with calling Java API-s from the template (where by "Java
API" I mean any application- or framework-specific API implemented in
Java). Perhaps the cleanest problematic case is that you receive some
variable from the data model (or as the return value from a Java API
call), and then pass it to a Java API call. So then, you just naively
write `foo.someApi(someVar)`, and because you just pass through that
variable, you certainly won't remember writing
`foo.someApi(someVar!)`, because it's not really a templating
construct. You just have some variable from the world outside the
template, you don't care what it is, you just pass it to a method
that's also outside the template. Anyway, the most important part:
what if `someApi` actually supports a null argument, and would return
something non-null for it? We can't even tell that, because it's a
Java API, and Java has no (widely accepted) way of declaring how null
arguments are treated. Clearly, it would be rather counter-intuitive
for the user if we skip calling that method and return a "good null",
or if we deny calling it because of a bad null argument. Regardless of
whether the argument is a good null or a bad null, it seems we just must
call that method and see what's going to happen. Maybe it will throw
an NPE or IllegalArgumentException back at us, in which case we can't
show an error message as helpful as usual (we can still show which
arguments were null, in the "tips" section), but we still behave as
expected.
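
(A hypothetical sketch of the situation; the "returns something for
null" behavior of `someApi` is just made up for illustration:)

```
<#-- Hypothetical sketch; assume foo.someApi(s) is a Java method that is documented
     to return "N/A" when s is null: -->
${foo.someApi(someVar)}
<#-- A blanket "skip the call and return a good null" rule would mean "N/A" is never
     produced when someVar is null; we have to actually invoke the method and see
     what happens. -->
```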

> For advanced users: a function call returns null if one of its
> parameters is null, but it evaluates its parameter expressions by
> asking for non‐null.
>
>> […] So let's say you are naive and write
>> ${x!'N/A'?transformLikeThis?transformLikeThat}. First, after some
>> hair loss you realize that you got precedence problem there, and
>> after you have fixed that you end up with
>> ${(x!'N/A')?transformLikeThis?transformLikeThat}. […]
>
> That’s another thing that gets fixed by preferring the `#builtin`
> syntax. Instead of writing
> `(x!:"N/A")?transformLikeThis?transformLikeThat`, one would write
> `transformLikeThat(transformLikeThis(x!:"N/A"))`.

Yes, but at what price... Losing the whole postfix call syntax for a
corner case.

>> […] Like, x is a number or date, so you format it with the
>> transform, but 'N/A' is not a number or date, it's just substitute
>> for the whole interpolated value. So you want to put it at the end,
>> like this: ${x?transformLikeThis?transformLikeThat!'N/A'}. […]
>
> Well, with my approach you’d be able to do the same thing. Suppose we
> keep the `?` syntax (as opposed to the `#` syntax I suggested), you’d
> do it like this:
>
> ```
> ${x!?transformLikeThis?transformLikeThat!:"N/A"}
> ```

You'd have to add that extra `!` though. Not only is it an extra keystroke,
but it also kind of kicks you out of the flow. Think of how the user adds
the new pieces:

1. What to show: `x`
2. Yeah, but how to show it (formatting): `x?foo`
3. But what if it's missing: `x?foo!'default'`

Now if at 3. he has to go *back* with the cursor to add another `!` at
the beginning, that's annoying, I believe. I might sound too picky
here, but the things people tend to write into the ${...} should be
concise and flow nicely, because that's what a template language does,
80% of the time.

>> […] We are shooting for the ${What How OrElseWhat} order for aesthetic reasons too. […]
>
> Right, that’s the main concern I have with the `#` syntax I proposed:
> it’s regular and nice, but it might not look as good.
>
>> […] From the user point of view, ?upperCase etc, allows null, while
>> ${} doesn't. If a null touches it, it explodes. But you want FM to
>> shut up, so you apply a `!` on the null to tell FreeMarker not to
>> freak out because of that null. […]
>
> So, what you are saying is that, from the point of view of a regular
> user, function call expressions whose function is a “null bypasser”
> return “bad nulls” if one of their arguments is a good null, and `${}`
> only handles good nulls.

No, what I'm saying is that if f(x) is a null bypasser, then if x is a
"good null", it returns a "good null", and if x is a "bad null", it
returns a "bad null". Because it bypasses that null unspoiled.

In the case of non-null bypasser f(x), if x is a null (good or bad...
whichever it accepts) and it returns null, we just don't know why that
is. Is it the same null as the argument, or a completely new null
coming from some other source? Did it return null because x was null at
all, or is it just a coincidence? You just can't know.

>> From the implementation perspective, `${}` does handle null, but it
>> asks for a non-null value […]. In
>> `${maybeMissing?upperCase?trim?upperCase}` (no `!` anywhere), it's
>> the `maybeMissing` that will explode because it obeys the "don't
>> dare to return null" command. […]
>
> So, from the point of view of an advanced user, call expressions
>> whose function is a “null bypasser” would pass their “nullability”
> down to their argument expressions.

Actually, not from his point of view, but from the point of view of a
contributor who works on something close to that part of the core.
Even for an advanced user, a null bypasser just returns the *same*
null as the argument. So the null keeps its good/bad property.

> This approach has an important flaw: the call expressions would have
> to know if the function is a null bypasser.

It has to, yes.

> This wouldn’t be possible if the user did something like this:
>
> ```
> <#assign x = fun> <#-- or let/var/val/whatever -->
> ${x(thisIsNull)}
> ```
>
> The call expression (the parentheses) wouldn’t be able to know if `x`
> is a null bypasser or not, so it wouldn’t know if it should call
> `thisIsNull` by allowing null or not.

It would know it. `x(thisIsNull)` is no different from
`fun(thisIsNull)`. `x` or `fun` is evaluated to a function object.
Then, as a separate step, we invoke it with `(thisIsNull)`. The null
policy of the parameters is part of the function object.
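
(A small sketch of that two-step evaluation, reusing the example from
above:)

```
<#-- Sketch of the evaluation order described above: -->
<#assign x = fun>    <#-- x now holds the same function object as fun -->
${fun(thisIsNull)}   <#-- step 1: evaluate `fun` to a function object;
                          step 2: invoke it; the object's own null policy governs the argument -->
${x(thisIsNull)}     <#-- exactly the same two steps, so aliasing through `x` changes nothing -->
```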

> However, if, instead, all functions were null bypassers (like I
> suggested), it’d always evaluate its argument expressions by asking
> for non‐nulls. (Unless they were asked to really not return nulls, in
> which case they’d evaluate their argument expressions by asking them
> to really not return null).
>
> That is, in regular user terms, functions would accept good nulls (and
> return a good null back while not executing their bodies), but
> wouldn’t accept bad nulls.
>
>> `!:` […] looks less cute. […]
>
> Well, I think it looks adorable!

It's one more "magic" characters I mean...

> But seriously now, I’ve written this part of the message in a
> different day than the rest (to be honest, I’ve written this message
> over the span of a couple days, but that’s unimportant), and I’ve
> recently had an idea: what if `:` was its own operator?
>
> In regular user terms, it’d explode on bad nulls, but it’d accept good
> nulls and return the right‐hand‐side in that case.
>
> In advanced user terms, it’d evaluate its left‐hand‐side by asking for
> non‐null, but if null is returned anyways, it’d evaluate its
> right‐hand‐side.
>
> It wouldn’t always need a `!` before it. For example, consider this:
>
> ```
> ${maybeNull!?foo?bar:"xxx"}
> <#-- or ${#bar(#foo(maybeNull!)):"xxx"} -->
>
> ${maybeNull!.foo.bar:"xxx"}
> ```
>
> Now, whenever `maybeNull` is null, `"xxx"` will be shown. But
> whenever, for example, `.bar` is null, it will throw.

That's a good idea, and I especially like that now we don't suppress
the null problem at `.bar`. I wish we had `?` instead of `!`, and then
we'd have not only managed to "generalize" `?.`, but also the `?:` operator.
(Such a shame that some 14 years ago the role of `?` and `!` was
selected the other way around...)

But... it has some problems, because of which I think we better resist
the temptation. At least in the primary (<#>-ish) language. I was here
earlier BTW (not in this thread, but months ago and alone). Not sure
the syntax/semantic was the same, but I wanted to prevent suppressing
the null at `.bar`, and that resulted in an extra symbol to be used
after the `!` (just like here, the `:`), and the resulting problems
were the same:

- I wanted to use `:` for namespace prefix separator... now it's
  taken. What to do? I can use something like `|`, but it's less ideal
  maybe...
- It's not how you did it in FM2 (breaks tradition... possible but hurts)
- It's one more symbol for specifying a default, which is meant to be
  a basic templating operation.
- It's yet again something that's kind of difficult to grasp for the
  average user. I mean, users will keep writing ${x:'-'}, which
  *never* works. So we can catch it during parsing and tell in the
  error message why it's wrong, but still.

> -----
>
> By the way, sorry for not responding sooner.
>

-- 
Thanks,
 Daniel Dekany