Posted to dev@calcite.apache.org by Christian Tzolov <ct...@pivotal.io> on 2016/11/29 22:07:49 UTC

Rule to expand the JdbcTableScan expression

We're working on a JDBC wrapper for HAWQ (SQL-on-Hadoop) that would
hopefully allow us to "emulate" UPDATE and DELETE operations while using
only append-only operations (i.e. INSERT) in the background.

Using the JDBC adapter, I've been trying to convert an input SQL query like
this:

SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
amount  FROM foodmart.expense_fact

Into one that looks like this:

SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
amount
FROM (
    SELECT store_id, account_id, exp_date, time_id, category_id,
currency_id, amount
    FROM (
        SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
last_version_number
        FROM foodmart.expense_fact
     ) as link_last
    WHERE exp_date = last_version_number
) as current_version;

If you run the second query directly, the resulting plan is:

"PLAN=JdbcToEnumerableConverter
  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
category_id=[$4], currency_id=[$5], amount=[$6])
    JdbcFilter(condition=[=($2, $7)])
      JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
                  last_version_number=[MAX($2) OVER (PARTITION BY $1 RANGE
BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
        JdbcTableScan(table=[[foodmart, expense_fact]])


I've created a rule meant to do this:
https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On match
it creates an expression that matches the one above:

"Expanded RelNode:
JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
category_id=[$4], currency_id=[$5], amount=[$6])
  JdbcFilter(condition=[=($2, $7)])
    JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
PRECEDING AND UNBOUNDED FOLLOWING)])
      JdbcTableScan(table=[[foodmart, expense_fact]])"

But after the planner completes the optimization, the final output is this
(i.e. the plan of the initial query):

"PLAN=JdbcToEnumerableConverter
  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
category_id=[$4], currency_id=[$5])
    JdbcTableScan(table=[[foodmart, expense_fact]])

1. What am I missing? Are the field/type references copied correctly?

2. Also, what is the right approach to prevent this rule from getting into a
loop?

I'm a bit stuck, so any ideas and suggestions are appreciated!

Cheers,
Christian

-- 
Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
EMEA Practice Team | Pivotal <http://pivotal.io/>
ctzolov@pivotal.io|+31610285517

Re: Rule to expand the JdbcTableScan expression

Posted by Julian Hyde <jh...@apache.org>.
In this case, you could probably use a planner rule (RelOptRule). Be sure to use HepPlanner, not VolcanoPlanner. The latter will apply its cost model and deduce that the original plan is cheaper.
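
Something like this (an untested sketch; MyExpandScanRule stands for whatever
your expansion rule class is called):

    import org.apache.calcite.plan.hep.HepPlanner;
    import org.apache.calcite.plan.hep.HepProgram;
    import org.apache.calcite.plan.hep.HepProgramBuilder;
    import org.apache.calcite.rel.RelNode;

    HepProgram program = new HepProgramBuilder()
        .addRuleInstance(MyExpandScanRule.INSTANCE)  // your expansion rule
        .build();
    HepPlanner hepPlanner = new HepPlanner(program);
    hepPlanner.setRoot(originalRel);                 // the RelNode tree to rewrite
    RelNode rewritten = hepPlanner.findBestExp();    // tree with the rule applied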

Or you could manually iterate over the tree, using RelNode.getInputs(), RelNode.copy(RelTraitSet, List<RelNode>), and recursion.
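
That traversal is roughly this, where replaceScan(...) is a hypothetical
helper that builds the expanded Project/Filter/Project over a scan:

    RelNode rewrite(RelNode rel) {
      List<RelNode> newInputs = new ArrayList<>();
      for (RelNode input : rel.getInputs()) {
        newInputs.add(rewrite(input));             // rewrite children first
      }
      RelNode node = rel.copy(rel.getTraitSet(), newInputs);
      return node instanceof JdbcTableScan
          ? replaceScan((JdbcTableScan) node)      // hypothetical expansion helper
          : node;
    }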

Or you could use a good IDE and see whether there are any sub-classes of RelShuttle that meet your needs.
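
RelShuttleImpl, for instance, already packages that recursion; a sketch along
the same lines (expandScan(...) again being your hypothetical replacement):

    class ScanExpander extends RelShuttleImpl {
      @Override public RelNode visit(TableScan scan) {
        return scan instanceof JdbcTableScan
            ? expandScan((JdbcTableScan) scan)   // hypothetical expansion
            : scan;
      }
    }

    RelNode rewritten = originalRel.accept(new ScanExpander());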

Julian

> On Nov 29, 2016, at 4:25 PM, Christian Tzolov <ct...@pivotal.io> wrote:
> 
> Thanks for the reaction Julian!
> 
> I'm aware of the "semantically equivalent" expressions but our earlier
> converstation (http://bit.ly/2gTi3ZE) have left me with the impression the
> you can "abuse" this requirement.
> 
> Do you think Calcite shouldn't be used for such use case?  Mind that in
> addition to the select example above i would need to implement rules (or
> visitors) that overrides and convert SQL UPDATEs into INSERTs.
> 
> Can you please elaborate on how i can plug my own RelVisitor impl? Is there
> any sample code (don't dare asking of docs ;).
> Also i'm afraid i'm completely unfamiliar with the Marco (i guess it is
> TableMacro)
> 
> 
> Thanks,
> Christian
> 
> 
> 
> On 30 November 2016 at 00:47, Julian Hyde <jh...@apache.org> wrote:
> 
>> What you've written isn't really a planner rule. A planner rule is
>> supposed to create something semantically equivalent. And the planner
>> then chooses the cheapest of those equivalent expressions.
>> 
>> In contrast, you want to do search-and-replace on the tree. I think
>> you should use a RelVisitor, or perhaps replace TableScan with a table
>> macro and let the macro expander do its stuff.
>> 
>> (There are other cases where planner rules don't cut it. One is field
>> trimming, because when you visit each node, you want to return a node
>> that has fewer fields.)
>> 
>> On Tue, Nov 29, 2016 at 2:07 PM, Christian Tzolov <ct...@pivotal.io>
>> wrote:
>>> We're working on a JDBC wrapper for HAWQ (SQL-on-HADOOP) that hopefully
>>> would allow us "emulate" UPDATE and DELETE operations while using only
>>> append-only (e.g. INSERT) in the background.
>>> 
>>> Using the JDBC adapter i've been trying to convert an input sql query
>> like
>>> this:
>>> 
>>> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
>>> amount  FROM foodmart.expense_fact
>>> 
>>> Into one that looks like this:
>>> 
>>> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
>>> amount
>>> FROM (
>>>    SELECT store_id, account_id, exp_date, time_id, category_id,
>>> currency_id, amount
>>>    FROM (
>>>        SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
>>> last_version_number
>>>        FROM foodmart.expense_fact
>>>     ) as link_last
>>>    WHERE exp_date = last_version_number
>>> ) as current_version;
>>> 
>>> If you run the second query directly the output relation is:
>>> 
>>> "PLAN=JdbcToEnumerableConverter
>>>  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
>> time_id=[$3],
>>> category_id=[$4], currency_id=[$5], amount=[$6])
>>>    JdbcFilter(condition=[=($2, $7)])
>>>      JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
>>> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
>>>                  last_version_number=[MAX($2) OVER (PARTITION BY $1
>> RANGE
>>> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
>>>        JdbcTableScan(table=[[foodmart, expense_fact]])
>>> 
>>> 
>>> I've created a rule meant to do this:
>>> https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On
>> match
>>> it creates an expression that matches the one above:
>>> 
>>> "Expanded RelNode:
>>> JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
>>> category_id=[$4], currency_id=[$5], amount=[$6])
>>>  JdbcFilter(condition=[=($2, $7)])
>>>    JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
>>> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
>>> last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
>>> PRECEDING AND UNBOUNDED FOLLOWING)])
>>>      JdbcTableScan(table=[[foodmart, expense_fact]])"
>>> 
>>> But after the Planner completes the optimization the final output is this
>>> (e.g. the initial query):
>>> 
>>> "PLAN=JdbcToEnumerableConverter
>>>  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
>> time_id=[$3],
>>> category_id=[$4], currency_id=[$5])
>>>    JdbcTableScan(table=[[foodmart, expense_fact]])
>>> 
>>> 1. What am i missing? Are the field/type references copied correctly?
>>> 
>>> 2. Also what is the right approach to prevent this Rule get in loop?
>>> 
>>> I'm a bit stuck so any ideas and suggestions appreciated!
>>> 
>>> Cheers,
>>> Christian
>>> 
>>> --
>>> Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution
>> Architect,
>>> EMEA Practice Team | Pivotal <http://pivotal.io/>
>>> ctzolov@pivotal.io|+31610285517
>> 
> 
> 
> 
> -- 
> Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
> EMEA Practice Team | Pivotal <http://pivotal.io/>
> ctzolov@pivotal.io|+31610285517


Re: Rule to expand the JdbcTableScan expression

Posted by Christian Tzolov <ct...@pivotal.io>.
Thanks for the reaction, Julian!

I'm aware of the "semantically equivalent" requirement for expressions, but
our earlier conversation (http://bit.ly/2gTi3ZE) left me with the impression
that you can "abuse" this requirement.

Do you think Calcite shouldn't be used for such a use case? Note that in
addition to the SELECT example above, I would need to implement rules (or
visitors) that override and convert SQL UPDATEs into INSERTs.

Can you please elaborate on how I can plug in my own RelVisitor
implementation? Is there any sample code? (I don't dare ask for docs ;)
Also, I'm afraid I'm completely unfamiliar with the macro (I guess you mean
TableMacro).


Thanks,
Christian



On 30 November 2016 at 00:47, Julian Hyde <jh...@apache.org> wrote:

> What you've written isn't really a planner rule. A planner rule is
> supposed to create something semantically equivalent. And the planner
> then chooses the cheapest of those equivalent expressions.
>
> In contrast, you want to do search-and-replace on the tree. I think
> you should use a RelVisitor, or perhaps replace TableScan with a table
> macro and let the macro expander do its stuff.
>
> (There are other cases where planner rules don't cut it. One is field
> trimming, because when you visit each node, you want to return a node
> that has fewer fields.)
>
> On Tue, Nov 29, 2016 at 2:07 PM, Christian Tzolov <ct...@pivotal.io>
> wrote:
> > We're working on a JDBC wrapper for HAWQ (SQL-on-HADOOP) that hopefully
> > would allow us "emulate" UPDATE and DELETE operations while using only
> > append-only (e.g. INSERT) in the background.
> >
> > Using the JDBC adapter i've been trying to convert an input sql query
> like
> > this:
> >
> > SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> > amount  FROM foodmart.expense_fact
> >
> > Into one that looks like this:
> >
> > SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> > amount
> > FROM (
> >     SELECT store_id, account_id, exp_date, time_id, category_id,
> > currency_id, amount
> >     FROM (
> >         SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
> > last_version_number
> >         FROM foodmart.expense_fact
> >      ) as link_last
> >     WHERE exp_date = last_version_number
> > ) as current_version;
> >
> > If you run the second query directly the output relation is:
> >
> > "PLAN=JdbcToEnumerableConverter
> >   JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3],
> > category_id=[$4], currency_id=[$5], amount=[$6])
> >     JdbcFilter(condition=[=($2, $7)])
> >       JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> > time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> >                   last_version_number=[MAX($2) OVER (PARTITION BY $1
> RANGE
> > BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
> >         JdbcTableScan(table=[[foodmart, expense_fact]])
> >
> >
> > I've created a rule meant to do this:
> > https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On
> match
> > it creates an expression that matches the one above:
> >
> > "Expanded RelNode:
> > JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> > category_id=[$4], currency_id=[$5], amount=[$6])
> >   JdbcFilter(condition=[=($2, $7)])
> >     JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> > time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> > last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
> > PRECEDING AND UNBOUNDED FOLLOWING)])
> >       JdbcTableScan(table=[[foodmart, expense_fact]])"
> >
> > But after the Planner completes the optimization the final output is this
> > (e.g. the initial query):
> >
> > "PLAN=JdbcToEnumerableConverter
> >   JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3],
> > category_id=[$4], currency_id=[$5])
> >     JdbcTableScan(table=[[foodmart, expense_fact]])
> >
> > 1. What am i missing? Are the field/type references copied correctly?
> >
> > 2. Also what is the right approach to prevent this Rule get in loop?
> >
> > I'm a bit stuck so any ideas and suggestions appreciated!
> >
> > Cheers,
> > Christian
> >
> > --
> > Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution
> Architect,
> > EMEA Practice Team | Pivotal <http://pivotal.io/>
> > ctzolov@pivotal.io|+31610285517
>



-- 
Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
EMEA Practice Team | Pivotal <http://pivotal.io/>
ctzolov@pivotal.io|+31610285517

Re: Rule to expand the JdbcTableScan expression

Posted by Julian Hyde <jh...@apache.org>.
What you've written isn't really a planner rule. A planner rule is
supposed to create something semantically equivalent. And the planner
then chooses the cheapest of those equivalent expressions.

In contrast, you want to do search-and-replace on the tree. I think
you should use a RelVisitor, or perhaps replace TableScan with a table
macro and let the macro expander do its stuff.

(There are other cases where planner rules don't cut it. One is field
trimming, because when you visit each node, you want to return a node
that has fewer fields.)

On Tue, Nov 29, 2016 at 2:07 PM, Christian Tzolov <ct...@pivotal.io> wrote:
> We're working on a JDBC wrapper for HAWQ (SQL-on-HADOOP) that hopefully
> would allow us "emulate" UPDATE and DELETE operations while using only
> append-only (e.g. INSERT) in the background.
>
> Using the JDBC adapter i've been trying to convert an input sql query like
> this:
>
> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> amount  FROM foodmart.expense_fact
>
> Into one that looks like this:
>
> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> amount
> FROM (
>     SELECT store_id, account_id, exp_date, time_id, category_id,
> currency_id, amount
>     FROM (
>         SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
> last_version_number
>         FROM foodmart.expense_fact
>      ) as link_last
>     WHERE exp_date = last_version_number
> ) as current_version;
>
> If you run the second query directly the output relation is:
>
> "PLAN=JdbcToEnumerableConverter
>   JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5], amount=[$6])
>     JdbcFilter(condition=[=($2, $7)])
>       JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
>                   last_version_number=[MAX($2) OVER (PARTITION BY $1 RANGE
> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
>         JdbcTableScan(table=[[foodmart, expense_fact]])
>
>
> I've created a rule meant to do this:
> https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On match
> it creates an expression that matches the one above:
>
> "Expanded RelNode:
> JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5], amount=[$6])
>   JdbcFilter(condition=[=($2, $7)])
>     JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
> PRECEDING AND UNBOUNDED FOLLOWING)])
>       JdbcTableScan(table=[[foodmart, expense_fact]])"
>
> But after the Planner completes the optimization the final output is this
> (e.g. the initial query):
>
> "PLAN=JdbcToEnumerableConverter
>   JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5])
>     JdbcTableScan(table=[[foodmart, expense_fact]])
>
> 1. What am i missing? Are the field/type references copied correctly?
>
> 2. Also what is the right approach to prevent this Rule get in loop?
>
> I'm a bit stuck so any ideas and suggestions appreciated!
>
> Cheers,
> Christian
>
> --
> Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
> EMEA Practice Team | Pivotal <http://pivotal.io/>
> ctzolov@pivotal.io|+31610285517

Re: Rule to expand the JdbcTableScan expression

Posted by Christian Tzolov <ct...@pivotal.io>.
Thanks,

Indeed, by playing with (and overriding) the cost of the JDBC rel nodes, I
managed to expand the trees (in many cases).
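
The override is roughly of this shape (the choice of cost is illustrative,
and the exact computeSelfCost signature depends on the Calcite version):

    @Override public RelOptCost computeSelfCost(RelOptPlanner planner,
        RelMetadataQuery mq) {
      // Illustrative: report a tiny cost so the extra Project/Filter nodes
      // of the expanded form do not make it look worse than the plain scan.
      return planner.getCostFactory().makeTinyCost();
    }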

How can I set up the HepPlanner instead of the Volcano one?



On 30 November 2016 at 20:50, jordan.halterman@gmail.com <
jordan.halterman@gmail.com> wrote:

> Which planner are you using? If the rule is being fired, what you may be
> missing is that the cost of the converted expression is more than the cost
> of the input expression, resulting in the VolcanoPlanner throwing out the
> converted expression. You should use the HepPlanner for this.
>
> > On Nov 29, 2016, at 2:07 PM, Christian Tzolov <ct...@pivotal.io>
> wrote:
> >
> > We're working on a JDBC wrapper for HAWQ (SQL-on-HADOOP) that hopefully
> > would allow us "emulate" UPDATE and DELETE operations while using only
> > append-only (e.g. INSERT) in the background.
> >
> > Using the JDBC adapter i've been trying to convert an input sql query
> like
> > this:
> >
> > SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> > amount  FROM foodmart.expense_fact
> >
> > Into one that looks like this:
> >
> > SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> > amount
> > FROM (
> >    SELECT store_id, account_id, exp_date, time_id, category_id,
> > currency_id, amount
> >    FROM (
> >        SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
> > last_version_number
> >        FROM foodmart.expense_fact
> >     ) as link_last
> >    WHERE exp_date = last_version_number
> > ) as current_version;
> >
> > If you run the second query directly the output relation is:
> >
> > "PLAN=JdbcToEnumerableConverter
> >  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> > category_id=[$4], currency_id=[$5], amount=[$6])
> >    JdbcFilter(condition=[=($2, $7)])
> >      JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> > time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> >                  last_version_number=[MAX($2) OVER (PARTITION BY $1 RANGE
> > BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
> >        JdbcTableScan(table=[[foodmart, expense_fact]])
> >
> >
> > I've created a rule meant to do this:
> > https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On
> match
> > it creates an expression that matches the one above:
> >
> > "Expanded RelNode:
> > JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> > category_id=[$4], currency_id=[$5], amount=[$6])
> >  JdbcFilter(condition=[=($2, $7)])
> >    JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> > time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> > last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
> > PRECEDING AND UNBOUNDED FOLLOWING)])
> >      JdbcTableScan(table=[[foodmart, expense_fact]])"
> >
> > But after the Planner completes the optimization the final output is this
> > (e.g. the initial query):
> >
> > "PLAN=JdbcToEnumerableConverter
> >  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> > category_id=[$4], currency_id=[$5])
> >    JdbcTableScan(table=[[foodmart, expense_fact]])
> >
> > 1. What am i missing? Are the field/type references copied correctly?
> >
> > 2. Also what is the right approach to prevent this Rule get in loop?
> >
> > I'm a bit stuck so any ideas and suggestions appreciated!
> >
> > Cheers,
> > Christian
> >
> > --
> > Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution
> Architect,
> > EMEA Practice Team | Pivotal <http://pivotal.io/>
> > ctzolov@pivotal.io|+31610285517
>



-- 
Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
EMEA Practice Team | Pivotal <http://pivotal.io/>
ctzolov@pivotal.io|+31610285517

Re: Rule to expand the JdbcTableScan expression

Posted by "jordan.halterman@gmail.com" <jo...@gmail.com>.
Which planner are you using? If the rule is being fired, what you may be missing is that the cost of the converted expression is more than the cost of the input expression, resulting in the VolcanoPlanner throwing out the converted expression. You should use the HepPlanner for this. 

> On Nov 29, 2016, at 2:07 PM, Christian Tzolov <ct...@pivotal.io> wrote:
> 
> We're working on a JDBC wrapper for HAWQ (SQL-on-HADOOP) that hopefully
> would allow us "emulate" UPDATE and DELETE operations while using only
> append-only (e.g. INSERT) in the background.
> 
> Using the JDBC adapter i've been trying to convert an input sql query like
> this:
> 
> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> amount  FROM foodmart.expense_fact
> 
> Into one that looks like this:
> 
> SELECT store_id, account_id, exp_date, time_id, category_id, currency_id,
> amount
> FROM (
>    SELECT store_id, account_id, exp_date, time_id, category_id,
> currency_id, amount
>    FROM (
>        SELECT *, MAX(exp_date) OVER (PARTITION BY account_id) AS
> last_version_number
>        FROM foodmart.expense_fact
>     ) as link_last
>    WHERE exp_date = last_version_number
> ) as current_version;
> 
> If you run the second query directly the output relation is:
> 
> "PLAN=JdbcToEnumerableConverter
>  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5], amount=[$6])
>    JdbcFilter(condition=[=($2, $7)])
>      JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
>                  last_version_number=[MAX($2) OVER (PARTITION BY $1 RANGE
> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)])
>        JdbcTableScan(table=[[foodmart, expense_fact]])
> 
> 
> I've created a rule meant to do this:
> https://gist.github.com/tzolov/a426d10a55fdd712a67830b0154b33db   On match
> it creates an expression that matches the one above:
> 
> "Expanded RelNode:
> JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5], amount=[$6])
>  JdbcFilter(condition=[=($2, $7)])
>    JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2],
> time_id=[$3], category_id=[$4], currency_id=[$5], amount=[$6],
> last_version_number=[MAX($2) OVER (PARTITION BY $1 ROWS BETWEEN UNBOUNDED
> PRECEDING AND UNBOUNDED FOLLOWING)])
>      JdbcTableScan(table=[[foodmart, expense_fact]])"
> 
> But after the Planner completes the optimization the final output is this
> (e.g. the initial query):
> 
> "PLAN=JdbcToEnumerableConverter
>  JdbcProject(store_id=[$0], account_id=[$1], exp_date=[$2], time_id=[$3],
> category_id=[$4], currency_id=[$5])
>    JdbcTableScan(table=[[foodmart, expense_fact]])
> 
> 1. What am i missing? Are the field/type references copied correctly?
> 
> 2. Also what is the right approach to prevent this Rule get in loop?
> 
> I'm a bit stuck so any ideas and suggestions appreciated!
> 
> Cheers,
> Christian
> 
> -- 
> Christian Tzolov <http://www.linkedin.com/in/tzolov> | Solution Architect,
> EMEA Practice Team | Pivotal <http://pivotal.io/>
> ctzolov@pivotal.io|+31610285517