Posted to dev@calcite.apache.org by Enrico Olivelli <eo...@gmail.com> on 2020/09/15 16:42:58 UTC

Combining HepPlanner and VolcanoPlanner

Hi,

I am trying to create a two-stage planner using HepPlanner and then
VolcanoPlanner.

What is the correct sequence of steps to go from SQL to Enumerable?
My goal is to use Hep for very simple queries, such as simple INSERTs, SELECT *
FROM TABLE, SELECT * FROM TABLE WHERE pk=?, and so on.
Volcano is overkill for such queries.

This is my idea:

1) Get a Planner (that's Volcano)

Planner planner = Frameworks.getPlanner(config);

2) Get the logical plan

SqlNode n = planner.parse(query);
n = planner.validate(n);
RelNode logicalPlan = planner.rel(n).project();

3) Create HepPlanner

HepProgram hepProgram = HepProgram.builder()
        . ?? which Rules ?
        .build();

HepPlanner hepPlanner = new HepPlanner(hepProgram);
hepPlanner.addRelTraitDef(ConventionTraitDef.INSTANCE);
hepPlanner.setRoot(logicalPlan);

4) Run HepPlanner

RelNode bestForHep = hepPlanner.findBestExp();

5) Pick Volcano

final RelOptCluster cluster = logicalPlan.getCluster();
final RelOptPlanner optPlanner = cluster.getPlanner();

6) Convert to Enumerable (feeding the Hep result into Volcano)

optPlanner.addRule(CoreRules.FILTER_REDUCE_EXPRESSIONS);
RelTraitSet desiredTraits = cluster.traitSet()
        .replace(EnumerableConvention.INSTANCE);
final RelCollation collation =
        bestForHep instanceof Sort
                ? ((Sort) bestForHep).collation
                : null;
if (collation != null) {
    desiredTraits = desiredTraits.replace(collation);
}
final RelNode newRoot = optPlanner.changeTraits(bestForHep, desiredTraits);
optPlanner.setRoot(newRoot);
RelNode bestExp = optPlanner.findBestExp();

Any hint/pointer would be much appreciated.

The alternative is to detect such simple queries and use a small set of
rules instead of Programs.ofRules(Programs.RULE_SET).
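The "detect such simple queries" idea can be sketched as a recursive predicate over the plan tree: take the Hep-only path only when every operator belongs to a small whitelist. In the sketch below, Node, Scan, Filter, Project and Join are hypothetical stand-ins for Calcite's RelNode, TableScan, LogicalFilter, LogicalProject and LogicalJoin, used only to keep the example self-contained; real code would walk a RelNode via getInputs().

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: walk the plan tree and accept the Hep-only path only if every
// operator is in a small whitelist. The node classes below are stand-ins
// for Calcite's relational operators.
public class SimpleQueryDetector {
    interface Node { List<Node> inputs(); }

    static class Scan implements Node {
        public List<Node> inputs() { return Collections.emptyList(); }
    }
    static class Filter implements Node {
        final Node input;
        Filter(Node input) { this.input = input; }
        public List<Node> inputs() { return Collections.singletonList(input); }
    }
    static class Project implements Node {
        final Node input;
        Project(Node input) { this.input = input; }
        public List<Node> inputs() { return Collections.singletonList(input); }
    }
    static class Join implements Node {
        final Node left, right;
        Join(Node left, Node right) { this.left = left; this.right = right; }
        public List<Node> inputs() { return Arrays.asList(left, right); }
    }

    // A query is "simple" if it only contains scans, filters and projects.
    static boolean isSimple(Node node) {
        if (!(node instanceof Scan || node instanceof Filter
                || node instanceof Project)) {
            return false; // e.g. a Join: send it to the Volcano path
        }
        for (Node input : node.inputs()) {
            if (!isSimple(input)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // SELECT * FROM t WHERE pk = ?  -> simple enough for Hep
        System.out.println(isSimple(new Project(new Filter(new Scan()))));
        // SELECT ... FROM a JOIN b      -> use Volcano
        System.out.println(isSimple(new Project(new Join(new Scan(), new Scan()))));
    }
}
```

The same shape also gives a natural place to bail out conservatively: anything not recognized falls through to the full Volcano pipeline.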

Best regards
Enrico

Re: Combining HepPlanner and VolcanoPlanner

Posted by Enrico Olivelli <eo...@gmail.com>.
Ruben,

On Thu, Sep 17, 2020 at 2:57 PM Ruben Q L <ru...@gmail.com> wrote:

> Enrico,
>
> I cannot understand how you can arrive at scenario C. Theoretically, if
> there are not enough rules to implement a certain plan, you should get a
> CannotPlanException from RelOptPlanner#findBestExp. I would expect this
> outcome if e.g. you do not have "rules that handle the LIMIT". Moreover,
> having "wrong plans computed for joins" seems like a bug. But it is not
> easy to assess the situation and evaluate what is going on without the
> actual query and rules that are applied.
>

I will send a demo when I have cycles.
Thanks for your response.

Enrico



Re: Combining HepPlanner and VolcanoPlanner

Posted by Ruben Q L <ru...@gmail.com>.
Enrico,

I cannot understand how you can arrive at scenario C. Theoretically, if
there are not enough rules to implement a certain plan, you should get a
CannotPlanException from RelOptPlanner#findBestExp. I would expect this
outcome if e.g. you do not have "rules that handle the LIMIT". Moreover,
having "wrong plans computed for joins" seems like a bug. But it is not
easy to assess the situation and evaluate what is going on without the
actual query and rules that are applied.
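The failure mode described here suggests a simple fallback loop: plan with the reduced rule set and, if the planner cannot implement the query, retry with the full rule set. Below is a minimal, self-contained sketch of that control flow; CannotPlanException and the two planner functions are stand-ins (in real code the caught type would be Calcite's RelOptPlanner.CannotPlanException, and each function would run findBestExp with a different rule set).

```java
import java.util.function.Function;

// Sketch of the fallback strategy: try the minimal rule set first; when
// planning fails, retry with the full rule set. CannotPlanException is a
// stand-in for Calcite's RelOptPlanner.CannotPlanException.
public class PlannerFallback {
    static class CannotPlanException extends RuntimeException {
        CannotPlanException(String msg) { super(msg); }
    }

    // minimalPlanner / fullPlanner stand in for two planning pipelines
    // built with different rule sets.
    static String planWithFallback(String sql,
                                   Function<String, String> minimalPlanner,
                                   Function<String, String> fullPlanner) {
        try {
            return minimalPlanner.apply(sql);
        } catch (CannotPlanException e) {
            // The reduced rule set could not implement the query:
            // fall back to the complete rule set.
            return fullPlanner.apply(sql);
        }
    }

    public static void main(String[] args) {
        Function<String, String> minimal = sql -> {
            if (sql.contains("JOIN")) {
                throw new CannotPlanException("no join rules registered");
            }
            return "minimal plan for: " + sql;
        };
        Function<String, String> full = sql -> "full plan for: " + sql;

        System.out.println(planWithFallback("SELECT * FROM t", minimal, full));
        System.out.println(planWithFallback("SELECT * FROM a JOIN b", minimal, full));
    }
}
```

Note that this only covers the case where the planner actually throws; a reduced rule set that silently produces a wrong plan cannot be caught this way.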

Best regards,
Ruben


On Wed, Sep 16, 2020 at 5:24 PM Enrico Olivelli <eo...@gmail.com> wrote:


Re: Combining HepPlanner and VolcanoPlanner

Posted by Enrico Olivelli <eo...@gmail.com>.
Ruben,
thank you very much; using your snippet I got the planner working!

In HerdDB we are currently using
Programs.ofRules(Programs.RULE_SET) + RelOptUtil#registerDefaultRules (which
adds lots of rules), and we are able to handle several cases of very complex
generic SQL.

But if I try to reduce the number of rules I see this behaviour:
a) if I am lucky, I get an error from the planner telling me that it can't
find a way to compute the plan -> I can switch to the full planner ruleset
b) if I am less lucky, I get a plan that works but is less efficient ->
(this is expected, and it is what I want, that is, to trade planner speed
for plan efficiency)
c) if I am unlucky, the planner returns a plan that leads to wrong results!

For instance, you can see c) if you do not add rules that handle "LIMIT":
sometimes the plan simply won't pick up the LIMIT and the database will
return unexpected results. But I have seen worse cases, like wrong plans
computed for joins.

So my next question is:
is there a way to create a minimal set of rules that covers simple SQL
(think about basic JPA stuff....) and that fails planning for unhandled
cases?
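One possible starting point for such a minimal set is a fragment like the one below, covering only scan/filter/project/sort/limit style queries. This is an untested assumption, not a verified answer: the constant names are taken from EnumerableRules as of Calcite around 1.25 and should be checked against the version in use, and the selection of rules is illustrative only.

```java
// Hypothetical minimal rule set for "basic JPA stuff" (scan, filter,
// project, sort, limit). Rule names from EnumerableRules; verify them
// against your Calcite version.
List<RelOptRule> minimalRules = Arrays.asList(
        EnumerableRules.ENUMERABLE_TABLE_SCAN_RULE,
        EnumerableRules.ENUMERABLE_FILTER_RULE,
        EnumerableRules.ENUMERABLE_PROJECT_RULE,
        EnumerableRules.ENUMERABLE_SORT_RULE,
        EnumerableRules.ENUMERABLE_LIMIT_RULE);
Program minimalProgram = Programs.of(RuleSets.ofList(minimalRules));
```

Anything the minimal set cannot implement should then surface as a CannotPlanException, which is the signal to retry with the full ruleset.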

Best regards
Enrico

On Wed, Sep 16, 2020 at 9:57 AM Ruben Q L <ru...@gmail.com> wrote:


Re: Combining HepPlanner and VolcanoPlanner

Posted by Ruben Q L <ru...@gmail.com>.
Hi Enrico,

I'm not sure if this can be applied to your case, but I also have a use case
with HepPlanner and VolcanoPlanner.
In my case I have an initial phase of rules applied with HepPlanner. These
are rules that optimize the logical plan and are always a "quick win" (they
always generate a better version of the plan, so it makes sense to apply
them with Hep).
Then I have a "main" phase, applied with VolcanoPlanner, that takes care
of converting the plan into the most optimized Enumerable result.
A simplified code snippet would be:

private static final Program PHASE_1 = Programs.of(
        HepProgram.builder().addRuleCollection(
                Arrays.asList(
                        CoreRules.FILTER_INTO_JOIN,
                        CoreRules.SORT_PROJECT_TRANSPOSE,
                        CoreRules.SORT_JOIN_TRANSPOSE,
                        CoreRules.FILTER_REDUCE_EXPRESSIONS,
                        ...
                )).build(),
        false, MyRelMetadataProvider.INSTANCE);

private static final List<RelOptRule> PHASE_2_RULES = Arrays.asList(
        EnumerableRules.ENUMERABLE_VALUES_RULE,
        EnumerableRules.ENUMERABLE_UNION_RULE,
        EnumerableRules.ENUMERABLE_FILTER_TO_CALC_RULE,
        EnumerableRules.ENUMERABLE_PROJECT_TO_CALC_RULE,
        EnumerableRules.ENUMERABLE_COLLECT_RULE,
        EnumerableRules.ENUMERABLE_UNCOLLECT_RULE,
        ...);

private static final Program PHASE_2 =
        Programs.of(RuleSets.ofList(PHASE_2_RULES));

EnumerableRel optimizeLogicalPlan(RelNode query) {
    Program optPhases = Programs.sequence(PHASE_1, PHASE_2);
    RelTraitSet desiredTraitSet = query.getTraitSet()
            .replace(EnumerableConvention.INSTANCE)
            .simplify();
    return (EnumerableRel) optPhases.run(
            query.getCluster().getPlanner(), // this is a VolcanoPlanner
            query,
            desiredTraitSet,
            Collections.emptyList(),
            Collections.emptyList());
}


I hope it helps.

Best regards,
Ruben


On Tue, Sep 15, 2020 at 5:43 PM Enrico Olivelli <eo...@gmail.com> wrote:
