Posted to dev@calcite.apache.org by Stéphane Campinas <st...@gmail.com> on 2017/11/06 11:05:13 UTC

Volcano planner

Hi,

I am trying to use the Volcano planner to optimise queries based on
statistics, but I am having some trouble understanding how to achieve
this, even after looking at the tests in the GitHub repository.
A first goal would be to choose a join implementation based on its cost.

For example, a query tree can have several joins, and depending on the
position of a join in the tree, a certain implementation may be more
efficient than another.
Would that be possible? If so, could you share a code snippet?

Thanks

-- 
Campinas Stéphane
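The idea in the question above, choosing a join implementation by estimated cost, can be sketched with a toy cost model. The class names and cost formulas below are invented for illustration and are not Calcite's API; Calcite models cost with RelOptCost and chooses among physical operators via planner rules:

```java
import java.util.Comparator;
import java.util.List;

// Toy cost-based choice between two join implementations.
public class JoinCostDemo {
    interface JoinAlgorithm {
        String name();
        double cost(double leftRows, double rightRows);
    }

    // Hash join: build a table on one side, probe with the other (~linear).
    static final JoinAlgorithm HASH_JOIN = new JoinAlgorithm() {
        public String name() { return "HashJoin"; }
        public double cost(double l, double r) { return l + r; }
    };

    // Nested-loop join: scan the inner input once per outer row (~quadratic).
    static final JoinAlgorithm NESTED_LOOP = new JoinAlgorithm() {
        public String name() { return "NestedLoopJoin"; }
        public double cost(double l, double r) { return l * r; }
    };

    // Pick the implementation with the lowest estimated cost, the way a
    // cost-based planner compares equivalent physical plans.
    static JoinAlgorithm cheapest(double leftRows, double rightRows) {
        return List.of(HASH_JOIN, NESTED_LOOP).stream()
                .min(Comparator.comparingDouble(a -> a.cost(leftRows, rightRows)))
                .get();
    }

    public static void main(String[] args) {
        System.out.println(cheapest(1000, 1000).name()); // HashJoin
        System.out.println(cheapest(2, 1).name());       // NestedLoopJoin
    }
}
```

With row-count statistics per input, the same comparison done at different positions in the query tree can pick different implementations for different joins, which is the behaviour asked about.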

Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Sure will do soon!
Enrico

On Mon, 13 Nov 2017 at 16:44, Edmon Begoli <eb...@gmail.com> wrote:

> Enrico -- if you have a minute -- please share any of your findings,
> observations, and notes about the use of Calcite Planner, and we'll add
> that to the future documentation.
>
> Thank you,
> Edmon
>
> On Mon, Nov 13, 2017 at 10:33 AM, Enrico Olivelli <eo...@gmail.com>
> wrote:
>
> > Thank you all.
> > Once I understood the basic mechanism, I started working very well with
> > the Calcite planner.
> > I will report on the issue my troubles in understanding the first steps.
> > I think the most important starting point would be a glossary of the
> > main terms, like Convention and Traits, followed by a good example.
> > The fact that we use marker interfaces for Tables was not very intuitive.
> > I am going at speed now; in less than a week of man-work I am going to
> > fully replace the planner of my open-source product, HerdDB.
> >
> > Thanks
> > Enrico
> >
> > On Mon, 13 Nov 2017 at 16:21, Michael Mior <mm...@uwaterloo.ca> wrote:
> >
> > > Enrico,
> > >
> > > The documentation on the planner definitely could be improved. There's
> > > already been an issue created for this which can be followed here:
> > > https://issues.apache.org/jira/browse/CALCITE-2048
> > >
> > > --
> > > Michael Mior
> > > mmior@apache.org
> > >
> > > 2017-11-11 10:29 GMT-05:00 Enrico Olivelli <eo...@gmail.com>:
> > >
> > > > Got it by myself: the Table must implement ModifiableTable.
> > > > As far as I can tell, the planner is driven by the properties of the
> > > > Table, expressed using interfaces.
> > > >
> > > > I wonder if there is some summary of the behavior of the planner, or
> > > > some basic glossary.
> > > >
> > > > Cheers
> > > > Enrico
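Enrico's observation that the planner is driven by marker interfaces on the Table can be illustrated with a self-contained toy: a planning "rule" checks whether the table's class carries a marker interface before it applies. All names here are invented; in Calcite the real marker that enables INSERT/DELETE/UPDATE planning is ModifiableTable:

```java
// Toy illustration of dispatch on marker interfaces.
public class MarkerDemo {
    interface Table {}
    interface Modifiable {}                      // marker: table supports DML

    static class ReadOnlyTable implements Table {}
    static class WritableTable implements Table, Modifiable {}

    // A "rule" that only applies when the table carries the marker.
    static boolean canPlanDelete(Table table) {
        return table instanceof Modifiable;
    }

    public static void main(String[] args) {
        System.out.println(canPlanDelete(new WritableTable()));  // true
        System.out.println(canPlanDelete(new ReadOnlyTable()));  // false
    }
}
```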
> > > >
> > > > 2017-11-11 11:27 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > > >
> > > > > Sorry I cannot make it work for INSERT/DELETE/UPDATE
> > > > >
> > > > >
> > > > > This is the error for a DELETE
> > > > > Query: DELETE FROM MYTABLE where id=1
> > > > > -- Logical Plan
> > > > > LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
> > > > > flattened=[true])
> > > > >   LogicalProject(id=[$0], name=[$1])
> > > > >     LogicalFilter(condition=[=($0, 1)])
> > > > >       LogicalTableScan(table=[[x, MYTABLE]])
> > > > >
> > > > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167 sec
> > > > > <<< FAILURE!
> > > > > test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
> > > > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > > > [rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented;
> > planner
> > > > > state:
> > > > >
> > > > > Root: rel#15:Subset#3.ENUMERABLE.[].any
> > > > > Original rel:
> > > > > LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any],
> > > > table=[[x,
> > > > > MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
> > > > > cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
> > > > >   LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0],
> > > > > name=[$1]): rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu,
> > 0.0
> > > > > io}, id = 11
> > > > >     LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any],
> > > condition=[=($0,
> > > > > 1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0
> > io},
> > > > id
> > > > > = 9
> > > > >       LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any],
> > table=[[x,
> > > > > MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0
> cpu,
> > > 0.0
> > > > > io}, id = 4
> > > > >
> > > > > Sets:
> > > > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > > > >     rel#8:Subset#0.NONE.[].any, best=null,
> > > importance=0.7290000000000001
> > > > >         rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
> > > > > rowcount=15.0, cumulative cost={inf}
> > > > >     rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
> > > > > importance=0.36450000000000005
> > > > >         rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > > > 25:Subset#0.BINDABLE.[].any),
> > > > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > > > >     rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
> > > > > importance=0.36450000000000005
> > > > >         rel#24:BindableTableScan.BINDABLE.[].any(table=[x,
> > MYTABLE]),
> > > > > rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
> > > > > Set#1, type: RecordType(INTEGER id, INTEGER name)
> > > > >     rel#10:Subset#1.NONE.[].any, best=null, importance=0.81
> > > > >         rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.
> > > > [].any,condition==($0,
> > > > > 1)), rowcount=2.25, cumulative cost={inf}
> > > > >         rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.
> > > > NONE.[].any,id=$0,name=$1),
> > > > > rowcount=2.25, cumulative cost={inf}
> > > > >     rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
> > > > > importance=0.4510687500000001
> > > > >         rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#
> > > > > 17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1), rowcount=15.0,
> > cumulative
> > > > > cost={22.65 rows, 37.66 cpu, 0.0 io}
> > > > >         rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#
> > > > > 21:Subset#0.ENUMERABLE.[].any,condition==($0, 1)), rowcount=2.25,
> > > > > cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}
> > > > >         rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > > > 20:Subset#1.BINDABLE.[].any),
> > > > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > > > >     rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
> > > > >         rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
> > > > > MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15
> > > rows,
> > > > > 0.16 cpu, 0.0 io}
> > > > > Set#3, type: RecordType(BIGINT ROWCOUNT)
> > > > >     rel#14:Subset#3.NONE.[].any, best=null, importance=0.9
> > > > >         rel#13:LogicalTableModify.NONE.[].any(input=rel#10:
> > > > Subset#1.NONE.[].any,table=[x,
> > > > > MYTABLE],operation=DELETE,flattened=true), rowcount=2.25,
> cumulative
> > > > > cost={inf}
> > > > >     rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0
> > > > >         rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#
> > > > > 14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > > > rowcount=2.25, cumulative cost={inf}
> > > > >
> > > > >
> > > > >     at org.apache.calcite.plan.volcano.RelSubset$CheapestPlanReplacer.visit(RelSubset.java:441)
> > > > >     at org.apache.calcite.plan.volcano.RelSubset.buildCheapestPlan(RelSubset.java:291)
> > > > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:666)
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > This is for an Insert
> > > > > Query: INSERT INTO MYTABLE(id,name) values(1,2)
> > > > > -- Logical Plan
> > > > > LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
> > > > > flattened=[true])
> > > > >   LogicalValues(type=[RecordType(INTEGER id, INTEGER name)],
> > tuples=[[{
> > > > > 1, 2 }]])
> > > > >
> > > > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377 sec
> > > > > <<< FAILURE!
> > > > > test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
> > > > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > > > [rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented;
> planner
> > > > > state:
> > > > >
> > > > > Root: rel#7:Subset#1.ENUMERABLE.[].any
> > > > > Original rel:
> > > > > LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any],
> > > table=[[x,
> > > > > MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
> > > > > cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
> > > > >   LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1,
> 2
> > > > > }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0
> io},
> > id
> > > > = 2
> > > > >
> > > > > Sets:
> > > > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > > > >     rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
> > > > >         rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(
> > > > INTEGER
> > > > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> > > cost={inf}
> > > > >     rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9,
> > > > importance=0.405
> > > > >         rel#9:EnumerableValues.ENUMERABLE.[[0, 1],
> > [1]].broadcast(type=
> > > > RecordType(INTEGER
> > > > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> > > cost={1.0
> > > > > rows, 1.0 cpu, 0.0 io}
> > > > > Set#1, type: RecordType(BIGINT ROWCOUNT)
> > > > >     rel#6:Subset#1.NONE.[].any, best=null, importance=0.9
> > > > >         rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.
> > > > NONE.[].any,table=[x,
> > > > > MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
> > > > > cost={inf}
> > > > >     rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0
> > > > >         rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:
> > > > > Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > > > rowcount=1.0, cumulative cost={inf}
> > > > >
> > > > >
> > > > >     at org.apache.calcite.plan.volcano.RelSubset$CheapestPlanReplacer.visit(RelSubset.java:441)
> > > > >     at org.apache.calcite.plan.volcano.RelSubset.buildCheapestPlan(RelSubset.java:291)
> > > > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:666)
> > > > >
> > > > >
> > > > >
> > > > > I really appreciate your help
> > > > > Enrico
> > > > >
> > > > > 2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > > > >
> > > > >> The example from Luis works like a charm.
> > > > >> I have some questions,I will start separate threads
> > > > >>
> > > > >> Thank you
> > > > >> Enrico
> > > > >>
> > > > >> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > > > >>
> > > > >>> Luis, thank you.
> > > > >>> My case is the second one: I want to use the Calcite planner
> > > > >>> internally in a database system. I will try your suggestion.
> > > > >>>
> > > > >>> Enrico
> > > > >>>
> > > > >>> On Wed, 8 Nov 2017 at 20:14, Luis Fernando Kauer
> > > > >>> <lf...@yahoo.com.br.invalid> wrote:
> > > > >>>
> > > > >>>> If you intend to run a query then you should follow the tutorial
> > > > >>>> and try to change the CSV adapter. You can add the table to the
> > > > >>>> schema at runtime using something like:
> > > > >>>>
> > > ---------------------------------------------------------------------
> > > > >>>>
> > > > >>>> Class.forName("org.apache.calcite.jdbc.Driver");
> > > > >>>> Properties info = new Properties();
> > > > >>>> info.setProperty("lex", "MYSQL_ANSI");
> > > > >>>>
> > > > >>>> final Connection connection =
> > > > >>>>     DriverManager.getConnection("jdbc:calcite:", info);
> > > > >>>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
> > > > >>>> SchemaPlus root = conn.getRootSchema();
> > > > >>>> root.add("MYTABLE", new TableImpl());
> > > > >>>> Statement statement = conn.createStatement();
> > > > >>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
> > > > >>>>
> > > ---------------------------------------------------------------------
> > > > >>>>
> > > > >>>> But if you only want to parse, validate and optimize the query
> > > > >>>> plan, you can use something like:
> > > > >>>>
> > > ---------------------------------------------------------------------
> > > > >>>>     Table table = new TableImpl();
> > > > >>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
> > > > >>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
> > > > >>>>     schema.add("MYTABLE", table);
> > > > >>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
> > > > >>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
> > > > >>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
> > > > >>>>     SqlParser.Config parserConfig =
> > > > >>>>         SqlParser.configBuilder(SqlParser.Config.DEFAULT)
> > > > >>>>             .setCaseSensitive(false)
> > > > >>>>             .build();
> > > > >>>>
> > > > >>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
> > > > >>>>         .parserConfig(parserConfig)
> > > > >>>>         .defaultSchema(schema)
> > > > >>>>         .traitDefs(traitDefs)
> > > > >>>>         // define the rules you want to apply
> > > > >>>>         .programs(Programs.ofRules(Programs.RULE_SET))
> > > > >>>>         .build();
> > > > >>>>     Planner planner = Frameworks.getPlanner(config);
> > > > >>>>     SqlNode n = planner.parse("SELECT * FROM MYTABLE WHERE ID < 10");
> > > > >>>>     n = planner.validate(n);
> > > > >>>>     RelNode root = planner.rel(n).project();
> > > > >>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
> > > > >>>>         SqlExplainFormat.TEXT, SqlExplainLevel.DIGEST_ATTRIBUTES));
> > > > >>>>     RelOptCluster cluster = root.getCluster();
> > > > >>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
> > > > >>>>     RelTraitSet desiredTraits =
> > > > >>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
> > > > >>>>     final RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
> > > > >>>>     optPlanner.setRoot(newRoot);
> > > > >>>>     RelNode bestExp = optPlanner.findBestExp();
> > > > >>>>     System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
> > > > >>>>         SqlExplainFormat.TEXT, SqlExplainLevel.DIGEST_ATTRIBUTES));
> > > ---------------------------------------------------------------------
> > > > >>>>
> > > > >>>> The main problem was that you were not setting the desired trait to
> > > > >>>> use EnumerableConvention.
> > > > >>>> You can see that instead of implementing all the interfaces you
> > > > >>>> should use the available builders and classes.
> > > > >>>> Also, for implementing Table I think you should extend AbstractTable
> > > > >>>> instead of implementing the Table interface, and you can use
> > > > >>>> Statistics.of instead of implementing the Statistic interface if it
> > > > >>>> is simple:
> > > > >>>>
> > > ---------------------------------------------------------------------
> > > > >>>>
> > > > >>>>   private static class TableImpl extends AbstractTable {
> > > > >>>>     public TableImpl() {}
> > > > >>>>     @Override
> > > > >>>>     public RelDataType getRowType(RelDataTypeFactory typeFactory) {
> > > > >>>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
> > > > >>>>       return builder
> > > > >>>>           .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
> > > > >>>>           .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
> > > > >>>>           .build();
> > > > >>>>     }
> > > > >>>>     @Override
> > > > >>>>     public Statistic getStatistic() {
> > > > >>>>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
> > > > >>>>           ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
> > > > >>>>     }
> > > > >>>>   }
> > > > >>>>
> > > ---------------------------------------------------------------------
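The CannotPlanException shown earlier in this thread has a simple shape that a toy model can expose: the planner may only return a candidate whose convention matches the one requested for the root, so without a conversion to EnumerableConvention the best plan stays unavailable. Everything below is invented for illustration and is not Calcite code:

```java
import java.util.Comparator;
import java.util.List;

// Toy model: findBestExp can only pick a candidate in the desired convention.
public class TraitDemo {
    static final class Candidate {
        final String name;
        final String convention;
        final double cost;
        Candidate(String name, String convention, double cost) {
            this.name = name;
            this.convention = convention;
            this.cost = cost;
        }
    }

    // Loosely mimics VolcanoPlanner.findBestExp: cheapest candidate in the
    // desired convention, or an error like CannotPlanException.
    static Candidate findBestExp(List<Candidate> candidates, String desired) {
        return candidates.stream()
                .filter(c -> c.convention.equals(desired))
                .min(Comparator.comparingDouble(c -> c.cost))
                .orElseThrow(() -> new IllegalStateException(
                        "node could not be implemented in convention " + desired));
    }

    public static void main(String[] args) {
        List<Candidate> subset = List.of(
                new Candidate("LogicalProject", "NONE", Double.POSITIVE_INFINITY),
                new Candidate("EnumerableProject", "ENUMERABLE", 46.0));
        // With an ENUMERABLE candidate present, planning succeeds; with only
        // NONE candidates it fails, which is what the stack traces show.
        System.out.println(findBestExp(subset, "ENUMERABLE").name);
    }
}
```

This is why requesting the Enumerable trait via changeTraits matters: it is what causes physical (ENUMERABLE) candidates to be generated at all.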
> > > > >>>>
> > > > >>>>
> > > > >>>>     On Wednesday, 8 November 2017 12:15:34 BRST, Enrico
> > > > >>>> Olivelli <eo...@gmail.com> wrote:
> > > > >>>>
> > > > >>>>  Hi,
> > > > >>>> I am playing with the planner but I can't get it to work for a very
> > > > >>>> simple query.
> > > > >>>> The table is MYTABLE(id integer, name varchar); its definition is
> > > > >>>> given in the code snippet below.
> > > > >>>> The query is "SELECT * FROM MYTABLE".
> > > > >>>>
> > > > >>>> The error is:
> > > > >>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > > >>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented;
> planner
> > > > >>>> state:
> > > > >>>>
> > > > >>>> Root: rel#7:Subset#0.NONE.[0, 1].any
> > > > >>>> Original rel:
> > > > >>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
> > > > >>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0
> > cpu,
> > > > 0.0
> > > > >>>> io}, id = 5
> > > > >>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0,
> > 1].any],
> > > > >>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost =
> > > {15.0
> > > > >>>> rows,
> > > > >>>> 16.0 cpu, 0.0 io}, id = 2
> > > > >>>>
> > > > >>>> Sets:
> > > > >>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
> > > > >>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2,
> > importance=0.9
> > > > >>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
> > > > >>>> 1]].any(table=[default,
> > > > >>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu,
> > 0.0
> > > > io}
> > > > >>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
> > > > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> > 1].any,id=$0,name=$1),
> > > > >>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
> > > > >>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
> > > > >>>>         rel#5:LogicalProject.NONE.[[0,
> > > > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> > 1].any,id=$0,name=$1),
> > > > >>>> rowcount=15.0, cumulative cost={inf}
> > > > >>>>         rel#8:AbstractConverter.NONE.[0,
> > > > >>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> > > > >>>> 1].any,convention=NONE,sort=[0,
> > > > >>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
> > > > >>>>
> > > > >>>> Does anybody have a hint for me?
> > > > >>>> I am using the current master of Calcite (1.15-SNAPSHOT).
> > > > >>>>
> > > > >>>> Thank you
> > > > >>>>
> > > > >>>> Enrico
> > > > >>>>
> > > > >>>>
> > > > >>>> My code is:
> > > > >>>>   @Test
> > > > >>>>     public void test() throws Exception {
> > > > >>>>         Table table = new TableImpl();
> > > > >>>>         CalciteSchema schema =
> > > > >>>>             CalciteSchema.createRootSchema(true, true, "default");
> > > > >>>>         schema.add("MYTABLE", table);
> > > > >>>>         SchemaPlus rootSchema = schema.plus();
> > > > >>>>         SqlRexConvertletTable convertletTable =
> > > > >>>>             StandardConvertletTable.INSTANCE;
> > > > >>>>         SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
> > > > >>>>         FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config,
> > > > >>>>             rootSchema, convertletTable);
> > > > >>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
> > > > >>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
> > > > >>>>         sqlNode = imp.validate(sqlNode);
> > > > >>>>         RelRoot relRoot = imp.rel(sqlNode);
> > > > >>>>         RelNode project = relRoot.project();
> > > > >>>>         RelOptPlanner planner = project.getCluster().getPlanner();
> > > > >>>>         planner.setRoot(project);
> > > > >>>>         RelNode findBestExp = planner.findBestExp();
> > > > >>>>         System.out.println("best:" + findBestExp);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     private class FrameworkConfigImpl implements FrameworkConfig {
> > > > >>>>
> > > > >>>>         private final SqlToRelConverter.Config config;
> > > > >>>>         private final SchemaPlus rootSchema;
> > > > >>>>         private final SqlRexConvertletTable convertletTable;
> > > > >>>>
> > > > >>>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
> > > > >>>>                 SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
> > > > >>>>             this.config = config;
> > > > >>>>             this.rootSchema = rootSchema;
> > > > >>>>             this.convertletTable = convertletTable;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public SqlParser.Config getParserConfig() {
> > > > >>>>             return SqlParser.Config.DEFAULT;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
> > > > >>>>             return config;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public SchemaPlus getDefaultSchema() {
> > > > >>>>             return rootSchema;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public RexExecutor getExecutor() {
> > > > >>>>             return new RexExecutorImpl(new DataContextImpl());
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public ImmutableList<Program> getPrograms() {
> > > > >>>>             return ImmutableList.of(Programs.standard());
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public SqlOperatorTable getOperatorTable() {
> > > > >>>>             return new SqlStdOperatorTable();
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public RelOptCostFactory getCostFactory() {
> > > > >>>>             return null;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
> > > > >>>>
> > > > >>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
> > > > >>>>                     RelCollationTraitDef.INSTANCE,
> > > > >>>>                     RelDistributionTraitDef.INSTANCE
> > > > >>>>             );
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public SqlRexConvertletTable getConvertletTable() {
> > > > >>>>             return convertletTable;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public Context getContext() {
> > > > >>>>             return new ContextImpl();
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public RelDataTypeSystem getTypeSystem() {
> > > > >>>>             return RelDataTypeSystem.DEFAULT;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         class DataContextImpl implements DataContext {
> > > > >>>>
> > > > >>>>             public DataContextImpl() {
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public SchemaPlus getRootSchema() {
> > > > >>>>                 return rootSchema;
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public JavaTypeFactory getTypeFactory() {
> > > > >>>>                 throw new UnsupportedOperationException("Not supported yet.");
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public QueryProvider getQueryProvider() {
> > > > >>>>                 throw new UnsupportedOperationException("Not supported yet.");
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public Object get(String name) {
> > > > >>>>                 throw new UnsupportedOperationException("Not supported yet.");
> > > > >>>>             }
> > > > >>>>
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         private class ContextImpl implements Context {
> > > > >>>>
> > > > >>>>             public ContextImpl() {
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public <C> C unwrap(Class<C> aClass) {
> > > > >>>>                 return null;
> > > > >>>>             }
> > > > >>>>         }
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     private static class TableImpl implements Table {
> > > > >>>>
> > > > >>>>         public TableImpl() {
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public RelDataType getRowType(RelDataTypeFactory typeFactory) {
> > > > >>>>             return typeFactory
> > > > >>>>                     .builder()
> > > > >>>>                     .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
> > > > >>>>                     .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
> > > > >>>>                     .build();
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public Statistic getStatistic() {
> > > > >>>>             return new StatisticImpl();
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public Schema.TableType getJdbcTableType() {
> > > > >>>>             throw new UnsupportedOperationException("Not supported yet.");
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public boolean isRolledUp(String column) {
> > > > >>>>             return true;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         @Override
> > > > >>>>         public boolean rolledUpColumnValidInsideAgg(String column,
> > > > >>>>                 SqlCall call, SqlNode parent, CalciteConnectionConfig config) {
> > > > >>>>             return false;
> > > > >>>>         }
> > > > >>>>
> > > > >>>>         class StatisticImpl implements Statistic {
> > > > >>>>
> > > > >>>>             public StatisticImpl() {
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public Double getRowCount() {
> > > > >>>>                 return 15d;
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public boolean isKey(ImmutableBitSet columns) {
> > > > >>>>                 return false;
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public List<RelReferentialConstraint> getReferentialConstraints() {
> > > > >>>>                 return Collections.emptyList();
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public List<RelCollation> getCollations() {
> > > > >>>>                 RelCollation c = new RelCollationImpl(
> > > > >>>>                         ImmutableList.of(
> > > > >>>>                                 new RelFieldCollation(0, RelFieldCollation.Direction.ASCENDING),
> > > > >>>>                                 new RelFieldCollation(1, RelFieldCollation.Direction.ASCENDING)
> > > > >>>>                         )) {
> > > > >>>>                 };
> > > > >>>>                 return Arrays.asList(c);
> > > > >>>>             }
> > > > >>>>
> > > > >>>>             @Override
> > > > >>>>             public RelDistribution getDistribution() {
> > > > >>>>                 return RelDistributions.ANY;
> > > > >>>>             }
> > > > >>>>         }
> > > > >>>>     }
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
> > > > >>>>
> > > > >>>> > Yes, that is definitely possible. I am too busy to write a code
> > > > >>>> > snippet, but you should take a look at PlannerTest.
> > > > >>>> >
> > > > >>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> > > > >>>> > stephane.campinas@gmail.com> wrote:
> > > > >>>> > >
> > > > >>>> > > Hi,
> > > > >>>> > >
> > > > >>>> > > I am trying to use the Volcano planner in order to optimise
> > > > queries
> > > > >>>> based
> > > > >>>> > > on statistics but I am having some issues understanding how
> to
> > > > >>>> achieve
> > > > >>>> > > this, even after looking at the Github repository for tests.
> > > > >>>> > > A first goal I would like to achieve would be to choose a
> join
> > > > >>>> > > implementation based on its cost.
> > > > >>>> > >
> > > > >>>> > > For example, a query tree can have several joins, and
> > depending
> > > on
> > > > >>>> the
> > > > >>>> > > position of the join in the tree, a certain implementation
> > > would
> > > > >>>> be more
> > > > >>>> > > efficient than another.
> > > > >>>> > > Would that be possible ? If so, could you share a code
> > snippet ?
> > > > >>>> > >
> > > > >>>> > > Thanks
> > > > >>>> > >
> > > > >>>> > > --
> > > > >>>> > > Campinas Stéphane
> > > > >>>> >
> > > > >>>> >
> > > > >>>
> > > > >>> --
> > > > >>>
> > > > >>>
> > > > >>> -- Enrico Olivelli
> > > > >>>
> > > > >>
> > > > >>
> > > > >
> > > >
> > >
> > --
> >
> >
> > -- Enrico Olivelli
> >
>
-- 


-- Enrico Olivelli

Re: Volcano planner

Posted by Edmon Begoli <eb...@gmail.com>.
Enrico -- if you have a minute -- please share any of your findings,
observations, and notes about the use of Calcite Planner, and we'll add
that to the future documentation.

Thank you,
Edmon

On Mon, Nov 13, 2017 at 10:33 AM, Enrico Olivelli <eo...@gmail.com>
wrote:

> Thank you all.
> Once I understood the basic mechanism, I started working very well with
> the Calcite planner.
> I will report on the issue my troubles in understanding the first steps.
> I think the most important starting point would be a glossary of the main
> terms, like Convention and Traits, followed by a good example.
> The fact that we use marker interfaces for Tables was not very intuitive.
> I am going at speed now; in less than a week of man-work I am going to
> fully replace the planner of my open-source product, HerdDB.
>
> Thanks
> Enrico
>
> On Mon, 13 Nov 2017 at 16:21, Michael Mior <mm...@uwaterloo.ca> wrote:
>
> > Enrico,
> >
> > The documentation on the planner definitely could be improved. There's
> > already been an issue created for this which can be followed here:
> > https://issues.apache.org/jira/browse/CALCITE-2048
> >
> > --
> > Michael Mior
> > mmior@apache.org
> >
> > 2017-11-11 10:29 GMT-05:00 Enrico Olivelli <eo...@gmail.com>:
> >
> > > Got it by myself: the Table must implement ModifiableTable.
> > > As far as I can tell, the planner is driven by the properties of the
> > > Table, expressed using interfaces.
> > >
> > > I wonder if there is some summary of the behavior of the planner, or
> > > some basic glossary.
> > >
> > > Cheers
> > > Enrico
> > >
> > > 2017-11-11 11:27 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > >
> > > > Sorry I cannot make it work for INSERT/DELETE/UPDATE
> > > >
> > > >
> > > > This is the error for a DELETE
> > > > Query: DELETE FROM MYTABLE where id=1
> > > > -- Logical Plan
> > > > LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
> > > > flattened=[true])
> > > >   LogicalProject(id=[$0], name=[$1])
> > > >     LogicalFilter(condition=[=($0, 1)])
> > > >       LogicalTableScan(table=[[x, MYTABLE]])
> > > >
> > > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167 sec
> > > > <<< FAILURE!
> > > > test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
> > > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > > [rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented;
> planner
> > > > state:
> > > >
> > > > Root: rel#15:Subset#3.ENUMERABLE.[].any
> > > > Original rel:
> > > > LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any],
> > > table=[[x,
> > > > MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
> > > > cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
> > > >   LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0],
> > > > name=[$1]): rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu,
> 0.0
> > > > io}, id = 11
> > > >     LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any],
> > condition=[=($0,
> > > > 1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0
> io},
> > > id
> > > > = 9
> > > >       LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any],
> table=[[x,
> > > > MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu,
> > 0.0
> > > > io}, id = 4
> > > >
> > > > Sets:
> > > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > > >     rel#8:Subset#0.NONE.[].any, best=null,
> > importance=0.7290000000000001
> > > >         rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
> > > > rowcount=15.0, cumulative cost={inf}
> > > >     rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
> > > > importance=0.36450000000000005
> > > >         rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > > 25:Subset#0.BINDABLE.[].any),
> > > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > > >     rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
> > > > importance=0.36450000000000005
> > > >         rel#24:BindableTableScan.BINDABLE.[].any(table=[x,
> MYTABLE]),
> > > > rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
> > > > Set#1, type: RecordType(INTEGER id, INTEGER name)
> > > >     rel#10:Subset#1.NONE.[].any, best=null, importance=0.81
> > > >         rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.
> > > [].any,condition==($0,
> > > > 1)), rowcount=2.25, cumulative cost={inf}
> > > >         rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.
> > > NONE.[].any,id=$0,name=$1),
> > > > rowcount=2.25, cumulative cost={inf}
> > > >     rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
> > > > importance=0.4510687500000001
> > > >         rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#
> > > > 17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1), rowcount=15.0,
> cumulative
> > > > cost={22.65 rows, 37.66 cpu, 0.0 io}
> > > >         rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#
> > > > 21:Subset#0.ENUMERABLE.[].any,condition==($0, 1)), rowcount=2.25,
> > > > cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}
> > > >         rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > > 20:Subset#1.BINDABLE.[].any),
> > > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > > >     rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
> > > >         rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
> > > > MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15
> > rows,
> > > > 0.16 cpu, 0.0 io}
> > > > Set#3, type: RecordType(BIGINT ROWCOUNT)
> > > >     rel#14:Subset#3.NONE.[].any, best=null, importance=0.9
> > > >         rel#13:LogicalTableModify.NONE.[].any(input=rel#10:
> > > Subset#1.NONE.[].any,table=[x,
> > > > MYTABLE],operation=DELETE,flattened=true), rowcount=2.25, cumulative
> > > > cost={inf}
> > > >     rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0
> > > >         rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#
> > > > 14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > > rowcount=2.25, cumulative cost={inf}
> > > >
> > > >
> > > >     at org.apache.calcite.plan.volcano.RelSubset$
> > > > CheapestPlanReplacer.visit(RelSubset.java:441)
> > > >     at org.apache.calcite.plan.volcano.RelSubset.
> > > > buildCheapestPlan(RelSubset.java:291)
> > > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > > > findBestExp(VolcanoPlanner.java:666)
> > > >
> > > >
> > > >
> > > >
> > > > This is for an INSERT
> > > > Query: INSERT INTO MYTABLE(id,name) values(1,2)
> > > > -- Logical Plan
> > > > LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
> > > > flattened=[true])
> > > >   LogicalValues(type=[RecordType(INTEGER id, INTEGER name)],
> tuples=[[{
> > > > 1, 2 }]])
> > > >
> > > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377
> > sec
> > > > <<< FAILURE!
> > > > test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
> > > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > > [rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented; planner
> > > > state:
> > > >
> > > > Root: rel#7:Subset#1.ENUMERABLE.[].any
> > > > Original rel:
> > > > LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any],
> > table=[[x,
> > > > MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
> > > > cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
> > > >   LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1, 2
> > > > }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io},
> id
> > > = 2
> > > >
> > > > Sets:
> > > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > > >     rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
> > > >         rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(
> > > INTEGER
> > > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> > cost={inf}
> > > >     rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9,
> > > importance=0.405
> > > >         rel#9:EnumerableValues.ENUMERABLE.[[0, 1],
> [1]].broadcast(type=
> > > RecordType(INTEGER
> > > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> > cost={1.0
> > > > rows, 1.0 cpu, 0.0 io}
> > > > Set#1, type: RecordType(BIGINT ROWCOUNT)
> > > >     rel#6:Subset#1.NONE.[].any, best=null, importance=0.9
> > > >         rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.
> > > NONE.[].any,table=[x,
> > > > MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
> > > > cost={inf}
> > > >     rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0
> > > >         rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:
> > > > Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > > rowcount=1.0, cumulative cost={inf}
> > > >
> > > >
> > > >     at org.apache.calcite.plan.volcano.RelSubset$
> > > > CheapestPlanReplacer.visit(RelSubset.java:441)
> > > >     at org.apache.calcite.plan.volcano.RelSubset.
> > > > buildCheapestPlan(RelSubset.java:291)
> > > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > > > findBestExp(VolcanoPlanner.java:666)
> > > >
> > > >
> > > >
> > > > I really appreciate your help
> > > > Enrico
> > > >
> > > > 2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > > >
> > > >> The example from Luis works like a charm.
> > > >> I have some questions; I will start separate threads.
> > > >>
> > > >> Thank you
> > > >> Enrico
> > > >>
> > > >> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > > >>
> > > >>> Luis thank you,
> > >>> my case is the second one. I want to use the Calcite planner internally
> > >>> in a database system. I will try with your suggestion.
> > > >>>
> > > >>> Enrico
> > > >>>
> > > >>> On Wed, Nov 8, 2017 at 20:14, Luis Fernando Kauer
> > > >>> <lf...@yahoo.com.br.invalid> wrote:
> > > >>>
> > > >>>>  If you intend to run a query then you should follow the tutorial
> > > >>>> and try to change the CSV adapter. You can add the table to the
> > > >>>> schema at runtime using something like:
> > > >>>>
> > ---------------------------------------------------------------------
> > > >>>>
> > > >>>> Class.forName("org.apache.calcite.jdbc.Driver");
> > > >>>> Properties info = new Properties();
> > > >>>> info.setProperty("lex", "MYSQL_ANSI");
> > > >>>>
> > > >>>> final Connection connection = DriverManager.getConnection("
> > > jdbc:calcite:",
> > > >>>> info);
> > > >>>> CalciteConnection conn = connection.unwrap(
> CalciteConnection.class);
> > > >>>> SchemaPlus root = conn.getRootSchema();
> > > >>>> root.add("MYTABLE", new TableImpl());
> > > >>>> Statement statement = conn.createStatement();
> > > >>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
> > > >>>>
> > ---------------------------------------------------------------------
> > > >>>>
> > > >>>> But if you only want to parse, validate and optimize the query
> plan,
> > > >>>> you can use something like:
> > > >>>>
> > ---------------------------------------------------------------------
> > > >>>>     Table table = new TableImpl();
> > > >>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(
> true);
> > > >>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
> > > >>>>     schema.add("MYTABLE", table);
> > > >>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
> > > >>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
> > > >>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
> > > >>>>     SqlParser.Config parserConfig =
> > > >>>>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
> > > >>>>       .setCaseSensitive(false)
> > > >>>>       .build();
> > > >>>>
> > > >>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
> > > >>>>         .parserConfig(parserConfig)
> > > >>>>         .defaultSchema(schema)
> > > >>>>         .traitDefs(traitDefs)
> > > >>>>         // define the rules you want to apply
> > > >>>>
> > > >>>>         .programs(Programs.ofRules(Programs.RULE_SET))
> > > >>>>         .build();
> > > >>>>     Planner planner = Frameworks.getPlanner(config);
> > > >>>>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID <
> > 10");
> > > >>>>     n = planner.validate(n);
> > > >>>>     RelNode root = planner.rel(n).project();
> > > >>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan",
> root,
> > > >>>> SqlExplainFormat.TEXT,
> > > >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> > > >>>>     RelOptCluster cluster = root.getCluster();
> > > >>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
> > > >>>>     RelTraitSet desiredTraits =
> > > >>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE)
> ;
> > > >>>>     final RelNode newRoot = optPlanner.changeTraits(root,
> > > >>>> desiredTraits);
> > > >>>>     optPlanner.setRoot(newRoot);
> > > >>>>     RelNode bestExp = optPlanner.findBestExp();
> > > >>>> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
> > > >>>> SqlExplainFormat.TEXT,
> > > >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> > > >>>>  ------------------------------------------------------------
> > > ---------
> > > >>>>
> > > >>>> The main problem was that you were not setting the desired trait
> > > >>>> to use EnumerableConvention.
> > > >>>> You can see that instead of implementing all the interfaces you
> > > >>>> should use the available builders and classes.
> > > >>>> Also, for implementing Table, I think you should extend AbstractTable
> > > >>>> instead of implementing the Table interface, and you can use
> > > >>>> Statistics.of instead of implementing the Statistic interface if it
> > > >>>> is simple:
> > > >>>>
> > ---------------------------------------------------------------------
> > > >>>>
> > > >>>>   private static class TableImpl extends AbstractTable {
> > > >>>>     public TableImpl() {}
> > > >>>>     @Override    public RelDataType getRowType(RelDataTypeFactory
> > > >>>> typeFactory) {
> > > >>>>       Builder builder = new RelDataTypeFactory.Builder(
> typeFactory);
> > > >>>>       return builder.add("id", typeFactory.createSqlType(SqlT
> > > >>>> ypeName.INTEGER))
> > > >>>>           .add("name", typeFactory.createSqlType(SqlT
> > > >>>> ypeName.VARCHAR)).build();
> > > >>>>     }
> > > >>>>     @Override
> > > >>>>     public Statistic getStatistic() {
> > > >>>>        return Statistics.of(15D,
> > ImmutableList.<ImmutableBitSet>of(),
> > > >>>>           ImmutableList.of(RelCollations.of(0),
> > > RelCollations.of(1)));
> > > >>>>     }
> > > >>>>
> > > >>>>   }
> > > >>>>
> > ---------------------------------------------------------------------
> > > >>>>
> > > >>>>
> > > >>>>     On Wednesday, November 8, 2017, 12:15:34 BRST, Enrico
> > > >>>> Olivelli <eo...@gmail.com> wrote:
> > > >>>>
> > > >>>>  Hi,
> > > >>>> I am playing with the planner but I can't get it to work for a very
> > > >>>> simple query.
> > > >>>> The table is
> > > >>>>  MYTABLE(id integer, name varchar)            definition is given
> in
> > > >>>> code
> > > >>>> snippet
> > > >>>> the query is "SELECT * FROM MYTABLE"
> > > >>>>
> > > >>>> The error is:
> > > >>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > >>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner
> > > >>>> state:
> > > >>>>
> > > >>>> Root: rel#7:Subset#0.NONE.[0, 1].any
> > > >>>> Original rel:
> > > >>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
> > > >>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0
> cpu,
> > > 0.0
> > > >>>> io}, id = 5
> > > >>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0,
> 1].any],
> > > >>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost =
> > {15.0
> > > >>>> rows,
> > > >>>> 16.0 cpu, 0.0 io}, id = 2
> > > >>>>
> > > >>>> Sets:
> > > >>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
> > > >>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2,
> importance=0.9
> > > >>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
> > > >>>> 1]].any(table=[default,
> > > >>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu,
> 0.0
> > > io}
> > > >>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
> > > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> 1].any,id=$0,name=$1),
> > > >>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
> > > >>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
> > > >>>>         rel#5:LogicalProject.NONE.[[0,
> > > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> 1].any,id=$0,name=$1),
> > > >>>> rowcount=15.0, cumulative cost={inf}
> > > >>>>         rel#8:AbstractConverter.NONE.[0,
> > > >>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> > > >>>> 1].any,convention=NONE,sort=[0,
> > > >>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
> > > >>>>
> > > >>>> Does anybody have a hint for me?
> > > >>>> I am using current master of Calcite (1.15-SNAPSHOT)
> > > >>>>
> > > >>>> Thank you
> > > >>>>
> > > >>>> Enrico
> > > >>>>
> > > >>>>
> > > >>>> My code is:
> > > >>>>   @Test
> > > >>>>     public void test() throws Exception {
> > > >>>>         Table table = new TableImpl();
> > > >>>>         CalciteSchema schema = CalciteSchema.
> createRootSchema(true,
> > > >>>> true,
> > > >>>> "default");
> > > >>>>         schema.add("MYTABLE", table);
> > > >>>>         SchemaPlus rootSchema = schema.plus();
> > > >>>>         SqlRexConvertletTable convertletTable =
> > > >>>> StandardConvertletTable.INSTANCE;
> > > >>>>         SqlToRelConverter.Config config =
> > > SqlToRelConverter.Config.DEFAU
> > > >>>> LT;
> > > >>>>         FrameworkConfig frameworkConfig = new
> > > >>>> FrameworkConfigImpl(config,
> > > >>>> rootSchema, convertletTable);
> > > >>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
> > > >>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
> > > >>>>         sqlNode = imp.validate(sqlNode);
> > > >>>>         RelRoot relRoot = imp.rel(sqlNode);
> > > >>>>         RelNode project = relRoot.project();
> > > >>>>         RelOptPlanner planner = project.getCluster().
> getPlanner();
> > > >>>>         planner.setRoot(project);
> > > >>>>         RelNode findBestExp = planner.findBestExp();
> > > >>>>         System.out.println("best:" + findBestExp);
> > > >>>>     }
> > > >>>>
> > > >>>>     private class FrameworkConfigImpl implements FrameworkConfig {
> > > >>>>
> > > >>>>         private final SqlToRelConverter.Config config;
> > > >>>>         private final SchemaPlus rootSchema;
> > > >>>>         private final SqlRexConvertletTable convertletTable;
> > > >>>>
> > > >>>>         public FrameworkConfigImpl(SqlToRelConverter.Config
> config,
> > > >>>> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
> > > >>>>             this.config = config;
> > > >>>>             this.rootSchema = rootSchema;
> > > >>>>             this.convertletTable = convertletTable;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public SqlParser.Config getParserConfig() {
> > > >>>>             return SqlParser.Config.DEFAULT;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public SqlToRelConverter.Config
> > getSqlToRelConverterConfig() {
> > > >>>>             return config;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public SchemaPlus getDefaultSchema() {
> > > >>>>             return rootSchema;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public RexExecutor getExecutor() {
> > > >>>>             return new RexExecutorImpl(new DataContextImpl());
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public ImmutableList<Program> getPrograms() {
> > > >>>>             return ImmutableList.of(Programs.standard());
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public SqlOperatorTable getOperatorTable() {
> > > >>>>             return new SqlStdOperatorTable();
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public RelOptCostFactory getCostFactory() {
> > > >>>>             return null;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
> > > >>>>
> > > >>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
> > > >>>>                     RelCollationTraitDef.INSTANCE,
> > > >>>>                     RelDistributionTraitDef.INSTANCE
> > > >>>>             );
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public SqlRexConvertletTable getConvertletTable() {
> > > >>>>             return convertletTable;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public Context getContext() {
> > > >>>>             return new ContextImpl();
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public RelDataTypeSystem getTypeSystem() {
> > > >>>>             return RelDataTypeSystem.DEFAULT;
> > > >>>>         }
> > > >>>>
> > > >>>>         class DataContextImpl implements DataContext {
> > > >>>>
> > > >>>>             public DataContextImpl() {
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public SchemaPlus getRootSchema() {
> > > >>>>                 return rootSchema;
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public JavaTypeFactory getTypeFactory() {
> > > >>>>                 throw new UnsupportedOperationException("Not
> > > supported
> > > >>>> yet."); //To change body of generated methods, choose Tools |
> > > Templates.
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public QueryProvider getQueryProvider() {
> > > >>>>                 throw new UnsupportedOperationException("Not
> > > supported
> > > >>>> yet."); //To change body of generated methods, choose Tools |
> > > Templates.
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public Object get(String name) {
> > > >>>>                 throw new UnsupportedOperationException("Not
> > > supported
> > > >>>> yet."); //To change body of generated methods, choose Tools |
> > > Templates.
> > > >>>>             }
> > > >>>>
> > > >>>>         }
> > > >>>>
> > > >>>>         private class ContextImpl implements Context {
> > > >>>>
> > > >>>>             public ContextImpl() {
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public <C> C unwrap(Class<C> aClass) {
> > > >>>>                 return null;
> > > >>>>             }
> > > >>>>         }
> > > >>>>     }
> > > >>>>
> > > >>>>     private static class TableImpl implements Table {
> > > >>>>
> > > >>>>         public TableImpl() {
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public RelDataType getRowType(RelDataTypeFactory
> > typeFactory)
> > > {
> > > >>>>             return typeFactory
> > > >>>>                     .builder()
> > > >>>>                     .add("id",
> > > >>>> typeFactory.createSqlType(SqlTypeName.INTEGER))
> > > >>>>                     .add("name",
> > > >>>> typeFactory.createSqlType(SqlTypeName.VARCHAR))
> > > >>>>                     .build();
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public Statistic getStatistic() {
> > > >>>>             return new StatisticImpl();
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public Schema.TableType getJdbcTableType() {
> > > >>>>             throw new UnsupportedOperationException("Not
> supported
> > > >>>> yet.");
> > > >>>> //To change body of generated methods, choose Tools | Templates.
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public boolean isRolledUp(String column) {
> > > >>>>             return true;
> > > >>>>         }
> > > >>>>
> > > >>>>         @Override
> > > >>>>         public boolean rolledUpColumnValidInsideAgg(String
> column,
> > > >>>> SqlCall
> > > >>>> call, SqlNode parent, CalciteConnectionConfig config) {
> > > >>>>             return false;
> > > >>>>         }
> > > >>>>
> > > >>>>         class StatisticImpl implements Statistic {
> > > >>>>
> > > >>>>             public StatisticImpl() {
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public Double getRowCount() {
> > > >>>>                 return 15d;
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public boolean isKey(ImmutableBitSet columns) {
> > > >>>>                 return false;
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public List<RelReferentialConstraint>
> > > >>>> getReferentialConstraints() {
> > > >>>>                 return Collections.emptyList();
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public List<RelCollation> getCollations() {
> > > >>>>                 RelCollation c = new RelCollationImpl(
> > > >>>>                         ImmutableList.of(
> > > >>>>                                 new RelFieldCollation(0,
> > > >>>> RelFieldCollation.Direction.ASCENDING),
> > > >>>>                                 new RelFieldCollation(1,
> > > >>>> RelFieldCollation.Direction.ASCENDING)
> > > >>>>                         )) {
> > > >>>>                 };
> > > >>>>                 return Arrays.asList(c);
> > > >>>>             }
> > > >>>>
> > > >>>>             @Override
> > > >>>>             public RelDistribution getDistribution() {
> > > >>>>                 return RelDistributions.ANY;
> > > >>>>             }
> > > >>>>         }
> > > >>>>     }
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
> > > >>>>
> > > >>>> > Yes that is definitely possible. I am too busy to write a code
> > > >>>> snippet but
> > > >>>> > you should take a look at PlannerTest.
> > > >>>> >
> > > >>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> > > >>>> > stephane.campinas@gmail.com> wrote:
> > > >>>> > >
> > > >>>> > > Hi,
> > > >>>> > >
> > > >>>> > > I am trying to use the Volcano planner in order to optimise
> > > queries
> > > >>>> based
> > > >>>> > > on statistics but I am having some issues understanding how to
> > > >>>> achieve
> > > >>>> > > this, even after looking at the Github repository for tests.
> > > >>>> > > A first goal I would like to achieve would be to choose a join
> > > >>>> > > implementation based on its cost.
> > > >>>> > >
> > > >>>> > > For example, a query tree can have several joins, and depending
> > > >>>> > > on the position of the join in the tree, a certain implementation
> > > >>>> > > would be more efficient than another.
> > > >>>> > > Would that be possible? If so, could you share a code snippet?
> > > >>>> > >
> > > >>>> > > Thanks
> > > >>>> > >
> > > >>>> > > --
> > > >>>> > > Campinas Stéphane
> > > >>>> >
> > > >>>> >
> > > >>>
> > > >>> --
> > > >>>
> > > >>>
> > > >>> -- Enrico Olivelli
> > > >>>
> > > >>
> > > >>
> > > >
> > >
> >
> --
>
>
> -- Enrico Olivelli
>

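The original question in this thread — choosing a join implementation by cost — comes down to registering one planner rule per physical join implementation and letting each physical operator report its own cost. A rough Java sketch against the Calcite 1.15 APIs; the class names MyHashJoin and MyHashJoinRule are hypothetical, and constructors and method bodies are elided:

---------------------------------------------------------------------
// Hypothetical sketch: a physical join operator whose cost the
// Volcano planner compares against other alternatives.
class MyHashJoin extends Join implements EnumerableRel {
  // ... constructor, copy() and implement() omitted ...

  @Override public RelOptCost computeSelfCost(RelOptPlanner planner,
      RelMetadataQuery mq) {
    double leftRows = mq.getRowCount(getLeft());
    double rightRows = mq.getRowCount(getRight());
    // Illustrative cost model: build a hash table on one input,
    // probe with the other.
    return planner.getCostFactory()
        .makeCost(leftRows + rightRows,      // rows
                  leftRows + 2 * rightRows,  // cpu
                  0);                        // io
  }
}

// A ConverterRule that turns a LogicalJoin into the physical join;
// with one such rule per implementation, the planner keeps whichever
// physical alternative is cheapest at each position in the tree.
class MyHashJoinRule extends ConverterRule {
  MyHashJoinRule() {
    super(LogicalJoin.class, Convention.NONE,
        EnumerableConvention.INSTANCE, "MyHashJoinRule");
  }
  @Override public RelNode convert(RelNode rel) {
    // ... create a MyHashJoin with inputs converted to ENUMERABLE ...
  }
}
---------------------------------------------------------------------

With several such rules registered (via Programs.ofRules or planner.addRule), the Volcano planner generates every physical alternative for each LogicalJoin and picks the cheapest per subtree, which is exactly the position-dependent choice asked about above.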
Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Thank you all.
Once I understood the basic mechanism, I have been working very well with the
Calcite planner.
I will report on the issue my troubles in understanding the first step.
I think that the most important starting point would be a glossary of main
terms, like Convention and Trait, and then a good example.
The fact that we use marker interfaces for Tables was not very intuitive.
I am going at speed now; in less than a week of man-work I am going to
fully replace the planner of my open-source product, HerdDB.

Thanks
Enrico
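The marker interface mentioned above is ModifiableTable, which a table must implement before the planner can implement LogicalTableModify (INSERT/DELETE/UPDATE). A rough sketch, assuming the Calcite 1.15 APIs; the class name MyTable is hypothetical, and the QueryableTable methods that ModifiableTable inherits are elided:

---------------------------------------------------------------------
// Sketch of a table the planner can plan INSERT/DELETE/UPDATE against.
// ModifiableTable is the "marker" interface that enables the planner to
// implement LogicalTableModify; AbstractTable supplies the boilerplate.
class MyTable extends AbstractTable implements ModifiableTable {
  private final List<Object[]> rows = new ArrayList<>();

  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
        .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
        .build();
  }

  // ModifiableTable: the backing collection that modifications mutate.
  @Override public Collection getModifiableCollection() {
    return rows;
  }

  @Override public TableModify toModificationRel(RelOptCluster cluster,
      RelOptTable table, Prepare.CatalogReader catalogReader, RelNode child,
      TableModify.Operation operation, List<String> updateColumnList,
      List<RexNode> sourceExpressionList, boolean flattened) {
    return LogicalTableModify.create(table, catalogReader, child,
        operation, updateColumnList, sourceExpressionList, flattened);
  }

  // ModifiableTable extends QueryableTable, so asQueryable(...),
  // getElementType() and getExpression(...) are also required — omitted here.
}
---------------------------------------------------------------------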

Il lun 13 nov 2017, 16:21 Michael Mior <mm...@uwaterloo.ca> ha scritto:

> Enrico,
>
> The documentation on the planner definitely could be improved. There's
> already been an issue created for this which can be followed here:
> https://issues.apache.org/jira/browse/CALCITE-2048
>
> --
> Michael Mior
> mmior@apache.org
>
> 2017-11-11 10:29 GMT-05:00 Enrico Olivelli <eo...@gmail.com>:
>
> > Got it by myself, the Table must implement ModifiableTable.
> > As far as I am learning the planner is driven by the properties of the
> > Table, expressed using intefaces
> >
> > I wonder if there is some summary of the behavior of the planner or some
> > basic Glossary
> >
> > Cheers
> > Enrico
> >
> > 2017-11-11 11:27 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> >
> > > Sorry I cannot make it work for INSERT/DELETE/UPDATE
> > >
> > >
> > > This is the error for a DELETE
> > > Qury: DELETE FROM MYTABLE where id=1
> > > -- Logical Plan
> > > LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
> > > flattened=[true])
> > >   LogicalProject(id=[$0], name=[$1])
> > >     LogicalFilter(condition=[=($0, 1)])
> > >       LogicalTableScan(table=[[x, MYTABLE]])
> > >
> > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167
> sec
> > > <<< FAILURE!
> > > test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
> > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > [rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented; planner
> > > state:
> > >
> > > Root: rel#15:Subset#3.ENUMERABLE.[].any
> > > Original rel:
> > > LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any],
> > table=[[x,
> > > MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
> > > cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
> > >   LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0],
> > > name=[$1]): rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu, 0.0
> > > io}, id = 11
> > >     LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any],
> condition=[=($0,
> > > 1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0 io},
> > id
> > > = 9
> > >       LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any], table=[[x,
> > > MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu,
> 0.0
> > > io}, id = 4
> > >
> > > Sets:
> > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > >     rel#8:Subset#0.NONE.[].any, best=null,
> importance=0.7290000000000001
> > >         rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
> > > rowcount=15.0, cumulative cost={inf}
> > >     rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
> > > importance=0.36450000000000005
> > >         rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > 25:Subset#0.BINDABLE.[].any),
> > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > >     rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
> > > importance=0.36450000000000005
> > >         rel#24:BindableTableScan.BINDABLE.[].any(table=[x, MYTABLE]),
> > > rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
> > > Set#1, type: RecordType(INTEGER id, INTEGER name)
> > >     rel#10:Subset#1.NONE.[].any, best=null, importance=0.81
> > >         rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.
> > [].any,condition==($0,
> > > 1)), rowcount=2.25, cumulative cost={inf}
> > >         rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.
> > NONE.[].any,id=$0,name=$1),
> > > rowcount=2.25, cumulative cost={inf}
> > >     rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
> > > importance=0.4510687500000001
> > >         rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#
> > > 17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1), rowcount=15.0, cumulative
> > > cost={22.65 rows, 37.66 cpu, 0.0 io}
> > >         rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#
> > > 21:Subset#0.ENUMERABLE.[].any,condition==($0, 1)), rowcount=2.25,
> > > cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}
> > >         rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> > 20:Subset#1.BINDABLE.[].any),
> > > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> > >     rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
> > >         rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
> > > MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15
> rows,
> > > 0.16 cpu, 0.0 io}
> > > Set#3, type: RecordType(BIGINT ROWCOUNT)
> > >     rel#14:Subset#3.NONE.[].any, best=null, importance=0.9
> > >         rel#13:LogicalTableModify.NONE.[].any(input=rel#10:
> > Subset#1.NONE.[].any,table=[x,
> > > MYTABLE],operation=DELETE,flattened=true), rowcount=2.25, cumulative
> > > cost={inf}
> > >     rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0
> > >         rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#
> > > 14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > rowcount=2.25, cumulative cost={inf}
> > >
> > >
> > >     at org.apache.calcite.plan.volcano.RelSubset$
> > > CheapestPlanReplacer.visit(RelSubset.java:441)
> > >     at org.apache.calcite.plan.volcano.RelSubset.
> > > buildCheapestPlan(RelSubset.java:291)
> > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > > findBestExp(VolcanoPlanner.java:666)
> > >
> > >
> > >
> > >
> > > This is for an Insert
> > > Qury: INSERT INTO MYTABLE(id,name) values(1,2)
> > > -- Logical Plan
> > > LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
> > > flattened=[true])
> > >   LogicalValues(type=[RecordType(INTEGER id, INTEGER name)], tuples=[[{
> > > 1, 2 }]])
> > >
> > > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377
> sec
> > > <<< FAILURE!
> > > test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
> > > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > > [rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented; planner
> > > state:
> > >
> > > Root: rel#7:Subset#1.ENUMERABLE.[].any
> > > Original rel:
> > > LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any],
> table=[[x,
> > > MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
> > > cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
> > >   LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1, 2
> > > }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id
> > = 2
> > >
> > > Sets:
> > > Set#0, type: RecordType(INTEGER id, INTEGER name)
> > >     rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
> > >         rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(
> > INTEGER
> > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> cost={inf}
> > >     rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9,
> > importance=0.405
> > >         rel#9:EnumerableValues.ENUMERABLE.[[0, 1], [1]].broadcast(type=
> > RecordType(INTEGER
> > > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative
> cost={1.0
> > > rows, 1.0 cpu, 0.0 io}
> > > Set#1, type: RecordType(BIGINT ROWCOUNT)
> > >     rel#6:Subset#1.NONE.[].any, best=null, importance=0.9
> > >         rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.
> > NONE.[].any,table=[x,
> > > MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
> > > cost={inf}
> > >     rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0
> > >         rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:
> > > Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > > rowcount=1.0, cumulative cost={inf}
> > >
> > >
> > >     at org.apache.calcite.plan.volcano.RelSubset$
> > > CheapestPlanReplacer.visit(RelSubset.java:441)
> > >     at org.apache.calcite.plan.volcano.RelSubset.
> > > buildCheapestPlan(RelSubset.java:291)
> > >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > > findBestExp(VolcanoPlanner.java:666)
> > >
> > >
> > >
> > > I really appreciate your help
> > > Enrico
> > >
> > > 2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > >
> > >> The example from Luis works like a charm.
> > >> I have some questions, I will start separate threads
> > >>
> > >> Thank you
> > >> Enrico
> > >>
> > >> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> > >>
> > >>> Luis thank you,
> > >>> my case is the second one. I want to use Calcite planner internally
> on
> > a
> > >>> database system. I will try with your suggestion
> > >>>
> > >>> Enrico
> > >>>
> > >>> On Wed 8 Nov 2017, 20:14 Luis Fernando Kauer
> > >>> <lf...@yahoo.com.br.invalid> wrote:
> > >>>
> > >>>>  If you intend to run a query then you should follow the tutorial
> and
> > >>>> try to change the csv adapter.  You can add the table to the schema
> at
> > >>>> runtime using something like:
> > >>>>
> ---------------------------------------------------------------------
> > >>>>
> > >>>> Class.forName("org.apache.calcite.jdbc.Driver");
> > >>>> Properties info = new Properties();
> > >>>> info.setProperty("lex", "MYSQL_ANSI");
> > >>>>
> > >>>> final Connection connection = DriverManager.getConnection("
> > jdbc:calcite:",
> > >>>> info);
> > >>>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
> > >>>> SchemaPlus root = conn.getRootSchema();
> > >>>> root.add("MYTABLE", new TableImpl());
> > >>>> Statement statement = conn.createStatement();
> > >>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
> > >>>>
> ---------------------------------------------------------------------
> > >>>>
> > >>>> But if you only want to parse, validate and optimize the query plan,
> > >>>> you can use something like:
> > >>>>
> ---------------------------------------------------------------------
> > >>>>     Table table = new TableImpl();
> > >>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
> > >>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
> > >>>>     schema.add("MYTABLE", table);
> > >>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
> > >>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
> > >>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
> > >>>>     SqlParser.Config parserConfig =
> > >>>>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
> > >>>>       .setCaseSensitive(false)
> > >>>>       .build();
> > >>>>
> > >>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
> > >>>>         .parserConfig(parserConfig)
> > >>>>         .defaultSchema(schema)
> > >>>>         .traitDefs(traitDefs)
> > >>>>         // define the rules you want to apply
> > >>>>
> > >>>>         .programs(Programs.ofRules(Programs.RULE_SET))
> > >>>>         .build();
> > >>>>     Planner planner = Frameworks.getPlanner(config);
> > >>>>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID <
> 10");
> > >>>>     n = planner.validate(n);
> > >>>>     RelNode root = planner.rel(n).project();
> > >>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
> > >>>> SqlExplainFormat.TEXT,
> > >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> > >>>>     RelOptCluster cluster = root.getCluster();
> > >>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
> > >>>>     RelTraitSet desiredTraits =
> > >>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
> > >>>>     final RelNode newRoot = optPlanner.changeTraits(root,
> > >>>> desiredTraits);
> > >>>>     optPlanner.setRoot(newRoot);
> > >>>>     RelNode bestExp = optPlanner.findBestExp();
> > >>>> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
> > >>>> SqlExplainFormat.TEXT,
> > >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> > >>>>  ------------------------------------------------------------
> > ---------
> > >>>>
> > >>>> The main problem was that you were not setting the desired trait to
> > use
> > >>>> EnumerableConvention.
> > >>>> You can see that instead of implementing all the interfaces you
> should
> > >>>> use the available builders and classes.
> > >>>> Also, for implementing Table I think you should extend AbstractTable
> > >>>> instead of implementing the Table interface, and you can use
> Statistics.of
> > >>>> instead of implementing the Statistic interface if it is simple:
> > >>>>
> ---------------------------------------------------------------------
> > >>>>
> > >>>>   private static class TableImpl extends AbstractTable {
> > >>>>     public TableImpl() {}
> > >>>>     @Override    public RelDataType getRowType(RelDataTypeFactory
> > >>>> typeFactory) {
> > >>>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
> > >>>>       return builder.add("id", typeFactory.createSqlType(SqlT
> > >>>> ypeName.INTEGER))
> > >>>>           .add("name", typeFactory.createSqlType(SqlT
> > >>>> ypeName.VARCHAR)).build();
> > >>>>     }
> > >>>>     @Override
> > >>>>     public Statistic getStatistic() {
> > >>>>        return Statistics.of(15D,
> ImmutableList.<ImmutableBitSet>of(),
> > >>>>           ImmutableList.of(RelCollations.of(0),
> > RelCollations.of(1)));
> > >>>>     }
> > >>>>
> > >>>>   }
> > >>>>
> ---------------------------------------------------------------------
> > >>>>
> > >>>>
> > >>>>     On Wednesday, 8 November 2017 12:15:34 BRST, Enrico
> > >>>> Olivelli <eo...@gmail.com> wrote:
> > >>>>
> > >>>>  Hi,
> > >>>> I am playing with the planner but I can't get it to work for a very
> > simple
> > >>>> query.
> > >>>> The table is
> > >>>>  MYTABLE(id integer, name varchar); the definition is given in the code
> > >>>> snippet.
> > >>>> the query is "SELECT * FROM MYTABLE"
> > >>>>
> > >>>> The error is:
> > >>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > >>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner
> > >>>> state:
> > >>>>
> > >>>> Root: rel#7:Subset#0.NONE.[0, 1].any
> > >>>> Original rel:
> > >>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
> > >>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu,
> > 0.0
> > >>>> io}, id = 5
> > >>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
> > >>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost =
> {15.0
> > >>>> rows,
> > >>>> 16.0 cpu, 0.0 io}, id = 2
> > >>>>
> > >>>> Sets:
> > >>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
> > >>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
> > >>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
> > >>>> 1]].any(table=[default,
> > >>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0
> > io}
> > >>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
> > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> > >>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
> > >>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
> > >>>>         rel#5:LogicalProject.NONE.[[0,
> > >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> > >>>> rowcount=15.0, cumulative cost={inf}
> > >>>>         rel#8:AbstractConverter.NONE.[0,
> > >>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> > >>>> 1].any,convention=NONE,sort=[0,
> > >>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
> > >>>>
> > >>>> Does anybody have a hint for me?
> > >>>> I am using current master of Calcite (1.15-SNAPSHOT)
> > >>>>
> > >>>> Thank you
> > >>>>
> > >>>> Enrico
> > >>>>
> > >>>>
> > >>>> My code is:
> > >>>>   @Test
> > >>>>     public void test() throws Exception {
> > >>>>         Table table = new TableImpl();
> > >>>>         CalciteSchema schema = CalciteSchema.createRootSchema(true,
> > >>>> true,
> > >>>> "default");
> > >>>>         schema.add("MYTABLE", table);
> > >>>>         SchemaPlus rootSchema = schema.plus();
> > >>>>         SqlRexConvertletTable convertletTable =
> > >>>> StandardConvertletTable.INSTANCE;
> > >>>>         SqlToRelConverter.Config config =
> > SqlToRelConverter.Config.DEFAU
> > >>>> LT;
> > >>>>         FrameworkConfig frameworkConfig = new
> > >>>> FrameworkConfigImpl(config,
> > >>>> rootSchema, convertletTable);
> > >>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
> > >>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
> > >>>>         sqlNode = imp.validate(sqlNode);
> > >>>>         RelRoot relRoot = imp.rel(sqlNode);
> > >>>>         RelNode project = relRoot.project();
> > >>>>         RelOptPlanner planner = project.getCluster().getPlanner();
> > >>>>         planner.setRoot(project);
> > >>>>         RelNode findBestExp = planner.findBestExp();
> > >>>>         System.out.println("best:" + findBestExp);
> > >>>>     }
> > >>>>
> > >>>>     private class FrameworkConfigImpl implements FrameworkConfig {
> > >>>>
> > >>>>         private final SqlToRelConverter.Config config;
> > >>>>         private final SchemaPlus rootSchema;
> > >>>>         private final SqlRexConvertletTable convertletTable;
> > >>>>
> > >>>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
> > >>>> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
> > >>>>             this.config = config;
> > >>>>             this.rootSchema = rootSchema;
> > >>>>             this.convertletTable = convertletTable;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public SqlParser.Config getParserConfig() {
> > >>>>             return SqlParser.Config.DEFAULT;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public SqlToRelConverter.Config
> getSqlToRelConverterConfig() {
> > >>>>             return config;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public SchemaPlus getDefaultSchema() {
> > >>>>             return rootSchema;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public RexExecutor getExecutor() {
> > >>>>             return new RexExecutorImpl(new DataContextImpl());
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public ImmutableList<Program> getPrograms() {
> > >>>>             return ImmutableList.of(Programs.standard());
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public SqlOperatorTable getOperatorTable() {
> > >>>>             return new SqlStdOperatorTable();
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public RelOptCostFactory getCostFactory() {
> > >>>>             return null;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
> > >>>>
> > >>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
> > >>>>                     RelCollationTraitDef.INSTANCE,
> > >>>>                     RelDistributionTraitDef.INSTANCE
> > >>>>             );
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public SqlRexConvertletTable getConvertletTable() {
> > >>>>             return convertletTable;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public Context getContext() {
> > >>>>             return new ContextImpl();
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public RelDataTypeSystem getTypeSystem() {
> > >>>>             return RelDataTypeSystem.DEFAULT;
> > >>>>         }
> > >>>>
> > >>>>         class DataContextImpl implements DataContext {
> > >>>>
> > >>>>             public DataContextImpl() {
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public SchemaPlus getRootSchema() {
> > >>>>                 return rootSchema;
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public JavaTypeFactory getTypeFactory() {
> > >>>>                 throw new UnsupportedOperationException("Not
> > supported
> > >>>> yet."); //To change body of generated methods, choose Tools |
> > Templates.
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public QueryProvider getQueryProvider() {
> > >>>>                 throw new UnsupportedOperationException("Not
> > supported
> > >>>> yet."); //To change body of generated methods, choose Tools |
> > Templates.
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public Object get(String name) {
> > >>>>                 throw new UnsupportedOperationException("Not
> > supported
> > >>>> yet."); //To change body of generated methods, choose Tools |
> > Templates.
> > >>>>             }
> > >>>>
> > >>>>         }
> > >>>>
> > >>>>         private class ContextImpl implements Context {
> > >>>>
> > >>>>             public ContextImpl() {
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public <C> C unwrap(Class<C> aClass) {
> > >>>>                 return null;
> > >>>>             }
> > >>>>         }
> > >>>>     }
> > >>>>
> > >>>>     private static class TableImpl implements Table {
> > >>>>
> > >>>>         public TableImpl() {
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public RelDataType getRowType(RelDataTypeFactory
> typeFactory)
> > {
> > >>>>             return typeFactory
> > >>>>                     .builder()
> > >>>>                     .add("id",
> > >>>> typeFactory.createSqlType(SqlTypeName.INTEGER))
> > >>>>                     .add("name",
> > >>>> typeFactory.createSqlType(SqlTypeName.VARCHAR))
> > >>>>                     .build();
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public Statistic getStatistic() {
> > >>>>             return new StatisticImpl();
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public Schema.TableType getJdbcTableType() {
> > >>>>             throw new UnsupportedOperationException("Not supported
> > >>>> yet.");
> > >>>> //To change body of generated methods, choose Tools | Templates.
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public boolean isRolledUp(String column) {
> > >>>>             return true;
> > >>>>         }
> > >>>>
> > >>>>         @Override
> > >>>>         public boolean rolledUpColumnValidInsideAgg(String column,
> > >>>> SqlCall
> > >>>> call, SqlNode parent, CalciteConnectionConfig config) {
> > >>>>             return false;
> > >>>>         }
> > >>>>
> > >>>>         class StatisticImpl implements Statistic {
> > >>>>
> > >>>>             public StatisticImpl() {
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public Double getRowCount() {
> > >>>>                 return 15d;
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public boolean isKey(ImmutableBitSet columns) {
> > >>>>                 return false;
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public List<RelReferentialConstraint>
> > >>>> getReferentialConstraints() {
> > >>>>                 return Collections.emptyList();
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public List<RelCollation> getCollations() {
> > >>>>                 RelCollation c = new RelCollationImpl(
> > >>>>                         ImmutableList.of(
> > >>>>                                 new RelFieldCollation(0,
> > >>>> RelFieldCollation.Direction.ASCENDING),
> > >>>>                                 new RelFieldCollation(1,
> > >>>> RelFieldCollation.Direction.ASCENDING)
> > >>>>                         )) {
> > >>>>                 };
> > >>>>                 return Arrays.asList(c);
> > >>>>             }
> > >>>>
> > >>>>             @Override
> > >>>>             public RelDistribution getDistribution() {
> > >>>>                 return RelDistributions.ANY;
> > >>>>             }
> > >>>>         }
> > >>>>     }
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
> > >>>>
> > >>>> > Yes that is definitely possible. I am too busy to write a code
> > >>>> snippet but
> > >>>> > you should take a look at PlannerTest.
> > >>>> >
> > >>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> > >>>> > stephane.campinas@gmail.com> wrote:
> > >>>> > >
> > >>>> > > Hi,
> > >>>> > >
> > >>>> > > I am trying to use the Volcano planner in order to optimise
> > queries
> > >>>> based
> > >>>> > > on statistics but I am having some issues understanding how to
> > >>>> achieve
> > >>>> > > this, even after looking at the Github repository for tests.
> > >>>> > > A first goal I would like to achieve would be to choose a join
> > >>>> > > implementation based on its cost.
> > >>>> > >
> > >>>> > > For example, a query tree can have several joins, and depending
> on
> > >>>> the
> > >>>> > > position of the join in the tree, a certain implementation
> would
> > >>>> be more
> > >>>> > > efficient than another.
> > >>>> > > Would that be possible? If so, could you share a code snippet?
> > >>>> > >
> > >>>> > > Thanks
> > >>>> > >
> > >>>> > > --
> > >>>> > > Campinas Stéphane
> > >>>> >
> > >>>> >
> > >>>
> > >>> --
> > >>>
> > >>>
> > >>> -- Enrico Olivelli
> > >>>
> > >>
> > >>
> > >
> >
>
-- 


-- Enrico Olivelli
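
A note on the planner dumps quoted in this thread: each Subset line records "best=rel#N" together with a cumulative cost triple {rows, cpu, io}, and the Volcano planner keeps, per equivalence subset, the cheapest plan proposed so far. A toy sketch of that bookkeeping (an illustration only, not Calcite's actual VolcanoCost or RelSubset implementation; the class and field names here are made up, and the cost comparison is simplified):

```java
// Toy model of the bookkeeping behind the planner dumps above: each
// equivalence subset remembers the cheapest plan proposed so far, and a
// plan's cumulative cost is a {rows, cpu, io} triple. This is an
// illustration only, NOT Calcite's actual VolcanoCost/RelSubset code.
public class VolcanoCostSketch {

    /** A cost triple like the "{N rows, N cpu, N io}" values in the dumps. */
    static final class Cost {
        final double rows, cpu, io;
        Cost(double rows, double cpu, double io) {
            this.rows = rows; this.cpu = cpu; this.io = io;
        }
        static final Cost INFINITY =
            new Cost(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY,
                     Double.POSITIVE_INFINITY);
        /** Simplification: cheaper when no component is more expensive. */
        boolean isLe(Cost other) {
            return rows <= other.rows && cpu <= other.cpu && io <= other.io;
        }
    }

    /** A subset keeps "best=rel#N": the cheapest plan registered so far. */
    static final class Subset {
        String best;                    // null until a plan is registered
        Cost bestCost = Cost.INFINITY;  // "cumulative cost={inf}" initially
        void propose(String rel, Cost cost) {
            if (cost.isLe(bestCost)) {
                best = rel;
                bestCost = cost;
            }
        }
    }

    public static void main(String[] args) {
        // Two alternatives for one subset, with costs taken from the
        // DELETE dump quoted above:
        Subset subset = new Subset();
        subset.propose("EnumerableFilter", new Cost(9.9, 22.66, 0.0));
        subset.propose("EnumerableInterpreter", new Cost(7.65, 7.66, 0.0));
        // The cheaper alternative becomes the subset's best plan.
        System.out.println("best=" + subset.best);
    }
}
```

In the real planner, a subset whose cost stays at "cumulative cost={inf}" is one for which no implementable plan was ever registered, which is exactly the state that precedes the CannotPlanException in the dumps.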

Re: Volcano planner

Posted by Michael Mior <mm...@uwaterloo.ca>.
Enrico,

The documentation on the planner definitely could be improved. There's
already been an issue created for this which can be followed here:
https://issues.apache.org/jira/browse/CALCITE-2048

--
Michael Mior
mmior@apache.org

2017-11-11 10:29 GMT-05:00 Enrico Olivelli <eo...@gmail.com>:

> Got it by myself, the Table must implement ModifiableTable.
> As far as I am learning, the planner is driven by the properties of the
> Table, expressed using interfaces
>
> I wonder if there is some summary of the behavior of the planner or some
> basic Glossary
>
> Cheers
> Enrico
>
> 2017-11-11 11:27 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
>
> > Sorry I cannot make it work for INSERT/DELETE/UPDATE
> >
> >
> > This is the error for a DELETE
> > Query: DELETE FROM MYTABLE where id=1
> > -- Logical Plan
> > LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
> > flattened=[true])
> >   LogicalProject(id=[$0], name=[$1])
> >     LogicalFilter(condition=[=($0, 1)])
> >       LogicalTableScan(table=[[x, MYTABLE]])
> >
> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167 sec
> > <<< FAILURE!
> > test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
> > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > [rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented; planner
> > state:
> >
> > Root: rel#15:Subset#3.ENUMERABLE.[].any
> > Original rel:
> > LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any],
> table=[[x,
> > MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
> > cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
> >   LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0],
> > name=[$1]): rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu, 0.0
> > io}, id = 11
> >     LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any], condition=[=($0,
> > 1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0 io},
> id
> > = 9
> >       LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any], table=[[x,
> > MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu, 0.0
> > io}, id = 4
> >
> > Sets:
> > Set#0, type: RecordType(INTEGER id, INTEGER name)
> >     rel#8:Subset#0.NONE.[].any, best=null, importance=0.7290000000000001
> >         rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
> > rowcount=15.0, cumulative cost={inf}
> >     rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
> > importance=0.36450000000000005
> >         rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> 25:Subset#0.BINDABLE.[].any),
> > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> >     rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
> > importance=0.36450000000000005
> >         rel#24:BindableTableScan.BINDABLE.[].any(table=[x, MYTABLE]),
> > rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
> > Set#1, type: RecordType(INTEGER id, INTEGER name)
> >     rel#10:Subset#1.NONE.[].any, best=null, importance=0.81
> >         rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.
> [].any,condition==($0,
> > 1)), rowcount=2.25, cumulative cost={inf}
> >         rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.
> NONE.[].any,id=$0,name=$1),
> > rowcount=2.25, cumulative cost={inf}
> >     rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
> > importance=0.4510687500000001
> >         rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#
> > 17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1), rowcount=15.0, cumulative
> > cost={22.65 rows, 37.66 cpu, 0.0 io}
> >         rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#
> > 21:Subset#0.ENUMERABLE.[].any,condition==($0, 1)), rowcount=2.25,
> > cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}
> >         rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#
> 20:Subset#1.BINDABLE.[].any),
> > rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
> >     rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
> >         rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
> > MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15 rows,
> > 0.16 cpu, 0.0 io}
> > Set#3, type: RecordType(BIGINT ROWCOUNT)
> >     rel#14:Subset#3.NONE.[].any, best=null, importance=0.9
> >         rel#13:LogicalTableModify.NONE.[].any(input=rel#10:
> Subset#1.NONE.[].any,table=[x,
> > MYTABLE],operation=DELETE,flattened=true), rowcount=2.25, cumulative
> > cost={inf}
> >     rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0
> >         rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#
> > 14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > rowcount=2.25, cumulative cost={inf}
> >
> >
> >     at org.apache.calcite.plan.volcano.RelSubset$
> > CheapestPlanReplacer.visit(RelSubset.java:441)
> >     at org.apache.calcite.plan.volcano.RelSubset.
> > buildCheapestPlan(RelSubset.java:291)
> >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > findBestExp(VolcanoPlanner.java:666)
> >
> >
> >
> >
> > This is for an Insert
> > Query: INSERT INTO MYTABLE(id,name) values(1,2)
> > -- Logical Plan
> > LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
> > flattened=[true])
> >   LogicalValues(type=[RecordType(INTEGER id, INTEGER name)], tuples=[[{
> > 1, 2 }]])
> >
> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377 sec
> > <<< FAILURE!
> > test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
> > org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> > [rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented; planner
> > state:
> >
> > Root: rel#7:Subset#1.ENUMERABLE.[].any
> > Original rel:
> > LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any], table=[[x,
> > MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
> > cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
> >   LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1, 2
> > }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id
> = 2
> >
> > Sets:
> > Set#0, type: RecordType(INTEGER id, INTEGER name)
> >     rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
> >         rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(
> INTEGER
> > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative cost={inf}
> >     rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9,
> importance=0.405
> >         rel#9:EnumerableValues.ENUMERABLE.[[0, 1], [1]].broadcast(type=
> RecordType(INTEGER
> > id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative cost={1.0
> > rows, 1.0 cpu, 0.0 io}
> > Set#1, type: RecordType(BIGINT ROWCOUNT)
> >     rel#6:Subset#1.NONE.[].any, best=null, importance=0.9
> >         rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.
> NONE.[].any,table=[x,
> > MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
> > cost={inf}
> >     rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0
> >         rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:
> > Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> > rowcount=1.0, cumulative cost={inf}
> >
> >
> >     at org.apache.calcite.plan.volcano.RelSubset$
> > CheapestPlanReplacer.visit(RelSubset.java:441)
> >     at org.apache.calcite.plan.volcano.RelSubset.
> > buildCheapestPlan(RelSubset.java:291)
> >     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> > findBestExp(VolcanoPlanner.java:666)
> >
> >
> >
> > I really appreciate your help
> > Enrico
> >
> > 2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> >
> >> The example from Luis works like a charm.
> >> I have some questions, I will start separate threads
> >>
> >> Thank you
> >> Enrico
> >>
> >> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
> >>
> >>> Luis thank you,
> >>> my case is the second one. I want to use Calcite planner internally on
> a
> >>> database system. I will try with your suggestion
> >>>
> >>> Enrico
> >>>
> >>> On Wed 8 Nov 2017, 20:14 Luis Fernando Kauer
> >>> <lf...@yahoo.com.br.invalid> wrote:
> >>>
> >>>>  If you intend to run a query then you should follow the tutorial and
> >>>> try to change the csv adapter.  You can add the table to the schema at
> >>>> runtime using something like:
> >>>> ---------------------------------------------------------------------
> >>>>
> >>>> Class.forName("org.apache.calcite.jdbc.Driver");
> >>>> Properties info = new Properties();
> >>>> info.setProperty("lex", "MYSQL_ANSI");
> >>>>
> >>>> final Connection connection = DriverManager.getConnection("
> jdbc:calcite:",
> >>>> info);
> >>>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
> >>>> SchemaPlus root = conn.getRootSchema();
> >>>> root.add("MYTABLE", new TableImpl());
> >>>> Statement statement = conn.createStatement();
> >>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
> >>>> ---------------------------------------------------------------------
> >>>>
> >>>> But if you only want to parse, validate and optimize the query plan,
> >>>> you can use something like:
> >>>> ---------------------------------------------------------------------
> >>>>     Table table = new TableImpl();
> >>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
> >>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
> >>>>     schema.add("MYTABLE", table);
> >>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
> >>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
> >>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
> >>>>     SqlParser.Config parserConfig =
> >>>>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
> >>>>       .setCaseSensitive(false)
> >>>>       .build();
> >>>>
> >>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
> >>>>         .parserConfig(parserConfig)
> >>>>         .defaultSchema(schema)
> >>>>         .traitDefs(traitDefs)
> >>>>         // define the rules you want to apply
> >>>>
> >>>>         .programs(Programs.ofRules(Programs.RULE_SET))
> >>>>         .build();
> >>>>     Planner planner = Frameworks.getPlanner(config);
> >>>>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID < 10");
> >>>>     n = planner.validate(n);
> >>>>     RelNode root = planner.rel(n).project();
> >>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
> >>>> SqlExplainFormat.TEXT,
> >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> >>>>     RelOptCluster cluster = root.getCluster();
> >>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
> >>>>     RelTraitSet desiredTraits =
> >>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
> >>>>     final RelNode newRoot = optPlanner.changeTraits(root,
> >>>> desiredTraits);
> >>>>     optPlanner.setRoot(newRoot);
> >>>>     RelNode bestExp = optPlanner.findBestExp();
> >>>> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
> >>>> SqlExplainFormat.TEXT,
> >>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
> >>>>  ------------------------------------------------------------
> ---------
> >>>>
> >>>> The main problem was that you were not setting the desired trait to
> use
> >>>> EnumerableConvention.
> >>>> You can see that instead of implementing all the interfaces you should
> >>>> use the available builders and classes.
> >>>> Also, for implementing Table I think you should extend AbstractTable
> >>>> instead of implementing the Table interface, and you can use Statistics.of
> >>>> instead of implementing the Statistic interface if it is simple:
> >>>> ---------------------------------------------------------------------
> >>>>
> >>>>   private static class TableImpl extends AbstractTable {
> >>>>     public TableImpl() {}
> >>>>     @Override    public RelDataType getRowType(RelDataTypeFactory
> >>>> typeFactory) {
> >>>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
> >>>>       return builder.add("id", typeFactory.createSqlType(SqlT
> >>>> ypeName.INTEGER))
> >>>>           .add("name", typeFactory.createSqlType(SqlT
> >>>> ypeName.VARCHAR)).build();
> >>>>     }
> >>>>     @Override
> >>>>     public Statistic getStatistic() {
> >>>>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
> >>>>           ImmutableList.of(RelCollations.of(0),
> RelCollations.of(1)));
> >>>>     }
> >>>>
> >>>>   }
> >>>> ---------------------------------------------------------------------
> >>>>
> >>>>
> >>>>     On Wednesday, 8 November 2017 12:15:34 BRST, Enrico
> >>>> Olivelli <eo...@gmail.com> wrote:
> >>>>
> >>>>  Hi,
> >>>> I am playing with the planner but I can't get it work for a very
> simple
> >>>> query.
> >>>> Th table is
> >>>>  MYTABLE(id integer, name varchar)            definition is given in
> >>>> code
> >>>> snippet
> >>>> the query is "SELECT * FROM MYTABLE"
> >>>>
> >>>> The error is:
> >>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> >>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner
> >>>> state:
> >>>>
> >>>> Root: rel#7:Subset#0.NONE.[0, 1].any
> >>>> Original rel:
> >>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
> >>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu,
> 0.0
> >>>> io}, id = 5
> >>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
> >>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0
> >>>> rows,
> >>>> 16.0 cpu, 0.0 io}, id = 2
> >>>>
> >>>> Sets:
> >>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
> >>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
> >>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
> >>>> 1]].any(table=[default,
> >>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0
> io}
> >>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
> >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> >>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
> >>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
> >>>>         rel#5:LogicalProject.NONE.[[0,
> >>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> >>>> rowcount=15.0, cumulative cost={inf}
> >>>>         rel#8:AbstractConverter.NONE.[0,
> >>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
> >>>> 1].any,convention=NONE,sort=[0,
> >>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
> >>>>
> >>>> Does anybody have a hint for me?
> >>>> I am using the current master of Calcite (1.15-SNAPSHOT)
> >>>>
> >>>> Thank you
> >>>>
> >>>> Enrico
> >>>>
> >>>>
> >>>> My code is:
> >>>>   @Test
> >>>>     public void test() throws Exception {
> >>>>         Table table = new TableImpl();
> >>>>         CalciteSchema schema = CalciteSchema.createRootSchema(true,
> >>>> true,
> >>>> "default");
> >>>>         schema.add("MYTABLE", table);
> >>>>         SchemaPlus rootSchema = schema.plus();
> >>>>         SqlRexConvertletTable convertletTable =
> >>>> StandardConvertletTable.INSTANCE;
> >>>>         SqlToRelConverter.Config config =
> >>>>             SqlToRelConverter.Config.DEFAULT;
> >>>>         FrameworkConfig frameworkConfig = new
> >>>> FrameworkConfigImpl(config,
> >>>> rootSchema, convertletTable);
> >>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
> >>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
> >>>>         sqlNode = imp.validate(sqlNode);
> >>>>         RelRoot relRoot = imp.rel(sqlNode);
> >>>>         RelNode project = relRoot.project();
> >>>>         RelOptPlanner planner = project.getCluster().getPlanner();
> >>>>         planner.setRoot(project);
> >>>>         RelNode findBestExp = planner.findBestExp();
> >>>>         System.out.println("best:" + findBestExp);
> >>>>     }
> >>>>
> >>>>     private class FrameworkConfigImpl implements FrameworkConfig {
> >>>>
> >>>>         private final SqlToRelConverter.Config config;
> >>>>         private final SchemaPlus rootSchema;
> >>>>         private final SqlRexConvertletTable convertletTable;
> >>>>
> >>>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
> >>>> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
> >>>>             this.config = config;
> >>>>             this.rootSchema = rootSchema;
> >>>>             this.convertletTable = convertletTable;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public SqlParser.Config getParserConfig() {
> >>>>             return SqlParser.Config.DEFAULT;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
> >>>>             return config;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public SchemaPlus getDefaultSchema() {
> >>>>             return rootSchema;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public RexExecutor getExecutor() {
> >>>>             return new RexExecutorImpl(new DataContextImpl());
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public ImmutableList<Program> getPrograms() {
> >>>>             return ImmutableList.of(Programs.standard());
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public SqlOperatorTable getOperatorTable() {
> >>>>             return new SqlStdOperatorTable();
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public RelOptCostFactory getCostFactory() {
> >>>>             return null;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
> >>>>
> >>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
> >>>>                     RelCollationTraitDef.INSTANCE,
> >>>>                     RelDistributionTraitDef.INSTANCE
> >>>>             );
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public SqlRexConvertletTable getConvertletTable() {
> >>>>             return convertletTable;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public Context getContext() {
> >>>>             return new ContextImpl();
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public RelDataTypeSystem getTypeSystem() {
> >>>>             return RelDataTypeSystem.DEFAULT;
> >>>>         }
> >>>>
> >>>>         class DataContextImpl implements DataContext {
> >>>>
> >>>>             public DataContextImpl() {
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public SchemaPlus getRootSchema() {
> >>>>                 return rootSchema;
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public JavaTypeFactory getTypeFactory() {
> >>>>                 throw new UnsupportedOperationException("Not
> supported
> >>>> yet."); //To change body of generated methods, choose Tools |
> Templates.
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public QueryProvider getQueryProvider() {
> >>>>                 throw new UnsupportedOperationException("Not
> supported
> >>>> yet."); //To change body of generated methods, choose Tools |
> Templates.
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public Object get(String name) {
> >>>>                 throw new UnsupportedOperationException("Not
> supported
> >>>> yet."); //To change body of generated methods, choose Tools |
> Templates.
> >>>>             }
> >>>>
> >>>>         }
> >>>>
> >>>>         private class ContextImpl implements Context {
> >>>>
> >>>>             public ContextImpl() {
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public <C> C unwrap(Class<C> aClass) {
> >>>>                 return null;
> >>>>             }
> >>>>         }
> >>>>     }
> >>>>
> >>>>     private static class TableImpl implements Table {
> >>>>
> >>>>         public TableImpl() {
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public RelDataType getRowType(RelDataTypeFactory typeFactory)
> {
> >>>>             return typeFactory
> >>>>                     .builder()
> >>>>                     .add("id",
> >>>> typeFactory.createSqlType(SqlTypeName.INTEGER))
> >>>>                     .add("name",
> >>>> typeFactory.createSqlType(SqlTypeName.VARCHAR))
> >>>>                     .build();
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public Statistic getStatistic() {
> >>>>             return new StatisticImpl();
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public Schema.TableType getJdbcTableType() {
> >>>>             throw new UnsupportedOperationException("Not supported
> >>>> yet.");
> >>>> //To change body of generated methods, choose Tools | Templates.
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public boolean isRolledUp(String column) {
> >>>>             return true;
> >>>>         }
> >>>>
> >>>>         @Override
> >>>>         public boolean rolledUpColumnValidInsideAgg(String column,
> >>>> SqlCall
> >>>> call, SqlNode parent, CalciteConnectionConfig config) {
> >>>>             return false;
> >>>>         }
> >>>>
> >>>>         class StatisticImpl implements Statistic {
> >>>>
> >>>>             public StatisticImpl() {
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public Double getRowCount() {
> >>>>                 return 15d;
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public boolean isKey(ImmutableBitSet columns) {
> >>>>                 return false;
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public List<RelReferentialConstraint>
> >>>> getReferentialConstraints() {
> >>>>                 return Collections.emptyList();
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public List<RelCollation> getCollations() {
> >>>>                 RelCollation c = new RelCollationImpl(
> >>>>                         ImmutableList.of(
> >>>>                                 new RelFieldCollation(0,
> >>>> RelFieldCollation.Direction.ASCENDING),
> >>>>                                 new RelFieldCollation(1,
> >>>> RelFieldCollation.Direction.ASCENDING)
> >>>>                         )) {
> >>>>                 };
> >>>>                 return Arrays.asList(c);
> >>>>             }
> >>>>
> >>>>             @Override
> >>>>             public RelDistribution getDistribution() {
> >>>>                 return RelDistributions.ANY;
> >>>>             }
> >>>>         }
> >>>>     }
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
> >>>>
> >>>> > Yes that is definitely possible. I am too busy to write a code
> >>>> snippet but
> >>>> > you should take a look at PlannerTest.
> >>>> >
> >>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> >>>> > stephane.campinas@gmail.com> wrote:
> >>>> > >
> >>>> > > Hi,
> >>>> > >
> >>>> > > I am trying to use the Volcano planner in order to optimise
> queries
> >>>> based
> >>>> > > on statistics but I am having some issues understanding how to
> >>>> achieve
> >>>> > > this, even after looking at the Github repository for tests.
> >>>> > > A first goal I would like to achieve would be to choose a join
> >>>> > > implementation based on its cost.
> >>>> > >
> >>>> > > For example, a query tree can have several joins, and depending on
> >>>> the
> >>>> > > position of the join in the tree, a certain implementation would
> >>>> be more
> >>>> > > efficient than another.
> >>>> > > Would that be possible? If so, could you share a code snippet?
> >>>> > >
> >>>> > > Thanks
> >>>> > >
> >>>> > > --
> >>>> > > Campinas Stéphane
> >>>> >
> >>>> >
> >>>
> >>> --
> >>>
> >>>
> >>> -- Enrico Olivelli
> >>>
> >>
> >>
> >
>

Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Got it by myself: the Table must implement ModifiableTable.
As far as I can tell, the planner is driven by the properties of the
Table, expressed through marker interfaces.
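
For anyone who finds this thread later, here is a rough sketch of what
"implement ModifiableTable" looks like in practice. This assumes
Calcite 1.15-era APIs; the class name and the in-memory backing list
are illustrative only, not taken from any real adapter:

```java
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.calcite.linq4j.QueryProvider;
import org.apache.calcite.linq4j.Queryable;
import org.apache.calcite.linq4j.tree.Expression;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.prepare.Prepare;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.TableModify;
import org.apache.calcite.rel.logical.LogicalTableModify;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.schema.ModifiableTable;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.Schemas;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;

/** Sketch: a table the planner can both scan and target with DML. */
public class MyModifiableTable extends AbstractTable implements ModifiableTable {
  private final List<Object[]> rows = new ArrayList<>(); // illustrative store

  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
        .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
        .build();
  }

  // ModifiableTable: the collection the enumerable runtime mutates.
  @Override public Collection getModifiableCollection() {
    return rows;
  }

  // ModifiableTable: lets the converter build a LogicalTableModify
  // (the node that could not be planned for a plain Table).
  @Override public TableModify toModificationRel(RelOptCluster cluster,
      RelOptTable table, Prepare.CatalogReader catalogReader, RelNode child,
      TableModify.Operation operation, List<String> updateColumnList,
      List<RexNode> sourceExpressionList, boolean flattened) {
    return LogicalTableModify.create(table, catalogReader, child, operation,
        updateColumnList, sourceExpressionList, flattened);
  }

  // Inherited via QueryableTable; minimal stubs are enough for planning.
  @Override public Type getElementType() {
    return Object[].class;
  }

  @Override public Expression getExpression(SchemaPlus schema,
      String tableName, Class clazz) {
    return Schemas.tableExpression(schema, getElementType(), tableName, clazz);
  }

  @Override public <T> Queryable<T> asQueryable(QueryProvider queryProvider,
      SchemaPlus schema, String tableName) {
    throw new UnsupportedOperationException("not needed to plan the statement");
  }
}
```

If you only need SELECT, AbstractTable alone is enough; ModifiableTable
is what unlocks LogicalTableModify for INSERT/UPDATE/DELETE.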

I wonder if there is a summary of the planner's behavior, or a basic
glossary of its terms.

Cheers
Enrico

2017-11-11 11:27 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:

> Sorry I cannot make it work for INSERT/DELETE/UPDATE
>
>
> This is the error for a DELETE
> Query: DELETE FROM MYTABLE where id=1
> -- Logical Plan
> LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
> flattened=[true])
>   LogicalProject(id=[$0], name=[$1])
>     LogicalFilter(condition=[=($0, 1)])
>       LogicalTableScan(table=[[x, MYTABLE]])
>
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167 sec
> <<< FAILURE!
> test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> [rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented; planner
> state:
>
> Root: rel#15:Subset#3.ENUMERABLE.[].any
> Original rel:
> LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any], table=[[x,
> MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
> cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
>   LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0],
> name=[$1]): rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu, 0.0
> io}, id = 11
>     LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any], condition=[=($0,
> 1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0 io}, id
> = 9
>       LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any], table=[[x,
> MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu, 0.0
> io}, id = 4
>
> Sets:
> Set#0, type: RecordType(INTEGER id, INTEGER name)
>     rel#8:Subset#0.NONE.[].any, best=null, importance=0.7290000000000001
>         rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
> rowcount=15.0, cumulative cost={inf}
>     rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
> importance=0.36450000000000005
>         rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#25:Subset#0.BINDABLE.[].any),
> rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
>     rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
> importance=0.36450000000000005
>         rel#24:BindableTableScan.BINDABLE.[].any(table=[x, MYTABLE]),
> rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
> Set#1, type: RecordType(INTEGER id, INTEGER name)
>     rel#10:Subset#1.NONE.[].any, best=null, importance=0.81
>         rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.[].any,condition==($0,
> 1)), rowcount=2.25, cumulative cost={inf}
>         rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.NONE.[].any,id=$0,name=$1),
> rowcount=2.25, cumulative cost={inf}
>     rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
> importance=0.4510687500000001
>         rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#
> 17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1), rowcount=15.0, cumulative
> cost={22.65 rows, 37.66 cpu, 0.0 io}
>         rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#
> 21:Subset#0.ENUMERABLE.[].any,condition==($0, 1)), rowcount=2.25,
> cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}
>         rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#20:Subset#1.BINDABLE.[].any),
> rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
>     rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
>         rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
> MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15 rows,
> 0.16 cpu, 0.0 io}
> Set#3, type: RecordType(BIGINT ROWCOUNT)
>     rel#14:Subset#3.NONE.[].any, best=null, importance=0.9
>         rel#13:LogicalTableModify.NONE.[].any(input=rel#10:Subset#1.NONE.[].any,table=[x,
> MYTABLE],operation=DELETE,flattened=true), rowcount=2.25, cumulative
> cost={inf}
>     rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0
>         rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#
> 14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> rowcount=2.25, cumulative cost={inf}
>
>
>     at org.apache.calcite.plan.volcano.RelSubset$
> CheapestPlanReplacer.visit(RelSubset.java:441)
>     at org.apache.calcite.plan.volcano.RelSubset.
> buildCheapestPlan(RelSubset.java:291)
>     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> findBestExp(VolcanoPlanner.java:666)
>
>
>
>
> This is for an Insert
> Query: INSERT INTO MYTABLE(id,name) values(1,2)
> -- Logical Plan
> LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
> flattened=[true])
>   LogicalValues(type=[RecordType(INTEGER id, INTEGER name)], tuples=[[{
> 1, 2 }]])
>
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377 sec
> <<< FAILURE!
> test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> [rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented; planner
> state:
>
> Root: rel#7:Subset#1.ENUMERABLE.[].any
> Original rel:
> LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any], table=[[x,
> MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
> cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
>   LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1, 2
> }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id = 2
>
> Sets:
> Set#0, type: RecordType(INTEGER id, INTEGER name)
>     rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
>         rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(INTEGER
> id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative cost={inf}
>     rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9, importance=0.405
>         rel#9:EnumerableValues.ENUMERABLE.[[0, 1], [1]].broadcast(type=RecordType(INTEGER
> id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative cost={1.0
> rows, 1.0 cpu, 0.0 io}
> Set#1, type: RecordType(BIGINT ROWCOUNT)
>     rel#6:Subset#1.NONE.[].any, best=null, importance=0.9
>         rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.NONE.[].any,table=[x,
> MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
> cost={inf}
>     rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0
>         rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:
> Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
> rowcount=1.0, cumulative cost={inf}
>
>
>     at org.apache.calcite.plan.volcano.RelSubset$
> CheapestPlanReplacer.visit(RelSubset.java:441)
>     at org.apache.calcite.plan.volcano.RelSubset.
> buildCheapestPlan(RelSubset.java:291)
>     at org.apache.calcite.plan.volcano.VolcanoPlanner.
> findBestExp(VolcanoPlanner.java:666)
>
>
>
> I really appreciate your help
> Enrico
>
> 2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
>
>> The example from Luis works like a charm.
>> I have some questions,I will start separate threads
>>
>> Thank you
>> Enrico
>>
>> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
>>
>>> Luis thank you,
>>> my case is the second one: I want to use the Calcite planner internally in a
>>> database system. I will try your suggestion.
>>>
>>> Enrico
>>>
>>> Il mer 8 nov 2017, 20:14 Luis Fernando Kauer
>>> <lf...@yahoo.com.br.invalid> ha scritto:
>>>
>>>>  If you intend to run a query then you should follow the tutorial and
>>>> try to change the csv adapter.  You can add the table to the schema at
>>>> runtime using something like:
>>>> ---------------------------------------------------------------------
>>>>
>>>> Class.forName("org.apache.calcite.jdbc.Driver");
>>>> Properties info = new Properties();
>>>> info.setProperty("lex", "MYSQL_ANSI");
>>>>
>>>> final Connection connection = DriverManager.getConnection("jdbc:calcite:",
>>>> info);
>>>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
>>>> SchemaPlus root = conn.getRootSchema();
>>>> root.add("MYTABLE", new TableImpl());
>>>> Statement statement = conn.createStatement();
>>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
>>>> ---------------------------------------------------------------------
>>>>
>>>> But if you only want to parse, validate and optimize the query plan,
>>>> you can use something like:
>>>> ---------------------------------------------------------------------
>>>>     Table table = new TableImpl();
>>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
>>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
>>>>     schema.add("MYTABLE", table);
>>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
>>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
>>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
>>>>     SqlParser.Config parserConfig =
>>>>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
>>>>       .setCaseSensitive(false)
>>>>       .build();
>>>>
>>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
>>>>         .parserConfig(parserConfig)
>>>>         .defaultSchema(schema)
>>>>         .traitDefs(traitDefs)
>>>>         // define the rules you want to apply
>>>>
>>>>         .programs(Programs.ofRules(Programs.RULE_SET))
>>>>         .build();
>>>>     Planner planner = Frameworks.getPlanner(config);
>>>>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID < 10");
>>>>     n = planner.validate(n);
>>>>     RelNode root = planner.rel(n).project();
>>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
>>>> SqlExplainFormat.TEXT,
>>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>>>>     RelOptCluster cluster = root.getCluster();
>>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
>>>>     RelTraitSet desiredTraits =
>>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
>>>>     final RelNode newRoot = optPlanner.changeTraits(root,
>>>> desiredTraits);
>>>>     optPlanner.setRoot(newRoot);
>>>>     RelNode bestExp = optPlanner.findBestExp();
>>>> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
>>>> SqlExplainFormat.TEXT,
>>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>>>>  ---------------------------------------------------------------------
>>>>
>>>> The main problem was that you were not setting the desired trait to use
>>>> EnumerableConvention.
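
Restating the key part of the snippet above (same assumptions about the
Calcite 1.15 API; root and optPlanner are the variables from that
snippet): the logical tree comes back in Convention.NONE, which has no
implementation, so you have to convert the root to an implementable
convention before calling findBestExp():

```java
// root is the RelNode from planner.rel(sqlNode).project();
// optPlanner is root.getCluster().getPlanner().
// Replace the NONE convention trait with ENUMERABLE so the root
// subset can actually be implemented instead of failing with
// CannotPlanException.
RelTraitSet desiredTraits =
    root.getCluster().traitSet().replace(EnumerableConvention.INSTANCE);
RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
optPlanner.setRoot(newRoot);
RelNode bestExp = optPlanner.findBestExp();
```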
>>>> You can see that instead of implementing all the interfaces you should
>>>> use the available builders and classes.
>>>> Also, for implementing Table, I think you should extend AbstractTable
>>>> instead of implementing the Table interface, and you can use Statistics.of
>>>> instead of implementing the Statistic interface if it is simple:
>>>> ---------------------------------------------------------------------
>>>>
>>>>   private static class TableImpl extends AbstractTable {
>>>>     public TableImpl() {}
>>>>     @Override
>>>>     public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>>>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
>>>>       return builder
>>>>           .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
>>>>           .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
>>>>           .build();
>>>>     }
>>>>     @Override
>>>>     public Statistic getStatistic() {
>>>>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
>>>>           ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
>>>>     }
>>>>
>>>>   }
>>>> ---------------------------------------------------------------------
>>>>
>>>>
>>>>     Em quarta-feira, 8 de novembro de 2017 12:15:34 BRST, Enrico
>>>> Olivelli <eo...@gmail.com> escreveu:
>>>>
>>>>  Hi,
>>>> I am playing with the planner but I can't get it to work for a very simple
>>>> query.
>>>> The table is
>>>>  MYTABLE(id integer, name varchar)            definition is given in
>>>> code
>>>> snippet
>>>> the query is "SELECT * FROM MYTABLE"
>>>>
>>>> The error is:
>>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
>>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner
>>>> state:
>>>>
>>>> Root: rel#7:Subset#0.NONE.[0, 1].any
>>>> Original rel:
>>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
>>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0
>>>> io}, id = 5
>>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
>>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0
>>>> rows,
>>>> 16.0 cpu, 0.0 io}, id = 2
>>>>
>>>> Sets:
>>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
>>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
>>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
>>>> 1]].any(table=[default,
>>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
>>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
>>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
>>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
>>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
>>>>         rel#5:LogicalProject.NONE.[[0,
>>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
>>>> rowcount=15.0, cumulative cost={inf}
>>>>         rel#8:AbstractConverter.NONE.[0,
>>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
>>>> 1].any,convention=NONE,sort=[0,
>>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
>>>>
>>>> Does anybody have a hint for me?
>>>> I am using the current master of Calcite (1.15-SNAPSHOT)
>>>>
>>>> Thank you
>>>>
>>>> Enrico
>>>>
>>>>
>>>> My code is:
>>>>   @Test
>>>>     public void test() throws Exception {
>>>>         Table table = new TableImpl();
>>>>         CalciteSchema schema = CalciteSchema.createRootSchema(true,
>>>> true,
>>>> "default");
>>>>         schema.add("MYTABLE", table);
>>>>         SchemaPlus rootSchema = schema.plus();
>>>>         SqlRexConvertletTable convertletTable =
>>>> StandardConvertletTable.INSTANCE;
>>>>         SqlToRelConverter.Config config =
>>>>             SqlToRelConverter.Config.DEFAULT;
>>>>         FrameworkConfig frameworkConfig = new
>>>> FrameworkConfigImpl(config,
>>>> rootSchema, convertletTable);
>>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
>>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
>>>>         sqlNode = imp.validate(sqlNode);
>>>>         RelRoot relRoot = imp.rel(sqlNode);
>>>>         RelNode project = relRoot.project();
>>>>         RelOptPlanner planner = project.getCluster().getPlanner();
>>>>         planner.setRoot(project);
>>>>         RelNode findBestExp = planner.findBestExp();
>>>>         System.out.println("best:" + findBestExp);
>>>>     }
>>>>
>>>>     private class FrameworkConfigImpl implements FrameworkConfig {
>>>>
>>>>         private final SqlToRelConverter.Config config;
>>>>         private final SchemaPlus rootSchema;
>>>>         private final SqlRexConvertletTable convertletTable;
>>>>
>>>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
>>>> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
>>>>             this.config = config;
>>>>             this.rootSchema = rootSchema;
>>>>             this.convertletTable = convertletTable;
>>>>         }
>>>>
>>>>         @Override
>>>>         public SqlParser.Config getParserConfig() {
>>>>             return SqlParser.Config.DEFAULT;
>>>>         }
>>>>
>>>>         @Override
>>>>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
>>>>             return config;
>>>>         }
>>>>
>>>>         @Override
>>>>         public SchemaPlus getDefaultSchema() {
>>>>             return rootSchema;
>>>>         }
>>>>
>>>>         @Override
>>>>         public RexExecutor getExecutor() {
>>>>             return new RexExecutorImpl(new DataContextImpl());
>>>>         }
>>>>
>>>>         @Override
>>>>         public ImmutableList<Program> getPrograms() {
>>>>             return ImmutableList.of(Programs.standard());
>>>>         }
>>>>
>>>>         @Override
>>>>         public SqlOperatorTable getOperatorTable() {
>>>>             return new SqlStdOperatorTable();
>>>>         }
>>>>
>>>>         @Override
>>>>         public RelOptCostFactory getCostFactory() {
>>>>             return null;
>>>>         }
>>>>
>>>>         @Override
>>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
>>>>
>>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
>>>>                     RelCollationTraitDef.INSTANCE,
>>>>                     RelDistributionTraitDef.INSTANCE
>>>>             );
>>>>         }
>>>>
>>>>         @Override
>>>>         public SqlRexConvertletTable getConvertletTable() {
>>>>             return convertletTable;
>>>>         }
>>>>
>>>>         @Override
>>>>         public Context getContext() {
>>>>             return new ContextImpl();
>>>>         }
>>>>
>>>>         @Override
>>>>         public RelDataTypeSystem getTypeSystem() {
>>>>             return RelDataTypeSystem.DEFAULT;
>>>>         }
>>>>
>>>>         class DataContextImpl implements DataContext {
>>>>
>>>>             public DataContextImpl() {
>>>>             }
>>>>
>>>>             @Override
>>>>             public SchemaPlus getRootSchema() {
>>>>                 return rootSchema;
>>>>             }
>>>>
>>>>             @Override
>>>>             public JavaTypeFactory getTypeFactory() {
>>>>                 throw new UnsupportedOperationException("Not supported
>>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>>             }
>>>>
>>>>             @Override
>>>>             public QueryProvider getQueryProvider() {
>>>>                 throw new UnsupportedOperationException("Not supported
>>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>>             }
>>>>
>>>>             @Override
>>>>             public Object get(String name) {
>>>>                 throw new UnsupportedOperationException("Not supported
>>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>>             }
>>>>
>>>>         }
>>>>
>>>>         private class ContextImpl implements Context {
>>>>
>>>>             public ContextImpl() {
>>>>             }
>>>>
>>>>             @Override
>>>>             public <C> C unwrap(Class<C> aClass) {
>>>>                 return null;
>>>>             }
>>>>         }
>>>>     }
>>>>
>>>>     private static class TableImpl implements Table {
>>>>
>>>>         public TableImpl() {
>>>>         }
>>>>
>>>>         @Override
>>>>         public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>>>>             return typeFactory
>>>>                     .builder()
>>>>                     .add("id",
>>>> typeFactory.createSqlType(SqlTypeName.INTEGER))
>>>>                     .add("name",
>>>> typeFactory.createSqlType(SqlTypeName.VARCHAR))
>>>>                     .build();
>>>>         }
>>>>
>>>>         @Override
>>>>         public Statistic getStatistic() {
>>>>             return new StatisticImpl();
>>>>         }
>>>>
>>>>         @Override
>>>>         public Schema.TableType getJdbcTableType() {
>>>>             throw new UnsupportedOperationException("Not supported
>>>> yet.");
>>>> //To change body of generated methods, choose Tools | Templates.
>>>>         }
>>>>
>>>>         @Override
>>>>         public boolean isRolledUp(String column) {
>>>>             return true;
>>>>         }
>>>>
>>>>         @Override
>>>>         public boolean rolledUpColumnValidInsideAgg(String column,
>>>> SqlCall
>>>> call, SqlNode parent, CalciteConnectionConfig config) {
>>>>             return false;
>>>>         }
>>>>
>>>>         class StatisticImpl implements Statistic {
>>>>
>>>>             public StatisticImpl() {
>>>>             }
>>>>
>>>>             @Override
>>>>             public Double getRowCount() {
>>>>                 return 15d;
>>>>             }
>>>>
>>>>             @Override
>>>>             public boolean isKey(ImmutableBitSet columns) {
>>>>                 return false;
>>>>             }
>>>>
>>>>             @Override
>>>>             public List<RelReferentialConstraint>
>>>> getReferentialConstraints() {
>>>>                 return Collections.emptyList();
>>>>             }
>>>>
>>>>             @Override
>>>>             public List<RelCollation> getCollations() {
>>>>                 RelCollation c = new RelCollationImpl(
>>>>                         ImmutableList.of(
>>>>                                 new RelFieldCollation(0,
>>>> RelFieldCollation.Direction.ASCENDING),
>>>>                                 new RelFieldCollation(1,
>>>> RelFieldCollation.Direction.ASCENDING)
>>>>                         )) {
>>>>                 };
>>>>                 return Arrays.asList(c);
>>>>             }
>>>>
>>>>             @Override
>>>>             public RelDistribution getDistribution() {
>>>>                 return RelDistributions.ANY;
>>>>             }
>>>>         }
>>>>     }
>>>>
>>>>
>>>>
>>>>
>>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
>>>>
>>>> > Yes that is definitely possible. I am too busy to write a code
>>>> snippet but
>>>> > you should take a look at PlannerTest.
>>>> >
>>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
>>>> > stephane.campinas@gmail.com> wrote:
>>>> > >
>>>> > > Hi,
>>>> > >
>>>> > > I am trying to use the Volcano planner in order to optimise queries
>>>> based
>>>> > > on statistics but I am having some issues understanding how to
>>>> achieve
>>>> > > this, even after looking at the Github repository for tests.
>>>> > > A first goal I would like to achieve would be to choose a join
>>>> > > implementation based on its cost.
>>>> > >
>>>> > > For example, a query tree can have several joins, and depending on
>>>> the
>>>> > > position of the join in the tree, a certain implementation would
>>>> be more
>>>> > > efficient than another.
>>>> > > Would that be possible? If so, could you share a code snippet?
>>>> > >
>>>> > > Thanks
>>>> > >
>>>> > > --
>>>> > > Campinas Stéphane
>>>> >
>>>> >
>>>
>>> --
>>>
>>>
>>> -- Enrico Olivelli
>>>
>>
>>
>
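The original question above — choosing a join implementation based on its
cost — is exactly what a Volcano-style planner does: every physical
alternative in an equivalence set reports a cost, and the planner keeps the
cheapest one. The idea can be sketched outside Calcite with a self-contained
toy (the cost formulas and class names here are illustrative assumptions,
not Calcite's actual cost model):

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.DoubleBinaryOperator;

// Toy Volcano-style choice between join implementations by cost.
// Cost formulas are illustrative assumptions, not Calcite's real model.
public class JoinCostDemo {
    static final class JoinImpl {
        final String name;
        final DoubleBinaryOperator cost; // (leftRows, rightRows) -> cost
        JoinImpl(String name, DoubleBinaryOperator cost) {
            this.name = name;
            this.cost = cost;
        }
    }

    static final List<JoinImpl> ALTERNATIVES = List.of(
        // Nested loop: scans the right input once per left row.
        new JoinImpl("NestedLoopJoin", (l, r) -> l * r),
        // Hash join: build on the smaller input, probe with the larger.
        new JoinImpl("HashJoin", (l, r) -> 1.5 * Math.min(l, r) + Math.max(l, r)));

    // The planner keeps the cheapest member of the equivalence set.
    static String cheapest(double leftRows, double rightRows) {
        return ALTERNATIVES.stream()
            .min(Comparator.comparingDouble(
                j -> j.cost.applyAsDouble(leftRows, rightRows)))
            .get().name;
    }

    public static void main(String[] args) {
        System.out.println(cheapest(2, 2));       // tiny inputs favor nested loop
        System.out.println(cheapest(1000, 1000)); // large inputs favor hash join
    }
}
```

Calcite's VolcanoPlanner performs the same comparison over RelOptCost
objects produced by each physical RelNode's computeSelfCost, which is why
table statistics such as row counts feed directly into which join wins.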

Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Sorry I cannot make it work for INSERT/DELETE/UPDATE


This is the error for a DELETE
Query: DELETE FROM MYTABLE where id=1
-- Logical Plan
LogicalTableModify(table=[[x, MYTABLE]], operation=[DELETE],
flattened=[true])
  LogicalProject(id=[$0], name=[$1])
    LogicalFilter(condition=[=($0, 1)])
      LogicalTableScan(table=[[x, MYTABLE]])

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.167 sec
<<< FAILURE!
test(PlannerExampleTest)  Time elapsed: 3.034 sec  <<< ERROR!
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
[rel#15:Subset#3.ENUMERABLE.[].any] could not be implemented; planner state:

Root: rel#15:Subset#3.ENUMERABLE.[].any
Original rel:
LogicalTableModify(subset=[rel#15:Subset#3.ENUMERABLE.[].any], table=[[x,
MYTABLE]], operation=[DELETE], flattened=[true]): rowcount = 2.25,
cumulative cost = {2.25 rows, 0.0 cpu, 0.0 io}, id = 13
  LogicalProject(subset=[rel#12:Subset#2.NONE.[].any], id=[$0], name=[$1]):
rowcount = 2.25, cumulative cost = {2.25 rows, 4.5 cpu, 0.0 io}, id = 11
    LogicalFilter(subset=[rel#10:Subset#1.NONE.[].any], condition=[=($0,
1)]): rowcount = 2.25, cumulative cost = {2.25 rows, 15.0 cpu, 0.0 io}, id
= 9
      LogicalTableScan(subset=[rel#8:Subset#0.NONE.[].any], table=[[x,
MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu, 0.0
io}, id = 4

Sets:
Set#0, type: RecordType(INTEGER id, INTEGER name)
    rel#8:Subset#0.NONE.[].any, best=null, importance=0.7290000000000001
        rel#4:LogicalTableScan.NONE.[].any(table=[x, MYTABLE]),
rowcount=15.0, cumulative cost={inf}
    rel#21:Subset#0.ENUMERABLE.[].any, best=rel#26,
importance=0.36450000000000005

rel#26:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#25:Subset#0.BINDABLE.[].any),
rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
    rel#25:Subset#0.BINDABLE.[].any, best=rel#24,
importance=0.36450000000000005
        rel#24:BindableTableScan.BINDABLE.[].any(table=[x, MYTABLE]),
rowcount=15.0, cumulative cost={0.15 rows, 0.16 cpu, 0.0 io}
Set#1, type: RecordType(INTEGER id, INTEGER name)
    rel#10:Subset#1.NONE.[].any, best=null, importance=0.81

rel#9:LogicalFilter.NONE.[].any(input=rel#8:Subset#0.NONE.[].any,condition==($0,
1)), rowcount=2.25, cumulative cost={inf}

rel#11:LogicalProject.NONE.[].any(input=rel#10:Subset#1.NONE.[].any,id=$0,name=$1),
rowcount=2.25, cumulative cost={inf}
    rel#17:Subset#1.ENUMERABLE.[].any, best=rel#29,
importance=0.4510687500000001

rel#18:EnumerableProject.ENUMERABLE.[].any(input=rel#17:Subset#1.ENUMERABLE.[].any,id=$0,name=$1),
rowcount=15.0, cumulative cost={22.65 rows, 37.66 cpu, 0.0 io}

rel#22:EnumerableFilter.ENUMERABLE.[].any(input=rel#21:Subset#0.ENUMERABLE.[].any,condition==($0,
1)), rowcount=2.25, cumulative cost={9.9 rows, 22.66 cpu, 0.0 io}

rel#29:EnumerableInterpreter.ENUMERABLE.[].any(input=rel#20:Subset#1.BINDABLE.[].any),
rowcount=15.0, cumulative cost={7.65 rows, 7.66 cpu, 0.0 io}
    rel#20:Subset#1.BINDABLE.[].any, best=rel#19, importance=0.405
        rel#19:BindableTableScan.BINDABLE.[].any(table=[x,
MYTABLE],filters=[=($0, 1)]), rowcount=15.0, cumulative cost={0.15 rows,
0.16 cpu, 0.0 io}
Set#3, type: RecordType(BIGINT ROWCOUNT)
    rel#14:Subset#3.NONE.[].any, best=null, importance=0.9

rel#13:LogicalTableModify.NONE.[].any(input=rel#10:Subset#1.NONE.[].any,table=[x,
MYTABLE],operation=DELETE,flattened=true), rowcount=2.25, cumulative
cost={inf}
    rel#15:Subset#3.ENUMERABLE.[].any, best=null, importance=1.0

rel#16:AbstractConverter.ENUMERABLE.[].any(input=rel#14:Subset#3.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
rowcount=2.25, cumulative cost={inf}


    at
org.apache.calcite.plan.volcano.RelSubset$CheapestPlanReplacer.visit(RelSubset.java:441)
    at
org.apache.calcite.plan.volcano.RelSubset.buildCheapestPlan(RelSubset.java:291)
    at
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:666)




This is the error for an INSERT
Query: INSERT INTO MYTABLE(id,name) values(1,2)
-- Logical Plan
LogicalTableModify(table=[[x, MYTABLE]], operation=[INSERT],
flattened=[true])
  LogicalValues(type=[RecordType(INTEGER id, INTEGER name)], tuples=[[{ 1,
2 }]])

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.377 sec
<<< FAILURE!
test(PlannerExampleTest)  Time elapsed: 2.214 sec  <<< ERROR!
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
[rel#7:Subset#1.ENUMERABLE.[].any] could not be implemented; planner state:

Root: rel#7:Subset#1.ENUMERABLE.[].any
Original rel:
LogicalTableModify(subset=[rel#7:Subset#1.ENUMERABLE.[].any], table=[[x,
MYTABLE]], operation=[INSERT], flattened=[true]): rowcount = 1.0,
cumulative cost = {1.0 rows, 0.0 cpu, 0.0 io}, id = 5
  LogicalValues(subset=[rel#4:Subset#0.NONE.[].any], tuples=[[{ 1, 2 }]]):
rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id = 2

Sets:
Set#0, type: RecordType(INTEGER id, INTEGER name)
    rel#4:Subset#0.NONE.[].any, best=null, importance=0.81
        rel#2:LogicalValues.NONE.[[0, 1], [1]].any(type=RecordType(INTEGER
id, INTEGER name),tuples=[{ 1, 2 }]), rowcount=1.0, cumulative cost={inf}
    rel#10:Subset#0.ENUMERABLE.[].broadcast, best=rel#9, importance=0.405
        rel#9:EnumerableValues.ENUMERABLE.[[0, 1],
[1]].broadcast(type=RecordType(INTEGER id, INTEGER name),tuples=[{ 1, 2
}]), rowcount=1.0, cumulative cost={1.0 rows, 1.0 cpu, 0.0 io}
Set#1, type: RecordType(BIGINT ROWCOUNT)
    rel#6:Subset#1.NONE.[].any, best=null, importance=0.9

rel#5:LogicalTableModify.NONE.[].any(input=rel#4:Subset#0.NONE.[].any,table=[x,
MYTABLE],operation=INSERT,flattened=true), rowcount=1.0, cumulative
cost={inf}
    rel#7:Subset#1.ENUMERABLE.[].any, best=null, importance=1.0

rel#8:AbstractConverter.ENUMERABLE.[].any(input=rel#6:Subset#1.NONE.[].any,convention=ENUMERABLE,sort=[],dist=any),
rowcount=1.0, cumulative cost={inf}


    at
org.apache.calcite.plan.volcano.RelSubset$CheapestPlanReplacer.visit(RelSubset.java:441)
    at
org.apache.calcite.plan.volcano.RelSubset.buildCheapestPlan(RelSubset.java:291)
    at
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:666)



I really appreciate your help
Enrico
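Both traces above fail the same way: the root LogicalTableModify subset
never acquires an Enumerable implementation (best=null), so findBestExp
throws CannotPlanException. In Calcite, the enumerable conversion of a table
modification generally fires only when the target table can be unwrapped to
ModifiableTable. The failure mode can be sketched with a self-contained toy
(the names are illustrative, not Calcite's real rule registry):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy model of the failure above: each logical node needs a rule that
// produces a physical (Enumerable-like) implementation; if no rule
// matches, the planner cannot build a best plan for the root.
public class CannotPlanDemo {
    // Logical operator -> physical implementation produced by some rule.
    // There is deliberately no entry for "LogicalTableModify", mirroring
    // a table that does not support INSERT/UPDATE/DELETE.
    static final Map<String, String> PHYSICAL_RULES = Map.of(
        "LogicalTableScan", "EnumerableTableScan",
        "LogicalFilter", "EnumerableFilter",
        "LogicalProject", "EnumerableProject");

    static List<String> implement(List<String> logicalPlan) {
        List<String> physical = new ArrayList<>();
        for (String node : logicalPlan) {
            String impl = PHYSICAL_RULES.get(node);
            if (impl == null) // analogous to CannotPlanException
                throw new IllegalStateException(
                    "Node " + node + " could not be implemented");
            physical.add(impl);
        }
        return physical;
    }

    public static void main(String[] args) {
        System.out.println(implement(
            List.of("LogicalTableScan", "LogicalFilter", "LogicalProject")));
        // A DELETE plan adds LogicalTableModify on top and fails:
        try {
            implement(List.of("LogicalTableScan", "LogicalFilter",
                "LogicalProject", "LogicalTableModify"));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With Calcite itself, the usual fix would be to make the table implement
ModifiableTable (or handle DML in your own convention) so that a rule exists
to implement the LogicalTableModify node.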

2017-11-09 9:43 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:

> The example from Luis works like a charm.
> I have some questions,I will start separate threads
>
> Thank you
> Enrico
>
> 2017-11-08 21:51 GMT+01:00 Enrico Olivelli <eo...@gmail.com>:
>
>> Luis thank you,
>> my case is the second one. I want to use Calcite planner internally on a
>> database system. I will try with your suggestion
>>
>> Enrico
>>
>> Il mer 8 nov 2017, 20:14 Luis Fernando Kauer <lf...@yahoo.com.br.invalid>
>> ha scritto:
>>
>>>  If you intend to run a query then you should follow the tutorial and
>>> try to change the csv adapter.  You can add the table to the schema at
>>> runtime using something like:
>>> ---------------------------------------------------------------------
>>>
>>> Class.forName("org.apache.calcite.jdbc.Driver");
>>> Properties info = new Properties();
>>> info.setProperty("lex", "MYSQL_ANSI");
>>>
>>> final Connection connection = DriverManager.getConnection("jdbc:calcite:",
>>> info);
>>> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
>>> SchemaPlus root = conn.getRootSchema();
>>> root.add("MYTABLE", new TableImpl());
>>> Statement statement = conn.createStatement();
>>> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
>>> ---------------------------------------------------------------------
>>>
>>> But if you only want to parse, validate and optimize the query plan, you
>>> can use something like:
>>> ---------------------------------------------------------------------
>>>     Table table = new TableImpl();
>>>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
>>>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
>>>     schema.add("MYTABLE", table);
>>>     List<RelTraitDef> traitDefs = new ArrayList<>();
>>>     traitDefs.add(ConventionTraitDef.INSTANCE);
>>>     traitDefs.add(RelCollationTraitDef.INSTANCE);
>>>     SqlParser.Config parserConfig =
>>>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
>>>       .setCaseSensitive(false)
>>>       .build();
>>>
>>>     final FrameworkConfig config = Frameworks.newConfigBuilder()
>>>         .parserConfig(parserConfig)
>>>         .defaultSchema(schema)
>>>         .traitDefs(traitDefs)
>>>         // define the rules you want to apply
>>>
>>>         .programs(Programs.ofRules(Programs.RULE_SET))
>>>         .build();
>>>     Planner planner = Frameworks.getPlanner(config);
>>>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID < 10");
>>>     n = planner.validate(n);
>>>     RelNode root = planner.rel(n).project();
>>>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
>>> SqlExplainFormat.TEXT,
>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>>>     RelOptCluster cluster = root.getCluster();
>>>     final RelOptPlanner optPlanner = cluster.getPlanner();
>>>     RelTraitSet desiredTraits =
>>>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
>>>     final RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
>>>     optPlanner.setRoot(newRoot);
>>>     RelNode bestExp = optPlanner.findBestExp();
>>> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
>>> SqlExplainFormat.TEXT,
>>>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>>>  ---------------------------------------------------------------------
>>>
>>> The main problem was that you were not setting the desired trait to use
>>> EnumerableConvention.
>>> You can see that instead of implementing all the interfaces you should
>>> use the available builders and classes.
>>> Also for implementing Table I think you should extend AbstractTable
>>> instead of implementing Table interface and you can use Statistics.of
>>> instead of implementing Statistic interface if it is simple:
>>> ---------------------------------------------------------------------
>>>
>>>   private static class TableImpl extends AbstractTable {
>>>     public TableImpl() {}
>>>     @Override    public RelDataType getRowType(RelDataTypeFactory
>>> typeFactory) {
>>>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
>>>       return builder.add("id", typeFactory.createSqlType(SqlT
>>> ypeName.INTEGER))
>>>           .add("name", typeFactory.createSqlType(SqlT
>>> ypeName.VARCHAR)).build();
>>>     }
>>>     @Override
>>>     public Statistic getStatistic() {
>>>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
>>>           ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
>>>     }
>>>
>>>   }
>>> ---------------------------------------------------------------------
>>>
>>>
>>>     Em quarta-feira, 8 de novembro de 2017 12:15:34 BRST, Enrico
>>> Olivelli <eo...@gmail.com> escreveu:
>>>
>>>  Hi,
>>> I am playing with the planner but I can't get it to work for a very
>>> simple query.
>>> The table is MYTABLE(id integer, name varchar); its definition is given
>>> in the code snippet below.
>>> The query is "SELECT * FROM MYTABLE".
>>>
>>> The error is:
>>> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
>>> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:
>>>
>>> Root: rel#7:Subset#0.NONE.[0, 1].any
>>> Original rel:
>>> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
>>> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0
>>> io}, id = 5
>>>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
>>> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0
>>> rows,
>>> 16.0 cpu, 0.0 io}, id = 2
>>>
>>> Sets:
>>> Set#0, type: RecordType(INTEGER id, VARCHAR name)
>>>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
>>>         rel#2:EnumerableTableScan.ENUMERABLE.[[0,
>>> 1]].any(table=[default,
>>> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
>>>         rel#9:EnumerableProject.ENUMERABLE.[[0,
>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
>>> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
>>>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
>>>         rel#5:LogicalProject.NONE.[[0,
>>> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
>>> rowcount=15.0, cumulative cost={inf}
>>>         rel#8:AbstractConverter.NONE.[0,
>>> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0,
>>> 1].any,convention=NONE,sort=[0,
>>> 1],dist=any), rowcount=15.0, cumulative cost={inf}
>>>
>>> Does anybody have a hint for me?
>>> I am using the current master of Calcite (1.15-SNAPSHOT).
>>>
>>> Thank you
>>>
>>> Enrico
>>>
>>>
>>> My code is:
>>>   @Test
>>>     public void test() throws Exception {
>>>         Table table = new TableImpl();
>>>         CalciteSchema schema = CalciteSchema.createRootSchema(true,
>>> true,
>>> "default");
>>>         schema.add("MYTABLE", table);
>>>         SchemaPlus rootSchema = schema.plus();
>>>         SqlRexConvertletTable convertletTable =
>>> StandardConvertletTable.INSTANCE;
>>>         SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAU
>>> LT;
>>>         FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config,
>>> rootSchema, convertletTable);
>>>         Planner imp = Frameworks.getPlanner(frameworkConfig);
>>>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
>>>         sqlNode = imp.validate(sqlNode);
>>>         RelRoot relRoot = imp.rel(sqlNode);
>>>         RelNode project = relRoot.project();
>>>         RelOptPlanner planner = project.getCluster().getPlanner();
>>>         planner.setRoot(project);
>>>         RelNode findBestExp = planner.findBestExp();
>>>         System.out.println("best:" + findBestExp);
>>>     }
>>>
>>>     private class FrameworkConfigImpl implements FrameworkConfig {
>>>
>>>         private final SqlToRelConverter.Config config;
>>>         private final SchemaPlus rootSchema;
>>>         private final SqlRexConvertletTable convertletTable;
>>>
>>>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
>>> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
>>>             this.config = config;
>>>             this.rootSchema = rootSchema;
>>>             this.convertletTable = convertletTable;
>>>         }
>>>
>>>         @Override
>>>         public SqlParser.Config getParserConfig() {
>>>             return SqlParser.Config.DEFAULT;
>>>         }
>>>
>>>         @Override
>>>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
>>>             return config;
>>>         }
>>>
>>>         @Override
>>>         public SchemaPlus getDefaultSchema() {
>>>             return rootSchema;
>>>         }
>>>
>>>         @Override
>>>         public RexExecutor getExecutor() {
>>>             return new RexExecutorImpl(new DataContextImpl());
>>>         }
>>>
>>>         @Override
>>>         public ImmutableList<Program> getPrograms() {
>>>             return ImmutableList.of(Programs.standard());
>>>         }
>>>
>>>         @Override
>>>         public SqlOperatorTable getOperatorTable() {
>>>             return new SqlStdOperatorTable();
>>>         }
>>>
>>>         @Override
>>>         public RelOptCostFactory getCostFactory() {
>>>             return null;
>>>         }
>>>
>>>         @Override
>>>         public ImmutableList<RelTraitDef> getTraitDefs() {
>>>
>>>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
>>>                     RelCollationTraitDef.INSTANCE,
>>>                     RelDistributionTraitDef.INSTANCE
>>>             );
>>>         }
>>>
>>>         @Override
>>>         public SqlRexConvertletTable getConvertletTable() {
>>>             return convertletTable;
>>>         }
>>>
>>>         @Override
>>>         public Context getContext() {
>>>             return new ContextImpl();
>>>         }
>>>
>>>         @Override
>>>         public RelDataTypeSystem getTypeSystem() {
>>>             return RelDataTypeSystem.DEFAULT;
>>>         }
>>>
>>>         class DataContextImpl implements DataContext {
>>>
>>>             public DataContextImpl() {
>>>             }
>>>
>>>             @Override
>>>             public SchemaPlus getRootSchema() {
>>>                 return rootSchema;
>>>             }
>>>
>>>             @Override
>>>             public JavaTypeFactory getTypeFactory() {
>>>                 throw new UnsupportedOperationException("Not supported
>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>             }
>>>
>>>             @Override
>>>             public QueryProvider getQueryProvider() {
>>>                 throw new UnsupportedOperationException("Not supported
>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>             }
>>>
>>>             @Override
>>>             public Object get(String name) {
>>>                 throw new UnsupportedOperationException("Not supported
>>> yet."); //To change body of generated methods, choose Tools | Templates.
>>>             }
>>>
>>>         }
>>>
>>>         private class ContextImpl implements Context {
>>>
>>>             public ContextImpl() {
>>>             }
>>>
>>>             @Override
>>>             public <C> C unwrap(Class<C> aClass) {
>>>                 return null;
>>>             }
>>>         }
>>>     }
>>>
>>>     private static class TableImpl implements Table {
>>>
>>>         public TableImpl() {
>>>         }
>>>
>>>         @Override
>>>         public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>>>             return typeFactory
>>>                     .builder()
>>>                     .add("id",
>>> typeFactory.createSqlType(SqlTypeName.INTEGER))
>>>                     .add("name",
>>> typeFactory.createSqlType(SqlTypeName.VARCHAR))
>>>                     .build();
>>>         }
>>>
>>>         @Override
>>>         public Statistic getStatistic() {
>>>             return new StatisticImpl();
>>>         }
>>>
>>>         @Override
>>>         public Schema.TableType getJdbcTableType() {
>>>             throw new UnsupportedOperationException("Not supported
>>> yet.");
>>> //To change body of generated methods, choose Tools | Templates.
>>>         }
>>>
>>>         @Override
>>>         public boolean isRolledUp(String column) {
>>>             return true;
>>>         }
>>>
>>>         @Override
>>>         public boolean rolledUpColumnValidInsideAgg(String column,
>>> SqlCall
>>> call, SqlNode parent, CalciteConnectionConfig config) {
>>>             return false;
>>>         }
>>>
>>>         class StatisticImpl implements Statistic {
>>>
>>>             public StatisticImpl() {
>>>             }
>>>
>>>             @Override
>>>             public Double getRowCount() {
>>>                 return 15d;
>>>             }
>>>
>>>             @Override
>>>             public boolean isKey(ImmutableBitSet columns) {
>>>                 return false;
>>>             }
>>>
>>>             @Override
>>>             public List<RelReferentialConstraint>
>>> getReferentialConstraints() {
>>>                 return Collections.emptyList();
>>>             }
>>>
>>>             @Override
>>>             public List<RelCollation> getCollations() {
>>>                 RelCollation c = new RelCollationImpl(
>>>                         ImmutableList.of(
>>>                                 new RelFieldCollation(0,
>>> RelFieldCollation.Direction.ASCENDING),
>>>                                 new RelFieldCollation(1,
>>> RelFieldCollation.Direction.ASCENDING)
>>>                         )) {
>>>                 };
>>>                 return Arrays.asList(c);
>>>             }
>>>
>>>             @Override
>>>             public RelDistribution getDistribution() {
>>>                 return RelDistributions.ANY;
>>>             }
>>>         }
>>>     }
>>>
>>>
>>>
>>>
>>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
>>>
>>> > Yes that is definitely possible. I am too busy to write a code snippet
>>> but
>>> > you should take a look at PlannerTest.
>>> >
>>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
>>> > stephane.campinas@gmail.com> wrote:
>>> > >
>>> > > Hi,
>>> > >
>>> > > I am trying to use the Volcano planner in order to optimise queries
>>> based
>>> > > on statistics but I am having some issues understanding how to
>>> achieve
>>> > > this, even after looking at the Github repository for tests.
>>> > > A first goal I would like to achieve would be to choose a join
>>> > > implementation based on its cost.
>>> > >
>>> > > For example, a query tree can have several joins, and depending on
>>> the
>>> > > position of the join in the tree, a certain implementation would be
>>> more
>>> > > efficient than another.
>>> > > Would that be possible? If so, could you share a code snippet?
>>> > >
>>> > > Thanks
>>> > >
>>> > > --
>>> > > Campinas Stéphane
>>> >
>>> >
>>
>> --
>>
>>
>> -- Enrico Olivelli
>>
>
>

>>
>>             public StatisticImpl() {
>>             }
>>
>>             @Override
>>             public Double getRowCount() {
>>                 return 15d;
>>             }
>>
>>             @Override
>>             public boolean isKey(ImmutableBitSet columns) {
>>                 return false;
>>             }
>>
>>             @Override
>>             public List<RelReferentialConstraint>
>> getReferentialConstraints() {
>>                 return Collections.emptyList();
>>             }
>>
>>             @Override
>>             public List<RelCollation> getCollations() {
>>                 RelCollation c = new RelCollationImpl(
>>                         ImmutableList.of(
>>                                 new RelFieldCollation(0,
>> RelFieldCollation.Direction.ASCENDING),
>>                                 new RelFieldCollation(1,
>> RelFieldCollation.Direction.ASCENDING)
>>                         )) {
>>                 };
>>                 return Arrays.asList(c);
>>             }
>>
>>             @Override
>>             public RelDistribution getDistribution() {
>>                 return RelDistributions.ANY;
>>             }
>>         }
>>     }
>>
>>
>>
>>
>> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
>>
>> > Yes that is definitely possible. I am too busy to write a code snippet
>> > but you should take a look at PlannerTest.
>> >
>> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
>> > stephane.campinas@gmail.com> wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying to use the Volcano planner in order to optimise queries
>> > > based on statistics but I am having some issues understanding how to
>> > > achieve this, even after looking at the GitHub repository for tests.
>> > > A first goal I would like to achieve would be to choose a join
>> > > implementation based on its cost.
>> > >
>> > > For example, a query tree can have several joins, and depending on the
>> > > position of the join in the tree, a certain implementation would be
>> > > more efficient than another.
>> > > Would that be possible? If so, could you share a code snippet?
>> > >
>> > > Thanks
>> > >
>> > > --
>> > > Campinas Stéphane
>> >
>> >
>
> --
>
>
> -- Enrico Olivelli
>

Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Luis, thank you.
My case is the second one: I want to use the Calcite planner internally in a
database system. I will try your suggestion.

Enrico

Il mer 8 nov 2017, 20:14 Luis Fernando Kauer <lf...@yahoo.com.br.invalid>
ha scritto:

> If you intend to run a query, you should follow the tutorial and try
> to change the CSV adapter. You can add the table to the schema at runtime
> using something like:
> ---------------------------------------------------------------------
>
> Class.forName("org.apache.calcite.jdbc.Driver");
> Properties info = new Properties();
> info.setProperty("lex", "MYSQL_ANSI");
>
> final Connection connection = DriverManager.getConnection("jdbc:calcite:",
> info);
> CalciteConnection conn = connection.unwrap(CalciteConnection.class);
> SchemaPlus root = conn.getRootSchema();
> root.add("MYTABLE", new TableImpl());
> Statement statement = conn.createStatement();
> ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
> ---------------------------------------------------------------------
>
> But if you only want to parse, validate and optimize the query plan, you
> can use something like:
> ---------------------------------------------------------------------
>     Table table = new TableImpl();
>     final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
>     SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
>     schema.add("MYTABLE", table);
>     List<RelTraitDef> traitDefs = new ArrayList<>();
>     traitDefs.add(ConventionTraitDef.INSTANCE);
>     traitDefs.add(RelCollationTraitDef.INSTANCE);
>     SqlParser.Config parserConfig =
>        SqlParser.configBuilder(SqlParser.Config.DEFAULT)
>       .setCaseSensitive(false)
>       .build();
>
>     final FrameworkConfig config = Frameworks.newConfigBuilder()
>         .parserConfig(parserConfig)
>         .defaultSchema(schema)
>         .traitDefs(traitDefs)
>         // define the rules you want to apply
>
>         .programs(Programs.ofRules(Programs.RULE_SET))
>         .build();
>     Planner planner = Frameworks.getPlanner(config);
>     SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID < 10");
>     n = planner.validate(n);
>     RelNode root = planner.rel(n).project();
>     System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root,
> SqlExplainFormat.TEXT,
>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>     RelOptCluster cluster = root.getCluster();
>     final RelOptPlanner optPlanner = cluster.getPlanner();
>     RelTraitSet desiredTraits =
>         cluster.traitSet().replace(EnumerableConvention.INSTANCE);
>     final RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
>     optPlanner.setRoot(newRoot);
>     RelNode bestExp = optPlanner.findBestExp();
> System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp,
> SqlExplainFormat.TEXT,
>         SqlExplainLevel.DIGEST_ATTRIBUTES));
>  ---------------------------------------------------------------------
>
> The main problem was that you were not setting the desired trait to use
> EnumerableConvention.
> You can see that instead of implementing all the interfaces yourself, you
> should use the available builders and classes.
> Also, for implementing Table I think you should extend AbstractTable
> instead of implementing the Table interface, and you can use Statistics.of
> instead of implementing the Statistic interface if it is simple:
> ---------------------------------------------------------------------
>
>   private static class TableImpl extends AbstractTable {
>     public TableImpl() {}
>     @Override
>     public RelDataType getRowType(RelDataTypeFactory
> typeFactory) {
>       Builder builder = new RelDataTypeFactory.Builder(typeFactory);
>       return builder.add("id",
> typeFactory.createSqlType(SqlTypeName.INTEGER))
>           .add("name",
> typeFactory.createSqlType(SqlTypeName.VARCHAR)).build();
>     }
>     @Override
>     public Statistic getStatistic() {
>        return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(),
>           ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
>     }
>
>   }
> ---------------------------------------------------------------------
>
>
>     Em quarta-feira, 8 de novembro de 2017 12:15:34 BRST, Enrico Olivelli <
> eolivelli@gmail.com> escreveu:
>
>  Hi,
> I am playing with the planner but I can't get it to work for a very simple
> query.
> The table is
> MYTABLE(id integer, name varchar); its definition is given in the code
> snippet below.
> The query is "SELECT * FROM MYTABLE".
>
> The error is:
> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
> [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:
>
> Root: rel#7:Subset#0.NONE.[0, 1].any
> Original rel:
> LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
> name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0
> io}, id = 5
>   EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
> table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows,
> 16.0 cpu, 0.0 io}, id = 2
>
> Sets:
> Set#0, type: RecordType(INTEGER id, VARCHAR name)
>     rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
>         rel#2:EnumerableTableScan.ENUMERABLE.[[0, 1]].any(table=[default,
> MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
>         rel#9:EnumerableProject.ENUMERABLE.[[0,
> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
>     rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
>         rel#5:LogicalProject.NONE.[[0,
> 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
> rowcount=15.0, cumulative cost={inf}
>         rel#8:AbstractConverter.NONE.[0,
> 1].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,convention=NONE,sort=[0,
> 1],dist=any), rowcount=15.0, cumulative cost={inf}
>
> Does anybody have a hint for me?
> I am using the current master of Calcite (1.15-SNAPSHOT).
>
> Thank you
>
> Enrico
>
>
> My code is:
>   @Test
>     public void test() throws Exception {
>         Table table = new TableImpl();
>         CalciteSchema schema = CalciteSchema.createRootSchema(true, true,
> "default");
>         schema.add("MYTABLE", table);
>         SchemaPlus rootSchema = schema.plus();
>         SqlRexConvertletTable convertletTable =
> StandardConvertletTable.INSTANCE;
>         SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
>         FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config,
> rootSchema, convertletTable);
>         Planner imp = Frameworks.getPlanner(frameworkConfig);
>         SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
>         sqlNode = imp.validate(sqlNode);
>         RelRoot relRoot = imp.rel(sqlNode);
>         RelNode project = relRoot.project();
>         RelOptPlanner planner = project.getCluster().getPlanner();
>         planner.setRoot(project);
>         RelNode findBestExp = planner.findBestExp();
>         System.out.println("best:" + findBestExp);
>     }
>
>     private class FrameworkConfigImpl implements FrameworkConfig {
>
>         private final SqlToRelConverter.Config config;
>         private final SchemaPlus rootSchema;
>         private final SqlRexConvertletTable convertletTable;
>
>         public FrameworkConfigImpl(SqlToRelConverter.Config config,
> SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
>             this.config = config;
>             this.rootSchema = rootSchema;
>             this.convertletTable = convertletTable;
>         }
>
>         @Override
>         public SqlParser.Config getParserConfig() {
>             return SqlParser.Config.DEFAULT;
>         }
>
>         @Override
>         public SqlToRelConverter.Config getSqlToRelConverterConfig() {
>             return config;
>         }
>
>         @Override
>         public SchemaPlus getDefaultSchema() {
>             return rootSchema;
>         }
>
>         @Override
>         public RexExecutor getExecutor() {
>             return new RexExecutorImpl(new DataContextImpl());
>         }
>
>         @Override
>         public ImmutableList<Program> getPrograms() {
>             return ImmutableList.of(Programs.standard());
>         }
>
>         @Override
>         public SqlOperatorTable getOperatorTable() {
>             return new SqlStdOperatorTable();
>         }
>
>         @Override
>         public RelOptCostFactory getCostFactory() {
>             return null;
>         }
>
>         @Override
>         public ImmutableList<RelTraitDef> getTraitDefs() {
>
>             return ImmutableList.of(ConventionTraitDef.INSTANCE,
>                     RelCollationTraitDef.INSTANCE,
>                     RelDistributionTraitDef.INSTANCE
>             );
>         }
>
>         @Override
>         public SqlRexConvertletTable getConvertletTable() {
>             return convertletTable;
>         }
>
>         @Override
>         public Context getContext() {
>             return new ContextImpl();
>         }
>
>         @Override
>         public RelDataTypeSystem getTypeSystem() {
>             return RelDataTypeSystem.DEFAULT;
>         }
>
>         class DataContextImpl implements DataContext {
>
>             public DataContextImpl() {
>             }
>
>             @Override
>             public SchemaPlus getRootSchema() {
>                 return rootSchema;
>             }
>
>             @Override
>             public JavaTypeFactory getTypeFactory() {
>                 throw new UnsupportedOperationException("Not supported
> yet."); //To change body of generated methods, choose Tools | Templates.
>             }
>
>             @Override
>             public QueryProvider getQueryProvider() {
>                 throw new UnsupportedOperationException("Not supported
> yet."); //To change body of generated methods, choose Tools | Templates.
>             }
>
>             @Override
>             public Object get(String name) {
>                 throw new UnsupportedOperationException("Not supported
> yet."); //To change body of generated methods, choose Tools | Templates.
>             }
>
>         }
>
>         private class ContextImpl implements Context {
>
>             public ContextImpl() {
>             }
>
>             @Override
>             public <C> C unwrap(Class<C> aClass) {
>                 return null;
>             }
>         }
>     }
>
>     private static class TableImpl implements Table {
>
>         public TableImpl() {
>         }
>
>         @Override
>         public RelDataType getRowType(RelDataTypeFactory typeFactory) {
>             return typeFactory
>                     .builder()
>                     .add("id",
> typeFactory.createSqlType(SqlTypeName.INTEGER))
>                     .add("name",
> typeFactory.createSqlType(SqlTypeName.VARCHAR))
>                     .build();
>         }
>
>         @Override
>         public Statistic getStatistic() {
>             return new StatisticImpl();
>         }
>
>         @Override
>         public Schema.TableType getJdbcTableType() {
>             throw new UnsupportedOperationException("Not supported yet.");
> //To change body of generated methods, choose Tools | Templates.
>         }
>
>         @Override
>         public boolean isRolledUp(String column) {
>             return true;
>         }
>
>         @Override
>         public boolean rolledUpColumnValidInsideAgg(String column, SqlCall
> call, SqlNode parent, CalciteConnectionConfig config) {
>             return false;
>         }
>
>         class StatisticImpl implements Statistic {
>
>             public StatisticImpl() {
>             }
>
>             @Override
>             public Double getRowCount() {
>                 return 15d;
>             }
>
>             @Override
>             public boolean isKey(ImmutableBitSet columns) {
>                 return false;
>             }
>
>             @Override
>             public List<RelReferentialConstraint>
> getReferentialConstraints() {
>                 return Collections.emptyList();
>             }
>
>             @Override
>             public List<RelCollation> getCollations() {
>                 RelCollation c = new RelCollationImpl(
>                         ImmutableList.of(
>                                 new RelFieldCollation(0,
> RelFieldCollation.Direction.ASCENDING),
>                                 new RelFieldCollation(1,
> RelFieldCollation.Direction.ASCENDING)
>                         )) {
>                 };
>                 return Arrays.asList(c);
>             }
>
>             @Override
>             public RelDistribution getDistribution() {
>                 return RelDistributions.ANY;
>             }
>         }
>     }
>
>
>
>
> 2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:
>
> > Yes that is definitely possible. I am too busy to write a code snippet
> > but you should take a look at PlannerTest.
> >
> > > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> > stephane.campinas@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I am trying to use the Volcano planner in order to optimise queries
> > > based on statistics but I am having some issues understanding how to
> > > achieve this, even after looking at the GitHub repository for tests.
> > > A first goal I would like to achieve would be to choose a join
> > > implementation based on its cost.
> > >
> > > For example, a query tree can have several joins, and depending on the
> > > position of the join in the tree, a certain implementation would be
> > > more efficient than another.
> > > Would that be possible? If so, could you share a code snippet?
> > >
> > > Thanks
> > >
> > > --
> > > Campinas Stéphane
> >
> >

-- 


-- Enrico Olivelli

Re: Volcano planner

Posted by Luis Fernando Kauer <lf...@yahoo.com.br.INVALID>.
If you intend to run a query, you should follow the tutorial and try to change the CSV adapter. You can add the table to the schema at runtime using something like:
---------------------------------------------------------------------

Class.forName("org.apache.calcite.jdbc.Driver");
Properties info = new Properties();
info.setProperty("lex", "MYSQL_ANSI");

final Connection connection = DriverManager.getConnection("jdbc:calcite:", info);
CalciteConnection conn = connection.unwrap(CalciteConnection.class);
SchemaPlus root = conn.getRootSchema();
root.add("MYTABLE", new TableImpl());
Statement statement = conn.createStatement();
ResultSet rs = statement.executeQuery("SELECT * FROM MYTABLE");
---------------------------------------------------------------------

But if you only want to parse, validate and optimize the query plan, you can use something like:
---------------------------------------------------------------------
    Table table = new TableImpl();
    final SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    SchemaPlus schema = rootSchema.add("x", new AbstractSchema());
    schema.add("MYTABLE", table);
    List<RelTraitDef> traitDefs = new ArrayList<>();
    traitDefs.add(ConventionTraitDef.INSTANCE);
    traitDefs.add(RelCollationTraitDef.INSTANCE);
    SqlParser.Config parserConfig = 
       SqlParser.configBuilder(SqlParser.Config.DEFAULT)
      .setCaseSensitive(false)
      .build();
      
    final FrameworkConfig config = Frameworks.newConfigBuilder()
        .parserConfig(parserConfig)
        .defaultSchema(schema)
        .traitDefs(traitDefs)
        // define the rules you want to apply

        .programs(Programs.ofRules(Programs.RULE_SET))
        .build();
    Planner planner = Frameworks.getPlanner(config);
    SqlNode n = planner.parse(" SELECT * FROM MYTABLE WHERE ID < 10");
    n = planner.validate(n);
    RelNode root = planner.rel(n).project();
    System.out.println(RelOptUtil.dumpPlan("-- Logical Plan", root, SqlExplainFormat.TEXT, 
        SqlExplainLevel.DIGEST_ATTRIBUTES));
    RelOptCluster cluster = root.getCluster();
    final RelOptPlanner optPlanner = cluster.getPlanner();
    RelTraitSet desiredTraits =
        cluster.traitSet().replace(EnumerableConvention.INSTANCE);
    final RelNode newRoot = optPlanner.changeTraits(root, desiredTraits);
    optPlanner.setRoot(newRoot);
    RelNode bestExp = optPlanner.findBestExp();
    System.out.println(RelOptUtil.dumpPlan("-- Best Plan", bestExp, SqlExplainFormat.TEXT, 
        SqlExplainLevel.DIGEST_ATTRIBUTES));
 ---------------------------------------------------------------------

The main problem was that you were not setting the desired trait to use EnumerableConvention.
You can see that instead of implementing all the interfaces yourself, you should use the available builders and classes.
Also, for implementing Table I think you should extend AbstractTable instead of implementing the Table interface, and you can use Statistics.of instead of implementing the Statistic interface if it is simple:
---------------------------------------------------------------------

  private static class TableImpl extends AbstractTable {
    public TableImpl() {}
    @Override
    public RelDataType getRowType(RelDataTypeFactory typeFactory) {
      Builder builder = new RelDataTypeFactory.Builder(typeFactory);
      return builder.add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
          .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR)).build();
    }
    @Override
    public Statistic getStatistic() {
       return Statistics.of(15D, ImmutableList.<ImmutableBitSet>of(), 
          ImmutableList.of(RelCollations.of(0), RelCollations.of(1)));
    }

  }
---------------------------------------------------------------------
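To connect this back to Stéphane's original question, the cost-based choice can also be sketched outside of Calcite. The following toy example uses hypothetical names (JoinImpl, cheapest), not Calcite's API; it simply picks the join implementation with the lowest estimated cost for the given input sizes, which is the same selection the Volcano planner performs among registered physical alternatives:

```java
import java.util.Arrays;
import java.util.Comparator;

public class JoinCostSketch {
    // Hypothetical stand-in for a physical join operator with a cost model.
    interface JoinImpl {
        String name();
        double cost(double leftRows, double rightRows);
    }

    // Hash join: build one side, probe with the other (roughly linear).
    static final JoinImpl HASH_JOIN = new JoinImpl() {
        public String name() { return "HashJoin"; }
        public double cost(double l, double r) { return l + r; }
    };

    // Nested-loop join: scans the inner input once per outer row.
    static final JoinImpl NESTED_LOOP = new JoinImpl() {
        public String name() { return "NestedLoopJoin"; }
        public double cost(double l, double r) { return l * r; }
    };

    // Keep the cheapest alternative for the given row-count estimates, so
    // the same query tree can end up with different join implementations at
    // different positions, depending on the estimated input sizes there.
    static JoinImpl cheapest(double leftRows, double rightRows) {
        return Arrays.asList(HASH_JOIN, NESTED_LOOP).stream()
                .min(Comparator.comparingDouble(j -> j.cost(leftRows, rightRows)))
                .get();
    }

    public static void main(String[] args) {
        System.out.println(cheapest(1000, 1000).name()); // large inputs
        System.out.println(cheapest(1, 2).name());       // tiny inputs
    }
}
```

In real Calcite the equivalent selection happens inside VolcanoPlanner.findBestExp(): rules register physical alternatives, each RelNode's computeSelfCost(...) supplies the numbers (driven by table statistics such as getRowCount()), and the cheapest plan in each equivalence set wins.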


    Em quarta-feira, 8 de novembro de 2017 12:15:34 BRST, Enrico Olivelli <eo...@gmail.com> escreveu:  
 
 Hi,
I am playing with the planner but I can't get it to work for a very simple
query.
The table is
MYTABLE(id integer, name varchar); its definition is given in the code
snippet below.
The query is "SELECT * FROM MYTABLE".

The error is:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
[rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:

Root: rel#7:Subset#0.NONE.[0, 1].any
Original rel:
LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0
io}, id = 5
  EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows,
16.0 cpu, 0.0 io}, id = 2

Sets:
Set#0, type: RecordType(INTEGER id, VARCHAR name)
    rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
        rel#2:EnumerableTableScan.ENUMERABLE.[[0, 1]].any(table=[default,
MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
        rel#9:EnumerableProject.ENUMERABLE.[[0,
1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
    rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
        rel#5:LogicalProject.NONE.[[0,
1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
rowcount=15.0, cumulative cost={inf}
        rel#8:AbstractConverter.NONE.[0,
1].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,convention=NONE,sort=[0,
1],dist=any), rowcount=15.0, cumulative cost={inf}

Does anybody have a hint for me?
I am using the current master of Calcite (1.15-SNAPSHOT).

Thank you

Enrico


My code is:
  @Test
    public void test() throws Exception {
        Table table = new TableImpl();
        CalciteSchema schema = CalciteSchema.createRootSchema(true, true,
"default");
        schema.add("MYTABLE", table);
        SchemaPlus rootSchema = schema.plus();
        SqlRexConvertletTable convertletTable =
StandardConvertletTable.INSTANCE;
        SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
        FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config,
rootSchema, convertletTable);
        Planner imp = Frameworks.getPlanner(frameworkConfig);
        SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
        sqlNode = imp.validate(sqlNode);
        RelRoot relRoot = imp.rel(sqlNode);
        RelNode project = relRoot.project();
        RelOptPlanner planner = project.getCluster().getPlanner();
        planner.setRoot(project);
        RelNode findBestExp = planner.findBestExp();
        System.out.println("best:" + findBestExp);
    }

    private class FrameworkConfigImpl implements FrameworkConfig {

        private final SqlToRelConverter.Config config;
        private final SchemaPlus rootSchema;
        private final SqlRexConvertletTable convertletTable;

        public FrameworkConfigImpl(SqlToRelConverter.Config config,
SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
            this.config = config;
            this.rootSchema = rootSchema;
            this.convertletTable = convertletTable;
        }

        @Override
        public SqlParser.Config getParserConfig() {
            return SqlParser.Config.DEFAULT;
        }

        @Override
        public SqlToRelConverter.Config getSqlToRelConverterConfig() {
            return config;
        }

        @Override
        public SchemaPlus getDefaultSchema() {
            return rootSchema;
        }

        @Override
        public RexExecutor getExecutor() {
            return new RexExecutorImpl(new DataContextImpl());
        }

        @Override
        public ImmutableList<Program> getPrograms() {
            return ImmutableList.of(Programs.standard());
        }

        @Override
        public SqlOperatorTable getOperatorTable() {
            return new SqlStdOperatorTable();
        }

        @Override
        public RelOptCostFactory getCostFactory() {
            return null;
        }

        @Override
        public ImmutableList<RelTraitDef> getTraitDefs() {

            return ImmutableList.of(ConventionTraitDef.INSTANCE,
                    RelCollationTraitDef.INSTANCE,
                    RelDistributionTraitDef.INSTANCE
            );
        }

        @Override
        public SqlRexConvertletTable getConvertletTable() {
            return convertletTable;
        }

        @Override
        public Context getContext() {
            return new ContextImpl();
        }

        @Override
        public RelDataTypeSystem getTypeSystem() {
            return RelDataTypeSystem.DEFAULT;
        }

        class DataContextImpl implements DataContext {

            public DataContextImpl() {
            }

            @Override
            public SchemaPlus getRootSchema() {
                return rootSchema;
            }

            @Override
            public JavaTypeFactory getTypeFactory() {
                throw new UnsupportedOperationException("Not supported
yet."); //To change body of generated methods, choose Tools | Templates.
            }

            @Override
            public QueryProvider getQueryProvider() {
                throw new UnsupportedOperationException("Not supported
yet."); //To change body of generated methods, choose Tools | Templates.
            }

            @Override
            public Object get(String name) {
                throw new UnsupportedOperationException("Not supported
yet."); //To change body of generated methods, choose Tools | Templates.
            }

        }

        private class ContextImpl implements Context {

            public ContextImpl() {
            }

            @Override
            public <C> C unwrap(Class<C> aClass) {
                return null;
            }
        }
    }

    private static class TableImpl implements Table {

        public TableImpl() {
        }

        @Override
        public RelDataType getRowType(RelDataTypeFactory typeFactory) {
            return typeFactory
                    .builder()
                    .add("id",
typeFactory.createSqlType(SqlTypeName.INTEGER))
                    .add("name",
typeFactory.createSqlType(SqlTypeName.VARCHAR))
                    .build();
        }

        @Override
        public Statistic getStatistic() {
            return new StatisticImpl();
        }

        @Override
        public Schema.TableType getJdbcTableType() {
            throw new UnsupportedOperationException("Not supported yet.");
//To change body of generated methods, choose Tools | Templates.
        }

        @Override
        public boolean isRolledUp(String column) {
            return true;
        }

        @Override
        public boolean rolledUpColumnValidInsideAgg(String column, SqlCall
call, SqlNode parent, CalciteConnectionConfig config) {
            return false;
        }

        class StatisticImpl implements Statistic {

            public StatisticImpl() {
            }

            @Override
            public Double getRowCount() {
                return 15d;
            }

            @Override
            public boolean isKey(ImmutableBitSet columns) {
                return false;
            }

            @Override
            public List<RelReferentialConstraint>
getReferentialConstraints() {
                return Collections.emptyList();
            }

            @Override
            public List<RelCollation> getCollations() {
                RelCollation c = new RelCollationImpl(
                        ImmutableList.of(
                                new RelFieldCollation(0,
RelFieldCollation.Direction.ASCENDING),
                                new RelFieldCollation(1,
RelFieldCollation.Direction.ASCENDING)
                        )) {
                };
                return Arrays.asList(c);
            }

            @Override
            public RelDistribution getDistribution() {
                return RelDistributions.ANY;
            }
        }
    }




2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:

> Yes that is definitely possible. I am too busy to write a code snippet but
> you should take a look at PlannerTest.
>
> > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> stephane.campinas@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to use the Volcano planner in order to optimise queries based
> > on statistics but I am having some issues understanding how to achieve
> > this, even after looking at the GitHub repository for tests.
> > A first goal I would like to achieve would be to choose a join
> > implementation based on its cost.
> >
> > For example, a query tree can have several joins, and depending on the
> > position of the join in the tree, a certain implementation would be more
> > efficient than another.
> > Would that be possible? If so, could you share a code snippet?
> >
> > Thanks
> >
> > --
> > Campinas Stéphane
>
>  

Re: Volcano planner

Posted by Enrico Olivelli <eo...@gmail.com>.
Hi,
I am playing with the planner but I can't get it to work for a very simple
query.
The table is
MYTABLE(id integer, name varchar); its definition is given in the code
snippet below.
The query is "SELECT * FROM MYTABLE".

The error is:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node
[rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:

Root: rel#7:Subset#0.NONE.[0, 1].any
Original rel:
LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0],
name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0
io}, id = 5
  EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any],
table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows,
16.0 cpu, 0.0 io}, id = 2

Sets:
Set#0, type: RecordType(INTEGER id, VARCHAR name)
    rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
        rel#2:EnumerableTableScan.ENUMERABLE.[[0, 1]].any(table=[default,
MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
        rel#9:EnumerableProject.ENUMERABLE.[[0,
1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
    rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
        rel#5:LogicalProject.NONE.[[0,
1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1),
rowcount=15.0, cumulative cost={inf}
        rel#8:AbstractConverter.NONE.[0,
1].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,convention=NONE,sort=[0,
1],dist=any), rowcount=15.0, cumulative cost={inf}

Does anybody have a hint for me?
I am using the current master of Calcite (1.15-SNAPSHOT).

Thank you

Enrico


My code is:
   @Test
    public void test() throws Exception {
        Table table = new TableImpl();
        CalciteSchema schema = CalciteSchema.createRootSchema(true, true,
"default");
        schema.add("MYTABLE", table);
        SchemaPlus rootSchema = schema.plus();
        SqlRexConvertletTable convertletTable =
StandardConvertletTable.INSTANCE;
        SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
        FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config,
rootSchema, convertletTable);
        Planner imp = Frameworks.getPlanner(frameworkConfig);
        SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
        sqlNode = imp.validate(sqlNode);
        RelRoot relRoot = imp.rel(sqlNode);
        RelNode project = relRoot.project();
        RelOptPlanner planner = project.getCluster().getPlanner();
        planner.setRoot(project);
        RelNode findBestExp = planner.findBestExp();
        System.out.println("best:" + findBestExp);
    }

    private class FrameworkConfigImpl implements FrameworkConfig {

        private final SqlToRelConverter.Config config;
        private final SchemaPlus rootSchema;
        private final SqlRexConvertletTable convertletTable;

        public FrameworkConfigImpl(SqlToRelConverter.Config config,
SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
            this.config = config;
            this.rootSchema = rootSchema;
            this.convertletTable = convertletTable;
        }

        @Override
        public SqlParser.Config getParserConfig() {
            return SqlParser.Config.DEFAULT;
        }

        @Override
        public SqlToRelConverter.Config getSqlToRelConverterConfig() {
            return config;
        }

        @Override
        public SchemaPlus getDefaultSchema() {
            return rootSchema;
        }

        @Override
        public RexExecutor getExecutor() {
            return new RexExecutorImpl(new DataContextImpl());
        }

        @Override
        public ImmutableList<Program> getPrograms() {
            return ImmutableList.of(Programs.standard());
        }

        @Override
        public SqlOperatorTable getOperatorTable() {
            // use the singleton; constructing the table directly would
            // return one with no operators registered
            return SqlStdOperatorTable.instance();
        }

        @Override
        public RelOptCostFactory getCostFactory() {
            return null;
        }

        @Override
        public ImmutableList<RelTraitDef> getTraitDefs() {

            return ImmutableList.of(ConventionTraitDef.INSTANCE,
                    RelCollationTraitDef.INSTANCE,
                    RelDistributionTraitDef.INSTANCE
            );
        }

        @Override
        public SqlRexConvertletTable getConvertletTable() {
            return convertletTable;
        }

        @Override
        public Context getContext() {
            return new ContextImpl();
        }

        @Override
        public RelDataTypeSystem getTypeSystem() {
            return RelDataTypeSystem.DEFAULT;
        }

        class DataContextImpl implements DataContext {

            public DataContextImpl() {
            }

            @Override
            public SchemaPlus getRootSchema() {
                return rootSchema;
            }

            @Override
            public JavaTypeFactory getTypeFactory() {
                throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
            }

            @Override
            public QueryProvider getQueryProvider() {
                throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
            }

            @Override
            public Object get(String name) {
                throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
            }

        }

        private class ContextImpl implements Context {

            public ContextImpl() {
            }

            @Override
            public <C> C unwrap(Class<C> aClass) {
                return null;
            }
        }
    }

    private static class TableImpl implements Table {

        public TableImpl() {
        }

        @Override
        public RelDataType getRowType(RelDataTypeFactory typeFactory) {
            return typeFactory
                    .builder()
                    .add("id",
typeFactory.createSqlType(SqlTypeName.INTEGER))
                    .add("name",
typeFactory.createSqlType(SqlTypeName.VARCHAR))
                    .build();
        }

        @Override
        public Statistic getStatistic() {
            return new StatisticImpl();
        }

        @Override
        public Schema.TableType getJdbcTableType() {
            throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
        }

        @Override
        public boolean isRolledUp(String column) {
            return true;
        }

        @Override
        public boolean rolledUpColumnValidInsideAgg(String column, SqlCall
call, SqlNode parent, CalciteConnectionConfig config) {
            return false;
        }

        class StatisticImpl implements Statistic {

            public StatisticImpl() {
            }

            @Override
            public Double getRowCount() {
                return 15d;
            }

            @Override
            public boolean isKey(ImmutableBitSet columns) {
                return false;
            }

            @Override
            public List<RelReferentialConstraint>
getReferentialConstraints() {
                return Collections.emptyList();
            }

            @Override
            public List<RelCollation> getCollations() {
                RelCollation c = new RelCollationImpl(
                        ImmutableList.of(
                                new RelFieldCollation(0,
RelFieldCollation.Direction.ASCENDING),
                                new RelFieldCollation(1,
RelFieldCollation.Direction.ASCENDING)
                        )) {
                };
                return Arrays.asList(c);
            }

            @Override
            public RelDistribution getDistribution() {
                return RelDistributions.ANY;
            }
        }
    }
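The CannotPlanException reported above typically means the planner's root was registered with the default NONE convention and no rule converts it to ENUMERABLE, which is exactly what the dump shows (rel#7:Subset#0.NONE with best=null). A hedged sketch of one common fix, written against the Calcite 1.15 API as a fragment rather than a runnable program (it would replace the last lines of the test() method above): request the Enumerable convention on the root before optimising.

```java
// Fragment, not a runnable program: assumes the variables relRoot etc.
// from the test() method above and the Calcite 1.15 API.
// Convert the root's trait set from NONE to ENUMERABLE so the planner
// has an implementable goal, then optimise.
RelNode project = relRoot.project();
RelOptPlanner planner = project.getCluster().getPlanner();
RelTraitSet desired = project.getTraitSet()
        .replace(EnumerableConvention.INSTANCE)
        .simplify();
planner.setRoot(planner.changeTraits(project, desired));
RelNode findBestExp = planner.findBestExp();
```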




2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:

> Yes that is definitely possible. I am too busy to write a code snippet but
> you should take a look at PlannerTest.
>
> > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> stephane.campinas@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to use the Volcano planner in order to optimise queries based
> > on statistics but I am having some issues understanding how to achieve
> > this, even after looking at the Github repository for tests.
> > A first goal I would like to achieve would be to choose a join
> > implementation based on its cost.
> >
> > For example, a query tree can have several joins, and depending on the
> > position of the join in the tree, a certain implementation would be more
> > efficient than another.
> > Would that be possible? If so, could you share a code snippet?
> >
> > Thanks
> >
> > --
> > Campinas Stéphane
>
>

Re: Volcano planner

Posted by Julian Hyde <jh...@apache.org>.
Yes that is definitely possible. I am too busy to write a code snippet but you should take a look at PlannerTest.

> On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <st...@gmail.com> wrote:
> 
> Hi,
> 
> I am trying to use the Volcano planner in order to optimise queries based
> on statistics but I am having some issues understanding how to achieve
> this, even after looking at the Github repository for tests.
> A first goal I would like to achieve would be to choose a join
> implementation based on its cost.
> 
> For example, a query tree can have several joins, and depending on the
> position of the join in the tree, a certain implementation would be more
> efficient than another.
> Would that be possible? If so, could you share a code snippet?
> 
> Thanks
> 
> -- 
> Campinas Stéphane