Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/06/26 13:29:01 UTC
[jira] [Commented] (FLINK-7005) Optimization steps are missing for nested registered tables
[ https://issues.apache.org/jira/browse/FLINK-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16063079#comment-16063079 ]
ASF GitHub Bot commented on FLINK-7005:
---------------------------------------
GitHub user twalthr opened a pull request:
https://github.com/apache/flink/pull/4186
[FLINK-7005] [table] Optimization steps are missing for nested registered tables
This PR fixes the bug described in FLINK-7005 by adding another stage to the optimization phase that expands table scans into their full plans. However, it makes two rules non-optional. Do we want to make those two rules configurable through the Calcite config as well?
This should definitely be part of Flink 1.3.2.
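The idea behind the extra stage can be sketched roughly as follows. This is a minimal, self-contained illustration and not the actual Flink/Calcite code; all class and method names below are hypothetical. A scan of a registered table acts as a placeholder that hides the table's own plan, so before the decorrelation, time-indicator conversion, and normalization steps run, every such scan has to be expanded into the plan it refers to, recursively, so that nested registered tables are optimized too:

```scala
// Hypothetical sketch of the scan-expansion stage (not Flink's real classes).
sealed trait PlanNode
case class Scan(tableName: String) extends PlanNode
case class Project(expr: String, input: PlanNode) extends PlanNode

object ScanExpansion {
  // Registry of plans created by previous tEnv.sql(...) / registerTable calls.
  type Catalog = Map[String, PlanNode]

  // Recursively replace scans of registered tables with their underlying
  // plans, so nested registered tables also pass through all later stages.
  def expand(node: PlanNode, catalog: Catalog): PlanNode = node match {
    case Scan(name) if catalog.contains(name) =>
      expand(catalog(name), catalog) // the referenced plan may itself contain scans
    case Project(expr, input) =>
      Project(expr, expand(input, catalog))
    case other => other // unregistered scans (real sources) stay as they are
  }
}
```

For example, with `"table1"` registered as `Project("1 + 1", Scan("source"))`, expanding `Scan("table1")` yields the full plan `Project("1 + 1", Scan("source"))`, which the subsequent optimization steps can then see in its entirety.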
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/twalthr/flink FLINK-7005
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/4186.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #4186
----
commit 5cfc366bba6899f2def57a92af4856ab3e63c0b0
Author: twalthr <tw...@apache.org>
Date: 2017-06-26T13:22:11Z
[FLINK-7005] [table] Optimization steps are missing for nested registered tables
----
> Optimization steps are missing for nested registered tables
> -----------------------------------------------------------
>
> Key: FLINK-7005
> URL: https://issues.apache.org/jira/browse/FLINK-7005
> Project: Flink
> Issue Type: Bug
> Components: Table API & SQL
> Affects Versions: 1.3.0, 1.3.1
> Reporter: Timo Walther
> Assignee: Timo Walther
>
> Tables that are registered (implicitly or explicitly) do not pass the first three optimization steps:
> - decorrelate
> - convert time indicators
> - normalize the logical plan
> E.g. this has the wrong plan right now:
> {code}
> val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 'bigdec, 'string)
> val table1 = tEnv.sql(s"""SELECT rowtime AS myrt, 1 + 1 FROM $table""") // not optimized
> val table2 = tEnv.sql(s"""SELECT myrt FROM $table1""")
> val results = table2.toAppendStream[Row]
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)