Posted to issues@flink.apache.org by "Yijie Shen (JIRA)" <ji...@apache.org> on 2016/04/13 08:51:25 UTC

[jira] [Commented] (FLINK-3738) Refactor TableEnvironment and TranslationContext

    [ https://issues.apache.org/jira/browse/FLINK-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15238719#comment-15238719 ] 

Yijie Shen commented on FLINK-3738:
-----------------------------------

Hi [~fhueske], your idea makes a lot of sense to me.

Besides what you have mentioned above, I think it's worthwhile to rethink the *validation* of plans generated from our Table API.

Currently, our validation code is scattered mainly across two places: the Table API's RelNode construction and code generation. In terms of the query execution lifecycle, that means it is spread across logical plan generation and physical plan generation. Since Table API and SQL execution share the code generation path, performing validation there is inappropriate.

Therefore, I think we could extract the validation into a single phase, between the Table API call and RelNode construction, similar to the planning procedure of SQL string execution.
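To illustrate the idea, here is a minimal sketch of such a standalone validation phase. All names (Expr, Field, Plus, Validator) are hypothetical and not Flink's actual classes; the point is only that field resolution and type checking happen in one place, before any RelNode is constructed and independent of code generation:

```java
import java.util.Map;

public class Validator {
    // Minimal expression tree: a field reference or a binary "+".
    interface Expr {}
    record Field(String name) implements Expr {}
    record Plus(Expr left, Expr right) implements Expr {}

    // Resolves field types against a schema and rejects ill-typed
    // expressions up front; returns the result type on success.
    static String validate(Expr expr, Map<String, String> schema) {
        if (expr instanceof Field f) {
            String type = schema.get(f.name());
            if (type == null) {
                throw new IllegalArgumentException("Unknown field: " + f.name());
            }
            return type;
        } else if (expr instanceof Plus p) {
            String left = validate(p.left(), schema);
            String right = validate(p.right(), schema);
            if (!left.equals(right)) {
                throw new IllegalArgumentException("Type mismatch in '+'");
            }
            return left;
        }
        throw new IllegalArgumentException("Unsupported expression");
    }

    public static void main(String[] args) {
        Map<String, String> schema = Map.of("a", "INT", "b", "INT", "c", "STRING");
        // Well-typed plan passes validation before RelNode construction.
        System.out.println(validate(new Plus(new Field("a"), new Field("b")), schema));
        // An ill-typed plan is rejected here, not during code generation.
        try {
            validate(new Plus(new Field("a"), new Field("c")), schema);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With a phase like this, the RelNode builder and the code generator could both assume they receive an already-validated tree, mirroring how a SQL string is validated by the parser/validator before planning.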

> Refactor TableEnvironment and TranslationContext
> ------------------------------------------------
>
>                 Key: FLINK-3738
>                 URL: https://issues.apache.org/jira/browse/FLINK-3738
>             Project: Flink
>          Issue Type: Task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Fabian Hueske
>
> Currently the Table API uses a static object called {{TranslationContext}} which holds the Calcite table catalog and a Calcite planner instance. Whenever a {{DataSet}} or {{DataStream}} is converted into a {{Table}} or registered as a {{Table}} on the {{TableEnvironment}}, a new entry is added to the catalog. The first time a {{Table}} is added, a planner instance is created. The planner is used to optimize the query (defined by one or more Table API operations and/or one or more SQL queries) when a {{Table}} is converted into a {{DataSet}} or {{DataStream}}. Since a planner may only be used to optimize a single program, the choice of a single static object is problematic.
> I propose to refactor the {{TableEnvironment}} to take over the responsibility of holding the catalog and the planner instance. 
> - A {{TableEnvironment}} holds a catalog of registered tables and a single planner instance.
> - A {{TableEnvironment}} will only allow translating a single {{Table}} (possibly composed of several Table API operations and SQL queries) into a {{DataSet}} or {{DataStream}}. 
> - A {{TableEnvironment}} is bound to an {{ExecutionEnvironment}} or a {{StreamExecutionEnvironment}}. This is necessary to create data sources or source functions to read external tables or streams.
> - {{DataSet}} and {{DataStream}} need a reference to a {{TableEnvironment}} to be converted into a {{Table}}. This will prohibit implicit casts as currently supported for the DataSet Scala API.
> - A {{Table}} needs a reference to the {{TableEnvironment}} it is bound to. Only tables from the same {{TableEnvironment}} can be processed together.
> - The {{TranslationContext}} will be completely removed.
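The ownership model proposed above could be sketched as follows. This is an illustrative mock, not Flink's actual API: each TableEnvironment owns its own catalog and planner instance, every Table keeps a reference to the environment that created it, and operations across environments are rejected:

```java
import java.util.HashMap;
import java.util.Map;

public class TableEnvironment {
    // Placeholder for a Calcite planner; one instance per environment,
    // replacing the single static TranslationContext.
    static class Planner {}

    private final Map<String, Table> catalog = new HashMap<>();
    private final Planner planner = new Planner();

    public void registerTable(String name, Table table) {
        if (table.tableEnv() != this) {
            throw new IllegalArgumentException(
                "Table belongs to a different TableEnvironment");
        }
        catalog.put(name, table);
    }

    public boolean isRegistered(String name) {
        return catalog.containsKey(name);
    }

    public static class Table {
        private final TableEnvironment tEnv;

        // A Table can only be created with a reference to its environment,
        // which rules out the implicit conversions of the current Scala API.
        public Table(TableEnvironment tEnv) {
            this.tEnv = tEnv;
        }

        public TableEnvironment tableEnv() {
            return tEnv;
        }

        // Only tables bound to the same environment may be combined.
        public Table unionAll(Table other) {
            if (other.tEnv != this.tEnv) {
                throw new IllegalArgumentException(
                    "Tables are bound to different TableEnvironments");
            }
            return new Table(tEnv);
        }
    }
}
```

Because the catalog and planner are instance fields, two environments can be created in the same JVM without interfering, which is exactly what the static TranslationContext prevents today.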



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)