Posted to issues@flink.apache.org by "godfrey he (JIRA)" <ji...@apache.org> on 2019/07/03 14:25:00 UTC

[jira] [Created] (FLINK-13081) Supports explain and execute DAG plan

godfrey he created FLINK-13081:
----------------------------------

             Summary: Supports explain and execute DAG plan
                 Key: FLINK-13081
                 URL: https://issues.apache.org/jira/browse/FLINK-13081
             Project: Flink
          Issue Type: New Feature
          Components: Table SQL / Planner
            Reporter: godfrey he
            Assignee: godfrey he


In the flink planner, a query is optimized when {{TableEnvironment#insertInto}} or {{TableEnvironment#sqlUpdate}} is called. However, if a job has multiple sinks (meaning {{insertInto}} or {{sqlUpdate}} is called multiple times), the final job contains several independent sub-graphs, and in most cases such a multi-sink job does duplicate computation. In the blink planner, multi-sink queries are therefore optimized together to avoid that duplicate computation: a query is not optimized inside {{insertInto}} or {{sqlUpdate}}; instead, all queries are optimized together just before execution (see the sketch below).
This issue aims to support the above case.
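
A minimal sketch of the multi-sink scenario (the table, sink, and environment names are illustrative, not part of this proposal):
{code:java}
// Two INSERTs read the same intermediate view. With per-statement optimization
// (the current flink planner behavior) "source_view" is planned and computed twice;
// optimizing both statements together lets the planner share that sub-graph.
tEnv.sqlUpdate("INSERT INTO sink_a SELECT id, COUNT(*)   FROM source_view GROUP BY id");
tEnv.sqlUpdate("INSERT INTO sink_b SELECT id, SUM(price) FROM source_view GROUP BY id");
{code}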

Two methods will be added to {{TableEnvironment}}:
{code:java}
// Explains the plan of a multi-sink job
String explain(boolean extended);

// Triggers the program execution.
// In the blink planner, the buffered queries are optimized together in this method.
JobExecutionResult execute(String jobName) throws Exception;
{code}
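
A hedged usage sketch of the two proposed methods, assuming the two {{sqlUpdate}} calls above have already been issued (the job name is illustrative):
{code:java}
// Inspect the jointly optimized plan for all buffered sinks before running it.
String plan = tEnv.explain(true);
System.out.println(plan);

// In the blink planner this is where the buffered queries are optimized
// together, translated into a single DAG, and submitted for execution.
JobExecutionResult result = tEnv.execute("multi-sink-job");
{code}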

To keep the behavior of the flink planner unchanged, an {{isLazyOptMode}} field is added to {{EnvironmentSettings}}, which tells the table environment whether to optimize a query immediately in the {{insertInto}}/{{sqlUpdate}} methods (isLazyOptMode=false, flink planner) or in the {{execute}} method (isLazyOptMode=true, blink planner).
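
A rough sketch of how the table environment could branch on this setting ({{isLazyOptMode}} is only proposed here; the buffer and helper methods are hypothetical):
{code:java}
public void sqlUpdate(String stmt) {
    Operation op = parse(stmt);              // hypothetical parse step
    bufferedOperations.add(op);
    if (!settings.isLazyOptMode()) {
        // flink planner: preserve the old behavior and optimize right away
        optimizeAndTranslate(bufferedOperations);
        bufferedOperations.clear();
    }
    // blink planner (isLazyOptMode = true): optimization is deferred to execute()
}
{code}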





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)