Posted to dev@flink.apache.org by "Bowen Li (Jira)" <ji...@apache.org> on 2019/12/12 00:48:00 UTC

[jira] [Created] (FLINK-15206) support dynamic catalog table for unified SQL job

Bowen Li created FLINK-15206:
--------------------------------

             Summary: support dynamic catalog table for unified SQL job
                 Key: FLINK-15206
                 URL: https://issues.apache.org/jira/browse/FLINK-15206
             Project: Flink
          Issue Type: New Feature
          Components: Table SQL / API
            Reporter: Bowen Li
            Assignee: Bowen Li
             Fix For: 1.11.0


Currently, if users have both an online and an offline job with the same business logic in Flink SQL, their codebase is still not unified. They have to keep two SQL statements whose only difference is the source (and/or sink) table. E.g.


{code:sql}
// online job
insert into x select * from kafka_table;

// offline backfill job
insert into x select * from hive_table;
{code}

We would like to introduce a "dynamic catalog table". A dynamic catalog table acts like a view: it is an abstraction over the actual source tables behind it, together with configuration that selects among them. When a job is executed, the dynamic catalog table resolves, depending on the configuration, to an actual source table.
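
One possible shape for such a definition (purely illustrative; this ticket does not propose concrete syntax, so both the statement and the property keys below are assumptions):

{code:sql}
-- Hypothetical DDL sketch: neither CREATE DYNAMIC TABLE nor these property
-- keys exist in Flink; they only illustrate the idea of mapping execution
-- modes to concrete catalog tables behind one logical name.
CREATE DYNAMIC TABLE my_source_dynamic_table WITH (
  'mode.streaming.table' = 'my_kafka_catalog.my_db.kafka_table',
  'mode.batch.table'     = 'my_hive_catalog.my_db.hive_table'
);
{code}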

A use case for this is the example given above - users want to keep just one SQL statement, {{insert into x select * from my_source_dynamic_table;}}; when executed in streaming mode, {{my_source_dynamic_table}} should point to a Kafka catalog table, and in batch mode, {{my_source_dynamic_table}} should point to a Hive catalog table.
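
Under that model, the job definition collapses to a single statement, and only the execution mode changes between runs. A minimal usage sketch (the {{SET 'execution.runtime-mode'}} switch is borrowed from later Flink SQL client versions and is an assumption here; the dynamic-table resolution itself is the hypothetical part this ticket proposes):

{code:sql}
-- Streaming run: my_source_dynamic_table would resolve to the Kafka table.
SET 'execution.runtime-mode' = 'streaming';
insert into x select * from my_source_dynamic_table;

-- Batch backfill run: the same statement, now resolving to the Hive table.
SET 'execution.runtime-mode' = 'batch';
insert into x select * from my_source_dynamic_table;
{code}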




--
This message was sent by Atlassian Jira
(v8.3.4#803005)