Posted to issues@flink.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2018/10/19 23:38:00 UTC

[jira] [Created] (FLINK-10618) Introduce catalog for Flink tables

Xuefu Zhang created FLINK-10618:
-----------------------------------

             Summary: Introduce catalog for Flink tables
                 Key: FLINK-10618
                 URL: https://issues.apache.org/jira/browse/FLINK-10618
             Project: Flink
          Issue Type: New Feature
          Components: SQL Client
    Affects Versions: 1.6.1
            Reporter: Xuefu Zhang
            Assignee: Xuefu Zhang


Besides meta objects such as tables that may come from an {{ExternalCatalog}}, Flink also deals with tables/views/functions that are created on the fly (in memory) or specified in a configuration file. These objects don't belong to any {{ExternalCatalog}}; Flink either holds them in memory, where they are non-persistent, or recreates them from a file, which is a big pain for the user. These objects are known only to Flink, yet Flink manages them poorly.

Since these are typical objects in a database catalog, it's natural to have a catalog that manages them. The interface will be similar to {{ExternalCatalog}}, which contains meta objects that are not managed by Flink. There are several possible implementations of this Flink-internal catalog interface: memory, file, external registry (such as the Confluent Schema Registry or the Hive metastore), relational database, etc.

The initial functionality, as well as the catalog hierarchy, could be very simple. The basic functionality of the catalog will mostly be creating, altering, and dropping tables, views, functions, etc. Obviously, this can evolve over time.

We plan to provide memory, file, and Hive metastore implementations, which will be plugged in at the SQL Client layer.
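As a rough sketch of what the create/alter/drop surface and the in-memory backend could look like (all names below are hypothetical illustrations, not actual Flink API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical catalog interface; the real one would likely mirror
// ExternalCatalog and also cover views and functions.
interface TableCatalog {
    void createTable(String name, String schema);
    void alterTable(String name, String schema);
    void dropTable(String name);
    List<String> listTables();
}

// Minimal in-memory backend: one of the implementations mentioned
// above (memory, file, Hive metastore). Objects live in a map and
// are lost when the process exits, matching the non-persistent
// behavior described in the issue.
class InMemoryCatalog implements TableCatalog {
    private final Map<String, String> tables = new HashMap<>();

    public void createTable(String name, String schema) {
        if (tables.containsKey(name)) {
            throw new IllegalArgumentException("Table already exists: " + name);
        }
        tables.put(name, schema);
    }

    public void alterTable(String name, String schema) {
        if (!tables.containsKey(name)) {
            throw new IllegalArgumentException("No such table: " + name);
        }
        tables.put(name, schema);
    }

    public void dropTable(String name) {
        if (tables.remove(name) == null) {
            throw new IllegalArgumentException("No such table: " + name);
        }
    }

    public List<String> listTables() {
        return new ArrayList<>(tables.keySet());
    }
}

public class CatalogDemo {
    public static void main(String[] args) {
        TableCatalog catalog = new InMemoryCatalog();
        catalog.createTable("orders", "id BIGINT, amount DOUBLE");
        catalog.alterTable("orders", "id BIGINT, amount DOUBLE, ts TIMESTAMP");
        System.out.println(catalog.listTables());
        catalog.dropTable("orders");
        System.out.println(catalog.listTables());
    }
}
```

A file- or metastore-backed variant would implement the same interface but persist the map, which is what would relieve users from recreating objects on every session.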

Please provide your feedback.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)