Posted to commits@kyuubi.apache.org by ch...@apache.org on 2023/03/26 10:11:58 UTC

[kyuubi] branch master updated: [KYUUBI #4608] [DOCS] Rename Flink Table Store to Apache Paimon (Incubating) in docs `Connectors for Trino SQL Query Engine`

This is an automated email from the ASF dual-hosted git repository.

chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git


The following commit(s) were added to refs/heads/master by this push:
     new 8a526ced4 [KYUUBI #4608] [DOCS] Rename Flink Table Store to Apache Paimon (Incubating)  in docs `Connectors for Trino SQL Query Engine`
8a526ced4 is described below

commit 8a526ced4ab3b02d84bc532c43aac1426fc2f5fa
Author: guanhua.lgh <gu...@alibaba-inc.com>
AuthorDate: Sun Mar 26 18:11:48 2023 +0800

    [KYUUBI #4608] [DOCS] Rename Flink Table Store to Apache Paimon (Incubating)  in docs `Connectors for Trino SQL Query Engine`
    
    ### _Why are the changes needed?_
    
    To update docs.
    This PR is to rename Flink Table Store to Apache Paimon (Incubating) in docs under `Connectors for Trino SQL Query Engine`
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
    
    - [ ] Add screenshots for manual tests if appropriate
    
    - [ ] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request
    
    Closes #4608 from huage1994/docs2.
    
    Closes #4608
    
    61926b7c0 [guanhua.lgh] [DOCS] Rename Flink Table Store to Apache Paimon (Incubating)  in docs under `Connectors for Trino SQL Query Engine`
    
    Authored-by: guanhua.lgh <gu...@alibaba-inc.com>
    Signed-off-by: Cheng Pan <ch...@apache.org>
---
 docs/connector/trino/flink_table_store.rst | 94 ------------------------------
 docs/connector/trino/index.rst             |  2 +-
 docs/connector/trino/paimon.rst            | 92 +++++++++++++++++++++++++++++
 3 files changed, 93 insertions(+), 95 deletions(-)

diff --git a/docs/connector/trino/flink_table_store.rst b/docs/connector/trino/flink_table_store.rst
deleted file mode 100644
index 8dd0c4061..000000000
--- a/docs/connector/trino/flink_table_store.rst
+++ /dev/null
@@ -1,94 +0,0 @@
-.. Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-.. Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-`Flink Table Store`_
-==========
-
-Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
-supporting high-speed data ingestion and timely data query.
-
-.. tip::
-   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
-   For the knowledge about Flink Table Store not mentioned in this article,
-   you can obtain it from its `Official Documentation`_.
-
-By using kyuubi, we can run SQL queries towards Flink Table Store which is more
-convenient, easy to understand, and easy to expand than directly using
-trino to manipulate Flink Table Store.
-
-Flink Table Store Integration
--------------------
-
-To enable the integration of kyuubi trino sql engine and Flink Table Store, you need to:
-
-- Referencing the Flink Table Store :ref:`dependencies<trino-flink-table-store-deps>`
-- Setting the trino extension and catalog :ref:`configurations<trino-flink-table-store-conf>`
-
-.. _trino-flink-table-store-deps:
-
-Dependencies
-************
-
-The **classpath** of kyuubi trino sql engine with Flink Table Store supported consists of
-
-1. kyuubi-trino-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
-2. a copy of trino distribution
-3. flink-table-store-trino-<version>.jar (example: flink-table-store-trino-0.2.jar), which code can be found in the `Source Code`_
-4. flink-shaded-hadoop-2-uber-2.8.3-10.0.jar, which code can be found in the `Pre-bundled Hadoop 2.8.3`_
-
-In order to make the Flink Table Store packages visible for the runtime classpath of engines, we can use these methods:
-
-1. Build the flink-table-store-trino-<version>.jar by reference to `Flink Table Store Trino README`_
-2. Put the flink-table-store-trino-<version>.jar and flink-shaded-hadoop-2-uber-2.8.3-10.0.jar packages into ``$TRINO_SERVER_HOME/plugin/tablestore`` directly
-
-.. warning::
-   Please mind the compatibility of different Flink Table Store and Trino versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
-
-.. _trino-flink-table-store-conf:
-
-Configurations
-**************
-
-To activate functionality of Flink Table Store, we can set the following configurations:
-
-Catalogs are registered by creating a catalog properties file in the $TRINO_SERVER_HOME/etc/catalog directory.
-For example, create $TRINO_SERVER_HOME/etc/catalog/tablestore.properties with the following contents to mount the tablestore connector as the tablestore catalog:
-
-.. code-block:: properties
-
-   connector.name=tablestore
-   warehouse=file:///tmp/warehouse
-
-Flink Table Store Operations
-------------------
-
-Flink Table Store supports reading table store tables through Trino.
-A common scenario is to write data with Flink and read data with Trino.
-You can follow this document `Flink Table Store Quick Start`_  to write data to a table store table
-and then use kyuubi trino sql engine to query the table with the following SQL ``SELECT`` statement.
-
-
-.. code-block:: sql
-
-   SELECT * FROM tablestore.default.t1
-
-
-.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Flink Table Store Quick Start: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/try-table-store/quick-start/
-.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
-.. _Source Code: https://github.com/JingsongLi/flink-table-store-trino
-.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/
-.. _Pre-bundled Hadoop 2.8.3: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
-.. _Flink Table Store Trino README: https://github.com/JingsongLi/flink-table-store-trino#readme
diff --git a/docs/connector/trino/index.rst b/docs/connector/trino/index.rst
index f5d651d45..290966a5c 100644
--- a/docs/connector/trino/index.rst
+++ b/docs/connector/trino/index.rst
@@ -19,6 +19,6 @@ Connectors For Trino SQL Engine
 .. toctree::
     :maxdepth: 2
 
-    flink_table_store
+    paimon
     hudi
     iceberg
\ No newline at end of file
diff --git a/docs/connector/trino/paimon.rst b/docs/connector/trino/paimon.rst
new file mode 100644
index 000000000..5ac892234
--- /dev/null
+++ b/docs/connector/trino/paimon.rst
@@ -0,0 +1,92 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+`Apache Paimon (Incubating)`_
+=============================
+
+Apache Paimon (Incubating) is a streaming data lake platform that supports high-speed data ingestion, change data tracking, and efficient real-time analytics.
+
+.. tip::
+   This article assumes that you have mastered the basic knowledge and operation of `Apache Paimon (Incubating)`_.
+   For the knowledge about Apache Paimon (Incubating) not mentioned in this article,
+   you can obtain it from its `Official Documentation`_.
+
+By using Kyuubi, we can run SQL queries against Apache Paimon (Incubating) in a way that is more
+convenient, easier to understand, and easier to extend than using
+Trino to manipulate Apache Paimon (Incubating) directly.
+
+Apache Paimon (Incubating) Integration
+--------------------------------------
+
+To enable the integration of the Kyuubi Trino SQL engine and Apache Paimon (Incubating), you need to:
+
+- Reference the Apache Paimon (Incubating) :ref:`dependencies<trino-paimon-deps>`
+- Set the Trino extension and catalog :ref:`configurations<trino-paimon-conf>`
+
+.. _trino-paimon-deps:
+
+Dependencies
+************
+
+The **classpath** of the Kyuubi Trino SQL engine with Apache Paimon (Incubating) support consists of:
+
+1. kyuubi-trino-sql-engine-\ |release|\ _2.12.jar, the engine jar deployed with Kyuubi distributions
+2. a copy of trino distribution
+3. paimon-trino-<version>.jar (example: paimon-trino-0.2.jar), whose source code can be found in the `Source Code`_ repository
+4. flink-shaded-hadoop-2-uber-<version>.jar, which can be downloaded from the `Pre-bundled Hadoop`_ page
+
+To make the Apache Paimon (Incubating) packages visible on the runtime classpath of the engine, you need to:
+
+1. Build the paimon-trino-<version>.jar by referring to the `Apache Paimon (Incubating) Trino README`_
+2. Put the paimon-trino-<version>.jar and flink-shaded-hadoop-2-uber-<version>.jar packages into ``$TRINO_SERVER_HOME/plugin/tablestore`` directly
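+
+As a concrete sketch of step 2, assuming the two jars have already been built or downloaded into the current directory (the file names below keep the same ``<version>`` placeholders as above), the copy step could look like:
+
+.. code-block:: shell
+
+   mkdir -p $TRINO_SERVER_HOME/plugin/tablestore
+   cp paimon-trino-<version>.jar flink-shaded-hadoop-2-uber-<version>.jar \
+      $TRINO_SERVER_HOME/plugin/tablestore/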
+
+.. warning::
+   Please mind the compatibility of different Apache Paimon (Incubating) and Trino versions, which can be confirmed on the page of `Apache Paimon (Incubating) multi engine support`_.
+
+.. _trino-paimon-conf:
+
+Configurations
+**************
+
+To activate the functionality of Apache Paimon (Incubating), set the following configurations:
+
+Catalogs are registered by creating a catalog properties file in the ``$TRINO_SERVER_HOME/etc/catalog`` directory.
+For example, create ``$TRINO_SERVER_HOME/etc/catalog/tablestore.properties`` with the following contents to mount the tablestore connector as the tablestore catalog:
+
+.. code-block:: properties
+
+   connector.name=tablestore
+   warehouse=file:///tmp/warehouse
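+
+The ``warehouse`` path is not limited to the local filesystem; assuming the bundled Hadoop jar from the dependencies section is in place, an HDFS location should work as well (the namenode address below is a placeholder):
+
+.. code-block:: properties
+
+   connector.name=tablestore
+   warehouse=hdfs://namenode:8020/paimon/warehouse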
+
+Apache Paimon (Incubating) Operations
+-------------------------------------
+
+Apache Paimon (Incubating) tables can be read through Trino.
+A common scenario is to write data with Spark or Flink and read it with Trino.
+You can follow the document `Apache Paimon (Incubating) Engines Flink Quick Start`_ to write data to a Paimon table,
+and then use the Kyuubi Trino SQL engine to query the table with the following SQL ``SELECT`` statement.
+
+
+.. code-block:: sql
+
+   SELECT * FROM tablestore.default.t1
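+
+Other read-only statements work the same way through the same catalog; for example (reusing the ``t1`` table above, actual output depends on the data you wrote):
+
+.. code-block:: sql
+
+   -- count the rows written from Flink
+   SELECT COUNT(*) FROM tablestore.default.t1;
+   -- list the tables registered in the default schema
+   SHOW TABLES FROM tablestore.default;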
+
+.. _Apache Paimon (Incubating): https://paimon.apache.org/
+.. _Apache Paimon (Incubating) multi engine support: https://paimon.apache.org/docs/master/engines/overview/
+.. _Apache Paimon (Incubating) Engines Flink Quick Start: https://paimon.apache.org/docs/master/engines/flink/#quick-start
+.. _Official Documentation: https://paimon.apache.org/docs/master/
+.. _Source Code: https://github.com/JingsongLi/paimon-trino
+.. _Pre-bundled Hadoop: https://flink.apache.org/downloads/#additional-components
+.. _Apache Paimon (Incubating) Trino README: https://github.com/JingsongLi/paimon-trino#readme