Posted to commits@flink.apache.org by ja...@apache.org on 2022/09/19 14:09:45 UTC

[flink] 03/04: [FLINK-29148][docs][sql-gateway][hive] Add SQL Gateway docs

This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 102dd0225d56b18e839d4f3e24b34975446ff61f
Author: Shengkai <10...@qq.com>
AuthorDate: Wed Aug 31 11:52:11 2022 +0800

    [FLINK-29148][docs][sql-gateway][hive] Add SQL Gateway docs
    
    This closes #20719
---
 .../docs/dev/table/hiveCompatibility/_index.md     |  23 ++
 .../dev/table/hiveCompatibility/hiveserver2.md     | 317 +++++++++++++++++++++
 .../docs/dev/table/sql-gateway/_index.md           |  23 ++
 .../docs/dev/table/sql-gateway/hiveserver2.md      |  34 +++
 .../docs/dev/table/sql-gateway/overview.md         | 236 +++++++++++++++
 docs/content.zh/docs/dev/table/sql-gateway/rest.md | 119 ++++++++
 .../docs/dev/table/hiveCompatibility/_index.md     |  23 ++
 .../dev/table/hiveCompatibility/hiveserver2.md     | 317 +++++++++++++++++++++
 docs/content/docs/dev/table/overview.md            |   1 +
 docs/content/docs/dev/table/sql-gateway/_index.md  |  23 ++
 .../docs/dev/table/sql-gateway/hiveserver2.md      |  34 +++
 .../content/docs/dev/table/sql-gateway/overview.md | 236 +++++++++++++++
 docs/content/docs/dev/table/sql-gateway/rest.md    | 119 ++++++++
 docs/static/fig/apache_superset.png                | Bin 0 -> 125404 bytes
 docs/static/fig/dbeaver.png                        | Bin 0 -> 1638252 bytes
 docs/static/fig/sql-gateway-architecture.png       | Bin 0 -> 218739 bytes
 docs/static/fig/sql-gateway-interactions.png       | Bin 0 -> 71423 bytes
 .../hive/HiveServer2EndpointConfigOptions.java     |   2 +-
 18 files changed, 1506 insertions(+), 1 deletion(-)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
new file mode 100644
index 00000000000..3dee17410da
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
@@ -0,0 +1,23 @@
+---
+title: Hive Compatibility
+bookCollapseSection: true
+weight: 94
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
new file mode 100644
index 00000000000..1c02ad8ca27
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
@@ -0,0 +1,317 @@
+---
+title: HiveServer2 Endpoint
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveserver2.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# HiveServer2 Endpoint
+
+[Flink SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) supports deploying as a HiveServer2 Endpoint, which is compatible with the [HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
+wire protocol and allows users to interact with the Flink SQL Gateway (e.g. submit Hive SQL) through existing Hive clients, such as Hive JDBC, Beeline, DBeaver, and Apache Superset.
+
+Setting Up
+----------------
+Before using the SQL Gateway with the HiveServer2 Endpoint, please prepare the required [dependencies]({{< ref "docs/connectors/table/hive/overview#dependencies" >}}).
+
+### Configure HiveServer2 Endpoint
+
+The HiveServer2 Endpoint is not the default endpoint for the SQL Gateway. You can configure the SQL Gateway to use the HiveServer2 Endpoint by calling
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 -Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=<path to hive conf>
+```
+
+or add the following configuration into `conf/flink-conf.yaml` (please replace `<path to hive conf>` with the path to your Hive conf directory).
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir: <path to hive conf>
+```
+
+### Connecting to HiveServer2
+
+After starting the SQL Gateway, you are able to submit SQL with Apache Hive Beeline.
+
+```bash
+$ ./beeline
+SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in [jar:file:/Users/ohmeatball/Work/hive-related/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/3.2.1_1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Beeline version 2.3.9 by Apache Hive
+beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
+Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
+Enter username for jdbc:hive2://localhost:10000/default:
+Enter password for jdbc:hive2://localhost:10000/default:
+Connected to: Apache Flink (version 1.16)
+Driver: Hive JDBC (version 2.3.9)
+Transaction isolation: TRANSACTION_REPEATABLE_READ
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Source (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Sink (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> INSERT INTO Sink SELECT * FROM Source; 
++-----------------------------------+
+|              job id               |
++-----------------------------------+
+| 55ff290b57829998ea6e9acc240a0676  |
++-----------------------------------+
+1 row selected (2.427 seconds)
+```
+
+Endpoint Options
+----------------
+
+Below are the options supported when creating a HiveServer2 Endpoint instance with a YAML file or DDL.
+
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-center" style="width: 8%">Required</th>
+            <th class="text-left" style="width: 7%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.endpoint.type</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">"rest"</td>
+            <td>List&lt;String&gt;</td>
+            <td>Specify which endpoint to use, here should be 'hiveserver2'.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>URI to your Hive conf dir containing hive-site.xml. The URI needs to be supported by Hadoop FileSystem. If the URI is relative, i.e. without a scheme, local file system is assumed. If the option is not specified, hive-site.xml is searched in class path.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.default-database</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"default"</td>
+            <td>String</td>
+            <td>The default database to use when the catalog is set as the current catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.module.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive module.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.exponential.backoff.slot.length</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">100 ms</td>
+            <td>Duration</td>
            <td>Binary exponential backoff slot time for Thrift clients during login to HiveServer2, for retries until hitting the Thrift client timeout.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.host</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
            <td>The server address of the HiveServer2 host to be used for communication. Default is empty, which means binding to localhost. This is only necessary if the host has multiple network addresses.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.login.timeout</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">20 s</td>
+            <td>Duration</td>
+            <td>Timeout for Thrift clients during login to HiveServer2</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.max.message.size</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">104857600</td>
+            <td>Long</td>
+            <td>Maximum message size in bytes a HS2 server will accept.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.port</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">10000</td>
+            <td>Integer</td>
+            <td>The port of the HiveServer2 endpoint.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.keepalive-time</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">1 min</td>
+            <td>Duration</td>
+            <td>Keepalive time for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.max</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">512</td>
+            <td>Integer</td>
+            <td>The maximum number of Thrift worker threads</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.min</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">5</td>
+            <td>Integer</td>
+            <td>The minimum number of Thrift worker threads</td>
+        </tr>
+    </tbody>
+</table>
+
+Features
+----------------
+
+### Be Compatible with HiveServer2 Protocol
+
+The SQL Gateway with the HiveServer2 Endpoint aims to provide the same experience as HiveServer2.
+When users connect, the HiveServer2 Endpoint
+- creates the Hive Catalog as the default catalog;
+- switches to the Hive dialect;
+- sets the option `execution.runtime-mode` to `BATCH` and the option `table.dml-sync` to `true`, so that SQL is
+  executed in batch mode and results are returned only after execution finishes.
+
+With these defaults in place, you can submit Hive SQL in Hive style while executing it in the Flink environment.
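+
+For example, with the gateway from the setup above listening on `localhost:10000`, a Hive-style query can also be
+submitted non-interactively through Beeline. A sketch, reusing the `Source` table created in the session above:
+
+```bash
+$ ./beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" \
+    -e "SELECT COUNT(*) FROM Source;"
+```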
+
+### Clients
+
+The HiveServer2 Endpoint is compatible with the HiveServer2 wire protocol. Therefore, tools that work with Hive SQL also work with
+the SQL Gateway with the HiveServer2 Endpoint. Currently, Hive JDBC, Hive Beeline, DBeaver, and Apache Superset, among others, have been tested to connect to the
+Flink SQL Gateway with the HiveServer2 Endpoint and submit SQL.
+
+#### Hive JDBC
+
+The SQL Gateway is compatible with HiveServer2, so you can write a program that uses Hive JDBC to connect to it. To build the program, add the
+following dependency to your project's pom.xml.
+
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>${hive.version}</version>
+</dependency>
+```
+
+After reimporting the dependencies, you can use the following program to connect and list tables in the Hive Catalog.
+
+```java
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class JdbcConnection {
+    public static void main(String[] args) throws Exception {
+        try (
+                // Please replace the JDBC URI with your actual host, port and database.
+                Connection connection = DriverManager.getConnection("jdbc:hive2://{host}:{port}/{database};auth=noSasl"); 
+                Statement statement = connection.createStatement()) {
+            statement.execute("SHOW TABLES");
+            ResultSet resultSet = statement.getResultSet();
+            while (resultSet.next()) {
+                System.out.println(resultSet.getString(1));
+            }
+        }
+    }
+}
+```
+
+#### DBeaver
+
+DBeaver uses Hive JDBC to connect to HiveServer2, so it can connect to the Flink SQL Gateway to submit Hive SQL. Given the
+API compatibility, you can connect to the Flink SQL Gateway just as you would to HiveServer2. Please refer to the [guidance](https://github.com/dbeaver/dbeaver/wiki/Apache-Hive)
+on how to use DBeaver to connect to the Flink SQL Gateway with the HiveServer2 Endpoint.
+
+<span class="label label-danger">Attention</span> Currently, the HiveServer2 Endpoint doesn't support authentication. Please use
+the following JDBC URL in DBeaver:
+
+```bash
+jdbc:hive2://{host}:{port}/{database};auth=noSasl
+```
+
+After the setup, you can explore Flink with DBeaver.
+
+{{< img width="80%" src="/fig/dbeaver.png" alt="DBeaver" >}}
+
+#### Apache Superset
+
+Apache Superset is a powerful data exploration and visualization platform. Given the API compatibility, you can connect
+to the Flink SQL Gateway as if it were Hive. Please refer to the [guidance](https://superset.apache.org/docs/databases/hive) for more details.
+
+{{< img width="80%" src="/fig/apache_superset.png" alt="Apache Superset" >}}
+
+<span class="label label-danger">Attention</span> Currently, the HiveServer2 Endpoint doesn't support authentication. Please use
+the following JDBC URL in Apache Superset:
+
+```bash
+jdbc:hive2://{host}:{port}/{database}?auth=NOSASL
+```
+
+### Submit the Streaming SQL
+
+Flink is a unified batch and streaming engine. You can switch to streaming execution with the following statements:
+
+```sql
+SET table.sql-dialect=default;
+SET execution.runtime-mode=streaming;
+SET table.dml-sync=false;
+```
+
+After that, the environment is ready to parse Flink SQL, optimize it with the streaming planner, and submit the job in async mode.
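+
+As a minimal sketch, the dialect/mode switch and an async job submission can be scripted in one Beeline call
+(assuming the `Source` and `Sink` tables from the earlier example exist; Beeline executes the
+semicolon-separated statements in order):
+
+```bash
+$ ./beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" -e "
+    SET table.sql-dialect=default;
+    SET execution.runtime-mode=streaming;
+    SET table.dml-sync=false;
+    INSERT INTO Sink SELECT * FROM Source;"
+```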
+
+{{< hint info >}}
+Notice: The `RowKind` in the HiveServer2 API is always `INSERT`. Therefore, the HiveServer2 Endpoint doesn't
+support presenting CDC data.
+{{< /hint >}}
+
+Supported Types
+----------------
+
+The HiveServer2 Endpoint is currently built on Hive2 and supports all types available in Hive2. For Hive-compatible tables, the HiveServer2 Endpoint
+follows the same rules as the HiveCatalog to convert Flink types to Hive types and serialize them into Thrift objects. Please refer to
+the [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog#supported-types" >}}) for the type mappings.
\ No newline at end of file
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/_index.md b/docs/content.zh/docs/dev/table/sql-gateway/_index.md
new file mode 100644
index 00000000000..6ecaaf7dda5
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/sql-gateway/_index.md
@@ -0,0 +1,23 @@
+---
+title: SQL Gateway
+bookCollapseSection: true
+weight: 92
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md b/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md
new file mode 100644
index 00000000000..b2ac2fd6233
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md
@@ -0,0 +1,34 @@
+---
+title: HiveServer2 Endpoint
+weight: 3
+type: docs
+aliases:
+- /dev/table/sql-gateway/hiveserver2.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# HiveServer2 Endpoint
+
+The HiveServer2 Endpoint is compatible with the [HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
+wire protocol and allows users to interact with the Flink SQL Gateway (e.g. submit Hive SQL) through existing Hive clients, such as Hive JDBC, Beeline, DBeaver, and Apache Superset.
+
+It is recommended to use the HiveServer2 Endpoint with the Hive Catalog and Hive dialect to get the same experience
+as HiveServer2. Please refer to [Hive Compatibility]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}})
+for more details.
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/overview.md b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
new file mode 100644
index 00000000000..7ee788027bb
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
@@ -0,0 +1,236 @@
+---
+title: Overview
+weight: 1
+type: docs
+aliases:
+- /dev/table/sql-gateway.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Introduction
+----------------
+
+The SQL Gateway is a service that enables multiple remote clients to execute SQL concurrently. It provides
+an easy way to submit Flink jobs, look up metadata, and analyze data online.
+
+The SQL Gateway is composed of pluggable endpoints and the `SqlGatewayService`. The `SqlGatewayService` is a processor that is
+reused by the endpoints to handle requests. An endpoint is an entry point that allows users to connect. Depending on the
+type of the endpoint, users can use different tools to connect.
+
+{{< img width="80%" src="/fig/sql-gateway-architecture.png" alt="SQL Gateway Architecture" >}}
+
+Getting Started
+---------------
+
+This section describes how to set up and run your first Flink SQL program from the command line.
+
+The SQL Gateway is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster see the [Cluster & Deployment]({{< ref "docs/deployment/resource-providers/standalone/overview" >}}) part. If you simply want to try out the SQL Gateway, you can also start a local cluster with one worker using the following command:
+
+```bash
+$ ./bin/start-cluster.sh
+```
+### Starting the SQL Gateway
+
+The SQL Gateway scripts are also located in the binary directory of Flink. Users can start it by calling:
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
+```
+
+The command starts the SQL Gateway with the REST Endpoint listening at localhost:8083. You can use curl to check
+whether the REST Endpoint is available.
+
+```bash
+$ curl http://localhost:8083/v1/info
+{"productName":"Apache Flink","version":"1.16-SNAPSHOT"}
+```
+
+### Running SQL Queries
+
+To validate your setup and cluster connection, follow the steps below.
+
+**Step 1: Open a session**
+
+```bash
+$ curl --request POST http://localhost:8083/v1/sessions
+{"sessionHandle":"..."}
+```
+
+The `sessionHandle` in the response is used by the SQL Gateway to uniquely identify every active user.
+
+**Step 2: Execute a query**
+
+```bash
+$ curl --request POST http://localhost:8083/v1/sessions/${sessionHandle}/statements/ --data '{"statement": "SELECT 1"}'
+{"operationHandle":"..."}
+```
+
+The `operationHandle` in the response is used by the SQL Gateway to uniquely identify the submitted SQL.
+
+
+**Step 3: Fetch results**
+
+With the `sessionHandle` and `operationHandle` above, you can fetch the corresponding results.
+
+```bash
+$ curl --request GET http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/result/0
+{
+  "results": {
+    "columns": [
+      {
+        "name": "EXPR$0",
+        "logicalType": {
+          "type": "INTEGER",
+          "nullable": false
+        }
+      }
+    ],
+    "data": [
+      {
+        "kind": "INSERT",
+        "fields": [
+          1
+        ]
+      }
+    ]
+  },
+  "resultType": "PAYLOAD",
+  "nextResultUri": "..."
+}
+```
+
+The `nextResultUri` in the response is used to fetch the next batch of results if it is not `null`.
+
+```bash
+$ curl --request GET ${nextResultUri}
+```
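+
+To drain all pages of a result, you can loop until the server stops returning a `nextResultUri`. A sketch,
+assuming `jq` is installed and the gateway runs at `localhost:8083`:
+
+```bash
+# Fetch result pages until nextResultUri disappears (resultType EOS).
+uri="/v1/sessions/${sessionHandle}/operations/${operationHandle}/result/0"
+while [ -n "$uri" ] && [ "$uri" != "null" ]; do
+  resp=$(curl -s --request GET "http://localhost:8083${uri}")
+  echo "$resp" | jq '.results.data'             # rows of the current batch
+  uri=$(echo "$resp" | jq -r '.nextResultUri')  # "null" when all results are fetched
+done
+```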
+
+Configuration
+----------------
+
+### SQL Gateway startup options
+
+Currently, the SQL Gateway script supports the following commands. They are discussed in detail in the subsequent paragraphs.
+
+```bash
+$ ./bin/sql-gateway.sh --help
+
+Usage: sql-gateway.sh [start|start-foreground|stop|stop-all] [args]
+  commands:
+    start               - Run a SQL Gateway as a daemon
+    start-foreground    - Run a SQL Gateway as a console application
+    stop                - Stop the SQL Gateway daemon
+    stop-all            - Stop all the SQL Gateway daemons
+    -h | --help         - Show this help message
+```
+
+For "start" or "start-foreground" command,  you are able to configure the SQL Gateway in the CLI.
+
+```bash
+$ ./bin/sql-gateway.sh start --help
+
+Start the Flink SQL Gateway as a daemon to submit Flink SQL.
+
+  Syntax: start [OPTIONS]
+     -D <property=value>   Use value for given property
+     -h,--help             Show the help message with descriptions of all
+                           options.
+```
+
+### SQL Gateway Configuration
+
+You can configure the SQL Gateway when starting it with any of the options shown below, or with any valid [Flink configuration]({{< ref "docs/dev/table/config" >}}) entry:
+
+```bash
+$ ./bin/sql-gateway.sh start -Dkey=value
+```
+
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-left" style="width: 15%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.session.check-interval</h5></td>
+            <td style="word-wrap: break-word;">1 min</td>
+            <td>Duration</td>
+            <td>The check interval for idle session timeout, which can be disabled by setting to zero or negative value.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.session.idle-timeout</h5></td>
+            <td style="word-wrap: break-word;">10 min</td>
+            <td>Duration</td>
+            <td>Timeout interval for closing the session when the session hasn't been accessed during the interval. If setting to zero or negative value, the session will not be closed.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.session.max-num</h5></td>
+            <td style="word-wrap: break-word;">1000000</td>
+            <td>Integer</td>
+            <td>The maximum number of the active session for sql gateway service.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.keepalive-time</h5></td>
+            <td style="word-wrap: break-word;">5 min</td>
+            <td>Duration</td>
+            <td>Keepalive time for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.threads.max</h5></td>
+            <td style="word-wrap: break-word;">500</td>
+            <td>Integer</td>
+            <td>The maximum number of worker threads for sql gateway service.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.threads.min</h5></td>
+            <td style="word-wrap: break-word;">5</td>
+            <td>Integer</td>
+            <td>The minimum number of worker threads for sql gateway service.</td>
+        </tr>
+    </tbody>
+</table>
+
+Supported Endpoints
+----------------
+
+Flink natively supports the [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and the [HiveServer2 Endpoint]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}}).
+The SQL Gateway is bundled with the REST Endpoint by default. With the flexible architecture, users can start the SQL Gateway with a specified endpoint by calling
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2
+```
+
+or add the following config in the `conf/flink-conf.yaml`:
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+```
+
+{{< hint info >}}
+Notice: The CLI option takes precedence if `flink-conf.yaml` also contains the option `sql-gateway.endpoint.type`.
+{{< /hint >}}
+
+For details about a specific endpoint, please refer to the corresponding page.
+
+{{< top >}}
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/rest.md b/docs/content.zh/docs/dev/table/sql-gateway/rest.md
new file mode 100644
index 00000000000..3ea420a21ba
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/sql-gateway/rest.md
@@ -0,0 +1,119 @@
+---
+title: REST Endpoint
+weight: 2
+type: docs
+aliases:
+- /dev/table/sql-gateway/rest.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# REST Endpoint
+
+The REST Endpoint allows users to connect to the SQL Gateway with the REST API.
+
+Overview of SQL Processing
+----------------
+
+### Open Session
+
+When a client connects to the SQL Gateway, the SQL Gateway creates a `Session` as the context to store the user-specified information
+during the interactions between the client and the SQL Gateway. After the creation of the `Session`, the SQL Gateway server returns an identifier named
+`SessionHandle` for later interactions.
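+
+When a `Session` is no longer needed, it can be closed explicitly to release its resources. A sketch, assuming
+the close-session operation listed in the v1 OpenAPI specification below:
+
+```bash
+$ curl --request DELETE http://localhost:8083/v1/sessions/${sessionHandle}
+```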
+
+### Submit SQL
+
+After the registration of the `Session`, the client can submit SQL to the SQL Gateway server. The submitted SQL
+is translated to an `Operation`, and an identifier named `OperationHandle` is returned for fetching results later. The `Operation` has
+its own lifecycle: the client can cancel the execution of the `Operation`, or close the `Operation` to release the resources it uses; the status and cancel calls are sketched below.
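+
+A sketch of these lifecycle calls, assuming the status and cancel operations listed in the v1 OpenAPI
+specification below:
+
+```bash
+# Check the status of a submitted Operation.
+$ curl --request GET http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/status
+# Cancel a running Operation.
+$ curl --request POST http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/cancel
+```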
+
+### Fetch Results
+
+With the `OperationHandle`, the client can fetch the results of the `Operation`. If the `Operation` is ready, the SQL Gateway returns a batch
+of data with the corresponding schema and a URI that is used to fetch the next batch of data. When all results have been fetched, the
+SQL Gateway fills the `resultType` in the response with the value `EOS`, and the URI to the next batch of data is `null`.
+
+{{< img width="100%" src="/fig/sql-gateway-interactions.png" alt="SQL Gateway Interactions" >}}
+
+Endpoint Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-left" style="width: 15%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.address</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The address that should be used by clients to connect to the sql gateway server.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.bind-address</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The address that the sql gateway server binds itself.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.bind-port</h5></td>
+            <td style="word-wrap: break-word;">"8083"</td>
+            <td>String</td>
+            <td>The port that the sql gateway server binds itself. Accepts a list of ports (“50100,50101”), ranges (“50100-50200”) or a combination of both. It is recommended to set a range of ports to avoid collisions when multiple sql gateway servers are running on the same machine.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.port</h5></td>
+            <td style="word-wrap: break-word;">8083</td>
+            <td>Integer</td>
+            <td>The port that the client connects to. If bind-port has not been specified, then the sql gateway server will bind to this port.</td>
+        </tr>
+    </tbody>
+</table>
+
+REST API
+----------------
+
+[OpenAPI specification]({{< ref_static "generated/rest_v1_sql_gateway.yml" >}})
+
+{{< hint warning >}}
+The OpenAPI specification is still experimental.
+{{< /hint >}}
+
+#### API reference
+
+{{< tabs "f00ed142-b05f-44f0-bafc-799080c1d40d" >}}
+{{< tab "v1" >}}
+
+{{< generated/rest_v1_sql_gateway >}}
+
+{{< /tab >}}
+{{< /tabs >}}
+
+Data Type Mapping
+----------------
+
+Currently, the REST Endpoint uses the JSON format to serialize Table objects. Please refer to the
+[JSON format]({{< ref "docs/connectors/table/formats/json#data-type-mapping" >}}) for the mappings.
+
+{{< top >}}
diff --git a/docs/content/docs/dev/table/hiveCompatibility/_index.md b/docs/content/docs/dev/table/hiveCompatibility/_index.md
new file mode 100644
index 00000000000..3dee17410da
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/_index.md
@@ -0,0 +1,23 @@
+---
+title: Hive Compatibility
+bookCollapseSection: true
+weight: 94
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveserver2.md b/docs/content/docs/dev/table/hiveCompatibility/hiveserver2.md
new file mode 100644
index 00000000000..913aa56a8ea
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveserver2.md
@@ -0,0 +1,317 @@
+---
+title: HiveServer2 Endpoint
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveserver2.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# HiveServer2 Endpoint
+
+[Flink SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}) supports deploying as a HiveServer2 Endpoint, which is compatible with the [HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
+wire protocol and allows users to interact with the Flink SQL Gateway (e.g. submit Hive SQL) through existing Hive clients, such as Hive JDBC, Beeline, DBeaver, and Apache Superset.
+
+Setting Up
+----------------
+Before using the SQL Gateway with the HiveServer2 Endpoint, please prepare the required [dependencies]({{< ref "docs/connectors/table/hive/overview#dependencies" >}}).
+
+### Configure HiveServer2 Endpoint
+
+The HiveServer2 Endpoint is not the default endpoint for the SQL Gateway. You can configure the SQL Gateway to use the HiveServer2 Endpoint by calling
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 -Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=<path to hive conf>
+```
+
+or add the following configuration into `conf/flink-conf.yaml` (please replace `<path to hive conf>` with the path to your Hive conf directory).
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir: <path to hive conf>
+```
+
+### Connecting to HiveServer2 
+
+After starting the SQL Gateway, you are able to submit SQL with Apache Hive Beeline.
+
+```bash
+$ ./beeline
+SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in [jar:file:/Users/ohmeatball/Work/hive-related/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/3.2.1_1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Beeline version 2.3.9 by Apache Hive
+beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
+Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
+Enter username for jdbc:hive2://localhost:10000/default:
+Enter password for jdbc:hive2://localhost:10000/default:
+Connected to: Apache Flink (version 1.16)
+Driver: Hive JDBC (version 2.3.9)
+Transaction isolation: TRANSACTION_REPEATABLE_READ
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Source (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> CREATE TABLE Sink (
+. . . . . . . . . . . . . . . . . . . .> a INT,
+. . . . . . . . . . . . . . . . . . . .> b STRING
+. . . . . . . . . . . . . . . . . . . .> );
++---------+
+| result  |
++---------+
+| OK      |
++---------+
+0: jdbc:hive2://localhost:10000/default> INSERT INTO Sink SELECT * FROM Source; 
++-----------------------------------+
+|              job id               |
++-----------------------------------+
+| 55ff290b57829998ea6e9acc240a0676  |
++-----------------------------------+
+1 row selected (2.427 seconds)
+```
+
+Endpoint Options
+----------------
+
+Below are the options supported when creating a HiveServer2 Endpoint instance with a YAML file or DDL.
+
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-center" style="width: 8%">Required</th>
+            <th class="text-left" style="width: 7%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.endpoint.type</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">"rest"</td>
+            <td>List&lt;String&gt;</td>
+            <td>Specify which endpoint to use, here should be 'hiveserver2'.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir</h5></td>
+            <td>required</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>URI to your Hive conf dir containing hive-site.xml. The URI needs to be supported by Hadoop FileSystem. If the URI is relative, i.e. without a scheme, local file system is assumed. If the option is not specified, hive-site.xml is searched in class path.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.default-database</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"default"</td>
+            <td>String</td>
+            <td>The default database to use when the catalog is set as the current catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.catalog.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive catalog.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.module.name</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">"hive"</td>
+            <td>String</td>
+            <td>Name for the pre-registered hive module.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.exponential.backoff.slot.length</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">100 ms</td>
+            <td>Duration</td>
            <td>Binary exponential backoff slot time for Thrift clients during login to HiveServer2, for retries until hitting the Thrift client timeout.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.host</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
            <td>The server address of the HiveServer2 host to be used for communication. Default is empty, which means binding to localhost. This is only necessary if the host has multiple network addresses.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.login.timeout</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">20 s</td>
+            <td>Duration</td>
+            <td>Timeout for Thrift clients during login to HiveServer2</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.max.message.size</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">104857600</td>
+            <td>Long</td>
+            <td>Maximum message size in bytes a HS2 server will accept.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.port</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">10000</td>
+            <td>Integer</td>
+            <td>The port of the HiveServer2 endpoint.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.keepalive-time</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">1 min</td>
+            <td>Duration</td>
+            <td>Keepalive time for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.max</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">512</td>
+            <td>Integer</td>
+            <td>The maximum number of Thrift worker threads</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.hiveserver2.thrift.worker.threads.min</h5></td>
+            <td>optional</td>
+            <td style="word-wrap: break-word;">5</td>
+            <td>Integer</td>
+            <td>The minimum number of Thrift worker threads</td>
+        </tr>
+    </tbody>
+</table>
+
+Features
+----------------
+
+### Be Compatible with HiveServer2 Protocol
+
+The SQL Gateway with the HiveServer2 Endpoint aims to provide the same experience as HiveServer2.
+When users connect, the HiveServer2 Endpoint
+- creates the Hive Catalog as the default catalog;
+- switches to the Hive dialect;
+- sets the option `execution.runtime-mode` to `BATCH` and the option `table.dml-sync` to `true`, so that SQL is
+  executed in batch mode and results are returned only after execution finishes.
+
+With these defaults in place, you can submit Hive SQL in Hive style while executing it in the Flink environment.
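+
+For example, with the gateway from the setup above listening on `localhost:10000`, a Hive-style query can also be
+submitted non-interactively through Beeline. A sketch, reusing the `Source` table created in the session above:
+
+```bash
+$ ./beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" \
+    -e "SELECT COUNT(*) FROM Source;"
+```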
+
+### Clients
+
+The HiveServer2 Endpoint is compatible with the HiveServer2 wire protocol. Therefore, tools that work with Hive SQL also work with
+the SQL Gateway with the HiveServer2 Endpoint. Currently, Hive JDBC, Hive Beeline, DBeaver, and Apache Superset, among others, have been tested to connect to the
+Flink SQL Gateway with the HiveServer2 Endpoint and submit SQL.
+
+#### Hive JDBC
+
+The SQL Gateway is compatible with HiveServer2, so you can write a program that uses Hive JDBC to connect to it. To build the program, add the
+following dependency to your project's pom.xml.
+
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>${hive.version}</version>
+</dependency>
+```
+
+After reimporting the dependencies, you can use the following program to connect and list tables in the Hive Catalog.
+
+```java
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class JdbcConnection {
+    public static void main(String[] args) throws Exception {
+        try (
+                // Please replace the JDBC URI with your actual host, port and database.
+                Connection connection = DriverManager.getConnection("jdbc:hive2://{host}:{port}/{database};auth=noSasl"); 
+                Statement statement = connection.createStatement()) {
+            statement.execute("SHOW TABLES");
+            ResultSet resultSet = statement.getResultSet();
+            while (resultSet.next()) {
+                System.out.println(resultSet.getString(1));
+            }
+        }
+    }
+}
+```
+
+#### DBeaver
+
+DBeaver uses Hive JDBC to connect to HiveServer2, so it can connect to the Flink SQL Gateway to submit Hive SQL. Given the
+API compatibility, you can connect to the Flink SQL Gateway just as you would to HiveServer2. Please refer to the [guidance](https://github.com/dbeaver/dbeaver/wiki/Apache-Hive)
+on how to use DBeaver to connect to the Flink SQL Gateway with the HiveServer2 Endpoint.
+
+<span class="label label-danger">Attention</span> Currently, the HiveServer2 Endpoint doesn't support authentication. Please use
+the following JDBC URL in DBeaver:
+
+```bash
+jdbc:hive2://{host}:{port}/{database};auth=noSasl
+```
+
+After the setup, you can explore Flink with DBeaver.
+
+{{< img width="80%" src="/fig/dbeaver.png" alt="DBeaver" >}}
+
+#### Apache Superset
+
+Apache Superset is a powerful data exploration and visualization platform. Given the API compatibility, you can connect
+to the Flink SQL Gateway as if it were Hive. Please refer to the [guidance](https://superset.apache.org/docs/databases/hive) for more details.
+
+{{< img width="80%" src="/fig/apache_superset.png" alt="Apache Superset" >}}
+
+<span class="label label-danger">Attention</span> Currently, the HiveServer2 Endpoint doesn't support authentication. Please use
+the following JDBC URL in Apache Superset:
+
+```bash
+jdbc:hive2://{host}:{port}/{database}?auth=NOSASL
+```
+
+### Submit the Streaming SQL
+
+Flink is a unified batch and streaming engine. You can switch to streaming execution with the following statements:
+
+```sql
+SET table.sql-dialect=default;
+SET execution.runtime-mode=streaming;
+SET table.dml-sync=false;
+```
+
+After that, the environment is ready to parse Flink SQL, optimize it with the streaming planner, and submit the job in async mode.
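+
+As a minimal sketch, the dialect/mode switch and an async job submission can be scripted in one Beeline call
+(assuming the `Source` and `Sink` tables from the earlier example exist; Beeline executes the
+semicolon-separated statements in order):
+
+```bash
+$ ./beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" -e "
+    SET table.sql-dialect=default;
+    SET execution.runtime-mode=streaming;
+    SET table.dml-sync=false;
+    INSERT INTO Sink SELECT * FROM Source;"
+```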
+
+{{< hint info >}}
+Notice: The `RowKind` in the HiveServer2 API is always `INSERT`. Therefore, the HiveServer2 Endpoint doesn't
+support presenting CDC data.
+{{< /hint >}}
+
+Supported Types
+----------------
+
+The HiveServer2 Endpoint is currently built on Hive2 and supports all types available in Hive2. For Hive-compatible tables, the HiveServer2 Endpoint
+follows the same rules as the HiveCatalog to convert Flink types to Hive types and serialize them into Thrift objects. Please refer to
+the [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog#supported-types" >}}) for the type mappings.
\ No newline at end of file
diff --git a/docs/content/docs/dev/table/overview.md b/docs/content/docs/dev/table/overview.md
index 9f630e010c4..2e148ea9399 100644
--- a/docs/content/docs/dev/table/overview.md
+++ b/docs/content/docs/dev/table/overview.md
@@ -59,5 +59,6 @@ Where to go next?
 * [SQL]({{< ref "docs/dev/table/sql/overview" >}}): Supported operations and syntax for SQL.
 * [Built-in Functions]({{< ref "docs/dev/table/functions/systemFunctions" >}}): Supported functions in Table API and SQL.
 * [SQL Client]({{< ref "docs/dev/table/sqlClient" >}}): Play around with Flink SQL and submit a table program to a cluster without programming knowledge.
+* [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}): A service that enables multiple remote clients to execute SQL concurrently.
 
 {{< top >}}
diff --git a/docs/content/docs/dev/table/sql-gateway/_index.md b/docs/content/docs/dev/table/sql-gateway/_index.md
new file mode 100644
index 00000000000..6ecaaf7dda5
--- /dev/null
+++ b/docs/content/docs/dev/table/sql-gateway/_index.md
@@ -0,0 +1,23 @@
+---
+title: SQL Gateway
+bookCollapseSection: true
+weight: 92
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content/docs/dev/table/sql-gateway/hiveserver2.md b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
new file mode 100644
index 00000000000..d60141ef968
--- /dev/null
+++ b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
@@ -0,0 +1,34 @@
+---
+title: HiveServer2 Endpoint
+weight: 3
+type: docs
+aliases:
+- /dev/table/sql-gateway/hiveserver2.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# HiveServer2 Endpoint
+
+The HiveServer2 Endpoint is compatible with the [HiveServer2](https://cwiki.apache.org/confluence/display/hive/hiveserver2+overview)
+wire protocol and allows users to interact with the Flink SQL Gateway (e.g. submit Hive SQL) through existing Hive clients, such as Hive JDBC, Beeline, DBeaver, and Apache Superset.
+
+It is recommended to use the HiveServer2 Endpoint with the Hive Catalog and Hive dialect to get the same experience
+as HiveServer2. Please refer to [Hive Compatibility]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}})
+for more details.
diff --git a/docs/content/docs/dev/table/sql-gateway/overview.md b/docs/content/docs/dev/table/sql-gateway/overview.md
new file mode 100644
index 00000000000..e13ad914fc7
--- /dev/null
+++ b/docs/content/docs/dev/table/sql-gateway/overview.md
@@ -0,0 +1,236 @@
+---
+title: Overview
+weight: 1
+type: docs
+aliases:
+- /dev/table/sql-gateway.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Introduction
+----------------
+
+The SQL Gateway is a service that enables multiple remote clients to execute SQL concurrently. It provides
+an easy way to submit Flink jobs, look up metadata, and analyze data online.
+
+The SQL Gateway is composed of pluggable endpoints and the `SqlGatewayService`. The `SqlGatewayService` is a processor that is
+reused by the endpoints to handle requests. An endpoint is an entry point that allows users to connect. Depending on the
+type of the endpoint, users can use different tools to connect.
+
+{{< img width="80%" src="/fig/sql-gateway-architecture.png" alt="SQL Gateway Architecture" >}}
+
+Getting Started
+---------------
+
+This section describes how to set up and run your first Flink SQL program from the command line.
+
+The SQL Gateway is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster see the [Cluster & Deployment]({{< ref "docs/deployment/resource-providers/standalone/overview" >}}) part. If you simply want to try out the SQL Gateway, you can also start a local cluster with one worker using the following command:
+
+```bash
+$ ./bin/start-cluster.sh
+```
+### Starting the SQL Gateway
+
+The SQL Gateway scripts are also located in the binary directory of Flink. Users can start it by calling:
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
+```
+
+The command starts the SQL Gateway with the REST Endpoint listening at localhost:8083. You can use curl to check
+whether the REST Endpoint is available.
+
+```bash
+$ curl http://localhost:8083/v1/info
+{"productName":"Apache Flink","version":"1.16-SNAPSHOT"}
+```
+
+### Running SQL Queries
+
+To validate your setup and cluster connection, follow the steps below.
+
+**Step 1: Open a session**
+
+```bash
+$ curl --request POST http://localhost:8083/v1/sessions
+{"sessionHandle":"..."}
+```
+
+The `sessionHandle` in the response is used by the SQL Gateway to uniquely identify every active user.
+
+**Step 2: Execute a query**
+
+```bash
+$ curl --request POST http://localhost:8083/v1/sessions/${sessionHandle}/statements/ --data '{"statement": "SELECT 1"}'
+{"operationHandle":"..."}
+```
+
+The `operationHandle` in the response is used by the SQL Gateway to uniquely identify the submitted SQL.
+
+
+**Step 3: Fetch results**
+
+With the `sessionHandle` and `operationHandle` above, you can fetch the corresponding results.
+
+```bash
+$ curl --request GET http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/result/0
+{
+  "results": {
+    "columns": [
+      {
+        "name": "EXPR$0",
+        "logicalType": {
+          "type": "INTEGER",
+          "nullable": false
+        }
+      }
+    ],
+    "data": [
+      {
+        "kind": "INSERT",
+        "fields": [
+          1
+        ]
+      }
+    ]
+  },
+  "resultType": "PAYLOAD",
+  "nextResultUri": "..."
+}
+```
+
+The `nextResultUri` in the response is used to fetch the next batch of results if it is not `null`.
+
+```bash
+$ curl --request GET ${nextResultUri}
+```
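+
+Putting the steps together, you can drain all results in a loop until the gateway reports the end of the stream. A minimal sketch, assuming `jq` is installed, the handles from the previous steps are stored in shell variables, and `nextResultUri` is returned as a path relative to the gateway address:
+
+```bash
+uri="/v1/sessions/${sessionHandle}/operations/${operationHandle}/result/0"
+while [ "$uri" != "null" ]; do
+  response=$(curl --silent --request GET "http://localhost:8083${uri}")
+  # Print the rows of the current batch.
+  echo "$response" | jq '.results.data'
+  # "EOS" signals that all results have been fetched.
+  if [ "$(echo "$response" | jq -r '.resultType')" = "EOS" ]; then
+    break
+  fi
+  uri=$(echo "$response" | jq -r '.nextResultUri')
+done
+```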
+
+Configuration
+----------------
+
+### SQL Gateway startup options
+
+Currently, the SQL Gateway script supports the following commands. They are discussed in detail in the subsequent paragraphs.
+
+```bash
+$ ./bin/sql-gateway.sh --help
+
+Usage: sql-gateway.sh [start|start-foreground|stop|stop-all] [args]
+  commands:
+    start               - Run a SQL Gateway as a daemon
+    start-foreground    - Run a SQL Gateway as a console application
+    stop                - Stop the SQL Gateway daemon
+    stop-all            - Stop all the SQL Gateway daemons
+    -h | --help         - Show this help message
+```
+
+For the "start" or "start-foreground" command, you can configure the SQL Gateway in the CLI.
+
+```bash
+$ ./bin/sql-gateway.sh start --help
+
+Start the Flink SQL Gateway as a daemon to submit Flink SQL.
+
+  Syntax: start [OPTIONS]
+     -D <property=value>   Use value for given property
+     -h,--help             Show the help message with descriptions of all
+                           options.
+```
+
+### SQL Gateway Configuration
+
+You can configure the SQL Gateway when starting it as shown below, using any of the options listed here or any valid [Flink configuration]({{< ref "docs/dev/table/config" >}}) entry:
+
+```bash
+$ ./bin/sql-gateway.sh start -Dkey=value
+```
+
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-left" style="width: 15%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.session.check-interval</h5></td>
+            <td style="word-wrap: break-word;">1 min</td>
+            <td>Duration</td>
+            <td>The check interval for the idle session timeout, which can be disabled by setting it to a zero or negative value.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.session.idle-timeout</h5></td>
+            <td style="word-wrap: break-word;">10 min</td>
+            <td>Duration</td>
+            <td>Timeout interval for closing the session when the session hasn't been accessed during the interval. If set to a zero or negative value, the session will not be closed.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.session.max-num</h5></td>
+            <td style="word-wrap: break-word;">1000000</td>
+            <td>Integer</td>
+            <td>The maximum number of active sessions for the SQL Gateway service.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.keepalive-time</h5></td>
+            <td style="word-wrap: break-word;">5 min</td>
+            <td>Duration</td>
+            <td>Keepalive time for an idle worker thread. When the number of workers exceeds the minimum, excess threads are killed after this time interval.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.threads.max</h5></td>
+            <td style="word-wrap: break-word;">500</td>
+            <td>Integer</td>
+            <td>The maximum number of worker threads for the SQL Gateway service.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.worker.threads.min</h5></td>
+            <td style="word-wrap: break-word;">5</td>
+            <td>Integer</td>
+            <td>The minimum number of worker threads for the SQL Gateway service.</td>
+        </tr>
+    </tbody>
+</table>
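+
+For example, the following command starts the gateway with a shorter idle-session timeout and a smaller worker pool, using the options above (the values are illustrative):
+
+```bash
+$ ./bin/sql-gateway.sh start \
+    -Dsql-gateway.endpoint.rest.address=localhost \
+    -Dsql-gateway.session.idle-timeout=30min \
+    -Dsql-gateway.worker.threads.max=100
+```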
+
+Supported Endpoints
+----------------
+
+Flink natively supports the [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and the [HiveServer2 Endpoint]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}}).
+The SQL Gateway is bundled with the REST Endpoint by default. With its flexible architecture, users can start the SQL Gateway with a specified endpoint by calling
+
+```bash
+$ ./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2
+```
+
+or add the following configuration to `conf/flink-conf.yaml`:
+
+```yaml
+sql-gateway.endpoint.type: hiveserver2
+```
+
+{{< hint info >}}
+Note: the CLI option has higher priority if `flink-conf.yaml` also contains the option `sql-gateway.endpoint.type`.
+{{< /hint >}}
+
+For the details of a specific endpoint, please refer to the corresponding page.
+
+{{< top >}}
diff --git a/docs/content/docs/dev/table/sql-gateway/rest.md b/docs/content/docs/dev/table/sql-gateway/rest.md
new file mode 100644
index 00000000000..da44cc501bf
--- /dev/null
+++ b/docs/content/docs/dev/table/sql-gateway/rest.md
@@ -0,0 +1,119 @@
+---
+title: REST Endpoint
+weight: 2
+type: docs
+aliases:
+- /dev/table/sql-gateway/rest.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# REST Endpoint
+
+The REST endpoint allows users to connect to the SQL Gateway with the REST API.
+
+Overview of SQL Processing
+----------------
+
+### Open Session
+
+When the client connects to the SQL Gateway, the SQL Gateway creates a `Session` as the context to store the user-specified information
+during the interactions between the client and the SQL Gateway. After the creation of the `Session`, the SQL Gateway server returns an identifier named
+`SessionHandle` for later interactions.
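+
+For example, a client can open a session and optionally pass session configuration in the request body. A sketch; consult the OpenAPI specification referenced below for the exact request fields:
+
+```bash
+$ curl --request POST http://localhost:8083/v1/sessions \
+    --data '{"properties": {"execution.runtime-mode": "batch"}}'
+{"sessionHandle":"..."}
+```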
+
+### Submit SQL
+
+After the registration of the `Session`, the client can submit SQL to the SQL Gateway server. The submitted SQL
+is translated to an `Operation`, and an identifier named `OperationHandle` is returned for fetching results later. The `Operation` has
+its own lifecycle: the client can cancel the execution of the `Operation`, or close the `Operation` to release the resources it uses.
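+
+For example, an operation can be cancelled or closed through its handles. A sketch, assuming the v1 paths listed in the OpenAPI specification referenced below:
+
+```bash
+# Cancel the execution of a running operation.
+$ curl --request POST http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/cancel
+# Close the operation and release its resources.
+$ curl --request DELETE http://localhost:8083/v1/sessions/${sessionHandle}/operations/${operationHandle}/close
+```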
+
+### Fetch Results
+
+With the `OperationHandle`, the client can fetch the results of the `Operation`. If the `Operation` is ready, the SQL Gateway returns a batch
+of the data with the corresponding schema and a URI that can be used to fetch the next batch. When all results have been fetched, the
+SQL Gateway fills the `resultType` in the response with the value `EOS`, and the URI to the next batch of data is null.
+
+{{< img width="100%" src="/fig/sql-gateway-interactions.png" alt="SQL Gateway Interactions" >}}
+
+Endpoint Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-left" style="width: 15%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.address</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The address that should be used by clients to connect to the SQL Gateway server.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.bind-address</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The address that the SQL Gateway server binds itself to.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.bind-port</h5></td>
+            <td style="word-wrap: break-word;">"8083"</td>
+            <td>String</td>
+            <td>The port that the sql gateway server binds itself. Accepts a list of ports (“50100,50101”), ranges (“50100-50200”) or a combination of both. It is recommended to set a range of ports to avoid collisions when multiple sql gateway servers are running on the same machine.</td>
+        </tr>
+        <tr>
+            <td><h5>sql-gateway.endpoint.rest.port</h5></td>
+            <td style="word-wrap: break-word;">8083</td>
+            <td>Integer</td>
+            <td>The port that the client connects to. If bind-port has not been specified, then the SQL Gateway server will bind to this port.</td>
+        </tr>
+    </tbody>
+</table>
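+
+For example, the following command starts the REST endpoint on a port other than the default 8083:
+
+```bash
+$ ./bin/sql-gateway.sh start \
+    -Dsql-gateway.endpoint.rest.address=localhost \
+    -Dsql-gateway.endpoint.rest.port=8084
+```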
+
+REST API
+----------------
+
+[OpenAPI specification]({{< ref_static "generated/rest_v1_sql_gateway.yml" >}})
+
+{{< hint warning >}}
+The OpenAPI specification is still experimental.
+{{< /hint >}}
+
+#### API reference
+
+{{< tabs "f00ed142-b05f-44f0-bafc-799080c1d40d" >}}
+{{< tab "v1" >}}
+
+{{< generated/rest_v1_sql_gateway >}}
+
+{{< /tab >}}
+{{< /tabs >}}
+
+Data Type Mapping
+----------------
+
+Currently, the REST endpoint uses the JSON format to serialize Table objects. Please refer to the
+[JSON format]({{< ref "docs/connectors/table/formats/json#data-type-mapping" >}}) documentation for the mappings.
+
+{{< top >}}
diff --git a/docs/static/fig/apache_superset.png b/docs/static/fig/apache_superset.png
new file mode 100644
index 00000000000..944ce3744d6
Binary files /dev/null and b/docs/static/fig/apache_superset.png differ
diff --git a/docs/static/fig/dbeaver.png b/docs/static/fig/dbeaver.png
new file mode 100644
index 00000000000..bea5833f9ca
Binary files /dev/null and b/docs/static/fig/dbeaver.png differ
diff --git a/docs/static/fig/sql-gateway-architecture.png b/docs/static/fig/sql-gateway-architecture.png
new file mode 100644
index 00000000000..934fe4215b8
Binary files /dev/null and b/docs/static/fig/sql-gateway-architecture.png differ
diff --git a/docs/static/fig/sql-gateway-interactions.png b/docs/static/fig/sql-gateway-interactions.png
new file mode 100644
index 00000000000..fda1366a657
Binary files /dev/null and b/docs/static/fig/sql-gateway-interactions.png differ
diff --git a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2EndpointConfigOptions.java b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2EndpointConfigOptions.java
index 5f8a05dd6f5..4c30f53e876 100644
--- a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2EndpointConfigOptions.java
+++ b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2EndpointConfigOptions.java
@@ -48,7 +48,7 @@ public class HiveServer2EndpointConfigOptions {
             ConfigOptions.key("thrift.port")
                     .intType()
                     .defaultValue(10000)
-                    .withDescription("The port of the HiveServer2 endpoint");
+                    .withDescription("The port of the HiveServer2 endpoint.");
 
     public static final ConfigOption<Integer> THRIFT_WORKER_THREADS_MIN =
             ConfigOptions.key("thrift.worker.threads.min")