Posted to commits@inlong.apache.org by do...@apache.org on 2022/11/17 13:00:25 UTC

[inlong-website] branch master updated: [INLONG-609][Doc] Connect all Sort-related document guides (#610)

This is an automated email from the ASF dual-hosted git repository.

dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new b62a12d866 [INLONG-609][Doc] Connect all Sort-related document guides (#610)
b62a12d866 is described below

commit b62a12d866ba688ca901cd69da10bd8e03e0dac1
Author: Charles Zhang <do...@apache.org>
AuthorDate: Thu Nov 17 21:00:20 2022 +0800

    [INLONG-609][Doc] Connect all Sort-related document guides (#610)
---
 docs/data_node/extract_node/overview.md            |  9 ++---
 docs/data_node/load_node/overview.md               |  9 ++---
 docs/deployment/bare_metal.md                      |  4 +--
 docs/modules/sort/example.md                       | 39 +++++++++-----------
 docs/modules/sort/quick_start.md                   | 35 +++++++++++-------
 .../current/data_node/extract_node/overview.md     |  9 ++---
 .../current/data_node/load_node/overview.md        |  9 ++---
 .../current/deployment/bare_metal.md               |  4 +--
 .../current/modules/sort/example.md                | 41 +++++++++-------------
 .../current/modules/sort/quick_start.md            | 31 +++++++++++-----
 .../data_node/extract_node/overview.md             |  9 ++---
 .../version-1.4.0/data_node/load_node/overview.md  |  9 ++---
 .../version-1.4.0/deployment/bare_metal.md         |  4 +--
 .../version-1.4.0/modules/sort/example.md          | 41 +++++++++-------------
 .../version-1.4.0/modules/sort/quick_start.md      | 31 +++++++++++-----
 .../data_node/extract_node/overview.md             |  9 ++---
 .../version-1.4.0/data_node/load_node/overview.md  |  9 ++---
 .../version-1.4.0/deployment/bare_metal.md         |  4 +--
 .../version-1.4.0/modules/sort/example.md          | 39 +++++++++-----------
 .../version-1.4.0/modules/sort/quick_start.md      | 35 +++++++++++-------
 20 files changed, 180 insertions(+), 200 deletions(-)

diff --git a/docs/data_node/extract_node/overview.md b/docs/data_node/extract_node/overview.md
index dca63130bc..d8950c58de 100644
--- a/docs/data_node/extract_node/overview.md
+++ b/docs/data_node/extract_node/overview.md
@@ -26,13 +26,8 @@ The following table shows the version mapping between InLong<sup>®</sup> Extrac
 |          <font color="DarkCyan">1.2.0</font>           | <font color="MediumVioletRed">1.13.5</font> |
 
 ## Usage for SQL API
-
-We need several steps to setup a Flink cluster with the provided connector.
-
-1. Setup a Flink cluster with version 1.13.5 and Java 8+ installed.
-2. Download and the Sort Connectors jars from the [Downloads](/download) page (or [build yourself](../../quick_start/how_to_build.md)).
-3. Put the Sort Connectors jars under `FLINK_HOME/lib/`.
-4. Restart the Flink cluster.
+- [Deploy InLong Sort](modules/sort/quick_start.md)
+- Create Data Node
 
 The example shows how to create a MySQL Extract Node in [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) and execute queries on it.
 
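The hunk above replaces the cluster-setup steps with a pointer to the Sort deployment guide; the actual MySQL Extract Node example follows in the full document. As a hedged sketch (all connection values below are hypothetical placeholders, not values from this commit), such a DDL can be drafted into a script file before pasting it into the Flink SQL Client:

```shell
# Sketch only: draft a MySQL Extract Node DDL into a file first.
# Hostname, credentials, and table names are illustrative placeholders.
cat > /tmp/mysql-extract-node.sql <<'EOF'
CREATE TABLE `mysql_extract_node` (
  `id` INT,
  `name` STRING
) WITH (
  'connector' = 'mysql-cdc-inlong',
  'hostname' = 'localhost',
  'username' = 'inlong',
  'password' = 'inlong',
  'database-name' = 'test',
  'table-name' = 'user'
);
EOF
# Confirm the connector identifier appears exactly once in the draft.
grep -c 'mysql-cdc-inlong' /tmp/mysql-extract-node.sql
```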
diff --git a/docs/data_node/load_node/overview.md b/docs/data_node/load_node/overview.md
index 1753798dcd..72f9d0073e 100644
--- a/docs/data_node/load_node/overview.md
+++ b/docs/data_node/load_node/overview.md
@@ -34,13 +34,8 @@ The following table shows the version mapping between InLong<sup>®</sup> Load N
 |  <font color="DarkCyan">1.2.0</font>  | <font color="MediumVioletRed">1.13.5</font> |
 
 ## Usage for SQL API
-
-We need several steps to setup a Flink cluster with the provided connector.
-
-1. Setup a Flink cluster with version 1.13.5 and Java 8+ installed.
-2. Download and decompress the Sort Connectors jars from the [Downloads](/download) page (or [build yourself](../../quick_start/how_to_build.md)).
-3. Put the Sort Connectors jars under `FLINK_HOME/lib/`.
-4. Restart the Flink cluster.
+- [Deploy InLong Sort](modules/sort/quick_start.md)
+- Create Data Node
 
 The example shows how to create a MySQL Load Node in [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) and load data to it.
 
diff --git a/docs/deployment/bare_metal.md b/docs/deployment/bare_metal.md
index 3ff2896455..1d327336bf 100644
--- a/docs/deployment/bare_metal.md
+++ b/docs/deployment/bare_metal.md
@@ -4,8 +4,8 @@ sidebar_position: 4
 ---
 
 ## Environment Requirements
-- MySQL 5.7+
-- Flink 1.13.5
+- MySQL 5.7+ or PostgreSQL
+- [Apache Flink 1.13.5](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)
 
 ## Prepare Message Queue
 InLong supports the following Message Queue services now; you can choose one of them.
diff --git a/docs/modules/sort/example.md b/docs/modules/sort/example.md
index 9a3317c652..19fed89a32 100644
--- a/docs/modules/sort/example.md
+++ b/docs/modules/sort/example.md
@@ -3,18 +3,15 @@ title: Example
 sidebar_position: 3
 ---
 
-## Overview
-
 To make it easier for you to create InLong Sort jobs, here we list some data stream configuration examples.
 The following will introduce the SQL, Dashboard, and Manager Client Tools methods to use InLong Sort.
 
 ## Environment Requirements
-- JDK 1.8.x
-- Flink 1.13.5
+- Apache Flink 1.13.5
 - MySQL
-- Kafka
-- Hadoop
-- Hive 3.x
+- Apache Kafka
+- Apache Hadoop
+- Apache Hive 3.x
 
 ## Prepare InLong Sort And Connectors
 You can prepare InLong Sort and Data Node Connectors by referring to [Deployment Guide](quick_start.md).
@@ -28,11 +25,11 @@ This example defines the data flow for a single table(mysql-->kafka-->hive).
 Single table sync example:
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/mysql-to-kafka.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-kafka.sql
 ```
 
-* mysql-to-kafka.sql
+- mysql-to-kafka.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -84,14 +81,16 @@ INSERT INTO `table_2`
 ```
 
 ### Kafka to Hive
-
-**Note:**  First you need to create user table in Hive.
+:::caution
+First, you need to create the `user` table in Hive.
+:::
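The caution above leaves the Hive DDL to the reader. A minimal sketch follows; the column names and storage format are assumptions (match them to the fields your kafka-to-hive.sql actually selects), not taken from this commit:

```shell
# Hypothetical Hive DDL for the `user` table; columns/types are assumed.
cat > /tmp/create-user-table.hql <<'EOF'
CREATE TABLE `user` (
  `name` STRING,
  `age`  INT
) STORED AS ORC;
EOF
# In a real environment you would then run: hive -f /tmp/create-user-table.hql
wc -l < /tmp/create-user-table.hql
```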
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/kafka-to-hive.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file kafka-to-hive.sql
 ```
-* kafka-to-hive.sql
+
+- kafka-to-hive.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -138,12 +137,6 @@ INSERT INTO `user`
     FROM `table_1`;
 
 ```
-Note: Of course you can also put all the SQL in one file.
-
-## Usage for Dashboard
-
-The underlying capabilities are already available and will complement the Dashboard capabilities in the future.
-
-## Usage for Manager Client Tools
 
-TODO: It will be supported in the future.
+## Other Connectors
+There are many supported [Extract Node](data_node/extract_node/overview.md) and [Load Node](data_node/load_node/overview.md) connectors, and you can use them directly.
diff --git a/docs/modules/sort/quick_start.md b/docs/modules/sort/quick_start.md
index 93994bc7c2..4afd75904d 100644
--- a/docs/modules/sort/quick_start.md
+++ b/docs/modules/sort/quick_start.md
@@ -4,30 +4,33 @@ sidebar_position: 2
 ---
 
 ## Set up Flink Environment
-Currently, InLong Sort is based on Flink, before you run an InLong Sort Application,
+InLong Sort is based on Apache Flink, so you need to set up an [Apache Flink Environment](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/).
 
-you need to set up [Flink Environment](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/).
-
-Currently, InLong Sort relies on Flink-1.13.5. Chose `flink-1.13.5-bin-scala_2.11.tgz` when downloading package.
+InLong Sort relies on Apache Flink 1.13.5. Choose `flink-1.13.5-bin-scala_2.11.tgz` when downloading the package.
 
 ## Prepare installation files
 - InLong Sort file, [Download](https://inlong.apache.org/download/) `apache-inlong-[version]-bin.tar.gz`
 - Data Nodes Connectors, [Download](https://inlong.apache.org/download/) `apache-inlong-[version]-sort-connectors.tar.gz`
 
-Notice: Please put required Connectors jars into under `FLINK_HOME/lib/` after download.  
-Put [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar) to `FLINK_HOME/lib/` when you use `mysql-cdc-inlong` connector. 
+:::caution
+Please put the required Connectors JARs under `FLINK_HOME/lib/` after downloading.  
+Put [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar) into `FLINK_HOME/lib/` when you use the `mysql-cdc-inlong` connector.
+:::
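The caution above can be rehearsed with throwaway paths. This sketch uses `mktemp` stand-ins and a placeholder JAR name; in a real setup you would use your actual `FLINK_HOME` and the JARs extracted from `apache-inlong-[version]-sort-connectors.tar.gz`:

```shell
# Simulate placing Sort connector JARs under FLINK_HOME/lib/.
# All paths and the JAR name are stand-ins for the real download.
FLINK_HOME=$(mktemp -d)
CONNECTORS_DIR=$(mktemp -d)
mkdir -p "$FLINK_HOME/lib"
touch "$CONNECTORS_DIR/sort-connector-mysql-cdc-1.4.0.jar"   # placeholder JAR
cp "$CONNECTORS_DIR"/sort-connector-*.jar "$FLINK_HOME/lib/"
ls "$FLINK_HOME/lib"
```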
 
-## Start an inlong-sort application
+## Start an InLong Sort Job
 ```shell
 ./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
---sql.script.file mysql-to-postgresql.sql
+--sql.script.file [source-to-sink].sql
 ```
 
-## Configuration
-`/YOUR_SQL_SCRIPT_DIR/mysql-to-postgresql.sql` is a sql script file includes multi Flink SQL statements that can be separated by semicolon.  
-Statement can support `CREATE TABLE`, `CRETAE VIEW`, `INSERT INTO`. We can write sql to do data integration.  
+:::note
+`--sql.script.file` specifies a SQL script file containing multiple Flink SQL statements separated by semicolons. It supports `CREATE TABLE`, `CREATE VIEW`, `INSERT INTO`, etc.
+:::
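As a quick sanity check of the note above, the script file is nothing more than semicolon-terminated statements. This sketch writes a two-statement file (the file name and DDL are illustrative, not from the docs) and counts them:

```shell
# Write a two-statement script and count statements by terminating semicolons.
cat > /tmp/demo-sort-job.sql <<'EOF'
CREATE TABLE `table_1` (`age` INT, `name` STRING)
  WITH ('connector' = 'mysql-cdc-inlong');
INSERT INTO `table_2` SELECT `name`, `age` FROM `table_1`;
EOF
grep -c ';$' /tmp/demo-sort-job.sql
```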
 
+### MySQL to PostgreSQL
 We can write the following SQL script if we want to read data from MySQL and write it into PostgreSQL.
+
+- Prepare mysql-to-postgresql.sql
 ```sql
  CREATE TABLE `table_1`(
     `age` INT,
@@ -59,4 +62,12 @@ INSERT INTO `table_2`
     `name` AS `name`,
     `age` AS `age`
     FROM `table_1`;
-```
\ No newline at end of file
+```
+
+- Submit job
+```shell
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-postgresql.sql
+```
+
+For other complete usage examples, please refer to [Example](example.md).
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
index cb387ea4fa..e100700104 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
@@ -28,13 +28,8 @@ Extract 节点列表是一组基于 <a href="https://flink.apache.org/">Apache F
 | <font color="DarkCyan">1.2.0</font> | <font color="MediumVioletRed">1.13.5</font> |
 
 ## SQL API 用法
-
-我们需要几个步骤来使用提供的连接器设置 Flink 集群。
-
-- 设置一个安装了 1.13.5 版本和 Java 8+ 的 Flink 集群。
-- 从 [下载](/zh-CN/download) 页面下载并解压 Sort Connectors jars (或者参考 [如何编译](../../quick_start/how_to_build.md) 编译需要的版本)。
-- 将下载并解压后的 Sort Connectors jars 放到 `FLINK_HOME/lib/`。
-- 重启 Flink 集群。
+- [部署 InLong Sort](modules/sort/quick_start.md)
+- 创建数据节点
 
 下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建 MySQL Extract 节点,并从中查询数据:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/overview.md
index 65e8fe318b..a3dc62f0a3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/overview.md
@@ -34,13 +34,8 @@ Load 节点列表是一组基于 <a href="https://flink.apache.org/">Apache Flin
 | <font color="DarkCyan">1.2.0</font> | <font color="MediumVioletRed">1.13.5</font> |
 
 ## SQL API 的用法
-
-我们需要几个步骤来使用提供的连接器设置 Flink 集群。
-
-- 设置一个安装了 1.13.5 版本和 Java 8+ 的 Flink 集群。 
-- 从 [下载](/zh-CN/download) 页面下载并解压 Sort Connectors jars (或者参考 [如何编译](../../quick_start/how_to_build.md) 编译需要的版本)。
-- 将下载并解压后的 Sort Connectors jars 放到 `FLINK_HOME/lib/`。
-- 重启 Flink 集群。
+- [部署 InLong Sort](modules/sort/quick_start.md)
+- 创建数据节点
 
 下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建一个 MySQL Load 节点并加载数据进去:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/bare_metal.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/bare_metal.md
index 7be5a21659..9dbc039399 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/bare_metal.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/bare_metal.md
@@ -4,8 +4,8 @@ sidebar_position: 4
 ---
 
 ## 环境要求
-- MySQL 5.7+
-- Flink 1.13.5
+- MySQL 5.7+ or PostgreSQL
+- [Apache Flink 1.13.5](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)
 
 ## 准备消息队列
 InLong 当前支持以下消息队列,根据使用情况**选择其一**即可。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/example.md
index 20e07b7c68..9840b57220 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/example.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/example.md
@@ -1,19 +1,16 @@
 ---
-title: 例子
+title: 使用示例
 sidebar_position: 3
 ---
 
-## 示例
-
 为了更容易创建 InLong Sort 作业,这里我们列出了一些数据流配置示例。下面将介绍 InLong Sort 的 SQL、Dashboard、Manager 客户端工具的使用。
 
 ## 环境要求
-- JDK 1.8.x
-- Flink 1.13.5
+- Apache Flink 1.13.5
 - MySQL
-- Kafka
-- Hadoop
-- Hive 3.x
+- Apache Kafka
+- Apache Hadoop
+- Apache Hive 3.x
 
 ## 准备 InLong Sort 和 Connectors
 你可以通过参考[部署指引](quick_start.md)准备 InLong Sort 和数据节点 Connectors。
@@ -27,11 +24,11 @@ sidebar_position: 3
 单表同步配置示例如下:
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/mysql-to-kafka.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-kafka.sql
 ```
 
-* mysql-to-kafka.sql
+- mysql-to-kafka.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -83,14 +80,16 @@ INSERT INTO `table_2`
 ```
 
 ### 读 Kafka 写 Hive
-
-**注意:**  首先需要在 hive 中创建 user 表。
+:::caution
+需要在 Hive 中先创建 `user` 表。
+:::
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/kafka-to-hive.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file kafka-to-hive.sql
 ```
-* kafka-to-hive.sql
+
+- kafka-to-hive.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -137,12 +136,6 @@ INSERT INTO `user`
     FROM `table_1`;
 
 ```
-备注:以上过程所有的 SQL 可以放在一个文件中提交执行。
-
-## 使用 Inlong Dashboard 方式
-
-目前 Dashboard 支持文件采集同步的方式,以上数据源可视化配置方式正在开发中。
-
-## 使用 Manager Client Tools 方式
 
-TODO: 未来发布的版本将会支持。
+## 其它 Connectors
+在 [Extract Node](data_node/extract_node/overview.md) 和 [Load Node](data_node/load_node/overview.md) 部分,有更丰富的 connector 可以使用,可根据使用场景参考配置。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
index 6eca26e5bc..d4bbdf3a07 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
@@ -4,28 +4,33 @@ sidebar_position: 2
 ---
 
 ## 配置 Flink 运行环境
-当前 InLong Sort 是基于 Flink 的一个应用,因此运行 InLong Sort 应用前,需要准备好 [Flink 环境](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)。
+InLong Sort 是基于 Flink 的一个应用,需要准备好 [Apache Flink 环境](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)。
 
-由于当前 InLong Sort 依赖的是 Flink1.13.5 版本,因此在下载部署包时,请选择`flink-1.13.5-bin-scala_2.11.tgz`
+当前 InLong Sort 依赖的是 Apache Flink 1.13.5 版本,因此在下载部署包时,请选择 `flink-1.13.5-bin-scala_2.11.tgz`
 
 ## 准备安装文件
 - InLong Sort 运行文件,[下载](https://inlong.apache.org/zh-CN/download/) `apache-inlong-[version]-bin.tar.gz`
 - 数据节点 Connectors,[下载](https://inlong.apache.org/zh-CN/download/) `apache-inlong-[version]-sort-connectors.tar.gz`
 
-注意:Connectors 下载后可以将需要的 jars 放到`FLINK_HOME/lib/`下。  
+:::caution
+Connectors 下载后可以将需要的 jars 放到`FLINK_HOME/lib/`下。  
 如果使用`mysql-cdc-inlong` 连接器,请将 [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar)  包放到 `FLINK_HOME/lib/`下。
+:::
 
-## 启动 InLong Sort
+## 启动 InLong Sort 任务
 ```
 ./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
---sql.script.file mysql-to-postgresql.sql
+--sql.script.file [source-to-sink].sql
 ```
 
-## 配置
-`/YOUR_SQL_SCRIPT_DIR/mysql-to-postgresql.sql` 是一个 sql 脚本文件,包含多个 Flink SQL 语句,可以用分号分隔。
-语句可以支持`CREATE TABLE`、`CRETAE VIEW`、`INSERT INTO`。 我们可以写sql来做数据集成。
+:::note
+`--sql.script.file` 需要指定一个 SQL 脚本文件,包含多个 Flink SQL 语句,可以用分号分隔。支持 `CREATE TABLE`、`CREATE VIEW`、`INSERT INTO` 等。
+:::
 
+### MySQL to PostgreSQL
 如果我们想从 MySQL 读取数据并写入 PostgreSQL,我们可以编写以下 SQL 脚本。
+
+- 准备 mysql-to-postgresql.sql
 ```sql
  CREATE TABLE `table_1`(
     `age` INT,
@@ -57,4 +62,12 @@ INSERT INTO `table_2`
     `name` AS `name`,
     `age` AS `age`
     FROM `table_1`;
-```
\ No newline at end of file
+```
+
+- 提交任务
+```shell
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-postgresql.sql
+```
+
+For other complete usage examples, please refer to [Example](example.md).
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/overview.md
index cb387ea4fa..e100700104 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/overview.md
@@ -28,13 +28,8 @@ Extract 节点列表是一组基于 <a href="https://flink.apache.org/">Apache F
 | <font color="DarkCyan">1.2.0</font> | <font color="MediumVioletRed">1.13.5</font> |
 
 ## SQL API 用法
-
-我们需要几个步骤来使用提供的连接器设置 Flink 集群。
-
-- 设置一个安装了 1.13.5 版本和 Java 8+ 的 Flink 集群。
-- 从 [下载](/zh-CN/download) 页面下载并解压 Sort Connectors jars (或者参考 [如何编译](../../quick_start/how_to_build.md) 编译需要的版本)。
-- 将下载并解压后的 Sort Connectors jars 放到 `FLINK_HOME/lib/`。
-- 重启 Flink 集群。
+- [部署 InLong Sort](modules/sort/quick_start.md)
+- 创建数据节点
 
 下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建 MySQL Extract 节点,并从中查询数据:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/overview.md
index 65e8fe318b..a3dc62f0a3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/overview.md
@@ -34,13 +34,8 @@ Load 节点列表是一组基于 <a href="https://flink.apache.org/">Apache Flin
 | <font color="DarkCyan">1.2.0</font> | <font color="MediumVioletRed">1.13.5</font> |
 
 ## SQL API 的用法
-
-我们需要几个步骤来使用提供的连接器设置 Flink 集群。
-
-- 设置一个安装了 1.13.5 版本和 Java 8+ 的 Flink 集群。 
-- 从 [下载](/zh-CN/download) 页面下载并解压 Sort Connectors jars (或者参考 [如何编译](../../quick_start/how_to_build.md) 编译需要的版本)。
-- 将下载并解压后的 Sort Connectors jars 放到 `FLINK_HOME/lib/`。
-- 重启 Flink 集群。
+- [部署 InLong Sort](modules/sort/quick_start.md)
+- 创建数据节点
 
 下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建一个 MySQL Load 节点并加载数据进去:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/deployment/bare_metal.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/deployment/bare_metal.md
index 7be5a21659..9dbc039399 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/deployment/bare_metal.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/deployment/bare_metal.md
@@ -4,8 +4,8 @@ sidebar_position: 4
 ---
 
 ## 环境要求
-- MySQL 5.7+
-- Flink 1.13.5
+- MySQL 5.7+ or PostgreSQL
+- [Apache Flink 1.13.5](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)
 
 ## 准备消息队列
 InLong 当前支持以下消息队列,根据使用情况**选择其一**即可。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/example.md
index 20e07b7c68..9840b57220 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/example.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/example.md
@@ -1,19 +1,16 @@
 ---
-title: 例子
+title: 使用示例
 sidebar_position: 3
 ---
 
-## 示例
-
 为了更容易创建 InLong Sort 作业,这里我们列出了一些数据流配置示例。下面将介绍 InLong Sort 的 SQL、Dashboard、Manager 客户端工具的使用。
 
 ## 环境要求
-- JDK 1.8.x
-- Flink 1.13.5
+- Apache Flink 1.13.5
 - MySQL
-- Kafka
-- Hadoop
-- Hive 3.x
+- Apache Kafka
+- Apache Hadoop
+- Apache Hive 3.x
 
 ## 准备 InLong Sort 和 Connectors
 你可以通过参考[部署指引](quick_start.md)准备 InLong Sort 和数据节点 Connectors。
@@ -27,11 +24,11 @@ sidebar_position: 3
 单表同步配置示例如下:
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/mysql-to-kafka.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-kafka.sql
 ```
 
-* mysql-to-kafka.sql
+- mysql-to-kafka.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -83,14 +80,16 @@ INSERT INTO `table_2`
 ```
 
 ### 读 Kafka 写 Hive
-
-**注意:**  首先需要在 hive 中创建 user 表。
+:::caution
+需要在 Hive 中先创建 `user` 表。
+:::
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/kafka-to-hive.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file kafka-to-hive.sql
 ```
-* kafka-to-hive.sql
+
+- kafka-to-hive.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -137,12 +136,6 @@ INSERT INTO `user`
     FROM `table_1`;
 
 ```
-备注:以上过程所有的 SQL 可以放在一个文件中提交执行。
-
-## 使用 Inlong Dashboard 方式
-
-目前 Dashboard 支持文件采集同步的方式,以上数据源可视化配置方式正在开发中。
-
-## 使用 Manager Client Tools 方式
 
-TODO: 未来发布的版本将会支持。
+## 其它 Connectors
+在 [Extract Node](data_node/extract_node/overview.md) 和 [Load Node](data_node/load_node/overview.md) 部分,有更丰富的 connector 可以使用,可根据使用场景参考配置。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/quick_start.md
index 6eca26e5bc..d4bbdf3a07 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/quick_start.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/quick_start.md
@@ -4,28 +4,33 @@ sidebar_position: 2
 ---
 
 ## 配置 Flink 运行环境
-当前 InLong Sort 是基于 Flink 的一个应用,因此运行 InLong Sort 应用前,需要准备好 [Flink 环境](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)。
+InLong Sort 是基于 Flink 的一个应用,需要准备好 [Apache Flink 环境](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)。
 
-由于当前 InLong Sort 依赖的是 Flink1.13.5 版本,因此在下载部署包时,请选择`flink-1.13.5-bin-scala_2.11.tgz`
+当前 InLong Sort 依赖的是 Apache Flink 1.13.5 版本,因此在下载部署包时,请选择 `flink-1.13.5-bin-scala_2.11.tgz`
 
 ## 准备安装文件
 - InLong Sort 运行文件,[下载](https://inlong.apache.org/zh-CN/download/) `apache-inlong-[version]-bin.tar.gz`
 - 数据节点 Connectors,[下载](https://inlong.apache.org/zh-CN/download/) `apache-inlong-[version]-sort-connectors.tar.gz`
 
-注意:Connectors 下载后可以将需要的 jars 放到`FLINK_HOME/lib/`下。  
+:::caution
+Connectors 下载后可以将需要的 jars 放到`FLINK_HOME/lib/`下。  
 如果使用`mysql-cdc-inlong` 连接器,请将 [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar)  包放到 `FLINK_HOME/lib/`下。
+:::
 
-## 启动 InLong Sort
+## 启动 InLong Sort 任务
 ```
 ./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
---sql.script.file mysql-to-postgresql.sql
+--sql.script.file [source-to-sink].sql
 ```
 
-## 配置
-`/YOUR_SQL_SCRIPT_DIR/mysql-to-postgresql.sql` 是一个 sql 脚本文件,包含多个 Flink SQL 语句,可以用分号分隔。
-语句可以支持`CREATE TABLE`、`CRETAE VIEW`、`INSERT INTO`。 我们可以写sql来做数据集成。
+:::note
+`--sql.script.file` 需要指定一个 SQL 脚本文件,包含多个 Flink SQL 语句,可以用分号分隔。支持 `CREATE TABLE`、`CREATE VIEW`、`INSERT INTO` 等。
+:::
 
+### MySQL to PostgreSQL
 如果我们想从 MySQL 读取数据并写入 PostgreSQL,我们可以编写以下 SQL 脚本。
+
+- 准备 mysql-to-postgresql.sql
 ```sql
  CREATE TABLE `table_1`(
     `age` INT,
@@ -57,4 +62,12 @@ INSERT INTO `table_2`
     `name` AS `name`,
     `age` AS `age`
     FROM `table_1`;
-```
\ No newline at end of file
+```
+
+- 提交任务
+```shell
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-postgresql.sql
+```
+
+For other complete usage examples, please refer to [Example](example.md).
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/overview.md b/versioned_docs/version-1.4.0/data_node/extract_node/overview.md
index dca63130bc..d8950c58de 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/overview.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/overview.md
@@ -26,13 +26,8 @@ The following table shows the version mapping between InLong<sup>®</sup> Extrac
 |          <font color="DarkCyan">1.2.0</font>           | <font color="MediumVioletRed">1.13.5</font> |
 
 ## Usage for SQL API
-
-We need several steps to setup a Flink cluster with the provided connector.
-
-1. Setup a Flink cluster with version 1.13.5 and Java 8+ installed.
-2. Download and the Sort Connectors jars from the [Downloads](/download) page (or [build yourself](../../quick_start/how_to_build.md)).
-3. Put the Sort Connectors jars under `FLINK_HOME/lib/`.
-4. Restart the Flink cluster.
+- [Deploy InLong Sort](modules/sort/quick_start.md)
+- Create Data Node
 
 The example shows how to create a MySQL Extract Node in [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) and execute queries on it.
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/overview.md b/versioned_docs/version-1.4.0/data_node/load_node/overview.md
index 1753798dcd..72f9d0073e 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/overview.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/overview.md
@@ -34,13 +34,8 @@ The following table shows the version mapping between InLong<sup>®</sup> Load N
 |  <font color="DarkCyan">1.2.0</font>  | <font color="MediumVioletRed">1.13.5</font> |
 
 ## Usage for SQL API
-
-We need several steps to setup a Flink cluster with the provided connector.
-
-1. Setup a Flink cluster with version 1.13.5 and Java 8+ installed.
-2. Download and decompress the Sort Connectors jars from the [Downloads](/download) page (or [build yourself](../../quick_start/how_to_build.md)).
-3. Put the Sort Connectors jars under `FLINK_HOME/lib/`.
-4. Restart the Flink cluster.
+- [Deploy InLong Sort](modules/sort/quick_start.md)
+- Create Data Node
 
 The example shows how to create a MySQL Load Node in [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) and load data to it.
 
diff --git a/versioned_docs/version-1.4.0/deployment/bare_metal.md b/versioned_docs/version-1.4.0/deployment/bare_metal.md
index 3ff2896455..1d327336bf 100644
--- a/versioned_docs/version-1.4.0/deployment/bare_metal.md
+++ b/versioned_docs/version-1.4.0/deployment/bare_metal.md
@@ -4,8 +4,8 @@ sidebar_position: 4
 ---
 
 ## Environment Requirements
-- MySQL 5.7+
-- Flink 1.13.5
+- MySQL 5.7+ or PostgreSQL
+- [Apache Flink 1.13.5](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/)
 
 ## Prepare Message Queue
 InLong supports the following Message Queue services now; you can choose one of them.
diff --git a/versioned_docs/version-1.4.0/modules/sort/example.md b/versioned_docs/version-1.4.0/modules/sort/example.md
index 9a3317c652..19fed89a32 100644
--- a/versioned_docs/version-1.4.0/modules/sort/example.md
+++ b/versioned_docs/version-1.4.0/modules/sort/example.md
@@ -3,18 +3,15 @@ title: Example
 sidebar_position: 3
 ---
 
-## Overview
-
 To make it easier for you to create InLong Sort jobs, here we list some data stream configuration examples.
 The following will introduce the SQL, Dashboard, and Manager Client Tools methods to use InLong Sort.
 
 ## Environment Requirements
-- JDK 1.8.x
-- Flink 1.13.5
+- Apache Flink 1.13.5
 - MySQL
-- Kafka
-- Hadoop
-- Hive 3.x
+- Apache Kafka
+- Apache Hadoop
+- Apache Hive 3.x
 
 ## Prepare InLong Sort And Connectors
 You can prepare InLong Sort and Data Node Connectors by referring to [Deployment Guide](quick_start.md).
@@ -28,11 +25,11 @@ This example defines the data flow for a single table(mysql-->kafka-->hive).
 Single table sync example:
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/mysql-to-kafka.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-kafka.sql
 ```
 
-* mysql-to-kafka.sql
+- mysql-to-kafka.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -84,14 +81,16 @@ INSERT INTO `table_2`
 ```
 
 ### Kafka to Hive
-
-**Note:**  First you need to create user table in Hive.
+:::caution
+First, you need to create the `user` table in Hive.
+:::
 
 ```shell
-./bin/flink run -c org.apache.inlong.sort.Entrance FLINK_HOME/lib/sort-dist-[version].jar \
---sql.script.file /YOUR_SQL_SCRIPT_DIR/kafka-to-hive.sql
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file kafka-to-hive.sql
 ```
-* kafka-to-hive.sql
+
+- kafka-to-hive.sql
 
 ```sql
 CREATE TABLE `table_1`(
@@ -138,12 +137,6 @@ INSERT INTO `user`
     FROM `table_1`;
 
 ```
-Note: Of course you can also put all the SQL in one file.
-
-## Usage for Dashboard
-
-The underlying capabilities are already available and will complement the Dashboard capabilities in the future.
-
-## Usage for Manager Client Tools
 
-TODO: It will be supported in the future.
+## Other Connectors
+There are many supported [Extract Node](data_node/extract_node/overview.md) and [Load Node](data_node/load_node/overview.md) connectors, and you can use them directly.
diff --git a/versioned_docs/version-1.4.0/modules/sort/quick_start.md b/versioned_docs/version-1.4.0/modules/sort/quick_start.md
index 93994bc7c2..4afd75904d 100644
--- a/versioned_docs/version-1.4.0/modules/sort/quick_start.md
+++ b/versioned_docs/version-1.4.0/modules/sort/quick_start.md
@@ -4,30 +4,33 @@ sidebar_position: 2
 ---
 
 ## Set up Flink Environment
-Currently, InLong Sort is based on Flink, before you run an InLong Sort Application,
+InLong Sort is based on Apache Flink, so you need to set up an [Apache Flink Environment](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/).
 
-you need to set up [Flink Environment](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/overview/).
-
-Currently, InLong Sort relies on Flink-1.13.5. Chose `flink-1.13.5-bin-scala_2.11.tgz` when downloading package.
+InLong Sort relies on Apache Flink 1.13.5. Choose `flink-1.13.5-bin-scala_2.11.tgz` when downloading the package.
 
 ## Prepare installation files
 - InLong Sort file, [Download](https://inlong.apache.org/download/) `apache-inlong-[version]-bin.tar.gz`
 - Data Nodes Connectors, [Download](https://inlong.apache.org/download/) `apache-inlong-[version]-sort-connectors.tar.gz`
 
-Notice: Please put required Connectors jars into under `FLINK_HOME/lib/` after download.  
-Put [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar) to `FLINK_HOME/lib/` when you use `mysql-cdc-inlong` connector. 
+:::caution
+Please put the required Connectors JARs under `FLINK_HOME/lib/` after downloading.  
+Put [mysql-connector-java:8.0.21.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar) into `FLINK_HOME/lib/` when you use the `mysql-cdc-inlong` connector.
+:::
 
-## Start an inlong-sort application
+## Start an InLong Sort Job
 ```shell
 ./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
---sql.script.file mysql-to-postgresql.sql
+--sql.script.file [source-to-sink].sql
 ```
 
-## Configuration
-`/YOUR_SQL_SCRIPT_DIR/mysql-to-postgresql.sql` is a sql script file includes multi Flink SQL statements that can be separated by semicolon.  
-Statement can support `CREATE TABLE`, `CRETAE VIEW`, `INSERT INTO`. We can write sql to do data integration.  
+:::note
+`--sql.script.file` specifies a SQL script file containing multiple Flink SQL statements separated by semicolons. It supports `CREATE TABLE`, `CREATE VIEW`, `INSERT INTO`, etc.
+:::
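Since the jar path and script name vary per release, the submit command can be templated with shell variables. A sketch follows; the version number is an assumed example, and the command is only echoed here since `flink` and the release jar are not assumed to be present:

```shell
# Assemble the submit command from variables; echoed rather than executed.
VERSION=1.4.0                          # assumed release number
SQL_FILE=mysql-to-postgresql.sql       # your SQL script file
CMD="./bin/flink run -c org.apache.inlong.sort.Entrance \
apache-inlong-${VERSION}-bin/inlong-sort/sort-dist-${VERSION}.jar \
--sql.script.file ${SQL_FILE}"
echo "$CMD"
```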
 
+### MySQL to PostgreSQL
 We can write the following SQL script if we want to read data from MySQL and write it into PostgreSQL.
+
+- Prepare mysql-to-postgresql.sql
 ```sql
  CREATE TABLE `table_1`(
     `age` INT,
@@ -59,4 +62,12 @@ INSERT INTO `table_2`
     `name` AS `name`,
     `age` AS `age`
     FROM `table_1`;
-```
\ No newline at end of file
+```
+
+- Submit job
+```shell
+./bin/flink run -c org.apache.inlong.sort.Entrance apache-inlong-[version]-bin/inlong-sort/sort-dist-[version].jar \
+--sql.script.file mysql-to-postgresql.sql
+```
+
+For other complete usage examples, please refer to [Example](example.md).
\ No newline at end of file