Posted to commits@seatunnel.apache.org by zh...@apache.org on 2022/03/29 13:33:40 UTC

[incubator-seatunnel] branch dev updated: [doc] Change docs structure and make it more suitable (#1611)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new f3c9025  [doc] Change docs structure and make it more suitable (#1611)
f3c9025 is described below

commit f3c9025610583fa31b7f86fdcd3ddd25ee7ed975
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Tue Mar 29 21:33:33 2022 +0800

    [doc] Change docs structure and make it more suitable (#1611)
    
    This patch changes our docs structure in the dev branch to make it more suitable,
    which includes:

    * Merge the Spark and Flink engine content into single documents; they were separate before.
    * Add the root directories `connector`, `transform`, and `command`, extracted from the
      engine-specific Spark and Flink docs.
    * Add `tab` components and close #1456
---
 README.md                                          | 112 ++++-----
 README_zh_CN.md                                    | 110 ++++-----
 docs/en/command/usage.mdx                          | 161 +++++++++++++
 docs/en/connector/config-example.md                |   8 +
 .../sink-plugins => connector/sink}/Clickhouse.md  |  14 +-
 .../sink}/ClickhouseFile.md                        |  13 +-
 docs/en/connector/sink/Console.mdx                 | 101 ++++++++
 docs/en/connector/sink/Doris.mdx                   | 174 ++++++++++++++
 .../sink-plugins => connector/sink}/Druid.md       |   9 +
 .../sink/Elasticsearch.mdx}                        |  78 +++++--
 .../sink-plugins => connector/sink}/Email.md       |  11 +-
 docs/en/connector/sink/File.mdx                    | 190 +++++++++++++++
 .../sink-plugins => connector/sink}/Hbase.md       |  13 +-
 .../sink-plugins => connector/sink}/Hive.md        |  11 +-
 .../sink-plugins => connector/sink}/Hudi.md        |  11 +-
 .../sink-plugins => connector/sink}/Iceberg.md     |  11 +-
 .../sink-plugins => connector/sink}/InfluxDb.md    |  11 +-
 .../Jdbc.md => connector/sink/Jdbc.mdx}            | 107 ++++++++-
 .../sink-plugins => connector/sink}/Kafka.md       |  25 +-
 .../sink-plugins => connector/sink}/Kudu.md        |  11 +-
 .../sink-plugins => connector/sink}/MongoDB.md     |  11 +-
 .../sink-plugins => connector/sink}/Phoenix.md     |  13 +-
 .../sink-plugins => connector/sink}/Redis.md       |  11 +-
 .../sink-plugins => connector/sink}/Tidb.md        |  11 +-
 .../sink/common-options.md}                        |   6 +-
 .../source-plugins => connector/source}/Druid.md   |  13 +-
 .../source}/Elasticsearch.md                       |  13 +-
 docs/en/connector/source/Fake.mdx                  | 135 +++++++++++
 .../File.md => connector/source/File.mdx}          | 102 ++++++--
 .../source-plugins => connector/source}/Hbase.md   |  13 +-
 .../source-plugins => connector/source}/Hive.md    |  13 +-
 .../source-plugins => connector/source}/Hudi.md    |  11 +-
 .../source-plugins => connector/source}/Iceberg.md |  13 +-
 .../source}/InfluxDb.md                            |  11 +-
 docs/en/connector/source/Jdbc.mdx                  | 205 ++++++++++++++++
 .../Kafka.md => connector/source/Kafka.mdx}        | 121 +++++++---
 .../source-plugins => connector/source}/Kudu.md    |  13 +-
 .../source-plugins => connector/source}/MongoDB.md |  13 +-
 .../source-plugins => connector/source}/Phoenix.md |  13 +-
 .../source-plugins => connector/source}/Redis.md   |  13 +-
 docs/en/connector/source/Socket.mdx                | 102 ++++++++
 .../source-plugins => connector/source}/Tidb.md    |  13 +-
 docs/en/connector/source/common-options.mdx        |  89 +++++++
 .../source-plugins => connector/source}/neo4j.md   |   9 +-
 docs/en/deployment.mdx                             | 125 ++++++++++
 .../new-license.md}                                |   2 +-
 docs/en/{developement => development}/setup.md     |   2 +-
 docs/en/{FAQ.md => faq.md}                         |   7 +
 docs/en/flink/commands/start-seatunnel-flink.sh.md | 258 ---------------------
 docs/en/flink/configuration/ConfigExamples.md      |  49 ----
 .../en/flink/configuration/sink-plugins/Console.md |  32 ---
 docs/en/flink/configuration/sink-plugins/Doris.md  |  80 -------
 .../configuration/sink-plugins/Elasticsearch.md    |  64 -----
 docs/en/flink/configuration/sink-plugins/File.md   |  91 --------
 docs/en/flink/configuration/sink-plugins/Jdbc.md   |  68 ------
 docs/en/flink/configuration/source-plugins/Fake.md |  43 ----
 docs/en/flink/configuration/source-plugins/Jdbc.md |  80 -------
 .../flink/configuration/source-plugins/Socket.md   |  38 ---
 .../configuration/source-plugins/source-plugin.md  |  35 ---
 .../flink/configuration/transform-plugins/Split.md |  41 ----
 .../flink/configuration/transform-plugins/Sql.md   |  26 ---
 .../transform-plugins/transform-plugin.md          |  52 -----
 docs/en/flink/deployment.md                        |  35 ---
 docs/en/flink/installation.md                      |  31 ---
 docs/en/flink/quick-start.md                       | 113 ---------
 docs/en/intro/about.md                             |  73 ++++++
 docs/en/intro/history.md                           |  15 ++
 docs/en/intro/why.md                               |  13 ++
 docs/en/introduction.md                            | 169 --------------
 docs/en/spark/commands/start-seatunnel-spark.sh.md |  43 ----
 docs/en/spark/configuration/ConfigExamples.md      |   9 -
 .../en/spark/configuration/sink-plugins/Console.md |  38 ---
 docs/en/spark/configuration/sink-plugins/Doris.md  |  53 -----
 docs/en/spark/configuration/sink-plugins/File.md   |  69 ------
 docs/en/spark/configuration/sink-plugins/Kafka.md  |  38 ---
 .../configuration/sink-plugins/sink-plugin.md      |  31 ---
 docs/en/spark/configuration/source-plugins/Fake.md |  21 --
 .../configuration/source-plugins/FakeStream.md     |  47 ----
 docs/en/spark/configuration/source-plugins/File.md |  41 ----
 docs/en/spark/configuration/source-plugins/Jdbc.md |  97 --------
 .../configuration/source-plugins/KafkaStream.md    |  49 ----
 .../configuration/source-plugins/SocketStream.md   |  35 ---
 .../configuration/source-plugins/source-plugin.md  |  30 ---
 .../spark/configuration/transform-plugins/Sql.md   |  49 ----
 .../transform-plugins/transform-plugin.md          |  46 ----
 docs/en/spark/deployment.md                        |  72 ------
 docs/en/spark/installation.md                      |  29 ---
 docs/en/spark/quick-start.md                       | 108 ---------
 docs/en/start/docker.md                            |   8 +
 docs/en/start/local.mdx                            | 149 ++++++++++++
 docs/en/transform/common-options.mdx               | 116 +++++++++
 .../Json.md => transform/json.md}                  |  10 +-
 .../Split.md => transform/split.mdx}               |  75 +++++-
 docs/en/transform/sql.md                           |  60 +++++
 docs/sidebars.js                                   | 102 +++++---
 95 files changed, 2571 insertions(+), 2544 deletions(-)

diff --git a/README.md b/README.md
index a6d1801..ba59f8c 100644
--- a/README.md
+++ b/README.md
@@ -62,66 +62,56 @@ processing plug-in, because the whole system is easy to expand.
 
 ## Plugins supported by SeaTunnel
 
-| <div style="width: 130pt">Spark Connector Plugins | <div style="width: 80pt">Database Type | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                        |
-|:------------------------:|:--------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|Batch                     |Fake            |[doc](./docs/en/spark/configuration/source-plugins/Fake.md)         |                                                                     |
-|                          |ElasticSearch   |[doc](./docs/en/spark/configuration/source-plugins/Elasticsearch.md)|[doc](./docs/en/spark/configuration/sink-plugins/Elasticsearch.md)   |
-|                          |File            |[doc](./docs/en/spark/configuration/source-plugins/File.md)         |[doc](./docs/en/spark/configuration/sink-plugins/File.md)            |
-|                          |Hive            |[doc](./docs/en/spark/configuration/source-plugins/Hive.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Hive.md)          |
-|                          |Hudi            |[doc](./docs/en/spark/configuration/source-plugins/Hudi.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Hudi.md)            |
-|                          |Jdbc            |[doc](./docs/en/spark/configuration/source-plugins/Jdbc.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Jdbc.md)            |
-|                          |MongoDB         |[doc](./docs/en/spark/configuration/source-plugins/MongoDB.md)      |[doc](./docs/en/spark/configuration/sink-plugins/MongoDB.md)         |
-|                          |Neo4j           |[doc](./docs/en/spark/configuration/source-plugins/neo4j.md)        |                                                                     |
-|                          |Phoenix         |[doc](./docs/en/spark/configuration/source-plugins/Phoenix.md)      |[doc](./docs/en/spark/configuration/sink-plugins/Phoenix.md)         |
-|                          |Redis           |[doc](./docs/en/spark/configuration/source-plugins/Redis.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Redis.md)           |
-|                          |Tidb            |[doc](./docs/en/spark/configuration/source-plugins/Tidb.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Tidb.md)            |
-|                          |Clickhouse      |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Clickhouse.md)      |
-|                          |Doris           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Doris.md)           |
-|                          |Email           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Email.md)           |
-|                          |Hbase           |[doc](./docs/en/spark/configuration/source-plugins/Hbase.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Hbase.md)           |
-|                          |Kafka           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Kafka.md)           |
-|                          |Console         |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Console.md)         |
-|                          |Kudu            |[doc](./docs/en/spark/configuration/source-plugins/Kudu.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Kudu.md)            |
-|                          |Redis           |[doc](./docs/en/spark/configuration/source-plugins/Redis.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Redis.md)           |
-|Stream                    |FakeStream      |[doc](./docs/en/spark/configuration/source-plugins/FakeStream.md)   |                                                                     |
-|                          |KafkaStream     |[doc](./docs/en/spark/configuration/source-plugins/KafkaStream.md)  |                                                                     |
-|                          |SocketStream    |[doc](./docs/en/spark/configuration/source-plugins/SocketStream.md) |                                                                     |
-
-| <div style="width: 130pt">Flink Connector Plugins | <div style="width: 80pt">Database Type  | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                                                                |
-|:------------------------:|:--------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|                          |Druid           |[doc](./docs/en/flink/configuration/source-plugins/Druid.md)        |[doc](./docs/en/flink/configuration/sink-plugins/Druid.md)           |
-|                          |Fake            |[doc](./docs/en/flink/configuration/source-plugins/Fake.md)         |                                                                     |
-|                          |File            |[doc](./docs/en/flink/configuration/source-plugins/File.md)         |[doc](./docs/en/flink/configuration/sink-plugins/File.md)            |
-|                          |InfluxDb        |[doc](./docs/en/flink/configuration/source-plugins/InfluxDb.md)     |[doc](./docs/en/flink/configuration/sink-plugins/InfluxDb.md)        |
-|                          |Jdbc            |[doc](./docs/en/flink/configuration/source-plugins/Jdbc.md)         |[doc](./docs/en/flink/configuration/sink-plugins/Jdbc.md)            |
-|                          |Kafka           |[doc](./docs/en/flink/configuration/source-plugins/Kafka.md)        |[doc](./docs/en/flink/configuration/sink-plugins/Kafka.md)           |
-|                          |Socket          |[doc](./docs/en/flink/configuration/source-plugins/Socket.md)       |                                                                     |
-|                          |Console         |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Console.md)         |
-|                          |Doris           |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Doris.md)           |
-|                          |ElasticSearch   |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Elasticsearch.md)   |
-
-|<div style="width: 130pt">Transform Plugins| <div style="width: 100pt">Spark                                    | <div style="width: 100pt">Flink                                     |
-|:-----------------------------------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|Add                                        |                                                                    |                                                                     |
-|CheckSum                                   |                                                                    |                                                                     |
-|Convert                                    |                                                                    |                                                                     |
-|Date                                       |                                                                    |                                                                     |
-|Drop                                       |                                                                    |                                                                     |
-|Grok                                       |                                                                    |                                                                     |
-|Json                                       |[doc](./docs/en/spark/configuration/transform-plugins/Json.md)      |                                                                     |
-|Kv                                         |                                                                    |                                                                     |
-|Lowercase                                  |                                                                    |                                                                     |
-|Remove                                     |                                                                    |                                                                     |
-|Rename                                     |                                                                    |                                                                     |
-|Repartition                                |                                                                    |                                                                     |
-|Replace                                    |                                                                    |                                                                     |
-|Sample                                     |                                                                    |                                                                     |
-|Split                                      |[doc](./docs/en/spark/configuration/transform-plugins/Split.md)     |[doc](./docs/en/flink/configuration/transform-plugins/Split.md)      |
-|Sql                                        |[doc](./docs/en/spark/configuration/transform-plugins/Sql.md)       |[doc](./docs/en/flink/configuration/transform-plugins/Sql.md)        |
-|Table                                      |                                                                    |                                                                     |
-|Truncate                                   |                                                                    |                                                                     |
-|Uppercase                                  |                                                                    |                                                                     |
-|Uuid                                       |                                                                    |                                                                     |
+### Connector
+
+| <div style="width: 80pt">Connector Type | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                     |
+|:--------------:|:--------------------------------------------------------:|:-------------------------------------------------:|
+|Clickhouse      |                                                          |[doc](./docs/en/connector/sink/Clickhouse.md)      |
+|Doris           |                                                          |[doc](./docs/en/connector/sink/Doris.mdx)          |
+|Druid           |[doc](./docs/en/connector/source/Druid.md)                |[doc](./docs/en/connector/sink/Druid.md)           |
+|ElasticSearch   |[doc](./docs/en/connector/source/Elasticsearch.md)        |[doc](./docs/en/connector/sink/Elasticsearch.mdx)  |
+|Email           |                                                          |[doc](./docs/en/connector/sink/Email.md)           |
+|Fake            |[doc](./docs/en/connector/source/Fake.mdx)                |                                                   |
+|File            |[doc](./docs/en/connector/source/File.mdx)                |[doc](./docs/en/connector/sink/File.mdx)           |
+|Hbase           |[doc](./docs/en/connector/source/Hbase.md)                |[doc](./docs/en/connector/sink/Hbase.md)           |
+|Hive            |[doc](./docs/en/connector/source/Hive.md)                 |[doc](./docs/en/connector/sink/Hive.md)            |
+|Hudi            |[doc](./docs/en/connector/source/Hudi.md)                 |[doc](./docs/en/connector/sink/Hudi.md)            |
+|Iceberg         |[doc](./docs/en/connector/source/Iceberg.md)              |[doc](./docs/en/connector/sink/Iceberg.md)         |
+|InfluxDb        |[doc](./docs/en/connector/source/InfluxDb.md)             |[doc](./docs/en/connector/sink/InfluxDb.md)        |
+|Jdbc            |[doc](./docs/en/connector/source/Jdbc.mdx)                |[doc](./docs/en/connector/sink/Jdbc.mdx)           |
+|Kafka           |[doc](./docs/en/connector/source/Kafka.mdx)               |[doc](./docs/en/connector/sink/Kafka.md)           |
+|Kudu            |[doc](./docs/en/connector/source/Kudu.md)                 |[doc](./docs/en/connector/sink/Kudu.md)            |
+|MongoDB         |[doc](./docs/en/connector/source/MongoDB.md)              |[doc](./docs/en/connector/sink/MongoDB.md)         |
+|Neo4j           |[doc](./docs/en/connector/source/neo4j.md)                |                                                   |
+|Phoenix         |[doc](./docs/en/connector/source/Phoenix.md)              |[doc](./docs/en/connector/sink/Phoenix.md)         |
+|Redis           |[doc](./docs/en/connector/source/Redis.md)                |[doc](./docs/en/connector/sink/Redis.md)           |
+|Socket          |[doc](./docs/en/connector/source/Socket.mdx)              |                                                   |
+|Tidb            |[doc](./docs/en/connector/source/Tidb.md)                 |[doc](./docs/en/connector/sink/Tidb.md)            |
+
+### Transform
+
+|<div style="width: 130pt">Transform Plugins|
+|:-----------------------------------------:|
+|Add                                        |
+|CheckSum                                   |
+|Convert                                    |
+|Date                                       |
+|Drop                                       |
+|Grok                                       |
+|[Json](./docs/en/transform/json.md)        |
+|Kv                                         |
+|Lowercase                                  |
+|Remove                                     |
+|Rename                                     |
+|Repartition                                |
+|Replace                                    |
+|Sample                                     |
+|[Split](./docs/en/transform/split.mdx)     |
+|[Sql](./docs/en/transform/sql.md)          |
+|Table                                      |
+|Truncate                                   |
+|Uppercase                                  |
+|Uuid                                       |
 
 ## Environmental dependency
 
@@ -137,7 +127,7 @@ a cluster environment, because SeaTunnel supports standalone operation. Note: Se
 and Flink.
 
 ## Compiling project
-Follow this [document](docs/en/developement/setup.md).
+Follow this [document](docs/en/development/setup.md).
 
 ## Downloads
 
diff --git a/README_zh_CN.md b/README_zh_CN.md
index 43fdf82..83e84f3 100644
--- a/README_zh_CN.md
+++ b/README_zh_CN.md
@@ -59,66 +59,56 @@ Source[数据源输入] -> Transform[数据处理] -> Sink[结果输出]
 
 ## SeaTunnel 支持的插件
 
-| <div style="width: 130pt">Spark Connector Plugins | <div style="width: 80pt">Database Type | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                        |
-|:------------------------:|:--------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|Batch                     |Fake            |[doc](./docs/en/spark/configuration/source-plugins/Fake.md)         |                                                                     |
-|                          |ElasticSearch   |[doc](./docs/en/spark/configuration/source-plugins/Elasticsearch.md)|[doc](./docs/en/spark/configuration/sink-plugins/Elasticsearch.md)   |
-|                          |File            |[doc](./docs/en/spark/configuration/source-plugins/File.md)         |[doc](./docs/en/spark/configuration/sink-plugins/File.md)            |
-|                          |Hive            |[doc](./docs/en/spark/configuration/source-plugins/Hive.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Hive.md)          |
-|                          |Hudi            |[doc](./docs/en/spark/configuration/source-plugins/Hudi.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Hudi.md)            |
-|                          |Jdbc            |[doc](./docs/en/spark/configuration/source-plugins/Jdbc.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Jdbc.md)            |
-|                          |MongoDB         |[doc](./docs/en/spark/configuration/source-plugins/MongoDB.md)      |[doc](./docs/en/spark/configuration/sink-plugins/MongoDB.md)         |
-|                          |Neo4j           |[doc](./docs/en/spark/configuration/source-plugins/neo4j.md)        |                                                                     |
-|                          |Phoenix         |[doc](./docs/en/spark/configuration/source-plugins/Phoenix.md)      |[doc](./docs/en/spark/configuration/sink-plugins/Phoenix.md)         |
-|                          |Redis           |[doc](./docs/en/spark/configuration/source-plugins/Redis.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Redis.md)           |
-|                          |Tidb            |[doc](./docs/en/spark/configuration/source-plugins/Tidb.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Tidb.md)            |
-|                          |Clickhouse      |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Clickhouse.md)      |
-|                          |Doris           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Doris.md)           |
-|                          |Email           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Email.md)           |
-|                          |Hbase           |[doc](./docs/en/spark/configuration/source-plugins/Hbase.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Hbase.md)           |
-|                          |Kafka           |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Kafka.md)           |
-|                          |Console         |                                                                    |[doc](./docs/en/spark/configuration/sink-plugins/Console.md)         |
-|                          |Kudu            |[doc](./docs/en/spark/configuration/source-plugins/Kudu.md)         |[doc](./docs/en/spark/configuration/sink-plugins/Kudu.md)            |
-|                          |Redis           |[doc](./docs/en/spark/configuration/source-plugins/Redis.md)        |[doc](./docs/en/spark/configuration/sink-plugins/Redis.md)           |
-|Stream                    |FakeStream      |[doc](./docs/en/spark/configuration/source-plugins/FakeStream.md)   |                                                                     |
-|                          |KafkaStream     |[doc](./docs/en/spark/configuration/source-plugins/KafkaStream.md)  |                                                                     |
-|                          |SocketStream    |[doc](./docs/en/spark/configuration/source-plugins/SocketStream.md) |                                                                     |
-
-| <div style="width: 130pt">Flink Connector Plugins | <div style="width: 80pt">Database Type  | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                                                                |
-|:------------------------:|:--------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|                          |Druid           |[doc](./docs/en/flink/configuration/source-plugins/Druid.md)        |[doc](./docs/en/flink/configuration/sink-plugins/Druid.md)           |
-|                          |Fake            |[doc](./docs/en/flink/configuration/source-plugins/Fake.md)         |                                                                     |
-|                          |File            |[doc](./docs/en/flink/configuration/source-plugins/File.md)         |[doc](./docs/en/flink/configuration/sink-plugins/File.md)            |
-|                          |InfluxDb        |[doc](./docs/en/flink/configuration/source-plugins/InfluxDb.md)     |[doc](./docs/en/flink/configuration/sink-plugins/InfluxDb.md)        |
-|                          |Jdbc            |[doc](./docs/en/flink/configuration/source-plugins/Jdbc.md)         |[doc](./docs/en/flink/configuration/sink-plugins/Jdbc.md)            |
-|                          |Kafka           |[doc](./docs/en/flink/configuration/source-plugins/Kafka.md)        |[doc](./docs/en/flink/configuration/sink-plugins/Kafka.md)           |
-|                          |Socket          |[doc](./docs/en/flink/configuration/source-plugins/Socket.md)       |                                                                     |
-|                          |Console         |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Console.md)         |
-|                          |Doris           |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Doris.md)           |
-|                          |ElasticSearch   |                                                                    |[doc](./docs/en/flink/configuration/sink-plugins/Elasticsearch.md)   |
-
-|<div style="width: 130pt">Transform Plugins| <div style="width: 100pt">Spark                                    | <div style="width: 100pt">Flink                                     |
-|:-----------------------------------------:|:------------------------------------------------------------------:|:-------------------------------------------------------------------:|
-|Add                                        |                                                                    |                                                                     |
-|CheckSum                                   |                                                                    |                                                                     |
-|Convert                                    |                                                                    |                                                                     |
-|Date                                       |                                                                    |                                                                     |
-|Drop                                       |                                                                    |                                                                     |
-|Grok                                       |                                                                    |                                                                     |
-|Json                                       |[doc](./docs/en/spark/configuration/transform-plugins/Json.md)      |                                                                     |
-|Kv                                         |                                                                    |                                                                     |
-|Lowercase                                  |                                                                    |                                                                     |
-|Remove                                     |                                                                    |                                                                     |
-|Rename                                     |                                                                    |                                                                     |
-|Repartition                                |                                                                    |                                                                     |
-|Replace                                    |                                                                    |                                                                     |
-|Sample                                     |                                                                    |                                                                     |
-|Split                                      |[doc](./docs/en/spark/configuration/transform-plugins/Split.md)     |[doc](./docs/en/flink/configuration/transform-plugins/Split.md)      |
-|Sql                                        |[doc](./docs/en/spark/configuration/transform-plugins/Sql.md)       |[doc](./docs/en/flink/configuration/transform-plugins/Sql.md)        |
-|Table                                      |                                                                    |                                                                     |
-|Truncate                                   |                                                                    |                                                                     |
-|Uppercase                                  |                                                                    |                                                                     |
-|Uuid                                       |                                                                    |                                                                     |
+### Connector
+
+| <div style="width: 80pt">Connector Type | <div style="width: 50pt">Source | <div style="width: 50pt">Sink                     |
+|:--------------:|:--------------------------------------------------------:|:-------------------------------------------------:|
+|Clickhouse      |                                                          |[doc](./docs/en/connector/sink/Clickhouse.md)      |
+|Doris           |                                                          |[doc](./docs/en/connector/sink/Doris.mdx)          |
+|Druid           |[doc](./docs/en/connector/source/Druid.md)                |[doc](./docs/en/connector/sink/Druid.md)           |
+|ElasticSearch   |[doc](./docs/en/connector/source/Elasticsearch.md)        |[doc](./docs/en/connector/sink/Elasticsearch.mdx)  |
+|Email           |                                                          |[doc](./docs/en/connector/sink/Email.md)           |
+|Fake            |[doc](./docs/en/connector/source/Fake.mdx)                |                                                   |
+|File            |[doc](./docs/en/connector/source/File.mdx)                |[doc](./docs/en/connector/sink/File.mdx)           |
+|Hbase           |[doc](./docs/en/connector/source/Hbase.md)                |[doc](./docs/en/connector/sink/Hbase.md)           |
+|Hive            |[doc](./docs/en/connector/source/Hive.md)                 |[doc](./docs/en/connector/sink/Hive.md)            |
+|Hudi            |[doc](./docs/en/connector/source/Hudi.md)                 |[doc](./docs/en/connector/sink/Hudi.md)            |
+|Iceberg         |[doc](./docs/en/connector/source/Iceberg.md)              |[doc](./docs/en/connector/sink/Iceberg.md)         |
+|InfluxDb        |[doc](./docs/en/connector/source/InfluxDb.md)             |[doc](./docs/en/connector/sink/InfluxDb.md)        |
+|Jdbc            |[doc](./docs/en/connector/source/Jdbc.mdx)                |[doc](./docs/en/connector/sink/Jdbc.mdx)           |
+|Kafka           |[doc](./docs/en/connector/source/Kafka.mdx)               |[doc](./docs/en/connector/sink/Kafka.md)           |
+|Kudu            |[doc](./docs/en/connector/source/Kudu.md)                 |[doc](./docs/en/connector/sink/Kudu.md)            |
+|MongoDB         |[doc](./docs/en/connector/source/MongoDB.md)              |[doc](./docs/en/connector/sink/MongoDB.md)         |
+|Neo4j           |[doc](./docs/en/connector/source/neo4j.md)                |                                                   |
+|Phoenix         |[doc](./docs/en/connector/source/Phoenix.md)              |[doc](./docs/en/connector/sink/Phoenix.md)         |
+|Redis           |[doc](./docs/en/connector/source/Redis.md)                |[doc](./docs/en/connector/sink/Redis.md)           |
+|Socket          |[doc](./docs/en/connector/source/Socket.mdx)              |                                                   |
+|Tidb            |[doc](./docs/en/connector/source/Tidb.md)                 |[doc](./docs/en/connector/sink/Tidb.md)            |
+
+### Transform
+
+|<div style="width: 130pt">Transform Plugins|
+|:-----------------------------------------:|
+|Add                                        |
+|CheckSum                                   |
+|Convert                                    |
+|Date                                       |
+|Drop                                       |
+|Grok                                       |
+|[Json](./docs/en/transform/json.md)        |
+|Kv                                         |
+|Lowercase                                  |
+|Remove                                     |
+|Rename                                     |
+|Repartition                                |
+|Replace                                    |
+|Sample                                     |
+|[Split](./docs/en/transform/split.mdx)     |
+|[Sql](./docs/en/transform/sql.md)          |
+|Table                                      |
+|Truncate                                   |
+|Uppercase                                  |
+|Uuid                                       |
 
 ## 环境依赖
 
diff --git a/docs/en/command/usage.mdx b/docs/en/command/usage.mdx
new file mode 100644
index 0000000..5533fc5
--- /dev/null
+++ b/docs/en/command/usage.mdx
@@ -0,0 +1,161 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Command usage
+
+## Command Entrypoint
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+bin/start-seatunnel-spark.sh
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+bin/start-seatunnel-flink.sh  
+```
+
+</TabItem>
+</Tabs>
+
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+bin/start-seatunnel-spark.sh \
+    -c config-path \
+    -m master \
+    -e deploy-mode \
+    -i city=beijing
+```
+
+- Use `-m` or `--master` to specify the cluster manager
+
+- Use `-e` or `--deploy-mode` to specify the deployment mode
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -c config-path \
+    -i key=value \
+    [other params]
+```
+
+</TabItem>
+</Tabs>
+
+- Use `-c` or `--config` to specify the path of the configuration file
+
+- Use `-i` or `--variable` to specify variables in the configuration file; multiple variables can be configured (see the sketch below)
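+
+A minimal, illustrative sketch (not from the original page) of passing several variables, assuming `-i` may be repeated once per variable and using hypothetical values:
+
+```bash
+# hypothetical example: each variable gets its own -i flag
+bin/start-seatunnel-spark.sh \
+    -c config-path \
+    -m yarn \
+    -e client \
+    -i city=beijing \
+    -i date=20220329
+```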
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+# Yarn client mode
+./bin/start-seatunnel-spark.sh \
+    --master yarn \
+    --deploy-mode client \
+    --config ./config/application.conf
+
+# Yarn cluster mode
+./bin/start-seatunnel-spark.sh \
+    --master yarn \
+    --deploy-mode cluster \
+    --config ./config/application.conf
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+env {
+    execution.parallelism = 1
+}
+
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+        sql = "select name,age from fake where name='"${my_name}"'"
+    }
+}
+
+sink {
+    ConsoleSink {}
+}
+```
+
+**Run**
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -c config-path \
+    -i my_name=kid-xiong
+```
+
+Specifying `-i my_name=kid-xiong` replaces `"${my_name}"` in the configuration file with `kid-xiong`
+
+> For the remaining parameters, refer to the original Flink parameters, which you can list with `bin/flink run -h` and add as needed. For example, `-m yarn-cluster` specifies running the job in `on yarn` mode.
+
+```bash
+bin/flink run -h
+```
+
+For example:
+
+* `-p 2` specifies that the job parallelism is `2`
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -p 2 \
+    -c config-path
+```
+
+* Configurable parameters of `flink yarn-cluster`
+
+For example: `-m yarn-cluster -ynm seatunnel` specifies that the job runs on `yarn` and that the name shown in the `yarn WebUI` is `seatunnel`
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -m yarn-cluster \
+    -ynm seatunnel \
+    -c config-path
+```
+
+</TabItem>
+</Tabs>
diff --git a/docs/en/connector/config-example.md b/docs/en/connector/config-example.md
new file mode 100644
index 0000000..e5e21e7
--- /dev/null
+++ b/docs/en/connector/config-example.md
@@ -0,0 +1,8 @@
+# Config Examples
+
+This section shows examples of the SeaTunnel configuration file; useful examples already exist in
+[example-config](https://github.com/apache/incubator-seatunnel/tree/dev/config).
+
+## What's More
+
+If you want to know the details of this configuration format, please see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
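+
+As an illustrative sketch only (not one of the bundled examples), a SeaTunnel configuration file follows an `env` / `source` / `transform` / `sink` structure; the plugin blocks below are placeholders:
+
+```bash
+env {
+    # engine settings, e.g. parallelism
+    execution.parallelism = 1
+}
+
+source {
+    # read mock data and register it as a table named "fake"
+    Fake {
+        result_table_name = "fake"
+    }
+}
+
+transform {
+    # pass the data through unchanged
+    sql {
+        sql = "select * from fake"
+    }
+}
+
+sink {
+    # print the result to the console
+    Console {}
+}
+```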
\ No newline at end of file
diff --git a/docs/en/spark/configuration/sink-plugins/Clickhouse.md b/docs/en/connector/sink/Clickhouse.md
similarity index 97%
rename from docs/en/spark/configuration/sink-plugins/Clickhouse.md
rename to docs/en/connector/sink/Clickhouse.md
index e510eee..b538d6c 100644
--- a/docs/en/spark/configuration/sink-plugins/Clickhouse.md
+++ b/docs/en/connector/sink/Clickhouse.md
@@ -1,11 +1,19 @@
 # Clickhouse
 
-> Sink plugin : Clickhouse [Spark]
-
 ## Description
 
 Use [Clickhouse-jdbc](https://github.com/ClickHouse/clickhouse-jdbc) to correspond the data source according to the field name and write it into ClickHouse. The corresponding data table needs to be created in advance before use
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Clickhouse
+* [ ] Flink
+
+:::
+
+
 ## Options
 
 | name           | type    | required | default value |
@@ -82,7 +90,7 @@ worked when 'split_mode' is true.
 
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [common options](common-options.md) for details
 
 ## ClickHouse type comparison table
 
diff --git a/docs/en/spark/configuration/sink-plugins/ClickhouseFile.md b/docs/en/connector/sink/ClickhouseFile.md
similarity index 97%
rename from docs/en/spark/configuration/sink-plugins/ClickhouseFile.md
rename to docs/en/connector/sink/ClickhouseFile.md
index be0709b..211ee2d 100644
--- a/docs/en/spark/configuration/sink-plugins/ClickhouseFile.md
+++ b/docs/en/connector/sink/ClickhouseFile.md
@@ -1,12 +1,19 @@
 # ClickhouseFile
 
-> Sink plugin : ClickhouseFile [Spark]
-
 ## Description
 
 Generate the clickhouse data file with the clickhouse-local program, and then send it to the clickhouse 
 server, also called bulk load.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: ClickhouseFile
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                   | type    | required | default value |
@@ -85,7 +92,7 @@ The password corresponding to the clickhouse server, only support root user yet.
 
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [common options](common-options.md) for details
 
 ## ClickHouse type comparison table
 
diff --git a/docs/en/connector/sink/Console.mdx b/docs/en/connector/sink/Console.mdx
new file mode 100644
index 0000000..8eaa884
--- /dev/null
+++ b/docs/en/connector/sink/Console.mdx
@@ -0,0 +1,101 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Console
+
+## Description
+
+Output data to the standard terminal or the Flink TaskManager, which is often used for debugging because it makes the data easy to observe.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Console
+* [x] Flink: Console
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| limit          | number | no       | 100           |
+| serializer     | string | no       | plain         |
+| common-options | string | no       | -             |
+
+### limit [number]
+
+Limit the number of `rows` to be output. The legal range is `[-1, 2147483647]`; `-1` means that up to `2147483647` rows are output.
+
+### serializer [string]
+
+The format of serialization when outputting. Available serializers include: `json` , `plain`
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+|----------------|--------| -------- |---------------|
+| limit          | int    | no       | INT_MAX       |
+| common-options | string | no       | -             |
+
+### limit [int]
+
+Limit the number of result rows printed to the console
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+</Tabs>
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+console {
+    limit = 10,
+    serializer = "json"
+}
+```
+
+> Output 10 rows of data in JSON format
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+ConsoleSink{}
+```
+
+## Note
+
+Flink's console output is shown in the Flink WebUI
+
+</TabItem>
+</Tabs>
diff --git a/docs/en/connector/sink/Doris.mdx b/docs/en/connector/sink/Doris.mdx
new file mode 100644
index 0000000..9e1ca8d
--- /dev/null
+++ b/docs/en/connector/sink/Doris.mdx
@@ -0,0 +1,174 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Doris
+
+### Description
+
+Write data to a Doris table.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Doris
+* [x] Flink: DorisSink
+
+:::
+
+### Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| fenodes | string | yes | - |
+| database | string | yes | - |
+| table	 | string | yes | - |
+| user	 | string | yes | - |
+| password	 | string | yes | - |
+| batch_size	 | int | yes | 100 |
+| doris.*	 | string | no | - |
+
+##### fenodes [string]
+
+Doris FE http address, e.g. `host:8030`
+
+##### database [string]
+
+Doris target database name
+
+##### table [string]
+
+Doris target table name
+
+##### user [string]
+
+Doris user name
+
+##### password [string]
+
+Doris user's password
+
+##### batch_size [int]
+
+The number of records submitted to Doris per batch
+
+Default value: 5000
+
+##### doris.* [string]
+
+Doris stream_load properties; you can use the 'doris.' prefix + stream_load properties.
+[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
+
+</TabItem>
+<TabItem value="flink">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| fenodes | string | yes | - |
+| database | string | yes | - |
+| table | string | yes | - |
+| user	 | string | yes | - |
+| password	 | string | yes | - |
+| batch_size	 | int | no |  100 |
+| interval	 | int | no |1000 |
+| max_retries	 | int | no | 1 |
+| doris.*	 | - | no | - |
+| parallelism | int | no  | - |
+
+##### fenodes [string]
+
+Doris FE http address
+
+##### database [string]
+
+Doris database name
+
+##### table [string]
+
+Doris table name
+
+##### user [string]
+
+Doris username
+
+##### password [string]
+
+Doris password
+
+##### batch_size [int]
+
+Maximum number of rows in a single write to Doris; the default value is 5000.
+
+##### interval [int]
+
+The flush interval in milliseconds, after which the asynchronous thread writes the cached data to Doris. Set it to 0 to turn off periodic writing.
+
+Default value: 5000
+
+##### max_retries [int]
+
+The number of retries after a failed write to Doris
+
+##### doris.* [string]
+
+The Doris stream_load parameters; you can use the 'doris.' prefix + stream_load properties, e.g. 'doris.column_separator' = ','.
+[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
+
+##### parallelism [int]
+
+The parallelism of an individual operator, for DorisSink
+
+</TabItem>
+</Tabs>
+
+### Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```conf
+Doris {
+    fenodes="0.0.0.0:8030"
+    database="test"
+    table="user"
+    user="doris"
+    password="doris"
+    batch_size=10000
+    doris.column_separator="\t"
+    doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```conf
+DorisSink {
+    fenodes = "127.0.0.1:8030"
+    database = database
+    table = table
+    user = root
+    password = password
+    batch_size = 1
+    doris.column_separator="\t"
+    doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
+}
+```
+
+</TabItem>
+</Tabs>
diff --git a/docs/en/flink/configuration/sink-plugins/Druid.md b/docs/en/connector/sink/Druid.md
similarity index 96%
rename from docs/en/flink/configuration/sink-plugins/Druid.md
rename to docs/en/connector/sink/Druid.md
index 1f7ee27..534c092 100644
--- a/docs/en/flink/configuration/sink-plugins/Druid.md
+++ b/docs/en/connector/sink/Druid.md
@@ -6,6 +6,15 @@
 
 Write data to Apache Druid.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: Druid
+
+:::
+
 ## Options
 
 | name                    | type     | required | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/Elasticsearch.md b/docs/en/connector/sink/Elasticsearch.mdx
similarity index 54%
rename from docs/en/spark/configuration/sink-plugins/Elasticsearch.md
rename to docs/en/connector/sink/Elasticsearch.mdx
index 0f43c1e..6e055d0 100644
--- a/docs/en/spark/configuration/sink-plugins/Elasticsearch.md
+++ b/docs/en/connector/sink/Elasticsearch.mdx
@@ -1,13 +1,32 @@
-# Elasticsearch
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 
-> Sink plugin : Elasticsearch [Spark]
+# Elasticsearch
 
 ## Description
 
-Output data to `Elasticsearch` , the supported `ElasticSearch version is >= 2.x and <7.0.0` .
+Output data to `Elasticsearch`.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Elasticsearch (supported `ElasticSearch` version is `>= 2.x and < 7.0.0`)
+* [x] Flink: Elasticsearch
+
+:::
 
 ## Options
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
 | name              | type   | required | default value |
 | ----------------- | ------ | -------- | ------------- |
 | hosts             | array  | yes      | -             |
@@ -17,6 +36,21 @@ Output data to `Elasticsearch` , the supported `ElasticSearch version is >= 2.x
 | es.*              | string | no       |               |
 | common-options    | string | no       | -             |
 
+</TabItem>
+<TabItem value="flink">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| hosts             | array  | yes      | -             |
+| index_type        | string | no       | log           |
+| index_time_format | string | no       | yyyy.MM.dd    |
+| index             | string | no       | seatunnel     |
+| common-options    | string | no       | -             |
+| parallelism       | int    | no       | -             |
+
+</TabItem>
+</Tabs>
+
 ### hosts [array]
 
 `Elasticsearch` cluster address, the format is `host:port` , allowing multiple hosts to be specified. Such as `["host1:9200", "host2:9200"]` .
@@ -27,7 +61,7 @@ Output data to `Elasticsearch` , the supported `ElasticSearch version is >= 2.x
 
 #### index_time_format [string]
 
-When the format in the `index` parameter is `xxxx-${now}` , `index_time_format` can specify the time format of the index name, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
+When the format in the `index` parameter is `xxxx-${now}` , `index_time_format` can specify the time format of the `index` name, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
 
 | Symbol | Description        |
 | ------ | ------------------ |
@@ -42,7 +76,16 @@ See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/
 
 ### index [string]
 
-`Elasticsearch` index name. If you need to generate an `index` based on time, you can specify a time variable, such as `seatunnel-${now}` . `now` represents the current data processing time.
+Elasticsearch `index` name. If you need to generate an `index` based on time, you can specify a time variable, such as `seatunnel-${now}` . `now` represents the current data processing time.
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
 
 ### es.* [string]
 
@@ -50,9 +93,19 @@ Users can also specify multiple optional parameters. For a detailed list of para
 
 For example, the way to specify `es.batch.size.entries` is: `es.batch.size.entries = 100000` . If these non-essential parameters are not specified, they will use the default values given in the official documentation.
 
+</TabItem>
+<TabItem value="flink">
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, data source, or data sink
+
+</TabItem>
+</Tabs>
+
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
 
 ## Examples
 
@@ -62,16 +115,3 @@ elasticsearch {
     index = "seatunnel"
 }
 ```
-
-> Write the result to the `index` of the `Elasticsearch` cluster named `seatunnel`
-
-```bash
-elasticsearch {
-    hosts = ["localhost:9200"]
-    index = "seatunnel-${now}"
-    es.batch.size.entries = 100000
-    index_time_format = "yyyy.MM.dd"
-}
-```
-
-> Create `index` by day, for example `seatunnel-2020.01.01`
diff --git a/docs/en/spark/configuration/sink-plugins/Email.md b/docs/en/connector/sink/Email.md
similarity index 95%
rename from docs/en/spark/configuration/sink-plugins/Email.md
rename to docs/en/connector/sink/Email.md
index 5176f20..bfb69a3 100644
--- a/docs/en/spark/configuration/sink-plugins/Email.md
+++ b/docs/en/connector/sink/Email.md
@@ -1,11 +1,18 @@
 # Email
 
-> Sink plugin : Email [Spark]
-
 ## Description
 
 Supports data output through `email attachments`. The attachments are in `xlsx` format, which can be opened with `excel`, and can be used to deliver task statistics results via email.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Email
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name     | type   | required | default value |
diff --git a/docs/en/connector/sink/File.mdx b/docs/en/connector/sink/File.mdx
new file mode 100644
index 0000000..f82ad1f
--- /dev/null
+++ b/docs/en/connector/sink/File.mdx
@@ -0,0 +1,190 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# File
+
+## Description
+
+Output data to a local or HDFS file.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: File
+* [x] Flink: File
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name             | type   | required | default value  |
+| ---------------- | ------ | -------- | -------------- |
+| options          | object | no       | -              |
+| partition_by     | array  | no       | -              |
+| path             | string | yes      | -              |
+| path_time_format | string | no       | yyyyMMddHHmmss |
+| save_mode        | string | no       | error          |
+| serializer       | string | no       | json           |
+| common-options   | string | no       | -              |
+
+### options [object]
+
+Custom parameters
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### path [string]
+
+The file path is required. An `hdfs file` path starts with `hdfs://` , and a `local file` path starts with `file://` .
+You can add the variable `${now}` or `${uuid}` to the path, for example `hdfs:///test_${uuid}_${now}.txt` .
+`${now}` represents the current time, and its format can be defined with the `path_time_format` option.
+
+### path_time_format [string]
+
+When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
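+
+For example, a `path` that rolls over by day could be configured as follows (the directory name is illustrative):
+
+```bash
+file {
+    path = "hdfs:///var/logs/output_${now}"
+    path_time_format = "yyyy.MM.dd"
+    serializer = "json"
+}
+```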
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite` , `append` , `ignore` and `error` . For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+### serializer [string]
+
+Serialization method, currently supports `csv` , `json` , `parquet` , `orc` and `text`
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+<TabItem value="flink">
+
+
+| name              | type   | required | default value  |
+|-------------------|--------| -------- |----------------|
+| format            | string | yes      | -              |
+| path              | string | yes      | -              |
+| path_time_format  | string | no       | yyyyMMddHHmmss |
+| write_mode        | string | no       | -              |
+| common-options    | string | no       | -              |
+| parallelism       | int    | no       | -              |
+| rollover_interval | long   | no       | 1              |
+| max_part_size     | long   | no       | 1024          |
+| prefix            | string | no       | seatunnel      |
+| suffix            | string | no       | .ext           |
+
+### format [string]
+
+Currently, `csv` , `json` , and `text` are supported. The streaming mode currently only supports `text`
+
+### path [string]
+
+The file path is required. An `hdfs file` path starts with `hdfs://` , and a `local file` path starts with `file://` .
+You can add the variable `${now}` or `${uuid}` to the path, for example `hdfs:///test_${uuid}_${now}.txt` .
+`${now}` represents the current time, and its format can be defined with the `path_time_format` option.
+
+### path_time_format [string]
+
+When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### write_mode [string]
+
+- NO_OVERWRITE
+    - Do not overwrite; an error is reported if the path already exists
+- OVERWRITE
+    - Overwrite; if the path exists, delete it and then write
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for FileSink
+
+### rollover_interval [long]
+
+The rollover interval for new file parts, in minutes.
+
+### max_part_size [long]
+
+The maximum size of each file part, in MB.
+
+### prefix [string]
+
+The prefix of each file part.
+
+### suffix [string]
+
+The suffix of each file part.
+
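+A rolling-file sketch that combines these part options might look like the following (values are illustrative):
+
+```bash
+FileSink {
+    format = "text"
+    path = "hdfs://localhost:9000/seatunnel/output/"
+    rollover_interval = 5
+    max_part_size = 512
+    prefix = "seatunnel"
+    suffix = ".txt"
+}
+```
+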
+</TabItem>
+</Tabs>
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+file {
+    path = "file:///var/logs"
+    serializer = "text"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+FileSink {
+    format = "json"
+    path = "hdfs://localhost:9000/flink/output/"
+    write_mode = "OVERWRITE"
+}
+```
+
+</TabItem>
+</Tabs>
diff --git a/docs/en/spark/configuration/sink-plugins/Hbase.md b/docs/en/connector/sink/Hbase.md
similarity index 95%
rename from docs/en/spark/configuration/sink-plugins/Hbase.md
rename to docs/en/connector/sink/Hbase.md
index 1c5cd61..0c6e833 100644
--- a/docs/en/spark/configuration/sink-plugins/Hbase.md
+++ b/docs/en/connector/sink/Hbase.md
@@ -1,11 +1,18 @@
 # Hbase
 
-> Sink plugin : Hbase [Spark]
-
 ## Description
 
 Use [hbase-connectors](https://github.com/apache/hbase-connectors/tree/master/spark) to output data to `Hbase` . Version compatibility between `Hbase (>=2.1.0)` and `Spark (>=2.0.0)` depends on `hbase-connectors` . The `hbase-connectors` in the official Apache Hbase documentation is also one of the [Apache Hbase Repos](https://hbase.apache.org/book.html#repos).
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hbase
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                   | type   | required | default value |
@@ -42,7 +49,7 @@ If these non-essential parameters are not specified, they will use the default v
 
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
 
 ## Examples
 
diff --git a/docs/en/spark/configuration/sink-plugins/Hive.md b/docs/en/connector/sink/Hive.md
similarity index 96%
rename from docs/en/spark/configuration/sink-plugins/Hive.md
rename to docs/en/connector/sink/Hive.md
index 4ea6b82..7c57a1a 100644
--- a/docs/en/spark/configuration/sink-plugins/Hive.md
+++ b/docs/en/connector/sink/Hive.md
@@ -1,11 +1,18 @@
 # Hive
 
-> Sink plugin: Hive [Spark]
-
 ### Description
 
 Write Rows to [Apache Hive](https://hive.apache.org).
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hive
+* [ ] Flink
+
+:::
+
 ### Options
 
 | name                                    | type          | required | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/Hudi.md b/docs/en/connector/sink/Hudi.md
similarity index 92%
rename from docs/en/spark/configuration/sink-plugins/Hudi.md
rename to docs/en/connector/sink/Hudi.md
index 4c4c281..5599bb7 100644
--- a/docs/en/spark/configuration/sink-plugins/Hudi.md
+++ b/docs/en/connector/sink/Hudi.md
@@ -1,11 +1,18 @@
 # Hudi
 
-> Sink plugin: Hudi [Spark]
-
 ## Description
 
 Write Rows to a Hudi.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hudi
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name | type | required | default value | engine |
diff --git a/docs/en/spark/configuration/sink-plugins/Iceberg.md b/docs/en/connector/sink/Iceberg.md
similarity index 95%
rename from docs/en/spark/configuration/sink-plugins/Iceberg.md
rename to docs/en/connector/sink/Iceberg.md
index 5081859..ce281fc 100644
--- a/docs/en/spark/configuration/sink-plugins/Iceberg.md
+++ b/docs/en/connector/sink/Iceberg.md
@@ -1,11 +1,18 @@
 # Iceberg
 
-> Sink plugin: Iceberg [Spark]
-
 ## Description
 
 Write data to Iceberg.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Iceberg
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                                                         | type   | required | default value |
diff --git a/docs/en/flink/configuration/sink-plugins/InfluxDb.md b/docs/en/connector/sink/InfluxDb.md
similarity index 95%
rename from docs/en/flink/configuration/sink-plugins/InfluxDb.md
rename to docs/en/connector/sink/InfluxDb.md
index 9357b52..c896e8d 100644
--- a/docs/en/flink/configuration/sink-plugins/InfluxDb.md
+++ b/docs/en/connector/sink/InfluxDb.md
@@ -1,11 +1,18 @@
 # InfluxDB
 
-> Sink plugin: InfluxDB [Flink]
-
 ## Description
 
 Write data to InfluxDB.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: InfluxDB
+
+:::
+
 ## Options
 
 | name        | type           | required | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/Jdbc.md b/docs/en/connector/sink/Jdbc.mdx
similarity index 56%
rename from docs/en/spark/configuration/sink-plugins/Jdbc.md
rename to docs/en/connector/sink/Jdbc.mdx
index 9aae7c6..607d32d 100644
--- a/docs/en/spark/configuration/sink-plugins/Jdbc.md
+++ b/docs/en/connector/sink/Jdbc.mdx
@@ -1,13 +1,32 @@
-# Jdbc
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 
-> Sink plugin : Jdbc [Spark]
+# Jdbc
 
 ## Description
 
-Support `Update` to output data to Relational database
+Write data through JDBC.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Jdbc
+* [x] Flink: Jdbc
+
+:::
 
 ## Options
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
 | name             | type   | required | default value |
 |------------------| ------ |----------|---------------|
 | driver           | string | yes      | -             |
@@ -61,8 +80,69 @@ Configure when `saveMode` is specified as `update` , and when the specified key
 
 Configure when `saveMode` is specified as `update` , whether to show sql
 
+</TabItem>
+<TabItem value="flink">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| driver            | string | yes      | -             |
+| url               | string | yes      | -             |
+| username          | string | yes      | -             |
+| password          | string | no       | -             |
+| query             | string | yes      | -             |
+| batch_size        | int    | no       | -             |
+| source_table_name | string | yes      | -             |
+| common-options    | string | no       | -             |
+| parallelism       | int    | no       | -             |
+
+### driver [string]
+
+Driver name, such as `com.mysql.cj.jdbc.Driver` for MySQL.
+
+Warning: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for a Standalone cluster.
+
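+On a Standalone cluster this could be done as follows (a sketch; use the driver jar version that matches your database):
+
+```bash
+# copy the MySQL JDBC driver into Flink's classpath, then restart the cluster
+cp mysql-connector-java-xxx.jar $FLINK_HOME/lib/
+```
+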
+### url [string]
+
+The URL of the JDBC connection. Such as: `jdbc:mysql://localhost:3306/test`
+
+### username [string]
+
+username
+
+### password [string]
+
+password
+
+### query [string]
+
+Insert statement
+
+### batch_size [int]
+
+Number of writes per batch
+
+### parallelism [int]
+
+The parallelism of an individual operator, for JdbcSink.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+</Tabs>
+
 ## Examples
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
 ```bash
 jdbc {
     saveMode = "update",
@@ -73,7 +153,9 @@ jdbc {
     customUpdateStmt = "INSERT INTO table (column1, column2, created, modified, yn) values(?, ?, now(), now(), 1) ON DUPLICATE KEY UPDATE column1 = IFNULL(VALUES (column1), column1), column2 = IFNULL(VALUES (column2), column2)"
 }
 ```
+
 > Insert data through JDBC
+
 ```bash
 jdbc {
     saveMode = "update",
@@ -87,4 +169,21 @@ jdbc {
     jdbc.socket_timeout = 10000
 }
 ```
-> Timeout config
\ No newline at end of file
+> Timeout config
+
+</TabItem>
+<TabItem value="flink">
+
+```conf
+JdbcSink {
+    source_table_name = fake
+    driver = com.mysql.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "insert into test(name,age) values(?,?)"
+    batch_size = 2
+}
+```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/flink/configuration/sink-plugins/Kafka.md b/docs/en/connector/sink/Kafka.md
similarity index 73%
rename from docs/en/flink/configuration/sink-plugins/Kafka.md
rename to docs/en/connector/sink/Kafka.md
index ccb9213..bee26c9 100644
--- a/docs/en/flink/configuration/sink-plugins/Kafka.md
+++ b/docs/en/connector/sink/Kafka.md
@@ -1,10 +1,17 @@
 # Kafka
 
-> Sink plugin : Kafka [Flink]
-
 ## Description
 
-Write data to Kafka
+Write Rows to a Kafka topic.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kafka
+* [x] Flink: Kafka
+
+:::
 
 ## Options
 
@@ -26,7 +33,7 @@ Kafka Topic
 
 ### producer [string]
 
-In addition to the above mandatory parameters that must be specified by the `Kafka producer` client, the user can also specify multiple non-mandatory parameters for the `producer` client, covering [all the producer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#producerconfigs).
+In addition to the above parameters that must be specified by the `Kafka producer` client, the user can also specify multiple non-mandatory parameters for the `producer` client, covering [all the producer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#producerconfigs).
 
 The way to specify the parameter is to add the prefix `producer.` to the original parameter name. For example, the way to specify `request.timeout.ms` is: `producer.request.timeout.ms = 60000` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
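+
+For example, a sink block that passes an extra producer parameter might look like the following (values are illustrative):
+
+```bash
+kafka {
+    topic = "seatunnel"
+    producer.bootstrap.servers = "localhost:9092"
+    producer.request.timeout.ms = 60000
+}
+```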
 
@@ -43,13 +50,13 @@ please refer to [Flink Kafka Fault Tolerance](https://nightlies.apache.org/flink
 
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
 
 ## Examples
 
 ```bash
-   KafkaSink {
-     producer.bootstrap.servers = "127.0.0.1:9092"
-     topics = test_sink
-   }
+kafka {
+    topic = "seatunnel"
+    producer.bootstrap.servers = "localhost:9092"
+}
 ```
diff --git a/docs/en/spark/configuration/sink-plugins/Kudu.md b/docs/en/connector/sink/Kudu.md
similarity index 91%
rename from docs/en/spark/configuration/sink-plugins/Kudu.md
rename to docs/en/connector/sink/Kudu.md
index 050cf0a..c82963e 100644
--- a/docs/en/spark/configuration/sink-plugins/Kudu.md
+++ b/docs/en/connector/sink/Kudu.md
@@ -1,11 +1,18 @@
 # Kudu
 
-> Sink plugin: Kudu [Spark]
-
 ## Description
 
 Write data to Kudu.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kudu
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/MongoDB.md b/docs/en/connector/sink/MongoDB.md
similarity index 94%
rename from docs/en/spark/configuration/sink-plugins/MongoDB.md
rename to docs/en/connector/sink/MongoDB.md
index 6bdae19..8004acf 100644
--- a/docs/en/spark/configuration/sink-plugins/MongoDB.md
+++ b/docs/en/connector/sink/MongoDB.md
@@ -1,11 +1,18 @@
 # MongoDB
 
-> Sink plugin : MongoDB [Spark]
-
 ## Description
 
 Write data to `MongoDB`
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: MongoDB
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                   | type   | required | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/Phoenix.md b/docs/en/connector/sink/Phoenix.md
similarity index 87%
rename from docs/en/spark/configuration/sink-plugins/Phoenix.md
rename to docs/en/connector/sink/Phoenix.md
index fa16ef7..0f4c69d 100644
--- a/docs/en/spark/configuration/sink-plugins/Phoenix.md
+++ b/docs/en/connector/sink/Phoenix.md
@@ -1,11 +1,18 @@
 # Phoenix
 
-> Sink plugin : Phoenix [Spark]
-
 ## Description
 
 Export data to `Phoenix` , compatible with `Kerberos` authentication
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Phoenix
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                      | type    | required | default value |
@@ -34,7 +41,7 @@ Whether to skip the normalized identifier, if the column name is surrounded by d
 
 ### common options [string]
 
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
 
 ## Examples
 
diff --git a/docs/en/spark/configuration/sink-plugins/Redis.md b/docs/en/connector/sink/Redis.md
similarity index 94%
rename from docs/en/spark/configuration/sink-plugins/Redis.md
rename to docs/en/connector/sink/Redis.md
index 6a47738..3eefdfa 100644
--- a/docs/en/spark/configuration/sink-plugins/Redis.md
+++ b/docs/en/connector/sink/Redis.md
@@ -1,11 +1,18 @@
 # Redis
 
-> Sink plugin: Redis [Spark]
-
 ## Description
 
 Write Rows to a Redis.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Redis
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name      | type   | required  | default value |
diff --git a/docs/en/spark/configuration/sink-plugins/Tidb.md b/docs/en/connector/sink/Tidb.md
similarity index 95%
rename from docs/en/spark/configuration/sink-plugins/Tidb.md
rename to docs/en/connector/sink/Tidb.md
index d709d58..5a65883 100644
--- a/docs/en/spark/configuration/sink-plugins/Tidb.md
+++ b/docs/en/connector/sink/Tidb.md
@@ -1,11 +1,18 @@
 # TiDb
 
-> Sink plugin: TiDb [Spark]
-
 ### Description
 
 Write data to TiDB.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: TiDb
+* [ ] Flink
+
+:::
+
 ### Env Options
 
 | name           | type   | required | default value |
diff --git a/docs/en/flink/configuration/sink-plugins/sink-plugin.md b/docs/en/connector/sink/common-options.md
similarity index 73%
rename from docs/en/flink/configuration/sink-plugins/sink-plugin.md
rename to docs/en/connector/sink/common-options.md
index c6f35c4..620ef3b 100644
--- a/docs/en/flink/configuration/sink-plugins/sink-plugin.md
+++ b/docs/en/connector/sink/common-options.md
@@ -1,7 +1,5 @@
 # Common Options
 
-> Sink Common Options [Flink]
-
 ## Sink Plugin common parameters
 
 | name              | type   | required | default value |
@@ -10,9 +8,9 @@
 
 ### source_table_name [string]
 
-When `source_table_name` is not specified, the current plugin is processing the data set `(dataStream/dataset)` output by the previous plugin in the configuration file;
+When `source_table_name` is not specified, the current plugin processes the data set output by the previous plugin in the configuration file;
 
-When `source_table_name` is specified, the current plugin is processing the data set corresponding to this parameter.
+When `source_table_name` is specified, the current plugin processes the data set corresponding to this parameter.
 
 ## Examples
 
diff --git a/docs/en/flink/configuration/source-plugins/Druid.md b/docs/en/connector/source/Druid.md
similarity index 88%
rename from docs/en/flink/configuration/source-plugins/Druid.md
rename to docs/en/connector/source/Druid.md
index 924b3b3..fc5e648 100644
--- a/docs/en/flink/configuration/source-plugins/Druid.md
+++ b/docs/en/connector/source/Druid.md
@@ -1,11 +1,18 @@
 # Druid
 
-> Source plugin: Druid [Flink]
-
 ## Description
 
 Read data from Apache Druid.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: Druid
+
+:::
+
 ## Options
 
 | name       | type           | required | default value |
@@ -39,7 +46,7 @@ These columns that you want to query of DataSource.
 
 ### common options [string]
 
-Source Plugin common parameters, refer to [Source Plugin](./source-plugin.md) for details
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
 
 ### parallelism [`Int`]
 
diff --git a/docs/en/spark/configuration/source-plugins/Elasticsearch.md b/docs/en/connector/source/Elasticsearch.md
similarity index 92%
rename from docs/en/spark/configuration/source-plugins/Elasticsearch.md
rename to docs/en/connector/source/Elasticsearch.md
index 6d57004..4a68ab5 100644
--- a/docs/en/spark/configuration/source-plugins/Elasticsearch.md
+++ b/docs/en/connector/source/Elasticsearch.md
@@ -1,11 +1,18 @@
 # Elasticsearch
 
-> Source plugin : Elasticsearch [Spark]
-
 ## Description
 
 Read data from Elasticsearch
 
+:::tip 
+
+Engine Supported and plugin name
+
+* [x] Spark: Elasticsearch
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -31,7 +38,7 @@ For example, the way to specify `es.read.metadata` is: `es.read.metadata = true`
 
 ### common options [string]
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 ## Examples
 
diff --git a/docs/en/connector/source/Fake.mdx b/docs/en/connector/source/Fake.mdx
new file mode 100644
index 0000000..7d96348
--- /dev/null
+++ b/docs/en/connector/source/Fake.mdx
@@ -0,0 +1,135 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Fake
+
+## Description
+
+`Fake` is mainly used to conveniently generate user-specified data, which is used as input for functional verification, testing, and performance testing of seatunnel.
+
+:::note
+
+Engine Supported and plugin name
+
+* [x] Spark: Fake, FakeStream
+* [x] Flink: FakeSource, FakeSourceStream
+    * Flink `Fake Source` is mainly used to automatically generate data. The data has only two columns: the first column is of `String` type, and its content is randomly chosen from `["Gary", "Ricky Huo", "Kid Xiong"]` ; the second column is of `Long` type and contains the current 13-digit timestamp. It is used as input for functional verification and testing of `seatunnel` .
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+:::note
+
+These options are for Spark: `FakeStream` ; Spark: `Fake` does not have any options
+
+:::
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| content        | array  | no       | -             |
+| rate           | number | yes      | -             |
+| common-options | string | yes      | -             |
+
+### content [array]
+
+List of test data strings
+
+### rate [number]
+
+Number of test cases generated per second
+
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| parallelism    | `Int`  | no       | -             |
+| common-options |`string`| no       | -             |
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for Fake Source Stream
+
+</TabItem>
+</Tabs>
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+### Fake
+
+```bash
+Fake {
+    result_table_name = "my_dataset"
+}
+```
+
+### FakeStream
+
+```bash
+fakeStream {
+    content = ['name=ricky&age=23', 'name=gary&age=28']
+    rate = 5
+}
+```
+
+The generated data is as follows, randomly extract the string from the `content` list
+
+```bash
++-----------------+
+|raw_message      |
++-----------------+
+|name=gary&age=28 |
+|name=ricky&age=23|
++-----------------+
+```
+
+</TabItem>
+<TabItem value="flink">
+
+### FakeSource
+
+```bash
+source {
+    FakeSource {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+```
+
+### FakeSourceStream
+
+```bash
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/flink/configuration/source-plugins/File.md b/docs/en/connector/source/File.mdx
similarity index 59%
rename from docs/en/flink/configuration/source-plugins/File.md
rename to docs/en/connector/source/File.mdx
index 19f2061..2148787 100644
--- a/docs/en/flink/configuration/source-plugins/File.md
+++ b/docs/en/connector/source/File.mdx
@@ -1,13 +1,47 @@
-# File
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 
-> Source plugin : File [Flink]
+# File
 
 ## Description
 
-Read data from the file system
+Read data from a local or HDFS file.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: File
+* [x] Flink: File
+
+:::
 
 ## Options
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| format | string | no | json |
+| path | string | yes | - |
+| common-options| string | yes | - |
+
+### format [string]
+
+The format for reading files, currently supports `text` , `parquet` , `json` , `orc` and `csv` .
+
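+For example, reading a `csv` file might look like the following (the path is illustrative):
+
+```
+file {
+    format = "csv"
+    path = "hdfs:///var/logs/access.csv"
+    result_table_name = "access_log"
+}
+```
+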
+</TabItem>
+<TabItem value="flink">
+
 | name           | type   | required | default value |
 | -------------- | ------ | -------- | ------------- |
 | format.type    | string | yes      | -             |
@@ -20,47 +54,71 @@ Read data from the file system
 
 The format for reading files from the file system, currently supports `csv` , `json` , `parquet` , `orc` and `text` .
 
-### path [string]
-
-The file path is required. The `hdfs file` starts with `hdfs://` , and the `local file` starts with `file://` .
-
 ### schema [string]
 
 - csv
-
     - The `schema` of `csv` is a string of `jsonArray` , such as `"[{\"type\":\"long\"},{\"type\":\"string\"}]"` , this can only specify the type of the field , The field name cannot be specified, and the common configuration parameter `field_name` is generally required.
-
 - json
-
     - The `schema` parameter of `json` is to provide a `json string` of the original data, and the `schema` can be automatically generated, but the original data with the most complete content needs to be provided, otherwise the fields will be lost.
-
 - parquet
-
     - The `schema` of `parquet` is an `Avro schema string` , such as `{\"type\":\"record\",\"name\":\"test\",\"fields\":[{\"name\" :\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"string\"}]}` .
-
 - orc
-
     - The `schema` of `orc` is the string of `orc schema` , such as `"struct<name:string,addresses:array<struct<street:string,zip:smallint>>>"` .
-
 - text
-
     - The `schema` of `text` can be filled with `string` .
 
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
 ### parallelism [`Int`]
 
 The parallelism of an individual operator, for FileSource
 
+</TabItem>
+</Tabs>
+
+### path [string]
+
+- If reading data from HDFS, the file path should start with `hdfs://`
+- If reading data from a local file system, the file path should start with `file://`
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
 ## Examples
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```
+file {
+    path = "hdfs:///var/logs"
+    result_table_name = "access_log"
+}
+```
+
+```
+file {
+    path = "file:///var/logs"
+    result_table_name = "access_log"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
 ```bash
-  FileSource{
+FileSource {
     path = "hdfs://localhost:9000/input/"
     format.type = "json"
     schema = "{\"data\":[{\"a\":1,\"b\":2},{\"a\":3,\"b\":4}],\"db\":\"string\",\"q\":{\"s\":\"string\"}}"
     result_table_name = "test"
-  }
+}
 ```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/spark/configuration/source-plugins/Hbase.md b/docs/en/connector/source/Hbase.md
similarity index 91%
rename from docs/en/spark/configuration/source-plugins/Hbase.md
rename to docs/en/connector/source/Hbase.md
index f47d400..a96c785 100644
--- a/docs/en/spark/configuration/source-plugins/Hbase.md
+++ b/docs/en/connector/source/Hbase.md
@@ -1,11 +1,18 @@
 # HBase
 
-> Source plugin : HBase [Spark]
-
 ## Description
 
 Get data from HBase
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: HBase
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -24,7 +31,7 @@ The structure of the `hbase` table is defined by `catalog` , the name of the `hb
 
 ### common options [string]
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 ## Example
 
diff --git a/docs/en/spark/configuration/source-plugins/Hive.md b/docs/en/connector/source/Hive.md
similarity index 92%
rename from docs/en/spark/configuration/source-plugins/Hive.md
rename to docs/en/connector/source/Hive.md
index 593dc77..9fd71ab 100644
--- a/docs/en/spark/configuration/source-plugins/Hive.md
+++ b/docs/en/connector/source/Hive.md
@@ -1,11 +1,18 @@
 # Hive
 
-> Source plugin : Hive [Spark]
-
 ## Description
 
 Get data from hive
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hive
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -19,7 +26,7 @@ For preprocessed `sql` , if preprocessing is not required, you can use `select *
 
 ### common options [string]
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 **Note: The following configuration must be done to use hive source:**
 
diff --git a/docs/en/spark/configuration/source-plugins/Hudi.md b/docs/en/connector/source/Hudi.md
similarity index 97%
rename from docs/en/spark/configuration/source-plugins/Hudi.md
rename to docs/en/connector/source/Hudi.md
index 18fb31b..3148772 100644
--- a/docs/en/spark/configuration/source-plugins/Hudi.md
+++ b/docs/en/connector/source/Hudi.md
@@ -1,11 +1,18 @@
 # Hudi
 
-> Source plugin: Hudi [Spark]
-
 ## Description
 
 Read data from Hudi.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hudi
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
diff --git a/docs/en/spark/configuration/source-plugins/Iceberg.md b/docs/en/connector/source/Iceberg.md
similarity index 91%
rename from docs/en/spark/configuration/source-plugins/Iceberg.md
rename to docs/en/connector/source/Iceberg.md
index 6087d98..f7b0984 100644
--- a/docs/en/spark/configuration/source-plugins/Iceberg.md
+++ b/docs/en/connector/source/Iceberg.md
@@ -1,11 +1,18 @@
 # Iceberg
 
-> Source plugin: Iceberg [Spark]
-
 ## Description
 
 Read data from Iceberg.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Iceberg
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -21,7 +28,7 @@ Refer to [iceberg read options](https://iceberg.apache.org/docs/latest/spark-con
 
 ### common-options
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 ### path
 
diff --git a/docs/en/flink/configuration/source-plugins/InfluxDb.md b/docs/en/connector/source/InfluxDb.md
similarity index 95%
rename from docs/en/flink/configuration/source-plugins/InfluxDb.md
rename to docs/en/connector/source/InfluxDb.md
index 6eaffcc..e57d97e 100644
--- a/docs/en/flink/configuration/source-plugins/InfluxDb.md
+++ b/docs/en/connector/source/InfluxDb.md
@@ -1,11 +1,18 @@
 # InfluxDb
 
-> Source plugin: InfluxDb [Flink]
-
 ## Description
 
 Read data from InfluxDB.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: InfluxDb
+
+:::
+
 ## Options
 
 | name        | type           | required | default value |
diff --git a/docs/en/connector/source/Jdbc.mdx b/docs/en/connector/source/Jdbc.mdx
new file mode 100644
index 0000000..ca92c1e
--- /dev/null
+++ b/docs/en/connector/source/Jdbc.mdx
@@ -0,0 +1,205 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Jdbc
+
+## Description
+
+Read external data source data through JDBC
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Jdbc
+* [x] Flink: Jdbc
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| driver         | string | yes      | -             |
+| jdbc.*         | string | no       |               |
+| password       | string | yes      | -             |
+| table          | string | yes      | -             |
+| url            | string | yes      | -             |
+| user           | string | yes      | -             |
+| common-options | string | yes      | -             |
+
+</TabItem>
+<TabItem value="flink">
+
+| name                  | type   | required | default value |
+|-----------------------|--------| -------- | ------------- |
+| driver                | string | yes      | -             |
+| url                   | string | yes      | -             |
+| username              | string | yes      | -             |
+| password              | string | no       | -             |
+| query                 | string | yes      | -             |
+| fetch_size            | int    | no       | -             |
+| partition_column      | string | no       | -             |
+| partition_upper_bound | long   | no       | -             |
+| partition_lower_bound | long   | no       | -             |
+| common-options        | string | no       | -             |
+| parallelism           | int    | no       | -             |
+
+</TabItem>
+</Tabs>
+
+### driver [string]
+
+The `jdbc class name` used to connect to the remote data source. If you use MySQL, the value is `com.mysql.cj.jdbc.Driver` .
+
+Warning: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for a Standalone cluster.
+
+### password [string]
+
+password
+
+### url [string]
+
+The URL of the JDBC connection, for example: `jdbc:postgresql://localhost/test`
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+### jdbc [string]
+
+In addition to the parameters that must be specified above, users can also specify multiple optional parameters, which cover [all the parameters](https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases) provided by Spark JDBC.
+
+The way to specify parameters is to add the prefix `jdbc.` to the original parameter name. For example, the way to specify `fetchsize` is: `jdbc.fetchsize = 50000` . If these non-essential parameters are not specified, they will use the default values given by Spark JDBC.
+
+### user [string]
+
+username
+
+### table [string]
+
+table name
+
+</TabItem>
+<TabItem value="flink">
+
+### username [string]
+
+username
+
+### query [string]
+
+Query statement
+
+### fetch_size [int]
+
+fetch size
+
+### parallelism [int]
+
+The parallelism of an individual operator, for JdbcSource.
+
+### partition_column [string]
+
+The column name used to partition the parallel scan; only numeric types are supported.
+
+### partition_upper_bound [long]
+
+The maximum value of `partition_column` for the scan. If not set, SeaTunnel will query the database to get the maximum value.
+
+### partition_lower_bound [long]
+
+The minimum value of `partition_column` for the scan. If not set, SeaTunnel will query the database to get the minimum value.
+
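+A partitioned-read sketch using these options might look like the following (the partition column and bounds are illustrative):
+
+```bash
+JdbcSource {
+    driver = com.mysql.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "select * from test"
+    partition_column = "id"
+    partition_lower_bound = 0
+    partition_upper_bound = 100
+    parallelism = 4
+}
+```
+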
+</TabItem>
+</Tabs>
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+jdbc {
+    driver = "com.mysql.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+}
+```
+
+> Read MySQL data through JDBC
+
+```bash
+jdbc {
+    driver = "com.mysql.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+    jdbc.partitionColumn = "item_id"
+    jdbc.numPartitions = "10"
+    jdbc.lowerBound = 0
+    jdbc.upperBound = 100
+}
+```
+
+> Divide partitions based on specified fields
+
+
+```bash
+jdbc {
+    driver = "com.mysql.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+    
+    jdbc.connect_timeout = 10000
+    jdbc.socket_timeout = 10000
+}
+```
+> Timeout config
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+JdbcSource {
+    driver = com.mysql.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "select * from test"
+}
+```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/flink/configuration/source-plugins/Kafka.md b/docs/en/connector/source/Kafka.mdx
similarity index 61%
rename from docs/en/flink/configuration/source-plugins/Kafka.md
rename to docs/en/connector/source/Kafka.mdx
index fb0b78f..1555251 100644
--- a/docs/en/flink/configuration/source-plugins/Kafka.md
+++ b/docs/en/connector/source/Kafka.mdx
@@ -1,13 +1,43 @@
-# Kafka
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 
-> Source plugin : Kafka [Flink]
+# Kafka
 
 ## Description
 
-To consume data from `Kafka` , the supported `Kafka version >= 0.10.0` .
+To consume data from `Kafka` , the supported `Kafka` version is `>= 0.10.0` .
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: KafkaStream
+* [x] Flink: Kafka
+
+:::
 
 ## Options
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name                       | type   | required | default value |
+| -------------------------- | ------ | -------- | ------------- |
+| topics                     | string | yes      | -             |
+| consumer.group.id          | string | yes      | -             |
+| consumer.bootstrap.servers | string | yes      | -             |
+| consumer.*                 | string | no       | -             |
+| common-options             | string | yes      | -             |
+
+</TabItem>
+<TabItem value="flink">
+
 | name                       | type   | required | default value |
 | -------------------------- | ------ | -------- | ------------- |
 | topics                     | string | yes      | -             |
@@ -22,26 +52,39 @@ To consume data from `Kafka` , the supported `Kafka version >= 0.10.0` .
 | offset.reset               | string | no       | -             |
 | common-options             | string | no       | -             |
 
+</TabItem>
+</Tabs>
+
 ### topics [string]
 
-Kafka topic name. If there are multiple topics, use `,` to split, for example: `"tpc1,tpc2"` .
+`Kafka topic` name. If there are multiple `topics`, use `,` to split, for example: `"tpc1,tpc2"`
 
 ### consumer.group.id [string]
 
-Kafka consumer group id, used to distinguish different consumer groups.
+`Kafka consumer group id`, used to distinguish different consumer groups
 
 ### consumer.bootstrap.servers [string]
 
-Kafka cluster address, separated by `,`
+`Kafka` cluster address, separated by `,`
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+</TabItem>
+<TabItem value="flink">
 
 ### format.type [string]
 
 Currently supports three formats
 
 - json
-
 - csv
-
 - avro
 
 ### format.* [string]
@@ -51,57 +94,64 @@ The `csv` format uses this parameter to set the separator and so on. For example
 ### schema [string]
 
 - csv
-
     - The `schema` of `csv` is a string of `jsonArray` , such as `"[{\"field\":\"name\",\"type\":\"string\"},{\"field\":\"age\ ",\"type\":\"int\"}]"` .
 
 - json
-
     - The `schema` parameter of `json` is to provide a `json string` of the original data, and the `schema` can be automatically generated, but the original data with the most complete content needs to be provided, otherwise the fields will be lost.
 
 - avro
-
     - The `schema` parameter of `avro` is to provide a standard `avro schema JSON string` , such as `{\"name\":\"test\",\"type\":\"record\",\"fields\":[{ \"name\":\"name\",\"type\":\"string\"},{\"name\":\"age\",\"type\":\"long\"} ,{\"name\":\"addrs\",\"type\":{\"name\":\"addrs\",\"type\":\"record\",\"fields\" :[{\"name\":\"province\",\"type\":\"string\"},{\"name\":\"city\",\"type\":\"string \"}]}}]}`
 
-    - To learn more about how the `Avro Schema JSON string` should be defined, please refer to: https://avro.apache.org/docs/current/spec.html
-
-### consumer.* [string]
-
-In addition to the above necessary parameters that must be specified by the `Kafka consumer` client, users can also specify multiple `consumer` client non-mandatory parameters, covering [all consumer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#consumerconfigs).
-
-The way to specify parameters is to add the prefix `consumer.` to the original parameter name. For example, the way to specify `ssl.key.password` is: `consumer.ssl.key.password = xxxx` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
-
-### rowtime.field [string]
-
-Set the field for generating `watermark`
-
-### watermark [long]
-
-Set the allowable delay for generating `watermark`
+- To learn more about how the `Avro Schema JSON string` should be defined, please refer to: https://avro.apache.org/docs/current/spec.html
 
 ### offset.reset [string]
 
 The consumer's initial `offset` is only valid for new consumers. There are three modes
 
 - latest
-
     - Start consumption from the latest offset
-
 - earliest
-
     - Start consumption from the earliest offset
-
 - specific
-
     - Start consumption from the specified `offset` , and specify the `start offset` of each partition at this time. The setting method is through `offset.reset.specific="{0:111,1:123}"`
 
+</TabItem>
+</Tabs>
+
+### consumer.* [string]
+
+In addition to the above necessary parameters that must be specified by the `Kafka consumer` client, users can also specify multiple `consumer` client non-mandatory parameters, covering [all consumer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#consumerconfigs).
+
+The way to specify parameters is to add the prefix `consumer.` to the original parameter name. For example, the way to specify `auto.offset.reset` is: `consumer.auto.offset.reset = latest` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
+
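+For example, a Spark `kafkaStream` block that overrides a consumer parameter might look like the following (the value is illustrative):
+
+```bash
+kafkaStream {
+    topics = "seatunnel"
+    consumer.bootstrap.servers = "localhost:9092"
+    consumer.group.id = "seatunnel_group"
+    consumer.auto.offset.reset = "latest"
+}
+```
+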
 ### common options [string]
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 ## Examples
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+kafkaStream {
+    topics = "seatunnel"
+    consumer.bootstrap.servers = "localhost:9092"
+    consumer.group.id = "seatunnel_group"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
 ```bash
-  KafkaTableStream {
+KafkaTableStream {
     consumer.bootstrap.servers = "127.0.0.1:9092"
     consumer.group.id = "seatunnel5"
     topics = test
@@ -111,5 +161,8 @@ Source plugin common parameters, please refer to [Source Plugin](./source-plugin
     format.field-delimiter = ";"
     format.allow-comments = "true"
     format.ignore-parse-errors = "true"
-  }
+}
 ```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/spark/configuration/source-plugins/Kudu.md b/docs/en/connector/source/Kudu.md
similarity index 76%
rename from docs/en/spark/configuration/source-plugins/Kudu.md
rename to docs/en/connector/source/Kudu.md
index c5a5067..6cef523 100644
--- a/docs/en/spark/configuration/source-plugins/Kudu.md
+++ b/docs/en/connector/source/Kudu.md
@@ -1,11 +1,18 @@
 # Kudu
 
-> Source plugin: Kudu [Spark]
-
 ## Description
 
 Read data from Kudu.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kudu
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -23,7 +30,7 @@ Kudu Table
 
 ### common options [string]
 
-Source Plugin common parameters, refer to [Source Plugin](./source-plugin.md) for details
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
 
 ## Example
 
diff --git a/docs/en/spark/configuration/source-plugins/MongoDB.md b/docs/en/connector/source/MongoDB.md
similarity index 92%
rename from docs/en/spark/configuration/source-plugins/MongoDB.md
rename to docs/en/connector/source/MongoDB.md
index e5e0aea..5e7aba3 100644
--- a/docs/en/spark/configuration/source-plugins/MongoDB.md
+++ b/docs/en/connector/source/MongoDB.md
@@ -1,11 +1,18 @@
 # MongoDb
 
-> Source plugin: MongoDb [Spark]
-
 ## Description
 
 Read data from MongoDB.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: MongoDb
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                  | type   | required | default value |
@@ -39,7 +46,7 @@ Because `MongoDB` does not have the concept of `schema`, when spark reads `Mongo
 
 ### common options [string]
 
-Source Plugin common parameters, refer to [Source Plugin](./source-plugin.md) for details
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
 
 ## Example
 
diff --git a/docs/en/spark/configuration/source-plugins/Phoenix.md b/docs/en/connector/source/Phoenix.md
similarity index 91%
rename from docs/en/spark/configuration/source-plugins/Phoenix.md
rename to docs/en/connector/source/Phoenix.md
index de7c4cb..0232b04 100644
--- a/docs/en/spark/configuration/source-plugins/Phoenix.md
+++ b/docs/en/connector/source/Phoenix.md
@@ -1,11 +1,18 @@
 # Phoenix
 
-> Source plugin : Phoenix [Spark]
-
 ## Description
 
 Read external data source data through `Phoenix` , compatible with `Kerberos`  authentication
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Phoenix
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name       | type   | required | default value |
@@ -38,7 +45,7 @@ Conditional filter string configuration, optional configuration items
 
 ### common options [string]
 
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
 
 ## Example
 
diff --git a/docs/en/spark/configuration/source-plugins/Redis.md b/docs/en/connector/source/Redis.md
similarity index 90%
rename from docs/en/spark/configuration/source-plugins/Redis.md
rename to docs/en/connector/source/Redis.md
index a69b53b..516a787 100644
--- a/docs/en/spark/configuration/source-plugins/Redis.md
+++ b/docs/en/connector/source/Redis.md
@@ -1,11 +1,18 @@
 # Redis
 
-> Source plugin: Redis [Spark]
-
 ## Description
 
 Read data from Redis.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Redis
+* [ ] Flink
+
+:::
+
 ## Options
 
 | name                | type     | required | default value |
@@ -54,7 +61,7 @@ Redis timeout
 
 ### common options [string]
 
-Source Plugin common parameters, refer to [Source Plugin](./source-plugin.md) for details
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
 
 ## Example
 
diff --git a/docs/en/connector/source/Socket.mdx b/docs/en/connector/source/Socket.mdx
new file mode 100644
index 0000000..86d6faf
--- /dev/null
+++ b/docs/en/connector/source/Socket.mdx
@@ -0,0 +1,102 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Socket
+
+## Description
+
+`SocketStream` is mainly used to receive `Socket` data and is used to quickly verify `Spark streaming` computing.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: SocketStream
+* [x] Flink: Socket
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| host           | string | no       | localhost     |
+| port           | number | no       | 9999          |
+| common-options | string | yes      | -             |
+
+### host [string]
+
+socket server hostname
+
+### port [number]
+
+socket server port
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| host           | string | no       | localhost     |
+| port           | int    | no       | 9999          |
+| common-options | string | no       | -             |
+
+### host [string]
+
+socket server hostname
+
+### port [int]
+
+socket server port
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+</TabItem>
+</Tabs>
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+socketStream {
+  port = 9999
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+source {
+    SocketStream{
+        result_table_name = "socket"
+        field_name = "info"
+    }
+}
+```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/spark/configuration/source-plugins/Tidb.md b/docs/en/connector/source/Tidb.md
similarity index 87%
rename from docs/en/spark/configuration/source-plugins/Tidb.md
rename to docs/en/connector/source/Tidb.md
index c73423d..fd08f9c 100644
--- a/docs/en/spark/configuration/source-plugins/Tidb.md
+++ b/docs/en/connector/source/Tidb.md
@@ -1,11 +1,18 @@
 # Tidb
 
-> Source plugin: Tidb [Spark]
-
 ### Description
 
 Read data from Tidb.
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Tidb
+* [ ] Flink
+
+:::
+
 ### Env Options
 
 | name           | type   | required | default value |
@@ -38,7 +45,7 @@ sql script
 
 ##### common options [string]
 
-Source Plugin common parameters, refer to [Source Plugin](./source-plugin.md) for details
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
 
 ### Example
 
diff --git a/docs/en/connector/source/common-options.mdx b/docs/en/connector/source/common-options.mdx
new file mode 100644
index 0000000..22ade54
--- /dev/null
+++ b/docs/en/connector/source/common-options.mdx
@@ -0,0 +1,89 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Common Options
+
+## Source common parameters
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| result_table_name | string | yes      | -             |
+| table_name        | string | no       | -             |
+
+### result_table_name [string]
+
+When `result_table_name` is not specified, the data processed by this plug-in will not be registered as a data set that can be directly accessed by other plugins, or called a temporary table (table);
+
+When `result_table_name` is specified, the data processed by this plug-in will be registered as a data set (dataset) that can be directly accessed by other plug-ins, or called a temporary table (table). The dataset registered here can be directly accessed by other plugins by specifying `source_table_name`.
+
+### table_name [string]
+
+[Deprecated since v1.4] The function is the same as `result_table_name` ; this parameter will be removed in a later release, and the `result_table_name` parameter is recommended instead.
+
+</TabItem>
+<TabItem value="flink">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| result_table_name | string | no       | -             |
+| field_name        | string | no       | -             |
+
+### result_table_name [string]
+
+When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` ;
+
+When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The data set `(dataStream/dataset)` registered here can be directly accessed by other plugins by specifying `source_table_name` .
+
+### field_name [string]
+
+When data is obtained from the upstream plugin, you can specify the names of the obtained fields, which is convenient for use in subsequent SQL plugins.
+
+</TabItem>
+</Tabs>
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+fake {
+    result_table_name = "view_table"
+}
+```
+
+> The result of the data source `fake` will be registered as a temporary table named `view_table` . This temporary table can be used by any Filter or Output plugin by specifying `source_table_name` .
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+```
+
+> The result of the data source `FakeSourceStream` will be registered as a temporary table named `fake` . This temporary table can be used by any `Transform` or `Sink` plugin by specifying `source_table_name` .
+>
+> `field_name` names the two columns of the temporary table `name` and `age` respectively.
+
+</TabItem>
+</Tabs>
diff --git a/docs/en/spark/configuration/source-plugins/neo4j.md b/docs/en/connector/source/neo4j.md
similarity index 98%
rename from docs/en/spark/configuration/source-plugins/neo4j.md
rename to docs/en/connector/source/neo4j.md
index 28e2ed2..534b6de 100644
--- a/docs/en/spark/configuration/source-plugins/neo4j.md
+++ b/docs/en/connector/source/neo4j.md
@@ -1,7 +1,5 @@
 # Neo4j
 
-> Source plugin: Neo4j [Spark]
-
 ## Description
 
 Read data from Neo4j.
@@ -12,7 +10,14 @@ The Options required of yes* means that  you must specify  one way of (query lab
 
 for detail neo4j config message please visit [neo4j doc](https://neo4j.com/docs/spark/current/reading/) 
 
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Neo4j
+* [ ] Flink
 
+:::
 
 ## Options
 
diff --git a/docs/en/deployment.mdx b/docs/en/deployment.mdx
new file mode 100644
index 0000000..f320d72
--- /dev/null
+++ b/docs/en/deployment.mdx
@@ -0,0 +1,125 @@
+# Deployment
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This section shows you how to submit your SeaTunnel application to all kinds of cluster engines. If you have not installed
+<!-- markdown-link-check-disable-next-line -->
+SeaTunnel yet, go to the [quick start](/category/start) first to learn how to prepare and change the SeaTunnel configuration.
+
+## Deployment in All Kind of Engine
+
+### Local Mode (Spark Only)
+
+Local mode only supports the Spark engine for now.
+
+```shell
+./bin/start-seatunnel-spark.sh \
+--master local[4] \
+--deploy-mode client \
+--config ./config/application.conf
+```
+
+### Standalone Cluster
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```shell
+# client mode
+./bin/start-seatunnel-spark.sh \
+--master spark://ip:7077 \
+--deploy-mode client \
+--config ./config/application.conf
+
+# cluster mode
+./bin/start-seatunnel-spark.sh \
+--master spark://ip:7077 \
+--deploy-mode cluster \
+--config ./config/application.conf
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```shell
+bin/start-seatunnel-flink.sh \
+--config config-path
+
+# -p 2 specifies that the parallelism of flink job is 2. You can also specify more parameters, use flink run -h to view
+bin/start-seatunnel-flink.sh \
+-p 2 \
+--config config-path
+```
+
+</TabItem>
+</Tabs>
+
+### Yarn Cluster
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```shell
+# client mode
+./bin/start-seatunnel-spark.sh \
+--master yarn \
+--deploy-mode client \
+--config ./config/application.conf
+
+# cluster mode
+./bin/start-seatunnel-spark.sh \
+--master yarn \
+--deploy-mode cluster \
+--config ./config/application.conf
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```shell
+bin/start-seatunnel-flink.sh \
+-m yarn-cluster \
+--config config-path
+
+# -ynm seatunnel specifies the name displayed in the yarn webUI as seatunnel, you can also specify more parameters, use flink run -h to view
+bin/start-seatunnel-flink.sh \
+-m yarn-cluster \
+-ynm seatunnel \
+--config config-path
+```
+
+</TabItem>
+</Tabs>
+
+### Mesos Cluster
+
+Mesos cluster deployment only supports the Spark engine for now.
+
+```shell
+# cluster mode
+./bin/start-seatunnel-spark.sh \
+--master mesos://ip:7077 \
+--deploy-mode cluster \
+--config ./config/application.conf
+```
+
+## Run Your Engine at Scale
+
+(This section is about the engines rather than SeaTunnel itself; it is background knowledge for users who are not familiar with engine
+cluster types.) Both Spark and Flink can run on different kinds of clusters and at any scale. This guide only shows the basic
+usage of SeaTunnel, which is built on top of the Spark or Flink engine. If you want to scale your engine cluster, see the
+[Spark](https://spark.apache.org/docs/latest/running-on-kubernetes.html)
+or [Flink](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/native_kubernetes/) documentation.
diff --git a/docs/en/developement/NewLicenseGuide.md b/docs/en/development/new-license.md
similarity index 97%
rename from docs/en/developement/NewLicenseGuide.md
rename to docs/en/development/new-license.md
index d9d9e49..44c80b3 100644
--- a/docs/en/developement/NewLicenseGuide.md
+++ b/docs/en/development/new-license.md
@@ -1,4 +1,4 @@
-# How to add a new License Guide
+# How To Add New License
 
 If you have any new Jar binary package adding in you PR, you need to follow the steps below to notice license
 
diff --git a/docs/en/developement/setup.md b/docs/en/development/setup.md
similarity index 98%
rename from docs/en/developement/setup.md
rename to docs/en/development/setup.md
index 866167d..d7ec81c 100644
--- a/docs/en/developement/setup.md
+++ b/docs/en/development/setup.md
@@ -1,4 +1,4 @@
-# Set Up Environment and Run Simple Example
+# Set Up Develop Environment
 
 In this section, we are going to show you how to set up your development environment for SeaTunnel, and then run a simple
 example in your JetBrains IntelliJ IDEA.
diff --git a/docs/en/FAQ.md b/docs/en/faq.md
similarity index 98%
rename from docs/en/FAQ.md
rename to docs/en/faq.md
index ba44ef6..f509f64 100644
--- a/docs/en/FAQ.md
+++ b/docs/en/faq.md
@@ -1,5 +1,12 @@
 # FAQ
 
+## Why should I install a computing engine like Spark or Flink
+
+<!-- We should add the reason -->
+TODO
+
 ## I have a question, but I can not solve it by myself
 
 I encounter a problem when using SeaTunnel and I cannot solve it by myself. What should I do? Firstly search in [Issue list](https://github.com/apache/incubator-seatunnel/issues) or [mailing list](https://lists.apache.org/list.html?dev@seatunnel.apache.org) to see if someone has already asked the same question and got the answer. If you still cannot find the answer, you can contact community members for help in[ these ways](https://github.com/apache/incubator-seatunnel#contact-us) .
diff --git a/docs/en/flink/commands/start-seatunnel-flink.sh.md b/docs/en/flink/commands/start-seatunnel-flink.sh.md
deleted file mode 100644
index 053a7fd..0000000
--- a/docs/en/flink/commands/start-seatunnel-flink.sh.md
+++ /dev/null
@@ -1,258 +0,0 @@
-# Command usage instructions
-
-> Command usage instructions [Flink]
-
-## seatunnel flink start command
-
-```bash
-bin/start-seatunnel-flink.sh  
-```
-
-### usage instructions
-
-```bash
-bin/start-seatunnel-flink.sh \-c config-path \  
--i key=value \  
-[other params]  
-```
-
-- Use `-c` or `--config` to specify the path of the configuration file
-
-- Use `-i` or `--variable` to specify the variables in the configuration file, you can configure multiple
-
-```bash
-env {
-  execution.parallelism = 1
-}
-
-source {
-    FakeSourceStream {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-}
-
-transform {
-    sql {
-      sql = "select name,age from fake where name='"${my_name}"'"
-    }
-}
-
-sink {
-  ConsoleSink {}
-}
-```
-
-**Run**
-
-```bash
- bin/start-seatunnel-flink.sh \
- -c config-path \
- -i my_name=kid-xiong
-```
-
-This designation will replace `"${my_name}"` in the configuration file with `kid-xiong`
-
-> For the rest of the parameters, refer to the original flink parameters. Check the flink parameter method: `bin/flink run -h` . The parameters can be added as needed. For example, `-m yarn-cluster` is specified as `on yarn` mode.
-
-```bash
-bin/flink run -h
-```
-
-- `Flink standalone` configurable parameters
-
-```bash
-Action "run" compiles and runs a program.
-
-  Syntax: run [OPTIONS] <jar-file> <arguments>
-  "run" action options:
-     -c,--class <classname>                     Class with the program entry
-                                                point ("main()" method). Only
-                                                needed if the JAR file does not
-                                                specify the class in its
-                                                manifest.
-     -C,--classpath <url>                       Adds a URL to each user code
-                                                classloader  on all nodes in the
-                                                cluster. The paths must specify
-                                                a protocol (e.g. file://) and be
-                                                accessible on all nodes (e.g. by
-                                                means of a NFS share). You can
-                                                use this option multiple times
-                                                for specifying more than one
-                                                URL. The protocol must be
-                                                supported by the {@link
-                                                java.net.URLClassLoader}.
-     -d,--detached                              If present, runs the job in
-                                                detached mode
-     -n,--allowNonRestoredState                 Allow to skip savepoint state
-                                                that cannot be restored. You
-                                                need to allow this if you
-                                                removed an operator from your
-                                                program that was part of the
-                                                program when the savepoint was
-                                                triggered.
-     -p,--parallelism <parallelism>             The parallelism with which to
-                                                run the program. Optional flag
-                                                to override the default value
-                                                specified in the configuration.
-     -py,--python <pythonFile>                  Python script with the program
-                                                entry point. The dependent
-                                                resources can be configured with
-                                                the `--pyFiles` option.
-     -pyarch,--pyArchives <arg>                 Add python archive files for
-                                                job. The archive files will be
-                                                extracted to the working
-                                                directory of python UDF worker.
-                                                For each archive file, a target
-                                                directory be specified. If the
-                                                target directory name is
-                                                specified, the archive file will
-                                                be extracted to a directory with
-                                                the specified name. Otherwise,
-                                                the archive file will be
-                                                extracted to a directory with
-                                                the same name of the archive
-                                                file. The files uploaded via
-                                                this option are accessible via
-                                                relative path. '#' could be used
-                                                as the separator of the archive
-                                                file path and the target
-                                                directory name. Comma (',')
-                                                could be used as the separator
-                                                to specify multiple archive
-                                                files. This option can be used
-                                                to upload the virtual
-                                                environment, the data files used
-                                                in Python UDF (e.g.,
-                                                --pyArchives
-                                                file:///tmp/py37.zip,file:///tmp
-                                                /data.zip#data --pyExecutable
-                                                py37.zip/py37/bin/python). The
-                                                data files could be accessed in
-                                                Python UDF, e.g.: f =
-                                                open('data/data.txt', 'r').
-     -pyclientexec,--pyClientExecutable <arg>   The path of the Python
-                                                interpreter used to launch the
-                                                Python process when submitting
-                                                the Python jobs via "flink run"
-                                                or compiling the Java/Scala jobs
-                                                containing Python UDFs.
-     -pyexec,--pyExecutable <arg>               Specify the path of the python
-                                                interpreter used to execute the
-                                                python UDF worker (e.g.:
-                                                --pyExecutable
-                                                /usr/local/bin/python3). The
-                                                python UDF worker depends on
-                                                Python 3.6+, Apache Beam
-                                                (version == 2.27.0), Pip
-                                                (version >= 7.1.0) and
-                                                SetupTools (version >= 37.0.0).
-                                                Please ensure that the specified
-                                                environment meets the above
-                                                requirements.
-     -pyfs,--pyFiles <pythonFiles>              Attach custom files for job. The
-                                                standard resource file suffixes
-                                                such as .py/.egg/.zip/.whl or
-                                                directory are all supported.
-                                                These files will be added to the
-                                                PYTHONPATH of both the local
-                                                client and the remote python UDF
-                                                worker. Files suffixed with .zip
-                                                will be extracted and added to
-                                                PYTHONPATH. Comma (',') could be
-                                                used as the separator to specify
-                                                multiple files (e.g., --pyFiles
-                                                file:///tmp/myresource.zip,hdfs:
-                                                ///$namenode_address/myresource2
-                                                .zip).
-     -pym,--pyModule <pythonModule>             Python module with the program
-                                                entry point. This option must be
-                                                used in conjunction with
-                                                `--pyFiles`.
-     -pyreq,--pyRequirements <arg>              Specify a requirements.txt file
-                                                which defines the third-party
-                                                dependencies. These dependencies
-                                                will be installed and added to
-                                                the PYTHONPATH of the python UDF
-                                                worker. A directory which
-                                                contains the installation
-                                                packages of these dependencies
-                                                could be specified optionally.
-                                                Use '#' as the separator if the
-                                                optional parameter exists (e.g.,
-                                                --pyRequirements
-                                                file:///tmp/requirements.txt#fil
-                                                e:///tmp/cached_dir).
-     -s,--fromSavepoint <savepointPath>         Path to a savepoint to restore
-                                                the job from (for example
-                                                hdfs:///flink/savepoint-1537).
-     -sae,--shutdownOnAttachedExit              If the job is submitted in
-                                                attached mode, perform a
-                                                best-effort cluster shutdown
-                                                when the CLI is terminated
-                                                abruptly, e.g., in response to a
-                                                user interrupt, such as typing
-                                                Ctrl + C.
-Options for Generic CLI mode:
-     -D <property=value>   Allows specifying multiple generic configuration
-                           options. The available options can be found at
-                           https://nightlies.apache.org/flink/flink-docs-stable/
-                           ops/config.html
-     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
-                           also available with the "Application Mode".
-                           The name of the executor to be used for executing the
-                           given job, which is equivalent to the
-                           "execution.target" config option. The currently
-                           available executors are: "remote", "local",
-                           "kubernetes-session", "yarn-per-job", "yarn-session".
-     -t,--target <arg>     The deployment target for the given application,
-                           which is equivalent to the "execution.target" config
-                           option. For the "run" action the currently available
-                           targets are: "remote", "local", "kubernetes-session",
-                           "yarn-per-job", "yarn-session". For the
-                           "run-application" action the currently available
-                           targets are: "kubernetes-application".
-```
-
-For example: `-p 2` specifies that the job parallelism is `2`
-
-```bash
-bin/start-seatunnel-flink.sh \
--p 2 \
--c config-path
-```
-
-- Configurable parameters of `flink yarn-cluster`
-
-```bash
-Options for yarn-cluster mode:
-     -m,--jobmanager <arg>            Set to yarn-cluster to use YARN execution
-                                      mode.
-     -yid,--yarnapplicationId <arg>   Attach to running YARN session
-     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
-                                      sub-paths for high availability mode
-
-  Options for default mode:
-     -D <property=value>             Allows specifying multiple generic
-                                     configuration options. The available
-                                     options can be found at
-                                     https://nightlies.apache.org/flink/flink-do
-                                     cs-stable/ops/config.html
-     -m,--jobmanager <arg>           Address of the JobManager to which to
-                                     connect. Use this flag to connect to a
-                                     different JobManager than the one specified
-                                     in the configuration. Attention: This
-                                     option is respected only if the
-                                     high-availability configuration is NONE.
-     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
-                                     for high availability mode
-```
-
-For example: `-m yarn-cluster -ynm seatunnel` specifies that the job is running on `yarn`, and the name of `yarn WebUI` is `seatunnel`
-
-```bash
-bin/start-seatunnel-flink.sh \
--m yarn-cluster \
--ynm seatunnel \
--c config-path
-```
diff --git a/docs/en/flink/configuration/ConfigExamples.md b/docs/en/flink/configuration/ConfigExamples.md
deleted file mode 100644
index 433dc12..0000000
--- a/docs/en/flink/configuration/ConfigExamples.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Config Examples
-
-> Full configuration file example [Flink]
-
-An example is as follows:
-
-> In the configuration, the behavior comment beginning with `#`.
-
-```bash
-######
-###### This config file is a demonstration of streaming processing in seatunnel config
-######
-
-env {
-    # You can set flink configuration here
-    execution.parallelism = 1
-    #execution.checkpoint.interval = 10000
-    #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
-}
-
-source {
-    # This is a example source plugin **only for test and demonstrate the feature source plugin**
-    FakeSourceStream {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-
-    # If you would like to get more information about how to configure seatunnel and see full list of source plugins,
-    # please go to https://seatunnel.apache.org/docs/flink/configuration/source-plugins/Fake
-}
-
-transform {
-    sql {
-      sql = "select name,age from fake"
-    }
-
-    # If you would like to get more information about how to configure seatunnel and see full list of transform plugins,
-    # please go to https://seatunnel.apache.org/docs/flink/configuration/transform-plugins/Sql
-}
-
-sink {
-    ConsoleSink {}
-
-    # If you would like to get more information about how to configure seatunnel and see full list of sink plugins,
-    # please go to https://seatunnel.apache.org/docs/flink/configuration/sink-plugins/Console
-}
-```
-
-If you want to know the details of this format configuration, Please see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
diff --git a/docs/en/flink/configuration/sink-plugins/Console.md b/docs/en/flink/configuration/sink-plugins/Console.md
deleted file mode 100644
index 9ae4ee2..0000000
--- a/docs/en/flink/configuration/sink-plugins/Console.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Console
-
-> Sink plugin : Console [Flink]
-
-## Description
-
-Used for functional testing and debugging, the results will be output in the stdout tab of taskManager
-
-## Options
-
-| name           | type   | required | default value |
-|----------------|--------| -------- |---------------|
-| limit          | int    | no       | INT_MAX       |
-| common-options | string | no       | -             |
-
-### limit [int]
-
-limit console result lines
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-## Examples
-
-```bash
-ConsoleSink{}
-```
-
-## Note
-
-Flink's console output is in flink's WebUI
diff --git a/docs/en/flink/configuration/sink-plugins/Doris.md b/docs/en/flink/configuration/sink-plugins/Doris.md
deleted file mode 100644
index 6b69a3d..0000000
--- a/docs/en/flink/configuration/sink-plugins/Doris.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Doris
-
-> Sink plugin: Doris [Flink]
-
-### Description
-
-Write Data to a Doris Table.
-
-### Options
-
-| name | type | required | default value | engine |
-| --- | --- | --- | --- | --- |
-| fenodes | string | yes | - | Flink |
-| database | string | yes | - | Flink  |
-| table | string | yes | - | Flink  |
-| user	 | string | yes | - | Flink  |
-| password	 | string | yes | - | Flink  |
-| batch_size	 | int | no |  100 | Flink  |
-| interval	 | int | no |1000 | Flink |
-| max_retries	 | int | no | 1 | Flink|
-| doris.*	 | - | no | - | Flink  |
-| parallelism | int | no  | - |Flink|
-
-##### fenodes [string]
-
-Doris FE http address
-
-##### database [string]
-
-Doris database name
-
-##### table [string]
-
-Doris table name
-
-##### user [string]
-
-Doris username
-
-##### password [string]
-
-Doris password
-
-##### batch_size [int]
-
-Maximum number of lines in a single write Doris,default value is 5000.
-
-##### interval [int]
-
-The flush interval millisecond, after which the asynchronous thread will write the data in the cache to Doris.Set to 0 to turn off periodic writing.
-
-Default value :5000
-
-##### max_retries [int]
-
-Number of retries after writing Doris failed
-
-##### doris.* [string]
-
-The doris stream load parameters.you can use 'doris.' prefix + stream_load properties. eg:doris.column_separator' = ','
-[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
-
-### parallelism [Int]
-
-The parallelism of an individual operator, for DorisSink
-
-### Examples
-
-```
-DorisSink {
-	 fenodes = "127.0.0.1:8030"
-	 database = database
-	 table = table
-	 user = root
-	 password = password
-	 batch_size = 1
-	 doris.column_separator="\t"
-     doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
-}
-```
diff --git a/docs/en/flink/configuration/sink-plugins/Elasticsearch.md b/docs/en/flink/configuration/sink-plugins/Elasticsearch.md
deleted file mode 100644
index a63cc91..0000000
--- a/docs/en/flink/configuration/sink-plugins/Elasticsearch.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Elasticsearch
-
-> Sink plugin : Elasticsearch [Flink]
-
-## Description
-
-Output data to ElasticSearch
-
-## Options
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| hosts             | array  | yes      | -             |
-| index_type        | string | no       | log           |
-| index_time_format | string | no       | yyyy.MM.dd    |
-| index             | string | no       | seatunnel     |
-| common-options    | string | no       | -             |
-| parallelism       | int    | no       | -             |
-
-### hosts [array]
-
-`Elasticsearch` cluster address, the format is `host:port` , allowing multiple hosts to be specified. Such as `["host1:9200", "host2:9200"]` .
-
-### index_type [string]
-
-Elasticsearch index type
-
-### index_time_format [string]
-
-When the format in the `index` parameter is `xxxx-${now}` , `index_time_format` can specify the time format of the `index` name, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
-
-| Symbol | Description        |
-| ------ | ------------------ |
-| y      | Year               |
-| M      | Month              |
-| d      | Day of month       |
-| H      | Hour in day (0-23) |
-| m      | Minute in hour     |
-| s      | Second in minute   |
-
-See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
-
-### index [string]
-
-Elasticsearch `index` name. If you need to generate an `index` based on time, you can specify a time variable, such as `seatunnel-${now}` . `now` represents the current data processing time.
-
-### parallelism [`Int`]
-
-The parallelism of an individual operator, data source, or data sink
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-## Examples
-
-```bash
-elasticsearch {
-    hosts = ["localhost:9200"]
-    index = "seatunnel"
-}
-```
-
-> Write the result to the index of the `Elasticsearch` cluster named `seatunnel`
diff --git a/docs/en/flink/configuration/sink-plugins/File.md b/docs/en/flink/configuration/sink-plugins/File.md
deleted file mode 100644
index ba41462..0000000
--- a/docs/en/flink/configuration/sink-plugins/File.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# File
-
-> Sink plugin : File [Flink]
-
-## Description
-
-Write data to the file system
-
-## Options
-
-| name              | type   | required | default value  |
-|-------------------|--------| -------- |----------------|
-| format            | string | yes      | -              |
-| path              | string | yes      | -              |
-| path_time_format  | string | no       | yyyyMMddHHmmss |
-| write_mode        | string | no       | -              |
-| common-options    | string | no       | -              |
-| parallelism       | int    | no       | -              |
-| rollover_interval | long   | no       | 1              |
-| max_part_size     | long   | no       | 1024          |
-| prefix            | string | no       | seatunnel      |
-| suffix            | string | no       | .ext           |
-
-### format [string]
-
-Currently, `csv` , `json` , and `text` are supported. The streaming mode currently only supports `text`
-
-### path [string]
-
-The file path is required. The `hdfs file` starts with `hdfs://` , and the `local file` starts with `file://`,
-we can add the variable `${now}` or `${uuid}` in the path, like `hdfs:///test_${uuid}_${now}.txt`, 
-`${now}` represents the current time, and its format can be defined by specifying the option `path_time_format`
-
-### path_time_format [string]
-
-When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
-
-| Symbol | Description        |
-| ------ | ------------------ |
-| y      | Year               |
-| M      | Month              |
-| d      | Day of month       |
-| H      | Hour in day (0-23) |
-| m      | Minute in hour     |
-| s      | Second in minute   |
-
-See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
-
-### write_mode [string]
-
-- NO_OVERWRITE
-
-    - No overwrite, there is an error in the path
-
-- OVERWRITE
-
-  - Overwrite, delete and then write if the path exists
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-### parallelism [`Int`]
-
-The parallelism of an individual operator, for FileSink
-
-### rollover_interval [long]
-
-The new file part rollover interval, unit min.
-
-### max_part_size [long]
-
-The max size of each file part, unit MB.
-
-### prefix [string]
-
-The prefix of each file part.
-
-### suffix [string]
-
-The suffix of each file part.
-
-## Examples
-
-```bash
-  FileSink {
-    format = "json"
-    path = "hdfs://localhost:9000/flink/output/"
-    write_mode = "OVERWRITE"
-  }
-```
diff --git a/docs/en/flink/configuration/sink-plugins/Jdbc.md b/docs/en/flink/configuration/sink-plugins/Jdbc.md
deleted file mode 100644
index e212cdc..0000000
--- a/docs/en/flink/configuration/sink-plugins/Jdbc.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Jdbc
-
-> Sink plugin : Jdbc [Flink]
-
-## Description
-
-Write data through jdbc
-
-## Options
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| driver            | string | yes      | -             |
-| url               | string | yes      | -             |
-| username          | string | yes      | -             |
-| password          | string | no       | -             |
-| query             | string | yes      | -             |
-| batch_size        | int    | no       | -             |
-| source_table_name | string | yes      | -             |
-| common-options    | string | no       | -             |
-| parallelism       | int    | no       | -             |
-
-### driver [string]
-
-Driver name, such as `com.mysql.cj.jdbc.Driver` for MySQL.
-
-Warn: for license compliance, you have to provide MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for Standalone.
-
-### url [string]
-
-The URL of the JDBC connection. Such as: `jdbc:mysql://localhost:3306/test`
-
-### username [string]
-
-username
-
-### password [string]
-
-password
-
-### query [string]
-
-Insert statement
-
-### batch_size [int]
-
-Number of writes per batch
-
-### parallelism [int]
-
-The parallelism of an individual operator, for JdbcSink.
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-## Examples
-
-```bash
-   JdbcSink {
-     source_table_name = fake
-     driver = com.mysql.jdbc.Driver
-     url = "jdbc:mysql://localhost/test"
-     username = root
-     query = "insert into test(name,age) values(?,?)"
-     batch_size = 2
-   }
-```
diff --git a/docs/en/flink/configuration/source-plugins/Fake.md b/docs/en/flink/configuration/source-plugins/Fake.md
deleted file mode 100644
index efd5e7f..0000000
--- a/docs/en/flink/configuration/source-plugins/Fake.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Fake
-
-> Source plugin : FakeSource [Flink]  
-> Source plugin : FakeSourceStream [Flink]
-
-## Description
-
-`Fake Source` is mainly used to automatically generate data. The data has only two columns. The first column is of `String type` and the content is a random one from `["Gary", "Ricky Huo", "Kid Xiong"]` . The second column is of `Long type` , which is The current 13-bit timestamp is used as input for functional verification and testing of `seatunnel` .
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| parallelism    | `Int`  | no       | -             |
-| common-options |`string`| no       | -             |
-
-### parallelism [`Int`]
-
-The parallelism of an individual operator, for Fake Source Stream
-
-### common options [`string`]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-source {
-    FakeSourceStream {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-}
-```
-
-```bash
-source {
-    FakeSource {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-}
-```
diff --git a/docs/en/flink/configuration/source-plugins/Jdbc.md b/docs/en/flink/configuration/source-plugins/Jdbc.md
deleted file mode 100644
index ed2a138..0000000
--- a/docs/en/flink/configuration/source-plugins/Jdbc.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Jdbc
-
-> Source plugin : Jdbc [Flink]
-
-## Description
-
-Read data through jdbc
-
-## Options
-
-| name                  | type   | required | default value |
-|-----------------------|--------| -------- | ------------- |
-| driver                | string | yes      | -             |
-| url                   | string | yes      | -             |
-| username              | string | yes      | -             |
-| password              | string | no       | -             |
-| query                 | string | yes      | -             |
-| fetch_size            | int    | no       | -             |
-| partition_column      | string | no       | -             |
-| partition_upper_bound | long   | no       | -             |
-| partition_lower_bound | long   | no       | -             |
-| common-options        | string | no       | -             |
-| parallelism           | int    | no       | -             |
-
-### driver [string]
-
-Driver name, such as `com.mysql.cj.jdbc.Driver` for MySQL.
-
-Warn: for license compliance, you have to provide MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for Standalone.
-
-### url [string]
-
-The URL of the JDBC connection. Such as: `jdbc:mysql://localhost:3306/test`
-
-### username [string]
-
-username
-
-### password [string]
-
-password
-
-### query [string]
-
-Query statement
-
-### fetch_size [int]
-
-fetch size
-
-### parallelism [int]
-
-The parallelism of an individual operator, for JdbcSource.
-
-### partition_column [string]
-
-The column name for parallelism's partition, only support numeric type.
-
-### partition_upper_bound [long]
-
-The partition_column max value for scan, if not set SeaTunnel will query database get max value.
-
-### partition_lower_bound [long]
-
-The partition_column min value for scan, if not set SeaTunnel will query database get min value.
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-JdbcSource {
-    driver = com.mysql.jdbc.Driver
-    url = "jdbc:mysql://localhost/test"
-    username = root
-    query = "select * from test"
-}
-```
diff --git a/docs/en/flink/configuration/source-plugins/Socket.md b/docs/en/flink/configuration/source-plugins/Socket.md
deleted file mode 100644
index 8d0c867..0000000
--- a/docs/en/flink/configuration/source-plugins/Socket.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Socket
-
-> Source plugin : Socket [Flink]
-
-## Description
-
-Socket as data source
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| host           | string | no       | localhost     |
-| port           | int    | no       | 9999          |
-| common-options | string | no       | -             |
-
-### host [string]
-
-socket server hostname
-
-### port [int]
-
-socket server port
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-source {
-  SocketStream{
-        result_table_name = "socket"
-        field_name = "info"
-  }
-}
-```
diff --git a/docs/en/flink/configuration/source-plugins/source-plugin.md b/docs/en/flink/configuration/source-plugins/source-plugin.md
deleted file mode 100644
index 4afcf42..0000000
--- a/docs/en/flink/configuration/source-plugins/source-plugin.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Common Options
-
-> Source Common Options [Flink]
-
-## Source Plugin common parameters
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| result_table_name | string | no       | -             |
-| field_name        | string | no       | -             |
-
-### result_table_name [string]
-
-When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` ;
-
-When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The data set `(dataStream/dataset)` registered here can be directly accessed by other plugins by specifying `source_table_name` .
-
-### field_name [string]
-
-When the data is obtained from the upper-level plug-in, you can specify the name of the obtained field, which is convenient for use in subsequent sql plugins.
-
-## Examples
-
-```bash
-source {
-    FakeSourceStream {
-      result_table_name = "fake"
-      field_name = "name,age"
-    }
-}
-```
-
-> The result of the data source `FakeSourceStream` will be registered as a temporary table named `fake` . This temporary table can be used by any `Transform` or `Sink` plugin by specifying `source_table_name` .
->
-> `field_name` names the two columns of the temporary table `name` and `age` respectively.
diff --git a/docs/en/flink/configuration/transform-plugins/Split.md b/docs/en/flink/configuration/transform-plugins/Split.md
deleted file mode 100644
index f10c5a0..0000000
--- a/docs/en/flink/configuration/transform-plugins/Split.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Split
-
-> Transform plugin : Split [Flink]
-
-## Description
-
-A string cutting function is defined, which is used to split the specified field in the Sql plugin.
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| separator      | string | no       | ,             |
-| fields         | array  | yes      | -             |
-| common-options | string | no       | -             |
-
-### separator [string]
-
-The specified delimiter, the default is `,`
-
-### fields [array]
-
-The name of each field after split
-
-### common options [string]
-
-Transform plugin common parameters, please refer to [Transform Plugin](./transform-plugin.md) for details
-
-## Examples
-
-```bash
-  # This just created a udf called split
-  Split{
-    separator = "#"
-    fields = ["name","age"]
-  }
-  # Use the split function (confirm that the fake table exists)
-  sql {
-    sql = "select * from (select info,split(info) as info_row from fake) t1"
-  }
-```
diff --git a/docs/en/flink/configuration/transform-plugins/Sql.md b/docs/en/flink/configuration/transform-plugins/Sql.md
deleted file mode 100644
index 80f1aea..0000000
--- a/docs/en/flink/configuration/transform-plugins/Sql.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Sql
-
-> Transform plugin : SQL [Flink]
-
-## Description
-
-Use SQL to process data, use `flink sql` grammar, and support its various `udf`
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| sql            | string | yes      | -             |
-| common-options | string | no       | -             |
-
-### common options [string]
-
-Transform plugin common parameters, please refer to [Transform Plugin](./transform-plugin.md) for details
-
-## Examples
-
-```bash
-sql {
-    sql = "select name, age from fake"
-}
-```
diff --git a/docs/en/flink/configuration/transform-plugins/transform-plugin.md b/docs/en/flink/configuration/transform-plugins/transform-plugin.md
deleted file mode 100644
index fd42d87..0000000
--- a/docs/en/flink/configuration/transform-plugins/transform-plugin.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# Common Options
-
-> Transform Common Options [Flink]
-
-## Transform Plugin common parameters
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| source_table_name | string | no       | -             |
-| result_table_name | string | no       | -             |
-| field_name        | string | no       | -             |
-
-### source_table_name [string]
-
-When `source_table_name` is not specified, the current plugin is processing the data set `(dataStream/dataset)` output by the previous plugin in the configuration file;
-
-When `source_table_name` is specified, the current plugin is processing the data set corresponding to this parameter.
-
-### result_table_name [string]
-
-When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table (table);
-
-When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The data set `(dataStream/dataset)` registered here can be directly accessed by other plugins by specifying `source_table_name` .
-
-### field_name [string]
-
-When the data is obtained from the upper-level plugin, you can specify the name of the obtained field, which is convenient for use in subsequent sql plugins.
-
-## Examples
-
-```bash
-source {
-    FakeSourceStream {
-      result_table_name = "fake_1"
-      field_name = "name,age"
-    }
-    FakeSourceStream {
-      result_table_name = "fake_2"
-      field_name = "name,age"
-    }
-}
-
-transform {
-    sql {
-      source_table_name = "fake_1"
-      sql = "select name from fake_1"
-      result_table_name = "fake_name"
-    }
-}
-```
-
-> If `source_table_name` is not specified, the sql plugin will process the data of `fake_2` , and if it is set to `fake_1` , it will process the data of `fake_1` .
diff --git a/docs/en/flink/deployment.md b/docs/en/flink/deployment.md
deleted file mode 100644
index 9d0f739..0000000
--- a/docs/en/flink/deployment.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Deployment and run
-
-`seatunnel For Flink` relies on the `Java` runtime environment and `Flink` . For detailed `seatunnel` installation steps, refer to [installing seatunnel](./installation.md)
-
-The following focuses on how different platforms run:
-
-> First edit the `config/seatunnel-env.sh` in the `seatunnel` directory after decompression, and specify the required environment configuration `FLINK_HOME`
-
-## Run seatunnel on Flink Standalone cluster
-
-```bash
-bin/start-seatunnel-flink.sh \
---config config-path
-
-# -p 2 specifies that the parallelism of flink job is 2. You can also specify more parameters, use flink run -h to view
-bin/start-seatunnel-flink.sh \
--p 2 \
---config config-path
-```
-
-## Run seatunnel on Yarn cluster
-
-```bash
-bin/start-seatunnel-flink.sh \
--m yarn-cluster \
---config config-path
-
-# -ynm seatunnel specifies the name displayed in the yarn webUI as seatunnel, you can also specify more parameters, use flink run -h to view
-bin/start-seatunnel-flink.sh \
--m yarn-cluster \
--ynm seatunnel \
---config config-path
-```
-
-Refer to: [Flink Yarn Setup](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/yarn)
diff --git a/docs/en/flink/installation.md b/docs/en/flink/installation.md
deleted file mode 100644
index 23cbe3e..0000000
--- a/docs/en/flink/installation.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Download and install
-
-## Download
-
-```bash
-https://github.com/apache/incubator-seatunnel/releases
-```
-
-## Environmental preparation
-
-### Prepare JDK1.8
-
-seatunnel relies on the JDK1.8 runtime environment.
-
-### Get Flink ready
-
-Please [download Flink](https://flink.apache.org/downloads.html) first, please choose Flink version >= 1.9.0. The download is complete to [install flink](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/standalone/overview)
-
-### Install seatunnel
-
-Download the seatunnel installation package and unzip
-
-```bash
-wget https://github.com/apache/incubator-seatunnel/releases/download/v<version>/seatunnel-<version>.zip -O seatunnel-<version>.zip
-unzip seatunnel-<version>.zip
-ln -s seatunnel-<version> seatunnel
-```
-
-Without any complicated installation and configuration steps, please refer to [Quick Start](./quick-start.md) for the usage of seatunnel, and refer to Configuration for [configuration](./configuration).
-
-If you want to deploy `seatunnel` to run on `Flink Standalone/Yarn cluster` , please refer to [seatunnel deployment](./deployment.md)
diff --git a/docs/en/flink/quick-start.md b/docs/en/flink/quick-start.md
deleted file mode 100644
index d1a66f5..0000000
--- a/docs/en/flink/quick-start.md
+++ /dev/null
@@ -1,113 +0,0 @@
-# Quick start
-
-> Let's take an application that receives data through a `socket` , divides the data into multiple fields, and outputs the processing results as an example to quickly show how to use `seatunnel` .
-
-## Step 1: Prepare Flink runtime environment
-
-> If you are familiar with `Flink` or have prepared the `Flink` operating environment, you can ignore this step. `Flink` does not require any special configuration.
-
-Please [download Flink](https://flink.apache.org/downloads.html) first, please choose Flink version >= 1.9.0. The download is complete to [install Flink](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/standalone/overview/)
-
-## Step 2: Download seatunnel
-
-Enter the [seatunnel installation package](https://seatunnel.apache.org/download) download page and download the latest version of `seatunnel-<version>-bin.tar.gz`
-
-Or download the specified version directly (take 2.1.0 as an example):
-
-```bash
-wget https://downloads.apache.org/incubator/seatunnel/2.1.0/apache-seatunnel-incubating-2.1.0-bin.tar.gz -O seatunnel-2.1.0.tar.gz
-```
-
-After downloading, extract:
-
-```bash
-tar -xvzf seatunnel-<version>.tar.gz
-ln -s seatunnel-<version> seatunnel
-```
-
-## Step 3: Configure seatunnel
-
-- Edit `config/seatunnel-env.sh` , specify the necessary environment configuration such as `FLINK_HOME` (the directory after `Flink` downloaded and decompressed in Step 1)
-
-- Edit `config/application.conf` , it determines the way and logic of data input, processing, and output after `seatunnel` is started.
-
-```bash
-env {
-  # You can set flink configuration here
-  execution.parallelism = 1
-  #execution.checkpoint.interval = 10000
-  #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
-}
-
-source {
-    SocketStream{
-          result_table_name = "fake"
-          field_name = "info"
-    }
-}
-
-transform {
-  Split{
-    separator = "#"
-    fields = ["name","age"]
-  }
-  sql {
-    sql = "select * from (select info,split(info) as info_row from fake) t1"
-  }
-}
-
-sink {
-  ConsoleSink {}
-}
-
-```
-
-## Step 4: Start the `netcat server` to send data
-
-```bash
-nc -l -p 9999
-```
-
-## Step 5: start `seatunnel`
-
-```bash
-cd seatunnel
-./bin/start-seatunnel-flink.sh \
---config ./config/application.conf
-```
-
-## Step 6: Input at the `nc` terminal
-
-```bash
-xg#1995
-```
-
-It is printed in the TaskManager Stdout log of `flink WebUI`:
-
-```bash
-xg#1995,xg,1995
-```
-
-## Summary
-
-If you want to know more `seatunnel` configuration examples, please refer to:
-
-- Configuration example 1: [Streaming streaming computing](https://github.com/apache/incubator-seatunnel/blob/dev/config/flink.streaming.conf.template)
-
-The above configuration is the default `[streaming configuration template]` , which can be run directly, the command is as follows:
-
-```bash
-cd seatunnel
-./bin/start-seatunnel-flink.sh \
---config ./config/flink.streaming.conf.template
-```
-
-- Configuration example 2: [Batch offline batch processing](https://github.com/apache/incubator-seatunnel/blob/dev/config/flink.batch.conf.template)
-
-The above configuration is the default `[offline batch configuration template]` , which can be run directly, the command is as follows:
-
-```bash
-cd seatunnel
-./bin/start-seatunnel-flink.sh \
---config ./config/flink.batch.conf.template
-```
diff --git a/docs/en/intro/about.md b/docs/en/intro/about.md
new file mode 100644
index 0000000..c4e2536
--- /dev/null
+++ b/docs/en/intro/about.md
@@ -0,0 +1,73 @@
+---
+sidebar_position: 1
+---
+
+# About SeaTunnel
+
+<img src="https://seatunnel.apache.org/image/logo.png" alt="seatunnel logo" width="200px" height="200px" align="right" />
+
+[![Slack](https://img.shields.io/badge/slack-%23seatunnel-4f8eba?logo=slack)](https://join.slack.com/t/apacheseatunnel/shared_invite/zt-123jmewxe-RjB_DW3M3gV~xL91pZ0oVQ)
+[![Twitter Follow](https://img.shields.io/twitter/follow/ASFSeaTunnel.svg?label=Follow&logo=twitter)](https://twitter.com/ASFSeaTunnel)
+
+SeaTunnel is a very easy-to-use ultra-high-performance distributed data integration platform that supports real-time
+synchronization of massive data. It can synchronize tens of billions of data stably and efficiently every day, and has
+been used in the production of nearly 100 companies.
+
+## Use Scenarios
+
+- Mass data synchronization
+- Mass data integration
+- ETL with massive data
+- Mass data aggregation
+- Multi-source data processing
+
+## Features
+
+- Easy to use, flexible configuration, low code development
+- Real-time streaming
+- Offline multi-source data analysis
+- High-performance, massive data processing capabilities
+- Modular and plug-in mechanism, easy to extend
+- Support data processing and aggregation by SQL
+- Support Spark structured streaming
+- Support Spark 2.x
+
+## Workflow
+
+![seatunnel-workflow.svg](../images/seatunnel-workflow.svg)
+
+```text
+Source[Data Source Input] -> Transform[Data Processing] -> Sink[Result Output]
+```
+
+The data processing pipeline consists of multiple filters to meet a variety of data processing needs. If you are
+accustomed to SQL, you can also construct a data processing pipeline directly with SQL, which is simple and efficient.
+Currently, the list of filters supported by SeaTunnel is still being expanded. Furthermore, you can develop your own data
+processing plugin, because the whole system is easy to extend.
+
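+As a rough sketch, a SeaTunnel config file mirrors this workflow with three sections; the plugin names below (`FakeSourceStream`, `sql`, `ConsoleSink`) are just one possible combination:
+
+```bash
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+        sql = "select name, age from fake"
+    }
+}
+
+sink {
+    ConsoleSink {}
+}
+```
+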
+## Connector
+
+- Input plugin Fake, File, Hdfs, Kafka, Druid, InfluxDB, S3, Socket, self-developed Input plugin
+
+- Filter plugin Add, Checksum, Convert, Date, Drop, Grok, Json, Kv, Lowercase, Remove, Rename, Repartition, Replace,
+  Sample, Split, Sql, Table, Truncate, Uppercase, Uuid, Self-developed Filter plugin
+
+- Output plugin Elasticsearch, File, Hdfs, Jdbc, Kafka, Druid, InfluxDB, Mysql, S3, Stdout, self-developed Output plugin
+
+## Who Uses SeaTunnel
+
+SeaTunnel has many users; you can find more information about them on the [users](https://seatunnel.apache.org/user) page.
+
+## Landscapes
+
+<p align="center">
+<br/><br/>
+<img src="https://landscape.cncf.io/images/left-logo.svg" width="150" alt=""/>&nbsp;&nbsp;<img src="https://landscape.cncf.io/images/right-logo.svg" width="200" alt=""/>
+<br/><br/>
+SeaTunnel enriches the <a href="https://landscape.cncf.io/landscape=observability-and-analysis&license=apache-license-2-0">CNCF CLOUD NATIVE Landscape</a>.
+</p>
+
+## What's More
+
+<!-- markdown-link-check-disable-next-line -->
+You can see [Quick Start](/category/start) for the next step.
diff --git a/docs/en/intro/history.md b/docs/en/intro/history.md
new file mode 100644
index 0000000..1d62ea6
--- /dev/null
+++ b/docs/en/intro/history.md
@@ -0,0 +1,15 @@
+---
+sidebar_position: 3
+---
+
+# History of SeaTunnel
+
+SeaTunnel was formerly named Waterdrop.
+
+## Rename to SeaTunnel
+
+This project was renamed to SeaTunnel on Oct 12th, 2021.
+
+## Enter Apache Software Foundation’s Incubator
+
+SeaTunnel joined the Apache Software Foundation’s Incubator program on Dec 9th, 2021.
diff --git a/docs/en/intro/why.md b/docs/en/intro/why.md
new file mode 100644
index 0000000..d6de2e2
--- /dev/null
+++ b/docs/en/intro/why.md
@@ -0,0 +1,13 @@
+---
+sidebar_position: 2
+---
+
+# Why SeaTunnel
+
+SeaTunnel will do its best to solve the problems that may be encountered in the synchronization of massive data:
+
+- Data loss and duplication
+- Task accumulation and delay
+- Low throughput
+- Long cycle to be applied in the production environment
+- Lack of application running status monitoring
diff --git a/docs/en/introduction.md b/docs/en/introduction.md
deleted file mode 100644
index 7cfff2f..0000000
--- a/docs/en/introduction.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: Introduction
-sidebar_position: 1
----
-
-# SeaTunnel
-
-<img src="https://seatunnel.apache.org/image/logo.png" alt="seatunnel logo" width="200px" height="200px" align="right" />
-
-[![Slack](https://img.shields.io/badge/slack-%23seatunnel-4f8eba?logo=slack)](https://join.slack.com/t/apacheseatunnel/shared_invite/zt-123jmewxe-RjB_DW3M3gV~xL91pZ0oVQ)
-[![Twitter Follow](https://img.shields.io/twitter/follow/ASFSeaTunnel.svg?label=Follow&logo=twitter)](https://twitter.com/ASFSeaTunnel)
-
----
-
-SeaTunnel was formerly named Waterdrop , and renamed SeaTunnel since October 12, 2021.
-
----
-
-SeaTunnel is a very easy-to-use ultra-high-performance distributed data integration platform that supports real-time
-synchronization of massive data. It can synchronize tens of billions of data stably and efficiently every day, and has
-been used in the production of nearly 100 companies.
-
-## Why do we need SeaTunnel
-
-SeaTunnel will do its best to solve the problems that may be encountered in the synchronization of massive data:
-
-- Data loss and duplication
-- Task accumulation and delay
-- Low throughput
-- Long cycle to be applied in the production environment
-- Lack of application running status monitoring
-
-## SeaTunnel use scenarios
-
-- Mass data synchronization
-- Mass data integration
-- ETL with massive data
-- Mass data aggregation
-- Multi-source data processing
-
-## Features of SeaTunnel
-
-- Easy to use, flexible configuration, low code development
-- Real-time streaming
-- Offline multi-source data analysis
-- High-performance, massive data processing capabilities
-- Modular and plug-in mechanism, easy to extend
-- Support data processing and aggregation by SQL
-- Support Spark structured streaming
-- Support Spark 2.x
-
-## Workflow of SeaTunnel
-
-![seatunnel-workflow.svg](images/seatunnel-workflow.svg)
-
-```
-Source[Data Source Input] -> Transform[Data Processing] -> Sink[Result Output]
-```
-
-The data processing pipeline is constituted by multiple filters to meet a variety of data processing needs. If you are
-accustomed to SQL, you can also directly construct a data processing pipeline by SQL, which is simple and efficient.
-Currently, the filter list supported by SeaTunnel is still being expanded. Furthermore, you can develop your own data
-processing plug-in, because the whole system is easy to expand.
-
-## Plugins supported by SeaTunnel
-
-- Input plugin Fake, File, Hdfs, Kafka, Druid, InfluxDB, S3, Socket, self-developed Input plugin
-
-- Filter plugin Add, Checksum, Convert, Date, Drop, Grok, Json, Kv, Lowercase, Remove, Rename, Repartition, Replace,
-  Sample, Split, Sql, Table, Truncate, Uppercase, Uuid, Self-developed Filter plugin
-
-- Output plugin Elasticsearch, File, Hdfs, Jdbc, Kafka, Druid, InfluxDB, Mysql, S3, Stdout, self-developed Output plugin
-
-## Environmental dependency
-
-1. java runtime environment, java >= 8
-
-2. If you want to run SeaTunnel in a cluster environment, any of the following Spark cluster environments is usable:
-
-- Spark on Yarn
-- Spark Standalone
-
-If the data volume is small, or the goal is merely for functional verification, you can also start in local mode without
-a cluster environment, because SeaTunnel supports standalone operation. Note: SeaTunnel 2.0 supports running on Spark
-and Flink.
-
-## Downloads
-
-Download address for run-directly software package :https://github.com/apache/incubator-seatunnel/releases
-
-## Quick start
-
-**Spark**
-https://seatunnel.apache.org/docs/spark/quick-start
-
-**Flink**
-https://seatunnel.apache.org/docs/flink/quick-start
-
-Detailed documentation on SeaTunnel
-https://seatunnel.apache.org/docs/introduction
-
-## Application practice cases
-
-- Weibo, Value-added Business Department Data Platform
-
-Weibo business uses an internal customized version of SeaTunnel and its sub-project Guardian for SeaTunnel On Yarn task
-monitoring for hundreds of real-time streaming computing tasks.
-
-- Sina, Big Data Operation Analysis Platform
-
-Sina Data Operation Analysis Platform uses SeaTunnel to perform real-time and offline analysis of data operation and
-maintenance for Sina News, CDN and other services, and write it into Clickhouse.
-
-- Sogou, Sogou Qiqian System
-
-Sogou Qiqian System takes SeaTunnel as an ETL tool to help establish a real-time data warehouse system.
-
-- Qutoutiao, Qutoutiao Data Center
-
-Qutoutiao Data Center uses SeaTunnel to support mysql to hive offline ETL tasks, real-time hive to clickhouse backfill
-technical support, and well covers most offline and real-time tasks needs.
-
-- Yixia Technology, Yizhibo Data Platform
-
-- Yonghui Superstores Founders' Alliance-Yonghui Yunchuang Technology, Member E-commerce Data Analysis Platform
-
-SeaTunnel provides real-time streaming and offline SQL computing of e-commerce user behavior data for Yonghui Life, a
-new retail brand of Yonghui Yunchuang Technology.
-
-- Shuidichou, Data Platform
-
-Shuidichou adopts SeaTunnel to do real-time streaming and regular offline batch processing on Yarn, processing 3~4T data
-volume average daily, and later writing the data to Clickhouse.
-
-- Tencent Cloud
-
-Collecting various logs from business services into Apache Kafka, some of the data in Apache Kafka is consumed and extracted through Seatunnel, and then store into Clickhouse.
-
-For more use cases, please refer to: https://seatunnel.apache.org/blog
-
-## Code of conduct
-
-This project adheres to the Contributor Covenant [code of conduct](https://www.apache.org/foundation/policies/conduct).
-By participating, you are expected to uphold this code. Please follow
-the [REPORTING GUIDELINES](https://www.apache.org/foundation/policies/conduct#reporting-guidelines) to report
-unacceptable behavior.
-
-## Developer
-
-Thanks to all developers!
-
-[![](https://opencollective.com/seatunnel/contributors.svg?width=666)](https://github.com/apache/incubator-seatunnel/graphs/contributors)
-
-## Contact Us
-
-* Mail list: **dev@seatunnel.apache.org**. Mail to `dev-subscribe@seatunnel.apache.org`, follow the reply to subscribe
-  the mail list.
-* Slack: https://join.slack.com/t/apacheseatunnel/shared_invite/zt-123jmewxe-RjB_DW3M3gV~xL91pZ0oVQ
-* Twitter: https://twitter.com/ASFSeaTunnel
-* [Bilibili](https://space.bilibili.com/1542095008) (for Chinese users)
-
-## Landscapes
-
-<p align="center">
-<br/><br/>
-<img src="https://landscape.cncf.io/images/left-logo.svg" width="150" alt=""/>&nbsp;&nbsp;<img src="https://landscape.cncf.io/images/right-logo.svg" width="200" alt=""/>
-<br/><br/>
-SeaTunnel enriches the <a href="https://landscape.cncf.io/landscape=observability-and-analysis&license=apache-license-2-0">CNCF CLOUD NATIVE Landscape.</a >
-</p >
diff --git a/docs/en/spark/commands/start-seatunnel-spark.sh.md b/docs/en/spark/commands/start-seatunnel-spark.sh.md
deleted file mode 100644
index fb78335..0000000
--- a/docs/en/spark/commands/start-seatunnel-spark.sh.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Command usage instructions
-
-> Command usage instructions [Spark]
-
-## seatunnel spark start command
-
-```bash
-bin/start-seatunnel-spark.sh
-```
-
-### usage instructions
-
-```bash
-bin/start-seatunnel-spark.sh \
--c config-path \
--m master \
--e deploy-mode \
--i city=beijing
-```
-
-- Use `-c` or `--config` to specify the path of the configuration file
-
-- Use `-m` or `--master` to specify the cluster manager
-
-- Use `-e` or `--deploy-mode` to specify the deployment mode
-
-- Use `-i` or `--variable` to specify the variables in the configuration file, you can configure multiple
-
-#### Use Cases
-
-```bash
-# Yarn client mode
-./bin/start-seatunnel-spark.sh \
---master yarn \
---deploy-mode client \
---config ./config/application.conf
-
-# Yarn cluster mode
-./bin/start-seatunnel-spark.sh \
---master yarn \
---deploy-mode cluster \
---config ./config/application.conf
-```
diff --git a/docs/en/spark/configuration/ConfigExamples.md b/docs/en/spark/configuration/ConfigExamples.md
deleted file mode 100644
index 0e558c1..0000000
--- a/docs/en/spark/configuration/ConfigExamples.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Config Examples
-
-> Complete configuration file example [Spark]
-
-- Configuration example 1: [Stream processing](https://github.com/apache/incubator-seatunnel/blob/dev/config/spark.streaming.conf.template)
-
-- Configuration example 2: [Batch offline processing](https://github.com/apache/incubator-seatunnel/blob/dev/config/spark.batch.conf.template) 
-
-If you want to know the details of this format configuration, Please see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
\ No newline at end of file
diff --git a/docs/en/spark/configuration/sink-plugins/Console.md b/docs/en/spark/configuration/sink-plugins/Console.md
deleted file mode 100644
index 3e88179..0000000
--- a/docs/en/spark/configuration/sink-plugins/Console.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Console
-
-> Sink plugin : Console [Spark]
-
-## Description
-
-Output data to standard output/terminal, which is often used for debugging, which makes it easy to observe the data.
-
-## Options
-
-| name           | type   | required | default value | engine                |
-| -------------- | ------ | -------- | ------------- | --------------------- |
-| limit          | number | no       | 100           | batch/spark streaming |
-| serializer     | string | no       | plain         | batch/spark streaming |
-| common-options | string | no       | -             | all streaming         |
-
-### limit [number]
-
-Limit the number of `rows` to be output, the legal range is `[-1, 2147483647]` , `-1` means that the output is up to `2147483647` rows
-
-### serializer [string]
-
-The format of serialization when outputting. Available serializers include: `json` , `plain`
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-## Examples
-
-```bash
-console {
-    limit = 10
-    serializer = "json"
-}
-```
-
-> Output 10 pieces of data in Json format
diff --git a/docs/en/spark/configuration/sink-plugins/Doris.md b/docs/en/spark/configuration/sink-plugins/Doris.md
deleted file mode 100644
index 9ef5cd4..0000000
--- a/docs/en/spark/configuration/sink-plugins/Doris.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Doris
-
-> Sink plugin: Doris [Spark]
-
-### Description:
-Use Spark Batch Engine ETL Data to Doris.
-
-### Options
-| name | type | required | default value | engine |
-| --- | --- | --- | --- | --- |
-| fenodes | string | yes | - | Spark |
-| database | string | yes | - | Spark |
-| table	 | string | yes | - | Spark |
-| user	 | string | yes | - | Spark |
-| password	 | string | yes | - | Spark |
-| batch_size	 | int | yes | 100 | Spark |
-| doris.*	 | string | no | - | Spark |
-
-##### fenodes [string]
-Doris FE address:8030
-
-##### database [string]
-Doris target database name
-##### table [string]
-Doris target table name
-##### user [string]
-Doris user name
-##### password [string]
-Doris user's password
-##### batch_size [string]
-Doris number of submissions per batch
-
-Default value:5000
-
-##### doris. [string]
-Doris stream_load properties,you can use 'doris.' prefix + stream_load properties
-[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
-
-### Examples
-
-```
-Doris {
-            fenodes="0.0.0.0:8030"
-            database="test"
-            table="user"
-            user="doris"
-            password="doris"
-            batch_size=10000
-            doris.column_separator="\t"
-            doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
-      
-      }
-```
diff --git a/docs/en/spark/configuration/sink-plugins/File.md b/docs/en/spark/configuration/sink-plugins/File.md
deleted file mode 100644
index 077db05..0000000
--- a/docs/en/spark/configuration/sink-plugins/File.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# File
-
-> Sink plugin : File [Spark]
-
-## Description
-
-Output data to local or hdfs file.
-
-## Options
-
-| name             | type   | required | default value  |
-| ---------------- | ------ | -------- | -------------- |
-| options          | object | no       | -              |
-| partition_by     | array  | no       | -              |
-| path             | string | yes      | -              |
-| path_time_format | string | no       | yyyyMMddHHmmss |
-| save_mode        | string | no       | error          |
-| serializer       | string | no       | json           |
-| common-options   | string | no       | -              |
-
-### options [object]
-
-Custom parameters
-
-### partition_by [array]
-
-Partition data based on selected fields
-
-### path [string]
-
-The file path is required. The `hdfs file` starts with `hdfs://` , and the `local file` starts with `file://`,
-we can add the variable `${now}` or `${uuid}` in the path, like `hdfs:///test_${uuid}_${now}.txt`, 
-`${now}` represents the current time, and its format can be defined by specifying the option `path_time_format`
-
-### path_time_format [string]
-
-When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
-
-| Symbol | Description        |
-| ------ | ------------------ |
-| y      | Year               |
-| M      | Month              |
-| d      | Day of month       |
-| H      | Hour in day (0-23) |
-| m      | Minute in hour     |
-| s      | Second in minute   |
-
-See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
-
-### save_mode [string]
-
-Storage mode, currently supports `overwrite` , `append` , `ignore` and `error` . For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
-
-### serializer [string]
-
-Serialization method, currently supports `csv` , `json` , `parquet` , `orc` and `text`
-
-### common options [string]
-
-Sink plugin common parameters, please refer to [Sink Plugin](./sink-plugin.md) for details
-
-## Example
-
-```bash
-file {
-    path = "file:///var/logs"
-    serializer = "text"
-}
-```
diff --git a/docs/en/spark/configuration/sink-plugins/Kafka.md b/docs/en/spark/configuration/sink-plugins/Kafka.md
deleted file mode 100644
index 80b8496..0000000
--- a/docs/en/spark/configuration/sink-plugins/Kafka.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Kafka
-
-> Sink plugin: Kafka [Spark]
-
-## Description
-
-Write Rows to a Kafka topic.
-
-## Options
-
-| name | type | required | default value | engine |
-| --- | --- | --- | --- | --- |
-| producer.bootstrap.servers | string | yes | - | all streaming |
-| topic | string | yes | - | all streaming |
-| producer.* | string | no | - | all streaming |
-
-### producer.bootstrap.servers [string]
-
-Kafka Brokers List
-
-### topic [string]
-
-Kafka Topic
-
-### producer [string]
-
-In addition to the above parameters that must be specified for the producer client, you can also specify multiple kafka's producer parameters described in [producerconfigs](http://kafka.apache.org/10/documentation.html#producerconfigs)
-
-The way to specify parameters is to use the prefix "producer" before the parameter. For example, `request.timeout.ms` is specified as: `producer.request.timeout.ms = 60000`.If you do not specify these parameters, it will be set the default values according to Kafka documentation
-
-## Examples
-
-```bash
-kafka {
-    topic = "seatunnel"
-    producer.bootstrap.servers = "localhost:9092"
-}
-```
diff --git a/docs/en/spark/configuration/sink-plugins/sink-plugin.md b/docs/en/spark/configuration/sink-plugins/sink-plugin.md
deleted file mode 100644
index 7672a16..0000000
--- a/docs/en/spark/configuration/sink-plugins/sink-plugin.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Common Options
-
-> Sink Common Options [Spark]
-
-## Sink Plugin common parameters
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| source_table_name | string | no       | -             |
-
-### source_table_name [string]
-
-When `source_table_name` is not specified, the current plug-in processes the data set `dataset` output by the previous plugin in the configuration file;
-
-When `source_table_name` is specified, the current plug-in is processing the data set corresponding to this parameter.
-
-## Examples
-
-```bash
-stdout {
-    source_table_name = "view_table"
-}
-```
-
-> Output a temporary table named `view_table`.
-
-```bash
-stdout {}
-```
-
-> If `source_table_name` is not configured, output the processing result of the last `Filter` plugin in the configuration file
diff --git a/docs/en/spark/configuration/source-plugins/Fake.md b/docs/en/spark/configuration/source-plugins/Fake.md
deleted file mode 100644
index a0e17d6..0000000
--- a/docs/en/spark/configuration/source-plugins/Fake.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Fake
-
-> Source plugin : Fake [Spark]
-
-## Description
-
-`Fake` is mainly used to quickly get started and run a seatunnel application
-
-## Options
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-Fake {
-    result_table_name = "my_dataset"
-}
-```
diff --git a/docs/en/spark/configuration/source-plugins/FakeStream.md b/docs/en/spark/configuration/source-plugins/FakeStream.md
deleted file mode 100644
index 5917fcc..0000000
--- a/docs/en/spark/configuration/source-plugins/FakeStream.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# FakeStream
-
-> Source plugin : FakeStream [Spark]
-
-## Description
-
-`FakeStream` is mainly used to conveniently generate user-specified data, which is used as input for functional verification, testing, and performance testing of seatunnel.
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| content        | array  | no       | -             |
-| rate           | number | yes      | -             |
-| common-options | string | yes      | -             |
-
-### content [array]
-
-List of test data strings
-
-### rate [number]
-
-Number of test cases generated per second
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-fakeStream {
-    content = ['name=ricky&age=23', 'name=gary&age=28']
-    rate = 5
-}
-```
-
-The generated data is as follows, randomly extract the string from the `content` list
-
-```bash
-+-----------------+
-|raw_message      |
-+-----------------+
-|name=gary&age=28 |
-|name=ricky&age=23|
-+-----------------+
-```
diff --git a/docs/en/spark/configuration/source-plugins/File.md b/docs/en/spark/configuration/source-plugins/File.md
deleted file mode 100644
index 8ac1000..0000000
--- a/docs/en/spark/configuration/source-plugins/File.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# File
-
-> Source plugin : File [Spark]
-
-## Description
-read data from local or hdfs file.
-
-## Options
-
-| name | type | required | default value |
-| --- | --- | --- | --- |
-| format | string | no | json |
-| path | string | yes | - |
-| common-options| string | yes | - |
-
-##### format [string]
-Format for reading files, currently supports text, parquet, json, orc, csv.
-
-##### path [string]
-- If read data from hdfs , the file path should start with `hdfs://`  
-- If read data from local , the file path should start with `file://`
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```
-file {
-    path = "hdfs:///var/logs"
-    result_table_name = "access_log"
-}
-```
-
-```
-file {
-    path = "file:///var/logs"
-    result_table_name = "access_log"
-}
-```
diff --git a/docs/en/spark/configuration/source-plugins/Jdbc.md b/docs/en/spark/configuration/source-plugins/Jdbc.md
deleted file mode 100644
index 6debf62..0000000
--- a/docs/en/spark/configuration/source-plugins/Jdbc.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Jdbc
-
-> Source plugin : Jdbc [Spark]
-
-## Description
-
-Read external data source data through JDBC
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| driver         | string | yes      | -             |
-| jdbc.*         | string | no       |               |
-| password       | string | yes      | -             |
-| table          | string | yes      | -             |
-| url            | string | yes      | -             |
-| user           | string | yes      | -             |
-| common-options | string | yes      | -             |
-
-### driver [string]
-
-The `jdbc class name` used to connect to the remote data source
-
-### jdbc [string]
-
-In addition to the parameters that must be specified above, users can also specify multiple optional parameters, which cover [all the parameters](https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases) provided by Spark JDBC.
-
-The way to specify parameters is to add the prefix `jdbc.` to the original parameter name. For example, the way to specify `fetchsize` is: `jdbc.fetchsize = 50000` . If these non-essential parameters are not specified, they will use the default values given by Spark JDBC.
-
-### password [string]
-
-##### password
-
-### table [string]
-
-table name
-
-### url [string]
-
-The URL of the JDBC connection. Refer to a case: `jdbc:postgresql://localhost/test`
-
-### user [string]
-
-username
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Example
-
-```bash
-jdbc {
-    driver = "com.mysql.jdbc.Driver"
-    url = "jdbc:mysql://localhost:3306/info"
-    table = "access"
-    result_table_name = "access_log"
-    user = "username"
-    password = "password"
-}
-```
-
-> Read MySQL data through JDBC
-
-```bash
-jdbc {
-    driver = "com.mysql.jdbc.Driver"
-    url = "jdbc:mysql://localhost:3306/info"
-    table = "access"
-    result_table_name = "access_log"
-    user = "username"
-    password = "password"
-    jdbc.partitionColumn = "item_id"
-    jdbc.numPartitions = "10"
-    jdbc.lowerBound = 0
-    jdbc.upperBound = 100
-}
-```
-
-> Divide partitions based on specified fields
-
-
-```bash
-jdbc {
-    driver = "com.mysql.jdbc.Driver"
-    url = "jdbc:mysql://localhost:3306/info"
-    table = "access"
-    result_table_name = "access_log"
-    user = "username"
-    password = "password"
-    
-    jdbc.connect_timeout = 10000
-    jdbc.socket_timeout = 10000
-}
-```
-> Timeout config
\ No newline at end of file
diff --git a/docs/en/spark/configuration/source-plugins/KafkaStream.md b/docs/en/spark/configuration/source-plugins/KafkaStream.md
deleted file mode 100644
index 6feb017..0000000
--- a/docs/en/spark/configuration/source-plugins/KafkaStream.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# KafkaStream
-
-> Source plugin : KafkaStream [Spark]
-
-## Description
-
-To consume data from `Kafka` , the supported `Kafka version >= 0.10.0` .
-
-## Options
-
-| name                       | type   | required | default value |
-| -------------------------- | ------ | -------- | ------------- |
-| topics                     | string | yes      | -             |
-| consumer.group.id          | string | yes      | -             |
-| consumer.bootstrap.servers | string | yes      | -             |
-| consumer.*                 | string | no       | -             |
-| common-options             | string | yes      | -             |
-
-### topics [string]
-
-`Kafka topic` name. If there are multiple `topics`, use `,` to split, for example: `"tpc1,tpc2"`
-
-### consumer.group.id [string]
-
-`Kafka consumer group id` , used to distinguish different consumer groups
-
-### consumer.bootstrap.servers [string]
-
-`Kafka` cluster address, separated by `,`
-
-### consumer.* [string]
-
-In addition to the above necessary parameters that must be specified by the `Kafka consumer` client, users can also specify multiple `consumer` client non-mandatory parameters, covering [all consumer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#oldconsumerconfigs) .
-
-The way to specify parameters is to add the prefix `consumer.` to the original parameter name. For example, the way to specify `auto.offset.reset` is: `consumer.auto.offset.reset = latest` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-kafkaStream {
-    topics = "seatunnel"
-    consumer.bootstrap.servers = "localhost:9092"
-    consumer.group.id = "seatunnel_group"
-}
-```
diff --git a/docs/en/spark/configuration/source-plugins/SocketStream.md b/docs/en/spark/configuration/source-plugins/SocketStream.md
deleted file mode 100644
index 3337898..0000000
--- a/docs/en/spark/configuration/source-plugins/SocketStream.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# SocketStream
-
-> Source plugin : SocketStream [Spark]
-
-## Description
-
-`SocketStream` is mainly used to receive `Socket` data and is used to quickly verify `Spark streaming` computing.
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| host           | string | no       | localhost     |
-| port           | number | no       | 9999          |
-| common-options | string | yes      | -             |
-
-### host [string]
-
-socket server hostname
-
-### port [number]
-
-socket server port
-
-### common options [string]
-
-Source plugin common parameters, please refer to [Source Plugin](./source-plugin.md) for details
-
-## Examples
-
-```bash
-socketStream {
-  port = 9999
-}
-```
diff --git a/docs/en/spark/configuration/source-plugins/source-plugin.md b/docs/en/spark/configuration/source-plugins/source-plugin.md
deleted file mode 100644
index 9459fb9..0000000
--- a/docs/en/spark/configuration/source-plugins/source-plugin.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Common Options
-
-> Source Common Options [Spark]
-
-## Source Plugin common parameters
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| result_table_name | string | yes      | -             |
-| table_name        | string | no       | -             |
-
-### result_table_name [string]
-
-When `result_table_name` is not specified, the data processed by this plug-in will not be registered as a data set that can be directly accessed by other plugins, or called a temporary table (table);
-
-When `result_table_name` is specified, the data processed by this plug-in will be registered as a data set (dataset) that can be directly accessed by other plug-ins, or called a temporary table (table). The dataset registered here can be directly accessed by other plugins by specifying `source_table_name`.
-
-### table_name [string]
-
-[Deprecated since v1.4] The function is the same as `result_table_name` , this parameter will be deleted in subsequent Release versions, and `result_table_name`  parameter is recommended.
-
-## Example
-
-```bash
-fake {
-    result_table_name = "view_table"
-}
-```
-
-> The result of the data source `fake` will be registered as a temporary table named `view_table` . This temporary table can be used by any Filter or Output plugin by specifying `source_table_name` .
diff --git a/docs/en/spark/configuration/transform-plugins/Sql.md b/docs/en/spark/configuration/transform-plugins/Sql.md
deleted file mode 100644
index 1277932..0000000
--- a/docs/en/spark/configuration/transform-plugins/Sql.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Sql
-
-> Transform plugin : Sql [Spark]
-
-## Description
-
-Use SQL to process data and support Spark's rich [UDF functions](https://spark.apache.org/docs/latest/api/sql)
-
-## Options
-
-| name           | type   | required | default value |
-| -------------- | ------ | -------- | ------------- |
-| sql            | string | yes      | -             |
-| common-options | string | no       | -             |
-
-### sql [string]
-
-SQL statement, the table name used in SQL is the `result_table_name` configured in the `Source` or `Transform` plugin
-
-### common options [string]
-
-Transform plugin common parameters, please refer to [Transform Plugin](./transform-plugin.md) for details
-
-## Examples
-
-```bash
-sql {
-    sql = "select username, address from user_info",
-}
-```
-
-> Use the SQL plugin for field deletion. Only the `username` and `address` fields are reserved, and the remaining fields will be discarded. `user_info` is the `result_table_name` configured by the previous plugin
-
-```bash
-sql {
-    sql = "select substring(telephone, 0, 10) from user_info",
-}
-```
-
-> Use SQL plugin for data processing, use [substring functions](https://spark.apache.org/docs/latest/api/sql/#substring) to intercept the `telephone` field
-
-```bash
-sql {
-    sql = "select avg(age) from user_info",
-    table_name = "user_info"
-}
-```
-
-> Use SQL plugin for data aggregation, use [avg functions](https://spark.apache.org/docs/latest/api/sql/#avg) to perform aggregation operations on the original data set, and take out the average value of the `age` field
diff --git a/docs/en/spark/configuration/transform-plugins/transform-plugin.md b/docs/en/spark/configuration/transform-plugins/transform-plugin.md
deleted file mode 100644
index f5e1888..0000000
--- a/docs/en/spark/configuration/transform-plugins/transform-plugin.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Common Options
-
-> Transform Common Options [Spark]
-
-## Transform Plugin common parameters
-
-| name              | type   | required | default value |
-| ----------------- | ------ | -------- | ------------- |
-| source_table_name | string | no       | -             |
-| result_table_name | string | no       | -             |
-
-### source_table_name [string]
-
-When `source_table_name` is not specified, the current plug-in processes the data set `(dataset)` output by the previous plug-in in the configuration file;
-
-When `source_table_name` is specified, the current plugin is processing the data set corresponding to this parameter.
-
-### result_table_name [string]
-
-When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set that can be directly accessed by other plugins, or called a temporary table `(table)`;
-
-When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The dataset registered here can be directly accessed by other plugins by specifying `source_table_name` .
-
-## Examples
-
-```bash
-split {
-    source_table_name = "source_view_table"
-    source_field = "message"
-    delimiter = "&"
-    fields = ["field1", "field2"]
-    result_table_name = "result_view_table"
-}
-```
-
-> The `Split` plugin will process the data in the temporary table `source_view_table` and register the processing result as a temporary table named `result_view_table`. This temporary table can be used by any subsequent `Filter` or `Output` plugin by specifying `source_table_name` .
-
-```bash
-split {
-    source_field = "message"
-    delimiter = "&"
-    fields = ["field1", "field2"]
-}
-```
-
-> If `source_table_name` is not configured, output the processing result of the last `Transform` plugin in the configuration file
diff --git a/docs/en/spark/deployment.md b/docs/en/spark/deployment.md
deleted file mode 100644
index 085cc29..0000000
--- a/docs/en/spark/deployment.md
+++ /dev/null
@@ -1,72 +0,0 @@
-## Deployment and run
-
-> seatunnel v2 For Spark relies on the Java runtime environment and Spark. For detailed seatunnel installation steps, please refer to [installing seatunnel](./installation.md)
-
-The following focuses on how different platforms operate:
-
-## Run seatunnel locally in local mode
-
-```bash
-./bin/start-seatunnel-spark.sh \
---master local[4] \
---deploy-mode client \
---config ./config/application.conf
-```
-
-## Run seatunnel on Spark Standalone cluster
-
-```bash
-# client mode
-./bin/start-seatunnel-spark.sh \
---master spark://ip:7077 \
---deploy-mode client \
---config ./config/application.conf
-
-# cluster mode
-./bin/start-seatunnel-spark.sh \
---master spark://ip:7077 \
---deploy-mode cluster \
---config ./config/application.conf
-```
-
-## Run seatunnel on Yarn cluster
-
-```bash
-# client mode
-./bin/start-seatunnel-spark.sh \
---master yarn \
---deploy-mode client \
---config ./config/application.conf
-
-# cluster mode
-./bin/start-seatunnel-spark.sh \
---master yarn \
---deploy-mode cluster \
---config ./config/application.conf
-```
-
-## Run seatunnel on Mesos cluster
-
-```bash
-# cluster mode
-./bin/start-seatunnel-spark.sh \
---master mesos://ip:7077 \
---deploy-mode cluster \
---config ./config/application.conf
-```
-
-For the meaning of the `master` and `deploy-mode` parameters of `start-seatunnel-spark.sh` , please refer to: [Command Instructions](./commands/start-seatunnel-spark.sh.md)
-
-If you want to specify the resource size occupied by `seatunnel` when running, or other `Spark parameters` , you can specify it in the configuration file specified by `--config` :
-
-```bash
-env {
-  spark.executor.instances = 2
-  spark.executor.cores = 1
-  spark.executor.memory = "1g"
-  ...
-}
-...
-```
-
-For how to configure `seatunnel` , please refer to `seatunnel` [common configuration](./configuration)
diff --git a/docs/en/spark/installation.md b/docs/en/spark/installation.md
deleted file mode 100644
index f5055a4..0000000
--- a/docs/en/spark/installation.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Download and install
-
-## download
-
-```bash
-https://github.com/apache/incubator-seatunnel/releases
-```
-
-## Environmental preparation
-
-### Prepare JDK1.8
-
-`seatunnel` relies on the `JDK1.8` operating environment.
-
-### Get Spark ready
-
-`Seatunnel` relies on `Spark` . Before installing `seatunnel` , you need to prepare `Spark` . Please [download Spark](https://spark.apache.org/downloads.html) first, please select `Spark version >= 2.x.x`. After downloading and decompressing, you can submit the Spark `deploy-mode = local` mode task without any configuration. If you expect the task to run on the `Standalone cluster` or `Yarn cluster` or `Mesos cluster`, please refer to the Spark official website configuration document.
-
-## Install seatunnel
-
-Download the `seatunnel` installation package and unzip:
-
-```bash
-wget https://github.com/apache/incubator-seatunnel/releases/download/v<version>/seatunnel-<version>.zip -O seatunnel-<version>.zip
-unzip seatunnel-<version>.zip
-ln -s seatunnel-<version> seatunnel
-```
-
-There are no complicated installation and configuration steps. Please refer to [Quick Start](./quick-start.md) for how to use `seatunnel` , and refer to Configuration for [configuration](./configuration).
diff --git a/docs/en/spark/quick-start.md b/docs/en/spark/quick-start.md
deleted file mode 100644
index e02f0cf..0000000
--- a/docs/en/spark/quick-start.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Quick start
-
-> Let's take an application that receives data through a `socket` , divides the data into multiple fields, and outputs the processing results as an example to quickly show how to use `seatunnel`.
-
-## Step 1: Prepare Spark runtime environment
-
-> If you are familiar with Spark or have prepared the Spark operating environment, you can ignore this step. Spark does not require any special configuration.
-
-Please [download Spark](https://spark.apache.org/downloads.html) first, please choose `Spark version >= 2.x.x` . After downloading and decompressing, you can submit the Spark `deploy-mode = local` mode task without any configuration. If you expect tasks to run on `Standalone clusters` or `Yarn clusters` or `Mesos clusters`, please refer to the [Spark deployment documentation](https://spark.apache.org/docs/latest/cluster-overview.html) on the Spark official website.
-
-### Step 2: Download seatunnel
-
-Enter the [seatunnel installation package download page](https://seatunnel.apache.org/download) and download the latest version of `seatunnel-<version>-bin.tar.gz`
-
-Or download the specified version directly (take 2.1.0 as an example):
-
-```bash
-wget https://downloads.apache.org/incubator/seatunnel/2.1.0/apache-seatunnel-incubating-2.1.0-bin.tar.gz -O seatunnel-2.1.0.tar.gz
-```
-
-After downloading, extract:
-
-```bash
-tar -xvzf seatunnel-<version>.tar.gz
-ln -s seatunnel-<version> seatunnel
-```
-```
-
-## Step 3: Configure seatunnel
-
-- Edit `config/seatunnel-env.sh` , specify the necessary environment configuration such as `SPARK_HOME` (the directory after Spark downloaded and decompressed in Step 1)
-
-- Create a new `config/application.conf` , which determines the method and logic of data input, processing, and output after `seatunnel` is started.
-
-```bash
-env {
-  # seatunnel defined streaming batch duration in seconds
-  spark.streaming.batchDuration = 5
-
-  spark.app.name = "seatunnel"
-  spark.ui.port = 13000
-}
-
-source {
-  socketStream {}
-}
-
-transform {
-  split {
-    fields = ["msg", "name"]
-    delimiter = ","
-  }
-}
-
-sink {
-  console {}
-}
-```
-
-## Step 4: Start the `netcat server` to send data
-
-```bash
-nc -lk 9999
-```
-
-## Step 5: start seatunnel
-
-```bash
-cd seatunnel
-./bin/start-seatunnel-spark.sh \
---master local[4] \
---deploy-mode client \
---config ./config/application.conf
-```
-
-## Step 6: Input at the `nc` terminal
-
-```bash
-Hello World, seatunnel
-```
-
-The `seatunnel` log prints out:
-
-```bash
-+----------------------+-----------+---------+
-|raw_message           |msg        |name     |
-+----------------------+-----------+---------+
-|Hello World, seatunnel|Hello World|seatunnel|
-+----------------------+-----------+---------+
-```
-
-## summary
-
-`seatunnel` is simple and easy to use, and there are more abundant data processing functions waiting to be discovered. The data processing case shown in this article does not require any code, compilation, and packaging, and is simpler than the official [Quick Example](https://spark.apache.org/docs/latest/streaming-programming-guide.html#a-quick-example).
-
-If you want to know more `seatunnel configuration examples`, please refer to:
-
-- Configuration example 2: [Batch offline batch processing](https://github.com/apache/incubator-seatunnel/blob/dev/config/spark.batch.conf.template)
-
-The above configuration is the default [offline batch configuration template], which can be run directly, the command is as follows:
-
-```bash
-cd seatunnel
-./bin/start-seatunnel-spark.sh \
---master 'local[2]' \
---deploy-mode client \
---config ./config/spark.batch.conf.template
-```
diff --git a/docs/en/start/docker.md b/docs/en/start/docker.md
new file mode 100644
index 0000000..2553b99
--- /dev/null
+++ b/docs/en/start/docker.md
@@ -0,0 +1,8 @@
+---
+sidebar_position: 3
+---
+
+# Set Up with Docker
+
+<!-- TODO -->
+WIP
\ No newline at end of file
diff --git a/docs/en/start/local.mdx b/docs/en/start/local.mdx
new file mode 100644
index 0000000..78c4138
--- /dev/null
+++ b/docs/en/start/local.mdx
@@ -0,0 +1,149 @@
+---
+sidebar_position: 2
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Set Up Locally
+
+## Prepare
+
+Before you get started with the local run, make sure you have already installed the following software required by SeaTunnel:
+
+* [Java](https://www.java.com/en/download/) (only JDK 8 is supported for now) installed and `JAVA_HOME` set, see the quick check after this list.
+* An engine: choose and download one of the engines below, whichever you prefer. You can find more information about [why we need an engine in SeaTunnel](../faq.md#why-i-should-install-computing-engine-like-spark-or-flink).
+  * Spark: Please [download Spark](https://spark.apache.org/downloads.html) first (**required version >= 2**). For more information, see [Getting Started: standalone](https://spark.apache.org/docs/latest/spark-standalone.html#installing-spark-standalone-to-a-cluster).
+  * Flink: Please [download Flink](https://flink.apache.org/downloads.html) first (**required version >= 1.9.0**). For more information, see [Getting Started: standalone](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/overview/).
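+
+A quick sanity check of the Java prerequisite (just a local verification step, not part of the official setup):
+
+```shell
+java -version        # should report a 1.8.x (JDK 8) runtime
+echo "$JAVA_HOME"    # should point to the JDK 8 installation directory
+```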
+
+## Installation
+
+Enter the [seatunnel download page](https://seatunnel.apache.org/download) and download the latest version of the distribution
+package `seatunnel-<version>-bin.tar.gz`.
+
+Or you can download it from the terminal:
+
+```shell
+export version="2.1.0"
+wget "https://archive.apache.org/dist/incubator/seatunnel/${version}/apache-seatunnel-incubating-${version}-bin.tar.gz"
+tar -xzvf "apache-seatunnel-incubating-${version}-bin.tar.gz"
+```
+
+<!-- TODO: We should add example module as quick start which is no need for install Spark or Flink -->
+
+## Run SeaTunnel Application
+
+**Configure SeaTunnel**: Change the settings in `config/seatunnel-env.sh`; they are based on the path your engine was installed to in the [prepare step](#prepare).
+Change `SPARK_HOME` if you are using Spark as your engine, or change `FLINK_HOME` if you are using Flink.
+
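+As a minimal sketch (the paths below are assumptions, replace them with your own installation directories), `config/seatunnel-env.sh` could end up looking like this:
+
+```shell
+# config/seatunnel-env.sh -- point these at your local engine installations
+SPARK_HOME=${SPARK_HOME:-/opt/spark}   # only needed when you run on Spark
+FLINK_HOME=${FLINK_HOME:-/opt/flink}   # only needed when you run on Flink
+```
+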
+**Run Application with Built-in Configuration**: We already provide out-of-the-box configurations in the `config` directory, which
+you can find after extracting the tarball. You can start the application with the following commands:
+
+<Tabs
+  groupId="engine-type"
+  defaultValue="spark"
+  values={[
+    {label: 'Spark', value: 'spark'},
+    {label: 'Flink', value: 'flink'},
+  ]}>
+<TabItem value="spark">
+
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-spark.sh \
+--master local[4] \
+--deploy-mode client \
+--config ./config/spark.streaming.conf.template
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-flink.sh \
+--config ./config/flink.streaming.conf.template
+```
+
+</TabItem>
+</Tabs>
+
+**See The Output**: When you run the command, you can see its output in your console or in the Flink UI. The output tells you
+whether the command ran successfully or not.
+
+<Tabs
+  groupId="engine-type"
+  defaultValue="spark"
+  values={[
+    {label: 'Spark', value: 'spark'},
+    {label: 'Flink', value: 'flink'},
+  ]}>
+<TabItem value="spark">
+The SeaTunnel console will print some logs as below:
+
+```shell
+Hello World, SeaTunnel
+Hello World, SeaTunnel
+Hello World, SeaTunnel
+...
+Hello World, SeaTunnel
+```
+
+</TabItem>
+<TabItem value="flink">
+
+The content printed in the TaskManager Stdout log of the Flink WebUI is a two-column record just like the one below (your
+content may be different because we use a fake source to generate random data):
+
+```shell
+apache, 15
+seatunnel, 30
+incubator, 20
+...
+topLevel, 20
+```
+
+</TabItem>
+</Tabs>
+
+## Explore More Build-in Examples
+
+Our local quick start uses one of the built-in examples in the `config` directory, and we provide more than one out-of-the-box
+example, so feel free to try them out and get your hands dirty. All you have to do is change the `--config` option value of the
+start command in [running the application](#run-seatunnel-application) to the configuration you want to run; here we use the batch
+templates in `config` as examples:
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-spark.sh \
+--master local[4] \
+--deploy-mode client \
+--config ./config/spark.batch.conf.template
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```shell
+cd "apache-seatunnel-incubating-${version}"
+./bin/start-seatunnel-flink.sh \
+--config ./config/flink.batch.conf.template
+```
+
+</TabItem>
+</Tabs>
+
+## What's More
+
+By now you have already taken a quick look at SeaTunnel. You can see [connector](/category/connector) to find all the
+sources and sinks SeaTunnel supports, or see [deployment](../deployment.mdx) if you want to submit your application to another
+kind of engine cluster.
diff --git a/docs/en/transform/common-options.mdx b/docs/en/transform/common-options.mdx
new file mode 100644
index 0000000..766f907
--- /dev/null
+++ b/docs/en/transform/common-options.mdx
@@ -0,0 +1,116 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Common Options
+
+:::tip
+
+This transform is supported by both the Spark and Flink engines.
+
+:::
+
+## Transform Plugin common parameters
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| source_table_name | string | no       | -             |
+| result_table_name | string | no       | -             |
+
+</TabItem>
+<TabItem value="flink">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| source_table_name | string | no       | -             |
+| result_table_name | string | no       | -             |
+| field_name        | string | no       | -             |
+
+### field_name [string]
+
+When data is obtained from the upstream plugin, you can specify the names of the obtained fields, which is convenient for use in subsequent sql plugins.
+
+</TabItem>
+</Tabs>
+
+### source_table_name [string]
+
+When `source_table_name` is not specified, the current plugin processes the data set (`dataset`) output by the previous plugin in the configuration file;
+
+When `source_table_name` is specified, the current plugin processes the data set corresponding to this parameter.
+
+### result_table_name [string]
+
+When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set that can be directly accessed by other plugins, or called a temporary table `(table)`;
+
+When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The dataset registered here can be directly accessed by other plugins by specifying `source_table_name` .
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+split {
+    source_table_name = "source_view_table"
+    source_field = "message"
+    delimiter = "&"
+    fields = ["field1", "field2"]
+    result_table_name = "result_view_table"
+}
+```
+
+> The `Split` plugin will process the data in the temporary table `source_view_table` and register the processing result as a temporary table named `result_view_table`. This temporary table can be used by any subsequent `Filter` or `Output` plugin by specifying `source_table_name` .
+
+```bash
+split {
+    source_field = "message"
+    delimiter = "&"
+    fields = ["field1", "field2"]
+}
+```
+
+> Note: If `source_table_name` is not configured, output the processing result of the last `Transform` plugin in the configuration file
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+source {
+    FakeSourceStream {
+      result_table_name = "fake_1"
+      field_name = "name,age"
+    }
+    FakeSourceStream {
+      result_table_name = "fake_2"
+      field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+      source_table_name = "fake_1"
+      sql = "select name from fake_1"
+      result_table_name = "fake_name"
+    }
+}
+```
+
+> If `source_table_name` is not specified, the sql plugin will process the data of `fake_2` , and if it is set to `fake_1` , it will process the data of `fake_1` .
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/docs/en/spark/configuration/transform-plugins/Json.md b/docs/en/transform/json.md
similarity index 97%
rename from docs/en/spark/configuration/transform-plugins/Json.md
rename to docs/en/transform/json.md
index a5caba9..3068fe7 100644
--- a/docs/en/spark/configuration/transform-plugins/Json.md
+++ b/docs/en/transform/json.md
@@ -1,11 +1,15 @@
 # Json
 
-> Transform plugin : Json [Spark]
-
 ## Description
 
 Json analysis of the specified fields of the original data set
 
+:::tip
+
+This transform is **ONLY** supported by Spark.
+
+:::
+
 ## Options
 
 | name           | type   | required | default value |
@@ -34,7 +38,7 @@ The style file name, if it is not configured, the default is empty, that is, the
 
 ### common options [string]
 
-Transform plugin common parameters, please refer to [Transform Plugin](./transform-plugin.md) for details
+Transform plugin common parameters, please refer to [Transform Plugin](common-options.mdx) for details
 
 ## Schema Use cases
 
diff --git a/docs/en/spark/configuration/transform-plugins/Split.md b/docs/en/transform/split.mdx
similarity index 56%
rename from docs/en/spark/configuration/transform-plugins/Split.md
rename to docs/en/transform/split.mdx
index b536afb..39bdcff 100644
--- a/docs/en/spark/configuration/transform-plugins/Split.md
+++ b/docs/en/transform/split.mdx
@@ -1,13 +1,29 @@
-# Split
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 
-> Transform plugin : Split [Spark]
+# Split
 
 ## Description
 
-Split string according to `separator`
+Defines a string splitting function, which is used to split the specified field in the Sql plugin.
+
+:::tip
+
+This transform is supported by both the Spark and Flink engines.
+
+:::
 
 ## Options
 
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
 | name           | type   | required | default value |
 | -------------- | ------ | -------- | ------------- |
 | separator      | string | no       | " "      |
@@ -20,10 +36,6 @@ Split string according to `separator`
 
 Separator, the input string is separated according to the separator. The default separator is a space `(" ")` .
 
-### fields [list]
-
-In the split field name list, specify the field names of each character string after splitting in order. If the length of the `fields` is greater than the length of the separation result, the extra fields are assigned null characters.
-
 ### source_field [string]
 
 The source field of the string before being split, if not configured, the default is `raw_message`
@@ -32,12 +44,42 @@ The source field of the string before being split, if not configured, the defaul
 
 `target_field` can specify the location where multiple split fields are added to the Event. If it is not configured, the default is `_root_` , that is, all split fields will be added to the top level of the Event. If a specific field is specified, the divided field will be added to the next level of this field.
 
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| separator      | string | no       | ,             |
+| fields         | array  | yes      | -             |
+| common-options | string | no       | -             |
+
+### separator [string]
+
+The specified separator; the default is `,`.
+
+</TabItem>
+</Tabs>
+
+### fields [list]
+
+In the split field name list, specify the field names of each character string after splitting in order. If the length of the `fields` is greater than the length of the separation result, the extra fields are assigned null characters.
+
 ### common options [string]
 
-Transform plugin common parameters, please refer to [Transform Plugin](./transform-plugin.md) for details
+Transform plugin common parameters, please refer to [Transform Plugin](common-options.mdx) for details
 
 ## Examples
-- Split the `message` field in the source data according to `&`, you can use `field1` or `field2` as the key to get the corresponding value
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+Split the `message` field in the source data according to `&`; you can then use `field1` or `field2` as the key to get the corresponding value.
 
 ```bash
 split {
@@ -46,7 +88,8 @@ split {
     fields = ["field1", "field2"]
 }
 ```
-- Split the `message` field in the source data according to `,` , the split field is `info` , you can use `info.field1` or `info.field2` as the key to get the corresponding value
+
+Split the `message` field in the source data according to `,`, with the split result placed under the `info` field; you can then use `info.field1` or `info.field2` as the key to get the corresponding value.
 
 ```bash
 split {
@@ -56,7 +99,15 @@ split {
     fields = ["field1", "field2"]
 }
 ```
-- Use `Split` as udf in sql.
+
+</TabItem>
+<TabItem value="flink">
+
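+On the Flink side, a minimal sketch based on the option table above (the separator and field names here are only illustrative) could look like:
+
+```bash
+split {
+    separator = "#"
+    fields = ["name", "age"]
+}
+```
+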
+</TabItem>
+</Tabs>
+
+Use `Split` as a UDF in sql.
+
 ```bash
   # This just created a udf called split
   Split{
@@ -67,4 +118,4 @@ split {
   sql {
     sql = "select * from (select raw_message,split(raw_message) as info_row from fake) t1"
   }
-```
\ No newline at end of file
+```
diff --git a/docs/en/transform/sql.md b/docs/en/transform/sql.md
new file mode 100644
index 0000000..9f8003d
--- /dev/null
+++ b/docs/en/transform/sql.md
@@ -0,0 +1,60 @@
+# Sql
+
+## Description
+
+Use SQL to process data, with support for the engine's UDF functions.
+
+:::tip
+
+This transform is supported by both the Spark and Flink engines.
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| sql            | string | yes      | -             |
+| common-options | string | no       | -             |
+
+### sql [string]
+
+SQL statement; the table name used in SQL is the `result_table_name` configured in the `Source` or `Transform` plugin.
+
+### common options [string]
+
+Transform plugin common parameters, please refer to [Transform Plugin](common-options.mdx) for details
+
+## Examples
+
+### Simple Select
+
+Use the SQL plugin for field deletion. Only the `username` and `address` fields are kept, and the remaining fields will be discarded. `user_info` is the `result_table_name` configured by the previous plugin.
+
+```bash
+sql {
+    sql = "select username, address from user_info",
+}
+```
+
+### Use UDF
+
+Use the SQL plugin for data processing, using the `substring` function to extract part of the `telephone` field.
+
+```bash
+sql {
+    sql = "select substring(telephone, 0, 10) from user_info",
+}
+```
+
+### Use UDAF
+
+Use the SQL plugin for data aggregation, using the `avg` function to aggregate the original data set and obtain the average value of the `age` field.
+
+```bash
+sql {
+    sql = "select avg(age) from user_info",
+    table_name = "user_info"
+}
+```
+
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 120df81..7a11bba 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -45,61 +45,69 @@ const sidebars = {
    */
 
   docs: [
-    'introduction',
     {
       type: 'category',
-      label: 'Spark',
+      label: 'Introduction',
       items: [
-        'spark/installation',
-        'spark/quick-start',
-        'spark/deployment',
-        {
-          type: 'category',
-          label: 'Configuration',
-          items: [
-            // TODO we can use generated-index to create some leading page like https://docusaurus.io/docs/category/guides
-            {
-              type: 'autogenerated',
-              dirName: 'spark/configuration',
-            },
-          ],
-        },
-        {
-          type: 'category',
-          label: 'Commands',
-          items: [
-            {
-              type: 'autogenerated',
-              dirName: 'spark/commands',
-            },
-          ],
-        },
+        'intro/about',
+        'intro/why',
+        'intro/history',
       ],
     },
     {
       type: 'category',
-      label: 'Flink',
+      label: 'Quick Start',
+      link: {
+        type: 'generated-index',
+        title: 'Quick Start for SeaTunnel',
+        description: 'In this section, you will learn how to get Apache SeaTunnel up and running, either locally or in a Docker environment.',
+        slug: '/category/start',
+        keywords: ['start'],
+        image: '/img/favicon.ico',
+      },
       items: [
-        'flink/installation',
-        'flink/quick-start',
-        'flink/deployment',
+        'start/local',
+        'start/docker',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Connector',
+      items: [
+        'connector/config-example',
         {
           type: 'category',
-          label: 'Configuration',
+          label: 'Source',
+          link: {
+            type: 'generated-index',
+            title: 'Source of SeaTunnel',
+            description: 'List all sources supported by Apache SeaTunnel for now.',
+            slug: '/category/source',
+            keywords: ['source'],
+            image: '/img/favicon.ico',
+          },
           items: [
             {
               type: 'autogenerated',
-              dirName: 'flink/configuration',
+              dirName: 'connector/source',
             },
           ],
         },
         {
           type: 'category',
-          label: 'Commands',
+          label: 'Sink',
+          link: {
+            type: 'generated-index',
+            title: 'Sink of SeaTunnel',
+            description: 'List all sinks supported by Apache SeaTunnel for now.',
+            slug: '/category/sink',
+            keywords: ['sink'],
+            image: '/img/favicon.ico',
+          },
           items: [
             {
               type: 'autogenerated',
-              dirName: 'flink/commands',
+              dirName: 'connector/sink',
             },
           ],
         },
@@ -107,13 +115,31 @@ const sidebars = {
     },
     {
       type: 'category',
-      label: 'developement',
+      label: 'Transform',
+      items: [
+        'transform/common-options',
+        'transform/sql',
+        'transform/split',
+        'transform/json',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Command',
+      items: [
+        'command/usage',
+      ],
+    },
+    'deployment',
+    {
+      type: 'category',
+      label: 'Development',
       items: [
-        'developement/setup',
-        'developement/NewLicenseGuide',
+        'development/setup',
+        'development/new-license',
       ],
     },
-    'FAQ',
+    'faq',
   ]
 };