Posted to commits@seatunnel.apache.org by ki...@apache.org on 2022/10/02 13:55:44 UTC

[incubator-seatunnel-website] branch main updated: [release] add 2.2.0-beta docs (#148)

This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel-website.git


The following commit(s) were added to refs/heads/main by this push:
     new 3e2283e32 [release] add 2.2.0-beta docs (#148)
3e2283e32 is described below

commit 3e2283e329f4205bce5b2949a1c7c95cf996a43c
Author: Zongwen Li <zo...@gmail.com>
AuthorDate: Sun Oct 2 21:55:40 2022 +0800

    [release] add 2.2.0-beta docs (#148)
---
 src/pages/download/data.json                       |  14 +
 src/pages/versions/config.json                     |  14 +-
 .../version-2.2.0-beta/command/usage.mdx           | 166 ++++++++++
 .../version-2.2.0-beta/concept/config.md           | 108 ++++++
 .../concept/connector-v2-features.md               |  65 ++++
 .../version-2.2.0-beta/connector-v2/sink/Assert.md |  99 ++++++
 .../connector-v2/sink/Clickhouse.md                | 119 +++++++
 .../connector-v2/sink/ClickhouseFile.md            | 118 +++++++
 .../connector-v2/sink/Console.md                   |  80 +++++
 .../connector-v2/sink/Datahub.md                   |  68 ++++
 .../connector-v2/sink/Elasticsearch.md             |  62 ++++
 .../version-2.2.0-beta/connector-v2/sink/Email.md  |  78 +++++
 .../connector-v2/sink/Enterprise-WeChat.md         |  57 ++++
 .../version-2.2.0-beta/connector-v2/sink/Feishu.md |  42 +++
 .../connector-v2/sink/FtpFile.md                   | 153 +++++++++
 .../connector-v2/sink/Greenplum.md                 |  32 ++
 .../connector-v2/sink/HdfsFile.md                  | 185 +++++++++++
 .../version-2.2.0-beta/connector-v2/sink/Hive.md   | 156 +++++++++
 .../version-2.2.0-beta/connector-v2/sink/Http.md   |  66 ++++
 .../version-2.2.0-beta/connector-v2/sink/IoTDB.md  | 126 +++++++
 .../version-2.2.0-beta/connector-v2/sink/Jdbc.md   | 118 +++++++
 .../version-2.2.0-beta/connector-v2/sink/Kudu.md   |  46 +++
 .../connector-v2/sink/LocalFile.md                 | 175 ++++++++++
 .../connector-v2/sink/MongoDB.md                   |  46 +++
 .../version-2.2.0-beta/connector-v2/sink/Neo4j.md  |  87 +++++
 .../connector-v2/sink/OssFile.md                   | 217 ++++++++++++
 .../connector-v2/sink/Phoenix.md                   |  46 +++
 .../version-2.2.0-beta/connector-v2/sink/Redis.md  | 113 +++++++
 .../version-2.2.0-beta/connector-v2/sink/Sentry.md |  59 ++++
 .../version-2.2.0-beta/connector-v2/sink/Socket.md |  93 ++++++
 .../connector-v2/sink/common-options.md            |  45 +++
 .../connector-v2/sink/dingtalk.md                  |  38 +++
 .../connector-v2/source/Clickhouse.md              |  77 +++++
 .../connector-v2/source/FakeSource.md              |  85 +++++
 .../connector-v2/source/FtpFile.md                 | 117 +++++++
 .../connector-v2/source/Greenplum.md               |  29 ++
 .../connector-v2/source/HdfsFile.md                | 127 +++++++
 .../version-2.2.0-beta/connector-v2/source/Hive.md |  55 ++++
 .../version-2.2.0-beta/connector-v2/source/Http.md | 144 ++++++++
 .../version-2.2.0-beta/connector-v2/source/Hudi.md |  73 +++++
 .../connector-v2/source/Iceberg.md                 | 157 +++++++++
 .../connector-v2/source/IoTDB.md                   | 149 +++++++++
 .../version-2.2.0-beta/connector-v2/source/Jdbc.md | 102 ++++++
 .../version-2.2.0-beta/connector-v2/source/Kudu.md |  52 +++
 .../connector-v2/source/LocalFile.md               | 124 +++++++
 .../connector-v2/source/MongoDB.md                 |  76 +++++
 .../connector-v2/source/OssFile.md                 | 155 +++++++++
 .../connector-v2/source/Phoenix.md                 |  51 +++
 .../connector-v2/source/Redis.md                   | 158 +++++++++
 .../connector-v2/source/Socket.md                  |  94 ++++++
 .../connector-v2/source/common-options.md          |  33 ++
 .../connector-v2/source/pulsar.md                  | 137 ++++++++
 .../connector/flink-sql/ElasticSearch.md           |  50 +++
 .../version-2.2.0-beta/connector/flink-sql/Jdbc.md |  67 ++++
 .../connector/flink-sql/Kafka.md                   |  76 +++++
 .../connector/flink-sql/usage.md                   | 277 ++++++++++++++++
 .../version-2.2.0-beta/connector/sink/Assert.md    | 106 ++++++
 .../connector/sink/Clickhouse.md                   | 148 +++++++++
 .../connector/sink/ClickhouseFile.md               | 164 ++++++++++
 .../version-2.2.0-beta/connector/sink/Console.mdx  | 103 ++++++
 .../version-2.2.0-beta/connector/sink/Doris.mdx    | 176 ++++++++++
 .../version-2.2.0-beta/connector/sink/Druid.md     | 106 ++++++
 .../connector/sink/Elasticsearch.mdx               | 120 +++++++
 .../version-2.2.0-beta/connector/sink/Email.md     | 103 ++++++
 .../version-2.2.0-beta/connector/sink/File.mdx     | 192 +++++++++++
 .../version-2.2.0-beta/connector/sink/Hbase.md     |  68 ++++
 .../version-2.2.0-beta/connector/sink/Hive.md      |  72 ++++
 .../version-2.2.0-beta/connector/sink/Hudi.md      |  43 +++
 .../version-2.2.0-beta/connector/sink/Iceberg.md   |  70 ++++
 .../version-2.2.0-beta/connector/sink/InfluxDb.md  |  90 +++++
 .../version-2.2.0-beta/connector/sink/Jdbc.mdx     | 213 ++++++++++++
 .../version-2.2.0-beta/connector/sink/Kafka.md     |  64 ++++
 .../version-2.2.0-beta/connector/sink/Kudu.md      |  42 +++
 .../version-2.2.0-beta/connector/sink/MongoDB.md   |  51 +++
 .../version-2.2.0-beta/connector/sink/Phoenix.md   |  55 ++++
 .../version-2.2.0-beta/connector/sink/Redis.md     |  95 ++++++
 .../version-2.2.0-beta/connector/sink/Tidb.md      |  88 +++++
 .../connector/sink/common-options.md               |  45 +++
 .../version-2.2.0-beta/connector/source/Druid.md   |  67 ++++
 .../connector/source/Elasticsearch.md              |  64 ++++
 .../version-2.2.0-beta/connector/source/Fake.mdx   | 203 ++++++++++++
 .../connector/source/FeishuSheet.md                |  61 ++++
 .../version-2.2.0-beta/connector/source/File.mdx   | 124 +++++++
 .../version-2.2.0-beta/connector/source/Hbase.md   |  46 +++
 .../version-2.2.0-beta/connector/source/Hive.md    |  66 ++++
 .../version-2.2.0-beta/connector/source/Http.md    |  63 ++++
 .../version-2.2.0-beta/connector/source/Hudi.md    |  78 +++++
 .../version-2.2.0-beta/connector/source/Iceberg.md |  61 ++++
 .../connector/source/InfluxDb.md                   |  89 +++++
 .../version-2.2.0-beta/connector/source/Jdbc.mdx   | 207 ++++++++++++
 .../version-2.2.0-beta/connector/source/Kafka.mdx  | 179 ++++++++++
 .../version-2.2.0-beta/connector/source/Kudu.md    |  45 +++
 .../version-2.2.0-beta/connector/source/MongoDB.md |  64 ++++
 .../version-2.2.0-beta/connector/source/Phoenix.md |  60 ++++
 .../version-2.2.0-beta/connector/source/Redis.md   |  95 ++++++
 .../version-2.2.0-beta/connector/source/Socket.mdx | 106 ++++++
 .../version-2.2.0-beta/connector/source/Tidb.md    |  68 ++++
 .../version-2.2.0-beta/connector/source/Webhook.md |  44 +++
 .../connector/source/common-options.mdx            |  89 +++++
 .../version-2.2.0-beta/connector/source/neo4j.md   | 145 ++++++++
 .../contribution/contribute-plugin.md              | 142 ++++++++
 .../version-2.2.0-beta/contribution/new-license.md |  54 +++
 .../version-2.2.0-beta/contribution/setup.md       | 105 ++++++
 versioned_docs/version-2.2.0-beta/deployment.mdx   | 124 +++++++
 versioned_docs/version-2.2.0-beta/faq.md           | 364 +++++++++++++++++++++
 .../version-2.2.0-beta/images/azkaban.png          | Bin 0 -> 732486 bytes
 .../version-2.2.0-beta/images/checkstyle.png       | Bin 0 -> 479660 bytes
 versioned_docs/version-2.2.0-beta/images/kafka.png | Bin 0 -> 32151 bytes
 .../images/seatunnel-workflow.svg                  |   4 +
 .../images/seatunnel_architecture.png              | Bin 0 -> 778394 bytes
 .../images/seatunnel_starter.png                   | Bin 0 -> 423840 bytes
 .../version-2.2.0-beta/images/workflow.png         | Bin 0 -> 258921 bytes
 versioned_docs/version-2.2.0-beta/intro/about.md   |  72 ++++
 versioned_docs/version-2.2.0-beta/intro/history.md |  15 +
 versioned_docs/version-2.2.0-beta/intro/why.md     |  13 +
 .../version-2.2.0-beta/start-v2/docker.md          |   8 +
 .../version-2.2.0-beta/start-v2/kubernetes.mdx     | 270 +++++++++++++++
 .../version-2.2.0-beta/start-v2/local.mdx          | 165 ++++++++++
 versioned_docs/version-2.2.0-beta/start/docker.md  |   8 +
 .../version-2.2.0-beta/start/kubernetes.mdx        | 270 +++++++++++++++
 versioned_docs/version-2.2.0-beta/start/local.mdx  | 165 ++++++++++
 .../transform/common-options.mdx                   | 118 +++++++
 .../version-2.2.0-beta/transform/json.md           | 197 +++++++++++
 .../version-2.2.0-beta/transform/nullRate.md       |  69 ++++
 .../version-2.2.0-beta/transform/nulltf.md         |  75 +++++
 .../version-2.2.0-beta/transform/replace.md        |  81 +++++
 .../version-2.2.0-beta/transform/split.mdx         | 124 +++++++
 versioned_docs/version-2.2.0-beta/transform/sql.md |  62 ++++
 versioned_docs/version-2.2.0-beta/transform/udf.md |  44 +++
 .../version-2.2.0-beta/transform/uuid.md           |  64 ++++
 .../version-2.2.0-beta-sidebars.json               | 191 +++++++++++
 versions.json                                      |   1 +
 132 files changed, 12395 insertions(+), 4 deletions(-)

diff --git a/src/pages/download/data.json b/src/pages/download/data.json
index a4829f7a3..0fd323b30 100644
--- a/src/pages/download/data.json
+++ b/src/pages/download/data.json
@@ -1,4 +1,18 @@
 [
+	{
+		"date": "2022-10-02",
+		"version": "v2.2.0-beta",
+		"sourceCode": {
+			"src": "https://www.apache.org/dyn/closer.lua/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-src.tar.gz",
+			"asc": "https://downloads.apache.org/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-src.tar.gz.asc",
+			"sha512": "https://downloads.apache.org/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-src.tar.gz.sha512"
+		},
+		"binaryDistribution": {
+			"bin": "https://www.apache.org/dyn/closer.lua/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-bin.tar.gz",
+			"asc": "https://downloads.apache.org/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-bin.tar.gz.asc",
+			"sha512": "https://downloads.apache.org/incubator/seatunnel/2.2.0-beta/apache-seatunnel-incubating-2.2.0-beta-bin.tar.gz.sha512"
+		}
+	},
 	{
 		"date": "2022-08-04",
 		"version": "v2.1.3",
diff --git a/src/pages/versions/config.json b/src/pages/versions/config.json
index f122632dd..7221a1007 100644
--- a/src/pages/versions/config.json
+++ b/src/pages/versions/config.json
@@ -50,10 +50,10 @@
       "nextLink": "/docs/intro/about",
       "latestData": [
         {
-          "versionLabel": "2.1.3",
-          "docUrl": "/docs/2.1.3/intro/about",
-          "downloadUrl": "https://github.com/apache/incubator-seatunnel/releases/tag/2.1.3",
-          "sourceTag": "2.1.3"
+          "versionLabel": "2.2.0-beta",
+          "docUrl": "/docs/2.2.0-beta/intro/about",
+          "downloadUrl": "https://github.com/apache/incubator-seatunnel/releases/tag/2.2.0-beta",
+          "sourceTag": "2.2.0-beta"
         }
       ],
       "nextData": [
@@ -63,6 +63,12 @@
         }
       ],
       "historyData": [
+        {
+          "versionLabel": "2.2.0-beta",
+          "docUrl": "/docs/2.2.0-beta/intro/about",
+          "downloadUrl": "https://github.com/apache/incubator-seatunnel/releases/tag/2.2.0-beta",
+          "sourceTag": "2.2.0-beta"
+        },
         {
           "versionLabel": "2.1.3",
           "docUrl": "/docs/2.1.3/intro/about",
diff --git a/versioned_docs/version-2.2.0-beta/command/usage.mdx b/versioned_docs/version-2.2.0-beta/command/usage.mdx
new file mode 100644
index 000000000..e7406d83d
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/command/usage.mdx
@@ -0,0 +1,166 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Command usage
+
+## Command Entrypoint
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+bin/start-seatunnel-spark.sh
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+bin/start-seatunnel-flink.sh  
+```
+
+</TabItem>
+</Tabs>
+
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+bin/start-seatunnel-spark.sh \
+    -c config-path \
+    -m master \
+    -e deploy-mode \
+    -i city=beijing
+```
+
+- Use `-m` or `--master` to specify the cluster manager
+
+- Use `-e` or `--deploy-mode` to specify the deployment mode
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -c config-path \
+    -i key=value \
+    -r run-application \
+    [other params]
+```
+
+- Use `-r` or `--run-mode` to specify the Flink job run mode; you can use `run-application` or `run` (the default value)
+
+</TabItem>
+</Tabs>
+
+- Use `-c` or `--config` to specify the path of the configuration file
+
+- Use `-i` or `--variable` to specify the variables in the configuration file; you can configure multiple variables
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+# Yarn client mode
+./bin/start-seatunnel-spark.sh \
+    --master yarn \
+    --deploy-mode client \
+    --config ./config/application.conf
+
+# Yarn cluster mode
+./bin/start-seatunnel-spark.sh \
+    --master yarn \
+    --deploy-mode cluster \
+    --config ./config/application.conf
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```hocon
+env {
+    execution.parallelism = 1
+}
+
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+        sql = "select name,age from fake where name='"${my_name}"'"
+    }
+}
+
+sink {
+    ConsoleSink {}
+}
+```
+
+**Run**
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -c config-path \
+    -i my_name=kid-xiong
+```
+
+This will replace `"${my_name}"` in the configuration file with `kid-xiong`
+
+> All the configurations in the `env` section will be applied to Flink dynamic parameters with the format of `-D`, such as `-Dexecution.parallelism=1` .
+
+> For the rest of the parameters, refer to the original Flink parameters. You can check them with `bin/flink run -h` and add them as needed. For example, `-m yarn-cluster` specifies the `on yarn` mode.
+
+```bash
+bin/flink run -h
+```
+
+For example:
+
+* `-p 2` specifies that the job parallelism is `2`
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -p 2 \
+    -c config-path
+```
+
+* Configurable parameters of `flink yarn-cluster`
+
+For example: `-m yarn-cluster -ynm seatunnel` specifies that the job runs on `yarn` and that its name in the `yarn WebUI` is `seatunnel`
+
+```bash
+bin/start-seatunnel-flink.sh \
+    -m yarn-cluster \
+    -ynm seatunnel \
+    -c config-path
+```
+
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.2.0-beta/concept/config.md b/versioned_docs/version-2.2.0-beta/concept/config.md
new file mode 100644
index 000000000..533c3a5af
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/concept/config.md
@@ -0,0 +1,108 @@
+---
+sidebar_position: 2
+---
+
+# Intro to config file
+
+In SeaTunnel, the most important thing is the Config file, through which users can customize their own data
+synchronization requirements to maximize the potential of SeaTunnel. This section introduces how to
+configure the Config file.
+
+## Example
+
+Before you read on, you can find config file
+examples [here](https://github.com/apache/incubator-seatunnel/tree/dev/config) and in the distribution package's
+config directory.
+
+## Config file structure
+
+The Config file will be similar to the one below.
+
+```hocon
+env {
+  execution.parallelism = 1
+}
+
+source {
+  FakeSource {
+    result_table_name = "fake"
+    field_name = "name,age"
+  }
+}
+
+transform {
+  sql {
+    sql = "select name,age from fake"
+  }
+}
+
+sink {
+  Clickhouse {
+    host = "clickhouse:8123"
+    database = "default"
+    table = "seatunnel_console"
+    fields = ["name"]
+    username = "default"
+    password = ""
+  }
+}
+```
+
+As you can see, the Config file contains several sections: env, source, transform, sink. Different modules
+have different functions. After you understand these modules, you will understand how SeaTunnel works.
+
+### env
+
+Used to add optional engine parameters. No matter which engine you use (Spark or Flink), its corresponding
+optional parameters should be filled in here.
+
+<!-- TODO add supported env parameters -->
+
+### source
+
+source is used to define where SeaTunnel needs to fetch data and pass the fetched data to the next step.
+Multiple sources can be defined at the same time. For the currently supported sources,
+check [Source of SeaTunnel](../connector/source). Each source has its own specific parameters that define how to
+fetch data, and SeaTunnel also extracts the parameters that every source uses, such as
+the `result_table_name` parameter, which specifies the name of the data generated by the current
+source so that it can be referenced by other modules later.
+
+### transform
+
+When we have the data source, we may need to further process the data, so we have the transform module. Of
+course, 'may' means that we can also leave the transform empty and go
+directly from source to sink, like below.
+
+```hocon
+transform {
+  // no thing on here
+}
+```
+
+Like source, transform has specific parameters that belong to each module.
+For the currently supported transforms, check [Transform of SeaTunnel](../transform).
+
+### sink
+
+Our purpose with SeaTunnel is to synchronize data from one place to another, so it is critical to define how
+and where data is written. With the sink module provided by SeaTunnel, you can complete this operation quickly
+and efficiently. Sink and source are very similar, but the difference is that one reads while the other writes. So go check out
+our [supported sinks](../connector/sink).
+
+### Other
+
+When multiple sources and multiple sinks are defined, how do you know which data each sink writes and which
+data each transform reads? We use two key configurations: `result_table_name` and `source_table_name`.
+Each source module can be configured with a `result_table_name` to name the
+data it generates, and transform and sink modules can use `source_table_name` to
+refer to that name, indicating which data they want to read and process.
+A transform, as an intermediate processing module, can use both `result_table_name` and `source_table_name`
+at the same time. You will notice that in the example Config above, not every module is
+configured with these two parameters. That is because SeaTunnel has a default convention: if these two
+parameters are not configured, the data generated by the last module of the previous step will be used.
+This is much more convenient when there is only one source.
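+
+For illustration, a minimal sketch that wires the modules together explicitly (the table names `fake` and `fake_filtered` are just examples, and the Console sink is used for brevity):
+
+```hocon
+source {
+  FakeSource {
+    result_table_name = "fake"
+    field_name = "name,age"
+  }
+}
+
+transform {
+  sql {
+    source_table_name = "fake"
+    result_table_name = "fake_filtered"
+    sql = "select name, age from fake"
+  }
+}
+
+sink {
+  Console {
+    source_table_name = "fake_filtered"
+  }
+}
+```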
+
+## What's More
+
+If you want to know the details of this configuration format, please
+see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
diff --git a/versioned_docs/version-2.2.0-beta/concept/connector-v2-features.md b/versioned_docs/version-2.2.0-beta/concept/connector-v2-features.md
new file mode 100644
index 000000000..d400722fa
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/concept/connector-v2-features.md
@@ -0,0 +1,65 @@
+# Intro To Connector V2 Features
+
+## Differences Between Connector V2 And Connector V1
+
+Since https://github.com/apache/incubator-seatunnel/issues/1608 we have added the Connector V2 features.
+Connector V2 is a connector defined based on the SeaTunnel Connector API interface. Unlike Connector V1, Connector V2 supports the following features:
+
+* **Multi Engine Support** SeaTunnel Connector API is an engine-independent API. The connectors developed based on this API can run in multiple engines. Currently, Flink and Spark are supported, and we will support other engines in the future.
+* **Multi Engine Version Support** Decoupling the connector from the engine through the translation layer solves the problem that most connectors need to modify their code in order to support a new version of the underlying engine.
+* **Unified Batch And Stream** Connector V2 can perform batch processing or streaming processing. We do not need to develop connectors for batch and stream separately.
+* **Multiplexing JDBC/Log connection** Connector V2 supports JDBC resource reuse and sharing of database log parsing.
+
+## Source Connector Features
+
+Source connectors have some common core features, and each source connector supports them to varying degrees.
+
+### exactly-once
+
+If each piece of data in the data source is sent downstream by the source only once, we consider that this source connector supports exactly-once.
+
+In SeaTunnel, we can save the read **Split** and its **offset** (the position of the read data in the split at that time,
+such as line number, byte size, offset, etc.) as a **StateSnapshot** when checkpointing. If the task is restarted, we will get the last **StateSnapshot**,
+locate the **Split** and **offset** that were read last time, and continue to send data downstream.
+
+For example `File`, `Kafka`.
+
+### schema projection
+
+If the source connector supports reading only selected columns, redefining the column order, or defining the read data format through the `schema` param, we consider it to support schema projection.
+
+For example, `JDBCSource` can use SQL to define the columns to read, and `KafkaSource` can use the `schema` param to define the read schema.
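+
+As a rough illustration (borrowing the `FakeSource` example used elsewhere in these docs; the field names are arbitrary), a source that declares the read schema through the `schema` param looks like this:
+
+```hocon
+source {
+  FakeSource {
+    result_table_name = "fake"
+    schema = {
+      fields {
+        name = "string"
+        age = "int"
+      }
+    }
+  }
+}
+```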
+
+### batch
+
+Batch job mode. The data read is bounded and the job stops when all the data has been read.
+
+### stream
+
+Streaming job mode. The data read is unbounded and the job never stops.
+
+### parallelism
+
+A parallelism source connector supports the `parallelism` config option; each parallel instance creates a task to read the data.
+In the **Parallelism Source Connector**, the source will be split into multiple splits, and then the enumerator will allocate the splits to the SourceReaders for processing.
+
+### support user-defined split
+
+Users can configure the split rule.
+
+## Sink Connector Features
+
+Sink connectors have some common core features, and each sink connector supports them to varying degrees.
+
+### exactly-once
+
+When any piece of data flows into a distributed system, the system is considered to meet exactly-once consistency if it processes each piece of data exactly once during the whole process and the processing results are correct.
+
+For a sink connector, we say it supports exactly-once if any piece of data is written into the target only once. There are generally two ways to achieve this:
+
+* The target database supports key deduplication. For example `MySQL`, `Kudu`.
+* The target supports **XA Transaction** (this transaction can be used across sessions; even if the program that created the transaction has ended, a newly started program only needs to know the ID of the last transaction to resubmit or roll it back). Then we can use **Two-phase Commit** to ensure **exactly-once**. For example `File`, `MySQL`.
+
+### schema projection
+
+If a sink connector supports writing only the fields and types defined in the configuration, or redefining the column order, we consider it to support schema projection.
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Assert.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Assert.md
new file mode 100644
index 000000000..5a1612126
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Assert.md
@@ -0,0 +1,99 @@
+# Assert
+
+> Assert sink connector
+
+## Description
+
+A flink sink plugin which can assert illegal data by user-defined rules
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                          | type        | required | default value |
+| ----------------------------- | ----------  | -------- | ------------- |
+|rules                          | ConfigList  | yes      | -             |
+|rules.field_name               | string      | yes      | -             |
+|rules.field_type               | string      | no       | -             |
+|rules.field_value              | ConfigList  | no       | -             |
+|rules.field_value.rule_type    | string      | no       | -             |
+|rules.field_value.rule_value   | double      | no       | -             |
+
+
+### rules [ConfigList]
+
+Rule definitions for validating the data. Each rule represents one field validation.
+
+### field_name [string]
+
+field name(string)
+
+### field_type [string]
+
+field type (string),  e.g. `string,boolean,byte,short,int,long,float,double,char,void,BigInteger,BigDecimal,Instant`
+
+### field_value [ConfigList]
+
+A list of value rules that define the data value validation
+
+### rule_type [string]
+
+The following rules are supported for now
+- NOT_NULL `value can't be null`
+- MIN `define the minimum value of data`
+- MAX `define the maximum value of data`
+- MIN_LENGTH `define the minimum string length of a string data`
+- MAX_LENGTH `define the maximum string length of a string data`
+
+### rule_value [double]
+
+The value related to the rule type
+
+
+## Example
+The whole config obeys the `hocon` style:
+
+```hocon
+Assert {
+    rules = 
+        [{
+            field_name = name
+            field_type = string
+            field_value = [
+                {
+                    rule_type = NOT_NULL
+                },
+                {
+                    rule_type = MIN_LENGTH
+                    rule_value = 3
+                },
+                {
+                     rule_type = MAX_LENGTH
+                     rule_value = 5
+                }
+            ]
+        },{
+            field_name = age
+            field_type = int
+            field_value = [
+                {
+                    rule_type = NOT_NULL
+                },
+                {
+                    rule_type = MIN
+                    rule_value = 10
+                },
+                {
+                     rule_type = MAX
+                     rule_value = 20
+                }
+            ]
+        }
+        ]
+    
+}
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Clickhouse.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Clickhouse.md
new file mode 100644
index 000000000..32ee3b5f8
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Clickhouse.md
@@ -0,0 +1,119 @@
+# Clickhouse
+
+> Clickhouse sink connector
+
+## Description
+
+Used to write data to Clickhouse.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+
+The Clickhouse sink plug-in can achieve exactly-once by implementing idempotent writing, and needs to cooperate with `AggregatingMergeTree` and other engines that support deduplication.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+:::tip
+
+Writing data to Clickhouse can also be done using JDBC
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+|----------------|--------|----------|---------------|
+| host           | string | yes      | -             |
+| database       | string | yes      | -             |
+| table          | string | yes      | -             |
+| username       | string | yes      | -             |
+| password       | string | yes      | -             |
+| fields         | array  | no       | -             |
+| clickhouse.*   | string | no       |               |
+| bulk_size      | number | no       | 20000         |
+| split_mode     | boolean | no      | false         |
+| sharding_key   | string | no       | -             |
+| common-options | string | no       | -             |
+
+### host [string]
+
+`ClickHouse` cluster address, the format is `host:port` , allowing multiple `hosts` to be specified. Such as `"host1:8123,host2:8123"` .
+
+### database [string]
+
+The `ClickHouse` database
+
+### table [string]
+
+The table name
+
+### username [string]
+
+`ClickHouse` user username
+
+### password [string]
+
+`ClickHouse` user password
+
+### fields [array]
+
+The data fields that need to be output to `ClickHouse`. If not configured, they will be automatically adapted according to the sink table `schema`.
+
+### clickhouse [string]
+
+In addition to the above mandatory parameters that must be specified by `clickhouse-jdbc` , users can also specify multiple optional parameters, which cover all the [parameters](https://github.com/ClickHouse/clickhouse-jdbc/tree/master/clickhouse-client#configuration) provided by `clickhouse-jdbc` .
+
+The way to specify the parameter is to add the prefix `clickhouse.` to the original parameter name. For example, the way to specify `socket_timeout` is: `clickhouse.socket_timeout = 50000` . If these non-essential parameters are not specified, they will use the default values given by `clickhouse-jdbc`.
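+
+As a sketch, such options sit inside the sink block alongside the regular ones (the timeout value here is arbitrary):
+
+```hocon
+Clickhouse {
+    host = "localhost:8123"
+    database = "default"
+    table = "fake_all"
+    username = "default"
+    password = ""
+    # forwarded to clickhouse-jdbc as socket_timeout
+    clickhouse.socket_timeout = 50000
+}
+```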
+
+### bulk_size [number]
+
+The number of rows written through [Clickhouse-jdbc](https://github.com/ClickHouse/clickhouse-jdbc) each time, the `default is 20000` .
+
+### split_mode [boolean]
+
+This mode only supports a Clickhouse table whose engine is 'Distributed', and the `internal_replication` option
+should be `true`. SeaTunnel will split the distributed table data and write directly to each shard. The shard weight defined in Clickhouse will be
+taken into account.
+
+### sharding_key [string]
+
+When using split_mode, the node to send data to is chosen randomly by default, but the
+'sharding_key' parameter can be used to specify the field for the sharding algorithm. This option only
+works when 'split_mode' is true.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
+## Examples
+
+```hocon
+sink {
+
+  Clickhouse {
+    host = "localhost:8123"
+    database = "default"
+    table = "fake_all"
+    username = "default"
+    password = ""
+    split_mode = true
+    sharding_key = "age"
+  }
+  
+}
+```
+
+```hocon
+sink {
+
+  Clickhouse {
+    host = "localhost:8123"
+    database = "default"
+    table = "fake_all"
+    username = "default"
+    password = ""
+  }
+  
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/ClickhouseFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/ClickhouseFile.md
new file mode 100644
index 000000000..90e196c92
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/ClickhouseFile.md
@@ -0,0 +1,118 @@
+# ClickhouseFile
+
+> Clickhouse file sink connector
+
+## Description
+
+Generates the Clickhouse data file with the clickhouse-local program and then sends it to the Clickhouse
+server, which is also called bulk load. This connector only supports a Clickhouse table whose engine is 'Distributed', and the `internal_replication` option
+should be `true`. Supports both batch and streaming mode.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+:::tip
+
+Writing data to Clickhouse can also be done using JDBC
+
+:::
+
+## Options
+
+| name                   | type    | required | default value |
+|------------------------|---------|----------|---------------|
+| host                   | string  | yes      | -             |
+| database               | string  | yes      | -             |
+| table                  | string  | yes      | -             |
+| username               | string  | yes      | -             |
+| password               | string  | yes      | -             |
+| clickhouse_local_path  | string  | yes      | -             |
+| sharding_key           | string  | no       | -             |
+| copy_method            | string  | no       | scp           |
+| node_free_password     | boolean | no       | false         |
+| node_pass              | list    | no       | -             |
+| node_pass.node_address | string  | no       | -             |
+| node_pass.username     | string  | no       | "root"        |
+| node_pass.password     | string  | no       | -             |
+| common-options         | string  | no       | -             |
+
+### host [string]
+
+`ClickHouse` cluster address, the format is `host:port` , allowing multiple `hosts` to be specified. Such as `"host1:8123,host2:8123"` .
+
+### database [string]
+
+The `ClickHouse` database
+
+### table [string]
+
+The table name
+
+### username [string]
+
+`ClickHouse` user username
+
+### password [string]
+
+`ClickHouse` user password
+
+### sharding_key [string]
+
+When ClickhouseFile splits data, the node to send data to is chosen randomly by default, but the
+'sharding_key' parameter can be used to specify the field for the sharding algorithm.
+
+### clickhouse_local_path [string]
+
+The path of the clickhouse-local program on the Spark node. Since it needs to be called by each task,
+clickhouse-local should be located at the same path on each Spark node.
+
+### copy_method [string]
+
+Specifies the method used to transfer files. The default is `scp`; the optional values are `scp` and `rsync`.
+
+### node_free_password [boolean]
+
+Because SeaTunnel needs to use scp or rsync for file transfer, SeaTunnel needs access to the Clickhouse server side.
+If each Spark node and the Clickhouse server are configured with password-free login,
+you can configure this option to true; otherwise you need to configure the corresponding node password in the node_pass configuration.
+
+### node_pass [list]
+
+Used to save the addresses and corresponding passwords of all clickhouse servers
+
+### node_pass.node_address [string]
+
+The address corresponding to the clickhouse server
+
+### node_pass.username [string]
+
+The username corresponding to the clickhouse server, default root user.
+
+### node_pass.password [string]
+
+The password corresponding to the clickhouse server.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
+## Examples
+
+```hocon
+  ClickhouseFile {
+    host = "192.168.0.1:8123"
+    database = "default"
+    table = "fake_all"
+    username = "default"
+    password = ""
+    clickhouse_local_path = "/Users/seatunnel/Tool/clickhouse local"
+    sharding_key = "age"
+    node_free_password = false
+    node_pass = [{
+      node_address = "192.168.0.1"
+      password = "seatunnel"
+    }]
+  }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Console.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Console.md
new file mode 100644
index 000000000..9635d0487
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Console.md
@@ -0,0 +1,80 @@
+# Console
+
+> Console sink connector
+
+## Description
+
+Used to send data to the console. Supports both streaming and batch mode.
+> For example, if the data from upstream is [`age: 12, name: jared`], the content sent to the console is the following: `{"name":"jared","age":12}`
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name | type   | required | default value |
+| --- |--------|----------|---------------|
+## Example
+
+simple:
+
+```hocon
+Console {
+
+    }
+```
+
+test:
+
+* Configuring the SeaTunnel config file
+
+```hocon
+env {
+  execution.parallelism = 1
+  job.mode = "STREAMING"
+}
+
+source {
+    FakeSource {
+      result_table_name = "fake"
+      schema = {
+        fields {
+          name = "string"
+          age = "int"
+        }
+      }
+    }
+}
+
+transform {
+      sql {
+        sql = "select name, age from fake"
+      }
+}
+
+sink {
+    Console {
+
+    }
+}
+
+```
+
+* Start a SeaTunnel task
+
+
+* Console print data
+
+```text
+row=1 : XTblOoJMBr, 1968671376
+row=2 : NAoJoFrthI, 1603900622
+row=3 : VHZBzqQAPr, 1713899051
+row=4 : pfUYOOrPgA, 1412123956
+row=5 : dCNFobURas, 202987936
+row=6 : XGWVgFnfWA, 1879270917
+row=7 : KIGOqnLhqe, 430165110
+row=8 : goMdjHlRpX, 288221239
+row=9 : VBtpiNGArV, 1906991577
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Datahub.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Datahub.md
new file mode 100644
index 000000000..800c2a54b
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Datahub.md
@@ -0,0 +1,68 @@
+# Datahub
+
+> Datahub sink connector
+
+## Description
+
+A sink plugin which sends messages to DataHub
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name       | type   | required | default value |
+|------------|--------|----------|---------------|
+| endpoint   | string | yes      | -             |
+| accessId   | string | yes      | -             |
+| accessKey  | string | yes      | -             |
+| project    | string | yes      | -             |
+| topic      | string | yes      | -             |
+| timeout    | int    | yes      | -             |
+| retryTimes | int    | yes      | -             |
+
+### endpoint [string]
+
+Your DataHub endpoint, starting with http (string)
+
+### accessId [string]
+
+Your DataHub accessId, which can be obtained from Alibaba Cloud (string)
+
+### accessKey [string]
+
+Your DataHub accessKey, which can be obtained from Alibaba Cloud (string)
+
+### project [string]
+
+Your DataHub project, which is created in Alibaba Cloud (string)
+
+### topic [string]
+
+Your DataHub topic (string)
+
+### timeout [int]
+
+the max connection timeout (int)
+
+### retryTimes [int]
+
+The max retry times when the client fails to put a record (int)
+
+## Example
+
+```hocon
+sink {
+ DataHub {
+  endpoint="yourendpoint"
+  accessId="xxx"
+  accessKey="xxx"
+  project="projectname"
+  topic="topicname"
+  timeout=3000
+  retryTimes=3
+ }
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Elasticsearch.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Elasticsearch.md
new file mode 100644
index 000000000..fe8198f50
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Elasticsearch.md
@@ -0,0 +1,62 @@
+# Elasticsearch
+
+## Description
+
+Output data to `Elasticsearch`.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+:::tip
+
+Engine Supported
+
+* supported `Elasticsearch` version: `>= 2.x and < 8.x`
+
+:::
+
+## Options
+
+| name           | type   | required | default value | 
+|----------------|--------|----------|---------------|
+| hosts          | array  | yes      | -             |
+| index          | string | yes      | -             |
+| index_type     | string | no       |               |
+| username       | string | no       |               |
+| password       | string | no       |               | 
+| max_retry_size | int    | no       | 3             |
+| max_batch_size | int    | no       | 10            |
+
+
+
+### hosts [array]
+`Elasticsearch` cluster http address, the format is `host:port` , allowing multiple hosts to be specified. Such as `["host1:9200", "host2:9200"]`.
+
+### index [string]
+`Elasticsearch` `index` name. The index name supports variables of field names, such as `seatunnel_${age}`, and the field must appear in the SeaTunnel row.
+If not, we will treat it as a normal index.
+
+### index_type [string]
+`Elasticsearch` index type; it is recommended not to specify it for Elasticsearch 6 and above
+
+### username [string]
+x-pack username
+
+### password [string]
+x-pack password
+
+### max_retry_size [int]
+The max retry size of one bulk request
+
+### max_batch_size [int]
+The max doc size of one batch bulk request
+
+## Examples
+```bash
+Elasticsearch {
+    hosts = ["localhost:9200"]
+    index = "seatunnel-${age}"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Email.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Email.md
new file mode 100644
index 000000000..cc74cf495
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Email.md
@@ -0,0 +1,78 @@
+# Email
+
+> Email sink connector
+
+## Description
+
+Send the data as a file to email.
+
+ The tested email version is 1.5.6.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                     | type    | required | default value |
+|--------------------------|---------|----------|---------------|
+| email_from_address             | string  | yes      | -             |
+| email_to_address               | string  | yes      | -             |
+| email_host               | string  | yes      | -             |
+| email_transport_protocol             | string  | yes      | -             |
+| email_smtp_auth               | string  | yes      | -             |
+| email_authorization_code               | string  | yes      | -             |
+| email_message_headline             | string  | yes      | -             |
+| email_message_content               | string  | yes      | -             |
+
+
+### email_from_address [string]
+
+Sender email address.
+
+### email_to_address [string]
+
+Address to receive mail.
+
+### email_host [string]
+
+SMTP server to connect to.
+
+### email_transport_protocol [string]
+
+The protocol to load the session .
+
+### email_smtp_auth [string]
+
+Whether to authenticate the client.
+
+### email_authorization_code [string]
+
+Authorization code. You can obtain the authorization code from the mailbox settings.
+
+### email_message_headline [string]
+
+The subject line of the entire message.
+
+### email_message_content [string]
+
+The body of the entire message.
+
+
+## Example
+
+```bash
+
+ EmailSink {
+      email_from_address = "xxxxxx@qq.com"
+      email_to_address = "xxxxxx@163.com"
+      email_host="smtp.qq.com"
+      email_transport_protocol="smtp"
+      email_smtp_auth="true"
+      email_authorization_code=""
+      email_message_headline=""
+      email_message_content=""
+   }
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Enterprise-WeChat.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Enterprise-WeChat.md
new file mode 100644
index 000000000..28ec03059
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Enterprise-WeChat.md
@@ -0,0 +1,57 @@
+# Enterprise WeChat
+
+> Enterprise WeChat sink connector
+
+## Description
+
+A sink plugin which uses the Enterprise WeChat robot to send messages
+> For example, if the data from upstream is [`"alarmStatus": "firing", "alarmTime": "2022-08-03 01:38:49","alarmContent": "The disk usage exceeds the threshold"`], the output content to WeChat Robot is the following:
+> ```
+> alarmStatus: firing 
+> alarmTime: 2022-08-03 01:38:49
+> alarmContent: The disk usage exceeds the threshold
+> ```
+**Tips: The WeChat sink only supports a `string` webhook and the data from the source will be treated as body content in the webhook.**
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name | type   | required | default value |
+| --- |--------|----------| --- |
+| url | String | Yes      | - |
+| mentioned_list | array | No       | - |
+| mentioned_mobile_list | array | No       | - |
+
+### url [string]
+
+Enterprise WeChat webhook url, the format is https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=XXXXXX (string)
+
+### mentioned_list [array]
+
+A list of userids to remind the specified members in the group (@ a member); @all means to remind everyone. If the developer can't get the userid, they can use mentioned_mobile_list instead
+
+### mentioned_mobile_list [array]
+
+A list of mobile phone numbers to remind the group members corresponding to the mobile phone numbers (@ a member); @all means to remind everyone
+
+## Example
+
+simple:
+
+```hocon
+WeChat {
+        url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=693axxx6-7aoc-4bc4-97a0-0ec2sifa5aaa"
+    }
+```
+
+```hocon
+WeChat {
+        url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=693axxx6-7aoc-4bc4-97a0-0ec2sifa5aaa"
+        mentioned_list=["wangqing","@all"]
+        mentioned_mobile_list=["13800001111","@all"]
+    }
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Feishu.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Feishu.md
new file mode 100644
index 000000000..311a5d7fe
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Feishu.md
@@ -0,0 +1,42 @@
+# Feishu
+
+> Feishu sink connector
+
+## Description
+
+Used to launch Feishu webhooks using data.
+
+> For example, if the data from upstream is [`age: 12, name: tyrantlucifer`], the body content is the following: `{"age": 12, "name": "tyrantlucifer"}`
+
+**Tips: The Feishu sink only supports a `post json` webhook and the data from the source will be treated as body content in the webhook.**
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name | type   | required | default value |
+| --- |--------| --- | --- |
+| url | String | Yes | - |
+| headers | Map    | No | - |
+
+### url [string]
+
+Feishu webhook url
+
+### headers [Map]
+
+Http request headers
+
+## Example
+
+simple:
+
+```hocon
+Feishu {
+        url = "https://www.feishu.cn/flow/api/trigger-webhook/108bb8f208d9b2378c8c7aedad715c19"
+    }
+```
+
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/FtpFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/FtpFile.md
new file mode 100644
index 000000000..783346cb3
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/FtpFile.md
@@ -0,0 +1,153 @@
+# FtpFile
+
+> Ftp file sink connector
+
+## Description
+
+Output data to FTP.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name                             | type    | required | default value                                             |
+|----------------------------------|---------|----------|-----------------------------------------------------------|
+| host                             | string  | yes      | -                                                         |
+| port                             | int     | yes      | -                                                         |
+| username                         | string  | yes      | -                                                         |
+| password                         | string  | yes      | -                                                         |
+| path                             | string  | yes      | -                                                         |
+| file_name_expression             | string  | no       | "${transactionId}"                                        |
+| file_format                      | string  | no       | "text"                                                    |
+| filename_time_format             | string  | no       | "yyyy.MM.dd"                                              |
+| field_delimiter                  | string  | no       | '\001'                                                    |
+| row_delimiter                    | string  | no       | "\n"                                                      |
+| partition_by                     | array   | no       | -                                                         |
+| partition_dir_expression         | string  | no       | "\${k0}=\${v0}\/\${k1}=\${v1}\/...\/\${kn}=\${vn}\/"      |
+| is_partition_field_write_in_file | boolean | no       | false                                                     |
+| sink_columns                     | array   | no       | When this parameter is empty, all fields are sink columns |
+| is_enable_transaction            | boolean | no       | true                                                      |
+| save_mode                        | string  | no       | "error"                                                   |
+
+### host [string]
+
+The target ftp host is required
+
+### port [int]
+
+The target ftp port is required
+
+### username [string]
+
+The target ftp username is required
+
+### password [string]
+
+The target ftp password is required
+
+### path [string]
+
+The target dir path is required.
+
+### file_name_expression [string]
+
+`file_name_expression` describes the file expression which will be created into the `path`. We can add the variable `${now}` or `${uuid}` in the `file_name_expression`, like `test_${uuid}_${now}`,
+`${now}` represents the current time, and its format can be defined by specifying the option `filename_time_format`.
+
+Please note that if `is_enable_transaction` is `true`, we will automatically add `${transactionId}_` at the beginning of the file name.
+
+### file_format [string]
+
+We supported as the following file types:
+
+`text` `json` `csv` `orc` `parquet`
+
+Please note that the final file name will end with the file_format's suffix; the suffix of the text file is `txt`.
+
+### filename_time_format [string]
+
+When the format in the `file_name_expression` parameter is `xxxx-${now}` , `filename_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+
+### field_delimiter [string]
+
+The separator between columns in a row of data. Only needed by `text` and `csv` file format.
+
+### row_delimiter [string]
+
+The separator between rows in a file. Only needed by `text` and `csv` file format.
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### partition_dir_expression [string]
+
+If the `partition_by` is specified, we will generate the corresponding partition directory based on the partition information, and the final file will be placed in the partition directory.
+
+Default `partition_dir_expression` is `${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/`. `k0` is the first partition field and `v0` is the value of the first partition field.
+
+### is_partition_field_write_in_file [boolean]
+
+If `is_partition_field_write_in_file` is `true`, the partition field and its value will be written into the data file.
+
+For example, if you want to write a Hive data file, its value should be `false`.
+
+### sink_columns [array]
+
+Which columns need to be written to the file; the default value is all the columns obtained from `Transform` or `Source`.
+The order of the fields determines the order in which the file is actually written.
+
+### is_enable_transaction [boolean]
+
+If `is_enable_transaction` is true, we will ensure that data will not be lost or duplicated when it is written to the target directory.
+
+Please note that if `is_enable_transaction` is `true`, we will automatically add `${transactionId}_` at the beginning of the file name.
+
+Only support `true` now.
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite`. This means we will delete the old file when a new file has the same name as it.
+
+If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
+
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+## Example
+
+For text file format
+
+```bash
+
+FtpFile {
+    host="xxx.xxx.xxx.xxx"
+    port=21
+    username="username"
+    password="password"
+    path="/data/ftp"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="text"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Greenplum.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Greenplum.md
new file mode 100644
index 000000000..91af690d5
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Greenplum.md
@@ -0,0 +1,32 @@
+# Greenplum
+
+> Greenplum sink connector
+
+## Description
+
+Write data to Greenplum using [Jdbc connector](Jdbc.md).
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+:::tip
+
+Not support exactly-once semantics (XA transaction is not yet supported in Greenplum database).
+
+:::
+
+## Options
+
+### driver [string]
+
+Optional jdbc drivers:
+- `org.postgresql.Driver`
+- `com.pivotal.jdbc.GreenplumDriver`
+
+Warning: for license compliance, if you use `GreenplumDriver` you have to provide the Greenplum JDBC driver yourself, e.g. copy greenplum-xxx.jar to $SEATUNNEL_HOME/lib for Standalone.
+
+### url [string]
+
+The URL of the JDBC connection. If you use the postgresql driver, the value is `jdbc:postgresql://${your_host}:${your_port}/${your_database}`; if you use the greenplum driver, the value is `jdbc:pivotal:greenplum://${your_host}:${your_port};DatabaseName=${your_database}`
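+
+For reference, a minimal, untested sketch of writing to Greenplum through the Jdbc sink, assuming the common Jdbc sink options (`url`, `driver`, `user`, `password`, `query`) described in [Jdbc connector](Jdbc.md); the table and column names are illustrative:
+
+```hocon
+sink {
+  Jdbc {
+    driver = "org.postgresql.Driver"
+    url = "jdbc:postgresql://localhost:5432/mydb"
+    user = "gpadmin"
+    password = "password"
+    query = "insert into test_table(name, age) values(?, ?)"
+  }
+}
+```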
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/HdfsFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/HdfsFile.md
new file mode 100644
index 000000000..928156760
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/HdfsFile.md
@@ -0,0 +1,185 @@
+# HdfsFile
+
+> HDFS file sink connector
+
+## Description
+
+Output data to hdfs file
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use 2PC commit to ensure `exactly-once`
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] csv
+  - [x] parquet
+  - [x] orc
+  - [x] json
+
+## Options
+
+In order to use this connector, you must ensure your Spark/Flink cluster has already integrated Hadoop. The tested Hadoop version is 2.x.
+
+| name                             | type   | required | default value                                           |
+|----------------------------------| ------ | -------- |---------------------------------------------------------|
+| fs.defaultFS                     | string | yes      | -                                                       |
+| path                             | string | yes      | -                                                       |
+| file_name_expression             | string | no       | "${transactionId}"                                      |
+| file_format                      | string | no       | "text"                                                  |
+| filename_time_format             | string | no       | "yyyy.MM.dd"                                            |
+| field_delimiter                  | string | no       | '\001'                                                  |
+| row_delimiter                    | string | no       | "\n"                                                    |
+| partition_by                     | array  | no       | -                                                       |
+| partition_dir_expression         | string | no       | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/"              |
+| is_partition_field_write_in_file | boolean| no       | false                                                   |
+| sink_columns                     | array  | no       | When this parameter is empty, all fields are sink columns |
+| is_enable_transaction            | boolean| no       | true                                                    |
+| save_mode                        | string | no       | "error"                                                 |
+
+### fs.defaultFS [string]
+
+The Hadoop cluster address that starts with `hdfs://`, for example: `hdfs://hadoopcluster`
+
+### path [string]
+
+The target dir path is required.
+
+### file_name_expression [string]
+
+`file_name_expression` describes the file expression which will be created into the `path`. We can add the variable `${now}` or `${uuid}` in the `file_name_expression`, like `test_${uuid}_${now}`.
+`${now}` represents the current time, and its format can be defined by specifying the option `filename_time_format`.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+### file_format [string]
+
+The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+Please note that the final file name will end with the file_format's suffix; for the text file format, the suffix is `txt`.
+
+### filename_time_format [string]
+
+When the format in the `file_name_expression` parameter is `xxxx-${now}`, `filename_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd`. The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### field_delimiter [string]
+
+The separator between columns in a row of data. Only needed by `text` and `csv` file format.
+
+### row_delimiter [string]
+
+The separator between rows in a file. Only needed by `text` and `csv` file format.
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### partition_dir_expression [string]
+
+If the `partition_by` is specified, we will generate the corresponding partition directory based on the partition information, and the final file will be placed in the partition directory.
+
+Default `partition_dir_expression` is `${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/`. `k0` is the first partition field and `v0` is the value of the first partition field.
+
+### is_partition_field_write_in_file [boolean]
+
+If `is_partition_field_write_in_file` is `true`, the partition field and its value will be written into the data file.
+
+For example, if you want to write a Hive data file, its value should be `false`.
+
+### sink_columns [array]
+
+Which columns need to be written to the file; the default value is all of the columns obtained from `Transform` or `Source`.
+The order of the fields determines the order in which the file is actually written.
+
+### is_enable_transaction [boolean]
+
+If `is_enable_transaction` is true, we will ensure that data will not be lost or duplicated when it is written to the target directory.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+Only `true` is supported now.
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite`. This means we will delete the old file when a new file has the same name as it.
+
+If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
+
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+## Example
+
+For text file format
+
+```bash
+
+HdfsFile {
+    fs.defaultFS="hdfs://hadoopcluster"
+    path="/tmp/hive/warehouse/test2"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="text"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
+
+For parquet file format
+
+```bash
+
+HdfsFile {
+    fs.defaultFS="hdfs://hadoopcluster"
+    path="/tmp/hive/warehouse/test2"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="parquet"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
+
+For orc file format
+
+```bash
+
+HdfsFile {
+    fs.defaultFS="hdfs://hadoopcluster"
+    path="/tmp/hive/warehouse/test2"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="orc"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
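+
+The file sink is usually part of a full job config. A minimal sketch, assuming a FakeSource upstream like the one used in other connector docs; the cluster address and path below are placeholders:
+
+```bash
+env {
+  execution.parallelism = 1
+}
+
+source {
+  FakeSource {
+    result_table_name = "fake"
+    schema = {
+      fields {
+        name = "string"
+        age = "int"
+      }
+    }
+  }
+}
+
+sink {
+  HdfsFile {
+    fs.defaultFS = "hdfs://hadoopcluster"
+    path = "/tmp/seatunnel/sink"
+    file_format = "json"
+    is_enable_transaction = true
+  }
+}
+```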
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Hive.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Hive.md
new file mode 100644
index 000000000..e7e0a8f78
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Hive.md
@@ -0,0 +1,156 @@
+# Hive
+
+> Hive sink connector
+
+## Description
+
+Write data to Hive.
+
+In order to use this connector, you must ensure your Spark/Flink cluster has already integrated Hive. The tested Hive version is 2.3.9.
+
+**Tips: The Hive sink connector does not support array, map and struct data types now**
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use 2PC commit to ensure `exactly-once`
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] parquet
+  - [x] orc
+
+## Options
+
+| name                  | type   | required                                    | default value                                                 |
+|-----------------------| ------ |---------------------------------------------| ------------------------------------------------------------- |
+| table_name            | string | yes                                         | -                                                             |
+| metastore_uri         | string | yes                                         | -                                                             |
+| partition_by          | array  | required if hive sink table have partitions | -                                                             |
+| sink_columns          | array  | no                                          | When this parameter is empty, all fields are sink columns     |
+| is_enable_transaction | boolean| no                                          | true                                                          |
+| save_mode             | string | no                                          | "append"                                                      |
+
+### table_name [string]
+
+Target Hive table name, e.g. `db1.table1`
+
+### metastore_uri [string]
+
+Hive metastore uri
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### sink_columns [array]
+
+Which columns need to be written to Hive; the default value is all of the columns obtained from `Transform` or `Source`.
+The order of the fields determines the order in which the file is actually written.
+
+### is_enable_transaction [boolean]
+
+If `is_enable_transaction` is true, we will ensure that data will not be lost or duplicated when it is written to the target directory.
+
+Only `true` is supported now.
+
+### save_mode [string]
+
+Storage mode. We plan to support `overwrite` and `append`; only `append` is supported now.
+
+Streaming jobs do not support `overwrite`.
+
+## Example
+
+```bash
+
+  Hive {
+    table_name = "default.seatunnel_orc"
+    metastore_uri = "thrift://namenode001:9083"
+  }
+
+```
+
+### example 1
+
+We have a source table like this:
+
+```bash
+create table test_hive_source(
+     test_tinyint                          TINYINT,
+     test_smallint                       SMALLINT,
+     test_int                                INT,
+     test_bigint                           BIGINT,
+     test_boolean                       BOOLEAN,
+     test_float                             FLOAT,
+     test_double                         DOUBLE,
+     test_string                           STRING,
+     test_binary                          BINARY,
+     test_timestamp                  TIMESTAMP,
+     test_decimal                       DECIMAL(8,2),
+     test_char                             CHAR(64),
+     test_varchar                        VARCHAR(64),
+     test_date                             DATE,
+     test_array                            ARRAY<INT>,
+     test_map                              MAP<STRING, FLOAT>,
+     test_struct                           STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>
+     )
+PARTITIONED BY (test_par1 STRING, test_par2 STRING);
+
+```
+
+We need to read data from the source table and write it to another table:
+
+```bash
+create table test_hive_sink_text_simple(
+     test_tinyint                          TINYINT,
+     test_smallint                       SMALLINT,
+     test_int                                INT,
+     test_bigint                           BIGINT,
+     test_boolean                       BOOLEAN,
+     test_float                             FLOAT,
+     test_double                         DOUBLE,
+     test_string                           STRING,
+     test_binary                          BINARY,
+     test_timestamp                  TIMESTAMP,
+     test_decimal                       DECIMAL(8,2),
+     test_char                             CHAR(64),
+     test_varchar                        VARCHAR(64),
+     test_date                             DATE
+     )
+PARTITIONED BY (test_par1 STRING, test_par2 STRING);
+
+```
+
+The job config file can look like this:
+
+```
+env {
+  # You can set flink configuration here
+  execution.parallelism = 3
+  job.name="test_hive_source_to_hive"
+}
+
+source {
+  Hive {
+    table_name = "test_hive.test_hive_source"
+    metastore_uri = "thrift://ctyun7:9083"
+  }
+}
+
+transform {
+}
+
+sink {
+  # choose stdout output plugin to output data to console
+
+  Hive {
+    table_name = "test_hive.test_hive_sink_text_simple"
+    metastore_uri = "thrift://ctyun7:9083"
+    partition_by = ["test_par1", "test_par2"]
+    sink_columns = ["test_tinyint", "test_smallint", "test_int", "test_bigint", "test_boolean", "test_float", "test_double", "test_string", "test_binary", "test_timestamp", "test_decimal", "test_char", "test_varchar", "test_date", "test_par1", "test_par2"]
+  }
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Http.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Http.md
new file mode 100644
index 000000000..2a5cb4385
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Http.md
@@ -0,0 +1,66 @@
+# Http
+
+> Http sink connector
+
+## Description
+
+Used to launch web hooks using data.
+
+> For example, if the data from upstream is [`age: 12, name: tyrantlucifer`], the body content is the following: `{"age": 12, "name": "tyrantlucifer"}`
+
+**Tips: The Http sink only supports `post json` web hooks, and the data from the source will be treated as the body content of the web hook.**
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name                               | type   | required | default value |
+|------------------------------------|--------|----------|---------------|
+| url                                | String | Yes      | -             |
+| headers                            | Map    | No       | -             |
+| params                             | Map    | No       | -             |
+| retry                              | int    | No       | -             |
+| retry_backoff_multiplier_ms        | int    | No       | 100           |
+| retry_backoff_max_ms               | int    | No       | 10000         |
+
+
+### url [String]
+
+http request url
+
+### headers [Map]
+
+http headers
+
+### params [Map]
+
+http params
+
+### retry [int]
+
+The max retry times if the HTTP request throws an `IOException`
+
+### retry_backoff_multiplier_ms [int]
+
+The retry backoff multiplier (in milliseconds) if the HTTP request fails
+
+### retry_backoff_max_ms [int]
+
+The maximum retry backoff time (in milliseconds) if the HTTP request fails
+
+## Example
+
+simple:
+
+```hocon
+Http {
+        url = "http://localhost/test/webhook"
+        headers {
+            token = "9e32e859ef044462a257e1fc76730066"
+        }
+    }
+```
+
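+A sketch that also sets the optional `params` and retry options from the table above; the token, param values and retry settings are illustrative placeholders:
+
+```hocon
+Http {
+        url = "http://localhost/test/webhook"
+        headers {
+            token = "9e32e859ef044462a257e1fc76730066"
+        }
+        params {
+            region = "cn-north"
+        }
+        retry = 3
+        retry_backoff_multiplier_ms = 100
+        retry_backoff_max_ms = 10000
+    }
+```
+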
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/IoTDB.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/IoTDB.md
new file mode 100644
index 000000000..31389c03f
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/IoTDB.md
@@ -0,0 +1,126 @@
+# IoTDB
+
+> IoTDB sink connector
+
+## Description
+
+Used to write data to IoTDB.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+IoTDB supports the `exactly-once` feature through idempotent writing. If two pieces of data have
+the same `key` and `timestamp`, the new data will overwrite the old one.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+:::tip
+
+There is a thrift version conflict between IoTDB and Spark. Therefore, you need to execute `rm -f $SPARK_HOME/jars/libthrift*` and `cp $IOTDB_HOME/lib/libthrift* $SPARK_HOME/jars/` to resolve it.
+
+:::
+
+## Options
+
+| name                          | type              | required | default value |
+|-------------------------------|-------------------|----------|---------------|
+| node_urls                     | list              | yes      | -             |
+| username                      | string            | yes      | -             |
+| password                      | string            | yes      | -             |
+| batch_size                    | int               | no       | 1024          |
+| batch_interval_ms             | int               | no       | -             |
+| max_retries                   | int               | no       | -             |
+| retry_backoff_multiplier_ms   | int               | no       | -             |
+| max_retry_backoff_ms          | int               | no       | -             |
+| default_thrift_buffer_size    | int               | no       | -             |
+| max_thrift_frame_size         | int               | no       | -             |
+| zone_id                       | string            | no       | -             |
+| enable_rpc_compression        | boolean           | no       | -             |
+| connection_timeout_in_ms      | int               | no       | -             |
+| timeseries_options            | list              | no       | -             |
+| timeseries_options.path       | string            | no       | -             |
+| timeseries_options.data_type  | string            | no       | -             |
+| common-options                | string            | no       | -             |
+
+### node_urls [list]
+
+`IoTDB` cluster address, the format is `["host:port", ...]`
+
+### username [string]
+
+`IoTDB` username
+
+### password [string]
+
+`IoTDB` user password
+
+### batch_size [int]
+
+For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into IoTDB
+
+### batch_interval_ms [int]
+
+For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into IoTDB
+
+### max_retries [int]
+
+The number of retries when a flush fails
+
+### retry_backoff_multiplier_ms [int]
+
+Used as a multiplier for generating the next retry backoff delay
+
+### max_retry_backoff_ms [int]
+
+The amount of time to wait before attempting to retry a request to `IoTDB`
+
+### default_thrift_buffer_size [int]
+
+Thrift init buffer size in `IoTDB` client
+
+### max_thrift_frame_size [int]
+
+Thrift max frame size in `IoTDB` client
+
+### zone_id [string]
+
+The java.time.ZoneId used by the `IoTDB` client
+
+### enable_rpc_compression [boolean]
+
+Enable rpc compression in `IoTDB` client
+
+### connection_timeout_in_ms [int]
+
+The maximum time (in ms) to wait when connecting to `IoTDB`
+
+### timeseries_options [list]
+
+Timeseries options
+
+### timeseries_options.path [string]
+
+Timeseries path
+
+### timeseries_options.data_type [string]
+
+Timeseries data type
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
+## Examples
+
+```hocon
+sink {
+  IoTDB {
+    node_urls = ["localhost:6667"]
+    username = "root"
+    password = "root"
+    batch_size = 1024
+    batch_interval_ms = 1000
+  }
+}
+```
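+
+A sketch that also fills in `timeseries_options`, assuming it is a list of objects with the `path` and `data_type` keys described above; the path and data type values are placeholders:
+
+```hocon
+sink {
+  IoTDB {
+    node_urls = ["localhost:6667"]
+    username = "root"
+    password = "root"
+    batch_size = 1024
+    batch_interval_ms = 1000
+    timeseries_options = [
+      {
+        path = "root.ln.wf01.wt01.temperature"
+        data_type = "FLOAT"
+      }
+    ]
+  }
+}
+```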
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Jdbc.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Jdbc.md
new file mode 100644
index 000000000..8393cc17d
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Jdbc.md
@@ -0,0 +1,118 @@
+# JDBC
+
+> JDBC sink connector
+
+## Description
+Write data through JDBC. Supports batch mode and streaming mode, concurrent writing, and exactly-once semantics (using an XA transaction guarantee).
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+Use `XA transactions` to ensure `exactly-once`, so `exactly-once` is only supported for databases that support `XA transactions`. You can set `is_exactly_once=true` to enable it.
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                         | type    | required | default value |
+|------------------------------|---------|----------|---------------|
+| url                          | String  | Yes      | -             |
+| driver                       | String  | Yes      | -             |
+| user                         | String  | No       | -             |
+| password                     | String  | No       | -             |
+| query                        | String  | Yes      | -             |
+| connection_check_timeout_sec | Int     | No       | 30            |
+| max_retries                  | Int     | No       | 3             |
+| batch_size                   | Int     | No       | 300           |
+| batch_interval_ms            | Int     | No       | 1000          |
+| is_exactly_once              | Boolean | No       | false         |
+| xa_data_source_class_name    | String  | No       | -             |
+| max_commit_attempts          | Int     | No       | 3             |
+| transaction_timeout_sec      | Int     | No       | -1            |
+
+### driver [string]
+The JDBC class name used to connect to the remote data source; if you use MySQL, the value is `com.mysql.cj.jdbc.Driver`.
+Warning: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy mysql-connector-java-xxx.jar to $SEATUNNEL_HOME/lib for Standalone.
+
+### user [string]
+The connection user name
+
+### password [string]
+The connection password
+
+### url [string]
+The URL of the JDBC connection, for example: `jdbc:postgresql://localhost/test`
+
+### query [string]
+Query statement
+
+### connection_check_timeout_sec [int]
+
+The time in seconds to wait for the database operation used to validate the connection to complete.
+
+### max_retries [int]
+The number of retries when a batch submit (executeBatch) fails
+
+### batch_size [int]
+For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into the database
+
+### batch_interval_ms [int]
+For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `batch_interval_ms`, the data will be flushed into the database
+
+### is_exactly_once [boolean]
+Whether to enable exactly-once semantics, which will use XA transactions. If enabled, you need to set `xa_data_source_class_name`.
+
+### xa_data_source_class_name [string]
+The XA data source class name of the database driver, for example, for MySQL it is `com.mysql.cj.jdbc.MysqlXADataSource`; please refer to the appendix for other data sources
+
+### max_commit_attempts [int]
+The number of retries for transaction commit failures
+
+### transaction_timeout_sec [int]
+The timeout after the transaction is opened; the default is -1 (never time out). Note that setting the timeout may affect exactly-once semantics
+
+## Tips
+In the case of is_exactly_once = "true", XA transactions are used. This requires database support, and some databases require some setup:
+  1. Postgres needs to set `max_prepared_transactions > 1`, such as `ALTER SYSTEM set max_prepared_transactions to 10`.
+  2. The MySQL version needs to be >= `8.0.29` and non-root users need to be granted `XA_RECOVER_ADMIN` permissions, such as `grant XA_RECOVER_ADMIN on test_db.* to 'user1'@'%'`.
+
+## Appendix
+Here are some reference values for the parameters above.
+
+| datasource | driver                   | url                                       | xa_data_source_class_name           | maven                                                         |
+|------------|--------------------------|-------------------------------------------|-------------------------------------|---------------------------------------------------------------|
+| mysql      | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test          | com.mysql.cj.jdbc.MysqlXADataSource | https://mvnrepository.com/artifact/mysql/mysql-connector-java |
+| postgresql | org.postgresql.Driver    | jdbc:postgresql://localhost:5432/postgres | org.postgresql.xa.PGXADataSource    | https://mvnrepository.com/artifact/org.postgresql/postgresql  |
+| dm         | dm.jdbc.driver.DmDriver  | jdbc:dm://localhost:5236                  | dm.jdbc.driver.DmdbXADataSource     | https://mvnrepository.com/artifact/com.dameng/DmJdbcDriver18  |
+
+## Example
+Simple
+```
+jdbc {
+    url = "jdbc:mysql://localhost/test"
+    driver = "com.mysql.cj.jdbc.Driver"
+    user = "root"
+    password = "123456"
+    query = "insert into test_table(name,age) values(?,?)"
+}
+
+```
+
+Exactly-once
+```
+jdbc {
+
+    url = "jdbc:mysql://localhost/test"
+    driver = "com.mysql.cj.jdbc.Driver"
+
+    max_retries = 0
+    user = "root"
+    password = "123456"
+    query = "insert into test_table(name,age) values(?,?)"
+
+    is_exactly_once = "true"
+
+    xa_data_source_class_name = "com.mysql.cj.jdbc.MysqlXADataSource"
+}
+```
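+
+A PostgreSQL variant of the exactly-once example, a sketch built from the appendix values above; the connection details and table are placeholders:
+```
+jdbc {
+
+    url = "jdbc:postgresql://localhost:5432/postgres"
+    driver = "org.postgresql.Driver"
+
+    max_retries = 0
+    user = "postgres"
+    password = "123456"
+    query = "insert into test_table(name,age) values(?,?)"
+
+    is_exactly_once = "true"
+
+    xa_data_source_class_name = "org.postgresql.xa.PGXADataSource"
+}
+```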
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Kudu.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Kudu.md
new file mode 100644
index 000000000..ae08b3afa
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Kudu.md
@@ -0,0 +1,46 @@
+# Kudu
+
+> Kudu sink connector
+
+## Description
+
+Write data to Kudu.
+
+The tested Kudu version is 1.11.1.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                     | type    | required | default value |
+|--------------------------|---------|----------|---------------|
+| kudu_master             | string  | yes      | -             |
+| kudu_table               | string  | yes      | -             |
+| save_mode               | string  | yes      | -             |
+
+### kudu_master [string]
+
+`kudu_master` The address of the Kudu master, such as '192.168.88.110:7051'.
+
+### kudu_table [string]
+
+`kudu_table` The name of the Kudu table.
+
+### save_mode [string]
+
+Storage mode. We plan to support `overwrite` and `append`; only `append` is supported now.
+
+## Example
+
+```bash
+
+ kuduSink {
+      kudu_master = "192.168.88.110:7051"
+      kudu_table = "studentlyhresultflink"
+      save_mode="append"
+   }
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/LocalFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/LocalFile.md
new file mode 100644
index 000000000..b9ddd3f39
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/LocalFile.md
@@ -0,0 +1,175 @@
+# LocalFile
+
+> Local file sink connector
+
+## Description
+
+Output data to local file.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use 2PC commit to ensure `exactly-once`
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+    - [x] text
+    - [x] csv
+    - [x] parquet
+    - [x] orc
+    - [x] json
+
+## Options
+
+| name                              | type   | required | default value                                       |
+| --------------------------------- | ------ | -------- | --------------------------------------------------- |
+| path                              | string | yes      | -                                                   |
+| file_name_expression              | string | no       | "${transactionId}"                                  |
+| file_format                       | string | no       | "text"                                              |
+| filename_time_format              | string | no       | "yyyy.MM.dd"                                        |
+| field_delimiter                   | string | no       | '\001'                                              |
+| row_delimiter                     | string | no       | "\n"                                                |
+| partition_by                      | array  | no       | -                                                   |
+| partition_dir_expression          | string | no       | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/"          |
+| is_partition_field_write_in_file  | boolean| no       | false                                               |
+| sink_columns                      | array  | no       | When this parameter is empty, all fields are sink columns |
+| is_enable_transaction             | boolean| no       | true                                                |
+| save_mode                         | string | no       | "error"                                             |
+
+### path [string]
+
+The target directory path is required.
+
+### file_name_expression [string]
+
+`file_name_expression` describes the file expression which will be created into the `path`. We can add the variable `${now}` or `${uuid}` in the `file_name_expression`, like `test_${uuid}_${now}`.
+`${now}` represents the current time, and its format can be defined by specifying the option `filename_time_format`.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+### file_format [string]
+
+The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+Please note that the final file name will end with the file_format's suffix; for the text file format, the suffix is `txt`.
+
+### filename_time_format [string]
+
+When the format in the `file_name_expression` parameter is `xxxx-${now}`, `filename_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd`. The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### field_delimiter [string]
+
+The separator between columns in a row of data. Only needed by `text` and `csv` file format.
+
+### row_delimiter [string]
+
+The separator between rows in a file. Only needed by `text` and `csv` file format.
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### partition_dir_expression [string]
+
+If the `partition_by` is specified, we will generate the corresponding partition directory based on the partition information, and the final file will be placed in the partition directory.
+
+Default `partition_dir_expression` is `${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/`. `k0` is the first partition field and `v0` is the value of the first partition field.
+
+### is_partition_field_write_in_file [boolean]
+
+If `is_partition_field_write_in_file` is `true`, the partition field and its value will be written into the data file.
+
+For example, if you want to write a Hive data file, its value should be `false`.
+
+### sink_columns [array]
+
+Which columns need to be written to the file; the default value is all of the columns obtained from `Transform` or `Source`.
+The order of the fields determines the order in which the file is actually written.
+
+### is_enable_transaction [boolean]
+
+If `is_enable_transaction` is true, we will ensure that data will not be lost or duplicated when it is written to the target directory.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+Only `true` is supported now.
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite`. This means we will delete the old file when a new file has the same name as it.
+
+If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
+
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+## Example
+
+For text file format
+
+```bash
+
+LocalFile {
+    path="/tmp/hive/warehouse/test2"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="text"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
+
+For parquet file format
+
+```bash
+
+LocalFile {
+    path="/tmp/hive/warehouse/test2"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="parquet"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
+
+For orc file format
+
+```bash
+
+LocalFile {
+    path="/tmp/hive/warehouse/test2"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="orc"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+}
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/MongoDB.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/MongoDB.md
new file mode 100644
index 000000000..2768aa03c
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/MongoDB.md
@@ -0,0 +1,46 @@
+# MongoDB
+
+> MongoDB sink connector
+
+## Description
+
+Write data to `MongoDB`
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name       | type   | required | default value |
+|------------| ------ |----------| ------------- |
+| uri        | string | yes      | -             |
+| database   | string | yes      | -             |
+| collection | string | yes      | -             |
+
+### uri [string]
+
+The MongoDB connection URI to write to
+
+### database [string]
+
+The MongoDB database to write to
+
+### collection [string]
+
+The MongoDB collection to write to
+
+## Example
+
+```bash
+mongodb {
+    uri = "mongodb://username:password@127.0.0.1:27017/mypost?retryWrites=true&writeConcern=majority"
+    database = "mydatabase"
+    collection = "mycollection"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Neo4j.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Neo4j.md
new file mode 100644
index 000000000..519212b01
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Neo4j.md
@@ -0,0 +1,87 @@
+# Neo4j
+
+> Neo4j sink connector
+
+## Description
+
+Write data to Neo4j. 
+
+`neo4j-java-driver` version 4.4.9
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                       | type   | required | default value |
+|----------------------------|--------|----------|---------------|
+| uri                        | String | Yes      | -             |
+| username                   | String | No       | -             |
+| password                   | String | No       | -             |
+| bearer_token               | String | No       | -             |
+| kerberos_ticket            | String | No       | -             |
+| database                   | String | Yes      | -             |
+| query                      | String | Yes      | -             |
+| queryParamPosition         | Object | Yes      | -             |
+| max_transaction_retry_time | Long   | No       | 30            |
+| max_connection_timeout     | Long   | No       | 30            |
+
+
+### uri [string]
+The URI of the Neo4j database, for example: `neo4j://localhost:7687`
+
+### username [string]
+The username of the Neo4j database
+
+### password [string]
+The password of the Neo4j database. Required if `username` is provided
+
+### bearer_token [string]
+Base64-encoded bearer token of the Neo4j database, used for authentication
+
+### kerberos_ticket [string]
+Base64-encoded Kerberos ticket of the Neo4j database, used for authentication
+
+### database [string]
+The database name.
+
+### query [string]
+Query statement. It may contain parameter placeholders that are substituted with the corresponding values at runtime
+
+### queryParamPosition [object]
+Position mapping information for the query parameters.
+
+The key name is the parameter placeholder name.
+
+The associated value is the position of the field in the input data row.
+
+
+### max_transaction_retry_time [long]
+The maximum transaction retry time in seconds; the transaction fails if it is exceeded
+
+### max_connection_timeout [long]
+The maximum amount of time to wait for a TCP connection to be established (seconds)
+
+
+## Example
+```
+sink {
+  Neo4j {
+    uri = "neo4j://localhost:7687"
+    username = "neo4j"
+    password = "1234"
+    database = "neo4j"
+
+    max_transaction_retry_time = 10
+    max_connection_timeout = 10
+
+    query = "CREATE (a:Person {name: $name, age: $age})"
+    queryParamPosition = {
+        name = 0
+        age = 1
+    }
+  }
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/OssFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/OssFile.md
new file mode 100644
index 000000000..c5a96aae1
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/OssFile.md
@@ -0,0 +1,217 @@
+# OssFile
+
+> Oss file sink connector
+
+## Description
+
+Output data to oss file system.
+
+> Tips: We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OSS and this connector needs some Hadoop dependencies.
+> It only supports Hadoop version **2.9.X+**.
+
+## Key features
+
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+By default, we use 2PC commit to ensure `exactly-once`
+
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] file format
+    - [x] text
+    - [x] csv
+    - [x] parquet
+    - [x] orc
+    - [x] json
+
+## Options
+
+| name                             | type   | required | default value               |
+|----------------------------------| ------ |---------|-----------------------------|
+| path                             | string | yes     | -                           |
+| bucket                           | string | yes     | -                           |
+| access_key                       | string | yes     | -                           |
+| access_secret                    | string | yes     | -                           |
+| endpoint                         | string | yes     | -                           |
+| file_name_expression             | string | no      | "${transactionId}"          |
+| file_format                      | string | no      | "text"                      |
+| filename_time_format             | string | no      | "yyyy.MM.dd"                |
+| field_delimiter                  | string | no      | '\001'                      |
+| row_delimiter                    | string | no      | "\n"                        |
+| partition_by                     | array  | no      | -                           |
+| partition_dir_expression         | string | no      | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/" |
+| is_partition_field_write_in_file | boolean| no      | false                       |
+| sink_columns                     | array  | no      | When this parameter is empty, all fields are sink columns |
+| is_enable_transaction            | boolean| no      | true                        |
+| save_mode                        | string | no      | "error"                     |
+
+### path [string]
+
+The target directory path is required.
+
+### bucket [string]
+
+The bucket address of the OSS file system, for example: `oss://tyrantlucifer-image-bed`
+
+### access_key [string]
+
+The access key of the OSS file system.
+
+### access_secret [string]
+
+The access secret of the OSS file system.
+
+### endpoint [string]
+
+The endpoint of the OSS file system.
+
+### file_name_expression [string]
+
+`file_name_expression` describes the file expression which will be created into the `path`. We can add the variable `${now}` or `${uuid}` in the `file_name_expression`, like `test_${uuid}_${now}`.
+`${now}` represents the current time, and its format can be defined by specifying the option `filename_time_format`.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+### file_format [string]
+
+The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+Please note that the final file name will end with the file_format's suffix; for the text file format, the suffix is `txt`.
+
+### filename_time_format [string]
+
+When the format in the `file_name_expression` parameter is `xxxx-${now}`, `filename_time_format` can specify the time format of the path, and the default value is `yyyy.MM.dd`. The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### field_delimiter [string]
+
+The separator between columns in a row of data. Only needed by `text` and `csv` file format.
+
+### row_delimiter [string]
+
+The separator between rows in a file. Only needed by `text` and `csv` file format.
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### partition_dir_expression [string]
+
+If the `partition_by` is specified, we will generate the corresponding partition directory based on the partition information, and the final file will be placed in the partition directory.
+
+Default `partition_dir_expression` is `${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/`. `k0` is the first partition field and `v0` is the value of the first partition field.
+
+### is_partition_field_write_in_file [boolean]
+
+If `is_partition_field_write_in_file` is `true`, the partition field and its value will be written into the data file.
+
+For example, if you want to write a Hive data file, its value should be `false`.
+
+### sink_columns [array]
+
+Which columns need to be written to the file; the default value is all of the columns obtained from `Transform` or `Source`.
+The order of the fields determines the order in which the file is actually written.
+
+### is_enable_transaction [boolean]
+
+If `is_enable_transaction` is true, we will ensure that data will not be lost or duplicated when it is written to the target directory.
+
+Please note that if `is_enable_transaction` is `true`, `${transactionId}_` will be automatically added to the beginning of the file name.
+
+Only `true` is supported now.
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite`. This means we will delete the old file when a new file has the same name as it.
+
+If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
+
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+## Example
+
+For text file format
+
+```hocon
+
+  OssFile {
+    path="/seatunnel/sink"
+    bucket = "oss://tyrantlucifer-image-bed"
+    access_key = "xxxxxxxxxxx"
+    access_secret = "xxxxxxxxxxx"
+    endpoint = "oss-cn-beijing.aliyuncs.com"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="text"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+    save_mode="error"
+  }
+
+```
+
+For parquet file format
+
+```hocon
+
+  OssFile {
+    path="/seatunnel/sink"
+    bucket = "oss://tyrantlucifer-image-bed"
+    access_key = "xxxxxxxxxxx"
+    access_secret = "xxxxxxxxxxxxxxxxx"
+    endpoint = "oss-cn-beijing.aliyuncs.com"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="parquet"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+    save_mode="error"
+  }
+
+```
+
+For orc file format
+
+```bash
+
+  OssFile {
+    path="/seatunnel/sink"
+    bucket = "oss://tyrantlucifer-image-bed"
+    access_key = "xxxxxxxxxxx"
+    access_secret = "xxxxxxxxxxx"
+    endpoint = "oss-cn-beijing.aliyuncs.com"
+    field_delimiter="\t"
+    row_delimiter="\n"
+    partition_by=["age"]
+    partition_dir_expression="${k0}=${v0}"
+    is_partition_field_write_in_file=true
+    file_name_expression="${transactionId}_${now}"
+    file_format="orc"
+    sink_columns=["name","age"]
+    filename_time_format="yyyy.MM.dd"
+    is_enable_transaction=true
+    save_mode="error"
+  }
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Phoenix.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Phoenix.md
new file mode 100644
index 000000000..f7383daea
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Phoenix.md
@@ -0,0 +1,46 @@
+# Phoenix
+
+> Phoenix sink connector
+
+## Description
+Write Phoenix data through the [Jdbc connector](Jdbc.md).
+Supports batch mode and streaming mode. The tested Phoenix versions are 4.xx and 5.xx.
+In the underlying implementation, the upsert statement is executed through the Phoenix JDBC driver to write data to HBase.
+There are two ways of connecting to Phoenix with Java JDBC: one is to connect to ZooKeeper through JDBC, and the other is to connect to the query server through the JDBC thin client.
+
+> Tips: By default, the (thin) driver jar is used. If you want to use the (thick) driver or other versions of the Phoenix (thin) driver, you need to recompile the jdbc connector module.
+
+> Tips: Exactly-once semantics are not supported (XA transactions are not yet supported in Phoenix).
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+### driver [string]
+If you use the Phoenix (thick) driver, the value is `org.apache.phoenix.jdbc.PhoenixDriver`; if you use the (thin) driver, the value is `org.apache.phoenix.queryserver.client.Driver`
+
+### url [string]
+If you use the Phoenix (thick) driver, the value is `jdbc:phoenix:localhost:2182/hbase`; if you use the (thin) driver, the value is `jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF`
+
+## Example
+Use the thick client driver:
+```
+    Jdbc {
+        driver = org.apache.phoenix.jdbc.PhoenixDriver
+        url = "jdbc:phoenix:localhost:2182/hbase"
+        query = "upsert into test.sink(age, name) values(?, ?)"
+    }
+
+```
+
+Use the thin client driver:
+```
+    Jdbc {
+        driver = org.apache.phoenix.queryserver.client.Driver
+        url = "jdbc:phoenix:thin:url=http://spark_e2e_phoenix_sink:8765;serialization=PROTOBUF"
+        query = "upsert into test.sink(age, name) values(?, ?)"
+    }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Redis.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Redis.md
new file mode 100644
index 000000000..550e89e9b
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Redis.md
@@ -0,0 +1,113 @@
+# Redis
+
+> Redis sink connector
+
+## Description
+
+Used to write data to Redis.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name      | type   | required | default value |
+|-----------|--------|----------|---------------|
+| host      | string | yes      | -             |
+| port      | int    | yes      | -             |
+| key       | string | yes      | -             |
+| data_type | string | yes      | -             |
+| auth      | string | No       | -             |
+| format    | string | No       | json          |
+
+### host [string]
+
+Redis host
+
+### port [int]
+
+Redis port
+
+### key [string]
+
+The key you want to write to Redis.
+
+For example, if you want to use the value of a field from the upstream data as the key, you can set this option to that field name.
+
+Upstream data is the following:
+
+| code | data           | success |
+|------|----------------|---------|
+| 200  | get success    | true    |
+| 500  | internal error | false   |
+
+If you assign the field name to `code` and data_type to `key`, two records will be written to Redis:
+1. `200 -> {code: 200, data: get success, success: true}`
+2. `500 -> {code: 500, data: internal error, success: false}`
+
+If you assign the field name to `value` and data_type to `key`, only one record will be written to Redis because `value` does not exist in the upstream data's fields:
+
+1. `value -> {code: 500, data: internal error, success: false}`
+
+Please see the data_type section for the specific writing rules.
+
+Of course, the json format used here is just an example; the actual user-configured `format` option prevails.
+
+### data_type [string]
+
+Redis data types, support `key` `hash` `list` `set` `zset`
+
+- key
+> Each record from upstream will be updated to the configured key, which means later data will overwrite earlier data, and only the last record will be stored in the key.
+
+- hash
+> Each record from upstream will be split according to its fields and written to the hash key; later data will also overwrite earlier data.
+
+- list
+> Each record from upstream will be added to the configured list key.
+
+- set
+> Each record from upstream will be added to the configured set key.
+
+- zset
+> Each record from upstream will be added to the configured zset key with a weight of 1, so the order of data in the zset is based on the order of data consumption.
+
+### auth [String]
+
+Redis authentication password; you need it when you connect to a password-protected Redis instance
+
+### format [String]
+
+The format of the upstream data. Currently only `json` is supported; `text` will be supported later. The default is `json`.
+
+When you set format to `json`, for example:
+
+Upstream data is the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+The connector will generate data as follows and write it to Redis:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  "true"}
+
+```
+
+## Example
+
+simple:
+
+```hocon
+  Redis {
+    host = localhost
+    port = 6379
+    key = age
+    data_type = list
+  }
+```
+
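+A sketch of the `data_type = key` case described above, where the value of the upstream `code` field becomes the Redis key; the host, port and password are placeholders:
+
+```hocon
+  Redis {
+    host = localhost
+    port = 6379
+    auth = "mypassword"
+    key = code
+    data_type = key
+  }
+```
+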
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Sentry.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Sentry.md
new file mode 100644
index 000000000..1e64e8aab
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Sentry.md
@@ -0,0 +1,59 @@
+# Sentry
+
+## Description
+
+Write message to Sentry.
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+
+## Options
+
+| name                       | type    | required | default value |
+|----------------------------|---------|----------| ------------- |
+| dsn                        | string  | yes      | -             |
+| env                        | string  | no       | -             |
+| release                    | string  | no       | -             |
+| cacheDirPath               | string  | no       | -             |
+| enableExternalConfiguration | boolean | no       | -             |
+| maxCacheItems              | number  | no       | -             |
+| flushTimeoutMills          | number  | no       | -             |
+| maxQueueSize               | number  | no       | -             |
+### dsn [string]
+
+The DSN tells the SDK where to send the events to.
+
+### env [string]
+Specify the environment.
+
+### release [string]
+Specify the release.
+
+### cacheDirPath [string]
+The cache dir path for caching offline events.
+
+### enableExternalConfiguration [boolean]
+Whether loading properties from external sources is enabled.
+
+### maxCacheItems [number]
+The max cache items for capping the number of events. Default is 30.
+
+### flushTimeoutMillis [number]
+Controls how long to wait before flushing down. Sentry SDKs cache events in a background queue and this queue is given a certain amount of time to drain pending events. Default is 15000 ms (15 s).
+
+### maxQueueSize [number]
+The max queue size before flushing events/envelopes to the disk.
+
+## Example
+```
+  Sentry {
+    dsn = "https://xxx@sentry.xxx.com:9999/6"
+    enableExternalConfiguration = true
+    maxCacheItems = 1000
+    env = prod
+  }
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/Socket.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Socket.md
new file mode 100644
index 000000000..498cfa99d
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/Socket.md
@@ -0,0 +1,93 @@
+# Socket
+
+> Socket sink connector
+
+## Description
+
+Used to send data to a Socket server. Both streaming and batch mode are supported.
+> For example, if the data from upstream is [`age: 12, name: jared`], the content sent to the socket server is the following: `{"name":"jared","age":12}`
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name        | type    | required | default value |
+|-------------|---------|----------|---------------|
+| host        | String  | Yes      | -             |
+| port        | Integer | Yes      | -             |
+| max_retries | Integer | No       | 3             |
+
+### host [string]
+socket server host
+
+### port [integer]
+
+socket server port
+
+### max_retries [integer]
+
+The number of retries when sending a record fails
+
+## Example
+
+simple:
+
+```hocon
+Socket {
+        host = "localhost"
+        port = 9999
+    }
+```
+
+test:
+
+* Configure the SeaTunnel config file
+
+```hocon
+env {
+  execution.parallelism = 1
+  job.mode = "STREAMING"
+}
+
+source {
+    FakeSource {
+      result_table_name = "fake"
+      schema = {
+        fields {
+          name = "string"
+          age = "int"
+        }
+      }
+    }
+}
+
+transform {
+      sql = "select name, age from fake"
+}
+
+sink {
+    Socket {
+        host = "localhost"
+        port = 9999
+    }
+}
+
+```
+
+* Start a port listener
+
+```shell
+nc -l -v 9999
+```
+
+* Start a SeaTunnel task
+
+
+* The Socket server console prints the data
+
+```text
+{"name":"jared","age":17}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/common-options.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/common-options.md
new file mode 100644
index 000000000..ac4a2e428
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/common-options.md
@@ -0,0 +1,45 @@
+# Common Options
+
+> Common parameters of sink connectors
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| source_table_name | string | no       | -             |
+
+### source_table_name [string]
+
+When `source_table_name` is not specified, the current plugin processes the data set output by the previous plugin in the configuration file;
+
+When `source_table_name` is specified, the current plugin processes the data set corresponding to this parameter.
+
+## Examples
+
+```bash
+source {
+    FakeSourceStream {
+      result_table_name = "fake"
+      field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+      source_table_name = "fake"
+      sql = "select name from fake"
+      result_table_name = "fake_name"
+    }
+    sql {
+      source_table_name = "fake"
+      sql = "select age from fake"
+      result_table_name = "fake_age"
+    }
+}
+
+sink {
+    console {
+      source_table_name = "fake_name"
+    }
+}
+```
+
+> If `source_table_name` is not specified, the console outputs the data of the last transform, and if it is set to `fake_name`, it will output the data of `fake_name`.
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/sink/dingtalk.md b/versioned_docs/version-2.2.0-beta/connector-v2/sink/dingtalk.md
new file mode 100644
index 000000000..e949ae2bc
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/sink/dingtalk.md
@@ -0,0 +1,38 @@
+# DingTalk
+
+> DingTalk sink connector
+
+## Description
+
+A sink plugin which uses a DingTalk robot to send messages
+
+## Key features
+
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+
+## Options
+
+| name   | type   | required | default value |
+|--------|--------|----------|---------------|
+| url    | string | yes      | -             |
+| secret | string | yes      | -             |
+
+### url [string]
+
+DingTalk robot address; the format is `https://oapi.dingtalk.com/robot/send?access_token=XXXXXX` (string)
+
+### secret [string]
+
+DingTalk robot secret (string)
+
+## Example
+
+```hocon
+sink {
+ DingTalk {
+  url="https://oapi.dingtalk.com/robot/send?access_token=ec646cccd028d978a7156ceeac5b625ebd94f586ea0743fa501c100007890"
+  secret="SEC093249eef7aa57d4388aa635f678930c63db3d28b2829d5b2903fc1e5c10000"
+ }
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Clickhouse.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Clickhouse.md
new file mode 100644
index 000000000..e73c621b2
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Clickhouse.md
@@ -0,0 +1,77 @@
+# Clickhouse
+
+> Clickhouse source connector
+
+## Description
+
+Used to read data from Clickhouse.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports query SQL and can achieve the projection effect.
+
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+:::tip
+
+Reading data from Clickhouse can also be done using JDBC
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+|----------------|--------|----------|---------------|
+| host           | string | yes      | -             |
+| database       | string | yes      | -             |
+| sql            | string | yes      | -             |
+| username       | string | yes      | -             |
+| password       | string | yes      | -             |
+| common-options | string | yes      | -             |
+
+### host [string]
+
+`ClickHouse` cluster address, the format is `host:port` , allowing multiple `hosts` to be specified. Such as `"host1:8123,host2:8123"` .
+
+### database [string]
+
+The `ClickHouse` database
+
+### sql [string]
+
+The query SQL used to search data through the Clickhouse server
+
+### username [string]
+
+`ClickHouse` user username
+
+### password [string]
+
+`ClickHouse` user password
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
+
+## Examples
+
+```hocon
+source {
+  
+  Clickhouse {
+    host = "localhost:8123"
+    database = "default"
+    sql = "select * from test where age = 20 limit 100"
+    username = "default"
+    password = ""
+    result_table_name = "test"
+  }
+  
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/FakeSource.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/FakeSource.md
new file mode 100644
index 000000000..3c66ce679
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/FakeSource.md
@@ -0,0 +1,85 @@
+# FakeSource
+
+> FakeSource connector
+
+## Description
+
+The FakeSource is a virtual data source that randomly generates rows according to the data structure of the user-defined schema,
+just for testing purposes, such as type conversion and feature testing
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name              | type   | required | default value |
+|-------------------|--------|----------|---------------|
+| result_table_name | string | yes      | -             |
+| schema            | config | yes      | -             |
+
+### result_table_name [string]
+
+The table name.
+
+### schema [config]
+
+Table structure description. You should assign the schema option to tell the connector how to parse data to the row you want.  
+**Tips**: Most unstructured data sources contain this parameter, such as LocalFile and HdfsFile.  
+**Example**:
+```hocon
+schema = {
+      fields {
+        c_map = "map<string, string>"
+        c_array = "array<tinyint>"
+        c_string = string
+        c_boolean = boolean
+        c_tinyint = tinyint
+        c_smallint = smallint
+        c_int = int
+        c_bigint = bigint
+        c_float = float
+        c_double = double
+        c_decimal = "decimal(30, 8)"
+        c_null = "null"
+        c_bytes = bytes
+        c_date = date
+        c_time = time
+        c_timestamp = timestamp
+      }
+    }
+```
+
+## Example
+A simple FakeSource example that covers a rich set of data types
+```hocon
+source {
+  FakeSource {
+    schema = {
+      fields {
+        c_map = "map<string, string>"
+        c_array = "array<tinyint>"
+        c_string = string
+        c_boolean = boolean
+        c_tinyint = tinyint
+        c_smallint = smallint
+        c_int = int
+        c_bigint = bigint
+        c_float = float
+        c_double = double
+        c_decimal = "decimal(30, 8)"
+        c_null = "null"
+        c_bytes = bytes
+        c_date = date
+        c_time = time
+        c_timestamp = timestamp
+      }
+    }
+    result_table_name = "fake"
+  }
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/FtpFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/FtpFile.md
new file mode 100644
index 000000000..af22fcd8e
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/FtpFile.md
@@ -0,0 +1,117 @@
+# FtpFile
+
+> Ftp file source connector
+
+## Description
+
+Read data from ftp file server.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+    - [x] text
+    - [x] csv
+    - [x] json
+
+## Options
+
+| name     | type   | required | default value |
+|----------|--------|----------|---------------|
+| host     | string | yes      | -             |
+| port     | int    | yes      | -             |
+| user     | string | yes      | -             |
+| password | string | yes      | -             |
+| path     | string | yes      | -             |
+| type     | string | yes      | -             |
+| schema   | config | no       | -             |
+
+### host [string]
+
+The target FTP host. Required.
+
+### port [int]
+
+The target FTP port. Required.
+
+### user [string]
+
+The target FTP username. Required.
+
+### password [string]
+
+The target FTP password. Required.
+
+### path [string]
+
+The source file path.
+
+### type [string]
+
+File type. The following file types are supported:
+
+`text` `csv` `json`
+
+If you assign file type to `json`, you should also assign schema option to tell connector how to parse data to the row you want.
+
+For example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+If you assign file type to `text` or `csv`, the schema option is not supported for now, but it will be supported in the future.
+
+Now connector will treat the upstream data as the following:
+
+| lines                             |
+|-----------------------------------|
+| The content of every line in file |
+
+### schema [config]
+
+The schema information of upstream data.
+
+## Example
+
+```hocon
+
+  FtpFile {
+    path = "/tmp/seatunnel/sink/parquet"
+    host = "192.168.31.48"
+    port = 21
+    user = tyrantlucifer
+    password = tianchao
+    type = "text"
+  }
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Greenplum.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Greenplum.md
new file mode 100644
index 000000000..fad156c24
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Greenplum.md
@@ -0,0 +1,29 @@
+# Greenplum
+
+> Greenplum source connector
+
+## Description
+
+Read Greenplum data through [Jdbc connector](Jdbc.md).
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md) 
+
+Supports query SQL, which can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+:::tip
+
+Optional jdbc drivers:
+- `org.postgresql.Driver`
+- `com.pivotal.jdbc.GreenplumDriver`
+
+Warn: for license compliance, if you use `GreenplumDriver` you have to provide the Greenplum JDBC driver yourself, e.g. copy greenplum-xxx.jar to $SEATUNNEL_HOME/lib for Standalone.
+
+:::
\ No newline at end of file
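+
+## Example
+
+For reference, a hedged minimal sketch of reading from Greenplum through the Jdbc source with the PostgreSQL driver; the host, port, database, credentials, and table below are placeholders, not values taken from this document:
+
+```hocon
+source {
+  Jdbc {
+    # placeholders: point these at your Greenplum master host and database
+    url = "jdbc:postgresql://localhost:5432/mydb"
+    driver = "org.postgresql.Driver"
+    user = "gpadmin"
+    password = "******"
+    query = "select * from public.my_table"
+    result_table_name = "greenplum_source"
+  }
+}
+```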
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/HdfsFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/HdfsFile.md
new file mode 100644
index 000000000..5bd4e1e9a
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/HdfsFile.md
@@ -0,0 +1,127 @@
+# HdfsFile
+
+> Hdfs file source connector
+
+## Description
+
+Read data from hdfs file system.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+All the data in a split is read in a single pollNext call. The splits that have been read are saved in the snapshot.
+
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] csv
+  - [x] parquet
+  - [x] orc
+  - [x] json
+
+## Options
+
+| name          | type   | required | default value |
+|---------------|--------|----------|---------------|
+| path          | string | yes      | -             |
+| type          | string | yes      | -             |
+| fs.defaultFS  | string | yes      | -             |
+| schema        | config | no       | -             |
+
+### path [string]
+
+The source file path.
+
+### type [string]
+
+File type. The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+If you assign file type to `json`, you should also assign schema option to tell connector how to parse data to the row you want.
+
+For example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+You can also save multiple pieces of data in one file and split them by newline:
+
+```json lines
+
+{"code":  200, "data":  "get success", "success":  true}
+{"code":  300, "data":  "get failed", "success":  false}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+If you assign file type to `parquet` or `orc`, the schema option is not required; the connector can find the schema of the upstream data automatically.
+
+If you assign file type to `text` or `csv`, the schema option is not supported for now, but it will be supported in the future.
+
+Now connector will treat the upstream data as the following:
+
+| lines                             |
+|-----------------------------------|
+| The content of every line in file |
+
+### fs.defaultFS [string]
+
+Hdfs cluster address.
+
+## Example
+
+```hocon
+
+HdfsFile {
+  path = "/apps/hive/demo/student"
+  type = "parquet"
+  fs.defaultFS = "hdfs://namenode001"
+}
+
+```
+
+```hocon
+
+HdfsFile {
+  schema {
+    fields {
+      name = string
+      age = int
+    }
+  }
+  path = "/apps/hive/demo/student"
+  type = "json"
+  fs.defaultFS = "hdfs://namenode001"
+}
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Hive.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Hive.md
new file mode 100644
index 000000000..99372fbcb
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Hive.md
@@ -0,0 +1,55 @@
+# Hive
+
+> Hive source connector
+
+## Description
+
+Read data from Hive.
+
+In order to use this connector, you must ensure that your Spark/Flink cluster has already integrated Hive. The tested Hive version is 2.3.9.
+
+**Tips: Hive Sink Connector can not add partition field to the output data now**
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+All the data in a split is read in a single pollNext call. The splits that have been read are saved in the snapshot.
+
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] csv
+  - [x] parquet
+  - [x] orc
+  - [x] json
+
+## Options
+
+| name                  | type   | required | default value                                                 |
+|-----------------------| ------ | -------- | ------------------------------------------------------------- |
+| table_name            | string | yes      | -                                                             |
+| metastore_uri         | string | yes      | -                                                             |
+
+### table_name [string]
+
+Target Hive table name eg: db1.table1
+
+### metastore_uri [string]
+
+Hive metastore uri
+
+## Example
+
+```bash
+
+  Hive {
+    table_name = "default.seatunnel_orc"
+    metastore_uri = "thrift://namenode001:9083"
+  }
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Http.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Http.md
new file mode 100644
index 000000000..21ac01e4a
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Http.md
@@ -0,0 +1,144 @@
+# Http
+
+> Http source connector
+
+## Description
+
+Used to read data from Http.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name                               | type   | required | default value |
+|------------------------------------|--------|----------|---------------|
+| url                                | String | Yes      | -             |
+| schema                             | Config | No       | -             |
+| schema.fields                      | Config | No       | -             |
+| format                             | String | No       | json          |
+| method                             | String | No       | get           |
+| headers                            | Map    | No       | -             |
+| params                             | Map    | No       | -             |
+| body                               | String | No       | -             |
+| poll_interval_ms                   | int    | No       | -             |
+| retry                              | int    | No       | -             |
+| retry_backoff_multiplier_ms        | int    | No       | 100           |
+| retry_backoff_max_ms               | int    | No       | 10000         |
+
+### url [String]
+
+http request url
+
+### method [String]
+
+HTTP request method; only GET and POST are supported.
+
+### headers [Map]
+
+http headers
+
+### params [Map]
+
+http params
+
+### body [String]
+
+http body
+
+### poll_interval_ms [int]
+
+The interval (in milliseconds) between HTTP API requests in stream mode.
+
+### retry [int]
+
+The maximum retry times if the HTTP request throws an `IOException`.
+
+### retry_backoff_multiplier_ms [int]
+
+The retry backoff multiplier (in milliseconds) if the HTTP request failed.
+
+### retry_backoff_max_ms [int]
+
+The maximum retry backoff time (in milliseconds) if the HTTP request failed.
+
+### format [String]
+
+The format of the upstream data, now only supports `json` and `text`, default `json`.
+
+When you assign the format as `json`, you should also assign the schema option, for example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+When you assign the format as `text`, the connector will do nothing to the upstream data, for example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+connector will generate data as the following:
+
+| content |
+|---------|
+| {"code":  200, "data":  "get success", "success":  true}        |
+
+### schema [Config]
+
+#### fields [Config]
+
+the schema fields of upstream data
+
+## Example
+
+simple:
+
+```hocon
+Http {
+    url = "https://tyrantlucifer.com/api/getDemoData"
+    schema {
+      fields {
+        code = int
+        message = string
+        data = string
+        ok = boolean
+      }
+    }
+}
+```
+
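+For reference, a hedged sketch of a POST request polled periodically in stream mode; the header, body, and interval values are placeholders, and writing `headers` as a HOCON object is an assumption based on its Map type:
+
+```hocon
+Http {
+    url = "https://tyrantlucifer.com/api/getDemoData"
+    # POST with a JSON body; the endpoint and payload are placeholders
+    method = "post"
+    headers {
+      "Content-Type" = "application/json"
+    }
+    body = "{\"id\": 1}"
+    # poll the endpoint every 5 seconds in stream mode
+    poll_interval_ms = 5000
+    # keep the raw response as a single content column, so no schema is needed
+    format = "text"
+}
+```
+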
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Hudi.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Hudi.md
new file mode 100644
index 000000000..7eae78720
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Hudi.md
@@ -0,0 +1,73 @@
+# Hudi
+
+> Hudi source connector
+
+## Description
+
+Used to read data from Hudi. Currently, only Hudi COW tables and Snapshot Query in batch mode are supported.
+
+In order to use this connector, you must ensure that your Spark/Flink cluster has already integrated Hive. The tested Hive version is 2.3.9.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+
+Currently, only Hudi COW tables and Snapshot Query in batch mode are supported
+
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                     | type    | required | default value |
+|--------------------------|---------|----------|---------------|
+| table.path               | string  | yes      | -             |
+| table.type               | string  | yes      | -             |
+| conf.files               | string  | yes      | -             |
+| use.kerberos             | boolean | no       | false         |
+| kerberos.principal       | string  | no       | -             |
+| kerberos.principal.file  | string  | no       | -             |
+
+### table.path [string]
+
+`table.path` The HDFS root path of the Hudi table, such as 'hdfs://nameservice/data/hudi/hudi_table/'.
+
+### table.type [string]
+
+`table.type` The type of the Hudi table. Now only 'cow' is supported; 'mor' is not supported yet.
+
+### conf.files [string]
+
+`conf.files` The environment configuration file path list (local paths), which is used to initialize the HDFS client for reading the Hudi table files. An example is '/home/test/hdfs-site.xml;/home/test/core-site.xml;/home/test/yarn-site.xml'.
+
+### use.kerberos [boolean]
+
+`use.kerberos` Whether to enable Kerberos, default is false.
+
+### kerberos.principal [string]
+
+`kerberos.principal` When Kerberos is enabled, the Kerberos principal should be set, such as 'test_user@xxx'.
+
+### kerberos.principal.file [string]
+
+`kerberos.principal.file` When Kerberos is enabled, the Kerberos principal file (keytab) should be set, such as '/home/test/test_user.keytab'.
+
+## Examples
+
+```hocon
+source {
+
+  Hudi {
+    table.path = "hdfs://nameservice/data/hudi/hudi_table/"
+    table.type = "cow"
+    conf.files = "/home/test/hdfs-site.xml;/home/test/core-site.xml;/home/test/yarn-site.xml"
+    use.kerberos = true
+    kerberos.principal = "test_user@xxx"
+    kerberos.principal.file = "/home/test/test_user.keytab"
+  }
+
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Iceberg.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Iceberg.md
new file mode 100644
index 000000000..85458b0ea
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Iceberg.md
@@ -0,0 +1,157 @@
+# Apache Iceberg
+
+> Apache Iceberg source connector
+
+## Description
+
+Source connector for Apache Iceberg. It can support batch and stream mode.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+- [x] data format
+    - [x] parquet
+    - [x] orc
+    - [x] avro
+- [x] iceberg catalog
+    - [x] hadoop(2.7.5)
+    - [x] hive(2.3.9)
+
+##  Options
+
+| name                     | type    | required | default value        |
+|--------------------------|---------|----------|----------------------|
+| catalog_name             | string  | yes      | -                    |
+| catalog_type             | string  | yes      | -                    |
+| uri                      | string  | no       | -                    |
+| warehouse                | string  | yes      | -                    |
+| namespace                | string  | yes      | -                    |
+| table                    | string  | yes      | -                    |
+| case_sensitive           | boolean | no       | false                |
+| start_snapshot_timestamp | long    | no       | -                    |
+| start_snapshot_id        | long    | no       | -                    |
+| end_snapshot_id          | long    | no       | -                    |
+| use_snapshot_id          | long    | no       | -                    |
+| use_snapshot_timestamp   | long    | no       | -                    |
+| stream_scan_strategy     | enum    | no       | FROM_LATEST_SNAPSHOT |
+
+### catalog_name [string]
+
+User-specified catalog name.
+
+### catalog_type [string]
+
+The optional values are:
+- hive: The hive metastore catalog.
+- hadoop: The hadoop catalog.
+
+### uri [string]
+
+The Hive metastore’s thrift URI.
+
+### warehouse [string]
+
+The location to store metadata files and data files.
+
+### namespace [string]
+
+The iceberg database name in the backend catalog.
+
+### table [string]
+
+The iceberg table name in the backend catalog.
+
+### case_sensitive [boolean]
+
+If data columns were selected via fields (Collection), this controls whether the match to the schema will be done with case sensitivity.
+
+### fields [array]
+
+Use projection to select the data columns and the column order.
+
+### start_snapshot_id [long]
+
+Instructs this scan to look for changes starting from a particular snapshot (exclusive).
+
+### start_snapshot_timestamp [long]
+
+Instructs this scan to look for changes starting from the most recent snapshot for the table as of the given timestamp (in milliseconds since the Unix epoch).
+
+### end_snapshot_id [long]
+
+Instructs this scan to look for changes up to a particular snapshot (inclusive).
+
+### use_snapshot_id [long]
+
+Instructs this scan to use the given snapshot ID.
+
+### use_snapshot_timestamp [long]
+
+Instructs this scan to use the most recent snapshot as of the given timestamp (in milliseconds since the Unix epoch).
+
+### stream_scan_strategy [enum]
+
+Starting strategy for stream mode execution. Defaults to `FROM_LATEST_SNAPSHOT` if no value is specified.
+The optional values are:
+- TABLE_SCAN_THEN_INCREMENTAL: Do a regular table scan then switch to the incremental mode.
+- FROM_LATEST_SNAPSHOT: Start incremental mode from the latest snapshot inclusive.
+- FROM_EARLIEST_SNAPSHOT: Start incremental mode from the earliest snapshot inclusive.
+- FROM_SNAPSHOT_ID: Start incremental mode from a snapshot with a specific id inclusive.
+- FROM_SNAPSHOT_TIMESTAMP: Start incremental mode from a snapshot with a specific timestamp inclusive.
+
+## Example
+
+simple
+
+```hocon
+source {
+  Iceberg {
+    catalog_name = "seatunnel"
+    catalog_type = "hadoop"
+    warehouse = "hdfs://your_cluster//tmp/seatunnel/iceberg/"
+    namespace = "your_iceberg_database"
+    table = "your_iceberg_table"
+  }
+}
+```
+Or
+
+```hocon
+source {
+  Iceberg {
+    catalog_name = "seatunnel"
+    catalog_type = "hive"
+    uri = "thrift://localhost:9083"
+    warehouse = "hdfs://your_cluster//tmp/seatunnel/iceberg/"
+    namespace = "your_iceberg_database"
+    table = "your_iceberg_table"
+  }
+}
+```
+
+schema projection
+
+```hocon
+source {
+  Iceberg {
+    catalog_name = "seatunnel"
+    catalog_type = "hadoop"
+    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
+    namespace = "your_iceberg_database"
+    table = "your_iceberg_table"
+
+    fields {
+      f2 = "boolean"
+      f1 = "bigint"
+      f3 = "int"
+      f4 = "bigint"
+    }
+  }
+}
+```
\ No newline at end of file
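+
+stream scan strategy
+
+For reference, a hedged sketch of a streaming read that sets `stream_scan_strategy` explicitly; apart from that option, the values simply reuse the placeholders from the examples above:
+
+```hocon
+source {
+  Iceberg {
+    catalog_name = "seatunnel"
+    catalog_type = "hadoop"
+    warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
+    namespace = "your_iceberg_database"
+    table = "your_iceberg_table"
+
+    # start the incremental read from the earliest snapshot (see the option above)
+    stream_scan_strategy = "FROM_EARLIEST_SNAPSHOT"
+  }
+}
+```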
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/IoTDB.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/IoTDB.md
new file mode 100644
index 000000000..01a3487a3
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/IoTDB.md
@@ -0,0 +1,149 @@
+# IoTDB
+
+> IoTDB source connector
+
+## Description
+
+Read external data source data through IoTDB.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports query SQL, which can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                       | type    | required | default value |
+|----------------------------|---------|----------|---------------|
+| host                       | string  | yes      | -             |
+| port                       | int     | yes      | -             |
+| node_urls                  | string  | yes      | -             |
+| sql                        | string  | yes      | -             |
+| fields                     | config  | yes      | -             |
+| fetch_size                 | int     | no       | -             |
+| username                   | string  | no       | -             |
+| password                   | string  | no       | -             |
+| lower_bound                | long    | no       | -             |
+| upper_bound                | long    | no       | -             |
+| num_partitions             | int     | no       | -             |
+| thrift_default_buffer_size | int     | no       | -             |
+| enable_cache_leader        | boolean | no       | -             |
+| version                    | string  | no       | -             |
+
+### single node
+
+To connect to a single node, set `host` and `port`.
+
+**host** [string] the host of the IoTDB server
+
+**port** [int] the port of the IoTDB server
+
+### multi node
+
+To connect to multiple nodes, set `node_urls`.
+
+**node_urls** [string] the node URL list of the IoTDB cluster
+
+e.g.
+
+```
+127.0.0.1:8080,127.0.0.2:8080
+```
+
+### other parameters
+
+**sql** [string]
+
+The SQL statement to execute, e.g.
+
+```
+select name,age from test
+```
+
+### fields [config]
+
+The fields to read from IoTDB.
+
+The field type is the SeaTunnel field type `org.apache.seatunnel.api.table.type.SqlType`
+
+e.g.
+
+```
+fields {
+    name = STRING
+    age = INT
+}
+```
+
+### option parameters
+
+### fetch_size [int]
+
+The fetch size used when querying IoTDB.
+
+### username [string]
+
+The IoTDB username.
+
+### password [string]
+
+The IoTDB password.
+
+### lower_bound [long]
+
+The lower bound of the time column, used for partition splitting (see below).
+
+### upper_bound [long]
+
+The upper bound of the time column, used for partition splitting (see below).
+
+### num_partitions [int]
+
+The number of partitions the query is split into (see below).
+
+### thrift_default_buffer_size [int]
+
+The thrift default buffer size of the IoTDB client.
+
+### enable_cache_leader [boolean]
+
+Whether to enable cache leader in the IoTDB client.
+
+### version [string]
+
+Version represents the SQL semantic version used by the client, which is used to be compatible with the SQL semantics of
+0.12 when upgrading 0.13. The possible values are: V_0_12, V_0_13.
+
+### split partitions
+
+We can split the query into partitions, and the time column is used for splitting.
+
+#### num_partitions [int]
+
+The number of splits.
+
+#### upper_bound [long]
+
+The upper bound of the time column.
+
+#### lower_bound [long]
+
+The lower bound of the time column.
+
+```
+     split the time range into numPartitions parts
+     if numPartitions is 1, use the whole time range
+     if numPartitions < (upper_bound - lower_bound), use (upper_bound - lower_bound) partitions
+     
+     eg: lower_bound = 1, upper_bound = 10, numPartitions = 2
+     sql = "select * from test where age > 0 and age < 10"
+     
+     split result
+
+     split 1: select * from test  where (time >= 1 and time < 6)  and (  age > 0 and age < 10 )
+     
+     split 2: select * from test  where (time >= 6 and time < 11) and (  age > 0 and age < 10 )
+
+```
+
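+## Example
+
+For reference, a hedged end-to-end sketch that combines the options above; the plugin block name `IoTDB`, the address, the credentials, and the query are placeholders (port 6667 is only the common IoTDB default, not something this document specifies):
+
+```hocon
+source {
+  IoTDB {
+    # single-node connection; use node_urls instead for a cluster
+    host = "127.0.0.1"
+    port = 6667
+    username = "root"
+    password = "root"
+    sql = "select name, age from test"
+    fields {
+      name = STRING
+      age = INT
+    }
+    result_table_name = "iotdb_source"
+  }
+}
+```
+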
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Jdbc.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Jdbc.md
new file mode 100644
index 000000000..784d2d264
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Jdbc.md
@@ -0,0 +1,102 @@
+# JDBC
+
+> JDBC source connector
+
+## Description
+
+Read external data source data through JDBC.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports query SQL, which can achieve the projection effect.
+
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [x] [support user-defined split](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name                         | type   | required | default value |
+|------------------------------|--------|----------|---------------|
+| url                          | String | Yes      | -             |
+| driver                       | String | Yes      | -             |
+| user                         | String | No       | -             |
+| password                     | String | No       | -             |
+| query                        | String | Yes      | -             |
+| connection_check_timeout_sec | Int    | No       | 30            |
+| partition_column             | String | No       | -             |
+| partition_upper_bound        | Long   | No       | -             |
+| partition_lower_bound        | Long   | No       | -             |
+
+### driver [string]
+The JDBC class name used to connect to the remote data source; if you use MySQL, the value is `com.mysql.cj.jdbc.Driver`.
+Warn: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy mysql-connector-java-xxx.jar to $SEATUNNEL_HOME/lib for Standalone.
+
+### user [string]
+The database user name.
+
+### password [string]
+The database password.
+
+### url [string]
+The URL of the JDBC connection, for example: `jdbc:postgresql://localhost/test`
+
+### query [string]
+Query statement
+
+### connection_check_timeout_sec [int]
+
+The time in seconds to wait for the database operation used to validate the connection to complete.
+
+### partition_column [string]
+The column name used for parallel partitioning; only numeric types are supported.
+
+### partition_upper_bound [long]
+The maximum value of partition_column for the scan; if not set, SeaTunnel will query the database to get the maximum value.
+
+### partition_lower_bound [long]
+The minimum value of partition_column for the scan; if not set, SeaTunnel will query the database to get the minimum value.
+
+## Tips
+If `partition_column` is not set, the query runs with a single concurrency; if `partition_column` is set, it is executed in parallel according to the task concurrency.
+
+
+## Appendix
+There are some reference values for the parameters above.
+
+| datasource | driver                   | url                                       | maven                                                         |
+|------------|--------------------------|-------------------------------------------|---------------------------------------------------------------|
+| mysql      | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test          | https://mvnrepository.com/artifact/mysql/mysql-connector-java |
+| postgresql | org.postgresql.Driver    | jdbc:postgresql://localhost:5432/postgres | https://mvnrepository.com/artifact/org.postgresql/postgresql  |
+| dm         | dm.jdbc.driver.DmDriver  | jdbc:dm://localhost:5236                  | https://mvnrepository.com/artifact/com.dameng/DmJdbcDriver18  |
+
+## Example
+simple:
+```
+    Jdbc {
+        url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8"
+        driver = "com.mysql.cj.jdbc.Driver"
+        connection_check_timeout_sec = 100
+        user = "root"
+        password = "123456"
+        query = "select * from type_bin"
+    }
+```
+parallel:
+```
+    Jdbc {
+        url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8"
+        driver = "com.mysql.cj.jdbc.Driver"
+        connection_check_timeout_sec = 100
+        user = "root"
+        password = "123456"
+        query = "select * from type_bin"
+        partition_column= "id"
+    }
+```
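+
+parallel with explicit boundaries, for reference; a hedged sketch in which the bound values 1 and 1000 are placeholders:
+```
+    Jdbc {
+        url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8"
+        driver = "com.mysql.cj.jdbc.Driver"
+        connection_check_timeout_sec = 100
+        user = "root"
+        password = "123456"
+        query = "select * from type_bin"
+        partition_column = "id"
+        # placeholder bounds; without them SeaTunnel queries the min/max of the column
+        partition_lower_bound = 1
+        partition_upper_bound = 1000
+    }
+```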
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Kudu.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Kudu.md
new file mode 100644
index 000000000..22ff42623
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Kudu.md
@@ -0,0 +1,52 @@
+# Kudu
+
+> Kudu source connector
+
+## Description
+
+Used to read data from Kudu.
+
+ The tested kudu version is 1.11.1.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                     | type    | required | default value |
+|--------------------------|---------|----------|---------------|
+| kudu_master             | string  | yes      | -             |
+| kudu_table               | string  | yes      | -             |
+| columnsList               | string  | yes      | -             |
+
+### kudu_master [string]
+
+`kudu_master` The address of kudu master,such as '192.168.88.110:7051'.
+
+### kudu_table [string]
+
+`kudu_table` The name of the kudu table.
+
+### columnsList [string]
+
+`columnsList` Specifies the column names of the table.
+
+## Examples
+
+```hocon
+source {
+   KuduSource {
+      result_table_name = "studentlyh2"
+      kudu_master = "192.168.88.110:7051"
+      kudu_table = "studentlyh2"
+      columnsList = "id,name,age,sex"
+    }
+
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/LocalFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/LocalFile.md
new file mode 100644
index 000000000..4f3c0e6c5
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/LocalFile.md
@@ -0,0 +1,124 @@
+# LocalFile
+
+> Local file source connector
+
+## Description
+
+Read data from local file system.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+All the data in a split is read in a single pollNext call. The splits that have been read are saved in the snapshot.
+
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] csv
+  - [x] parquet
+  - [x] orc
+  - [x] json
+
+## Options
+
+| name   | type   | required | default value |
+|--------|--------|----------|---------------|
+| path   | string | yes      | -             |
+| type   | string | yes      | -             |
+| schema | config | no       | -             |
+
+### path [string]
+
+The source file path.
+
+### type [string]
+
+File type. The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+If you assign file type to `json`, you should also assign schema option to tell connector how to parse data to the row you want.
+
+For example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+You can also save multiple pieces of data in one file and split them by newline:
+
+```json lines
+
+{"code":  200, "data":  "get success", "success":  true}
+{"code":  300, "data":  "get failed", "success":  false}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+If you assign file type to `parquet` or `orc`, the schema option is not required; the connector can find the schema of the upstream data automatically.
+
+If you assign file type to `text` or `csv`, the schema option is not supported for now, but it will be supported in the future.
+
+Now connector will treat the upstream data as the following:
+
+| lines                             |
+|-----------------------------------|
+| The content of every line in file |
+
+### schema [config]
+
+The schema information of upstream data.
+
+## Example
+
+```hocon
+
+LocalFile {
+  path = "/apps/hive/demo/student"
+  type = "parquet"
+}
+
+```
+
+```hocon
+
+LocalFile {
+  schema {
+    fields {
+      name = string
+      age = int
+    }
+  }
+  path = "/apps/hive/demo/student"
+  type = "json"
+}
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/MongoDB.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/MongoDB.md
new file mode 100644
index 000000000..e587f919a
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/MongoDB.md
@@ -0,0 +1,76 @@
+# MongoDB
+
+> MongoDB source connector
+
+## Description
+
+Read data from MongoDB.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name           | type   | required | default value |
+|----------------|--------|----------|---------------|
+| uri            | string | yes      | -             |
+| database       | string | yes      | -             |
+| collection     | string | yes      | -             |
+| schema         | object | yes      | -             |
+| common-options | string | yes      | -             |
+
+### uri [string]
+
+MongoDB uri
+
+### database [string]
+
+MongoDB database
+
+### collection [string]
+
+MongoDB collection
+
+### schema [object]
+
+Because `MongoDB` does not have the concept of `schema`, when the engine reads `MongoDB`, it will sample the `MongoDB` data and infer the `schema`. In practice, this process can be slow and may be inaccurate. This parameter can be specified manually to avoid these problems.
+
+such as:
+
+```
+schema {
+  fields {
+    id = int
+    key_aa = string
+    key_bb = string
+  }
+}
+```
+
+### common options [string]
+
+Source Plugin common parameters, refer to [Source Plugin](common-options.md) for details
+
+## Example
+
+```bash
+mongodb {
+    uri = "mongodb://username:password@127.0.0.1:27017/mypost?retryWrites=true&writeConcern=majority"
+    database = "mydatabase"
+    collection = "mycollection"
+    schema {
+      fields {
+        id = int
+        key_aa = string
+        key_bb = string
+      }
+    }
+    result_table_name = "mongodb_result_table"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/OssFile.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/OssFile.md
new file mode 100644
index 000000000..8bf87aadd
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/OssFile.md
@@ -0,0 +1,155 @@
+# OssFile
+
+> Oss file source connector
+
+## Description
+
+Read data from aliyun oss file system.
+
+> Tips: We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OSS and this connector needs some Hadoop dependencies.
+> It only supports Hadoop version **2.9.X+**.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+
+All the data in a split is read in a single pollNext call. The splits that have been read are saved in the snapshot.
+
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+- [x] file format
+  - [x] text
+  - [x] csv
+  - [x] parquet
+  - [x] orc
+  - [x] json
+
+## Options
+
+| name          | type   | required | default value |
+|---------------|--------|----------|---------------|
+| path          | string | yes      | -             |
+| type          | string | yes      | -             |
+| bucket        | string | yes      | -             |
+| access_key    | string | yes      | -             |
+| access_secret | string | yes      | -             |
+| endpoint      | string | yes      | -             |
+| schema        | config | no       | -             |
+
+### path [string]
+
+The source file path.
+
+### type [string]
+
+File type. The following file types are supported:
+
+`text` `csv` `parquet` `orc` `json`
+
+If you assign file type to `json`, you should also assign schema option to tell connector how to parse data to the row you want.
+
+For example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+You can also save multiple pieces of data in one file and split them by newline:
+
+```json lines
+
+{"code":  200, "data":  "get success", "success":  true}
+{"code":  300, "data":  "get failed", "success":  false}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+If you assign file type to `parquet` or `orc`, the schema option is not required; the connector can find the schema of the upstream data automatically.
+
+If you assign file type to `text` or `csv`, the schema option is not supported for now, but it will be supported in the future.
+
+Now connector will treat the upstream data as the following:
+
+| lines                             |
+|-----------------------------------|
+| The content of every line in file |
+
+### bucket [string]
+
+The bucket address of oss file system, for example: `oss://tyrantlucifer-image-bed`
+
+### access_key [string]
+
+The access key of oss file system.
+
+### access_secret [string]
+
+The access secret of oss file system.
+
+### endpoint [string]
+
+The endpoint of oss file system.
+
+### schema [config]
+
+The schema of upstream data.
+
+## Example
+
+```hocon
+
+  OssFile {
+    path = "/seatunnel/orc"
+    bucket = "oss://tyrantlucifer-image-bed"
+    access_key = "xxxxxxxxxxxxxxxxx"
+    access_secret = "xxxxxxxxxxxxxxxxxxxxxx"
+    endpoint = "oss-cn-beijing.aliyuncs.com"
+    type = "orc"
+  }
+
+```
+
+```hocon
+
+  OssFile {
+    path = "/seatunnel/json"
+    bucket = "oss://tyrantlucifer-image-bed"
+    access_key = "xxxxxxxxxxxxxxxxx"
+    access_secret = "xxxxxxxxxxxxxxxxxxxxxx"
+    endpoint = "oss-cn-beijing.aliyuncs.com"
+    type = "json"
+    schema {
+      fields {
+        id = int 
+        name = string
+      }
+    }
+  }
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Phoenix.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Phoenix.md
new file mode 100644
index 000000000..a82196ea3
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Phoenix.md
@@ -0,0 +1,51 @@
+# Phoenix
+
+> Phoenix source connector
+
+## Description
+Read Phoenix data through [Jdbc connector](Jdbc.md).
+Batch mode and streaming mode are supported. The tested Phoenix versions are 4.x and 5.x.
+On the underlying implementation, SQL statements are executed through the Phoenix JDBC driver to read data from HBase.
+There are two ways to connect to Phoenix with Java JDBC: one is to connect to ZooKeeper through JDBC, and the other is to connect to the query server through the thin JDBC client.
+
+> Tips: By default, the (thin) driver jar is used. If you want to use the (thick) driver or another version of the Phoenix (thin) driver, you need to recompile the jdbc connector module.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+
+Supports query SQL, which can achieve the projection effect.
+
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+### driver [string]
+If you use the Phoenix (thick) driver, the value is `org.apache.phoenix.jdbc.PhoenixDriver`; if you use the (thin) driver, the value is `org.apache.phoenix.queryserver.client.Driver`
+
+### url [string]
+If you use the Phoenix (thick) driver, the value is `jdbc:phoenix:localhost:2182/hbase`; if you use the (thin) driver, the value is `jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF`
+
+## Example
+use the thick client driver
+```
+    Jdbc {
+        driver = org.apache.phoenix.jdbc.PhoenixDriver
+        url = "jdbc:phoenix:localhost:2182/hbase"
+        query = "select age, name from test.source"
+    }
+
+```
+
+use the thin client driver
+```
+    Jdbc {
+        driver = org.apache.phoenix.queryserver.client.Driver
+        url = "jdbc:phoenix:thin:url=http://spark_e2e_phoenix_sink:8765;serialization=PROTOBUF"
+        query = "select age, name from test.source"
+    }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Redis.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Redis.md
new file mode 100644
index 000000000..dfb1b4340
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Redis.md
@@ -0,0 +1,158 @@
+# Redis
+
+> Redis source connector
+
+## Description
+
+Used to read data from Redis.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [ ] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name      | type   | required | default value |
+|-----------|--------|----------|---------------|
+| host      | string | yes      | -             |
+| port      | int    | yes      | -             |
+| keys      | string | yes      | -             |
+| data_type | string | yes      | -             |
+| auth      | string | No       | -             |
+| schema    | config | No       | -             |
+| format    | string | No       | json          |
+
+### host [string]
+
+redis host
+
+### port [int]
+
+redis port
+
+### keys [string]
+
+keys pattern
+
+**Tips: The Redis source connector supports fuzzy key matching; the user needs to ensure that all matched keys are of the same type**
+
+### data_type [string]
+
+redis data types, support `key` `hash` `list` `set` `zset`
+
+- key
+> The value of each key will be sent downstream as a single row of data.
+> For example, the value of key is `SeaTunnel test message`, the data received downstream is `SeaTunnel test message` and only one message will be received.
+
+
+- hash
+> The hash key-value pairs will be formatted as json to be sent downstream as a single row of data.
+> For example, the value of hash is `name:tyrantlucifer age:26`, the data received downstream is `{"name":"tyrantlucifer", "age":"26"}` and only one message will be received.
+
+- list
+> Each element in the list will be sent downstream as a single row of data.
+> For example, the value of the list is `[tyrantlucifer, CalvinKirs]`, the data received downstream are `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.
+
+- set
+> Each element in the set will be sent downstream as a single row of data.
+> For example, the value of the set is `[tyrantlucifer, CalvinKirs]`, the data received downstream are `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.
+
+- zset
+> Each element in the sorted set will be sent downstream as a single row of data.
+> For example, the value of the sorted set is `[tyrantlucifer, CalvinKirs]`, the data received downstream are `tyrantlucifer` and `CalvinKirs`, and only two messages will be received.
+
+### auth [String]
+
+redis authentication password, you need it when you connect to an encrypted cluster
+
+### format [String]
+
+The format of the upstream data, now only supports `json` and `text`, default `json`.
+
+When you assign the format as `json`, you should also assign the schema option, for example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+you should assign schema as the following:
+
+```hocon
+
+schema {
+    fields {
+        code = int
+        data = string
+        success = boolean
+    }
+}
+
+```
+
+connector will generate data as the following:
+
+| code | data        | success |
+|------|-------------|---------|
+| 200  | get success | true    |
+
+When you assign the format as `text`, the connector will do nothing to the upstream data, for example:
+
+upstream data is the following:
+
+```json
+
+{"code":  200, "data":  "get success", "success":  true}
+
+```
+
+connector will generate data as the following:
+
+| content |
+|---------|
+| {"code":  200, "data":  "get success", "success":  true}        |
+
+### schema [Config]
+
+#### fields [Config]
+
+the schema fields of upstream data
+
+## Example
+
+simple:
+
+```hocon
+  Redis {
+    host = localhost
+    port = 6379
+    keys = "key_test*"
+    data_type = key
+    format = text
+  }
+```
+
+```hocon
+  Redis {
+    host = localhost
+    port = 6379
+    keys = "key_test*"
+    data_type = key
+    format = json
+    schema {
+      fields {
+        name = string
+        age = int
+      }
+    }
+  }
+```
+
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/Socket.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/Socket.md
new file mode 100644
index 000000000..84a2b487e
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/Socket.md
@@ -0,0 +1,94 @@
+# Socket
+
+> Socket source connector
+
+## Description
+
+Used to read data from Socket.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [ ] [exactly-once](../../concept/connector-v2-features.md)
+- [ ] [schema projection](../../concept/connector-v2-features.md)
+- [ ] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+##  Options
+
+| name | type   | required | default value |
+| --- |--------| --- | --- |
+| host | String | No | localhost |
+| port | Integer | No | 9999 |
+
+### host [string]
+socket server host
+
+### port [integer]
+
+socket server port
+
+## Example
+
+simple:
+
+```hocon
+Socket {
+        host = "localhost"
+        port = 9999
+    }
+```
+
+test:
+
+* Configuring the SeaTunnel config file
+
+```hocon
+env {
+  execution.parallelism = 1
+  job.mode = "STREAMING"
+}
+
+source {
+    Socket {
+        host = "localhost"
+        port = 9999
+    }
+}
+
+transform {
+}
+
+sink {
+  Console {}
+}
+
+```
+
+* Start a port listening
+
+```shell
+nc -l 9999
+```
+
+* Start a SeaTunnel task
+
+* Socket Source send test data
+
+```text
+~ nc -l 9999
+test
+hello
+flink
+spark
+```
+
+* Console Sink print data
+
+```text
+[test]
+[hello]
+[flink]
+[spark]
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/common-options.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/common-options.md
new file mode 100644
index 000000000..529732743
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/common-options.md
@@ -0,0 +1,33 @@
+# Common Options
+
+> Common parameters of source connectors
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| result_table_name | string | no       | -             |
+| field_name        | string | no       | -             |
+
+### result_table_name [string]
+
+When `result_table_name` is not specified, the data processed by this plugin will not be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` ;
+
+When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The data set `(dataStream/dataset)` registered here can be directly accessed by other plugins by specifying `source_table_name` .
+
+### field_name [string]
+
+When the data is obtained from the upper-level plug-in, you can specify the name of the obtained field, which is convenient for use in subsequent sql plugins.
+
+## Example
+
+```bash
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+```
+
+> The result of the data source `FakeSourceStream` will be registered as a temporary table named `fake` . This temporary table can be used by any `Transform` or `Sink` plugin by specifying `source_table_name` .
+>
+> `field_name` names the two columns of the temporary table `name` and `age` respectively.
diff --git a/versioned_docs/version-2.2.0-beta/connector-v2/source/pulsar.md b/versioned_docs/version-2.2.0-beta/connector-v2/source/pulsar.md
new file mode 100644
index 000000000..572ecc2e0
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector-v2/source/pulsar.md
@@ -0,0 +1,137 @@
+# Apache Pulsar
+
+> Apache Pulsar source connector
+
+## Description
+
+Source connector for Apache Pulsar.
+
+## Key features
+
+- [x] [batch](../../concept/connector-v2-features.md)
+- [x] [stream](../../concept/connector-v2-features.md)
+- [x] [exactly-once](../../concept/connector-v2-features.md)
+- [x] [schema projection](../../concept/connector-v2-features.md)
+- [x] [parallelism](../../concept/connector-v2-features.md)
+- [ ] [support user-defined split](../../concept/connector-v2-features.md)
+
+## Options
+
+| name                     | type    | required | default value |
+|--------------------------|---------|----------|---------------|
+| topic                    | String  | No       | -             |
+| topic-pattern            | String  | No       | -             |
+| topic-discovery.interval | Long    | No       | -1            |
+| subscription.name        | String  | Yes      | -             |
+| client.service-url       | String  | Yes      | -             |
+| admin.service-url        | String  | Yes      | -             |
+| auth.plugin-class        | String  | No       | -             |
+| auth.params              | String  | No       | -             |
+| poll.timeout             | Integer | No       | 100           |
+| poll.interval            | Long    | No       | 50            |
+| poll.batch.size          | Integer | No       | 500           |
+| cursor.startup.mode      | Enum    | No       | LATEST        |
+| cursor.startup.timestamp | Long    | No       | -             |
+| cursor.reset.mode        | Enum    | No       | LATEST        |
+| cursor.stop.mode         | Enum    | No       | NEVER         |
+| cursor.stop.timestamp    | Long    | No       | -             |
+
+### topic [String]
+
+Topic name(s) to read data from when the table is used as a source. It also supports a topic list, with topics separated by semicolons, such as 'topic-1;topic-2'.
+
+**Note, only one of "topic-pattern" and "topic" can be specified for sources.**
+
+### topic-pattern [String]
+
+The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running.
+
+**Note, only one of "topic-pattern" and "topic" can be specified for sources.**
+
+### topic-discovery.interval [Long]
+
+The interval (in ms) for the Pulsar source to discover the new topic partitions. A non-positive value disables the topic partition discovery.
+
+**Note, This option only works if the 'topic-pattern' option is used.**
+
+### subscription.name [String]
+
+Specify the subscription name for this consumer. This argument is required when constructing the consumer.
+
+### client.service-url [String]
+
+Service URL provider for Pulsar service.
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+You can assign Pulsar protocol URLs to specific clusters and use the Pulsar scheme.
+
+For example, for a broker running on `localhost`: `pulsar://localhost:6650`, or for multiple hosts: `pulsar://localhost:6650,localhost:6651`.
+
+### admin.service-url [String]
+
+The Pulsar service HTTP URL for the admin endpoint.
+
+For example, `http://my-broker.example.com:8080`, or `https://my-broker.example.com:8443` for TLS.
+
+### auth.plugin-class [String]
+
+Name of the authentication plugin.
+
+### auth.params [String]
+
+Parameters for the authentication plugin.
+
+For example, `key1:val1,key2:val2`
+
+### poll.timeout [Integer]
+
+The maximum time (in ms) to wait when fetching records. A longer time increases throughput but also latency.
+
+### poll.interval [Long]
+
+The interval (in ms) between fetches. A shorter interval increases throughput but also increases CPU load.
+
+### poll.batch.size [Integer]
+
+The maximum number of records to fetch in a single poll. A larger batch size increases throughput but also latency.
+
+### cursor.startup.mode [Enum]
+
+Startup mode for Pulsar consumer, valid values are `'EARLIEST'`, `'LATEST'`, `'SUBSCRIPTION'`, `'TIMESTAMP'`.
+
+### cursor.startup.timestamp [Long]
+
+Start from the specified epoch timestamp (in milliseconds).
+
+**Note, This option is required when the "cursor.startup.mode" option is set to `'TIMESTAMP'`.**
+
+### cursor.reset.mode [Enum]
+
+Cursor reset strategy for the Pulsar consumer, valid values are `'EARLIEST'` and `'LATEST'`.
+
+**Note, This option only works if the "cursor.startup.mode" option used `'SUBSCRIPTION'`.**
+
+### cursor.stop.mode [Enum]
+
+Stop mode for the Pulsar consumer, valid values are `'NEVER'`, `'LATEST'` and `'TIMESTAMP'`.
+
+**Note, When `'NEVER'` is specified, it is a real-time (streaming) job; the other modes are offline (batch) jobs.**
+
+### cursor.stop.timestamp [Long]
+
+Stop at the specified epoch timestamp (in milliseconds).
+
+**Note, This option is required when the "cursor.stop.mode" option is set to `'TIMESTAMP'`.**
+
+## Example
+
+```hocon
+source {
+  Pulsar {
+    topic = "example"
+    subscription.name = "seatunnel"
+    client.service-url = "pulsar://localhost:6650"
+    admin.service-url = "http://my-broker.example.com:8080"
+    result_table_name = "test"
+  }
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/flink-sql/ElasticSearch.md b/versioned_docs/version-2.2.0-beta/connector/flink-sql/ElasticSearch.md
new file mode 100644
index 000000000..317c638ad
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/flink-sql/ElasticSearch.md
@@ -0,0 +1,50 @@
+# Flink SQL ElasticSearch Connector
+
+> ElasticSearch connector based on Flink SQL
+
+## Description
+With the elasticsearch connector, you can use Flink SQL to write data into ElasticSearch.
+
+
+## Usage
+Let us have a brief example to show how to use the connector.
+
+### 1. prepare elasticsearch environment
+Please refer to the [Elastic Doc](https://www.elastic.co/guide/index.html) to prepare the Elasticsearch environment.
+
+### 2. prepare seatunnel configuration
+ElasticSearch provides different connectors for different versions:
+* version 6.x: flink-sql-connector-elasticsearch6
+* version 7.x: flink-sql-connector-elasticsearch7
+
+Here is a simple example of seatunnel configuration.
+```sql
+SET table.dml-sync = true;
+
+CREATE TABLE events (
+    id INT,
+    name STRING
+) WITH (
+    'connector' = 'datagen'
+);
+
+CREATE TABLE es_sink (
+    id INT,
+    name STRING
+) WITH (
+    'connector' = 'elasticsearch-7', -- or 'elasticsearch-6'
+    'hosts' = 'http://localhost:9200',
+    'index' = 'users'
+);
+
+INSERT INTO es_sink SELECT * FROM events;
+```
+
+### 3. start Flink SQL job
+Execute the following command in seatunnel home path to start the Flink SQL job.
+```bash
+$ bin/start-seatunnel-sql.sh -c config/elasticsearch.sql.conf
+```
+
+### 4. verify result
+Verify result from elasticsearch.
diff --git a/versioned_docs/version-2.2.0-beta/connector/flink-sql/Jdbc.md b/versioned_docs/version-2.2.0-beta/connector/flink-sql/Jdbc.md
new file mode 100644
index 000000000..53486d288
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/flink-sql/Jdbc.md
@@ -0,0 +1,67 @@
+# Flink SQL JDBC Connector
+
+> JDBC connector based on Flink SQL
+
+## Description
+
+We can use the Flink SQL JDBC Connector to connect to a JDBC database. Refer to the [Flink SQL JDBC Connector](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/jdbc/index.html) for more information.
+
+
+## Usage
+
+### 1. download driver
+A driver dependency is also required to connect to a specified database. Here are the drivers currently supported:
+
+| Driver     | Group Id	         | Artifact Id	        | JAR           |
+|------------|-------------------|----------------------|---------------|
+| MySQL	     | mysql	         | mysql-connector-java | [Download](https://repo.maven.apache.org/maven2/mysql/mysql-connector-java/) |
+| PostgreSQL | org.postgresql	 | postgresql	        | [Download](https://jdbc.postgresql.org/download/) |
+| Derby	     | org.apache.derby	 | derby	            | [Download](http://db.apache.org/derby/derby_downloads.html) |
+
+After downloading the driver jars, you need to place the jars into $FLINK_HOME/lib/.
+
+### 2. prepare data
+Start a MySQL server locally, then create a database named "test" and a table named "test_table" in it.
+
+The table "test_table" could be created by the following SQL:
+```sql
+CREATE TABLE IF NOT EXISTS `test_table`(
+   `id` INT UNSIGNED AUTO_INCREMENT,
+   `name` VARCHAR(100) NOT NULL,
+   PRIMARY KEY ( `id` )
+)ENGINE=InnoDB DEFAULT CHARSET=utf8;
+```
+
+Insert some data into the table "test_table".
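+
+For example, a few sample rows can be inserted as follows (the values are placeholders for illustration):
+
+```sql
+INSERT INTO test_table (name) VALUES ('Alice'), ('Bob'), ('Carol');
+```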
+
+### 3. seatunnel config 
+Prepare a seatunnel config file with the following content:
+```sql
+SET table.dml-sync = true;
+
+CREATE TABLE test (
+  id BIGINT,
+  name STRING
+) WITH (
+  'connector' = 'jdbc',
+  'url' = 'jdbc:mysql://localhost:3306/test',
+  'table-name' = 'test_table',
+  'username' = '<replace with your username>',
+  'password' = '<replace with your password>'
+);
+
+CREATE TABLE print_table (
+  id BIGINT,
+  name STRING
+) WITH (
+  'connector' = 'print',
+  'sink.parallelism' = '1'
+);
+
+INSERT INTO print_table SELECT * FROM test;
+```
+
+### 4. run job
+```bash
+./bin/start-seatunnel-sql.sh --config <path/to/your/config>
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/flink-sql/Kafka.md b/versioned_docs/version-2.2.0-beta/connector/flink-sql/Kafka.md
new file mode 100644
index 000000000..acdd1b055
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/flink-sql/Kafka.md
@@ -0,0 +1,76 @@
+# Flink SQL Kafka Connector
+
+> Kafka connector based on Flink SQL
+
+## Description
+
+With the kafka connector, we can read data from and write data to Kafka using Flink SQL. Refer to the [Kafka connector](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/kafka/) for more details.
+
+
+## Usage
+Let us have a brief example to show how to use the connector from end to end.
+
+### 1. prepare kafka environment
+Please refer to the [Kafka QuickStart](https://kafka.apache.org/quickstart) to prepare the Kafka environment and produce data as follows:
+
+```bash
+$ bin/kafka-console-producer.sh --topic <topic-name> --bootstrap-server localhost:9092
+```
+
+After executing the command, we enter interactive mode. Type the following messages to send data to Kafka.
+```bash
+>{"id":1,"name":"abc"}
+>{"id":2,"name":"def"}
+>{"id":3,"name":"dfs"}
+>{"id":4,"name":"eret"}
+>{"id":5,"name":"yui"}
+```
+
+### 2. prepare seatunnel configuration
+Here is a simple example of seatunnel configuration.
+```sql
+SET table.dml-sync = true;
+
+CREATE TABLE events (
+    id INT,
+    name STRING
+) WITH (
+    'connector' = 'kafka',
+    'topic'='<topic-name>',
+    'properties.bootstrap.servers' = 'localhost:9092',
+    'properties.group.id' = 'testGroup',
+    'scan.startup.mode' = 'earliest-offset',
+    'format' = 'json'
+);
+
+CREATE TABLE print_table (
+    id INT,
+    name STRING
+) WITH (
+    'connector' = 'print',
+    'sink.parallelism' = '1'
+);
+
+INSERT INTO print_table SELECT * FROM events;
+```
+
+### 3. start flink local cluster
+```bash
+$ ${FLINK_HOME}/bin/start-cluster.sh
+```
+
+### 4. start Flink SQL job
+Execute the following command in seatunnel home path to start the Flink SQL job.
+```bash
+$ bin/start-seatunnel-sql.sh -c config/kafka.sql.conf
+```
+
+### 5. verify result
+After the job is submitted, we can see the data printed by the 'print' connector in the taskmanager's log.
+```text
++I[1, abc]
++I[2, def]
++I[3, dfs]
++I[4, eret]
++I[5, yui]
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/flink-sql/usage.md b/versioned_docs/version-2.2.0-beta/connector/flink-sql/usage.md
new file mode 100644
index 000000000..1495b43fe
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/flink-sql/usage.md
@@ -0,0 +1,277 @@
+# How to use flink sql module
+
+> Tutorial of flink sql module
+
+## Usage
+
+### 1. Command Entrypoint
+
+```bash
+bin/start-seatunnel-sql.sh
+```
+
+### 2. seatunnel config
+
+Rename the file `flink.sql.conf.template` in the `config/` directory to `flink.sql.conf`:
+
+```bash
+mv flink.sql.conf.template flink.sql.conf
+```
+
+Prepare a seatunnel config file with the following content:
+
+```sql
+SET table.dml-sync = true;
+
+CREATE TABLE events (
+  f_type INT,
+  f_uid INT,
+  ts AS localtimestamp,
+  WATERMARK FOR ts AS ts
+) WITH (
+  'connector' = 'datagen',
+  'rows-per-second'='5',
+  'fields.f_type.min'='1',
+  'fields.f_type.max'='5',
+  'fields.f_uid.min'='1',
+  'fields.f_uid.max'='1000'
+);
+
+CREATE TABLE print_table (
+  type INT,
+  uid INT,
+  lstmt TIMESTAMP
+) WITH (
+  'connector' = 'print',
+  'sink.parallelism' = '1'
+);
+
+INSERT INTO print_table SELECT * FROM events where f_type = 1;
+```
+
+### 3. run job
+
+#### Standalone Cluster
+
+```bash
+bin/start-seatunnel-sql.sh --config config/flink.sql.conf
+
+# -p 2 specifies that the parallelism of flink job is 2. You can also specify more parameters, use flink run -h to view
+bin/start-seatunnel-flink.sh \
+-p 2 \
+--config config/flink.sql.conf
+```
+
+#### Yarn Cluster
+
+```bash
+bin/start-seatunnel-sql.sh -m yarn-cluster --config config/flink.sql.conf
+
+bin/start-seatunnel-sql.sh -t yarn-per-job --config config/flink.sql.conf
+
+# -p 2 specifies that the parallelism of flink job is 2. You can also specify more parameters, use flink run -h to view
+bin/start-seatunnel-flink.sh \
+-p 2 \
+-m yarn-cluster \
+--config config/flink.sql.conf
+```
+
+#### Other Options
+
+* `-p 2` specifies that the job parallelism is `2`
+
+```bash
+bin/start-seatunnel-sql.sh -p 2 --config config/flink.sql.conf
+```
+
+## Example
+
+1. How to implement flink sql interval join with seatunnel flink-sql module
+
+intervaljoin.sql.conf
+
+```sql
+CREATE TABLE basic (
+  `id` BIGINT,
+  `name` STRING,
+   `ts`  STRING
+) WITH (
+  'connector' = 'kafka',
+  'topic' = 'basic',
+  'properties.bootstrap.servers' = 'XX.XX.XX.XX:9092',
+  'properties.group.id' = 'testGroup',
+  'scan.startup.mode' = 'latest-offset',
+  'format' = 'json'
+);
+
+CREATE TABLE infos (
+  `id` BIGINT,
+  `age` BIGINT,
+   `ts`  STRING
+) WITH (
+  'connector' = 'kafka',
+  'topic' = 'info',
+  'properties.bootstrap.servers' = 'XX.XX.XX.XX:9092',
+  'properties.group.id' = 'testGroup',
+  'scan.startup.mode' = 'latest-offset',
+  'format' = 'json'
+);
+
+CREATE TABLE stream2_join_result (
+  id BIGINT , 
+  name STRING,
+  age BIGINT,
+  ts1 STRING , 
+  ts2 STRING,
+  PRIMARY KEY(id) NOT ENFORCED
+) WITH (
+  'connector' = 'jdbc',
+  'url' = 'jdbc:mysql://XX.XX.XX.XX:3306/testDB',
+  'username' = 'root',
+  'password' = 'taia@2021',
+  'table-name' = 'stream2_join_result'
+);
+
+insert into  stream2_join_result select basic.id, basic.name, infos.age,basic.ts,infos.ts 
+from basic join infos on (basic.id = infos.id) where  TO_TIMESTAMP(basic.ts,'yyyy-MM-dd HH:mm:ss') 
+BETWEEN   TO_TIMESTAMP(infos.ts,'yyyy-MM-dd HH:mm:ss')  - INTERVAL '10' SECOND AND  TO_TIMESTAMP(infos.ts,'yyyy-MM-dd HH:mm:ss') + INTERVAL '10' SECOND;
+```
+
+```bash
+bin/start-seatunnel-sql.sh -m yarn-cluster --config config/intervaljoin.sql.conf
+```
+
+2. How to implement flink sql dim join (using mysql) with seatunnel flink-sql module
+
+dimjoin.sql.conf
+
+```sql
+CREATE TABLE code_set_street (
+  area_code STRING,
+  area_name STRING,
+  town_code STRING ,
+  town_name STRING ,
+  PRIMARY KEY(town_code) NOT ENFORCED
+) WITH (
+  'connector' = 'jdbc',
+  'url' = 'jdbc:mysql://XX.XX.XX.XX:3306/testDB',
+  'username' = 'root',
+  'password' = '2021',
+  'table-name' = 'code_set_street',
+  'lookup.cache.max-rows' = '5000' ,
+  'lookup.cache.ttl' = '5min'
+);
+
+CREATE TABLE people (
+  `id` STRING,
+  `name` STRING,
+  `ts`  TimeStamp(3) ,
+  proctime AS PROCTIME() 
+) WITH (
+  'connector' = 'kafka',
+  'topic' = 'people',
+  'properties.bootstrap.servers' = 'XX.XX.XX.XX:9092',
+  'properties.group.id' = 'testGroup',
+  'scan.startup.mode' = 'latest-offset',
+  'format' = 'json'
+);
+
+CREATE TABLE mysql_dim_join_result (
+  id STRING , 
+  name STRING,
+  area_name STRING,
+  town_code STRING , 
+  town_name STRING,
+  ts TimeStamp ,
+  PRIMARY KEY(id,town_code) NOT ENFORCED
+) WITH (
+  'connector' = 'jdbc',
+  'url' = 'jdbc:mysql://XX.XX.XX.XX:3306/testDB',
+  'username' = 'root',
+  'password' = '2021',
+  'table-name' = 'mysql_dim_join_result'
+);
+
+insert into mysql_dim_join_result
+select people.id , people.name ,code_set_street.area_name ,code_set_street.town_code, code_set_street.town_name , people.ts  
+from people inner join code_set_street FOR SYSTEM_TIME AS OF  people.proctime  
+on (people.id = code_set_street.town_code);
+```
+
+```bash
+bin/start-seatunnel-sql.sh -m yarn-cluster --config config/dimjoin.sql.conf
+```
+
+3. How to implement flink SQL cdc dim join (using mysql-cdc) with seatunnel flink-sql module
+
+##### First, create the mysql table in the mysql database
+
+```sql
+CREATE TABLE `dim_cdc_join_result` (
+    `id` varchar(255) NOT NULL,
+    `name` varchar(255) DEFAULT NULL,
+    `area_name` varchar(255) NOT NULL,
+    `town_code` varchar(255) NOT NULL,
+    `town_name` varchar(255) DEFAULT NULL,
+    `ts` varchar(255) DEFAULT NULL,
+    PRIMARY KEY (`id`,`town_code`,`ts`) USING BTREE
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;
+```
+
+cdcjoin.sql.conf
+
+```sql
+CREATE TABLE code_set_street_cdc (
+  area_code STRING,
+  area_name STRING,
+  town_code STRING ,
+  town_name STRING ,
+  PRIMARY KEY(town_code) NOT ENFORCED
+) WITH (
+  'connector' = 'mysql-cdc',
+  'hostname' = 'XX.XX.XX.XX',
+  'port' = '3306',
+  'username' = 'root',
+  'password' = '2021',
+  'database-name' = 'flink',
+  'table-name' = 'code_set_street'
+);
+     
+CREATE TABLE people (
+  `id` STRING,
+  `name` STRING,
+  `ts`  STRING
+) WITH (
+  'connector' = 'kafka',
+  'topic' = 'people',
+  'properties.bootstrap.servers' = 'XX.XX.XX.XX:9092',
+  'properties.group.id' = 'testGroup',
+  'scan.startup.mode' = 'latest-offset',
+  'format' = 'json'
+);
+
+-- create mysql sink table in flink
+CREATE TABLE dim_cdc_join_result (
+  id STRING , 
+  name STRING,
+  area_name STRING,
+  town_code STRING , 
+  town_name STRING,
+  ts STRING ,
+  PRIMARY KEY(id,town_code) NOT ENFORCED
+) WITH (
+  'connector' = 'jdbc',
+  'url' = 'jdbc:mysql://XX.XX.XX.XX:3306/flink',
+  'username' = 'root',
+  'password' = '2021',
+  'table-name' = 'dim_cdc_join_result'
+);
+ 
+insert into dim_cdc_join_result
+select a.id , a.name ,b.area_name ,b.town_code, b.town_name , a.ts  
+from people a inner join code_set_street_cdc b  on (a.id = b.town_code);
+```
+
+```bash
+bin/start-seatunnel-sql.sh -m yarn-cluster --config config/cdcjoin.sql.conf
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Assert.md b/versioned_docs/version-2.2.0-beta/connector/sink/Assert.md
new file mode 100644
index 000000000..74316f925
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Assert.md
@@ -0,0 +1,106 @@
+# Assert
+
+> Assert sink connector
+
+## Description
+
+A sink plugin which validates data against user-defined rules and detects illegal data
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Assert
+* [x] Flink: Assert
+
+:::
+
+## Options
+
+| name                          | type        | required | default value |
+| ----------------------------- | ----------  | -------- | ------------- |
+|rules                          | ConfigList  | yes      | -             |
+|rules.field_name               | string      | yes      | -             |
+|rules.field_type               | string      | no       | -             |
+|rules.field_value              | ConfigList  | no       | -             |
+|rules.field_value.rule_type    | string      | no       | -             |
+|rules.field_value.rule_value   | double      | no       | -             |
+
+
+### rules [ConfigList]
+
+Rule definitions for validating the data. Each rule represents the validation of one field.
+
+### field_name [string]
+
+field name (string)
+
+### field_type [string]
+
+field type (string),  e.g. `string,boolean,byte,short,int,long,float,double,char,void,BigInteger,BigDecimal,Instant`
+
+### field_value [ConfigList]
+
+A list of value rules that define the data value validation
+
+### rule_type [string]
+
+The following rules are supported for now
+`
+NOT_NULL,   // value can't be null
+MIN,        // define the minimum value of data
+MAX,        // define the maximum value of data
+MIN_LENGTH, // define the minimum string length of a string data
+MAX_LENGTH  // define the maximum string length of a string data
+`
+
+### rule_value [double]
+
+the value related to rule type
+
+
+## Example
+The whole config obeys the `hocon` style
+
+```hocon
+
+Assert {
+   rules = 
+        [{
+            field_name = name
+            field_type = string
+            field_value = [
+                {
+                    rule_type = NOT_NULL
+                },
+                {
+                    rule_type = MIN_LENGTH
+                    rule_value = 3
+                },
+                {
+                     rule_type = MAX_LENGTH
+                     rule_value = 5
+                }
+            ]
+        },{
+            field_name = age
+            field_type = int
+            field_value = [
+                {
+                    rule_type = NOT_NULL
+                },
+                {
+                    rule_type = MIN
+                    rule_value = 10
+                },
+                {
+                     rule_type = MAX
+                     rule_value = 20
+                }
+            ]
+        }
+        ]
+    
+}
+
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Clickhouse.md b/versioned_docs/version-2.2.0-beta/connector/sink/Clickhouse.md
new file mode 100644
index 000000000..ab926060e
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Clickhouse.md
@@ -0,0 +1,148 @@
+# Clickhouse
+
+> Clickhouse sink connector
+
+## Description
+
+Use [Clickhouse-jdbc](https://github.com/ClickHouse/clickhouse-jdbc) to map the data source fields by name and write them into ClickHouse. The corresponding data table needs to be created in advance before use.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Clickhouse
+* [x] Flink: Clickhouse
+
+:::
+
+
+## Options
+
+| name           | type    | required | default value |
+|----------------|---------| -------- |---------------|
+| bulk_size      | number  | no       | 20000         |
+| clickhouse.*   | string  | no       |               |
+| database       | string  | yes      | -             |
+| fields         | array   | no       | -             |
+| host           | string  | yes      | -             |
+| password       | string  | no       | -             |
+| retry          | number  | no       | 1             |
+| retry_codes    | array   | no       | [ ]           |
+| table          | string  | yes      | -             |
+| username       | string  | no       | -             |
+| split_mode     | boolean | no       | false         |
+| sharding_key   | string  | no       | -             |
+| common-options | string  | no       | -             |
+
+### bulk_size [number]
+
+The number of rows written through [Clickhouse-jdbc](https://github.com/ClickHouse/clickhouse-jdbc) each time, the `default is 20000` .
+
+### database [string]
+
+database name
+
+### fields [array]
+
+The data field that needs to be output to `ClickHouse` , if not configured, it will be automatically adapted according to the data `schema` .
+
+### host [string]
+
+`ClickHouse` cluster address, the format is `host:port` , allowing multiple `hosts` to be specified. Such as `"host1:8123,host2:8123"` .
+
+### password [string]
+
+`ClickHouse user password` . This field is only required when the permission is enabled in `ClickHouse` .
+
+### retry [number]
+
+The number of retries, the default is 1
+
+### retry_codes [array]
+
+When an exception occurs, the ClickHouse exception error code of the operation will be retried. For a detailed list of error codes, please refer to [ClickHouseErrorCode](https://github.com/ClickHouse/clickhouse-jdbc/blob/master/clickhouse-jdbc/src/main/java/ru/yandex/clickhouse/except/ClickHouseErrorCode.java)
+
+If multiple retries fail, this batch of data will be discarded. Use with caution!
+
+### table [string]
+
+table name
+
+### username [string]
+
+`ClickHouse` user username, this field is only required when permission is enabled in `ClickHouse`
+
+### clickhouse [string]
+
+In addition to the above mandatory parameters that must be specified by `clickhouse-jdbc` , users can also specify multiple optional parameters, which cover all the [parameters](https://github.com/ClickHouse/clickhouse-jdbc/blob/master/clickhouse-jdbc/src/main/java/ru/yandex/clickhouse/settings/ClickHouseProperties.java) provided by `clickhouse-jdbc` .
+
+The way to specify the parameter is to add the prefix `clickhouse.` to the original parameter name. For example, the way to specify `socket_timeout` is: `clickhouse.socket_timeout = 50000` . If these non-essential parameters are not specified, they will use the default values given by `clickhouse-jdbc`.
+
+### split_mode [boolean]
+
+This mode only supports a clickhouse table whose engine is 'Distributed', and the `internal_replication` option
+should be `true`. SeaTunnel will split the distributed table data and write directly to each shard. The shard weight
+defined in clickhouse will be taken into account.
+
+### sharding_key [string]
+
+When split_mode is enabled, the node to send data to is chosen randomly by default. The
+'sharding_key' parameter can be used to specify the field for the sharding algorithm. This option only
+works when 'split_mode' is true.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [common options](common-options.md) for details
+
+## ClickHouse type comparison table
+
+| ClickHouse field type | Convert plugin conversion goal type | SQL conversion expression     | Description                                           |
+| --------------------- | ----------------------------------- | ----------------------------- | ----------------------------------------------------- |
+| Date                  | string                              | string()                      | `yyyy-MM-dd` Format string                            |
+| DateTime              | string                              | string()                      | `yyyy-MM-dd HH:mm:ss` Format string                   |
+| String                | string                              | string()                      |                                                       |
+| Int8                  | integer                             | int()                         |                                                       |
+| Uint8                 | integer                             | int()                         |                                                       |
+| Int16                 | integer                             | int()                         |                                                       |
+| Uint16                | integer                             | int()                         |                                                       |
+| Int32                 | integer                             | int()                         |                                                       |
+| Uint32                | long                                | bigint()                      |                                                       |
+| Int64                 | long                                | bigint()                      |                                                       |
+| Uint64                | long                                | bigint()                      |                                                       |
+| Float32               | float                               | float()                       |                                                       |
+| Float64               | double                              | double()                      |                                                       |
+| Decimal(P, S)         | -                                   | CAST(source AS DECIMAL(P, S)) | Decimal32(S), Decimal64(S), Decimal128(S) Can be used |
+| Array(T)              | -                                   | -                             |                                                       |
+| Nullable(T)           | Depends on T                        | Depends on T                  |                                                       |
+| LowCardinality(T)     | Depends on T                        | Depends on T                  |                                                       |
+
+## Examples
+
+```bash
+clickhouse {
+    host = "localhost:8123"
+    clickhouse.socket_timeout = 50000
+    database = "nginx"
+    table = "access_msg"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    bulk_size = 20000
+}
+```
+
+```bash
+ClickHouse {
+    host = "localhost:8123"
+    database = "nginx"
+    table = "access_msg"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    bulk_size = 20000
+    retry_codes = [209, 210]
+    retry = 3
+}
+```
+
+> In case of network timeout or network abnormality, retry writing 3 times
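+
+If the target table uses the `Distributed` engine, a sketch of a split-mode configuration might look like the following (the host, database and field names are placeholders):
+
+```bash
+clickhouse {
+    host = "localhost:8123"
+    database = "nginx"
+    table = "access_msg_all"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    split_mode = true
+    sharding_key = "hostname"
+    bulk_size = 20000
+}
+```
+
+> With `split_mode` enabled, rows are routed to shards by `hostname`; remember that the distributed table's `internal_replication` option should be `true`.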
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/ClickhouseFile.md b/versioned_docs/version-2.2.0-beta/connector/sink/ClickhouseFile.md
new file mode 100644
index 000000000..6080846ee
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/ClickhouseFile.md
@@ -0,0 +1,164 @@
+# ClickhouseFile
+
+> Clickhouse file sink connector
+
+## Description
+
+Generate the clickhouse data file with the clickhouse-local program, and then send it to the clickhouse
+server, also known as bulk load.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: ClickhouseFile
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                   | type     | required | default value |
+|------------------------|----------|----------|---------------|
+| database               | string   | yes      | -             |
+| fields                 | array    | no       | -             |
+| host                   | string   | yes      | -             |
+| password               | string   | no       | -             |
+| table                  | string   | yes      | -             |
+| username               | string   | no       | -             |
+| sharding_key           | string   | no       | -             |
+| clickhouse_local_path  | string   | yes      | -             |
+| tmp_batch_cache_line   | int      | no       | 100000        |
+| copy_method            | string   | no       | scp           |
+| node_free_password     | boolean  | no       | false         |
+| node_pass              | list     | no       | -             |
+| node_pass.node_address | string   | no       | -             |
+| node_pass.password     | string   | no       | -             |
+| common-options         | string   | no       | -             |
+
+### database [string]
+
+database name
+
+### fields [array]
+
+The data field that needs to be output to `ClickHouse` , if not configured, it will be automatically adapted according to the data `schema` .
+
+### host [string]
+
+`ClickHouse` cluster address, the format is `host:port` , allowing multiple `hosts` to be specified. Such as `"host1:8123,host2:8123"` .
+
+### password [string]
+
+`ClickHouse user password` . This field is only required when the permission is enabled in `ClickHouse` .
+
+### table [string]
+
+table name
+
+### username [string]
+
+`ClickHouse` user username, this field is only required when permission is enabled in `ClickHouse`
+
+### sharding_key [string]
+
+When split_mode is enabled, the node to send data to is chosen randomly by default. The
+'sharding_key' parameter can be used to specify the field for the sharding algorithm. This option only
+works when 'split_mode' is true.
+
+### clickhouse_local_path [string]
+
+The path of the clickhouse-local program on the spark node. Since it needs to be invoked by each task,
+clickhouse-local should be located at the same path on each spark node.
+
+### tmp_batch_cache_line [int]
+
+SeaTunnel uses memory-mapped files to cache the data that needs to be written to clickhouse. This
+parameter configures the number of rows written to the cache file at a time. Most of the time you
+don't need to modify it.
+
+### copy_method [string]
+
+Specifies the method used to transfer files; the default is scp, and rsync is also supported.
+
+### node_free_password [boolean]
+
+Because seatunnel needs to use scp or rsync for file transfer, it needs access to the clickhouse server.
+If each spark node and the clickhouse server are configured with password-free login,
+you can set this option to true; otherwise you need to configure the corresponding node passwords in the node_pass configuration.
+
+### node_pass [list]
+
+Used to save the addresses and corresponding passwords of all clickhouse servers
+
+### node_pass.node_address [string]
+
+The address corresponding to the clickhouse server
+
+### node_pass.password [string]
+
+The password corresponding to the clickhouse server. Currently only the root user is supported.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [common options](common-options.md) for details
+
+## ClickHouse type comparison table
+
+| ClickHouse field type | Convert plugin conversion goal type | SQL conversion expression     | Description                                           |
+| --------------------- | ----------------------------------- | ----------------------------- |-------------------------------------------------------|
+| Date                  | string                              | string()                      | `yyyy-MM-dd` Format string                            |
+| DateTime              | string                              | string()                      | `yyyy-MM-dd HH:mm:ss` Format string                   |
+| String                | string                              | string()                      |                                                       |
+| Int8                  | integer                             | int()                         |                                                       |
+| Uint8                 | integer                             | int()                         |                                                       |
+| Int16                 | integer                             | int()                         |                                                       |
+| Uint16                | integer                             | int()                         |                                                       |
+| Int32                 | integer                             | int()                         |                                                       |
+| Uint32                | long                                | bigint()                      |                                                       |
+| Int64                 | long                                | bigint()                      |                                                       |
+| Uint64                | long                                | bigint()                      |                                                       |
+| Float32               | float                               | float()                       |                                                       |
+| Float64               | double                              | double()                      |                                                       |
+| Decimal(P, S)         | -                                   | CAST(source AS DECIMAL(P, S)) | Decimal32(S), Decimal64(S), Decimal128(S) Can be used |
+| Array(T)              | -                                   | -                             |                                                       |
+| Nullable(T)           | Depends on T                        | Depends on T                  |                                                       |
+| LowCardinality(T)     | Depends on T                        | Depends on T                  |                                                       |
+
+## Examples
+
+```bash
+ClickhouseFile {
+    host = "localhost:8123"
+    database = "nginx"
+    table = "access_msg"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    clickhouse_local_path = "/usr/bin/clickhouse-local"
+    node_free_password = true
+}
+```
+
+```bash
+ClickhouseFile {
+    host = "localhost:8123"
+    database = "nginx"
+    table = "access_msg"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    sharding_key = "age"
+    clickhouse_local_path = "/usr/bin/Clickhouse local"
+    node_pass = [
+      {
+        node_address = "localhost1"
+        password = "password"
+      }
+      {
+        node_address = "localhost2"
+        password = "password"
+      }
+    ]
+}
+```
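+
+A variant that transfers the generated files with rsync instead of scp could look like the sketch below (paths and credentials are placeholders):
+
+```bash
+ClickhouseFile {
+    host = "localhost:8123"
+    database = "nginx"
+    table = "access_msg"
+    fields = ["date", "datetime", "hostname", "http_code", "data_size", "ua", "request_time"]
+    username = "username"
+    password = "password"
+    clickhouse_local_path = "/usr/bin/clickhouse-local"
+    copy_method = "rsync"
+    tmp_batch_cache_line = 100000
+    node_free_password = true
+}
+```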
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Console.mdx b/versioned_docs/version-2.2.0-beta/connector/sink/Console.mdx
new file mode 100644
index 000000000..d20b15341
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Console.mdx
@@ -0,0 +1,103 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Console
+
+> Console sink connector
+
+## Description
+
+Output data to the standard terminal or the Flink TaskManager log. This sink is often used for debugging and makes it easy to observe the data.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Console
+* [x] Flink: Console
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| limit          | number | no       | 100           |
+| serializer     | string | no       | plain         |
+| common-options | string | no       | -             |
+
+### limit [number]
+
+Limit the number of `rows` to be output. The legal range is `[-1, 2147483647]`; `-1` means that up to `2147483647` rows are output.
+
+### serializer [string]
+
+The format of serialization when outputting. Available serializers include: `json` , `plain`
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+<TabItem value="flink">
+
+## Options
+
+| name           | type   | required | default value |
+|----------------|--------| -------- |---------------|
+| limit          | int    | no       | INT_MAX       |
+| common-options | string | no       | -             |
+
+### limit [int]
+
+limit console result lines
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+</Tabs>
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+console {
+    limit = 10,
+    serializer = "json"
+}
+```
+
+> Output 10 rows of data in Json format
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+ConsoleSink{}
+```
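+
+A sketch with an explicit row limit on the Flink engine (the value here is arbitrary):
+
+```bash
+ConsoleSink {
+    limit = 10
+}
+```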
+
+## Note
+
+Flink's console output is in flink's WebUI
+
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Doris.mdx b/versioned_docs/version-2.2.0-beta/connector/sink/Doris.mdx
new file mode 100644
index 000000000..ebccc9b8e
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Doris.mdx
@@ -0,0 +1,176 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Doris
+
+> Doris sink connector
+
+### Description:
+
+Write Data to a Doris Table.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Doris
+* [x] Flink: DorisSink
+
+:::
+
+### Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| fenodes | string | yes | - |
+| database | string | yes | - |
+| table	 | string | yes | - |
+| user	 | string | yes | - |
+| password	 | string | yes | - |
+| batch_size	 | int | yes | 100 |
+| doris.*	 | string | no | - |
+
+##### fenodes [string]
+
+Doris FE address and HTTP port, for example `127.0.0.1:8030`
+
+##### database [string]
+
+Doris target database name
+
+##### table [string]
+
+Doris target table name
+
+##### user [string]
+
+Doris user name
+
+##### password [string]
+
+Doris user's password
+
+##### batch_size [int]
+
+Doris number of rows submitted per batch
+
+Default value: 5000
+
+##### doris.* [string]
+
+Doris stream_load properties. You can use the 'doris.' prefix + the stream_load property name.
+[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
+
+</TabItem>
+<TabItem value="flink">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| fenodes | string | yes | - |
+| database | string | yes | - |
+| table | string | yes | - |
+| user	 | string | yes | - |
+| password	 | string | yes | - |
+| batch_size	 | int | no |  100 |
+| interval	 | int | no |1000 |
+| max_retries	 | int | no | 1 |
+| doris.*	 | - | no | - |
+| parallelism | int | no  | - |
+
+##### fenodes [string]
+
+Doris FE http address
+
+##### database [string]
+
+Doris database name
+
+##### table [string]
+
+Doris table name
+
+##### user [string]
+
+Doris username
+
+##### password [string]
+
+Doris password
+
+##### batch_size [int]
+
+Maximum number of rows in a single write to Doris, the default value is 5000.
+
+##### interval [int]
+
+The flush interval in milliseconds, after which the asynchronous thread will write the data in the cache to Doris. Set to 0 to turn off periodic writing.
+
+Default value: 5000
+
+##### max_retries [int]
+
+Number of retries after a failed write to Doris
+
+##### doris.* [string]
+
+The doris stream_load parameters. You can use the 'doris.' prefix + the stream_load property name, e.g. 'doris.column_separator' = ','.
+[More Doris stream_load Configurations](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
+
+##### parallelism [int]
+
+The parallelism of an individual operator, for DorisSink
+
+</TabItem>
+</Tabs>
+
+### Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```conf
+Doris {
+    fenodes="0.0.0.0:8030"
+    database="test"
+    table="user"
+    user="doris"
+    password="doris"
+    batch_size=10000
+    doris.column_separator="\t"
+    doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```conf
+DorisSink {
+    fenodes = "127.0.0.1:8030"
+    database = database
+    table = table
+    user = root
+    password = password
+    batch_size = 1
+    doris.column_separator="\t"
+    doris.columns="id,user_name,user_name_cn,create_time,last_login_time"
+}
+```
+
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Druid.md b/versioned_docs/version-2.2.0-beta/connector/sink/Druid.md
new file mode 100644
index 000000000..363695f38
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Druid.md
@@ -0,0 +1,106 @@
+# Druid
+
+> Druid sink connector
+
+## Description
+
+Write data to Apache Druid.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: Druid
+
+:::
+
+## Options
+
+| name                    | type     | required | default value |
+| ----------------------- | -------- | -------- | ------------- |
+| coordinator_url         | `String` | yes      | -             |
+| datasource              | `String` | yes      | -             |
+| timestamp_column        | `String` | no       | timestamp     |
+| timestamp_format        | `String` | no       | auto          |
+| timestamp_missing_value | `String` | no       | -             |
+| parallelism             | `Int`    | no       | -             |
+
+### coordinator_url [`String`]
+
+The URL of Coordinator service in Apache Druid.
+
+### datasource [`String`]
+
+The DataSource name in Apache Druid.
+
+### timestamp_column [`String`]
+
+The timestamp column name in Apache Druid, the default value is `timestamp`.
+
+### timestamp_format [`String`]
+
+The timestamp format in Apache Druid, the default value is `auto`, it could be:
+
+- `iso`
+  - ISO8601 with 'T' separator, like "2000-01-01T01:02:03.456"
+
+- `posix`
+  - seconds since epoch
+
+- `millis`
+  - milliseconds since epoch
+
+- `micro`
+  - microseconds since epoch
+
+- `nano`
+  - nanoseconds since epoch
+
+- `auto`
+  - automatically detects ISO (either 'T' or space separator) or millis format
+
+- any [Joda DateTimeFormat](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) string
+
+### timestamp_missing_value [`String`]
+
+The timestamp missing value in Apache Druid, which is used for input records that have a null or missing timestamp. The value of `timestamp_missing_value` should be in ISO 8601 format, for example `"2022-02-02T02:02:02.222"`.
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for DruidSink
+
+## Example
+
+### Simple
+
+```hocon
+DruidSink {
+  coordinator_url = "http://localhost:8081/"
+  datasource = "wikipedia"
+}
+```
+
+### Specified timestamp column and format
+
+```hocon
+DruidSink {
+  coordinator_url = "http://localhost:8081/"
+  datasource = "wikipedia"
+  timestamp_column = "timestamp"
+  timestamp_format = "auto"
+}
+```
+
+### Specified timestamp column, format and missing value
+
+```hocon
+DruidSink {
+  coordinator_url = "http://localhost:8081/"
+  datasource = "wikipedia"
+  timestamp_column = "timestamp"
+  timestamp_format = "auto"
+  timestamp_missing_value = "2022-02-02T02:02:02.222"
+}
+```
+
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Elasticsearch.mdx b/versioned_docs/version-2.2.0-beta/connector/sink/Elasticsearch.mdx
new file mode 100644
index 000000000..73a7669a4
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Elasticsearch.mdx
@@ -0,0 +1,120 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Elasticsearch
+
+> Elasticsearch sink connector
+
+## Description
+
+Output data to `Elasticsearch`.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Elasticsearch (supported `ElasticSearch version >= 2.x and < 7.0.0`)
+* [x] Flink: Elasticsearch (supported `ElasticSearch version = 7.x`; if you want to use Elasticsearch 6.x,
+please repackage from the source code by executing `mvn clean package -Delasticsearch=6`)
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| hosts             | array  | yes      | -             |
+| index_type        | string | no       | -             |
+| index_time_format | string | no       | yyyy.MM.dd    |
+| index             | string | no       | seatunnel     |
+| es.*              | string | no       |               |
+| common-options    | string | no       | -             |
+
+</TabItem>
+<TabItem value="flink">
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| hosts             | array  | yes      | -             |
+| index_type        | string | no       | log           |
+| index_time_format | string | no       | yyyy.MM.dd    |
+| index             | string | no       | seatunnel     |
+| common-options    | string | no       | -             |
+| parallelism       | int    | no       | -             |
+
+</TabItem>
+</Tabs>
+
+### hosts [array]
+
+`Elasticsearch` cluster address, the format is `host:port` , allowing multiple hosts to be specified. Such as `["host1:9200", "host2:9200"]` .
+
+### index_type [string]
+
+`Elasticsearch` index type, it is recommended not to specify it in Elasticsearch 7 and above
+
+### index_time_format [string]
+
+When the format in the `index` parameter is `xxxx-${now}` , `index_time_format` can specify the time format of the `index` name, and the default value is `yyyy.MM.dd` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### index [string]
+
+Elasticsearch `index` name. If you need to generate an `index` based on time, you can specify a time variable, such as `seatunnel-${now}` . `now` represents the current data processing time.
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+### es.* [string]
+
+Users can also specify multiple optional parameters. For a detailed list of parameters, see [Parameters Supported by Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html#cfg-mapping).
+
+For example, the way to specify `es.batch.size.entries` is: `es.batch.size.entries = 100000` . If these non-essential parameters are not specified, they will use the default values given in the official documentation.
+
+</TabItem>
+<TabItem value="flink">
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, data source, or data sink
+
+</TabItem>
+</Tabs>
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+## Examples
+
+```bash
+elasticsearch {
+    hosts = ["localhost:9200"]
+    index = "seatunnel"
+}
+```
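+
+A sketch of a time-based index, combining `index` with the `${now}` variable and `index_time_format` (the host and index prefix are placeholders):
+
+```bash
+elasticsearch {
+    hosts = ["localhost:9200"]
+    index = "seatunnel-${now}"
+    index_time_format = "yyyy.MM.dd"
+}
+```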
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Email.md b/versioned_docs/version-2.2.0-beta/connector/sink/Email.md
new file mode 100644
index 000000000..406aea308
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Email.md
@@ -0,0 +1,103 @@
+# Email
+
+> Email sink connector
+
+## Description
+
+Supports data output through email attachments. The attachments are in `xlsx` format, which can be opened with Excel, and can be used to deliver task statistics results via email.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Email
+* [ ] Flink
+
+:::
+
+## Options
+
+| name     | type    | required | default value |
+|----------|---------|----------|---------------|
+| subject  | string  | yes      | -             |
+| from     | string  | yes      | -             |
+| to       | string  | yes      | -             |
+| bodyText | string  | no       | -             |
+| bodyHtml | string  | no       | -             |
+| cc       | string  | no       | -             |
+| bcc      | string  | no       | -             |
+| host     | string  | yes      | -             |
+| port     | string  | yes      | -             |
+| password | string  | yes      | -             |
+| limit    | string  | no       | 100000        |
+| use_ssl  | boolean | no       | false         |
+| use_tls  | boolean | no       | false         |
+
+### subject [string]
+
+Email Subject
+
+### from [string]
+
+Email sender
+
+### to [string]
+
+Email recipients, multiple recipients separated by `,`
+
+### bodyText [string]
+
+Email content, text format
+
+### bodyHtml [string]
+
+Email content, hypertext content
+
+### cc [string]
+
+Email CC, multiple CCs separated by `,`
+
+### bcc [string]
+
+Email Bcc, multiple Bccs separated by `,`
+
+### host [string]
+
+Email server address, for example: `smtp.exmail.qq.com`
+
+### port [string]
+
+Email server port, for example: `25`
+
+### password [string]
+
+The password of the email sender, the user name is the sender specified by `from`
+
+### limit [string]
+
+The number of rows to include, the default is `100000`
+
+### use_ssl [boolean]
+
+Whether to use SSL for the encrypted connection to the SMTP server, the default is `false`
+
+### use_tls [boolean]
+
+Whether to use TLS for the encrypted connection to the SMTP server, the default is `false`
+
+## Examples
+
+```bash
+Email {
+    subject = "Report statistics",
+    from = "xxxx@qq.com",
+    to = "xxxxx1@qq.com,xxxxx2@qq.com",
+    cc = "xxxxx3@qq.com,xxxxx4@qq.com",
+    bcc = "xxxxx5@qq.com,xxxxx6@qq.com",
+    host= "stmp.exmail.qq.com",
+    port= "25",
+    password = "***********",
+    limit = "1000",
+    use_ssl = true
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/File.mdx b/versioned_docs/version-2.2.0-beta/connector/sink/File.mdx
new file mode 100644
index 000000000..9ce219440
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/File.mdx
@@ -0,0 +1,192 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# File
+
+> File sink connector
+
+## Description
+
+Output data to local or hdfs file.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: File
+* [x] Flink: File
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name             | type   | required | default value  |
+| ---------------- | ------ | -------- | -------------- |
+| options          | object | no       | -              |
+| partition_by     | array  | no       | -              |
+| path             | string | yes      | -              |
+| path_time_format | string | no       | yyyyMMddHHmmss |
+| save_mode        | string | no       | error          |
+| serializer       | string | no       | json           |
+| common-options   | string | no       | -              |
+
+### options [object]
+
+Custom parameters
+
+### partition_by [array]
+
+Partition data based on selected fields
+
+### path [string]
+
+The file path is required. The `hdfs file` starts with `hdfs://` , and the `local file` starts with `file://`,
+we can add the variable `${now}` or `${uuid}` in the path, like `hdfs:///test_${uuid}_${now}.txt`, 
+`${now}` represents the current time, and its format can be defined by specifying the option `path_time_format`
+
+### path_time_format [string]
+
+When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyyMMddHHmmss` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### save_mode [string]
+
+Storage mode, currently supports `overwrite` , `append` , `ignore` and `error` . For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+### serializer [string]
+
+Serialization method, currently supports `csv` , `json` , `parquet` , `orc` and `text`
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+<TabItem value="flink">
+
+
+| name              | type   | required | default value  |
+|-------------------|--------| -------- |----------------|
+| format            | string | yes      | -              |
+| path              | string | yes      | -              |
+| path_time_format  | string | no       | yyyyMMddHHmmss |
+| write_mode        | string | no       | -              |
+| common-options    | string | no       | -              |
+| parallelism       | int    | no       | -              |
+| rollover_interval | long   | no       | 1              |
+| max_part_size     | long   | no       | 1024          |
+| prefix            | string | no       | seatunnel      |
+| suffix            | string | no       | .ext           |
+
+### format [string]
+
+Currently, `csv` , `json` , and `text` are supported. The streaming mode currently only supports `text`
+
+### path [string]
+
+The file path is required. The `hdfs file` starts with `hdfs://` , and the `local file` starts with `file://`,
+we can add the variable `${now}` or `${uuid}` in the path, like `hdfs:///test_${uuid}_${now}.txt`,
+`${now}` represents the current time, and its format can be defined by specifying the option `path_time_format`
+
+### path_time_format [string]
+
+When the format in the `path` parameter is `xxxx-${now}` , `path_time_format` can specify the time format of the path, and the default value is `yyyyMMddHHmmss` . The commonly used time formats are listed as follows:
+
+| Symbol | Description        |
+| ------ | ------------------ |
+| y      | Year               |
+| M      | Month              |
+| d      | Day of month       |
+| H      | Hour in day (0-23) |
+| m      | Minute in hour     |
+| s      | Second in minute   |
+
+See [Java SimpleDateFormat](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) for detailed time format syntax.
+
+### write_mode [string]
+
+- NO_OVERWRITE
+
+- No overwrite, there is an error in the path
+
+- OVERWRITE
+
+- Overwrite, delete and then write if the path exists
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for FileSink
+
+### rollover_interval [long]
+
+The rollover interval for new file parts, in minutes.
+
+### max_part_size [long]
+
+The maximum size of each file part, in MB.
+
+### prefix [string]
+
+The prefix of each file part.
+
+### suffix [string]
+
+The suffix of each file part.
+
+</TabItem>
+</Tabs>
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+file {
+    path = "file:///var/logs"
+    serializer = "text"
+}
+```
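+
+A sketch that also uses the `${now}` variable together with `path_time_format` (the path and format values are illustrative):
+
+```bash
+file {
+    path = "hdfs:///logs/output_${now}"
+    path_time_format = "yyyy.MM.dd"
+    serializer = "json"
+}
+```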
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+FileSink {
+    format = "json"
+    path = "hdfs://localhost:9000/flink/output/"
+    write_mode = "OVERWRITE"
+}
+```
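+
+For streaming `text` output, a sketch that combines the rolling-file options described above (all values are illustrative):
+
+```bash
+FileSink {
+    format = "text"
+    path = "hdfs://localhost:9000/flink/output/"
+    rollover_interval = 5
+    max_part_size = 2048
+    prefix = "seatunnel"
+    suffix = ".txt"
+}
+```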
+
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Hbase.md b/versioned_docs/version-2.2.0-beta/connector/sink/Hbase.md
new file mode 100644
index 000000000..5108be750
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Hbase.md
@@ -0,0 +1,68 @@
+# Hbase
+
+> Hbase sink connector
+
+## Description
+
+Use [hbase-connectors](https://github.com/apache/hbase-connectors/tree/master/spark) to output data to `Hbase` , `Hbase (>=2.1.0)` and `Spark (>=2.0.0)` version compatibility depends on `hbase-connectors` . The `hbase-connectors` in the official Apache Hbase documentation is also one of the [Apache Hbase Repos](https://hbase.apache.org/book.html#repos).
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hbase
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                   | type   | required | default value |
+| ---------------------- | ------ | -------- | ------------- |
+| hbase.zookeeper.quorum | string | yes      |               |
+| catalog                | string | yes      |               |
+| staging_dir            | string | yes      |               |
+| save_mode              | string | no       | append        |
+| hbase.*                | string | no       |               |
+
+### hbase.zookeeper.quorum [string]
+
+The address of the `zookeeper` cluster, the format is: `host01:2181,host02:2181,host03:2181`
+
+### catalog [string]
+
+The structure of the `hbase` table is defined by `catalog` : the name of the `hbase` table and its `namespace` , which `columns` are used as the `rowkey` , and the mapping between `column family` and `columns` are all defined in the `hbase table catalog`
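+
+For illustration, a minimal catalog sketch for a hypothetical table `default:test` keyed on `id` with a single column family `info` could look like this:
+
+```bash
+catalog = "{\"table\":{\"namespace\":\"default\", \"name\":\"test\"},\"rowkey\":\"id\",\"columns\":{\"id\":{\"cf\":\"rowkey\", \"col\":\"id\", \"type\":\"string\"},\"name\":{\"cf\":\"info\", \"col\":\"name\", \"type\":\"string\"}}}"
+```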
+
+### staging_dir [string]
+
+A path on `HDFS` where the data to be loaded into `hbase` is generated. After the data is loaded, the data files are deleted but the directory remains.
+
+### save_mode [string]
+
+Two write modes are supported, `overwrite` and `append` . `overwrite` means that if there is data in the `hbase table` , `truncate` will be performed and then the data will be loaded.
+
+`append` means that the original data of the `hbase table` will not be cleared, and the load operation will be performed directly.
+
+### hbase.* [string]
+
+Users can also specify multiple optional parameters. For a detailed list of parameters, see [Hbase Supported Parameters](https://hbase.apache.org/book.html#config.files).
+
+If these non-essential parameters are not specified, they will use the default values given in the official documentation.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+## Examples
+
+```bash
+ hbase {
+    source_table_name = "hive_dataset"
+    hbase.zookeeper.quorum = "host01:2181,host02:2181,host03:2181"
+    catalog = "{\"table\":{\"namespace\":\"default\", \"name\":\"customer\"},\"rowkey\":\"c_custkey\",\"columns\":{\"c_custkey\":{\"cf\":\"rowkey\", \"col\":\"c_custkey\", \"type\":\"bigint\"},\"c_name\":{\"cf\":\"info\", \"col\":\"c_name\", \"type\":\"string\"},\"c_address\":{\"cf\":\"info\", \"col\":\"c_address\", \"type\":\"string\"},\"c_city\":{\"cf\":\"info\", \"col\":\"c_city\", \"type\":\"string\"},\"c_nation\":{\"cf\":\"info\", \"col\":\"c_nation\", \"type\":\"string\"},\"c_regio [...]
+    staging_dir = "/tmp/hbase-staging/"
+    save_mode = "overwrite"
+}
+```
+
+This `Hbase` plugin does not create tables for the user, because the pre-partitioning of an `hbase` table depends on business logic; the user therefore needs to create the `hbase` table and its pre-partitions before running the plugin. For `rowkey` design, the catalog itself supports a multi-column composite `rowkey="col1:col2:col3"` , but if there are other design requirements for the `rowkey` , such as adding a salt, it can be completely decoupled  [...]
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Hive.md b/versioned_docs/version-2.2.0-beta/connector/sink/Hive.md
new file mode 100644
index 000000000..50df7cb86
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Hive.md
@@ -0,0 +1,72 @@
+# Hive
+
+> Hive sink connector
+
+### Description
+
+Write Rows to [Apache Hive](https://hive.apache.org).
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hive
+* [ ] Flink
+
+:::
+
+### Options
+
+| name                                    | type          | required | default value |
+| --------------------------------------- | ------------- | -------- | ------------- |
+| [sql](#sql-string)                             | string        | no       | -             |
+| [source_table_name](#source_table_name-string) | string        | no       | -             |
+| [result_table_name](#result_table_name-string) | string        | no       | -             |
+| [sink_columns](#sink_columns-string)           | string        | no       | -             |
+| [save_mode](#save_mode-string)                 | string        | no       | -             |
+| [partition_by](#partition_by-arraystring)           | Array[string] | no       | -             |
+
+##### sql [string]
+
+Hive SQL: the complete insert statement, such as `insert into/overwrite $table select * from xxx_table` . If this option exists, other options will be ignored.
+
+##### source_table_name [string]
+
+Datasource of this plugin.
+
+##### result_table_name [string]
+
+The output hive table name, used if the `sql` option is not specified.
+
+##### save_mode [string]
+
+Same as the `spark.mode` option in Spark; combined with `result_table_name` if the `sql` option is not specified.
+
+##### sink_columns [string]
+
+The selected fields to write to `result_table_name` , separated by commas; combined with `result_table_name` if the `sql` option is not specified.
+
+##### partition_by [Array[string]]
+
+Hive partition fields; combined with `result_table_name` if the `sql` option is not specified.
+
+### Example
+
+```conf
+sink {
+  Hive {
+    sql = "insert overwrite table seatunnel.test1 partition(province) select name,age,province from myTable2"
+  }
+}
+```
+
+```conf
+sink {
+  Hive {
+    source_table_name = "myTable2"
+    result_table_name = "seatunnel.test1"
+    save_mode = "overwrite"
+    sink_columns = "name,age,province"
+    partition_by = ["province"]
+  }
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Hudi.md b/versioned_docs/version-2.2.0-beta/connector/sink/Hudi.md
new file mode 100644
index 000000000..b79089cb9
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Hudi.md
@@ -0,0 +1,43 @@
+# Hudi
+
+> Hudi sink connector
+
+## Description
+
+Write Rows to a Hudi.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hudi
+* [ ] Flink
+
+:::
+
+## Options
+
+| name | type | required | default value | engine |
+| --- | --- | --- | --- | --- |
+| hoodie.base.path | string | yes | - | Spark |
+| hoodie.table.name | string | yes | - | Spark |
+| save_mode | string | no | append | Spark |
+
+[More hudi Configurations](https://hudi.apache.org/docs/configurations/#Write-Options)
+
+### hoodie.base.path [string]
+
+Base path on lake storage, under which all the table data is stored. Always prefix it explicitly with the storage scheme (e.g. hdfs://, s3://, etc.). Hudi stores all the main metadata about commits, savepoints, cleaning audit logs, etc. in the .hoodie directory under this base path.
+
+### hoodie.table.name [string]
+
+Table name that will be used for registering with Hive. Needs to be same across runs.
+
+## Examples
+
+```bash
+hudi {
+    hoodie.base.path = "hdfs://"
+    hoodie.table.name = "seatunnel_hudi"
+}
+```
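+
+A sketch that also sets the optional `save_mode` from the options table (the base path is illustrative):
+
+```bash
+hudi {
+    hoodie.base.path = "hdfs://namenode:8020/warehouse/seatunnel_hudi"
+    hoodie.table.name = "seatunnel_hudi"
+    save_mode = "append"
+}
+```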
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Iceberg.md b/versioned_docs/version-2.2.0-beta/connector/sink/Iceberg.md
new file mode 100644
index 000000000..3831171c8
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Iceberg.md
@@ -0,0 +1,70 @@
+# Iceberg
+
+> Iceberg sink connector
+
+## Description
+
+Write data to Iceberg.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Iceberg
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                                                         | type   | required | default value |
+| ------------------------------------------------------------ | ------ | -------- | ------------- |
+| [path](#path)                                                | string | yes      | -             |
+| [saveMode](#saveMode)                                        | string | no       | append        |
+| [target-file-size-bytes](#target-file-size-bytes)            | long   | no       | -             |
+| [check-nullability](#check-nullability)                      | bool   | no       | -             |
+| [snapshot-property.custom-key](#snapshot-property.custom-key)| string | no       | -             |
+| [fanout-enabled](#fanout-enabled)                            | bool   | no       | -             |
+| [check-ordering](#check-ordering)                            | bool   | no       | -             |
+
+
+Refer to [iceberg write options](https://iceberg.apache.org/docs/latest/spark-configuration/) for more configurations.
+
+### path
+
+Iceberg table location.
+
+### saveMode
+
+`append` or `overwrite` . Only these two modes are supported by Iceberg; the default value is `append` .
+
+### target-file-size-bytes
+
+Overrides this table’s write.target-file-size-bytes
+
+### check-nullability
+
+Sets the nullable check on fields
+
+### snapshot-property.custom-key
+
+Adds an entry with the custom key and corresponding value to the snapshot summary,
+e.g. `snapshot-property.aaaa="bbbb"`
+
+### fanout-enabled
+
+Overrides this table’s write.spark.fanout.enabled
+
+### check-ordering
+
+Checks if input schema and table schema are same
+
+## Example
+
+```bash
+iceberg {
+    path = "hdfs://localhost:9000/iceberg/warehouse/db/table"
+  }
+```
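+
+A fuller sketch that combines several of the optional write options above (all values are illustrative):
+
+```bash
+iceberg {
+    path = "hdfs://localhost:9000/iceberg/warehouse/db/table"
+    saveMode = "overwrite"
+    target-file-size-bytes = 536870912
+    fanout-enabled = true
+    check-ordering = true
+}
+```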
+
+
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/InfluxDb.md b/versioned_docs/version-2.2.0-beta/connector/sink/InfluxDb.md
new file mode 100644
index 000000000..fc0f1cdba
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/InfluxDb.md
@@ -0,0 +1,90 @@
+# InfluxDB
+
+> InfluxDB sink connector
+
+## Description
+
+Write data to InfluxDB.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: InfluxDB
+
+:::
+
+## Options
+
+| name        | type           | required | default value |
+| ----------- | -------------- | -------- | ------------- |
+| server_url  | `String`       | yes      | -             |
+| username    | `String`       | no       | -             |
+| password    | `String`       | no       | -             |
+| database    | `String`       | yes      | -             |
+| measurement | `String`       | yes      | -             |
+| tags        | `List<String>` | yes      | -             |
+| fields      | `List<String>` | yes      | -             |
+| parallelism | `Int`          | no       | -             |
+
+### server_url [`String`]
+
+The URL of InfluxDB Server.
+
+### username [`String`]
+
+The username of InfluxDB Server.
+
+### password [`String`]
+
+The password of InfluxDB Server.
+
+### database [`String`]
+
+The database name in InfluxDB.
+
+### measurement [`String`]
+
+The Measurement name in InfluxDB.
+
+### tags [`List<String>`]
+
+The list of Tag in InfluxDB.
+
+### fields [`List<String>`]
+
+The list of Field in InfluxDB.
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for InfluxDbSink
+
+
+## Example
+
+### Simple
+
+```hocon
+InfluxDbSink {
+  server_url = "http://127.0.0.1:8086/"
+  database = "influxdb"
+  measurement = "m"
+  tags = ["country", "city"]
+  fields = ["count"]
+}
+```
+
+### Auth
+
+```hocon
+InfluxDbSink {
+  server_url = "http://127.0.0.1:8086/"
+  username = "admin"
+  password = "password"
+  database = "influxdb"
+  measurement = "m"
+  tags = ["country", "city"]
+  fields = ["count"]
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Jdbc.mdx b/versioned_docs/version-2.2.0-beta/connector/sink/Jdbc.mdx
new file mode 100644
index 000000000..17948b367
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Jdbc.mdx
@@ -0,0 +1,213 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Jdbc
+
+> JDBC sink connector
+
+## Description
+
+Write data through jdbc
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Jdbc
+* [x] Flink: Jdbc
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name             | type   | required | default value |
+|------------------| ------ |----------|---------------|
+| driver           | string | yes      | -             |
+| url              | string | yes      | -             |
+| user             | string | yes      | -             |
+| password         | string | yes      | -             |
+| dbTable          | string | yes      | -             |
+| saveMode         | string | no       | update         |
+| useSsl           | string | no       | false         |
+| customUpdateStmt | string | no       | -             |
+| duplicateIncs    | string | no       | -             |
+| showSql          | string | no       | true          |
+| isolationLevel   | string | no       | READ_UNCOMMITTED |
+
+### url [string]
+
+The URL of the JDBC connection. Refer to a case: `jdbc:mysql://localhost/dbName`
+
+### user [string]
+
+username
+
+### password [string]
+
+user password
+
+### dbTable [string]
+
+Sink table name, if the table does not exist, it will be created.
+
+### saveMode [string]
+
+Storage mode. In addition to the basic modes, an `update` mode is supported, which overwrites data in a specified way when an insert hits a key conflict.
+
+The basic modes currently supported are `overwrite` , `append` , `ignore` and `error` . For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+
+### useSsl [string]
+
+Only takes effect when `saveMode` is `update` : whether to enable SSL. The default value is `false`
+
+### isolationLevel [string]
+
+The transaction isolation level, which applies to current connection. The default value is `READ_UNCOMMITTED`
+
+### customUpdateStmt [string]
+
+Only takes effect when `saveMode` is `update` : specifies the update statement template used on key conflicts.
+If `customUpdateStmt` is empty, the SQL is auto-generated for all columns; otherwise the given SQL is used, following the
+`INSERT INTO table (...) values (...) ON DUPLICATE KEY UPDATE ...` syntax of `mysql` , with placeholders or fixed values in `values` .
+Tip: the table name in the SQL should be consistent with `dbTable` .
+
+### duplicateIncs [string]
+
+Only takes effect when `saveMode` is `update` : when the specified key conflicts, the listed columns are updated to the existing value plus the incoming value
+
+### showSql [string]
+
+Only takes effect when `saveMode` is `update` : whether to show the SQL
+
+</TabItem>
+<TabItem value="flink">
+
+| name                       | type    | required | default value |
+| -------------------------- | ------- | -------- | ------------- |
+| driver                     | string  | yes      | -             |
+| url                        | string  | yes      | -             |
+| username                   | string  | yes      | -             |
+| password                   | string  | no       | -             |
+| query                      | string  | yes      | -             |
+| batch_size                 | int     | no       | -             |
+| source_table_name          | string  | yes      | -             |
+| common-options             | string  | no       | -             |
+| parallelism                | int     | no       | -             |
+| pre_sql                    | string  | no       | -             |
+| post_sql                   | string  | no       | -             |
+| ignore_post_sql_exceptions | boolean | no       | -             |
+
+### driver [string]
+
+Driver name, such as `com.mysql.cj.jdbc.Driver` for MySQL.
+
+Warning: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for Standalone mode.
+
+### url [string]
+
+The URL of the JDBC connection. Such as: `jdbc:mysql://localhost:3306/test`
+
+### username [string]
+
+username
+
+### password [string]
+
+password
+
+### query [string]
+
+Insert statement
+
+### batch_size [int]
+
+Number of writes per batch
+
+### parallelism [int]
+
+The parallelism of an individual operator, for JdbcSink.
+
+### pre_sql [string]
+
+This SQL is executed before the output starts.
+
+### post_sql [string]
+
+This SQL is executed after the output finishes; it is only supported for batch jobs.
+
+### ignore_post_sql_exceptions [boolean]
+
+Whether to ignore post_sql exceptions.
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+</TabItem>
+</Tabs>
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+jdbc {
+    driver = "com.mysql.cj.jdbc.Driver",
+    saveMode = "update",
+    url = "jdbc:mysql://ip:3306/database",
+    user = "userName",
+    password = "***********",
+    dbTable = "tableName",
+    customUpdateStmt = "INSERT INTO table (column1, column2, created, modified, yn) values(?, ?, now(), now(), 1) ON DUPLICATE KEY UPDATE column1 = IFNULL(VALUES (column1), column1), column2 = IFNULL(VALUES (column2), column2)"
+}
+```
+
+> Insert data through JDBC
+
+```bash
+jdbc {
+    driver = "com.mysql.cj.jdbc.Driver",
+    saveMode = "update",
+    truncate = "true",
+    url = "jdbc:mysql://ip:3306/database",
+    user = "userName",
+    password = "***********",
+    dbTable = "tableName",
+    customUpdateStmt = "INSERT INTO tableName (column1, column2, created, modified, yn) values(?, ?, now(), now(), 1) ON DUPLICATE KEY UPDATE column1 = IFNULL(VALUES (column1), column1), column2 = IFNULL(VALUES (column2), column2)"
+    jdbc.connect_timeout = 10000
+    jdbc.socket_timeout = 10000
+}
+```
+> Timeout config
+
+</TabItem>
+<TabItem value="flink">
+
+```conf
+JdbcSink {
+    source_table_name = fake
+    driver = com.mysql.cj.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "insert into test(name,age) values(?,?)"
+    batch_size = 2
+}
+```
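+
+A sketch of a batch job that also uses `pre_sql` and `post_sql` (the statements and the `test_report` table are illustrative):
+
+```conf
+JdbcSink {
+    source_table_name = fake
+    driver = com.mysql.cj.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "insert into test(name,age) values(?,?)"
+    batch_size = 2
+    # executed before the output starts
+    pre_sql = "delete from test"
+    # executed after the output finishes; test_report is a hypothetical summary table
+    post_sql = "insert into test_report(cnt) select count(*) from test"
+    ignore_post_sql_exceptions = true
+}
+```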
+
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Kafka.md b/versioned_docs/version-2.2.0-beta/connector/sink/Kafka.md
new file mode 100644
index 000000000..0e225cf3d
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Kafka.md
@@ -0,0 +1,64 @@
+# Kafka
+
+> Kafka sink connector
+
+## Description
+
+Write Rows to a Kafka topic.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kafka
+* [x] Flink: Kafka
+
+:::
+
+## Options
+
+| name                       | type   | required | default value |
+| -------------------------- | ------ | -------- | ------------- |
+| producer.bootstrap.servers | string | yes      | -             |
+| topic                      | string | yes      | -             |
+| producer.*                 | string | no       | -             |
+| semantic                   | string | no       | -             |
+| common-options             | string | no       | -             |
+
+### producer.bootstrap.servers [string]
+
+Kafka Brokers List
+
+### topic [string]
+
+Kafka Topic
+
+### producer [string]
+
+In addition to the above parameters that must be specified by the `Kafka producer` client, the user can also specify multiple non-mandatory parameters for the `producer` client, covering [all the producer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#producerconfigs).
+
+The way to specify the parameter is to add the prefix `producer.` to the original parameter name. For example, the way to specify `request.timeout.ms` is: `producer.request.timeout.ms = 60000` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
+
+### semantic [string]
+
+The delivery semantic: `exactly_once` , `at_least_once` or `none` . The default is `at_least_once` .
+
+In exactly_once, flink producer will write all messages in a Kafka transaction that will be committed to Kafka on a checkpoint.
+
+In at_least_once, flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.
+
+NONE does not provide any guarantees: messages may be lost in case of issues on the Kafka broker and messages may be duplicated in case of a Flink failure.
+
+Please refer to [Flink Kafka Fault Tolerance](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/kafka/#fault-tolerance)
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+## Examples
+
+```bash
+kafka {
+    topic = "seatunnel"
+    producer.bootstrap.servers = "localhost:9092"
+}
+```
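+
+A sketch that also passes an optional producer parameter through the `producer.` prefix and sets the delivery semantic (values are illustrative):
+
+```bash
+kafka {
+    topic = "seatunnel"
+    producer.bootstrap.servers = "localhost:9092"
+    producer.request.timeout.ms = 60000
+    semantic = "exactly_once"
+}
+```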
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Kudu.md b/versioned_docs/version-2.2.0-beta/connector/sink/Kudu.md
new file mode 100644
index 000000000..857ac34d9
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Kudu.md
@@ -0,0 +1,42 @@
+# Kudu
+
+> Kudu sink connector
+
+## Description
+
+Write data to Kudu.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kudu
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| [kudu_master](#kudu_master-string)            | string | yes      | -             |
+| [kudu_table](#kudu_table-string)       | string | yes      | -         |
+| [mode](#mode-string)       | string | no      | insert         |
+
+### kudu_master [string]
+
+Kudu master addresses; multiple masters are separated by commas
+
+### kudu_table [string]
+
+The name of the kudu table to write to; the table must already exist
+
+### mode [string]
+
+The write mode used in kudu; `insert` , `update` , `upsert` and `insertIgnore` are supported, the default is `insert` .
+
+## Example
+
+```bash
+kudu {
+   kudu_master="hadoop01:7051,hadoop02:7051,hadoop03:7051"
+   kudu_table="my_kudu_table"
+   mode="upsert"
+ }
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/MongoDB.md b/versioned_docs/version-2.2.0-beta/connector/sink/MongoDB.md
new file mode 100644
index 000000000..fe35d729a
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/MongoDB.md
@@ -0,0 +1,51 @@
+# MongoDB
+
+> MongoDB sink connector
+
+## Description
+
+Write data to `MongoDB`
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: MongoDB
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                   | type   | required | default value |
+|------------------------| ------ |----------| ------------- |
+| writeconfig.uri        | string | yes      | -             |
+| writeconfig.database   | string | yes      | -             |
+| writeconfig.collection | string | yes      | -             |
+| writeconfig.*          | string | no       | -             |
+
+### writeconfig.uri [string]
+
+The MongoDB URI to write to
+
+### writeconfig.database [string]
+
+The MongoDB database to write to
+
+### writeconfig.collection [string]
+
+The MongoDB collection to write to
+
+### writeconfig.* [string]
+
+Other parameters can also be configured here; see the Output Configuration section of [MongoDB Configuration](https://docs.mongodb.com/spark-connector/current/configuration/) for details. The way to specify a parameter is to add the prefix `writeconfig.` to the original parameter name. For example, the way to set `localThreshold` is `writeconfig.localThreshold = 20` . If these optional parameters are not specified, the default values from the official MongoDB documentation are used.
+
+## Examples
+
+```bash
+mongodb {
+    writeconfig.uri = "mongodb://username:password@127.0.0.1:27017/test_db"
+    writeconfig.database = "test_db"
+    writeconfig.collection = "test_collection"
+}
+```
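+
+A sketch that also passes an optional driver parameter through the `writeconfig.` prefix (the value is illustrative):
+
+```bash
+mongodb {
+    writeconfig.uri = "mongodb://username:password@127.0.0.1:27017/test_db"
+    writeconfig.database = "test_db"
+    writeconfig.collection = "test_collection"
+    writeconfig.localThreshold = 20
+}
+```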
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Phoenix.md b/versioned_docs/version-2.2.0-beta/connector/sink/Phoenix.md
new file mode 100644
index 000000000..12547c010
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Phoenix.md
@@ -0,0 +1,55 @@
+# Phoenix
+
+> Phoenix sink connector
+
+## Description
+
+Export data to `Phoenix` , compatible with `Kerberos` authentication
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Phoenix
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                      | type    | required | default value |
+| ------------------------- | ------- | -------- | ------------- |
+| zk-connect                | string  | yes      | -             |
+| table                     | string  | yes      | -             |
+| tenantId                  | string  | no       | -             |
+| skipNormalizingIdentifier | boolean | no       | false         |
+| common-options            | string  | no       | -             |
+
+### zk-connect [string]
+
+Connection string, configuration example: `host1:2181,host2:2181,host3:2181 [/znode]`
+
+### table [string]
+
+Target table name
+
+### tenantId [string]
+
+Tenant ID, optional configuration item
+
+### skipNormalizingIdentifier [boolean]
+
+Whether to skip normalizing identifiers: if a column name is surrounded by double quotes, it is used as-is; otherwise the name is uppercased. Optional configuration item, the default is `false`
+
+### common options [string]
+
+Sink plugin common parameters, please refer to [Sink Plugin](common-options.md) for details
+
+## Examples
+
+```bash
+  Phoenix {
+    zk-connect = "host1:2181,host2:2181,host3:2181"
+    table = "tableName"
+  }
+```
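+
+A sketch that also sets a znode in the connection string and the optional `tenantId` (values are illustrative):
+
+```bash
+  Phoenix {
+    zk-connect = "host1:2181,host2:2181,host3:2181/hbase"
+    table = "tableName"
+    tenantId = "tenant_1"
+    skipNormalizingIdentifier = true
+  }
+```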
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Redis.md b/versioned_docs/version-2.2.0-beta/connector/sink/Redis.md
new file mode 100644
index 000000000..b8860b36b
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Redis.md
@@ -0,0 +1,95 @@
+# Redis
+
+> Redis sink connector
+
+## Description
+
+Write Rows to a Redis.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Redis
+* [ ] Flink
+
+:::
+
+## Options
+
+| name      | type   | required | default value |
+|-----------|--------|----------|---------------|
+| host      | string | no       | "localhost"   |
+| port      | int    | no       | 6379          |
+| auth      | string | no       |               |
+| db_num    | int    | no       | 0             |
+| data_type | string | no       | "KV"          |
+| hash_name | string | no       |               |
+| list_name | string | no       |               |
+| set_name  | string | no       |               |
+| zset_name | string | no       |               |
+| timeout   | int    | no       | 2000          |
+| ttl       | int    | no       | 0             |
+| is_self_achieved    | boolean | no       | false         |
+
+### host [string]
+
+Redis server address, default `"localhost"`
+
+### port [int]
+
+Redis service port, default `6379`
+
+### auth [string]
+
+Redis authentication password
+
+### db_num [int]
+
+Redis database index ID. It is connected to db `0` by default
+
+### timeout [int]
+
+Redis connection timeout, default `2000`
+
+### data_type [string]
+
+Redis data type, e.g. `KV HASH LIST SET ZSET`
+
+### hash_name [string]
+
+If the redis data type is `HASH` , the hash name must be configured
+
+### list_name [string]
+
+If the redis data type is `LIST` , the list name must be configured
+
+### zset_name [string]
+
+If the redis data type is `ZSET` , the zset name must be configured
+
+### set_name [string]
+
+If the redis data type is `SET` , the set name must be configured
+
+### ttl [int]
+
+Redis data expiration TTL; `0` means no expiration.
+
+### is_self_achieved [boolean]
+
+Set to `true` when redis is accessed through a self-built redis proxy that does not support the redis `info Replication` command
+
+## Examples
+
+```bash
+redis {
+  host = "localhost"
+  port = 6379
+  auth = "myPassword"
+  db_num = 1
+  data_type = "HASH"
+  hash_name = "test"
+  is_self_achieved = false
+}
+```
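+
+A sketch for the plain `KV` type with an expiration (values are illustrative):
+
+```bash
+redis {
+  host = "localhost"
+  port = 6379
+  db_num = 0
+  data_type = "KV"
+  ttl = 3600
+}
+```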
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/Tidb.md b/versioned_docs/version-2.2.0-beta/connector/sink/Tidb.md
new file mode 100644
index 000000000..78d193c17
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/Tidb.md
@@ -0,0 +1,88 @@
+# TiDb
+
+> TiDB sink connector
+
+### Description
+
+Write data to TiDB.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: TiDb
+* [ ] Flink
+
+:::
+
+### Env Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| [spark.tispark.pd.addresses](#spark.tispark.pd.addresses-string)       | string | yes      | -             |
+| [spark.sql.extensions](#spark.sql.extensions-string)        | string | yes      | org.apache.spark.sql.TiExtensions         |
+
+##### spark.tispark.pd.addresses [string]
+
+TiDB Pd Address
+
+##### spark.sql.extensions [string]
+
+Spark Sql Extensions
+
+### Options
+
+| name             | type   | required | default value |
+|------------------| ------ |----------|---------------|
+| [addr](#addr-string)              | string | yes      | -             |
+| [port](#port-string)              | string | yes      | -             |
+| [user](#user-string)             | string | yes      | -             |
+| [password](#password-string)         | string | yes      | -             |
+| [table](#table-string)            | string | yes      | -             |
+| [database](#database-string)        | string | yes       |        |
+
+##### addr [string]
+
+TiDB address, which currently only supports one instance
+
+##### port [string]
+
+TiDB port
+
+##### user [string]
+
+TiDB user
+
+##### password [string]
+
+TiDB password
+
+##### table [string]
+
+TiDB table name
+
+##### database [string]
+
+TiDB database name
+
+##### options
+
+Refer to [TiSpark Configurations](https://github.com/pingcap/tispark/blob/v2.4.1/docs/datasource_api_userguide.md)
+
+### Examples
+
+```bash
+env {
+    spark.tispark.pd.addresses = "127.0.0.1:2379"
+    spark.sql.extensions = "org.apache.spark.sql.TiExtensions"
+}
+
+tidb {
+    addr = "127.0.0.1",
+    port = "4000"
+    database = "database",
+    table = "tableName",
+    user = "userName",
+    password = "***********"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/sink/common-options.md b/versioned_docs/version-2.2.0-beta/connector/sink/common-options.md
new file mode 100644
index 000000000..ac4a2e428
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/sink/common-options.md
@@ -0,0 +1,45 @@
+# Common Options
+
+> Common parameters of sink connectors
+
+| name              | type   | required | default value |
+| ----------------- | ------ | -------- | ------------- |
+| source_table_name | string | no       | -             |
+
+### source_table_name [string]
+
+When `source_table_name` is not specified, the current plugin processes the data set output by the previous plugin in the configuration file;
+
+When `source_table_name` is specified, the current plugin processes the data set corresponding to this parameter.
+
+## Examples
+
+```bash
+source {
+    FakeSourceStream {
+      result_table_name = "fake"
+      field_name = "name,age"
+    }
+}
+
+transform {
+    sql {
+      source_table_name = "fake"
+      sql = "select name from fake"
+      result_table_name = "fake_name"
+    }
+    sql {
+      source_table_name = "fake"
+      sql = "select age from fake"
+      result_table_name = "fake_age"
+    }
+}
+
+sink {
+    console {
+      source_table_name = "fake_name"
+    }
+}
+```
+
+> If `source_table_name` is not specified, the console outputs the data of the last transform, and if it is set to `fake_name` , it will output the data of `fake_name`
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Druid.md b/versioned_docs/version-2.2.0-beta/connector/source/Druid.md
new file mode 100644
index 000000000..ed7aaa016
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Druid.md
@@ -0,0 +1,67 @@
+# Druid
+
+> Druid source connector
+
+## Description
+
+Read data from Apache Druid.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: Druid
+
+:::
+
+## Options
+
+| name       | type           | required | default value |
+| ---------- | -------------- | -------- | ------------- |
+| jdbc_url   | `String`       | yes      | -             |
+| datasource | `String`       | yes      | -             |
+| start_date | `String`       | no       | -             |
+| end_date   | `String`       | no       | -             |
+| columns    | `List<String>` | no       | `*`           |
+| parallelism      | `Int`    | no       | -             |
+
+### jdbc_url [`String`]
+
+The URL of JDBC of Apache Druid.
+
+### datasource [`String`]
+
+The DataSource name in Apache Druid.
+
+### start_date [`String`]
+
+The start date of DataSource, for example, `'2016-06-27'`, `'2016-06-27 00:00:00'`, etc.
+
+### end_date [`String`]
+
+The end date of DataSource, for example, `'2016-06-28'`, `'2016-06-28 00:00:00'`, etc.
+
+### columns [`List<String>`]
+
+The columns of the DataSource that you want to query.
+
+### common options [string]
+
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for DruidSource
+
+## Example
+
+```hocon
+DruidSource {
+  jdbc_url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/"
+  datasource = "wikipedia"
+  start_date = "2016-06-27 00:00:00"
+  end_date = "2016-06-28 00:00:00"
+  columns = ["flags","page"]
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Elasticsearch.md b/versioned_docs/version-2.2.0-beta/connector/source/Elasticsearch.md
new file mode 100644
index 000000000..4d205ea00
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Elasticsearch.md
@@ -0,0 +1,64 @@
+# Elasticsearch
+
+> Elasticsearch source connector
+
+## Description
+
+Read data from Elasticsearch
+
+:::tip 
+
+Engine Supported and plugin name
+
+* [x] Spark: Elasticsearch
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| hosts          | array  | yes      | -             |
+| index          | string | yes      |               |
+| es.*           | string | no       |               |
+| common-options | string | yes      | -             |
+
+### hosts [array]
+
+Elasticsearch cluster address, in `host:port` format; multiple hosts can be specified, such as `["host1:9200", "host2:9200"]` .
+
+### index [string]
+
+Elasticsearch index name, supports `*` fuzzy matching
+
+### es.* [string]
+
+Users can also specify multiple optional parameters. For a detailed list of parameters, see [Parameters Supported by Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html#cfg-mapping).
+
+For example, the way to specify `es.read.metadata` is: `es.read.metadata = true` . If these non-essential parameters are not specified, they will use the default values given in the official documentation.
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Examples
+
+```bash
+elasticsearch {
+    hosts = ["localhost:9200"]
+    index = "seatunnel-20190424"
+    result_table_name = "my_dataset"
+}
+```
+
+```bash
+elasticsearch {
+    hosts = ["localhost:9200"]
+    index = "seatunnel-*"
+    es.read.field.include = "name, age"
+    result_table_name = "my_dataset"
+}
+```
+
+> Matches all indexes starting with `seatunnel-` , and only reads the two fields `name` and `age` .
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Fake.mdx b/versioned_docs/version-2.2.0-beta/connector/source/Fake.mdx
new file mode 100644
index 000000000..95980afca
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Fake.mdx
@@ -0,0 +1,203 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Fake
+
+> Fake source connector
+
+## Description
+
+`Fake` is mainly used to conveniently generate user-specified data, which is used as input for functional verification, testing, and performance testing of seatunnel.
+
+:::note
+
+Engine Supported and plugin name
+
+* [x] Spark: Fake, FakeStream
+* [x] Flink: FakeSource, FakeSourceStream
+    * Flink `Fake Source` is mainly used to automatically generate data. The data has only two columns: the first column is of `String` type and its content is picked at random from `["Gary", "Ricky Huo", "Kid Xiong"]` ; the second column is of `Int` type and holds the current 13-digit timestamp. It is used as input for functional verification and testing of `seatunnel` .
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+:::note
+
+These options are for Spark: `FakeStream` ; Spark: `Fake` does not have any options
+
+:::
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| content        | array  | no       | -             |
+| rate           | number | yes      | -             |
+| common-options | string | yes      | -             |
+
+### content [array]
+
+List of test data strings
+
+### rate [number]
+
+Number of test cases generated per second
+
+</TabItem>
+<TabItem value="flink">
+
+| name               | type                 | required | default value |
+|--------------------|----------------------|----------|---------------|
+| parallelism        | `Int`                | no       | -             |
+| common-options     | `string`             | no       | -             |
+| mock_data_schema   | list [column_config] | no       | see details.  |
+| mock_data_size     | int                  | no       | 300           |
+| mock_data_interval | int (second)         | no       | 1             |
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for Fake Source Stream
+
+</TabItem>
+</Tabs>
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+### mock_data_schema Option [list[column_config]]
+
+Configures the schema of the mock data. Each entry is a `column_config` option.
+
+When `mock_data_schema` is not defined, data is generated with a schema like this:
+```bash
+mock_data_schema = [
+  {
+    name = "name",
+    type = "string",
+    mock_config = {
+      string_seed = ["Gary", "Ricky Huo", "Kid Xiong"]
+      size_range = [1,1]
+    }
+  }
+  {
+    name = "age",
+    type = "int",
+    mock_config = {
+      int_range = [1, 100]
+    }
+  }
+]
+```
+
+The `column_config` option:
+
+| name        | type        | required | default value | support values                                                                                                                                                                                                                                      |
+|-------------|-------------|----------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| name        | string      | yes      | string        | -                                                                                                                                                                                                                                                   |
+| type        | string      | yes      | string        | int,integer,byte,boolean,char,<br/>character,short,long,float,double,<br/>date,timestamp,decimal,bigdecimal,<br/>bigint,int[],byte[],<br/>boolean[],char[],character[],short[],<br/>long[],float[],double[],string[],<br/>binary,varchar |
+| mock_config | mock_config | no       | -             | -                                                                                                                                                                                                                                                   |
+
+mock_config Option
+
+| name          | type                  | required | default value | sample                                   |
+|---------------|-----------------------|----------|---------------|------------------------------------------|
+| byte_range    | list[byte] [size=2]   | no       | -             | [0,127]                                  |
+| boolean_seed  | list[boolean]         | no       | -             | [true, true, false]                      |
+| char_seed     | list[char] [size=2]   | no       | -             | ['a','b','c']                            |
+| date_range    | list[string] [size=2] | no       | -             | ["1970-01-01", "2100-12-31"]             |
+| decimal_scale | int                   | no       | -             | 2                                        |
+| double_range  | list[double] [size=2] | no       | -             | [0.0, 10000.0]                           |
+| float_range   | list[float] [size=2]  | no       | -             | [0.0, 10000.0]                           |
+| int_range     | list[int] [size=2]    | no       | -             | [0, 100]                                 |
+| long_range    | list[long] [size=2]   | no       | -             | [0, 100000]                              |
+| number_regex  | string                | no       | -             | "[1-9]{1}\\d?"                           |
+| time_range    | list[int] [size=6]    | no       | -             | [0,24,0,60,0,60]                         |
+| size_range    | list[int] [size=2]    | no       | -             | [6,10]                                   |
+| string_regex  | string                | no       | -             | "[a-z0-9]{5}\\@\\w{3}\\.[a-z]{3}"        |
+| string_seed   | list[string]          | no       | -             | ["Gary", "Ricky Huo", "Kid Xiong"]       |
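+
+For instance, a hypothetical column set that mocks an email-like string via `string_regex` and a birth date via `date_range` could be sketched as follows (field names are illustrative):
+
+```bash
+mock_data_schema = [
+  {
+    name = "email",
+    type = "string",
+    mock_config = {
+      string_regex = "[a-z0-9]{5}\\@\\w{3}\\.[a-z]{3}"
+    }
+  }
+  {
+    name = "birthday",
+    type = "date",
+    mock_config = {
+      date_range = ["1970-01-01", "2100-12-31"]
+    }
+  }
+]
+```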
+
+### mock_data_size Option [int]
+
+Configures the number of mock rows to generate.
+
+### mock_data_interval Option [int]
+
+Configures the interval at which mock data is generated, in seconds.
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+### Fake
+
+```bash
+Fake {
+    result_table_name = "my_dataset"
+}
+```
+
+### FakeStream
+
+```bash
+fakeStream {
+    content = ['name=ricky&age=23', 'name=gary&age=28']
+    rate = 5
+}
+```
+
+The generated data is as follows; strings are randomly extracted from the `content` list
+
+```bash
++-----------------+
+|raw_message      |
++-----------------+
+|name=gary&age=28 |
+|name=ricky&age=23|
++-----------------+
+```
+
+</TabItem>
+<TabItem value="flink">
+
+### FakeSourceStream
+
+
+
+```bash
+source {
+    FakeSourceStream {
+        result_table_name = "fake"
+        field_name = "name,age"
+    }
+}
+```
+
+### FakeSource
+
+```bash
+source {
+    FakeSource {
+        result_table_name = "fake"
+        field_name = "name,age"
+        mock_data_size = 100 // will generate 100 rows mock data.
+    }
+}
+```
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/FeishuSheet.md b/versioned_docs/version-2.2.0-beta/connector/source/FeishuSheet.md
new file mode 100644
index 000000000..3784ac6fb
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/FeishuSheet.md
@@ -0,0 +1,61 @@
+# Feishu Sheet
+
+> Feishu sheet source connector
+
+## Description
+
+Get data from Feishu sheet
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: FeishuSheet
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value       |
+| ---------------| ------ |----------|---------------------|
+| appId          | string | yes      | -                   |
+| appSecret      | string | yes      | -                   |
+| sheetToken     | string | yes      | -                   |
+| range          | string | no       | all values in sheet |
+| sheetNum       | int    | no       | 1                   |
+| titleLineNum   | int    | no       | 1                   |
+| ignoreTitleLine| bool   | no       | true                |
+
+* appId and appSecret
+  * These two parameters are obtained from the Feishu open platform.
+  * You also need to enable the sheet permission in the permission management tab.
+* sheetToken
+  * If your Feishu sheet link is https://xxx.feishu.cn/sheets/shtcnGxninxxxxxxx,
+  then "shtcnGxninxxxxxxx" is the sheetToken.
+* range
+  * The format is A1:D5, A2:C4, and so on.
+* sheetNum
+  * To import the first sheet, set it to 1, which is also the default value.
+  * To import the second sheet, set it to 2.
+* titleLineNum
+  * The title line defaults to the first line.
+  * If your title line is not the first line, change this number accordingly, e.g. 2, 3 or 5.
+* ignoreTitleLine
+  * The title line is not saved to the data; if you want to keep it in the data, set this value to false.
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+```bash
+    FeishuSheet {
+        result_table_name = "my_dataset"
+        appId = "cli_a2cbxxxxxx"
+        appSecret = "IvhtW7xxxxxxxxxxxxxxx"
+        sheetToken = "shtcn6K3DIixxxxxxxxxxxx"
+        # range = "A1:D4"
+    }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/File.mdx b/versioned_docs/version-2.2.0-beta/connector/source/File.mdx
new file mode 100644
index 000000000..068685ec6
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/File.mdx
@@ -0,0 +1,124 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# File
+
+> File source connector
+
+## Description
+
+Read data from local or hdfs file.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: File
+* [x] Flink: File
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name | type | required | default value |
+| --- | --- | --- | --- |
+| format | string | no | json |
+| path | string | yes | - |
+| common-options| string | yes | - |
+
+##### format [string]
+
+Format for reading files, currently supports text, parquet, json, orc, csv.
+
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| format.type    | string | yes      | -             |
+| path           | string | yes      | -             |
+| schema         | string | yes      | -             |
+| common-options | string | no       | -             |
+| parallelism    | int    | no       | -             |
+
+### format.type [string]
+
+The format for reading files from the file system, currently supports `csv` , `json` , `parquet` , `orc` and `text` .
+
+### schema [string]
+
+- csv
+    - The `schema` of `csv` is a `jsonArray` string, such as `"[{\"type\":\"long\"},{\"type\":\"string\"}]"` . It can only specify the types of the fields, not their names, so the common configuration parameter `field_name` is generally required as well.
+- json
+    - The `schema` parameter of `json` is to provide a `json string` of the original data, and the `schema` can be automatically generated, but the original data with the most complete content needs to be provided, otherwise the fields will be lost.
+- parquet
+    - The `schema` of `parquet` is an `Avro schema string` , such as `{\"type\":\"record\",\"name\":\"test\",\"fields\":[{\"name\" :\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"string\"}]}` .
+- orc
+    - The `schema` of `orc` is the string of `orc schema` , such as `"struct<name:string,addresses:array<struct<street:string,zip:smallint>>>"` .
+- text
+    - The `schema` of `text` can be filled with `string` .
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for FileSource
+
+</TabItem>
+</Tabs>
+
+##### path [string]
+
+- If reading data from hdfs, the file path should start with `hdfs://`
+- If reading data from a local file system, the file path should start with `file://`
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```
+file {
+    path = "hdfs:///var/logs"
+    result_table_name = "access_log"
+}
+```
+
+```
+file {
+    path = "file:///var/logs"
+    result_table_name = "access_log"
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+    FileSource{
+    path = "hdfs://localhost:9000/input/"
+    format.type = "json"
+    schema = "{\"data\":[{\"a\":1,\"b\":2},{\"a\":3,\"b\":4}],\"db\":\"string\",\"q\":{\"s\":\"string\"}}"
+    result_table_name = "test"
+}
+```
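+
+A csv sketch under the same assumptions, pairing the type-only `schema` with the common `field_name` parameter (the path and field names are illustrative):
+
+```bash
+FileSource{
+    path = "hdfs://localhost:9000/input/test.csv"
+    format.type = "csv"
+    schema = "[{\"type\":\"long\"},{\"type\":\"string\"}]"
+    field_name = "age,name"
+    result_table_name = "test_csv"
+}
+```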
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Hbase.md b/versioned_docs/version-2.2.0-beta/connector/source/Hbase.md
new file mode 100644
index 000000000..45f582acf
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Hbase.md
@@ -0,0 +1,46 @@
+# HBase
+
+> Hbase source connector
+
+## Description
+
+Get data from HBase
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: HBase
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| hbase.zookeeper.quorum | string | yes      |               |
+| catalog                | string | yes      |               |
+| common-options| string | yes | - |
+
+### hbase.zookeeper.quorum [string]
+
+The address of the `zookeeper` cluster, the format is: `host01:2181,host02:2181,host03:2181`
+
+### catalog [string]
+
+The structure of the `hbase` table is defined by `catalog` , the name of the `hbase` table and its `namespace` , which `columns` are used as `rowkey`, and the correspondence between `column family` and `columns` can be defined by `catalog` `hbase table catalog`
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+```bash
+  Hbase {
+    hbase.zookeeper.quorum = "localhost:2181"
+    catalog = "{\"table\":{\"namespace\":\"default\", \"name\":\"test\"},\"rowkey\":\"id\",\"columns\":{\"id\":{\"cf\":\"rowkey\", \"col\":\"id\", \"type\":\"string\"},\"a\":{\"cf\":\"f1\", \"col\":\"a\", \"type\":\"string\"},\"b\":{\"cf\":\"f1\", \"col\":\"b\", \"type\":\"string\"},\"c\":{\"cf\":\"f1\", \"col\":\"c\", \"type\":\"string\"}}}"
+    result_table_name = "my_dataset"
+  }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Hive.md b/versioned_docs/version-2.2.0-beta/connector/source/Hive.md
new file mode 100644
index 000000000..1254ee69b
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Hive.md
@@ -0,0 +1,66 @@
+# Hive
+
+> Hive source connector
+
+## Description
+
+Get data from hive
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hive
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| pre_sql        | string | yes      | -             |
+| common-options | string | yes      | -             |
+
+### pre_sql [string]
+
+The preprocessing `sql` . If preprocessing is not required, you can use `select * from hive_db.hive_table` .
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+**Note: The following configuration must be done to use hive source:**
+
+```bash
+# In the spark section in the seatunnel configuration file:
+
+env {
+  ...
+  spark.sql.catalogImplementation = "hive"
+  ...
+}
+```
+
+## Example
+
+```bash
+env {
+  ...
+  spark.sql.catalogImplementation = "hive"
+  ...
+}
+
+source {
+  hive {
+    pre_sql = "select * from mydb.mytb"
+    result_table_name = "myTable"
+  }
+}
+
+...
+```
+
+## Notes
+
+It must be ensured that the hive `metastore` is in service; it can be started with `hive --service metastore` (default port 9083). In `cluster` , `client` and `local` mode, `hive-site.xml` must be placed in the `$HADOOP_CONF` directory of the task submission node (or under `$SPARK_HOME/conf` ); for local debugging in an IDE, put it in the `resources` directory.
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Http.md b/versioned_docs/version-2.2.0-beta/connector/source/Http.md
new file mode 100644
index 000000000..7a5bda992
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Http.md
@@ -0,0 +1,63 @@
+# Http
+
+> Http source connector
+
+## Description
+
+Get data from http or https interface
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Http
+* [x] Flink: Http
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------ |
+| url            | string | yes      | -            |
+| method         | string | no       | GET          |
+| header         | string | no       |              |
+| request_params | string | no       |              |
+| sync_path      | string | no       |              |
+
+### url [string]
+
+HTTP request path, starting with http:// or https://.
+
+### method [string]
+
+HTTP request method, GET or POST; the default is GET.
+
+### header [string]
+
+HTTP request headers, in JSON format.
+
+### request_params [string]
+
+HTTP request parameters, in JSON format. Use a string with escapes to hold the JSON.
+
+### sync_path [string]
+
+For multiple HTTP requests, the storage path (on HDFS) of the parameters used for synchronization.
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details.
+
+## Example
+
+```bash
+ Http {
+    url = "http://date.jsontest.com/"
+    result_table_name= "response_body"
+   }
+```
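+
+A sketch of a POST request with a JSON header and request parameters (the endpoint and values are illustrative):
+
+```bash
+ Http {
+    url = "https://example.com/api/data"
+    method = "POST"
+    header = "{\"Content-Type\":\"application/json\"}"
+    request_params = "{\"date\":\"2022-01-01\"}"
+    result_table_name = "response_body"
+   }
+```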
+
+## Notes
+
+Based on the processing result of the http call, it is determined whether the synchronization parameters need to be updated; this judgment is made outside the http source plugin, and the parameters are then written to hdfs through the hdfs sink plugin.
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Hudi.md b/versioned_docs/version-2.2.0-beta/connector/source/Hudi.md
new file mode 100644
index 000000000..05e8d8624
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Hudi.md
@@ -0,0 +1,78 @@
+# Hudi
+
+> Hudi source connector
+
+## Description
+
+Read data from Hudi.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Hudi
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| [hoodie.datasource.read.paths](#hoodiedatasourcereadpaths) | string | yes      | -             |
+| [hoodie.file.index.enable](#hoodiefileindexenable)  | boolean | no      | -             |
+| [hoodie.datasource.read.end.instanttime](#hoodiedatasourcereadendinstanttime)          | string | no      | -             |
+| [hoodie.datasource.write.precombine.field](#hoodiedatasourcewriteprecombinefield)            | string | no      | -             |
+| [hoodie.datasource.read.incr.filters](#hoodiedatasourcereadincrfilters)       | string | no      | -             |
+| [hoodie.datasource.merge.type](#hoodiedatasourcemergetype)  | string | no      | -             |
+| [hoodie.datasource.read.begin.instanttime](#hoodiedatasourcereadbegininstanttime)            | string | no      | -             |
+| [hoodie.enable.data.skipping](#hoodieenabledataskipping)   | string | no      | -             |
+| [as.of.instant](#asofinstant)    | string | no      | -             |
+| [hoodie.datasource.query.type](#hoodiedatasourcequerytype)         | string | no      | -             |
+| [hoodie.datasource.read.schema.use.end.instanttime](#hoodiedatasourcereadschemauseendinstanttime)      | string | no      | -             |
+
+Refer to [hudi read options](https://hudi.apache.org/docs/configurations/#Read-Options) for configurations.
+
+### hoodie.datasource.read.paths
+
+Comma separated list of file paths to read within a Hudi table.
+
+### hoodie.file.index.enable
+
+Enables the Spark file index implementation for Hudi, which speeds up listing of large tables.
+
+### hoodie.datasource.read.end.instanttime
+
+Instant time to limit incrementally fetched data to. New data written with an instant_time <= END_INSTANTTIME is fetched.
+
+### hoodie.datasource.write.precombine.field
+
+Field used in pre-combining before the actual write. When two records have the same key value, the one with the largest value for the precombine field is picked, determined by Object.compareTo(..).
+
+### hoodie.datasource.read.incr.filters
+
+For use cases like DeltaStreamer, which reads from a Hudi incremental table and applies opaque map functions, filters appearing late in the sequence of transformations cannot be automatically pushed down. This option allows setting filters directly on the Hudi source.
+
+### hoodie.datasource.merge.type
+
+For snapshot queries on merge-on-read tables, controls whether the record payload implementation is invoked to merge (`payload_combine`) or merging is skipped altogether (`skip_merge`).
+
+### hoodie.datasource.read.begin.instanttime
+
+Instant time to start incrementally pulling data from. The instant time here need not necessarily correspond to an instant on the timeline. New data written with an instant_time > BEGIN_INSTANTTIME is fetched. For example, `20170901080000` will get all new data written after Sep 1, 2017 08:00 AM.
+
+### hoodie.enable.data.skipping
+
+Enables data skipping to speed up queries after doing a z-order optimization on the current table.
+
+### as.of.instant
+
+The query instant for time travel. If this option is not specified, the latest snapshot is queried.
+
+### hoodie.datasource.query.type
+
+The query type: incremental mode (new data since an instant time), read-optimized mode (latest view based on base files only), or snapshot mode (latest view, merging base files and log files if any).
+
+### hoodie.datasource.read.schema.use.end.instanttime
+
+Uses the end instant's schema for incrementally fetched data. By default, the latest instant's schema is used.
+
+## Example
+
+```bash
+hudi {
+    hoodie.datasource.read.paths = "hdfs://"
+}
+```
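+
+An incremental read can be sketched with the options above; the path and instant times here are placeholders:
+
+```bash
+hudi {
+    hoodie.datasource.read.paths = "hdfs://"
+    # incremental query between two placeholder instant times
+    hoodie.datasource.query.type = "incremental"
+    hoodie.datasource.read.begin.instanttime = "20220101000000"
+    hoodie.datasource.read.end.instanttime = "20220102000000"
+}
+```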
+
+
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Iceberg.md b/versioned_docs/version-2.2.0-beta/connector/source/Iceberg.md
new file mode 100644
index 000000000..db50993ff
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Iceberg.md
@@ -0,0 +1,61 @@
+# Iceberg
+
+> Iceberg source connector
+
+## Description
+
+Read data from Iceberg.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Iceberg
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| common-options |        | yes      | -             |
+| [path](#path)  | string | yes      | -             |
+| [pre_sql](#pre_sql) | string | yes | -             |
+| [snapshot-id](#snapshot-id) | long | no      | -   |
+| [as-of-timestamp](#as-of-timestamp) | long | no| - |
+
+
+Refer to [iceberg read options](https://iceberg.apache.org/docs/latest/spark-configuration/) for more configurations.
+
+### common-options
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+### path
+
+Iceberg table location.
+
+### pre_sql
+
+The SQL statement used to query the Iceberg table. Note that the table name in the SQL must be the value of the `result_table_name` configuration.
+
+### snapshot-id
+
+Snapshot ID of the table snapshot to read
+
+### as-of-timestamp
+
+A timestamp in milliseconds; the snapshot used will be the snapshot current at this time.
+
+## Example
+
+```bash
+iceberg {
+    path = "hdfs://localhost:9000/iceberg/warehouse/db/table"
+    result_table_name = "my_source"
+    pre_sql="select * from my_source where dt = '2019-01-01'"
+}
+```
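+
+For time travel, `snapshot-id` or `as-of-timestamp` can be added; a sketch with a placeholder timestamp:
+
+```bash
+iceberg {
+    path = "hdfs://localhost:9000/iceberg/warehouse/db/table"
+    result_table_name = "my_source"
+    pre_sql = "select * from my_source"
+    # read the snapshot that was current at this placeholder timestamp (milliseconds)
+    as-of-timestamp = 1546300800000
+}
+```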
+
+
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/InfluxDb.md b/versioned_docs/version-2.2.0-beta/connector/source/InfluxDb.md
new file mode 100644
index 000000000..19fce0c31
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/InfluxDb.md
@@ -0,0 +1,89 @@
+# InfluxDb
+
+> InfluxDb source connector
+
+## Description
+
+Read data from InfluxDB.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [ ] Spark
+* [x] Flink: InfluxDb
+
+:::
+
+## Options
+
+| name        | type           | required | default value |
+| ----------- | -------------- | -------- | ------------- |
+| server_url  | `String`       | yes      | -             |
+| username    | `String`       | no       | -             |
+| password    | `String`       | no       | -             |
+| database    | `String`       | yes      | -             |
+| measurement | `String`       | yes      | -             |
+| fields      | `List<String>` | yes      | -             |
+| field_types | `List<String>` | yes      | -             |
+| parallelism | `Int`          | no       | -             |
+
+### server_url [`String`]
+
+The URL of InfluxDB Server.
+
+### username [`String`]
+
+The username of InfluxDB Server.
+
+### password [`String`]
+
+The password of InfluxDB Server.
+
+### database [`String`]
+
+The database name in InfluxDB.
+
+### measurement [`String`]
+
+The Measurement name in InfluxDB.
+
+### fields [`List<String>`]
+
+The list of fields in InfluxDB.
+
+### field_types [`List<String>`]
+
+The list of field types in InfluxDB.
+
+### parallelism [`Int`]
+
+The parallelism of an individual operator, for InfluxDbSource.
+
+## Example
+
+### Simple
+
+```hocon
+InfluxDbSource {
+  server_url = "http://127.0.0.1:8086/"
+  database = "influxdb"
+  measurement = "m"
+  fields = ["time", "temperature"]
+  field_types = ["STRING", "DOUBLE"]
+}
+```
+
+### Auth
+
+```hocon
+InfluxDbSource {
+  server_url = "http://127.0.0.1:8086/"
+  username = "admin"
+  password = "password"
+  database = "influxdb"
+  measurement = "m"
+  fields = ["time", "temperature"]
+  field_types = ["STRING", "DOUBLE"]
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Jdbc.mdx b/versioned_docs/version-2.2.0-beta/connector/source/Jdbc.mdx
new file mode 100644
index 000000000..69a2e4c42
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Jdbc.mdx
@@ -0,0 +1,207 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Jdbc
+
+> JDBC source connector
+
+## Description
+
+Read external data source data through JDBC
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Jdbc
+* [x] Flink: Jdbc
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| driver         | string | yes      | -             |
+| jdbc.*         | string | no       |               |
+| password       | string | yes      | -             |
+| table          | string | yes      | -             |
+| url            | string | yes      | -             |
+| user           | string | yes      | -             |
+| common-options | string | yes      | -             |
+
+</TabItem>
+<TabItem value="flink">
+
+| name                  | type   | required | default value |
+|-----------------------|--------| -------- | ------------- |
+| driver                | string | yes      | -             |
+| url                   | string | yes      | -             |
+| username              | string | yes      | -             |
+| password              | string | no       | -             |
+| query                 | string | yes      | -             |
+| fetch_size            | int    | no       | -             |
+| partition_column      | string | no       | -             |
+| partition_upper_bound | long   | no       | -             |
+| partition_lower_bound | long   | no       | -             |
+| common-options        | string | no       | -             |
+| parallelism           | int    | no       | -             |
+
+</TabItem>
+</Tabs>
+
+### driver [string]
+
+The `jdbc class name` used to connect to the remote data source, if you use MySQL the value is `com.mysql.cj.jdbc.Driver`.
+
+Warning: for license compliance, you have to provide the MySQL JDBC driver yourself, e.g. copy `mysql-connector-java-xxx.jar` to `$FLINK_HOME/lib` for Standalone.
+
+### password [string]
+
+password
+
+### url [string]
+
+The URL of the JDBC connection. Refer to a case: `jdbc:postgresql://localhost/test`
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+### jdbc [string]
+
+In addition to the parameters that must be specified above, users can also specify multiple optional parameters, which cover [all the parameters](https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases) provided by Spark JDBC.
+
+The way to specify parameters is to add the prefix `jdbc.` to the original parameter name. For example, the way to specify `fetchsize` is: `jdbc.fetchsize = 50000` . If these non-essential parameters are not specified, they will use the default values given by Spark JDBC.
+
+### user [string]
+
+username
+
+### table [string]
+
+table name
+
+</TabItem>
+<TabItem value="flink">
+
+### username [string]
+
+username
+
+### query [string]
+
+Query statement
+
+### fetch_size [int]
+
+The JDBC fetch size (the number of rows fetched from the database per round trip).
+
+### parallelism [int]
+
+The parallelism of an individual operator, for JdbcSource.
+
+### partition_column [string]
+
+The column name used to partition the read for parallelism; only numeric types are supported.
+
+### partition_upper_bound [long]
+
+The maximum value of `partition_column` for the scan; if not set, SeaTunnel will query the database to get the maximum value.
+
+### partition_lower_bound [long]
+
+The minimum value of `partition_column` for the scan; if not set, SeaTunnel will query the database to get the minimum value.
+
+</TabItem>
+</Tabs>
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+jdbc {
+    driver = "com.mysql.cj.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+}
+```
+
+> Read MySQL data through JDBC
+
+```bash
+jdbc {
+    driver = "com.mysql.cj.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+    jdbc.partitionColumn = "item_id"
+    jdbc.numPartitions = "10"
+    jdbc.lowerBound = 0
+    jdbc.upperBound = 100
+}
+```
+
+> Divide partitions based on specified fields
+
+
+```bash
+jdbc {
+    driver = "com.mysql.cj.jdbc.Driver"
+    url = "jdbc:mysql://localhost:3306/info"
+    table = "access"
+    result_table_name = "access_log"
+    user = "username"
+    password = "password"
+    
+    jdbc.connect_timeout = 10000
+    jdbc.socket_timeout = 10000
+}
+```
+> Timeout config
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+JdbcSource {
+    driver = com.mysql.cj.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    query = "select * from test"
+}
+```
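+
+A parallel read can be sketched with the partition options above; the column name, bounds and parallelism below are placeholder values:
+
+```bash
+JdbcSource {
+    driver = com.mysql.cj.jdbc.Driver
+    url = "jdbc:mysql://localhost/test"
+    username = root
+    password = root
+    query = "select * from test"
+    fetch_size = 1000
+    # split the read by a numeric column (placeholder column and bounds)
+    partition_column = "id"
+    partition_lower_bound = 0
+    partition_upper_bound = 100000
+    parallelism = 4
+}
+```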
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Kafka.mdx b/versioned_docs/version-2.2.0-beta/connector/source/Kafka.mdx
new file mode 100644
index 000000000..4777e0893
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Kafka.mdx
@@ -0,0 +1,179 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Kafka
+
+> Kafka source connector
+
+## Description
+
+To consume data from `Kafka` , supported `Kafka version >= 0.10.0` .
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: KafkaStream
+* [x] Flink: Kafka
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name                       | type   | required | default value |
+| -------------------------- | ------ | -------- | ------------- |
+| topics                     | string | yes      | -             |
+| consumer.group.id          | string | yes      | -             |
+| consumer.bootstrap.servers | string | yes      | -             |
+| consumer.*                 | string | no       | -             |
+| common-options             | string | yes      | -             |
+
+</TabItem>
+<TabItem value="flink">
+
+| name                       | type   | required | default value |
+| -------------------------- | ------ | -------- | ------------- |
+| topics                     | string | yes      | -             |
+| consumer.group.id          | string | yes      | -             |
+| consumer.bootstrap.servers | string | yes      | -             |
+| schema                     | string | yes      | -             |
+| format.type                | string | yes      | -             |
+| format.*                   | string | no       | -             |
+| consumer.*                 | string | no       | -             |
+| rowtime.field              | string | no       | -             |
+| watermark                  | long   | no       | -             |
+| offset.reset               | string | no       | -             |
+| common-options             | string | no       | -             |
+
+</TabItem>
+</Tabs>
+
+### topics [string]
+
+`Kafka topic` name. If there are multiple `topics`, use `,` to split, for example: `"tpc1,tpc2"`
+
+### consumer.group.id [string]
+
+`Kafka consumer group id`, used to distinguish different consumer groups
+
+### consumer.bootstrap.servers [string]
+
+`Kafka` cluster address, separated by `,`
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+</TabItem>
+<TabItem value="flink">
+
+### format.type [string]
+
+Currently supports three formats
+
+- json
+- csv
+- avro
+
+### format.* [string]
+
+The `csv` format uses this parameter to set the delimiter and other format options. For example, to set the column delimiter to `\t` , use `format.field-delimiter=\\t`
+
+### schema [string]
+
+- csv
+    - The `schema` of `csv` is a string of `jsonArray` , such as `"[{\"field\":\"name\",\"type\":\"string\"},{\"field\":\"age\",\"type\":\"int\"}]"` .
+
+- json
+    - The `schema` parameter of `json` is to provide a `json string` of the original data, and the `schema` can be automatically generated, but the original data with the most complete content needs to be provided, otherwise the fields will be lost.
+
+- avro
+    - The `schema` parameter of `avro` is to provide a standard `avro schema JSON string` , such as `{\"name\":\"test\",\"type\":\"record\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"age\",\"type\":\"long\"},{\"name\":\"addrs\",\"type\":{\"name\":\"addrs\",\"type\":\"record\",\"fields\":[{\"name\":\"province\",\"type\":\"string\"},{\"name\":\"city\",\"type\":\"string\"}]}}]}`
+
+- To learn more about how the `Avro Schema JSON string` should be defined, please refer to: https://avro.apache.org/docs/current/spec.html
+
+### rowtime.field [string]
+
+Extract timestamp using current configuration field for flink event time watermark
+
+### watermark [long]
+
+Sets a built-in watermark strategy for rowtime.field attributes which are out-of-order by a bounded time interval. Emits watermarks which are the maximum observed timestamp minus the specified delay.
+
+### offset.reset [string]
+
+The consumer's initial `offset`; it only takes effect for new consumers. There are three modes:
+
+- latest
+    - Start consumption from the latest offset
+- earliest
+    - Start consumption from the earliest offset
+- specific
+    - Start consumption from the specified `offset`; in this case the `start offset` of each partition must be specified, for example `offset.reset.specific="{0:111,1:123}"`
+
+</TabItem>
+</Tabs>
+
+### consumer.* [string]
+
+In addition to the above necessary parameters that must be specified by the `Kafka consumer` client, users can also specify multiple `consumer` client non-mandatory parameters, covering [all consumer parameters specified in the official Kafka document](https://kafka.apache.org/documentation.html#consumerconfigs).
+
+The way to specify parameters is to add the prefix `consumer.` to the original parameter name. For example, the way to specify `auto.offset.reset` is: `consumer.auto.offset.reset = latest` . If these non-essential parameters are not specified, they will use the default values given in the official Kafka documentation.
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+kafkaStream {
+    topics = "seatunnel"
+    consumer.bootstrap.servers = "localhost:9092"
+    consumer.group.id = "seatunnel_group"
+}
+```
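+
+Extra consumer client options can be passed with the `consumer.` prefix; a sketch where the added options are only illustrative:
+
+```bash
+kafkaStream {
+    topics = "seatunnel"
+    consumer.bootstrap.servers = "localhost:9092"
+    consumer.group.id = "seatunnel_group"
+    # any Kafka consumer option can be added with the consumer. prefix (illustrative values)
+    consumer.auto.offset.reset = "latest"
+    consumer.max.poll.records = "500"
+}
+```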
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+KafkaTableStream {
+    consumer.bootstrap.servers = "127.0.0.1:9092"
+    consumer.group.id = "seatunnel5"
+    topics = test
+    result_table_name = test
+    format.type = csv
+    schema = "[{\"field\":\"name\",\"type\":\"string\"},{\"field\":\"age\",\"type\":\"int\"}]"
+    format.field-delimiter = ";"
+    format.allow-comments = "true"
+    format.ignore-parse-errors = "true"
+}
+```
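+
+To control where consumption starts, `offset.reset` can be added; a sketch with placeholder per-partition offsets:
+
+```bash
+KafkaTableStream {
+    consumer.bootstrap.servers = "127.0.0.1:9092"
+    consumer.group.id = "seatunnel_offset"
+    topics = test
+    result_table_name = test
+    format.type = csv
+    schema = "[{\"field\":\"name\",\"type\":\"string\"},{\"field\":\"age\",\"type\":\"int\"}]"
+    format.field-delimiter = ";"
+    # start each partition from a specified offset (placeholder values)
+    offset.reset = specific
+    offset.reset.specific = "{0:111,1:123}"
+}
+```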
+
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Kudu.md b/versioned_docs/version-2.2.0-beta/connector/source/Kudu.md
new file mode 100644
index 000000000..431cd6fd5
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Kudu.md
@@ -0,0 +1,45 @@
+# Kudu
+
+> Kudu source connector
+
+## Description
+
+Read data from Kudu.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Kudu
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| kudu_master    | string | yes      | -             |
+| kudu_table     | string | yes      | -             |
+| common-options | string | no       | -             |
+
+### kudu_master [string]
+
+The address of the Kudu master, e.g. `master:7051`
+
+### kudu_table [string]
+
+The name of the Kudu table to read
+
+### common options [string]
+
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+```bash
+kudu {
+    kudu_master = "master:7051"
+    kudu_table = "impala::test_db.test_table"
+    result_table_name = "kudu_result_table"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/MongoDB.md b/versioned_docs/version-2.2.0-beta/connector/source/MongoDB.md
new file mode 100644
index 000000000..83de599a2
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/MongoDB.md
@@ -0,0 +1,64 @@
+# MongoDb
+
+> MongoDb source connector
+
+## Description
+
+Read data from MongoDB.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: MongoDb
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                  | type   | required | default value |
+|-----------------------| ------ |----------|---------------|
+| readconfig.uri        | string | yes      | -             |
+| readconfig.database   | string | yes      | -             |
+| readconfig.collection | string | yes      | -             |
+| readconfig.*          | string | no       | -             |
+| schema                | string | no       | -             |
+| common-options        | string | yes      | -             |
+
+### readconfig.uri [string]
+
+MongoDB uri
+
+### readconfig.database [string]
+
+MongoDB database
+
+### readconfig.collection [string]
+
+MongoDB collection
+
+### readconfig.* [string]
+
+More parameters can be configured here; see the Input Configuration section of [MongoDB Configuration](https://docs.mongodb.com/spark-connector/current/configuration/) for details. Specify these parameters by adding the prefix `readconfig.` to the original parameter name. For example, the way to set `spark.mongodb.input.partitioner` is `readconfig.spark.mongodb.input.partitioner="MongoPaginateBySizePartitioner"` . If you do not specify these optional parameters, their default values will be used.
+
+### schema [string]
+
+Because `MongoDB` does not have the concept of `schema`, when Spark reads `MongoDB` it samples the data and infers the `schema` . This process can be slow and may be inaccurate; specify this parameter manually to avoid these problems. `schema` is a `json` string, such as `{\"name\":\"string\",\"age\":\"integer\",\"addrs\":{\"country\":\"string\",\"city\":\"string\"}}`
+
+### common options [string]
+
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+```bash
+mongodb {
+    readconfig.uri = "mongodb://username:password@127.0.0.1:27017/mypost"
+    readconfig.database = "mydatabase"
+    readconfig.collection = "mycollection"
+    readconfig.spark.mongodb.input.partitioner = "MongoPaginateBySizePartitioner"
+    schema="{\"name\":\"string\",\"age\":\"integer\",\"addrs\":{\"country\":\"string\",\"city\":\"string\"}}"
+    result_table_name = "mongodb_result_table"
+}
+```
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Phoenix.md b/versioned_docs/version-2.2.0-beta/connector/source/Phoenix.md
new file mode 100644
index 000000000..b44878155
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Phoenix.md
@@ -0,0 +1,60 @@
+# Phoenix
+
+> Phoenix source connector
+
+## Description
+
+Read data from an external data source through `Phoenix` , compatible with `Kerberos` authentication
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Phoenix
+* [ ] Flink
+
+:::
+
+## Options
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| zk-connect     | string | yes      | -             |
+| table          | string | yes      | -             |
+| columns        | string | no       | -             |
+| tenantId       | string | no       | -             |
+| predicate      | string | no       | -             |
+| common-options | string | no       | -             |
+
+### zk-connect [string]
+
+Connection string, configuration example: `host1:2181,host2:2181,host3:2181 [/znode]`
+
+### table [string]
+
+Source data table name
+
+### columns [string-list]
+
+The list of column names to read. Set it to `[]` to read all columns; optional configuration item, default is `[]` .
+
+### tenantId [string]
+
+Tenant ID, optional configuration item
+
+### predicate [string]
+
+Conditional filter string configuration, optional configuration items
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+## Example
+
+```bash
+Phoenix {
+  zk-connect = "host1:2181,host2:2181,host3:2181"
+  table = "table22"
+  result_table_name = "tmp1"
+}
+```
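+
+A sketch that also narrows the read with `columns`, `tenantId` and `predicate`; the values below are placeholders:
+
+```bash
+Phoenix {
+  zk-connect = "host1:2181,host2:2181,host3:2181"
+  table = "table22"
+  # placeholder column list, tenant id and filter
+  columns = ["id", "name"]
+  tenantId = "my_tenant"
+  predicate = "name = 'seatunnel'"
+  result_table_name = "tmp1"
+}
+```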
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Redis.md b/versioned_docs/version-2.2.0-beta/connector/source/Redis.md
new file mode 100644
index 000000000..026fbfb11
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Redis.md
@@ -0,0 +1,95 @@
+# Redis
+
+> Redis source connector
+
+## Description
+
+Read data from Redis.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: Redis
+* [ ] Flink
+
+:::
+
+## Options
+
+| name                | type    | required | default value |
+|---------------------|---------|----------|---------------|
+| host                | string  | no       | "localhost"   |
+| port                | int     | no       | 6379          |
+| auth                | string  | no       |               |
+| db_num              | int     | no       | 0             |
+| keys_or_key_pattern | string  | yes      |               |
+| partition_num       | int     | no       | 3             |
+| data_type           | string  | no       | "KV"          |
+| timeout             | int     | no       | 2000          |
+| common-options      | string  | yes      |               |
+| is_self_achieved    | boolean | no       | false         |
+
+### host [string]
+
+Redis server address, default `"localhost"`
+
+### port [int]
+
+Redis service port, default `6379`
+
+### auth [string]
+
+Redis authentication password
+
+### db_num [int]
+
+Redis database index ID. It is connected to db `0` by default
+
+### keys_or_key_pattern [string]
+
+Redis key or key pattern; fuzzy matching is supported
+
+### partition_num [int]
+
+Number of Redis shards. The default is `3`
+
+### data_type [string]
+
+Redis data type, one of `KV`, `HASH`, `LIST`, `SET`, `ZSET`; default `KV`
+
+### timeout [int]
+
+Redis connection timeout in milliseconds; default `2000`
+
+### common options [string]
+
+Source Plugin common parameters, refer to [Source Plugin](common-options.mdx) for details
+
+### is_self_achieved [boolean]
+
+Set this to `true` if Redis is accessed through a self-built Redis proxy that does not support the Redis `info Replication` command
+
+## Example
+
+```bash
+redis {
+  host = "localhost"
+  port = 6379
+  auth = "myPassword"
+  db_num = 1
+  keys_or_key_pattern = "*"
+  partition_num = 20
+  data_type = "HASH"
+  result_table_name = "hash_result_table"
+  is_self_achieved = false
+}
+```
+
+> The returned table is a data table in which both fields are strings
+
+| raw_key   | raw_message |
+| --------- | ----------- |
+| keys      | xxx         |
+| my_keys   | xxx         |
+| keys_mine | xxx         |
diff --git a/versioned_docs/version-2.2.0-beta/connector/source/Socket.mdx b/versioned_docs/version-2.2.0-beta/connector/source/Socket.mdx
new file mode 100644
index 000000000..3588f5f33
--- /dev/null
+++ b/versioned_docs/version-2.2.0-beta/connector/source/Socket.mdx
@@ -0,0 +1,106 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Socket
+
+> Socket source connector
+
+## Description
+
+`SocketStream` is mainly used to receive `Socket` data and to quickly verify `Spark streaming` computations.
+
+:::tip
+
+Engine Supported and plugin name
+
+* [x] Spark: SocketStream
+* [x] Flink: SocketStream
+
+:::
+
+## Options
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| host           | string | no       | localhost     |
+| port           | number | no       | 9999          |
+| common-options | string | yes      | -             |
+
+### host [string]
+
+socket server hostname
+
+### port [number]
+
+socket server port
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+</TabItem>
+<TabItem value="flink">
+
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| host           | string | no       | localhost     |
+| port           | int    | no       | 9999          |
+| common-options | string | no       | -             |
+
+### host [string]
+
+socket server hostname
+
+### port [int]
+
+socket server port
+
+### common options [string]
+
+Source plugin common parameters, please refer to [Source Plugin](common-options.mdx) for details
+
+</TabItem>
+</Tabs>
+
+## Examples
+
+<Tabs
+    groupId="engine-type"
+    defaultValue="spark"
+    values={[
+        {label: 'Spark', value: 'spark'},
+        {label: 'Flink', value: 'flink'},
+    ]}>
+<TabItem value="spark">
+
+```bash
+source {
+    SocketStream {
+      port = 9999
+    }
+}
+```
+
+</TabItem>
+<TabItem value="flink">
+
+```bash
+source {
... 3372 lines suppressed ...