Posted to commits@seatunnel.apache.org by ga...@apache.org on 2022/10/05 05:50:55 UTC

[incubator-seatunnel] branch dev updated: connector V2 docs (#2964)

This is an automated email from the ASF dual-hosted git repository.

gaojun2048 pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new f68059834 connector V2 docs (#2964)
f68059834 is described below

commit f68059834fbaec850660e6b677364255163892c9
Author: TaoZex <45...@users.noreply.github.com>
AuthorDate: Wed Oct 5 13:50:51 2022 +0800

    connector V2 docs (#2964)
---
 docs/en/connector-v2/sink/Assert.md            |  6 ++++-
 docs/en/connector-v2/sink/Clickhouse.md        |  4 +--
 docs/en/connector-v2/sink/ClickhouseFile.md    |  6 ++---
 docs/en/connector-v2/sink/Console.md           | 10 ++++++--
 docs/en/connector-v2/sink/Datahub.md           | 21 ++++++++++------
 docs/en/connector-v2/sink/Elasticsearch.md     |  6 ++++-
 docs/en/connector-v2/sink/Email.md             | 20 +++++++++------
 docs/en/connector-v2/sink/Enterprise-WeChat.md | 15 +++++++----
 docs/en/connector-v2/sink/Feishu.md            | 13 +++++++---
 docs/en/connector-v2/sink/FtpFile.md           |  5 ++++
 docs/en/connector-v2/sink/Greenplum.md         |  6 ++++-
 docs/en/connector-v2/sink/HdfsFile.md          | 22 +++++++++-------
 docs/en/connector-v2/sink/Hive.md              |  9 +++++--
 docs/en/connector-v2/sink/Http.md              |  6 ++++-
 docs/en/connector-v2/sink/IoTDB.md             |  5 ++--
 docs/en/connector-v2/sink/Jdbc.md              |  5 ++++
 docs/en/connector-v2/sink/Kudu.md              |  9 +++++--
 docs/en/connector-v2/sink/LocalFile.md         |  7 +++++-
 docs/en/connector-v2/sink/MongoDB.md           | 15 +++++++----
 docs/en/connector-v2/sink/Neo4j.md             |  6 ++++-
 docs/en/connector-v2/sink/OssFile.md           | 13 +++++++---
 docs/en/connector-v2/sink/Phoenix.md           |  4 +++
 docs/en/connector-v2/sink/Redis.md             | 21 ++++++++++------
 docs/en/connector-v2/sink/Sentry.md            |  8 +++++-
 docs/en/connector-v2/sink/Socket.md            | 15 +++++++----
 docs/en/connector-v2/sink/dingtalk.md          | 13 +++++++---
 docs/en/connector-v2/source/Clickhouse.md      | 13 +++++++---
 docs/en/connector-v2/source/FakeSource.md      | 26 +++++++++++--------
 docs/en/connector-v2/source/FtpFile.md         | 25 +++++++++++-------
 docs/en/connector-v2/source/Greenplum.md       |  8 +++++-
 docs/en/connector-v2/source/HdfsFile.md        | 23 ++++++++++++-----
 docs/en/connector-v2/source/Hive.md            | 20 ++++++++++++---
 docs/en/connector-v2/source/Http.md            | 34 ++++++++++++++-----------
 docs/en/connector-v2/source/Hudi.md            | 21 ++++++++++------
 docs/en/connector-v2/source/Iceberg.md         | 35 +++++++++++++++-----------
 docs/en/connector-v2/source/IoTDB.md           |  5 ++++
 docs/en/connector-v2/source/Jdbc.md            |  5 ++++
 docs/en/connector-v2/source/Kudu.md            |  9 +++++--
 docs/en/connector-v2/source/LocalFile.md       | 17 +++++++++----
 docs/en/connector-v2/source/MongoDB.md         |  6 +++--
 docs/en/connector-v2/source/OssFile.md         |  7 ++++++
 docs/en/connector-v2/source/Phoenix.md         |  4 +++
 docs/en/connector-v2/source/Redis.md           | 23 ++++++++++-------
 docs/en/connector-v2/source/Socket.md          | 13 +++++++---
 docs/en/connector-v2/source/common-options.md  |  7 ------
 docs/en/connector-v2/source/pulsar.md          | 12 +++++++++
 46 files changed, 402 insertions(+), 181 deletions(-)

diff --git a/docs/en/connector-v2/sink/Assert.md b/docs/en/connector-v2/sink/Assert.md
index 5a1612126..30b1072e1 100644
--- a/docs/en/connector-v2/sink/Assert.md
+++ b/docs/en/connector-v2/sink/Assert.md
@@ -21,7 +21,7 @@ A flink sink plugin which can assert illegal data by user defined rules
 |rules.field_value              | ConfigList  | no       | -             |
 |rules.field_value.rule_type    | string      | no       | -             |
 |rules.field_value.rule_value   | double      | no       | -             |
-
+| common-options                |             | no       | -             |
 
 ### rules [ConfigList]
 
@@ -52,6 +52,10 @@ The following rules are supported for now
 
 the value related to rule type
 
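+For example, a rule list asserting that an `age` field is non-null and within a range might look like the following sketch (the field name, type and bounds are illustrative):
+
+```hocon
+rules = [
+  {
+    field_name = age
+    field_type = int
+    field_value = [
+      # NOT_NULL needs no rule_value
+      {rule_type = NOT_NULL},
+      {rule_type = MIN, rule_value = 0},
+      {rule_type = MAX, rule_value = 120}
+    ]
+  }
+]
+```
+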
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 
 ## Example
 the whole config obey with `hocon` style
diff --git a/docs/en/connector-v2/sink/Clickhouse.md b/docs/en/connector-v2/sink/Clickhouse.md
index 32ee3b5f8..154d3a6a8 100644
--- a/docs/en/connector-v2/sink/Clickhouse.md
+++ b/docs/en/connector-v2/sink/Clickhouse.md
@@ -34,7 +34,7 @@ Write data to Clickhouse can also be done using JDBC
 | bulk_size      | string | no       | 20000         |
 | split_mode     | string | no       | false         |
 | sharding_key   | string | no       | -             |
-| common-options | string | no       | -             |
+| common-options |        | no       | -             |
 
 ### host [string]
 
@@ -82,7 +82,7 @@ When use split_mode, which node to send data to is a problem, the default is ran
 'sharding_key' parameter can be used to specify the field for the sharding algorithm. This option only
 works when 'split_mode' is true.
 
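+For example, a sketch of a sink block that spreads writes across shards by a field (host, table and the `age` field are illustrative placeholders):
+
+```hocon
+Clickhouse {
+  host = "localhost:8123"
+  database = "default"
+  table = "visits"
+  username = "default"
+  password = ""
+  split_mode = true
+  # sharding_key only takes effect when split_mode is true
+  sharding_key = "age"
+}
+```
+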
-### common options [string]
+### common options
 
 Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
 
diff --git a/docs/en/connector-v2/sink/ClickhouseFile.md b/docs/en/connector-v2/sink/ClickhouseFile.md
index 90e196c92..5e438e88b 100644
--- a/docs/en/connector-v2/sink/ClickhouseFile.md
+++ b/docs/en/connector-v2/sink/ClickhouseFile.md
@@ -22,7 +22,7 @@ Write data to Clickhouse can also be done using JDBC
 ## Options
 
 | name                   | type    | required | default value |
-|------------------------|---------|----------|---------------|
+| ---------------------- | ------- | -------- | ------------- |
 | host                   | string  | yes      | -             |
 | database               | string  | yes      | -             |
 | table                  | string  | yes      | -             |
@@ -36,7 +36,7 @@ Write data to Clickhouse can also be done using JDBC
 | node_pass.node_address | string  | no       | -             |
 | node_pass.username     | string  | no       | "root"        |
 | node_pass.password     | string  | no       | -             |
-| common-options         | string  | no       | -             |
+| common-options         |         | no       | -             |
 
 ### host [string]
 
@@ -94,7 +94,7 @@ The username corresponding to the clickhouse server, default root user.
 
 The password corresponding to the clickhouse server.
 
-### common options [string]
+### common options
 
 Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
 
diff --git a/docs/en/connector-v2/sink/Console.md b/docs/en/connector-v2/sink/Console.md
index 9635d0487..db2613250 100644
--- a/docs/en/connector-v2/sink/Console.md
+++ b/docs/en/connector-v2/sink/Console.md
@@ -14,8 +14,14 @@ Used to send data to Console. Both support streaming and batch mode.
 
 ##  Options
 
-| name | type   | required | default value |
-| --- |--------|----------|---------------|
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| common-options |        | no       | -             |
+
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/Datahub.md b/docs/en/connector-v2/sink/Datahub.md
index 800c2a54b..f31936bf8 100644
--- a/docs/en/connector-v2/sink/Datahub.md
+++ b/docs/en/connector-v2/sink/Datahub.md
@@ -14,14 +14,15 @@ A sink plugin which use send message to datahub
 ## Options
 
 | name       | type   | required | default value |
-|------------|--------|----------|---------------|
-| endpoint   | string | yes      | -             |
-| accessId   | string | yes      | -             |
-| accessKey  | string | yes      | -             |
-| project    | string | yes      | -             |
-| topic      | string | yes      | -             |
-| timeout    | int    | yes      | -             |
-| retryTimes | int    | yes      | -             |
+|--------------- |--------|----------|---------------|
+| endpoint       | string | yes      | -             |
+| accessId       | string | yes      | -             |
+| accessKey      | string | yes      | -             |
+| project        | string | yes      | -             |
+| topic          | string | yes      | -             |
+| timeout        | int    | yes      | -             |
+| retryTimes     | int    | yes      | -             |
+| common-options |        | no       | -             |
 
 ### url [string]
 
@@ -51,6 +52,10 @@ the max connection timeout (int)
 
 the max retry times when your client put record failed  (int)
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/sink/Elasticsearch.md b/docs/en/connector-v2/sink/Elasticsearch.md
index fe8198f50..cfa4bd21e 100644
--- a/docs/en/connector-v2/sink/Elasticsearch.md
+++ b/docs/en/connector-v2/sink/Elasticsearch.md
@@ -28,7 +28,7 @@ Engine Supported
 | password       | string | no       |               | 
 | max_retry_size | int    | no       | 3             |
 | max_batch_size | int    | no       | 10            |
-
+| common-options |        | no       | -             |
 
 
 ### hosts [array]
@@ -53,6 +53,10 @@ one bulk request max try size
 ### max_batch_size [int]
 batch bulk doc max size
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Examples
 ```bash
 Elasticsearch {
diff --git a/docs/en/connector-v2/sink/Email.md b/docs/en/connector-v2/sink/Email.md
index cc74cf495..992b06a2e 100644
--- a/docs/en/connector-v2/sink/Email.md
+++ b/docs/en/connector-v2/sink/Email.md
@@ -17,15 +17,15 @@ Send the data as a file to email.
 
 | name                     | type    | required | default value |
 |--------------------------|---------|----------|---------------|
-| email_from_address             | string  | yes      | -             |
-| email_to_address               | string  | yes      | -             |
+| email_from_address       | string  | yes      | -             |
+| email_to_address         | string  | yes      | -             |
 | email_host               | string  | yes      | -             |
-| email_transport_protocol             | string  | yes      | -             |
-| email_smtp_auth               | string  | yes      | -             |
-| email_authorization_code               | string  | yes      | -             |
-| email_message_headline             | string  | yes      | -             |
-| email_message_content               | string  | yes      | -             |
-
+| email_transport_protocol | string  | yes      | -             |
+| email_smtp_auth          | string  | yes      | -             |
+| email_authorization_code | string  | yes      | -             |
+| email_message_headline   | string  | yes      | -             |
+| email_message_content    | string  | yes      | -             |
+| common-options           |         | no       | -             |
 
 ### email_from_address [string]
 
@@ -59,6 +59,10 @@ The subject line of the entire message.
 
 The body of the entire message.
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.
+
 
 ## Example
 
diff --git a/docs/en/connector-v2/sink/Enterprise-WeChat.md b/docs/en/connector-v2/sink/Enterprise-WeChat.md
index 28ec03059..86e2097f4 100644
--- a/docs/en/connector-v2/sink/Enterprise-WeChat.md
+++ b/docs/en/connector-v2/sink/Enterprise-WeChat.md
@@ -20,11 +20,12 @@ A sink plugin which use Enterprise WeChat robot send message
 
 ##  Options
 
-| name | type   | required | default value |
-| --- |--------|----------| --- |
-| url | String | Yes      | - |
-| mentioned_list | array | No       | - |
-| mentioned_mobile_list | array | No       | - |
+| name                  | type   | required | default value |
+| --------------------- |--------|----------| ------------- |
+| url                   | String | Yes      | -             |
+| mentioned_list        | array  | No       | -             |
+| mentioned_mobile_list | array  | No       | -             |
+| common-options        |        | No       | -             |
 
 ### url [string]
 
@@ -38,6 +39,10 @@ A list of userids to remind the specified members in the group (@ a member), @ a
 
 Mobile phone number list, remind the group member corresponding to the mobile phone number (@ a member), @ all means remind everyone
 
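+A sketch that mentions two members by userid and everyone via the mobile list (the webhook key, userids and phone number are placeholders):
+
+```hocon
+WeChat {
+  url = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=your-webhook-key"
+  mentioned_list = ["wangqing", "@all"]
+  mentioned_mobile_list = ["13800001111", "@all"]
+}
+```
+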
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/Feishu.md b/docs/en/connector-v2/sink/Feishu.md
index 311a5d7fe..501fc14e9 100644
--- a/docs/en/connector-v2/sink/Feishu.md
+++ b/docs/en/connector-v2/sink/Feishu.md
@@ -17,10 +17,11 @@ Used to launch feishu web hooks using data.
 
 ##  Options
 
-| name | type   | required | default value |
-| --- |--------| --- | --- |
-| url | String | Yes | - |
-| headers | Map    | No | - |
+| name           | type   | required | default value |
+| -------------- |--------| -------- | ------------- |
+| url            | String | Yes      | -             |
+| headers        | Map    | No       | -             |
+| common-options |        | No       | -             |
 
 ### url [string]
 
@@ -30,6 +31,10 @@ Feishu webhook url
 
 Http request headers
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/FtpFile.md b/docs/en/connector-v2/sink/FtpFile.md
index 783346cb3..f8d0542c2 100644
--- a/docs/en/connector-v2/sink/FtpFile.md
+++ b/docs/en/connector-v2/sink/FtpFile.md
@@ -31,6 +31,7 @@ Output data to Ftp .
 | sink_columns                     | array   | no       | When this parameter is empty, all fields are sink columns |
 | is_enable_transaction            | boolean | no       | true                                                      |
 | save_mode                        | string  | no       | "error"                                                   |
+| common-options                   |         | no       | -                                                         |
 
 ### host [string]
 
@@ -126,6 +127,10 @@ If `is_enable_transaction` is `true`, Basically, we won't encounter the same fil
 
 For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.
+
 ## Example
 
 For text file format
diff --git a/docs/en/connector-v2/sink/Greenplum.md b/docs/en/connector-v2/sink/Greenplum.md
index 91af690d5..d011e972c 100644
--- a/docs/en/connector-v2/sink/Greenplum.md
+++ b/docs/en/connector-v2/sink/Greenplum.md
@@ -29,4 +29,8 @@ Warn: for license compliance, if you use `GreenplumDriver` the have to provide G
 
 ### url [string]
 
-The URL of the JDBC connection. if you use postgresql driver the value is `jdbc:postgresql://${yous_host}:${yous_port}/${yous_database}`, or you use greenplum driver the value is `jdbc:pivotal:greenplum://${yous_host}:${yous_port};DatabaseName=${yous_database}`
\ No newline at end of file
+The URL of the JDBC connection. If you use the postgresql driver the value is `jdbc:postgresql://${yous_host}:${yous_port}/${yous_database}`; if you use the greenplum driver the value is `jdbc:pivotal:greenplum://${yous_host}:${yous_port};DatabaseName=${yous_database}`
+
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
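+
+## Example
+
+A minimal sketch of writing to Greenplum (connection values and the insert statement are placeholders; `driver`, `user`, `password` and `query` are assumed to follow the Jdbc sink options):
+
+```hocon
+Greenplum {
+  url = "jdbc:postgresql://localhost:5432/mydb"
+  driver = "org.postgresql.Driver"
+  user = "gpadmin"
+  password = "gpadmin"
+  query = "insert into sink_table(name, age) values(?, ?)"
+}
+```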
\ No newline at end of file
diff --git a/docs/en/connector-v2/sink/HdfsFile.md b/docs/en/connector-v2/sink/HdfsFile.md
index 928156760..ec272c7a4 100644
--- a/docs/en/connector-v2/sink/HdfsFile.md
+++ b/docs/en/connector-v2/sink/HdfsFile.md
@@ -25,21 +25,21 @@ By default, we use 2PC commit to ensure `exactly-once`
 In order to use this connector, you must ensure your spark/flink cluster has already integrated hadoop. The tested hadoop version is 2.x.
 
 | name                             | type   | required | default value                                           |
 |----------------------------------| ------ | -------- |---------------------------------------------------------|
 | fs.defaultFS                     | string | yes      | -                                                       |
 | path                             | string | yes      | -                                                       |
 | file_name_expression             | string | no       | "${transactionId}"                                      |
 | file_format                      | string | no       | "text"                                                  |
 | filename_time_format             | string | no       | "yyyy.MM.dd"                                            |
 | field_delimiter                  | string | no       | '\001'                                                  |
 | row_delimiter                    | string | no       | "\n"                                                    |
 | partition_by                     | array  | no       | -                                                       |
 | partition_dir_expression         | string | no       | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/"              |
 | is_partition_field_write_in_file | boolean| no       | false                                                   |
 | sink_columns                     | array  | no       | When this parameter is empty, all fields are sink columns |
 | is_enable_transaction            | boolean| no       | true                                                    |
 | save_mode                        | string | no       | "error"                                                 |
+| common-options                   |        | no       | -                                                       |
+
 ### fs.defaultFS [string]
 
 The hadoop cluster address that start with `hdfs://`, for example: `hdfs://hadoopcluster`
@@ -123,6 +123,10 @@ If `is_enable_transaction` is `true`, Basically, we won't encounter the same fil
 
 For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 For text file format
diff --git a/docs/en/connector-v2/sink/Hive.md b/docs/en/connector-v2/sink/Hive.md
index e7e0a8f78..19b287396 100644
--- a/docs/en/connector-v2/sink/Hive.md
+++ b/docs/en/connector-v2/sink/Hive.md
@@ -26,12 +26,13 @@ By default, we use 2PC commit to ensure `exactly-once`
 
 | name                  | type   | required                                    | default value                                                 |
 |-----------------------| ------ |---------------------------------------------| ------------------------------------------------------------- |
 | table_name            | string | yes                                         | -                                                             |
 | metastore_uri         | string | yes                                         | -                                                             |
 | partition_by          | array  | required if hive sink table have partitions | -                                                             |
 | sink_columns          | array  | no                                          | When this parameter is empty, all fields are sink columns     |
 | is_enable_transaction | boolean| no                                          | true                                                          |
 | save_mode             | string | no                                          | "append"                                                      |
+| common-options        |        | no                                          | -                                                             |
 
 ### table_name [string]
 
@@ -62,6 +63,10 @@ Storage mode, we need support `overwrite` and `append`. `append` is now supporte
 
 Streaming jobs do not support `overwrite`.
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 ```bash
diff --git a/docs/en/connector-v2/sink/Http.md b/docs/en/connector-v2/sink/Http.md
index 2a5cb4385..ec643f8ec 100644
--- a/docs/en/connector-v2/sink/Http.md
+++ b/docs/en/connector-v2/sink/Http.md
@@ -25,7 +25,7 @@ Used to launch web hooks using data.
 | retry                              | int    | No       | -             |
 | retry_backoff_multiplier_ms        | int    | No       | 100           |
 | retry_backoff_max_ms               | int    | No       | 10000         |
-
+| common-options                     |        | No       | -             |
 
 ### url [String]
 
 The retry-backoff time (millis) multiplier if the http request fails
 
 The maximum retry-backoff time (millis) if the http request fails
 
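+A sketch of a webhook sink with retry enabled (the url is a placeholder):
+
+```hocon
+Http {
+  url = "http://localhost:8080/hook"
+  retry = 3
+  retry_backoff_multiplier_ms = 100
+  retry_backoff_max_ms = 10000
+}
+```
+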
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/IoTDB.md b/docs/en/connector-v2/sink/IoTDB.md
index 3affa6f8c..2d8fa45ca 100644
--- a/docs/en/connector-v2/sink/IoTDB.md
+++ b/docs/en/connector-v2/sink/IoTDB.md
@@ -42,8 +42,7 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
 | zone_id                       | string            | no       | -                                 |
 | enable_rpc_compression        | boolean           | no       | -                                 |
 | connection_timeout_in_ms      | int               | no       | -                                 |
-| common-options                | string            | no       | -                                 |
-
+| common-options                |                   | no       | -                                 |
+
 ### node_urls [list]
 
 `IoTDB` cluster address, the format is `["host:port", ...]`
@@ -114,7 +113,7 @@ Enable rpc compression in `IoTDB` client
 
 The maximum time (in ms) to wait when connecting to `IoTDB`
 
-### common options [string]
+### common options
 
 Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
 
diff --git a/docs/en/connector-v2/sink/Jdbc.md b/docs/en/connector-v2/sink/Jdbc.md
index 5d8385f21..ee8588957 100644
--- a/docs/en/connector-v2/sink/Jdbc.md
+++ b/docs/en/connector-v2/sink/Jdbc.md
@@ -33,6 +33,7 @@ support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
 | xa_data_source_class_name    | String  | No       | -             |
 | max_commit_attempts          | Int     | No       | 3             |
 | transaction_timeout_sec      | Int     | No       | -1            |
+| common-options               |         | No       | -             |
 
 ### driver [string]
 
@@ -93,6 +94,10 @@ The number of retries for transaction commit failures
 The timeout after the transaction is opened, the default is -1 (never timeout). Note that setting the timeout may affect
 exactly-once semantics
 
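+As a sketch, an exactly-once sink combining these options might look like the following (connection values, the query and the XA data source class are illustrative placeholders for your own database):
+
+```hocon
+Jdbc {
+  url = "jdbc:postgresql://localhost:5432/test"
+  driver = "org.postgresql.Driver"
+  user = "root"
+  password = "123456"
+  query = "insert into test_table(name, age) values(?, ?)"
+  is_exactly_once = "true"
+  # XA variant of the postgresql data source
+  xa_data_source_class_name = "org.postgresql.xa.PGXADataSource"
+  max_commit_attempts = 3
+  transaction_timeout_sec = 300
+}
+```
+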
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## tips
 
 In the case of is_exactly_once = "true", Xa transactions are used. This requires database support, and some databases require some setup:
diff --git a/docs/en/connector-v2/sink/Kudu.md b/docs/en/connector-v2/sink/Kudu.md
index ae08b3afa..45a4b7261 100644
--- a/docs/en/connector-v2/sink/Kudu.md
+++ b/docs/en/connector-v2/sink/Kudu.md
@@ -17,9 +17,10 @@ Write data to Kudu.
 
 | name                     | type    | required | default value |
 |--------------------------|---------|----------|---------------|
-| kudu_master             | string  | yes      | -             |
+| kudu_master              | string  | yes      | -             |
 | kudu_table               | string  | yes      | -             |
-| save_mode               | string  | yes      | -             |
+| save_mode                | string  | yes      | -             |
+| common-options           |         | no       | -             |
 
 ### kudu_master [string]
 
@@ -33,6 +34,10 @@ Write data to Kudu.
 
 Storage mode; we need to support `overwrite` and `append`. `append` is now supported.
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.
+
 ## Example
 
 ```bash
diff --git a/docs/en/connector-v2/sink/LocalFile.md b/docs/en/connector-v2/sink/LocalFile.md
index b9ddd3f39..2feb3359f 100644
--- a/docs/en/connector-v2/sink/LocalFile.md
+++ b/docs/en/connector-v2/sink/LocalFile.md
@@ -36,6 +36,7 @@ By default, we use 2PC commit to ensure `exactly-once`
 | sink_columns                      | array  | no       | When this parameter is empty, all fields are sink columns |
 | is_enable_transaction             | boolean| no       | true                                                |
 | save_mode                         | string | no       | "error"                                             |
+| common-options                    |        | no       | -                                                  |
 
 ### path [string]
 
@@ -114,7 +115,11 @@ Storage mode, currently supports `overwrite`. This means we will delete the old
 
 If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
 
-For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes).
+
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.
 
 ## Example
 
diff --git a/docs/en/connector-v2/sink/MongoDB.md b/docs/en/connector-v2/sink/MongoDB.md
index 2768aa03c..20e211e0b 100644
--- a/docs/en/connector-v2/sink/MongoDB.md
+++ b/docs/en/connector-v2/sink/MongoDB.md
@@ -17,11 +17,12 @@ Write data to `MongoDB`
 
 ## Options
 
-| name       | type   | required | default value |
-|------------| ------ |----------| ------------- |
-| uri        | string | yes      | -             |
-| database   | string | yes      | -             |
-| collection | string | yes      | -             |
+| name           | type   | required | default value |
+|--------------- | ------ |----------| ------------- |
+| uri            | string | yes      | -             |
+| database       | string | yes      | -             |
+| collection     | string | yes      | -             |
+| common-options |        | no       | -             |
 
 ### uri [string]
 
@@ -35,6 +36,10 @@ database to write to mongoDB
 
 collection to write to mongoDB
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 ```bash
diff --git a/docs/en/connector-v2/sink/Neo4j.md b/docs/en/connector-v2/sink/Neo4j.md
index 519212b01..b25c57b4c 100644
--- a/docs/en/connector-v2/sink/Neo4j.md
+++ b/docs/en/connector-v2/sink/Neo4j.md
@@ -27,7 +27,7 @@ Write data to Neo4j.
 | queryParamPosition         | Object | Yes      | -             |
 | max_transaction_retry_time | Long   | No       | 30            |
 | max_connection_timeout     | Long   | No       | 30            |
-
+| common-options             |        | No       | -             |
 
 ### uri [string]
 The URI of the Neo4j database. Refer to a case: `neo4j://localhost:7687`
 Maximum transaction retry time (seconds); the transaction fails if this is exceeded.
 ### max_connection_timeout [long]
 The maximum amount of time to wait for a TCP connection to be established (seconds)
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 
 ## Example
 ```
diff --git a/docs/en/connector-v2/sink/OssFile.md b/docs/en/connector-v2/sink/OssFile.md
index c5a96aae1..07d08c627 100644
--- a/docs/en/connector-v2/sink/OssFile.md
+++ b/docs/en/connector-v2/sink/OssFile.md
@@ -37,12 +37,13 @@ By default, we use 2PC commit to ensure `exactly-once`
 | filename_time_format             | string | no      | "yyyy.MM.dd"                |
 | field_delimiter                  | string | no      | '\001'                      |
 | row_delimiter                    | string | no      | "\n"                        |
 | partition_by                     | array  | no      | -                           |
 | partition_dir_expression         | string | no      | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/" |
 | is_partition_field_write_in_file | boolean| no      | false                       |
 | sink_columns                     | array  | no      | When this parameter is empty, all fields are sink columns |
 | is_enable_transaction            | boolean| no      | true                        |
 | save_mode                        | string | no      | "error"                     |
+| common-options                   |        | no      | -                           |
 
 ### path [string]
 
@@ -137,7 +138,11 @@ Storage mode, currently supports `overwrite`. This means we will delete the old
 
 If `is_enable_transaction` is `true`, we basically won't encounter the same file name, because we will add the transaction id to the file name.
 
-For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes)
+For the specific meaning of each mode, see [save-modes](https://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes).
+
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details.
 
 ## Example
 
diff --git a/docs/en/connector-v2/sink/Phoenix.md b/docs/en/connector-v2/sink/Phoenix.md
index f7383daea..9275e36e1 100644
--- a/docs/en/connector-v2/sink/Phoenix.md
+++ b/docs/en/connector-v2/sink/Phoenix.md
@@ -25,6 +25,10 @@ if you use phoenix (thick) driver the value is `org.apache.phoenix.jdbc.PhoenixD
 ### url [string]
 if you use phoenix (thick) driver the value is `jdbc:phoenix:localhost:2182/hbase` or you use (thin) driver the value is `jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF`
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 use thick client driver
 ```
diff --git a/docs/en/connector-v2/sink/Redis.md b/docs/en/connector-v2/sink/Redis.md
index 550e89e9b..4059b2caf 100644
--- a/docs/en/connector-v2/sink/Redis.md
+++ b/docs/en/connector-v2/sink/Redis.md
@@ -13,14 +13,15 @@ Used to write data to Redis.
 
 ##  Options
 
-| name      | type   | required | default value |
-|-----------|--------|----------|---------------|
-| host      | string | yes      | -             |
-| port      | int    | yes      | -             |
-| key       | string | yes      | -             |
-| data_type | string | yes      | -             |
-| auth      | string | No       | -             |
-| format    | string | No       | json          |
+| name          | type   | required | default value |
+|-------------- |--------|----------|---------------|
+| host          | string | yes      | -             |
+| port          | int    | yes      | -             |
+| key           | string | yes      | -             |
+| data_type     | string | yes      | -             |
+| auth          | string | no       | -             |
+| format        | string | no       | json          |
+| common-options|        | no       | -             |
 
 ### host [string]
 
@@ -98,6 +99,10 @@ Connector will generate data as the following and write it to redis:
 
 ```
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/Sentry.md b/docs/en/connector-v2/sink/Sentry.md
index 1e64e8aab..92fad01b7 100644
--- a/docs/en/connector-v2/sink/Sentry.md
+++ b/docs/en/connector-v2/sink/Sentry.md
@@ -18,10 +18,12 @@ Write message to Sentry.
 | env                        | string  | no       | -             |
 | release                    | string  | no       | -             |
 | cacheDirPath               | string  | no       | -             |
-| enableExternalConfiguration | boolean | no       | -             |
+| enableExternalConfiguration| boolean | no       | -             |
 | maxCacheItems              | number  | no       | -             |
 | flushTimeoutMills          | number  | no       | -             |
 | maxQueueSize               | number  | no       | -             |
+| common-options             |         | no       | -             |
+
 ### dsn [string]
 
 The DSN tells the SDK where to send the events to.
@@ -47,6 +49,10 @@ Controls how many seconds to wait before flushing down. Sentry SDKs cache events
 ### maxQueueSize [number]
 Max queue size before flushing events/envelopes to the disk
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 ```
   Sentry {
diff --git a/docs/en/connector-v2/sink/Socket.md b/docs/en/connector-v2/sink/Socket.md
index 498cfa99d..718cbd65a 100644
--- a/docs/en/connector-v2/sink/Socket.md
+++ b/docs/en/connector-v2/sink/Socket.md
@@ -14,11 +14,12 @@ Used to send data to Socket Server. Both support streaming and batch mode.
 
 ##  Options
 
-| name | type   | required | default value |
-| --- |--------|----------|---------------|
-| host | String | Yes       | -             |
-| port | Integer | yes      | -             |
-| max_retries | Integer | No       | 3             |
+| name           | type   | required | default value |
+| -------------- |--------|----------|---------------|
+| host           | String | Yes      | -             |
+| port           | Integer| Yes      | -             |
+| max_retries    | Integer| No       | 3             |
+| common-options |        | No       | -             |
 
 ### host [string]
 socket server host
@@ -31,6 +32,10 @@ socket server port
 
 The number of retries to send record failed
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/sink/dingtalk.md b/docs/en/connector-v2/sink/dingtalk.md
index e949ae2bc..669e07a81 100644
--- a/docs/en/connector-v2/sink/dingtalk.md
+++ b/docs/en/connector-v2/sink/dingtalk.md
@@ -13,10 +13,11 @@ A sink plugin which use DingTalk robot send message
 
 ## Options
 
-| name                         | type        | required | default value |
-|------------------------------| ----------  | -------- | ------------- |
-| url                            | string      | yes      | -             |
-| secret             | string      | yes       | -             |
+| name             | type        | required | default value |
+|------------------| ----------  | -------- | ------------- |
+| url              | string      | yes      | -             |
+| secret           | string      | yes      | -             |
+| common-options   |             | no       | -             |
 
 ### url [string]
 
@@ -26,6 +27,10 @@ DingTalk robot address format is https://oapi.dingtalk.com/robot/send?access_tok
 
 DingTalk robot secret (string)
 
+### common options
+
+Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/source/Clickhouse.md b/docs/en/connector-v2/source/Clickhouse.md
index e73c621b2..b6864c14e 100644
--- a/docs/en/connector-v2/source/Clickhouse.md
+++ b/docs/en/connector-v2/source/Clickhouse.md
@@ -27,13 +27,14 @@ Reading data from Clickhouse can also be done using JDBC
 ## Options
 
 | name           | type   | required | default value |
-|----------------|--------|----------|---------------|
+| -------------- | ------ | -------- | ------------- |
 | host           | string | yes      | -             |
 | database       | string | yes      | -             |
 | sql            | string | yes      | -             |
 | username       | string | yes      | -             |
 | password       | string | yes      | -             |
-| common-options | string | yes      | -             |
+| schema         | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### host [string]
 
 The query SQL used to search data through the Clickhouse server
 
 `ClickHouse` user password
 
-### common options [string]
+### schema [Config]
+
+#### fields [Config]
+
+the schema fields of upstream data
+
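+For example (field names and types are illustrative):
+
+```hocon
+schema {
+  fields {
+    name = string
+    age = int
+  }
+}
+```
+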
+### common options 
 
 Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
 
diff --git a/docs/en/connector-v2/source/FakeSource.md b/docs/en/connector-v2/source/FakeSource.md
index ce8aef406..47f604090 100644
--- a/docs/en/connector-v2/source/FakeSource.md
+++ b/docs/en/connector-v2/source/FakeSource.md
@@ -18,21 +18,27 @@ just for some test cases such as type conversion or connector new feature testin
 
 ## Options
 
-| name          | type   | required | default value |
-|---------------|--------|----------|---------------|
-| schema        | config | yes      | -             |
-| row.num       | int    | no       | 5             |
-| map.size      | int    | no       | 5             |
-| array.size    | int    | no       | 5             |
-| bytes.length  | int    | no       | 5             |
-| string.length | int    | no       | 5             |
-
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| schema         | config | yes      | -             |
+| row.num        | int    | no       | 5             |
+| map.size       | int    | no       | 5             |
+| array.size     | int    | no       | 5             |
+| bytes.length   | int    | no       | 5             |
+| string.length  | int    | no       | 5             |
+| common-options |        | no       | -             |
 
 ### schema [config]
 
+#### fields [Config]
+
 The schema of fake data that you want to generate
 
-For example:
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
+
+## Examples
 
 ```hocon
   schema = {
diff --git a/docs/en/connector-v2/source/FtpFile.md b/docs/en/connector-v2/source/FtpFile.md
index af22fcd8e..07a7f9134 100644
--- a/docs/en/connector-v2/source/FtpFile.md
+++ b/docs/en/connector-v2/source/FtpFile.md
@@ -21,15 +21,16 @@ Read data from ftp file server.
 
 ## Options
 
-| name     | type   | required | default value |
-|----------|--------|----------|---------------|
-| host     | string | yes      | -             |
-| port     | int    | yes      | -             |
-| user     | string | yes      | -             |
-| password | string | yes      | -             |
-| path     | string | yes      | -             |
-| type     | string | yes      | -             |
-| schema   | config | no       | -             |
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| host           | string | yes      | -             |
+| port           | int    | yes      | -             |
+| user           | string | yes      | -             |
+| password       | string | yes      | -             |
+| path           | string | yes      | -             |
+| type           | string | yes      | -             |
+| schema         | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### host [string]
 
@@ -99,8 +100,14 @@ Now connector will treat the upstream data as the following:
 
 ### schema [config]
 
+#### fields [Config]
+
 The schema information of upstream data.
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/source/Greenplum.md b/docs/en/connector-v2/source/Greenplum.md
index fad156c24..e51b3e621 100644
--- a/docs/en/connector-v2/source/Greenplum.md
+++ b/docs/en/connector-v2/source/Greenplum.md
@@ -26,4 +26,10 @@ Optional jdbc drivers:
 
 Warn: for license compliance, if you use `GreenplumDriver` you have to provide the Greenplum JDBC driver yourself, e.g. copy greenplum-xxx.jar to $SEATUNNEL_HOME/lib for Standalone.
 
-:::
\ No newline at end of file
+:::
+
+## Options
+
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
\ No newline at end of file
diff --git a/docs/en/connector-v2/source/HdfsFile.md b/docs/en/connector-v2/source/HdfsFile.md
index 5bd4e1e9a..25e3e5fc6 100644
--- a/docs/en/connector-v2/source/HdfsFile.md
+++ b/docs/en/connector-v2/source/HdfsFile.md
@@ -26,12 +26,13 @@ Read all the data in a split in a pollNext call. What splits are read will be sa
 
 ## Options
 
-| name          | type   | required | default value |
-|---------------|--------|----------|---------------|
-| path          | string | yes      | -             |
-| type          | string | yes      | -             |
-| fs.defaultFS  | string | yes      | -             |
-| schema        | config | no       | -             |
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| path           | string | yes      | -             |
+| type           | string | yes      | -             |
+| fs.defaultFS   | string | yes      | -             |
+| schema         | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### path [string]
 
@@ -98,6 +99,16 @@ Now connector will treat the upstream data as the following:
 
 Hdfs cluster address.
 
+### schema [Config]
+
+#### fields [Config]
+
+the schema fields of upstream data
+
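+A sketch of a manually specified schema (fields are illustrative):
+
+```hocon
+schema {
+  fields {
+    id = bigint
+    name = string
+  }
+}
+```
+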
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/source/Hive.md b/docs/en/connector-v2/source/Hive.md
index 99372fbcb..10a54542f 100644
--- a/docs/en/connector-v2/source/Hive.md
+++ b/docs/en/connector-v2/source/Hive.md
@@ -30,10 +30,12 @@ Read all the data in a split in a pollNext call. What splits are read will be sa
 
 ## Options
 
-| name                  | type   | required | default value                                                 |
-|-----------------------| ------ | -------- | ------------------------------------------------------------- |
-| table_name            | string | yes      | -                                                             |
-| metastore_uri         | string | yes      | -                                                             |
+| name           | type   | required | default value |
+| -------------- | ------ | -------- | ------------- |
+| table_name     | string | yes      | -             |
+| metastore_uri  | string | yes      | -             |
+| schema         | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### table_name [string]
 
@@ -43,6 +45,16 @@ Target Hive table name eg: db1.table1
 
 Hive metastore uri
 
+### schema [Config]
+
+#### fields [Config]
+
+the schema fields of upstream data
+
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
+
 ## Example
 
 ```bash
diff --git a/docs/en/connector-v2/source/Http.md b/docs/en/connector-v2/source/Http.md
index 21ac01e4a..344877f34 100644
--- a/docs/en/connector-v2/source/Http.md
+++ b/docs/en/connector-v2/source/Http.md
@@ -17,21 +17,21 @@ Used to read data from Http.
 
 ##  Options
 
-| name                               | type   | required | default value |
-|------------------------------------|--------|----------|---------------|
-| url                                | String | Yes      | -             |
-| schema                             | Config | No       | -             |
-| schema.fields                      | Config | No       | -             |
-| format                             | String | No       | json          |
-| method                             | String | No       | get           |
-| headers                            | Map    | No       | -             |
-| params                             | Map    | No       | -             |
-| body                               | String | No       | -             |
-| poll_interval_ms                   | int    | No       | -             |
-| retry                              | int    | No       | -             |
-| retry_backoff_multiplier_ms        | int    | No       | 100           |
-| retry_backoff_max_ms               | int    | No       | 10000         |
-
+| name                        | type   | required | default value |
+| --------------------------- | ------ | -------- | ------------- |
+| url                         | String | Yes      | -             |
+| schema                      | Config | No       | -             |
+| schema.fields               | Config | No       | -             |
+| format                      | String | No       | json          |
+| method                      | String | No       | get           |
+| headers                     | Map    | No       | -             |
+| params                      | Map    | No       | -             |
+| body                        | String | No       | -             |
+| poll_interval_ms            | int    | No       | -             |
+| retry                       | int    | No       | -             |
+| retry_backoff_multiplier_ms | int    | No       | 100           |
+| retry_backoff_max_ms        | int    | No       | 10000         |
+| common-options              |        | No       | -             |
+
 ### url [String]
 
 http request url
@@ -124,6 +124,10 @@ connector will generate data as the following:
 
 the schema fields of upstream data
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/source/Hudi.md b/docs/en/connector-v2/source/Hudi.md
index 7eae78720..b983ba92c 100644
--- a/docs/en/connector-v2/source/Hudi.md
+++ b/docs/en/connector-v2/source/Hudi.md
@@ -22,14 +22,15 @@ Currently, only supports hudi cow table and Snapshot Query with Batch Mode
 
 ## Options
 
-| name                     | type    | required | default value |
-|--------------------------|---------|----------|---------------|
-| table.path               | string  | yes      | -             |
-| table.type               | string  | yes      | -             |
-| conf.files               | string  | yes      | -             |
-| use.kerberos             | boolean | no       | false         |
-| kerberos.principal       | string  | no       | -             |
-| kerberos.principal.file  | string  | no       | -             |
+| name                    | type    | required | default value |
+| ----------------------- | ------- | -------- | ------------- |
+| table.path              | string  | yes      | -             |
+| table.type              | string  | yes      | -             |
+| conf.files              | string  | yes      | -             |
+| use.kerberos            | boolean | no       | false         |
+| kerberos.principal      | string  | no       | -             |
+| kerberos.principal.file | string  | no       | -             |
+| common-options          |         | no       | -             |
 
 ### table.path [string]
 
@@ -55,6 +56,10 @@ Currently, only supports hudi cow table and Snapshot Query with Batch Mode
 
 `kerberos.principal.file` When using kerberos, we should set the kerberos principal file, such as '/home/test/test_user.keytab'.
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Examples
 
 ```hocon
diff --git a/docs/en/connector-v2/source/Iceberg.md b/docs/en/connector-v2/source/Iceberg.md
index 85458b0ea..98f19dca2 100644
--- a/docs/en/connector-v2/source/Iceberg.md
+++ b/docs/en/connector-v2/source/Iceberg.md
@@ -25,21 +25,22 @@ Source connector for Apache Iceberg. It can support batch and stream mode.
 
 ##  Options
 
-| name                              | type     | required | default value           |
-|-----------------------------------|----------|----------|-------------------------|
-| catalog_name                      | string   | yes      | -                       |
-| catalog_type                      | string   | yes      | -                       |
-| uri                               | string   | false    | -                       |
-| warehouse                         | string   | yes      | -                       |
-| namespace                         | string   | yes      | -                       |
-| table                             | string   | yes      | -                       |
-| case_sensitive                    | boolean  | false    | false                   |
-| start_snapshot_timestamp          | long     | false    | -                       |
-| start_snapshot_id                 | long     | false    | -                       |
-| end_snapshot_id                   | long     | false    | -                       |
-| use_snapshot_id                   | long     | false    | -                       |
-| use_snapshot_timestamp            | long     | false    | -                       |
-| stream_scan_strategy              | enum     | false    | FROM_LATEST_SNAPSHOT    |
+| name                     | type    | required | default value        |
+| ------------------------ | ------- | -------- | -------------------- |
+| catalog_name             | string  | yes      | -                    |
+| catalog_type             | string  | yes      | -                    |
+| uri                      | string  | no       | -                    |
+| warehouse                | string  | yes      | -                    |
+| namespace                | string  | yes      | -                    |
+| table                    | string  | yes      | -                    |
+| case_sensitive           | boolean | no       | false                |
+| start_snapshot_timestamp | long    | no       | -                    |
+| start_snapshot_id        | long    | no       | -                    |
+| end_snapshot_id          | long    | no       | -                    |
+| use_snapshot_id          | long    | no       | -                    |
+| use_snapshot_timestamp   | long    | no       | -                    |
+| stream_scan_strategy     | enum    | no       | FROM_LATEST_SNAPSHOT |
+| common-options           |         | no       | -                    |
 
 ### catalog_name [string]
 
@@ -105,6 +106,10 @@ The optional values are:
 - FROM_SNAPSHOT_ID: Start incremental mode from a snapshot with a specific id inclusive.
 - FROM_SNAPSHOT_TIMESTAMP: Start incremental mode from a snapshot with a specific timestamp inclusive.
 
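+For instance, a streaming read that starts from a specific snapshot id could be configured as below (catalog, paths and the snapshot id are placeholders):
+
+```hocon
+Iceberg {
+  catalog_name = "seatunnel"
+  catalog_type = "hadoop"
+  warehouse = "hdfs://your_cluster/tmp/seatunnel/iceberg/"
+  namespace = "your_iceberg_database"
+  table = "your_iceberg_table"
+  stream_scan_strategy = FROM_SNAPSHOT_ID
+  start_snapshot_id = 3821550127947089987
+}
+```
+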
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 simple
diff --git a/docs/en/connector-v2/source/IoTDB.md b/docs/en/connector-v2/source/IoTDB.md
index c8980be94..fa503da4d 100644
--- a/docs/en/connector-v2/source/IoTDB.md
+++ b/docs/en/connector-v2/source/IoTDB.md
@@ -36,6 +36,7 @@ supports query SQL and can achieve projection effect.
 | thrift_default_buffer_size | int     | no       | -             |
 | enable_cache_leader        | boolean | no       | -             |
 | version                    | string  | no       | -             |
+| common-options             |         | no       | -             |
 
 ### single node, you need to set host and port to connect to the remote data source.
 
@@ -147,6 +148,10 @@ lower bound of the time column
 
 ```
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details
+
 ## Examples
 
 ### Case1
diff --git a/docs/en/connector-v2/source/Jdbc.md b/docs/en/connector-v2/source/Jdbc.md
index a026e378f..b98366688 100644
--- a/docs/en/connector-v2/source/Jdbc.md
+++ b/docs/en/connector-v2/source/Jdbc.md
@@ -31,6 +31,7 @@ supports query SQL and can achieve projection effect.
 | partition_column             | String | No       | -             |
 | partition_upper_bound        | Long   | No       | -             |
 | partition_lower_bound        | Long   | No       | -             |
+| common-options               |        | No       | -             |
 
 ### driver [string]
 
@@ -70,6 +71,10 @@ The partition_column max value for scan, if not set SeaTunnel will query databas
 
 The partition_column min value for scan; if not set, SeaTunnel will query the database to get the min value.
 
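+A sketch of a parallel read using a numeric partition column (connection values and bounds are placeholders; `url`, `driver`, `user`, `password` and `query` are the usual Jdbc options):
+
+```hocon
+Jdbc {
+  url = "jdbc:mysql://localhost:3306/test"
+  driver = "com.mysql.cj.jdbc.Driver"
+  user = "root"
+  password = "123456"
+  query = "select * from test_table"
+  partition_column = "id"
+  partition_lower_bound = 0
+  partition_upper_bound = 100000
+}
+```
+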
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## tips
 
 If partition_column is not set, it will run in single concurrency, and if partition_column is set, it will be executed
diff --git a/docs/en/connector-v2/source/Kudu.md b/docs/en/connector-v2/source/Kudu.md
index 22ff42623..11deb2fdf 100644
--- a/docs/en/connector-v2/source/Kudu.md
+++ b/docs/en/connector-v2/source/Kudu.md
@@ -21,9 +21,10 @@ Used to read data from Kudu.
 
 | name                     | type    | required | default value |
 |--------------------------|---------|----------|---------------|
-| kudu_master             | string  | yes      | -             |
+| kudu_master              | string  | yes      | -             |
 | kudu_table               | string  | yes      | -             |
-| columnsList               | string  | yes      | -             |
+| columnsList              | string  | yes      | -             |
+| common-options           |         | no       | -             |
 
 ### kudu_master [string]
 
@@ -37,6 +38,10 @@ Used to read data from Kudu.
 
 `columnsList` Specifies the column names of the table.
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Examples
 
 ```hocon
diff --git a/docs/en/connector-v2/source/LocalFile.md b/docs/en/connector-v2/source/LocalFile.md
index 4f3c0e6c5..d4a019d19 100644
--- a/docs/en/connector-v2/source/LocalFile.md
+++ b/docs/en/connector-v2/source/LocalFile.md
@@ -26,11 +26,12 @@ Read all the data in a split in a pollNext call. What splits are read will be sa
 
 ## Options
 
-| name   | type   | required | default value |
-|--------|--------|----------|---------------|
-| path   | string | yes      | -             |
-| type   | string | yes      | -             |
-| schema | config | no       | -             |
+| name           | type   | required | default value |
+|----------------|--------|----------|---------------|
+| path           | string | yes      | -             |
+| type           | string | yes      | -             |
+| schema         | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### path [string]
 
@@ -95,8 +96,14 @@ Now connector will treat the upstream data as the following:
 
 ### schema [config]
 
+#### fields [Config]
+
 The schema information of upstream data.
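+
+A minimal sketch of a manually specified schema (field names and types are illustrative):
+
+```hocon
+schema {
+  fields {
+    name = string
+    age = int
+  }
+}
+```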
 
+### common options
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/source/MongoDB.md b/docs/en/connector-v2/source/MongoDB.md
index e587f919a..dec3cb686 100644
--- a/docs/en/connector-v2/source/MongoDB.md
+++ b/docs/en/connector-v2/source/MongoDB.md
@@ -23,7 +23,7 @@ Read data from MongoDB.
 | database       | string | yes      | -             |
 | collection     | string | yes      | -             |
 | schema         | object | yes      | -             |
-| common-options | string | yes      | -             |
+| common-options |        | yes      | -             |
 
 ### uri [string]
 
@@ -39,6 +39,8 @@ MongoDB collection
 
 ### schema [object]
 
+#### fields [Config]
+
 Because `MongoDB` does not have the concept of `schema`, when the engine reads `MongoDB` it will sample the data and infer the `schema`. This process can be slow and may be inaccurate; specifying this parameter manually avoids these problems.
 
 such as:
@@ -53,7 +55,7 @@ schema {
 }
 ```
 
-### common options [string]
+### common options
 
 Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
 
diff --git a/docs/en/connector-v2/source/OssFile.md b/docs/en/connector-v2/source/OssFile.md
index 8bf87aadd..a71bd41ad 100644
--- a/docs/en/connector-v2/source/OssFile.md
+++ b/docs/en/connector-v2/source/OssFile.md
@@ -38,6 +38,7 @@ Read all the data in a split in a pollNext call. What splits are read will be sa
 | access_secret | string | yes      | -             |
 | endpoint      | string | yes      | -             |
 | schema        | config | no       | -             |
+| common-options |        | no       | -             |
 
 ### path [string]
 
@@ -118,8 +119,14 @@ The endpoint of oss file system.
 
 ### schema [config]
 
+#### fields [Config]
+
 The schema of upstream data.
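+
+For instance, a manually specified schema might look like this sketch (field names and types are illustrative):
+
+```hocon
+schema {
+  fields {
+    id = int
+    name = string
+    score = double
+  }
+}
+```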
 
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 ```hocon
diff --git a/docs/en/connector-v2/source/Phoenix.md b/docs/en/connector-v2/source/Phoenix.md
index a82196ea3..a8b836227 100644
--- a/docs/en/connector-v2/source/Phoenix.md
+++ b/docs/en/connector-v2/source/Phoenix.md
@@ -30,6 +30,10 @@ if you use phoenix (thick) driver the value is `org.apache.phoenix.jdbc.PhoenixD
 ### url [string]
 If you use the phoenix (thick) driver, the value is `jdbc:phoenix:localhost:2182/hbase`; if you use the (thin) driver, the value is `jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF`.
 
+### common options
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 use thick client driver
 ```
diff --git a/docs/en/connector-v2/source/Redis.md b/docs/en/connector-v2/source/Redis.md
index dfb1b4340..c99c777ef 100644
--- a/docs/en/connector-v2/source/Redis.md
+++ b/docs/en/connector-v2/source/Redis.md
@@ -17,15 +17,16 @@ Used to read data from Redis.
 
 ##  Options
 
-| name      | type   | required | default value |
-|-----------|--------|----------|---------------|
-| host      | string | yes      | -             |
-| port      | int    | yes      | -             |
-| keys      | string | yes      | -             |
-| data_type | string | yes      | -             |
-| auth      | string | No       | -             |
-| schema    | config | No       | -             |
-| format    | string | No       | json          |
+| name           | type   | required | default value |
+|----------------|--------|----------|---------------|
+| host           | string | yes      | -             |
+| port           | int    | yes      | -             |
+| keys           | string | yes      | -             |
+| data_type      | string | yes      | -             |
+| auth           | string | no       | -             |
+| schema         | config | no       | -             |
+| format         | string | no       | json          |
+| common-options |        | no       | -             |
 
 ### host [string]
 
@@ -126,6 +127,10 @@ connector will generate data as the following:
 
 The schema fields of upstream data.
 
+### common options
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/source/Socket.md b/docs/en/connector-v2/source/Socket.md
index 84a2b487e..51c7225e4 100644
--- a/docs/en/connector-v2/source/Socket.md
+++ b/docs/en/connector-v2/source/Socket.md
@@ -17,10 +17,11 @@ Used to read data from Socket.
 
 ##  Options
 
-| name | type   | required | default value |
-| --- |--------| --- | --- |
-| host | String | No | localhost |
-| port | Integer | No | 9999 |
+| name           | type    | required | default value |
+|----------------|---------|----------|---------------|
+| host           | String  | No       | localhost     |
+| port           | Integer | No       | 9999          |
+| common-options |         | No       | -             |
 
 ### host [string]
 socket server host
@@ -29,6 +30,10 @@ socket server host
 
 socket server port
 
+### common options
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 simple:
diff --git a/docs/en/connector-v2/source/common-options.md b/docs/en/connector-v2/source/common-options.md
index 529732743..7254c44e5 100644
--- a/docs/en/connector-v2/source/common-options.md
+++ b/docs/en/connector-v2/source/common-options.md
@@ -5,7 +5,6 @@
 | name              | type   | required | default value |
 | ----------------- | ------ | -------- | ------------- |
 | result_table_name | string | no       | -             |
-| field_name        | string | no       | -             |
 
 ### result_table_name [string]
 
@@ -13,21 +12,15 @@ When `result_table_name` is not specified, the data processed by this plugin wil
 
 When `result_table_name` is specified, the data processed by this plugin will be registered as a data set `(dataStream/dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The data set `(dataStream/dataset)` registered here can be directly accessed by other plugins by specifying `source_table_name` .
 
-### field_name [string]
-
-When the data is obtained from the upper-level plug-in, you can specify the name of the obtained field, which is convenient for use in subsequent sql plugins.
-
 ## Example
 
 ```bash
 source {
     FakeSourceStream {
         result_table_name = "fake"
-        field_name = "name,age"
     }
 }
 ```
 
 > The result of the data source `FakeSourceStream` will be registered as a temporary table named `fake` . This temporary table can be used by any `Transform` or `Sink` plugin by specifying `source_table_name` .
 >
-> `field_name` names the two columns of the temporary table `name` and `age` respectively.
diff --git a/docs/en/connector-v2/source/pulsar.md b/docs/en/connector-v2/source/pulsar.md
index 572ecc2e0..367167534 100644
--- a/docs/en/connector-v2/source/pulsar.md
+++ b/docs/en/connector-v2/source/pulsar.md
@@ -35,6 +35,8 @@ Source connector for Apache Pulsar.
 | cursor.reset.mode        | Enum    | No       | LATEST        |
 | cursor.stop.mode         | Enum    | No       | NEVER         |
 | cursor.stop.timestamp    | Long    | No       | -             |
+| schema                   | config  | No       | -             |
+| common-options           |         | No       | -             |
 
 ### topic [String]
 
@@ -122,6 +124,16 @@ Stop from the specified epoch timestamp (in milliseconds).
 
 **Note: this option is required when the `cursor.stop.mode` option is set to `TIMESTAMP`.**
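+
+A bounded-read sketch combining the cursor options above (the service URLs, topic, and subscription name are illustrative, and `client.service-url`, `admin.service-url`, and `subscription.name` are assumed option names):
+
+```hocon
+source {
+  Pulsar {
+    client.service-url = "pulsar://localhost:6650"
+    admin.service-url = "http://localhost:8080"
+    topic = "topic-1"
+    subscription.name = "seatunnel"
+    # stop consuming once messages reach this epoch timestamp (milliseconds)
+    cursor.stop.mode = "TIMESTAMP"
+    cursor.stop.timestamp = 1667000000000
+  }
+}
+```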
 
+### schema [Config]
+
+#### fields [Config]
+
+The schema fields of upstream data.
+
+### common options 
+
+Source plugin common parameters, please refer to [Source Common Options](common-options.md) for details.
+
 ## Example
 
 ```Jdbc {