Posted to commits@seatunnel.apache.org by ty...@apache.org on 2023/05/18 02:49:05 UTC

[incubator-seatunnel] branch document_restructure updated: [Feature][Docs] Refactor documents structure (#4730)

This is an automated email from the ASF dual-hosted git repository.

tyrantlucifer pushed a commit to branch document_restructure
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git


The following commit(s) were added to refs/heads/document_restructure by this push:
     new 03f8c5309 [Feature][Docs] Refactor documents structure (#4730)
03f8c5309 is described below

commit 03f8c53098241f46daa82eabd90a8238957592bc
Author: Monica MengFei <84...@users.noreply.github.com>
AuthorDate: Thu May 18 10:48:59 2023 +0800

    [Feature][Docs] Refactor documents structure (#4730)
    
    * document structure
    
    * update document structure
    
    * update sidebars file
    
    * fix sidebars.js
    
    * fix sidebars.js
    
    ---------
    
    Co-authored-by: mengfei <35...@qq.com>
    Co-authored-by: tyrantlucifer <ty...@gmail.com>
---
 docs/en/{seatunnel-engine/rest-api.md => API.md}   |   5 -
 docs/en/{about.md => about-seatunnel.md}           |   0
 .../{seatunnel-engine/about.md => about-zeta.md}   |   0
 docs/en/concept/{JobEnvConfig.md => env.md}        |  16 +-
 docs/en/concept/schema-feature.md                  |  64 ---
 .../concept/{connector-v2-features.md => sink.md}  |   0
 .../{connector-v2-features.md => source.md}        |   0
 .../{connector-v2-features.md => transform.md}     |   0
 .../sink/AmazonDynamoDB.md                         |   0
 .../sink/Assert.md                                 |   0
 .../sink/Cassandra.md                              |   0
 .../sink/Clickhouse.md                             |   0
 .../sink/ClickhouseFile.md                         |   0
 .../sink/Console.md                                |   0
 .../sink/Datahub.md                                |   0
 .../sink/DingTalk.md                               |   0
 .../{connector-v2 => connector-list}/sink/Doris.md |   0
 .../sink/Elasticsearch.md                          |   0
 .../{connector-v2 => connector-list}/sink/Email.md |   0
 .../sink/Enterprise-WeChat.md                      |   0
 .../sink/Feishu.md                                 |   0
 .../sink/FtpFile.md                                |   0
 .../sink/GoogleFirestore.md                        |   0
 .../sink/Greenplum.md                              |   0
 .../{connector-v2 => connector-list}/sink/Hbase.md |   0
 .../sink/HdfsFile.md                               |   0
 .../{connector-v2 => connector-list}/sink/Hive.md  |   0
 .../{connector-v2 => connector-list}/sink/Http.md  |   0
 .../sink/InfluxDB.md                               |   0
 .../{connector-v2 => connector-list}/sink/IoTDB.md |   0
 .../{connector-v2 => connector-list}/sink/Jdbc.md  |   0
 .../{connector-v2 => connector-list}/sink/Kafka.md |   0
 .../{connector-v2 => connector-list}/sink/Kudu.md  |   0
 .../sink/LocalFile.md                              |   0
 .../sink/Maxcompute.md                             |   0
 .../sink/MongoDB.md                                |   0
 .../{connector-v2 => connector-list}/sink/Neo4j.md |   0
 .../sink/OssFile.md                                |   0
 .../sink/OssJindoFile.md                           |   0
 .../sink/Phoenix.md                                |   0
 .../sink/Rabbitmq.md                               |   0
 .../{connector-v2 => connector-list}/sink/Redis.md |   0
 .../sink/RocketMQ.md                               |   0
 .../sink/S3-Redshift.md                            |   0
 .../sink/S3File.md                                 |   0
 .../sink/SelectDB-Cloud.md                         |   0
 .../sink/Sentry.md                                 |   0
 .../sink/SftpFile.md                               |   0
 .../{connector-v2 => connector-list}/sink/Slack.md |   0
 .../sink/Socket.md                                 |   0
 .../sink/StarRocks.md                              |   0
 .../sink/TDengine.md                               |   0
 .../sink/Tablestore.md                             |   0
 .../sink/common-options.md                         |   0
 .../source/AmazonDynamoDB.md                       |   0
 .../source/Cassandra.md                            |   0
 .../source/Clickhouse.md                           |   0
 .../source/Elasticsearch.md                        |   0
 .../source/FakeSource.md                           |   0
 .../source/FtpFile.md                              |   0
 .../source/Github.md                               |   0
 .../source/Gitlab.md                               |   0
 .../source/GoogleSheets.md                         |   0
 .../source/Greenplum.md                            |   0
 .../source/HdfsFile.md                             |   0
 .../source/Hive.md                                 |   0
 .../source/Http.md                                 |   0
 .../source/Hudi.md                                 |   0
 .../source/Iceberg.md                              |   0
 .../source/InfluxDB.md                             |   0
 .../source/IoTDB.md                                |   0
 .../source/Jdbc.md                                 |   0
 .../source/Jira.md                                 |   0
 .../source/Klaviyo.md                              |   0
 .../source/Kudu.md                                 |   0
 .../source/Lemlist.md                              |   0
 .../source/LocalFile.md                            |   0
 .../source/Maxcompute.md                           |   0
 .../source/MongoDB.md                              |   0
 .../source/MyHours.md                              |   0
 .../source/MySQL-CDC.md                            |   0
 .../source/Neo4j.md                                |   0
 .../source/Notion.md                               |   0
 .../source/OneSignal.md                            |   0
 .../source/OpenMldb.md                             |   0
 .../source/OssFile.md                              |   0
 .../source/OssJindoFile.md                         |   0
 .../source/Persistiq.md                            |   0
 .../source/Phoenix.md                              |   0
 .../source/Rabbitmq.md                             |   0
 .../source/Redis.md                                |   0
 .../source/RocketMQ.md                             |   0
 .../source/S3File.md                               |   0
 .../source/SftpFile.md                             |   0
 .../source/Socket.md                               |   0
 .../source/SqlServer-CDC.md                        |   0
 .../source/StarRocks.md                            |   0
 .../source/TDengine.md                             |   0
 .../source/common-options.md                       |   0
 .../source/kafka.md                                |   0
 .../source/pulsar.md                               |   0
 .../transform}/common-options.md                   |   0
 .../transform}/copy.md                             |   0
 .../transform}/field-mapper.md                     |   0
 .../transform}/filter-rowkind.md                   |   0
 .../transform}/filter.md                           |   0
 .../transform}/replace.md                          |   0
 .../transform}/split.md                            |   0
 .../transform}/sql-functions.md                    |   0
 .../transform}/sql-udf.md                          |   0
 .../transform}/sql.md                              |   0
 .../usage.mdx => contribution/command-usage.mdx}   |   0
 docs/en/contribution/contribute-plugin.md          |   2 +-
 ...ansform-v2-guide.md => contribute-transform.md} |   2 +-
 .../error-quick-reference-manual.md}               |   0
 .../deployment.md => deploy/seatunnel-client.md}   |   2 +-
 .../seatunnel-zeta/local.md}                       |   4 +-
 docs/en/deploy/seatunnel-zeta/standalone.md        | 449 +++++++++++++++++++++
 .../quick-start-flink.md => deploy/with-flink.md}  |   0
 .../quick-start-spark.md => deploy/with-spark.md}  |   0
 .../configuring-connector-formats}/canal-json.md   |   0
 .../cdc-compatible-debezium-json.md                |   0
 .../configuring-encryption-decryption.md}          |   0
 .../using-on-standalone.md}                        |  24 +-
 docs/en/other-engine/flink.md                      |   0
 docs/en/other-engine/spark.md                      |   0
 docs/en/{concept/config.md => quick-started.md}    |   3 +-
 .../connector-release-status.md}                   |   2 +-
 docs/en/seatunnel-engine/checkpoint-storage.md     | 173 --------
 docs/en/seatunnel-engine/cluster-manager.md        |   7 -
 docs/en/seatunnel-engine/cluster-mode.md           |  21 -
 docs/en/seatunnel-engine/deployment.md             | 239 -----------
 docs/en/seatunnel-engine/local-mode.md             |  25 --
 docs/en/seatunnel-engine/tcp.md                    |  37 --
 docs/en/start-v2/docker/docker.md                  |   9 -
 docs/en/start-v2/kubernetes/kubernetes.mdx         | 295 --------------
 docs/sidebars.js                                   | 151 +++----
 137 files changed, 556 insertions(+), 974 deletions(-)

diff --git a/docs/en/seatunnel-engine/rest-api.md b/docs/en/API.md
similarity index 98%
rename from docs/en/seatunnel-engine/rest-api.md
rename to docs/en/API.md
index 0a58c3925..777d07942 100644
--- a/docs/en/seatunnel-engine/rest-api.md
+++ b/docs/en/API.md
@@ -1,8 +1,3 @@
----
-
-sidebar_position: 7
--------------------
-
 # REST API
 
 Seatunnel has a monitoring API that can be used to query status and statistics of running jobs, as well as recent
diff --git a/docs/en/about.md b/docs/en/about-seatunnel.md
similarity index 100%
rename from docs/en/about.md
rename to docs/en/about-seatunnel.md
diff --git a/docs/en/seatunnel-engine/about.md b/docs/en/about-zeta.md
similarity index 100%
rename from docs/en/seatunnel-engine/about.md
rename to docs/en/about-zeta.md
diff --git a/docs/en/concept/JobEnvConfig.md b/docs/en/concept/env.md
similarity index 85%
rename from docs/en/concept/JobEnvConfig.md
rename to docs/en/concept/env.md
index 7272c90fc..df1a27479 100644
--- a/docs/en/concept/JobEnvConfig.md
+++ b/docs/en/concept/env.md
@@ -1,28 +1,30 @@
-# JobEnvConfig
+# Core concept
+
+## Intro seatunnel configuring
 
 This document describes env configuration information,env unifies the environment variables of all engines.
 
-## job.name
+### job.name
 
 This parameter configures the task name.
 
-## jars
+### jars
 
 Third-party packages can be loaded via `jars`, like `jars="file://local/jar1.jar;file://local/jar2.jar"`
 
-## job.mode
+### job.mode
 
 You can configure whether the task is in batch mode or stream mode through `job.mode`, like `job.mode = "BATCH"` or `job.mode = "STREAMING"`
 
-## checkpoint.interval
+### checkpoint.interval
 
 Gets the interval in which checkpoints are periodically scheduled.
 
-## parallelism
+### parallelism
 
 This parameter configures the parallelism of source and sink.
 
-## shade.identifier
+### shade.identifier
 
 Specify the method of encryption, if you didn't have the requirement for encrypting or decrypting config files, this option can be ignored.
 
diff --git a/docs/en/concept/schema-feature.md b/docs/en/concept/schema-feature.md
deleted file mode 100644
index 88c2efe3d..000000000
--- a/docs/en/concept/schema-feature.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Intro to schema feature
-
-## Why we need schema
-
-Some NoSQL databases or message queue are not strongly limited schema, so the schema cannot be obtained through the api. At this time, a schema needs to be defined to convert to SeaTunnelRowType and obtain data.
-
-## What type supported at now
-
-| Data type | Description                                                                                                                                                                                                                                                                                                                                           |
-|:----------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| string    | string                                                                                                                                                                                                                                                                                                                                                |
-| boolean   | boolean                                                                                                                                                                                                                                                                                                                                               |
-| tinyint   | -128 to 127 regular. 0 to 255 unsigned*. Specify the maximum number of digits in parentheses.                                                                                                                                                                                                                                                         |
-| smallint  | -32768 to 32767 General. 0 to 65535 unsigned*. Specify the maximum number of digits in parentheses.                                                                                                                                                                                                                                                   |
-| int       | All numbers from -2,147,483,648 to 2,147,483,647 are allowed.                                                                                                                                                                                                                                                                                         |
-| bigint    | All numbers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807 are allowed.                                                                                                                                                                                                                                                             |
-| float     | Float-precision numeric data from -1.79E+308 to 1.79E+308.                                                                                                                                                                                                                                                                                            |
-| double    | Double precision floating point. Handle most decimals.                                                                                                                                                                                                                                                                                                |
-| decimal   | DOUBLE type stored as a string, allowing a fixed decimal point.                                                                                                                                                                                                                                                                                       |
-| null      | null                                                                                                                                                                                                                                                                                                                                                  |
-| bytes     | bytes.                                                                                                                                                                                                                                                                                                                                                |
-| date      | Only the date is stored. From January 1, 0001 to December 31, 9999.                                                                                                                                                                                                                                                                                   |
-| time      | Only store time. Accuracy is 100 nanoseconds.                                                                                                                                                                                                                                                                                                         |
-| timestamp | Stores a unique number that is updated whenever a row is created or modified. timestamp is based on the internal clock and does not correspond to real time. There can only be one timestamp variable per table.                                                                                                                                      |
-| row       | Row type,can be nested.                                                                                                                                                                                                                                                                                                                               |
-| map       | A Map is an object that maps keys to values. The key type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double` `decimal` `date` `time` `timestamp` `null` , and the value type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double` `decimal` `date` `time` `timestamp` `null` `array` `map`. |
-| array     | A array is a data type that represents a collection of elements. The element type includes `int` `string` `boolean` `tinyint` `smallint` `bigint` `float` `double` `array` `map`.                                                                                                                                                                     |
-
-## How to use schema
-
-`schema` defines the format of the data,it contains`fields` properties. `fields` define the field properties,it's a K-V key-value pair, the Key is the field name and the value is field type. Here is an example.
-
-```
-source {
-  FakeSource {
-    parallelism = 2
-    result_table_name = "fake"
-    row.num = 16
-    schema = {
-      fields {
-        id = bigint
-        c_map = "map<string, smallint>"
-        c_array = "array<tinyint>"
-        c_string = string
-        c_boolean = boolean
-        c_tinyint = tinyint
-        c_smallint = smallint
-        c_int = int
-        c_bigint = bigint
-        c_float = float
-        c_double = double
-        c_decimal = "decimal(2, 1)"
-        c_bytes = bytes
-        c_date = date
-        c_timestamp = timestamp
-      }
-    }
-  }
-}
-```
-
-## When we should use it or not
-
-If there is a `schema` configuration project in Options,the connector can then customize the schema. Like `Fake` `Pulsar` `Http` source connector etc.
diff --git a/docs/en/concept/connector-v2-features.md b/docs/en/concept/sink.md
similarity index 100%
copy from docs/en/concept/connector-v2-features.md
copy to docs/en/concept/sink.md
diff --git a/docs/en/concept/connector-v2-features.md b/docs/en/concept/source.md
similarity index 100%
copy from docs/en/concept/connector-v2-features.md
copy to docs/en/concept/source.md
diff --git a/docs/en/concept/connector-v2-features.md b/docs/en/concept/transform.md
similarity index 100%
rename from docs/en/concept/connector-v2-features.md
rename to docs/en/concept/transform.md
diff --git a/docs/en/connector-v2/sink/AmazonDynamoDB.md b/docs/en/connector-list/sink/AmazonDynamoDB.md
similarity index 100%
rename from docs/en/connector-v2/sink/AmazonDynamoDB.md
rename to docs/en/connector-list/sink/AmazonDynamoDB.md
diff --git a/docs/en/connector-v2/sink/Assert.md b/docs/en/connector-list/sink/Assert.md
similarity index 100%
rename from docs/en/connector-v2/sink/Assert.md
rename to docs/en/connector-list/sink/Assert.md
diff --git a/docs/en/connector-v2/sink/Cassandra.md b/docs/en/connector-list/sink/Cassandra.md
similarity index 100%
rename from docs/en/connector-v2/sink/Cassandra.md
rename to docs/en/connector-list/sink/Cassandra.md
diff --git a/docs/en/connector-v2/sink/Clickhouse.md b/docs/en/connector-list/sink/Clickhouse.md
similarity index 100%
rename from docs/en/connector-v2/sink/Clickhouse.md
rename to docs/en/connector-list/sink/Clickhouse.md
diff --git a/docs/en/connector-v2/sink/ClickhouseFile.md b/docs/en/connector-list/sink/ClickhouseFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/ClickhouseFile.md
rename to docs/en/connector-list/sink/ClickhouseFile.md
diff --git a/docs/en/connector-v2/sink/Console.md b/docs/en/connector-list/sink/Console.md
similarity index 100%
rename from docs/en/connector-v2/sink/Console.md
rename to docs/en/connector-list/sink/Console.md
diff --git a/docs/en/connector-v2/sink/Datahub.md b/docs/en/connector-list/sink/Datahub.md
similarity index 100%
rename from docs/en/connector-v2/sink/Datahub.md
rename to docs/en/connector-list/sink/Datahub.md
diff --git a/docs/en/connector-v2/sink/DingTalk.md b/docs/en/connector-list/sink/DingTalk.md
similarity index 100%
rename from docs/en/connector-v2/sink/DingTalk.md
rename to docs/en/connector-list/sink/DingTalk.md
diff --git a/docs/en/connector-v2/sink/Doris.md b/docs/en/connector-list/sink/Doris.md
similarity index 100%
rename from docs/en/connector-v2/sink/Doris.md
rename to docs/en/connector-list/sink/Doris.md
diff --git a/docs/en/connector-v2/sink/Elasticsearch.md b/docs/en/connector-list/sink/Elasticsearch.md
similarity index 100%
rename from docs/en/connector-v2/sink/Elasticsearch.md
rename to docs/en/connector-list/sink/Elasticsearch.md
diff --git a/docs/en/connector-v2/sink/Email.md b/docs/en/connector-list/sink/Email.md
similarity index 100%
rename from docs/en/connector-v2/sink/Email.md
rename to docs/en/connector-list/sink/Email.md
diff --git a/docs/en/connector-v2/sink/Enterprise-WeChat.md b/docs/en/connector-list/sink/Enterprise-WeChat.md
similarity index 100%
rename from docs/en/connector-v2/sink/Enterprise-WeChat.md
rename to docs/en/connector-list/sink/Enterprise-WeChat.md
diff --git a/docs/en/connector-v2/sink/Feishu.md b/docs/en/connector-list/sink/Feishu.md
similarity index 100%
rename from docs/en/connector-v2/sink/Feishu.md
rename to docs/en/connector-list/sink/Feishu.md
diff --git a/docs/en/connector-v2/sink/FtpFile.md b/docs/en/connector-list/sink/FtpFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/FtpFile.md
rename to docs/en/connector-list/sink/FtpFile.md
diff --git a/docs/en/connector-v2/sink/GoogleFirestore.md b/docs/en/connector-list/sink/GoogleFirestore.md
similarity index 100%
rename from docs/en/connector-v2/sink/GoogleFirestore.md
rename to docs/en/connector-list/sink/GoogleFirestore.md
diff --git a/docs/en/connector-v2/sink/Greenplum.md b/docs/en/connector-list/sink/Greenplum.md
similarity index 100%
rename from docs/en/connector-v2/sink/Greenplum.md
rename to docs/en/connector-list/sink/Greenplum.md
diff --git a/docs/en/connector-v2/sink/Hbase.md b/docs/en/connector-list/sink/Hbase.md
similarity index 100%
rename from docs/en/connector-v2/sink/Hbase.md
rename to docs/en/connector-list/sink/Hbase.md
diff --git a/docs/en/connector-v2/sink/HdfsFile.md b/docs/en/connector-list/sink/HdfsFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/HdfsFile.md
rename to docs/en/connector-list/sink/HdfsFile.md
diff --git a/docs/en/connector-v2/sink/Hive.md b/docs/en/connector-list/sink/Hive.md
similarity index 100%
rename from docs/en/connector-v2/sink/Hive.md
rename to docs/en/connector-list/sink/Hive.md
diff --git a/docs/en/connector-v2/sink/Http.md b/docs/en/connector-list/sink/Http.md
similarity index 100%
rename from docs/en/connector-v2/sink/Http.md
rename to docs/en/connector-list/sink/Http.md
diff --git a/docs/en/connector-v2/sink/InfluxDB.md b/docs/en/connector-list/sink/InfluxDB.md
similarity index 100%
rename from docs/en/connector-v2/sink/InfluxDB.md
rename to docs/en/connector-list/sink/InfluxDB.md
diff --git a/docs/en/connector-v2/sink/IoTDB.md b/docs/en/connector-list/sink/IoTDB.md
similarity index 100%
rename from docs/en/connector-v2/sink/IoTDB.md
rename to docs/en/connector-list/sink/IoTDB.md
diff --git a/docs/en/connector-v2/sink/Jdbc.md b/docs/en/connector-list/sink/Jdbc.md
similarity index 100%
rename from docs/en/connector-v2/sink/Jdbc.md
rename to docs/en/connector-list/sink/Jdbc.md
diff --git a/docs/en/connector-v2/sink/Kafka.md b/docs/en/connector-list/sink/Kafka.md
similarity index 100%
rename from docs/en/connector-v2/sink/Kafka.md
rename to docs/en/connector-list/sink/Kafka.md
diff --git a/docs/en/connector-v2/sink/Kudu.md b/docs/en/connector-list/sink/Kudu.md
similarity index 100%
rename from docs/en/connector-v2/sink/Kudu.md
rename to docs/en/connector-list/sink/Kudu.md
diff --git a/docs/en/connector-v2/sink/LocalFile.md b/docs/en/connector-list/sink/LocalFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/LocalFile.md
rename to docs/en/connector-list/sink/LocalFile.md
diff --git a/docs/en/connector-v2/sink/Maxcompute.md b/docs/en/connector-list/sink/Maxcompute.md
similarity index 100%
rename from docs/en/connector-v2/sink/Maxcompute.md
rename to docs/en/connector-list/sink/Maxcompute.md
diff --git a/docs/en/connector-v2/sink/MongoDB.md b/docs/en/connector-list/sink/MongoDB.md
similarity index 100%
rename from docs/en/connector-v2/sink/MongoDB.md
rename to docs/en/connector-list/sink/MongoDB.md
diff --git a/docs/en/connector-v2/sink/Neo4j.md b/docs/en/connector-list/sink/Neo4j.md
similarity index 100%
rename from docs/en/connector-v2/sink/Neo4j.md
rename to docs/en/connector-list/sink/Neo4j.md
diff --git a/docs/en/connector-v2/sink/OssFile.md b/docs/en/connector-list/sink/OssFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/OssFile.md
rename to docs/en/connector-list/sink/OssFile.md
diff --git a/docs/en/connector-v2/sink/OssJindoFile.md b/docs/en/connector-list/sink/OssJindoFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/OssJindoFile.md
rename to docs/en/connector-list/sink/OssJindoFile.md
diff --git a/docs/en/connector-v2/sink/Phoenix.md b/docs/en/connector-list/sink/Phoenix.md
similarity index 100%
rename from docs/en/connector-v2/sink/Phoenix.md
rename to docs/en/connector-list/sink/Phoenix.md
diff --git a/docs/en/connector-v2/sink/Rabbitmq.md b/docs/en/connector-list/sink/Rabbitmq.md
similarity index 100%
rename from docs/en/connector-v2/sink/Rabbitmq.md
rename to docs/en/connector-list/sink/Rabbitmq.md
diff --git a/docs/en/connector-v2/sink/Redis.md b/docs/en/connector-list/sink/Redis.md
similarity index 100%
rename from docs/en/connector-v2/sink/Redis.md
rename to docs/en/connector-list/sink/Redis.md
diff --git a/docs/en/connector-v2/sink/RocketMQ.md b/docs/en/connector-list/sink/RocketMQ.md
similarity index 100%
rename from docs/en/connector-v2/sink/RocketMQ.md
rename to docs/en/connector-list/sink/RocketMQ.md
diff --git a/docs/en/connector-v2/sink/S3-Redshift.md b/docs/en/connector-list/sink/S3-Redshift.md
similarity index 100%
rename from docs/en/connector-v2/sink/S3-Redshift.md
rename to docs/en/connector-list/sink/S3-Redshift.md
diff --git a/docs/en/connector-v2/sink/S3File.md b/docs/en/connector-list/sink/S3File.md
similarity index 100%
rename from docs/en/connector-v2/sink/S3File.md
rename to docs/en/connector-list/sink/S3File.md
diff --git a/docs/en/connector-v2/sink/SelectDB-Cloud.md b/docs/en/connector-list/sink/SelectDB-Cloud.md
similarity index 100%
rename from docs/en/connector-v2/sink/SelectDB-Cloud.md
rename to docs/en/connector-list/sink/SelectDB-Cloud.md
diff --git a/docs/en/connector-v2/sink/Sentry.md b/docs/en/connector-list/sink/Sentry.md
similarity index 100%
rename from docs/en/connector-v2/sink/Sentry.md
rename to docs/en/connector-list/sink/Sentry.md
diff --git a/docs/en/connector-v2/sink/SftpFile.md b/docs/en/connector-list/sink/SftpFile.md
similarity index 100%
rename from docs/en/connector-v2/sink/SftpFile.md
rename to docs/en/connector-list/sink/SftpFile.md
diff --git a/docs/en/connector-v2/sink/Slack.md b/docs/en/connector-list/sink/Slack.md
similarity index 100%
rename from docs/en/connector-v2/sink/Slack.md
rename to docs/en/connector-list/sink/Slack.md
diff --git a/docs/en/connector-v2/sink/Socket.md b/docs/en/connector-list/sink/Socket.md
similarity index 100%
rename from docs/en/connector-v2/sink/Socket.md
rename to docs/en/connector-list/sink/Socket.md
diff --git a/docs/en/connector-v2/sink/StarRocks.md b/docs/en/connector-list/sink/StarRocks.md
similarity index 100%
rename from docs/en/connector-v2/sink/StarRocks.md
rename to docs/en/connector-list/sink/StarRocks.md
diff --git a/docs/en/connector-v2/sink/TDengine.md b/docs/en/connector-list/sink/TDengine.md
similarity index 100%
rename from docs/en/connector-v2/sink/TDengine.md
rename to docs/en/connector-list/sink/TDengine.md
diff --git a/docs/en/connector-v2/sink/Tablestore.md b/docs/en/connector-list/sink/Tablestore.md
similarity index 100%
rename from docs/en/connector-v2/sink/Tablestore.md
rename to docs/en/connector-list/sink/Tablestore.md
diff --git a/docs/en/connector-v2/sink/common-options.md b/docs/en/connector-list/sink/common-options.md
similarity index 100%
rename from docs/en/connector-v2/sink/common-options.md
rename to docs/en/connector-list/sink/common-options.md
diff --git a/docs/en/connector-v2/source/AmazonDynamoDB.md b/docs/en/connector-list/source/AmazonDynamoDB.md
similarity index 100%
rename from docs/en/connector-v2/source/AmazonDynamoDB.md
rename to docs/en/connector-list/source/AmazonDynamoDB.md
diff --git a/docs/en/connector-v2/source/Cassandra.md b/docs/en/connector-list/source/Cassandra.md
similarity index 100%
rename from docs/en/connector-v2/source/Cassandra.md
rename to docs/en/connector-list/source/Cassandra.md
diff --git a/docs/en/connector-v2/source/Clickhouse.md b/docs/en/connector-list/source/Clickhouse.md
similarity index 100%
rename from docs/en/connector-v2/source/Clickhouse.md
rename to docs/en/connector-list/source/Clickhouse.md
diff --git a/docs/en/connector-v2/source/Elasticsearch.md b/docs/en/connector-list/source/Elasticsearch.md
similarity index 100%
rename from docs/en/connector-v2/source/Elasticsearch.md
rename to docs/en/connector-list/source/Elasticsearch.md
diff --git a/docs/en/connector-v2/source/FakeSource.md b/docs/en/connector-list/source/FakeSource.md
similarity index 100%
rename from docs/en/connector-v2/source/FakeSource.md
rename to docs/en/connector-list/source/FakeSource.md
diff --git a/docs/en/connector-v2/source/FtpFile.md b/docs/en/connector-list/source/FtpFile.md
similarity index 100%
rename from docs/en/connector-v2/source/FtpFile.md
rename to docs/en/connector-list/source/FtpFile.md
diff --git a/docs/en/connector-v2/source/Github.md b/docs/en/connector-list/source/Github.md
similarity index 100%
rename from docs/en/connector-v2/source/Github.md
rename to docs/en/connector-list/source/Github.md
diff --git a/docs/en/connector-v2/source/Gitlab.md b/docs/en/connector-list/source/Gitlab.md
similarity index 100%
rename from docs/en/connector-v2/source/Gitlab.md
rename to docs/en/connector-list/source/Gitlab.md
diff --git a/docs/en/connector-v2/source/GoogleSheets.md b/docs/en/connector-list/source/GoogleSheets.md
similarity index 100%
rename from docs/en/connector-v2/source/GoogleSheets.md
rename to docs/en/connector-list/source/GoogleSheets.md
diff --git a/docs/en/connector-v2/source/Greenplum.md b/docs/en/connector-list/source/Greenplum.md
similarity index 100%
rename from docs/en/connector-v2/source/Greenplum.md
rename to docs/en/connector-list/source/Greenplum.md
diff --git a/docs/en/connector-v2/source/HdfsFile.md b/docs/en/connector-list/source/HdfsFile.md
similarity index 100%
rename from docs/en/connector-v2/source/HdfsFile.md
rename to docs/en/connector-list/source/HdfsFile.md
diff --git a/docs/en/connector-v2/source/Hive.md b/docs/en/connector-list/source/Hive.md
similarity index 100%
rename from docs/en/connector-v2/source/Hive.md
rename to docs/en/connector-list/source/Hive.md
diff --git a/docs/en/connector-v2/source/Http.md b/docs/en/connector-list/source/Http.md
similarity index 100%
rename from docs/en/connector-v2/source/Http.md
rename to docs/en/connector-list/source/Http.md
diff --git a/docs/en/connector-v2/source/Hudi.md b/docs/en/connector-list/source/Hudi.md
similarity index 100%
rename from docs/en/connector-v2/source/Hudi.md
rename to docs/en/connector-list/source/Hudi.md
diff --git a/docs/en/connector-v2/source/Iceberg.md b/docs/en/connector-list/source/Iceberg.md
similarity index 100%
rename from docs/en/connector-v2/source/Iceberg.md
rename to docs/en/connector-list/source/Iceberg.md
diff --git a/docs/en/connector-v2/source/InfluxDB.md b/docs/en/connector-list/source/InfluxDB.md
similarity index 100%
rename from docs/en/connector-v2/source/InfluxDB.md
rename to docs/en/connector-list/source/InfluxDB.md
diff --git a/docs/en/connector-v2/source/IoTDB.md b/docs/en/connector-list/source/IoTDB.md
similarity index 100%
rename from docs/en/connector-v2/source/IoTDB.md
rename to docs/en/connector-list/source/IoTDB.md
diff --git a/docs/en/connector-v2/source/Jdbc.md b/docs/en/connector-list/source/Jdbc.md
similarity index 100%
rename from docs/en/connector-v2/source/Jdbc.md
rename to docs/en/connector-list/source/Jdbc.md
diff --git a/docs/en/connector-v2/source/Jira.md b/docs/en/connector-list/source/Jira.md
similarity index 100%
rename from docs/en/connector-v2/source/Jira.md
rename to docs/en/connector-list/source/Jira.md
diff --git a/docs/en/connector-v2/source/Klaviyo.md b/docs/en/connector-list/source/Klaviyo.md
similarity index 100%
rename from docs/en/connector-v2/source/Klaviyo.md
rename to docs/en/connector-list/source/Klaviyo.md
diff --git a/docs/en/connector-v2/source/Kudu.md b/docs/en/connector-list/source/Kudu.md
similarity index 100%
rename from docs/en/connector-v2/source/Kudu.md
rename to docs/en/connector-list/source/Kudu.md
diff --git a/docs/en/connector-v2/source/Lemlist.md b/docs/en/connector-list/source/Lemlist.md
similarity index 100%
rename from docs/en/connector-v2/source/Lemlist.md
rename to docs/en/connector-list/source/Lemlist.md
diff --git a/docs/en/connector-v2/source/LocalFile.md b/docs/en/connector-list/source/LocalFile.md
similarity index 100%
rename from docs/en/connector-v2/source/LocalFile.md
rename to docs/en/connector-list/source/LocalFile.md
diff --git a/docs/en/connector-v2/source/Maxcompute.md b/docs/en/connector-list/source/Maxcompute.md
similarity index 100%
rename from docs/en/connector-v2/source/Maxcompute.md
rename to docs/en/connector-list/source/Maxcompute.md
diff --git a/docs/en/connector-v2/source/MongoDB.md b/docs/en/connector-list/source/MongoDB.md
similarity index 100%
rename from docs/en/connector-v2/source/MongoDB.md
rename to docs/en/connector-list/source/MongoDB.md
diff --git a/docs/en/connector-v2/source/MyHours.md b/docs/en/connector-list/source/MyHours.md
similarity index 100%
rename from docs/en/connector-v2/source/MyHours.md
rename to docs/en/connector-list/source/MyHours.md
diff --git a/docs/en/connector-v2/source/MySQL-CDC.md b/docs/en/connector-list/source/MySQL-CDC.md
similarity index 100%
rename from docs/en/connector-v2/source/MySQL-CDC.md
rename to docs/en/connector-list/source/MySQL-CDC.md
diff --git a/docs/en/connector-v2/source/Neo4j.md b/docs/en/connector-list/source/Neo4j.md
similarity index 100%
rename from docs/en/connector-v2/source/Neo4j.md
rename to docs/en/connector-list/source/Neo4j.md
diff --git a/docs/en/connector-v2/source/Notion.md b/docs/en/connector-list/source/Notion.md
similarity index 100%
rename from docs/en/connector-v2/source/Notion.md
rename to docs/en/connector-list/source/Notion.md
diff --git a/docs/en/connector-v2/source/OneSignal.md b/docs/en/connector-list/source/OneSignal.md
similarity index 100%
rename from docs/en/connector-v2/source/OneSignal.md
rename to docs/en/connector-list/source/OneSignal.md
diff --git a/docs/en/connector-v2/source/OpenMldb.md b/docs/en/connector-list/source/OpenMldb.md
similarity index 100%
rename from docs/en/connector-v2/source/OpenMldb.md
rename to docs/en/connector-list/source/OpenMldb.md
diff --git a/docs/en/connector-v2/source/OssFile.md b/docs/en/connector-list/source/OssFile.md
similarity index 100%
rename from docs/en/connector-v2/source/OssFile.md
rename to docs/en/connector-list/source/OssFile.md
diff --git a/docs/en/connector-v2/source/OssJindoFile.md b/docs/en/connector-list/source/OssJindoFile.md
similarity index 100%
rename from docs/en/connector-v2/source/OssJindoFile.md
rename to docs/en/connector-list/source/OssJindoFile.md
diff --git a/docs/en/connector-v2/source/Persistiq.md b/docs/en/connector-list/source/Persistiq.md
similarity index 100%
rename from docs/en/connector-v2/source/Persistiq.md
rename to docs/en/connector-list/source/Persistiq.md
diff --git a/docs/en/connector-v2/source/Phoenix.md b/docs/en/connector-list/source/Phoenix.md
similarity index 100%
rename from docs/en/connector-v2/source/Phoenix.md
rename to docs/en/connector-list/source/Phoenix.md
diff --git a/docs/en/connector-v2/source/Rabbitmq.md b/docs/en/connector-list/source/Rabbitmq.md
similarity index 100%
rename from docs/en/connector-v2/source/Rabbitmq.md
rename to docs/en/connector-list/source/Rabbitmq.md
diff --git a/docs/en/connector-v2/source/Redis.md b/docs/en/connector-list/source/Redis.md
similarity index 100%
rename from docs/en/connector-v2/source/Redis.md
rename to docs/en/connector-list/source/Redis.md
diff --git a/docs/en/connector-v2/source/RocketMQ.md b/docs/en/connector-list/source/RocketMQ.md
similarity index 100%
rename from docs/en/connector-v2/source/RocketMQ.md
rename to docs/en/connector-list/source/RocketMQ.md
diff --git a/docs/en/connector-v2/source/S3File.md b/docs/en/connector-list/source/S3File.md
similarity index 100%
rename from docs/en/connector-v2/source/S3File.md
rename to docs/en/connector-list/source/S3File.md
diff --git a/docs/en/connector-v2/source/SftpFile.md b/docs/en/connector-list/source/SftpFile.md
similarity index 100%
rename from docs/en/connector-v2/source/SftpFile.md
rename to docs/en/connector-list/source/SftpFile.md
diff --git a/docs/en/connector-v2/source/Socket.md b/docs/en/connector-list/source/Socket.md
similarity index 100%
rename from docs/en/connector-v2/source/Socket.md
rename to docs/en/connector-list/source/Socket.md
diff --git a/docs/en/connector-v2/source/SqlServer-CDC.md b/docs/en/connector-list/source/SqlServer-CDC.md
similarity index 100%
rename from docs/en/connector-v2/source/SqlServer-CDC.md
rename to docs/en/connector-list/source/SqlServer-CDC.md
diff --git a/docs/en/connector-v2/source/StarRocks.md b/docs/en/connector-list/source/StarRocks.md
similarity index 100%
rename from docs/en/connector-v2/source/StarRocks.md
rename to docs/en/connector-list/source/StarRocks.md
diff --git a/docs/en/connector-v2/source/TDengine.md b/docs/en/connector-list/source/TDengine.md
similarity index 100%
rename from docs/en/connector-v2/source/TDengine.md
rename to docs/en/connector-list/source/TDengine.md
diff --git a/docs/en/connector-v2/source/common-options.md b/docs/en/connector-list/source/common-options.md
similarity index 100%
rename from docs/en/connector-v2/source/common-options.md
rename to docs/en/connector-list/source/common-options.md
diff --git a/docs/en/connector-v2/source/kafka.md b/docs/en/connector-list/source/kafka.md
similarity index 100%
rename from docs/en/connector-v2/source/kafka.md
rename to docs/en/connector-list/source/kafka.md
diff --git a/docs/en/connector-v2/source/pulsar.md b/docs/en/connector-list/source/pulsar.md
similarity index 100%
rename from docs/en/connector-v2/source/pulsar.md
rename to docs/en/connector-list/source/pulsar.md
diff --git a/docs/en/transform-v2/common-options.md b/docs/en/connector-list/transform/common-options.md
similarity index 100%
rename from docs/en/transform-v2/common-options.md
rename to docs/en/connector-list/transform/common-options.md
diff --git a/docs/en/transform-v2/copy.md b/docs/en/connector-list/transform/copy.md
similarity index 100%
rename from docs/en/transform-v2/copy.md
rename to docs/en/connector-list/transform/copy.md
diff --git a/docs/en/transform-v2/field-mapper.md b/docs/en/connector-list/transform/field-mapper.md
similarity index 100%
rename from docs/en/transform-v2/field-mapper.md
rename to docs/en/connector-list/transform/field-mapper.md
diff --git a/docs/en/transform-v2/filter-rowkind.md b/docs/en/connector-list/transform/filter-rowkind.md
similarity index 100%
rename from docs/en/transform-v2/filter-rowkind.md
rename to docs/en/connector-list/transform/filter-rowkind.md
diff --git a/docs/en/transform-v2/filter.md b/docs/en/connector-list/transform/filter.md
similarity index 100%
rename from docs/en/transform-v2/filter.md
rename to docs/en/connector-list/transform/filter.md
diff --git a/docs/en/transform-v2/replace.md b/docs/en/connector-list/transform/replace.md
similarity index 100%
rename from docs/en/transform-v2/replace.md
rename to docs/en/connector-list/transform/replace.md
diff --git a/docs/en/transform-v2/split.md b/docs/en/connector-list/transform/split.md
similarity index 100%
rename from docs/en/transform-v2/split.md
rename to docs/en/connector-list/transform/split.md
diff --git a/docs/en/transform-v2/sql-functions.md b/docs/en/connector-list/transform/sql-functions.md
similarity index 100%
rename from docs/en/transform-v2/sql-functions.md
rename to docs/en/connector-list/transform/sql-functions.md
diff --git a/docs/en/transform-v2/sql-udf.md b/docs/en/connector-list/transform/sql-udf.md
similarity index 100%
rename from docs/en/transform-v2/sql-udf.md
rename to docs/en/connector-list/transform/sql-udf.md
diff --git a/docs/en/transform-v2/sql.md b/docs/en/connector-list/transform/sql.md
similarity index 100%
rename from docs/en/transform-v2/sql.md
rename to docs/en/connector-list/transform/sql.md
diff --git a/docs/en/command/usage.mdx b/docs/en/contribution/command-usage.mdx
similarity index 100%
rename from docs/en/command/usage.mdx
rename to docs/en/contribution/command-usage.mdx
diff --git a/docs/en/contribution/contribute-plugin.md b/docs/en/contribution/contribute-plugin.md
index 7e1aebddf..efce5aee6 100644
--- a/docs/en/contribution/contribute-plugin.md
+++ b/docs/en/contribution/contribute-plugin.md
@@ -1,4 +1,4 @@
-# Contribute Connector-v2 Plugins
+# Connector Plugins
 
 If you want to contribute a Connector-V2 connector, please click the Connector-V2 Contribution Guide below for reference. It can help you get started with development more quickly.
 
diff --git a/docs/en/contribution/contribute-transform-v2-guide.md b/docs/en/contribution/contribute-transform.md
similarity index 99%
rename from docs/en/contribution/contribute-transform-v2-guide.md
rename to docs/en/contribution/contribute-transform.md
index 1ec2493a1..e962a9f87 100644
--- a/docs/en/contribution/contribute-transform-v2-guide.md
+++ b/docs/en/contribution/contribute-transform.md
@@ -1,4 +1,4 @@
-# Contribute Transform Guide
+# Transform Guide
 
 This document describes how to understand, develop and contribute a transform.
 
diff --git a/docs/en/connector-v2/Error-Quick-Reference-Manual.md b/docs/en/contribution/error-quick-reference-manual.md
similarity index 100%
rename from docs/en/connector-v2/Error-Quick-Reference-Manual.md
rename to docs/en/contribution/error-quick-reference-manual.md
diff --git a/docs/en/start-v2/locally/deployment.md b/docs/en/deploy/seatunnel-client.md
similarity index 99%
rename from docs/en/start-v2/locally/deployment.md
rename to docs/en/deploy/seatunnel-client.md
index c10009b5a..362b458a2 100644
--- a/docs/en/start-v2/locally/deployment.md
+++ b/docs/en/deploy/seatunnel-client.md
@@ -6,7 +6,7 @@ sidebar_position: 1
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-# Deployment
+# Setup & Launch
 
 ## Step 1: Prepare the environment
 
diff --git a/docs/en/start-v2/locally/quick-start-seatunnel-engine.md b/docs/en/deploy/seatunnel-zeta/local.md
similarity index 98%
rename from docs/en/start-v2/locally/quick-start-seatunnel-engine.md
rename to docs/en/deploy/seatunnel-zeta/local.md
index 5e43dbfdf..5725c3390 100644
--- a/docs/en/start-v2/locally/quick-start-seatunnel-engine.md
+++ b/docs/en/deploy/seatunnel-zeta/local.md
@@ -1,9 +1,9 @@
 ---
 
-sidebar_position: 2
+sidebar_position: 1
 -------------------
 
-# Quick Start With SeaTunnel Engine
+# Quick Start Locally
 
 ## Step 1: Deployment SeaTunnel And Connectors
 
diff --git a/docs/en/deploy/seatunnel-zeta/standalone.md b/docs/en/deploy/seatunnel-zeta/standalone.md
new file mode 100644
index 000000000..1a80c151e
--- /dev/null
+++ b/docs/en/deploy/seatunnel-zeta/standalone.md
@@ -0,0 +1,449 @@
+---
+
+sidebar_position: 2
+-------------------
+
+# Run Job With Cluster Mode
+
+This is the most recommended way to use SeaTunnel Engine in a production environment. This mode supports the full functionality of SeaTunnel Engine, and cluster mode offers better performance and stability.
+
+In cluster mode, the SeaTunnel Engine cluster must be deployed first; the client then submits jobs to the cluster for execution.
+
+## Deploy SeaTunnel Engine Cluster
+
+### 1. Download
+
+SeaTunnel Engine is the default engine of SeaTunnel. The SeaTunnel installation package already contains everything SeaTunnel Engine needs.
+
+### 2. Configure SEATUNNEL_HOME
+
+You can configure `SEATUNNEL_HOME` by adding a `/etc/profile.d/seatunnel.sh` file. The content of `/etc/profile.d/seatunnel.sh` is:
+
+```shell
+export SEATUNNEL_HOME=${seatunnel install path}
+export PATH=$PATH:$SEATUNNEL_HOME/bin
+```
+
+### 3. Configure SeaTunnel Engine JVM Options
+
+SeaTunnel Engine supports two ways to set JVM options.
+
+1. Add JVM Options to `$SEATUNNEL_HOME/bin/seatunnel-cluster.sh`.
+
+   Modify the `$SEATUNNEL_HOME/bin/seatunnel-cluster.sh` file and add `JAVA_OPTS="-Xms2G -Xmx2G"` in the first line.
+
+2. Add JVM options when starting SeaTunnel Engine, for example: `seatunnel-cluster.sh -DJvmOption="-Xms2G -Xmx2G"`
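+
+For instance, after applying the first method, the top of `$SEATUNNEL_HOME/bin/seatunnel-cluster.sh` would begin with the following line (the 2G heap sizes are only an example value):
+
+```shell
+JAVA_OPTS="-Xms2G -Xmx2G"
+```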
+
+### 4. Configure SeaTunnel Engine
+
+SeaTunnel Engine provides many features that need to be configured in `seatunnel.yaml`.
+
+#### 4.1 Backup count
+
+SeaTunnel Engine implements cluster management based on [Hazelcast IMDG](https://docs.hazelcast.com/imdg/4.1/). The cluster state data (job running state, resource state) is stored in a [Hazelcast IMap](https://docs.hazelcast.com/imdg/4.1/data-structures/map).
+The data saved in a Hazelcast IMap is distributed and stored across all nodes of the cluster. Hazelcast partitions the data stored in the IMap, and each partition can specify a number of backups.
+Therefore, SeaTunnel Engine can achieve cluster HA without using other services (for example, ZooKeeper).
+
+The `backup count` defines the number of synchronous backups. For example, if it is set to 1, the backup of a partition is placed on one other member; if it is set to 2, it is placed on two other members.
+
+We suggest setting `backup-count` to `max(1, min(5, N/2))`, where `N` is the number of cluster nodes.
+
+```yaml
+seatunnel:
+    engine:
+        backup-count: 1
+        # other config
+```
+
+#### 4.2 Slot service
+
+The number of slots determines the number of TaskGroups a cluster node can run in parallel. SeaTunnel Engine is a data synchronization engine, and most jobs are IO intensive.
+
+Dynamic slots are recommended.
+
+```yaml
+seatunnel:
+    engine:
+        slot-service:
+            dynamic-slot: true
+        # other config
+```
+
+#### 4.3 Checkpoint Manager
+
+Like Flink, SeaTunnel Engine supports the Chandy–Lamport algorithm. Therefore, SeaTunnel Engine can perform data synchronization without data loss or duplication.
+
+**interval**
+
+The interval between two checkpoints, in milliseconds. If the `checkpoint.interval` parameter is configured in the `env` section of the job config file, the value set here is overridden.
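+
+For example, a job config file could override the engine-level interval in its `env` block (a hypothetical sketch; the 10000 value is only an illustration):
+
+```hocon
+env {
+  # this per-job value takes precedence over seatunnel.yaml
+  checkpoint.interval = 10000
+}
+```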
+
+**timeout**
+
+The timeout of a checkpoint. If a checkpoint cannot be completed within the timeout period, a checkpoint failure is triggered and the job is restored.
+
+**max-concurrent**
+
+The maximum number of checkpoints that can be performed simultaneously.
+
+**tolerable-failure**
+
+The maximum number of retries after a checkpoint failure.
+
+Example
+
+```yaml
+seatunnel:
+    engine:
+        backup-count: 1
+        print-execution-info-interval: 10
+        slot-service:
+            dynamic-slot: true
+        checkpoint:
+            interval: 300000
+            timeout: 10000
+            max-concurrent: 1
+            tolerable-failure: 2
+```
+
+**checkpoint storage**
+
+##### Introduction
+
+Checkpointing is a fault-tolerance and recovery mechanism. It ensures that a running program can recover by itself even if it suddenly encounters an exception.
+
+##### Checkpoint Storage
+
+Checkpoint Storage is a storage mechanism for storing checkpoint data.
+
+SeaTunnel Engine supports the following checkpoint storage types:
+
+- HDFS (OSS, S3, HDFS, LocalFile)
+- LocalFile (native), deprecated: use HDFS (LocalFile) instead.
+
+We use the microkernel design pattern to separate the checkpoint storage module from the engine. This allows users to implement their own checkpoint storage modules.
+
+`checkpoint-storage-api` is the checkpoint storage module API, which defines the interface of the checkpoint storage module.
+
+If you want to implement your own checkpoint storage module, you need to implement `CheckpointStorage` and provide a corresponding `CheckpointStorageFactory` implementation.
+
+##### Checkpoint Storage Configuration
+
+The configuration of the `seatunnel-server` module is in the `seatunnel.yaml` file.
+
+```yaml
+
+seatunnel:
+    engine:
+        checkpoint:
+            storage:
+                type: hdfs # plugin name of checkpoint storage; we support hdfs (S3, local, hdfs). localfile (native local file) is the default, but it is deprecated.
+                # plugin configuration
+                plugin-config:
+                  namespace: # checkpoint storage parent path, the default value is /seatunnel/checkpoint/
+                  K1: V1 # other plugin configuration
+                  K2: V2 # other plugin configuration
+```
+
+Notice: namespace must end with "/".
+
+###### OSS
+
+Aliyun OSS is based on hdfs-file, so you can refer to the [hadoop oss docs](https://hadoop.apache.org/docs/stable/hadoop-aliyun/tools/hadoop-aliyun/index.html) to configure OSS.
+
+When interacting with OSS buckets, the OSS client needs credentials.
+The client supports multiple authentication mechanisms and can be configured as to which mechanisms to use and their order of use. Custom implementations of org.apache.hadoop.fs.aliyun.oss.AliyunCredentialsProvider may also be used.
+If you use AliyunCredentialsProvider, the credentials (which can be obtained from Aliyun Access Key Management) consist of an access key and a secret key.
+You can configure it like this:
+
+```yaml
+seatunnel:
+  engine:
+    checkpoint:
+      interval: 6000
+      timeout: 7000
+      max-concurrent: 5
+      tolerable-failure: 2
+      storage:
+        type: hdfs
+        max-retained: 3
+        plugin-config:
+          storage.type: oss
+          oss.bucket: your-bucket
+          fs.oss.accessKeyId: your-access-key
+          fs.oss.accessKeySecret: your-secret-key
+          fs.oss.endpoint: endpoint address
+          fs.oss.credentials.provider: org.apache.hadoop.fs.aliyun.oss.AliyunCredentialsProvider
+```
+
+For additional reading on the Hadoop Credential Provider API see: [Credential Provider API](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html).
+
+Aliyun oss Credential Provider implements see: [Auth Credential Providers](https://github.com/aliyun/aliyun-oss-java-sdk/tree/master/src/main/java/com/aliyun/oss/common/auth)
+
+###### S3
+
+S3 is based on hdfs-file, so you can refer to the [hadoop s3 docs](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html) to configure S3.
+
+Except when interacting with public S3 buckets, the S3A client needs credentials to interact with buckets.
+The client supports multiple authentication mechanisms and can be configured as to which mechanisms to use and their order of use. Custom implementations of com.amazonaws.auth.AWSCredentialsProvider may also be used.
+If you use SimpleAWSCredentialsProvider (credentials can be obtained from the Amazon Security Token Service), these consist of an access key and a secret key.
+You can configure it like this:
+
+```yaml
+
+seatunnel:
+    engine:
+        checkpoint:
+            interval: 6000
+            timeout: 7000
+            max-concurrent: 5
+            tolerable-failure: 2
+            storage:
+                type: hdfs
+                max-retained: 3
+                plugin-config:
+                    storage.type: s3
+                    s3.bucket: your-bucket
+                    fs.s3a.access.key: your-access-key
+                    fs.s3a.secret.key: your-secret-key
+                    fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
+```
+
+If you use `InstanceProfileCredentialsProvider`, which supports instance profile credentials when running in an EC2 VM, you can check [iam-roles-for-amazon-ec2](https://docs.aws.amazon.com/zh_cn/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
+You can configure it like this:
+
+```yaml
+
+seatunnel:
+  engine:
+    checkpoint:
+      interval: 6000
+      timeout: 7000
+      max-concurrent: 5
+      tolerable-failure: 2
+      storage:
+        type: hdfs
+        max-retained: 3
+        plugin-config:
+          storage.type: s3
+          s3.bucket: your-bucket
+          fs.s3a.endpoint: your-endpoint
+          fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.InstanceProfileCredentialsProvider
+```
+
+For additional reading on the Hadoop Credential Provider API see: [Credential Provider API](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html).
+
+###### HDFS
+
+If you use HDFS, you can configure it like this:
+
+```yaml
+seatunnel:
+  engine:
+    checkpoint:
+      storage:
+        type: hdfs
+        max-retained: 3
+        plugin-config:
+          storage.type: hdfs
+          fs.defaultFS: hdfs://localhost:9000
+          # if you use kerberos, you can configure it like this:
+          kerberosPrincipal: your-kerberos-principal
+          kerberosKeytab: your-kerberos-keytab  
+```
+
+###### LocalFile
+
+```yaml
+seatunnel:
+  engine:
+    checkpoint:
+      interval: 6000
+      timeout: 7000
+      max-concurrent: 5
+      tolerable-failure: 2
+      storage:
+        type: hdfs
+        max-retained: 3
+        plugin-config:
+          storage.type: hdfs
+          fs.defaultFS: file:/// # Ensure that the directory has write permission
+
+```
+
+
+
+### 5. Configure SeaTunnel Engine Server
+
+All SeaTunnel Engine server configuration is in the `hazelcast.yaml` file.
+
+#### 5.1 cluster-name
+
+SeaTunnel Engine nodes use the cluster name to determine whether another node belongs to the same cluster. If the cluster names of two nodes are different, SeaTunnel Engine rejects the service request.
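+
+For example, every node of a (hypothetical) cluster named `seatunnel` would carry the same value in its `hazelcast.yaml`:
+
+```yaml
+hazelcast:
+  cluster-name: seatunnel # must be identical on all nodes and on the client
+```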
+
+#### 5.2 Network
+
+Based on [Hazelcast](https://docs.hazelcast.com/imdg/4.1/clusters/discovery-mechanisms), a SeaTunnel Engine cluster is a network of cluster members that run SeaTunnel Engine Server. Cluster members automatically join together to form a cluster. This automatic joining takes place through the various discovery mechanisms the cluster members use to find each other.
+
+Please note that, after a cluster is formed, communication between cluster members is always via TCP/IP, regardless of the discovery mechanism used.
+
+SeaTunnel Engine uses the following discovery mechanisms.
+
+##### TCP Network
+
+If multicast is not the preferred way of discovery for your environment, then you can configure SeaTunnel Engine to be a full TCP/IP cluster. When you configure SeaTunnel Engine to discover members by TCP/IP, you must list all or a subset of the members' host names and/or IP addresses as cluster members. You do not have to list all of these cluster members, but at least one of the listed members has to be active in the cluster when a new member joins.
+
+To configure your Hazelcast to be a full TCP/IP cluster, set the following configuration elements. See the tcp-ip element section for the full descriptions of the TCP/IP discovery configuration elements.
+
+- Set the enabled attribute of the tcp-ip element to true.
+- Provide your member elements within the tcp-ip element.
+
+The following is an example declarative configuration.
+
+```yaml
+hazelcast:
+  network:
+    join:
+      tcp-ip:
+        enabled: true
+        member-list:
+          - machine1
+          - machine2
+          - machine3:5799
+          - 192.168.1.0-7
+          - 192.168.1.21
+```
+
+As shown above, you can provide IP addresses or host names for member elements. You can also give a range of IP addresses, such as `192.168.1.0-7`.
+
+Instead of providing members line-by-line as shown above, you also have the option to use the members element and write comma-separated IP addresses, as shown below.
+
+`<members>192.168.1.0-7,192.168.1.21</members>`
+
+If you do not provide ports for the members, Hazelcast automatically tries the ports `5701`, `5702` and so on.
+
+
+An example is like this `hazelcast.yaml`
+
+```yaml
+hazelcast:
+  cluster-name: seatunnel
+  network:
+    join:
+      tcp-ip:
+        enabled: true
+        member-list:
+          - hostname1
+    port:
+      auto-increment: false
+      port: 5801
+  properties:
+    hazelcast.logging.type: log4j2
+```
+
+TCP is the suggested way in a standalone SeaTunnel Engine cluster.
+
+On the other hand, Hazelcast provides some other service discovery methods. For details, please refer to [hazelcast network](https://docs.hazelcast.com/imdg/4.1/clusters/setting-up-clusters)
+
+#### 5.3 Map
+
+MapStores connect to an external data store only when they are configured on a map. This topic explains how to configure a map with a MapStore. For details, please refer to [hazelcast map](https://docs.hazelcast.com/imdg/4.2/data-structures/map)
+
+**type**
+
+The type of IMap persistence; currently only `hdfs` is supported.
+
+**namespace**
+
+It is used to distinguish the data storage locations of different businesses, like an OSS bucket name.
+
+**clusterName**
+
+This parameter is primarily used for cluster isolation. We can use it to distinguish different clusters, like cluster1 and cluster2, and it is also used to distinguish different businesses.
+
+**fs.defaultFS**
+
+We use the HDFS API to read/write files, so using this storage requires HDFS configuration.
+
+If you use HDFS, you can configure it like this:
+
+```yaml
+map:
+    engine*:
+       map-store:
+         enabled: true
+         initial-mode: EAGER
+         factory-class-name: org.apache.seatunnel.engine.server.persistence.FileMapStoreFactory
+         properties:
+           type: hdfs
+           namespace: /tmp/seatunnel/imap
+           clusterName: seatunnel-cluster
+           fs.defaultFS: hdfs://localhost:9000
+```
+
+If there is no HDFS and your cluster has only one node, you can configure it to use a local file like this:
+
+```yaml
+map:
+    engine*:
+       map-store:
+         enabled: true
+         initial-mode: EAGER
+         factory-class-name: org.apache.seatunnel.engine.server.persistence.FileMapStoreFactory
+         properties:
+           type: hdfs
+           namespace: /tmp/seatunnel/imap
+           clusterName: seatunnel-cluster
+           fs.defaultFS: file:///
+```
+
+### 6. Configure SeaTunnel Engine Client
+
+All SeaTunnel Engine client configuration is in `hazelcast-client.yaml`.
+
+#### 6.1 cluster-name
+
+The client must have the same `cluster-name` as the SeaTunnel Engine server. Otherwise, SeaTunnel Engine will reject the client request.
+
+#### 6.2 Network
+
+**cluster-members**
+
+All SeaTunnel Engine server node addresses need to be added here.
+
+```yaml
+hazelcast-client:
+  cluster-name: seatunnel
+  properties:
+      hazelcast.logging.type: log4j2
+  network:
+    cluster-members:
+      - hostname1:5801
+```
+
+### 7. Start SeaTunnel Engine Server Node
+
+The server can be started as a daemon with `-d`:
+
+```shell
+mkdir -p $SEATUNNEL_HOME/logs
+./bin/seatunnel-cluster.sh -d
+```
+
+The logs are written to `$SEATUNNEL_HOME/logs/seatunnel-engine-server.log`.
+
+### 8. Install SeaTunnel Engine Client
+
+You only need to copy the `$SEATUNNEL_HOME` directory from a SeaTunnel Engine node to the client node and configure `SEATUNNEL_HOME` as on the SeaTunnel Engine server node.
+
+
+## Submit Job
+
+```shell
+$SEATUNNEL_HOME/bin/seatunnel.sh --config $SEATUNNEL_HOME/config/v2.batch.config.template
+```
diff --git a/docs/en/start-v2/locally/quick-start-flink.md b/docs/en/deploy/with-flink.md
similarity index 100%
rename from docs/en/start-v2/locally/quick-start-flink.md
rename to docs/en/deploy/with-flink.md
diff --git a/docs/en/start-v2/locally/quick-start-spark.md b/docs/en/deploy/with-spark.md
similarity index 100%
rename from docs/en/start-v2/locally/quick-start-spark.md
rename to docs/en/deploy/with-spark.md
diff --git a/docs/en/connector-v2/formats/canal-json.md b/docs/en/manager/configuring-connector-formats/canal-json.md
similarity index 100%
rename from docs/en/connector-v2/formats/canal-json.md
rename to docs/en/manager/configuring-connector-formats/canal-json.md
diff --git a/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md b/docs/en/manager/configuring-connector-formats/cdc-compatible-debezium-json.md
similarity index 100%
rename from docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
rename to docs/en/manager/configuring-connector-formats/cdc-compatible-debezium-json.md
diff --git a/docs/en/connector-v2/Config-Encryption-Decryption.md b/docs/en/manager/configuring-encryption-decryption.md
similarity index 100%
rename from docs/en/connector-v2/Config-Encryption-Decryption.md
rename to docs/en/manager/configuring-encryption-decryption.md
diff --git a/docs/en/seatunnel-engine/savepoint.md b/docs/en/manager/using-on-standalone.md
similarity index 51%
rename from docs/en/seatunnel-engine/savepoint.md
rename to docs/en/manager/using-on-standalone.md
index 7bed7ba86..265128da1 100644
--- a/docs/en/seatunnel-engine/savepoint.md
+++ b/docs/en/manager/using-on-standalone.md
@@ -1,8 +1,10 @@
 ---
 
-sidebar_position: 5
+sidebar_position: 2
 -------------------
 
+# Using On Standalone
+
 # savepoint and restore with savepoint
 
 A savepoint is created using the checkpoint mechanism. It is a global mirror of the job execution status, which can be used for stopping and recovering jobs or SeaTunnel, for upgrades, etc.
@@ -22,3 +24,23 @@ After successful execution, the checkpoint data will be saved and the task will
 
 Resume from savepoint using jobId  
 ```./bin/seatunnel.sh -c {jobConfig} -r {jobId}```
+
+
+# Submit Job
+
+
+# Check Job List
+
+# Pause Job (savepoint)
+
+# Renew Job (from the nearest checkpoint)
+
+# Obtain Job Monitoring Information
+
+
+## What's More
+
+For now, you have taken a quick look at SeaTunnel. You can see [connector](../../connector-v2/source/FakeSource.md) to find all
+sources and sinks SeaTunnel supports. Or see [SeaTunnel Engine](../../seatunnel-engine/about.md) if you want to know more about SeaTunnel Engine.
+
+SeaTunnel also supports running jobs in Spark/Flink. You can see [Quick Start With Spark](quick-start-spark.md) or [Quick Start With Flink](quick-start-flink.md).
diff --git a/docs/en/other-engine/flink.md b/docs/en/other-engine/flink.md
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/en/other-engine/spark.md b/docs/en/other-engine/spark.md
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/en/concept/config.md b/docs/en/quick-started.md
similarity index 99%
rename from docs/en/concept/config.md
rename to docs/en/quick-started.md
index a341e484d..a0eea08a4 100644
--- a/docs/en/concept/config.md
+++ b/docs/en/quick-started.md
@@ -3,7 +3,7 @@
 sidebar_position: 2
 -------------------
 
-# Intro to config file
+# Set Up a Job Config File
 
 In SeaTunnel, the most important thing is the Config file, through which users can customize their own data
 synchronization requirements to maximize the potential of SeaTunnel. Next, I will show you how to
@@ -194,3 +194,4 @@ This is much more convenient when there is only one source.
 
 If you want to know the details of this format configuration, Please
 see [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md).
+
diff --git a/docs/en/Connector-v2-release-state.md b/docs/en/release-notes/connector-release-status.md
similarity index 99%
rename from docs/en/Connector-v2-release-state.md
rename to docs/en/release-notes/connector-release-status.md
index 74a183c73..f9cd82da5 100644
--- a/docs/en/Connector-v2-release-state.md
+++ b/docs/en/release-notes/connector-release-status.md
@@ -1,4 +1,4 @@
-# Connector Release Status
+# Connector Release
 
 SeaTunnel uses a grading system for connectors to help you understand what to expect from a connector:
 
diff --git a/docs/en/seatunnel-engine/checkpoint-storage.md b/docs/en/seatunnel-engine/checkpoint-storage.md
deleted file mode 100644
index d1f8d9746..000000000
--- a/docs/en/seatunnel-engine/checkpoint-storage.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-
-sidebar_position: 7
--------------------
-
-# Checkpoint Storage
-
-## Introduction
-
-Checkpoint is a fault-tolerant recovery mechanism. This mechanism ensures that when the program is running, it can recover itself even if it suddenly encounters an exception.
-
-### Checkpoint Storage
-
-Checkpoint Storage is a storage mechanism for storing checkpoint data.
-
-SeaTunnel Engine supports the following checkpoint storage types:
-
-- HDFS (OSS, S3, HDFS, LocalFile)
-- LocalFile (native) (deprecated: use HDFS (LocalFile) instead)
-
-We used the microkernel design pattern to separate the checkpoint storage module from the engine. This allows users to implement their own checkpoint storage modules.
-
-`checkpoint-storage-api` is the checkpoint storage module API, which defines the interface of the checkpoint storage module.
-
-If you want to implement your own checkpoint storage module, you need to implement the `CheckpointStorage` interface and provide the corresponding `CheckpointStorageFactory` implementation.
-
-### Checkpoint Storage Configuration
-
-The configuration of the `seatunnel-server` module is in the `seatunnel.yaml` file.
-
-```yaml
-
-seatunnel:
-    engine:
-        checkpoint:
-            storage:
-                type: hdfs # plugin name of checkpoint storage; we support hdfs (S3, local, hdfs); localfile (native local file) is the default, but it is deprecated
-                # plugin configuration
-                plugin-config:
-                  namespace: #checkpoint storage parent path, the default value is /seatunnel/checkpoint/
-                  K1: V1 # plugin other configuration
-                  K2: V2 # plugin other configuration   
-```
-
-Notice: namespace must end with "/".
-
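The trailing-slash rule above is easy to get wrong in hand-edited configs. As a hedged illustration (a hypothetical helper, not part of SeaTunnel), a namespace value could be normalized before being written into `seatunnel.yaml`:

```shell
# Hypothetical helper: ensure a checkpoint namespace ends with "/",
# as required by the checkpoint storage configuration above.
normalize_namespace() {
  case "$1" in
    */) printf '%s\n' "$1" ;;   # already ends with "/", keep as-is
    *)  printf '%s/\n' "$1" ;;  # append the trailing "/"
  esac
}

normalize_namespace /seatunnel/checkpoint    # prints /seatunnel/checkpoint/
normalize_namespace /seatunnel/checkpoint/   # prints /seatunnel/checkpoint/
```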
-#### OSS
-
-Aliyun OSS is based on hdfs-file, so you can refer to the [hadoop oss docs](https://hadoop.apache.org/docs/stable/hadoop-aliyun/tools/hadoop-aliyun/index.html) to configure OSS.
-
-When interacting with OSS buckets, the OSS client needs credentials.
-The client supports multiple authentication mechanisms and can be configured as to which mechanisms to use, and their order of use. Custom implementations of org.apache.hadoop.fs.aliyun.oss.AliyunCredentialsProvider may also be used.
-If you use AliyunCredentialsProvider (credentials can be obtained from Aliyun Access Key Management), these consist of an access key and a secret key.
-You can configure it like this:
-
-```yaml
-seatunnel:
-  engine:
-    checkpoint:
-      interval: 6000
-      timeout: 7000
-      max-concurrent: 5
-      tolerable-failure: 2
-      storage:
-        type: hdfs
-        max-retained: 3
-        plugin-config:
-          storage.type: oss
-          oss.bucket: your-bucket
-          fs.oss.accessKeyId: your-access-key
-          fs.oss.accessKeySecret: your-secret-key
-          fs.oss.endpoint: endpoint address
-          fs.oss.credentials.provider: org.apache.hadoop.fs.aliyun.oss.AliyunCredentialsProvider
-```
-
-For additional reading on the Hadoop Credential Provider API see: [Credential Provider API](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html).
-
-Aliyun oss Credential Provider implements see: [Auth Credential Providers](https://github.com/aliyun/aliyun-oss-java-sdk/tree/master/src/main/java/com/aliyun/oss/common/auth)
-
-#### S3
-
-S3 is based on hdfs-file, so you can refer to the [hadoop s3 docs](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html) to configure S3.
-
-Except when interacting with public S3 buckets, the S3A client needs credentials to interact with buckets.
-The client supports multiple authentication mechanisms and can be configured as to which mechanisms to use, and their order of use. Custom implementations of com.amazonaws.auth.AWSCredentialsProvider may also be used.
-If you use SimpleAWSCredentialsProvider (credentials can be obtained from the Amazon Security Token Service), these consist of an access key and a secret key.
-You can configure it like this:
-
-```yaml
-
-seatunnel:
-    engine:
-        checkpoint:
-            interval: 6000
-            timeout: 7000
-            max-concurrent: 5
-            tolerable-failure: 2
-            storage:
-                type: hdfs
-                max-retained: 3
-                plugin-config:
-                    storage.type: s3
-                    s3.bucket: your-bucket
-                    fs.s3a.access.key: your-access-key
-                    fs.s3a.secret.key: your-secret-key
-                    fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
-                    
-
-```
-
-If you use `InstanceProfileCredentialsProvider`, which supports instance profile credentials when running in an EC2 VM, see [iam-roles-for-amazon-ec2](https://docs.aws.amazon.com/zh_cn/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
-You can configure it like this:
-
-```yaml
-
-seatunnel:
-  engine:
-    checkpoint:
-      interval: 6000
-      timeout: 7000
-      max-concurrent: 5
-      tolerable-failure: 2
-      storage:
-        type: hdfs
-        max-retained: 3
-        plugin-config:
-          storage.type: s3
-          s3.bucket: your-bucket
-          fs.s3a.endpoint: your-endpoint
-          fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.InstanceProfileCredentialsProvider
-```
-
-For additional reading on the Hadoop Credential Provider API see: [Credential Provider API](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html).
-
-#### HDFS
-
-If you use HDFS, you can configure it like this:
-
-```yaml
-seatunnel:
-  engine:
-    checkpoint:
-      storage:
-        type: hdfs
-        max-retained: 3
-        plugin-config:
-          storage.type: hdfs
-          fs.defaultFS: hdfs://localhost:9000
-          # if you use kerberos, you can configure it like this:
-          kerberosPrincipal: your-kerberos-principal
-          kerberosKeytab: your-kerberos-keytab  
-```
-
-#### LocalFile
-
-```yaml
-seatunnel:
-  engine:
-    checkpoint:
-      interval: 6000
-      timeout: 7000
-      max-concurrent: 5
-      tolerable-failure: 2
-      storage:
-        type: hdfs
-        max-retained: 3
-        plugin-config:
-          storage.type: hdfs
-          fs.defaultFS: file:/// # Ensure that the directory has written permission 
-
-```
-
diff --git a/docs/en/seatunnel-engine/cluster-manager.md b/docs/en/seatunnel-engine/cluster-manager.md
deleted file mode 100644
index 508190393..000000000
--- a/docs/en/seatunnel-engine/cluster-manager.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-
-sidebar_position: 5
--------------------
-
-# SeaTunnel Engine Cluster Manager
-
diff --git a/docs/en/seatunnel-engine/cluster-mode.md b/docs/en/seatunnel-engine/cluster-mode.md
deleted file mode 100644
index 774eb4347..000000000
--- a/docs/en/seatunnel-engine/cluster-mode.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-
-sidebar_position: 3
--------------------
-
-# Run Job With Cluster Mode
-
-This is the most recommended way to use SeaTunnel Engine in the production environment. Full functionality of SeaTunnel Engine is supported in this mode, and cluster mode offers better performance and stability.
-
-In the cluster mode, the SeaTunnel Engine cluster needs to be deployed first, and the client will submit the job to the SeaTunnel Engine cluster for running.
-
-## Deploy SeaTunnel Engine Cluster
-
-To deploy a SeaTunnel Engine cluster, see [SeaTunnel Engine Cluster Deploy](deployment.md).
-
-## Submit Job
-
-```shell
-$SEATUNNEL_HOME/bin/seatunnel.sh --config $SEATUNNEL_HOME/config/v2.batch.config.template
-```
-
diff --git a/docs/en/seatunnel-engine/deployment.md b/docs/en/seatunnel-engine/deployment.md
deleted file mode 100644
index be38ac2db..000000000
--- a/docs/en/seatunnel-engine/deployment.md
+++ /dev/null
@@ -1,239 +0,0 @@
----
-
-sidebar_position: 4
--------------------
-
-# Deploying SeaTunnel Engine
-
-## 1. Download
-
-SeaTunnel Engine is the default engine of SeaTunnel. The installation package of SeaTunnel already contains all the contents of SeaTunnel Engine.
-
-## 2. Config SEATUNNEL_HOME
-
-You can configure `SEATUNNEL_HOME` by adding a `/etc/profile.d/seatunnel.sh` file. The content of `/etc/profile.d/seatunnel.sh` is:
-
-```
-export SEATUNNEL_HOME=${seatunnel install path}
-export PATH=$PATH:$SEATUNNEL_HOME/bin
-```
-
-## 3. Config SeaTunnel Engine JVM options
-
-SeaTunnel Engine supports two ways to set JVM options.
-
-1. Add JVM Options to `$SEATUNNEL_HOME/bin/seatunnel-cluster.sh`.
-
-   Modify the `$SEATUNNEL_HOME/bin/seatunnel-cluster.sh` file and add `JAVA_OPTS="-Xms2G -Xmx2G"` in the first line.
-
-2. Add JVM Options when starting SeaTunnel Engine. For example, `seatunnel-cluster.sh -DJvmOption="-Xms2G -Xmx2G"`
-
-## 4. Config SeaTunnel Engine
-
-SeaTunnel Engine provides many functions, which need to be configured in seatunnel.yaml.
-
-### 4.1 Backup count
-
-SeaTunnel Engine implements cluster management based on [Hazelcast IMDG](https://docs.hazelcast.com/imdg/4.1/). The cluster state data (job running state, resource state) is stored in [Hazelcast IMap](https://docs.hazelcast.com/imdg/4.1/data-structures/map).
-The data saved in Hazelcast IMap is distributed and stored across all nodes of the cluster. Hazelcast partitions the data stored in IMap, and each partition can specify the number of backups.
-Therefore, SeaTunnel Engine can achieve cluster HA without using other services (for example, ZooKeeper).
-
-The `backup count` is to define the number of synchronous backups. For example, if it is set to 1, backup of a partition will be placed on one other member. If it is 2, it will be placed on two other members.
-
-We suggest setting `backup-count` to `max(1, min(5, N/2))`, where `N` is the number of cluster nodes.
-
-```
-seatunnel:
-    engine:
-        backup-count: 1
-        # other config
-```
-
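As a sketch of the sizing rule above, clamping `N/2` into the range `[1, 5]`, here is a hypothetical shell helper (not part of SeaTunnel) that computes a suggested `backup-count` for an `N`-node cluster:

```shell
# Hypothetical sketch: clamp N/2 into [1, 5] to get a suggested
# backup-count for a cluster of N nodes.
suggested_backup_count() {
  local n=$1
  local half=$(( n / 2 ))
  local capped=$(( half < 5 ? half : 5 ))   # at most 5
  echo $(( capped > 1 ? capped : 1 ))       # at least 1
}

suggested_backup_count 1    # prints 1
suggested_backup_count 6    # prints 3
suggested_backup_count 20   # prints 5
```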
-### 4.2 Slot service
-
-The number of Slots determines the number of TaskGroups the cluster node can run in parallel. SeaTunnel Engine is a data synchronization engine and most jobs are IO intensive.
-
-Dynamic slot is suggested.
-
-```
-seatunnel:
-    engine:
-        slot-service:
-            dynamic-slot: true
-        # other config
-```
-
-### 4.3 Checkpoint Manager
-
-Like Flink, SeaTunnel Engine supports the Chandy–Lamport algorithm, so it can perform data synchronization without data loss or duplication.
-
-**interval**
-
-The interval between two checkpoints, in milliseconds. If the `checkpoint.interval` parameter is configured in the `env` section of the job config file, it overrides the value set here.
-
-**timeout**
-
-The timeout of a checkpoint. If a checkpoint cannot be completed within the timeout period, a checkpoint failure will be triggered and the job will be restored.
-
-**max-concurrent**
-
-The maximum number of checkpoints that can be performed simultaneously.
-
-**tolerable-failure**
-
-Maximum number of retries after checkpoint failure.
-
-Example
-
-```
-seatunnel:
-    engine:
-        backup-count: 1
-        print-execution-info-interval: 10
-        slot-service:
-            dynamic-slot: true
-        checkpoint:
-            interval: 300000
-            timeout: 10000
-            max-concurrent: 1
-            tolerable-failure: 2
-```
-
-**checkpoint storage**
-
-About the checkpoint storage, you can see [checkpoint storage](checkpoint-storage.md)
-
-## 5. Config SeaTunnel Engine Server
-
-All SeaTunnel Engine Server config in `hazelcast.yaml` file.
-
-### 5.1 cluster-name
-
-SeaTunnel Engine nodes use the cluster name to determine whether another node belongs to the same cluster. If the cluster names of two nodes differ, SeaTunnel Engine rejects service requests between them.
-
-### 5.2 Network
-
-Based on [Hazelcast](https://docs.hazelcast.com/imdg/4.1/clusters/discovery-mechanisms), a SeaTunnel Engine cluster is a network of cluster members that run SeaTunnel Engine Server. Cluster members automatically join together to form a cluster. This automatic joining takes place with various discovery mechanisms that the cluster members use to find each other.
-
-Please note that, after a cluster is formed, communication between cluster members is always via TCP/IP, regardless of the discovery mechanism used.
-
-SeaTunnel Engine uses the following discovery mechanisms.
-
-#### TCP
-
-You can configure SeaTunnel Engine to be a full TCP/IP cluster. See the [Discovering Members by TCP section](tcp.md) for configuration details.
-
-An example `hazelcast.yaml` looks like this:
-
-```yaml
-hazelcast:
-  cluster-name: seatunnel
-  network:
-    join:
-      tcp-ip:
-        enabled: true
-        member-list:
-          - hostname1
-    port:
-      auto-increment: false
-      port: 5801
-  properties:
-    hazelcast.logging.type: log4j2
-```
-
-TCP is the suggested discovery mechanism for a standalone SeaTunnel Engine cluster.
-
-On the other hand, Hazelcast provides some other service discovery methods. For details, please refer to [hazelcast network](https://docs.hazelcast.com/imdg/4.1/clusters/setting-up-clusters)
-
-### 5.3 Map
-
-MapStores connect to an external data store only when they are configured on a map. This topic explains how to configure a map with a MapStore. For details, please refer to [hazelcast map](https://docs.hazelcast.com/imdg/4.2/data-structures/map)
-
-**type**
-
-The type of IMap persistence; currently only `hdfs` is supported.
-
-**namespace**
-
-It is used to distinguish the data storage locations of different businesses, like an OSS bucket name.
-
-**clusterName**
-
-This parameter is primarily used for cluster isolation; you can use it to distinguish different clusters, such as cluster1
-and cluster2, and also to distinguish different businesses.
-
-**fs.defaultFS**
-
-We use the HDFS API to read/write files, so this storage type requires an HDFS configuration.
-
-If you use HDFS, you can configure it like this:
-
-```yaml
-map:
-    engine*:
-       map-store:
-         enabled: true
-         initial-mode: EAGER
-         factory-class-name: org.apache.seatunnel.engine.server.persistence.FileMapStoreFactory
-         properties:
-           type: hdfs
-           namespace: /tmp/seatunnel/imap
-           clusterName: seatunnel-cluster
-           fs.defaultFS: hdfs://localhost:9000
-```
-
-If there is no HDFS and your cluster has only one node, you can configure it to use a local file like this:
-
-```yaml
-map:
-    engine*:
-       map-store:
-         enabled: true
-         initial-mode: EAGER
-         factory-class-name: org.apache.seatunnel.engine.server.persistence.FileMapStoreFactory
-         properties:
-           type: hdfs
-           namespace: /tmp/seatunnel/imap
-           clusterName: seatunnel-cluster
-           fs.defaultFS: file:///
-```
-
-## 6. Config SeaTunnel Engine Client
-
-All SeaTunnel Engine Client config in `hazelcast-client.yaml`.
-
-### 6.1 cluster-name
-
-The Client must have the same `cluster-name` as the SeaTunnel Engine. Otherwise, SeaTunnel Engine will reject the client request.
-
-### 6.2 Network
-
-**cluster-members**
-
-All SeaTunnel Engine server node addresses need to be added here.
-
-```yaml
-hazelcast-client:
-  cluster-name: seatunnel
-  properties:
-      hazelcast.logging.type: log4j2
-  network:
-    cluster-members:
-      - hostname1:5801
-```
-
-## 7. Start SeaTunnel Engine Server Node
-
-The server can be started as a daemon with `-d`.
-
-```shell
-mkdir -p $SEATUNNEL_HOME/logs
-./bin/seatunnel-cluster.sh -d
-```
-
-The logs will be written to `$SEATUNNEL_HOME/logs/seatunnel-engine-server.log`.
-
-## 8. Install SeaTunnel Engine Client
-
-You only need to copy the `$SEATUNNEL_HOME` directory on the SeaTunnel Engine node to the Client node and configure `SEATUNNEL_HOME` as on the SeaTunnel Engine server node.
-
diff --git a/docs/en/seatunnel-engine/local-mode.md b/docs/en/seatunnel-engine/local-mode.md
deleted file mode 100644
index 558c3cd1d..000000000
--- a/docs/en/seatunnel-engine/local-mode.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-
-sidebar_position: 2
--------------------
-
-# Run Job With Local Mode
-
-This mode is only for testing.
-
-The most recommended way to use SeaTunnel Engine in the production environment is [Cluster Mode](cluster-mode.md).
-
-## Deploy SeaTunnel Engine Local Mode
-
-See the [SeaTunnel Engine Local Mode deployment reference](../start-v2/locally/deployment.md).
-
-## Change SeaTunnel Engine Config
-
-Update `auto-increment` to `true` in `$SEATUNNEL_HOME/config/hazelcast.yaml`.
-
-## Submit Job
-
-```shell
-$SEATUNNEL_HOME/bin/seatunnel.sh --config $SEATUNNEL_HOME/config/v2.batch.config.template -e local
-```
-
diff --git a/docs/en/seatunnel-engine/tcp.md b/docs/en/seatunnel-engine/tcp.md
deleted file mode 100644
index d680668d2..000000000
--- a/docs/en/seatunnel-engine/tcp.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-
-sidebar_position: 6
--------------------
-
-# TCP Network
-
-If multicast is not the preferred way of discovery for your environment, then you can configure SeaTunnel Engine to be a full TCP/IP cluster. When you configure SeaTunnel Engine to discover members by TCP/IP, you must list all or a subset of the members' host names and/or IP addresses as cluster members. You do not have to list all of these cluster members, but at least one of the listed members has to be active in the cluster when a new member joins.
-
-To configure your Hazelcast to be a full TCP/IP cluster, set the following configuration elements. See the tcp-ip element section for the full descriptions of the TCP/IP discovery configuration elements.
-
-- Set the enabled attribute of the tcp-ip element to true.
-- Provide your member elements within the tcp-ip element.
-
-The following is an example declarative configuration.
-
-```yaml
-hazelcast:
-  network:
-    join:
-      tcp-ip:
-        enabled: true
-        member-list:
-          - machine1
-          - machine2
-          - machine3:5799
-          - 192.168.1.0-7
-          - 192.168.1.21
-```
-
-As shown above, you can provide IP addresses or host names for member elements. You can also give a range of IP addresses, such as `192.168.1.0-7`.
-
-Instead of providing members line-by-line as shown above, you also have the option to use the members element and write comma-separated IP addresses, as shown below.
-
-`<members>192.168.1.0-7,192.168.1.21</members>`
-
-If you do not provide ports for the members, Hazelcast automatically tries the ports `5701`, `5702` and so on.
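To make the range notation concrete, the following hypothetical shell sketch (not part of Hazelcast or SeaTunnel) expands a last-octet range such as `192.168.1.0-7` into individual member addresses:

```shell
# Hypothetical sketch: expand a Hazelcast-style last-octet range
# such as 192.168.1.0-7 into individual member addresses.
expand_ip_range() {
  local base="${1%.*}"       # e.g. 192.168.1
  local range="${1##*.}"     # e.g. 0-7
  local start="${range%-*}"
  local end="${range#*-}"
  local i
  for i in $(seq "$start" "$end"); do
    printf '%s.%s\n' "$base" "$i"
  done
}

expand_ip_range 192.168.1.0-7   # prints 192.168.1.0 through 192.168.1.7
```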
diff --git a/docs/en/start-v2/docker/docker.md b/docs/en/start-v2/docker/docker.md
deleted file mode 100644
index fd927deab..000000000
--- a/docs/en/start-v2/docker/docker.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-
-sidebar_position: 3
--------------------
-
-# Set Up with Docker
-
-<!-- TODO -->
--->
diff --git a/docs/en/start-v2/kubernetes/kubernetes.mdx b/docs/en/start-v2/kubernetes/kubernetes.mdx
deleted file mode 100644
index e33bc131d..000000000
--- a/docs/en/start-v2/kubernetes/kubernetes.mdx
+++ /dev/null
@@ -1,295 +0,0 @@
----
-sidebar_position: 4
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Set Up with Kubernetes
-
-This section provides a quick guide to using SeaTunnel with Kubernetes.
-
-## Prerequisites
-
-We assume that you have a local installation of the following:
-
-- [docker](https://docs.docker.com/)
-- [kubernetes](https://kubernetes.io/)
-- [helm](https://helm.sh/docs/intro/quickstart/)
-
-So that the `kubectl` and `helm` commands are available on your local system.
-
-For Kubernetes, [minikube](https://minikube.sigs.k8s.io/docs/start/) is our choice; at the time of writing we are using version v1.23.3. You can start a cluster with the following command:
-
-```bash
-minikube start --kubernetes-version=v1.23.3
-```
-
-## Installation
-
-### SeaTunnel docker image
-
-To run the image with SeaTunnel, first create a `Dockerfile`:
-
-<Tabs
-  groupId="engine-type"
-  defaultValue="flink"
-  values={[
-    {label: 'Flink', value: 'flink'},
-  ]}>
-<TabItem value="flink">
-
-```Dockerfile
-FROM flink:1.13
-
-ENV SEATUNNEL_VERSION="2.3.0"
-ENV SEATUNNEL_HOME="/opt/seatunnel"
-
-RUN mkdir -p $SEATUNNEL_HOME
-
-RUN wget https://archive.apache.org/dist/incubator/seatunnel/${SEATUNNEL_VERSION}/apache-seatunnel-incubating-${SEATUNNEL_VERSION}-bin.tar.gz
-RUN tar -xzvf apache-seatunnel-incubating-${SEATUNNEL_VERSION}-bin.tar.gz
-
-RUN cp -r apache-seatunnel-incubating-${SEATUNNEL_VERSION}/* $SEATUNNEL_HOME/
-RUN rm -rf apache-seatunnel-incubating-${SEATUNNEL_VERSION}*
-```
-
-Then run the following commands to build the image:
-```bash
-docker build -t seatunnel:2.3.0-flink-1.13 -f Dockerfile .
-```
-Image `seatunnel:2.3.0-flink-1.13` needs to be present in the host (minikube) so that the deployment can take place.
-
-Load image to minikube via:
-```bash
-minikube image load seatunnel:2.3.0-flink-1.13
-```
-
-</TabItem>
-</Tabs>
-
-### Deploying the operator
-
-<Tabs
-  groupId="engine-type"
-  defaultValue="flink"
-  values={[
-    {label: 'Flink', value: 'flink'},
-  ]}>
-<TabItem value="flink">
-
-The steps below provide a quick walk-through on setting up the Flink Kubernetes Operator.   
-You can refer to [Flink Kubernetes Operator - Quick Start](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/try-flink-kubernetes-operator/quick-start/) for more details.
-
-> Notice: All the Kubernetes resources below are created in the default namespace.
-
-Install the certificate manager on your Kubernetes cluster to enable adding the webhook component (only needed once per Kubernetes cluster):
-
-```bash
-kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
-```
-Now you can deploy the latest stable Flink Kubernetes Operator version using the included Helm chart:
-
-```bash
-helm repo add flink-operator-repo https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.1/
-
-helm install flink-kubernetes-operator flink-operator-repo/flink-kubernetes-operator \
---set image.repository=apache/flink-kubernetes-operator
-```
-
-You may verify your installation via `kubectl`:
-
-```bash
-kubectl get pods
-NAME                                                   READY   STATUS    RESTARTS      AGE
-flink-kubernetes-operator-5f466b8549-mgchb             1/1     Running   3 (23h ago)   16d
-
-```
-
-</TabItem>
-</Tabs>
-
-## Run SeaTunnel Application
-
-**Run Application:** SeaTunnel already provides out-of-the-box [configurations](https://github.com/apache/incubator-seatunnel/tree/dev/config).
-
-<Tabs
-  groupId="engine-type"
-  defaultValue="flink"
-  values={[
-    {label: 'Flink', value: 'flink'},
-  ]}>
-<TabItem value="flink">
-
-In this guide we are going to use [seatunnel.streaming.conf](https://github.com/apache/incubator-seatunnel/blob/2.3.0-release/config/v2.streaming.conf.template):
-
-```conf
-env {
-  execution.parallelism = 1
-  job.mode = "STREAMING"
-  checkpoint.interval = 2000
-}
-
-source {
-    FakeSource {
-      result_table_name = "fake"
-      row.num = 160000
-      schema = {
-        fields {
-          name = "string"
-          age = "int"
-        }
-      }
-    }
-}
-
-transform {
-  FieldMapper {
-    source_table_name = "fake"
-    result_table_name = "fake1"
-    field_mapper = {
-      age = age
-      name = new_name
-    }
-  }
-}
-
-sink {
-  Console {
-    source_table_name = "fake1"
-  }
-}
-```
-
-Generate a ConfigMap named `seatunnel-config` in Kubernetes for `seatunnel.streaming.conf` so that we can mount the config content in the pod.
-```bash
-kubectl create cm seatunnel-config \
---from-file=seatunnel.streaming.conf=seatunnel.streaming.conf
-```
-
-Once the Flink Kubernetes Operator is running as seen in the previous steps, you are ready to submit a Flink (SeaTunnel) job:
-- Create `seatunnel-flink.yaml` FlinkDeployment manifest:
-```yaml
-apiVersion: flink.apache.org/v1beta1
-kind: FlinkDeployment
-metadata:
-  name: seatunnel-flink-streaming-example
-spec:
-  image: seatunnel:2.3.0-flink-1.13
-  flinkVersion: v1_13
-  flinkConfiguration:
-    taskmanager.numberOfTaskSlots: "2"
-  serviceAccount: flink
-  jobManager:
-    replicas: 1
-    resource:
-      memory: "1024m"
-      cpu: 1
-  taskManager:
-    resource:
-      memory: "1024m"
-      cpu: 1
-  podTemplate:
-    spec:
-      containers:
-        - name: flink-main-container
-          volumeMounts:
-            - name: seatunnel-config
-              mountPath: /data/seatunnel.streaming.conf
-              subPath: seatunnel.streaming.conf
-      volumes:
-        - name: seatunnel-config
-          configMap:
-            name: seatunnel-config
-            items:
-            - key: seatunnel.streaming.conf
-              path: seatunnel.streaming.conf
-  job:
-    jarURI: local:///opt/seatunnel/starter/seatunnel-flink-starter.jar
-    entryClass: org.apache.seatunnel.core.starter.flink.SeatunnelFlink
-    args: ["--config", "/data/seatunnel.streaming.conf"]
-    parallelism: 2
-    upgradeMode: stateless
-```
-
-- Run the example application:
-```bash
-kubectl apply -f seatunnel-flink.yaml
-```
-
-</TabItem>
-</Tabs>
-
-**See The Output**
-
-<Tabs
-  groupId="engine-type"
-  defaultValue="flink"
-  values={[
-    {label: 'Flink', value: 'flink'},
-  ]}>
-<TabItem value="flink">
-
-You may follow the logs of your job; after a successful startup (which can take on the order of a minute in a fresh environment, seconds afterwards) run:
-
-```bash
-kubectl logs -f deploy/seatunnel-flink-streaming-example
-```
-The output looks like this:
-
-```shell
-...
-2023-01-31 12:13:54,349 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: SeaTunnel FakeSource -> Sink Writer: Console (1/1) (1665d2d011b2f6cf6525c0e5e75ec251) switched from SCHEDULED to DEPLOYING.
-2023-01-31 12:13:56,684 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Deploying Source: SeaTunnel FakeSource -> Sink Writer: Console (1/1) (attempt #0) with attempt id 1665d2d011b2f6cf6525c0e5e75ec251 to seatunnel-flink-streaming-example-taskmanager-1-1 @ 100.103.244.106 (dataPort=39137) with allocation id fbe162650c4126649afcdaff00e46875
-2023-01-31 12:13:57,794 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: SeaTunnel FakeSource -> Sink Writer: Console (1/1) (1665d2d011b2f6cf6525c0e5e75ec251) switched from DEPLOYING to INITIALIZING.
-2023-01-31 12:13:58,203 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: SeaTunnel FakeSource -> Sink Writer: Console (1/1) (1665d2d011b2f6cf6525c0e5e75ec251) switched from INITIALIZING to RUNNING.
-```
-
-If an OOM error occurs in the log, you can decrease the `row.num` value in `seatunnel.streaming.conf`.
-
-To expose the Flink Dashboard you may add a port-forward rule:
-```bash
-kubectl port-forward svc/seatunnel-flink-streaming-example-rest 8081
-```
-Now the Flink Dashboard is accessible at [localhost:8081](http://localhost:8081).
-
-Or launch `minikube dashboard` for a web-based Kubernetes user interface.
-
-The content printed in the TaskManager Stdout log:
-```bash
-kubectl logs \
--l 'app in (seatunnel-flink-streaming-example), component in (taskmanager)' \
---tail=-1 \
--f
-```
-The output looks like this (your content may differ since we use `FakeSource` to automatically generate random stream data):
-
-```shell
-...
-subtaskIndex=0: row=159991 : VVgpp, 978840000
-subtaskIndex=0: row=159992 : JxrOC, 1493825495
-subtaskIndex=0: row=159993 : YmCZR, 654146216
-subtaskIndex=0: row=159994 : LdmUn, 643140261
-subtaskIndex=0: row=159995 : tURkE, 837012821
-subtaskIndex=0: row=159996 : uPDfd, 2021489045
-subtaskIndex=0: row=159997 : mjrdG, 2074957853
-subtaskIndex=0: row=159998 : xbeUi, 864518418
-subtaskIndex=0: row=159999 : sSWLb, 1924451911
-subtaskIndex=0: row=160000 : AuPlM, 1255017876
-```
-
-To stop your job and delete your FlinkDeployment you can simply:
-
-```bash
-kubectl delete -f seatunnel-flink.yaml
-```
-</TabItem>
-</Tabs>
-
-
-Happy SeaTunneling!
-
-## What's More
-
-By now you have taken a quick look at SeaTunnel. See [connector](/category/connector) to find all sources and sinks SeaTunnel supports,
-or see [deployment](../deployment.mdx) if you want to submit your application to another kind of engine cluster.
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 0ff7206de..edcd16e84 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -26,7 +26,7 @@
  Create as many sidebars as you want.
  */
 
-// @ts-check
+// @ts-nocheck
 
 /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
 const sidebars = {
@@ -45,57 +45,22 @@ const sidebars = {
      */
 
     "docs": [
-        "about",
-        {
-            "type": "category",
-            "label": "Quick Start - V2",
-            "items": [
-                {
-                    "type": "category",
-                    "label": "Start With Locally",
-                    "items": [
-                        {
-                            "type": "autogenerated",
-                            "dirName": "start-v2/locally"
-                        }
-                    ]
-                },
-                {
-                    "type": "category",
-                    "label": "Start With Docker",
-                    "items": [
-                        {
-                            "type": "autogenerated",
-                            "dirName": "start-v2/docker"
-                        }
-                    ]
-                },
-                {
-                    "type": "category",
-                    "label": "Start With K8s",
-                    "items": [
-                        {
-                            "type": "autogenerated",
-                            "dirName": "start-v2/kubernetes"
-                        }
-                    ]
-                }
-            ]
-        },
+        "about-seatunnel",
+        "about-zeta",
+        "quick-started",
         {
             "type": "category",
             "label": "Concept",
             "items": [
-                "concept/config",
-                "concept/connector-v2-features",
-                'concept/schema-feature',
-                'concept/JobEnvConfig'
+                "concept/env",
+                "concept/sink",
+                "concept/source",
+                "concept/transform"
             ]
         },
-        "Connector-v2-release-state",
         {
             "type": "category",
-            "label": "Connector-V2",
+            "label": "Connector List",
             "items": [
                 {
                     "type": "category",
@@ -111,7 +76,7 @@ const sidebars = {
                     "items": [
                         {
                             "type": "autogenerated",
-                            "dirName": "connector-v2/source"
+                            "dirName": "connector-list/source"
                         }
                     ]
                 },
@@ -129,72 +94,90 @@ const sidebars = {
                     "items": [
                         {
                             "type": "autogenerated",
-                            "dirName": "connector-v2/sink"
+                            "dirName": "connector-list/sink"
                         }
                     ]
                 },
-                "connector-v2/Error-Quick-Reference-Manual",
-                "connector-v2/Config-Encryption-Decryption"
+                {
+                    "type": "category",
+                    "label": "Transform",
+                    "link": {
+                        "type": "generated-index",
+                        "title": "Transform V2 of SeaTunnel",
+                        "description": "Lists all transform v2 connectors supported by Apache SeaTunnel for now.",
+                        "slug": "/category/transform-v2",
+                        "keywords": ["transform-v2"],
+                        "image": "/img/favicon.ico"
+                    },
+                    "items": [
+                        {
+                            "type": "autogenerated",
+                            "dirName": "connector-list/transform"
+                        }
+                    ]
+                }
             ]
         },
         {
             "type": "category",
-            "label": "Transform-V2",
-            "link": {
-                "type": "generated-index",
-                "title": "Transform V2 of SeaTunnel",
-                "description": "List all transform v2 supported Apache SeaTunnel for now.",
-                "slug": "/category/transform-v2",
-                "keywords": ["transform-v2"],
-                "image": "/img/favicon.ico"
-            },
+            "label": "Deploy",
             "items": [
                 {
-                    "type": "autogenerated",
-                    "dirName": "transform-v2"
-                }
-            ]
-        },
-        {
-            "type": "category",
-            "label": "Command",
-            "items": [
-                "command/usage"
+                    "type": "category",
+                    "label": "SeaTunnel Zeta",
+                    "items": [
+                        {
+                            "type": "autogenerated",
+                            "dirName": "deploy/seatunnel-zeta"
+                        }
+                    ]
+                },
+                "deploy/seatunnel-client",
+                "deploy/with-flink",
+                "deploy/with-spark"
             ]
         },
         {
             "type": "category",
-            "label": "SeaTunnel Engine",
+            "label": "Contribution",
             "items": [
-                "seatunnel-engine/about",
-                "seatunnel-engine/deployment",
-                "seatunnel-engine/local-mode",
-                "seatunnel-engine/cluster-mode",
-                "seatunnel-engine/checkpoint-storage",
-                "seatunnel-engine/rest-api",
-                "seatunnel-engine/tcp"
+                "contribution/coding-guide",
+                "contribution/command-usage",
+                "contribution/contribute-plugin",
+                "contribution/contribute-transform",
+                "contribution/new-license",
+                "contribution/error-quick-reference-manual",
+                "contribution/setup"
             ]
         },
         {
             "type": "category",
-            "label": "Other Engine",
+            "label": "Manager",
             "items": [
-                "other-engine/flink",
-                "other-engine/spark"
+                {
+                    "type": "category",
+                    "label": "Configuring Connector Formats",
+                    "items": [
+                        {
+                            "type": "autogenerated",
+                            "dirName": "manager/configuring-connector-formats"
+                        }
+                    ]
+                },
+                "manager/configuring-encryption-decryption",
+                "manager/using-on-standalone"
             ]
         },
         {
             type: 'category',
-            label: 'Contribution',
+            label: 'Release Notes',
             items: [
-                'contribution/setup',
-                'contribution/new-license',
-                'contribution/coding-guide',
-                'contribution/contribute-transform-v2-guide',
+                'release-notes/connector-release-status'
             ],
         },
+        "API",
         "faq"
     ]
 };
 
-module.exports = sidebars
+module.exports = sidebars
\ No newline at end of file