Posted to commits@dolphinscheduler.apache.org by ca...@apache.org on 2022/09/22 06:11:48 UTC

[dolphinscheduler] branch 3.1.0-prepare updated: [Cherry-Pick][3.1.0-Prepare] Cherry pick docs format fix for 3.1.0 (#12095)

This is an automated email from the ASF dual-hosted git repository.

caishunfeng pushed a commit to branch 3.1.0-prepare
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git


The following commit(s) were added to refs/heads/3.1.0-prepare by this push:
     new 0d16d7b323 [Cherry-Pick][3.1.0-Prepare] Cherry pick docs format fix for 3.1.0 (#12095)
0d16d7b323 is described below

commit 0d16d7b323589fc2fcf049a55d702b80d3b39f21
Author: Eric Gao <er...@gmail.com>
AuthorDate: Thu Sep 22 14:11:43 2022 +0800

    [Cherry-Pick][3.1.0-Prepare] Cherry pick docs format fix for 3.1.0 (#12095)
    
    * Cherry pick docs format fix for 3.1.0
    
    * Fix doc dead link
    
    * Fix sphinx dead link
---
 .github/PULL_REQUEST_TEMPLATE.md                   |  10 +-
 CONTRIBUTING.md                                    |  12 +-
 README.md                                          |  49 ++-
 README_zh_CN.md                                    |  31 +-
 deploy/README.md                                   |   1 +
 docs/docs/en/DSIP.md                               |   7 +-
 docs/docs/en/about/features.md                     |   3 +-
 docs/docs/en/about/glossary.md                     |   1 -
 docs/docs/en/about/hardware.md                     |  28 +-
 docs/docs/en/about/introduction.md                 |   2 +-
 docs/docs/en/architecture/cache.md                 |   2 +-
 docs/docs/en/architecture/configuration.md         |  13 +-
 docs/docs/en/architecture/design.md                |  93 +++---
 docs/docs/en/architecture/load-balance.md          |   1 +
 docs/docs/en/architecture/metadata.md              |   9 +-
 docs/docs/en/architecture/task-structure.md        | 288 +++++++++---------
 docs/docs/en/contribute/api-standard.md            |  25 +-
 docs/docs/en/contribute/api-test.md                |   4 +-
 docs/docs/en/contribute/architecture-design.md     |  80 ++---
 .../backend/mechanism/global-parameter.md          |   1 +
 .../en/contribute/backend/mechanism/overview.md    |   2 +-
 .../en/contribute/backend/mechanism/task/switch.md |   1 +
 docs/docs/en/contribute/backend/spi/alert.md       |  27 +-
 docs/docs/en/contribute/backend/spi/datasource.md  |   2 +-
 docs/docs/en/contribute/backend/spi/registry.md    |   5 +-
 docs/docs/en/contribute/backend/spi/task.md        |   2 +-
 docs/docs/en/contribute/e2e-test.md                |  72 ++---
 docs/docs/en/contribute/frontend-development.md    |  77 ++++-
 docs/docs/en/contribute/have-questions.md          |   3 +-
 docs/docs/en/contribute/join/DS-License.md         |   2 +-
 docs/docs/en/contribute/join/become-a-committer.md |   2 +-
 docs/docs/en/contribute/join/code-conduct.md       | 109 +++----
 docs/docs/en/contribute/join/contribute.md         |   6 +-
 docs/docs/en/contribute/join/document.md           |   6 +-
 docs/docs/en/contribute/join/issue.md              |   9 +-
 docs/docs/en/contribute/join/pull-request.md       |   8 +-
 docs/docs/en/contribute/join/review.md             |  81 ++---
 docs/docs/en/contribute/join/submit-code.md        |  31 +-
 docs/docs/en/contribute/join/subscribe.md          |   1 +
 docs/docs/en/contribute/join/unit-test.md          |  55 ++--
 docs/docs/en/contribute/log-specification.md       |   5 +-
 docs/docs/en/contribute/release/release-prepare.md |   7 +-
 docs/docs/en/contribute/release/release.md         |  24 +-
 .../docs/en/guide/alert/alert_plugin_user_guide.md |   4 +-
 docs/docs/en/guide/alert/dingtalk.md               |  27 +-
 docs/docs/en/guide/alert/email.md                  |   5 +-
 docs/docs/en/guide/alert/enterprise-webexteams.md  |  17 +-
 docs/docs/en/guide/alert/enterprise-wechat.md      |   4 +-
 docs/docs/en/guide/alert/feishu.md                 |   1 +
 docs/docs/en/guide/alert/http.md                   |  16 +-
 docs/docs/en/guide/alert/script.md                 |  14 +-
 docs/docs/en/guide/alert/telegram.md               |  25 +-
 docs/docs/en/guide/data-quality.md                 | 312 +++++++++----------
 docs/docs/en/guide/datasource/athena.md            |  19 +-
 docs/docs/en/guide/datasource/clickhouse.md        |  22 +-
 docs/docs/en/guide/datasource/db2.md               |  22 +-
 docs/docs/en/guide/datasource/hive.md              |  28 +-
 docs/docs/en/guide/datasource/mysql.md             |  20 +-
 docs/docs/en/guide/datasource/oracle.md            |  22 +-
 docs/docs/en/guide/datasource/postgresql.md        |  22 +-
 docs/docs/en/guide/datasource/presto.md            |  23 +-
 docs/docs/en/guide/datasource/redshift.md          |  22 +-
 docs/docs/en/guide/datasource/spark.md             |  22 +-
 docs/docs/en/guide/datasource/sqlserver.md         |  22 +-
 docs/docs/en/guide/expansion-reduction.md          | 124 ++++----
 docs/docs/en/guide/healthcheck.md                  |   1 +
 docs/docs/en/guide/howto/datasource-setting.md     |  11 +-
 docs/docs/en/guide/howto/general-setting.md        |   2 +-
 docs/docs/en/guide/installation/cluster.md         |   2 +-
 docs/docs/en/guide/installation/pseudo-cluster.md  |   7 +-
 docs/docs/en/guide/installation/standalone.md      |   2 +-
 docs/docs/en/guide/integration/rainbond.md         |  17 +-
 docs/docs/en/guide/metrics/metrics.md              |  34 +--
 docs/docs/en/guide/monitor.md                      |  14 +-
 docs/docs/en/guide/parameter/built-in.md           |  42 +--
 docs/docs/en/guide/parameter/context.md            |   2 +-
 docs/docs/en/guide/parameter/local.md              |   2 +-
 docs/docs/en/guide/project/project-list.md         |  18 +-
 docs/docs/en/guide/project/task-definition.md      |   4 +-
 docs/docs/en/guide/project/task-instance.md        |   2 +
 docs/docs/en/guide/project/workflow-definition.md  | 130 ++++----
 docs/docs/en/guide/project/workflow-instance.md    |  10 +-
 docs/docs/en/guide/project/workflow-relation.md    |   2 +-
 docs/docs/en/guide/resource/file-manage.md         |   5 +-
 docs/docs/en/guide/resource/intro.md               |   2 +-
 docs/docs/en/guide/resource/task-group.md          |  32 +-
 docs/docs/en/guide/security.md                     |  11 +-
 docs/docs/en/guide/start/docker.md                 |   5 +-
 docs/docs/en/guide/start/quick-start.md            |   2 +-
 docs/docs/en/guide/task/java.md                    |  48 +++
 docs/docs/en/guide/upgrade/incompatible.md         |   5 +-
 docs/docs/en/guide/upgrade/upgrade.md              |  38 +--
 docs/docs/en/history-versions.md                   |   1 +
 docs/docs/zh/DSIP.md                               |   7 +-
 docs/docs/zh/about/features.md                     |   1 +
 docs/docs/zh/about/glossary.md                     |   1 -
 docs/docs/zh/about/hardware.md                     |  32 +-
 docs/docs/zh/about/introduction.md                 |   2 +-
 docs/docs/zh/architecture/cache.md                 |   2 +-
 docs/docs/zh/architecture/configuration.md         |  17 +-
 docs/docs/zh/architecture/design.md                | 109 +++----
 docs/docs/zh/architecture/load-balance.md          |   2 +
 docs/docs/zh/architecture/metadata.md              |   6 +-
 docs/docs/zh/architecture/task-structure.md        | 329 ++++++++++-----------
 docs/docs/zh/contribute/api-standard.md            |  26 +-
 docs/docs/zh/contribute/api-test.md                |   4 +-
 docs/docs/zh/contribute/architecture-design.md     | 191 ++++++------
 .../zh/contribute/backend/mechanism/overview.md    |   2 +-
 .../zh/contribute/backend/mechanism/task/switch.md |   1 +
 docs/docs/zh/contribute/backend/spi/alert.md       |   9 +-
 docs/docs/zh/contribute/backend/spi/registry.md    |   6 +-
 docs/docs/zh/contribute/e2e-test.md                |  67 ++---
 docs/docs/zh/contribute/frontend-development.md    |  93 ++++--
 docs/docs/zh/contribute/have-questions.md          |   1 +
 docs/docs/zh/contribute/join/DS-License.md         |  10 +-
 docs/docs/zh/contribute/join/become-a-committer.md |   2 +-
 docs/docs/zh/contribute/join/code-conduct.md       | 111 +++----
 docs/docs/zh/contribute/join/commit-message.md     |  17 +-
 docs/docs/zh/contribute/join/contribute.md         |   5 +-
 docs/docs/zh/contribute/join/document.md           |   4 +-
 docs/docs/zh/contribute/join/issue.md              |  10 +-
 docs/docs/zh/contribute/join/microbench.md         |  48 +--
 docs/docs/zh/contribute/join/pull-request.md       |  16 +-
 docs/docs/zh/contribute/join/review.md             |  69 ++---
 docs/docs/zh/contribute/join/submit-code.md        |  56 ++--
 docs/docs/zh/contribute/join/subscribe.md          |   1 +
 docs/docs/zh/contribute/join/unit-test.md          |  16 +-
 docs/docs/zh/contribute/log-specification.md       |   3 +-
 docs/docs/zh/contribute/release/release-post.md    |   2 +-
 docs/docs/zh/contribute/release/release-prepare.md |   9 +-
 docs/docs/zh/contribute/release/release.md         |  15 +-
 docs/docs/zh/guide/alert/dingtalk.md               |  18 +-
 docs/docs/zh/guide/alert/email.md                  |   3 +-
 docs/docs/zh/guide/alert/enterprise-webexteams.md  |  11 +
 docs/docs/zh/guide/alert/enterprise-wechat.md      |   2 +-
 docs/docs/zh/guide/alert/feishu.md                 |   1 +
 docs/docs/zh/guide/alert/http.md                   |   9 +
 docs/docs/zh/guide/alert/script.md                 |   5 +
 docs/docs/zh/guide/alert/telegram.md               |  37 +--
 docs/docs/zh/guide/data-quality.md                 | 229 ++++++++------
 docs/docs/zh/guide/datasource/athena.md            |   2 +-
 docs/docs/zh/guide/expansion-reduction.md          | 134 ++++-----
 docs/docs/zh/guide/healthcheck.md                  |   1 +
 docs/docs/zh/guide/howto/datasource-setting.md     |   6 +-
 docs/docs/zh/guide/installation/pseudo-cluster.md  |   3 +-
 docs/docs/zh/guide/integration/rainbond.md         |  14 +-
 docs/docs/zh/guide/metrics/metrics.md              |  32 +-
 docs/docs/zh/guide/monitor.md                      |   4 +-
 docs/docs/zh/guide/parameter/built-in.md           |  37 +--
 docs/docs/zh/guide/project/task-definition.md      |   4 +-
 docs/docs/zh/guide/project/task-instance.md        |   2 +
 docs/docs/zh/guide/project/workflow-definition.md  | 119 ++++----
 docs/docs/zh/guide/project/workflow-instance.md    |  22 +-
 docs/docs/zh/guide/resource/file-manage.md         |   4 +-
 docs/docs/zh/guide/resource/intro.md               |   2 +-
 docs/docs/zh/guide/resource/task-group.md          |  10 +-
 docs/docs/zh/guide/resource/udf-manage.md          |   9 +-
 docs/docs/zh/guide/security.md                     |  30 +-
 docs/docs/zh/guide/start/quick-start.md            |  21 +-
 docs/docs/zh/guide/upgrade/incompatible.md         |   3 +-
 docs/docs/zh/guide/upgrade/upgrade.md              |  22 +-
 docs/docs/zh/history-versions.md                   |   2 +
 docs/img/tasks/demo/java_task02.png                | Bin 0 -> 286501 bytes
 dolphinscheduler-api-test/README.md                |   1 +
 dolphinscheduler-bom/README.md                     |  11 +-
 dolphinscheduler-e2e/README.md                     |   1 +
 .../pydolphinscheduler/DEVELOP.md                  |  43 ++-
 .../pydolphinscheduler/README.md                   |  36 +--
 .../pydolphinscheduler/RELEASE.md                  |  32 +-
 .../pydolphinscheduler/UPDATING.md                 |  35 +--
 dolphinscheduler-registry/README.md                |  24 +-
 .../dolphinscheduler-registry-etcd/README.md       |   3 +-
 .../dolphinscheduler-registry-mysql/README.md      |   4 +-
 .../dolphinscheduler-registry-zookeeper/README.md  |   2 +-
 dolphinscheduler-ui/README.md                      |   3 +-
 175 files changed, 2556 insertions(+), 2180 deletions(-)

diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 5417822bc6..ea2405e86f 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,6 +1,5 @@
 <!--Thanks very much for contributing to Apache DolphinScheduler. Please review https://dolphinscheduler.apache.org/en-us/community/development/pull-request.html before opening a pull request.-->
 
-
 ## Purpose of the pull request
 
 <!--(For example: This pull request adds checkstyle plugin).-->
@@ -8,8 +7,9 @@
 ## Brief change log
 
 <!--*(for example:)*
-  - *Add maven-checkstyle-plugin to root pom.xml*
+- *Add maven-checkstyle-plugin to root pom.xml*
 -->
+
 ## Verify this pull request
 
 <!--*(Please pick either of the following options)*-->
@@ -25,9 +25,9 @@ This pull request is already covered by existing tests, such as *(please describ
 This change added tests and can be verified as follows:
 
 <!--*(example:)*
-  - *Added dolphinscheduler-dao tests for end-to-end.*
-  - *Added CronUtilsTest to verify the change.*
-  - *Manually verified the change by testing locally.* -->
+- *Added dolphinscheduler-dao tests for end-to-end.*
+- *Added CronUtilsTest to verify the change.*
+- *Manually verified the change by testing locally.* -->
 
 (or)
 
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 1ea7469b38..c5802c5d01 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -6,11 +6,11 @@ Start by forking the dolphinscheduler GitHub repository, make changes in a branc
 
 There are three branches in the remote repository currently:
 
-  - `master` : normal delivery branch. After the stable version is released, the code for the stable version branch is merged into the master branch.
-            
-  - `dev` : daily development branch. The daily development branch, the newly submitted code can pull requests to this branch.
-  
-  - `x.x.x-release` : the stable release version.
+- `master` : normal delivery branch. After the stable version is released, the code for the stable version branch is merged into the master branch.
+
+- `dev` : daily development branch. The daily development branch, the newly submitted code can pull requests to this branch.
+
+- `x.x.x-release` : the stable release version.
 
 So, you should fork the `dev` branch.
 
@@ -40,7 +40,6 @@ There will be two repositories at this time: origin (your own warehouse) and ups
 
 Get/update remote repository code (already the latest code, skip it).
 
-
 ```sh
 git fetch upstream
 ```
@@ -91,7 +90,6 @@ After submitting changes to your remote repository, you should click on the new
 <img src = "http://geek.analysys.cn/static/upload/221/2019-04-02/90f3abbf-70ef-4334-b8d6-9014c9cf4c7f.png" width ="60%"/>
 </p>
 
-
 Select the modified local branch and the branch to merge past to create a pull request.
 
 <p align = "center">
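
The CONTRIBUTING.md hunks above reference the fork-and-sync workflow (`git fetch upstream`, forking the `dev` branch). A minimal sketch of that workflow, assuming the fork is configured as `origin` and the Apache repository as `upstream`, exactly as CONTRIBUTING.md describes:

```bash
# Keep a fork's dev branch in sync before opening a pull request
# (remote names "origin" and "upstream" are assumed, per CONTRIBUTING.md)
git fetch upstream                 # get the latest code from the Apache repository
git checkout dev                   # the daily development branch that PRs target
git merge upstream/dev             # bring the local dev branch up to date
git push origin dev                # update the fork
git checkout -b my-feature-branch  # branch off dev for the actual change
```
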
diff --git a/README.md b/README.md
index a2e6c3c2fd..a14bce1bc6 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 Dolphin Scheduler Official Website
 [dolphinscheduler.apache.org](https://dolphinscheduler.apache.org)
-============
+==================================================================
 
 [![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
 [![codecov](https://codecov.io/gh/apache/dolphinscheduler/branch/dev/graph/badge.svg)](https://codecov.io/gh/apache/dolphinscheduler/branch/dev)
@@ -8,9 +8,6 @@ Dolphin Scheduler Official Website
 [![Twitter Follow](https://img.shields.io/twitter/follow/dolphinschedule.svg?style=social&label=Follow)](https://twitter.com/dolphinschedule)
 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://s.apache.org/dolphinscheduler-slack)
 
-
-
-
 [![Stargazers over time](https://starchart.cc/apache/dolphinscheduler.svg)](https://starchart.cc/apache/dolphinscheduler)
 
 [![EN doc](https://img.shields.io/badge/document-English-blue.svg)](README.md)
@@ -21,35 +18,35 @@ Dolphin Scheduler Official Website
 DolphinScheduler is a distributed and extensible workflow scheduler platform with powerful DAG visual interfaces, dedicated to solving complex job dependencies in the data pipeline and providing various types of jobs available `out of the box`.
 
 Its main objectives are as follows:
- -  Highly Reliable,
+-  Highly Reliable,
 DolphinScheduler adopts a decentralized multi-master and multi-worker architecture design, which naturally supports easy expansion and high availability (not restricted by a single point of bottleneck), and its performance increases linearly with the increase of machines
- - High performance, supporting tens of millions of tasks every day
- - Support multi-tenant.
- - Cloud Native, DolphinScheduler supports multi-cloud/data center workflow management, also
+- High performance, supporting tens of millions of tasks every day
+- Support multi-tenant.
+- Cloud Native, DolphinScheduler supports multi-cloud/data center workflow management, also
 supports Kubernetes, Docker deployment and custom task types, distributed
 scheduling, with overall scheduling capability increased linearly with the
 scale of the cluster
- - Support various task types: Shell, MR, Spark, SQL (MySQL, PostgreSQL, hive, spark SQL), Python, Sub_Process, Procedure, etc.
- - Support scheduling of workflows and dependencies, manual scheduling to pause/stop/recover task, support failure task retry/alarm, recover specified nodes from failure, kill task, etc.
- - Associate the tasks according to the dependencies of the tasks in a DAG graph, which can visualize the running state of the task in real-time.
- - WYSIWYG online editing tasks
- - Support the priority of workflows & tasks, task failover, and task timeout alarm or failure.
- - Support workflow global parameters and node customized parameter settings.
- - Support online upload/download/management of resource files, etc. Support online file creation and editing.
- - Support task log online viewing and scrolling and downloading, etc.
- - Support the viewing of Master/Worker CPU load, memory, and CPU usage metrics.
- - Support displaying workflow history in tree/Gantt chart, as well as statistical analysis on the task status & process status in each workflow.
- - Support back-filling data.
- - Support internationalization.
- - More features waiting for partners to explore...
+- Support various task types: Shell, MR, Spark, SQL (MySQL, PostgreSQL, hive, spark SQL), Python, Sub_Process, Procedure, etc.
+- Support scheduling of workflows and dependencies, manual scheduling to pause/stop/recover task, support failure task retry/alarm, recover specified nodes from failure, kill task, etc.
+- Associate the tasks according to the dependencies of the tasks in a DAG graph, which can visualize the running state of the task in real-time.
+- WYSIWYG online editing tasks
+- Support the priority of workflows & tasks, task failover, and task timeout alarm or failure.
+- Support workflow global parameters and node customized parameter settings.
+- Support online upload/download/management of resource files, etc. Support online file creation and editing.
+- Support task log online viewing and scrolling and downloading, etc.
+- Support the viewing of Master/Worker CPU load, memory, and CPU usage metrics.
+- Support displaying workflow history in tree/Gantt chart, as well as statistical analysis on the task status & process status in each workflow.
+- Support back-filling data.
+- Support internationalization.
+- More features waiting for partners to explore...
 
 ## What's in DolphinScheduler
 
- Stability | Accessibility | Features | Scalability |
- --------- | ------------- | -------- | ------------|
-Decentralized multi-master and multi-worker | Visualization of workflow key information, such as task status, task type, retry times, task operation machine information, visual variables, and so on at a glance.  |  Support pause, recover operation | Support customized task types
-support HA | Visualization of all workflow operations, dragging tasks to draw DAGs, configuring data sources and resources. At the same time, for third-party systems, provide API mode operations. | Users on DolphinScheduler can achieve many-to-one or one-to-one mapping relationship through tenants and Hadoop users, which is very important for scheduling large data jobs.  | The scheduler supports distributed scheduling, and the overall scheduling capability will increase linearly with the [...]
-Overload processing: By using the task queue mechanism, the number of schedulable tasks on a single machine can be flexibly configured. Machine jam can be avoided with high tolerance to numbers of tasks cached in task queue. | One-click deployment | Support traditional shell tasks, and big data platform task scheduling: MR, Spark, SQL (MySQL, PostgreSQL, hive, spark SQL), Python, Procedure, Sub_Process |  |
+|                                                                                                            Stability                                                                                                             |                                                                                     Accessibility                                                                                      |                                                                                [...]
+|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------- [...]
+| Decentralized multi-master and multi-worker                                                                                                                                                                                      | Visualization of workflow key information, such as task status, task type, retry times, task operation machine information, visual variables, and so on at a glance.                   |  Support pause, recover operation                                              [...]
+| support HA                                                                                                                                                                                                                       | Visualization of all workflow operations, dragging tasks to draw DAGs, configuring data sources and resources. At the same time, for third-party systems, provide API mode operations. | Users on DolphinScheduler can achieve many-to-one or one-to-one mapping relati [...]
+| Overload processing: By using the task queue mechanism, the number of schedulable tasks on a single machine can be flexibly configured. Machine jam can be avoided with high tolerance to numbers of tasks cached in task queue. | One-click deployment                                                                                                                                                                   | Support traditional shell tasks, and big data platform task scheduling: MR, Sp [...]
 
 ## User Interface Screenshots
 
diff --git a/README_zh_CN.md b/README_zh_CN.md
index c5058eac15..2226b9edba 100644
--- a/README_zh_CN.md
+++ b/README_zh_CN.md
@@ -1,12 +1,11 @@
 Dolphin Scheduler Official Website
 [dolphinscheduler.apache.org](https://dolphinscheduler.apache.org)
-============
+==================================================================
 
 [![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
 [![codecov](https://codecov.io/gh/apache/dolphinscheduler/branch/dev/graph/badge.svg)](https://codecov.io/gh/apache/dolphinscheduler/branch/dev)
 [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache-dolphinscheduler&metric=alert_status)](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
 
-
 [![Stargazers over time](https://starchart.cc/apache/dolphinscheduler.svg)](https://starchart.cc/apache/dolphinscheduler)
 
 [![CN doc](https://img.shields.io/badge/文档-中文版-blue.svg)](README_zh_CN.md)
@@ -18,20 +17,20 @@ Dolphin Scheduler Official Website
 
 其主要目标如下:
 
- - 以DAG图的方式将Task按照任务的依赖关系关联起来,可实时可视化监控任务的运行状态
- - 支持丰富的任务类型:Shell、MR、Spark、SQL(mysql、postgresql、hive、sparksql)、Python、Sub_Process、Procedure等
- - 支持工作流定时调度、依赖调度、手动调度、手动暂停/停止/恢复,同时支持失败重试/告警、从指定节点恢复失败、Kill任务等操作
- - 支持工作流优先级、任务优先级及任务的故障转移及任务超时告警/失败
- - 支持工作流全局参数及节点自定义参数设置
- - 支持资源文件的在线上传/下载,管理等,支持在线文件创建、编辑
- - 支持任务日志在线查看及滚动、在线下载日志等
- - 实现集群HA,通过Zookeeper实现Master集群和Worker集群去中心化
- - 支持对`Master/Worker` cpu load,memory,cpu在线查看
- - 支持工作流运行历史树形/甘特图展示、支持任务状态统计、流程状态统计
- - 支持补数
- - 支持多租户
- - 支持国际化
- - 还有更多等待伙伴们探索
+- 以DAG图的方式将Task按照任务的依赖关系关联起来,可实时可视化监控任务的运行状态
+- 支持丰富的任务类型:Shell、MR、Spark、SQL(mysql、postgresql、hive、sparksql)、Python、Sub_Process、Procedure等
+- 支持工作流定时调度、依赖调度、手动调度、手动暂停/停止/恢复,同时支持失败重试/告警、从指定节点恢复失败、Kill任务等操作
+- 支持工作流优先级、任务优先级及任务的故障转移及任务超时告警/失败
+- 支持工作流全局参数及节点自定义参数设置
+- 支持资源文件的在线上传/下载,管理等,支持在线文件创建、编辑
+- 支持任务日志在线查看及滚动、在线下载日志等
+- 实现集群HA,通过Zookeeper实现Master集群和Worker集群去中心化
+- 支持对`Master/Worker` cpu load,memory,cpu在线查看
+- 支持工作流运行历史树形/甘特图展示、支持任务状态统计、流程状态统计
+- 支持补数
+- 支持多租户
+- 支持国际化
+- 还有更多等待伙伴们探索
 
 ## 系统部分截图
 
diff --git a/deploy/README.md b/deploy/README.md
index c1b8fa5434..925c40530c 100644
--- a/deploy/README.md
+++ b/deploy/README.md
@@ -2,3 +2,4 @@
 
 * [Start Up DolphinScheduler with Docker](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/start/docker.html)
 * [Start Up DolphinScheduler with Kubernetes](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html)
+
diff --git a/docs/docs/en/DSIP.md b/docs/docs/en/DSIP.md
index 07d617875e..69475f0804 100644
--- a/docs/docs/en/DSIP.md
+++ b/docs/docs/en/DSIP.md
@@ -55,11 +55,11 @@ Here is the template for mail
 
   ```text
   Hi community,
-  
+
   <CHANGE-TO-YOUR-PROPOSAL-DETAIL>
-  
+
   I already add a GitHub Issue for my proposal, which you could see in <CHANGE-TO-YOUR-GITHUB-ISSUE-LINK>.
-  
+
   Looking forward any feedback for this thread.
   ```
 
@@ -89,3 +89,4 @@ closed and transfer from [current DSIPs][current-DSIPs] to [past DSIPs][past-DSI
 [github-issue-choose]: https://github.com/apache/dolphinscheduler/issues/new/choose
 [mail-to-dev]: mailto:dev@dolphinscheduler.apache.org
 [DSIP-1]: https://github.com/apache/dolphinscheduler/issues/6407
+
diff --git a/docs/docs/en/about/features.md b/docs/docs/en/about/features.md
index 75393ce142..e45f75d565 100644
--- a/docs/docs/en/about/features.md
+++ b/docs/docs/en/about/features.md
@@ -16,4 +16,5 @@
 
 ## High Scalability
 
-- **Scalability**: Supports multitenancy and online resource management. Stable operation of 100,000 data tasks per day is supported.
\ No newline at end of file
+- **Scalability**: Supports multitenancy and online resource management. Stable operation of 100,000 data tasks per day is supported.
+
diff --git a/docs/docs/en/about/glossary.md b/docs/docs/en/about/glossary.md
index f8ad9355bc..dc3df7bb5c 100644
--- a/docs/docs/en/about/glossary.md
+++ b/docs/docs/en/about/glossary.md
@@ -71,4 +71,3 @@ process fails and ends
 From the perspective of scheduling, this article preliminarily introduces the architecture principles and implementation
 ideas of the big data distributed workflow scheduling system-DolphinScheduler. To be continued
 
-
diff --git a/docs/docs/en/about/hardware.md b/docs/docs/en/about/hardware.md
index f67066e8c9..b10a0b6880 100644
--- a/docs/docs/en/about/hardware.md
+++ b/docs/docs/en/about/hardware.md
@@ -6,15 +6,15 @@ This section briefs about the hardware requirements for DolphinScheduler. Dolphi
 
 The Linux operating systems specified below can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
 
-| Operating System | Version         |
-| :----------------------- | :----------: |
-| Red Hat Enterprise Linux | 7.0 and above   |
-| CentOS                   | 7.0 and above   |
-| Oracle Enterprise Linux  | 7.0 and above   |
+| Operating System         |     Version     |
+|:-------------------------|:---------------:|
+| Red Hat Enterprise Linux |  7.0 and above  |
+| CentOS                   |  7.0 and above  |
+| Oracle Enterprise Linux  |  7.0 and above  |
 | Ubuntu LTS               | 16.04 and above |
 
 > **Note:**
->The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
+> The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
 
 ## Server Configuration
 
@@ -23,8 +23,8 @@ DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architectu
 ### Production Environment
 
 | **CPU** | **MEM** | **HD** | **NIC** | **Num** |
-| --- | --- | --- | --- | --- |
-| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
+|---------|---------|--------|---------|---------|
+| 4 core+ | 8 GB+   | SAS    | GbE     | 1+      |
 
 > **Note:**
 > - The above recommended configuration is the minimum configuration for deploying DolphinScheduler. Higher configuration is strongly recommended for production environments.
@@ -34,11 +34,11 @@ DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architectu
 
 DolphinScheduler provides the following network port configurations for normal operation:
 
-| Server | Port | Desc |
-|  --- | --- | --- |
-| MasterServer |  5678  | not the communication port, require the native ports do not conflict |
-| WorkerServer | 1234  | not the communication port, require the native ports do not conflict |
-| ApiApplicationServer |  12345 | backend communication port |
+|        Server        | Port  |                                 Desc                                 |
+|----------------------|-------|----------------------------------------------------------------------|
+| MasterServer         | 5678  | not the communication port, require the native ports do not conflict |
+| WorkerServer         | 1234  | not the communication port, require the native ports do not conflict |
+| ApiApplicationServer | 12345 | backend communication port                                           |
 
 > **Note:**
 > - MasterServer and WorkerServer do not need to enable communication between the networks. As long as the local ports do not conflict.
@@ -46,4 +46,4 @@ DolphinScheduler provides the following network port configurations for normal o
 
 ## Browser Requirements
 
-The minimum supported version of Google Chrome is version 85, but version 90 or above is recommended.
\ No newline at end of file
+The minimum supported version of Google Chrome is version 85, but version 90 or above is recommended.
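
As a quick sanity check of the network port table reformatted above (5678, 1234 and 12345 by default), something like the following can be run on each host before deployment; this is an illustrative sketch, not part of the documented installation steps:

```bash
# Verify that the default MasterServer, WorkerServer and ApiApplicationServer ports are free
for port in 5678 1234 12345; do
  if ss -ltn 2>/dev/null | grep -q ":${port}[[:space:]]"; then
    echo "port ${port} is already in use"
  else
    echo "port ${port} is free"
  fi
done
```
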
diff --git a/docs/docs/en/about/introduction.md b/docs/docs/en/about/introduction.md
index 059401a4ac..4bc7ee49af 100644
--- a/docs/docs/en/about/introduction.md
+++ b/docs/docs/en/about/introduction.md
@@ -4,4 +4,4 @@ Apache DolphinScheduler provides a distributed and easy to expand visual workflo
 
 Apache DolphinScheduler aims to solve complex big data task dependencies and to trigger relationships in data OPS orchestration for various big data applications. Solves the intricate dependencies of data R&D ETL and the inability to monitor the health status of tasks. DolphinScheduler assembles tasks in the Directed Acyclic Graph (DAG) streaming mode, which can monitor the execution status of tasks in time, and supports operations like retry, recovery failure from specified nodes, pause [...]
 
-![Apache DolphinScheduler](../../../img/introduction_ui.png)
\ No newline at end of file
+![Apache DolphinScheduler](../../../img/introduction_ui.png)
diff --git a/docs/docs/en/architecture/cache.md b/docs/docs/en/architecture/cache.md
index 3885dddd24..6084a5cc65 100644
--- a/docs/docs/en/architecture/cache.md
+++ b/docs/docs/en/architecture/cache.md
@@ -39,4 +39,4 @@ Note: the final strategy for cache update comes from the expiration strategy con
 
 The sequence diagram shows below:
 
-<img src="../../../img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
\ No newline at end of file
+<img src="../../../img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
diff --git a/docs/docs/en/architecture/configuration.md b/docs/docs/en/architecture/configuration.md
index cfa853c896..279411ef75 100644
--- a/docs/docs/en/architecture/configuration.md
+++ b/docs/docs/en/architecture/configuration.md
@@ -101,8 +101,6 @@ The directory structure of DolphinScheduler is as follows:
 
 ## Configurations in Details
 
-
-
 ### dolphinscheduler-daemon.sh [startup or shutdown DolphinScheduler application]
 
 dolphinscheduler-daemon.sh is responsible for DolphinScheduler startup and shutdown.
@@ -110,6 +108,7 @@ Essentially, start-all.sh or stop-all.sh startup and shutdown the cluster via do
 Currently, DolphinScheduler just makes a basic config, remember to config further JVM options based on your practical situation of resources.
 
 Default simplified parameters are:
+
 ```bash
 export DOLPHINSCHEDULER_OPTS="
 -server
@@ -157,8 +156,8 @@ The default configuration is as follows:
 
 Note that DolphinScheduler also supports database configuration through `bin/env/dolphinscheduler_env.sh`.
 
-
 ### Zookeeper related configuration
+
 DolphinScheduler uses Zookeeper for cluster management, fault tolerance, event monitoring and other functions. Configuration file location:
 |Service| Configuration file  |
 |--|--|
@@ -226,8 +225,8 @@ The default configuration is as follows:
 |alert.rpc.port | 50052 | the RPC port of Alert Server|
 |zeppelin.rest.url | http://localhost:8080 | the RESTful API url of zeppelin|
 
-
 ### Api-server related configuration
+
 Location: `api-server/conf/application.yaml`
 
 |Parameters | Default value| Description|
@@ -257,6 +256,7 @@ Location: `api-server/conf/application.yaml`
 |traffic.control.customize-tenant-qps-rate||customize tenant max request number per second|
 
 ### Master Server related configuration
+
 Location: `master-server/conf/application.yaml`
 
 |Parameters | Default value| Description|
@@ -278,8 +278,8 @@ Location: `master-server/conf/application.yaml`
 |master.registry-disconnect-strategy.strategy|stop|Used when the master disconnect from registry, default value: stop. Optional values include stop, waiting|
 |master.registry-disconnect-strategy.max-waiting-time|100s|Used when the master disconnect from registry, and the disconnect strategy is waiting, this config means the master will waiting to reconnect to registry in given times, and after the waiting times, if the master still cannot connect to registry, will stop itself, if the value is 0s, the Master will waitting infinitely|
 
-
 ### Worker Server related configuration
+
 Location: `worker-server/conf/application.yaml`
 
 |Parameters | Default value| Description|
@@ -298,6 +298,7 @@ Location: `worker-server/conf/application.yaml`
 |worker.registry-disconnect-strategy.max-waiting-time|100s|Used when the worker disconnect from registry, and the disconnect strategy is waiting, this config means the worker will waiting to reconnect to registry in given times, and after the waiting times, if the worker still cannot connect to registry, will stop itself, if the value is 0s, will waitting infinitely |
 
 ### Alert Server related configuration
+
 Location: `alert-server/conf/application.yaml`
 
 |Parameters | Default value| Description|
@@ -305,7 +306,6 @@ Location: `alert-server/conf/application.yaml`
 |server.port|50053|the port of Alert Server|
 |alert.port|50052|the port of alert|
 
-
 ### Quartz related configuration
 
 This part describes quartz configs and configure them based on your practical situation and resources.
@@ -335,7 +335,6 @@ The default configuration is as follows:
 |spring.quartz.properties.org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate|
 |spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval | 5000|
 
-
 ### dolphinscheduler_env.sh [load environment variables configs]
 
 When using shell to commit tasks, DolphinScheduler will export environment variables from `bin/env/dolphinscheduler_env.sh`. The
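
For readers following the `dolphinscheduler_env.sh` section touched above, the file typically contains entries along the lines of the sketch below; the exact variable names and defaults vary between releases, so treat this purely as an illustration:

```bash
# bin/env/dolphinscheduler_env.sh -- illustrative sketch only, not the shipped defaults
export JAVA_HOME=${JAVA_HOME:-/opt/java/openjdk}

# Database connection shared by the server components
export DATABASE=${DATABASE:-postgresql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_URL="jdbc:postgresql://127.0.0.1:5432/dolphinscheduler"
export SPRING_DATASOURCE_USERNAME=dolphinscheduler
export SPRING_DATASOURCE_PASSWORD=dolphinscheduler

# Registry used for cluster management (see the Zookeeper related configuration above)
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-localhost:2181}

# Paths exported to shell tasks
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export SPARK_HOME=${SPARK_HOME:-/opt/soft/spark}
export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$JAVA_HOME/bin:$PATH
```
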
diff --git a/docs/docs/en/architecture/design.md b/docs/docs/en/architecture/design.md
index 9e09e15948..9579ab3651 100644
--- a/docs/docs/en/architecture/design.md
+++ b/docs/docs/en/architecture/design.md
@@ -22,58 +22,58 @@
 
 ### Architecture Description
 
-* **MasterServer** 
+* **MasterServer**
 
-    MasterServer adopts a distributed and decentralized design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with ZooKeeper, and perform fault tolerance by monitoring changes in the temporary node of ZooKeeper.
-    MasterServer provides monitoring services based on netty.
+  MasterServer adopts a distributed and decentralized design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
+  When the MasterServer service starts, register a temporary node with ZooKeeper, and perform fault tolerance by monitoring changes in the temporary node of ZooKeeper.
+  MasterServer provides monitoring services based on netty.
 
-    #### The Service Mainly Includes:
-  
-    - **DistributedQuartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When quartz start the task, there will be a thread pool inside the Master responsible for the follow-up operation of the processing task;
+  #### The Service Mainly Includes:
 
-    - **MasterSchedulerService** is a scanning thread that regularly scans the `t_ds_command` table in the database, runs different business operations according to different **command types**;
+  - **DistributedQuartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When quartz start the task, there will be a thread pool inside the Master responsible for the follow-up operation of the processing task;
 
-    - **WorkflowExecuteRunnable** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of different event types;
+  - **MasterSchedulerService** is a scanning thread that regularly scans the `t_ds_command` table in the database, runs different business operations according to different **command types**;
 
-    - **TaskExecuteRunnable** is mainly responsible for the processing and persistence of tasks, and generates task events and submits them to the event queue of the process instance;
+  - **WorkflowExecuteRunnable** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of different event types;
 
-    - **EventExecuteService** is mainly responsible for the polling of the event queue of the process instances;
+  - **TaskExecuteRunnable** is mainly responsible for the processing and persistence of tasks, and generates task events and submits them to the event queue of the process instance;
 
-    - **StateWheelExecuteThread** is mainly responsible for process instance and task timeout, task retry, task-dependent polling, and generates the corresponding process instance or task event and submits it to the event queue of the process instance;
+  - **EventExecuteService** is mainly responsible for the polling of the event queue of the process instances;
 
-    - **FailoverExecuteThread** is mainly responsible for the logic of Master fault tolerance and Worker fault tolerance;
+  - **StateWheelExecuteThread** is mainly responsible for process instance and task timeout, task retry, task-dependent polling, and generates the corresponding process instance or task event and submits it to the event queue of the process instance;
 
-* **WorkerServer** 
+  - **FailoverExecuteThread** is mainly responsible for the logic of Master fault tolerance and Worker fault tolerance;
 
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
+* **WorkerServer**
 
-     When the WorkerServer service starts, register a temporary node with ZooKeeper and maintain a heartbeat.
-     WorkerServer provides monitoring services based on netty.
-  
-     #### The Service Mainly Includes:
+  WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
 
-    - **WorkerManagerThread** is mainly responsible for the submission of the task queue, continuously receives tasks from the task queue, and submits them to the thread pool for processing;
+  When the WorkerServer service starts, register a temporary node with ZooKeeper and maintain a heartbeat.
+  WorkerServer provides monitoring services based on netty.
 
-    - **TaskExecuteThread** is mainly responsible for the process of task execution, and the actual processing of tasks according to different task types;
+  #### The Service Mainly Includes:
 
-    - **RetryReportTaskStatusThread** is mainly responsible for regularly polling to report the task status to the Master until the Master replies to the status ack to avoid the loss of the task status;
+  - **WorkerManagerThread** is mainly responsible for the submission of the task queue, continuously receives tasks from the task queue, and submits them to the thread pool for processing;
 
-* **ZooKeeper** 
+  - **TaskExecuteThread** is mainly responsible for the process of task execution, and the actual processing of tasks according to different task types;
 
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system implements event monitoring and distributed locks based on ZooKeeper.
+  - **RetryReportTaskStatusThread** is mainly responsible for regularly polling to report the task status to the Master until the Master replies to the status ack to avoid the loss of the task status;
 
-    We have also implemented queues based on Redis, but we hope DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+* **ZooKeeper**
 
-* **AlertServer** 
+  ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system implements event monitoring and distributed locks based on ZooKeeper.
+
+  We have also implemented queues based on Redis, but we hope DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+
+* **AlertServer**
 
   Provides alarm services, and implements rich alarm methods through alarm plugins.
 
-* **API** 
+* **API**
 
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to external.
+  The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to external.
 
-* **UI** 
+* **UI**
 
   The front-end page of the system provides various visual operation interfaces of the system, see more at [Introduction to Functions](../guide/homepage.md) section.
 
@@ -84,6 +84,7 @@
 ##### Centralized Thinking
 
 The centralized design concept is relatively simple. The nodes in the distributed cluster are roughly divided into two roles according to responsibilities:
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
  </p>
@@ -120,8 +121,6 @@ The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and
  </p>
 Among them, the Master monitors the directories of other Masters and Workers. If the remove event is triggered, perform fault tolerance of the process instance or task instance according to the specific business logic.
 
-
-
 - Master fault tolerance:
 
 <p align="center">
@@ -146,7 +145,7 @@ Fault-tolerant content: When sending the remove event of the Worker node, the Ma
 
 Fault-tolerant post-processing: Once the Master Scheduler thread finds that the task instance is in the "fault-tolerant" state, it takes over the task and resubmits it.
 
- Note: Due to "network jitter", the node may lose heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.
+Note: Due to "network jitter", the node may lose heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.
 
 ##### Task Failed and Try Again
 
@@ -170,26 +169,26 @@ If there is a task failure in the workflow that reaches the maximum retry times,
 
 In the early schedule design, if there is no priority design and use the fair scheduling, the task submitted first may complete at the same time with the task submitted later, thus invalid the priority of process or task. So we have re-designed this, and the following is our current design:
 
--  According to **the priority of different process instances** prior over **priority of the same process instance** prior over **priority of tasks within the same process** prior over **tasks within the same process**, process task submission order from highest to Lowest.
-    - The specific implementation is to parse the priority according to the JSON of the task instance, and then save the **process instance priority_process instance id_task priority_task id** information to the ZooKeeper task queue. When obtain from the task queue, we can get the highest priority task by comparing string.
+- According to **the priority of different process instances** prior over **priority of the same process instance** prior over **priority of tasks within the same process** prior over **tasks within the same process**, process task submission order from highest to Lowest.
+  - The specific implementation is to parse the priority according to the JSON of the task instance, and then save the **process instance priority_process instance id_task priority_task id** information to the ZooKeeper task queue. When obtain from the task queue, we can get the highest priority task by comparing string.
+    - The priority of the process definition is to consider that some processes need to process before other processes. Configure the priority when the process starts or schedules. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
 
-          - The priority of the process definition is to consider that some processes need to process before other processes. Configure the priority when the process starts or schedules. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
-            <p align="center">
-               <img src="https://user-images.githubusercontent.com/10797147/146744784-eb351b14-c94a-4ed6-8ba4-5132c2a3d116.png" alt="Process priority configuration"  width="40%" />
-             </p>
+      <p align="center">
+         <img src="https://user-images.githubusercontent.com/10797147/146744784-eb351b14-c94a-4ed6-8ba4-5132c2a3d116.png" alt="Process priority configuration"  width="40%" />
+       </p>
 
-        - The priority of the task is also divides into 5 levels, ordered by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below:
-            <p align="center">
-               <img src="https://user-images.githubusercontent.com/10797147/146744830-5eac611f-5933-4f53-a0c6-31613c283708.png" alt="Task priority configuration"  width="35%" />
-             </p>
+    - The priority of the task is also divides into 5 levels, ordered by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below:
 
-#### Logback and Netty Implement Log Access
+        <p align="center">
+           <img src="https://user-images.githubusercontent.com/10797147/146744830-5eac611f-5933-4f53-a0c6-31613c283708.png" alt="Task priority configuration"  width="35%" />
+         </p>
 
--  Since Web (UI) and Worker are not always on the same machine, to view the log cannot be like querying a local file. There are two options:
-  -  Put logs on the ES search engine.
-  -  Obtain remote log information through netty communication.
+#### Logback and Netty Implement Log Access
 
--  In consideration of the lightness of DolphinScheduler as much as possible, so choose gRPC to achieve remote access to log information.
+- Since Web (UI) and Worker are not always on the same machine, to view the log cannot be like querying a local file. There are two options:
+- Put logs on the ES search engine.
+- Obtain remote log information through netty communication.
+- In consideration of the lightness of DolphinScheduler as much as possible, so choose gRPC to achieve remote access to log information.
 
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
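
The task-priority hunks in design.md above describe queue entries keyed as **process instance priority_process instance id_task priority_task id**, with the highest-priority entry found by plain string comparison. A tiny illustration of that idea, assuming HIGHEST maps to the smallest numeric value (the concrete encoding in the code may differ):

```bash
# Three hypothetical queue entries: <process priority>_<process instance id>_<task priority>_<task id>
printf '%s\n' "1_101_2_2001" "0_102_1_2002" "1_101_0_2000" | sort | head -n 1
# prints 0_102_1_2002 -- the lexicographically smallest string, i.e. the highest-priority task
```
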
diff --git a/docs/docs/en/architecture/load-balance.md b/docs/docs/en/architecture/load-balance.md
index 5cf2d4e34a..f58f864128 100644
--- a/docs/docs/en/architecture/load-balance.md
+++ b/docs/docs/en/architecture/load-balance.md
@@ -57,3 +57,4 @@ You can customise the configuration by changing the following properties in work
 
 - worker.max.cpuload.avg=-1 (worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2)
 - worker.reserved.memory=0.3 (worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G)
+
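
The two worker load-balance properties listed above can be tuned per worker. A minimal sketch of overriding them, assuming the worker reads a `worker.properties` file as the surrounding load-balance text suggests (the file location, name, and sensible values depend on the release and the host):

```bash
# Illustrative override: stop dispatching to this worker sooner than the defaults allow
cat >> worker.properties <<'EOF'
worker.max.cpuload.avg=16
worker.reserved.memory=0.5
EOF
```
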
diff --git a/docs/docs/en/architecture/metadata.md b/docs/docs/en/architecture/metadata.md
index 2e55e1d925..b4633707f5 100644
--- a/docs/docs/en/architecture/metadata.md
+++ b/docs/docs/en/architecture/metadata.md
@@ -1,8 +1,8 @@
 # MetaData
 
 ## Table Schema
-see sql files in `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`
 
+see sql files in `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`
 
 ---
 
@@ -15,7 +15,7 @@ see sql files in `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`
 - One tenant can own Multiple users.
 - The queue field in the `t_ds_user` table stores the `queue_name` information in the `t_ds_queue` table, `t_ds_tenant` stores queue information using `queue_id` column. During the execution of the process definition, the user queue has the highest priority. If the user queue is null, use the tenant queue.
 - The `user_id` field in the `t_ds_datasource` table shows the user who create the data source. The user_id in `t_ds_relation_datasource_user` shows the user who has permission to the data source.
-  
+
 ### Project Resource Alert
 
 ![image.png](../../../img/metadata-erd/project-resource-alert.png)
@@ -26,6 +26,7 @@ see sql files in `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`
 - The `user_id` in the `t_ds_udfs` table represents the user who create the UDF, and the `user_id` in the `t_ds_relation_udfs_user` table represents a user who has permission to the UDF.
 
 ### Project - Tenant - ProcessDefinition - Schedule
+
 ![image.png](../../../img/metadata-erd/project_tenant_process_definition_schedule.png)
 
 - A project can have multiple process definitions, and each process definition belongs to only one project.
@@ -33,8 +34,10 @@ see sql files in `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`
 - A workflow definition can have one or more schedules.
 
 ### Process Definition Execution
+
 ![image.png](../../../img/metadata-erd/process_definition.png)
 
 - A process definition corresponds to multiple task definitions, which are associated through `t_ds_process_task_relation` and the associated key is `code + version`. When the pre-task of the task is empty, the corresponding `pre_task_node` and `pre_task_version` are 0.
 - A process definition can have multiple process instances `t_ds_process_instance`, one process instance corresponds to one or more task instances `t_ds_task_instance`.
-- The data stored in the `t_ds_relation_process_instance` table is used to handle the case that the process definition contains sub-processes. `parent_process_instance_id` represents the id of the main process instance containing the sub-process, `process_instance_id` represents the id of the sub-process instance, `parent_task_instance_id` represents the task instance id of the sub-process node. The process instance table and the task instance table correspond to the `t_ds_process_instan [...]
\ No newline at end of file
+- The data stored in the `t_ds_relation_process_instance` table is used to handle the case that the process definition contains sub-processes. `parent_process_instance_id` represents the id of the main process instance containing the sub-process, `process_instance_id` represents the id of the sub-process instance, `parent_task_instance_id` represents the task instance id of the sub-process node. The process instance table and the task instance table correspond to the `t_ds_process_instan [...]
+
diff --git a/docs/docs/en/architecture/task-structure.md b/docs/docs/en/architecture/task-structure.md
index bd64824016..73042e3dcc 100644
--- a/docs/docs/en/architecture/task-structure.md
+++ b/docs/docs/en/architecture/task-structure.md
@@ -6,28 +6,28 @@ All tasks in DolphinScheduler are saved in the `t_ds_process_definition` table.
 
 The following shows the `t_ds_process_definition` table structure:
 
-No. | field  | type  |  description
--------- | ---------| -------- | ---------
-1|id|int(11)|primary key
-2|name|varchar(255)|process definition name
-3|version|int(11)|process definition version
-4|release_state|tinyint(4)|release status of process definition: 0 not released, 1 released
-5|project_id|int(11)|project id
-6|user_id|int(11)|user id of the process definition
-7|process_definition_json|longtext|process definition JSON
-8|description|text|process definition description
-9|global_params|text|global parameters
-10|flag|tinyint(4)|specify whether the process is available: 0 is not available, 1 is available
-11|locations|text|node location information
-12|connects|text|node connectivity info
-13|receivers|text|receivers
-14|receivers_cc|text|CC receivers
-15|create_time|datetime|create time
-16|timeout|int(11) |timeout
-17|tenant_id|int(11) |tenant id
-18|update_time|datetime|update time
-19|modify_by|varchar(36)|specify the user that made the modification
-20|resource_ids|varchar(255)|resource ids
+| No. |          field          |     type     |                                 description                                  |
+|-----|-------------------------|--------------|------------------------------------------------------------------------------|
+| 1   | id                      | int(11)      | primary key                                                                  |
+| 2   | name                    | varchar(255) | process definition name                                                      |
+| 3   | version                 | int(11)      | process definition version                                                   |
+| 4   | release_state           | tinyint(4)   | release status of process definition: 0 not released, 1 released             |
+| 5   | project_id              | int(11)      | project id                                                                   |
+| 6   | user_id                 | int(11)      | user id of the process definition                                            |
+| 7   | process_definition_json | longtext     | process definition JSON                                                      |
+| 8   | description             | text         | process definition description                                               |
+| 9   | global_params           | text         | global parameters                                                            |
+| 10  | flag                    | tinyint(4)   | specify whether the process is available: 0 is not available, 1 is available |
+| 11  | locations               | text         | node location information                                                    |
+| 12  | connects                | text         | node connectivity info                                                       |
+| 13  | receivers               | text         | receivers                                                                    |
+| 14  | receivers_cc            | text         | CC receivers                                                                 |
+| 15  | create_time             | datetime     | create time                                                                  |
+| 16  | timeout                 | int(11)      | timeout                                                                      |
+| 17  | tenant_id               | int(11)      | tenant id                                                                    |
+| 18  | update_time             | datetime     | update time                                                                  |
+| 19  | modify_by               | varchar(36)  | specify the user that made the modification                                  |
+| 20  | resource_ids            | varchar(255) | resource ids                                                                 |
 
 The `process_definition_json` field is the core field, which defines the task information in the DAG diagram, and it is stored in JSON format.
 
@@ -40,6 +40,7 @@ No. | field  | type  |  description
 4|timeout|int|timeout
 
 Data example:
+
 ```bash
 {
     "globalParams":[
@@ -74,7 +75,7 @@ No.|parameter name||type|description |notes
 9|runFlag | |String |execution flag| |
 10|conditionResult | |Object|condition branch | |
 11| | successNode| Array|jump to node if success| |
-12| | failedNode|Array|jump to node if failure| 
+12| | failedNode|Array|jump to node if failure|
 13| dependence| |Object |task dependency |mutual exclusion with params
 14|maxRetryTimes | |String|max retry times | |
 15|retryInterval | |String |retry interval| |
@@ -159,7 +160,7 @@ No.|parameter name||type|description |note
 19|runFlag | |String |execution flag| |
 20|conditionResult | |Object|condition branch  | |
 21| | successNode| Array|jump to node if success| |
-22| | failedNode|Array|jump to node if failure| 
+22| | failedNode|Array|jump to node if failure|
 23| dependence| |Object |task dependency |mutual exclusion with params
 24|maxRetryTimes | |String|max retry times | |
 25|retryInterval | |String |retry interval| |
@@ -238,38 +239,38 @@ No.|parameter name||type|description |note
 
 **The following shows the node data structure:**
 
-No.|parameter name||type|description |notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| task Id|
-2|type ||String |task type |SPARK
-3| name| |String|task name |
-4| params| |Object|customized parameters |JSON format
-5| |mainClass |String | main class
-6| |mainArgs | String| execution arguments
-7| |others | String| other arguments
-8| |mainJar |Object | application jar package
-9| |deployMode |String |deployment mode |local,client,cluster
-10| |driverCores | String| driver cores
-11| |driverMemory | String| driver memory
-12| |numExecutors |String | executor count
-13| |executorMemory |String | executor memory
-14| |executorCores |String | executor cores
-15| |programType | String| program type|JAVA,SCALA,PYTHON
-16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
-17| | localParams| Array|customized local parameters
-18| | resourceList| Array|resource files
-19|description | |String|description | |
-20|runFlag | |String |execution flag| |
-21|conditionResult | |Object|condition branch| |
-22| | successNode| Array|jump to node if success| |
-23| | failedNode|Array|jump to node if failure| 
-24| dependence| |Object |task dependency |mutual exclusion with params
-25|maxRetryTimes | |String|max retry times | |
-26|retryInterval | |String |retry interval| |
-27|timeout | |Object|timeout | |
-28| taskInstancePriority| |String|task priority | |
-29|workerGroup | |String |Worker group| |
-30|preTasks | |Array|preposition tasks| |
+| No. |            parameter name            ||  type  |         description         |            notes             |
+|-----|----------------------|----------------|--------|-----------------------------|------------------------------|
+| 1   | id                   |                | String | task Id                     |
+| 2   | type                                 || String | task type                   | SPARK                        |
+| 3   | name                 |                | String | task name                   |
+| 4   | params               |                | Object | customized parameters       | JSON format                  |
+| 5   |                      | mainClass      | String | main class                  |
+| 6   |                      | mainArgs       | String | execution arguments         |
+| 7   |                      | others         | String | other arguments             |
+| 8   |                      | mainJar        | Object | application jar package     |
+| 9   |                      | deployMode     | String | deployment mode             | local,client,cluster         |
+| 10  |                      | driverCores    | String | driver cores                |
+| 11  |                      | driverMemory   | String | driver memory               |
+| 12  |                      | numExecutors   | String | executor count              |
+| 13  |                      | executorMemory | String | executor memory             |
+| 14  |                      | executorCores  | String | executor cores              |
+| 15  |                      | programType    | String | program type                | JAVA,SCALA,PYTHON            |
+| 16  |                      | sparkVersion   | String | Spark version               | SPARK1 , SPARK2              |
+| 17  |                      | localParams    | Array  | customized local parameters |
+| 18  |                      | resourceList   | Array  | resource files              |
+| 19  | description          |                | String | description                 |                              |
+| 20  | runFlag              |                | String | execution flag              |                              |
+| 21  | conditionResult      |                | Object | condition branch            |                              |
+| 22  |                      | successNode    | Array  | jump to node if success     |                              |
+| 23  |                      | failedNode     | Array  | jump to node if failure     |
+| 24  | dependence           |                | Object | task dependency             | mutual exclusion with params |
+| 25  | maxRetryTimes        |                | String | max retry times             |                              |
+| 26  | retryInterval        |                | String | retry interval              |                              |
+| 27  | timeout              |                | Object | timeout                     |                              |
+| 28  | taskInstancePriority |                | String | task priority               |                              |
+| 29  | workerGroup          |                | String | Worker group                |                              |
+| 30  | preTasks             |                | Array  | preposition tasks           |                              |
 
 **Node data example:**
 
@@ -336,31 +337,31 @@ No.|parameter name||type|description |notes
 
 **The following shows the node data structure:**
 
-No.|parameter name||type|description |notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| task Id|
-2|type ||String |task type |MR
-3| name| |String|task name |
-4| params| |Object|customized parameters |JSON format
-5| |mainClass |String | main class
-6| |mainArgs | String|execution arguments
-7| |others | String|other arguments
-8| |mainJar |Object | application jar package
-9| |programType | String|program type|JAVA,PYTHON
-10| | localParams| Array|customized local parameters
-11| | resourceList| Array|resource files
-12|description | |String|description | |
-13|runFlag | |String |execution flag| |
-14|conditionResult | |Object|condition branch| |
-15| | successNode| Array|jump to node if success| |
-16| | failedNode|Array|jump to node if failure| 
-17| dependence| |Object |task dependency |mutual exclusion with params
-18|maxRetryTimes | |String|max retry times | |
-19|retryInterval | |String |retry interval| |
-20|timeout | |Object|timeout | |
-21| taskInstancePriority| |String|task priority| |
-22|workerGroup | |String |Worker group| |
-23|preTasks | |Array|preposition tasks| |
+| No. |           parameter name           ||  type  |         description         |            notes             |
+|-----|----------------------|--------------|--------|-----------------------------|------------------------------|
+| 1   | id                   |              | String | task Id                     |
+| 2   | type                               || String | task type                   | MR                           |
+| 3   | name                 |              | String | task name                   |
+| 4   | params               |              | Object | customized parameters       | JSON format                  |
+| 5   |                      | mainClass    | String | main class                  |
+| 6   |                      | mainArgs     | String | execution arguments         |
+| 7   |                      | others       | String | other arguments             |
+| 8   |                      | mainJar      | Object | application jar package     |
+| 9   |                      | programType  | String | program type                | JAVA,PYTHON                  |
+| 10  |                      | localParams  | Array  | customized local parameters |
+| 11  |                      | resourceList | Array  | resource files              |
+| 12  | description          |              | String | description                 |                              |
+| 13  | runFlag              |              | String | execution flag              |                              |
+| 14  | conditionResult      |              | Object | condition branch            |                              |
+| 15  |                      | successNode  | Array  | jump to node if success     |                              |
+| 16  |                      | failedNode   | Array  | jump to node if failure     |
+| 17  | dependence           |              | Object | task dependency             | mutual exclusion with params |
+| 18  | maxRetryTimes        |              | String | max retry times             |                              |
+| 19  | retryInterval        |              | String | retry interval              |                              |
+| 20  | timeout              |              | Object | timeout                     |                              |
+| 21  | taskInstancePriority |              | String | task priority               |                              |
+| 22  | workerGroup          |              | String | Worker group                |                              |
+| 23  | preTasks             |              | Array  | preposition tasks           |                              |
 
 **Node data example:**
 
@@ -432,7 +433,7 @@ No.|parameter name||type|description |notes
 9|runFlag | |String |execution flag| |
 10|conditionResult | |Object|condition branch| |
 11| | successNode| Array|jump to node if success| |
-12| | failedNode|Array|jump to node if failure | 
+12| | failedNode|Array|jump to node if failure |
 13| dependence| |Object |task dependency |mutual exclusion with params
 14|maxRetryTimes | |String|max retry times | |
 15|retryInterval | |String |retry interval| |
@@ -493,36 +494,36 @@ No.|parameter name||type|description |notes
 
 **The following shows the node data structure:**
 
-No.|parameter name||type|description |notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String|task Id|
-2|type ||String |task type|FLINK
-3| name| |String|task name|
-4| params| |Object|customized parameters |JSON format
-5| |mainClass |String |main class
-6| |mainArgs | String|execution arguments
-7| |others | String|other arguments
-8| |mainJar |Object |application jar package
-9| |deployMode |String |deployment mode |local,client,cluster
-10| |slot | String| slot count
-11| |taskManager |String | taskManager count
-12| |taskManagerMemory |String |taskManager memory size
-13| |jobManagerMemory |String | jobManager memory size
-14| |programType | String| program type|JAVA,SCALA,PYTHON
-15| | localParams| Array|local parameters
-16| | resourceList| Array|resource files
-17|description | |String|description | |
-18|runFlag | |String |execution flag| |
-19|conditionResult | |Object|condition branch| |
-20| | successNode| Array|jump node if success| |
-21| | failedNode|Array|jump node if failure| 
-22| dependence| |Object |task dependency |mutual exclusion with params
-23|maxRetryTimes | |String|max retry times| |
-24|retryInterval | |String |retry interval| |
-25|timeout | |Object|timeout | |
-26| taskInstancePriority| |String|task priority| |
-27|workerGroup | |String |Worker group| |
-38|preTasks | |Array|preposition tasks| |
+| No. |             parameter name              ||  type  |       description       |            notes             |
+|-----|----------------------|-------------------|--------|-------------------------|------------------------------|
+| 1   | id                   |                   | String | task Id                 |
+| 2   | type                                    || String | task type               | FLINK                        |
+| 3   | name                 |                   | String | task name               |
+| 4   | params               |                   | Object | customized parameters   | JSON format                  |
+| 5   |                      | mainClass         | String | main class              |
+| 6   |                      | mainArgs          | String | execution arguments     |
+| 7   |                      | others            | String | other arguments         |
+| 8   |                      | mainJar           | Object | application jar package |
+| 9   |                      | deployMode        | String | deployment mode         | local,client,cluster         |
+| 10  |                      | slot              | String | slot count              |
+| 11  |                      | taskManager       | String | taskManager count       |
+| 12  |                      | taskManagerMemory | String | taskManager memory size |
+| 13  |                      | jobManagerMemory  | String | jobManager memory size  |
+| 14  |                      | programType       | String | program type            | JAVA,SCALA,PYTHON            |
+| 15  |                      | localParams       | Array  | local parameters        |
+| 16  |                      | resourceList      | Array  | resource files          |
+| 17  | description          |                   | String | description             |                              |
+| 18  | runFlag              |                   | String | execution flag          |                              |
+| 19  | conditionResult      |                   | Object | condition branch        |                              |
+| 20  |                      | successNode       | Array  | jump node if success    |                              |
+| 21  |                      | failedNode        | Array  | jump node if failure    |
+| 22  | dependence           |                   | Object | task dependency         | mutual exclusion with params |
+| 23  | maxRetryTimes        |                   | String | max retry times         |                              |
+| 24  | retryInterval        |                   | String | retry interval          |                              |
+| 25  | timeout              |                   | Object | timeout                 |                              |
+| 26  | taskInstancePriority |                   | String | task priority           |                              |
+| 27  | workerGroup          |                   | String | Worker group            |                              |
+| 28  | preTasks             |                   | Array  | preposition tasks       |                              |
 
 **Node data example:**
 
@@ -588,30 +589,30 @@ No.|parameter name||type|description |notes
 
 **The following shows the node data structure:**
 
-No.|parameter name||type|description |notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String|task Id|
-2|type ||String |task type|HTTP
-3| name| |String|task name|
-4| params| |Object|customized parameters |JSON format
-5| |url |String |request url
-6| |httpMethod | String|http method|GET,POST,HEAD,PUT,DELETE
-7| | httpParams| Array|http parameters
-8| |httpCheckCondition | String|validation of HTTP code status|default code 200
-9| |condition |String |validation conditions
-10| | localParams| Array|customized local parameters
-11|description | |String|description| |
-12|runFlag | |String |execution flag| |
-13|conditionResult | |Object|condition branch| |
-14| | successNode| Array|jump node if success| |
-15| | failedNode|Array|jump node if failure| 
-16| dependence| |Object |task dependency |mutual exclusion with params
-17|maxRetryTimes | |String|max retry times | |
-18|retryInterval | |String |retry interval| |
-19|timeout | |Object|timeout | |
-20| taskInstancePriority| |String|task priority| |
-21|workerGroup | |String |Worker group| |
-22|preTasks | |Array|preposition tasks| |
+| No. |              parameter name              ||  type  |          description           |            notes             |
+|-----|----------------------|--------------------|--------|--------------------------------|------------------------------|
+| 1   | id                   |                    | String | task Id                        |
+| 2   | type                                     || String | task type                      | HTTP                         |
+| 3   | name                 |                    | String | task name                      |
+| 4   | params               |                    | Object | customized parameters          | JSON format                  |
+| 5   |                      | url                | String | request url                    |
+| 6   |                      | httpMethod         | String | http method                    | GET,POST,HEAD,PUT,DELETE     |
+| 7   |                      | httpParams         | Array  | http parameters                |
+| 8   |                      | httpCheckCondition | String | validation of HTTP code status | default code 200             |
+| 9   |                      | condition          | String | validation conditions          |
+| 10  |                      | localParams        | Array  | customized local parameters    |
+| 11  | description          |                    | String | description                    |                              |
+| 12  | runFlag              |                    | String | execution flag                 |                              |
+| 13  | conditionResult      |                    | Object | condition branch               |                              |
+| 14  |                      | successNode        | Array  | jump node if success           |                              |
+| 15  |                      | failedNode         | Array  | jump node if failure           |
+| 16  | dependence           |                    | Object | task dependency                | mutual exclusion with params |
+| 17  | maxRetryTimes        |                    | String | max retry times                |                              |
+| 18  | retryInterval        |                    | String | retry interval                 |                              |
+| 19  | timeout              |                    | Object | timeout                        |                              |
+| 20  | taskInstancePriority |                    | String | task priority                  |                              |
+| 21  | workerGroup          |                    | String | Worker group                   |                              |
+| 22  | preTasks             |                    | Array  | preposition tasks              |                              |
 
 **Node data example:**
 
@@ -682,7 +683,7 @@ No.|parameter name||type|description |notes
 6| |dsType |String | datasource type
 7| |dataSource |Int | datasource ID
 8| |dtType | String|target database type
-9| |dataTarget | Int|target database ID 
+9| |dataTarget | Int|target database ID
 10| |sql |String | SQL statements
 11| |targetTable |String |target table
 12| |jobSpeedByte |Int |job speed limiting(bytes)
@@ -695,7 +696,7 @@ No.|parameter name||type|description |notes
 19|runFlag | |String |execution flag| |
 20|conditionResult | |Object|condition branch| |
 21| | successNode| Array|jump node if success| |
-22| | failedNode|Array|jump node if failure| 
+22| | failedNode|Array|jump node if failure|
 23| dependence| |Object |task dependency |mutual exclusion with params
 24|maxRetryTimes | |String|max retry times| |
 25|retryInterval | |String |retry interval| |
@@ -776,7 +777,7 @@ No.|parameter name||type|description |notes
 13|runFlag | |String |execution flag| |
 14|conditionResult | |Object|condition branch| |
 15| | successNode| Array|jump node if success| |
-16| | failedNode|Array|jump node if failure| 
+16| | failedNode|Array|jump node if failure|
 17| dependence| |Object |task dependency |mutual exclusion with params
 18|maxRetryTimes | |String|max retry times| |
 19|retryInterval | |String |retry interval| |
@@ -844,7 +845,7 @@ No.|parameter name||type|description |notes
 6|runFlag | |String |execution flag| |
 7|conditionResult | |Object|condition branch | |
 8| | successNode| Array|jump to node if success| |
-9| | failedNode|Array|jump to node if failure| 
+9| | failedNode|Array|jump to node if failure|
 10| dependence| |Object |task dependency |mutual exclusion with params
 11|maxRetryTimes | |String|max retry times | |
 12|retryInterval | |String |retry interval| |
@@ -909,7 +910,7 @@ No.|parameter name||type|description |notes
 7|runFlag | |String |execution flag| |
 8|conditionResult | |Object|condition branch | |
 9| | successNode| Array|jump to node if success| |
-10| | failedNode|Array|jump to node if failure| 
+10| | failedNode|Array|jump to node if failure|
 11| dependence| |Object |task dependency |mutual exclusion with params
 12|maxRetryTimes | |String|max retry times| |
 13|retryInterval | |String |retry interval| |
@@ -970,7 +971,7 @@ No.|parameter name||type|description |notes
 9|runFlag | |String |execution flag| |
 10|conditionResult | |Object|condition branch| |
 11| | successNode| Array|jump to node if success| |
-12| | failedNode|Array|jump to node if failure| 
+12| | failedNode|Array|jump to node if failure|
 13| dependence| |Object |task dependency |mutual exclusion with params
 14| | relation|String |relation|AND,OR
 15| | dependTaskList|Array |dependent task list|
@@ -1111,4 +1112,5 @@ No.|parameter name||type|description |notes
 
             ]
         }
-```
\ No newline at end of file
+```
+
diff --git a/docs/docs/en/contribute/api-standard.md b/docs/docs/en/contribute/api-standard.md
index 61d6622165..cebde7f3a7 100644
--- a/docs/docs/en/contribute/api-standard.md
+++ b/docs/docs/en/contribute/api-standard.md
@@ -1,9 +1,11 @@
 # API design standard
+
 A standardized and unified API is the cornerstone of project design. The DolphinScheduler API follows the RESTful standard. RESTful is currently the most popular Internet software architecture style: it has a clear structure, conforms to standards, and is easy to understand and extend.
 
 This article uses the DolphinScheduler API as an example to explain how to construct a RESTful API.
 
 ## 1. URI design
+
 REST is "Representational State Transfer". The design of a RESTful URI is based on resources. A resource corresponds to an entity on the network, for example: a piece of text, a picture, or a service, and each resource corresponds to a URI.
 
 + One Kind of Resource: expressed in the plural, such as `task-instances`, `groups`;
@@ -12,36 +14,43 @@ REST is "Representational State Transfer".The design of Restful URI is based on
 + A Sub Resource:`/instances/{instanceId}/tasks/{taskId}`;
 
 ## 2. Method design
+
 We need to locate a certain resource by URI, and then use the HTTP method, or an action declared in the path suffix, to reflect the operation on the resource.
 
 ### ① Query - GET
+
 Use URI to locate the resource, and use GET to indicate query.
 
 + When the URI is a type of resource, it means to query a type of resource. For example, the following example indicates a paging query of `alert-groups`.
+
 ```
 Method: GET
 /dolphinscheduler/alert-groups
 ```
 
 + When the URI is a single resource, it means to query this resource. For example, the following example means to query the specified `alert-group`.
+
 ```
 Method: GET
 /dolphinscheduler/alert-groups/{id}
 ```
 
 + In addition, we can also express query sub-resources based on URI, as follows:
+
 ```
 Method: GET
 /dolphinscheduler/projects/{projectId}/tasks
 ```
 
 **The above examples all represent paging queries. If we need to query all data, we need to add `/list` after the URI to distinguish it. Do not use the same API for both paged and full queries.**
+
 ```
 Method: GET
 /dolphinscheduler/alert-groups/list
 ```
 
 ### ② Create - POST
+
 Use URI to locate the resource, use POST to indicate create, and then return the created id to the requester.
 
 + create an `alert-group`:
@@ -52,35 +61,42 @@ Method: POST
 ```
 
 + create sub-resources is also the same as above.
+
 ```
 Method: POST
 /dolphinscheduler/alert-groups/{alertGroupId}/tasks
 ```
 
 ### ③ Modify - PUT
+
 Use URI to locate the resource, use PUT to indicate modify.
 + modify an `alert-group`
+
 ```
 Method: PUT
 /dolphinscheduler/alert-groups/{alertGroupId}
 ```
 
 ### ④ Delete - DELETE
+
 Use URI to locate the resource, use DELETE to indicate delete.
 
 + delete an `alert-group`
+
 ```
 Method: DELETE
 /dolphinscheduler/alert-groups/{alertGroupId}
 ```
 
 + batch deletion: to batch delete an array of ids, we should use POST. **(Do not use the DELETE method, because the body of a DELETE request has no semantic meaning, and some gateways, proxies, and firewalls may strip off the request body after receiving a DELETE request.)**
+
 ```
 Method: POST
 /dolphinscheduler/alert-groups/batch-delete
 ```
 
 ### ⑤ Partial Modifications - PATCH
+
 Use URI to locate the resource, use PATCH to partial modifications.
 
 ```
@@ -89,20 +105,27 @@ Method: PATCH
 ```
 
 ### ⑥ Others
+
 In addition to creating, deleting, modifying and querying, we can also locate the corresponding resource through the URL and then append an operation to the path, such as:
+
 ```
 /dolphinscheduler/alert-groups/verify-name
 /dolphinscheduler/projects/{projectCode}/process-instances/{code}/view-gantt
 ```
 
 ## 3. Parameter design
+
 There are two types of parameters: request parameters and path parameters. Parameter names must use lower camelCase.
 
 In the case of paging, if the page number entered by the user is less than 1, the front end needs to automatically reset it to 1, indicating that the first page is requested; when the backend finds that the page number entered by the user is greater than the total number of pages, it should directly return the last page.
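+
+A minimal sketch of this clamping rule (illustrative only, not the project's actual code; the names below are made up for the example):
+
+```java
+// Illustrative sketch: normalize a requested page number into the valid range [1, totalPages].
+public class PageNoExample {
+
+    static int normalizePageNo(int requestedPageNo, int totalPages) {
+        if (requestedPageNo < 1) {
+            return 1; // the front end should already reset anything below 1 to the first page
+        }
+        return Math.min(requestedPageNo, totalPages); // past the end, return the last page
+    }
+
+    public static void main(String[] args) {
+        System.out.println(normalizePageNo(0, 10));  // 1
+        System.out.println(normalizePageNo(99, 10)); // 10
+    }
+}
+```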
 
 ## 4. Others design
+
 ### base URL
+
 The URI of the project needs to use `/<project_name>` as the base path, so as to identify that these APIs are under this project.
+
 ```
 /dolphinscheduler
-```
\ No newline at end of file
+```
+
diff --git a/docs/docs/en/contribute/api-test.md b/docs/docs/en/contribute/api-test.md
index 7953e9dbd8..c7005e9540 100644
--- a/docs/docs/en/contribute/api-test.md
+++ b/docs/docs/en/contribute/api-test.md
@@ -10,7 +10,6 @@ In contrast, API testing focuses on whether a complete operation chain can be co
 
 For example, the API test of the tenant management interface focuses on whether users can log in normally; If the login fails, whether the error message can be displayed correctly. After logging in, you can perform tenant management operations through the sessionid you carry.
 
-
 ## API Test
 
 ### API-Pages
@@ -49,7 +48,6 @@ In addition, during the testing process, the interface are not requested directl
 
 On the login page, only the input parameter specification of the interface request is defined. For the output parameter of the interface request, only the unified basic response structure is defined. The data actually returned by the interface is tested in the actual test case. Whether the input and output of main test interfaces can meet the requirements of test cases.
 
-
 ### API-Cases
 
 The following is an example of a tenant management test. As explained earlier, we use docker-compose for deployment, so for each test case, we need to import the corresponding file in the form of an annotation.
@@ -86,7 +84,7 @@ https://github.com/apache/dolphinscheduler/tree/dev/dolphinscheduler-api-test/do
 
 ## Supplements
 
-When running API tests locally, First, you need to start the local service, you can refer to this page: 
+When running API tests locally, you first need to start the local service; you can refer to this page:
 [development-environment-setup](./development-environment-setup.md)
 
 When running API tests locally, the `-Dlocal=true` parameter can be configured to connect locally and facilitate changes to the UI.
diff --git a/docs/docs/en/contribute/architecture-design.md b/docs/docs/en/contribute/architecture-design.md
index a46bfb2859..1e50f2592d 100644
--- a/docs/docs/en/contribute/architecture-design.md
+++ b/docs/docs/en/contribute/architecture-design.md
@@ -1,4 +1,5 @@
 ## Architecture Design
+
 Before explaining the architecture of the scheduling system, let us first understand its common terms.
 
 ### 1.Noun Interpretation
@@ -12,7 +13,7 @@ Before explaining the architecture of the schedule system, let us first understa
   </p>
 </p>
 
-**Process definition**: Visualization **DAG** by dragging task nodes and establishing associations of task nodes 
+**Process definition**: A visual **DAG** formed by dragging task nodes and establishing associations between them
 
 **Process instance**: A process instance is an instantiation of a process definition, which can be generated by manual startup or scheduling. Each run of a process definition generates a new process instance
 
@@ -34,11 +35,10 @@ Before explaining the architecture of the schedule system, let us first understa
 
 **Complement**: Complement historical data, support **interval parallel and serial** two complement methods
 
-
-
 ### 2.System architecture
 
 #### 2.1 System Architecture Diagram
+
 <p align="center">
   <img src="../../../img/architecture.jpg" alt="System Architecture Diagram"  />
   <p align="center">
@@ -46,60 +46,51 @@ Before explaining the architecture of the schedule system, let us first understa
   </p>
 </p>
 
-
-
 #### 2.2 Architectural description
 
-* **MasterServer** 
+* **MasterServer**
 
-    MasterServer adopts the distributed non-central design concept. MasterServer is mainly responsible for DAG task split, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer.
-    When the MasterServer service starts, it registers a temporary node with Zookeeper, and listens to the Zookeeper temporary node state change for fault tolerance processing.
+  MasterServer adopts the distributed non-central design concept. MasterServer is mainly responsible for DAG task split, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer.
+  When the MasterServer service starts, it registers a temporary node with Zookeeper, and listens to the Zookeeper temporary node state change for fault tolerance processing.
 
-    
+  ##### The service mainly contains:
 
-    ##### The service mainly contains:
+  - **Distributed Quartz** distributed scheduling component, mainly responsible for starting and stopping scheduled tasks. When Quartz picks up a task, a thread pool inside the Master is responsible for the task's subsequent operations.
 
-    - **Distributed Quartz** distributed scheduling component, mainly responsible for the start and stop operation of the scheduled task. When the quartz picks up the task, the master internally has a thread pool to be responsible for the subsequent operations of the task.
+  - **MasterSchedulerThread** is a scan thread that periodically scans the **command** table in the database for different business operations based on different **command types**
 
-    - **MasterSchedulerThread** is a scan thread that periodically scans the **command** table in the database for different business operations based on different **command types**
+  - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, logic processing of various command types
 
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, logic processing of various command types
+  - **MasterTaskExecThread** is mainly responsible for task persistence
 
-    - **MasterTaskExecThread** is mainly responsible for task persistence
+* **WorkerServer**
 
-      
+  - WorkerServer also adopts a distributed, non-central design concept. WorkerServer is mainly responsible for task execution and providing log services. When the WorkerServer service starts, it registers the temporary node with Zookeeper and maintains the heartbeat.
 
-* **WorkerServer** 
+    ##### This service contains:
 
-     - WorkerServer also adopts a distributed, non-central design concept. WorkerServer is mainly responsible for task execution and providing log services. When the WorkerServer service starts, it registers the temporary node with Zookeeper and maintains the heartbeat.
+    - **FetchTaskThread** is mainly responsible for continuously receiving tasks from **Task Queue** and calling **TaskScheduleThread** corresponding executors according to different task types.
+  - **ZooKeeper**
 
-       ##### This service contains:
+    The ZooKeeper service, the MasterServer and the WorkerServer nodes in the system all use the ZooKeeper for cluster management and fault tolerance. In addition, the system also performs event monitoring and distributed locking based on ZooKeeper.
+    We have also implemented queues based on Redis, but we hope that DolphinScheduler relies on as few components as possible, so we finally removed the Redis implementation.
 
-       - **FetchTaskThread** is mainly responsible for continuously receiving tasks from **Task Queue** and calling **TaskScheduleThread** corresponding executors according to different task types.
+  - **Task Queue**
 
-     - **ZooKeeper**
+    The task queue operation is provided. Currently, the queue is also implemented based on Zookeeper. Since there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have stress-tested the queue with millions of entries, which had no effect on system stability or performance.
 
-       The ZooKeeper service, the MasterServer and the WorkerServer nodes in the system all use the ZooKeeper for cluster management and fault tolerance. In addition, the system also performs event monitoring and distributed locking based on ZooKeeper.
-       We have also implemented queues based on Redis, but we hope that DolphinScheduler relies on as few components as possible, so we finally removed the Redis implementation.
+  - **Alert**
 
-     - **Task Queue**
+    Provides alarm-related interfaces, mainly covering the storage, query, and notification functions for the two types of alarm data. The notification function supports two methods: **mail notification** and **SNMP (not yet implemented)**.
 
-       The task queue operation is provided. Currently, the queue is also implemented based on Zookeeper. Since there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have over-measured a million-level data storage queue, which has no effect on system stability and performance.
+  - **API**
 
-     - **Alert**
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service provides RESTful APIs to serve external requests.
+    Interfaces include workflow creation, definition, query, modification, release, offline, manual start, stop, pause, resume, start execution from this node, and more.
 
-       Provides alarm-related interfaces. The interfaces mainly include **Alarms**. The storage, query, and notification functions of the two types of alarm data. The notification function has two types: **mail notification** and **SNMP (not yet implemented)**.
+  - **UI**
 
-     - **API**
-
-       The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service provides a RESTful api to provide request services externally.
-       Interfaces include workflow creation, definition, query, modification, release, offline, manual start, stop, pause, resume, start execution from this node, and more.
-
-     - **UI**
-
-       The front-end page of the system provides various visual operation interfaces of the system. For details, see the [quick start](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/about/introduction.html) section.
-
-     
+    The front-end page of the system provides various visual operation interfaces of the system. For details, see the [quick start](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/about/introduction.html) section.
 
 #### 2.3 Architectural Design Ideas
 
@@ -130,10 +121,9 @@ Problems in the design of centralized :
 - In the decentralized design, there is usually no Master/Slave concept, all roles are the same, the status is equal, the global Internet is a typical decentralized distributed system, networked arbitrary node equipment down machine , all will only affect a small range of features.
 - The core design of decentralized design is that there is no "manager" that is different from other nodes in the entire distributed system, so there is no single point of failure problem. However, since there is no "manager" node, each node needs to communicate with other nodes to get the necessary machine information, and the unreliable line of distributed system communication greatly increases the difficulty of implementing the above functions.
 - In fact, truly decentralized distributed systems are rare. Instead, dynamic centralized distributed systems are constantly emerging. Under this architecture, the managers in the cluster are dynamically selected, rather than preset, and when the cluster fails, the nodes of the cluster will spontaneously hold "meetings" to elect new "managers". Go to preside over the work. The most typical case is the Etcd implemented in ZooKeeper and Go.
-
 - Decentralization of DolphinScheduler is the registration of Master/Worker to ZooKeeper. The Master Cluster and the Worker Cluster are not centered, and the Zookeeper distributed lock is used to elect one Master or Worker as the “manager” to perform the task.
 
-#####  二、Distributed lock practice
+##### II. Distributed lock practice
 
 DolphinScheduler uses ZooKeeper distributed locks to implement only one Master to execute the Scheduler at the same time, or only one Worker to perform task submission.
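+
+As an illustration of this pattern only (not DolphinScheduler's actual classes; the connection string and lock path below are made up), a mutual-exclusion sketch using Apache Curator's `InterProcessMutex`:
+
+```java
+import java.util.concurrent.TimeUnit;
+
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+// Sketch only: every Master competes for the same ZooKeeper lock node,
+// so at any moment at most one of them runs the scheduling round.
+public class MasterLockSketch {
+
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory
+                .newClient("127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        InterProcessMutex masterLock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        if (masterLock.acquire(5, TimeUnit.SECONDS)) {
+            try {
+                // ... do one round of scheduling work while holding the lock ...
+            } finally {
+                masterLock.release(); // release so another Master can take over
+            }
+        }
+        client.close();
+    }
+}
+```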
 
@@ -184,8 +174,6 @@ Service fault tolerance design relies on ZooKeeper's Watcher mechanism. The impl
 
 The Master monitors the directories of other Masters and Workers. If the remove event is detected, the process instance is fault-tolerant or the task instance is fault-tolerant according to the specific business logic.
 
-
-
 - Master fault tolerance flow chart:
 
  <p align="center">
@@ -194,8 +182,6 @@ The Master monitors the directories of other Masters and Workers. If the remove
 
 After the ZooKeeper Master is fault-tolerant, it is rescheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "Running" and "Submit Successful" tasks, and monitors the status of its task instance for the "Running" task. You need to determine whether the Task Queue already exists. If it exists, monitor the status of the task instance. If it does not exist, resubmit the task instance.
 
-
-
 - Worker fault tolerance flow chart:
 
  <p align="center">
@@ -204,7 +190,7 @@ After the ZooKeeper Master is fault-tolerant, it is rescheduled by the Scheduler
 
 Once the Master Scheduler thread finds a task instance marked as "need to be fault tolerant", it takes over the task and resubmits it.
 
- Note: Because the "network jitter" may cause the node to lose the heartbeat of ZooKeeper in a short time, the node's remove event occurs. In this case, we use the easiest way, that is, once the node has timeout connection with ZooKeeper, it will directly stop the Master or Worker service.
+Note: "Network jitter" may cause a node to lose its ZooKeeper heartbeat for a short time, triggering the node's remove event. In this case, we take the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service on that node stops directly.
 
 ###### 2. Task failure retry
 
@@ -214,8 +200,6 @@ Here we must first distinguish between the concept of task failure retry, proces
 - Process failure recovery is process level, is done manually, recovery can only be performed **from the failed node** or **from the current node**
 - Process failure rerun is also process level, is done manually, rerun is from the start node
 
-
-
 Next, let's get back to the topic: we divide the task nodes in the workflow into two types.
 
 - One is a business node, which corresponds to an actual script or processing statement, such as a Shell node, an MR node, a Spark node, a dependent node, and so on.
@@ -225,16 +209,12 @@ Each **service node** can configure the number of failed retries. When the task
 
 If there is a task failure in the workflow that reaches the maximum number of retries, the workflow will fail to stop, and the failed workflow can be manually rerun or process resumed.
 
-
-
 ##### V. Task priority design
 
 In the early scheduling design, if there is no priority design and fair scheduling design, it will encounter the situation that the task submitted first may be completed simultaneously with the task submitted subsequently, but the priority of the process or task cannot be set. We have redesigned this, and we are currently designing it as follows:
 
 - Tasks are processed from high priority to low priority: **different process instance priorities** first, then **task priority within the same process instance**, and finally **submission order within the same process**.
-
   - The specific implementation is to resolve the priority according to the json of the task instance, and then save the **process instance priority _ process instance id_task priority _ task id** information in the ZooKeeper task queue, when obtained from the task queue, Through string comparison, you can get the task that needs to be executed first.
-
     - The priority of the process definition is that some processes need to be processed before other processes. This can be configured at the start of the process or at the time of scheduled start. There are 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
 
       <p align="center">
@@ -308,8 +288,6 @@ Public class TaskLogFilter extends Filter<ILoggingEvent> {
 }
 ```
 
-
-
 ### summary
 
 Starting from the scheduling, this paper introduces the architecture principle and implementation ideas of the big data distributed workflow scheduling system-DolphinScheduler. To be continued
diff --git a/docs/docs/en/contribute/backend/mechanism/global-parameter.md b/docs/docs/en/contribute/backend/mechanism/global-parameter.md
index 53b73747d8..b7e1f0897d 100644
--- a/docs/docs/en/contribute/backend/mechanism/global-parameter.md
+++ b/docs/docs/en/contribute/backend/mechanism/global-parameter.md
@@ -59,3 +59,4 @@ Assign the parameters with matching values to varPool (List, which contains the
 
 * Format the varPool as json and pass it to master.
 * The parameters that are OUT would be written into the localParam after the master has received the varPool.
+
diff --git a/docs/docs/en/contribute/backend/mechanism/overview.md b/docs/docs/en/contribute/backend/mechanism/overview.md
index 4f0d592c46..2054f283da 100644
--- a/docs/docs/en/contribute/backend/mechanism/overview.md
+++ b/docs/docs/en/contribute/backend/mechanism/overview.md
@@ -1,6 +1,6 @@
 # Overview
 
 <!-- TODO Since the side menu does not support multiple levels, add new page to keep all sub page here -->
-
 * [Global Parameter](global-parameter.md)
 * [Switch Task type](task/switch.md)
+
diff --git a/docs/docs/en/contribute/backend/mechanism/task/switch.md b/docs/docs/en/contribute/backend/mechanism/task/switch.md
index 490510405e..fcff2643d6 100644
--- a/docs/docs/en/contribute/backend/mechanism/task/switch.md
+++ b/docs/docs/en/contribute/backend/mechanism/task/switch.md
@@ -6,3 +6,4 @@ Switch task workflow step as follows
 * `SwitchTaskExecThread` processes the expressions defined in `switch` from top to bottom, obtains the value of the variable from `varPool`, and parses the expression through `javascript`. If an expression returns true, it stops checking and records the order of that expression, which we refer to here as resultConditionLocation. The task of SwitchTaskExecThread is then over
 * After the `switch` task runs, if there is no error (more commonly, the user-defined expression is out of specification or there is a problem with the parameter name), then `MasterExecThread.submitPostNode` will obtain the downstream node of the `DAG` to continue execution.
 * If it is found in `DagHelper.parsePostNodes` that the current node (the node that has just completed the work) is a `switch` node, the `resultConditionLocation` will be obtained, and all branches except `resultConditionLocation` in the SwitchParameters will be skipped. In this way, only the branches that need to be executed are left
+
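+A conceptual sketch of the check described above (illustrative only, not the project's actual implementation; it requires a JDK that bundles a JavaScript engine, e.g. Nashorn on JDK 8):
+
+```java
+import javax.script.ScriptEngine;
+import javax.script.ScriptEngineManager;
+
+// Sketch only: evaluate switch conditions from top to bottom and remember the
+// first branch that returns true, which the text above calls resultConditionLocation.
+public class SwitchConditionSketch {
+
+    public static void main(String[] args) throws Exception {
+        ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
+        js.put("count", 42); // pretend this value was resolved from varPool
+
+        String[] conditions = {"count > 100", "count > 10", "count > 0"};
+        for (int i = 0; i < conditions.length; i++) {
+            if (Boolean.TRUE.equals(js.eval(conditions[i]))) {
+                System.out.println("resultConditionLocation = " + i); // first true branch wins
+                break;
+            }
+        }
+    }
+}
+```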
diff --git a/docs/docs/en/contribute/backend/spi/alert.md b/docs/docs/en/contribute/backend/spi/alert.md
index 9b6c45e547..b7934242e1 100644
--- a/docs/docs/en/contribute/backend/spi/alert.md
+++ b/docs/docs/en/contribute/backend/spi/alert.md
@@ -6,7 +6,7 @@ DolphinScheduler is undergoing a microkernel + plug-in architecture change. All
 
 For alarm-related codes, please refer to the `dolphinscheduler-alert-api` module. This module defines the extension interface of the alarm plug-in and some basic codes. When we need to realize the plug-inization of related functions, it is recommended to read the code of this block first. Of course, it is recommended that you read the document. This will reduce a lot of time, but the document There is a certain degree of lag. When the document is missing, it is recommended to take the so [...]
 
-We use the native JAVA-SPI, when you need to extend, in fact, you only need to pay attention to the extension of the `org.apache.dolphinscheduler.alert.api.AlertChannelFactory` interface, the underlying logic such as plug-in loading, and other kernels have been implemented, Which makes our development more focused and simple. 
+We use native Java SPI. When you need to extend, you only need to focus on extending the `org.apache.dolphinscheduler.alert.api.AlertChannelFactory` interface; the underlying logic such as plug-in loading is already implemented by the kernel, which makes our development more focused and simple.
 
 In addition, `AlertChannelFactory` extends `PrioritySPI`, which means you can set the plugin priority. When two plugins have the same name, you can customize the priority by overriding the `getIdentify` method. The higher-priority plugin will be loaded, but if two plugins have the same name and the same priority, the server will throw an `IllegalArgumentException` when loading them.
 
@@ -26,8 +26,8 @@ If you don't care about its internal design, but simply want to know how to deve
 
   This module is currently a plug-in provided by us, and now we have supported dozens of plug-ins, such as Email, DingTalk, Script, etc.
 
-
 #### Alert SPI Main class information.
+
 AlertChannelFactory
 Alarm plug-in factory interface. All alarm plug-ins need to implement this interface. This interface is used to define the name of the alarm plug-in and the required parameters. The create method is used to create a specific alarm plug-in instance.
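+
+A minimal sketch of what an implementation could look like (the method names follow the description above, but the exact signatures and parameter types are assumptions; check the `dolphinscheduler-alert-api` module for the real interface):
+
+```java
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.dolphinscheduler.alert.api.AlertChannel;
+import org.apache.dolphinscheduler.alert.api.AlertChannelFactory;
+import org.apache.dolphinscheduler.spi.params.base.PluginParams;
+
+// Sketch only: method names mirror the description above, but verify the exact
+// signatures against dolphinscheduler-alert-api. MyAlertChannel is a hypothetical
+// class that would implement AlertChannel and do the actual sending.
+public class MyAlertChannelFactory implements AlertChannelFactory {
+
+    @Override
+    public String name() {
+        return "MyAlert"; // the plug-in name shown when configuring an alert instance
+    }
+
+    @Override
+    public List<PluginParams> params() {
+        return Collections.emptyList(); // the input parameters the plug-in needs, e.g. a webhook URL
+    }
+
+    @Override
+    public AlertChannel create() {
+        return new MyAlertChannel(); // the instance that actually processes alert messages
+    }
+}
+```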
 
@@ -56,36 +56,40 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design]
 
 * Email
 
-     Email alert notification
+  Email alert notification
 
 * DingTalk
 
-     Alert for DingTalk group chat bots
-  
-     Related parameter configuration can refer to the DingTalk robot document.
+  Alert for DingTalk group chat bots
+
+  Related parameter configuration can refer to the DingTalk robot document.
 
 * EnterpriseWeChat
 
-     EnterpriseWeChat alert notifications
+  EnterpriseWeChat alert notifications
 
-     Related parameter configuration can refer to the EnterpriseWeChat robot document.
+  Related parameter configuration can refer to the EnterpriseWeChat robot document.
 
 * Script
 
-     We have implemented a shell script for alerting. We will pass the relevant alert parameters to the script and you can implement your alert logic in the shell. This is a good way to interface with internal alerting applications.
+  We have implemented a shell script for alerting. We will pass the relevant alert parameters to the script and you can implement your alert logic in the shell. This is a good way to interface with internal alerting applications.
 
 * SMS
 
-     SMS alerts
+  SMS alerts
+
 * FeiShu
 
   FeiShu alert notification
+
 * Slack
 
   Slack alert notification
+
 * PagerDuty
 
   PagerDuty alert notification
+
 * WebexTeams
 
   WebexTeams alert notification
@@ -95,9 +99,10 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design]
 * Telegram
 
   Telegram alert notification
-  
+
   Related parameter configuration can refer to the Telegram document.
 
 * Http
 
   We have implemented an Http script for alerting. Calling most of the alerting plug-ins ends up being an Http request, so if we do not support your alert plug-in yet, you can use Http to realize your alert logic. You are also welcome to contribute your common plug-ins to the community :)
+
diff --git a/docs/docs/en/contribute/backend/spi/datasource.md b/docs/docs/en/contribute/backend/spi/datasource.md
index 9738e07330..caf8a5be46 100644
--- a/docs/docs/en/contribute/backend/spi/datasource.md
+++ b/docs/docs/en/contribute/backend/spi/datasource.md
@@ -22,4 +22,4 @@ In additional, the `DataSourceChannelFactory` extends from `PrioritySPI`, this m
 
 #### **Future plan**
 
-Support data sources such as kafka, http, files, sparkSQL, FlinkSQL, etc.
\ No newline at end of file
+Support data sources such as kafka, http, files, sparkSQL, FlinkSQL, etc.
diff --git a/docs/docs/en/contribute/backend/spi/registry.md b/docs/docs/en/contribute/backend/spi/registry.md
index 0957ff3cdd..b612ba5dcd 100644
--- a/docs/docs/en/contribute/backend/spi/registry.md
+++ b/docs/docs/en/contribute/backend/spi/registry.md
@@ -6,9 +6,10 @@ Make the following configuration (take zookeeper as an example)
 
 * Registry plug-in configuration, take Zookeeper as an example (registry.properties)
   dolphinscheduler-service/src/main/resources/registry.properties
+
   ```registry.properties
-   registry.plugin.name=zookeeper
-   registry.servers=127.0.0.1:2181
+  registry.plugin.name=zookeeper
+  registry.servers=127.0.0.1:2181
   ```
 
 For specific configuration information, please refer to the parameter information provided by the specific plug-in, for example zk: `org/apache/dolphinscheduler/plugin/registry/zookeeper/ZookeeperConfiguration.java`
diff --git a/docs/docs/en/contribute/backend/spi/task.md b/docs/docs/en/contribute/backend/spi/task.md
index f909d42fa8..91ee108bad 100644
--- a/docs/docs/en/contribute/backend/spi/task.md
+++ b/docs/docs/en/contribute/backend/spi/task.md
@@ -14,4 +14,4 @@ In additional, the `TaskChannelFactory` extends from `PrioritySPI`, this means y
 
 Since the task plug-in involves the front-end page, the front-end SPI has not yet been implemented, so you need to implement the front-end page corresponding to the plug-in separately.
 
-If there is a class conflict in the task plugin, you can use [Shade-Relocating Classes](https://maven.apache.org/plugins/maven-shade-plugin/) to solve this problem.
\ No newline at end of file
+If there is a class conflict in the task plugin, you can use [Shade-Relocating Classes](https://maven.apache.org/plugins/maven-shade-plugin/) to solve this problem.
diff --git a/docs/docs/en/contribute/e2e-test.md b/docs/docs/en/contribute/e2e-test.md
index 82affec552..c6c49168a8 100644
--- a/docs/docs/en/contribute/e2e-test.md
+++ b/docs/docs/en/contribute/e2e-test.md
@@ -77,31 +77,31 @@ In addition, during the testing process, the elements are not manipulated direct
 The SecurityPage provides goToTab methods to test the corresponding sidebar jumps, mainly including TenantPage, UserPage, WorkerGroupPage and QueuePage. These pages are implemented in the same way, mainly to test whether the input, add and delete buttons of the form can return to the corresponding page.
 
 ```java
- public <T extends SecurityPage.Tab> T goToTab(Class<T> tab) {
-        if (tab == TenantPage.class) {
-            WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menuTenantManageElement);
-            return tab.cast(new TenantPage(driver));
-        }
-        if (tab == UserPage.class) {
-            WebElement menUserManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menUserManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menUserManageElement);
-            return tab.cast(new UserPage(driver));
-        }
-        if (tab == WorkerGroupPage.class) {
-            WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
-            return tab.cast(new WorkerGroupPage(driver));
-        }
-        if (tab == QueuePage.class) {
-            menuQueueManage().click();
-            return tab.cast(new QueuePage(driver));
-        }
-        throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
-    }
+public <T extends SecurityPage.Tab> T goToTab(Class<T> tab) {
+    if (tab == TenantPage.class) {
+        WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
+                .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
+        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menuTenantManageElement);
+        return tab.cast(new TenantPage(driver));
+    }
+    if (tab == UserPage.class) {
+        WebElement menUserManageElement = new WebDriverWait(driver, 60)
+                .until(ExpectedConditions.elementToBeClickable(menUserManage));
+        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menUserManageElement);
+        return tab.cast(new UserPage(driver));
+    }
+    if (tab == WorkerGroupPage.class) {
+        WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
+                .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
+        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
+        return tab.cast(new WorkerGroupPage(driver));
+    }
+    if (tab == QueuePage.class) {
+        menuQueueManage().click();
+        return tab.cast(new QueuePage(driver));
+    }
+    throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
+}
 ```
 
 ![SecurityPage](../../../img/e2e-test/SecurityPage.png)
@@ -146,14 +146,14 @@ The following is an example of a tenant management test. As explained earlier, w
 The browser is loaded using the RemoteWebDriver provided with Selenium. Before each test case is started there is some preparation work that needs to be done. For example: logging in the user, jumping to the corresponding page (depending on the specific test case).
 
 ```java
-    @BeforeAll
-    public static void setup() {
-        new LoginPage(browser)
-                .login("admin", "dolphinscheduler123") 
-                .goToNav(SecurityPage.class) 
-                .goToTab(TenantPage.class)
-        ;
-    }
+@BeforeAll
+public static void setup() {
+    new LoginPage(browser)
+            .login("admin", "dolphinscheduler123") 
+            .goToNav(SecurityPage.class) 
+            .goToTab(TenantPage.class)
+    ;
+}
 ```
 
 When the preparation is complete, it is time to write the formal test cases. We use the @Order() annotation to keep the cases modular and to fix the order in which the tests run. After the tests have run, assertions are used to determine whether they were successful; if the assertion holds, the tenant creation succeeded. The following code can be used as a reference:
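A rough, hypothetical sketch of that pattern (the real cases live in the repository linked below; `TenantPage`, its `create()` helper and the `browser` field are assumptions standing in for the actual harness):

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
import org.openqa.selenium.remote.RemoteWebDriver;

// Hypothetical example only -- class names and helpers are assumptions, not project code.
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class TenantE2ESketch {

    private static RemoteWebDriver browser; // supplied by the e2e test harness in the real cases

    @Test
    @Order(1)
    void testCreateTenant() {
        final String tenant = "test_tenant";
        final TenantPage page = new TenantPage(browser); // assumed page object

        page.create(tenant); // fill in the form and submit

        // The assertion decides the result: if it holds, the tenant was created successfully.
        await().untilAsserted(() -> assertThat(page.tenantList())
                .as("tenant list should contain the newly created tenant")
                .anyMatch(it -> it.getText().contains(tenant)));
    }
}
```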
@@ -176,14 +176,14 @@ The rest are similar cases and can be understood by referring to the specific so
 
 https://github.com/apache/dolphinscheduler/tree/dev/dolphinscheduler-e2e/dolphinscheduler-e2e-case/src/test/java/org/apache/dolphinscheduler/e2e/cases
 
-##  III. Supplements
+## III. Supplements
 
-When running E2E tests locally, First, you need to start the local service, you can refer to this page: 
+When running E2E tests locally, you first need to start the local service; you can refer to this page:
 [development-environment-setup](./development-environment-setup.md)
 
 When running E2E tests locally, the `-Dlocal=true` parameter can be configured to connect locally and facilitate changes to the UI.
 
-When running E2E tests with `M1` chip, you can use `-Dm1_chip=true` parameter to configure containers supported by 
+When running E2E tests with an `M1` chip, you can use the `-Dm1_chip=true` parameter to configure containers supported by
 `ARM64`.
 
 ![Dlocal](../../../img/e2e-test/Dlocal.png)
diff --git a/docs/docs/en/contribute/frontend-development.md b/docs/docs/en/contribute/frontend-development.md
index 297a7ccee0..9ab23cc5be 100644
--- a/docs/docs/en/contribute/frontend-development.md
+++ b/docs/docs/en/contribute/frontend-development.md
@@ -1,6 +1,7 @@
 # Front-end development documentation
 
 ### Technical selection
+
 ```
 Vue mvvm framework
 
@@ -17,10 +18,16 @@ Lodash high performance JavaScript utility library
 
 ### Development environment
 
-- #### Node installation
-Node package download (note version v12.20.2) `https://nodejs.org/download/release/v12.20.2/` 
+#### Node installation
+
+Node package download (note version v12.20.2) `https://nodejs.org/download/release/v12.20.2/`
+
+#### Front-end project construction
 
-- #### Front-end project construction
 Use the command line to `cd` into the `dolphinscheduler-ui` project directory and execute `npm install` to pull the project dependency packages.
 
 > If `npm install` is very slow, you can set the taobao mirror
@@ -36,13 +43,16 @@ npm config set registry http://registry.npm.taobao.org/
 API_BASE = http://127.0.0.1:12345
 ```
 
-> #####  ! ! ! Special attention here. If the project reports a "node-sass error" error while pulling the dependency package, execute the following command again after execution.
+##### ! ! ! Special attention here. If the project reports a "node-sass error" while pulling the dependency packages, run the following command and then execute `npm install` again.
 
 ```bash
 npm install node-sass --unsafe-perm #Install node-sass dependency separately
 ```
 
-- #### Development environment operation
+#### Development environment operation
+
 - `npm start` project development environment (after startup address http://localhost:8888)
 
 #### Front-end project release
@@ -140,6 +150,7 @@ Public module and utill `src/js/module`
 Home  => `http://localhost:8888/#/home`
 
 Project Management => `http://localhost:8888/#/projects/list`
+
 ```
 | Project Home
 | Workflow
@@ -149,6 +160,7 @@ Project Management => `http://localhost:8888/#/projects/list`
 ```
 
 Resource Management => `http://localhost:8888/#/resource/file`
+
 ```
 | File Management
 | udf Management
@@ -159,6 +171,7 @@ Resource Management => `http://localhost:8888/#/resource/file`
 Data Source Management => `http://localhost:8888/#/datasource/list`
 
 Security Center => `http://localhost:8888/#/security/tenant`
+
 ```
 | Tenant Management
 | User Management
@@ -174,16 +187,19 @@ User Center => `http://localhost:8888/#/user/account`
 The project `src/js/conf/home` is divided into
 
 `pages` => route to page directory
+
 ```
- The page file corresponding to the routing address
+The page file corresponding to the routing address
 ```
 
 `router` => route management
+
 ```
 vue router, the entry file index.js in each page will be registered. Specific operations: https://router.vuejs.org/zh/
 ```
 
 `store` => status management
+
 ```
 The page corresponding to each route has a state management file divided into:
 
@@ -201,9 +217,13 @@ Specific action:https://vuex.vuejs.org/zh/
 ```
 
 ## specification
+
 ## Vue specification
+
 ##### 1.Component name
+
 Component names consist of multiple words connected with a hyphen (-), which avoids conflicts with HTML tags and keeps the structure clearer.
+
 ```
 // positive example
 export default {
@@ -212,7 +232,9 @@ export default {
 ```
 
 ##### 2.Component files
+
 The internal common component of the `src/js/module/components` project writes the folder name with the same name as the file name. The subcomponents and util tools that are split inside the common component are placed in the internal `_source` folder of the component.
+
 ```
 └── components
     ├── header
@@ -228,6 +250,7 @@ The internal common component of the `src/js/module/components` project writes t
 ```
 
 ##### 3.Prop
+
 When you define Prop, you should always name it in camel format (camelCase) and use the connection line (-) when assigning values to the parent component.
 This follows the characteristics of each language, because it is case-insensitive in HTML tags, and the use of links is more friendly; in JavaScript, the more natural is the hump name.
 
@@ -270,7 +293,9 @@ props: {
 ```
 
 ##### 4.v-for
+
 When performing v-for traversal, you should always bring a key value to make rendering more efficient when updating the DOM.
+
 ```
 <ul>
     <li v-for="item in list" :key="item.id">
@@ -280,6 +305,7 @@ When performing v-for traversal, you should always bring a key value to make ren
 ```
 
 v-for should be avoided on the same element as v-if (`for example: <li>`) because v-for has a higher priority than v-if. To avoid invalid calculations and rendering, you should try to put v-if on the container's parent element.
+
 ```
 <ul v-if="showList">
     <li v-for="item in list" :key="item.id">
@@ -289,7 +315,9 @@ v-for should be avoided on the same element as v-if (`for example: <li>`) becaus
 ```
 
 ##### 5.v-if / v-else-if / v-else
+
 If the elements in the same set of v-if logic control are logically identical, Vue reuses the same part for more efficient element switching, `such as: value`. In order to avoid the unreasonable effect of multiplexing, you should add key to the same element for identification.
+
 ```
 <div v-if="hasData" key="mazey-data">
     <span>{{ mazeyData }}</span>
@@ -300,12 +328,15 @@ If the elements in the same set of v-if logic control are logically identical, V
 ```
 
 ##### 6.Instruction abbreviation
+
 In order to unify the specification, the instruction abbreviation is always used. Using `v-bind`, `v-on` is not bad. Here is only a unified specification.
+
 ```
 <input :value="mazeyUser" @click="verifyUser">
 ```
 
 ##### 7.Top-level element order of single file components
+
 Styles are bundled into one file, so a style defined in a single vue file also takes effect wherever the same class name is used in other files. Therefore, every component should have a top-level class name before it is created.
 Note: The sass plugin has been added to the project, and the sass syntax can be written directly in a single vue file.
 For uniformity and ease of reading, they should be placed in the order of  `<template>`、`<script>`、`<style>`.
@@ -357,25 +388,31 @@ For uniformity and ease of reading, they should be placed in the order of  `<tem
 ## JavaScript specification
 
 ##### 1.var / let / const
+
 It is recommended to no longer use var, but use let / const, prefer const. The use of any variable must be declared in advance, except that the function defined by function can be placed anywhere.
 
 ##### 2.quotes
+
 ```
 const foo = 'after division'
 const bar = `${foo},front-end engineer`
 ```
 
 ##### 3.function
+
 Anonymous functions uniformly use arrow functions. When there are multiple parameters/return values, object destructuring assignment is preferred.
+
 ```
 function getPersonInfo ({name, sex}) {
     // ...
     return {name, gender}
 }
 ```
+
 Function names are uniformly camelCased. A name starting with a capital letter is a constructor; names starting with a lowercase letter are ordinary functions, and the new operator should not be used to call ordinary functions.
 
 ##### 4.object
+
 ```
 const foo = {a: 0, b: 1}
 const bar = JSON.parse(JSON.stringify(foo))
@@ -393,7 +430,9 @@ for (let [key, value] of myMap.entries()) {
 ```
 
 ##### 5.module
+
 Unified management of project modules using import / export.
+
 ```
 // lib.js
 export default {}
@@ -411,13 +450,16 @@ If the module has only one output value, use `export default`,otherwise no.
 ##### 1.Label
 
 Do not write the type attribute when referencing external CSS or JavaScript. The HTML5 default type is the text/css and text/javascript properties, so there is no need to specify them.
+
 ```
 <link rel="stylesheet" href="//www.test.com/css/test.css">
 <script src="//www.test.com/js/test.js"></script>
 ```
 
 ##### 2.Naming
+
 The naming of Class and ID should be semantic, so you can see what it does by looking at the name; multiple words are connected by a hyphen.
+
 ```
 // positive example
 .test-header{
@@ -426,6 +468,7 @@ The naming of Class and ID should be semantic, and you can see what you are doin
 ```
 
 ##### 3.Attribute abbreviation
+
 CSS attributes use abbreviations as much as possible to improve the efficiency and ease of understanding of the code.
 
 ```
@@ -439,6 +482,7 @@ border: 1px solid #ccc;
 ```
 
 ##### 4.Document type
+
 The HTML5 standard should always be used.
 
 ```
@@ -446,7 +490,9 @@ The HTML5 standard should always be used.
 ```
 
 ##### 5.Notes
+
 A block comment should be written to a module file.
+
 ```
 /**
 * @module mazey/api
@@ -457,7 +503,8 @@ A block comment should be written to a module file.
 
 ## interface
 
-##### All interfaces are returned as Promise 
+##### All interfaces are returned as Promise
+
 Note that any non-zero code is treated as an error and goes into catch
 
 ```
@@ -477,6 +524,7 @@ test.then(res => {
 ```
 
 Normal return
+
 ```
 {
   code:0,
@@ -486,6 +534,7 @@ Normal return
 ```
 
 Error return
+
 ```
 {
   code:10000, 
@@ -493,8 +542,10 @@ Error return
   msg:'failed'
 }
 ```
+
 If the interface is a post request, the Content-Type defaults to application/x-www-form-urlencoded; if the Content-Type is changed to application/json,
 Interface parameter transfer needs to be changed to the following way
+
 ```
 io.post('url', payload, null, null, { emulateJSON: false } res => {
   resolve(res)
@@ -524,6 +575,7 @@ User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
 (1) First place the icon of the node in the `src/js/conf/home/pages/dag/img` folder, naming it after the English node type defined in the backend as `toolbar_${node type}.png`. For example: `toolbar_SHELL.png`
 
 (2)  Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
+
 ```
 'DEPENDENT': {  //  The background definition node type English name is used as the key value
   desc: 'DEPENDENT',  // tooltip desc
@@ -532,6 +584,7 @@ User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
 ```
 
 (3)  Add a `${node type (lowercase)}.vue` file in `src/js/conf/home/pages/dag/_source/formModel/tasks`. The contents of the components related to the current node are written here. Every node component must have a `_verification()` function; after the verification succeeds, the relevant data of the current component is emitted to the parent component.
+
 ```
 /**
  * Verification
@@ -566,6 +619,7 @@ User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
 (4) Common components used inside the node component are under` _source`, and `commcon.js` is used to configure public data.
 
 ##### 2.Increase the status type
+
 (1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
 
 ```
@@ -579,7 +633,9 @@ User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
 ```
 
 ##### 3.Add the action bar tool
+
 (1)  Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
+
 ```
 {
   code: 'pointer',  // tool identifier
@@ -599,13 +655,12 @@ User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
 
 `util.js`  =>   belongs to the `plugIn` tool class
 
-
 The operation is handled in the `src/js/conf/home/pages/dag/_source/dag.js` => `toolbarEvent` event.
 
-
 ##### 3.Add a routing page
 
 (1) First add a routing address`src/js/conf/home/router/index.js` in route management
+
 ```
 routing address{
   path: '/test',  // routing address
@@ -619,12 +674,12 @@ routing address{
 
 (2) Create a `test` folder in `src/js/conf/home/pages` and create an `index.vue` entry file in the folder.
 
-    This will give you direct access to`http://localhost:8888/#/test`
-
+   This will give you direct access to `http://localhost:8888/#/test`
 
 ##### 4.Increase the preset mailbox
 
 Find `src/lib/localData/email.js`; the preset email addresses defined there are used by the startup and scheduled email address inputs to automatically populate the drop-down matches.
+
 ```
 export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
 ```
diff --git a/docs/docs/en/contribute/have-questions.md b/docs/docs/en/contribute/have-questions.md
index 8497d033cd..72e49988f1 100644
--- a/docs/docs/en/contribute/have-questions.md
+++ b/docs/docs/en/contribute/have-questions.md
@@ -21,8 +21,9 @@ Some quick tips when using email:
 - Tagging the subject line of your email will help you get a faster response, e.g. [api-server]: How to get open api interface?
 
 - Tags may help identify a topic by:
+
   - Component: MasterServer,ApiServer,WorkerServer,AlertServer, etc
   - Level: Beginner, Intermediate, Advanced
   - Scenario: Debug, How-to
-
 - For error logs or long code examples, please use [GitHub gist](https://gist.github.com/) and include only a few lines of the pertinent code / log within the email.
+
diff --git a/docs/docs/en/contribute/join/DS-License.md b/docs/docs/en/contribute/join/DS-License.md
index c3f13d7bfb..f365c65d31 100644
--- a/docs/docs/en/contribute/join/DS-License.md
+++ b/docs/docs/en/contribute/join/DS-License.md
@@ -20,7 +20,6 @@ Moreover, when we intend to refer a new software ( not limited to 3rd party jar,
 
 * [COMMUNITY-LED DEVELOPMENT "THE APACHE WAY"](https://apache.org/dev/licensing-howto.html)
 
-
 For example, we should contain the NOTICE file (every open-source project has NOTICE file, generally under root directory) of ZooKeeper in our project when we are using ZooKeeper. As the Apache explains, "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work.
 
 We are not going to dive into every 3rd party open-source license policy, you may look up them if interested.
@@ -40,3 +39,4 @@ We need to follow the following steps when we need to add new jars or external r
 
 * [COMMUNITY-LED DEVELOPMENT "THE APACHE WAY"](https://apache.org/dev/licensing-howto.html)
 * [ASF 3RD PARTY LICENSE POLICY](https://apache.org/legal/resolved.html)
+
diff --git a/docs/docs/en/contribute/join/become-a-committer.md b/docs/docs/en/contribute/join/become-a-committer.md
index deac7d863b..f17b6b99a6 100644
--- a/docs/docs/en/contribute/join/become-a-committer.md
+++ b/docs/docs/en/contribute/join/become-a-committer.md
@@ -8,4 +8,4 @@ In Dolphinscheduler community, if a committer who have earned even more merit, c
 
 One thing that is sometimes hard to understand when you are new to the open development process used at the ASF, is that we value the community more than the code. A strong and healthy community will be respectful and be a fun and rewarding place. More importantly, a diverse and healthy community can continue to support the code over the longer term, even as individual companies come and go from the field.
 
-More details could be found [here](https://community.apache.org/contributors/).
\ No newline at end of file
+More details could be found [here](https://community.apache.org/contributors/).
diff --git a/docs/docs/en/contribute/join/code-conduct.md b/docs/docs/en/contribute/join/code-conduct.md
index 5505e95852..4a6c20b89c 100644
--- a/docs/docs/en/contribute/join/code-conduct.md
+++ b/docs/docs/en/contribute/join/code-conduct.md
@@ -3,66 +3,67 @@
 The following Code of Conduct is based on full compliance with the [Apache Software Foundation Code of Conduct](https://www.apache.org/foundation/policies/conduct.html).
 
 ## Development philosophy
- - **Consistent** code style, naming, and usage are consistent.  
- - **Easy to read** code is obvious, easy to read and understand, when debugging one knows the intent of the code.
- - **Neat** agree with the concepts of《Refactoring》and《Code Cleanliness》and pursue clean and elegant code.
- - **Abstract** hierarchy is clear and the concepts are refined and reasonable. Keep methods, classes, packages, and modules at the same level of abstraction.
- - **Heart** Maintain a sense of responsibility and continue to be carved in the spirit of artisans.
- 
+
+- **Consistent** code style, naming, and usage are consistent.
+- **Easy to read** code is obvious, easy to read and understand, when debugging one knows the intent of the code.
+- **Neat** agree with the concepts of *Refactoring* and *Clean Code* and pursue clean and elegant code.
+- **Abstract** hierarchy is clear and the concepts are refined and reasonable. Keep methods, classes, packages, and modules at the same level of abstraction.
+- **Heart** Maintain a sense of responsibility and keep refining the code in the spirit of a craftsman.
+
 ## Development specifications
 
- - Executing `mvn -U clean package -Prelease` can compile and test through all test cases. 
- - The test coverage tool checks for no less than dev branch coverage.
- - In the root directory, use Checkstyle to check your code for special reasons for violating validation rules. The template location is located at ds_check_style.xml.
- - Follow the coding specifications.
+- Executing `mvn -U clean package -Prelease` can compile and test through all test cases.
+- The test coverage tool checks for no less than dev branch coverage.
+- In the root directory, use Checkstyle to check your code for special reasons for violating validation rules. The template location is located at ds_check_style.xml.
+- Follow the coding specifications.
 
 ## Coding specifications
 
- - Use linux line breaks.
- - Indentation (including empty lines) is consistent with the last line.
- - An empty line is required between the class declaration and the following variable or method.
- - There should be no meaningless empty lines.
- - Classes, methods, and variables should be named as the name implies and abbreviations should be avoided.
- - Return value variables are named after `result`; `each` is used in loops to name loop variables; and `entry` is used in map instead of `each`.
- - The cached exception is called `e`; Catch the exception and do nothing, and the exception is named `ignored`.
- - Configuration Files are named in camelCase, and file names are lowercase with uppercase initial/starting letter.
- - Code that requires comment interpretation should be as small as possible and interpreted by method name.
- - `equals` and `==` In a conditional expression, the constant is left, the variable is on the right, and in the expression greater than less than condition, the variable is left and the constant is right.
- - In addition to the abstract classes used for inheritance, try to design the class as `final`.
- - Nested loops are as much a method as possible.
- - The order in which member variables are defined and the order in which parameters are passed is consistent across classes and methods.
- - Priority is given to the use of guard statements.
- - Classes and methods have minimal access control.
- - The private method used by the method should follow the method, and if there are multiple private methods, the writing private method should appear in the same order as the private method in the original method.
- - Method entry and return values are not allowed to be `null`.
- - The return and assignment statements of if else are preferred with the tri-objective operator.
- - Priority is given to `LinkedList` and only use `ArrayList` if you need to get element values in the collection through the index.
- - Collection types such as `ArrayList`,`HashMap` that may produce expansion must specify the initial size of the collection to avoid expansion.
- - Logs and notes are always in English.
- - Comments can only contain `javadoc`, `todo` and `fixme`.
- - Exposed classes and methods must have javadoc, other classes and methods and methods that override the parent class do not require javadoc.
+- Use linux line breaks.
+- Indentation (including empty lines) is consistent with the last line.
+- An empty line is required between the class declaration and the following variable or method.
+- There should be no meaningless empty lines.
+- Classes, methods, and variables should be named as the name implies and abbreviations should be avoided.
+- Return value variables are named after `result`; `each` is used in loops to name loop variables; and `entry` is used in map instead of `each`.
+- The caught exception is named `e`; if the exception is caught and nothing is done with it, it is named `ignored`.
+- Configuration Files are named in camelCase, and file names are lowercase with uppercase initial/starting letter.
+- Code that requires comment interpretation should be as small as possible and interpreted by method name.
+- For `equals` and `==` in a conditional expression, the constant is on the left and the variable is on the right; in greater-than / less-than expressions, the variable is on the left and the constant is on the right.
+- In addition to the abstract classes used for inheritance, try to design the class as `final`.
+- Extract nested loops into methods as much as possible.
+- The order in which member variables are defined and the order in which parameters are passed is consistent across classes and methods.
+- Priority is given to the use of guard statements.
+- Classes and methods have minimal access control.
+- The private method used by the method should follow the method, and if there are multiple private methods, the writing private method should appear in the same order as the private method in the original method.
+- Method entry and return values are not allowed to be `null`.
+- For return and assignment statements, prefer the ternary operator over if-else.
+- Priority is given to `LinkedList` and only use `ArrayList` if you need to get element values in the collection through the index.
+- Collection types such as `ArrayList`,`HashMap` that may produce expansion must specify the initial size of the collection to avoid expansion.
+- Logs and notes are always in English.
+- Comments can only contain `javadoc`, `todo` and `fixme`.
+- Exposed classes and methods must have javadoc, other classes and methods and methods that override the parent class do not require javadoc.
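To make a few of these rules concrete, here is a small, purely illustrative Java fragment (not taken from the code base) showing a guard statement, the constant on the left of `equals`, and a ternary operator instead of if-else:

```java
public final class UserNameFormatter {

    public String format(String name) {
        // Guard statement first instead of nesting the main logic inside an if block.
        if (name == null || name.isEmpty()) {
            return "unknown";
        }
        // Constant on the left of equals, variable on the right.
        if ("admin".equals(name)) {
            return name;
        }
        // Prefer the ternary operator over if-else for simple return and assignment statements.
        String result = name.length() > 20 ? name.substring(0, 20) : name.toLowerCase();
        return result;
    }
}
```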
 
 ## Unit test specifications
 
- - Test code and production code are subject to the same code specifications.
- - Unit tests are subject to AIR (Automatic, Independent, Repeatable) Design concept.
-   - Automatic: Unit tests should be fully automated, not interactive. Manual checking of output results is prohibited, `System.out`, `log`, etc. are not allowed, and must be verified with assertions. 
-   - Independent: It is prohibited to call each other between unit test cases and to rely on the order of execution. Each unit test can be run independently.
-   - Repeatable: Unit tests cannot be affected by the external environment and can be repeated. 
- - Unit tests are subject to BCDE(Border, Correct, Design, Error) Design principles.
-   - Border (Boundary value test): The expected results are obtained by entering the boundaries of loop boundaries, special values, data order, etc.
-   - Correct (Correctness test): The expected results are obtained with the correct input.
-   - Design (Rationality Design): Design high-quality unit tests in combination with production code design.
-   - Error (Fault tolerance test): The expected results are obtained through incorrect input such as illegal data, abnormal flow, etc.
- - If there is no special reason, the test needs to be fully covered.
- - Each test case needs to be accurately asserted.
- - Prepare the environment for code separation from the test code.
- - Only jUnit `Assert`,hamcrest `CoreMatchers`,Mockito Correlation can use static import.
- - Single-data assertions should use `assertTrue`,`assertFalse`,`assertNull` and `assertNotNull`.
- - Multi-data assertions should use `assertThat`.
- - Accurate assertion, try not to use `not`,`containsString` assertion.
- - The true value of the test case should be named actualXXX, and the expected value should be named expectedXXX.
- - Classes and Methods with `@Test` labels do not require javadoc.
+- Test code and production code are subject to the same code specifications.
+- Unit tests are subject to AIR (Automatic, Independent, Repeatable) Design concept.
+  - Automatic: Unit tests should be fully automated, not interactive. Manual checking of output results is prohibited, `System.out`, `log`, etc. are not allowed, and must be verified with assertions.
+  - Independent: It is prohibited to call each other between unit test cases and to rely on the order of execution. Each unit test can be run independently.
+  - Repeatable: Unit tests cannot be affected by the external environment and can be repeated.
+- Unit tests are subject to BCDE(Border, Correct, Design, Error) Design principles.
+  - Border (Boundary value test): The expected results are obtained by entering the boundaries of loop boundaries, special values, data order, etc.
+  - Correct (Correctness test): The expected results are obtained with the correct input.
+  - Design (Rationality Design): Design high-quality unit tests in combination with production code design.
+  - Error (Fault tolerance test): The expected results are obtained through incorrect input such as illegal data, abnormal flow, etc.
+- If there is no special reason, the test needs to be fully covered.
+- Each test case needs to be accurately asserted.
+- Prepare the environment for code separation from the test code.
+- Only jUnit `Assert`,hamcrest `CoreMatchers`,Mockito Correlation can use static import.
+- Single-data assertions should use `assertTrue`,`assertFalse`,`assertNull` and `assertNotNull`.
+- Multi-data assertions should use `assertThat`.
+- Accurate assertion, try not to use `not`,`containsString` assertion.
+- The true value of the test case should be named actualXXX, and the expected value should be named expectedXXX.
+- Classes and Methods with `@Test` labels do not require javadoc.
+- Public specifications.
+  - Each line is no longer than `200` characters, ensuring that each line is semantically complete for easy understanding.
 
- - Public specifications.
-   - Each line is no longer than `200` in length, ensuring that each line is semantically complete for easy understanding.
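A minimal, hypothetical JUnit 4 example of the assertion and naming rules above (`PriorityCalculator` is an invented class used only for illustration):

```java
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class PriorityCalculatorTest {

    @Test
    public void testCalculatePriority() {
        // Expected value named expectedXXX, actual value named actualXXX.
        int expectedPriority = 5;
        int actualPriority = PriorityCalculator.calculate("HIGH"); // invented class for illustration

        // Single-data assertion with assertTrue, exact comparison with assertThat.
        assertTrue(actualPriority > 0);
        assertThat(actualPriority, is(expectedPriority));
    }
}
```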
diff --git a/docs/docs/en/contribute/join/contribute.md b/docs/docs/en/contribute/join/contribute.md
index ea89596046..9a7cf2ff54 100644
--- a/docs/docs/en/contribute/join/contribute.md
+++ b/docs/docs/en/contribute/join/contribute.md
@@ -13,8 +13,8 @@ We encourage any form of participation in the community that will eventually bec
 * Help promote DolphinScheduler, participate in technical conferences or meetup, sharing and more.
 
 Welcome to the contributing team and join open source starting with submitting your first PR.
- - For example, add code comments or find "easy to fix" tags or some very simple issue (misspellings, etc.) and so on, first familiarize yourself with the submission process through the first simple PR.
- 
+- For example, add code comments, or look for "easy to fix" tags or some very simple issues (misspellings, etc.), and first familiarize yourself with the submission process through a simple first PR.
+
 Note: Contributions are not limited to PR Only, but contribute to the development of the project.
 
 I'm sure you'll benefit from open source by participating in DolphinScheduler!
@@ -37,4 +37,4 @@ If you want to implement a Feature or fix a Bug. Please refer to the following:
 * You should create a new branch to start your work, to get the name of the branch refer to the [Submit Guide-Pull Request Notice](./pull-request.md). For example, if you want to complete the feature and submit Issue 111, your branch name should be feature-111. The feature name can be determined after discussion with the instructor.
 * When you're done, send a Pull Request to dolphinscheduler, please refer to the《[Submit Guide-Submit Pull Request Process](./submit-code.md)》
 
-If you want to submit a Pull Request to complete a Feature or fix a Bug, it is recommended that you start with the `good first issue`, `easy-to-fix` issues, complete a small function to submit, do not change too many files at a time, changing too many files will also put a lot of pressure on Reviewers, it is recommended to submit them through multiple Pull Requests, not all at once.
\ No newline at end of file
+If you want to submit a Pull Request to complete a Feature or fix a Bug, it is recommended that you start with the `good first issue` and `easy-to-fix` issues and submit a small piece of functionality at a time. Do not change too many files at once, as that puts a lot of pressure on reviewers; it is better to split the changes across multiple Pull Requests rather than submitting them all at once.
diff --git a/docs/docs/en/contribute/join/document.md b/docs/docs/en/contribute/join/document.md
index f2fd83140c..16ed650b9f 100644
--- a/docs/docs/en/contribute/join/document.md
+++ b/docs/docs/en/contribute/join/document.md
@@ -2,7 +2,7 @@
 
 Good documentation is critical for any type of software. Any contribution that can improve the DolphinScheduler documentation is welcome.
 
-###  Get the document project
+### Get the document project
 
 Documentation for the DolphinScheduler project is maintained in a separate [git repository](https://github.com/apache/dolphinscheduler-website).
 
@@ -52,8 +52,8 @@ Now you can run and build the website in your local environment.
 
 2. Simply push the changed files, for example:
 
- * `*.md`
- * `blog.js or docs.js or site.js`
+* `*.md`
+* `blog.js or docs.js or site.js`
 
 3. Submit the Pull Request to the **master** branch.
 
diff --git a/docs/docs/en/contribute/join/issue.md b/docs/docs/en/contribute/join/issue.md
index 376b065980..b7a763ddf8 100644
--- a/docs/docs/en/contribute/join/issue.md
+++ b/docs/docs/en/contribute/join/issue.md
@@ -1,6 +1,7 @@
 # Issue Notice
 
 ## Preface
+
 Issues function is used to track various Features, Bugs, Functions, etc. The project maintainer can organize the tasks to be completed through issues.
 
 Issue is an important step in drawing out a feature or bug,
@@ -129,8 +130,8 @@ The main purpose of this is to avoid wasting time caused by different opinions o
 
 - How to deal with the user who raises an issue does not know the module corresponding to the issue.
 
-    It is true that most users when raising issue do not know which module the issue belongs to.
-    In fact, this is very common in many open source communities. In this case, the committer / contributor actually knows the module affected by the issue.
-    If the issue is really valuable after being approved by committer and contributor, then the committer can modify the issue title according to the specific module involved in the issue,
-    or leave a message to the user who raises the issue to modify it into the corresponding title.
+  It is true that most users do not know which module an issue belongs to when they raise it.
+  In fact, this is very common in many open source communities. In this case, the committers / contributors usually know which module the issue affects.
+  If the issue is judged valuable by the committers and contributors, a committer can modify the issue title according to the specific module involved,
+  or leave a message asking the user who raised the issue to change it to the corresponding title.
 
diff --git a/docs/docs/en/contribute/join/pull-request.md b/docs/docs/en/contribute/join/pull-request.md
index 5127845e3a..c3eccff663 100644
--- a/docs/docs/en/contribute/join/pull-request.md
+++ b/docs/docs/en/contribute/join/pull-request.md
@@ -1,6 +1,7 @@
 # Pull Request Notice
 
 ## Preface
+
 Pull Request is a way of software cooperation, which is a process of bringing code involving different functions into the trunk. During this process, the code can be discussed, reviewed, and modified.
 
 In Pull Request, we try not to discuss the implementation of the code. The general implementation of the code and its logic should be determined in Issue. In the Pull Request, we only focus on the code format and code specification, so as to avoid wasting time caused by different opinions on implementation.
@@ -62,8 +63,8 @@ Please refer to the commit message section.
 
 ### Pull Request Code Style
 
-DolphinScheduler uses `Spotless` to automatically fix code style and formatting errors, 
-see [Code Style](../development-environment-setup.md#code-style) for details. 
+DolphinScheduler uses `Spotless` to automatically fix code style and formatting errors,
+see [Code Style](../development-environment-setup.md#code-style) for details.
 
 ### Question
 
@@ -74,4 +75,5 @@ see [Code Style](../development-environment-setup.md#code-style) for details.
   Usually, there are two solutions to this scenario: the first is to merge multiple issues with into the same issue, and then close the other issues;
   the second is multiple issues have subtle differences.
   In this scenario, the responsibilities of each issue can be clearly divided. The type of each issue is marked as Sub-Task, and then these sub task type issues are associated with one issue.
-  And each Pull Request is submitted should be associated with only one issue of a sub task.
\ No newline at end of file
+  And each Pull Request is submitted should be associated with only one issue of a sub task.
+
diff --git a/docs/docs/en/contribute/join/review.md b/docs/docs/en/contribute/join/review.md
index 40c8a23a7a..cdfb01d653 100644
--- a/docs/docs/en/contribute/join/review.md
+++ b/docs/docs/en/contribute/join/review.md
@@ -10,7 +10,7 @@ from the community to review them. You could see detail in [mail][mail-review-wa
 in [GitHub Discussion][discussion-result-review-wanted].
 
 > Note: It is not only the users mentioned in the [GitHub Discussion][discussion-result-review-wanted] who can review Issues or Pull
-> Requests, Community advocates **Anyone is encouraged to review Issues and Pull Requests**. Users in 
+> Requests; the community advocates that **anyone is encouraged to review Issues and Pull Requests**. Users in
 > [GitHub Discussion][discussion-result-review-wanted] showed their willingness to review when we collected names in the mail thread.
 > The advantage of this list is that when the community has a discussion, in addition to mentioning Members in the [team](/us-en/community/community.html),
 > you can also find some help from the people in the [GitHub Discussion][discussion-result-review-wanted]. If you want to join the
@@ -27,43 +27,43 @@ go to section [review Pull Requests](#pull-requests).
 
 Review Issues means discuss [Issues][all-issues] in GitHub and give suggestions on it. Include but are not limited to the following situations
 
-| Situation | Reason | Label | Action |
-| ------ | ------ | ------ | ------ |
-| wont fix | Has been fixed in dev branch | [wontfix][label-wontfix] | Close Issue, inform creator the fixed version if it already release |
-| duplicate issue | Had the same problem before | [duplicate][label-duplicate] | Close issue, inform creator the link of same issue |
-| Description not clearly | Without detail reproduce step | [need more information][label-need-more-information] | Inform creator add more description |
+|        Situation        |            Reason             |                        Label                         |                               Action                                |
+|-------------------------|-------------------------------|------------------------------------------------------|---------------------------------------------------------------------|
+| wont fix                | Has been fixed in dev branch  | [wontfix][label-wontfix]                             | Close Issue, inform the creator of the fixed version if released    |
+| duplicate issue         | Had the same problem before   | [duplicate][label-duplicate]                         | Close issue, inform the creator of the link to the same issue       |
+| Description not clear   | Without detail reproduce step | [need more information][label-need-more-information] | Ask the creator to add more description                              |
 
 In addition to giving suggestions, adding labels to issues is also important during review. Labeled issues can be retrieved
 more easily, which is convenient for further processing. An issue can have more than one label. Common issue categories are:
 
-| Label | Meaning |
-| ------ | ------ |
-| [UI][label-UI] | UI and front-end related |
-| [security][label-security] | Security Issue |
-| [user experience][label-user-experience] | User experience Issue |
-| [development][label-development] | Development Issue |
-| [Python][label-Python] | Python Issue |
-| [plug-in][label-plug-in] | Plug-in Issue |
-| [document][label-document] | Document Issue |
-| [docker][label-docker] | Docker Issue |
-| [need verify][label-need-verify] | Need verify Issue |
-| [e2e][label-e2e] | E2E Issue |
-| [win-os][label-win-os] | windows operating system Issue |
-| [suggestion][label-suggestion] | Give suggestion to us |
- 
+|                  Label                   |            Meaning             |
+|------------------------------------------|--------------------------------|
+| [UI][label-UI]                           | UI and front-end related       |
+| [security][label-security]               | Security Issue                 |
+| [user experience][label-user-experience] | User experience Issue          |
+| [development][label-development]         | Development Issue              |
+| [Python][label-Python]                   | Python Issue                   |
+| [plug-in][label-plug-in]                 | Plug-in Issue                  |
+| [document][label-document]               | Document Issue                 |
+| [docker][label-docker]                   | Docker Issue                   |
+| [need verify][label-need-verify]         | Need verify Issue              |
+| [e2e][label-e2e]                         | E2E Issue                      |
+| [win-os][label-win-os]                   | windows operating system Issue |
+| [suggestion][label-suggestion]           | Give suggestion to us          |
+
 Besides classification, labels can also set the priority of Issues. The higher the priority, the more attention it gets
 in the community, and the easier it is to be fixed or implemented. The priority labels are as follows
 
-| Label | priority |
-| ------ | ------ |
-| [priority:high][label-priority-high] | High priority |
+|                  Label                   |    priority     |
+|------------------------------------------|-----------------|
+| [priority:high][label-priority-high]     | High priority   |
 | [priority:middle][label-priority-middle] | Middle priority |
-| [priority:low][label-priority-low] | Low priority |
+| [priority:low][label-priority-low]       | Low priority    |
 
 All the labels above are common labels. You can see all labels in this project in the [full label list][label-all-list]
 
 Before reading following content, please make sure you have labeled the Issue.
-  
+
 * Remove label [Waiting for reply][label-waiting-for-reply] after replying: Label [Waiting for reply][label-waiting-for-reply]
   added when [creating an Issue][issue-choose]. It makes locating unreplied issues more convenient, and you should remove
   this label after you reviewed it. If you do not remove it, it will cause others to waste time looking at the same issue.
@@ -74,12 +74,12 @@ Before reading following content, please make sure you have labeled the Issue.
 
 When an Issue need to create Pull Requests, you could also labeled it from below.
 
-| Label | Mean |
-| ------ | ------ |
-| [Chore][label-Chore] | Chore for project |
-| [Good first issue][label-good-first-issue] | Good first issue for new contributor |
-| [easy to fix][label-easy-to-fix] | Easy to fix, harder than `Good first issue` |
-| [help wanted][label-help-wanted] | Help wanted |
+|                   Label                    |                    Mean                     |
+|--------------------------------------------|---------------------------------------------|
+| [Chore][label-Chore]                       | Chore for project                           |
+| [Good first issue][label-good-first-issue] | Good first issue for new contributor        |
+| [easy to fix][label-easy-to-fix]           | Easy to fix, harder than `Good first issue` |
+| [help wanted][label-help-wanted]           | Help wanted                                 |
 
 > Note: Only members have permission to add or delete labels. When you need to add or remove labels but are not a member,
 > you can `@`  members to do that. But as long as you have a GitHub account, you can comment on issues and give suggestions.
@@ -90,14 +90,14 @@ When an Issue need to create Pull Requests, you could also labeled it from below
 <!-- markdown-link-check-disable -->
 Review Pull mean discussing in [Pull Requests][all-PRs] in GitHub and giving suggestions to it. DolphinScheduler's 
 Pull Requests reviewing are the same as [GitHub's reviewing changes in pull requests][gh-review-pr]. You can give your
-suggestions in Pull Requests
-
+suggestions in Pull Requests
+
 * When you think the Pull Request is OK to be merged, you can agree to the Pull Request according to the "Approve" process
   in [GitHub's reviewing changes in pull requests][gh-review-pr].
-* When you think Pull Request needs to be changed, you can comment it according to the "Comment" process in 
+* When you think Pull Request needs to be changed, you can comment it according to the "Comment" process in
   [GitHub's reviewing changes in pull requests][gh-review-pr]. And when you think issues that must be fixed before they
   merged, please follow "Request changes" in [GitHub's reviewing changes in pull requests][gh-review-pr] to ask contributors
   modify it.
+
 <!-- markdown-link-check-enable -->
 
 Labeled Pull Requests is an important part. Reasonable classification can save a lot of time for reviewers. The good news
@@ -107,11 +107,11 @@ and [priority:high][label-priority-high].
 
 Pull Requests have some unique labels of it own
 
-| Label | Mean |
-| ------ | ------ |
-| [miss document][label-miss-document] | Pull Requests miss document, and should be add |
-| [first time contributor][label-first-time-contributor] | Pull Requests submit by first time contributor |
-| [don't merge][label-do-not-merge] | Pull Requests have some problem and should not be merged |
+|                         Label                          |                           Mean                           |
+|--------------------------------------------------------|----------------------------------------------------------|
+| [miss document][label-miss-document]                   | Pull Request misses documentation, which should be added |
+| [first time contributor][label-first-time-contributor] | Pull Request submitted by a first-time contributor       |
+| [don't merge][label-do-not-merge]                      | Pull Request has some problems and should not be merged  |
 
 > Note: Only members have permission to add or delete labels. When you need to add or remove labels but are not a member,
 > you can `@`  members to do that. But as long as you have a GitHub account, you can comment on Pull Requests and give suggestions.
@@ -151,3 +151,4 @@ Pull Requests have some unique labels of it own
 [all-issues]: https://github.com/apache/dolphinscheduler/issues
 [all-PRs]: https://github.com/apache/dolphinscheduler/pulls
 [gh-review-pr]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/about-pull-request-reviews
+
diff --git a/docs/docs/en/contribute/join/submit-code.md b/docs/docs/en/contribute/join/submit-code.md
index ac87950320..b1e249d338 100644
--- a/docs/docs/en/contribute/join/submit-code.md
+++ b/docs/docs/en/contribute/join/submit-code.md
@@ -3,26 +3,23 @@
 * First from the remote repository *https://github.com/apache/dolphinscheduler.git* fork a copy of the code into your own repository
 
 * There are currently three branches in the remote repository:
-    * master           normal delivery branch
-        After the stable release, merge the code from the stable branch into the master.
-    
-    * dev              daily development branch
-        Every day dev development branch, newly submitted code can pull request to this branch.
-
 
+  * master           normal delivery branch
+    After the stable release, merge the code from the stable branch into the master.
+
+  * dev              daily development branch
+    The daily development branch; newly submitted code should be submitted to this branch via pull request.
 * Clone your repository to your local
-    `git clone https://github.com/apache/dolphinscheduler.git`
-
+  `git clone https://github.com/apache/dolphinscheduler.git`
 * Add remote repository address, named upstream
-    `git remote add upstream https://github.com/apache/dolphinscheduler.git`
-
+  `git remote add upstream https://github.com/apache/dolphinscheduler.git`
 * View repository
-    `git remote -v`
+  `git remote -v`
 
->At this time, there will be two repositories: origin (your own repository) and upstream (remote repository)
+> At this time, there will be two repositories: origin (your own repository) and upstream (remote repository)
 
 * Get/Update remote repository code
-    `git fetch upstream`
+  `git fetch upstream`
 
 * Synchronize remote repository code to local repository
 
@@ -32,22 +29,23 @@ git merge --no-ff upstream/dev
 ```
 
 If remote branch has a new branch such as `dev-1.0`, you need to synchronize this branch to the local repository
-      
+
 ```
 git checkout -b dev-1.0 upstream/dev-1.0
 git push --set-upstream origin dev-1.0
 ```
 
 * Create new branch
+
 ```
 git checkout -b xxx origin/dev
 ```
 
 Make sure that the branch `xxx` is building successfully on the latest code of the official dev branch
 * After modifying the code locally in the new branch, submit it to your own repository:
-  
+
 `git commit -m 'commit content'`
-    
+
 `git push origin xxx --set-upstream`
 
 * Submit changes to the remote repository
@@ -60,4 +58,3 @@ Make sure that the branch `xxx` is building successfully on the latest code of t
 
 * Finally, congratulations, you have become an official contributor to dolphinscheduler!
 
-
diff --git a/docs/docs/en/contribute/join/subscribe.md b/docs/docs/en/contribute/join/subscribe.md
index f6e8a74934..31d670fceb 100644
--- a/docs/docs/en/contribute/join/subscribe.md
+++ b/docs/docs/en/contribute/join/subscribe.md
@@ -21,3 +21,4 @@ Unsubscribe from the mailing list steps are as follows:
 2. Receive confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@dolphinscheduler.apache.org (if not received, please confirm whether the email is automatically classified as spam, promotion email, subscription email, etc.) . Then reply directly to the email, or click on the link in the email to reply quickly, the subject and content are arbitrary.
 
 3. Receive a goodbye email. After completing the above steps, you will receive a goodbye email with the subject GOODBYE from dev@dolphinscheduler.apache.org, and you have successfully unsubscribed to the Apache DolphinScheduler mailing list, and you will not receive emails from dev@dolphinscheduler.apache.org.
+
diff --git a/docs/docs/en/contribute/join/unit-test.md b/docs/docs/en/contribute/join/unit-test.md
index 796cf59e89..932a0bf64a 100644
--- a/docs/docs/en/contribute/join/unit-test.md
+++ b/docs/docs/en/contribute/join/unit-test.md
@@ -2,25 +2,27 @@
 
 ### 1. The Benefits of Writing Unit Tests
 
--    Unit tests help everyone to get into the details of the code and understand how it works.
--    Through test cases we can find bugs and submit robust code.
--    The test case is also a demo usage of the code.
+- Unit tests help everyone to get into the details of the code and understand how it works.
+- Through test cases we can find bugs and submit robust code.
+- The test case is also a demo usage of the code.
 
 ### 2. Some design principles for unit test cases
 
--    The steps, granularity and combination of conditions should be carefully designed.
--    Pay attention to boundary conditions.
--    Unit tests should be well designed as well as avoiding useless code.
--    When you find a `method` is difficult to write unit test, and if you confirm that the `method` is `bad code`, then refactor it with the developer.
+- The steps, granularity and combination of conditions should be carefully designed.
+- Pay attention to boundary conditions.
+- Unit tests should be well designed as well as avoiding useless code.
+- When you find a `method` is difficult to write unit test, and if you confirm that the `method` is `bad code`, then refactor it with the developer.
+
 <!-- markdown-link-check-disable -->
--    DolphinScheduler: [mockito](http://site.mockito.org/). Here are some development guides: [mockito tutorial](http://www.baeldung.com/bdd-mockito), [mockito refcard](https://dzone.com/refcardz/mockito)
+- DolphinScheduler: [mockito](http://site.mockito.org/). Here are some development guides: [mockito tutorial](http://www.baeldung.com/bdd-mockito), [mockito refcard](https://dzone.com/refcardz/mockito)
+
 <!-- markdown-link-check-enable -->
--    TDD(option): When you start writing a new feature, you can try writing test cases first.
+- TDD(option): When you start writing a new feature, you can try writing test cases first.
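As a small, hypothetical illustration of using mockito to keep a test automatic and independent (`AlertSender` and `AlertChannel` are invented names, not project classes):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Assert;
import org.junit.Test;

public class AlertSenderTest {

    @Test
    public void testSendDelegatesToChannel() {
        // Mock the collaborator so the test does not depend on a real alert channel.
        AlertChannel channel = mock(AlertChannel.class);
        when(channel.send("hello")).thenReturn(true);

        AlertSender sender = new AlertSender(channel); // invented class under test

        // Verify the behaviour with assertions instead of inspecting output manually.
        Assert.assertTrue(sender.send("hello"));
        verify(channel).send("hello");
    }
}
```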
 
 ### 3. Test coverage setpoint
 
--    At this stage, the default value for test coverage of Delta change codes is >= 60%, the higher the better.
--    We can see the test reports on this page:  https://codecov.io/gh/apache/dolphinscheduler
+- At this stage, the default value for test coverage of Delta change codes is >= 60%, the higher the better.
+- We can see the test reports on this page:  https://codecov.io/gh/apache/dolphinscheduler
 
 ## Fundamental guidelines for unit test
 
@@ -64,13 +66,13 @@ Invalid assertions make the test itself meaningless, it has little to do with wh
 
 There are several types of invalid assertions:
 
-1.   Different types of comparisons.
+1. Different types of comparisons.
 
-2.   Determines that an object or variable with a default value is not null.
+2. Determines that an object or variable with a default value is not null.
 
-     This seems meaningless. Therefore, when making the relevant judgements you should pay attention to whether it contains a default value itself.
+   This seems meaningless. Therefore, when making the relevant judgements you should pay attention to whether it contains a default value itself.
 
-3.   Assertions should be affirmative rather than negative if possible. Assertions should be within a range of predicted results, or exact values, whenever possible (otherwise you may end up with something that doesn't match your actual expectations but passes the assertion) unless your code only cares about whether it is empty or not.
+3. Assertions should be affirmative rather than negative if possible. Assertions should be within a range of predicted results, or exact values, whenever possible (otherwise you may end up with something that doesn't match your actual expectations but passes the assertion) unless your code only cares about whether it is empty or not.
 
 ### 8. Some points to note for unit tests
 
@@ -90,17 +92,18 @@ For example @Ignore("see #1").
 
 The test will fail when the code in the unit test throws an exception. Therefore, there is no need to use try-catch to catch exceptions.
 
-     ```java
-     @Test
-     public void testMethod() {
-       try {
-                 // Some code
-       } catch (MyException e) {
-         Assert.fail(e.getMessage());  // Noncompliant
-       }
-     }
-     ```
-You should this: 
+```java
+@Test
+public void testMethod() {
+    try {
+        // Some code
+    } catch (MyException e) {
+        Assert.fail(e.getMessage());  // Noncompliant
+    }
+}
+```
+
+You should do this:
 
 ```java
 @Test
diff --git a/docs/docs/en/contribute/log-specification.md b/docs/docs/en/contribute/log-specification.md
index 69692495e1..9746b05ca4 100644
--- a/docs/docs/en/contribute/log-specification.md
+++ b/docs/docs/en/contribute/log-specification.md
@@ -35,7 +35,7 @@ The content of the logs determines whether the logs can completely restore the s
 
 ### Log format specification
 
-The logs of Master module and Worker module are printed using the following format. 
+The logs of Master module and Worker module are printed using the following format.
 
 ```xml
 [%level] %date{yyyy-MM-dd HH:mm:ss.SSS Z} %logger{96}:[%line] - [WorkflowInstance-%X{workflowInstanceId:-0}][TaskInstance-%X{taskInstanceId:-0}] - %msg%n
@@ -49,4 +49,5 @@ That is, the workflow instance ID and task instance ID are injected in the print
 - The use of printStackTrace() is prohibited for exception handling. This method prints the exception stack information to the standard error output.
 - Printing a single log message across multiple lines is prohibited. The contents of the log need to be associated with the relevant information in the log format; printing them on separate lines will cause the log contents to be disconnected from the time and other information, and will cause logs to be interleaved in environments with large log volumes, making log retrieval more difficult.
 - The use of the "+" operator for splicing log content is prohibited. Use placeholders for formatting logs for printing to improve memory usage efficiency.
-- When the log content includes object instances, you need to make sure to override the toString() method to prevent printing meaningless hashcode.
\ No newline at end of file
+- When the log content includes object instances, you need to make sure to override the toString() method to prevent printing meaningless hashcode.
+
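As a minimal sketch of the placeholder and exception rules above (assuming an SLF4J-style logger; the class and method are invented for illustration):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogStyleExample {

    private static final Logger logger = LoggerFactory.getLogger(LogStyleExample.class);

    public void submitTask(String taskName, int taskInstanceId) {
        // Use placeholders instead of splicing the message with the "+" operator.
        logger.info("Submit task {}, task instance id {}", taskName, taskInstanceId);

        try {
            // ... business logic ...
        } catch (Exception e) {
            // Pass the exception to the logger instead of calling e.printStackTrace().
            logger.error("Submit task {} failed", taskName, e);
        }
    }
}
```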
diff --git a/docs/docs/en/contribute/release/release-prepare.md b/docs/docs/en/contribute/release/release-prepare.md
index fe51973f1a..74fc22672f 100644
--- a/docs/docs/en/contribute/release/release-prepare.md
+++ b/docs/docs/en/contribute/release/release-prepare.md
@@ -4,9 +4,9 @@
 
 Compared with the last release, the `release-docs` of the current release needs to be updated to the latest, if there are dependencies and versions changes
 
- - `dolphinscheduler-dist/release-docs/LICENSE`
- - `dolphinscheduler-dist/release-docs/NOTICE`
- - `dolphinscheduler-dist/release-docs/licenses`
+- `dolphinscheduler-dist/release-docs/LICENSE`
+- `dolphinscheduler-dist/release-docs/NOTICE`
+- `dolphinscheduler-dist/release-docs/licenses`
 
 ## Update Version
 
@@ -29,3 +29,4 @@ For example, to release `x.y.z`, the following updates are required:
   - Add new history version
     - `docs/docs/en/history-versions.md` and `docs/docs/zh/history-versions.md`: Add the new version and link for `x.y.z`
   - `docs/configs/docsdev.js`: change `/dev/` to `/x.y.z/`
+
diff --git a/docs/docs/en/contribute/release/release.md b/docs/docs/en/contribute/release/release.md
index 8451e7a4f9..2b21c342cf 100644
--- a/docs/docs/en/contribute/release/release.md
+++ b/docs/docs/en/contribute/release/release.md
@@ -210,7 +210,7 @@ git push origin --tags
 > Note1: In this step, you should use github token for password because native password no longer supported, you can see
 > https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token for more
 > detail about how to create token about it.
-
+>
 > Note2: After the command done, it will auto-created `release.properties` file and `*.Backup` files, their will be need
 > in the following command and DO NOT DELETE THEM
 
@@ -293,6 +293,7 @@ cd ~/ds_svn/dev/dolphinscheduler
 svn add *
 svn --username="${A_USERNAME}" commit -m "release ${VERSION}"
 ```
+
 ## Check Release
 
 ### Check sha512 hash
@@ -353,14 +354,14 @@ cd ../
 
 Decompress `apache-dolphinscheduler-<VERSION>-src.tar.gz` and `python/apache-dolphinscheduler-python-<VERSION>.tar.gz` then check the following items:
 
-*   Check whether source tarball is oversized for including nonessential files
-*   `LICENSE` and `NOTICE` files exist
-*   Correct year in `NOTICE` file
-*   There is only text files but no binary files
-*   All source files have ASF headers
-*   Codes can be compiled and pass the unit tests (mvn install)
-*   The contents of the release match with what's tagged in version control (diff -r a verify_dir tag_dir)
-*   Check if there is any extra files or folders, empty folders for example
+* Check whether source tarball is oversized for including nonessential files
+* `LICENSE` and `NOTICE` files exist
+* Correct year in `NOTICE` file
+* There are only text files and no binary files
+* All source files have ASF headers
+* Code can be compiled and passes the unit tests (mvn install)
+* The contents of the release match with what's tagged in version control (diff -r a verify_dir tag_dir)
+* Check if there are any extra files or folders, for example empty folders
 
 #### Check binary packages
 
@@ -387,8 +388,8 @@ maybe not correct, you should filter them by yourself) and classify them and pas
 ### Vote procedure
 
 1. DolphinScheduler community vote: send the vote e-mail to `dev@dolphinscheduler.apache.org`.
-PMC needs to check the rightness of the version according to the document before they vote.
-After at least 72 hours and with at least 3 `+1 and no -1 PMC member` votes, it can come to the next stage of the vote.
+   PMC needs to check the correctness of the version according to the document before they vote.
+   After at least 72 hours and with at least 3 `+1 and no -1 PMC member` votes, it can come to the next stage of the vote.
 
 2. Announce the vote result: send the result vote e-mail to `dev@dolphinscheduler.apache.org`。
 
@@ -538,3 +539,4 @@ DolphinScheduler Resources:
 - Mailing list: dev@dolphinscheduler.apache.org
 - Documents: https://dolphinscheduler.apache.org/zh-cn/docs/<VERSION>/user_doc/about/introduction.html
 ```
+
diff --git a/docs/docs/en/guide/alert/alert_plugin_user_guide.md b/docs/docs/en/guide/alert/alert_plugin_user_guide.md
index d8ceebefea..330a7e3f7f 100644
--- a/docs/docs/en/guide/alert/alert_plugin_user_guide.md
+++ b/docs/docs/en/guide/alert/alert_plugin_user_guide.md
@@ -9,7 +9,7 @@ The alarm module supports the following scenarios:
 
 Steps to be used are as follows:
 
-- Go to `Security -> Alarm Group Management -> Alarm Instance Management -> Alarm Instance`. 
+- Go to `Security -> Alarm Group Management -> Alarm Instance Management -> Alarm Instance`.
 - Select the corresponding alarm plug-in and fill in the relevant alarm parameters.
 - Select `Alarm Group Management`, create an alarm group, and choose the corresponding alarm instance.
 
@@ -19,4 +19,4 @@ Steps to be used are as follows:
 
 ![alert-instance03](../../../../img/new_ui/dev/alert/alert_instance03.png)
 
-![alert-instance04](../../../../img/new_ui/dev/alert/alert_instance04.png)
\ No newline at end of file
+![alert-instance04](../../../../img/new_ui/dev/alert/alert_instance04.png)
diff --git a/docs/docs/en/guide/alert/dingtalk.md b/docs/docs/en/guide/alert/dingtalk.md
index f0b9196386..422811bd88 100644
--- a/docs/docs/en/guide/alert/dingtalk.md
+++ b/docs/docs/en/guide/alert/dingtalk.md
@@ -8,20 +8,21 @@ The following shows the `DingTalk` configuration example:
 
 ## Parameter Configuration
 
-| **Parameter** | **Description** |
-| --- | --- |
-| Warning Type | Alert on success or failure or both. |
-| WebHook | The format is: [https://oapi.dingtalk.com/robot/send?access\_token=XXXXXX](https://oapi.dingtalk.com/robot/send?access_token=XXXXXX) |
-| Keyword | Custom keywords for security settings. |
-| Secret | Signature of security settings   |
-| Msg Type | Message parse type (support txt, markdown, markdownV2, html). |
+| **Parameter**  |                                                                                                                                                                               **Description**                                                                                                                                                                               |
+|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Warning Type   | Alert on success or failure or both.                                                                                                                                                                                                                                                                                                                                        |
+| WebHook        | The format is: [https://oapi.dingtalk.com/robot/send?access\_token=XXXXXX](https://oapi.dingtalk.com/robot/send?access_token=XXXXXX)                                                                                                                                                                                                                                        |
+| Keyword        | Custom keywords for security settings.                                                                                                                                                                                                                                                                                                                                      |
+| Secret         | Signature of security settings                                                                                                                                                                                                                                                                                                                                              |
+| Msg Type       | Message parse type (support txt, markdown, markdownV2, html).                                                                                                                                                                                                                                                                                                               |
 | At User Mobile | When a custom bot sends a message, you can specify the "@person list" by their mobile phone number. When the selected people in the "@people list" receive the message, there will be a `@` message reminder. `No disturb` mode always receives reminders, and "someone @ you" appears in the message. The "At User Mobile" represents mobile phone number of the "@person" |
-| At User Ids | The user ID by "@person" |
-| Proxy | The proxy address of the proxy server. |
-| Port | The proxy port of Proxy-Server. |
-| User | Authentication(Username) for the proxy server. |
-| Password | Authentication(Password) for the proxy server. |
+| At User Ids    | The user ID by "@person"                                                                                                                                                                                                                                                                                                                                                    |
+| Proxy          | The proxy address of the proxy server.                                                                                                                                                                                                                                                                                                                                      |
+| Port           | The proxy port of Proxy-Server.                                                                                                                                                                                                                                                                                                                                             |
+| User           | Authentication(Username) for the proxy server.                                                                                                                                                                                                                                                                                                                              |
+| Password       | Authentication(Password) for the proxy server.                                                                                                                                                                                                                                                                                                                              |
 
 ## Reference
 
-- [DingTalk Custom Robot Access Development Documentation](https://open.dingtalk.com/document/robots/custom-robot-access) 
\ No newline at end of file
+- [DingTalk Custom Robot Access Development Documentation](https://open.dingtalk.com/document/robots/custom-robot-access)
+
diff --git a/docs/docs/en/guide/alert/email.md b/docs/docs/en/guide/alert/email.md
index d87b2964f1..09761733b5 100644
--- a/docs/docs/en/guide/alert/email.md
+++ b/docs/docs/en/guide/alert/email.md
@@ -1,4 +1,5 @@
-# Email  
+# Email
+
 If you need to use `Email` for alerting, create an alert instance in the alert instance management and select the Email plugin.
 
 The following shows the `Email` configuration example:
@@ -7,4 +8,4 @@ The following shows the `Email` configuration example:
 
 ![alert-email](../../../../img/alert/email-alter-setup2-en.png)
 
-![alert-email](../../../../img/alert/email-alter-setup3-en.png)
\ No newline at end of file
+![alert-email](../../../../img/alert/email-alter-setup3-en.png)
diff --git a/docs/docs/en/guide/alert/enterprise-webexteams.md b/docs/docs/en/guide/alert/enterprise-webexteams.md
index 2427b1c626..cc7070ce0d 100644
--- a/docs/docs/en/guide/alert/enterprise-webexteams.md
+++ b/docs/docs/en/guide/alert/enterprise-webexteams.md
@@ -7,14 +7,14 @@ The following is the `WebexTeams` configuration example:
 
 ## Parameter Configuration
 
-| **Parameter** | **Description** |
-| --- | --- |
-| botAccessToken | The access token of robot. |
-| roomID | The ID of the room that receives message (only support one room ID). |
-| toPersonId | The person ID of the recipient when sending a private 1:1 message. |
-| toPersonEmail | The email address of the recipient when sending a private 1:1 message. |
+|  **Parameter**  |                                                     **Description**                                                     |
+|-----------------|-------------------------------------------------------------------------------------------------------------------------|
+| botAccessToken  | The access token of robot.                                                                                              |
+| roomID          | The ID of the room that receives message (only support one room ID).                                                    |
+| toPersonId      | The person ID of the recipient when sending a private 1:1 message.                                                      |
+| toPersonEmail   | The email address of the recipient when sending a private 1:1 message.                                                  |
 | atSomeoneInRoom | If the message destination is room, the emails of the person being @, use `,` (eng commas) to separate multiple emails. |
-| destination |The destination of the message (one message only support one destination). |
+| destination     | The destination of the message (one message only support one destination).                                              |
 
 ## Create Bot
 
@@ -58,4 +58,5 @@ The `Room ID` we can acquire it from the `id` of creating a new group chat room
 ## References:
 
 - [WebexTeams Application Bot Guide](https://developer.webex.com/docs/bots)
-- [WebexTeams Message Guide](https://developer.webex.com/docs/api/v1/messages/create-a-message)
\ No newline at end of file
+- [WebexTeams Message Guide](https://developer.webex.com/docs/api/v1/messages/create-a-message)
+
diff --git a/docs/docs/en/guide/alert/enterprise-wechat.md b/docs/docs/en/guide/alert/enterprise-wechat.md
index a4fdb84bd5..baa2eb0920 100644
--- a/docs/docs/en/guide/alert/enterprise-wechat.md
+++ b/docs/docs/en/guide/alert/enterprise-wechat.md
@@ -40,7 +40,6 @@ The following is the `query userId` API example:
 
 APP: https://work.weixin.qq.com/api/doc/90000/90135/90236
 
-
 ### Group Chat
 
 The Group Chat send type means to notify the alert results via group chat created by Enterprise WeChat API, sending messages to all members of the group and specified users are not supported.
@@ -68,4 +67,5 @@ The following is the `create new group chat` API and `query userId` API example:
 
 ## Reference
 
-- Group Chat:https://work.weixin.qq.com/api/doc/90000/90135/90248
\ No newline at end of file
+- Group Chat:https://work.weixin.qq.com/api/doc/90000/90135/90248
+
diff --git a/docs/docs/en/guide/alert/feishu.md b/docs/docs/en/guide/alert/feishu.md
index bb0e94675c..93a4e6ac29 100644
--- a/docs/docs/en/guide/alert/feishu.md
+++ b/docs/docs/en/guide/alert/feishu.md
@@ -10,6 +10,7 @@ The following shows the `Feishu` configuration example:
 ## Parameter Configuration
 
 * Webhook
+
   > Copy the robot webhook URL shown below:
 
   ![alert-feishu-webhook](../../../../img/new_ui/dev/alert/alert_feishu_webhook.png)
diff --git a/docs/docs/en/guide/alert/http.md b/docs/docs/en/guide/alert/http.md
index 3725b516f7..ba5c009211 100644
--- a/docs/docs/en/guide/alert/http.md
+++ b/docs/docs/en/guide/alert/http.md
@@ -4,13 +4,13 @@ If you need to use `Http script` for alerting, create an alert instance in the a
 
 ## Parameter Configuration
 
-| **Parameter** | **Description** |
-| --- | --- |
-| URL | The `Http` request URL needs to contain protocol, host, path and parameters if the method is `GET`. |
-| Request Type | Select the request type from `POST` or `GET`. |
-| Headers | The headers of the `Http` request in JSON format. |
-| Body | The request body of the `Http` request in JSON format, when using `POST` method to alert. |
-| Content Field | The field name to place the alert information. |
+| **Parameter** |                                           **Description**                                           |
+|---------------|-----------------------------------------------------------------------------------------------------|
+| URL           | The `Http` request URL needs to contain protocol, host, path and parameters if the method is `GET`. |
+| Request Type  | Select the request type from `POST` or `GET`.                                                       |
+| Headers       | The headers of the `Http` request in JSON format.                                                   |
+| Body          | The request body of the `Http` request in JSON format, when using `POST` method to alert.           |
+| Content Field | The field name to place the alert information.                                                      |
 
 ## Send Type
 
@@ -28,4 +28,4 @@ The following shows the `GET` configuration example:
 Send alert information inside `Http` body by `Http` POST method.
 The following shows the `POST` configuration example:
 
-![enterprise-wechat-app-msg-config](../../../../img/alert/http-post-example.png)
\ No newline at end of file
+![enterprise-wechat-app-msg-config](../../../../img/alert/http-post-example.png)
diff --git a/docs/docs/en/guide/alert/script.md b/docs/docs/en/guide/alert/script.md
index b87b4f5f82..0f0e3a300b 100644
--- a/docs/docs/en/guide/alert/script.md
+++ b/docs/docs/en/guide/alert/script.md
@@ -1,18 +1,18 @@
 # Script
 
-If you need to use `Shell script` for alerting, create an alert instance in the alert instance management and select the `Script` plugin. 
+If you need to use `Shell script` for alerting, create an alert instance in the alert instance management and select the `Script` plugin.
 The following shows the `Script` configuration example:
 
 ![dingtalk-plugin](../../../../img/alert/script-plugin.png)
 
 ## Parameter Configuration
 
-| **Parameter** | **Description** |
-| --- | --- |
-| User Params | User defined parameters will pass to the script. |
-| Script Path |The file location path in the server. |
-| Type | Support `Shell` script. |
+| **Parameter** |                 **Description**                  |
+|---------------|--------------------------------------------------|
+| User Params   | User defined parameters will pass to the script. |
+| Script Path   | The file location path in the server.            |
+| Type          | Support `Shell` script.                          |
 
 ### Note
 
-Consider the script file access privileges with the executing tenant.
\ No newline at end of file
+Consider the script file access privileges with the executing tenant.
diff --git a/docs/docs/en/guide/alert/telegram.md b/docs/docs/en/guide/alert/telegram.md
index cdfb026ed4..5138b6772d 100644
--- a/docs/docs/en/guide/alert/telegram.md
+++ b/docs/docs/en/guide/alert/telegram.md
@@ -7,17 +7,17 @@ The following shows the `Telegram` configuration example:
 
 ## Parameter Configuration
 
-| **Parameter** | **Description** |
-| --- | --- |
-| WebHook | The WebHook of Telegram when use robot to send message. |
-| botToken | The access token of robot. |
-| chatId | Sub Telegram Channel. |
-| parseMode | Message parse type (support txt, markdown, markdownV2, html). |
-| EnableProxy | Enable proxy sever. |
-| Proxy | The proxy address of the proxy server. |
-| Port | The proxy port of proxy server. |
-| User | Authentication(Username) for the proxy server. |
-| Password | Authentication(Password) for the proxy server. |
+| **Parameter** |                        **Description**                        |
+|---------------|---------------------------------------------------------------|
+| WebHook       | The WebHook of Telegram when using a robot to send messages.  |
+| botToken      | The access token of robot.                                    |
+| chatId        | Sub Telegram Channel.                                         |
+| parseMode     | Message parse type (support txt, markdown, markdownV2, html). |
+| EnableProxy   | Enable proxy server.                                          |
+| Proxy         | The proxy address of the proxy server.                        |
+| Port          | The proxy port of proxy server.                               |
+| User          | Authentication(Username) for the proxy server.                |
+| Password      | Authentication(Password) for the proxy server.                |
 
 ### NOTE
 
@@ -34,4 +34,5 @@ The webhook needs to be able to receive and use the same JSON body of HTTP POST
 
 - [Telegram Application Bot Guide](https://core.telegram.org/bots)
 - [Telegram Bots Api](https://core.telegram.org/bots/api)
-- [Telegram SendMessage Api](https://core.telegram.org/bots/api#sendmessage)
\ No newline at end of file
+- [Telegram SendMessage Api](https://core.telegram.org/bots/api#sendmessage)
+
diff --git a/docs/docs/en/guide/data-quality.md b/docs/docs/en/guide/data-quality.md
index ec308df3bb..ef0d5e65b4 100644
--- a/docs/docs/en/guide/data-quality.md
+++ b/docs/docs/en/guide/data-quality.md
@@ -1,4 +1,5 @@
 # Data Quality
+
 ## Introduction
 
 The data quality task is used to check the data accuracy during the integration and processing of data. Data quality tasks in this release include single-table checking, single-table custom SQL checking, multi-table accuracy, and two-table value comparisons. The running environment of the data quality task is Spark 2.4.0, and other versions have not been verified, and users can verify by themselves.
@@ -7,9 +8,9 @@ The execution logic of the data quality task is as follows:
 
 - The user defines the task in the interface, and the user input value is stored in `TaskParam`.
 - When running a task, `Master` will parse `TaskParam`, encapsulate the parameters required by `DataQualityTask` and send it to `Worker`.
-- Worker runs the data quality task. After the data quality task finishes running, it writes the statistical results to the specified storage engine. 
+- Worker runs the data quality task. After the data quality task finishes running, it writes the statistical results to the specified storage engine.
 - The current data quality task result is stored in the `t_ds_dq_execute_result` table of `dolphinscheduler`
-`Worker` sends the task result to `Master`, after `Master` receives `TaskResponse`, it will judge whether the task type is `DataQualityTask`, if so, it will read the corresponding result from `t_ds_dq_execute_result` according to `taskInstanceId`, and then The result is judged according to the check mode, operator and threshold configured by the user. 
+  `Worker` sends the task result to `Master`. After `Master` receives `TaskResponse`, it judges whether the task type is `DataQualityTask`; if so, it reads the corresponding result from `t_ds_dq_execute_result` according to `taskInstanceId`, and then the result is judged against the check mode, operator and threshold configured by the user.
 - If the result is a failure, the corresponding operation, alarm or interruption will be performed according to the failure policy configured by the user.
 - Add config : `<server-name>/conf/common.properties`
 
@@ -27,14 +28,14 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 
 ## Detailed Inspection Logic
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| CheckMethod | [CheckFormula][Operator][Threshold], if the result is true, it indicates that the data does not meet expectations, and the failure strategy is executed. |
-| CheckFormula |  <ul><li>Expected-Actual</li><li>Actual-Expected</li><li>(Actual/Expected)x100%</li><li>(Expected-Actual)/Expected x100%</li></ul> |
-| Operator | =, >, >=, <, <=, != |
+| **Parameter** |                                                                                 **Description**                                                                                  |
+|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CheckMethod   | [CheckFormula][Operator][Threshold], if the result is true, it indicates that the data does not meet expectations, and the failure strategy is executed.                         |
+| CheckFormula  | <ul><li>Expected-Actual</li><li>Actual-Expected</li><li>(Actual/Expected)x100%</li><li>(Expected-Actual)/Expected x100%</li></ul>                                                |
+| Operator      | =, >, >=, <, <=, !=                                                                                                                                                              |
 | ExpectedValue | <ul><li>FixValue</li><li>DailyAvg</li><li>WeeklyAvg</li><li>MonthlyAvg</li><li>Last7DayAvg</li><li>Last30DayAvg</li><li>SrcTableTotalRows</li><li>TargetTableTotalRows</li></ul> |
-| Example |<ul><li>CheckFormula:Expected-Actual</li><li>Operator:></li><li>Threshold:0</li><li>ExpectedValue:FixValue=9</li></ul>
-    
+| Example       | <ul><li>CheckFormula:Expected-Actual</li><li>Operator:></li><li>Threshold:0</li><li>ExpectedValue:FixValue=9</li></ul>                                                           |
+
 In the example, assuming that the actual value is 10, the operator is >, and the expected value is 9, then the result 10 -9 > 0 is true, which means that the row data in the empty column has exceeded the threshold, and the task is judged to fail.
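A rough sketch of how the `[CheckFormula][Operator][Threshold]` comparison described above could be evaluated (illustrative only; the class and method names are made up and do not reflect the actual implementation):

```java
import java.math.BigDecimal;

public class CheckRuleSketch {

    /** Returns true when the check is violated, i.e. the failure strategy should run. */
    static boolean violates(BigDecimal formulaResult, String operator, BigDecimal threshold) {
        int cmp = formulaResult.compareTo(threshold);
        switch (operator) {
            case "=":  return cmp == 0;
            case "!=": return cmp != 0;
            case ">":  return cmp > 0;
            case ">=": return cmp >= 0;
            case "<":  return cmp < 0;
            case "<=": return cmp <= 0;
            default:   throw new IllegalArgumentException("Unsupported operator: " + operator);
        }
    }

    public static void main(String[] args) {
        // Mirrors the example above: formula result 10 - 9 = 1, operator ">", threshold 0,
        // so the check is violated and the task is judged to fail.
        System.out.println(violates(BigDecimal.ONE, ">", BigDecimal.ZERO)); // prints true
    }
}
```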
 
 # Task Operation Guide
@@ -50,7 +51,6 @@ The goal of the null value check is to check the number of empty rows in the spe
   ```sql
   SELECT COUNT(*) AS miss FROM ${src_table} WHERE (${src_field} is null or ${src_field} = '') AND (${src_filter})
   ```
-
 - The SQL to calculate the total number of rows in the table is as follows:
 
   ```sql
@@ -61,155 +61,163 @@ The goal of the null value check is to check the number of empty rows in the spe
 
 ![dataquality_null_check](../../../img/tasks/demo/null_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select the check column name. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Such as the title, it will also be used when counting the total number of rows in the table, optional.                                                                                                                                        |
+| Src table check column | Drop-down to select the check column name.                                                                                                                                                                                                    |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Timeliness Check of Single Table Check
+
 ### Inspection Introduction
+
 The timeliness check is used to check whether the data is processed within the expected time. The start time and end time can be specified to define the time range. If the amount of data within the time range does not reach the set threshold, the check task will be judged as fail.
 
 ### Interface Operation Guide
 
 ![dataquality_timeliness_check](../../../img/tasks/demo/timeliness_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc.
-| Source data source | The corresponding data source under the source data type.
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select check column name. |
-| Start time | The start time of a time range. |
-| end time | The end time of a time range. |
-| Time Format | Set the corresponding time format. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Such as the title, it will also be used when counting the total number of rows in the table, optional.                                                                                                                                        |
+| Src table check column | Drop-down to select check column name.                                                                                                                                                                                                        |
+| Start time             | The start time of a time range.                                                                                                                                                                                                               |
+| end time               | The end time of a time range.                                                                                                                                                                                                                 |
+| Time Format            | Set the corresponding time format.                                                                                                                                                                                                            |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Field Length Check for Single Table Check
 
 ### Inspection Introduction
+
 The goal of field length verification is to check whether the length of the selected field meets the expectations. If there is data that does not meet the requirements, and the number of rows exceeds the threshold, the task will be judged to fail.
 
 ### Interface Operation Guide
 
 ![dataquality_length_check](../../../img/tasks/demo/field_length_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select the check column name. |
-| Logical operators | =, >, >=, <, <=, ! = |
-| Field length limit | Like the title. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Such as the title, it will also be used when counting the total number of rows in the table, optional.                                                                                                                                        |
+| Src table check column | Drop-down to select the check column name.                                                                                                                                                                                                    |
+| Logical operators      | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Field length limit     | Like the title.                                                                                                                                                                                                                               |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Uniqueness Check for Single Table Check
 
 ### Inspection Introduction
+
 The goal of the uniqueness check is to check whether the fields are duplicated. It is generally used to check whether the primary key is duplicated. If there are duplicates and the threshold is reached, the check task will be judged to be failed.
 
 ### Interface Operation Guide
 
 ![dataquality_uniqueness_check](../../../img/tasks/demo/uniqueness_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select the check column name. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Such as the title, it will also be used when counting the total number of rows in the table, optional.                                                                                                                                        |
+| Src table check column | Drop-down to select the check column name.                                                                                                                                                                                                    |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Regular Expression Check for Single Table Check
 
 ### Inspection Introduction
+
 The goal of regular expression verification is to check whether the format of the value of a field meets the requirements, such as time format, email format, ID card format, etc. If there is data that does not meet the format and exceeds the threshold, the task will be judged as failed.
 
 ### Interface Operation Guide
 
 ![dataquality_regex_check](../../../img/tasks/demo/regexp_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select check column name. |
-| Regular expression | As title. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Such as the title, it will also be used when counting the total number of rows in the table, optional.                                                                                                                                        |
+| Src table check column | Drop-down to select check column name.                                                                                                                                                                                                        |
+| Regular expression     | As title.                                                                                                                                                                                                                                     |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Enumeration Value Validation for Single Table Check
+
 ### Inspection Introduction
+
 The goal of enumeration value verification is to check whether the value of a field is within the range of the enumeration value. If there is data that is not in the range of the enumeration value and exceeds the threshold, the task will be judged to fail.
 
 ### Interface Operation Guide
 
 ![dataquality_enum_check](../../../img/tasks/demo/enumeration_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src table filter conditions | Such as title, also used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select the check column name. |
-| List of enumeration values | Separated by commas. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|        **Parameter**        |                                                                                                                **Description**                                                                                                                |
+|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type            | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source          | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table           | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src table filter conditions | Such as title, also used when counting the total number of rows in the table, optional.                                                                                                                                                       |
+| Src table check column      | Drop-down to select the check column name.                                                                                                                                                                                                    |
+| List of enumeration values  | Separated by commas.                                                                                                                                                                                                                          |
+| Check method                | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators             | =, >, >=, <, <=, ! =                                                                                                                                                                                                                          |
+| Threshold                   | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy            | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type         | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Table Row Number Verification for Single Table Check
 
 ### Inspection Introduction
+
 The goal of table row number verification is to check whether the number of rows in the table reaches the expected value. If the number of rows does not meet the standard, the task will be judged as failed.
 
 ### Interface Operation Guide
 
 ![dataquality_count_check](../../../img/tasks/demo/table_count_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the validation data is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Src table check column | Drop-down to select the check column name. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|     **Parameter**      |                                                                                                                **Description**                                                                                                                |
+|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type       | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                |
+| Source data source     | The corresponding data source under the source data type.                                                                                                                                                                                     |
+| Source data table      | Drop-down to select the table where the validation data is located.                                                                                                                                                                           |
+| Src filter conditions  | Filter conditions on the source table; they are also used when counting the total number of rows in the table. Optional. |
+| Src table check column | Drop-down to select the check column name.                                                                                                                                                                                                    |
+| Check method           | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                        |
+| Check operators        | =, >, >=, <, <=, != |
+| Threshold              | The value used in the formula for comparison.                                                                                                                                                                                                 |
+| Failure strategy       | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type    | Select the desired type from the drop-down menu.                                                                                                                                                                                              |
 
 ## Custom SQL Check for Single Table Check
 
@@ -217,36 +225,38 @@ The goal of table row number verification is to check whether the number of rows
 
 ![dataquality_custom_sql_check](../../../img/tasks/demo/custom_sql_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the data to be verified is located. |
-| Actual value name | Alias in SQL for statistical value calculation, such as max_num. |
-|Actual value calculation SQL | SQL for outputting actual values. Note:<ul><li>The SQL must be statistical SQL, such as counting the number of rows, calculating the maximum value, minimum value, etc.</li><li>Select max(a) as max_num from ${src_table}, the table name must be filled like this.</li></ul> |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Check method | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul> |
-| Check operators | =, >, >=, <, <=, ! = |
-| Threshold | The value used in the formula for comparison. |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
-| Expected value type | Select the desired type from the drop-down menu. |
+|        **Parameter**         |                                                                                                                                **Description**                                                                                                                                 |
+|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type             | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                                                 |
+| Source data source           | The corresponding data source under the source data type.                                                                                                                                                                                                                      |
+| Source data table            | Drop-down to select the table where the data to be verified is located.                                                                                                                                                                                                        |
+| Actual value name            | Alias in SQL for statistical value calculation, such as max_num.                                                                                                                                                                                                               |
+| Actual value calculation SQL | SQL for outputting the actual value. Note:<ul><li>The SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.</li><li>For example, select max(a) as max_num from ${src_table}; the table name must be referenced through the ${src_table} placeholder.</li></ul> |
+| Src filter conditions        | Filter conditions on the source table; they are also used when counting the total number of rows in the table. Optional. |
+| Check method                 | <ul><li>[Expected-Actual]</li><li>[Actual-Expected]</li><li>[Actual/Expected]x100%</li><li>[(Expected-Actual)/Expected]x100%</li></ul>                                                                                                                                         |
+| Check operators              | =, >, >=, <, <=, != |
+| Threshold                    | The value used in the formula for comparison.                                                                                                                                                                                                                                  |
+| Failure strategy             | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul>                                  |
+| Expected value type          | Select the desired type from the drop-down menu.                                                                                                                                                                                                                               |
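+
+A minimal sketch of an "Actual value calculation SQL" that follows the note above (the column `a` is only an illustrative column name):
+
+```sql
+-- statistical SQL: the alias must match the "Actual value name" field (here max_num),
+-- and the table must be referenced through the ${src_table} placeholder
+SELECT MAX(a) AS max_num FROM ${src_table};
+```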
 
 ## Accuracy Check of Multi-table
+
 ### Inspection Introduction
+
 Accuracy checks compare the differences in data records for selected fields between two tables. For example:
 - table test1
 
 | c1 | c2 |
-| :---: | :---: |
-| a | 1 |
-| b | 2 |
+|:--:|:--:|
+| a  | 1  |
+| b  | 2  |
 
 - table test2
 
 | c21 | c22 |
-| :---: | :---: |
-| a | 1 |
-| b | 3 |
+|:---:|:---:|
+|  a  |  1  |
+|  b  |  3  |
 
 If you compare the data in c1 and c21, the tables test1 and test2 are exactly the same. If you compare c2 and c22, the data in table test1 and table test2 are inconsistent.
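+
+Conceptually (this is not the exact SQL that DolphinScheduler runs internally), the mismatch between the two example tables can be expressed as counting source rows that have no matching target row on the compared columns:
+
+```sql
+-- for the example above, comparing (c1, c2) against (c21, c22) finds one mismatched row: (b, 2)
+SELECT COUNT(*) AS miss_count
+FROM test1 t1
+LEFT JOIN test2 t2
+  ON t1.c1 = t2.c21 AND t1.c2 = t2.c22
+WHERE t2.c21 IS NULL;
+```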
 
@@ -254,45 +264,47 @@ If you compare the data in c1 and c21, the tables test1 and test2 are exactly th
 
 ![dataquality_multi_table_accuracy_check](../../../img/tasks/demo/multi_table_accuracy_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | Drop-down to select the table where the data to be verified is located. |
-| Src filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Target data type | Choose MySQL, PostgreSQL, etc. |
-| Target data source | The corresponding data source under the source data type. |
-| Target data table | Drop-down to select the table where the data to be verified is located. |
-| Target filter conditions | Such as the title, it will also be used when counting the total number of rows in the table, optional. |
-| Check column | Fill in the source data column, operator and target data column respectively. |
-| Verification method | Select the desired verification method. |
-| Operators | =, >, >=, <, <=, ! = |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li><ul> |
-| Expected value type | Select the desired type in the drop-down menu, only `SrcTableTotalRow`, `TargetTableTotalRow` and fixed value are suitable for selection here. |
+|      **Parameter**       |                                                                                                               **Description**                                                                                                                |
+|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type         | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                               |
+| Source data source       | The corresponding data source under the source data type.                                                                                                                                                                                    |
+| Source data table        | Drop-down to select the table where the data to be verified is located.                                                                                                                                                                      |
+| Src filter conditions    | Filter conditions on the source table; they are also used when counting the total number of rows in the table. Optional. |
+| Target data type         | Choose MySQL, PostgreSQL, etc.                                                                                                                                                                                                               |
+| Target data source       | The corresponding data source under the source data type.                                                                                                                                                                                    |
+| Target data table        | Drop-down to select the table where the data to be verified is located.                                                                                                                                                                      |
+| Target filter conditions | Filter conditions on the target table; they are also used when counting the total number of rows in the table. Optional. |
+| Check column             | Fill in the source data column, operator and target data column respectively.                                                                                                                                                                |
+| Verification method      | Select the desired verification method.                                                                                                                                                                                                      |
+| Operators                | =, >, >=, <, <=, != |
+| Failure strategy         | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Expected value type      | Select the desired type in the drop-down menu, only `SrcTableTotalRow`, `TargetTableTotalRow` and fixed value are suitable for selection here.                                                                                               |
 
 ## Comparison of the values checked by the two tables
+
 ### Inspection Introduction
+
 Two-table value comparison allows users to define different statistical SQL for two tables and compare the resulting values. For example, calculate the total of a column in source table A as sum1 and the total of a column in the target table as sum2, then compare sum1 and sum2 to determine the check result.
 
 ### Interface Operation Guide
 
 ![dataquality_multi_table_comparison_check](../../../img/tasks/demo/multi_table_comparison_check.png)
 
-| **Parameter** | **Description** |
-| ----- | ---- |
-| Source data type | Select MySQL, PostgreSQL, etc. |
-| Source data source | The corresponding data source under the source data type. |
-| Source data table | The table where the data is to be verified. |
-| Actual value name | Calculate the alias in SQL for the actual value, such as max_age1. |
-| Actual value calculation SQL | SQL for outputting actual values. Note: <ul><li>The SQL must be statistical SQL, such as counting the number of rows, calculating the maximum value, minimum value, etc.</li><li>Select max(age) as max_age1 from ${src_table} The table name must be filled like this.</li></ul> |
-| Target data type | Choose MySQL, PostgreSQL, etc. |
-| Target data source | The corresponding data source under the source data type. |
-| Target data table | The table where the data is to be verified. |
-| Expected value name | Calculate the alias in SQL for the expected value, such as max_age2. |
+|         **Parameter**          |                                                                                                                                    **Description**                                                                                                                                    |
+|--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Source data type               | Select MySQL, PostgreSQL, etc.                                                                                                                                                                                                                                                        |
+| Source data source             | The corresponding data source under the source data type.                                                                                                                                                                                                                             |
+| Source data table              | The table where the data is to be verified.                                                                                                                                                                                                                                           |
+| Actual value name              | Calculate the alias in SQL for the actual value, such as max_age1.                                                                                                                                                                                                                    |
+| Actual value calculation SQL   | SQL for outputting the actual value. Note: <ul><li>The SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.</li><li>For example, select max(age) as max_age1 from ${src_table}; the table name must be referenced through the ${src_table} placeholder.</li></ul> |
+| Target data type               | Choose MySQL, PostgreSQL, etc.                                                                                                                                                                                                                                                        |
+| Target data source             | The corresponding data source under the source data type.                                                                                                                                                                                                                             |
+| Target data table              | The table where the data is to be verified.                                                                                                                                                                                                                                           |
+| Expected value name            | Calculate the alias in SQL for the expected value, such as max_age2.                                                                                                                                                                                                                  |
 | Expected value calculation SQL | SQL for outputting the expected value. Note: <ul><li>The SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.</li><li>For example, select max(age) as max_age2 from ${target_table}; the table name must be referenced through the ${target_table} placeholder.</li></ul> |
-| Verification method | Select the desired verification method. |
-| Operators | =, >, >=, <, <=, ! = |
-| Failure strategy | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul> |
+| Verification method            | Select the desired verification method.                                                                                                                                                                                                                                               |
+| Operators                      | =, >, >=, <, <=, != |
+| Failure strategy               | <ul><li>Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent.</li><li>Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent.</li></ul>                                         |
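+
+A minimal sketch of the paired statements for the sum1/sum2 example above (the column `amount` is made up; the aliases must match the "Actual value name" and "Expected value name" fields):
+
+```sql
+-- Actual value calculation SQL, run against the source table
+SELECT SUM(amount) AS sum1 FROM ${src_table};
+-- Expected value calculation SQL, run against the target table
+SELECT SUM(amount) AS sum2 FROM ${target_table};
+```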
 
 ## Task result view
 
@@ -306,4 +318,4 @@ Two-table value comparison allows users to customize different SQL statistics fo
 
 ### Rules Details
 
-![dataquality_rule_detail](../../../img/tasks/demo/rule_detail.png)
\ No newline at end of file
+![dataquality_rule_detail](../../../img/tasks/demo/rule_detail.png)
diff --git a/docs/docs/en/guide/datasource/athena.md b/docs/docs/en/guide/datasource/athena.md
index ab92e06238..035c8f7c4b 100644
--- a/docs/docs/en/guide/datasource/athena.md
+++ b/docs/docs/en/guide/datasource/athena.md
@@ -4,15 +4,15 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select ATHENA. |
-| Datasource name | Enter the name of the DataSource. |
-| Description | Enter a description of the DataSource. |
-| Username | Set the AWS access key. |
-| Password | Set the AWS secret access key. |
-| AwsRegion | Set the AWS region. |
-| Database name | Enter the database name of the ATHENA connection. |
+|       **Datasource**       |                      **Description**                      |
+|----------------------------|-----------------------------------------------------------|
+| Datasource                 | Select ATHENA.                                            |
+| Datasource name            | Enter the name of the DataSource.                         |
+| Description                | Enter a description of the DataSource.                    |
+| Username                   | Set the AWS access key.                                   |
+| Password                   | Set the AWS secret access key.                            |
+| AwsRegion                  | Set the AWS region.                                       |
+| Database name              | Enter the database name of the ATHENA connection.         |
 | Jdbc connection parameters | Parameter settings for ATHENA connection, in JSON format. |
 
 ## Native Supported
@@ -20,3 +20,4 @@
 - No. Read the `DataSource Center` section in [datasource-setting](../howto/datasource-setting.md) to learn how to activate this datasource.
 - JDBC driver configuration reference document [athena-connect-with-jdbc](https://docs.amazonaws.cn/athena/latest/ug/connect-with-jdbc.html)
 - Driver download link [SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar](https://s3.cn-north-1.amazonaws.com.cn/athena-downloads-cn/drivers/JDBC/SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar)
+
diff --git a/docs/docs/en/guide/datasource/clickhouse.md b/docs/docs/en/guide/datasource/clickhouse.md
index 0fb78366cd..8de091a938 100644
--- a/docs/docs/en/guide/datasource/clickhouse.md
+++ b/docs/docs/en/guide/datasource/clickhouse.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select CLICKHOUSE. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the CLICKHOUSE service IP. |
-| Port | Enter the CLICKHOUSE service port. |
-| Username | Set the username for CLICKHOUSE connection. |
-| Password | Set the password for CLICKHOUSE connection. |
-| Database Name | Enter the database name of the CLICKHOUSE connection. |
+|     **Datasource**      |                        **Description**                        |
+|-------------------------|---------------------------------------------------------------|
+| Datasource              | Select CLICKHOUSE.                                            |
+| Datasource Name         | Enter the name of the datasource.                             |
+| Description             | Enter a description of the datasource.                        |
+| IP/Host Name            | Enter the CLICKHOUSE service IP.                              |
+| Port                    | Enter the CLICKHOUSE service port.                            |
+| Username                | Set the username for CLICKHOUSE connection.                   |
+| Password                | Set the password for CLICKHOUSE connection.                   |
+| Database Name           | Enter the database name of the CLICKHOUSE connection.         |
 | jdbc connect parameters | Parameter settings for CLICKHOUSE connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/db2.md b/docs/docs/en/guide/datasource/db2.md
index 33d459a0a5..ef839e38c9 100644
--- a/docs/docs/en/guide/datasource/db2.md
+++ b/docs/docs/en/guide/datasource/db2.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select DB2. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the DB2 service IP. |
-| Port | Enter the DB2 service port. |
-| Username | Set the username for DB2 connection. |
-| Password | Set the password for DB2 connection. |
-| Database Name | Enter the database name of the DB2 connection. |
+|     **Datasource**      |                    **Description**                     |
+|-------------------------|--------------------------------------------------------|
+| Datasource              | Select DB2.                                            |
+| Datasource Name         | Enter the name of the datasource.                      |
+| Description             | Enter a description of the datasource.                 |
+| IP/Host Name            | Enter the DB2 service IP.                              |
+| Port                    | Enter the DB2 service port.                            |
+| Username                | Set the username for DB2 connection.                   |
+| Password                | Set the password for DB2 connection.                   |
+| Database Name           | Enter the database name of the DB2 connection.         |
 | jdbc connect parameters | Parameter settings for DB2 connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/hive.md b/docs/docs/en/guide/datasource/hive.md
index 2a29ccb319..4af38ac0ea 100644
--- a/docs/docs/en/guide/datasource/hive.md
+++ b/docs/docs/en/guide/datasource/hive.md
@@ -6,27 +6,27 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select HIVE. |
-| Datasource name | Enter the name of the DataSource. |
-| Description | Enter a description of the DataSource. |
-| IP/Host Name | Enter the HIVE service IP. |
-| Port | Enter the HIVE service port. |
-| Username | Set the username for HIVE connection. |
-| Password | Set the password for HIVE connection. |
-| Database name | Enter the database name of the HIVE connection. |
+|       **Datasource**       |                     **Description**                     |
+|----------------------------|---------------------------------------------------------|
+| Datasource                 | Select HIVE.                                            |
+| Datasource name            | Enter the name of the DataSource.                       |
+| Description                | Enter a description of the DataSource.                  |
+| IP/Host Name               | Enter the HIVE service IP.                              |
+| Port                       | Enter the HIVE service port.                            |
+| Username                   | Set the username for HIVE connection.                   |
+| Password                   | Set the password for HIVE connection.                   |
+| Database name              | Enter the database name of the HIVE connection.         |
 | Jdbc connection parameters | Parameter settings for HIVE connection, in JSON format. |
 
-> NOTICE: If you wish to execute multiple HIVE SQL in the same session, you could set `support.hive.oneSession = true` in `common.properties`. 
+> NOTICE: If you wish to execute multiple Hive SQL statements in the same session, you can set `support.hive.oneSession = true` in `common.properties`.
 > This is helpful when you need to set environment variables before running the Hive SQL. The default value of `support.hive.oneSession` is `false`, in which case multiple SQL statements run in different sessions.
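+
+A sketch of the corresponding entry in `common.properties`, assuming you do want the Hive SQL statements of one task to share a session:
+
+```conf
+# run the Hive SQL statements of one task in the same session
+support.hive.oneSession=true
+```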
 
 ## Use HiveServer2 HA ZooKeeper
 
 ![hive-server2](../../../../img/new_ui/dev/datasource/hiveserver2.png)
 
-NOTICE: If Kerberos is disabled, ensure the parameter `hadoop.security.authentication.startup.state` is false, and parameter `java.security.krb5.conf.path` value sets null. 
-If **Kerberos** is enabled, needs to set the following parameters  in `common.properties`: 
+NOTICE: If Kerberos is disabled, ensure that the parameter `hadoop.security.authentication.startup.state` is `false` and that `java.security.krb5.conf.path` is set to null.
+If **Kerberos** is enabled, set the following parameters in `common.properties`:
 
 ```conf
 # whether to startup kerberos
@@ -44,4 +44,4 @@ login.user.keytab.path=/opt/hdfs.headless.keytab
 
 ## Native Supported
 
-Yes, could use this datasource by default. 
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/mysql.md b/docs/docs/en/guide/datasource/mysql.md
index 5b9c4642f5..e4d430fb0d 100644
--- a/docs/docs/en/guide/datasource/mysql.md
+++ b/docs/docs/en/guide/datasource/mysql.md
@@ -4,16 +4,16 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select MYSQL. |
-| Datasource name | Enter the name of the DataSource. |
-| Description | Enter a description of the DataSource. |
-| IP/Host Name | Enter the MYSQL service IP. |
-| Port | Enter the MYSQL service port. |
-| Username | Set the username for MYSQL connection. |
-| Password | Set the password for MYSQL connection. |
-| Database name | Enter the database name of the MYSQL connection. |
+|       **Datasource**       |                     **Description**                      |
+|----------------------------|----------------------------------------------------------|
+| Datasource                 | Select MYSQL.                                            |
+| Datasource name            | Enter the name of the DataSource.                        |
+| Description                | Enter a description of the DataSource.                   |
+| IP/Host Name               | Enter the MYSQL service IP.                              |
+| Port                       | Enter the MYSQL service port.                            |
+| Username                   | Set the username for MYSQL connection.                   |
+| Password                   | Set the password for MYSQL connection.                   |
+| Database name              | Enter the database name of the MYSQL connection.         |
 | Jdbc connection parameters | Parameter settings for MYSQL connection, in JSON format. |
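+
+For example, typical MySQL JDBC parameters could be supplied here as a small JSON object (the exact keys depend on your MySQL setup; the ones below are common Connector/J options):
+
+```json
+{"useUnicode": "true", "characterEncoding": "UTF-8", "useSSL": "false"}
+```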
 
 ## Native Supported
diff --git a/docs/docs/en/guide/datasource/oracle.md b/docs/docs/en/guide/datasource/oracle.md
index c7d217ad51..4fdaf94195 100644
--- a/docs/docs/en/guide/datasource/oracle.md
+++ b/docs/docs/en/guide/datasource/oracle.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select Oracle. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the Oracle service IP. |
-| Port | Enter the Oracle service port. |
-| Username | Set the username for Oracle connection. |
-| Password | Set the password for Oracle connection. |
-| Database Name | Enter the database name of the Oracle connection. |
+|     **Datasource**      |                      **Description**                      |
+|-------------------------|-----------------------------------------------------------|
+| Datasource              | Select Oracle.                                            |
+| Datasource Name         | Enter the name of the datasource.                         |
+| Description             | Enter a description of the datasource.                    |
+| IP/Host Name            | Enter the Oracle service IP.                              |
+| Port                    | Enter the Oracle service port.                            |
+| Username                | Set the username for Oracle connection.                   |
+| Password                | Set the password for Oracle connection.                   |
+| Database Name           | Enter the database name of the Oracle connection.         |
 | jdbc connect parameters | Parameter settings for Oracle connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/postgresql.md b/docs/docs/en/guide/datasource/postgresql.md
index 08d92edd31..cb3daf41c4 100644
--- a/docs/docs/en/guide/datasource/postgresql.md
+++ b/docs/docs/en/guide/datasource/postgresql.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select POSTGRESQL. |
-| Datasource name | Enter the name of the DataSource. |
-| Description | Enter a description of the DataSource. |
-| IP/Host Name | Enter the PostgreSQL service IP. |
-| Port | Enter the PostgreSQL service port. |
-| Username | Set the username for PostgreSQL connection. |
-| Password | Set the password for PostgreSQL connection. |
-| Database name | Enter the database name of the PostgreSQL connection. |
+|       **Datasource**       |                        **Description**                        |
+|----------------------------|---------------------------------------------------------------|
+| Datasource                 | Select POSTGRESQL.                                            |
+| Datasource name            | Enter the name of the DataSource.                             |
+| Description                | Enter a description of the DataSource.                        |
+| IP/Host Name               | Enter the PostgreSQL service IP.                              |
+| Port                       | Enter the PostgreSQL service port.                            |
+| Username                   | Set the username for PostgreSQL connection.                   |
+| Password                   | Set the password for PostgreSQL connection.                   |
+| Database name              | Enter the database name of the PostgreSQL connection.         |
 | Jdbc connection parameters | Parameter settings for PostgreSQL connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default. 
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/presto.md b/docs/docs/en/guide/datasource/presto.md
index 6302954056..70b7fb90d9 100644
--- a/docs/docs/en/guide/datasource/presto.md
+++ b/docs/docs/en/guide/datasource/presto.md
@@ -4,19 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select Presto. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the Presto service IP. |
-| Port | Enter the Presto service port. |
-| Username | Set the username for Presto connection. |
-| Password | Set the password for Presto connection. |
-| Database Name | Enter the database name of the Presto connection. |
+|     **Datasource**      |                      **Description**                      |
+|-------------------------|-----------------------------------------------------------|
+| Datasource              | Select Presto.                                            |
+| Datasource Name         | Enter the name of the datasource.                         |
+| Description             | Enter a description of the datasource.                    |
+| IP/Host Name            | Enter the Presto service IP.                              |
+| Port                    | Enter the Presto service port.                            |
+| Username                | Set the username for Presto connection.                   |
+| Password                | Set the password for Presto connection.                   |
+| Database Name           | Enter the database name of the Presto connection.         |
 | jdbc connect parameters | Parameter settings for Presto connection, in JSON format. |
 
-
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/redshift.md b/docs/docs/en/guide/datasource/redshift.md
index 3dbae981d1..60dd982492 100644
--- a/docs/docs/en/guide/datasource/redshift.md
+++ b/docs/docs/en/guide/datasource/redshift.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select Redshift. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the Redshift service IP. |
-| Port | Enter the Redshift service port. |
-| Username | Set the username for Redshift connection. |
-| Password | Set the password for Redshift connection. |
-| Database Name | Enter the database name of the Redshift connection. |
+|     **Datasource**      |                       **Description**                       |
+|-------------------------|-------------------------------------------------------------|
+| Datasource              | Select Redshift.                                            |
+| Datasource Name         | Enter the name of the datasource.                           |
+| Description             | Enter a description of the datasource.                      |
+| IP/Host Name            | Enter the Redshift service IP.                              |
+| Port                    | Enter the Redshift service port.                            |
+| Username                | Set the username for Redshift connection.                   |
+| Password                | Set the password for Redshift connection.                   |
+| Database Name           | Enter the database name of the Redshift connection.         |
 | jdbc connect parameters | Parameter settings for Redshift connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/spark.md b/docs/docs/en/guide/datasource/spark.md
index e3ea0acac7..bbf1075dc1 100644
--- a/docs/docs/en/guide/datasource/spark.md
+++ b/docs/docs/en/guide/datasource/spark.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select Spark. |
-| Datasource name | Enter the name of the DataSource. |
-| Description | Enter a description of the DataSource. |
-| IP/Host Name | Enter the Spark service IP. |
-| Port | Enter the Spark service port. |
-| Username | Set the username for Spark connection. |
-| Password | Set the password for Spark connection. |
-| Database name | Enter the database name of the Spark connection. |
+|       **Datasource**       |                     **Description**                      |
+|----------------------------|----------------------------------------------------------|
+| Datasource                 | Select Spark.                                            |
+| Datasource name            | Enter the name of the DataSource.                        |
+| Description                | Enter a description of the DataSource.                   |
+| IP/Host Name               | Enter the Spark service IP.                              |
+| Port                       | Enter the Spark service port.                            |
+| Username                   | Set the username for Spark connection.                   |
+| Password                   | Set the password for Spark connection.                   |
+| Database name              | Enter the database name of the Spark connection.         |
 | Jdbc connection parameters | Parameter settings for Spark connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default. 
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/datasource/sqlserver.md b/docs/docs/en/guide/datasource/sqlserver.md
index be0addf991..788d1b477f 100644
--- a/docs/docs/en/guide/datasource/sqlserver.md
+++ b/docs/docs/en/guide/datasource/sqlserver.md
@@ -4,18 +4,18 @@
 
 ## Datasource Parameters
 
-| **Datasource** | **Description** |
-| --- | --- |
-| Datasource | Select SQLSERVER. |
-| Datasource Name | Enter the name of the datasource. |
-| Description | Enter a description of the datasource. |
-| IP/Host Name | Enter the SQLSERVER service IP. |
-| Port | Enter the SQLSERVER service port. |
-| Username | Set the username for SQLSERVER connection. |
-| Password | Set the password for SQLSERVER connection. |
-| Database Name | Enter the database name of the SQLSERVER connection. |
+|     **Datasource**      |                       **Description**                        |
+|-------------------------|--------------------------------------------------------------|
+| Datasource              | Select SQLSERVER.                                            |
+| Datasource Name         | Enter the name of the datasource.                            |
+| Description             | Enter a description of the datasource.                       |
+| IP/Host Name            | Enter the SQLSERVER service IP.                              |
+| Port                    | Enter the SQLSERVER service port.                            |
+| Username                | Set the username for SQLSERVER connection.                   |
+| Password                | Set the password for SQLSERVER connection.                   |
+| Database Name           | Enter the database name of the SQLSERVER connection.         |
 | jdbc connect parameters | Parameter settings for SQLSERVER connection, in JSON format. |
 
 ## Native Supported
 
-Yes, could use this datasource by default.
\ No newline at end of file
+Yes, you can use this datasource by default.
diff --git a/docs/docs/en/guide/expansion-reduction.md b/docs/docs/en/guide/expansion-reduction.md
index f7cd12e589..c58a85e9e2 100644
--- a/docs/docs/en/guide/expansion-reduction.md
+++ b/docs/docs/en/guide/expansion-reduction.md
@@ -1,12 +1,12 @@
 # DolphinScheduler Expansion and Reduction
 
-## Expansion 
+## Expansion
 
 This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.
 
 ```
- Attention: There cannot be more than one master service process or worker service process on a physical machine.
-       If the physical machine which locate the expansion master or worker node has already installed the scheduled service, check the [1.4 Modify configuration] and edit the configuration file `conf/config/install_config.conf` on ** all ** nodes, add masters or workers parameter, and restart the scheduling cluster.
+Attention: There cannot be more than one master service process or worker service process on a physical machine.
+      If the physical machine where the new master or worker node will run already has the scheduler service installed, check [1.4 Modify configuration], edit the configuration file `conf/config/install_config.conf` on **all** nodes, add the masters or workers parameter, and restart the scheduling cluster.
 ```
 
 ### Basic software installation
@@ -14,16 +14,15 @@ This article describes how to add a new master service or worker service to an e
 * [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (version 1.8+): must be installed; install it and configure the `JAVA_HOME` and `PATH` variables in `/etc/profile`
 * [optional] If the expansion is a worker node, you need to consider whether to install an external client, such as Hadoop, Hive, Spark Client.
 
-
 ```markdown
- Attention: DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but will only call their Client for the corresponding task submission.
+Attention: DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but will only call their Client for the corresponding task submission.
 ```
 
 ### Get Installation Package
 
 - Check the version of DolphinScheduler used in your existing environment, and get the installation package of the corresponding version, if the versions are different, there may be compatibility problems.
 - Confirm the unified installation directory of other nodes, this article assumes that DolphinScheduler is installed in `/opt/` directory, and the full path is `/opt/dolphinscheduler`.
-- Please download the corresponding version of the installation package to the server installation directory, uncompress it and rename it to `dolphinscheduler` and store it in the `/opt` directory. 
+- Please download the corresponding version of the installation package to the server installation directory, uncompress it and rename it to `dolphinscheduler` and store it in the `/opt` directory.
 - Add the database dependency package. This document uses a MySQL database, so add the `mysql-connector-java` driver package to the `/opt/dolphinscheduler/lib` directory.
 
 ```shell
@@ -37,7 +36,7 @@ mv apache-dolphinscheduler-<version>-bin  dolphinscheduler
 ```
 
 ```markdown
- Attention: You can copy the installation package directly from an existing environment to an expanded physical machine.
+Attention: You can copy the installation package directly from an existing environment to an expanded physical machine.
 ```
 
 ### Create Deployment Users
@@ -58,53 +57,49 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
 ```
 
 ```markdown
- Attention:
- - Since it is `sudo -u {linux-user}` to switch between different Linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
- - If you find the line `Default requiretty` in the `/etc/sudoers` file, please also comment it out.
- - If have needs to use resource uploads, you also need to assign read and write permissions to the deployment user on `HDFS or MinIO`.
+Attention:
+- Since `sudo -u {linux-user}` is used to switch between different Linux users to run multi-tenant jobs, the deployment user must have password-free sudo privileges.
+- If you find the line `Default requiretty` in the `/etc/sudoers` file, please also comment it out.
+- If you need to use the resource upload feature, you also need to grant the deployment user read and write permissions on `HDFS or MinIO`.
 ```
 
 ### Modify Configuration
 
 - From an existing node such as `Master/Worker`, copy the configuration directory directly to replace the configuration directory in the new node. After finishing the file copy, check whether the configuration items are correct.
-    
-    ```markdown
-    Highlights:
-    datasource.properties: database connection information 
-    zookeeper.properties: information for connecting zk 
-    common.properties: Configuration information about the resource store (if hadoop is set up, please check if the core-site.xml and hdfs-site.xml configuration files exist).
-    dolphinscheduler_env.sh: environment Variables
-    ````
 
+  ```markdown
+  Highlights:
+  datasource.properties: database connection information 
+  zookeeper.properties: information for connecting zk 
+  common.properties: Configuration information about the resource store (if hadoop is set up, please check if the core-site.xml and hdfs-site.xml configuration files exist).
+  dolphinscheduler_env.sh: environment Variables
+  ```
 - Modify the environment variables in `bin/env/dolphinscheduler_env.sh` according to the machine configuration (the following example assumes that all the software used is installed under `/opt/soft`)
 
-    ```shell
-        export HADOOP_HOME=/opt/soft/hadoop
-        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-        # export SPARK_HOME1=/opt/soft/spark1
-        export SPARK_HOME2=/opt/soft/spark2
-        export PYTHON_HOME=/opt/soft/python
-        export JAVA_HOME=/opt/soft/jav
-        export HIVE_HOME=/opt/soft/hive
-        export FLINK_HOME=/opt/soft/flink
-        export DATAX_HOME=/opt/soft/datax/bin/datax.py
-        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-    
-    ```
+  ```shell
+      export HADOOP_HOME=/opt/soft/hadoop
+      export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+      # export SPARK_HOME1=/opt/soft/spark1
+      export SPARK_HOME2=/opt/soft/spark2
+      export PYTHON_HOME=/opt/soft/python
+      export JAVA_HOME=/opt/soft/java
+      export HIVE_HOME=/opt/soft/hive
+      export FLINK_HOME=/opt/soft/flink
+      export DATAX_HOME=/opt/soft/datax/bin/datax.py
+      export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
 
-    `Attention: This step is very important, such as `JAVA_HOME` and `PATH` is necessary to configure if haven not used just ignore or comment out`
+  ```
 
+  Attention: This step is very important. Variables such as `JAVA_HOME` and `PATH` must be configured; entries for software you do not use can simply be ignored or commented out.
 
 - Soft link the `JDK` to `/usr/bin/java` (still using `JAVA_HOME=/opt/soft/java` as an example)
 
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
- - Modify the configuration file `conf/config/install_config.conf` on the **all** nodes, synchronizing the following configuration.
-    
-    * To add a new master node, you need to modify the IPs and masters parameters.
-    * To add a new worker node, modify the IPs and workers parameters.
+  ```shell
+  sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+  ```
+- Modify the configuration file `conf/config/install_config.conf` on **all** nodes, synchronizing the following configuration.
+  * To add a new master node, you need to modify the IPs and masters parameters.
+  * To add a new worker node, modify the IPs and workers parameters.
 
 ```shell
 # which machines to deploy DS services on, separated by commas between multiple physical machines
@@ -120,6 +115,7 @@ masters="existing master01,existing master02,ds1,ds2"
 workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
 
 ```
+
 - If the expansion is for worker nodes, you need to set the worker group; refer to the [Worker grouping](./security.md) section of the security guide.
 
 - On all new nodes, change the directory permissions so that the deployment user has access to the DolphinScheduler directory
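+
+  A sketch of what that might look like, assuming the deployment user is `dolphinscheduler` and the installation directory is `/opt/dolphinscheduler`:
+
+  ```shell
+  # give the deployment user ownership of the installation directory
+  sudo chown -R dolphinscheduler:dolphinscheduler /opt/dolphinscheduler
+  ```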
@@ -154,26 +150,26 @@ bash bin/dolphinscheduler-daemon.sh start alert-server   # start alert  service
 ```
 
 ```
- Attention: When using `stop-all.sh` or `stop-all.sh`, if the physical machine execute the command is not configured to be ssh-free on all machines, it will prompt to enter the password
+Attention: When using `start-all.sh` or `stop-all.sh`, if the machine executing the command is not configured for password-free SSH to all the other machines, you will be prompted for a password
 ```
 
 - After completing the script, use the `jps` command to see if every node service is started (`jps` comes with the `Java JDK`)
 
 ```
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    ApiApplicationServer ----- api    service
-    AlertServer          ----- alert  service
+MasterServer         ----- master service
+WorkerServer         ----- worker service
+ApiApplicationServer ----- api    service
+AlertServer          ----- alert  service
 ```
 
 After successful startup, you can view the logs, which are stored in the `logs` folder.
 
 ```Log Path
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
+logs/
+   ├── dolphinscheduler-alert-server.log
+   ├── dolphinscheduler-master-server.log
+   ├── dolphinscheduler-worker-server.log
+   ├── dolphinscheduler-api-server.log
 ```
 
 If the above services start normally and the scheduling system page is normal, check whether there is an expanded Master or Worker service in the [Monitor] of the web system. If it exists, the expansion is complete.
@@ -187,9 +183,9 @@ There are two steps for shrinking. After performing the following two steps, the
 
 ### Stop the Service on the Scaled-Down Node
 
- * If you are scaling down the master node, identify the physical machine where the master service is located, and stop the master service on the physical machine.
- * If scale down the worker node, determine the physical machine where the worker service scale down and stop the worker services on the physical machine.
- 
+* If you are scaling down the master node, identify the physical machine where the master service is located, and stop the master service on the physical machine.
+* If you are scaling down a worker node, identify the physical machine where the worker service is located, and stop the worker service on that machine.
+
 ```shell
 # stop command:
 bin/stop-all.sh # stop all services
@@ -211,26 +207,25 @@ bash bin/dolphinscheduler-daemon.sh start alert-server  # start alert  service
 ```
 
 ```
- Attention: When using `stop-all.sh` or `stop-all.sh`, if the machine without the command is not configured to be ssh-free for all machines, it will prompt to enter the password
+Attention: When using `start-all.sh` or `stop-all.sh`, if the machine executing the command is not configured for password-free SSH to all the other machines, you will be prompted for a password
 ```
 
 - After the script is completed, use the `jps` command to see if every node service was successfully shut down (`jps` comes with the `Java JDK`)
 
 ```
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    ApiApplicationServer ----- api    service
-    AlertServer          ----- alert  service
+MasterServer         ----- master service
+WorkerServer         ----- worker service
+ApiApplicationServer ----- api    service
+AlertServer          ----- alert  service
 ```
-If the corresponding master service or worker service does not exist, then the master or worker service is successfully shut down.
 
+If the corresponding master service or worker service does not exist, then the master or worker service is successfully shut down.
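
As a quick sanity check on the scaled-down node, the sketch below reports whether any master or worker process is still running; it only relies on `jps`, which ships with the JDK as noted above:

```shell
# An empty match means the master/worker service on this node is really gone
jps | grep -E 'MasterServer|WorkerServer' || echo "no master or worker service running on this node"
```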
 
 ### Modify the Configuration File
 
- - modify the configuration file `conf/config/install_config.conf` on the **all** nodes, synchronizing the following configuration.
-    
-    * to scale down the master node, modify the IPs and masters parameters.
-    * to scale down worker nodes, modify the IPs and workers parameters.
+- Modify the configuration file `conf/config/install_config.conf` on **all** nodes, synchronizing the following configuration:
+  * to scale down the master node, modify the IPs and masters parameters.
+  * to scale down worker nodes, modify the IPs and workers parameters.
 
 ```shell
 # which machines to deploy DS services on, "localhost" for this machine
@@ -246,3 +241,4 @@ masters="existing master01,existing master02,ds1,ds2"
 workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
 
 ```
+
diff --git a/docs/docs/en/guide/healthcheck.md b/docs/docs/en/guide/healthcheck.md
index fdb6efd456..d80c683daf 100644
--- a/docs/docs/en/guide/healthcheck.md
+++ b/docs/docs/en/guide/healthcheck.md
@@ -39,3 +39,4 @@ curl --request GET 'http://localhost:50053/actuator/health'
 ```
 
 > Notice: If you modify the default service port and address, you need to modify the IP+Port to the modified value.
+
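If you want to poll every service in one go, a small loop such as the one below can help; the port list is only an assumption based on common defaults and must be replaced with the actuator ports of your own deployment, as the notice above points out:

```shell
# Hypothetical port list - replace with the actuator ports of your deployment
for port in 5679 1235 12345 50053; do
  printf 'port %s -> ' "${port}"
  curl -s --request GET "http://localhost:${port}/actuator/health" || printf 'unreachable'
  echo
done
```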
diff --git a/docs/docs/en/guide/howto/datasource-setting.md b/docs/docs/en/guide/howto/datasource-setting.md
index 4dddb9a7a2..5fc4e4be09 100644
--- a/docs/docs/en/guide/howto/datasource-setting.md
+++ b/docs/docs/en/guide/howto/datasource-setting.md
@@ -5,7 +5,7 @@
 We here use MySQL as an example to illustrate how to configure an external database:
 
 > NOTE: If you use MySQL, you need to manually download [mysql-connector-java driver][mysql] (8.0.16) and move it to the libs directory of DolphinScheduler
-which is `api-server/libs` and `alert-server/libs` and `master-server/libs` and `worker-server/libs`.
+> which is `api-server/libs` and `alert-server/libs` and `master-server/libs` and `worker-server/libs`.
 
 * First of all, follow the instructions in [datasource-setting](datasource-setting.md) `Pseudo-Cluster/Cluster Initialize the Database` section to create and initialize database
 * Set the following environment variables in your terminal or modify the `bin/env/dolphinscheduler_env.sh` with your database username and password for `{user}` and `{password}`:
@@ -26,7 +26,6 @@ DolphinScheduler stores metadata in `relational database`. Currently, we support
 
 > If you use MySQL, you need to manually download [mysql-connector-java driver][mysql] (8.0.16) and move it to the libs directory of DolphinScheduler which is `api-server/libs` and `alert-server/libs` and `master-server/libs` and `worker-server/libs`.
 
-
 For mysql 5.6 / 5.7
 
 ```shell
@@ -54,9 +53,10 @@ mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%';
 mysql> CREATE USER '{user}'@'localhost' IDENTIFIED BY '{password}';
 mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost';
 mysql> FLUSH PRIVILEGES;
-``` 
+```
+
+For PostgreSQL:
 
-For PostgreSQL: 
 ```shell
 # Use psql-tools to login PostgreSQL
 psql
@@ -75,6 +75,7 @@ pg_ctl reload
 Then, modify `./bin/env/dolphinscheduler_env.sh`, change {user} and {password} to what you set in the previous step.
 
 For MySQL:
+
 ```shell
 # for mysql
 export DATABASE=${DATABASE:-mysql}
@@ -85,6 +86,7 @@ export SPRING_DATASOURCE_PASSWORD={password}
 ```
 
 For PostgreSQL:
+
 ```shell
 # for postgresql
 export DATABASE=${DATABASE:-postgresql}
@@ -125,3 +127,4 @@ like Docker.
 > But if you want to use MySQL as the metadata database of DolphinScheduler, it only supports versions [8.0.16 and above](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
 
 [mysql]: https://downloads.MySQL.com/archives/c-j/
+
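As a rough illustration of placing the driver mentioned in the notes above, the sketch below downloads the 8.0.16 connector and copies it into each service's `libs` directory; it assumes you run it from the DolphinScheduler installation root and that the version matches your setup:

```shell
# Download the MySQL connector (version taken from the note above) ...
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar

# ... and drop it into the libs directory of every service
for service in api-server alert-server master-server worker-server; do
  cp mysql-connector-java-8.0.16.jar "${service}/libs/"
done
```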
diff --git a/docs/docs/en/guide/howto/general-setting.md b/docs/docs/en/guide/howto/general-setting.md
index e8a1a5f011..7a12460681 100644
--- a/docs/docs/en/guide/howto/general-setting.md
+++ b/docs/docs/en/guide/howto/general-setting.md
@@ -14,7 +14,7 @@ of to [language](#language) control button.
 
 ## Time Zone
 
-DolphinScheduler support time zone setting. 
+DolphinScheduler supports time zone setting.
 
 Server Time Zone
 
diff --git a/docs/docs/en/guide/installation/cluster.md b/docs/docs/en/guide/installation/cluster.md
index d7054ac721..14ae58a479 100644
--- a/docs/docs/en/guide/installation/cluster.md
+++ b/docs/docs/en/guide/installation/cluster.md
@@ -36,4 +36,4 @@ Same as [pseudo-cluster](pseudo-cluster.md)
 
 ## Start and Stop Server
 
-Same as [pseudo-cluster](pseudo-cluster.md)
\ No newline at end of file
+Same as [pseudo-cluster](pseudo-cluster.md)
diff --git a/docs/docs/en/guide/installation/pseudo-cluster.md b/docs/docs/en/guide/installation/pseudo-cluster.md
index b01602e1dd..23fbe341d2 100644
--- a/docs/docs/en/guide/installation/pseudo-cluster.md
+++ b/docs/docs/en/guide/installation/pseudo-cluster.md
@@ -154,7 +154,7 @@ bash ./bin/install.sh
 ```
 
 > **_Note:_** During the first deployment, the message `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear up to five times in the terminal,
- this is non-important information that you can ignore.
+> this is unimportant output that you can safely ignore.
 
 ## Login DolphinScheduler
 
@@ -190,11 +190,12 @@ bash ./bin/dolphinscheduler-daemon.sh stop alert-server
 > for micro-services need. It means that you could start all servers by command `<service>/bin/start.sh` with different
 > environment variable from `<service>/conf/dolphinscheduler_env.sh`. But it will use file `bin/env/dolphinscheduler_env.sh` overwrite
 > `<service>/conf/dolphinscheduler_env.sh` if you start server with command `/bin/dolphinscheduler-daemon.sh start <service>`.
-
+>
 > **_Note2:_**: Please refer to the section of "System Architecture Design" for service usage. Python gateway service is
 > started along with the api-server; if you do not want to start the Python gateway service, please disable it by changing
-> the yaml config `python-gateway.enabled : false` in api-server's configuration path `api-server/conf/application.yaml` 
+> the yaml config `python-gateway.enabled : false` in api-server's configuration path `api-server/conf/application.yaml`
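
To make the difference concrete, the sketch below contrasts the two start-up styles described in the note above, taking the master service as an example (paths assume the default layout of the extracted distribution):

```shell
# Style 1: start the service with its own script, which reads
# master-server/conf/dolphinscheduler_env.sh
master-server/bin/start.sh

# Style 2: start it through the daemon wrapper, in which case
# bin/env/dolphinscheduler_env.sh overrides the service-local file
bash ./bin/dolphinscheduler-daemon.sh start master-server
```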
 
 [jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
 [zookeeper]: https://zookeeper.apache.org/releases.html
 [issue]: https://github.com/apache/dolphinscheduler/issues/6597
+
diff --git a/docs/docs/en/guide/installation/standalone.md b/docs/docs/en/guide/installation/standalone.md
index 1c3028f238..98bb1f555a 100644
--- a/docs/docs/en/guide/installation/standalone.md
+++ b/docs/docs/en/guide/installation/standalone.md
@@ -5,7 +5,7 @@ Standalone only for quick experience for DolphinScheduler.
 If you are new to DolphinScheduler and want to experience its functions, we recommend the Standalone deployment. If you want to experience more complete functions and schedule massive tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
 
 > **_Note:_** Standalone only recommends the usage of fewer than 20 workflows, because it uses in-memory H2 Database in default, ZooKeeper Testing Server, too many tasks may cause instability.
-> When Standalone stops or restarts, in-memory H2 database will clear up. To use Standalone with external databases like mysql or postgresql, please see [`Database Configuration`](#database-configuration).    
+> When Standalone stops or restarts, the in-memory H2 database is cleared. To use Standalone with an external database such as MySQL or PostgreSQL, please see [`Database Configuration`](#database-configuration).
 
 ## Preparation
 
diff --git a/docs/docs/en/guide/integration/rainbond.md b/docs/docs/en/guide/integration/rainbond.md
index 32b71f217d..25d45dacbe 100644
--- a/docs/docs/en/guide/integration/rainbond.md
+++ b/docs/docs/en/guide/integration/rainbond.md
@@ -6,7 +6,7 @@ This section describes the one-click deployment of high availability DolphinSche
 
 * An available Rainbond cloud-native application management platform is a prerequisite; please refer to the official `Rainbond` documentation: [Rainbond Quick install](https://www.rainbond.com/docs/quick-start/quick-install)
 
-## DolphinScheduler Cluster One-click Deployment 
+## DolphinScheduler Cluster One-click Deployment
 
 * Logging in and accessing the built-in open source app store, search the keyword `dolphinscheduler` to find the DolphinScheduler App.
 
@@ -14,12 +14,12 @@ This section describes the one-click deployment of high availability DolphinSche
 
 * Click `install` on the right side of DolphinScheduler to go to the installation page. Fill in the corresponding information and click `OK` to start the installation. You will get automatically redirected to the application view.
 
-| Select item  | Description                          |
-| ------------ | ------------------------------------ |
+| Select item  |             Description             |
+|--------------|-------------------------------------|
 | Team name    | user workspace,Isolate by namespace |
-| Cluster name | select kubernetes cluster            |
-| Select app   | select application                   |
-| app version  | select DolphinScheduler version      |
+| Cluster name | select kubernetes cluster           |
+| Select app   | select application                  |
+| app version  | select DolphinScheduler version     |
 
 ![](../../../../img/rainbond/install-dolphinscheduler.png)
 
@@ -42,6 +42,7 @@ Take `worker` as an example: enter the `component -> Telescopic` page, and set t
 To verify `worker` node, enter `DolphinScheduler UI -> Monitoring -> Worker` page to view detailed node information.
 
 ![](../../../../img/rainbond/monitor-dolphinscheduler.png)
+
 ## Configuration file
 
 API and Worker Services share the configuration file `/opt/dolphinscheduler/conf/common.properties`. To modify the configurations, you only need to modify that of the API service.
@@ -60,5 +61,7 @@ Take `DataX` as an example:
    * FILE_PATH:/opt/soft
    * LOCK_PATH:/opt/soft
 3. Update component, the plug-in `Datax` will be downloaded automatically and decompress to `/opt/soft`
-![](../../../../img/rainbond/plugin.png)
+   ![](../../../../img/rainbond/plugin.png)
+
 ---
+
diff --git a/docs/docs/en/guide/metrics/metrics.md b/docs/docs/en/guide/metrics/metrics.md
index 6e2730af67..fa0f07bf3f 100644
--- a/docs/docs/en/guide/metrics/metrics.md
+++ b/docs/docs/en/guide/metrics/metrics.md
@@ -3,13 +3,13 @@
 Apache DolphinScheduler exports metrics for system observability. We use [Micrometer](https://micrometer.io/) as application metrics facade.
 Currently, we only support `Prometheus Exporter` but more are coming soon.
 
-## Quick Start 
+## Quick Start
 
-- We enable Apache DolphinScheduler to export metrics in `standalone` mode to help users get hands dirty easily. 
+- We enable Apache DolphinScheduler to export metrics in `standalone` mode to help users get hands dirty easily.
 - After triggering tasks in `standalone` mode, you could access metrics list by visiting url `http://localhost:12345/dolphinscheduler/actuator/metrics`.
 - After triggering tasks in `standalone` mode, you could access `prometheus-format` metrics by visiting url `http://localhost:12345/dolphinscheduler/actuator/prometheus`.
 - For a better experience with `Prometheus` and `Grafana`, we have prepared the out-of-the-box `Grafana` configurations for you, you could find the `Grafana` dashboards
-at `dolphinscheduler-meter/resources/grafana` and directly import these dashboards to your `Grafana` instance.
+  at `dolphinscheduler-meter/resources/grafana` and directly import these dashboards to your `Grafana` instance.
 - If you want to try with `docker`, you can use the following command to start the out-of-the-box `Prometheus` and `Grafana`:
 
 ```shell
@@ -17,12 +17,12 @@ cd dolphinscheduler-meter/src/main/resources/grafana-demo
 docker compose up
 ```
 
-then access the `Grafana` by the url: `http://localhost/3001` for dashboards.    
+then access `Grafana` via the url `http://localhost:3001` for dashboards.
 
 ![image.png](../../../../img/metrics/metrics-master.png)
 ![image.png](../../../../img/metrics/metrics-worker.png)
 ![image.png](../../../../img/metrics/metrics-datasource.png)
-      
+
 - If you prefer to have some experiments in `cluster` mode, please refer to the [Configuration](#configuration) section below:
 
 ## Configuration
@@ -48,7 +48,7 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
 ### Prometheus
 
 - all dots mapped to underscores
-- metric name starting with number added with prefix `m_` 
+- metric name starting with number added with prefix `m_`
 - COUNTER: add `_total` suffix if not ending with it
 - LONG_TASK_TIMER: `_timer_seconds` suffix added if not ending with them
 - GAUGE: `_baseUnit` suffix added if not ending with it
@@ -56,7 +56,7 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
 ## Dolphin Scheduler Metrics Cheatsheet
 
 - We categorize metrics by dolphin scheduler components such as `master server`, `worker server`, `api server` and `alert server`.
-- Although task / workflow related metrics exported by `master server` and `worker server`, we categorize them separately for users to query them more conveniently.  
+- Although task / workflow related metrics are exported by `master server` and `worker server`, we categorize them separately for users to query them more conveniently.
 
 ### Task Related Metrics
 
@@ -66,19 +66,18 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
   - success: the number of successful tasks
   - fail: the number of failed tasks
   - stop: the number of stopped tasks
-  - retry: the number of retried tasks 
+  - retry: the number of retried tasks
   - submit: the number of submitted tasks
   - failover: the number of task fail-overs
 - ds.task.dispatch.count: (counter) the number of tasks dispatched to worker
 - ds.task.dispatch.failure.count: (counter) the number of tasks failed to dispatch, retry failure included
 - ds.task.dispatch.error.count: (counter) the number of task dispatch errors
 - ds.task.execution.count.by.type: (counter) the number of task executions grouped by tag `task_type`
-- ds.task.running: (gauge) the number of running tasks 
-- ds.task.prepared: (gauge) the number of tasks prepared for task queue 
-- ds.task.execution.count: (counter) the number of executed tasks  
+- ds.task.running: (gauge) the number of running tasks
+- ds.task.prepared: (gauge) the number of tasks prepared for task queue
+- ds.task.execution.count: (counter) the number of executed tasks
 - ds.task.execution.duration: (histogram) duration of task executions
 
-
 ### Workflow Related Metrics
 
 - ds.workflow.create.command.count: (counter) the number of commands created and inserted by workflows
@@ -88,14 +87,14 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
   - timeout: the number of timeout workflow instances
   - finish: the number of finished workflow instances, both successes and failures included
   - success: the number of successful workflow instances
-  - fail: the number of failed workflow instances 
-  - stop: the number of stopped workflow instances 
+  - fail: the number of failed workflow instances
+  - stop: the number of stopped workflow instances
   - failover: the number of workflow instance fail-overs
 
 ### Master Server Metrics
 
 - ds.master.overload.count: (counter) the number of times the master overloaded
-- ds.master.consume.command.count: (counter) the number of commands consumed by master 
+- ds.master.consume.command.count: (counter) the number of commands consumed by master
 - ds.master.scheduler.failover.check.count: (counter) the number of scheduler (master) fail-over checks
 - ds.master.scheduler.failover.check.time: (histogram) the total time cost of scheduler (master) fail-over checks
 - ds.master.quartz.job.executed: the total number of quartz jobs executed
@@ -111,7 +110,7 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
 
 ### Api Server Metrics
 
-- Currently, we have not embedded any metrics in Api Server. 
+- Currently, we have not embedded any metrics in Api Server.
 
 ### Alert Server Related
 
@@ -124,7 +123,7 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
 
 - hikaricp.connections: the total number of connections
 - hikaricp.connections.creation: connection creation time (max, count, sum included)
-- hikaricp.connections.acquire: connection acquirement time (max, count, sum included) 
+- hikaricp.connections.acquire: connection acquirement time (max, count, sum included)
 - hikaricp.connections.usage: connection usage time (max, count, sum included)
 - hikaricp.connections.max: the max number of connections
 - hikaricp.connections.min: the min number of connections
@@ -175,3 +174,4 @@ For example, you can get the master metrics by `curl http://localhost:5679/actua
 - system.load.average.1m: the total number of runnable entities queued to available processors and runnable entities running on the available processors averaged over a period
 - logback.events: the number of events that made it to the logs grouped by the tag `level`
 - http.server.requests: total number of http requests
+
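As a quick way to see a few of the metrics listed above in practice, you can scrape the `prometheus-format` endpoint and filter by prefix; the example below assumes `standalone` mode with the default port mentioned in the Quick Start section, and the prefixes simply follow the naming rules described earlier:

```shell
# Dump task-related and connection-pool metrics from the Prometheus endpoint
curl -s http://localhost:12345/dolphinscheduler/actuator/prometheus \
  | grep -E '^(ds_task|hikaricp_connections)'
```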
diff --git a/docs/docs/en/guide/monitor.md b/docs/docs/en/guide/monitor.md
index 327945310d..eb8600d8b7 100644
--- a/docs/docs/en/guide/monitor.md
+++ b/docs/docs/en/guide/monitor.md
@@ -28,16 +28,16 @@
 
 ![statistics](../../../img/new_ui/dev/monitor/statistics.png)
 
-| **Parameter** | **Description** |
-| ----- | ----- |
-| Number of commands wait to be executed | Statistics of the `t_ds_command` table data. |
-| The number of failed commands | Statistics of the `t_ds_error_command` table data. |
-| Number of tasks wait to run | Count the data of `task_queue` in the ZooKeeper. |
-| Number of tasks wait to be killed | Count the data of `task_kill` in the ZooKeeper. |
+|             **Parameter**              |                  **Description**                   |
+|----------------------------------------|----------------------------------------------------|
+| Number of commands wait to be executed | Statistics of the `t_ds_command` table data.       |
+| The number of failed commands          | Statistics of the `t_ds_error_command` table data. |
+| Number of tasks wait to run            | Count the data of `task_queue` in the ZooKeeper.   |
+| Number of tasks wait to be killed      | Count the data of `task_kill` in the ZooKeeper.    |
 
 ### Audit Log
 
 The audit log records who accessed the system, which operations were performed and when,
 which strengthens the security and maintainability of the system.
 
-![audit-log](../../../img/new_ui/dev/monitor/audit-log.jpg)
\ No newline at end of file
+![audit-log](../../../img/new_ui/dev/monitor/audit-log.jpg)
diff --git a/docs/docs/en/guide/parameter/built-in.md b/docs/docs/en/guide/parameter/built-in.md
index bdc7ea8f8d..a19e0903bf 100644
--- a/docs/docs/en/guide/parameter/built-in.md
+++ b/docs/docs/en/guide/parameter/built-in.md
@@ -2,11 +2,11 @@
 
 ## Basic Built-in Parameter
 
-| Variable | Declaration Method | Meaning |
-| ---- | ---- | -----------------------------| 
-| system.biz.date | `${system.biz.date}` | The day before the schedule time of the daily scheduling instance, the format is `yyyyMMdd` |
-| system.biz.curdate | `${system.biz.curdate}` | The schedule time of the daily scheduling instance, the format is `yyyyMMdd` |
-| system.datetime | `${system.datetime}` | The schedule time of the daily scheduling instance, the format is `yyyyMMddHHmmss` |
+|      Variable      |   Declaration Method    |                                           Meaning                                           |
+|--------------------|-------------------------|---------------------------------------------------------------------------------------------|
+| system.biz.date    | `${system.biz.date}`    | The day before the schedule time of the daily scheduling instance, the format is `yyyyMMdd` |
+| system.biz.curdate | `${system.biz.curdate}` | The schedule time of the daily scheduling instance, the format is `yyyyMMdd`                |
+| system.datetime    | `${system.datetime}`    | The schedule time of the daily scheduling instance, the format is `yyyyMMddHHmmss`          |
 
 ## Extended Built-in Parameter
 
@@ -16,19 +16,19 @@
 
 - Or define by the following two ways:
 
-    1. Use add_month(yyyyMMdd, offset) function to add or minus number of months.
-      The first parameter of this function is [yyyyMMdd], represents the time format and the second parameter is offset, represents the number of months the user wants to add or minus.
-        - Next N years:`$[add_months(yyyyMMdd,12*N)]`
-        - N years before:`$[add_months(yyyyMMdd,-12*N)]`
-        - Next N months:`$[add_months(yyyyMMdd,N)]`
-        - N months before:`$[add_months(yyyyMMdd,-N)]`
-      
-    2. Add or minus numbers directly after the time format.
-       - Next N weeks:`$[yyyyMMdd+7*N]`
-       - First N weeks:`$[yyyyMMdd-7*N]`
-       - Next N days:`$[yyyyMMdd+N]`
-       - N days before:`$[yyyyMMdd-N]`
-       - Next N hours:`$[HHmmss+N/24]`
-       - First N hours:`$[HHmmss-N/24]`
-       - Next N minutes:`$[HHmmss+N/24/60]`
-       - First N minutes:`$[HHmmss-N/24/60]`
\ No newline at end of file
+  1. Use the add_months(yyyyMMdd, offset) function to add or subtract a number of months.
+     The first parameter of this function is [yyyyMMdd], which represents the time format, and the second parameter is offset, which represents the number of months to add or subtract.
+     - Next N years:`$[add_months(yyyyMMdd,12*N)]`
+     - N years before:`$[add_months(yyyyMMdd,-12*N)]`
+     - Next N months:`$[add_months(yyyyMMdd,N)]`
+     - N months before:`$[add_months(yyyyMMdd,-N)]`
+  2. Add or minus numbers directly after the time format.
+     - Next N weeks:`$[yyyyMMdd+7*N]`
+     - First N weeks:`$[yyyyMMdd-7*N]`
+     - Next N days:`$[yyyyMMdd+N]`
+     - N days before:`$[yyyyMMdd-N]`
+     - Next N hours:`$[HHmmss+N/24]`
+     - First N hours:`$[HHmmss-N/24]`
+     - Next N minutes:`$[HHmmss+N/24/60]`
+     - First N minutes:`$[HHmmss-N/24/60]`
+
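For a worked example, the shell task body below shows how such expressions might be used; the base date of 2022-09-15 is purely hypothetical, and the `$[...]` placeholders are substituted before the script runs:

```shell
# Assuming the base date resolves to 2022-09-15:
echo "one week later : $[yyyyMMdd+7]"               # would print 20220922
echo "one month back : $[add_months(yyyyMMdd,-1)]"  # would print 20220815
```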
diff --git a/docs/docs/en/guide/parameter/context.md b/docs/docs/en/guide/parameter/context.md
index 482a1cd8df..2869ae3347 100644
--- a/docs/docs/en/guide/parameter/context.md
+++ b/docs/docs/en/guide/parameter/context.md
@@ -49,7 +49,7 @@ When the SHELL task is completed, we can use the output passed upstream as the q
 
 > Note: If the result of the SQL node has only one row, one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can choose structure except `LIST`. The parameter assigns the value according to the same column name in the SQL query result.
 >
->If the result of the SQL node has multiple rows, one or more fields, the name of the `prop` needs to be the same as the field name. Choose the data type structure as `LIST`, and the SQL query result will be converted to `LIST<VARCHAR>`, and forward to convert to JSON as the parameter value.
+> If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose the data type structure as `LIST`; the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
 
 #### Save the workflow and set the global parameters
 
diff --git a/docs/docs/en/guide/parameter/local.md b/docs/docs/en/guide/parameter/local.md
index 29a377e8e5..2dcae8d433 100644
--- a/docs/docs/en/guide/parameter/local.md
+++ b/docs/docs/en/guide/parameter/local.md
@@ -61,7 +61,7 @@ You could get this value in downstream task using syntax `echo '${set_val_param}
 
 If you want to export parameters based on bash variables instead of constant values, and then use them in downstream tasks,
 you can use `setValue` in your task, which is more flexible; for example, you can derive the variable from an existing local file or an HTTP resource.
-You can use syntax like 
+You can use syntax like
 
 ```shell
 lines_num=$(wget https://raw.githubusercontent.com/apache/dolphinscheduler/dev/README.md -q -O - | wc -l | xargs)
diff --git a/docs/docs/en/guide/project/project-list.md b/docs/docs/en/guide/project/project-list.md
index 96c046981a..fb7df9053b 100644
--- a/docs/docs/en/guide/project/project-list.md
+++ b/docs/docs/en/guide/project/project-list.md
@@ -1,15 +1,15 @@
-# Project 
+# Project
 
 This page describes details regarding Project screen in Apache DolphinScheduler. Here, you will see all the functions which can be handled in this screen. The following table explains commonly used terms in Apache DolphinScheduler:
 
-| Glossary | description                                                                                                                                                                                                                                                                                                               |
-| ------ |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| DAG | Tasks in a workflow are assembled in form of Directed Acyclic Graph (DAG). A topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes.                                                                                                                                |
-| Workflow Definition | Visualization formed by dragging task nodes and establishing task node associations (DAG).                                                                                                                                                                                                                                | 
-| Workflow Instance | Instantiation of the workflow definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a workflow instance is generated.                                                                                                                                        |
-| Workflow Relation | Shows dynamic status of all the workflows in a project.                                                                                                                                                                                                                                                                   |
-| Task | Task is a discrete action in a Workflow. Apache DolphinScheduler supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT ( depends), and plans to support dynamic plug-in expansion, (SUB_PROCESS). It is also a separate process definition that can be started and executed separately. |
-| Task Instance | Instantiation of the task node in the process definition, which identifies the specific task execution status.                                                                                                                                                                                                            |
+|      Glossary       |                                                                                                                                                        description                                                                                                                                                        |
+|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| DAG                 | Tasks in a workflow are assembled in form of Directed Acyclic Graph (DAG). A topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes.                                                                                                                                |
+| Workflow Definition | Visualization formed by dragging task nodes and establishing task node associations (DAG).                                                                                                                                                                                                                                |
+| Workflow Instance   | Instantiation of the workflow definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a workflow instance is generated.                                                                                                                                        |
+| Workflow Relation   | Shows dynamic status of all the workflows in a project.                                                                                                                                                                                                                                                                   |
+| Task                | Task is a discrete action in a Workflow. Apache DolphinScheduler supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT ( depends), and plans to support dynamic plug-in expansion, (SUB_PROCESS). It is also a separate process definition that can be started and executed separately. |
+| Task Instance       | Instantiation of the task node in the process definition, which identifies the specific task execution status.                                                                                                                                                                                                            |
 
 ## Project List
 
diff --git a/docs/docs/en/guide/project/task-definition.md b/docs/docs/en/guide/project/task-definition.md
index a08d3a3da8..a48cc1a848 100644
--- a/docs/docs/en/guide/project/task-definition.md
+++ b/docs/docs/en/guide/project/task-definition.md
@@ -1,6 +1,7 @@
 # Task Definition
 
 ## Batch Task Definition
+
 Task definition allows to modify or operate tasks at the task level rather than modifying them in the workflow definition.
 We already have workflow level task editor in [workflow definition](workflow-definition.md) which you can click the specific
 workflow and then edit its task definition. It is depressing when you want to edit the task definition but do not remember
@@ -14,10 +15,11 @@ name but forget which workflow it belongs to. It is also supported query by the
 `Workflow Name`
 
 ## Stream Task Definition
+
 Stream task definitions are created in the workflow definition, and can be modified and executed.
 
 ![task-definition](../../../../img/new_ui/dev/project/stream-task-definition.png)
 
 Click the execute button, check the execution parameters and click Confirm to submit the stream task.
 
-![task-definition](../../../../img/new_ui/dev/project/stream-task-execute.png)
\ No newline at end of file
+![task-definition](../../../../img/new_ui/dev/project/stream-task-execute.png)
diff --git a/docs/docs/en/guide/project/task-instance.md b/docs/docs/en/guide/project/task-instance.md
index 0371ed9d79..e5532bba1a 100644
--- a/docs/docs/en/guide/project/task-instance.md
+++ b/docs/docs/en/guide/project/task-instance.md
@@ -1,6 +1,7 @@
 # Task Instance
 
 ## Batch Task Instance
+
 ### Create Task Instance
 
 Click `Project Management -> Workflow -> Task Instance` to enter the task instance page, as shown in the figure below, click the name of the workflow instance to jump to the DAG diagram of the workflow instance to view the task status.
@@ -21,3 +22,4 @@ Click the `View Log` button in the operation column to view the log of the task
 
 - SavePoint: Click the `SavePoint` button in the operation column to do stream task savepoint.
 - Stop: Click the `Stop` button in the operation column to stop the stream task.
+
diff --git a/docs/docs/en/guide/project/workflow-definition.md b/docs/docs/en/guide/project/workflow-definition.md
index f81669cbd4..a19dc8d756 100644
--- a/docs/docs/en/guide/project/workflow-definition.md
+++ b/docs/docs/en/guide/project/workflow-definition.md
@@ -29,15 +29,16 @@ Drag from the toolbar <img src="../../../../img/tasks/icons/shell.png" width="15
 7. Click the `Confirm Add` button to save the task settings.
 
 ### Set dependencies between tasks
- 
+
 Click the plus sign on the right of the task node to connect tasks. As shown in the figure below, task Node_B and task Node_C execute in parallel: when task Node_A finishes execution, tasks Node_B and Node_C will execute simultaneously.
 
 ![workflow-dependent](../../../../img/new_ui/dev/project/workflow-dependent.png)
 
 ### Dependencies with stream task
+
 If the DAG contains stream tasks, the relationship between stream tasks is displayed as a dotted line, and the execution of stream tasks will be skipped when the workflow instance is executed.
 
-  ![workflow-dependent](../../../../img/new_ui/dev/project/workflow-definition-with-stream-task.png)
+![workflow-dependent](../../../../img/new_ui/dev/project/workflow-definition-with-stream-task.png)
 
 **Delete dependencies:** Using your mouse to select the connection line, and click the "Delete" icon in the upper right corner <img src= "../../../../img/delete.png" width="35"/>, delete dependencies between tasks.
 
@@ -57,15 +58,15 @@ Click `Project Management -> Workflow -> Workflow Definition` to enter the workf
 
 Workflow running parameter description:
 
-* **Failure strategy**: When a task node fails to execute, other parallel task nodes need to execute the strategy. "Continue" means: After a task fails, other task nodes execute normally; "End" means: Terminate all tasks being executed, and terminate the entire process. 
-* **Notification strategy**: When the process ends, send process execution information notification emails according to the process status, including no status, success, failure, success or failure. 
-* **Process priority**: the priority of process operation, divided into five levels: the highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), the lowest (LOWEST). When the number of master threads is insufficient, processes with higher levels will be executed first in the execution queue, and processes with the same priority will be executed in the order of first-in, first-out. 
-* **Worker grouping**: This process can only be executed in the specified worker machine group. The default is Default, which can be executed on any worker. 
-* **Notification Group**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, process information or emails will be sent to all members in the notification group. 
-* **Recipient**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, process information or alarm email will be sent to the recipient list. 
-* **Cc**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, the process information or alarm email will be copied to the Cc list. 
-* **Startup parameters**: Set or override the value of global parameters when starting a new process instance. 
-* **Complement**: There are 2 modes of serial complement and parallel complement. Serial complement: within the specified time range, perform complements in sequence from the start date to the end date, and generate N process instances in turn; parallel complement: within the specified time range, perform multiple complements at the same time, and generate N process instances at the same time . 
+* **Failure strategy**: When a task node fails to execute, other parallel task nodes need to execute the strategy. "Continue" means: After a task fails, other task nodes execute normally; "End" means: Terminate all tasks being executed, and terminate the entire process.
+* **Notification strategy**: When the process ends, send process execution information notification emails according to the process status, including no status, success, failure, success or failure.
+* **Process priority**: the priority of process operation, divided into five levels: the highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), the lowest (LOWEST). When the number of master threads is insufficient, processes with higher levels will be executed first in the execution queue, and processes with the same priority will be executed in the order of first-in, first-out.
+* **Worker grouping**: This process can only be executed in the specified worker machine group. The default is Default, which can be executed on any worker.
+* **Notification Group**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, process information or emails will be sent to all members in the notification group.
+* **Recipient**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, process information or alarm email will be sent to the recipient list.
+* **Cc**: Select Notification Policy||Timeout Alarm||When fault tolerance occurs, the process information or alarm email will be copied to the Cc list.
+* **Startup parameters**: Set or override the value of global parameters when starting a new process instance.
+* **Complement**: There are 2 modes of serial complement and parallel complement. Serial complement: within the specified time range, perform complements in sequence from the start date to the end date, and generate N process instances in turn; parallel complement: within the specified time range, perform multiple complements at the same time, and generate N process instances at the same time.
   * **Complement**: Execute the workflow definition of the specified date, you can select the time range of the supplement (currently only supports the supplement for consecutive days), for example, the data from May 1st to May 10th needs to be supplemented, as shown in the following figure:
 
 The following are the operation functions of the workflow definition list:
@@ -91,58 +92,58 @@ The following are the operation functions of the workflow definition list:
 - Click the `Run` button to pop up the startup parameter setting window, as shown in the figure below, set the startup parameters, click the `Run` button in the pop-up box, the workflow starts running, and the workflow instance page generates a workflow instance.
 
 ![workflow-run](../../../../img/new_ui/dev/project/workflow-run.png)
- 
-  Description of workflow operating parameters: 
-       
-  * Failure strategy: When a task node fails to execute, other parallel task nodes need to execute this strategy. "Continue" means: after a certain task fails, other task nodes execute normally; "End" means: terminate all tasks execution, and terminate the entire process.
-  * Notification strategy: When the process is over, send the process execution result notification email according to the process status, options including no send, send if sucess, send of failure, send whatever result.
-  * Process priority: The priority of process operation, divide into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, high priority processes will execute first in the execution queue, and processes with the same priority will execute in the order of first in, first out.
-  * Worker group: The process can only be executed in the specified worker machine group. The default is `Default`, which can execute on any worker.
-  * Notification group: select notification strategy||timeout alarm||when fault tolerance occurs, process result information or email will send to all members in the notification group.
-  * Recipient: select notification policy||timeout alarm||when fault tolerance occurs, process result information or alarm email will be sent to the recipient list.
-  * Cc: select notification policy||timeout alarm||when fault tolerance occurs, the process result information or warning email will be copied to the CC list.
-  * Startup parameter: Set or overwrite global parameter values when starting a new process instance.
-  * Complement: refers to running the workflow definition within the specified date range and generating the corresponding workflow instance according to the complement policy. The complement policy includes two modes: **serial complement** and **parallel complement**. The date can be selected on the page or entered manually.
-  
-    * Serial complement: within the specified time range, complement is executed from the start date to the end date, and multiple process instances are generated in turn; Click Run workflow and select the serial complement mode: for example, from July 9 to July 10, execute in sequence, and generate two process instances in sequence on the process instance page.
-        
-        ![workflow-serial](../../../../img/new_ui/dev/project/workflow-serial.png)
-    
-    * Parallel Replenishment: within the specified time range, replenishment is performed simultaneously for multiple days, and multiple process instances are generated at the same time. Enter date manually: manually enter a date in the comma separated date format of 'yyyy MM DD hh:mm:ss'.Click Run workflow and select the parallel complement mode: for example, execute the workflow definition from July 9 to July 10 at the same time, and generate two process instances on the process instan [...]
-       
-        ![workflow-parallel](../../../../img/new_ui/dev/project/workflow-parallel.png)
-    
-    * Concurrency: refers to the maximum number of instances executed in parallel in the parallel complement mode.For example, if tasks from July 6 to July 10 are executed at the same time, and the concurrency is 2, then the process instance is:
-       
-        ![workflow-concurrency-from](../../../../img/new_ui/dev/project/workflow-concurrency-from.png)
-        
-        ![workflow-concurrency](../../../../img/new_ui/dev/project/workflow-concurrency.png)
-    
-    * Dependency mode: whether to trigger the replenishment of workflow instances that downstream dependent nodes depend on the current workflow (the timing status of workflow instances that require the current replenishment is online, which will only trigger the replenishment of downstream directly dependent on the current workflow).
-       
-        ![workflow-dependency](../../../../img/new_ui/dev/project/workflow-dependency.png)
-    
-    * Date selection:
-  
-         1. Select the date through the page:
-        
-         ![workflow-pageSelection](../../../../img/new_ui/dev/project/workflow-pageSelection.png)
-         
-         2. Manual input:
-         
-         ![workflow-input](../../../../img/new_ui/dev/project/workflow-input.png)
-    
-    * Relationship between complement and timing configuration:
-  
-         1. Unconfigured timing: When there is no timing configuration, the daily replenishment will be performed by default according to the selected time range. For example, the workflow scheduling date is July 7 to July 10. If timing is not configured, the process instance is:
-        
-         ![workflow-unconfiguredTimingResult](../../../../img/new_ui/dev/project/workflow-unconfiguredTimingResult.png)
-         
-         2. Configured timing: If there is a timing configuration, it will be supplemented according to the selected time range in combination with the timing configuration. For example, the workflow scheduling date is July 7 to July 10, and the timing is configured (running every 5 a.m.). The process example is:
-         
-         ![workflow-configuredTiming](../../../../img/new_ui/dev/project/workflow-configuredTiming.png)
-         
-         ![workflow-configuredTimingResult](../../../../img/new_ui/dev/project/workflow-configuredTimingResult.png)
+
+Description of workflow operating parameters:
+
+* Failure strategy: When a task node fails to execute, other parallel task nodes need to execute this strategy. "Continue" means: after a certain task fails, other task nodes execute normally; "End" means: terminate all tasks execution, and terminate the entire process.
+* Notification strategy: When the process is over, send the process execution result notification email according to the process status; the options are: do not send, send on success, send on failure, and send regardless of the result.
+* Process priority: The priority of process operation, divide into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, high priority processes will execute first in the execution queue, and processes with the same priority will execute in the order of first in, first out.
+* Worker group: The process can only be executed in the specified worker machine group. The default is `Default`, which can execute on any worker.
+* Notification group: select notification strategy||timeout alarm||when fault tolerance occurs, process result information or email will send to all members in the notification group.
+* Recipient: select notification policy||timeout alarm||when fault tolerance occurs, process result information or alarm email will be sent to the recipient list.
+* Cc: select notification policy||timeout alarm||when fault tolerance occurs, the process result information or warning email will be copied to the CC list.
+* Startup parameter: Set or overwrite global parameter values when starting a new process instance.
+* Complement: refers to running the workflow definition within the specified date range and generating the corresponding workflow instance according to the complement policy. The complement policy includes two modes: **serial complement** and **parallel complement**. The date can be selected on the page or entered manually.
+  * Serial complement: within the specified time range, complement is executed from the start date to the end date, and multiple process instances are generated in turn; Click Run workflow and select the serial complement mode: for example, from July 9 to July 10, execute in sequence, and generate two process instances in sequence on the process instance page.
+
+    ![workflow-serial](../../../../img/new_ui/dev/project/workflow-serial.png)
+
+  * Parallel Replenishment: within the specified time range, replenishment is performed simultaneously for multiple days, and multiple process instances are generated at the same time. Enter date manually: manually enter a date in the comma separated date format of 'yyyy MM DD hh:mm:ss'.Click Run workflow and select the parallel complement mode: for example, execute the workflow definition from July 9 to July 10 at the same time, and generate two process instances on the process instance [...]
+
+    ![workflow-parallel](../../../../img/new_ui/dev/project/workflow-parallel.png)
+
+  * Concurrency: refers to the maximum number of instances executed in parallel in the parallel complement mode. For example, if tasks from July 6 to July 10 are executed at the same time, and the concurrency is 2, then the process instance is:
+
+    ![workflow-concurrency-from](../../../../img/new_ui/dev/project/workflow-concurrency-from.png)
+
+    ![workflow-concurrency](../../../../img/new_ui/dev/project/workflow-concurrency.png)
+
+  * Dependency mode: whether to also trigger complement runs for downstream workflows that depend on the current workflow (only downstream workflows whose schedules are online and that directly depend on the current workflow will be triggered).
+
+    ![workflow-dependency](../../../../img/new_ui/dev/project/workflow-dependency.png)
+
+  * Date selection:
+
+    1. Select the date through the page:
+
+    ![workflow-pageSelection](../../../../img/new_ui/dev/project/workflow-pageSelection.png)
+
+    2. Manual input:
+
+    ![workflow-input](../../../../img/new_ui/dev/project/workflow-input.png)
+
+  * Relationship between complement and timing configuration:
+
+    1. Unconfigured timing: When there is no timing configuration, the daily replenishment will be performed by default according to the selected time range. For example, the workflow scheduling date is July 7 to July 10. If timing is not configured, the process instance is:
+
+    ![workflow-unconfiguredTimingResult](../../../../img/new_ui/dev/project/workflow-unconfiguredTimingResult.png)
+
+    2. Configured timing: If there is a timing configuration, the complement is performed according to the selected time range in combination with the timing configuration. For example, the workflow scheduling date is July 7 to July 10, and the timing is configured (running at 5 a.m. every day). The process example is:
+
+    ![workflow-configuredTiming](../../../../img/new_ui/dev/project/workflow-configuredTiming.png)
+
+    ![workflow-configuredTimingResult](../../../../img/new_ui/dev/project/workflow-configuredTimingResult.png)
+
 ## Run the task alone
 
 - Right-click the task and click the `Start` button (only online tasks can be clicked to run).
@@ -160,12 +161,15 @@ The following are the operation functions of the workflow definition list:
   ![workflow-time01](../../../../img/new_ui/dev/project/workflow-time01.png)
 
 - Select a start and end time. Within the start and end time range, the workflow is run regularly; outside the start and end time range, no timed workflow instance will be generated.
+
 - Add a timing that executes once every 5 minutes, as shown in the following figure:
 
   ![workflow-time02](../../../../img/new_ui/dev/project/workflow-time02.png)
 
 - Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as workflow running parameters.
+
 - Click the "Create" button to create the timing. Now the timing status is "**Offline**" and the timing needs to be **Online** to take effect.
+
 - Schedule online: Click the `Timing Management` button <img src="../../../../img/timeManagement.png" width="35"/>, enter the timing management page, click the `online` button, the timing status will change to `online`, as shown in the figure below, and the workflow will then run on schedule.
 
   ![workflow-time03](../../../../img/new_ui/dev/project/workflow-time03.png)
diff --git a/docs/docs/en/guide/project/workflow-instance.md b/docs/docs/en/guide/project/workflow-instance.md
index d9bffa239b..6ef391ee67 100644
--- a/docs/docs/en/guide/project/workflow-instance.md
+++ b/docs/docs/en/guide/project/workflow-instance.md
@@ -30,7 +30,7 @@ Double-click the task node, click `View History` to jump to the task instance pa
 
 ## View Running Parameters
 
-Click `Project Management -> Workflow -> Workflow Instance` to enter the workflow instance page, click the workflow name to enter the workflow DAG page; 
+Click `Project Management -> Workflow -> Workflow Instance` to enter the workflow instance page, click the workflow name to enter the workflow DAG page;
 
 Click the icon in the upper left corner <img src="../../../../img/run_params_button.png" width="35"/> to view the startup parameters of the workflow instance; click the icon <img src="../../../../img/global_param.png" width="35"/> to view the global parameters and local parameters of the workflow instance, as shown in the following figure:
 
@@ -43,15 +43,23 @@ Click `Project Management -> Workflow -> Workflow Instance`, enter the workflow
 ![workflow-instance](../../../../img/new_ui/dev/project/workflow-instance.png)
 
 - **Edit:** Only processes with success/failed/stop status can be edited. Click the "Edit" button or the workflow instance name to enter the DAG edit page. After the edit, click the "Save" button to confirm, as shown in the figure below. In the pop-up box, check "Whether to update the workflow definition", after saving, the information modified by the instance will be updated to the workflow definition; if not checked, the workflow definition would not be updated.
+
      <p align="center">
        <img src="../../../../img/editDag-en.png" width="80%" />
      </p>
+
 - **Rerun:** Re-execute the terminated process
+
 - **Recovery Failed:** For failed processes, you can perform failure recovery operations, starting from the failed node
+
 - **Stop:** **Stop** the running process, the background code will first `kill` the worker process, and then execute `kill -9` operation
+
 - **Pause:** **Pause** the running process, the system status will change to **waiting for execution**, it will wait for the task to finish, and pause the next sequence task.
+
 - **Resume pause:** Resume the paused process, start running directly from the **paused node**
+
 - **Delete:** Delete the workflow instance and the task instance under the workflow instance
+
 - **Gantt Chart:** The vertical axis of the Gantt chart is the topological sorting of task instances of the workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:
 
 ![instance-gantt](../../../../img/new_ui/dev/project/instance-gantt.png)
diff --git a/docs/docs/en/guide/project/workflow-relation.md b/docs/docs/en/guide/project/workflow-relation.md
index e386af3801..e5ba10720b 100644
--- a/docs/docs/en/guide/project/workflow-relation.md
+++ b/docs/docs/en/guide/project/workflow-relation.md
@@ -1,3 +1,3 @@
 Workflow Relation screen shows all the existing workflows in a project and their status.
 
-![](../../../../img/new_ui/dev/project/work-relation.png)
\ No newline at end of file
+![](../../../../img/new_ui/dev/project/work-relation.png)
diff --git a/docs/docs/en/guide/resource/file-manage.md b/docs/docs/en/guide/resource/file-manage.md
index 53a737166d..992fbb2ed4 100644
--- a/docs/docs/en/guide/resource/file-manage.md
+++ b/docs/docs/en/guide/resource/file-manage.md
@@ -6,11 +6,11 @@ When the third-party jar needs to be used in the scheduling process or the user
 
 > **_Note:_**
 >
-> * When you manage files as `admin`, remember to set up `tenant` for `admin` first. 
+> * When you manage files as `admin`, remember to set up `tenant` for `admin` first.
 
 ## Basic Operations
 
-### Create File 
+### Create File
 
 The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties.
 
@@ -65,6 +65,7 @@ In the workflow definition module of project Manage, create a new workflow using
 
 - Script: 'sh hello.sh'
 - Resource: Select 'hello.sh'
+
 > Notice: When using a resource file in the script, the file name needs to be the same as the full path of the selected resource:
 > For example: if the resource path is `/resource/hello.sh`, you need to use the full path of `/resource/hello.sh` to use it in the script.
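
For instance, if `hello.sh` was uploaded under the `/resource/` folder of the resource centre (an assumed location taken from the example above), the task script should reference it by that full path:

```shell
# Reference the uploaded resource by its full path in the resource centre
sh /resource/hello.sh
```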
 
diff --git a/docs/docs/en/guide/resource/intro.md b/docs/docs/en/guide/resource/intro.md
index f4d70c2a3b..d067da6735 100644
--- a/docs/docs/en/guide/resource/intro.md
+++ b/docs/docs/en/guide/resource/intro.md
@@ -2,4 +2,4 @@
 
 The Resource Center is typically used for uploading files, UDF functions, and task group management. For a stand-alone
 environment, you can select the local file directory as the upload folder (**this operation does not require Hadoop or HDFS deployment**).
-Of course, you can also choose to upload to Hadoop or MinIO cluster. In this case, you need to have Hadoop (2.6+) or MinIOn and other related environments.
\ No newline at end of file
+Of course, you can also choose to upload to a Hadoop or MinIO cluster. In this case, you need to have Hadoop (2.6+) or MinIO and other related environments available.
diff --git a/docs/docs/en/guide/resource/task-group.md b/docs/docs/en/guide/resource/task-group.md
index b8f62f0757..87e04b4cb0 100644
--- a/docs/docs/en/guide/resource/task-group.md
+++ b/docs/docs/en/guide/resource/task-group.md
@@ -1,16 +1,16 @@
 # Task Group Settings
 
-The task group is mainly used to control the concurrency of task instances, and is designed to control the pressure of other resources (it can also control the pressure of the Hadoop cluster, the cluster will have queue control it). When creating a new task definition, you can configure the corresponding task group and configure the priority of the task running in the task group. 
+The task group is mainly used to control the concurrency of task instances and to limit the pressure on other resources (it can also limit the pressure on a Hadoop cluster, although the cluster has its own queue control). When creating a new task definition, you can configure the corresponding task group and the priority of the task running in the task group.
 
-## Task Group Configuration 
+## Task Group Configuration
 
-### Create Task Group 
+### Create Task Group
 
 ![create-taskGroup](../../../../img/new_ui/dev/resource/create-taskGroup.png)
 
-The user clicks `Resources -> Task Group Management -> Task Group option -> Create Task Group` 
+The user clicks `Resources -> Task Group Management -> Task Group option -> Create Task Group`
 
-![create-taskGroup](../../../../img/new_ui/dev/resource/create-taskGroup.png) 
+![create-taskGroup](../../../../img/new_ui/dev/resource/create-taskGroup.png)
 
 You need to enter the information inside the picture:
 
@@ -18,39 +18,39 @@ You need to enter the information inside the picture:
 - **Project name**: The project in which the task group takes effect. This item is optional; if not selected, all the projects in the whole system can use this task group.
 - **Resource pool size**: The maximum number of concurrent task instances allowed.
 
-### View Task Group Queue 
+### View Task Group Queue
 
-![view-queue](../../../../img/new_ui/dev/resource/view-queue.png) 
+![view-queue](../../../../img/new_ui/dev/resource/view-queue.png)
 
 Click the button to view task group usage information:
 
-![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png) 
+![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png)
 
-### Use of Task Groups 
+### Use of Task Groups
 
 **Note**: Task groups only apply to tasks executed by workers; node types executed by the master, such as `switch` nodes, `condition` nodes and `sub_process` nodes, are not controlled by the task group.
 
 Let's take the shell node as an example:
 
-![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)                 
+![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)
 
 To configure the task group, you only need to configure the parts in the red box:
 
 - Task group name: The task group name is displayed on the task group configuration page. Here you can only see the task groups that the project has permission to access (a project is selected when creating the task group) or the task groups that are scoped globally (no project is selected when creating the task group).
 
-- Priority: When there is a waiting resource, the task with high priority will be distributed to the worker by the master first. The larger the value of this part, the higher the priority. 
+- Priority: When tasks are waiting for resources, the master distributes higher-priority tasks to workers first. The larger the value, the higher the priority.
 
-## Implementation Logic of Task Group 
+## Implementation Logic of Task Group
 
 ### Get Task Group Resources
 
-The master judges whether the task is configured with a task group when distributing the task. If the task is not configured, it is normally thrown to the worker to run; if a task group is configured, it checks whether the remaining size of the task group resource pool meets the current task operation before throwing it to the worker for execution. , if the resource pool -1 is satisfied, continue to run; if not, exit the task distribution and wait for other tasks to wake up. 
+When distributing a task, the master checks whether the task is configured with a task group. If not, the task is sent to the worker to run as usual; if a task group is configured, the master checks whether the remaining size of the task group resource pool can accommodate the current task before sending it to the worker. If decrementing the resource pool by 1 is still satisfied, the task continues to run; if not, it exits task distribution and waits for other tasks to wake it up.
 
 ### Release and Wake Up
 
-When the task that has occupied the task group resource is finished, the task group resource will be released. After the release, it will check whether there is a task waiting in the current task group. If there is, mark the task with the best priority to run, and create a new executable event. The event stores the task ID that is marked to acquire the resource, and then the task obtains the task group resource and run. 
+When a task that occupies a task group resource finishes, the resource is released. After the release, the system checks whether there are tasks waiting in the current task group. If there are, it marks the task with the highest priority to run and creates a new executable event. The event stores the ID of the task marked to acquire the resource; the task then obtains the task group resource and runs.
 
-#### Task Group Flowchart 
+#### Task Group Flowchart
 
 ![task_group](../../../../img/task_group_process.png)
-      
+
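+The following is a minimal, illustrative Java sketch of the acquire/release logic described above. The class, method, and field names (`TaskGroupCoordinator`, `tryAcquire`, `release`) are hypothetical and do not correspond to the actual DolphinScheduler source code; the sketch only shows the "resource pool - 1" check on dispatch and the wake-up of the highest-priority waiting task on release.
+
+```java
+import java.util.Comparator;
+import java.util.PriorityQueue;
+
+/**
+ * Illustrative sketch of the task group acquire/release logic.
+ * Names are hypothetical and are not the real DolphinScheduler API.
+ */
+public class TaskGroupCoordinator {
+
+    private static final class WaitingTask {
+        final int taskInstanceId;
+        final int priority;
+
+        WaitingTask(int taskInstanceId, int priority) {
+            this.taskInstanceId = taskInstanceId;
+            this.priority = priority;
+        }
+    }
+
+    private final int poolSize; // "Resource pool size" configured for the task group
+    private int used;           // slots currently occupied by running task instances
+
+    // Waiting tasks ordered so that the largest priority value is polled first.
+    private final PriorityQueue<WaitingTask> waiting =
+            new PriorityQueue<>(Comparator.comparingInt((WaitingTask t) -> -t.priority));
+
+    public TaskGroupCoordinator(int poolSize) {
+        this.poolSize = poolSize;
+    }
+
+    /** Called by the master before dispatching a task of this group to a worker. */
+    public synchronized boolean tryAcquire(int taskInstanceId, int priority) {
+        if (used < poolSize) { // resource pool - 1 is still satisfied: dispatch normally
+            used++;
+            return true;
+        }
+        waiting.offer(new WaitingTask(taskInstanceId, priority)); // park until woken up
+        return false;
+    }
+
+    /** Called when a task holding a slot finishes; returns the task instance id to wake up, if any. */
+    public synchronized Integer release() {
+        used--;
+        WaitingTask next = waiting.poll();
+        return next == null ? null : next.taskInstanceId; // master creates an executable event for this id
+    }
+}
+```
+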
diff --git a/docs/docs/en/guide/security.md b/docs/docs/en/guide/security.md
index b9484a1e72..eaeb12d8a0 100644
--- a/docs/docs/en/guide/security.md
+++ b/docs/docs/en/guide/security.md
@@ -138,8 +138,8 @@ worker:
 ......
 ```
 
-- You can add new worker groups for the workers during runtime regardless of the configurations in `application.yaml` as below: 
-`Security Center` -> `Worker Group Manage` -> `Create Worker Group` -> fill in `Group Name` and `Worker Addresses` -> click `confirm`. 
+- You can add new worker groups for the workers during runtime regardless of the configurations in `application.yaml` as below:
+  `Security Center` -> `Worker Group Manage` -> `Create Worker Group` -> fill in `Group Name` and `Worker Addresses` -> click `confirm`.
 
 ## Environmental Management
 
@@ -164,10 +164,10 @@ Create a task node in the workflow definition, select the worker group and the e
 ## Cluster Management
 
 > Add or update cluster
-- Each process can be related to zero or several clusters to support multiple environment, now just support k8s.
-
+> - Each process can be related to zero or several clusters to support multiple environments; currently only k8s is supported.
+>
 > Usage cluster
-- After creation and authorization, k8s namespaces and processes will associate clusters. Each cluster will have separate workflows and task instances running independently.
+> - After creation and authorization, k8s namespaces and processes can be associated with clusters. Each cluster will have separate workflows and task instances running independently.
 
 ![create-cluster](../../../img/new_ui/dev/security/create-cluster.png)
 
@@ -183,4 +183,3 @@ Create a task node in the workflow definition, select the worker group and the e
 
 ![create-environment](../../../img/new_ui/dev/security/create-namespace.png)
 
-
diff --git a/docs/docs/en/guide/start/docker.md b/docs/docs/en/guide/start/docker.md
index b09d63b0f4..5120382f83 100644
--- a/docs/docs/en/guide/start/docker.md
+++ b/docs/docs/en/guide/start/docker.md
@@ -42,8 +42,8 @@ modify docker-compose's free memory up to 4 GB.
 
 - Mac: Click `Docker Desktop -> Preferences -> Resources -> Memory` and modify it
 - Windows Docker Desktop:
-    - Hyper-V mode: Click `Docker Desktop -> Settings -> Resources -> Memory` modified it
-    - WSL 2 mode: see [WSL 2 utility VM](https://docs.microsoft.com/zh-cn/windows/wsl/wsl-config#configure-global-options-with-wslconfig) for more detail.
+  - Hyper-V mode: Click `Docker Desktop -> Settings -> Resources -> Memory` and modify it
+  - WSL 2 mode: see [WSL 2 utility VM](https://docs.microsoft.com/zh-cn/windows/wsl/wsl-config#configure-global-options-with-wslconfig) for more detail.
 
 After completing the configuration, we can get the `docker-compose.yaml` file from the [download page](/en-us/download/download.html)
 in its source package; make sure you get the right version. After downloading the package, you can run the commands as below.
@@ -71,7 +71,6 @@ $ docker-compose --profile all up -d
 [Using docker-compose to start server](#using-docker-compose-to-start-server) will create a new database and a ZooKeeper
 container when it starts up. You can start the DolphinScheduler servers separately if you want to reuse your existing services.
 
-
 ```shell
 $ DOLPHINSCHEDULER_VERSION=<version>
 # Initialize the database, make sure database <DATABASE> already exists
diff --git a/docs/docs/en/guide/start/quick-start.md b/docs/docs/en/guide/start/quick-start.md
index ff6c19749f..0549f12152 100644
--- a/docs/docs/en/guide/start/quick-start.md
+++ b/docs/docs/en/guide/start/quick-start.md
@@ -42,7 +42,7 @@ This is a Quick Start guide to help you get a basic idea of working with Apache
 ![create-environment](../../../../img/new_ui/dev/quick-start/create-environment.png)
 
 ## Create a token
-  
+
 ![create-token](../../../../img/new_ui/dev/quick-start/create-token.png)
 
 ## Login with regular users
diff --git a/docs/docs/en/guide/task/java.md b/docs/docs/en/guide/task/java.md
new file mode 100644
index 0000000000..c0a1a1cd35
--- /dev/null
+++ b/docs/docs/en/guide/task/java.md
@@ -0,0 +1,48 @@
+# Overview
+
+This node is for executing Java-type tasks and supports using files and jar packages as program entries.
+
+# Create Tasks
+
+- Click on `Project Management` -> `Project Name` -> `Workflow Definition`, then click on the “Create workflow” button to go to the DAG edit page.
+
+- Drag the toolbar's Java task node to the palette.
+
+# Task Parameters
+
+|      **Parameter**       |                                                                              **Description**                                                                               |
+|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Node Name                | The name of the set task. The node name in a workflow definition is unique.                                                                                                |
+| Run Flag                 | Indicates whether the node takes part in normal scheduling; if it does not need to be executed, turn on the prohibition switch.                                            |
+| Description              | Describes the functionality of the node.                                                                                                                                   |
+| Task Priority            | When the number of worker threads is insufficient, the worker executes tasks according to the priority. When the priority is the same, the worker executes tasks by order. |
+| Worker Group             | The group of machines that execute the tasks. If selecting `Default`, DolphinScheduler will randomly choose a worker machine to execute the task.                          |
+| Environment Name         | Configure the environment in which the task runs.                                                                                                                          |
+| Number Of Failed Retries | The number of times a failed task is resubmitted. You can choose the number in the drop-down menu or fill it in manually.                                                  |
+| Failed Retry Interval    | The interval between the failure and resubmission of a task. You can choose the number in the drop-down menu or fill it in manually.                                       |
+| Delayed Execution Time   | The amount of time the execution of a task is delayed, in minutes.                                                                                                         |
+| Timeout Alarm            | Check timeout warning and timeout failure: when the task exceeds the "timeout duration", a warning message is sent and the task execution fails.                           |
+| Module Path              | Uses the modularity feature of Java 9+: all resources are put on the `--module-path`, which requires that the JDK version on your worker supports modularity.              |
+| Main Parameter           | Java program main method entry parameter.                                                                                                                                  |
+| Java VM Parameters       | JVM startup parameters.                                                                                                                                                    |
+| Script                   | You need to write Java code if you use the Java run type. The public class must exist in the code without writing a package statement.                                     |
+| Resources                | External JAR packages or other resource files that are added to the classpath or module path and can be easily retrieved in your JAVA script.                              |
+| Custom parameter         | A user-defined parameter that replaces `${variable}` in the script.                                                                                                        |
+| Pre Tasks                | Selects a pre-task for the current task and sets the pre-task as the upstream of the current task.                                                                         |
+
+## Example
+
+Java-type tasks have two execution modes; here is a demonstration of executing a task in JAVA mode.
+
+The main configuration parameters are as follows:
+
+- Run Type
+- Module Path
+- Main Parameters
+- Java VM Parameters
+- Script
+
+![java_task](../../../../img/tasks/demo/java_task02.png)
+
+## Note
+
+When you run the task in JAVA execution mode, the public class must exist in the code, and you can omit the package statement.
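+
+The following is a hypothetical example of what the `Script` field of a JAVA-mode task might contain (it is not taken from the official examples). Note that there is no package statement and that a public class with a `main` method is present; the values configured in `Main Parameter` would be passed to `main` as `args`.
+
+```java
+import java.util.Arrays;
+
+// Example script body for a JAVA-mode task: a public class with no package statement.
+public class HelloDolphinScheduler {
+
+    public static void main(String[] args) {
+        System.out.println("Hello from a DolphinScheduler Java task!");
+        System.out.println("Main parameters: " + Arrays.toString(args));
+    }
+}
+```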
diff --git a/docs/docs/en/guide/upgrade/incompatible.md b/docs/docs/en/guide/upgrade/incompatible.md
index d1043983c9..fcdd7dd199 100644
--- a/docs/docs/en/guide/upgrade/incompatible.md
+++ b/docs/docs/en/guide/upgrade/incompatible.md
@@ -1,9 +1,10 @@
 # Incompatible
 
-This document records the incompatible updates between each version. You need to check this document before you upgrade to related version. 
+This document records the incompatible updates between each version. You need to check this document before you upgrade to related version.
 
 ## dev
 
 ## 3.0.0
 
-* Copy and import workflow without 'copy' suffix [#10607](https://github.com/apache/dolphinscheduler/pull/10607)
\ No newline at end of file
+* Copy and import workflow without 'copy' suffix [#10607](https://github.com/apache/dolphinscheduler/pull/10607)
+
diff --git a/docs/docs/en/guide/upgrade/upgrade.md b/docs/docs/en/guide/upgrade/upgrade.md
index f1a518e644..b2b302117a 100644
--- a/docs/docs/en/guide/upgrade/upgrade.md
+++ b/docs/docs/en/guide/upgrade/upgrade.md
@@ -28,13 +28,13 @@ Change configuration in `./bin/env/dolphinscheduler_env.sh` ({user} and {passwor
 Using MySQL as an example, change the values if you use other databases. Please manually download the [mysql-connector-java driver jar](https://downloads.MySQL.com/archives/c-j/)
 package and add it to the `./tools/libs` directory, then change the `./bin/env/dolphinscheduler_env.sh` file:
 
-    ```shell
-    export DATABASE=${DATABASE:-mysql}
-    export SPRING_PROFILES_ACTIVE=${DATABASE}
-    export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
-    export SPRING_DATASOURCE_USERNAME={user}
-    export SPRING_DATASOURCE_PASSWORD={password}
-    ```
+        ```shell
+        export DATABASE=${DATABASE:-mysql}
+        export SPRING_PROFILES_ACTIVE=${DATABASE}
+        export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
+        export SPRING_DATASOURCE_USERNAME={user}
+        export SPRING_DATASOURCE_PASSWORD={password}
+        ```
 
 Execute database upgrade script: `sh ./tools/bin/upgrade-schema.sh`
 
@@ -45,7 +45,7 @@ Execute database upgrade script: `sh ./tools/bin/upgrade-schema.sh`
 - If you deploy with Pseudo-Cluster deployment, change it according to [Pseudo-Cluster](../installation/pseudo-cluster.md) section "Modify Configuration".
 - If you deploy with Cluster deployment, change it according to [Cluster](../installation/cluster.md) section "Modify Configuration".
 
-And them run command `sh ./bin/start-all.sh` to start all services. 
+And then run the command `sh ./bin/start-all.sh` to start all services.
 
 ## Notice
 
@@ -54,26 +54,26 @@ And them run command `sh ./bin/start-all.sh` to start all services.
 The architecture of the worker group is different between versions before 1.3.1 and versions from 1.3.1 until 2.0.0
 
 - Up to and including version 1.3.1, worker groups can be created through the UI.
-- Since version 1.3.1 and before version 2.0.0, worker group can be created by modifying the worker configuration. 
+- Since version 1.3.1 and before version 2.0.0, worker groups can be created by modifying the worker configuration.
 
 #### What Should I Do When I Upgrade from 1.3.1 to a Version Before 2.0.0
 
 * Check the backup database, search records in the `t_ds_worker_group` table and mainly focus on three columns: `id`, `name` and `ip_list`.
 
-| id | name | ip_list    |
-| :---         |     :---:      |          ---: |
-| 1   | service1     | 192.168.xx.10    |
-| 2   | service2     | 192.168.xx.11,192.168.xx.12      |
+| id |   name   |                     ip_list |
+|:---|:--------:|----------------------------:|
+| 1  | service1 |               192.168.xx.10 |
+| 2  | service2 | 192.168.xx.11,192.168.xx.12 |
 
 * Modify worker related configuration in `bin/env/install_config.conf`.
 
 Assume below are the machines where the worker services are to be deployed:
 
-| hostname | ip |
-| :---  | :---:  |
-| ds1   | 192.168.xx.10     |
-| ds2   | 192.168.xx.11     |
-| ds3   | 192.168.xx.12     |
+| hostname |      ip       |
+|:---------|:-------------:|
+| ds1      | 192.168.xx.10 |
+| ds2      | 192.168.xx.11 |
+| ds3      | 192.168.xx.12 |
 
 To keep the worker group config consistent with the previous version, we need to modify the workers configuration as below:
 
@@ -84,7 +84,7 @@ workers="ds1:service1,ds2:service2,ds3:service2"
 
 #### The Worker Group has Been Enhanced in Version 1.3.2
 
-Workers in 1.3.1 can only belong to one worker group, but after version 1.3.2 and before version 2.0.0 worker support more than one worker group. 
+A worker in 1.3.1 can only belong to one worker group, but from version 1.3.2 up to version 2.0.0 a worker can belong to more than one worker group.
 
 ```sh
 workers="ds1:service1,ds1:service2"
diff --git a/docs/docs/en/history-versions.md b/docs/docs/en/history-versions.md
index f879431286..2ece5358cd 100644
--- a/docs/docs/en/history-versions.md
+++ b/docs/docs/en/history-versions.md
@@ -79,3 +79,4 @@
 ### Versions:Dev
 
 #### Links:[Dev Document](../dev/user_doc/about/introduction.md)
+
diff --git a/docs/docs/zh/DSIP.md b/docs/docs/zh/DSIP.md
index 520ffc11b4..a24a904159 100644
--- a/docs/docs/zh/DSIP.md
+++ b/docs/docs/zh/DSIP.md
@@ -52,11 +52,11 @@ integer in [All DSIPs][all-DSIPs] issues.
 
   ```text
   Hi community,
-  
+
   <CHANGE-TO-YOUR-PROPOSAL-DETAIL>
-  
+
   I already add a GitHub Issue for my proposal, which you could see in <CHANGE-TO-YOUR-GITHUB-ISSUE-LINK>.
-  
+
   Looking forward any feedback for this thread.
   ```
 
@@ -83,3 +83,4 @@ integer in [All DSIPs][all-DSIPs] issues.
 [github-issue-choose]: https://github.com/apache/dolphinscheduler/issues/new/choose
 [mail-to-dev]: mailto:dev@dolphinscheduler.apache.org
 [DSIP-1]: https://github.com/apache/dolphinscheduler/issues/6407
+
diff --git a/docs/docs/zh/about/features.md b/docs/docs/zh/about/features.md
index 1348a54580..25aa47915e 100644
--- a/docs/docs/zh/about/features.md
+++ b/docs/docs/zh/about/features.md
@@ -17,3 +17,4 @@
 ## High Scalability
 
 - **高扩展性**: 支持多租户和在线资源管理。支持每天10万个数据任务的稳定运行。
+
diff --git a/docs/docs/zh/about/glossary.md b/docs/docs/zh/about/glossary.md
index 2b9f967661..7642a4a4c1 100644
--- a/docs/docs/zh/about/glossary.md
+++ b/docs/docs/zh/about/glossary.md
@@ -50,4 +50,3 @@
 
 - dolphinscheduler-ui 前端模块
 
-
diff --git a/docs/docs/zh/about/hardware.md b/docs/docs/zh/about/hardware.md
index 1ec2f24775..ce5d3269e6 100644
--- a/docs/docs/zh/about/hardware.md
+++ b/docs/docs/zh/about/hardware.md
@@ -4,39 +4,39 @@ DolphinScheduler 作为一款开源分布式工作流任务调度系统,可以
 
 ## 1. Linux 操作系统版本要求
 
-| 操作系统       | 版本         |
-| :----------------------- | :----------: |
-| Red Hat Enterprise Linux | 7.0 及以上   |
-| CentOS                   | 7.0 及以上   |
-| Oracle Enterprise Linux  | 7.0 及以上   |
+| 操作系统                     |    版本     |
+|:-------------------------|:---------:|
+| Red Hat Enterprise Linux |  7.0 及以上  |
+| CentOS                   |  7.0 及以上  |
+| Oracle Enterprise Linux  |  7.0 及以上  |
 | Ubuntu LTS               | 16.04 及以上 |
 
 > **注意:**
->以上 Linux 操作系统可运行在物理服务器以及 VMware、KVM、XEN 主流虚拟化环境上
+> 以上 Linux 操作系统可运行在物理服务器以及 VMware、KVM、XEN 主流虚拟化环境上
 
 ## 2. 服务器建议配置
+
 DolphinScheduler 支持运行在 Intel x86-64 架构的 64 位通用硬件服务器平台。对生产环境的服务器硬件配置有以下建议:
+
 ### 生产环境
 
 | **CPU** | **内存** | **硬盘类型** | **网络** | **实例数量** |
-| --- | --- | --- | --- | --- |
-| 4核+ | 8 GB+ | SAS | 千兆网卡 | 1+ |
+|---------|--------|----------|--------|----------|
+| 4核+     | 8 GB+  | SAS      | 千兆网卡   | 1+       |
 
 > **注意:**
 > - 以上建议配置为部署 DolphinScheduler 的最低配置,生产环境强烈推荐使用更高的配置
 > - 硬盘大小配置建议 50GB+ ,系统盘和数据盘分开
 
-
 ## 3. 网络要求
 
 DolphinScheduler正常运行提供如下的网络端口配置:
 
-| 组件 | 默认端口 | 说明 |
-|  --- | --- | --- |
-| MasterServer |  5678  | 非通信端口,只需本机端口不冲突即可 |
-| WorkerServer | 1234  | 非通信端口,只需本机端口不冲突即可 |
-| ApiApplicationServer |  12345 | 提供后端通信端口 |
-
+|          组件          | 默认端口  |        说明         |
+|----------------------|-------|-------------------|
+| MasterServer         | 5678  | 非通信端口,只需本机端口不冲突即可 |
+| WorkerServer         | 1234  | 非通信端口,只需本机端口不冲突即可 |
+| ApiApplicationServer | 12345 | 提供后端通信端口          |
 
 > **注意:**
 > - MasterServer 和 WorkerServer 不需要开启网络间通信,只需本机端口不冲突即可
@@ -44,4 +44,4 @@ DolphinScheduler正常运行提供如下的网络端口配置:
 
 ## 4. 客户端 Web 浏览器要求
 
-DolphinScheduler 推荐 Chrome 以及使用 Chromium 内核的较新版本浏览器访问前端可视化操作界面
\ No newline at end of file
+DolphinScheduler 推荐 Chrome 以及使用 Chromium 内核的较新版本浏览器访问前端可视化操作界面
diff --git a/docs/docs/zh/about/introduction.md b/docs/docs/zh/about/introduction.md
index f4e9ab0ddd..250f72e82d 100644
--- a/docs/docs/zh/about/introduction.md
+++ b/docs/docs/zh/about/introduction.md
@@ -5,4 +5,4 @@ Apache DolphinScheduler 是一个分布式易扩展的可视化DAG工作流任
 Apache DolphinScheduler 旨在解决复杂的大数据任务依赖关系,并为应用程序提供数据和各种 OPS 编排中的关系。 解决数据研发ETL依赖错综复杂,无法监控任务健康状态的问题。
 DolphinScheduler 以 DAG(Directed Acyclic Graph,DAG)流式方式组装任务,可以及时监控任务的执行状态,支持重试、指定节点恢复失败、暂停、恢复、终止任务等操作。
 
-![Apache DolphinScheduler](../../../img/introduction_ui.png)
\ No newline at end of file
+![Apache DolphinScheduler](../../../img/introduction_ui.png)
diff --git a/docs/docs/zh/architecture/cache.md b/docs/docs/zh/architecture/cache.md
index e5a55842c4..6926eddfa1 100644
--- a/docs/docs/zh/architecture/cache.md
+++ b/docs/docs/zh/architecture/cache.md
@@ -39,4 +39,4 @@ spring:
 
 时序图如下图所示:
 
-<img src="../../../img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
\ No newline at end of file
+<img src="../../../img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
diff --git a/docs/docs/zh/architecture/configuration.md b/docs/docs/zh/architecture/configuration.md
index fb985a96d3..be822c89a2 100644
--- a/docs/docs/zh/architecture/configuration.md
+++ b/docs/docs/zh/architecture/configuration.md
@@ -1,9 +1,11 @@
 <!-- markdown-link-check-disable -->
 
 # 前言
+
 本文档为dolphinscheduler配置文件说明文档。
 
 # 目录结构
+
 DolphinScheduler的目录结构如下:
 
 ```
@@ -98,11 +100,13 @@ DolphinScheduler的目录结构如下:
 # 配置文件详解
 
 ## dolphinscheduler-daemon.sh [启动/关闭DolphinScheduler服务脚本]
+
 dolphinscheduler-daemon.sh脚本负责DolphinScheduler的启动&关闭.
 start-all.sh/stop-all.sh最终也是通过dolphinscheduler-daemon.sh对集群进行启动/关闭操作.
 目前DolphinScheduler只是做了一个基本的设置,JVM参数请根据各自资源的实际情况自行设置.
 
 默认简化参数如下:
+
 ```bash
 export DOLPHINSCHEDULER_OPTS="
 -server
@@ -120,6 +124,7 @@ export DOLPHINSCHEDULER_OPTS="
 > 不建议设置"-XX:DisableExplicitGC" , DolphinScheduler使用Netty进行通讯,设置该参数,可能会导致内存泄漏.
 
 ## 数据库连接相关配置
+
 在DolphinScheduler中使用Spring Hikari对数据库连接进行管理,配置文件位置:
 
 |服务名称| 配置文件 |
@@ -149,8 +154,8 @@ export DOLPHINSCHEDULER_OPTS="
 
 DolphinScheduler同样可以通过`bin/env/dolphinscheduler_env.sh`进行数据库连接相关的配置。
 
-
 ## Zookeeper相关配置
+
 DolphinScheduler使用Zookeeper进行集群管理、容错、事件监听等功能,配置文件位置:
 |服务名称| 配置文件 |
 |--|--|
@@ -175,6 +180,7 @@ DolphinScheduler使用Zookeeper进行集群管理、容错、事件监听等功
 DolphinScheduler同样可以通过`bin/env/dolphinscheduler_env.sh`进行Zookeeper相关的配置。
 
 ## common.properties [hadoop、s3、yarn配置]
+
 common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置,配置文件位置:
 |服务名称| 配置文件 |
 |--|--|
@@ -217,6 +223,7 @@ common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置
 |zeppelin.rest.url | http://localhost:8080 | zeppelin RESTful API 接口地址|
 
 ## Api-server相关配置
+
 位置:`api-server/conf/application.yaml`
 |参数 |默认值| 描述|
 |--|--|--|
@@ -245,6 +252,7 @@ common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置
 |traffic.control.customize-tenant-qps-rate||自定义租户最大请求数/秒限制|
 
 ## Master Server相关配置
+
 位置:`master-server/conf/application.yaml`
 |参数 |默认值| 描述|
 |--|--|--|
@@ -266,6 +274,7 @@ common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置
 |master.registry-disconnect-strategy.max-waiting-time|100s|当Master与注册中心失联之后重连时间, 之后当strategy为waiting时,该值生效。 该值表示当Master与注册中心失联时会在给定时间之内进行重连, 在给定时间之内重连失败将会停止自己,在重连时,Master会丢弃目前正在执行的工作流,值为0表示会无限期等待 |
 
 ## Worker Server相关配置
+
 位置:`worker-server/conf/application.yaml`
 |参数 |默认值| 描述|
 |--|--|--|
@@ -282,16 +291,16 @@ common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置
 |worker.registry-disconnect-strategy.strategy|stop|当Worker与注册中心失联之后采取的策略, 默认值是: stop. 可选值包括: stop, waiting|
 |worker.registry-disconnect-strategy.max-waiting-time|100s|当Worker与注册中心失联之后重连时间, 之后当strategy为waiting时,该值生效。 该值表示当Worker与注册中心失联时会在给定时间之内进行重连, 在给定时间之内重连失败将会停止自己,在重连时,Worker会丢弃kill正在执行的任务。值为0表示会无限期等待 |
 
-
 ## Alert Server相关配置
+
 位置:`alert-server/conf/application.yaml`
 |参数 |默认值| 描述|
 |--|--|--|
 |server.port|50053|Alert Server监听端口|
 |alert.port|50052|alert监听端口|
 
-
 ## Quartz相关配置
+
 这里面主要是quartz配置,请结合实际业务场景&资源进行配置,本文暂时不做展开,配置文件位置:
 
 |服务名称| 配置文件 |
@@ -319,7 +328,6 @@ common.properties配置文件目前主要是配置hadoop/s3/yarn相关的配置
 |spring.quartz.properties.org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate|
 |spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval | 5000|
 
-
 ## dolphinscheduler_env.sh [环境变量配置]
 
 通过类似shell方式提交任务的的时候,会加载该配置文件中的环境变量到主机中。涉及到的 `JAVA_HOME`、元数据库、注册中心和任务类型配置,其中任务类型主要有: Shell任务、Python任务、Spark任务、Flink任务、Datax任务等等。
@@ -358,6 +366,7 @@ export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:
 ```
 
 ## 日志相关配置
+
 |服务名称| 配置文件 |
 |--|--|
 |Master Server | `master-server/conf/logback-spring.xml`|
diff --git a/docs/docs/zh/architecture/design.md b/docs/docs/zh/architecture/design.md
index c8910e642a..f3368a7609 100644
--- a/docs/docs/zh/architecture/design.md
+++ b/docs/docs/zh/architecture/design.md
@@ -3,6 +3,7 @@
 ## 系统架构
 
 ### 系统架构图
+
 <p align="center">
   <img src="../../../img/architecture-1.3.0.jpg" alt="系统架构图"  width="70%" />
   <p align="center">
@@ -11,6 +12,7 @@
 </p>
 
 ### 启动流程活动图
+
 <p align="center">
   <img src="../../../img/process-start-flow-1.3.0.png" alt="启动流程活动图"  width="70%" />
   <p align="center">
@@ -20,65 +22,67 @@
 
 ### 架构说明
 
-* **MasterServer** 
+* **MasterServer**
+
+  MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
+  MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
+  MasterServer基于netty提供监听服务。
+
+  ##### 该服务内主要包含:
+
+  - **DistributedQuartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作;
 
-    MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
-    MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
-    MasterServer基于netty提供监听服务。
+  - **MasterSchedulerService**是一个扫描线程,定时扫描数据库中的`t_ds_command`表,根据不同的命令类型进行不同的业务操作;
 
-    ##### 该服务内主要包含:
+  - **WorkflowExecuteRunnable**主要是负责DAG任务切分、任务提交监控、各种不同事件类型的逻辑处理;
 
-    - **DistributedQuartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作;
+  - **TaskExecuteRunnable**主要负责任务的处理和持久化,并生成任务事件提交到工作流的事件队列;
 
-    - **MasterSchedulerService**是一个扫描线程,定时扫描数据库中的`t_ds_command`表,根据不同的命令类型进行不同的业务操作;
+  - **EventExecuteService**主要负责工作流实例的事件队列的轮询;
 
-    - **WorkflowExecuteRunnable**主要是负责DAG任务切分、任务提交监控、各种不同事件类型的逻辑处理;
+  - **StateWheelExecuteThread**主要负责工作流和任务超时、任务重试、任务依赖的轮询,并生成对应的工作流或任务事件提交到工作流的事件队列;
 
-    - **TaskExecuteRunnable**主要负责任务的处理和持久化,并生成任务事件提交到工作流的事件队列;
+  - **FailoverExecuteThread**主要负责Master容错和Worker容错的相关逻辑;
 
-    - **EventExecuteService**主要负责工作流实例的事件队列的轮询;
+* **WorkerServer**
 
-    - **StateWheelExecuteThread**主要负责工作流和任务超时、任务重试、任务依赖的轮询,并生成对应的工作流或任务事件提交到工作流的事件队列;
+  WorkerServer也采用分布式无中心设计理念,WorkerServer主要负责任务的执行和提供日志服务。
+  WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
+  WorkerServer基于netty提供监听服务。
 
-    - **FailoverExecuteThread**主要负责Master容错和Worker容错的相关逻辑;
-  
-* **WorkerServer** 
+  ##### 该服务包含:
 
-     WorkerServer也采用分布式无中心设计理念,WorkerServer主要负责任务的执行和提供日志服务。
-     WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
-     WorkerServer基于netty提供监听服务。
-     ##### 该服务包含:
+  - **WorkerManagerThread**主要负责任务队列的提交,不断从任务队列中领取任务,提交到线程池处理;
 
-     - **WorkerManagerThread**主要负责任务队列的提交,不断从任务队列中领取任务,提交到线程池处理;
-     
-     - **TaskExecuteThread**主要负责任务执行的流程,根据不同的任务类型进行任务的实际处理;
+  - **TaskExecuteThread**主要负责任务执行的流程,根据不同的任务类型进行任务的实际处理;
 
-     - **RetryReportTaskStatusThread**主要负责定时轮询向Master汇报任务的状态,直到Master回复状态的ack,避免任务状态丢失;
+  - **RetryReportTaskStatusThread**主要负责定时轮询向Master汇报任务的状态,直到Master回复状态的ack,避免任务状态丢失;
 
-* **ZooKeeper** 
+* **ZooKeeper**
 
-    ZooKeeper服务,系统中的MasterServer和WorkerServer节点都通过ZooKeeper来进行集群管理和容错。另外系统还基于ZooKeeper进行事件监听和分布式锁。
-    我们也曾经基于Redis实现过队列,不过我们希望DolphinScheduler依赖到的组件尽量地少,所以最后还是去掉了Redis实现。
+  ZooKeeper服务,系统中的MasterServer和WorkerServer节点都通过ZooKeeper来进行集群管理和容错。另外系统还基于ZooKeeper进行事件监听和分布式锁。
+  我们也曾经基于Redis实现过队列,不过我们希望DolphinScheduler依赖到的组件尽量地少,所以最后还是去掉了Redis实现。
 
-* **AlertServer** 
+* **AlertServer**
 
-    提供告警服务,通过告警插件的方式实现丰富的告警手段。
+  提供告警服务,通过告警插件的方式实现丰富的告警手段。
 
-* **ApiServer** 
+* **ApiServer**
 
-    API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
+  API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
 
-* **UI** 
+* **UI**
 
-    系统的前端页面,提供系统的各种可视化操作界面。
+  系统的前端页面,提供系统的各种可视化操作界面。
 
 ### 架构设计思想
 
-#### 一、去中心化vs中心化 
+#### 一、去中心化vs中心化
 
 ##### 中心化思想
 
 中心化的设计理念比较简单,分布式集群中的节点按照角色分工,大体上分为两种角色:
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave角色"  width="50%" />
  </p>
@@ -86,14 +90,13 @@
 - Master的角色主要负责任务分发并监督Slave的健康状态,可以动态的将任务均衡到Slave上,以致Slave节点不至于“忙死”或”闲死”的状态。
 - Worker的角色主要负责任务的执行工作并维护和Master的心跳,以便Master可以分配任务给Slave。
 
-
-
 中心化思想设计存在的问题:
 
 - 一旦Master出现了问题,则群龙无首,整个集群就会崩溃。为了解决这个问题,大多数Master/Slave架构模式都采用了主备Master的设计方案,可以是热备或者冷备,也可以是自动切换或手动切换,而且越来越多的新系统都开始具备自动选举切换Master的能力,以提升系统的可用性。
 - 另外一个问题是如果Scheduler在Master上,虽然可以支持一个DAG中不同的任务运行在不同的机器上,但是会产生Master的过负载。如果Scheduler在Slave上,则一个DAG中所有的任务都只能在某一台机器上进行作业提交,则并行任务比较多的时候,Slave的压力可能会比较大。
 
 ##### 去中心化
+
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="去中心化"  width="50%" />
  </p>
@@ -101,10 +104,10 @@
 - 在去中心化设计里,通常没有Master/Slave的概念,所有的角色都是一样的,地位是平等的,全球互联网就是一个典型的去中心化的分布式系统,联网的任意节点设备down机,都只会影响很小范围的功能。
 - 去中心化设计的核心设计在于整个分布式系统中不存在一个区别于其他节点的”管理者”,因此不存在单点故障问题。但由于不存在” 管理者”节点所以每个节点都需要跟其他节点通信才得到必须要的机器信息,而分布式系统通信的不可靠性,则大大增加了上述功能的实现难度。
 - 实际上,真正去中心化的分布式系统并不多见。反而动态中心化分布式系统正在不断涌出。在这种架构下,集群中的管理者是被动态选择出来的,而不是预置的,并且集群在发生故障的时候,集群的节点会自发的举行"会议"来选举新的"管理者"去主持工作。最典型的案例就是ZooKeeper及Go语言实现的Etcd。
-
 - DolphinScheduler的去中心化是Master/Worker注册心跳到Zookeeper中,Master基于slot处理各自的Command,通过selector分发任务给worker,实现Master集群和Worker集群无中心。
 
 #### 二、容错设计
+
 容错分为服务宕机容错和任务重试,服务宕机容错又分为Master容错和Worker容错两种情况
 
 ##### 宕机容错
@@ -140,7 +143,7 @@
 
 容错后处理:Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则接管任务并进行重新提交。
 
- 注意:由于” 网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
+注意:由于” 网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
 
 ##### 三、任务失败重试
 
@@ -160,37 +163,35 @@
 
 如果工作流中有任务失败达到最大重试次数,工作流就会失败停止,失败的工作流可以手动进行重跑操作或者流程恢复操作。
 
-
 #### 四、任务优先级设计
+
 在早期调度设计中,如果没有优先级设计,采用公平调度设计的话,会遇到先行提交的任务可能会和后继提交的任务同时完成的情况,而不能做到设置流程或者任务的优先级,因此我们对此进行了重新设计,目前我们设计如下:
 
--  按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
-    - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务
+- 按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
+  - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务
+    - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
 
-        - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
-             </p>
+        <p align="center">
+           <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
+         </p>
 
-        - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
-             </p>
+    - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
 
+        <p align="center">
+           <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
+         </p>
 
 #### 五、Logback和netty实现日志访问
 
--  由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
-  -  将日志放到ES搜索引擎上
-  -  通过netty通信获取远程日志信息
-
--  介于考虑到尽可能的DolphinScheduler的轻量级性,所以选择了gRPC实现远程访问日志信息。
+- 由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
+- 将日志放到ES搜索引擎上
+- 通过netty通信获取远程日志信息
+- 介于考虑到尽可能的DolphinScheduler的轻量级性,所以选择了gRPC实现远程访问日志信息。
 
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc远程访问"  width="50%" />
  </p>
 
-
 - 详情可参考Master和Worker的logback配置,如下示例:
 
 ```xml
@@ -217,6 +218,6 @@
 ```
 
 ## 总结
-本文从调度出发,初步介绍了大数据分布式工作流调度系统--DolphinScheduler的架构原理及实现思路。未完待续
 
+本文从调度出发,初步介绍了大数据分布式工作流调度系统--DolphinScheduler的架构原理及实现思路。未完待续
 
diff --git a/docs/docs/zh/architecture/load-balance.md b/docs/docs/zh/architecture/load-balance.md
index cb381f316b..6baeb167ad 100644
--- a/docs/docs/zh/architecture/load-balance.md
+++ b/docs/docs/zh/architecture/load-balance.md
@@ -1,4 +1,5 @@
 ### 负载均衡
+
 负载均衡即通过路由算法(通常是集群环境),合理的分摊服务器压力,达到服务器性能的最大优化。
 
 ### DolphinScheduler-Worker 负载均衡算法
@@ -56,3 +57,4 @@ eg:master.host.selector=random(不区分大小写)
 
 * worker.max.cpuload.avg=-1 (worker最大cpuload均值,只有高于系统cpuload均值时,worker服务才能被派发任务. 默认值为-1: cpu cores * 2)
 * worker.reserved.memory=0.3 (worker预留内存,只有低于系统可用内存时,worker服务才能被派发任务,单位为G)
+
diff --git a/docs/docs/zh/architecture/metadata.md b/docs/docs/zh/architecture/metadata.md
index 010ef7189f..7f82a73609 100644
--- a/docs/docs/zh/architecture/metadata.md
+++ b/docs/docs/zh/architecture/metadata.md
@@ -1,11 +1,13 @@
 # DolphinScheduler 元数据文档
 
 ## 表Schema
+
 详见`dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql`目录下的sql文件
 
 ## E-R图
 
 ### 用户	队列	数据源
+
 ![image.png](../../../img/metadata-erd/user-queue-datasource.png)
 
 - 一个租户下可以有多个用户;<br />
@@ -13,6 +15,7 @@
 - `t_ds_datasource`表中的`user_id`字段表示创建该数据源的用户,`t_ds_relation_datasource_user`中的`user_id`表示对数据源有权限的用户;<br />
 
 ### 项目	资源	告警
+
 ![image.png](../../../img/metadata-erd/project-resource-alert.png)
 
 - 一个用户可以有多个项目,用户项目授权通过`t_ds_relation_project_user`表完成project_id和user_id的关系绑定;<br />
@@ -21,6 +24,7 @@
 - `t_ds_udfs`表中的`user_id`表示创建该UDF的用户,`t_ds_relation_udfs_user`表中的`user_id`表示对UDF有权限的用户;<br />
 
 ### 项目 - 租户 - 工作流定义 - 定时
+
 ![image.png](../../../img/metadata-erd/project_tenant_process_definition_schedule.png)
 
 - 一个项目可以有多个工作流定义,每个工作流定义只属于一个项目;<br />
@@ -28,10 +32,10 @@
 - 一个工作流定义可以有一个或多个定时的配置;<br />
 
 ### 工作流定义和执行
+
 ![image.png](../../../img/metadata-erd/process_definition.png)
 
 - 一个工作流定义对应多个任务定义,通过`t_ds_process_task_relation`进行关联,关联的key是`code + version`,当任务的前置节点为空时,对应的`pre_task_node`和`pre_task_version`为0;
 - 一个工作流定义可以有多个工作流实例`t_ds_process_instance`,一个工作流实例对应一个或多个任务实例`t_ds_task_instance`;
 - `t_ds_relation_process_instance`表存放的数据用于处理流程定义中含有子流程的情况,`parent_process_instance_id`表示含有子流程的主流程实例id,`process_instance_id`表示子流程实例的id,`parent_task_instance_id`表示子流程节点的任务实例id,流程实例表和任务实例表分别对应`t_ds_process_instance`表和`t_ds_task_instance`表;
 
-
diff --git a/docs/docs/zh/architecture/task-structure.md b/docs/docs/zh/architecture/task-structure.md
index f369116060..36ec2537f6 100644
--- a/docs/docs/zh/architecture/task-structure.md
+++ b/docs/docs/zh/architecture/task-structure.md
@@ -1,32 +1,31 @@
-
 # 任务总体存储结构
+
 在dolphinscheduler中创建的所有任务都保存在t_ds_process_definition 表中.
 
 该数据库表结构如下表所示:
 
-
-序号 | 字段  | 类型  |  描述
--------- | ---------| -------- | ---------
-1|id|int(11)|主键
-2|name|varchar(255)|流程定义名称
-3|version|int(11)|流程定义版本
-4|release_state|tinyint(4)|流程定义的发布状态:0 未上线 ,  1已上线
-5|project_id|int(11)|项目id
-6|user_id|int(11)|流程定义所属用户id
-7|process_definition_json|longtext|流程定义JSON
-8|description|text|流程定义描述
-9|global_params|text|全局参数
-10|flag|tinyint(4)|流程是否可用:0 不可用,1 可用
-11|locations|text|节点坐标信息
-12|connects|text|节点连线信息
-13|receivers|text|收件人
-14|receivers_cc|text|抄送人
-15|create_time|datetime|创建时间
-16|timeout|int(11) |超时时间
-17|tenant_id|int(11) |租户id
-18|update_time|datetime|更新时间
-19|modify_by|varchar(36)|修改用户
-20|resource_ids|varchar(255)|资源ids
+| 序号 |           字段            |      类型      |           描述            |
+|----|-------------------------|--------------|-------------------------|
+| 1  | id                      | int(11)      | 主键                      |
+| 2  | name                    | varchar(255) | 流程定义名称                  |
+| 3  | version                 | int(11)      | 流程定义版本                  |
+| 4  | release_state           | tinyint(4)   | 流程定义的发布状态:0 未上线 ,  1已上线 |
+| 5  | project_id              | int(11)      | 项目id                    |
+| 6  | user_id                 | int(11)      | 流程定义所属用户id              |
+| 7  | process_definition_json | longtext     | 流程定义JSON                |
+| 8  | description             | text         | 流程定义描述                  |
+| 9  | global_params           | text         | 全局参数                    |
+| 10 | flag                    | tinyint(4)   | 流程是否可用:0 不可用,1 可用       |
+| 11 | locations               | text         | 节点坐标信息                  |
+| 12 | connects                | text         | 节点连线信息                  |
+| 13 | receivers               | text         | 收件人                     |
+| 14 | receivers_cc            | text         | 抄送人                     |
+| 15 | create_time             | datetime     | 创建时间                    |
+| 16 | timeout                 | int(11)      | 超时时间                    |
+| 17 | tenant_id               | int(11)      | 租户id                    |
+| 18 | update_time             | datetime     | 更新时间                    |
+| 19 | modify_by               | varchar(36)  | 修改用户                    |
+| 20 | resource_ids            | varchar(255) | 资源ids                   |
 
 其中process_definition_json 字段为核心字段, 定义了 DAG 图中的任务信息.该数据以JSON 的方式进行存储.
 
@@ -39,6 +38,7 @@
 4|timeout|int|超时时间
 
 数据示例:
+
 ```bash
 {
     "globalParams":[
@@ -58,6 +58,7 @@
 # 各任务类型存储结构详解
 
 ## Shell节点
+
 **节点数据结构如下:**
 序号|参数名||类型|描述 |描述
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -72,7 +73,7 @@
 9|runFlag | |String |运行标识| |
 10|conditionResult | |Object|条件分支 | |
 11| | successNode| Array|成功跳转节点| |
-12| | failedNode|Array|失败跳转节点 | 
+12| | failedNode|Array|失败跳转节点 |
 13| dependence| |Object |任务依赖 |与params互斥
 14|maxRetryTimes | |String|最大重试次数 | |
 15|retryInterval | |String |重试间隔| |
@@ -81,7 +82,6 @@
 18|workerGroup | |String |Worker 分组| |
 19|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -131,8 +131,8 @@
 
 ```
 
-
 ## SQL节点
+
 通过 SQL对指定的数据源进行数据查询、更新操作.
 
 **节点数据结构如下:**
@@ -159,7 +159,7 @@
 19|runFlag | |String |运行标识| |
 20|conditionResult | |Object|条件分支 | |
 21| | successNode| Array|成功跳转节点| |
-22| | failedNode|Array|失败跳转节点 | 
+22| | failedNode|Array|失败跳转节点 |
 23| dependence| |Object |任务依赖 |与params互斥
 24|maxRetryTimes | |String|最大重试次数 | |
 25|retryInterval | |String |重试间隔| |
@@ -168,7 +168,6 @@
 28|workerGroup | |String |Worker 分组| |
 29|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -230,47 +229,47 @@
 }
 ```
 
-
 ## PROCEDURE[存储过程]节点
+
 **节点数据结构如下:**
 **节点数据样例:**
 
 ## SPARK节点
-**节点数据结构如下:**
 
-序号|参数名||类型|描述 |描述
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| 任务编码|
-2|type ||String |类型 |SPARK
-3| name| |String|名称 |
-4| params| |Object| 自定义参数 |Json 格式
-5| |mainClass |String | 运行主类
-6| |mainArgs | String| 运行参数
-7| |others | String| 其他参数
-8| |mainJar |Object | 程序 jar 包
-9| |deployMode |String | 部署模式  |local,client,cluster
-10| |driverCores | String| driver核数
-11| |driverMemory | String| driver 内存数
-12| |numExecutors |String | executor数量
-13| |executorMemory |String | executor内存
-14| |executorCores |String | executor核数
-15| |programType | String| 程序类型|JAVA,SCALA,PYTHON
-16| | sparkVersion| String|	Spark 版本| SPARK1 , SPARK2
-17| | localParams| Array|自定义参数
-18| | resourceList| Array|资源文件
-19|description | |String|描述 | |
-20|runFlag | |String |运行标识| |
-21|conditionResult | |Object|条件分支 | |
-22| | successNode| Array|成功跳转节点| |
-23| | failedNode|Array|失败跳转节点 | 
-24| dependence| |Object |任务依赖 |与params互斥
-25|maxRetryTimes | |String|最大重试次数 | |
-26|retryInterval | |String |重试间隔| |
-27|timeout | |Object|超时控制 | |
-28| taskInstancePriority| |String|任务优先级 | |
-29|workerGroup | |String |Worker 分组| |
-30|preTasks | |Array|前置任务 | |
+**节点数据结构如下:**
 
+| 序号 |                 参数名                  ||   类型   |     描述     |          描述          |
+|----|----------------------|----------------|--------|------------|----------------------|
+| 1  | id                   |                | String | 任务编码       |
+| 2  | type                                 || String | 类型         | SPARK                |
+| 3  | name                 |                | String | 名称         |
+| 4  | params               |                | Object | 自定义参数      | Json 格式              |
+| 5  |                      | mainClass      | String | 运行主类       |
+| 6  |                      | mainArgs       | String | 运行参数       |
+| 7  |                      | others         | String | 其他参数       |
+| 8  |                      | mainJar        | Object | 程序 jar 包   |
+| 9  |                      | deployMode     | String | 部署模式       | local,client,cluster |
+| 10 |                      | driverCores    | String | driver核数   |
+| 11 |                      | driverMemory   | String | driver 内存数 |
+| 12 |                      | numExecutors   | String | executor数量 |
+| 13 |                      | executorMemory | String | executor内存 |
+| 14 |                      | executorCores  | String | executor核数 |
+| 15 |                      | programType    | String | 程序类型       | JAVA,SCALA,PYTHON    |
+| 16 |                      | sparkVersion   | String | Spark 版本   | SPARK1 , SPARK2      |
+| 17 |                      | localParams    | Array  | 自定义参数      |
+| 18 |                      | resourceList   | Array  | 资源文件       |
+| 19 | description          |                | String | 描述         |                      |
+| 20 | runFlag              |                | String | 运行标识       |                      |
+| 21 | conditionResult      |                | Object | 条件分支       |                      |
+| 22 |                      | successNode    | Array  | 成功跳转节点     |                      |
+| 23 |                      | failedNode     | Array  | 失败跳转节点     |
+| 24 | dependence           |                | Object | 任务依赖       | 与params互斥            |
+| 25 | maxRetryTimes        |                | String | 最大重试次数     |                      |
+| 26 | retryInterval        |                | String | 重试间隔       |                      |
+| 27 | timeout              |                | Object | 超时控制       |                      |
+| 28 | taskInstancePriority |                | String | 任务优先级      |                      |
+| 29 | workerGroup          |                | String | Worker 分组  |                      |
+| 30 | preTasks             |                | Array  | 前置任务       |                      |
 
 **节点数据样例:**
 
@@ -333,38 +332,35 @@
 }
 ```
 
-
-
 ## MapReduce(MR)节点
-**节点数据结构如下:**
-
-序号|参数名||类型|描述 |描述
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| 任务编码|
-2|type ||String |类型 |MR
-3| name| |String|名称 |
-4| params| |Object| 自定义参数 |Json 格式
-5| |mainClass |String | 运行主类
-6| |mainArgs | String| 运行参数
-7| |others | String| 其他参数
-8| |mainJar |Object | 程序 jar 包
-9| |programType | String| 程序类型|JAVA,PYTHON
-10| | localParams| Array|自定义参数
-11| | resourceList| Array|资源文件
-12|description | |String|描述 | |
-13|runFlag | |String |运行标识| |
-14|conditionResult | |Object|条件分支 | |
-15| | successNode| Array|成功跳转节点| |
-16| | failedNode|Array|失败跳转节点 | 
-17| dependence| |Object |任务依赖 |与params互斥
-18|maxRetryTimes | |String|最大重试次数 | |
-19|retryInterval | |String |重试间隔| |
-20|timeout | |Object|超时控制 | |
-21| taskInstancePriority| |String|任务优先级 | |
-22|workerGroup | |String |Worker 分组| |
-23|preTasks | |Array|前置任务 | |
 
+**节点数据结构如下:**
 
+| 序号 |                参数名                 ||   类型   |    描述     |     描述      |
+|----|----------------------|--------------|--------|-----------|-------------|
+| 1  | id                   |              | String | 任务编码      |
+| 2  | type                               || String | 类型        | MR          |
+| 3  | name                 |              | String | 名称        |
+| 4  | params               |              | Object | 自定义参数     | Json 格式     |
+| 5  |                      | mainClass    | String | 运行主类      |
+| 6  |                      | mainArgs     | String | 运行参数      |
+| 7  |                      | others       | String | 其他参数      |
+| 8  |                      | mainJar      | Object | 程序 jar 包  |
+| 9  |                      | programType  | String | 程序类型      | JAVA,PYTHON |
+| 10 |                      | localParams  | Array  | 自定义参数     |
+| 11 |                      | resourceList | Array  | 资源文件      |
+| 12 | description          |              | String | 描述        |             |
+| 13 | runFlag              |              | String | 运行标识      |             |
+| 14 | conditionResult      |              | Object | 条件分支      |             |
+| 15 |                      | successNode  | Array  | 成功跳转节点    |             |
+| 16 |                      | failedNode   | Array  | 失败跳转节点    |
+| 17 | dependence           |              | Object | 任务依赖      | 与params互斥   |
+| 18 | maxRetryTimes        |              | String | 最大重试次数    |             |
+| 19 | retryInterval        |              | String | 重试间隔      |             |
+| 20 | timeout              |              | Object | 超时控制      |             |
+| 21 | taskInstancePriority |              | String | 任务优先级     |             |
+| 22 | workerGroup          |              | String | Worker 分组 |             |
+| 23 | preTasks             |              | Array  | 前置任务      |             |
 
 **节点数据样例:**
 
@@ -420,8 +416,8 @@
 }
 ```
 
-
 ## Python节点
+
 **节点数据结构如下:**
 序号|参数名||类型|描述 |描述
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -436,7 +432,7 @@
 9|runFlag | |String |运行标识| |
 10|conditionResult | |Object|条件分支 | |
 11| | successNode| Array|成功跳转节点| |
-12| | failedNode|Array|失败跳转节点 | 
+12| | failedNode|Array|失败跳转节点 |
 13| dependence| |Object |任务依赖 |与params互斥
 14|maxRetryTimes | |String|最大重试次数 | |
 15|retryInterval | |String |重试间隔| |
@@ -445,7 +441,6 @@
 18|workerGroup | |String |Worker 分组| |
 19|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -494,43 +489,40 @@
 }
 ```
 
-
-
-
 ## Flink节点
-**节点数据结构如下:**
 
-序号|参数名||类型|描述 |描述
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| 任务编码|
-2|type ||String |类型 |FLINK
-3| name| |String|名称 |
-4| params| |Object| 自定义参数 |Json 格式
-5| |mainClass |String | 运行主类
-6| |mainArgs | String| 运行参数
-7| |others | String| 其他参数
-8| |mainJar |Object | 程序 jar 包
-9| |deployMode |String | 部署模式  |local,client,cluster
-10| |slot | String| slot数量
-11| |taskManager |String | taskManager数量
-12| |taskManagerMemory |String | taskManager内存数
-13| |jobManagerMemory |String | jobManager内存数
-14| |programType | String| 程序类型|JAVA,SCALA,PYTHON
-15| | localParams| Array|自定义参数
-16| | resourceList| Array|资源文件
-17|description | |String|描述 | |
-18|runFlag | |String |运行标识| |
-19|conditionResult | |Object|条件分支 | |
-20| | successNode| Array|成功跳转节点| |
-21| | failedNode|Array|失败跳转节点 | 
-22| dependence| |Object |任务依赖 |与params互斥
-23|maxRetryTimes | |String|最大重试次数 | |
-24|retryInterval | |String |重试间隔| |
-25|timeout | |Object|超时控制 | |
-26| taskInstancePriority| |String|任务优先级 | |
-27|workerGroup | |String |Worker 分组| |
-38|preTasks | |Array|前置任务 | |
+**节点数据结构如下:**
 
+| 序号 |                   参数名                   ||   类型   |       描述       |          描述          |
+|----|----------------------|-------------------|--------|----------------|----------------------|
+| 1  | id                   |                   | String | 任务编码           |
+| 2  | type                                    || String | 类型             | FLINK                |
+| 3  | name                 |                   | String | 名称             |
+| 4  | params               |                   | Object | 自定义参数          | Json 格式              |
+| 5  |                      | mainClass         | String | 运行主类           |
+| 6  |                      | mainArgs          | String | 运行参数           |
+| 7  |                      | others            | String | 其他参数           |
+| 8  |                      | mainJar           | Object | 程序 jar 包       |
+| 9  |                      | deployMode        | String | 部署模式           | local,client,cluster |
+| 10 |                      | slot              | String | slot数量         |
+| 11 |                      | taskManager       | String | taskManager数量  |
+| 12 |                      | taskManagerMemory | String | taskManager内存数 |
+| 13 |                      | jobManagerMemory  | String | jobManager内存数  |
+| 14 |                      | programType       | String | 程序类型           | JAVA,SCALA,PYTHON    |
+| 15 |                      | localParams       | Array  | 自定义参数          |
+| 16 |                      | resourceList      | Array  | 资源文件           |
+| 17 | description          |                   | String | 描述             |                      |
+| 18 | runFlag              |                   | String | 运行标识           |                      |
+| 19 | conditionResult      |                   | Object | 条件分支           |                      |
+| 20 |                      | successNode       | Array  | 成功跳转节点         |                      |
+| 21 |                      | failedNode        | Array  | 失败跳转节点         |
+| 22 | dependence           |                   | Object | 任务依赖           | 与params互斥            |
+| 23 | maxRetryTimes        |                   | String | 最大重试次数         |                      |
+| 24 | retryInterval        |                   | String | 重试间隔           |                      |
+| 25 | timeout              |                   | Object | 超时控制           |                      |
+| 26 | taskInstancePriority |                   | String | 任务优先级          |                      |
+| 27 | workerGroup          |                   | String | Worker 分组      |                      |
+| 28 | preTasks             |                   | Array  | 前置任务           |                      |
 
 **节点数据样例:**
 
@@ -593,33 +585,33 @@
 ```
 
 ## HTTP节点
-**节点数据结构如下:**
 
-序号|参数名||类型|描述 |描述
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| 任务编码|
-2|type ||String |类型 |HTTP
-3| name| |String|名称 |
-4| params| |Object| 自定义参数 |Json 格式
-5| |url |String | 请求地址
-6| |httpMethod | String| 请求方式|GET,POST,HEAD,PUT,DELETE
-7| | httpParams| Array|请求参数
-8| |httpCheckCondition | String| 校验条件|默认响应码200
-9| |condition |String | 校验内容
-10| | localParams| Array|自定义参数
-11|description | |String|描述 | |
-12|runFlag | |String |运行标识| |
-13|conditionResult | |Object|条件分支 | |
-14| | successNode| Array|成功跳转节点| |
-15| | failedNode|Array|失败跳转节点 | 
-16| dependence| |Object |任务依赖 |与params互斥
-17|maxRetryTimes | |String|最大重试次数 | |
-18|retryInterval | |String |重试间隔| |
-19|timeout | |Object|超时控制 | |
-20| taskInstancePriority| |String|任务优先级 | |
-21|workerGroup | |String |Worker 分组| |
-22|preTasks | |Array|前置任务 | |
+**节点数据结构如下:**
 
+| 序号 |                   参数名                    ||   类型   |    描述     |            描述            |
+|----|----------------------|--------------------|--------|-----------|--------------------------|
+| 1  | id                   |                    | String | 任务编码      |
+| 2  | type                                     || String | 类型        | HTTP                     |
+| 3  | name                 |                    | String | 名称        |
+| 4  | params               |                    | Object | 自定义参数     | Json 格式                  |
+| 5  |                      | url                | String | 请求地址      |
+| 6  |                      | httpMethod         | String | 请求方式      | GET,POST,HEAD,PUT,DELETE |
+| 7  |                      | httpParams         | Array  | 请求参数      |
+| 8  |                      | httpCheckCondition | String | 校验条件      | 默认响应码200                 |
+| 9  |                      | condition          | String | 校验内容      |
+| 10 |                      | localParams        | Array  | 自定义参数     |
+| 11 | description          |                    | String | 描述        |                          |
+| 12 | runFlag              |                    | String | 运行标识      |                          |
+| 13 | conditionResult      |                    | Object | 条件分支      |                          |
+| 14 |                      | successNode        | Array  | 成功跳转节点    |                          |
+| 15 |                      | failedNode         | Array  | 失败跳转节点    |
+| 16 | dependence           |                    | Object | 任务依赖      | 与params互斥                |
+| 17 | maxRetryTimes        |                    | String | 最大重试次数    |                          |
+| 18 | retryInterval        |                    | String | 重试间隔      |                          |
+| 19 | timeout              |                    | Object | 超时控制      |                          |
+| 20 | taskInstancePriority |                    | String | 任务优先级     |                          |
+| 21 | workerGroup          |                    | String | Worker 分组 |                          |
+| 22 | preTasks             |                    | Array  | 前置任务      |                          |
 
 **节点数据样例:**
 
@@ -677,8 +669,6 @@
 }
 ```
 
-
-
 ## DataX节点
 
 **节点数据结构如下:**
@@ -692,7 +682,7 @@
 6| |dsType |String | 源数据库类型
 7| |dataSource |Int | 源数据库ID
 8| |dtType | String| 目标数据库类型
-9| |dataTarget | Int| 目标数据库ID 
+9| |dataTarget | Int| 目标数据库ID
 10| |sql |String | SQL语句
 11| |targetTable |String | 目标表
 12| |jobSpeedByte |Int | 限流(字节数)
@@ -705,7 +695,7 @@
 19|runFlag | |String |运行标识| |
 20|conditionResult | |Object|条件分支 | |
 21| | successNode| Array|成功跳转节点| |
-22| | failedNode|Array|失败跳转节点 | 
+22| | failedNode|Array|失败跳转节点 |
 23| dependence| |Object |任务依赖 |与params互斥
 24|maxRetryTimes | |String|最大重试次数 | |
 25|retryInterval | |String |重试间隔| |
@@ -714,11 +704,8 @@
 28|workerGroup | |String |Worker 分组| |
 29|preTasks | |Array|前置任务 | |
 
-
-
 **节点数据样例:**
 
-
 ```bash
 {
     "type":"DATAX",
@@ -789,7 +776,7 @@
 13|runFlag | |String |运行标识| |
 14|conditionResult | |Object|条件分支 | |
 15| | successNode| Array|成功跳转节点| |
-16| | failedNode|Array|失败跳转节点 | 
+16| | failedNode|Array|失败跳转节点 |
 17| dependence| |Object |任务依赖 |与params互斥
 18|maxRetryTimes | |String|最大重试次数 | |
 19|retryInterval | |String |重试间隔| |
@@ -798,9 +785,6 @@
 22|workerGroup | |String |Worker 分组| |
 23|preTasks | |Array|前置任务 | |
 
-
-
-
 **节点数据样例:**
 
 ```bash
@@ -860,7 +844,7 @@
 6|runFlag | |String |运行标识| |
 7|conditionResult | |Object|条件分支 | |
 8| | successNode| Array|成功跳转节点| |
-9| | failedNode|Array|失败跳转节点 | 
+9| | failedNode|Array|失败跳转节点 |
 10| dependence| |Object |任务依赖 |与params互斥
 11|maxRetryTimes | |String|最大重试次数 | |
 12|retryInterval | |String |重试间隔| |
@@ -869,7 +853,6 @@
 15|workerGroup | |String |Worker 分组| |
 16|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -912,8 +895,8 @@
 }
 ```
 
-
 ## 子流程节点
+
 **节点数据结构如下:**
 序号|参数名||类型|描述 |描述
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -926,7 +909,7 @@
 7|runFlag | |String |运行标识| |
 8|conditionResult | |Object|条件分支 | |
 9| | successNode| Array|成功跳转节点| |
-10| | failedNode|Array|失败跳转节点 | 
+10| | failedNode|Array|失败跳转节点 |
 11| dependence| |Object |任务依赖 |与params互斥
 12|maxRetryTimes | |String|最大重试次数 | |
 13|retryInterval | |String |重试间隔| |
@@ -935,7 +918,6 @@
 16|workerGroup | |String |Worker 分组| |
 17|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -972,9 +954,8 @@
         }
 ```
 
-
-
 ## 依赖(DEPENDENT)节点
+
 **节点数据结构如下:**
 序号|参数名||类型|描述 |描述
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -989,7 +970,7 @@
 9|runFlag | |String |运行标识| |
 10|conditionResult | |Object|条件分支 | |
 11| | successNode| Array|成功跳转节点| |
-12| | failedNode|Array|失败跳转节点 | 
+12| | failedNode|Array|失败跳转节点 |
 13| dependence| |Object |任务依赖 |与params互斥
 14| | relation|String |关系 |AND,OR
 15| | dependTaskList|Array |依赖任务清单 |
@@ -1000,7 +981,6 @@
 20|workerGroup | |String |Worker 分组| |
 21|preTasks | |Array|前置任务 | |
 
-
 **节点数据样例:**
 
 ```bash
@@ -1132,3 +1112,4 @@
             ]
         }
 ```
+
diff --git a/docs/docs/zh/contribute/api-standard.md b/docs/docs/zh/contribute/api-standard.md
index 0d528cea26..fbdecc678e 100644
--- a/docs/docs/zh/contribute/api-standard.md
+++ b/docs/docs/zh/contribute/api-standard.md
@@ -1,9 +1,11 @@
 # API 设计规范
+
 规范统一的 API 是项目设计的基石。DolphinScheduler 的 API 遵循 REST ful 标准,REST ful 是目前最流行的一种互联网软件架构,它结构清晰,符合标准,易于理解,扩展方便。
 
 本文以 DolphinScheduler 项目的接口为样例,讲解如何构造具有 Restful 风格的 API。
 
 ## 1. URI 设计
+
 REST 即为 Representational State Transfer 的缩写,即“表现层状态转化”。
 
 “表现层”指的就是“资源”。资源对应网络上的一种实体,例如:一段文本,一张图片,一种服务。且每种资源都对应一个特定的 URI。
@@ -15,36 +17,43 @@ Restful URI 的设计基于资源:
 + 子资源下的单个资源:`/instances/{instanceId}/tasks/{taskId}`;
 
 ## 2. Method 设计
+
 我们需要通过 URI 来定位某种资源,再通过 Method,或者在路径后缀声明动作来体现对资源的操作。
 
 ### ① 查询操作 - GET
+
 通过 URI 来定位要资源,通过 GET 表示查询。
 
 + 当 URI 为一类资源时表示查询一类资源,例如下面样例表示分页查询 `alter-groups`。
+
 ```
 Method: GET
 /dolphinscheduler/alert-groups
 ```
 
 + 当 URI 为单个资源时表示查询此资源,例如下面样例表示查询对应的 `alter-group`。
+
 ```
 Method: GET
 /dolphinscheduler/alter-groups/{id}
 ```
 
 + 此外,我们还可以根据 URI 来表示查询子资源,如下:
+
 ```
 Method: GET
 /dolphinscheduler/projects/{projectId}/tasks
 ```
 
 **上述的关于查询的方式都表示分页查询,如果我们需要查询全部数据的话,则需在 URI 的后面加 `/list` 来区分。分页查询和查询全部不要混用一个 API。**
+
 ```
 Method: GET
 /dolphinscheduler/alert-groups/list
 ```
 
 ### ② 创建操作 - POST
+
 通过 URI 来定位要创建的资源类型,通过 POST 表示创建动作,并且将创建后的 `id` 返回给请求者。
 
 + 下面样例表示创建一个 `alter-group`:
@@ -55,57 +64,72 @@ Method: POST
 ```
 
 + 创建子资源也是类似的操作:
+
 ```
 Method: POST
 /dolphinscheduler/alter-groups/{alterGroupId}/tasks
 ```
 
 ### ③ 修改操作 - PUT
+
 通过 URI 来定位某一资源,通过 PUT 指定对其修改。
+
 ```
 Method: PUT
 /dolphinscheduler/alter-groups/{alterGroupId}
 ```
 
 ### ④ 删除操作 -DELETE
+
 通过 URI 来定位某一资源,通过 DELETE 指定对其删除。
 
 + 下面例子表示删除 `alterGroupId` 对应的资源:
+
 ```
 Method: DELETE
 /dolphinscheduler/alter-groups/{alterGroupId}
 ```
 
 + 批量删除:对传入的 id 数组进行批量删除,使用 POST 方法。**(这里不要用 DELETE 方法,因为 DELETE 请求的 body 在语义上没有任何意义,而且有可能一些网关,代理,防火墙在收到 DELETE 请求后会把请求的 body 直接剥离掉。)**
+
 ```
 Method: POST
 /dolphinscheduler/alter-groups/batch-delete
 ```
 
 ### ⑤ 部分更新操作 -PATCH
+
 通过 URI 来定位某一资源,通过 PATCH 指定对其部分更新。
 
 + 下面例子表示部分更新 `alterGroupId` 对应的资源:
+
 ```
 Method: PATCH
 /dolphinscheduler/alter-groups/{alterGroupId}
 ```
 
 ### ⑥ 其他操作
+
 除增删改查外的操作,我们同样也通过 `url` 定位到对应的资源,然后再在路径后面追加对其进行的操作。例如:
+
 ```
 /dolphinscheduler/alert-groups/verify-name
 /dolphinscheduler/projects/{projectCode}/process-instances/{code}/view-gantt
 ```
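+
+为便于理解,下面给出一个遵循上述 URI 与 Method 约定的控制器骨架(基于 Spring MVC 的写法,类名、方法名与返回类型均为示意,并非项目中的实际代码):
+
+```java
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.springframework.web.bind.annotation.*;
+
+@RestController
+@RequestMapping("/dolphinscheduler/alert-groups")
+public class AlertGroupDemoController {
+
+    // GET /dolphinscheduler/alert-groups :分页查询一类资源
+    @GetMapping
+    public List<String> queryAlertGroupPaging(@RequestParam int pageNo, @RequestParam int pageSize) {
+        return Collections.emptyList();
+    }
+
+    // GET /dolphinscheduler/alert-groups/{id} :查询单个资源
+    @GetMapping("/{id}")
+    public String queryAlertGroupById(@PathVariable int id) {
+        return "alert-group-" + id;
+    }
+
+    // POST /dolphinscheduler/alert-groups :创建资源并返回新资源的 id
+    @PostMapping
+    public int createAlertGroup(@RequestBody Map<String, Object> alertGroup) {
+        return 1;
+    }
+
+    // POST /dolphinscheduler/alert-groups/batch-delete :批量删除使用 POST 而非 DELETE
+    @PostMapping("/batch-delete")
+    public void batchDeleteAlertGroup(@RequestBody List<Integer> ids) {
+        // 省略具体删除逻辑
+    }
+}
+```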
 
 ## 3. 参数设计
+
 参数分为两种,一种是请求参数(Request Param 或 Request Body),另一种是路径参数(Path Param)。
 
 参数变量必须用小驼峰表示,并且在分页场景中,用户输入的参数小于 1,则前端需要返给后端 1 表示请求第一页;当后端发现用户输入的参数大于总页数时,直接返回最后一页。
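+
+分页参数的兜底逻辑可以参考如下示意(方法名为假设,仅作演示,实际以前后端各自的实现为准):
+
+```java
+// 示意:对用户传入的页码做兜底处理
+public static int normalizePageNo(int requestedPageNo, int totalPages) {
+    if (requestedPageNo < 1) {
+        // 小于 1 时按第一页处理
+        return 1;
+    }
+    // 大于总页数时直接返回最后一页(总页数至少按 1 页计)
+    return Math.min(requestedPageNo, Math.max(totalPages, 1));
+}
+```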
 
 ## 4. 其他设计
+
 ### 基础路径
+
 整个项目的 URI 需要以 `/<project_name>` 作为基础路径,从而标识这类 API 都是项目下的,即:
+
 ```
 /dolphinscheduler
-```
\ No newline at end of file
+```
+
diff --git a/docs/docs/zh/contribute/api-test.md b/docs/docs/zh/contribute/api-test.md
index 01760ea05c..e41892207c 100644
--- a/docs/docs/zh/contribute/api-test.md
+++ b/docs/docs/zh/contribute/api-test.md
@@ -1,4 +1,5 @@
 # DolphinScheduler — API 测试
+
 ## 前置知识:
 
 ### API 测试与单元测试的区别
@@ -47,10 +48,8 @@ public final class LoginPage {
 
 在登陆页面(LoginPage)只定义接口请求的入参规范,对于接口请求出参只定义统一的基础响应结构,接口实际返回的data数据则再实际的测试用例中测试。主要测试接口的输入和输出是否能够符合测试用例的要求。
 
-
 ### API-Cases
 
-
 下面以租户管理测试为例,前文已经说明,我们使用 docker-compose 进行部署,所以每个测试案例,都需要以注解的形式引入对应的文件。
 
 使用 OkHttpClient 框架来进行 HTTP 请求。在每个测试案例开始之前都需要进行一些准备工作。比如:登录用户、创建对应的租户(根据具体的测试案例而定)。
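+
+下面是一段使用 OkHttpClient 发起登录请求的示意代码(URL、参数与类名均为假设,并非仓库中的实际实现):
+
+```java
+import okhttp3.FormBody;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class LoginRequestDemo {
+    public static void main(String[] args) throws Exception {
+        OkHttpClient client = new OkHttpClient();
+
+        // 以表单方式提交登录参数
+        RequestBody body = new FormBody.Builder()
+                .add("userName", "admin")
+                .add("userPassword", "dolphinscheduler123")
+                .build();
+
+        Request request = new Request.Builder()
+                .url("http://localhost:12345/dolphinscheduler/login")
+                .post(body)
+                .build();
+
+        // 执行请求并读取统一的基础响应结构
+        try (Response response = client.newCall(request).execute()) {
+            System.out.println(response.code());
+            System.out.println(response.body().string());
+        }
+    }
+}
+```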
@@ -83,7 +82,6 @@ public final class LoginPage {
 
 https://github.com/apache/dolphinscheduler/tree/dev/dolphinscheduler-api-test/dolphinscheduler-api-test-case/src/test/java/org/apache/dolphinscheduler/api.test/cases
 
-
 ## 补充
 
 在本地运行的时候,首先需要启动相应的本地服务,可以参考该页面: [环境搭建](./development-environment-setup.md)
diff --git a/docs/docs/zh/contribute/architecture-design.md b/docs/docs/zh/contribute/architecture-design.md
index 35bee1a1da..bd594efa42 100644
--- a/docs/docs/zh/contribute/architecture-design.md
+++ b/docs/docs/zh/contribute/architecture-design.md
@@ -1,7 +1,9 @@
 ## 系统架构设计
+
 在对调度系统架构说明之前,我们先来认识一下调度系统常用的名词
 
 ### 1.名词解释
+
 **DAG:** 全称Directed Acyclic Graph,简称DAG。工作流中的Task任务以有向无环图的形式组装起来,从入度为零的节点进行拓扑遍历,直到无后继节点为止。举例如下图:
 
 <p align="center">
@@ -36,6 +38,7 @@
 ### 2.系统架构
 
 #### 2.1 系统架构图
+
 <p align="center">
   <img src="../../../img/architecture.jpg" alt="系统架构图"  />
   <p align="center">
@@ -45,56 +48,58 @@
 
 #### 2.2 架构说明
 
-* **MasterServer** 
+* **MasterServer**
 
-    MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
-    MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
+  MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
+  MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
 
-    ##### 该服务内主要包含:
+  ##### 该服务内主要包含:
 
-    - **Distributed Quartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作
+  - **Distributed Quartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作
 
-    - **MasterSchedulerThread**是一个扫描线程,定时扫描数据库中的 **command** 表,根据不同的**命令类型**进行不同的业务操作
+  - **MasterSchedulerThread**是一个扫描线程,定时扫描数据库中的 **command** 表,根据不同的**命令类型**进行不同的业务操作
 
-    - **MasterExecThread**主要是负责DAG任务切分、任务提交监控、各种不同命令类型的逻辑处理
+  - **MasterExecThread**主要是负责DAG任务切分、任务提交监控、各种不同命令类型的逻辑处理
 
-    - **MasterTaskExecThread**主要负责任务的持久化
+  - **MasterTaskExecThread**主要负责任务的持久化
 
-* **WorkerServer** 
+* **WorkerServer**
 
-     WorkerServer也采用分布式无中心设计理念,WorkerServer主要负责任务的执行和提供日志服务。WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
-     ##### 该服务包含:
-     - **FetchTaskThread**主要负责不断从**Task Queue**中领取任务,并根据不同任务类型调用**TaskScheduleThread**对应执行器。
+  WorkerServer也采用分布式无中心设计理念,WorkerServer主要负责任务的执行和提供日志服务。WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
 
-* **ZooKeeper** 
+  ##### 该服务包含:
 
-    ZooKeeper服务,系统中的MasterServer和WorkerServer节点都通过ZooKeeper来进行集群管理和容错。另外系统还基于ZooKeeper进行事件监听和分布式锁。
-    我们也曾经基于Redis实现过队列,不过我们希望DolphinScheduler依赖到的组件尽量地少,所以最后还是去掉了Redis实现。
+  - **FetchTaskThread**主要负责不断从**Task Queue**中领取任务,并根据不同任务类型调用**TaskScheduleThread**对应执行器。
+* **ZooKeeper**
 
-* **Task Queue** 
+  ZooKeeper服务,系统中的MasterServer和WorkerServer节点都通过ZooKeeper来进行集群管理和容错。另外系统还基于ZooKeeper进行事件监听和分布式锁。
+  我们也曾经基于Redis实现过队列,不过我们希望DolphinScheduler依赖到的组件尽量地少,所以最后还是去掉了Redis实现。
 
-    提供任务队列的操作,目前队列也是基于Zookeeper来实现。由于队列中存的信息较少,不必担心队列里数据过多的情况,实际上我们压测过百万级数据存队列,对系统稳定性和性能没影响。
+* **Task Queue**
 
-* **Alert** 
+  提供任务队列的操作,目前队列也是基于Zookeeper来实现。由于队列中存的信息较少,不必担心队列里数据过多的情况,实际上我们压测过百万级数据存队列,对系统稳定性和性能没影响。
 
-    提供告警相关接口,接口主要包括两种类型的告警数据的存储、查询和通知功能。其中通知功能又有**邮件通知**和**SNMP(暂未实现)**两种。
+* **Alert**
 
-* **API** 
+  提供告警相关接口,接口主要包括两种类型的告警数据的存储、查询和通知功能。其中通知功能又有**邮件通知**和**SNMP(暂未实现)**两种。
 
-    API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
-    接口包括工作流的创建、定义、查询、修改、发布、下线、手工启动、停止、暂停、恢复、从该节点开始执行等等。
+* **API**
 
-* **UI** 
+  API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
+  接口包括工作流的创建、定义、查询、修改、发布、下线、手工启动、停止、暂停、恢复、从该节点开始执行等等。
 
-    系统的前端页面,提供系统的各种可视化操作界面,详见 [快速开始](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/about/introduction.html) 部分。
+* **UI**
+
+  系统的前端页面,提供系统的各种可视化操作界面,详见 [快速开始](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/about/introduction.html) 部分。
 
 #### 2.3 架构设计思想
 
-##### 一、去中心化vs中心化 
+##### 一、去中心化vs中心化
 
 ###### 中心化思想
 
 中心化的设计理念比较简单,分布式集群中的节点按照角色分工,大体上分为两种角色:
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave角色"  width="50%" />
  </p>
@@ -102,16 +107,13 @@
 - Master的角色主要负责任务分发并监督Slave的健康状态,可以动态的将任务均衡到Slave上,以致Slave节点不至于“忙死”或”闲死”的状态。
 - Worker的角色主要负责任务的执行工作并维护和Master的心跳,以便Master可以分配任务给Slave。
 
-
-
 中心化思想设计存在的问题:
 
 - 一旦Master出现了问题,则群龙无首,整个集群就会崩溃。为了解决这个问题,大多数Master/Slave架构模式都采用了主备Master的设计方案,可以是热备或者冷备,也可以是自动切换或手动切换,而且越来越多的新系统都开始具备自动选举切换Master的能力,以提升系统的可用性。
 - 另外一个问题是如果Scheduler在Master上,虽然可以支持一个DAG中不同的任务运行在不同的机器上,但是会产生Master的过负载。如果Scheduler在Slave上,则一个DAG中所有的任务都只能在某一台机器上进行作业提交,则并行任务比较多的时候,Slave的压力可能会比较大。
 
-
-
 ###### 去中心化
+
  <p align="center"
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="去中心化"  width="50%" />
  </p>
@@ -119,29 +121,27 @@
 - 在去中心化设计里,通常没有Master/Slave的概念,所有的角色都是一样的,地位是平等的,全球互联网就是一个典型的去中心化的分布式系统,联网的任意节点设备down机,都只会影响很小范围的功能。
 - 去中心化设计的核心设计在于整个分布式系统中不存在一个区别于其他节点的”管理者”,因此不存在单点故障问题。但由于不存在” 管理者”节点所以每个节点都需要跟其他节点通信才得到必须要的机器信息,而分布式系统通信的不可靠性,则大大增加了上述功能的实现难度。
 - 实际上,真正去中心化的分布式系统并不多见。反而动态中心化分布式系统正在不断涌出。在这种架构下,集群中的管理者是被动态选择出来的,而不是预置的,并且集群在发生故障的时候,集群的节点会自发的举行"会议"来选举新的"管理者"去主持工作。最典型的案例就是ZooKeeper及Go语言实现的Etcd。
-
-
-
 - DolphinScheduler的去中心化是Master/Worker注册到Zookeeper中,实现Master集群和Worker集群无中心,并使用Zookeeper分布式锁来选举其中的一台Master或Worker为“管理者”来执行任务。
 
-#####  二、分布式锁实践
+##### 二、分布式锁实践
 
 DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Master执行Scheduler,或者只有一台Worker执行任务的提交。
 1. 获取分布式锁的核心流程算法如下
+
  <p align="center">
    <img src="../../../img/architecture-design/distributed_lock.png" alt="获取分布式锁流程"  width="70%" />
  </p>
 
 2. DolphinScheduler中Scheduler线程分布式锁实现流程图:
+
  <p align="center">
    <img src="../../../img/architecture-design/distributed_lock_procss.png" alt="获取分布式锁流程" />
  </p>
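+
+获取分布式锁的核心过程可以用下面基于 Curator 的示意代码来理解(锁路径与类名均为假设,并非项目中的实际实现):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class SchedulerLockDemo {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory
+                .newClient("127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // 同一时刻只有拿到该锁的 Master 才能执行 Scheduler 逻辑
+        InterProcessMutex mutex = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        mutex.acquire();
+        try {
+            // 在这里执行需要互斥的调度逻辑
+        } finally {
+            // 释放锁,供其他 Master 竞争
+            mutex.release();
+        }
+    }
+}
+```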
 
-
 ##### 三、线程不足循环等待问题
 
--  如果一个DAG中没有子流程,则如果Command中的数据条数大于线程池设置的阈值,则直接流程等待或失败。
--  如果一个大的DAG中嵌套了很多子流程,如下图则会产生“死等”状态:
+- 如果一个DAG中没有子流程,则如果Command中的数据条数大于线程池设置的阈值,则直接流程等待或失败。
+- 如果一个大的DAG中嵌套了很多子流程,如下图则会产生“死等”状态:
 
  <p align="center">
    <img src="../../../img/architecture-design/lack_thread.png" alt="线程不足循环等待问题"  width="70%" />
@@ -150,7 +150,7 @@ DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Mast
 
 对于启动新Master来打破僵局,似乎有点差强人意,于是我们提出了以下三种方案来降低这种风险:
 
-1. 计算所有Master的线程总和,然后对每一个DAG需要计算其需要的线程数,也就是在DAG流程执行之前做预计算。因为是多Master线程池,所以总线程数不太可能实时获取。 
+1. 计算所有Master的线程总和,然后对每一个DAG需要计算其需要的线程数,也就是在DAG流程执行之前做预计算。因为是多Master线程池,所以总线程数不太可能实时获取。
 2. 对单Master线程池进行判断,如果线程池已经满了,则让线程直接失败。
 3. 增加一种资源不足的Command类型,如果线程池不足,则将主流程挂起。这样线程池就有了新的线程,可以让资源不足挂起的流程重新唤醒执行。
 
@@ -158,8 +158,8 @@ DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Mast
 
 于是我们选择了第三种方式来解决线程不足的问题。
 
-
 ##### 四、容错设计
+
 容错分为服务宕机容错和任务重试,服务宕机容错又分为Master容错和Worker容错两种情况
 
 ###### 1. 宕机容错
@@ -171,8 +171,6 @@ DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Mast
  </p>
 其中Master监控其他Master和Worker的目录,如果监听到remove事件,则会根据具体的业务逻辑进行流程实例容错或者任务实例容错。
 
-
-
 - Master容错流程图:
 
  <p align="center">
@@ -180,8 +178,6 @@ DolphinScheduler使用ZooKeeper分布式锁来实现同一时刻只有一台Mast
  </p>
 ZooKeeper Master容错完成之后则重新由DolphinScheduler中Scheduler线程调度,遍历 DAG 找到”正在运行”和“提交成功”的任务,对”正在运行”的任务监控其任务实例的状态,对”提交成功”的任务需要判断Task Queue中是否已经存在,如果存在则同样监控任务实例的状态,如果不存在则重新提交任务实例。
 
-
-
 - Worker容错流程图:
 
  <p align="center">
@@ -190,7 +186,7 @@ ZooKeeper Master容错完成之后则重新由DolphinScheduler中Scheduler线程
 
 Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则接管任务并进行重新提交。
 
- 注意:由于” 网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
+注意:由于” 网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
 
 ###### 2.任务失败重试
 
@@ -200,8 +196,6 @@ Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则
 - 流程失败恢复是流程级别的,是手动进行的,恢复是从只能**从失败的节点开始执行**或**从当前节点开始执行**
 - 流程失败重跑也是流程级别的,是手动进行的,重跑是从开始节点进行
 
-
-
 接下来说正题,我们将工作流中的任务节点分了两种类型。
 
 - 一种是业务节点,这种节点都对应一个实际的脚本或者处理语句,比如Shell节点,MR节点、Spark节点、依赖节点等。
@@ -212,67 +206,63 @@ Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则
 
 如果工作流中有任务失败达到最大重试次数,工作流就会失败停止,失败的工作流可以手动进行重跑操作或者流程恢复操作
 
-
-
 ##### 五、任务优先级设计
+
 在早期调度设计中,如果没有优先级设计,采用公平调度设计的话,会遇到先行提交的任务可能会和后继提交的任务同时完成的情况,而不能做到设置流程或者任务的优先级,因此我们对此进行了重新设计,目前我们设计如下:
 
--  按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
-    - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务
+- 按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
+  - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务
+    - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
 
-        - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
-             </p>
+        <p align="center">
+           <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
+         </p>
 
-        - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
-             </p>
+    - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
 
+        <p align="center">
+           <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
+         </p>
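+
+上面“通过字符串比较取出最高优先级任务”的思路可以用如下示意代码说明(此处将优先级编码为数字,数值越小优先级越高;实际实现中 id 需定长或数值化,否则字符串比较会产生歧义):
+
+```java
+import java.util.TreeSet;
+
+public class TaskPriorityQueueDemo {
+    public static void main(String[] args) {
+        // key 形如:流程实例优先级_流程实例id_任务优先级_任务id
+        TreeSet<String> taskQueue = new TreeSet<>();
+        taskQueue.add("1_1001_2_20001");
+        taskQueue.add("0_1002_3_20002"); // 流程实例优先级更高
+        taskQueue.add("0_1002_1_20003"); // 同一流程实例内,任务优先级更高
+
+        // 自然序最小的字符串即为最需要优先执行的任务
+        System.out.println(taskQueue.first()); // 输出 0_1002_1_20003
+    }
+}
+```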
 
 ##### 六、Logback和gRPC实现日志访问
 
--  由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
-  -  将日志放到ES搜索引擎上
-  -  通过gRPC通信获取远程日志信息
-
--  介于考虑到尽可能的DolphinScheduler的轻量级性,所以选择了gRPC实现远程访问日志信息。
+- 由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
+  - 将日志放到ES搜索引擎上
+  - 通过gRPC通信获取远程日志信息
+- 考虑到要尽可能保持DolphinScheduler的轻量级,所以选择了gRPC实现远程访问日志信息。
 
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc远程访问"  width="60%" />
  </p>
 
-
 - 我们使用自定义Logback的FileAppender和Filter功能,实现每个任务实例生成一个日志文件。
 - FileAppender主要实现如下:
 
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
+```java
+/**
+ * task log appender
+ */
+public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+
+    ...
+
+   @Override
+   protected void append(ILoggingEvent event) {
+
+       if (currentlyActiveFile == null){
+           currentlyActiveFile = getFile();
+       }
+       String activeFile = currentlyActiveFile;
+       // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+       String threadName = event.getThreadName();
+       String[] threadNameArr = threadName.split("-");
+       // logId = processDefineId_processInstanceId_taskInstanceId
+       String logId = threadNameArr[1];
+       ...
+       super.subAppend(event);
+   }
 }
- ```
-
+```
 
 以/流程定义id/流程实例id/任务实例id.log的形式生成日志
 
@@ -280,22 +270,23 @@ Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则
 
 - TaskLogFilter实现如下:
 
- ```java
- /**
- *  task log filter
- */
+```java
+/**
+*  task log filter
+*/
 public class TaskLogFilter extends Filter<ILoggingEvent> {
 
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
+   @Override
+   public FilterReply decide(ILoggingEvent event) {
+       if (event.getThreadName().startsWith("TaskLogInfo-")){
+           return FilterReply.ACCEPT;
+       }
+       return FilterReply.DENY;
+   }
 }
- ```
+```
 
 ### 总结
+
 本文从调度出发,初步介绍了大数据分布式工作流调度系统--DolphinScheduler的架构原理及实现思路。未完待续
 
diff --git a/docs/docs/zh/contribute/backend/mechanism/overview.md b/docs/docs/zh/contribute/backend/mechanism/overview.md
index 22bed2737f..7d0ff0a094 100644
--- a/docs/docs/zh/contribute/backend/mechanism/overview.md
+++ b/docs/docs/zh/contribute/backend/mechanism/overview.md
@@ -1,6 +1,6 @@
 # 综述
 
 <!-- TODO 由于 side menu 不支持多个等级,所以新建了一个leading page存放 -->
-
 * [全局参数](global-parameter.md)
 * [switch任务类型](task/switch.md)
+
diff --git a/docs/docs/zh/contribute/backend/mechanism/task/switch.md b/docs/docs/zh/contribute/backend/mechanism/task/switch.md
index 27ed7f9cfa..7c27be6f49 100644
--- a/docs/docs/zh/contribute/backend/mechanism/task/switch.md
+++ b/docs/docs/zh/contribute/backend/mechanism/task/switch.md
@@ -6,3 +6,4 @@ Switch任务类型的工作流程如下
 * SwitchTaskExecThread从上到下(用户在页面上定义的表达式顺序)处理switch中定义的表达式,从varPool中获取变量的值,通过js解析表达式,如果表达式返回true,则停止检查,并且记录该表达式的顺序,这里我们记录为resultConditionLocation。SwitchTaskExecThread的任务便结束了。
 * 当switch节点运行结束之后,如果没有发生错误(较为常见的是用户定义的表达式不合规范或参数名有问题),这个时候MasterExecThread.submitPostNode会获取DAG的下游节点继续执行。
 * DagHelper.parsePostNodes中如果发现当前节点(刚刚运行完成功的节点)是switch节点的话,会获取resultConditionLocation,将SwitchParameters中除了resultConditionLocation以外的其他分支全部skip掉。这样留下来的就只有需要执行的分支了。
+
diff --git a/docs/docs/zh/contribute/backend/spi/alert.md b/docs/docs/zh/contribute/backend/spi/alert.md
index 21ea651967..78dec78224 100644
--- a/docs/docs/zh/contribute/backend/spi/alert.md
+++ b/docs/docs/zh/contribute/backend/spi/alert.md
@@ -26,7 +26,6 @@ DolphinScheduler 正在处于微内核 + 插件化的架构更改之中,所有
 
   该模块是目前我们提供的插件,目前我们已经支持数十种插件,如 Email、DingTalk、Script等。
 
-
 #### Alert SPI 主要类信息:
 
 AlertChannelFactory
@@ -64,31 +63,39 @@ alert_spi 具体设计可见 issue:[Alert Plugin Design](https://github.com/ap
   钉钉群聊机器人告警
 
   相关参数配置可以参考钉钉机器人文档。
+
 * EnterpriseWeChat
 
   企业微信告警通知
 
   相关参数配置可以参考企业微信机器人文档。
+
 * Script
 
   我们实现了 Shell 脚本告警,我们会将相关告警参数透传给脚本,你可以在 Shell 中实现你的相关告警逻辑,如果你需要对接内部告警应用,这是一种不错的方法。
+
 * FeiShu
 
   飞书告警通知
+
 * Slack
 
   Slack告警通知
+
 * PagerDuty
 
   PagerDuty告警通知
+
 * WebexTeams
 
   WebexTeams告警通知
   相关参数配置可以参考WebexTeams文档。
+
 * Telegram
 
   Telegram告警通知
   相关参数配置可以参考Telegram文档。
+
 * Http
 
   我们实现了Http告警,调用大部分的告警插件最终都是Http请求,如果我们没有支持你常用插件,可以使用Http来实现你的告警需求,同时也欢迎将你常用插件贡献到社区。
diff --git a/docs/docs/zh/contribute/backend/spi/registry.md b/docs/docs/zh/contribute/backend/spi/registry.md
index 36c4d1f00f..d53e21af85 100644
--- a/docs/docs/zh/contribute/backend/spi/registry.md
+++ b/docs/docs/zh/contribute/backend/spi/registry.md
@@ -6,9 +6,10 @@
 
 * 注册中心插件配置, 以Zookeeper 为例 (registry.properties)
   dolphinscheduler-service/src/main/resources/registry.properties
+
   ```registry.properties
-   registry.plugin.name=zookeeper
-   registry.servers=127.0.0.1:2181
+  registry.plugin.name=zookeeper
+  registry.servers=127.0.0.1:2181
   ```
 
 具体配置信息请参考具体插件提供的参数信息,例如 zk:`org/apache/dolphinscheduler/plugin/registry/zookeeper/ZookeeperConfiguration.java`
@@ -19,6 +20,7 @@
 `dolphinscheduler-registry-api` 定义了实现插件的标准,当你需要扩展插件的时候只需要实现 `org.apache.dolphinscheduler.registry.api.RegistryFactory` 即可。
 
 `dolphinscheduler-registry-plugin` 模块下是我们目前所提供的注册中心插件。
+
 #### FAQ
 
 1:registry connect timeout
diff --git a/docs/docs/zh/contribute/e2e-test.md b/docs/docs/zh/contribute/e2e-test.md
index 13b8d4ab1e..9747a88550 100644
--- a/docs/docs/zh/contribute/e2e-test.md
+++ b/docs/docs/zh/contribute/e2e-test.md
@@ -1,4 +1,5 @@
 # DolphinScheduler — E2E 自动化测试
+
 ## 一、前置知识:
 
 ### 1、E2E 测试与单元测试的区别
@@ -76,31 +77,31 @@ public final class LoginPage extends NavBarPage {
 在安全中心页面(SecurityPage)提供了 goToTab 方法,用于测试对应侧栏的跳转,主要包括:租户管理(TenantPage)、用户管理(UserPage)、工作组管理(WorkerGroupPage)和队列管理(QueuePage)。这些页面的实现方式同理,主要测试表单的输入、增加和删除按钮是否能够返回出对应的页面。
 
 ```java
- public <T extends SecurityPage.Tab> T goToTab(Class<T> tab) {
-        if (tab == TenantPage.class) {
-            WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menuTenantManageElement);
-            return tab.cast(new TenantPage(driver));
-        }
-        if (tab == UserPage.class) {
-            WebElement menUserManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menUserManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menUserManageElement);
-            return tab.cast(new UserPage(driver));
-        }
-        if (tab == WorkerGroupPage.class) {
-            WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
-                    .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
-            ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
-            return tab.cast(new WorkerGroupPage(driver));
-        }
-        if (tab == QueuePage.class) {
-            menuQueueManage().click();
-            return tab.cast(new QueuePage(driver));
-        }
-        throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
-    }
+public <T extends SecurityPage.Tab> T goToTab(Class<T> tab) {
+       if (tab == TenantPage.class) {
+           WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
+                   .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
+           ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menuTenantManageElement);
+           return tab.cast(new TenantPage(driver));
+       }
+       if (tab == UserPage.class) {
+           WebElement menUserManageElement = new WebDriverWait(driver, 60)
+                   .until(ExpectedConditions.elementToBeClickable(menUserManage));
+           ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menUserManageElement);
+           return tab.cast(new UserPage(driver));
+       }
+       if (tab == WorkerGroupPage.class) {
+           WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
+                   .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
+           ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
+           return tab.cast(new WorkerGroupPage(driver));
+       }
+       if (tab == QueuePage.class) {
+           menuQueueManage().click();
+           return tab.cast(new QueuePage(driver));
+       }
+       throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
+   }
 ```
 
 ![SecurityPage](../../../img/e2e-test/SecurityPage.png)
@@ -145,14 +146,14 @@ public final class LoginPage extends NavBarPage {
 使用 Selenium 所提供的 RemoteWebDriver 来加载浏览器。在每个测试案例开始之前都需要进行一些准备工作。比如:登录用户、跳转到对应的页面(根据具体的测试案例而定)。
 
 ```java
-    @BeforeAll
-    public static void setup() {
-        new LoginPage(browser)
-                .login("admin", "dolphinscheduler123") // 登录进入租户界面
-                .goToNav(SecurityPage.class) // 安全中心
-                .goToTab(TenantPage.class)
-        ;
-    }
+@BeforeAll
+public static void setup() {
+    new LoginPage(browser)
+            .login("admin", "dolphinscheduler123") // 登录进入租户界面
+            .goToNav(SecurityPage.class) // 安全中心
+            .goToTab(TenantPage.class)
+    ;
+}
 ```
 
 在完成准备工作之后,就是正式的测试案例编写。我们使用 @Order() 注解的形式,用于模块化,确认测试顺序。在进行测试之后,使用断言来判断测试是否成功,如果断言返回 true,则表示创建租户成功。可参考创建租户的测试代码:
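+
+下面给出一个与之类似的示意用例(页面对象与方法名均为假设,实际代码请以仓库中的测试用例为准):
+
+```java
+@Test
+@Order(10)
+void testCreateTenant() {
+    final String tenant = "tenant_for_e2e_test";
+
+    // 通过页面对象在界面上创建租户
+    TenantPage page = new TenantPage(browser);
+    page.create(tenant);
+
+    // 断言租户列表中出现了新建的租户,断言通过即认为创建成功
+    Assertions.assertTrue(
+            page.tenantList().stream().anyMatch(it -> it.getText().contains(tenant)));
+}
+```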
diff --git a/docs/docs/zh/contribute/frontend-development.md b/docs/docs/zh/contribute/frontend-development.md
index bfb0e5cd30..42eb2973dd 100644
--- a/docs/docs/zh/contribute/frontend-development.md
+++ b/docs/docs/zh/contribute/frontend-development.md
@@ -1,6 +1,7 @@
 # 前端开发文档
 
 ### 技术选型
+
 ```
 Vue mvvm 框架
 
@@ -16,11 +17,17 @@ Lodash 高性能的 JavaScript 实用工具库
 ```
 
 ### 开发环境搭建
-   
-- #### Node安装
-Node包下载 (注意版本 v12.20.2) `https://nodejs.org/download/release/v12.20.2/` 
 
-- #### 前端项目构建
+
+#### Node安装
+
+Node包下载 (注意版本 v12.20.2) `https://nodejs.org/download/release/v12.20.2/`
+
+
+#### 前端项目构建
+
 用命令行模式 `cd`  进入 `dolphinscheduler-ui`项目目录并执行 `npm install` 拉取项目依赖包
 
 > 如果 `npm install` 速度非常慢,你可以设置淘宝镜像
@@ -36,13 +43,16 @@ npm config set registry http://registry.npm.taobao.org/
 API_BASE = http://127.0.0.1:12345
 ```
 
-> #####  !!!这里特别注意 项目如果在拉取依赖包的过程中报 " node-sass error " 错误,请在执行完后再次执行以下命令
+##### !!!这里特别注意 项目如果在拉取依赖包的过程中报 " node-sass error " 错误,请在执行完后再次执行以下命令
 
 ```bash
 npm install node-sass --unsafe-perm #单独安装node-sass依赖
 ```
 
-- #### 开发环境运行
+
+#### 开发环境运行
+
 - `npm start` 项目开发环境 (启动后访问地址 http://localhost:8888)
 
 #### 前端项目发布
@@ -53,7 +63,7 @@ npm install node-sass --unsafe-perm #单独安装node-sass依赖
 
 再拷贝到服务器对应的目录下(前端服务静态页面存放目录)
 
-访问地址 `http://localhost:8888` 
+访问地址 `http://localhost:8888`
 
 #### Linux下使用node启动并且守护进程
 
@@ -140,6 +150,7 @@ npm install node-sass --unsafe-perm #单独安装node-sass依赖
 首页 => `http://localhost:8888/#/home`
 
 项目管理 => `http://localhost:8888/#/projects/list`
+
 ```
 | 项目首页
 | 工作流
@@ -147,8 +158,9 @@ npm install node-sass --unsafe-perm #单独安装node-sass依赖
   - 工作流实例
   - 任务实例
 ```
- 
+
 资源管理 => `http://localhost:8888/#/resource/file`
+
 ```
 | 文件管理
 | UDF管理
@@ -159,6 +171,7 @@ npm install node-sass --unsafe-perm #单独安装node-sass依赖
 数据源管理 => `http://localhost:8888/#/datasource/list`
 
 安全中心 => `http://localhost:8888/#/security/tenant`
+
 ```
 | 租户管理
 | 用户管理
@@ -174,16 +187,19 @@ npm install node-sass --unsafe-perm #单独安装node-sass依赖
 项目 `src/js/conf/home` 下分为
 
 `pages` => 路由指向页面目录
+
 ```
- 路由地址对应的页面文件
+路由地址对应的页面文件
 ```
 
 `router` => 路由管理
+
 ```
 vue的路由器,在每个页面的入口文件index.js 都会注册进来 具体操作:https://router.vuejs.org/zh/
 ```
 
 `store` => 状态管理
+
 ```
 每个路由对应的页面都有一个状态管理的文件 分为:
 
@@ -201,9 +217,13 @@ state => mapState => 详情:https://vuex.vuejs.org/zh/guide/state.html
 ```
 
 ## 规范
+
 ## Vue规范
+
 ##### 1.组件名
+
 组件名为多个单词,并且用连接线(-)连接,避免与 HTML 标签冲突,并且结构更加清晰。
+
 ```
 // 正例
 export default {
@@ -212,7 +232,9 @@ export default {
 ```
 
 ##### 2.组件文件
+
 `src/js/module/components`项目内部公共组件书写文件夹名与文件名同名,公共组件内部所拆分的子组件与util工具都放置组件内部 `_source`文件夹里。
+
 ```
 └── components
     ├── header
@@ -228,6 +250,7 @@ export default {
 ```
 
 ##### 3.Prop
+
 定义 Prop 的时候应该始终以驼峰格式(camelCase)命名,在父组件赋值的时候使用连接线(-)。
 这里遵循每个语言的特性,因为在 HTML 标记中对大小写是不敏感的,使用连接线更加友好;而在 JavaScript 中更自然的是驼峰命名。
 
@@ -270,7 +293,9 @@ props: {
 ```
 
 ##### 4.v-for
+
 在执行 v-for 遍历的时候,总是应该带上 key 值使更新 DOM 时渲染效率更高。
+
 ```
 <ul>
     <li v-for="item in list" :key="item.id">
@@ -280,6 +305,7 @@ props: {
 ```
 
 v-for 应该避免与 v-if 在同一个元素(`例如:<li>`)上使用,因为 v-for 的优先级比 v-if 更高,为了避免无效计算和渲染,应该尽量将 v-if 放到容器的父元素之上。
+
 ```
 <ul v-if="showList">
     <li v-for="item in list" :key="item.id">
@@ -289,7 +315,9 @@ v-for 应该避免与 v-if 在同一个元素(`例如:<li>`)上使用,
 ```
 
 ##### 5.v-if / v-else-if / v-else
+
 若同一组 v-if 逻辑控制中的元素逻辑相同,Vue 为了更高效的元素切换,会复用相同的部分,`例如:value`。为了避免复用带来的不合理效果,应该在同种元素上加上 key 做标识。
+
 ```
 <div v-if="hasData" key="mazey-data">
     <span>{{ mazeyData }}</span>
@@ -300,12 +328,15 @@ v-for 应该避免与 v-if 在同一个元素(`例如:<li>`)上使用,
 ```
 
 ##### 6.指令缩写
+
 为了统一规范始终使用指令缩写,使用`v-bind`,`v-on`并没有什么不好,这里仅为了统一规范。
+
 ```
 <input :value="mazeyUser" @click="verifyUser">
 ```
 
 ##### 7.单文件组件的顶级元素顺序
+
 样式后续都是打包在一个文件里,所有在单个vue文件中定义的样式,在别的文件里同类名的样式也是会生效的所有在创建一个组件前都会有个顶级类名
 注意:项目内已经增加了sass插件,单个vue文件里可以直接书写sass语法
 为了统一和便于阅读,应该按 `<template>`、`<script>`、`<style>`的顺序放置。
@@ -357,25 +388,31 @@ v-for 应该避免与 v-if 在同一个元素(`例如:<li>`)上使用,
 ## JavaScript规范
 
 ##### 1.var / let / const
+
 建议不再使用 var,而使用 let / const,优先使用 const。任何一个变量的使用都要提前申明,除了 function 定义的函数可以随便放在任何位置。
 
 ##### 2.引号
+
 ```
 const foo = '后除'
 const bar = `${foo},前端工程师`
 ```
 
 ##### 3.函数
+
 匿名函数统一使用箭头函数,多个参数/返回值时优先使用对象的结构赋值。
+
 ```
 function getPersonInfo ({name, sex}) {
     // ...
     return {name, gender}
 }
 ```
+
 函数名统一使用驼峰命名,以大写字母开头申明的都是构造函数,使用小写字母开头的都是普通函数,也不该使用 new 操作符去操作普通函数。
 
 ##### 4.对象
+
 ```
 const foo = {a: 0, b: 1}
 const bar = JSON.parse(JSON.stringify(foo))
@@ -393,7 +430,9 @@ for (let [key, value] of myMap.entries()) {
 ```
 
 ##### 5.模块
+
 统一使用 import / export 的方式管理项目的模块。
+
 ```
 // lib.js
 export default {}
@@ -406,18 +445,21 @@ import 统一放在文件顶部。
 
 如果模块只有一个输出值,使用 `export default`,否则不用。
 
-
 ## HTML / CSS
 
 ###### 1.标签
+
 在引用外部 CSS 或 JavaScript 时不写 type 属性。HTML5 默认 type 为 `text/css` 和 `text/javascript` 属性,所以没必要指定。
+
 ```
 <link rel="stylesheet" href="//www.test.com/css/test.css">
 <script src="//www.test.com/js/test.js"></script>
 ```
 
 ##### 2.命名
+
 Class 和 ID 的命名应该语义化,通过看名字就知道是干嘛的;多个单词用连接线 - 连接。
+
 ```
 // 正例
 .test-header{
@@ -426,6 +468,7 @@ Class 和 ID 的命名应该语义化,通过看名字就知道是干嘛的;
 ```
 
 ##### 3.属性缩写
+
 CSS 属性尽量使用缩写,提高代码的效率和方便理解。
 
 ```
@@ -439,6 +482,7 @@ border: 1px solid #ccc;
 ```
 
 ##### 4.文档类型
+
 应该总是使用 HTML5 标准。
 
 ```
@@ -446,7 +490,9 @@ border: 1px solid #ccc;
 ```
 
 ##### 5.注释
+
 应该给一个模块文件写一个区块注释。
+
 ```
 /**
 * @module mazey/api
@@ -457,7 +503,8 @@ border: 1px solid #ccc;
 
 ## 接口
 
-##### 所有的接口都以 Promise 形式返回 
+##### 所有的接口都以 Promise 形式返回
+
 注意非0都为错误走catch
 
 ```
@@ -477,6 +524,7 @@ test.then(res => {
 ```
 
 正常返回
+
 ```
 {
   code:0,
@@ -486,6 +534,7 @@ test.then(res => {
 ```
 
 错误返回
+
 ```
 {
   code:10000, 
@@ -493,8 +542,10 @@ test.then(res => {
   msg:'失败'
 }
 ```
+
 接口如果是post请求,Content-Type默认为application/x-www-form-urlencoded;如果Content-Type改成application/json,
 接口传参需要改成下面的方式
+
 ```
 io.post('url', payload, null, null, { emulateJSON: false } res => {
   resolve(res)
@@ -524,6 +575,7 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
 (1) 先将节点的icon小图标放置`src/js/conf/home/pages/dag/img`文件夹内,注意 `toolbar_${后台定义的节点的英文名称 例如:SHELL}.png`
 
 (2) 找到 `src/js/conf/home/pages/dag/_source/config.js` 里的 `tasksType` 对象,往里增加
+
 ```
 'DEPENDENT': {  // 后台定义节点类型英文名称用作key值
   desc: 'DEPENDENT',  // tooltip desc
@@ -532,6 +584,7 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
 ```
 
 (3) 在 `src/js/conf/home/pages/dag/_source/formModel/tasks` 增加一个 `${节点类型(小写)}`.vue 文件,跟当前节点相关的组件内容都在这里写。 属于节点组件内的必须拥有一个函数 `_verification()` 验证成功后将当前组件的相关数据往父组件抛。
+
 ```
 /**
  * 验证
@@ -561,13 +614,14 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
     })
     return true
   }
-``` 
+```
 
 (4) 节点组件内部所用到公共的组件都在`_source`下,`commcon.js`用于配置公共数据
 
 ##### 2.增加状态类型
 
 (1) 找到 `src/js/conf/home/pages/dag/_source/config.js` 里的 `tasksState` 对象,往里增加
+
 ```
 'WAITTING_DEPEND': {  //后端定义状态类型 前端用作key值
   id: 11,  // 前端定义id 后续用作排序
@@ -579,7 +633,9 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
 ```
 
 ##### 3.增加操作栏工具
+
 (1) 找到 `src/js/conf/home/pages/dag/_source/config.js` 里的 `toolOper` 对象,往里增加
+
 ```
 {
   code: 'pointer',  // 工具标识
@@ -591,21 +647,20 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
 
 (2) 工具类都以一个构造函数返回 `src/js/conf/home/pages/dag/_source/plugIn`
 
-`downChart.js`  =>  dag 图片下载处理 
+`downChart.js`  =>  dag 图片下载处理
 
-`dragZoom.js`  =>  鼠标缩放效果处理 
+`dragZoom.js`  =>  鼠标缩放效果处理
 
-`jsPlumbHandle.js`  =>  拖拽线条处理 
+`jsPlumbHandle.js`  =>  拖拽线条处理
 
 `util.js`  =>   属于 `plugIn` 工具类
 
-
 操作则在 `src/js/conf/home/pages/dag/_source/dag.js` => `toolbarEvent` 事件中处理。
 
-
 ##### 3.增加一个路由页面
 
 (1) 首先在路由管理增加一个路由地址`src/js/conf/home/router/index.js`
+
 ```
 {
   path: '/test',  // 路由地址 
@@ -619,12 +674,12 @@ dag 相关接口 `src/js/conf/home/store/dag/actions.js`
 
 (2) 在`src/js/conf/home/pages` 建一个 `test` 文件夹,在文件夹里建一个`index.vue`入口文件。
 
-    这样就可以直接访问 `http://localhost:8888/#/test`
-
+这样就可以直接访问 `http://localhost:8888/#/test`
 
 ##### 4.增加预置邮箱
 
 找到`src/lib/localData/email.js`启动和定时邮箱地址输入可以自动下拉匹配。
+
 ```
 export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
 ```
diff --git a/docs/docs/zh/contribute/have-questions.md b/docs/docs/zh/contribute/have-questions.md
index 6eb85aba88..9cf1a7c01b 100644
--- a/docs/docs/zh/contribute/have-questions.md
+++ b/docs/docs/zh/contribute/have-questions.md
@@ -24,3 +24,4 @@
   - 级别:Beginner、Intermediate、Advanced
   - 场景相关:Debug,、How-to
 - 如果内容包括错误日志或长代码,请使用 [GitHub gist](https://gist.github.com/),并在邮件中只附加相关代码/日志的几行。
+
diff --git a/docs/docs/zh/contribute/join/DS-License.md b/docs/docs/zh/contribute/join/DS-License.md
index 94aa2cd3d4..a8fb70c8f2 100644
--- a/docs/docs/zh/contribute/join/DS-License.md
+++ b/docs/docs/zh/contribute/join/DS-License.md
@@ -21,7 +21,6 @@
 
 * [COMMUNITY-LED DEVELOPMENT "THE APACHE WAY"](https://apache.org/dev/licensing-howto.html)
 
-
 以Apache为例,当我们使用了ZooKeeper,那么ZooKeeper的NOTICE文件(每个开源项目都会有NOTICE文件,一般位于根目录)则必须在我们的项目中体现,用Apache的话来讲,就是"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a
 copyright notice that is included in or attached to the work.
 
@@ -37,7 +36,9 @@ copyright notice that is included in or attached to the work.
 * 在dolphinscheduler-dist/release-docs/LICENSE中添加相关的maven仓库地址。
 * 在dolphinscheduler-dist/release-docs/NOTICE中追加相关的NOTICE文件,此文件请务必和原代码仓库地址中的NOTICE文件一致。
 * 在dolphinscheduler-dist/release-docs/license/下添加相关源代码的协议,文件命名为license+文件名.txt。
+
 #### check dependency license fail
+
 ```
 --- /dev/fd/63	2020-12-03 03:08:57.191579482 +0000
 +++ /dev/fd/62	2020-12-03 03:08:57.191579482 +0000
@@ -49,13 +50,16 @@ copyright notice that is included in or attached to the work.
 +mchange-commons-java-0.2.11.jar
 Error: Process completed with exit code 1.
 ```
+
 一般来讲,添加一个jar的工作往往不会如此轻易的结束,因为它往往依赖了其它各种各样的jar,这些jar我们同样需要添加相应的license。
 这种情况下,我们会在check里面得到 check dependency license fail的错误信息,如上,我们缺少了HikariCP-java6-2.3.13、c3p0等的license声明,
 按照添加jar的步骤补充即可,提示还是蛮友好的(哈哈)。
+
 ### 附件
 
 <!-- markdown-link-check-disable -->
-附件:新jar的邮件格式 
+附件:新jar的邮件格式
+
 ```
 [VOTE][New Jar] jetcd-core(registry plugin support etcd3 ) 
 
@@ -96,9 +100,11 @@ https://mvnrepository.com/artifact/io.etcd/jetcd-core
 
 https://mvnrepository.com/artifact/io.etcd/jetcd-launcher
 ```
+
 <!-- markdown-link-check-enable -->
 
 ### 参考文章:
+
 * [COMMUNITY-LED DEVELOPMENT "THE APACHE WAY"](https://apache.org/dev/licensing-howto.html)
 * [ASF 3RD PARTY LICENSE POLICY](https://apache.org/legal/resolved.html)
 
diff --git a/docs/docs/zh/contribute/join/become-a-committer.md b/docs/docs/zh/contribute/join/become-a-committer.md
index b38dd0cec9..949a318701 100644
--- a/docs/docs/zh/contribute/join/become-a-committer.md
+++ b/docs/docs/zh/contribute/join/become-a-committer.md
@@ -9,4 +9,4 @@
 当您不熟悉ASF使用的开源的开发过程时,有时难以理解的一点,就是我们更重视社区而不是代码。一个强大而健康的社区将受到尊重,成为一个有趣和有益的地方。更重要的是,一个多元化和健康的社区
 可以长时间的持续支持代码,即使个别公司在这个领域来来往往,也是如此。
 
-更多详细信息可以在[这里](https://community.apache.org/contributors/)找到
\ No newline at end of file
+更多详细信息可以在[这里](https://community.apache.org/contributors/)找到
diff --git a/docs/docs/zh/contribute/join/code-conduct.md b/docs/docs/zh/contribute/join/code-conduct.md
index a3668b52f3..f649fd1053 100644
--- a/docs/docs/zh/contribute/join/code-conduct.md
+++ b/docs/docs/zh/contribute/join/code-conduct.md
@@ -3,66 +3,67 @@
 以下行为准则以完全遵循[Apache软件基金会行为准则](https://www.apache.org/foundation/policies/conduct.html)为前提。
 
 ## 开发理念
- - **一致** 代码风格、命名以及使用方式保持一致。
- - **易读** 代码无歧义,易于阅读和理解而非调试手段才知晓代码意图。
- - **整洁** 认同《重构》和《代码整洁之道》的理念,追求整洁优雅代码。
- - **抽象** 层次划分清晰,概念提炼合理。保持方法、类、包以及模块处于同一抽象层级。
- - **用心** 保持责任心,持续以工匠精神雕琢。
- 
+
+- **一致** 代码风格、命名以及使用方式保持一致。
+- **易读** 代码无歧义,易于阅读和理解而非调试手段才知晓代码意图。
+- **整洁** 认同《重构》和《代码整洁之道》的理念,追求整洁优雅代码。
+- **抽象** 层次划分清晰,概念提炼合理。保持方法、类、包以及模块处于同一抽象层级。
+- **用心** 保持责任心,持续以工匠精神雕琢。
+
 ## 开发规范
 
- - 执行`mvn -U clean package -Prelease`可以编译和测试通过全部测试用例。
- - 测试覆盖率工具检查不低于dev分支覆盖率。
- - 请使用Checkstyle检查代码,违反验证规则的需要有特殊理由。模板位置在根目录下ds_check_style.xml。
- - 遵守编码规范。
- 
+- 执行`mvn -U clean package -Prelease`可以编译和测试通过全部测试用例。
+- 测试覆盖率工具检查不低于dev分支覆盖率。
+- 请使用Checkstyle检查代码,违反验证规则的需要有特殊理由。模板位置在根目录下ds_check_style.xml。
+- 遵守编码规范。
+
 ## 编码规范
 
- - 使用linux换行符。
- - 缩进(包含空行)和上一行保持一致。
- - 类声明后与下面的变量或方法之间需要空一行。
- - 不应有无意义的空行。
- - 类、方法和变量的命名要做到顾名思义,避免使用缩写。
- - 返回值变量使用`result`命名;循环中使用`each`命名循环变量;map中使用`entry`代替`each`。
- - 捕获的异常名称命名为`e`;捕获异常且不做任何事情,异常名称命名为`ignored`。
- - 配置文件使用驼峰命名,文件名首字母小写。
- - 需要注释解释的代码尽量提成小方法,用方法名称解释。
- - `equals`和`==`条件表达式中,常量在左,变量在右;大于小于等条件表达式中,变量在左,常量在右。
- - 除了用于继承的抽象类之外,尽量将类设计为`final`。
- - 嵌套循环尽量提成方法。
- - 成员变量定义顺序以及参数传递顺序在各个类和方法中保持一致。
- - 优先使用卫语句。
- - 类和方法的访问权限控制为最小。
- - 方法所用到的私有方法应紧跟该方法,如果有多个私有方法,书写私有方法应与私有方法在原方法的出现顺序相同。
- - 方法入参和返回值不允许为`null`。
- - 优先使用三目运算符代替if else的返回和赋值语句。
- - 优先考虑使用`LinkedList`,只有在需要通过下标获取集合中元素值时再使用`ArrayList`。
- - `ArrayList`,`HashMap`等可能产生扩容的集合类型必须指定集合初始大小,避免扩容。
- - 日志与注释一律使用英文。
- - 注释只能包含javadoc,todo和fixme。
- - 公开的类和方法必须有javadoc,其他类和方法以及覆盖自父类的方法无需javadoc。
+- 使用linux换行符。
+- 缩进(包含空行)和上一行保持一致。
+- 类声明后与下面的变量或方法之间需要空一行。
+- 不应有无意义的空行。
+- 类、方法和变量的命名要做到顾名思义,避免使用缩写。
+- 返回值变量使用`result`命名;循环中使用`each`命名循环变量;map中使用`entry`代替`each`。
+- 捕获的异常名称命名为`e`;捕获异常且不做任何事情,异常名称命名为`ignored`。
+- 配置文件使用驼峰命名,文件名首字母小写。
+- 需要注释解释的代码尽量提成小方法,用方法名称解释。
+- `equals`和`==`条件表达式中,常量在左,变量在右;大于小于等条件表达式中,变量在左,常量在右。
+- 除了用于继承的抽象类之外,尽量将类设计为`final`。
+- 嵌套循环尽量提成方法。
+- 成员变量定义顺序以及参数传递顺序在各个类和方法中保持一致。
+- 优先使用卫语句。
+- 类和方法的访问权限控制为最小。
+- 方法所用到的私有方法应紧跟该方法,如果有多个私有方法,书写私有方法应与私有方法在原方法的出现顺序相同。
+- 方法入参和返回值不允许为`null`。
+- 优先使用三目运算符代替if else的返回和赋值语句。
+- 优先考虑使用`LinkedList`,只有在需要通过下标获取集合中元素值时再使用`ArrayList`。
+- `ArrayList`,`HashMap`等可能产生扩容的集合类型必须指定集合初始大小,避免扩容。
+- 日志与注释一律使用英文。
+- 注释只能包含javadoc,todo和fixme。
+- 公开的类和方法必须有javadoc,其他类和方法以及覆盖自父类的方法无需javadoc。
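+
+下面用一小段示意代码演示其中几条规则(卫语句优先、`equals` 常量在左、`result`/`each` 命名、指定集合初始大小等,仅作演示):
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+
+public final class TenantCodeUtils {
+
+    private TenantCodeUtils() {
+    }
+
+    public static boolean isDefaultTenant(String tenantCode) {
+        // guard clause first
+        if (tenantCode == null) {
+            return false;
+        }
+        // constant on the left, variable on the right
+        return "default".equals(tenantCode);
+    }
+
+    public static List<String> filterDefaultTenants(List<String> tenantCodes) {
+        // name the return value "result" and set the initial capacity
+        List<String> result = new ArrayList<>(tenantCodes.size());
+        // name the loop variable "each"
+        for (String each : tenantCodes) {
+            if (isDefaultTenant(each)) {
+                result.add(each);
+            }
+        }
+        return result;
+    }
+}
+```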
 
 ## 单元测试规范
 
- - 测试代码和生产代码需遵守相同代码规范。
- - 单元测试需遵循AIR(Automatic, Independent, Repeatable)设计理念。
-   - 自动化(Automatic):单元测试应全自动执行,而非交互式。禁止人工检查输出结果,不允许使用`System.out`,`log`等,必须使用断言进行验证。
-   - 独立性(Independent):禁止单元测试用例间的互相调用,禁止依赖执行的先后次序。每个单元测试均可独立运行。
-   - 可重复执行(Repeatable):单元测试不能受到外界环境的影响,可以重复执行。
- - 单元测试需遵循BCDE(Border, Correct, Design, Error)设计原则。
-   - 边界值测试(Border):通过循环边界、特殊数值、数据顺序等边界的输入,得到预期结果。
-   - 正确性测试(Correct):通过正确的输入,得到预期结果。
-   - 合理性设计(Design):与生产代码设计相结合,设计高质量的单元测试。
-   - 容错性测试(Error):通过非法数据、异常流程等错误的输入,得到预期结果。
- - 如无特殊理由,测试需全覆盖。
- - 每个测试用例需精确断言。
- - 准备环境的代码和测试代码分离。
- - 只有junit `Assert`,hamcrest `CoreMatchers`,Mockito相关可以使用static import。
- - 单数据断言,应使用`assertTrue`,`assertFalse`,`assertNull`和`assertNotNull`。
- - 多数据断言,应使用`assertThat`。
- - 精确断言,尽量不使用`not`,`containsString`断言。
- - 测试用例的真实值应名为为actualXXX,期望值应命名为expectedXXX。
- - 测试类和`@Test`标注的方法无需javadoc。
+- 测试代码和生产代码需遵守相同代码规范。
+- 单元测试需遵循AIR(Automatic, Independent, Repeatable)设计理念。
+  - 自动化(Automatic):单元测试应全自动执行,而非交互式。禁止人工检查输出结果,不允许使用`System.out`,`log`等,必须使用断言进行验证。
+  - 独立性(Independent):禁止单元测试用例间的互相调用,禁止依赖执行的先后次序。每个单元测试均可独立运行。
+  - 可重复执行(Repeatable):单元测试不能受到外界环境的影响,可以重复执行。
+- 单元测试需遵循BCDE(Border, Correct, Design, Error)设计原则。
+  - 边界值测试(Border):通过循环边界、特殊数值、数据顺序等边界的输入,得到预期结果。
+  - 正确性测试(Correct):通过正确的输入,得到预期结果。
+  - 合理性设计(Design):与生产代码设计相结合,设计高质量的单元测试。
+  - 容错性测试(Error):通过非法数据、异常流程等错误的输入,得到预期结果。
+- 如无特殊理由,测试需全覆盖。
+- 每个测试用例需精确断言。
+- 准备环境的代码和测试代码分离。
+- 只有junit `Assert`,hamcrest `CoreMatchers`,Mockito相关可以使用static import。
+- 单数据断言,应使用`assertTrue`,`assertFalse`,`assertNull`和`assertNotNull`。
+- 多数据断言,应使用`assertThat`。
+- 精确断言,尽量不使用`not`,`containsString`断言。
+- 测试用例的真实值应命名为actualXXX,期望值应命名为expectedXXX。
+- 测试类和`@Test`标注的方法无需javadoc。
+- 公共规范
+  - 每行长度不超过`200`个字符,保证每一行语义完整以便于理解。
 
- - 公共规范
-   - 每行长度不超过`200`个字符,保证每一行语义完整以便于理解。
diff --git a/docs/docs/zh/contribute/join/commit-message.md b/docs/docs/zh/contribute/join/commit-message.md
index 3b1b0e8fcf..3d8974535e 100644
--- a/docs/docs/zh/contribute/join/commit-message.md
+++ b/docs/docs/zh/contribute/join/commit-message.md
@@ -1,7 +1,8 @@
 # Commit Message 须知
 
 ### 前言
-  一个好的 commit message 是能够帮助其他的开发者(或者未来的开发者)快速理解相关变更的上下文,同时也可以帮助项目管理人员确定该提交是否适合包含在发行版中。但当我们在查看了很多开源项目的 commit log 后,发现一个有趣的问题,一部分开发者,代码质量很不错,但是 commit message 记录却比较混乱,当其他贡献者或者学习者在查看代码的时候,并不能通过 commit log 很直观的了解
+
+一个好的 commit message 是能够帮助其他的开发者(或者未来的开发者)快速理解相关变更的上下文,同时也可以帮助项目管理人员确定该提交是否适合包含在发行版中。但当我们在查看了很多开源项目的 commit log 后,发现一个有趣的问题,一部分开发者,代码质量很不错,但是 commit message 记录却比较混乱,当其他贡献者或者学习者在查看代码的时候,并不能通过 commit log 很直观的了解
 该提交前后变更的目的,正如 Peter Hutterer 所言:Re-establishing the context of a piece of code is wasteful. We can’t avoid it completely, so our efforts should go to reducing it as much as possible. Commit messages can do exactly that and as a result, a commit message shows whether a developer is a good collaborator. 因此,DolphinScheduler 结合其他社区以及 Apache 官方文档制定了该规约。
 
 ### Commit Message RIP
@@ -21,6 +22,7 @@ commit message 应该明确说明该提交解决了哪些问题(bug 修复、
 Commit message 应该包括三个部分:Header,Body 和 Footer。其中,Header 是必需的,Body 和 Footer 可以省略。
 
 ##### header
+
 Header 部分只有一行,包括三个字段:type(必需)、scope(可选)和 subject(必需)。
 
 [DS-ISSUE编号][type] subject
@@ -57,7 +59,6 @@ Body 部分需要注意以下几点:
 
 * 语句最后不需要 ‘.’ (句号) 结尾
 
-
 ##### Footer
 
 Footer只适用于两种情况
@@ -71,19 +72,21 @@ Footer只适用于两种情况
 如果当前 commit 针对某个issue,那么可以在 Footer 部分关闭这个 issue,也可以一次关闭多个 issue 。
 
 ##### 举个例子
+
 [DS-001][docs-zh] add commit message
 
 * commit message RIP
-* build some conventions 
-* help the commit messages become clean and tidy 
-* help developers and release managers better track issues 
-and clarify the optimization in the version iteration
+* build some conventions
+* help the commit messages become clean and tidy
+* help developers and release managers better track issues
+  and clarify the optimization in the version iteration
 
 This closes #001
 
 ### 参考文档
+
 [提交消息格式](https://cwiki.apache.org/confluence/display/GEODE/Commit+Message+Format)
 
 [On commit messages-Peter Hutterer](http://who-t.blogspot.com/2009/12/on-commit-messages.html)
 
-[RocketMQ Community Operation Conventions](https://mp.weixin.qq.com/s/LKM4IXAY-7dKhTzGu5-oug)
\ No newline at end of file
+[RocketMQ Community Operation Conventions](https://mp.weixin.qq.com/s/LKM4IXAY-7dKhTzGu5-oug)
diff --git a/docs/docs/zh/contribute/join/contribute.md b/docs/docs/zh/contribute/join/contribute.md
index 5049621412..fb74d24d6d 100644
--- a/docs/docs/zh/contribute/join/contribute.md
+++ b/docs/docs/zh/contribute/join/contribute.md
@@ -2,7 +2,7 @@
 
 首先非常感谢大家选择和使用 DolphinScheduler,非常欢迎大家加入 DolphinScheduler 大家庭,融入开源世界!
 
-我们鼓励任何形式的参与社区,最终成为 Committer 或 PPMC,如: 
+我们鼓励任何形式的参与社区,最终成为 Committer 或 PPMC,如:
 * 将遇到的问题通过 github 上 [issue](https://github.com/apache/dolphinscheduler/issues) 的形式反馈出来
 * 回答别人遇到的 issue 问题
 * 帮助完善文档
@@ -13,7 +13,7 @@
 * 帮助推广 DolphinScheduler,参与技术大会或者 meetup 的分享等
 
 欢迎加入贡献的队伍,加入开源从提交第一个 PR 开始
-  - 比如添加代码注释或找到带有 ”easy to fix” 标记或一些非常简单的 issue(拼写错误等) 等等,先通过第一个简单的 PR 熟悉提交流程
+- 比如添加代码注释或找到带有 ”easy to fix” 标记或一些非常简单的 issue(拼写错误等) 等等,先通过第一个简单的 PR 熟悉提交流程
 
 注:贡献不仅仅限于 PR 哈,对促进项目发展的都是贡献
 
@@ -27,7 +27,6 @@
 
 参考[参与贡献 Issue 需知](./issue.md),[参与贡献 Pull Request 需知](./pull-request.md),[参与贡献 CommitMessage 需知](./commit-message.md)
 
-
 ### 3. 如何领取 Issue,提交 Pull Request
 
 如果你想实现某个 Feature 或者修复某个 Bug。请参考以下内容:
diff --git a/docs/docs/zh/contribute/join/document.md b/docs/docs/zh/contribute/join/document.md
index 6d9fde3e2e..194e97b6ca 100644
--- a/docs/docs/zh/contribute/join/document.md
+++ b/docs/docs/zh/contribute/join/document.md
@@ -52,8 +52,8 @@ DolphinScheduler 网站由 [docsite](https://github.com/chengshiwen/docsite-ext)
 
 2. 只需推送更改的文件,例如:
 
- * `*.md`
- * `blog.js or docs.js or site.js`
+* `*.md`
+* `blog.js or docs.js or site.js`
 
 3. 向 **master** 分支提交 Pull Request。
 
diff --git a/docs/docs/zh/contribute/join/issue.md b/docs/docs/zh/contribute/join/issue.md
index b81cbd8226..a61c678486 100644
--- a/docs/docs/zh/contribute/join/issue.md
+++ b/docs/docs/zh/contribute/join/issue.md
@@ -1,6 +1,7 @@
 # Issue 须知
 
 ## 前言
+
 Issues 功能被用来追踪各种特性,Bug,功能等。项目维护者可以通过 Issues 来组织需要完成的任务。
 
 Issue 是引出一个 Feature 或 Bug 等的重要步骤,在单个
@@ -181,6 +182,7 @@ Priority分为四级: Critical、Major、Minor、Trivial
 * 尽量列出其他调度已经具备的类似功能。商用与开源软件均可。
 
 以下是 **Feature 的 Markdown 内容模板**,请按照该模板填写 issue 内容。
+
 ```shell
 **标题** 
 标题格式: [Feature][Priority] feature标题
@@ -197,7 +199,6 @@ Priority分为四级: Critical、Major、Minor、Trivial
 
 ```
 
-
 ### Contributor
 
 除一些特殊情况之外,在开始完成
@@ -212,6 +213,7 @@ Pull Request review 阶段针对实现思路的意见不同或需要重构而导
 
 - 当出现提出 Issue 的用户不清楚该 Issue 对应的模块时的处理方式。
 
-    确实存在大多数提出 Issue 用户不清楚这个 Issue 是属于哪个模块的,其实这在很多开源社区都是很常见的。在这种情况下,其实
-    committer/contributor 是知道这个 Issue 影响的模块的,如果之后这个 Issue 被 committer 和 contributor approve
-    确实有价值,那么 committer 就可以按照 Issue 涉及到的具体的模块去修改 Issue 标题,或者留言给提出 Issue 的用户去修改成对应的标题。
\ No newline at end of file
+  确实存在大多数提出 Issue 用户不清楚这个 Issue 是属于哪个模块的,其实这在很多开源社区都是很常见的。在这种情况下,其实
+  committer/contributor 是知道这个 Issue 影响的模块的,如果之后这个 Issue 被 committer 和 contributor approve
+  确实有价值,那么 committer 就可以按照 Issue 涉及到的具体的模块去修改 Issue 标题,或者留言给提出 Issue 的用户去修改成对应的标题。
+
diff --git a/docs/docs/zh/contribute/join/microbench.md b/docs/docs/zh/contribute/join/microbench.md
index 97f37ae3e0..c0db873720 100644
--- a/docs/docs/zh/contribute/join/microbench.md
+++ b/docs/docs/zh/contribute/join/microbench.md
@@ -22,26 +22,25 @@ JMH,即Java MicroBenchmark Harness,是专门用于代码微基准测试的
 
 * 3:对比一个函数的多种实现方式
 
-
 DolphinScheduler-MicroBench提供了AbstractBaseBenchmark,你可以在其基础上继承,编写你的基准测试代码,AbstractMicroBenchmark能保证以JUnit的方式运行。
 
 ### 定制运行参数
- 
- 默认的AbstractMicrobenchmark配置是
- 
- Warmup次数 10(warmupIterations)
- 
- 测试次数 10(measureIterations)
- 
- Fork数量 2 (forkCount)
- 
- 你可以在启动的时候指定这些参数,-DmeasureIterations、-DperfReportDir(输出基准测试结果文件目录)、-DwarmupIterations、-DforkCount
- 
+
+默认的AbstractMicrobenchmark配置是
+
+Warmup次数 10(warmupIterations)
+
+测试次数 10(measureIterations)
+
+Fork数量 2 (forkCount)
+
+你可以在启动的时候指定这些参数,-DmeasureIterations、-DperfReportDir(输出基准测试结果文件目录)、-DwarmupIterations、-DforkCount
+
 ### DolphinScheduler-MicroBench 介绍
 
+通常并不建议在跑测试时只用较少的循环次数,但较少的次数有助于确认基准测试是能正常工作的,在确认无误后,再运行大量的基准测试。
 
- 通常并不建议跑测试时,用较少的循环次数,但是较少的次数有助于确认基准测试时工作的,在确认结束后,再运行大量的基准测试。
- ```java
+```java
 @Warmup(iterations = 2, time = 1)
 @Measurement(iterations = 4, time = 1)
 @State(Scope.Benchmark)
@@ -49,15 +48,16 @@ public class EnumBenchMark extends AbstractBaseBenchmark {
 
 }
 ```
- 这可以以方法级别或者类级别来运行基准测试,命令行的参数会覆盖annotation上的参数。
- 
+
+这可以以方法级别或者类级别来运行基准测试,命令行的参数会覆盖annotation上的参数。
+
 ```java
-    @Benchmark //方法注解,表示该方法是需要进行 benchmark 的对象。
-    @BenchmarkMode(Mode.AverageTime) //可选基准测试模式通过枚举Mode得到
-    @OutputTimeUnit(TimeUnit.MICROSECONDS) // 输出的时间单位
-    public void enumStaticMapTest() {
-        TestTypeEnum.newGetNameByType(testNum);
-    }
+@Benchmark //方法注解,表示该方法是需要进行 benchmark 的对象。
+@BenchmarkMode(Mode.AverageTime) //可选基准测试模式通过枚举Mode得到
+@OutputTimeUnit(TimeUnit.MICROSECONDS) // 输出的时间单位
+public void enumStaticMapTest() {
+    TestTypeEnum.newGetNameByType(testNum);
+}
 ```
 
 当你的基准测试编写完成后,你可以运行它查看具体的测试情况:(实际结果取决于你的系统配置情况)
@@ -72,7 +72,9 @@ Iteration   2: 0.004 us/op
 Iteration   3: 0.004 us/op
 Iteration   4: 0.004 us/op
 ```
+
 在经过预热后,我们通常会得到如下结果
+
 ```java
 Benchmark                        (testNum)   Mode  Cnt          Score           Error  Units
 EnumBenchMark.simpleTest               101  thrpt    8  428750972.826 ±  66511362.350  ops/s
@@ -95,4 +97,4 @@ EnumBenchMark.enumValuesTest           105   avgt    8          0.014 ±
 EnumBenchMark.enumValuesTest           103   avgt    8          0.012 ±         0.009  us/op
 ```
 
-OpenJDK官方给了很多样例代码,有兴趣的同学可以自己查询并学习JMH:[OpenJDK-JMH-Example](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/)
\ No newline at end of file
+OpenJDK官方给了很多样例代码,有兴趣的同学可以自己查询并学习JMH:[OpenJDK-JMH-Example](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/)
diff --git a/docs/docs/zh/contribute/join/pull-request.md b/docs/docs/zh/contribute/join/pull-request.md
index 3db4b68e27..68189751b1 100644
--- a/docs/docs/zh/contribute/join/pull-request.md
+++ b/docs/docs/zh/contribute/join/pull-request.md
@@ -1,6 +1,7 @@
 # Pull Request 须知
 
 ## 前言
+
 Pull Request 本质上是一种软件的合作方式,是将涉及不同功能的代码,纳入主干的一种流程。这个过程中,可以进行讨论、审核和修改代码。
 
 在 Pull Request 中尽量不讨论代码的实现方案,代码及其逻辑的大体实现方案应该尽量在
@@ -71,10 +72,11 @@ DolphinScheduler使用`Spotless`为您自动修复代码风格和格式问题,
 
 - 怎样处理一个 Pull Request 对应多个 Issue 的场景。
 
-    首先 Pull Request 和 Issue 一对多的场景是比较少的。Pull Request 和 Issue 一对多的根本原因就是出现了多个
-    Issue 需要做大体相同的一件事情的场景,通常针对这种场景有两种解决方法:第一种就是把多个功能相同的 Issue 合并到同一个 Issue 上,然后把其他的
-    Issue 进行关闭;第二种就是多个 Issue 大体上是在做一个功能,但是存在一些细微的差别,这类场景下可以把每个 Issue 的职责划分清楚,每一个
-    Issue 的类型都标记为 Sub-Task,然后将这些 Sub-Task 类型的 Issue 关联到一个总 Issue 上,在提交
-    Pull Request 时,每个 Pull Request 都只关联一个 Sub-Task 的 Issue。
-    
-    尽量把一个 Pull Request 作为最小粒度。如果一个 Pull Request 只做一件事,Contributor 容易完成,Pull Request 影响的范围也会更加清晰,对 reviewer 的压力也会小。
\ No newline at end of file
+  首先 Pull Request 和 Issue 一对多的场景是比较少的。Pull Request 和 Issue 一对多的根本原因就是出现了多个
+  Issue 需要做大体相同的一件事情的场景,通常针对这种场景有两种解决方法:第一种就是把多个功能相同的 Issue 合并到同一个 Issue 上,然后把其他的
+  Issue 进行关闭;第二种就是多个 Issue 大体上是在做一个功能,但是存在一些细微的差别,这类场景下可以把每个 Issue 的职责划分清楚,每一个
+  Issue 的类型都标记为 Sub-Task,然后将这些 Sub-Task 类型的 Issue 关联到一个总 Issue 上,在提交
+  Pull Request 时,每个 Pull Request 都只关联一个 Sub-Task 的 Issue。
+
+  尽量把一个 Pull Request 作为最小粒度。如果一个 Pull Request 只做一件事,Contributor 容易完成,Pull Request 影响的范围也会更加清晰,对 reviewer 的压力也会小。
+
diff --git a/docs/docs/zh/contribute/join/review.md b/docs/docs/zh/contribute/join/review.md
index 6f86ff0937..758fc42015 100644
--- a/docs/docs/zh/contribute/join/review.md
+++ b/docs/docs/zh/contribute/join/review.md
@@ -23,36 +23,36 @@ DolphinScheduler 主要通过 GitHub 接收社区的贡献,其所有的 Issues
 
 Review Issues 是指在 GitHub 中参与 [Issues][all-issues] 的讨论,并在对应的 Issues 给出建议。给出的建议包括但不限于如下的情况
 
-| 情况 | 原因 | 需增加标签 | 需要的动作 |
-| ------ | ------ | ------ | ------ |
-| 不需要修改 | 问题在 dev 分支最新代码中已经修复了 | [wontfix][label-wontfix] | 关闭 Issue,告知提出者将在那个版本发布,如已发布告知版本 |
-| 重复的问题 | 之前已经存在相同的问题 | [duplicate][label-duplicate] | 关闭 Issue,告知提出者相同问题的连接 |
-| 问题描述不清晰 | 没有明确说明问题如何复现 | [need more information][label-need-more-information] | 提醒用户需要增加缺失的描述 |
+|   情况    |          原因          |                        需增加标签                         |              需要的动作              |
+|---------|----------------------|------------------------------------------------------|---------------------------------|
+| 不需要修改   | 问题在 dev 分支最新代码中已经修复了 | [wontfix][label-wontfix]                             | 关闭 Issue,告知提出者将在哪个版本发布,如已发布告知版本 |
+| 重复的问题   | 之前已经存在相同的问题          | [duplicate][label-duplicate]                         | 关闭 Issue,告知提出者相同问题的链接           |
+| 问题描述不清晰 | 没有明确说明问题如何复现         | [need more information][label-need-more-information] | 提醒用户需要增加缺失的描述                   |
 
 除了个 issue 建议之外,给 Issue 分类也是非常重要的一个工作。分类后的 Issue 可以更好的被检索,为以后进一步处理提供便利。一个 Issue 可以被打上多个标签,常见的 Issue 分类有
 
-| 标签 | 标签代表的情况 |
-| ------ | ------ |
-| [UI][label-UI] | UI以及前端相关的 Issue |
-| [security][label-security] | 安全相关的 Issue |
-| [user experience][label-user-experience] | 用户体验相关的 Issue |
-| [development][label-development] | 开发者相关的 Issue |
-| [Python][label-Python] | Python相关的 Issue |
-| [plug-in][label-plug-in] | 插件相关的 Issue |
-| [document][label-document] | 文档相关的 Issue |
-| [docker][label-docker] | docker相关的 Issue |
-| [need verify][label-need-verify] | Issue 需要被验证 |
-| [e2e][label-e2e] | e2e相关的 Issue |
-| [win-os][label-win-os] | windows 操作系统相关的 Issue |
-| [suggestion][label-suggestion] | Issue 为项目提出了建议 |
+|                    标签                    |        标签代表的情况        |
+|------------------------------------------|-----------------------|
+| [UI][label-UI]                           | UI以及前端相关的 Issue       |
+| [security][label-security]               | 安全相关的 Issue           |
+| [user experience][label-user-experience] | 用户体验相关的 Issue         |
+| [development][label-development]         | 开发者相关的 Issue          |
+| [Python][label-Python]                   | Python相关的 Issue       |
+| [plug-in][label-plug-in]                 | 插件相关的 Issue           |
+| [document][label-document]               | 文档相关的 Issue           |
+| [docker][label-docker]                   | docker相关的 Issue       |
+| [need verify][label-need-verify]         | Issue 需要被验证           |
+| [e2e][label-e2e]                         | e2e相关的 Issue          |
+| [win-os][label-win-os]                   | windows 操作系统相关的 Issue |
+| [suggestion][label-suggestion]           | Issue 为项目提出了建议        |
 
 标签除了分类之外,还能区分 Issue 的优先级,优先级越高的标签越重要,越容易被重视,并会尽快被修复或者实现,优先级的标签如下
 
-| 标签 | 优先级 |
-| ------ | ------ |
-| [priority:high][label-priority-high] | 高优先级 |
+|                    标签                    | 优先级  |
+|------------------------------------------|------|
+| [priority:high][label-priority-high]     | 高优先级 |
 | [priority:middle][label-priority-middle] | 中优先级 |
-| [priority:low][label-priority-low] | 低优先级 |
+| [priority:low][label-priority-low]       | 低优先级 |
 
 以上是常见的几个标签,更多的标签请查阅项目[全部的标签列表][label-all-list]
 
@@ -67,12 +67,12 @@ Review Issues 是指在 GitHub 中参与 [Issues][all-issues] 的讨论,并在
 
 当 Issue 需要被创建 Pull Requests 解决,也可以视情况打上部分标签
 
-| 标签 | 标签代表的PR |
-| ------ | ------ |
-| [Chore][label-Chore] | 日常维护工作 |
+|                     标签                     |     标签代表的PR      |
+|--------------------------------------------|------------------|
+| [Chore][label-Chore]                       | 日常维护工作           |
 | [Good first issue][label-good-first-issue] | 适合首次贡献者解决的 Issue |
-| [easy to fix][label-easy-to-fix] | 比较容易解决 |
-| [help wanted][label-help-wanted] | 向社区寻求帮忙 |
+| [easy to fix][label-easy-to-fix]           | 比较容易解决           |
+| [help wanted][label-help-wanted]           | 向社区寻求帮忙          |
 
 > 注意: 上面关于增加和删除标签的操作,目前只有成员可以操作,当你遇到需要增减标签的时候,但是不是成员是,可以 `@` 对应的成员让其帮忙增减。
 > 但只要你有 GitHub 账号就能评论 Issue,并给出建议。我们鼓励社区每人都去评论并为 Issue 给出解答
@@ -81,13 +81,13 @@ Review Issues 是指在 GitHub 中参与 [Issues][all-issues] 的讨论,并在
 
 <!-- markdown-link-check-disable -->
 Review Pull 是指在 GitHub 中参与 [Pull Requests][all-PRs] 的讨论,并在对应的 Pull Requests 给出建议。DolphinScheduler review
-Pull Requests 与 [GitHub 的 reviewing changes in pull requests][gh-review-pr] 一样。你可以为 Pull Requests 提出自己的看法,
-
+Pull Requests 与 [GitHub 的 reviewing changes in pull requests][gh-review-pr] 一样。你可以为 Pull Requests 提出自己的看法,
 * 当你认为这个 Pull Requests 没有问题,可以被合并的时候,可以根据 [GitHub 的 reviewing changes in pull requests][gh-review-pr] 的
   approve 流程同意这个 Pull Requests。
 * 当你觉得这个 Pull Requests 需要被修改时,可以根据 [GitHub 的 reviewing changes in pull requests][gh-review-pr] 的 comment
   流程评论这个 Pull Requests。当你认为存在一定要先修复才能合并的问题,请参照 [GitHub 的 reviewing changes in pull requests][gh-review-pr]
   的 Request changes 流程要求贡献者修改 Pull Requests 的内容。
+
 <!-- markdown-link-check-enable -->
 
 为 Pull Requests 打上标签也是非常重要的一个环节,合理的分类能为后来的 reviewer 节省大量的时间。值得高兴的是,Pull Requests 的标签和 [Issues](#issues)
@@ -96,11 +96,11 @@ Pull Requests 与 [GitHub 的 reviewing changes in pull requests][gh-review-pr]
 
 除了和 Issue 类似的标签外,Pull Requests 还有许多自己特有的标签
 
-| 标签 | 含义 |
-| ------ | ------ |
-| [miss document][label-miss-document] | 该 Pull Requests 缺少文档 需要增加 |
+|                           标签                           |             含义              |
+|--------------------------------------------------------|-----------------------------|
+| [miss document][label-miss-document]                   | 该 Pull Requests 缺少文档 需要增加   |
 | [first time contributor][label-first-time-contributor] | 该 Pull Requests 贡献者是第一次贡献项目 |
-| [don't merge][label-do-not-merge] | 该 Pull Requests 有问题 暂时先不要合并 |
+| [don't merge][label-do-not-merge]                      | 该 Pull Requests 有问题 暂时先不要合并 |
 
 > 注意: 上面关于增加和删除标签的操作,目前只有成员可以操作,当你遇到需要增减标签的时候,可以 `@` 对应的成员让其帮忙增减。但只要你有 GitHub
 > 账号就能评论 Pull Requests,并给出建议。我们鼓励社区每人都去评论并为 Pull Requests 给出建议
@@ -139,3 +139,4 @@ Pull Requests 与 [GitHub 的 reviewing changes in pull requests][gh-review-pr]
 [all-issues]: https://github.com/apache/dolphinscheduler/issues
 [all-PRs]: https://github.com/apache/dolphinscheduler/pulls
 [gh-review-pr]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/about-pull-request-reviews
+
diff --git a/docs/docs/zh/contribute/join/submit-code.md b/docs/docs/zh/contribute/join/submit-code.md
index e473112eef..a53cea6c0f 100644
--- a/docs/docs/zh/contribute/join/submit-code.md
+++ b/docs/docs/zh/contribute/join/submit-code.md
@@ -3,39 +3,39 @@
 * 首先从远端仓库*https://github.com/apache/dolphinscheduler.git* fork一份代码到自己的仓库中
 
 * 远端仓库中目前有三个分支:
-    * master 正常交付分支
-	   发布稳定版本以后,将稳定版本分支的代码合并到master上。
-    
-	* dev    日常开发分支
-	   日常dev开发分支,新提交的代码都可以pull request到这个分支上。
-	   
-    * branch-1.0.0 发布版本分支
-	   发布版本分支,后续会有2.0...等版本分支。
+
+  * master 正常交付分支
+    发布稳定版本以后,将稳定版本分支的代码合并到master上。
+
+  * dev    日常开发分支
+    日常dev开发分支,新提交的代码都可以pull request到这个分支上。
+
+  * branch-1.0.0 发布版本分支
+    发布版本分支,后续会有2.0...等版本分支。
 
 * 把自己仓库clone到本地
-  
-    ` git clone https://github.com/apache/dolphinscheduler.git`
 
-*  添加远端仓库地址,命名为upstream
+  ` git clone https://github.com/apache/dolphinscheduler.git`
 
-    ` git remote add upstream https://github.com/apache/dolphinscheduler.git `
+* 添加远端仓库地址,命名为upstream
 
-*  查看仓库:
+  ` git remote add upstream https://github.com/apache/dolphinscheduler.git `
 
-    ` git remote -v`
+* 查看仓库:
+
+  ` git remote -v`
 
 > 此时会有两个仓库:origin(自己的仓库)和upstream(远端仓库)
 
-*  获取/更新远端仓库代码(已经是最新代码,就跳过)
-  
-    ` git fetch upstream `
+* 获取/更新远端仓库代码(已经是最新代码,就跳过)
 
+  ` git fetch upstream `
 
 * 同步远端仓库代码到本地仓库
 
 ```
- git checkout origin/dev
- git merge --no-ff upstream/dev
+git checkout origin/dev
+git merge --no-ff upstream/dev
 ```
 
 如果远端分支有新加的分支比如`dev-1.0`,需要同步这个分支到本地仓库
@@ -53,19 +53,19 @@ git checkout -b xxx origin/dev
 
 确保分支`xxx`是基于官方dev分支的最新代码
 
-
 * 在新建的分支上本地修改代码以后,提交到自己仓库:
-  
-    `git commit -m 'commit content'`
-    
-    `git push origin xxx --set-upstream`
+
+  `git commit -m 'commit content'`
+
+  `git push origin xxx --set-upstream`
 
 * 将修改提交到远端仓库
 
-	* 在github的PullRequest页面,点击"New pull request".
-	 
-	* 选择修改完的本地分支和要合并的目的分支,点击"Create pull request".
-	
+  * 在github的PullRequest页面,点击"New pull request".
+
+  * 选择修改完的本地分支和要合并的目的分支,点击"Create pull request".
+
 * 接着社区Committer们会做CodeReview,然后他会与您讨论一些细节(包括设计,实现,性能等)。当团队中所有人员对本次修改满意后,会将提交合并到dev分支
 
 * 最后,恭喜您已经成为了dolphinscheduler的官方贡献者!
+
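
For quick reference, the steps above condense into the shell sequence below. This is only a sketch of the described flow and assumes you clone your own fork as `origin`; `<your-github-user>` and the branch name `feature-xxx` are placeholders.

```shell
git clone https://github.com/<your-github-user>/dolphinscheduler.git   # your fork becomes origin
cd dolphinscheduler
git remote add upstream https://github.com/apache/dolphinscheduler.git # the official repository
git remote -v                                                           # expect both origin and upstream
git fetch upstream                                                      # refresh upstream refs
git checkout -b feature-xxx upstream/dev                                # branch off the latest dev
# ...edit code...
git commit -m 'commit content'
git push origin feature-xxx --set-upstream                              # then open a pull request against dev
```
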
diff --git a/docs/docs/zh/contribute/join/subscribe.md b/docs/docs/zh/contribute/join/subscribe.md
index 1d79fdb1bf..f988d1b27b 100644
--- a/docs/docs/zh/contribute/join/subscribe.md
+++ b/docs/docs/zh/contribute/join/subscribe.md
@@ -23,3 +23,4 @@
 2. 接收确认邮件并回复。 完成步骤1后,您将收到一封来自dev-help@dolphinscheduler.apache.org的确认邮件(如未收到,请确认邮件是否被自动归入垃圾邮件、推广邮件、订阅邮件等文件夹)。然后直接回复该邮件,或点击邮件里的链接快捷回复即可,主题和内容任意。
 
 3. 接收告别邮件。 完成以上步骤后,您会收到一封主题为GOODBYE from dev@dolphinscheduler.apache.org的告别邮件,至此您已成功取消订阅Apache DolphinScheduler的邮件列表,以后将不会再接收来自dev@dolphinscheduler.apache.org的邮件通知。
+
diff --git a/docs/docs/zh/contribute/join/unit-test.md b/docs/docs/zh/contribute/join/unit-test.md
index 6670aec7af..5d15a9ae5d 100644
--- a/docs/docs/zh/contribute/join/unit-test.md
+++ b/docs/docs/zh/contribute/join/unit-test.md
@@ -1,23 +1,33 @@
 ## Unit Test 覆盖率
-Unit Test 
+
+Unit Test
+
 ### 1.写单元测试的收益
+
 * 单元测试能帮助每个人深入代码细节,了解代码的功能。
 * 通过测试用例我们能发现 bug,并提交代码的健壮性。
 * 测试用例同时也是代码的 demo 用法。
+
 ### 2.单元测试用例的一些设计原则
+
 * 应该精心设计好步骤,颗粒度和组合条件。
 * 注意边界条件。
 * 单元测试也应该好好设计,不要写无用的代码。
 * 当你发现一个`方法`很难写单元测试时,如果可以确认这个`方法`是`臭代码`,那么就和开发者一起重构它。
+
 <!-- markdown-link-check-disable -->
 * DolphinScheduler: [mockito](http://site.mockito.org/). 下面是一些开发向导: [mockito tutorial](https://www.baeldung.com/bdd-mockito), [mockito refcard](https://dzone.com/refcardz/mockito)
+
 <!-- markdown-link-check-enable -->
 * TDD(可选):当你开始写一个新的功能时,你可以试着先写测试用例。
+
 ### 3.测试覆盖率设定值
+
 * 在现阶段,Delta 更改代码的测试覆盖设定值为:>=60%,越高越好。
 * 我们可以在这个页面中看到测试报告: https://codecov.io/gh/apache/dolphinscheduler
 
 ## 单元测试基本准则
+
 ### 1: 隔离性与单一性
 
 一个测试用例应该精确到方法级别,并应该能够单独执行该测试用例。同时关注点也始终在该方法上(只测试该方法)。
@@ -67,6 +77,7 @@ Unit Test
 3:断言尽可能采用肯定断言而非否定断言,断言尽可能在一个预知结果范围内,或者是准确的数值,(否则有可能会导致一些不符合你的实际预期但是通过了断言)除非你的代码只关心他是否为空。
 
 ### 8:一些单测的注意点
+
 1:Thread.sleep()
 
 测试代码中尽量不要使用 Thread.sleep,这让测试变得不稳定,可能会因为环境或者负载而意外导致失败。建议采用以下方式:
@@ -93,13 +104,16 @@ public void testMethod() {
   }
 }
 ```
+
 你应该这样做:
+
 ```
 @Test
 public void testMethod() throws MyException {
     // Some code
 }
 ```
+
 4:测试异常情况
 
 当你需要进行异常情况测试时,应该避免在测试代码中包含多个方法的调用(尤其是有多个可以引发相同异常的方法),同时应该明确说明你要测试什么。
diff --git a/docs/docs/zh/contribute/log-specification.md b/docs/docs/zh/contribute/log-specification.md
index 4ca68a71e3..04d3103fff 100644
--- a/docs/docs/zh/contribute/log-specification.md
+++ b/docs/docs/zh/contribute/log-specification.md
@@ -47,4 +47,5 @@ Master模块和Worker模块的日志打印使用如下格式。即在打印的
 - 异常处理时禁止使用printStackTrace()。该方法会将异常堆栈信息打印到标准错误输出中。
 - 禁止分行打印日志。日志的内容需要与日志格式中的相关信息关联,如果分行打印会导致日志内容与时间等信息匹配不上,并且在大量日志环境下导致日志混合,会加大日志检索难度。
 - 禁止使用"+"运算符对日志内容进行拼接。使用占位符进行日志格式化打印,提高内存使用效率。
-- 日志内容中包括对象实例时,需要确保重写toString()方法,防止打印无意义的hashcode。
\ No newline at end of file
+- 日志内容中包括对象实例时,需要确保重写toString()方法,防止打印无意义的hashcode。
+
diff --git a/docs/docs/zh/contribute/release/release-post.md b/docs/docs/zh/contribute/release/release-post.md
index 07050594a8..685387ee5e 100644
--- a/docs/docs/zh/contribute/release/release-post.md
+++ b/docs/docs/zh/contribute/release/release-post.md
@@ -27,4 +27,4 @@
 ## 获取全部的贡献者
 
 当您想要发布新版本的新闻或公告时,您可能需要当前版本的所有贡献者,您可以使用 git 命令 `git log --pretty="%an" <PREVIOUS-RELEASE-SHA>..<CURRENT-RELEASE-SHA> | sort | uniq`
-(将对应的版本改成两个版本的 tag 值)自动生成 git 作者姓名。
\ No newline at end of file
+(将对应的版本改成两个版本的 tag 值)自动生成 git 作者姓名。
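
As a usage sketch of the command above, with two hypothetical tags standing in for the previous and current release:

```shell
# 3.0.0 and 3.1.0 are placeholder tags; substitute the real release tags.
git log --pretty="%an" 3.0.0..3.1.0 | sort | uniq
```
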
diff --git a/docs/docs/zh/contribute/release/release-prepare.md b/docs/docs/zh/contribute/release/release-prepare.md
index 9dedd36dd3..7f4a2b1fd6 100644
--- a/docs/docs/zh/contribute/release/release-prepare.md
+++ b/docs/docs/zh/contribute/release/release-prepare.md
@@ -4,9 +4,9 @@
 
 和上一个版本比较,如果有依赖及版本发生了变化,当前版本的 `release-docs` 需要被更新到最新
 
- - `dolphinscheduler-dist/release-docs/LICENSE`
- - `dolphinscheduler-dist/release-docs/NOTICE`
- - `dolphinscheduler-dist/release-docs/licenses`
+- `dolphinscheduler-dist/release-docs/LICENSE`
+- `dolphinscheduler-dist/release-docs/NOTICE`
+- `dolphinscheduler-dist/release-docs/licenses`
 
 ## 更新版本
 
@@ -27,6 +27,7 @@
 - 修改文档(docs模块)中的版本号:
   - 将 `docs` 文件夹下文件的占位符 `<version>` (除了 pom.xml 相关的) 修改成 `x.y.z`
   - 新增历史版本
-     - `docs/docs/en/history-versions.md` 和 `docs/docs/zh/history-versions.md`: 增加新的历史版本为 `x.y.z`
+    - `docs/docs/en/history-versions.md` 和 `docs/docs/zh/history-versions.md`: 增加新的历史版本为 `x.y.z`
   - 修改文档 sidebar
     - `docs/configs/docsdev.js`: 将里面的 `/dev/` 修改成 `/x.y.z/`
+
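
A rough sketch of the version bump described above; the exact file set and the pom.xml-related exclusions still need to be confirmed by the release manager, so treat the commands as illustrative only.

```shell
VERSION=x.y.z
# Replace the <version> placeholder across the docs (manually skip the pom.xml-related snippets).
grep -rl '<version>' docs/docs | xargs sed -i "s#<version>#${VERSION}#g"
# Point the docs sidebar at the released version instead of dev.
sed -i "s#/dev/#/${VERSION}/#g" docs/configs/docsdev.js
```
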
diff --git a/docs/docs/zh/contribute/release/release.md b/docs/docs/zh/contribute/release/release.md
index 71fb7cf1c3..8a50b12c2c 100644
--- a/docs/docs/zh/contribute/release/release.md
+++ b/docs/docs/zh/contribute/release/release.md
@@ -79,11 +79,14 @@ You selected this USER-ID:
 Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
 You need a Passphrase to protect your secret key. # 输入apache登录密码
 ```
+
 注意:如果遇到以下错误:
+
 ```
 gpg: cancelled by user
 gpg: Key generation canceled.
 ```
+
 需要使用自己的用户登录服务器,而不是root切到自己的账户
 
 ### 查看生成的key
@@ -116,7 +119,6 @@ gpg --keyserver hkp://pool.sks-keyservers.net --send-key 85E11560
 http://keyserver.ubuntu.com:11371/pks/lookup?search=${用户名}&fingerprint=on&op=index
 备用公钥服务器 gpg --keyserver hkp://keyserver.ubuntu.com --send-key ${公钥ID}
 
-
 ## 发布Apache Maven中央仓库
 
 ### 设置 `settings-security.xml` 和 `settings.xml` 文件
@@ -152,7 +154,7 @@ A_USERNAME=<YOUR-APACHE-USERNAME>
 ```
 
 > 注意:设置环境变量后,我们可以直接在你的 bash 中使用该变量,而无需更改任何内容。例如,我们可以直接使用命令 `git clone -b "${VERSION}"-prepare https://github.com/apache/dolphinscheduler.git`
-> 来克隆发布分支,他会自动将其中的 `"${VERSION}"` 转化成你设置的值 `<THE-VERSION-YOU-RELEASE>`。 但是您必须在一些非 bash 步骤中手动更改 
+> 来克隆发布分支,他会自动将其中的 `"${VERSION}"` 转化成你设置的值 `<THE-VERSION-YOU-RELEASE>`。 但是您必须在一些非 bash 步骤中手动更改
 > `<VERSION>` 为对应的版本号,例如发起投票中的内容。我们使用 `<VERSION>` 而不是 `"${VERSION}"` 来提示 release manager 他们必须手动更改这部分内容
 
 ### 创建发布分支
@@ -213,7 +215,7 @@ git push origin --tags
 
 > 注意1:因为 Github 不再支持在 HTTPS 协议中使用原生密码在,所以在这一步你应该使用 github token 作为密码。你可以通过 https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating -a-personal-access-token
 > 了解更多如果创建 token 的信息。
-
+>
 > 注意2:命令完成后,会自动创建 `release.properties` 文件和 `*.Backup` 文件,它们在下面的命令中是需要的,不要删除它们
 
 <!-- markdown-link-check-enable -->
@@ -293,6 +295,7 @@ cd ~/ds_svn/dev/dolphinscheduler
 svn add *
 svn --username="${A_USERNAME}" commit -m "release ${VERSION}"
 ```
+
 ## 检查发布结果
 
 ### 检查sha512哈希
@@ -383,13 +386,13 @@ cd ../
 ### 投票阶段
 
 1. DolphinScheduler社区投票,发起投票邮件到`dev@dolphinscheduler.apache.org`。PMC需要先按照文档检查版本的正确性,然后再进行投票。
-经过至少72小时并统计到至少3个`+1 并且没有-1 PMC member`票后,即可进入下一阶段。
+   经过至少72小时并统计到至少3个`+1 并且没有-1 PMC member`票后,即可进入下一阶段。
 
 2. 宣布投票结果,发起投票结果邮件到`dev@dolphinscheduler.apache.org`。
 
 ### 投票模板
 
- 1. DolphinScheduler社区投票模板
+1. DolphinScheduler社区投票模板
 
 标题:
 
@@ -456,7 +459,6 @@ xxx
 Thanks everyone for taking time to check this release and help us.
 ```
 
-
 ## 完成发布
 
 ### 将源码和二进制包从svn的dev目录移动到release目录
@@ -532,3 +534,4 @@ DolphinScheduler Resources:
 - Mailing list: dev@dolphinscheduler.apache.org
 - Documents: https://dolphinscheduler.apache.org/zh-cn/docs/<VERSION>/user_doc/about/introduction.html
 ```
+
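
For the sha512 and signature checks mentioned above, a minimal verification sketch; it assumes `VERSION` was exported as in the environment-variable step and that the artifact sits in the current directory, and checksum tooling names differ across platforms.

```shell
gpg --verify "apache-dolphinscheduler-${VERSION}-src.tar.gz.asc" "apache-dolphinscheduler-${VERSION}-src.tar.gz"
sha512sum -c "apache-dolphinscheduler-${VERSION}-src.tar.gz.sha512"   # on macOS: shasum -a 512 -c ...
```
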
diff --git a/docs/docs/zh/guide/alert/dingtalk.md b/docs/docs/zh/guide/alert/dingtalk.md
index a6cba29443..b93d1dd714 100644
--- a/docs/docs/zh/guide/alert/dingtalk.md
+++ b/docs/docs/zh/guide/alert/dingtalk.md
@@ -7,20 +7,28 @@
 参数配置
 
 * Webhook
+
   > 格式如下:https://oapi.dingtalk.com/robot/send?access_token=XXXXXX
+
 * Keyword
+
   > 安全设置的自定义关键词
+
 * Secret
+
   > 安全设置的加签
+
 * 消息类型
+
   > 支持 text 和 markdown 两种类型
 
 自定义机器人发送消息时,可以通过手机号码指定“被@人列表”。在“被@人列表”里面的人员收到该消息时,会有@消息提醒。免打扰会话仍然通知提醒,首屏出现“有人@你”
 * @Mobiles
-  > 被@人的手机号
-* @UserIds
-  > 被@人的用户userid
-* @All
-  > 是否@所有人
+
+  > 被@人的手机号
+* @UserIds
+  > 被@人的用户userid
+* @All
+  > 是否@所有人
 
 [钉钉自定义机器人接入开发文档](https://open.dingtalk.com/document/robots/custom-robot-access)
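
To sanity-check a robot configured as above, an equivalent request can be sent by hand; the access token, keyword and mobile number below are placeholders, and the body follows DingTalk's custom-robot text message format.

```shell
curl -s 'https://oapi.dingtalk.com/robot/send?access_token=XXXXXX' \
  -H 'Content-Type: application/json' \
  -d '{
        "msgtype": "text",
        "text": { "content": "keyword: DolphinScheduler alert test" },
        "at": { "atMobiles": ["13800000000"], "isAtAll": false }
      }'
```
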
diff --git a/docs/docs/zh/guide/alert/email.md b/docs/docs/zh/guide/alert/email.md
index b636b03af8..d850b56099 100644
--- a/docs/docs/zh/guide/alert/email.md
+++ b/docs/docs/zh/guide/alert/email.md
@@ -1,6 +1,7 @@
 # Email
+
 如果需要使用`Email`进行告警,请在告警实例管理中创建告警实例,并选择Email插件。
 下面显示了 `Email` 配置示例::
 ![alert-email](../../../../img/alert/email-alter-setup1-en.png)
 ![alert-email](../../../../img/alert/email-alter-setup2-en.png)
-![alert-email](../../../../img/alert/email-alter-setup3-en.png)
\ No newline at end of file
+![alert-email](../../../../img/alert/email-alter-setup3-en.png)
diff --git a/docs/docs/zh/guide/alert/enterprise-webexteams.md b/docs/docs/zh/guide/alert/enterprise-webexteams.md
index bb4d492e44..c5346e3bc2 100644
--- a/docs/docs/zh/guide/alert/enterprise-webexteams.md
+++ b/docs/docs/zh/guide/alert/enterprise-webexteams.md
@@ -9,16 +9,27 @@ WebexTeams的配置样例如下:
 ## 参数配置
 
 * botAccessToken
+
   > 在创建机器人时,获得的访问令牌
+
 * roomID
+
   > 接受消息的room ID(只支持一个ID)
+
 * toPersonId
+
   > 接受消息的用户ID(只支持一个ID)
+
 * toPersonEmail
+
   > 接受消息的用户邮箱(只支持一个邮箱)
+
 * atSomeoneInRoom
+
   > 如果消息目的地为room,被@人的用户邮箱,多个邮箱用英文逗号分隔
+
 * destination
+
   > 消息目的地,一条消息只支持一个目的地
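
As an illustration of how these parameters map onto a raw request (not the plugin's internal code), a bot token and room ID can be exercised directly against the Webex messages API; both values below are placeholders.

```shell
curl -s 'https://webexapis.com/v1/messages' \
  -H 'Authorization: Bearer <botAccessToken>' \
  -H 'Content-Type: application/json' \
  -d '{ "roomId": "<roomID>", "text": "DolphinScheduler alert test" }'
```
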
 
 ## 创建一个机器人
diff --git a/docs/docs/zh/guide/alert/enterprise-wechat.md b/docs/docs/zh/guide/alert/enterprise-wechat.md
index 258d0ee9f8..63ceb3027a 100644
--- a/docs/docs/zh/guide/alert/enterprise-wechat.md
+++ b/docs/docs/zh/guide/alert/enterprise-wechat.md
@@ -66,4 +66,4 @@
 
 #### 参考文档
 
-群聊:https://work.weixin.qq.com/api/doc/90000/90135/90248
\ No newline at end of file
+群聊:https://work.weixin.qq.com/api/doc/90000/90135/90248
diff --git a/docs/docs/zh/guide/alert/feishu.md b/docs/docs/zh/guide/alert/feishu.md
index 89db7e8b35..6ac63a5229 100644
--- a/docs/docs/zh/guide/alert/feishu.md
+++ b/docs/docs/zh/guide/alert/feishu.md
@@ -7,6 +7,7 @@
 参数配置
 
 * Webhook
+
   > 复制机器人的webhook地址,如下图所示:
 
   ![alert-feishu-webhook](../../../../img/new_ui/dev/alert/alert_feishu_webhook.png)
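
For a quick check of the webhook copied above, a plain text message can be posted to it directly; the hook id is a placeholder, and a bot configured with 加签 additionally needs timestamp/sign fields.

```shell
curl -s 'https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxxx' \
  -H 'Content-Type: application/json' \
  -d '{ "msg_type": "text", "content": { "text": "DolphinScheduler alert test" } }'
```
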
diff --git a/docs/docs/zh/guide/alert/http.md b/docs/docs/zh/guide/alert/http.md
index e395bfead7..9f72723d6c 100644
--- a/docs/docs/zh/guide/alert/http.md
+++ b/docs/docs/zh/guide/alert/http.md
@@ -5,14 +5,23 @@
 ## 参数配置
 
 * URL
+
   > 访问的`Http`连接URL,需要包含协议、Host、路径,如果是GET方法可以添加参数
+
 * 请求方式
+
   > 选择该请求为POST或GET方法
+
 * 请求头
+
   > `Http`请求的完整请求头,以JSON为格式
+
 * 请求体
+
   > `Http`请求的完整请求体,以JSON为格式,GET方法不需要写该参数
+
 * 内容字段
+
   > 放置本次告警告警信息的字段名称
 
 ## 发送类型
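
The parameters above amount to an ordinary HTTP call; a hand-made equivalent against a hypothetical receiver looks like this, with `msg` standing in for the field configured as 内容字段.

```shell
curl -s -X POST 'https://example.com/alert/receive' \
  -H 'Content-Type: application/json' \
  -d '{ "msg": "DolphinScheduler alert test", "level": "WARN" }'
```
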
diff --git a/docs/docs/zh/guide/alert/script.md b/docs/docs/zh/guide/alert/script.md
index 7763f41854..583ffcfc3c 100644
--- a/docs/docs/zh/guide/alert/script.md
+++ b/docs/docs/zh/guide/alert/script.md
@@ -7,10 +7,15 @@
 参数配置
 
 * 自定义参数
+
   > 用户自定义的参数将被传入脚本执行
+
 * 脚本路径
+
   > 脚本在服务器上的文件位置
+
 * 脚本类型
+
   > 支持`Shell`脚本
 
 **_注意:_** 请注意脚本的读写权限与执行租户的关系
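
A minimal sketch of such a shell alert script; how the custom parameters and alert content arrive on the command line depends on the plugin version, so the argument handling here is an assumption.

```shell
#!/bin/bash
# Hypothetical custom alert script: append whatever arguments are passed to a log file.
# The executing tenant needs execute permission on this script and write permission on the log path.
printf '%s received alert: %s\n' "$(date '+%F %T')" "$*" >> /tmp/dolphinscheduler_alert.log
```
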
diff --git a/docs/docs/zh/guide/alert/telegram.md b/docs/docs/zh/guide/alert/telegram.md
index 643368c52d..73833587b4 100644
--- a/docs/docs/zh/guide/alert/telegram.md
+++ b/docs/docs/zh/guide/alert/telegram.md
@@ -8,23 +8,24 @@
 
 参数配置:
 * WebHook:
-  > 使用 Telegram 的机器人,发送消息的 WebHook。
-* botToken
-  > 创建 Telegram 的机器人,获取的访问令牌。
-* chatId
-  > 订阅的 Telegram 频道
-* parseMode
-  > 消息解析类型, 支持: txt、markdown、markdownV2、html
-* EnableProxy
-  > 开启代理
-* Proxy
-  > 代理地址
-* Port
-  > 代理端口
-* User
-  > 代理鉴权用户
-* Password
-  > 代理鉴权密码
+
+  > 使用 Telegram 的机器人,发送消息的 WebHook。
+* botToken
+  > 创建 Telegram 的机器人,获取的访问令牌。
+* chatId
+  > 订阅的 Telegram 频道
+* parseMode
+  > 消息解析类型, 支持: txt、markdown、markdownV2、html
+* EnableProxy
+  > 开启代理
+* Proxy
+  > 代理地址
+* Port
+  > 代理端口
+* User
+  > 代理鉴权用户
+* Password
+  > 代理鉴权密码
 
 **注意**:用户配置的 WebHook 需要能够接收和使用与 DolphinScheduler 构造的 HTTP POST 请求 BODY 相同的结构,JSON 结构如下:
 
@@ -36,6 +37,6 @@
 ```
 
 [Telegram 如何申请机器人,如何创建频道](https://core.telegram.org/bots)
-[Telegram 机器人开发文档](https://core.telegram.org/bots/api) 
+[Telegram 机器人开发文档](https://core.telegram.org/bots/api)
 [Telegram SendMessage 接口文档](https://core.telegram.org/bots/api#sendmessage)
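
For reference, the bot/chat pair configured above can be exercised directly against the sendMessage endpoint linked below; `<botToken>` and `<chatId>` are placeholders and `parse_mode` mirrors the parseMode option.

```shell
curl -s "https://api.telegram.org/bot<botToken>/sendMessage" \
  -d chat_id='<chatId>' \
  -d parse_mode='MarkdownV2' \
  -d text='DolphinScheduler alert test'
```
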
 
diff --git a/docs/docs/zh/guide/data-quality.md b/docs/docs/zh/guide/data-quality.md
index 50e2c92f14..4ee9c2a0b1 100644
--- a/docs/docs/zh/guide/data-quality.md
+++ b/docs/docs/zh/guide/data-quality.md
@@ -1,4 +1,5 @@
 # 概述
+
 ## 任务类型介绍
 
 数据质量任务是用于检查数据在集成、处理过程中的数据准确性。本版本的数据质量任务包括单表检查、单表自定义SQL检查、多表准确性以及两表值比对。数据质量任务的运行环境为Spark2.4.0,其他版本尚未进行过验证,用户可自行验证。
@@ -6,10 +7,11 @@
 - 数据质量任务的执行逻辑如下:
 
 > 用户在界面定义任务,用户输入值保存在`TaskParam`中
-运行任务时,`Master`会解析`TaskParam`,封装`DataQualityTask`所需要的参数下发至`Worker。
-Worker`运行数据质量任务,数据质量任务在运行结束之后将统计结果写入到指定的存储引擎中,当前数据质量任务结果存储在`dolphinscheduler`的`t_ds_dq_execute_result`表中
-`Worker`发送任务结果给`Master`,`Master`收到`TaskResponse`之后会判断任务类型是否为`DataQualityTask`,如果是的话会根据`taskInstanceId`从`t_ds_dq_execute_result`中读取相应的结果,然后根据用户配置好的检查方式,操作符和阈值进行结果判断,如果结果为失败的话,会根据用户配置好的的失败策略进行相应的操作,告警或者中断
-## 注意事项
+> 运行任务时,`Master`会解析`TaskParam`,封装`DataQualityTask`所需要的参数下发至`Worker`。
+> `Worker`运行数据质量任务,数据质量任务在运行结束之后将统计结果写入到指定的存储引擎中,当前数据质量任务结果存储在`dolphinscheduler`的`t_ds_dq_execute_result`表中
+> `Worker`发送任务结果给`Master`,`Master`收到`TaskResponse`之后会判断任务类型是否为`DataQualityTask`,如果是的话会根据`taskInstanceId`从`t_ds_dq_execute_result`中读取相应的结果,然后根据用户配置好的检查方式,操作符和阈值进行结果判断,如果结果为失败的话,会根据用户配置好的失败策略进行相应的操作,告警或者中断
+>
+## 注意事项
 
 添加配置信息:`<server-name>/conf/common.properties`
 
@@ -28,38 +30,40 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 
 - 校验公式:[校验方式][操作符][阈值],如果结果为真,则表明数据不符合期望,执行失败策略
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+  - [Expected-Actual][期望值-实际值]
+  - [Actual-Expected][实际值-期望值]
+  - [Actual/Expected][实际值/期望值]x100%
+  - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 操作符:=、>、>=、<、<=、!=
 - 期望值类型
-    - 固定值
-    - 日均值
-    - 周均值
-    - 月均值
-    - 最近7天均值
-    - 最近30天均值
-    - 源表总行数
-    - 目标表总行数
-    
+  - 固定值
+  - 日均值
+  - 周均值
+  - 月均值
+  - 最近7天均值
+  - 最近30天均值
+  - 源表总行数
+  - 目标表总行数
 - 例子
-    - 校验方式为:[Expected-Actual][期望值-实际值]
-    - [操作符]:>
-    - [阈值]:0
-    - 期望值类型:固定值=9。
-    
-    假设实际值为10,操作符为 >, 期望值为9,那么结果 10 -9 > 0 为真,那就意味列为空的行数据已经超过阈值,任务被判定为失败
+  - 校验方式为:[Expected-Actual][期望值-实际值]
+  - [操作符]:>
+  - [阈值]:0
+  - 期望值类型:固定值=9。
+
+  假设实际值为10,操作符为 >, 期望值为9,那么结果 10 -9 > 0 为真,那就意味列为空的行数据已经超过阈值,任务被判定为失败
 
 # 任务操作指南
+
 ## 单表检查之空值检查
+
 ### 检查介绍
+
 空值检查的目标是检查出指定列为空的行数,可将为空的行数与总行数或者指定阈值进行比较,如果大于某个阈值则判定为失败
 - 计算指定列为空的SQL语句如下:
 
-  ```sql
-  SELECT COUNT(*) AS miss FROM ${src_table} WHERE (${src_field} is null or ${src_field} = '') AND (${src_filter})
-  ```
+```sql
+SELECT COUNT(*) AS miss FROM ${src_table} WHERE (${src_field} is null or ${src_field} = '') AND (${src_filter})
+```
 
 - 计算表总行数的SQL如下:
 
@@ -68,6 +72,7 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
   ```
 
 ### 界面操作指南
+
 ![dataquality_null_check](../../../img/tasks/demo/null_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -75,21 +80,25 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 源过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 源表检查列:下拉选择检查列名
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之及时性检查
+
 ### 检查介绍
+
 及时性检查用于检查数据是否在预期时间内处理完成,可指定开始时间、结束时间来界定时间范围,如果在该时间范围内的数据量没有达到设定的阈值,那么会判断该检查任务为失败
+
 ### 界面操作指南
+
 ![dataquality_timeliness_check](../../../img/tasks/demo/timeliness_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -100,21 +109,25 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 结束时间:某个时间范围的结束时间
 - 时间格式:设置对应的时间格式
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之字段长度校验
+
 ### 检查介绍
+
 字段长度校验的目标是检查所选字段的长度是否满足预期,如果有存在不满足要求的数据,并且行数超过阈值则会判断任务为失败
+
 ### 界面操作指南
+
 ![dataquality_length_check](../../../img/tasks/demo/field_length_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -124,21 +137,25 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 逻辑操作符:=,>、>=、<、<=、!=
 - 字段长度限制:如标题
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之唯一性校验
+
 ### 检查介绍
+
 唯一性校验的目标是检查字段是否存在重复的情况,一般用于检验primary key是否有重复,如果存在重复且达到阈值,则会判断检查任务为失败
+
 ### 界面操作指南
+
 ![dataquality_uniqueness_check](../../../img/tasks/demo/uniqueness_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -146,21 +163,25 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 源过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 源表检查列:下拉选择检查列名
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之正则表达式校验
+
 ### 检查介绍
+
 正则表达式校验的目标是检查某字段的值的格式是否符合要求,例如时间格式、邮箱格式、身份证格式等等,如果存在不符合格式的数据并超过阈值,则会判断任务为失败
+
 ### 界面操作指南
+
 ![dataquality_regex_check](../../../img/tasks/demo/regexp_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -169,43 +190,52 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 源表检查列:下拉选择检查列名
 - 正则表达式:如标题
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之枚举值校验
+
 ### 检查介绍
+
 枚举值校验的目标是检查某字段的值是否在枚举值的范围内,如果存在不在枚举值范围里的数据并超过阈值,则会判断任务为失败
+
 ### 界面操作指南
+
 ![dataquality_enum_check](../../../img/tasks/demo/enumeration_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
 - 源数据表:下拉选择验证数据所在表
 - 源表过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 源表检查列:下拉选择检查列名
-- 枚举值列表:用英文逗号,隔开 
+- 枚举值列表:用英文逗号,隔开
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
+
 ## 单表检查之表行数校验
+
 ### 检查介绍
+
 表行数校验的目标是检查表的行数是否达到预期的值,如果行数未达标,则会判断任务为失败
+
 ### 界面操作指南
+
 ![dataquality_count_check](../../../img/tasks/demo/table_count_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -213,55 +243,63 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 源过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 源表检查列:下拉选择检查列名
 - 校验方式:
-    - [Expected-Actual][期望值-实际值]
-    - [Actual-Expected][实际值-期望值]
-    - [Actual/Expected][实际值/期望值]x100%
-    - [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
+- [Expected-Actual][期望值-实际值]
+- [Actual-Expected][实际值-期望值]
+- [Actual/Expected][实际值/期望值]x100%
+- [(Expected-Actual)/Expected][(期望值-实际值)/期望值]x100%
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 单表检查之自定义SQL检查
+
 ### 检查介绍
+
 ### 界面操作指南
+
 ![dataquality_custom_sql_check](../../../img/tasks/demo/custom_sql_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
 - 源数据表:下拉选择要验证数据所在表
-- 实际值名:为统计值计算SQL中的别名,如max_num 
+- 实际值名:为统计值计算SQL中的别名,如max_num
 - 实际值计算SQL: 用于输出实际值的SQL、
-    - 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
-    - select max(a) as max_num from ${src_table},表名必须这么填
+- 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
+- select max(a) as max_num from ${src_table},表名必须这么填
 - 源过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 校验方式:
 - 校验操作符:=,>、>=、<、<=、!=
 - 阈值:公式中用于比较的值
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型
 
 ## 多表检查之准确性检查
+
 ### 检查介绍
+
 准确性检查是通过比较两个表之间所选字段的数据记录的准确性差异,例子如下
 - 表test1
 
 | c1 | c2 |
-| :---: | :---: |
-| a | 1 | 
-| b | 2|
+|:--:|:--:|
+| a  | 1  |
+| b  | 2  |
 
 - 表test2
 
 | c21 | c22 |
-| :---: | :---: |
-| a | 1 | 
-| b | 3|
+|:---:|:---:|
+|  a  |  1  |
+|  b  |  3  |
+
 如果对比c1和c21中的数据,则表test1和test2完全一致。 如果对比c2和c22则表test1和表test2中的数据则存在不一致了。
+
 ### 界面操作指南
+
 ![dataquality_multi_table_accuracy_check](../../../img/tasks/demo/multi_table_accuracy_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
@@ -272,42 +310,53 @@ data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
 - 目标数据表:下拉选择要验证数据所在表
 - 目标过滤条件:如标题,统计表总行数的时候也会用到,选填
 - 检查列:
-    - 分别填写 源数据列,操作符,目标数据列
+- 分别填写 源数据列,操作符,目标数据列
 - 校验方式:选择想要的校验方式
 - 操作符:=,>、>=、<、<=、!=
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 - 期望值类型:在下拉菜单中选择所要的类型,这里只适合选择SrcTableTotalRow、TargetTableTotalRow和固定值
+
 ## 两表检查之值比对
+
 ### 检查介绍
+
 两表值比对允许用户对两张表自定义不同的SQL统计出相应的值进行比对,例如针对源表A统计出某一列的金额总值sum1,针对目标表统计出某一列的金额总值sum2,将sum1和sum2进行比较来判定检查结果
+
 ### 界面操作指南
+
 ![dataquality_multi_table_comparison_check](../../../img/tasks/demo/multi_table_comparison_check.png)
 - 源数据类型:选择MySQL、PostgreSQL等
 - 源数据源:源数据类型下对应的数据源
 - 源数据表:要验证数据所在表
 - 实际值名:为实际值计算SQL中的别名,如max_age1
 - 实际值计算SQL: 用于输出实际值的SQL、
-    - 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
-    - select max(age) as max_age1 from ${src_table} 表名必须这么填
+- 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
+- select max(age) as max_age1 from ${src_table} 表名必须这么填
 - 目标数据类型:选择MySQL、PostgreSQL等
 - 目标数据源:源数据类型下对应的数据源
 - 目标数据表:要验证数据所在表
 - 期望值名:为期望值计算SQL中的别名,如max_age2
 - 期望值计算SQL: 用于输出期望值的SQL、
-    - 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
-    - select max(age) as max_age2 from ${target_table} 表名必须这么填
+- 注意点:该SQL必须为统计SQL,例如统计行数,计算最大值、最小值等
+- select max(age) as max_age2 from ${target_table} 表名必须这么填
 - 校验方式:选择想要的校验方式
 - 操作符:=,>、>=、<、<=、!=
 - 失败策略
-    - 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
-    - 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
+- 告警:数据质量任务失败了,DolphinScheduler任务结果为成功,发送告警
+- 阻断:数据质量任务失败了,DolphinScheduler任务结果为失败,发送告警
 
 ## 任务结果查看
+
 ![dataquality_result](../../../img/tasks/demo/result.png)
+
 ## 规则查看
+
 ### 规则列表
+
 ![dataquality_rule_list](../../../img/tasks/demo/rule_list.png)
+
 ### 规则详情
-![dataquality_rule_detail](../../../img/tasks/demo/rule_detail.png)
\ No newline at end of file
+
+![dataquality_rule_detail](../../../img/tasks/demo/rule_detail.png)
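
Looking back at the single-table null-value example earlier in this file, the check formula is plain arithmetic; the toy sketch below just restates that example (actual miss count 10, expected fixed value 9, operator `>`, threshold 0) and is not DolphinScheduler code.

```shell
actual=10; expected=9; threshold=0
# 10 - 9 > 0 is true, so the rows with empty values exceed the threshold and the check fails.
if [ $((actual - expected)) -gt "${threshold}" ]; then
  echo 'data quality check failed: alert or block according to the failure strategy'
fi
```
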
diff --git a/docs/docs/zh/guide/datasource/athena.md b/docs/docs/zh/guide/datasource/athena.md
index 77134714b8..447303ebca 100644
--- a/docs/docs/zh/guide/datasource/athena.md
+++ b/docs/docs/zh/guide/datasource/athena.md
@@ -2,7 +2,6 @@
 
 ![AWS Athena](../../../../img/new_ui/dev/datasource/athena.png)
 
-
 - 数据源:选择 ATHENA
 - 数据源名称:输入数据源的名称
 - 描述:输入数据源的描述
@@ -17,3 +16,4 @@
 - 否,使用前需请参考 [数据源配置](../howto/datasource-setting.md) 中的 "数据源中心" 章节激活数据源。
 - JDBC驱动配置参考文档 [athena-connect-with-jdbc](https://docs.amazonaws.cn/athena/latest/ug/connect-with-jdbc.html)
 - 驱动下载链接 [SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar](https://s3.cn-north-1.amazonaws.com.cn/athena-downloads-cn/drivers/JDBC/SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar)
+
diff --git a/docs/docs/zh/guide/expansion-reduction.md b/docs/docs/zh/guide/expansion-reduction.md
index 35faa15db0..5d3652bae1 100644
--- a/docs/docs/zh/guide/expansion-reduction.md
+++ b/docs/docs/zh/guide/expansion-reduction.md
@@ -1,12 +1,12 @@
 # DolphinScheduler扩容/缩容 文档
 
-
 ## 1. DolphinScheduler扩容文档
+
 本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.
 
 ```
- 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.
-       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.
+注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.
+      如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.
 ```
 
 ### 1.1. 基础软件安装(必装项请自行安装)
@@ -14,16 +14,17 @@
 * [必装] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  必装,请安装好后在/etc/profile下配置 JAVA_HOME 及 PATH 变量
 * [可选] 如果扩容的是worker类型的节点,需要考虑是否要安装外部客户端,比如Hadoop、Hive、Spark 的Client.
 
-
 ```markdown
- 注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的提交。
+注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的提交。
 ```
 
 ### 1.2. 获取安装包
+
 - 确认现有环境使用的DolphinScheduler是哪个版本,获取对应版本的安装包,如果版本不同,可能存在兼容性的问题.
 - 确认其他节点的统一安装目录,本文假设DolphinScheduler统一安装在 /opt/ 目录中,安装全路径为/opt/dolphinscheduler.
-- 请下载对应版本的安装包至服务器安装目录,解压并重名为dolphinscheduler存放在/opt目录中. 
+- 请下载对应版本的安装包至服务器安装目录,解压并重命名为dolphinscheduler存放在/opt目录中.
 - 添加数据库依赖包,本文使用Mysql数据库,添加mysql-connector-java驱动包到/opt/dolphinscheduler/lib目录中
+
 ```shell
 # 创建安装目录,安装目录请不要创建在/root、/home等高权限目录 
 mkdir -p /opt
@@ -35,7 +36,7 @@ mv apache-dolphinscheduler-<version>-bin  dolphinscheduler
 ```
 
 ```markdown
- 注意:安装包可以从现有的环境直接复制到扩容的物理机上使用.
+注意:安装包可以从现有的环境直接复制到扩容的物理机上使用.
 ```
 
 ### 1.3. 创建部署用户
@@ -56,53 +57,51 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
 ```
 
 ```markdown
- 注意:
- - 因为是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。
- - 如果发现/etc/sudoers文件中有"Default requiretty"这行,也请注释掉
- - 如果用到资源上传的话,还需要在`HDFS或者MinIO`上给该部署用户分配读写的权限
+注意:
+- 因为是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。
+- 如果发现/etc/sudoers文件中有"Default requiretty"这行,也请注释掉
+- 如果用到资源上传的话,还需要在`HDFS或者MinIO`上给该部署用户分配读写的权限
 ```
 
 ### 1.4. 修改配置
 
 - 从现有的节点比如Master/Worker节点,直接拷贝conf目录替换掉新增节点中的conf目录.拷贝之后检查一下配置项是否正确.
-    
-    ```markdown
-    重点检查:
-    datasource.properties 中的数据库连接信息. 
-    zookeeper.properties 中的连接zk的信息.
-    common.properties 中关于资源存储的配置信息(如果设置了hadoop,请检查是否存在core-site.xml和hdfs-site.xml配置文件).
-    dolphinscheduler_env.sh 中的环境变量
-    ````
 
+  ```markdown
+  重点检查:
+  datasource.properties 中的数据库连接信息. 
+  zookeeper.properties 中的连接zk的信息.
+  common.properties 中关于资源存储的配置信息(如果设置了hadoop,请检查是否存在core-site.xml和hdfs-site.xml配置文件).
+  dolphinscheduler_env.sh 中的环境变量
+  ```
 - 根据机器配置,修改 conf/env 目录下的 `dolphinscheduler_env.sh` 环境变量(以相关用到的软件都安装在/opt/soft下为例)
 
-    ```shell
-        export HADOOP_HOME=/opt/soft/hadoop
-        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-        #export SPARK_HOME1=/opt/soft/spark1
-        export SPARK_HOME2=/opt/soft/spark2
-        export PYTHON_HOME=/opt/soft/python
-        export JAVA_HOME=/opt/soft/java
-        export HIVE_HOME=/opt/soft/hive
-        export FLINK_HOME=/opt/soft/flink
-        export DATAX_HOME=/opt/soft/datax/bin/datax.py
-        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-    
-        ```
+  ```shell
+      export HADOOP_HOME=/opt/soft/hadoop
+      export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+      #export SPARK_HOME1=/opt/soft/spark1
+      export SPARK_HOME2=/opt/soft/spark2
+      export PYTHON_HOME=/opt/soft/python
+      export JAVA_HOME=/opt/soft/java
+      export HIVE_HOME=/opt/soft/hive
+      export FLINK_HOME=/opt/soft/flink
+      export DATAX_HOME=/opt/soft/datax/bin/datax.py
+      export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
 
-     `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉`
+      ```
 
+   `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉`
 
-- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
 
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
+  ```
+- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
 
- - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
-    
-    * 新增的master节点, 需要修改 ips 和 masters 参数.
-    * 新增的worker节点, 需要修改 ips 和  workers 参数.
+  ```shell
+  sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+  ```
+- 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+  * 新增的master节点, 需要修改 ips 和 masters 参数.
+  * 新增的worker节点, 需要修改 ips 和  workers 参数.
 
 ```shell
 #在哪些机器上新增部署DS服务,多个物理机之间用逗号隔开.
@@ -118,6 +117,7 @@ masters="现有master01,现有master02,ds1,ds2"
 workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
 
 ```
+
 - 如果扩容的是worker节点,需要设置worker分组.请参考安全中心[创建worker分组](./security.md)
 
 - 在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限
@@ -126,8 +126,6 @@ workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
 sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
 ```
 
-
-
 ### 1.4. 重启集群&验证
 
 - 重启集群
@@ -153,40 +151,42 @@ bash bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
 ```
 
 ```
- 注意: 使用stop-all.sh或者stop-all.sh的时候,如果执行该命令的物理机没有配置到所有机器的ssh免登陆的话,会提示输入密码
+注意: 使用start-all.sh或者stop-all.sh的时候,如果执行该命令的物理机没有配置到所有机器的ssh免登陆的话,会提示输入密码
 ```
 
-
 - 脚本完成后,使用`jps`命令查看各个节点服务是否启动(`jps`为`java JDK`自带)
 
 ```
-    MasterServer         ----- master服务
-    WorkerServer         ----- worker服务
-    ApiApplicationServer ----- api服务
-    AlertServer          ----- alert服务
+MasterServer         ----- master服务
+WorkerServer         ----- worker服务
+ApiApplicationServer ----- api服务
+AlertServer          ----- alert服务
 ```
 
 启动成功后,可以进行日志查看,日志统一存放于logs文件夹内
 
 ```日志路径
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
+logs/
+   ├── dolphinscheduler-alert-server.log
+   ├── dolphinscheduler-master-server.log
+   ├── dolphinscheduler-worker-server.log
+   ├── dolphinscheduler-api-server.log
 ```
+
 如果以上服务都正常启动且调度系统页面正常,在web系统的[监控中心]查看是否有扩容的Master或者Worker服务.如果存在,则扩容完成
 
 -----------------------------------------------------------------------------
 
 ## 2. 缩容
+
 缩容是针对现有的DolphinScheduler集群减少master或者worker服务,
 缩容一共分两个步骤,执行完以下两步,即可完成缩容操作.
 
 ### 2.1 停止缩容节点上的服务
- * 如果缩容master节点,要确定要缩容master服务所在的物理机,并在物理机上停止该master服务.
- * 如果缩容worker节点,要确定要缩容worker服务所在的物理机,并在物理机上停止worker服务.
- 
+
+* 如果缩容master节点,要确定要缩容master服务所在的物理机,并在物理机上停止该master服务.
+* 如果缩容worker节点,要确定要缩容worker服务所在的物理机,并在物理机上停止worker服务.
+
 ```shell
 停止命令:
 bin/stop-all.sh 停止所有服务
@@ -208,26 +208,25 @@ bash bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
 ```
 
 ```
- 注意: 使用stop-all.sh或者stop-all.sh的时候,如果没有执行该命令的机器没有配置到所有机器的ssh免登陆的话,会提示输入密码
+注意: 使用start-all.sh或者stop-all.sh的时候,如果执行该命令的机器没有配置到所有机器的ssh免登陆的话,会提示输入密码
 ```
 
 - 脚本完成后,使用`jps`命令查看各个节点服务是否成功关闭(`jps`为`java JDK`自带)
 
 ```
-    MasterServer         ----- master服务
-    WorkerServer         ----- worker服务
-    ApiApplicationServer ----- api服务
-    AlertServer          ----- alert服务
+MasterServer         ----- master服务
+WorkerServer         ----- worker服务
+ApiApplicationServer ----- api服务
+AlertServer          ----- alert服务
 ```
-如果对应的master服务或者worker服务不存在,则代表master/worker服务成功关闭.
 
+如果对应的master服务或者worker服务不存在,则代表master/worker服务成功关闭.
 
 ### 2.2 修改配置文件
 
- - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
-    
-    * 缩容master节点, 需要修改 ips 和 masters 参数.
-    * 缩容worker节点, 需要修改 ips 和  workers 参数.
+- 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+  * 缩容master节点, 需要修改 ips 和 masters 参数.
+  * 缩容worker节点, 需要修改 ips 和  workers 参数.
 
 ```shell
 #在哪些机器上部署DS服务,本机选localhost
@@ -243,3 +242,4 @@ masters="现有master01,现有master02,ds1,ds2"
 workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
 
 ```
+
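
After scaling out or in, the per-node `jps` check described above can be scripted; the host names below are placeholders and passwordless SSH from the current node is assumed, as in the rest of this guide.

```shell
for host in ds1 ds2 ds3 ds4; do
  echo "== ${host} =="
  ssh "${host}" "jps | grep -E 'MasterServer|WorkerServer|ApiApplicationServer|AlertServer' || echo 'no DolphinScheduler service running'"
done
```
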
diff --git a/docs/docs/zh/guide/healthcheck.md b/docs/docs/zh/guide/healthcheck.md
index 59f8d61341..4877df8a42 100644
--- a/docs/docs/zh/guide/healthcheck.md
+++ b/docs/docs/zh/guide/healthcheck.md
@@ -39,3 +39,4 @@ curl --request GET 'http://localhost:50053/actuator/health'
 ```
 
 > 注意: 如果你修改过默认的服务端口和地址,那么你需要修改 IP+Port 为你修改后的值。
+
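
Building on the curl example above, a small wait-until-healthy loop is sometimes convenient; the port is the one used in the example, so adjust it per service if you changed the defaults.

```shell
until curl -fs 'http://localhost:50053/actuator/health' | grep -q '"status":"UP"'; do
  echo 'waiting for the service to become healthy...'
  sleep 5
done
```
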
diff --git a/docs/docs/zh/guide/howto/datasource-setting.md b/docs/docs/zh/guide/howto/datasource-setting.md
index f9a223ee84..ce8f199ada 100644
--- a/docs/docs/zh/guide/howto/datasource-setting.md
+++ b/docs/docs/zh/guide/howto/datasource-setting.md
@@ -25,7 +25,6 @@ DolphinScheduler 元数据存储在关系型数据库中,目前支持 PostgreS
 
 > 如果使用 MySQL 需要手动下载 [mysql-connector-java 驱动][mysql] (8.0.16) 并移动到 DolphinScheduler 的每个模块的 libs 目录下,其中包括 `api-server/libs` 和 `alert-server/libs` 和 `master-server/libs` 和 `worker-server/libs`。
 
-
 对于mysql 5.6 / 5.7:
 
 ```shell
@@ -75,6 +74,7 @@ pg_ctl reload
 然后修改`./bin/env/dolphinscheduler_env.sh`,将username和password改成你在上一步中设置的用户名{user}和密码{password}
 
 对于 MySQL:
+
 ```shell
 # for mysql
 export DATABASE=${DATABASE:-mysql}
@@ -82,9 +82,10 @@ export SPRING_PROFILES_ACTIVE=${DATABASE}
 export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
 export SPRING_DATASOURCE_USERNAME={user}
 export SPRING_DATASOURCE_PASSWORD={password}
-```  
+```
 
 对于 PostgreSQL:
+
 ```shell
 # for postgresql
 export DATABASE=${DATABASE:-postgresql}
@@ -123,3 +124,4 @@ DolphinScheduler 分发的二进制包中包含他们。这部分数据源主要
 > 则仅支持 [8.0.16 及以上](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)的版本。
 
 [mysql]: https://downloads.MySQL.com/archives/c-j/
+
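
After editing `./bin/env/dolphinscheduler_env.sh` as above, a quick connectivity check against the metadata database can save a failed first start; `{user}` and `{password}` are the placeholders from the earlier steps, and the host/port assume the default MySQL URL (use psql for the PostgreSQL variant).

```shell
mysql -h 127.0.0.1 -P 3306 -u '{user}' -p'{password}' -e 'USE dolphinscheduler; SELECT 1;'
```
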
diff --git a/docs/docs/zh/guide/installation/pseudo-cluster.md b/docs/docs/zh/guide/installation/pseudo-cluster.md
index 3856f884c7..ab1c1a4578 100644
--- a/docs/docs/zh/guide/installation/pseudo-cluster.md
+++ b/docs/docs/zh/guide/installation/pseudo-cluster.md
@@ -186,10 +186,11 @@ bash ./bin/dolphinscheduler-daemon.sh stop alert-server
 > 服务需求提供便利。意味着您可以基于不同的环境变量来启动各个服务,只需要在对应服务中配置 `<service>/conf/dolphinscheduler_env.sh` 然后通过 `<service>/bin/start.sh`
 > 命令启动即可。但是如果您使用命令 `/bin/dolphinscheduler-daemon.sh start <service>` 启动服务器,它将会用文件 `bin/env/dolphinscheduler_env.sh`
 > 覆盖 `<service>/conf/dolphinscheduler_env.sh` 然后启动服务,目的是为了减少用户修改配置的成本.
-
+>
 > **_注意2:_**:服务用途请具体参见《系统架构设计》小节。Python gateway service 默认与 api-server 一起启动,如果您不想启动 Python gateway service
 > 请通过更改 api-server 配置文件 `api-server/conf/application.yaml` 中的 `python-gateway.enabled : false` 来禁用它。
 
 [jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
 [zookeeper]: https://zookeeper.apache.org/releases.html
 [issue]: https://github.com/apache/dolphinscheduler/issues/6597
+
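
As a sketch of note 1 above, starting a single service with its own environment file instead of the global `bin/env/dolphinscheduler_env.sh` (api-server is only an example service):

```shell
vi api-server/conf/dolphinscheduler_env.sh   # configure the service-specific variables
bash api-server/bin/start.sh                 # start only this service with that file
```
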
diff --git a/docs/docs/zh/guide/integration/rainbond.md b/docs/docs/zh/guide/integration/rainbond.md
index 6c39638590..f2f90ccf17 100644
--- a/docs/docs/zh/guide/integration/rainbond.md
+++ b/docs/docs/zh/guide/integration/rainbond.md
@@ -14,12 +14,12 @@
 
 * 点击 DolphinScheduler 右侧的 `安装` 进入安装页面,填写对应的信息,点击确定即可开始安装,自动跳转至应用视图。
 
-| 选择项   | 说明                                                         |
-| -------- | ------------------------------------------------------------ |
-| 团队名称 | 用户自建的工作空间,以命名空间隔离                           |
-| 集群名称 | 选择 DolphinScheduler 被部署到哪一个 K8s 集群                |
+| 选择项  |                      说明                      |
+|------|----------------------------------------------|
+| 团队名称 | 用户自建的工作空间,以命名空间隔离                            |
+| 集群名称 | 选择 DolphinScheduler 被部署到哪一个 K8s 集群           |
 | 选择应用 | 选择 DolphinScheduler 被部署到哪一个应用,应用中包含有若干有关联的组件 |
-| 应用版本 | 选择 DolphinScheduler 的版本,目前可选版本为 3.0.0-beta2     |
+| 应用版本 | 选择 DolphinScheduler 的版本,目前可选版本为 3.0.0-beta2  |
 
 ![](../../../../img/rainbond/install-dolphinscheduler.png)
 
@@ -61,5 +61,7 @@ Worker 服务默认安装了 Python3,使用时可以添加环境变量  `PYTHO
    * FILE_PATH:/opt/soft
    * LOCK_PATH:/opt/soft
 3. 更新组件,初始化插件会自动下载 `Datax` 并解压到 `/opt/soft`目录下。
-![](../../../../img/rainbond/plugin.png)
+   ![](../../../../img/rainbond/plugin.png)
+
 ---
+
diff --git a/docs/docs/zh/guide/metrics/metrics.md b/docs/docs/zh/guide/metrics/metrics.md
index adaffbbbbe..4868ba7686 100644
--- a/docs/docs/zh/guide/metrics/metrics.md
+++ b/docs/docs/zh/guide/metrics/metrics.md
@@ -5,11 +5,11 @@ Apache DolphinScheduler通过向外透出指标来提高系统的监控告警能
 
 ## 快速上手
 
-- 我们提供Apache DolphinScheduler `standalone` 模式下采集并透出指标的能力,提供用户轻松快速的体验。 
+- 我们提供Apache DolphinScheduler `standalone` 模式下采集并透出指标的能力,提供用户轻松快速的体验。
 - 当您在`standalone`模式下触发任务后,您可通过链接 `http://localhost:12345/dolphinscheduler/actuator/metrics` 访问生成的metrics列表。
 - 当您在`standalone`模式下触发任务后,您可通过链接 `http://localhost:12345/dolphinscheduler/actuator/prometheus` 访问`prometheus格式`指标。
 - 为了给您提供一个一站式的`Prometheus` + `Grafana`体验, 我们已经为您准备好了开箱即用的 `Grafana` 配置。您可在`dolphinscheduler-meter/resources/grafana`找到`Grafana`面板配置。
-您可直接将这些配置导入您的`Grafana`实例中。
+  您可直接将这些配置导入您的`Grafana`实例中。
 - 如果您想通过`docker`方式体验,可使用如下命令启动我们为您准备好的开箱即用的`Prometheus`和`Grafana`:
 
 ```shell
@@ -17,12 +17,12 @@ cd dolphinscheduler-meter/src/main/resources/grafana-demo
 docker compose up
 ```
 
-然后,您即可通过http://localhost/3001`链接访问`Grafana`面板。    
+然后,您即可通过`http://localhost/3001`链接访问`Grafana`面板。
 
 ![image.png](../../../../img/metrics/metrics-master.png)
 ![image.png](../../../../img/metrics/metrics-worker.png)
 ![image.png](../../../../img/metrics/metrics-datasource.png)
-      
+
 - 如果您想在`集群`模式下体验指标,请参照下面的[配置](#配置)一栏:
 
 ## 配置
@@ -43,13 +43,13 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
 ## 命名规则 & 命名映射
 
 - Apache DolphinScheduler指标命名遵循[Micrometer](https://github.com/micrometer-metrics/micrometer-docs/blob/main/src/docs/concepts/naming.adoc)
-官方推荐的命名方式。
+  官方推荐的命名方式。
 - `Micrometer` 会根据您配置的外部指标系统自动将指标名称转化成适合您指标系统的格式。目前,我们只支持`Prometheus Exporter`,但是多样化的指标格式将会持续贡献给用户。
 
 ### Prometheus
 
 - 指标名中的点会被映射为下划线
-- 以数字开头的指标名会被加上`m_`前缀 
+- 以数字开头的指标名会被加上`m_`前缀
 - COUNTER: 如果没有以`_total`结尾,会自动加上此后缀
 - LONG_TASK_TIMER: 如果没有以`_timer_seconds`结尾,会自动加上此后缀
 - GAUGE: 如果没有以`_baseUnit`结尾,会自动加上此后缀
@@ -57,7 +57,7 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
 ## Dolphin Scheduler指标清单
 
 - Dolphin Scheduler按照组成部分进行指标分类,如:`master server`, `worker server`, `api server` and `alert server`。
-- 尽管任务 / 工作流相关指标是由 `master server` 和 `worker server` 透出的,我们将这两块指标单独罗列出来,以方便您对任务 / 工作流的监控。  
+- 尽管任务 / 工作流相关指标是由 `master server` 和 `worker server` 透出的,我们将这两块指标单独罗列出来,以方便您对任务 / 工作流的监控。
 
 ### 任务相关指标
 
@@ -67,19 +67,18 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
   - success:成功完成的任务数量
   - fail:失败的任务数量
   - stop:暂停的任务数量
-  - retry:重试的任务数量 
+  - retry:重试的任务数量
   - submit:已提交的任务数量
   - failover:容错的任务数量
 - ds.task.dispatch.count: (counter) 分发到worker上的任务数量
 - ds.task.dispatch.failure.count: (counter) 分发失败的任务数量,重试也包含在内
 - ds.task.dispatch.error.count: (counter) 分发任务的错误数量
 - ds.task.execution.count.by.type: (counter) 任务执行数量,按标签`task_type`聚类
-- ds.task.running: (gauge) 正在运行的任务数量 
-- ds.task.prepared: (gauge) 准备好且待提交的任务数量 
-- ds.task.execution.count: (counter) 已执行的任务数量  
+- ds.task.running: (gauge) 正在运行的任务数量
+- ds.task.prepared: (gauge) 准备好且待提交的任务数量
+- ds.task.execution.count: (counter) 已执行的任务数量
 - ds.task.execution.duration: (histogram) 任务执行时长
 
-
 ### 工作流相关指标
 
 - ds.workflow.create.command.count: (counter) 工作量创建并插入的命令数量
@@ -89,14 +88,14 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
   - timeout:运行超时的工作流实例数量
   - finish:已完成的工作流实例数量,包含成功和失败
   - success:运行成功的工作流实例数量
-  - fail:运行失败的工作流实例数量 
-  - stop:停止的工作流实例数量 
+  - fail:运行失败的工作流实例数量
+  - stop:停止的工作流实例数量
   - failover:容错的工作流实例数量
 
 ### Master Server指标
 
 - ds.master.overload.count: (counter) master过载次数
-- ds.master.consume.command.count: (counter) master消耗指令数量 
+- ds.master.consume.command.count: (counter) master消耗指令数量
 - ds.master.scheduler.failover.check.count: (counter) scheduler (master) 容错检查次数
 - ds.master.scheduler.failover.check.time: (histogram) scheduler (master) 容错检查耗时
 - ds.master.quartz.job.executed: 已执行quartz任务数量
@@ -125,7 +124,7 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
 
 - hikaricp.connections: 连接综述
 - hikaricp.connections.creation: 连接创建时间 (包含最长时间,创建数量和时间总和)
-- hikaricp.connections.acquire: 连接获取时间 (包含最长时间,创建数量和时间总和) 
+- hikaricp.connections.acquire: 连接获取时间 (包含最长时间,创建数量和时间总和)
 - hikaricp.connections.usage: 连接使用时长 (包含最长时间,创建数量和时间总和)
 - hikaricp.connections.max: 最大连接数量
 - hikaricp.connections.min: 最小连接数量
@@ -176,3 +175,4 @@ metrics exporter端口`server.port`是在application.yaml里定义的: master: `
 - system.load.average.1m: 系统的平均负荷(1分钟)
 - logback.events: 日志时间数量,以标签`level`聚类
 - http.server.requests: http请求总数
+
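
The standalone endpoints quoted at the top of this page can be inspected directly; the grep pattern below relies on the Prometheus name mapping described above (dots become underscores), so treat it as illustrative.

```shell
curl -s http://localhost:12345/dolphinscheduler/actuator/metrics | head
curl -s http://localhost:12345/dolphinscheduler/actuator/prometheus | grep '^ds_task' | head
```
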
diff --git a/docs/docs/zh/guide/monitor.md b/docs/docs/zh/guide/monitor.md
index 63a989df52..b45d7a59b3 100644
--- a/docs/docs/zh/guide/monitor.md
+++ b/docs/docs/zh/guide/monitor.md
@@ -21,13 +21,13 @@
 - 主要是 DB 的健康状况
 
 ![db](../../../img/new_ui/dev/monitor/db.png)
- 
+
 ## 统计管理
 
 ### Statistics
 
 ![statistics](../../../img/new_ui/dev/monitor/statistics.png)
- 
+
 - 待执行命令数:统计 t_ds_command 表的数据
 - 执行失败的命令数:统计 t_ds_error_command 表的数据
 - 待运行任务数:统计 Zookeeper 中 task_queue 的数据
diff --git a/docs/docs/zh/guide/parameter/built-in.md b/docs/docs/zh/guide/parameter/built-in.md
index 2b97e91f3b..32add7231f 100644
--- a/docs/docs/zh/guide/parameter/built-in.md
+++ b/docs/docs/zh/guide/parameter/built-in.md
@@ -29,21 +29,22 @@
 
 - 也可以通过以下两种方式:
 
-    1.使用add_months()函数,该函数用于加减月份,
-    第一个入口参数为[yyyyMMdd],表示返回时间的格式
-    第二个入口参数为月份偏移量,表示加减多少个月
-    * 后 N 年:$[add_months(yyyyMMdd,12*N)]
-    * 前 N 年:$[add_months(yyyyMMdd,-12*N)]
-    * 后 N 月:$[add_months(yyyyMMdd,N)]
-    * 前 N 月:$[add_months(yyyyMMdd,-N)]
-    *******************************************
-    2.直接加减数字
-    在自定义格式后直接“+/-”数字
-    * 后 N 周:$[yyyyMMdd+7*N]
-    * 前 N 周:$[yyyyMMdd-7*N]
-    * 后 N 天:$[yyyyMMdd+N]
-    * 前 N 天:$[yyyyMMdd-N]
-    * 后 N 小时:$[HHmmss+N/24]
-    * 前 N 小时:$[HHmmss-N/24]
-    * 后 N 分钟:$[HHmmss+N/24/60]
-    * 前 N 分钟:$[HHmmss-N/24/60]
+  1.使用add_months()函数,该函数用于加减月份,
+  第一个入口参数为[yyyyMMdd],表示返回时间的格式
+  第二个入口参数为月份偏移量,表示加减多少个月
+  * 后 N 年:$[add_months(yyyyMMdd,12*N)]
+  * 前 N 年:$[add_months(yyyyMMdd,-12*N)]
+  * 后 N 月:$[add_months(yyyyMMdd,N)]
+  * 前 N 月:$[add_months(yyyyMMdd,-N)]
+  *******************************************
+  2.直接加减数字
+  在自定义格式后直接“+/-”数字
+  * 后 N 周:$[yyyyMMdd+7*N]
+  * 前 N 周:$[yyyyMMdd-7*N]
+  * 后 N 天:$[yyyyMMdd+N]
+  * 前 N 天:$[yyyyMMdd-N]
+  * 后 N 小时:$[HHmmss+N/24]
+  * 前 N 小时:$[HHmmss-N/24]
+  * 后 N 分钟:$[HHmmss+N/24/60]
+  * 前 N 分钟:$[HHmmss-N/24/60]
+
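
The offsets above use DolphinScheduler's own `$[...]` syntax; purely as an illustration of the values they resolve to, the same arithmetic with GNU date and a made-up schedule time of 2022-09-22:

```shell
base=2022-09-22
date -d "${base} +7 days"  +%Y%m%d   # cf. $[yyyyMMdd+7*1]            -> 20220929
date -d "${base} -1 day"   +%Y%m%d   # cf. $[yyyyMMdd-1]              -> 20220921
date -d "${base} +1 month" +%Y%m%d   # cf. $[add_months(yyyyMMdd,1)]  -> 20221022
```
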
diff --git a/docs/docs/zh/guide/project/task-definition.md b/docs/docs/zh/guide/project/task-definition.md
index 6b1ffe55f7..c6247d489c 100644
--- a/docs/docs/zh/guide/project/task-definition.md
+++ b/docs/docs/zh/guide/project/task-definition.md
@@ -1,6 +1,7 @@
 # 任务定义
 
 ## 批量任务定义
+
 批量任务定义允许您在基于任务级别而不是在工作流中操作修改任务。再此之前,我们已经有了工作流级别的任务编辑器,你可以在[工作流定义](workflow-definition.md)
 单击特定的工作流,然后编辑任务的定义。当您想编辑特定的任务定义但不记得它属于哪个工作流时,这是令人沮丧的。所以我们决定在 `任务` 菜单下添加 `任务定义` 视图。
 
@@ -9,8 +10,8 @@
 在该视图中,您可以通过单击 `操作` 列中的相关按钮来进行创建、查询、更新、删除任务定义。最令人兴奋的是您可以通过通配符进行全部任务查询,当您只
 记得任务名称但忘记它属于哪个工作流时是非常有用的。也支持通过任务名称结合使用 `任务类型` 或 `工作流程名称` 进行查询。
 
-
 ## 实时任务定义
+
 实时任务定义在工作流定义中创建,在任务定义页面可以进行修改和执行。
 
 ![task-definition](../../../../img/new_ui/dev/project/stream-task-definition.png)
@@ -19,4 +20,3 @@
 
 ![task-definition](../../../../img/new_ui/dev/project/stream-task-execute.png)
 
-
diff --git a/docs/docs/zh/guide/project/task-instance.md b/docs/docs/zh/guide/project/task-instance.md
index b7a928b7e2..a23c8b921e 100644
--- a/docs/docs/zh/guide/project/task-instance.md
+++ b/docs/docs/zh/guide/project/task-instance.md
@@ -1,6 +1,7 @@
 # 任务实例
 
 ## 批量任务实例
+
 - 点击项目管理->工作流->任务实例,进入任务实例页面,如下图所示,点击工作流实例名称,可跳转到工作流实例DAG图查看任务状态。
 
 ![task-instance](../../../../img/new_ui/dev/project/batch-task-instance.png)
@@ -17,3 +18,4 @@
 
 - SavePoint:点击操作列中的SavePoint按钮,可以进行实时任务的SavePoint。
 - Stop:点击操作列中的Stop按钮,可以停止该实时任务。
+
diff --git a/docs/docs/zh/guide/project/workflow-definition.md b/docs/docs/zh/guide/project/workflow-definition.md
index c0a04b76a3..521ec8757f 100644
--- a/docs/docs/zh/guide/project/workflow-definition.md
+++ b/docs/docs/zh/guide/project/workflow-definition.md
@@ -7,19 +7,18 @@
   ![workflow-dag](../../../../img/new_ui/dev/project/workflow-dag.png)
 
 - 工具栏中拖拽 <img src="../../../../img/tasks/icons/shell.png" width="15"/> 到画板中,新增一个Shell任务,如下图所示:
-  
+
   ![demo-shell-simple](../../../../img/tasks/demo/shell.jpg)
-  
+
 - **添加 Shell 任务的参数设置:**
 
   1. 填写“节点名称”,“描述”,“脚本”字段;
-  1. “运行标志”勾选“正常”,若勾选“禁止执行”,运行工作流不会执行该任务;
-  1. 选择“任务优先级”:当 worker 线程数不足时,级别高的任务在执行队列中会优先执行,相同优先级的任务按照先进先出的顺序执行;
-  1. 超时告警(非必选):勾选超时告警、超时失败,填写“超时时长”,当任务执行时间超过**超时时长**,会发送告警邮件并且任务超时失败;
-  1. 资源(非必选):资源文件是资源中心->文件管理页面创建或上传的文件,如文件名为 `test.sh`,脚本中调用资源命令为 `sh test.sh`。注意调用需要使用资源的全路径;
-  1. 自定义参数(非必填);
-  1. 点击"确认添加"按钮,保存任务设置。
-
+  2. “运行标志”勾选“正常”,若勾选“禁止执行”,运行工作流不会执行该任务;
+  3. 选择“任务优先级”:当 worker 线程数不足时,级别高的任务在执行队列中会优先执行,相同优先级的任务按照先进先出的顺序执行;
+  4. 超时告警(非必选):勾选超时告警、超时失败,填写“超时时长”,当任务执行时间超过**超时时长**,会发送告警邮件并且任务超时失败;
+  5. 资源(非必选):资源文件是资源中心->文件管理页面创建或上传的文件,如文件名为 `test.sh`,脚本中调用资源命令为 `sh test.sh`。注意调用需要使用资源的全路径;
+  6. 自定义参数(非必填);
+  7. 点击"确认添加"按钮,保存任务设置。
 - **配置任务之间的依赖关系:** 点击任务节点的右侧加号连接任务;如下图所示,任务 Node_B 和任务 Node_C 并行执行,当任务 Node_A 执行完,任务 Node_B、Node_C 会同时执行。
 
   ![workflow-dependent](../../../../img/new_ui/dev/project/workflow-dependent.png)
@@ -64,62 +63,59 @@
 
 ![workflow-online](../../../../img/new_ui/dev/project/workflow-online.png)
 
-
 - 点击”运行“按钮,弹出启动参数设置弹框,如下图所示,设置启动参数,点击弹框中的"运行"按钮,工作流开始运行,工作流实例页面生成一条工作流实例。
 
 ![workflow-run](../../../../img/new_ui/dev/project/workflow-run.png)
-  
-  工作流运行参数说明: 
-       
-  * 失败策略:当某一个任务节点执行失败时,其他并行的任务节点需要执行的策略。”继续“表示:某一任务失败后,其他任务节点正常执行;”结束“表示:终止所有正在执行的任务,并终止整个流程。
-  * 通知策略:当流程结束,根据流程状态发送流程执行信息通知邮件,包含任何状态都不发,成功发,失败发,成功或失败都发。
-  * 流程优先级:流程运行的优先级,分五个等级:最高(HIGHEST),高(HIGH),中(MEDIUM),低(LOW),最低(LOWEST)。当 master 线程数不足时,级别高的流程在执行队列中会优先执行,相同优先级的流程按照先进先出的顺序执行。
-  * Worker 分组:该流程只能在指定的 worker 机器组里执行。默认是 Default,可以在任一 worker 上执行。
-  * 通知组:选择通知策略||超时报警||发生容错时,会发送流程信息或邮件到通知组里的所有成员。
-  * 收件人:选择通知策略||超时报警||发生容错时,会发送流程信息或告警邮件到收件人列表。
-  * 抄送人:选择通知策略||超时报警||发生容错时,会抄送流程信息或告警邮件到抄送人列表。
-  * 启动参数: 在启动新的流程实例时,设置或覆盖全局参数的值。
-  * 补数:指运行指定日期范围内的工作流定义,根据补数策略生成对应的工作流实例,补数策略包括串行补数、并行补数 2 种模式,日期可以通过页面选择或者手动输入。
-   
-    * 串行补数:指定时间范围内,从开始日期至结束日期依次执行补数,依次生成多条流程实例;点击运行工作流,选择串行补数模式:例如从7月 9号到7月10号依次执行,依次在流程实例页面生成两条流程实例。
-  
-    ![workflow-serial](../../../../img/new_ui/dev/project/workflow-serial.png)
-    
-    * 并行补数: 指定时间范围内,同时进行多天的补数,同时生成多条流程实例。手动输入日期:手动输入以逗号分割日期格式为 `yyyy-MM-dd HH:mm:ss` 的日期。点击运行工作流,选择并行补数模式:例如同时执行7月9号到7月10号的工作流定义,同时在流程实例页面生成两条流程实例(执行策略为串行时流程实例按照策略执行)。
-    
-    ![workflow-parallel](../../../../img/new_ui/dev/project/workflow-parallel.png)
-  
-        * 并行度:是指在并行补数的模式下,最多并行执行的实例数。例如同时执行7月6号到7月10号的工作流定义,并行度为2,那么流程实例为:
+
+工作流运行参数说明:
+
+* 失败策略:当某一个任务节点执行失败时,其他并行的任务节点需要执行的策略。”继续“表示:某一任务失败后,其他任务节点正常执行;”结束“表示:终止所有正在执行的任务,并终止整个流程。
+* 通知策略:当流程结束,根据流程状态发送流程执行信息通知邮件,包含任何状态都不发,成功发,失败发,成功或失败都发。
+* 流程优先级:流程运行的优先级,分五个等级:最高(HIGHEST),高(HIGH),中(MEDIUM),低(LOW),最低(LOWEST)。当 master 线程数不足时,级别高的流程在执行队列中会优先执行,相同优先级的流程按照先进先出的顺序执行。
+* Worker 分组:该流程只能在指定的 worker 机器组里执行。默认是 Default,可以在任一 worker 上执行。
+* 通知组:选择通知策略||超时报警||发生容错时,会发送流程信息或邮件到通知组里的所有成员。
+* 收件人:选择通知策略||超时报警||发生容错时,会发送流程信息或告警邮件到收件人列表。
+* 抄送人:选择通知策略||超时报警||发生容错时,会抄送流程信息或告警邮件到抄送人列表。
+* 启动参数: 在启动新的流程实例时,设置或覆盖全局参数的值。
+* 补数:指运行指定日期范围内的工作流定义,根据补数策略生成对应的工作流实例,补数策略包括串行补数、并行补数 2 种模式,日期可以通过页面选择或者手动输入。
+  * 串行补数:指定时间范围内,从开始日期至结束日期依次执行补数,依次生成多条流程实例;点击运行工作流,选择串行补数模式:例如从7月 9号到7月10号依次执行,依次在流程实例页面生成两条流程实例。
+
+  ![workflow-serial](../../../../img/new_ui/dev/project/workflow-serial.png)
+
+  * 并行补数: 指定时间范围内,同时进行多天的补数,同时生成多条流程实例。手动输入日期:手动输入以逗号分割日期格式为 `yyyy-MM-dd HH:mm:ss` 的日期。点击运行工作流,选择并行补数模式:例如同时执行7月9号到7月10号的工作流定义,同时在流程实例页面生成两条流程实例(执行策略为串行时流程实例按照策略执行)。
+
+  ![workflow-parallel](../../../../img/new_ui/dev/project/workflow-parallel.png)
+
+  * 并行度:是指在并行补数的模式下,最多并行执行的实例数。例如同时执行7月6号到7月10号的工作流定义,并行度为2,那么流程实例为:
     ![workflow-concurrency-from](../../../../img/new_ui/dev/project/workflow-concurrency-from.png)
-    
-    ![workflow-concurrency](../../../../img/new_ui/dev/project/workflow-concurrency.png)
-   
-    * 依赖模式:是否触发下游依赖节点依赖到当前工作流的工作流实例的补数(要求当前补数的工作流实例的定时状态为已上线,只会触发下游直接依赖到当前工作流的补数)。
-    
-    ![workflow-dependency](../../../../img/new_ui/dev/project/workflow-dependency.png)
-    
-    * 日期选择:
-        1. 通过页面选择日期:
-        
-        ![workflow-pageSelection](../../../../img/new_ui/dev/project/workflow-pageSelection.png)
-        
-        2. 手动输入:
-        
-        ![workflow-input](../../../../img/new_ui/dev/project/workflow-input.png)
-     
-     * 补数与定时配置的关系:
-       
-        1. 未配置定时:当没有定时配置时默认会根据所选时间范围进行每天一次的补数,比如该工作流调度日期为7月 7号到7月10号,未配置定时,流程实例为:
-        
-        ![workflow-unconfiguredTimingResult](../../../../img/new_ui/dev/project/workflow-unconfiguredTimingResult.png)
-
-        2. 已配置定时:如果有定时配置则会根据所选的时间范围结合定时配置进行补数,比如该工作流调度日期为7月 7号到7月10号,配置了定时(每日凌晨5点运行),流程实例为:
-        
-        ![workflow-configuredTiming](../../../../img/new_ui/dev/project/workflow-configuredTiming.png)
-        
-        ![workflow-configuredTimingResult](../../../../img/new_ui/dev/project/workflow-configuredTimingResult.png)
-    
 
+  ![workflow-concurrency](../../../../img/new_ui/dev/project/workflow-concurrency.png)
+
+  * 依赖模式:是否触发下游依赖节点依赖到当前工作流的工作流实例的补数(要求当前补数的工作流实例的定时状态为已上线,只会触发下游直接依赖到当前工作流的补数)。
+
+  ![workflow-dependency](../../../../img/new_ui/dev/project/workflow-dependency.png)
+
+  * 日期选择:
+
+    1. 通过页面选择日期:
+
+    ![workflow-pageSelection](../../../../img/new_ui/dev/project/workflow-pageSelection.png)
+
+    2. 手动输入:
+
+    ![workflow-input](../../../../img/new_ui/dev/project/workflow-input.png)
+
+  * 补数与定时配置的关系:
+
+    1. 未配置定时:当没有定时配置时默认会根据所选时间范围进行每天一次的补数,比如该工作流调度日期为7月 7号到7月10号,未配置定时,流程实例为:
+
+    ![workflow-unconfiguredTimingResult](../../../../img/new_ui/dev/project/workflow-unconfiguredTimingResult.png)
+
+    2. 已配置定时:如果有定时配置则会根据所选的时间范围结合定时配置进行补数,比如该工作流调度日期为7月 7号到7月10号,配置了定时(每日凌晨5点运行),流程实例为:
+
+    ![workflow-configuredTiming](../../../../img/new_ui/dev/project/workflow-configuredTiming.png)
+
+    ![workflow-configuredTimingResult](../../../../img/new_ui/dev/project/workflow-configuredTimingResult.png)
 
 ## 单独运行任务
 
@@ -138,12 +134,15 @@
   ![workflow-time01](../../../../img/new_ui/dev/project/workflow-time01.png)
 
 - 选择起止时间。在起止时间范围内,定时运行工作流;不在起止时间范围内,不再产生定时工作流实例。
+
 - 添加一个每隔 5 分钟执行一次的定时,如下图所示:
 
   ![workflow-time02](../../../../img/new_ui/dev/project/workflow-time02.png)
 
 - 失败策略、通知策略、流程优先级、Worker 分组、通知组、收件人、抄送人同工作流运行参数。
+
 - 点击"创建"按钮,创建定时成功,此时定时状态为"**下线**",定时需**上线**才生效。
+
 - 定时上线:点击"定时管理"按钮<img src="../../../../img/timeManagement.png" width="35"/>,进入定时管理页面,点击"上线"按钮,定时状态变为"上线",如下图所示,工作流定时生效。
 
   ![workflow-time03](../../../../img/new_ui/dev/project/workflow-time03.png)
diff --git a/docs/docs/zh/guide/project/workflow-instance.md b/docs/docs/zh/guide/project/workflow-instance.md
index df348ed0a0..8aeff32dc2 100644
--- a/docs/docs/zh/guide/project/workflow-instance.md
+++ b/docs/docs/zh/guide/project/workflow-instance.md
@@ -5,7 +5,7 @@
 - 点击项目管理->工作流->工作流实例,进入工作流实例页面,如下图所示:
 
 ![workflow-instance](../../../../img/new_ui/dev/project/workflow-instance.png)
-          
+
 - 点击工作流名称,进入DAG查看页面,查看任务执行状态,如下图所示。
 
 ![instance-state](../../../../img/new_ui/dev/project/instance-state.png)
@@ -29,27 +29,35 @@
 
 ## 查看运行参数
 
-- 点击项目管理->工作流->工作流实例,进入工作流实例页面,点击工作流名称,进入工作流 DAG 页面; 
+- 点击项目管理->工作流->工作流实例,进入工作流实例页面,点击工作流名称,进入工作流 DAG 页面;
 - 点击左上角图标<img src="../../../../img/run_params_button.png" width="35"/>,查看工作流实例的启动参数;点击图标<img src="../../../../img/global_param.png" width="35"/>,查看工作流实例的全局参数和局部参数,如下图所示:
 
 ![instance-parameter](../../../../img/new_ui/dev/project/instance-parameter.png)
 
 ## 工作流实例操作功能
 
-点击项目管理->工作流->工作流实例,进入工作流实例页面,如下图所示:          
+点击项目管理->工作流->工作流实例,进入工作流实例页面,如下图所示:
 
 ![workflow-instance](../../../../img/new_ui/dev/project/workflow-instance.png)
 
 - **编辑:** 只能编辑 成功/失败/停止 状态的流程。点击"编辑"按钮或工作流实例名称进入 DAG 编辑页面,编辑后点击"保存"按钮,弹出保存 DAG 弹框,如下图所示,修改流程定义信息,在弹框中勾选"是否更新工作流定义",保存后则将实例修改的信息更新到工作流定义;若不勾选,则不更新工作流定义。
-       <p align="center">
-         <img src="../../../../img/editDag.png" width="80%" />
-       </p>
+
+  <p align="center">
+  <img src="../../../../img/editDag.png" width="80%" />
+  </p>
+
 - **重跑:** 重新执行已经终止的流程。
+
 - **恢复失败:** 针对失败的流程,可以执行恢复失败操作,从失败的节点开始执行。
+
 - **停止:** 对正在运行的流程进行**停止**操作,后台会先 `kill` worker 进程,再执行 `kill -9` 操作
+
 - **暂停:** 对正在运行的流程进行**暂停**操作,系统状态变为**等待执行**,会等待正在执行的任务结束,暂停下一个要执行的任务。
+
 - **恢复暂停:** 对暂停的流程恢复,直接从**暂停的节点**开始运行
+
 - **删除:** 删除工作流实例及工作流实例下的任务实例
-- **甘特图:** Gantt 图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:         
+
+- **甘特图:** Gantt 图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:
 
 ![instance-gantt](../../../../img/new_ui/dev/project/instance-gantt.png)
diff --git a/docs/docs/zh/guide/resource/file-manage.md b/docs/docs/zh/guide/resource/file-manage.md
index ab3b0bb923..1000cf06b1 100644
--- a/docs/docs/zh/guide/resource/file-manage.md
+++ b/docs/docs/zh/guide/resource/file-manage.md
@@ -64,6 +64,7 @@
 
 - 脚本:`sh hello.sh`
 - 资源:选择 `hello.sh`
+
 > 注意:脚本中选择资源文件时文件名称需要保持和所选择资源全路径一致:
 > 例如:资源路径为`/resource/hello.sh` 则脚本中调用需要使用`/resource/hello.sh`全路径
 
@@ -75,6 +76,3 @@
 
 ![log-shell](../../../../img/new_ui/dev/resource/demo/file-demo03.png)
 
-
-
-
diff --git a/docs/docs/zh/guide/resource/intro.md b/docs/docs/zh/guide/resource/intro.md
index 88ad32499e..87576e2aa9 100644
--- a/docs/docs/zh/guide/resource/intro.md
+++ b/docs/docs/zh/guide/resource/intro.md
@@ -1,4 +1,4 @@
 # 资源中心简介
 
 资源中心通常用于上传文件、UDF 函数和任务组管理。 对于 standalone 环境,可以选择本地文件目录作为上传文件夹(此操作不需要Hadoop部署)。当然,你也可以
-选择上传到 Hadoop 或者 MinIO 集群。 在这种情况下,您需要有 Hadoop(2.6+)或 MinION 等相关环境。
\ No newline at end of file
+选择上传到 Hadoop 或者 MinIO 集群。 在这种情况下,您需要有 Hadoop(2.6+)或 MinIO 等相关环境。
diff --git a/docs/docs/zh/guide/resource/task-group.md b/docs/docs/zh/guide/resource/task-group.md
index fc77f41373..1c5fb18cab 100644
--- a/docs/docs/zh/guide/resource/task-group.md
+++ b/docs/docs/zh/guide/resource/task-group.md
@@ -4,13 +4,13 @@
 
 ### 任务组配置
 
-#### 新建任务组   
+#### 新建任务组
 
 ![taskGroup](../../../../img/new_ui/dev/resource/taskGroup.png)
 
 用户点击【资源中心】-【任务组管理】-【任务组配置】-新建任务组
 
-![create-taskGroup](../../../../img/new_ui/dev/resource/create-taskGroup.png) 
+![create-taskGroup](../../../../img/new_ui/dev/resource/create-taskGroup.png)
 
 您需要输入图片中信息,其中
 
@@ -22,11 +22,11 @@
 
 #### 查看任务组队列
 
-![view-queue](../../../../img/new_ui/dev/resource/view-queue.png) 
+![view-queue](../../../../img/new_ui/dev/resource/view-queue.png)
 
 点击按钮查看任务组使用信息
 
-![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png) 
+![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png)
 
 #### 任务组的使用
 
@@ -34,7 +34,7 @@
 
 我们以 shell 节点为例:
 
-![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)         
+![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)
 
 关于任务组的配置,您需要做的只需要配置红色框内的部分,其中:
 
diff --git a/docs/docs/zh/guide/resource/udf-manage.md b/docs/docs/zh/guide/resource/udf-manage.md
index 27c24b3e26..cc7c77a0ab 100644
--- a/docs/docs/zh/guide/resource/udf-manage.md
+++ b/docs/docs/zh/guide/resource/udf-manage.md
@@ -2,7 +2,6 @@
 
 - 资源管理和文件管理功能类似,不同之处是资源管理是上传的 UDF 函数,文件管理上传的是用户程序,脚本及配置文件。
 - 主要包括以下操作:重命名、下载、删除等。
-
 * 上传 UDF 资源
 
 > 和上传文件相同。
@@ -15,7 +14,7 @@
   > 目前只支持 HIVE 的临时 UDF 函数
 
 - UDF 函数名称:输入 UDF 函数时的名称
-- 包名类名:输入 UDF 函数的全路径  
+- 包名类名:输入 UDF 函数的全路径
 - UDF 资源:设置创建的 UDF 对应的资源文件
 
 ![create-udf](../../../../img/new_ui/dev/resource/create-udf.png)
@@ -45,9 +44,3 @@
 
 ![use-udf](../../../../img/new_ui/dev/resource/demo/udf-demo03.png)
 
-
-
-
-
-
-
diff --git a/docs/docs/zh/guide/security.md b/docs/docs/zh/guide/security.md
index 7c9fc8c797..f6536f4773 100644
--- a/docs/docs/zh/guide/security.md
+++ b/docs/docs/zh/guide/security.md
@@ -20,26 +20,23 @@
 
 ## 创建普通用户
 
--  用户分为**管理员用户**和**普通用户**
-
-    * 管理员有授权和用户管理等权限,没有创建项目和工作流定义的操作的权限。
-    * 普通用户可以创建项目和对工作流定义的创建,编辑,执行等操作。
-    * 注意:如果该用户切换了租户,则该用户所在租户下所有资源将复制到切换的新租户下。
-
-- 进入安全中心->用户管理页面,点击“创建用户”按钮,创建用户。        
+- 用户分为**管理员用户**和**普通用户**
+  * 管理员有授权和用户管理等权限,没有创建项目和工作流定义的操作的权限。
+  * 普通用户可以创建项目和对工作流定义的创建,编辑,执行等操作。
+  * 注意:如果该用户切换了租户,则该用户所在租户下所有资源将复制到切换的新租户下。
+- 进入安全中心->用户管理页面,点击“创建用户”按钮,创建用户。
 
 ![create-user](../../../img/new_ui/dev/security/create-user.png)
-  
+
 ### 编辑用户信息
 
 - 管理员进入安全中心->用户管理页面,点击"编辑"按钮,编辑用户信息。
 - 普通用户登录后,点击用户名下拉框中的用户信息,进入用户信息页面,点击"编辑"按钮,编辑用户信息。
-  
+
 ### 修改用户密码
 
 - 管理员进入安全中心->用户管理页面,点击"编辑"按钮,编辑用户信息时,输入新密码修改用户密码。
 - 普通用户登录后,点击用户名下拉框中的用户信息,进入修改密码页面,输入密码并确认密码后点击"编辑"按钮,则修改密码成功。
-   
 
 ## 创建告警组
 
@@ -51,14 +48,14 @@
 ## 令牌管理
 
 > 由于后端接口有登录检查,令牌管理提供了一种可以通过调用接口的方式对系统进行各种操作。
-- 管理员进入安全中心->令牌管理页面,点击“创建令牌”按钮,选择失效时间与用户,点击"生成令牌"按钮,点击"提交"按钮,则选择用户的token创建成功。
+> - 管理员进入安全中心->令牌管理页面,点击“创建令牌”按钮,选择失效时间与用户,点击"生成令牌"按钮,点击"提交"按钮,则选择用户的token创建成功。
 
 ![create-token](../../../img/new_ui/dev/security/create-token.png)
-  
+
 - 普通用户登录后,点击用户名下拉框中的用户信息,进入令牌管理页面,选择失效时间,点击"生成令牌"按钮,点击"提交"按钮,则该用户创建 token 成功。
-    
+
 - 调用示例:
-  
+
 ```java
     /**
      * test token
@@ -99,7 +96,6 @@
 * 授予权限包括项目权限,资源权限,数据源权限,UDF函数权限,k8s命名空间。
 * 管理员可以对普通用户进行非其创建的项目、资源、数据源、UDF函数、k8s命名空间。因为项目、资源、数据源、UDF函数、k8s命名空间授权方式都是一样的,所以以项目授权为例介绍。
 * 注意:对于用户自己创建的项目,该用户拥有所有的权限。则项目列表和已选项目列表中不会显示。
- 
 - 管理员进入安全中心->用户管理页面,点击需授权用户的“授权”按钮,如下图所示:
 
 ![user-authorize](../../../img/new_ui/dev/security/user-authorize.png)
@@ -107,7 +103,7 @@
 - 选择项目,进行项目授权。
 
 ![project-authorize](../../../img/new_ui/dev/security/project-authorize.png)
-  
+
 - 资源、数据源、UDF 函数授权同项目授权。
 
 ## Worker 分组
@@ -118,7 +114,7 @@
 
 ### 新增 / 更新 worker 分组
 
-- 打开要设置分组的 worker 节点上的 `worker-server/conf/application.yaml` 配置文件. 修改 `worker` 配置下的 `groups` 参数. 
+- 打开要设置分组的 worker 节点上的 `worker-server/conf/application.yaml` 配置文件. 修改 `worker` 配置下的 `groups` 参数.
 - `groups` 参数的值为 worker 节点对应的分组名称,默认为 `default`。
 - 如果该 worker 节点对应多个分组,则用连字符列出,示范如下:
 
diff --git a/docs/docs/zh/guide/start/quick-start.md b/docs/docs/zh/guide/start/quick-start.md
index fb78d0120c..ae1dbcbb09 100644
--- a/docs/docs/zh/guide/start/quick-start.md
+++ b/docs/docs/zh/guide/start/quick-start.md
@@ -1,10 +1,11 @@
 # 快速上手
-* 喜欢看视频的伙伴可以参见手把手教你如何《快速上手 Apache DolphinScheduler 教程》
-[![image](https://user-images.githubusercontent.com/15833811/126286960-dfb3bfee-c8fb-4bdf-a717-d3be221c9711.png)](https://www.bilibili.com/video/BV1d64y1s7eZ)
 
+* 喜欢看视频的伙伴可以参见手把手教你如何《快速上手 Apache DolphinScheduler 教程》
+  [![image](https://user-images.githubusercontent.com/15833811/126286960-dfb3bfee-c8fb-4bdf-a717-d3be221c9711.png)](https://www.bilibili.com/video/BV1d64y1s7eZ)
 
 * 管理员用户登录
-  >地址:http://localhost:12345/dolphinscheduler/ui 用户名/密码:admin/dolphinscheduler123
+
+  > 地址:http://localhost:12345/dolphinscheduler/ui 用户名/密码:admin/dolphinscheduler123
 
 ![login](../../../../img/new_ui/dev/quick-start/login.png)
 
@@ -28,30 +29,30 @@
 
 ![create-alarmGroup](../../../../img/new_ui/dev/quick-start/create-alarmGroup.png)
 
- * 创建 Worker 分组
+* 创建 Worker 分组
 
 ![create-workerGroup](../../../../img/new_ui/dev/quick-start/create-workerGroup.png)
 
 * 创建环境
-![create-environment](../../../../img/new_ui/dev/quick-start/create-environment.png)
- 
- * 创建 Token 令牌
+  ![create-environment](../../../../img/new_ui/dev/quick-start/create-environment.png)
 
-![create-token](../../../../img/new_ui/dev/quick-start/create-token.png)
+* 创建 Token 令牌
 
+![create-token](../../../../img/new_ui/dev/quick-start/create-token.png)
 
 * 使用普通用户登录
+
 > 点击右上角用户名“退出”,重新使用普通用户登录。
... 482 lines suppressed ...