Posted to commits@dolphinscheduler.apache.org by li...@apache.org on 2022/01/06 02:53:26 UTC

[dolphinscheduler-website] branch master updated: Add new release doc for 2.0.2 (#612)

This is an automated email from the ASF dual-hosted git repository.

lidongdai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new c3a89e9  Add new release doc for 2.0.2 (#612)
c3a89e9 is described below

commit c3a89e90b2ffcfcfed9a5e5bd745b8095a93250a
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Thu Jan 6 10:53:19 2022 +0800

    Add new release doc for 2.0.2 (#612)
    
    * Add new release doc for 2.0.2
    
    * Switch version to 2.0.2
    
    * Correct alert, pick from #594
    
    * Change fault tolerance, pick from #610
    
    * Add python gateway server start
    
    * Fix doc 202 lead to 201
    
    * Correct release date to Jan 5th
    
    * Fix dead link, pick from #609
    
    * Remove ambari related, pick from apache/dolphinscheduler#7749
    
    * Change mysql jdbc version, pick from #617
    
    * Recover init database shortcut, pick from apache/dolphinscheduler#7530
---
 docs/en-us/2.0.0/user_doc/integration/ambari.md    |  128 ---
 docs/en-us/2.0.1/user_doc/integration/ambari.md    |  128 ---
 .../About_DolphinScheduler.md                      |   10 +
 .../2.0.2/user_doc/architecture/configuration.md   |  408 +++++++
 docs/en-us/2.0.2/user_doc/architecture/design.md   |  339 ++++++
 .../2.0.2/user_doc/architecture/designplus.md      |   79 ++
 docs/en-us/2.0.2/user_doc/architecture/listdocs.md |   63 ++
 .../2.0.2/user_doc/architecture/load-balance.md    |   61 ++
 docs/en-us/2.0.2/user_doc/architecture/metadata.md |  173 +++
 .../2.0.2/user_doc/architecture/task-structure.md  | 1131 +++++++++++++++++++
 docs/en-us/2.0.2/user_doc/dev_run.md               |  142 +++
 docs/en-us/2.0.2/user_doc/expansion-reduction.md   |  251 +++++
 .../guide/alert/alert_plugin_user_guide.md         |   12 +
 .../user_doc/guide/alert/enterprise-wechat.md      |   11 +
 docs/en-us/2.0.2/user_doc/guide/datasource/hive.md |   38 +
 .../user_doc/guide/datasource/introduction.md      |    7 +
 .../en-us/2.0.2/user_doc/guide/datasource/mysql.md |   16 +
 .../2.0.2/user_doc/guide/datasource/postgresql.md  |   15 +
 .../en-us/2.0.2/user_doc/guide/datasource/spark.md |   15 +
 docs/en-us/2.0.2/user_doc/guide/flink-call.md      |  152 +++
 docs/en-us/2.0.2/user_doc/guide/homepage.md        |    7 +
 .../2.0.2/user_doc/guide/installation/cluster.md   |   36 +
 .../2.0.2/user_doc/guide/installation/docker.md    | 1043 ++++++++++++++++++
 .../2.0.2/user_doc/guide/installation/hardware.md  |   47 +
 .../user_doc/guide/installation/kubernetes.md      |  755 +++++++++++++
 .../user_doc/guide/installation/pseudo-cluster.md  |  192 ++++
 .../user_doc/guide/installation/standalone.md      |   42 +
 docs/en-us/2.0.2/user_doc/guide/introduction.md    |    3 +
 docs/en-us/2.0.2/user_doc/guide/monitor.md         |   48 +
 .../guide/observability/skywalking-agent.md        |   74 ++
 docs/en-us/2.0.2/user_doc/guide/open-api.md        |   64 ++
 .../2.0.2/user_doc/guide/parameter/built-in.md     |   48 +
 .../2.0.2/user_doc/guide/parameter/context.md      |   63 ++
 .../en-us/2.0.2/user_doc/guide/parameter/global.md |   19 +
 docs/en-us/2.0.2/user_doc/guide/parameter/local.md |   19 +
 .../2.0.2/user_doc/guide/parameter/priority.md     |   40 +
 docs/en-us/2.0.2/user_doc/guide/project.md         |   21 +
 docs/en-us/2.0.2/user_doc/guide/quick-start.md     |   71 ++
 docs/en-us/2.0.2/user_doc/guide/resource.md        |  112 ++
 docs/en-us/2.0.2/user_doc/guide/security.md        |  163 +++
 docs/en-us/2.0.2/user_doc/guide/task-instance.md   |   12 +
 docs/en-us/2.0.2/user_doc/guide/task/conditions.md |   36 +
 docs/en-us/2.0.2/user_doc/guide/task/datax.md      |   18 +
 docs/en-us/2.0.2/user_doc/guide/task/dependent.md  |   27 +
 docs/en-us/2.0.2/user_doc/guide/task/flink.md      |   23 +
 docs/en-us/2.0.2/user_doc/guide/task/http.md       |   23 +
 docs/en-us/2.0.2/user_doc/guide/task/map-reduce.md |   33 +
 docs/en-us/2.0.2/user_doc/guide/task/pigeon.md     |   19 +
 docs/en-us/2.0.2/user_doc/guide/task/python.md     |   15 +
 docs/en-us/2.0.2/user_doc/guide/task/shell.md      |   22 +
 docs/en-us/2.0.2/user_doc/guide/task/spark.md      |   22 +
 docs/en-us/2.0.2/user_doc/guide/task/sql.md        |   22 +
 .../2.0.2/user_doc/guide/task/stored-procedure.md  |   13 +
 .../en-us/2.0.2/user_doc/guide/task/sub-process.md |   14 +
 docs/en-us/2.0.2/user_doc/guide/task/switch.md     |   37 +
 .../2.0.2/user_doc/guide/workflow-definition.md    |  114 ++
 .../2.0.2/user_doc/guide/workflow-instance.md      |   62 ++
 docs/en-us/2.0.2/user_doc/upgrade.md               |   63 ++
 docs/en-us/dev/user_doc/architecture/design.md     |    3 +-
 docs/en-us/dev/user_doc/integration/ambari.md      |  128 ---
 .../2.0.2/user_doc/architecture/configuration.md   |  406 +++++++
 docs/zh-cn/2.0.2/user_doc/architecture/design.md   |  267 +++++
 .../2.0.2/user_doc/architecture/designplus.md      |   58 +
 docs/zh-cn/2.0.2/user_doc/architecture/listdocs.md |   62 ++
 .../2.0.2/user_doc/architecture/load-balance.md    |   58 +
 docs/zh-cn/2.0.2/user_doc/architecture/metadata.md |  185 ++++
 .../2.0.2/user_doc/architecture/task-structure.md  | 1134 ++++++++++++++++++++
 docs/zh-cn/2.0.2/user_doc/expansion-reduction.md   |  252 +++++
 .../guide/alert/alert_plugin_user_guide.md         |   12 +
 .../user_doc/guide/alert/enterprise-wechat.md      |   11 +
 docs/zh-cn/2.0.2/user_doc/guide/datasource/hive.md |   42 +
 .../user_doc/guide/datasource/introduction.md      |    6 +
 .../zh-cn/2.0.2/user_doc/guide/datasource/mysql.md |   15 +
 .../2.0.2/user_doc/guide/datasource/postgresql.md  |   15 +
 .../zh-cn/2.0.2/user_doc/guide/datasource/spark.md |   21 +
 docs/zh-cn/2.0.2/user_doc/guide/flink-call.md      |  150 +++
 docs/zh-cn/2.0.2/user_doc/guide/homepage.md        |    7 +
 .../2.0.2/user_doc/guide/installation/cluster.md   |   36 +
 .../2.0.2/user_doc/guide/installation/docker.md    | 1043 ++++++++++++++++++
 .../2.0.2/user_doc/guide/installation/hardware.md  |   47 +
 .../user_doc/guide/installation/kubernetes.md      |  755 +++++++++++++
 .../user_doc/guide/installation/pseudo-cluster.md  |  191 ++++
 .../user_doc/guide/installation/standalone.md      |   42 +
 docs/zh-cn/2.0.2/user_doc/guide/introduction.md    |    3 +
 docs/zh-cn/2.0.2/user_doc/guide/monitor.md         |   49 +
 .../guide/observability/skywalking-agent.md        |   74 ++
 docs/zh-cn/2.0.2/user_doc/guide/open-api.md        |   65 ++
 .../2.0.2/user_doc/guide/parameter/built-in.md     |   49 +
 .../2.0.2/user_doc/guide/parameter/context.md      |   69 ++
 .../zh-cn/2.0.2/user_doc/guide/parameter/global.md |   19 +
 docs/zh-cn/2.0.2/user_doc/guide/parameter/local.md |   19 +
 .../2.0.2/user_doc/guide/parameter/priority.md     |   40 +
 docs/zh-cn/2.0.2/user_doc/guide/project.md         |   21 +
 docs/zh-cn/2.0.2/user_doc/guide/quick-start.md     |   66 ++
 docs/zh-cn/2.0.2/user_doc/guide/resource.md        |  109 ++
 docs/zh-cn/2.0.2/user_doc/guide/security.md        |  166 +++
 docs/zh-cn/2.0.2/user_doc/guide/task-instance.md   |   11 +
 docs/zh-cn/2.0.2/user_doc/guide/task/conditions.md |   36 +
 docs/zh-cn/2.0.2/user_doc/guide/task/datax.md      |   17 +
 docs/zh-cn/2.0.2/user_doc/guide/task/dependent.md  |   27 +
 docs/zh-cn/2.0.2/user_doc/guide/task/flink.md      |   23 +
 docs/zh-cn/2.0.2/user_doc/guide/task/http.md       |   22 +
 docs/zh-cn/2.0.2/user_doc/guide/task/map-reduce.md |   34 +
 docs/zh-cn/2.0.2/user_doc/guide/task/pigeon.md     |   19 +
 docs/zh-cn/2.0.2/user_doc/guide/task/python.md     |   15 +
 docs/zh-cn/2.0.2/user_doc/guide/task/shell.md      |   22 +
 docs/zh-cn/2.0.2/user_doc/guide/task/spark.md      |   22 +
 docs/zh-cn/2.0.2/user_doc/guide/task/sql.md        |   22 +
 .../2.0.2/user_doc/guide/task/stored-procedure.md  |   12 +
 .../zh-cn/2.0.2/user_doc/guide/task/sub-process.md |   14 +
 docs/zh-cn/2.0.2/user_doc/guide/task/switch.md     |   37 +
 .../2.0.2/user_doc/guide/workflow-definition.md    |  111 ++
 .../2.0.2/user_doc/guide/workflow-instance.md      |   61 ++
 docs/zh-cn/2.0.2/user_doc/upgrade.md               |   66 ++
 download/en-us/download.md                         |    2 +
 download/zh-cn/download.md                         |    2 +
 site_config/docs2-0-2.js                           |  540 ++++++++++
 site_config/docsdev.js                             |    9 -
 site_config/site.js                                |    6 +-
 sitemap.xml                                        |   15 -
 src/pages/docs/index.md.jsx                        |    2 +
 121 files changed, 13051 insertions(+), 412 deletions(-)

diff --git a/docs/en-us/2.0.0/user_doc/integration/ambari.md b/docs/en-us/2.0.0/user_doc/integration/ambari.md
deleted file mode 100644
index bbc4f85..0000000
--- a/docs/en-us/2.0.0/user_doc/integration/ambari.md
+++ /dev/null
@@ -1,128 +0,0 @@
-### Instructions for using the DolphinScheduler's Ambari plug-in
-
-#### Note
-
-1. This document is intended for users with a basic understanding of Ambari
-2. This document is a description of adding the DolphinScheduler service to the installed Ambari service
-3. This document is based on version 2.5.2 of Ambari 
-
-#### Installation preparation
-
-1. Prepare the RPM packages
-
-   - It is generated by executing the command `mvn -U clean install -Prpmbuild -Dmaven.test.skip=true -X` in the project root directory (In the directory: dolphinscheduler-dist/target/rpm/apache-dolphinscheduler/RPMS/noarch)
-
-2. Create an installation for DolphinScheduler with the user has read and write access to the installation directory (/opt/soft)
-
-3. Install with rpm package
-
-   - Manual installation (recommended):
-      - Copy the prepared RPM packages to each node of the cluster.
-      - Execute with DolphinScheduler installation user: `rpm -ivh apache-dolphinscheduler-xxx.noarch.rpm`
-      - Mysql-connector-java packaged using the default POM file will not be included.
-      - The RPM package was packaged in the project with the installation path of /opt/soft. 
-        If you use MySQL as the database, you need to add it manually.
-      
-   - Automatic installation with Ambari
-      - Each node of the cluster needs to be configured the local yum source
-      - Copy the prepared RPM packages to each node local yum source
-
-4. Copy plug-in directory
-
-   - copy directory ambari_plugin/common-services/DOLPHIN to ambari-server/resources/common-services/
-   - copy directory ambari_plugin/statcks/DOLPHIN to ambari-server/resources/stacks/HDP/2.6/services/--stack version is selected based on the actual situation
-
-5. Initializes the database information
-
-   ```sql
-   -- Create the database for the DolphinScheduler:dolphinscheduler
-   CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-   
-   -- Initialize the user and password for the dolphinscheduler database and assign permissions
-   -- Replace the {user} in the SQL statement below with the user of the dolphinscheduler database
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-   flush privileges;
-   ```
-
-#### Ambari Install DolphinScheduler
-- **NOTE: You have to install Zookeeper first**
-
-1. Install DolphinScheduler on Ambari web interface
-
-   ![](/img/ambari-plugin/DS2_AMBARI_001.png)
-
-2. Select the nodes for the DolphinScheduler's Master installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_002.png)
-
-3. Configure the DolphinScheduler's nodes for Worker, Api, Logger, Alert installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_003.png)
-
-4. Set the installation users of the DolphinScheduler service (created in step 1) and the user groups they belong to
-
-   ![](/img/ambari-plugin/DS2_AMBARI_004.png)
-
-5. System Env Optimization will export some system environment config. Modify according to the actual situation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_020.png)
-   
-6. Configure the database information (same as in the initialization database in step 1)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_005.png)
-
-7. Configure additional information if needed
-
-   ![](/img/ambari-plugin/DS2_AMBARI_006.png)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_007.png)
-
-8. Perform the next steps as normal
-
-   ![](/img/ambari-plugin/DS2_AMBARI_008.png)
-
-9. The interface after successful installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_009.png)
-   
-   
-
-------
-
-
-
-#### Add components to the node through Ambari -- for example, add a DolphinScheduler Worker
-
-***NOTE***: DolphinScheduler Logger is the installation dependent component of DS Worker in Dolphin's Ambari installation (need to add installation first; Prevent the Job log on the corresponding Worker from being checked)
-
-1. Locate the component node to add -- for example, node ark3
-
-   ![DS2_AMBARI_011](/img/ambari-plugin/DS2_AMBARI_011.png)
-
-2. Add components -- the drop-down list is all addable
-
-   ![DS2_AMBARI_012](/img/ambari-plugin/DS2_AMBARI_012.png)
-
-3. Confirm component addition
-
-   ![DS2_AMBARI_013](/img/ambari-plugin/DS2_AMBARI_013.png)
-
-4. After adding DolphinScheduler Worker and DolphinScheduler Logger components
-
-   ![DS2_AMBARI_015](/img/ambari-plugin/DS2_AMBARI_015.png)
-
-5. Start the component
-
-   ![DS2_AMBARI_016](/img/ambari-plugin/DS2_AMBARI_016.png)
-
-
-#### Remove the component from the node with Ambari
-
-1. Stop the component in the corresponding node
-
-   ![DS2_AMBARI_018](/img/ambari-plugin/DS2_AMBARI_018.png)
-
-2. Remove components
-
-   ![DS2_AMBARI_019](/img/ambari-plugin/DS2_AMBARI_019.png)
diff --git a/docs/en-us/2.0.1/user_doc/integration/ambari.md b/docs/en-us/2.0.1/user_doc/integration/ambari.md
deleted file mode 100644
index bbc4f85..0000000
--- a/docs/en-us/2.0.1/user_doc/integration/ambari.md
+++ /dev/null
@@ -1,128 +0,0 @@
-### Instructions for using the DolphinScheduler's Ambari plug-in
-
-#### Note
-
-1. This document is intended for users with a basic understanding of Ambari
-2. This document is a description of adding the DolphinScheduler service to the installed Ambari service
-3. This document is based on version 2.5.2 of Ambari 
-
-#### Installation preparation
-
-1. Prepare the RPM packages
-
-   - It is generated by executing the command `mvn -U clean install -Prpmbuild -Dmaven.test.skip=true -X` in the project root directory (In the directory: dolphinscheduler-dist/target/rpm/apache-dolphinscheduler/RPMS/noarch)
-
-2. Create an installation for DolphinScheduler with the user has read and write access to the installation directory (/opt/soft)
-
-3. Install with rpm package
-
-   - Manual installation (recommended):
-      - Copy the prepared RPM packages to each node of the cluster.
-      - Execute with DolphinScheduler installation user: `rpm -ivh apache-dolphinscheduler-xxx.noarch.rpm`
-      - Mysql-connector-java packaged using the default POM file will not be included.
-      - The RPM package was packaged in the project with the installation path of /opt/soft. 
-        If you use MySQL as the database, you need to add it manually.
-      
-   - Automatic installation with Ambari
-      - Each node of the cluster needs to be configured the local yum source
-      - Copy the prepared RPM packages to each node local yum source
-
-4. Copy plug-in directory
-
-   - copy directory ambari_plugin/common-services/DOLPHIN to ambari-server/resources/common-services/
-   - copy directory ambari_plugin/statcks/DOLPHIN to ambari-server/resources/stacks/HDP/2.6/services/--stack version is selected based on the actual situation
-
-5. Initializes the database information
-
-   ```sql
-   -- Create the database for the DolphinScheduler:dolphinscheduler
-   CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-   
-   -- Initialize the user and password for the dolphinscheduler database and assign permissions
-   -- Replace the {user} in the SQL statement below with the user of the dolphinscheduler database
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-   flush privileges;
-   ```
-
-#### Ambari Install DolphinScheduler
-- **NOTE: You have to install Zookeeper first**
-
-1. Install DolphinScheduler on Ambari web interface
-
-   ![](/img/ambari-plugin/DS2_AMBARI_001.png)
-
-2. Select the nodes for the DolphinScheduler's Master installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_002.png)
-
-3. Configure the DolphinScheduler's nodes for Worker, Api, Logger, Alert installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_003.png)
-
-4. Set the installation users of the DolphinScheduler service (created in step 1) and the user groups they belong to
-
-   ![](/img/ambari-plugin/DS2_AMBARI_004.png)
-
-5. System Env Optimization will export some system environment config. Modify according to the actual situation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_020.png)
-   
-6. Configure the database information (same as in the initialization database in step 1)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_005.png)
-
-7. Configure additional information if needed
-
-   ![](/img/ambari-plugin/DS2_AMBARI_006.png)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_007.png)
-
-8. Perform the next steps as normal
-
-   ![](/img/ambari-plugin/DS2_AMBARI_008.png)
-
-9. The interface after successful installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_009.png)
-   
-   
-
-------
-
-
-
-#### Add components to the node through Ambari -- for example, add a DolphinScheduler Worker
-
-***NOTE***: DolphinScheduler Logger is the installation dependent component of DS Worker in Dolphin's Ambari installation (need to add installation first; Prevent the Job log on the corresponding Worker from being checked)
-
-1. Locate the component node to add -- for example, node ark3
-
-   ![DS2_AMBARI_011](/img/ambari-plugin/DS2_AMBARI_011.png)
-
-2. Add components -- the drop-down list is all addable
-
-   ![DS2_AMBARI_012](/img/ambari-plugin/DS2_AMBARI_012.png)
-
-3. Confirm component addition
-
-   ![DS2_AMBARI_013](/img/ambari-plugin/DS2_AMBARI_013.png)
-
-4. After adding DolphinScheduler Worker and DolphinScheduler Logger components
-
-   ![DS2_AMBARI_015](/img/ambari-plugin/DS2_AMBARI_015.png)
-
-5. Start the component
-
-   ![DS2_AMBARI_016](/img/ambari-plugin/DS2_AMBARI_016.png)
-
-
-#### Remove the component from the node with Ambari
-
-1. Stop the component in the corresponding node
-
-   ![DS2_AMBARI_018](/img/ambari-plugin/DS2_AMBARI_018.png)
-
-2. Remove components
-
-   ![DS2_AMBARI_019](/img/ambari-plugin/DS2_AMBARI_019.png)
diff --git a/docs/en-us/2.0.2/user_doc/About_DolphinScheduler/About_DolphinScheduler.md b/docs/en-us/2.0.2/user_doc/About_DolphinScheduler/About_DolphinScheduler.md
new file mode 100644
index 0000000..d56b029
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/About_DolphinScheduler/About_DolphinScheduler.md
@@ -0,0 +1,10 @@
+Apache DolphinScheduler is a cloud-native visual Big Data workflow scheduler system, committed to “solving complex big-data task dependencies and triggering relationships in data OPS orchestration so that various types of big data tasks can be used out of the box”.
+
+# High Reliability
+- Decentralized multi-master and multi-worker architecture with built-in HA and overload processing
+# User-Friendly
+- All workflow definition operations are visualized, key information of a workflow is visible at a glance, and deployment is one-click
+# Rich Scenarios
+- Supports multi-tenancy and many task types, e.g., Spark, Flink, Hive, MR, Shell, Python, sub_process
+# High Expansibility
+- Supports custom task types and distributed scheduling; the overall scheduling capability increases linearly with the scale of the cluster
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/architecture/configuration.md b/docs/en-us/2.0.2/user_doc/architecture/configuration.md
new file mode 100644
index 0000000..5dbfc5a
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/configuration.md
@@ -0,0 +1,408 @@
+<!-- markdown-link-check-disable -->
+
+# Preface
+This document explains the DolphinScheduler application configurations for DolphinScheduler-1.3.x versions.
+
+# Directory Structure
+Currently, all the configuration files are under the [conf] directory. Please check the following simplified DolphinScheduler installation directory layout to see where the [conf] directory sits and which configuration files it contains. This document only describes the DolphinScheduler configurations; other modules are not covered.
+
+[Note: DolphinScheduler is hereinafter referred to as 'DS'.]
+```
+
+├─bin                               DS application commands directory
+│  ├─dolphinscheduler-daemon.sh         startup/shutdown DS application 
+│  ├─start-all.sh                       startup all DS services with configurations
+│  ├─stop-all.sh                        shutdown all DS services with configurations
+├─conf                              configurations directory
+│  ├─application-api.properties         API-service config properties
+│  ├─datasource.properties              datasource config properties
+│  ├─zookeeper.properties               zookeeper config properties
+│  ├─master.properties                  master config properties
+│  ├─worker.properties                  worker config properties
+│  ├─quartz.properties                  quartz config properties
+│  ├─common.properties                  common-service[storage] config properties
+│  ├─alert.properties                   alert-service config properties
+│  ├─config                             environment variables config directory
+│      ├─install_config.conf                DS environment variables configuration script[install/start DS]
+│  ├─env                                load environment variables configs script directory
+│      ├─dolphinscheduler_env.sh            load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper files directory
+│  ├─i18n                               i18n configs directory
+│  ├─logback-api.xml                    API-service log config
+│  ├─logback-master.xml                 master-service log config
+│  ├─logback-worker.xml                 worker-service log config
+│  ├─logback-alert.xml                  alert-service log config
+├─sql                                   DS metadata to create/upgrade .sql directory
+│  ├─create                             create SQL scripts directory
+│  ├─upgrade                            upgrade SQL scripts directory
+│  ├─dolphinscheduler_postgre.sql       postgre database init script
+│  ├─dolphinscheduler_mysql.sql         mysql database init script
+│  ├─soft_version                       current DS version-id file
+├─script                            DS services deployment, database create/upgrade scripts directory
+│  ├─create-dolphinscheduler.sh         DS database init script
+│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script
+│  ├─monitor-server.sh                  DS monitor-server start script       
+│  ├─scp-hosts.sh                       transfer installation files script                                     
+│  ├─remove-zk-node.sh                  cleanup zookeeper caches script       
+├─ui                                front-end web resources directory
+├─lib                               DS .jar dependencies directory
+├─install.sh                        auto-setup DS services script
+
+
+```
+
+
+# Configurations in Details
+
+serial number| service classification| config file|
+|--|--|--|
+1|startup/shutdown DS application|dolphinscheduler-daemon.sh
+2|datasource config properties| datasource.properties
+3|zookeeper config properties|zookeeper.properties
+4|common-service[storage] config properties|common.properties
+5|API-service config properties|application-api.properties
+6|master config properties|master.properties
+7|worker config properties|worker.properties
+8|alert-service config properties|alert.properties
+9|quartz config properties|quartz.properties
+10|DS environment variables configuration script[install/start DS]|install_config.conf
+11|load environment variables configs <br /> [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|services log config files|API-service log config : logback-api.xml  <br /> master-service log config  : logback-master.xml    <br /> worker-service log config : logback-worker.xml  <br /> alert-service log config : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [startup/shutdown DS application]
+dolphinscheduler-daemon.sh is responsible for DS startup and shutdown.
+Essentially, start-all.sh/stop-all.sh start up and shut down the cluster via dolphinscheduler-daemon.sh.
+Currently, DS only ships with a basic configuration; please set further JVM options according to your actual resources.
+
+Default simplified parameters are:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> "-XX:DisableExplicitGC" is not recommended due to may lead to memory link (DS dependent on Netty to communicate). 
+
+## 2.datasource.properties [datasource config properties]
+DS uses Druid to manage database connections; the default simplified configs are:
+
+|Parameters | Default value| Description|
+|--|--|--|
+spring.datasource.driver-class-name||datasource driver
+spring.datasource.url||datasource connection url
+spring.datasource.username||datasource username
+spring.datasource.password||datasource password
+spring.datasource.initialSize|5| initial connection pool size
+spring.datasource.minIdle|5| minimum connection pool size
+spring.datasource.maxActive|5| maximum connection pool size
+spring.datasource.maxWait|60000| max wait time in milliseconds
+spring.datasource.timeBetweenEvictionRunsMillis|60000| idle connection check interval
+spring.datasource.timeBetweenConnectErrorMillis|60000| retry interval
+spring.datasource.minEvictableIdleTimeMillis|300000| connections idle longer than minEvictableIdleTimeMillis are collected during the idle check
+spring.datasource.validationQuery|SELECT 1| validate connection by running the SQL
+spring.datasource.validationQueryTimeout|3| validate connection timeout[seconds]
+spring.datasource.testWhileIdle|true| set whether the pool validates the allocated connection when a new connection request comes
+spring.datasource.testOnBorrow|true| validity check when the program requests a new connection
+spring.datasource.testOnReturn|false| validity check when the program recalls a connection
+spring.datasource.defaultAutoCommit|true| whether auto commit
+spring.datasource.keepAlive|true| runs validationQuery SQL to avoid the connection closed by pool when the connection idles over minEvictableIdleTimeMillis
+spring.datasource.poolPreparedStatements|true| Open PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the size of PSCache on each connection
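+
+For example, switching the metadata store to MySQL means changing the four `spring.datasource.*` connection entries above and making the MySQL JDBC driver available to DS. A minimal sketch, assuming an installation under /opt/dolphinscheduler; the jar name, paths and connection values are illustrative only:
+
+```bash
+# copy the MySQL JDBC driver into DS's lib directory (jar name and paths are illustrative)
+cp mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib/
+
+# then edit conf/datasource.properties along these lines:
+#   spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
+#   spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+#   spring.datasource.username=xx
+#   spring.datasource.password=xx
+```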
+
+
+## 3.zookeeper.properties [zookeeper config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+zookeeper.quorum|localhost:2181| zookeeper cluster connection info
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under zookeeper root directory
+zookeeper.session.timeout|60000|  session timeout
+zookeeper.connection.timeout|30000| connection timeout
+zookeeper.retry.base.sleep|100| time to wait between subsequent retries
+zookeeper.retry.max.sleep|30000| maximum time to wait between subsequent retries
+zookeeper.retry.maxtime|10| maximum retry times
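+
+Before starting DS it can help to confirm that the configured quorum and root path are reachable, for example with the stock ZooKeeper CLI (the address below is the default `zookeeper.quorum` value; replace it with your ensemble):
+
+```bash
+# list DS's registry root on the configured ZooKeeper ensemble
+zkCli.sh -server localhost:2181 ls /dolphinscheduler
+```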
+
+
+## 4.common.properties [hadoop, s3, yarn config properties]
+Currently, common.properties mainly configures hadoop/s3a related configurations. 
+|Parameters | Default value| Description|
+|--|--|--|
+data.basedir.path|/tmp/dolphinscheduler| local directory used to store temp files
+resource.storage.type|NONE| type of resource files: HDFS, S3, NONE
+resource.upload.path|/dolphinscheduler| storage path of resource files
+hadoop.security.authentication.startup.state|false| whether hadoop grant kerberos permission
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos config directory
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos username
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos user keytab
+kerberos.expire.time|2|kerberos expire time,integer,the unit is hour
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties| file types supported by resource center
+hdfs.root.user|hdfs| configure users with corresponding permissions if storage type is HDFS
+fs.defaultFS|hdfs://mycluster:8020|If resource.storage.type=S3, then the request url would be similar to 's3a://dolphinscheduler'. Otherwise if resource.storage.type=HDFS and hadoop supports HA, please copy core-site.xml and hdfs-site.xml into 'conf' directory
+fs.s3a.endpoint||s3 endpoint url
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||specify the yarn resourcemanager url. if resourcemanager supports HA, input HA IP addresses (separated by comma), or input null for standalone
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|keep default if resourcemanager supports HA or not use resourcemanager. Or replace ds1 with corresponding hostname if resourcemanager in standalone mode
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false| specify whether in development state
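+
+When `resource.storage.type=HDFS` is used against an HA Hadoop cluster, the table above asks for the cluster client configs to be placed under `conf`. A minimal sketch, assuming the Hadoop client configs live in /etc/hadoop/conf and DS is installed under /opt/dolphinscheduler (both paths are illustrative):
+
+```bash
+# make the HA nameservice in fs.defaultFS resolvable to DS by shipping the Hadoop client configs
+cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /opt/dolphinscheduler/conf/
+```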
+
+
+## 5.application-api.properties [API-service config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+server.port|12345|api service communication port
+server.servlet.session.timeout|7200|session timeout
+server.servlet.context-path|/dolphinscheduler | request path
+spring.servlet.multipart.max-file-size|1024MB| maximum file size
+spring.servlet.multipart.max-request-size|1024MB| maximum request size
+server.jetty.max-http-post-size|5000000| jetty maximum post size
+spring.messages.encoding|UTF-8| message encoding
+spring.jackson.time-zone|GMT+8| time zone
+spring.messages.basename|i18n/messages| i18n config
+security.authentication.type|PASSWORD| authentication type
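+
+A quick smoke test of a running API service can be built from `server.port` and `server.servlet.context-path` above. The login endpoint and the default credentials below are assumptions (they are not defined in this document); adjust them to your deployment:
+
+```bash
+# expect a JSON response containing a session id if the API service is up
+curl -X POST -d "userName=admin&userPassword=dolphinscheduler123" \
+  http://localhost:12345/dolphinscheduler/login
+```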
+
+
+## 6.master.properties [master config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+master.listen.port|5678|master listen port
+master.exec.threads|100|master execute thread number to limit process instances in parallel
+master.exec.task.num|20|master execute task number in parallel per process instance
+master.dispatch.task.num|3|master dispatch task number per batch
+master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
+master.heartbeat.interval|10|master heartbeat interval, the unit is second
+master.task.commit.retryTimes|5|master commit task retry times
+master.task.commit.interval|1000|master commit task interval, the unit is millisecond
+master.max.cpuload.avg|-1|master max CPU load avg; the master server can only schedule when the system CPU load average is lower than this value. Default value -1: the number of CPU cores * 2
+master.reserved.memory|0.3|master reserved memory; the master server can only schedule when the system available memory is higher than this value. Default value 0.3, the unit is G
+
+
+## 7.worker.properties [worker config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+worker.listen.port|1234|worker listen port
+worker.exec.threads|100|worker execute thread number to limit task instances in parallel
+worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
+worker.max.cpuload.avg|-1|worker max CPU load avg; tasks can only be dispatched to the worker when the system CPU load average is lower than this value. Default value -1: the number of CPU cores * 2
+worker.reserved.memory|0.3|worker reserved memory; tasks can only be dispatched to the worker when the system available memory is higher than this value. Default value 0.3, the unit is G
+worker.groups|default|worker groups separated by comma, like 'worker.groups=default,test' <br> worker will join corresponding group according to this config when startup
+
+
+## 8.alert.properties [alert-service config properties]
+|Parameters | Default value| Description|
+|--|--|--|
+alert.type|EMAIL|alert type|
+mail.protocol|SMTP|mail server protocol
+mail.server.host|xxx.xxx.com|mail server host
+mail.server.port|25|mail server port
+mail.sender|xxx@xxx.com|mail sender email
+mail.user|xxx@xxx.com|mail sender email name
+mail.passwd|111111|mail sender email password
+mail.smtp.starttls.enable|true|whether to enable STARTTLS for mail
+mail.smtp.ssl.enable|false|whether to enable SSL for mail
+mail.smtp.ssl.trust|xxx.xxx.com|mail SSL trust list
+xls.file.path|/tmp/xls|mail attachment temp storage directory
+||the following settings configure WeCom [optional]|
+enterprise.wechat.enable|false|specify whether enable WeCom
+enterprise.wechat.corp.id|xxxxxxx|WeCom corp id
+enterprise.wechat.secret|xxxxxxx|WeCom secret
+enterprise.wechat.agent.id|xxxxxxx|WeCom agent id
+enterprise.wechat.users|xxxxxxx|WeCom users
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|WeCom token url
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|WeCom push url
+enterprise.wechat.user.send.msg||send message format
+enterprise.wechat.team.send.msg||group message format
+plugin.dir|/Users/xx/your/path/to/plugin/dir|plugin directory
+
+
+## 9.quartz.properties [quartz config properties]
+This part describes the quartz configs; please configure them based on your actual situation and resources.
+
+|Parameters | Default value| Description|
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS environment variables configuration script[install/start DS]]
+install_config.conf is a bit complicated and is mainly used in the following two places.
+* 1.DS cluster auto installation
+
+> The system loads the configs in install_config.conf and auto-configures the files below based on its content when 'install.sh' is executed.
+> These files include dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
+
+
+* 2.Startup/shutdown DS cluster
+> The system loads the masters, workers, alertServer, apiServers and other parameters inside the file to start up or shut down the DS cluster.
+
+File content as follows:
+```bash
+
+# Note:  please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&`.
+#   eg: `[` escape to `\[`
+
+# Database type (DS currently only supports PostgreSQL and MySQL)
+dbtype="mysql"
+
+# Database url & port
+dbhost="192.168.xx.xx:3306"
+
+# Database name
+dbname="dolphinscheduler"
+
+
+# Database username
+username="xx"
+
+# Database password
+password="xx"
+
+# Zookeeper url
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# DS installation path, such as '/data1_1T/dolphinscheduler'
+installPath="/data1_1T/dolphinscheduler"
+
+# Deployment user
+# Note: Deployment user needs 'sudo' privilege and has rights to operate HDFS
+#     Root directory must be created by the same user if using HDFS, otherwise permission related issues will be raised.
+deployUser="dolphinscheduler"
+
+
+# Followings are alert-service configs
+# Mail server host
+mailServerHost="smtp.exmail.qq.com"
+
+# Mail server port
+mailServerPort="25"
+
+# Mail sender
+mailSender="xxxxxxxxxx"
+
+# Mail user
+mailUser="xxxxxxxxxx"
+
+# Mail password
+mailPassword="xxxxxxxxxx"
+
+# Set true if the mail server supports TLS, otherwise false
+starttlsEnable="true"
+
+# Set true if the mail server supports SSL, otherwise false. Note: starttlsEnable and sslEnable cannot both be true
+sslEnable="false"
+
+# Mail server host, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+# Specify which resource upload function to use for resources storage such as sql files. And supported options are HDFS, S3 and NONE. HDFS for upload to HDFS and NONE for not using this function.
+resourceStorageType="NONE"
+
+# if S3, write S3 address. HA, for example: s3a://dolphinscheduler,
+# Note: s3 make sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# If parameter 'resourceStorageType' is S3, following configs are needed:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# If ResourceManager supports HA, input the active and standby node IPs or hostnames, eg: '192.168.xx.xx,192.168.xx.xx'. If ResourceManager runs in standalone mode, or yarn is not used, set yarnHaIps="".
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+
+# If ResourceManager runs in standalone, then set ResourceManager node ip or hostname, or else remain default.
+singleYarnIp="yarnIp1"
+
+# Storage path when using HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3 root user
+hdfsRootUser="hdfs"
+
+# Followings are Kerberos configs
+
+# Specify whether Kerberos is enabled
+kerberosStartUp="false"
+
+# Kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+
+# Keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+
+# Username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# API-service port
+apiServerPort="12345"
+
+
+# All hosts deploy DS
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# Ssh port, default 22
+sshPort="22"
+
+# Master service hosts
+masters="ds1,ds2"
+
+# All hosts deploy worker service
+# Note: Each worker needs to set a worker group name and default name is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  Host deploy alert-service
+alertServer="ds3"
+
+# Host deploy API-service
+apiServers="ds1"
+```
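+
+Once the file is filled in, the two usages above map onto the following commands, run as the deployment user from the DS distribution directory:
+
+```bash
+# 1. cluster auto installation: renders the per-service config files from install_config.conf and deploys DS to all listed hosts
+sh install.sh
+
+# 2. startup / shutdown of the whole cluster described in install_config.conf
+./bin/start-all.sh
+./bin/stop-all.sh
+```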
+
+## 11.dolphinscheduler_env.sh [load environment variables configs]
+When using shell to submit tasks, DS loads the environment variables inside dolphinscheduler_env.sh on the host.
+The task types involved are: Shell task, Python task, Spark task, Flink task, DataX task, etc.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
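+
+A quick way to verify the file resolves cleanly on a worker host (the paths above are the shipped defaults; point them at your real installations):
+
+```bash
+# source the env file and confirm a couple of the exported homes
+bash -c 'source ./conf/env/dolphinscheduler_env.sh && echo $JAVA_HOME && echo $SPARK_HOME2'
+```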
+
+## 12. Services logback configs
+Services name| logback config name |
+--|--|
+API-service logback config |logback-api.xml|
+master-service logback config|logback-master.xml |
+worker-service logback config|logback-worker.xml |
+alert-service logback config|logback-alert.xml |
diff --git a/docs/en-us/2.0.2/user_doc/architecture/design.md b/docs/en-us/2.0.2/user_doc/architecture/design.md
new file mode 100644
index 0000000..de3a41a
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/design.md
@@ -0,0 +1,339 @@
+## System Architecture Design
+
+Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the
+scheduling system
+
+### 1.System Structure
+
+#### 1.1 System architecture diagram
+
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+#### 1.2 Start process activity diagram
+
+<p align="center">
+  <img src="/img/master-process-2.0-en.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+#### 1.3 Architecture description
+
+* **MasterServer**
+
+  MasterServer adopts a distributed, decentralized design. MasterServer is mainly responsible for DAG task
+  segmentation and task submission monitoring, while also monitoring the health status of other MasterServers and
+  WorkerServers. When the MasterServer service starts, it registers a temporary node with Zookeeper and performs fault
+  tolerance by monitoring changes to Zookeeper's temporary nodes. MasterServer provides monitoring services based on
+  netty.
+
+  ##### The service mainly includes:
+    - **MasterSchedulerService** is a scanning thread that scans the **command** table in the database regularly,
+      generates workflow instances, and performs different business operations according to different **command types**
+
+    - **WorkflowExecuteThread** is mainly responsible for DAG task segmentation, task submission, logical processing of
+      various command types, processing task status and workflow status events
+
+    - **EventExecuteService** handles all state change events of the workflow instance that the master is responsible
+      for, and uses the thread pool to process the state events of the workflow
+
+    - **StateWheelExecuteThread** handles timing state updates of dependent tasks and timeout tasks
+
+* **WorkerServer**
+
+  WorkerServer also adopts a distributed, decentralized design, supports custom task plug-ins, and is mainly
+  responsible for task execution and log services. When the WorkerServer service starts, it registers a temporary node
+  with Zookeeper and maintains a heartbeat.
+
+  ##### The service mainly includes:
+
+    - **WorkerManagerThread** mainly receives tasks sent by the master through netty and calls the executor
+      corresponding to the task type via **TaskExecuteThread**.
+
+    - **RetryReportTaskStatusThread** mainly reports task status to the master through netty. If a report fails, it
+      will keep retrying.
+
+    - **LoggerServer** is a log service that provides log fragment viewing, refreshing and downloading functions
+
+* **Registry**
+
+  The registry is implemented as a plug-in, and Zookeeper is supported by default. The MasterServer and WorkerServer
+  nodes in the system use the registry for cluster management and fault tolerance. In addition, the system also performs
+  event monitoring and distributed locks based on the registry.
+
+* **Alert**
+
+  Provides alert-related functions and only supports stand-alone deployment. Custom alert plug-ins are supported.
+
+* **API**
+
+  The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service
+  uniformly provides RESTful APIs to the outside world. Interfaces include workflow creation, definition, query,
+  modification, release, taking offline, manual start, stop, pause, resume, starting execution from a given node, and
+  so on.
+
+* **UI**
+
+  The front-end page of the system provides its various visual operation interfaces. See more
+  in the [Introduction to Functions](../guide/homepage.md) section.
+
+#### 1.4 Architecture design ideas
+
+##### One. Decentralization vs. centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple. The nodes in the distributed cluster are divided into roughly
+two roles according to their responsibilities:
+
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
+ </p>
+
+- The Master is mainly responsible for task distribution and for monitoring the health status of the Slaves, and it
+  can dynamically balance tasks across the Slaves so that no Slave node is "overloaded" or "idle".
+- The Worker is mainly responsible for task execution and for maintaining a heartbeat with the Master, so that the
+  Master can assign tasks to it.
+
+Problems with the centralized design:
+
+- Once there is a problem with the Master, the cluster is leaderless and will collapse. To solve this problem, most
+  Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby with automatic or
+  manual switching, and more and more new systems are gaining the ability to automatically elect and switch the Master
+  to improve availability.
+- Another problem is that if the Scheduler is on the Master, although different tasks in a DAG can run on different
+  machines, the Master can become overloaded. If the Scheduler is on the Slave, all tasks of a DAG can only submit jobs
+  from one machine, which puts more pressure on that Slave when there are many parallel tasks.
+
+###### Decentralized
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
+ </p>
+
+- In a decentralized design, there is usually no Master/Slave concept: all roles are the same and have equal status.
+  The global Internet is a typical decentralized distributed system; the failure of any node connected to the network
+  only affects a small range of functions.
+- The core of decentralized design is that there is no "manager" distinct from the other nodes in the distributed
+  system, so there is no single point of failure. However, because there is no "manager" node, each node needs to
+  communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed
+  communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep
+  emerging. Under this architecture, the managers in the cluster are dynamically selected rather than preset, and when
+  the cluster fails, the nodes automatically hold "meetings" to elect new "managers" to preside over the work. The most
+  typical cases are ZooKeeper and Etcd, which is implemented in the Go language.
+
+
+- In DolphinScheduler's decentralization, Masters and Workers register themselves in Zookeeper so that neither the
+  Master cluster nor the Worker cluster has a center. A sharding mechanism is used to distribute workflows fairly
+  across the masters, and tasks are sent to the workers for execution through different dispatch strategies.
+
+##### Two. The master execution process
+
+1. DolphinScheduler uses a sharding algorithm to take the modulus of the commands and assigns them according to each
+   master's sort id. The master converts a received command into a workflow instance and uses the thread pool to
+   process the workflow instance
+
+2. DolphinScheduler's process for executing a workflow:
+
+- Start the workflow through UI or API calls, and persist a command to the database
+- The Master scans the Command table through the sharding algorithm, generates a workflow instance ProcessInstance, and
+  deletes the Command data at the same time
+- The Master uses the thread pool to run WorkflowExecuteThread to execute the process of the workflow instance,
+  including building DAG, creating task instance TaskInstance, and sending TaskInstance to worker through netty
+- After the worker receives the task, it modifies the task status and returns the execution information to the Master
+- The Master receives the task information, persists it to the database, and stores the state change event in the
+  EventExecuteService event queue
+- EventExecuteService calls WorkflowExecuteThread according to the event queue to submit subsequent tasks and modify
+  workflow status
+
+##### Three. The insufficient-thread loop waiting problem
+
+- If a DAG has no sub-processes and the number of Commands exceeds the threshold set by the thread pool, the process
+  directly waits or fails.
+- If many sub-processes are nested in a large DAG, the situation in the following figure produces a "deadlocked" state:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
+ </p>
+In the above figure, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread in the thread pool, so the entire DAG process can never end and its threads cannot be released. The child and parent processes thus end up waiting for each other in a loop. At this point, unless a new Master is started to add threads to break such a "stalemate", the scheduling cluster can no longer be used.
+
+It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions
+to reduce this risk:
+
+1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is,
+   pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of
+   threads is unlikely to be obtained in real time.
+2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
+3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this
+   way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up
+   to execute again.
+
+Note: the Master Scheduler thread acquires Commands in FIFO order.
+
+So we chose the third way to solve the problem of insufficient threads.
+
+##### Four. Fault-tolerant design
+
+Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is
+divided into master fault tolerance and worker fault tolerance.
+
+###### 1. Downtime fault tolerance
+
+The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
+ </p>
+Among them, the Master monitors the directories of other Masters and Workers. If a remove event is received, fault tolerance for the process instance or task instance is performed according to the specific business logic.
+
+
+
+- Master fault tolerance:
+
+<p align="center">
+   <img src="/img/failover-master.jpg" alt="failover-master"  width="50%" />
+ </p>
+
+Fault tolerance range: from the host perspective, the Master's fault tolerance covers its own host plus any node host that no longer exists in the registry, and the entire fault tolerance process is performed under a lock;
+
+Fault-tolerant content: the Master's fault tolerance covers process instances and task instances. Before fault tolerance, it compares the start time of the instance with the server start-up time and skips fault tolerance for instances started after the server start time;
+
+Fault-tolerant post-processing: after ZooKeeper Master fault tolerance is completed, the instance is re-scheduled by the Scheduler thread in DolphinScheduler, which traverses the DAG to find the "running" and "submit successful" tasks. For "running" tasks it monitors the status of their task instances; for "submit successful" tasks it checks whether they already exist in the task queue: if so, the task instance status is monitored as well, and if not, the task is resubmitted.
+
+- Worker fault tolerance:
+
+<p align="center">
+   <img src="/img/failover-worker.jpg" alt="failover-worker"  width="50%" />
+ </p>
+
+Fault tolerance range: from the process instance perspective, each Master is only responsible for fault tolerance of its own process instances; a lock is only taken during `handleDeadServer`;
+
+Fault-tolerant content: when the remove event of a Worker node is received, the Master only performs fault tolerance on task instances. Before fault tolerance, it compares the start time of the instance with the server start-up time and skips fault tolerance for instances started after the server start time;
+
+Fault-tolerant post-processing: Once the Master Scheduler thread finds that the task instance is in the "fault-tolerant" state, it takes over the task and resubmits it.
+
+Note: due to "network jitter", a node may briefly lose its heartbeat with ZooKeeper, triggering the node's remove event. For this situation, we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
+
+###### 2. Task failure retry
+
+Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a
+  Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
+- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the
+  failed node** or **from the current node**
+- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
+
+Back to the topic: we divide the task nodes in the workflow into two types.
+
+- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node,
+  Spark node, and dependent node.
+
+- There is also a logical node, which does not do actual script or statement processing, but only logical processing of
+  the entire process flow, such as sub-process sections.
+
+Each **business node** can be configured with a number of failed retries. When the task node fails, it will
+automatically retry until it succeeds or exceeds the configured number of retries. A **logical node** does not support
+failure retry, but the tasks inside a logical node do.
+
+If a task failure in the workflow reaches its maximum number of retries, the workflow fails and stops, and the failed
+workflow can then be manually rerun or recovered.
+
+##### Five. Task priority design
+
+In the early scheduling design, with no priority design and fair scheduling, a task submitted first could finish at the
+same time as a task submitted later, and neither the process priority nor the task priority could be set, so we have
+redesigned this. Our current design is as follows:
+
+- Tasks are processed from high to low priority in the order: **priority of different process instances** >
+  **priority of the same process instance** > **priority of tasks within the same process** > **submission order of
+  tasks within the same process**.
+    - The specific implementation is to parse the priority from the JSON of the task instance and then save the
+      **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue;
+      when tasks are taken from the queue, a string comparison yields the tasks that need to be executed first
+
+        - The priority of the process definition is to consider that some processes need to be processed before other
+          processes. This can be configured when the process is started or scheduled to start. There are 5 levels in
+          total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
+             </p>
+
+        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As
+          shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
+             </p>
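+
+The following is a purely illustrative Java sketch of how such a priority key can be built and how plain string ordering picks the next task. The class and method names are not taken from the DolphinScheduler code base, and a real implementation must also keep the numeric fields at comparable widths for string ordering to match numeric ordering.
+
+```java
+import java.util.TreeSet;
+
+public class TaskPriorityKeySketch {
+
+    // lower numeric value = higher priority (0 HIGHEST ... 4 LOWEST)
+    static String buildKey(int processInstancePriority, int processInstanceId,
+                           int taskPriority, int taskId) {
+        return String.format("%d_%d_%d_%d",
+                processInstancePriority, processInstanceId, taskPriority, taskId);
+    }
+
+    public static void main(String[] args) {
+        TreeSet<String> queue = new TreeSet<>(); // sorted set stands in for the ZooKeeper queue
+        queue.add(buildKey(2, 10, 3, 101)); // MEDIUM process, LOW task
+        queue.add(buildKey(0, 11, 2, 102)); // HIGHEST process, MEDIUM task
+        queue.add(buildKey(2, 10, 1, 103)); // MEDIUM process, HIGH task
+
+        // the smallest key is the task to execute next
+        System.out.println(queue.first()); // 0_11_2_102
+    }
+}
+```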
+
+##### Six、Logback and netty implement log access
+
+- Since the Web (UI) and Worker are not necessarily on the same machine, viewing a log cannot be done like querying a
+  local file. There are two options:
+  - Put the logs on the ES search engine
+  - Obtain remote log information through netty communication
+
+- In consideration of keeping DolphinScheduler as lightweight as possible, netty was chosen to implement remote access
+  to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+- We use custom Logback FileAppender and Filter implementations so that each task instance generates its own log
+  file.
+- FileAppender is mainly implemented as follows:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+Logs are generated in the form of /process definition id/process instance id/task instance id.log
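+
+The mapping from the thread name to the per-task-instance log path can be pictured with the small, purely hypothetical sketch below (class and method names are illustrative, not from the DolphinScheduler code base):
+
+```java
+public class TaskLogPathSketch {
+
+    // thread name format: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+    static String logPathFromThreadName(String threadName, String baseDir) {
+        String logId = threadName.split("-")[1];   // e.g. "TaskLogInfo-1_10_100" -> "1_10_100"
+        String[] ids = logId.split("_");
+        return String.format("%s/%s/%s/%s.log", baseDir, ids[0], ids[1], ids[2]);
+    }
+
+    public static void main(String[] args) {
+        System.out.println(logPathFromThreadName("TaskLogInfo-1_10_100", "logs"));
+        // prints: logs/1/10/100.log
+    }
+}
+```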
+
+- A Filter is used to match thread names starting with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
diff --git a/docs/en-us/2.0.2/user_doc/architecture/designplus.md b/docs/en-us/2.0.2/user_doc/architecture/designplus.md
new file mode 100644
index 0000000..541d572
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/designplus.md
@@ -0,0 +1,79 @@
+## System Architecture Design
+
+Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the
+scheduling system
+
+### 1.Glossary
+
+**DAG:** The full name is Directed Acyclic Graph, abbreviated as DAG. Tasks in a workflow are assembled in the form of
+a directed acyclic graph, and topological traversal is performed from nodes with zero in-degree until there are no
+subsequent nodes. An example is shown below:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
+
+**Process definition**: The visual **DAG** formed by dragging task nodes and establishing associations between them
+
+**Process instance**: The process instance is the instantiation of the process definition, which can be generated by
+manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
+
+**Task instance**: The task instance is the instantiation of the task node in the process definition, which identifies
+the specific task execution status
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT
+(dependency) task types, with plans to support dynamic plug-in expansion. Note: a **SUB_PROCESS** is also a separate
+process definition that can be started and executed on its own
+
+**Scheduling method**: The system supports scheduled scheduling based on cron expressions as well as manual scheduling.
+Supported command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume
+paused process, start execution from the failed node, complement (backfill), schedule, rerun, pause, stop, and resume
+waiting thread. Among them, **resume fault-tolerant workflow** and **resume waiting thread** are used internally by the
+scheduler and cannot be called from the outside
+
+**Scheduled**: The system adopts the **quartz** distributed scheduler and supports the visual generation of cron expressions
+
+**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but
+also provides **task dependent** nodes, supporting custom task dependencies **between processes**
+
+**Priority**: Supports setting the priority of process instances and task instances; if no priority is set, the
+default is first-in-first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance
+run results and fault tolerance alert notifications
+
+**Failure strategy**: For tasks running in parallel, if one task fails, two failure-strategy options are provided.
+**Continue** means that, regardless of the status of the parallel tasks, the process keeps running until it ends.
+**End** means that once a failed task is found, the tasks running in parallel are killed at the same time and the
+process fails and ends
+
+**Complement**: Backfill historical data, supporting two modes: **interval parallel and serial**
+
+### 2.Module introduction
+
+- dolphinscheduler-alert alarm module, providing AlertServer service.
+
+- dolphinscheduler-api web application module, providing ApiServer service.
+
+- dolphinscheduler-common general constants, enumerations, utility classes, data structures and base classes.
+
+- dolphinscheduler-dao provides operations such as database access.
+
+- dolphinscheduler-remote netty-based client and server.
+
+- dolphinscheduler-server MasterServer and WorkerServer services.
+
+- dolphinscheduler-service service module, including Quartz, ZooKeeper and log client access services, making it easy
+  for the server module and the api module to call.
+
+- dolphinscheduler-ui front-end module
+
+### Sum up
+
+From the perspective of scheduling, this article gives a preliminary introduction to the architecture principles and
+implementation ideas of the big data distributed workflow scheduling system DolphinScheduler. To be continued.
+
+
diff --git a/docs/en-us/2.0.2/user_doc/architecture/listdocs.md b/docs/en-us/2.0.2/user_doc/architecture/listdocs.md
new file mode 100644
index 0000000..71e9f7f
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/listdocs.md
@@ -0,0 +1,63 @@
+# Older Versions:
+
+#### Setup instructions are available for each stable version of Apache DolphinScheduler below:
+
+### Versions: 2.0.2
+
+#### Links: [2.0.2 Document](/en-us/docs/2.0.2/user_doc/guide/quick-start.html)
+
+### Versions: 2.0.1
+
+#### Links: [2.0.1 Document](/en-us/docs/2.0.1/user_doc/guide/quick-start.html)
+
+### Versions: 2.0.0
+
+#### Links: [2.0.0 Document](/en-us/docs/2.0.0/user_doc/guide/quick-start.html)
+
+### Versions: 1.3.9
+
+#### Links: [1.3.9 Document](/en-us/docs/1.3.9/user_doc/quick-start.html)
+
+### Versions: 1.3.8
+
+#### Links: [1.3.8 Document](/en-us/docs/1.3.8/user_doc/quick-start.html)
+
+### Versions: 1.3.6
+
+#### Links: [1.3.6 Document](/en-us/docs/1.3.6/user_doc/quick-start.html)
+
+### Versions: 1.3.5
+
+#### Links: [1.3.5 Document](/en-us/docs/1.3.5/user_doc/quick-start.html)
+
+### Versions: 1.3.4
+
+#### Links: [1.3.4 Document](/en-us/docs/1.3.4/user_doc/quick-start.html)
+
+### Versions: 1.3.3
+
+#### Links: [1.3.3 Document](/en-us/docs/1.3.4/user_doc/quick-start.html)
+
+### Versions: 1.3.2
+
+#### Links: [1.3.2 Document](/en-us/docs/1.3.2/user_doc/quick-start.html)
+
+### Versions: 1.3.1
+
+#### Links: [1.3.1 Document](/en-us/docs/1.3.1/user_doc/quick-start.html)
+
+### Versions: 1.2.1
+
+#### Links: [1.2.1 Document](/en-us/docs/1.2.1/user_doc/quick-start.html)
+
+### Versions: 1.2.0
+
+#### Links: [1.2.0 Document](/en-us/docs/1.2.0/user_doc/quick-start.html)
+
+### Versions: 1.1.0
+
+#### Links: [1.1.0 Document](/en-us/docs/1.2.0/user_doc/quick-start.html)
+
+### Versions: Dev
+
+#### Links: [Dev Document](/en-us/docs/dev/user_doc/guide/quick-start.html)
diff --git a/docs/en-us/2.0.2/user_doc/architecture/load-balance.md b/docs/en-us/2.0.2/user_doc/architecture/load-balance.md
new file mode 100644
index 0000000..33a8330
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/load-balance.md
@@ -0,0 +1,61 @@
+### Load Balance
+
+Load balancing refers to the reasonable allocation of server pressure through routing algorithms (usually in cluster environments) to achieve the maximum optimization of server performance.
+
+
+
+### DolphinScheduler-Worker load balancing algorithms
+
+DolphinScheduler-Master allocates tasks to workers and provides three algorithms by default:
+
+- Weighted random (random)
+- Smooth round-robin (roundrobin)
+- Linear load (lowerweight)
+
+The default configuration is linear load.
+
+Since the routing is done on the client side, that is, the master service, you can configure the algorithm you want by changing master.host.selector in master.properties.
+
+e.g. master.host.selector=random (case-insensitive)
+
+### Worker load balancing configuration
+
+The configuration file is worker.properties
+
+#### weight
+
+All of the load balancing algorithms above are weighted, and the weight affects the routing outcome. You can set different weights for different machines by modifying the worker.weight value.
+
+####  Preheating
+
+With JIT optimisation in mind, we let the worker run at low power for a period of time after startup so that it can gradually reach its optimal state; we call this process preheating (warm-up). If you are interested, you can read some articles about JIT.
+
+So the worker's weight gradually rises to its maximum over time after it starts (ten minutes by default; we don't provide a configuration item, but you can change it and submit a PR if needed). A sketch of this idea follows.
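+
+One simple way to picture the warm-up is a linear ramp of the effective weight over the warm-up window. The sketch below assumes such a ramp purely for illustration; it is not the actual implementation, and all names are hypothetical.
+
+```java
+public class WarmUpWeightSketch {
+
+    static final long WARM_UP_MILLIS = 10 * 60 * 1000L; // ten minutes by default
+
+    static int effectiveWeight(int configuredWeight, long startupTimeMillis, long nowMillis) {
+        long uptime = nowMillis - startupTimeMillis;
+        if (uptime >= WARM_UP_MILLIS) {
+            return configuredWeight;
+        }
+        // scale the weight by the fraction of the warm-up window that has elapsed
+        int scaled = (int) (configuredWeight * uptime / (double) WARM_UP_MILLIS);
+        return Math.max(scaled, 1);
+    }
+
+    public static void main(String[] args) {
+        long start = System.currentTimeMillis() - 5 * 60 * 1000L; // worker started 5 minutes ago
+        System.out.println(effectiveWeight(100, start, System.currentTimeMillis())); // roughly 50
+    }
+}
+```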
+
+### Load balancing algorithm breakdown
+
+#### Random (weighted)
+
+This algorithm is relatively simple: one of the eligible workers is selected at random, and the weight affects the probability of being selected.
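+
+A minimal sketch of weighted random selection, with illustrative names only (this is not the project's selector code): a worker with weight 2 is roughly twice as likely to be chosen as a worker with weight 1.
+
+```java
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.ThreadLocalRandom;
+
+public class WeightedRandomSketch {
+
+    static class Worker {
+        final String host;
+        final int weight;
+
+        Worker(String host, int weight) {
+            this.host = host;
+            this.weight = weight;
+        }
+    }
+
+    static Worker select(List<Worker> workers) {
+        int totalWeight = 0;
+        for (Worker w : workers) {
+            totalWeight += w.weight;
+        }
+        // pick a random offset in [0, totalWeight) and walk the list until it is used up
+        int offset = ThreadLocalRandom.current().nextInt(totalWeight);
+        for (Worker w : workers) {
+            offset -= w.weight;
+            if (offset < 0) {
+                return w;
+            }
+        }
+        return workers.get(workers.size() - 1); // defensive fallback, not reached in practice
+    }
+
+    public static void main(String[] args) {
+        List<Worker> workers = Arrays.asList(new Worker("worker-1", 1), new Worker("worker-2", 2));
+        System.out.println(select(workers).host);
+    }
+}
+```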
+
+#### Smooth round-robin (weighted)
+
+The weighted round-robin algorithm has an obvious drawback: under certain weight combinations it generates an uneven selection sequence, and this unsmooth load may cause some instances to experience transient high load, with a risk of downtime. To address this flaw, we provide a smooth weighted round-robin algorithm.
+
+Each worker keeps two weights: weight (which remains constant after warm-up is complete) and current_weight (which changes dynamically). For each route, we iterate over all workers and add weight to each worker's current_weight; the sum of all weights is counted as total_weight; the worker with the largest current_weight is then selected for this task, and its current_weight is reduced by total_weight.
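+
+The selection step can be sketched as follows; the class is purely illustrative and only meant to make the current_weight bookkeeping described above concrete.
+
+```java
+import java.util.Arrays;
+import java.util.List;
+
+public class SmoothRoundRobinSketch {
+
+    static class Worker {
+        final String host;
+        final int weight;      // fixed after warm-up
+        int currentWeight = 0; // changes on every selection
+
+        Worker(String host, int weight) {
+            this.host = host;
+            this.weight = weight;
+        }
+    }
+
+    static Worker select(List<Worker> workers) {
+        int totalWeight = 0;
+        Worker best = null;
+        for (Worker w : workers) {
+            w.currentWeight += w.weight;                        // current_weight += weight
+            totalWeight += w.weight;                            // accumulate total_weight
+            if (best == null || w.currentWeight > best.currentWeight) {
+                best = w;                                       // pick the largest current_weight
+            }
+        }
+        best.currentWeight -= totalWeight;                      // current_weight -= total_weight
+        return best;
+    }
+
+    public static void main(String[] args) {
+        List<Worker> workers = Arrays.asList(new Worker("a", 5), new Worker("b", 1), new Worker("c", 1));
+        StringBuilder order = new StringBuilder();
+        for (int i = 0; i < 7; i++) {
+            order.append(select(workers).host).append(' ');
+        }
+        System.out.println(order); // a a b a c a a  (smooth, instead of a a a a a b c)
+    }
+}
+```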
+
+#### Linear weighting (default algorithm)
+
+Each worker reports its own load information to the registry at regular intervals. The judgement is based on two main pieces of information:
+
+- load average (default is the number of CPU cores * 2)
+- available physical memory (default is 0.3, in G)
+
+If either threshold is breached (the load average is higher than the configured maximum, or the available memory is lower than the reserved value), the worker will not participate in load balancing and no tasks will be dispatched to it (a rough sketch of this check follows the property list below).
+
+You can customise the configuration by changing the following properties in worker.properties
+
+- worker.max.cpuload.avg=-1 (the maximum CPU load average; tasks are dispatched to the worker only while the system load average stays below this value. The default value -1 means the number of CPU cores * 2)
+- worker.reserved.memory=0.3 (the reserved memory; tasks are dispatched to the worker only while the system available memory stays above this value. The default value is 0.3, and the unit is G)
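+
+As a rough illustration of the eligibility rule, the following hypothetical sketch mirrors the two thresholds above; the class and method names are not from the DolphinScheduler code base.
+
+```java
+public class WorkerEligibilitySketch {
+
+    // a worker can accept tasks only while its load average stays below the configured maximum
+    // and its available memory stays above the reserved amount
+    static boolean canAcceptTasks(double loadAverage, double availableMemoryG,
+                                  double maxCpuLoadAvg, double reservedMemoryG) {
+        return loadAverage < maxCpuLoadAvg && availableMemoryG > reservedMemoryG;
+    }
+
+    public static void main(String[] args) {
+        int cpuCores = Runtime.getRuntime().availableProcessors();
+        double maxCpuLoadAvg = cpuCores * 2;   // default: number of CPU cores * 2
+        double reservedMemoryG = 0.3;          // default: 0.3 G
+
+        System.out.println(canAcceptTasks(1.5, 4.0, maxCpuLoadAvg, reservedMemoryG));  // true
+        System.out.println(canAcceptTasks(64.0, 0.1, maxCpuLoadAvg, reservedMemoryG)); // false
+    }
+}
+```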
diff --git a/docs/en-us/2.0.2/user_doc/architecture/metadata.md b/docs/en-us/2.0.2/user_doc/architecture/metadata.md
new file mode 100644
index 0000000..fe536d4
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/metadata.md
@@ -0,0 +1,173 @@
+# Dolphin Scheduler 2.0.2 MetaData
+
+<a name="V5KOl"></a>
+### Dolphin Scheduler 2.0 DB Table Overview
+| Table Name | Comment |
+| :---: | :---: |
+| t_ds_access_token | token for access ds backend |
+| t_ds_alert | alert detail |
+| t_ds_alertgroup | alert group |
+| t_ds_command | command detail |
+| t_ds_datasource | data source |
+| t_ds_error_command | error command detail |
+| t_ds_process_definition | process definition |
+| t_ds_process_instance | process instance |
+| t_ds_project | project |
+| t_ds_queue | queue |
+| t_ds_relation_datasource_user | datasource related to user |
+| t_ds_relation_process_instance | sub process |
+| t_ds_relation_project_user | project related to user |
+| t_ds_relation_resources_user | resource related to user |
+| t_ds_relation_udfs_user | UDF related to user |
+| t_ds_relation_user_alertgroup | alert group related to user |
+| t_ds_resources | resource center file |
+| t_ds_schedules | process definition schedule |
+| t_ds_session | user login session |
+| t_ds_task_instance | task instance |
+| t_ds_tenant | tenant |
+| t_ds_udfs | UDF resource |
+| t_ds_user | user detail |
+| t_ds_version | ds version |
+
+
+---
+
+<a name="XCLy1"></a>
+### E-R Diagram
+<a name="5hWWZ"></a>
+#### User Queue DataSource
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- Multiple users can belong to one tenant
+- The queue field in the t_ds_user table stores the queue_name information in the t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.
+- The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
+<a name="7euSN"></a>
+#### Project Resource Alert
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- User can have multiple projects, User project authorization completes the relationship binding using project_id and user_id in t_ds_relation_project_user table
+- The user_id in the t_ds_project table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents users who have permission to the project
+- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents the user who has permissions to the resource
+- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents a user who has permission to the UDF
+<a name="JEw4v"></a>
+#### Command Process Task
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
+
+- A project has multiple process definitions, a process definition can generate multiple process instances, and a process instance can generate multiple task instances
+- The t_ds_schedules table stores the timing schedule information for process definitions
+- The t_ds_relation_process_instance table stores the relationship when a process definition contains sub-processes: the parent_process_instance_id field is the id of the main process instance that contains the sub-process, the process_instance_id field is the id of the sub-process instance, and the parent_task_instance_id field is the id of the sub-process node's task instance
+- The process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively.
+
+---
+
+<a name="yd79T"></a>
+### Core Table Schema
+<a name="6bVhH"></a>
+#### t_ds_process_definition
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process definition name |
+| version | int | process definition version |
+| release_state | tinyint | process definition release state:0:offline,1:online |
+| project_id | int | project id |
+| user_id | int | process definition creator id |
+| process_definition_json | longtext | process definition json content |
+| description | text | process definition description |
+| global_params | text | global parameters |
+| flag | tinyint | process is available: 0 not available, 1 available |
+| locations | text | Node location information |
+| connects | text | Node connection information |
+| receivers | text | receivers |
+| receivers_cc | text | carbon copy list |
+| create_time | datetime | create time |
+| timeout | int | timeout |
+| tenant_id | int | tenant id |
+| update_time | datetime | update time |
+
+<a name="t5uxM"></a>
+#### t_ds_process_instance
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process instance name |
+| process_definition_id | int | process definition id |
+| state | tinyint | process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| recovery | tinyint | process instance failover flag:0:normal,1:failover instance |
+| start_time | datetime | process instance start time |
+| end_time | datetime | process instance end time |
+| run_times | int | process instance run times |
+| host | varchar | process instance host |
+| command_type | tinyint | command type:0 start ,1 Start from the current node,2 Resume a fault-tolerant process,3 Resume Pause Process, 4 Execute from the failed node,5 Complement, 6 dispatch, 7 re-run, 8 pause, 9 stop ,10 Resume waiting thread |
+| command_param | text | json command parameters |
+| task_depend_type | tinyint | task depend type. 0: only current node,1:before the node,2:later nodes |
+| max_try_times | tinyint | max try times |
+| failure_strategy | tinyint | failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed |
+| warning_type | tinyint | warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success |
+| warning_group_id | int | warning group id |
+| schedule_time | datetime | schedule time |
+| command_start_time | datetime | command start time |
+| global_params | text | global parameters |
+| process_instance_json | longtext | process instance json |
+| flag | tinyint | process instance is available: 0 not available, 1 available |
+| update_time | timestamp | update time |
+| is_sub_process | int | whether the process is sub process: 1 sub-process, 0 not sub-process |
+| executor_id | int | executor id |
+| locations | text | Node location information |
+| connects | text | Node connection information |
+| history_cmd | text | history commands of process instance operation |
+| dependence_schedule_times | text | depend schedule fire time |
+| process_instance_priority | int | process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+| timeout | int | time out |
+| tenant_id | int | tenant id |
+
+<a name="tHZsY"></a>
+#### t_ds_task_instance
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | task name |
+| task_type | varchar | task type |
+| process_definition_id | int | process definition id |
+| process_instance_id | int | process instance id |
+| task_json | longtext | task content json |
+| state | tinyint | Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| submit_time | datetime | task submit time |
+| start_time | datetime | task start time |
+| end_time | datetime | task end time |
+| host | varchar | host of task running on |
+| execute_path | varchar | task execute path in the host |
+| log_path | varchar | task log path |
+| alert_flag | tinyint | whether alert |
+| retry_times | int | task retry times |
+| pid | int | pid of task |
+| app_link | varchar | yarn app id |
+| flag | tinyint | taskinstance is available: 0 not available, 1 available |
+| retry_interval | int | retry interval when task failed  |
+| max_retry_times | int | max retry times |
+| task_instance_priority | int | task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+
+<a name="gLGtm"></a>
+#### t_ds_command
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| command_type | tinyint | Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread |
+| process_definition_id | int | process definition id |
+| command_param | text | json command parameters |
+| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward, 2 backward |
+| failure_strategy | tinyint | Failed policy: 0 end, 1 continue |
+| warning_type | tinyint | Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent |
+| warning_group_id | int | warning group |
+| schedule_time | datetime | schedule time |
+| start_time | datetime | start time |
+| executor_id | int | executor id |
+| dependence | varchar | dependence |
+| update_time | datetime | update time |
+| process_instance_priority | int | process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group_id | int | worker group id |
+
+
+
diff --git a/docs/en-us/2.0.2/user_doc/architecture/task-structure.md b/docs/en-us/2.0.2/user_doc/architecture/task-structure.md
new file mode 100644
index 0000000..a62f58d
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/architecture/task-structure.md
@@ -0,0 +1,1131 @@
+
+# Overall Tasks Storage Structure
+All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.
+
+The following shows the 't_ds_process_definition' table structure:
+
+
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|id|int(11)|primary key
+2|name|varchar(255)|process definition name
+3|version|int(11)|process definition version
+4|release_state|tinyint(4)|release status of process definition: 0 not online, 1 online
+5|project_id|int(11)|project id
+6|user_id|int(11)|user id of the process definition
+7|process_definition_json|longtext|process definition JSON
+8|description|text|process definition description
+9|global_params|text|global parameters
+10|flag|tinyint(4)|specify whether the process is available: 0 is not available, 1 is available
+11|locations|text|node location information
+12|connects|text|node connectivity info
+13|receivers|text|receivers
+14|receivers_cc|text|CC receivers
+15|create_time|datetime|create time
+16|timeout|int(11) |timeout
+17|tenant_id|int(11) |tenant id
+18|update_time|datetime|update time
+19|modify_by|varchar(36)|specifics of the user that made the modification
+20|resource_ids|varchar(255)|resource ids
+
+The 'process_definition_json' field is the core field, which defines the task information in the DAG diagram, and it is stored in JSON format.
+
+The following table describes the common data structure.
+
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|globalParams|Array|global parameters
+2|tasks|Array|task collections in the process [for the structure of each type, please refer to the following sections]
+3|tenantId|int|tenant ID
+4|timeout|int|timeout
+
+Data example:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"golbal_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
+
+# The Detailed Explanation of The Storage Structure of Each Task Type
+
+## Shell Nodes
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |SHELL
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch | |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
+
+
+## SQL Node
+Perform data query and update operations on the specified datasource through SQL.
+
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |note
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task id|
+2|type ||String |task type |SQL
+3| name| |String|task name|
+4| params| |Object|customized parameters|Json format
+5| |type |String |database type
+6| |datasource |Int |datasource id
+7| |sql |String |query SQL statement
+8| |udfs | String| udf functions|specify UDF function ids, separate by comma
+9| |sqlType | String| SQL node type |0 for query and 1 for none-query SQL
+10| |title |String | mail title
+11| |receivers |String |receivers
+12| |receiversCc |String |CC receivers
+13| |showType | String|display type of mail|optionals: TABLE or ATTACHMENT
+14| |connParams | String|connect parameters
+15| |preStatements | Array|preposition SQL statements
+16| | postStatements| Array|postposition SQL statements||
+17| | localParams| Array|customized parameters||
+18|description | |String|description | |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch  | |
+21| | successNode| Array|jump to node if success| |
+22| | failedNode|Array|jump to node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times | |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority | |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks | |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE [stored procedures] Node
+**The node data structure is as follows:**
+**Node data example:**
+
+## SPARK Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |SPARK
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String | main class
+6| |mainArgs | String| execution arguments
+7| |others | String| other arguments
+8| |mainJar |Object | application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |driverCores | String| driver cores
+11| |driverMemory | String| driver memory
+12| |numExecutors |String | executor count
+13| |executorMemory |String | executor memory
+14| |executorCores |String | executor cores
+15| |programType | String| program type|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
+17| | localParams| Array|customized local parameters
+18| | resourceList| Array|resource files
+19|description | |String|description | |
+20|runFlag | |String |execution flag| |
+21|conditionResult | |Object|condition branch| |
+22| | successNode| Array|jump to node if success| |
+23| | failedNode|Array|jump to node if failure| 
+24| dependence| |Object |task dependency |mutual exclusion with params
+25|maxRetryTimes | |String|max retry times | |
+26|retryInterval | |String |retry interval| |
+27|timeout | |Object|timeout | |
+28| taskInstancePriority| |String|task priority | |
+29|workerGroup | |String |Worker group| |
+30|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## MapReduce(MR) Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |MR
+3| name| |String|task name |
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String | main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object | application jar package
+9| |programType | String|program type|JAVA,PYTHON
+10| | localParams| Array|customized local parameters
+11| | resourceList| Array|resource files
+12|description | |String|description | |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump to node if success| |
+16| | failedNode|Array|jump to node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times | |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|  task Id|
+2|type ||String |task type|PYTHON
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String| Python script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure | 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+
+## Flink Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|FLINK
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |mainClass |String |main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object |application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |slot | String| slot count
+11| |taskManager |String | taskManager count
+12| |taskManagerMemory |String |taskManager memory size
+13| |jobManagerMemory |String | jobManager memory size
+14| |programType | String| program type|JAVA,SCALA,PYTHON
+15| | localParams| Array|local parameters
+16| | resourceList| Array|resource files
+17|description | |String|description | |
+18|runFlag | |String |execution flag| |
+19|conditionResult | |Object|condition branch| |
+20| | successNode| Array|jump node if success| |
+21| | failedNode|Array|jump node if failure| 
+22| dependence| |Object |task dependency |mutual exclusion with params
+23|maxRetryTimes | |String|max retry times| |
+24|retryInterval | |String |retry interval| |
+25|timeout | |Object|timeout | |
+26| taskInstancePriority| |String|task priority| |
+27|workerGroup | |String |Worker group| |
+28|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|HTTP
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |url |String |request url
+6| |httpMethod | String|http method|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|http parameters
+8| |httpCheckCondition | String|validation of HTTP code status|default code 200
+9| |condition |String |validation conditions
+10| | localParams| Array|customized local parameters
+11|description | |String|description| |
+12|runFlag | |String |execution flag| |
+13|conditionResult | |Object|condition branch| |
+14| | successNode| Array|jump node if success| |
+15| | failedNode|Array|jump node if failure| 
+16| dependence| |Object |task dependency |mutual exclusion with params
+17|maxRetryTimes | |String|max retry times | |
+18|retryInterval | |String |retry interval| |
+19|timeout | |Object|timeout | |
+20| taskInstancePriority| |String|task priority| |
+21|workerGroup | |String |Worker group| |
+22|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## DataX Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type|DATAX
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |customConfig |Int |specify whether use customized config| 0 none customized, 1 customized
+6| |dsType |String | datasource type
+7| |dataSource |Int | datasource ID
+8| |dtType | String|target database type
+9| |dataTarget | Int|target database ID 
+10| |sql |String | SQL statements
+11| |targetTable |String |target table
+12| |jobSpeedByte |Int |job speed limiting(bytes)
+13| |jobSpeedRecord | Int|job speed limiting(records)
+14| |preStatements | Array|preposition SQL
+15| | postStatements| Array|postposition SQL
+16| | json| String|customized configs|valid if customConfig=1
+17| | localParams| Array|customized parameters|valid if customConfig=1
+18|description | |String|description| |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch| |
+21| | successNode| Array|jump node if success| |
+22| | failedNode|Array|jump node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times| |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority| |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks| |
+
+
+
+**Node data example:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## Sqoop Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task ID|
+2|type ||String |task type|SQOOP
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| | concurrency| Int|concurrency rate
+6| | modelType|String |flow direction|import,export
+7| |sourceType|String |datasource type|
+8| |sourceParams |String|datasource parameters| JSON format
+9| | targetType|String |target datasource type
+10| |targetParams | String|target datasource parameters|JSON format
+11| |localParams |Array |customized local parameters
+12|description | |String|description| |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump node if success| |
+16| | failedNode|Array|jump node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times| |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{\"srcType\":\"MYSQL\",\"srcDatasource\":1,\"srcTable\":\"\",\"srcQueryType\":\"1\",\"srcQuerySql\":\"select id , name from user\",\"srcColumnType\":\"0\",\"srcColumns\":\"\",\"srcConditionList\":[],\"mapColumnHive\":[{\"prop\":\"hivetype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"hivetype-value\"}],\"mapColumnJava\":[{\"prop\":\"javatype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"javatype-value\"}]}",
+                "targetParams":"{\"targetPath\":\"/user/hive/warehouse/ods.db/user\",\"deleteTargetDir\":false,\"fileType\":\"--as-avrodatafile\",\"compressionCodec\":\"snappy\",\"fieldsTerminated\":\",\",\"linesTerminated\":\"@\"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+## Condition Branch Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type |CONDITIONS
+3| name| |String|task name |
+4| params| |Object|customized parameters | null
+5|description | |String|description| |
+6|runFlag | |String |execution flag| |
+7|conditionResult | |Object|condition branch | |
+8| | successNode| Array|jump to node if success| |
+9| | failedNode|Array|jump to node if failure| 
+10| dependence| |Object |task dependency |mutual exclusion with params
+11|maxRetryTimes | |String|max retry times | |
+12|retryInterval | |String |retry interval| |
+13|timeout | |Object|timeout | |
+14| taskInstancePriority| |String|task priority | |
+15|workerGroup | |String |Worker group| |
+16|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
+
+
+## Subprocess Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|SUB_PROCESS
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |processDefinitionId |Int| process definition ID
+6|description | |String|description | |
+7|runFlag | |String |execution flag| |
+8|conditionResult | |Object|condition branch | |
+9| | successNode| Array|jump to node if success| |
+10| | failedNode|Array|jump to node if failure| 
+11| dependence| |Object |task dependency |mutual exclusion with params
+12|maxRetryTimes | |String|max retry times| |
+13|retryInterval | |String |retry interval| |
+14|timeout | |Object|timeout| |
+15| taskInstancePriority| |String|task priority| |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## DEPENDENT Node
+**The node data structure is as follows:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|DEPENDENT
+3| name| |String|task name|
+4| params| |Object|customized parameters |Json format
+5| |rawScript |String|Shell script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description| |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14| | relation|String |relation|AND,OR
+15| | dependTaskList|Array |dependent task list|
+16|maxRetryTimes | |String|max retry times| |
+17|retryInterval | |String |retry interval| |
+18|timeout | |Object|timeout| |
+19| taskInstancePriority| |String|task priority| |
+20|workerGroup | |String |Worker group| |
+21|preTasks | |Array|preposition tasks| |
+
+
+**Node data example:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
diff --git a/docs/en-us/2.0.2/user_doc/dev_run.md b/docs/en-us/2.0.2/user_doc/dev_run.md
new file mode 100644
index 0000000..df823bd
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/dev_run.md
@@ -0,0 +1,142 @@
+## Development Environment Setup
+
+>    Reference: [Build the DolphinScheduler development environment locally on Windows](/zh-cn/blog/DS_run_in_windows.html)
+
+#### 1. Download the source code
+
+GitHub: https://github.com/apache/dolphinscheduler
+
+```shell
+mkdir dolphinscheduler
+cd dolphinscheduler
+git clone git@github.com:apache/dolphinscheduler.git
+```
+
+We use the dev branch here.
+
+#### 2. The zookeeper installation for Windows
+
+i. Download [zookeeper](https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz)
+
+ii. Unzip apache-zookeeper-3.6.3-bin.tar.gz
+
+iii. Create new zkData, zkLog folders in zk's directory.
+
+iv. Copy the zoo_sample.cfg file from the conf directory. Then rename it to zoo.cfg and change the configuration of the data and logs in it. For example: 
+
+```
+dataDir=D:\\code\\apache-zookeeper-3.6.3-bin\\zkData
+dataLogDir=D:\\code\\apache-zookeeper-3.6.3-bin\\zkLog
+```
+
+v. Run zkServer.cmd in the bin directory, and then run zkCli.cmd to check the running status of ZooKeeper. If you can view the zk node information, the installation is successful.
+
+#### 3. Set up the back-end
+
+i. Create a new database locally for debugging. DolphinScheduler supports both MySQL and PostgreSQL; here we use MySQL, and the database name can be `dolphinscheduler`. For example:
+
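+A minimal sketch of creating this database with the MySQL client (the credentials and character set are assumptions; adjust them to your local setup):
+
+```shell
+# create the local debugging database; adjust the user/password to your own MySQL
+mysql -uroot -p -e "CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;"
+```
+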
+ii. Import the code into IDEA, modify pom.xml in the root project, and change the scope of the mysql-connector-java dependency to compile.
+
+iii. Run `mvn -U install package -Prelease -Dmaven.test.skip=true` in the terminal to build the project and install the required registry plugins.
+
+iv. Modify the datasource.properties of the dolphinscheduler-dao module: 
+
+```properties
+# mysql
+spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+spring.datasource.url=jdbc:mysql://localhost:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+spring.datasource.username=root
+spring.datasource.password=123456
+```
+
+v. Refresh the dao module and run the main method of org.apache.dolphinscheduler.dao.upgrade.shell.CreateDolphinScheduler to automatically insert the tables and data required by the project. If you encounter problems such as missing database fields, you can try running the SQL file for your database under `dolphinscheduler\sql`.
+
+vi. Modify registry.properties in the dolphinscheduler-service module and worker.properties in the dolphinscheduler-server module respectively. Note: replace `1.3.6-SNAPSHOT` with the version actually generated by your build.
+
+```properties
+#registry.plugin.dir config the Registry Plugin dir.
+registry.plugin.dir=./dolphinscheduler-dist/target/dolphinscheduler-dist-1.3.6-SNAPSHOT/lib/plugin/registry/zookeeper
+
+registry.plugin.name=zookeeper
+registry.servers=127.0.0.1:2181
+```
+
+```properties
+#task.plugin.dir configures the Task Plugin dir. The WorkerServer finds and loads the Task Plugin jar from this dir when the WorkerServer is deployed and started on the server.
+task.plugin.dir=./dolphinscheduler-task-plugin/dolphinscheduler-task-shell/target/dolphinscheduler-task-shell-1.3.6-SNAPSHOT
+```
+
+vii. Add the console output to logback-worker.xml, logback-master.xml, and logback-api.xml.
+
+```xml
+<root level="INFO">
+    <appender-ref ref="STDOUT"/>  <!-- Add the console output -->
+    <appender-ref ref="APILOGFILE"/>
+    <appender-ref ref="SKYWALKING-LOG"/>
+</root>
+```
+
+viii. Start the MasterServer
+
+ Run the main method of org.apache.dolphinscheduler.server.master.MasterServer. You need to set the following VM options:
+
+```
+-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false
+```
+
+ix. Start the WorkerServer 
+
+Run the main method of org.apache.dolphinscheduler.server.worker.WorkerServer. You need to set the following VM options:
+
+```
+-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false
+```
+
+x. Start the ApiApplicationServer
+
+Run the main method of org.apache.dolphinscheduler.api.ApiApplicationServer. You need to set the following VM options:
+
+```
+-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api
+```
+
+xi. If you need to use the log function, execute the main method of org.apache.dolphinscheduler.server.log.LoggerServer.
+
+xii. Backend swagger address: http://localhost:12345/dolphinscheduler/doc.html?language=zh_CN&lang=cn
+
+#### 4. Set up the front-end
+
+i. Install node
+
+   a. Install nvm: `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash`
+
+   b. Refresh the environment variables: `source ~/.bash_profile`
+
+   c. Install node: `nvm install v12.20.2` (note: Mac users can also install npm through brew: `brew install npm`)
+
+   d. Validate the node installation: `node --version`
+
+ii. Enter the `dolphinscheduler-ui` directory and run the following commands:
+
+```shell
+npm install
+npm run start
+```
+
+iii. Visit [http://localhost:8888](http://localhost:8888/)
+
+iv. Sign in with the administrator account
+
+>    username: admin
+>
+>    password: dolphinscheduler123
diff --git a/docs/en-us/2.0.2/user_doc/expansion-reduction.md b/docs/en-us/2.0.2/user_doc/expansion-reduction.md
new file mode 100644
index 0000000..56c3578
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/expansion-reduction.md
@@ -0,0 +1,251 @@
+<!-- markdown-link-check-disable -->
+
+# DolphinScheduler Expansion and Reduction
+
+## 1. Expansion 
+This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.
+```
+ Attention: There cannot be more than one master service process or worker service process on a single physical machine.
+       If the physical machine hosting the new master or worker node already has the scheduler service installed, skip to [1.4 Modify configuration], edit the configuration file `conf/config/install_config.conf` on **all** nodes, add the masters or workers parameter, and restart the scheduling cluster.
+```
+
+### 1.1 Basic software installation (please install the mandatory items yourself)
+
+* [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): must be installed; please install it and configure the JAVA_HOME and PATH variables in /etc/profile (a sketch is shown after this list)
+* [optional] If the expansion is a worker node, consider whether to install an external client, such as the Hadoop, Hive, or Spark client.
+
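+A minimal sketch of the `/etc/profile` entries for the required JDK (the `/opt/soft/java` path matches the example used later in this document and is only an assumption):
+
+```shell
+# append to /etc/profile, then run: source /etc/profile
+export JAVA_HOME=/opt/soft/java
+export PATH=$JAVA_HOME/bin:$PATH
+```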
+
+```markdown
+ Attention: DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but will only call their Client for the corresponding task submission.
+```
+
+### 1.2 Get installation package
+- Check which version of DolphinScheduler is used in your existing environment and get the installation package of the corresponding version; if the versions differ, there may be compatibility problems.
+- Confirm the unified installation directory of the other nodes. This article assumes that DolphinScheduler is installed in the /opt/ directory, with the full path /opt/dolphinscheduler.
+- Please download the corresponding version of the installation package to the server installation directory, uncompress it, rename it to dolphinscheduler, and store it in the /opt directory.
+- Add the database dependency package. This article uses a MySQL database, so add the mysql-connector-java driver jar to the /opt/dolphinscheduler/lib directory (a sketch follows the commands below).
+```shell
+# create the installation directory, please do not create the installation directory in /root, /home and other high privilege directories 
+mkdir -p /opt
+cd /opt
+# decompress
+tar -zxvf apache-dolphinscheduler-2.0.2-bin.tar.gz -C /opt 
+cd /opt
+mv apache-dolphinscheduler-2.0.2-bin  dolphinscheduler
+```
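+
+A minimal sketch of adding the MySQL driver (the jar version is only an example; use the driver version that matches your MySQL server):
+
+```shell
+# copy the MySQL JDBC driver into the lib directory of the new node
+cp mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```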
+
+```markdown
+ Attention: The installation package can be copied directly from an existing environment to an expanded physical machine for use.
+```
+
+### 1.3 Create Deployment Users
+
+- Create deployment users on **all** expansion machines, and be sure to configure passwordless sudo. If we plan to deploy the scheduler on four expansion machines, ds1, ds2, ds3 and ds4, we first need to create a deployment user on each machine.
+
+```shell
+# to create a user, you need to log in with root and set the deployment user name, please modify it yourself, later take dolphinscheduler as an example
+useradd dolphinscheduler;
+
+# set the user password, please change it by yourself, later take dolphinscheduler123 as an example
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# configure sudo password-free
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```markdown
+ Attention:
+ - Because `sudo -u {linux-user}` is used to switch between Linux users to run multi-tenant jobs, the deployment user needs passwordless sudo privileges.
+ - If you find the line "Defaults    requiretty" in the /etc/sudoers file, please also comment it out.
+ - If resource uploads are used, you also need to assign read and write permissions to the deployment user on `HDFS or MinIO`.
+```
+
+### 1.4 Modify configuration
+
+- From an existing node such as Master/Worker, copy the conf directory directly to replace the conf directory in the new node. After copying, check if the configuration items are correct.
+    
+    ```markdown
+    Highlights:
+    datasource.properties: database connection information 
+    zookeeper.properties: information for connecting zk 
+    common.properties: Configuration information about the resource store (if hadoop is set up, please check if the core-site.xml and hdfs-site.xml configuration files exist).
+    env/dolphinscheduler_env.sh: environment variables
+    ```
+
+- Modify the `dolphinscheduler_env.sh` environment variable in the conf/env directory according to the machine configuration (take the example that the software used is installed in /opt/soft)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        # export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    
+    ```
+
+    `Attention: This step is very important. For example, JAVA_HOME and PATH must be configured, while variables that are not used can be ignored or commented out.`
+
+
+- Softlink the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - Modify the configuration file `conf/config/install_config.conf` on **all** nodes, keeping the following configuration in sync.
+    
+    * To add a new master node, you need to modify the ips and masters parameters.
+    * To add a new worker node, modify the ips and workers parameters.
+
+```shell
+# which machines to deploy DS services on, separated by commas between multiple physical machines
+ips="ds1,ds2,ds3,ds4"
+
+# ssh port,default 22
+sshPort="22"
+
+# which machine the master service is deployed on
+masters="existing master01,existing master02,ds1,ds2"
+
+# which machines the worker service is deployed on, and which worker group each worker belongs to; "default" in the following example is the group name
+workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
+
+```
+- If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual [5.7 Worker grouping](/en-us/docs/2.0.2/user_doc/system-manual.html)
+
+- On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory
+
+```shell
+sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
+```
+
+### 1.5 Restart the cluster & verify
+
+- restart the cluster
+
+```shell
+# stop command:
+
+bin/stop-all.sh # stop all services
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  # stop master service
+sh bin/dolphinscheduler-daemon.sh stop worker-server  # stop worker service
+sh bin/dolphinscheduler-daemon.sh stop logger-server  # stop logger service
+sh bin/dolphinscheduler-daemon.sh stop api-server     # stop api    service
+sh bin/dolphinscheduler-daemon.sh stop alert-server   # stop alert  service
+
+
+# start command:
+bin/start-all.sh # start all services
+
+sh bin/dolphinscheduler-daemon.sh start master-server  # start master service
+sh bin/dolphinscheduler-daemon.sh start worker-server  # start worker service
+sh bin/dolphinscheduler-daemon.sh start logger-server  # start logger service
+sh bin/dolphinscheduler-daemon.sh start api-server     # start api    service
+sh bin/dolphinscheduler-daemon.sh start alert-server   # start alert  service
+
+```
+
+```
+ Attention: When using start-all.sh or stop-all.sh, if the physical machine executing the command does not have passwordless SSH configured to all machines, it will prompt for the password
+```
+
+
+- After the script is completed, use the `jps` command to see if each node service is started (`jps` comes with the `Java JDK`)
+
+```
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    LoggerServer         ----- logger service
+    ApiApplicationServer ----- api    service
+    AlertServer          ----- alert  service
+```
+
+After successful startup, you can view the logs, which are stored in the logs folder.
+
+```
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+    └── dolphinscheduler-logger-server.log
+```
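+
+For example, to follow the master log in real time (a minimal sketch using the standard `tail` command):
+
+```shell
+tail -f logs/dolphinscheduler-master-server.log
+```
+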
+If the above services are started normally and the scheduling system page is normal, check whether there is an expanded Master or Worker service in the [Monitor] of the web system. If it exists, the expansion is complete.
+
+-----------------------------------------------------------------------------
+
+## 2. Reduction
+Reduction means removing master or worker services from an existing DolphinScheduler cluster.
+Scaling down takes two steps; after performing both of them, the reduction is complete.
+
+### 2.1 Stop the service on the scaled-down node
+ * If you are scaling down the master node, identify the physical machine where the master service is located, and stop the master service on the physical machine.
+ * If the worker node is scaled down, determine the physical machine where the worker service is to be scaled down and stop the worker and logger services on the physical machine.
+ 
+```shell
+# stop command:
+bin/stop-all.sh # stop all services
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  # stop master service
+sh bin/dolphinscheduler-daemon.sh stop worker-server  # stop worker service
+sh bin/dolphinscheduler-daemon.sh stop logger-server  # stop logger service
+sh bin/dolphinscheduler-daemon.sh stop api-server     # stop api    service
+sh bin/dolphinscheduler-daemon.sh stop alert-server   # stop alert  service
+
+
+# start command:
+bin/start-all.sh # start all services
+
+sh bin/dolphinscheduler-daemon.sh start master-server # start master service
+sh bin/dolphinscheduler-daemon.sh start worker-server # start worker service
+sh bin/dolphinscheduler-daemon.sh start logger-server # start logger service
+sh bin/dolphinscheduler-daemon.sh start api-server    # start api    service
+sh bin/dolphinscheduler-daemon.sh start alert-server  # start alert  service
+
+```
+
+```
+ Attention: When using start-all.sh or stop-all.sh, if the machine executing the command does not have passwordless SSH configured to all machines, it will prompt for the password.
+```
+
+- After the script is completed, use the `jps` command to see if each node service was successfully shut down (`jps` comes with the `Java JDK`)
+
+```
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    LoggerServer         ----- logger service
+    ApiApplicationServer ----- api    service
+    AlertServer          ----- alert  service
+```
+If the corresponding master service or worker service does not exist, then the master/worker service is successfully shut down.
+
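+A minimal sketch of checking this from the shell (assuming `jps` is on the PATH; no output means both processes are gone on this machine):
+
+```shell
+jps | grep -E "MasterServer|WorkerServer"
+```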
+
+### 2.2 Modify the configuration file
+
+ - Modify the configuration file `conf/config/install_config.conf` on **all** nodes, keeping the following configuration in sync.
+
+    * To scale down the master node, modify the ips and masters parameters.
+    * To scale down worker nodes, modify the ips and workers parameters.
+
+```shell
+# which machines to deploy DS services on, "localhost" for this machine
+ips="ds1,ds2,ds3,ds4"
+
+# ssh port,default: 22
+sshPort="22"
+
+# which machine the master service is deployed on
+masters="existing master01,existing master02,ds1,ds2"
+
+# which machines the worker service is deployed on, and which worker group each worker belongs to; "default" in the following example is the group name
+workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
+
+```
diff --git a/docs/en-us/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md b/docs/en-us/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md
new file mode 100644
index 0000000..53e0ef1
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md
@@ -0,0 +1,12 @@
+## How to create alert plugins and alert groups
+
+In version 2.0.2, users need to create alert instances and then associate them with alert groups. An alert group can use multiple alert instances, and each instance will be notified in turn.
+
+First, go to the Security Center and select Alarm Group Management, then click Alarm Instance Management on the left. Create an alarm instance, select the corresponding alarm plugin, and fill in the relevant alarm parameters.
+
+Then select Alarm Group Management, create an alarm group, and select the corresponding alarm instance.
+
+<img src="/img/alert/alert_step_1.png">
+<img src="/img/alert/alert_step_2.png">
+<img src="/img/alert/alert_step_3.png">
+<img src="/img/alert/alert_step_4.png">
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/alert/enterprise-wechat.md b/docs/en-us/2.0.2/user_doc/guide/alert/enterprise-wechat.md
new file mode 100644
index 0000000..39919a3
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/alert/enterprise-wechat.md
@@ -0,0 +1,11 @@
+# Enterprise WeChat
+
+If you need to use Enterprise WeChat for alerts, create an alarm instance in the warning instance management page and choose the WeChat plugin. An example Enterprise WeChat configuration is shown below:
+
+![enterprise-wechat-plugin](/img/alert/enterprise-wechat-plugin.png)
+
+The send type corresponds to APP and APPCHAT respectively:
+APP: https://work.weixin.qq.com/api/doc/90000/90135/90236
+APPCHAT: https://work.weixin.qq.com/api/doc/90000/90135/90248
+
+user.send.msg corresponds to the message content described in the documents above; the variable that carries the alert content is {msg}.
diff --git a/docs/en-us/2.0.2/user_doc/guide/datasource/hive.md b/docs/en-us/2.0.2/user_doc/guide/datasource/hive.md
new file mode 100644
index 0000000..38ccd95
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/datasource/hive.md
@@ -0,0 +1,38 @@
+# HIVE
+
+## Use HiveServer2
+
+ <p align="center">
+    <img src="/img/hive-en.png" width="80%" />
+  </p>
+
+- Data source: select HIVE
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Host Name: Enter the IP connected to HIVE
+- Port: Enter the port connected to HIVE
+- Username: Set the username for connecting to HIVE
+- Password: Set the password for connecting to HIVE
+- Database name: Enter the name of the database connected to HIVE
+- Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON
+
+## Use HiveServer2 HA Zookeeper
+
+ <p align="center">
+    <img src="/img/hive1-en.png" width="80%" />
+  </p>
+Note: If Kerberos is disabled, make sure the parameter `hadoop.security.authentication.startup.state` is set to `false` and the parameter `java.security.krb5.conf.path` is null or empty. If **Kerberos** is enabled, configure the following parameters in `common.properties`:
+
+```conf
+# whether to startup kerberos
+hadoop.security.authentication.startup.state=true
+
+# java.security.krb5.conf path
+java.security.krb5.conf.path=/opt/krb5.conf
+
+# login user from keytab username
+login.user.keytab.username=hdfs-mycluster@ESZ.COM
+
+# login user from keytab path
+login.user.keytab.path=/opt/hdfs.headless.keytab
+```
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/datasource/introduction.md b/docs/en-us/2.0.2/user_doc/guide/datasource/introduction.md
new file mode 100644
index 0000000..c112812
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/datasource/introduction.md
@@ -0,0 +1,7 @@
+
+# Data Source
+
+The data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, ORACLE, SQLSERVER and other data sources.
+
+- Click "Data Source Center -> Create Data Source" to create different types of data sources according to requirements.
+- Click "Test Connection" to test whether the data source can be successfully connected.
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/datasource/mysql.md b/docs/en-us/2.0.2/user_doc/guide/datasource/mysql.md
new file mode 100644
index 0000000..7807a00
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/datasource/mysql.md
@@ -0,0 +1,16 @@
+# MySQL
+
+
+- Data source: select MYSQL
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Host Name: Enter the IP to connect to MySQL
+- Port: Enter the port to connect to MySQL
+- Username: Set the username for connecting to MySQL
+- Password: Set the password for connecting to MySQL
+- Database name: Enter the name of the database connected to MySQL
+- Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON
+
+<p align="center">
+   <img src="/img/mysql-en.png" width="80%" />
+ </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/datasource/postgresql.md b/docs/en-us/2.0.2/user_doc/guide/datasource/postgresql.md
new file mode 100644
index 0000000..77a4fd7
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/datasource/postgresql.md
@@ -0,0 +1,15 @@
+# POSTGRESQL
+
+- Data source: select POSTGRESQL
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Host Name: Enter the IP to connect to POSTGRESQL
+- Port: Enter the port to connect to POSTGRESQL
+- Username: Set the username for connecting to POSTGRESQL
+- Password: Set the password for connecting to POSTGRESQL
+- Database name: Enter the name of the database connected to POSTGRESQL
+- Jdbc connection parameters: parameter settings for the POSTGRESQL connection, filled in as JSON
+
+<p align="center">
+   <img src="/img/postgresql-en.png" width="80%" />
+ </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/datasource/spark.md b/docs/en-us/2.0.2/user_doc/guide/datasource/spark.md
new file mode 100644
index 0000000..ebdff80
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/datasource/spark.md
@@ -0,0 +1,15 @@
+# Spark
+
+<p align="center">
+   <img src="/img/spark-en.png" width="80%" />
+ </p>
+
+- Data source: select Spark
+- Data source name: enter the name of the data source
+- Description: Enter a description of the data source
+- IP/Hostname: Enter the IP connected to Spark
+- Port: Enter the port connected to Spark
+- Username: Set the username for connecting to Spark
+- Password: Set the password for connecting to Spark
+- Database name: Enter the name of the database connected to Spark
+- Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON
diff --git a/docs/en-us/2.0.2/user_doc/guide/flink-call.md b/docs/en-us/2.0.2/user_doc/guide/flink-call.md
new file mode 100644
index 0000000..2b86d7c
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/flink-call.md
@@ -0,0 +1,152 @@
+# Flink Call Operating Steps
+
+### Create a queue
+
+1. Log in to the scheduling system, click "Security", then click "Queue manage" on the left, and click "Create queue" to create a queue.
+2. Fill in the name and value of the queue, and click "Submit" 
+
+<p align="center">
+   <img src="/img/api/create_queue.png" width="80%" />
+ </p>
+
+
+
+
+### Create a tenant 
+
+```
+1. A tenant corresponds to a Linux user, which the worker uses to submit jobs. If this user does not exist in the Linux OS environment, the worker creates it when executing the script.
+2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and an ID number.
+3. After creating a tenant, a folder will be created in the relevant HDFS directory.
+```
+
+<p align="center">
+   <img src="/img/api/create_tenant.png" width="80%" />
+ </p>
+
+
+
+
+### Create a user
+
+<p align="center">
+   <img src="/img/api/create_user.png" width="80%" />
+ </p>
+
+
+
+
+### Create a token
+
+1. Log in to the scheduling system, click "Security", then click "Token manage" on the left, and click "Create token" to create a token.
+
+<p align="center">
+   <img src="/img/token-management-en.png" width="80%" />
+ </p>
+
+
+2. Select the "Expiration time" (Token validity), select "User" (to perform the API operation with the specified user), click "Generate token", copy the Token string, and click "Submit"
+
+<p align="center">
+   <img src="/img/create-token-en1.png" width="80%" />
+ </p>
+
+
+### Use token
+
+1. Open the API documentation page
+
+   > Address: http://{api server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
+
+<p align="center">
+   <img src="/img/api-documentation-en.png" width="80%" />
+ </p>
+
+
+2. Select a test API. The API selected for this test is queryAllProjectList:
+
+   > projects/query-project-list
+
+3. Open Postman, fill in the API address, enter the Token in the Headers, and then send the request to view the result (an equivalent `curl` call is sketched after the screenshot below).
+
+   ```
+   token: The Token just generated
+   ```
+
+<p align="center">
+   <img src="/img/test-api.png" width="80%" />
+ </p>  
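+
+For reference, the same request can be issued from the command line. A minimal `curl` sketch using the endpoint listed above ({token} and {api server ip} are placeholders for your own values):
+
+```shell
+curl -H "token: {token}" \
+  "http://{api server ip}:12345/dolphinscheduler/projects/query-project-list"
+```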
+
+
+
+### User authorization
+
+<p align="center">
+   <img src="/img/api/user_authorization.png" width="80%" />
+ </p>
+
+
+
+
+### User login
+
+```
+http://192.168.1.163:12345/dolphinscheduler/ui/#/monitor/servers/master
+```
+
+<p align="center">
+   <img src="/img/api/user_login.png" width="80%" />
+ </p>
+
+
+
+
+### Upload the resource
+
+<p align="center">
+   <img src="/img/api/upload_resource.png" width="80%" />
+ </p>
+
+
+
+
+### Create a workflow
+
+<p align="center">
+   <img src="/img/api/create_workflow1.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow2.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow3.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow4.png" width="80%" />
+ </p>
+
+
+
+
+### View the execution result
+
+<p align="center">
+   <img src="/img/api/execution_result.png" width="80%" />
+ </p>
+
+
+
+
+### View log
+
+<p align="center">
+   <img src="/img/api/log.png" width="80%" />
+ </p>
+
diff --git a/docs/en-us/2.0.2/user_doc/guide/homepage.md b/docs/en-us/2.0.2/user_doc/guide/homepage.md
new file mode 100644
index 0000000..285f7eb
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/homepage.md
@@ -0,0 +1,7 @@
+# Workflow Overview
+
+The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.
+
+<p align="center">
+<img src="/img/home_en.png" width="80%" />
+</p>
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/cluster.md b/docs/en-us/2.0.2/user_doc/guide/installation/cluster.md
new file mode 100644
index 0000000..be179f8
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/cluster.md
@@ -0,0 +1,36 @@
+# Cluster Deployment
+
+Cluster deployment means deploying DolphinScheduler on multiple machines to run a large number of tasks in production.
+
+If you are new to DolphinScheduler and just want to try it out, we recommend the [Standalone](standalone.md) installation. If you want to experience more complete functions or schedule a larger number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to use DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md) deployment.
+
+## Deployment Step
+
+Cluster deployment uses the same scripts and configuration files as the [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and requirements are the same. The difference is that the pseudo-cluster deployment targets one machine while cluster deployment targets multiple machines, and the "Modify configuration" step differs significantly between the two.
+
+### Prepare && DolphinScheduler startup environment
+
+Because cluster deployment targets multiple machines, you have to repeat the "Prepare" and "startup environment" steps from [pseudo-cluster deployment](pseudo-cluster.md) on every machine, except for the sections "Configure machine SSH password-free login", "Start zookeeper" and "Initialize the database", which are only needed on the deployment machine or a single server.
+
+### Modify configuration
+
+This step is quite different from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the resources required for installation to each deployment machine using `scp`, and we have to declare every machine on which we want to install DolphinScheduler before running the script `install.sh` (a minimal sketch of running the script follows the configuration below). The configuration file is under the path `conf/config/install_config.conf`; here we only need to modify the sections **INSTALL MACHINE**, **DolphinScheduler ENV, Database, Regi [...]
+
+```shell
+# ---------------------------------------------------------
+# INSTALL MACHINE
+# ---------------------------------------------------------
+# Use the IP or hostname of the servers that will run the master, worker and API server
+# If you use hostnames, make sure the machines can reach each other by hostname
+# In the example below, the machines deploying DolphinScheduler are ds1, ds2, ds3, ds4 and ds5: ds1 and ds2 run the master server, ds3, ds4 and ds5 run the worker server, the alert server is installed on ds4, and the api server is installed on ds5
+ips="ds1,ds2,ds3,ds4,ds5"
+masters="ds1,ds2"
+workers="ds3:default,ds4:default,ds5:default"
+alertServer="ds4"
+apiServers="ds5"
+pythonGatewayServers="ds5"
+```
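+
+After the configuration has been synchronized, deployment is triggered the same way as in the pseudo-cluster guide; a minimal sketch (run as the deployment user from the directory where the binary package was extracted):
+
+```shell
+sh install.sh
+```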
+
+## Start DolphinScheduler && Login DolphinScheduler && Server Start And Stop
+
+Same as [pseudo-cluster deployment](pseudo-cluster.md)
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/docker.md b/docs/en-us/2.0.2/user_doc/guide/installation/docker.md
new file mode 100644
index 0000000..feb923c
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/docker.md
@@ -0,0 +1,1043 @@
+# QuickStart in Docker
+
+## Prerequisites
+
+ - [Docker](https://docs.docker.com/engine/install/) 1.13.1+
+ - [Docker Compose](https://docs.docker.com/compose/) 1.11.0+
+
+## How to use this Docker image
+
+Here are 3 ways to quickly install DolphinScheduler:
+
+### The First Way: Start a DolphinScheduler by docker-compose (recommended)
+
+In this way, you need to install [docker-compose](https://docs.docker.com/compose/) as a prerequisite; please install it yourself following one of the docker-compose installation guides available on the Internet.
+
+For Windows 7-10, you can install [Docker Toolbox](https://github.com/docker/toolbox/releases). For Windows 10 64-bit, you can install [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), and pay attention to the [system requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements)
+
+#### 0. Configure memory not less than 4GB
+
+For Mac user, click `Docker Desktop -> Preferences -> Resources -> Memory`
+
+For Windows Docker Toolbox user, two items need to be configured:
+
+ - **Memory**: Open Oracle VirtualBox Manager, if you double-click Docker Quickstart Terminal and successfully run Docker Toolbox, you will see a Virtual Machine named `default`. And click `Settings -> System -> Motherboard -> Base Memory`
+ - **Port Forwarding**: Click `Settings -> Network -> Advanced -> Port forwarding -> Add`. `Name`, `Host Port` and `Guest Port` all fill in `12345`, regardless of `Host IP` and `Guest IP`
+
+For Windows Docker Desktop user
+ - **Hyper-V mode**: Click `Docker Desktop -> Settings -> Resources -> Memory`
+ - **WSL 2 mode**: Refer to [WSL 2 utility VM](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
+
+#### 1. Download the Source Code Package
+
+Please download the source code package apache-dolphinscheduler-2.0.2-src.tar.gz, download address: [download](/en-us/download/download.html)
+
+#### 2. Pull Image and Start the Service
+
+> For Mac and Linux user, open **Terminal**
+> For Windows Docker Toolbox user, open **Docker Quickstart Terminal**
+> For Windows Docker Desktop user, open **Windows PowerShell**
+
+```
+$ tar -zxvf apache-dolphinscheduler-2.0.2-src.tar.gz
+$ cd apache-dolphinscheduler-2.0.2-src/docker/docker-swarm
+$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+$ docker tag apache/dolphinscheduler:2.0.2 apache/dolphinscheduler:latest
+$ docker-compose up -d
+```
+
+> PowerShell should use `cd apache-dolphinscheduler-2.0.2-src\docker\docker-swarm`
+
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default
+
+#### 3. Login
+
+Visit the Web UI: http://192.168.xx.xx:12345/dolphinscheduler (The local address is http://127.0.0.1:12345/dolphinscheduler)
+
+The default username is `admin` and the default password is `dolphinscheduler123`
+
+<p align="center">
+  <img src="/img/login_en.png" width="60%" />
+</p>
+
+Please refer to the `Quick Start` in the chapter [User Manual](/en-us/docs/2.0.2/user_doc/guide/quick-start.html) to explore how to use DolphinScheduler
+
+### The Second Way: Start via specifying the existing PostgreSQL and ZooKeeper service
+
+In this way, you need to install [docker](https://docs.docker.com/engine/install/) as a prerequisite; please install it yourself following one of the docker installation guides available on the Internet.
+
+#### 1. Basic Required Software (please install by yourself)
+
+ - PostgreSQL (8.2.15+)
+ - ZooKeeper (3.4.6+)
+ - Docker (1.13.1+)
+
+#### 2. Please log in to the PostgreSQL database and create a database named `dolphinscheduler`
+
+#### 3. Initialize the database, import `sql/dolphinscheduler_postgre.sql` to create tables and initial data
+
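+A minimal sketch of steps 2 and 3 using the standard PostgreSQL client tools (the host and user are placeholders for your own environment; the SQL file is the one mentioned above, shipped in the source package):
+
+```shell
+# create the database
+createdb -h 192.168.x.x -U test dolphinscheduler
+# import tables and initial data
+psql -h 192.168.x.x -U test -d dolphinscheduler -f sql/dolphinscheduler_postgre.sql
+```
+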
+#### 4. Download the DolphinScheduler Image
+
+We have already uploaded user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:
+
+```
+docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+```
+
+#### 5. Run a DolphinScheduler Instance
+
+```
+$ docker run -d --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-p 12345:12345 \
+apache/dolphinscheduler:2.0.2 all
+```
+
+Note: the database username test and password test need to be replaced with your actual PostgreSQL username and password, and 192.168.x.x needs to be replaced with the host IPs of your PostgreSQL and ZooKeeper services
+
+#### 6. Login
+
+Same as above
+
+### The Third Way: Start a standalone DolphinScheduler server
+
+The following services are automatically started when the container starts:
+
+```
+     MasterServer         ----- master service
+     WorkerServer         ----- worker service
+     LoggerServer         ----- logger service
+     ApiApplicationServer ----- api service
+     AlertServer          ----- alert service
+     PythonGatewayServer  ----- python gateway service
+```
+
+If you just want to run part of the services in DolphinScheduler, you can start them individually by running the following commands.
+
+* Start a **master server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-master \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+apache/dolphinscheduler:2.0.2 master-server
+```
+
+* Start a **worker server** (including **logger server**), For example:
+
+```
+$ docker run -d --name dolphinscheduler-worker \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+apache/dolphinscheduler:2.0.2 worker-server
+```
+
+* Start an **api server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-api \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-p 12345:12345 \
+apache/dolphinscheduler:2.0.2 api-server
+```
+
+* Start an **alert server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-alert \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+apache/dolphinscheduler:2.0.2 alert-server
+```
+
+* Start a **python gateway server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-python-gateway \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+apache/dolphinscheduler:2.0.2 python-gateway
+```
+
+**Note**: You must specify `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD`, `ZOOKEEPER_QUORUM` when starting a standalone dolphinscheduler server.
+
+## Environment Variables
+
+The Docker container is configured through environment variables, and the [Appendix-Environment Variables](#appendix-environment-variables) lists the configurable environment variables of the DolphinScheduler and their default values
+
+In particular, they can be configured through the environment variable configuration file `config.env.sh` when using Docker Compose and Docker Swarm
+
+## Support Matrix
+
+| Type                                                         | Support      | Notes                                 |
+| ------------------------------------------------------------ | ------------ | ------------------------------------- |
+| Shell                                                        | Yes          |                                       |
+| Python2                                                      | Yes          |                                       |
+| Python3                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop2                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop3                                                      | Not Sure     | Not tested                            |
+| Spark-Local(client)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-YARN(cluster)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-Standalone(cluster)                                    | Not Yet      |                                       |
+| Spark-Kubernetes(cluster)                                    | Not Yet      |                                       |
+| Flink-Local(local>=1.11)                                     | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-YARN(yarn-cluster)                                     | Indirect Yes | Refer to FAQ                          |
+| Flink-YARN(yarn-session/yarn-per-job/yarn-application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Standalone(default)                                    | Not Yet      |                                       |
+| Flink-Standalone(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Kubernetes(default)                                    | Not Yet      |                                       |
+| Flink-Kubernetes(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-NativeKubernetes(kubernetes-session/application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| MapReduce                                                    | Indirect Yes | Refer to FAQ                          |
+| Kerberos                                                     | Indirect Yes | Refer to FAQ                          |
+| HTTP                                                         | Yes          |                                       |
+| DataX                                                        | Indirect Yes | Refer to FAQ                          |
+| Sqoop                                                        | Indirect Yes | Refer to FAQ                          |
+| SQL-MySQL                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-PostgreSQL                                               | Yes          |                                       |
+| SQL-Hive                                                     | Indirect Yes | Refer to FAQ                          |
+| SQL-Spark                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-ClickHouse                                               | Indirect Yes | Refer to FAQ                          |
+| SQL-Oracle                                                   | Indirect Yes | Refer to FAQ                          |
+| SQL-SQLServer                                                | Indirect Yes | Refer to FAQ                          |
+| SQL-DB2                                                      | Indirect Yes | Refer to FAQ                          |
+
+## FAQ
+
+### How to manage DolphinScheduler by docker-compose?
+
+Start, restart, stop or list containers:
+
+```
+docker-compose start
+docker-compose restart
+docker-compose stop
+docker-compose ps
+```
+
+Stop containers and remove containers, networks:
+
+```
+docker-compose down
+```
+
+Stop containers and remove containers, networks and volumes:
+
+```
+docker-compose down -v
+```
+
+### How to view the logs of a container?
+
+List all running containers:
+
+```
+docker ps
+docker ps --format "{{.Names}}" # only print names
+```
+
+View the logs of a container named docker-swarm_dolphinscheduler-api_1:
+
+```
+docker logs docker-swarm_dolphinscheduler-api_1
+docker logs -f docker-swarm_dolphinscheduler-api_1 # follow log output
+docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # show last 10 lines from the end of the logs
+```
+
+### How to scale master and worker by docker-compose?
+
+Scale master to 2 instances:
+
+```
+docker-compose up -d --scale dolphinscheduler-master=2 dolphinscheduler-master
+```
+
+Scale worker to 3 instances:
+
+```
+docker-compose up -d --scale dolphinscheduler-worker=3 dolphinscheduler-worker
+```
+
+### How to deploy DolphinScheduler on Docker Swarm?
+
+Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to [create-swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/))
+
+Start a stack named dolphinscheduler:
+
+```
+docker stack deploy -c docker-stack.yml dolphinscheduler
+```
+
+List the services in the stack named dolphinscheduler:
+
+```
+docker stack services dolphinscheduler
+```
+
+Stop and remove the stack named dolphinscheduler:
+
+```
+docker stack rm dolphinscheduler
+```
+
+Remove the volumes of the stack named dolphinscheduler:
+
+```
+docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e "^dolphinscheduler")
+```
+
+### How to scale master and worker on Docker Swarm?
+
+Scale master of the stack named dolphinscheduler to 2 instances:
+
+```
+docker service scale dolphinscheduler_dolphinscheduler-master=2
+```
+
+Scale worker of the stack named dolphinscheduler to 3 instances:
+
+```
+docker service scale dolphinscheduler_dolphinscheduler-worker=3
+```
+
+### How to build a Docker image?
+
+#### Build from the source code (Require Maven 3.3+ & JDK 1.8+)
+
+In Unix-Like, execute in Terminal:
+
+```bash
+$ bash ./docker/build/hooks/build
+```
+
+In Windows, execute in cmd or PowerShell:
+
+```bat
+C:\dolphinscheduler-src>.\docker\build\hooks\build.bat
+```
+
+Please read the `./docker/build/hooks/build` and `./docker/build/hooks/build.bat` script files if you want to understand what they do
+
+#### Build from the binary distribution (Not require Maven 3.3+ & JDK 1.8+)
+
+Please download the binary distribution package apache-dolphinscheduler-2.0.2-bin.tar.gz, download address: [download](/en-us/download/download.html). And put apache-dolphinscheduler-2.0.2-bin.tar.gz into the `apache-dolphinscheduler-2.0.2-src/docker/build` directory, execute in Terminal or PowerShell:
+
+```
+$ cd apache-dolphinscheduler-2.0.2-src/docker/build
+$ docker build --build-arg VERSION=2.0.2 -t apache/dolphinscheduler:2.0.2 .
+```
+
+> PowerShell should use `cd apache-dolphinscheduler-2.0.2-src/docker/build`
+
+#### Build multi-platform images
+
+Building images for the `linux/amd64` and `linux/arm64` platform architectures is currently supported, with the following requirements:
+
+1. Support [docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/)
+2. Own the push permission of https://hub.docker.com/r/apache/dolphinscheduler (**Be cautious**: The build command will automatically push the multi-platform architecture images to the docker hub of apache/dolphinscheduler by default)
+
+Execute:
+
+```bash
+$ docker login # login to push apache/dolphinscheduler
+$ bash ./docker/build/hooks/build
+```
+
+### How to add an environment variable for Docker?
+
+If you would like to do additional initialization in an image derived from this one, add one or more environment variables under `/root/start-init-conf.sh`, and modify template files in `/opt/dolphinscheduler/conf/*.tpl`.
+
+For example, to add an environment variable `SECURITY_AUTHENTICATION_TYPE` in `/root/start-init-conf.sh`:
+
+```
+export SECURITY_AUTHENTICATION_TYPE=PASSWORD
+```
+
+and modify the `application-api.properties.tpl` template file to add the `SECURITY_AUTHENTICATION_TYPE`:
+```
+security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}
+```
+
+`/root/start-init-conf.sh` will dynamically generate config file:
+
+```sh
+echo "generate dolphinscheduler config"
+ls ${DOLPHINSCHEDULER_HOME}/conf/ | grep ".tpl" | while read line; do
+eval "cat << EOF
+$(cat ${DOLPHINSCHEDULER_HOME}/conf/${line})
+EOF
+" > ${DOLPHINSCHEDULER_HOME}/conf/${line%.*}
+done
+```
+
+### How to use MySQL as the DolphinScheduler's database instead of PostgreSQL?
+
+> Because of the commercial license, we cannot directly use the driver of MySQL.
+>
+> If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+5. Comment the `dolphinscheduler-postgresql` block in `docker-compose.yml`
+
+6. Add a `dolphinscheduler-mysql` service in `docker-compose.yml` (**Optional**: you can instead use an external MySQL database; a sketch of running one with Docker follows this list)
+
+7. Modify DATABASE environment variables in `config.env.sh`
+
+```
+DATABASE_TYPE=mysql
+DATABASE_DRIVER=com.mysql.jdbc.Driver
+DATABASE_HOST=dolphinscheduler-mysql
+DATABASE_PORT=3306
+DATABASE_USERNAME=root
+DATABASE_PASSWORD=root
+DATABASE_DATABASE=dolphinscheduler
+DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
+```
+
+> If you have added `dolphinscheduler-mysql` service in `docker-compose.yml`, just set `DATABASE_HOST` to `dolphinscheduler-mysql`
+
+8. Run a dolphinscheduler (See **How to use this docker image**)
+
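+If you choose the external MySQL option from step 6, a minimal sketch of running one with the official `mysql` image (the image tag and credentials are assumptions and must match the `DATABASE_*` values in `config.env.sh`):
+
+```shell
+docker run -d --name dolphinscheduler-mysql \
+  -e MYSQL_ROOT_PASSWORD=root \
+  -e MYSQL_DATABASE=dolphinscheduler \
+  -p 3306:3306 \
+  mysql:5.7
+```
+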
+### How to support MySQL datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot directly use the driver of MySQL.
+>
+> If you want to add MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+5. Run a dolphinscheduler (See **How to use this docker image**)
+
+6. Add a MySQL datasource in `Datasource manage`
+
+### How to support Oracle datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot directly use the driver of Oracle.
+>
+> If you want to add Oracle datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`)
+
+2. Create a new `Dockerfile` to add Oracle driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including Oracle driver:
+
+```
+docker build -t apache/dolphinscheduler:oracle-driver .
+```
+
+4. Modify all `image` fields to `apache/dolphinscheduler:oracle-driver` in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+5. Run a dolphinscheduler (See **How to use this docker image**)
+
+6. Add an Oracle datasource in `Datasource manage`
+
+### How to support Python 2 pip and custom requirements.txt?
+
+1. Create a new `Dockerfile` to install pip:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY requirements.txt /tmp
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python-pip && \
+    pip install --no-cache-dir -r /tmp/requirements.txt && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **pip 18.1**. If you want to upgrade pip, just add one line:
+
+```
+    pip install --no-cache-dir -U pip && \
+```
+
+2. Build a new docker image including pip:
+
+```
+docker build -t apache/dolphinscheduler:pip .
+```
+
+3. Modify all `image` fields to `apache/dolphinscheduler:pip` in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+4. Run a dolphinscheduler (See **How to use this docker image**)
+
+5. Verify pip under a new Python task
+
+### How to support Python 3?
+
+1. Create a new `Dockerfile` to install Python 3:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python3 && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like
+
+```
+    apt-get install -y --no-install-recommends python3-pip && \
+```
+
+2. Build a new docker image including Python 3:
+
+```
+docker build -t apache/dolphinscheduler:python3 .
+```
+
+3. Modify all `image` fields to `apache/dolphinscheduler:python3` in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+4. Modify `PYTHON_HOME` to `/usr/bin/python3` in `config.env.sh`
+
+5. Run a dolphinscheduler (See **How to use this docker image**)
+
+6. Verify Python 3 under a new Python task
+
+### How to support Hadoop, Spark, Flink, Hive or DataX?
+
+Take Spark 2.4.7 as an example:
+
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`
+
+2. Run a dolphinscheduler (See **How to use this docker image**)
+
+3. Copy the Spark 2.4.7 release binary into Docker container
+
+```bash
+docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
+```
+
+Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
+
+4. Attach the container and ensure that `SPARK_HOME2` exists
+
+```bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
+cd /opt/soft
+tar zxf spark-2.4.7-bin-hadoop2.7.tgz
+rm -f spark-2.4.7-bin-hadoop2.7.tgz
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well
+
+5. Verify Spark under a Shell task
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`
+
+6. Verify Spark under a Spark task
+
+The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first, and then create a Spark task with:
+
+- Spark Version: `SPARK2`
+- Main Class: `org.apache.spark.examples.SparkPi`
+- Main Package: `spark-examples_2.11-2.4.7.jar`
+- Deploy Mode: `local`
+
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`
+
+7. Verify Spark on YARN
+
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps
+
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exist
+
+### How to support Spark 3?
+
+In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`
+
+Take Spark 3.1.1 as an example:
+
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`
+
+2. Run a dolphinscheduler (See **How to use this docker image**)
+
+3. Copy the Spark 3.1.1 release binary into Docker container
+
+```bash
+docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
+```
+
+4. Attach the container and ensure that `SPARK_HOME2` exists
+
+```bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
+cd /opt/soft
+tar zxf spark-3.1.1-bin-hadoop2.7.tgz
+rm -f spark-3.1.1-bin-hadoop2.7.tgz
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well
+
+5. Verify Spark under a Shell task
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`
+
+### How to support shared storage between Master, Worker and Api server?
+
+> **Note**: If it is deployed on a single machine by `docker-compose`, step 1 and 2 can be skipped directly, and execute the command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container
+
+For example, Master, Worker and Api server may use Hadoop at the same time
+
+1. Modify the volume `dolphinscheduler-shared-local` to support NFS in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+```yaml
+volumes:
+  dolphinscheduler-shared-local:
+    driver_opts:
+      type: "nfs"
+      o: "addr=10.40.0.199,nolock,soft,rw"
+      device: ":/path/to/shared/dir"
+```
+
+2. Put the Hadoop into the NFS
+
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
+
+### How to support local file resource storage instead of HDFS and S3?
+
+> **Note**: If it is deployed on a single machine by `docker-compose`, step 2 can be skipped directly
+
+1. Modify the following environment variables in `config.env.sh`:
+
+```
+RESOURCE_STORAGE_TYPE=HDFS
+FS_DEFAULT_FS=file:///
+```
+
+2. Modify the volume `dolphinscheduler-resource-local` to support NFS in `docker-compose.yml`
+
+> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+```yaml
+volumes:
+  dolphinscheduler-resource-local:
+    driver_opts:
+      type: "nfs"
+      o: "addr=10.40.0.199,nolock,soft,rw"
+      device: ":/path/to/resource/dir"
+```
+
+### How to support S3 resource storage like MinIO?
+
+Take MinIO as an example: Modify the following environment variables in `config.env.sh`
+
+```
+RESOURCE_STORAGE_TYPE=S3
+RESOURCE_UPLOAD_PATH=/dolphinscheduler
+FS_DEFAULT_FS=s3a://BUCKET_NAME
+FS_S3A_ENDPOINT=http://MINIO_IP:9000
+FS_S3A_ACCESS_KEY=MINIO_ACCESS_KEY
+FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
+```
+
+`BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` need to be modified to actual values
+
+> **Note**: `MINIO_IP` can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access
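+
+If the S3 configuration does not seem to take effect, a quick connectivity check against MinIO can help narrow things down; the sketch below assumes MinIO's standard health endpoint and that `curl` is available on the host:
+
+```bash
+curl -I http://MINIO_IP:9000/minio/health/live
+```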
+
+### How to configure SkyWalking?
+
+Modify SkyWalking environment variables in `config.env.sh`:
+
+```
+SKYWALKING_ENABLE=true
+SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
+SW_GRPC_LOG_SERVER_HOST=127.0.0.1
+SW_GRPC_LOG_SERVER_PORT=11800
+```
+
+## Appendix-Environment Variables
+
+### Database
+
+**`DATABASE_TYPE`**
+
+This environment variable sets the type for the database. The default value is `postgresql`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_DRIVER`**
+
+This environment variable sets the driver for the database. The default value is `org.postgresql.Driver`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_HOST`**
+
+This environment variable sets the host for the database. The default value is `127.0.0.1`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PORT`**
+
+This environment variable sets the port for the database. The default value is `5432`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_USERNAME`**
+
+This environment variable sets the username for the database. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PASSWORD`**
+
+This environment variable sets the password for the database. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_DATABASE`**
+
+This environment variable sets the database name for the database. The default value is `dolphinscheduler`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PARAMS`**
+
+This environment variable sets the connection parameters for the database. The default value is `characterEncoding=utf8`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
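+As an illustration, pointing the containers at an external PostgreSQL instance could look like the following sketch in `config.env.sh` (the host, credentials and database name below are placeholders):
+
+```
+DATABASE_TYPE=postgresql
+DATABASE_DRIVER=org.postgresql.Driver
+DATABASE_HOST=192.168.xx.xx
+DATABASE_PORT=5432
+DATABASE_USERNAME=dolphinscheduler
+DATABASE_PASSWORD=dolphinscheduler
+DATABASE_DATABASE=dolphinscheduler
+DATABASE_PARAMS=characterEncoding=utf8
+```
+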
+### ZooKeeper
+
+**`ZOOKEEPER_QUORUM`**
+
+This environment variable sets zookeeper quorum. The default value is `127.0.0.1:2181`.
+
+**Note**: You must specify it when starting a standalone dolphinscheduler server, such as `master-server`, `worker-server` or `api-server`.
+
+**`ZOOKEEPER_ROOT`**
+
+This environment variable sets zookeeper root directory for dolphinscheduler. The default value is `/dolphinscheduler`.
+
+### Common
+
+**`DOLPHINSCHEDULER_OPTS`**
+
+This environment variable sets JVM options for dolphinscheduler, suitable for `master-server`, `worker-server`, `api-server`, `alert-server`, `logger-server`. The default value is empty.
+
+**`DATA_BASEDIR_PATH`**
+
+User data directory path; configure it yourself and make sure the directory exists and has read-write permissions. The default value is `/tmp/dolphinscheduler`.
+
+**`RESOURCE_STORAGE_TYPE`**
+
+This environment variable sets resource storage types for dolphinscheduler like `HDFS`, `S3`, `NONE`. The default value is `HDFS`.
+
+**`RESOURCE_UPLOAD_PATH`**
+
+This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is `/dolphinscheduler`.
+
+**`FS_DEFAULT_FS`**
+
+This environment variable sets fs.defaultFS for resource storage like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`. The default value is `file:///`.
+
+**`FS_S3A_ENDPOINT`**
+
+This environment variable sets s3 endpoint for resource storage. The default value is `s3.xxx.amazonaws.com`.
+
+**`FS_S3A_ACCESS_KEY`**
+
+This environment variable sets s3 access key for resource storage. The default value is `xxxxxxx`.
+
+**`FS_S3A_SECRET_KEY`**
+
+This environment variable sets s3 secret key for resource storage. The default value is `xxxxxxx`.
+
+**`HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE`**
+
+This environment variable sets whether to start up Kerberos. The default value is `false`.
+
+**`JAVA_SECURITY_KRB5_CONF_PATH`**
+
+This environment variable sets java.security.krb5.conf path. The default value is `/opt/krb5.conf`.
+
+**`LOGIN_USER_KEYTAB_USERNAME`**
+
+This environment variable sets the keytab username for the login user. The default value is `hdfs@HADOOP.COM`.
+
+**`LOGIN_USER_KEYTAB_PATH`**
+
+This environment variable sets the keytab path for the login user. The default value is `/opt/hdfs.keytab`.
+
+**`KERBEROS_EXPIRE_TIME`**
+
+This environment variable sets the Kerberos expire time in hours. The default value is `2`.
+
+**`HDFS_ROOT_USER`**
+
+This environment variable sets HDFS root user when resource.storage.type=HDFS. The default value is `hdfs`.
+
+**`RESOURCE_MANAGER_HTTPADDRESS_PORT`**
+
+This environment variable sets resource manager HTTP address port. The default value is `8088`.
+
+**`YARN_RESOURCEMANAGER_HA_RM_IDS`**
+
+This environment variable sets yarn resourcemanager ha rm ids. The default value is empty.
+
+**`YARN_APPLICATION_STATUS_ADDRESS`**
+
+This environment variable sets yarn application status address. The default value is `http://ds1:%s/ws/v1/cluster/apps/%s`.
+
+**`SKYWALKING_ENABLE`**
+
+This environment variable sets whether to enable SkyWalking. The default value is `false`.
+
+**`SW_AGENT_COLLECTOR_BACKEND_SERVICES`**
+
+This environment variable sets agent collector backend services for SkyWalking. The default value is `127.0.0.1:11800`.
+
+**`SW_GRPC_LOG_SERVER_HOST`**
+
+This environment variable sets gRPC log server host for SkyWalking. The default value is `127.0.0.1`.
+
+**`SW_GRPC_LOG_SERVER_PORT`**
+
+This environment variable sets gRPC log server port for SkyWalking. The default value is `11800`.
+
+**`HADOOP_HOME`**
+
+This environment variable sets `HADOOP_HOME`. The default value is `/opt/soft/hadoop`.
+
+**`HADOOP_CONF_DIR`**
+
+This environment variable sets `HADOOP_CONF_DIR`. The default value is `/opt/soft/hadoop/etc/hadoop`.
+
+**`SPARK_HOME1`**
+
+This environment variable sets `SPARK_HOME1`. The default value is `/opt/soft/spark1`.
+
+**`SPARK_HOME2`**
+
+This environment variable sets `SPARK_HOME2`. The default value is `/opt/soft/spark2`.
+
+**`PYTHON_HOME`**
+
+This environment variable sets `PYTHON_HOME`. The default value is `/usr/bin/python`.
+
+**`JAVA_HOME`**
+
+This environment variable sets `JAVA_HOME`. The default value is `/usr/local/openjdk-8`.
+
+**`HIVE_HOME`**
+
+This environment variable sets `HIVE_HOME`. The default value is `/opt/soft/hive`.
+
+**`FLINK_HOME`**
+
+This environment variable sets `FLINK_HOME`. The default value is `/opt/soft/flink`.
+
+**`DATAX_HOME`**
+
+This environment variable sets `DATAX_HOME`. The default value is `/opt/soft/datax`.
+
+### Master Server
+
+**`MASTER_SERVER_OPTS`**
+
+This environment variable sets JVM options for `master-server`. The default value is `-Xms1g -Xmx1g -Xmn512m`.
+
+**`MASTER_EXEC_THREADS`**
+
+This environment variable sets exec thread number for `master-server`. The default value is `100`.
+
+**`MASTER_EXEC_TASK_NUM`**
+
+This environment variable sets exec task number for `master-server`. The default value is `20`.
+
+**`MASTER_DISPATCH_TASK_NUM`**
+
+This environment variable sets dispatch task number for `master-server`. The default value is `3`.
+
+**`MASTER_HOST_SELECTOR`**
+
+This environment variable sets host selector for `master-server`. Optional values include `Random`, `RoundRobin` and `LowerWeight`. The default value is `LowerWeight`.
+
+**`MASTER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets heartbeat interval for `master-server`. The default value is `10`.
+
+**`MASTER_TASK_COMMIT_RETRYTIMES`**
+
+This environment variable sets task commit retry times for `master-server`. The default value is `5`.
+
+**`MASTER_TASK_COMMIT_INTERVAL`**
+
+This environment variable sets task commit interval for `master-server`. The default value is `1`.
+
+**`MASTER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max CPU load avg for `master-server`. The default value is `-1`.
+
+**`MASTER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `master-server`, the unit is G. The default value is `0.3`.
+
+### Worker Server
+
+**`WORKER_SERVER_OPTS`**
+
+This environment variable sets JVM options for `worker-server`. The default value is `-Xms1g -Xmx1g -Xmn512m`.
+
+**`WORKER_EXEC_THREADS`**
+
+This environment variable sets exec thread number for `worker-server`. The default value is `100`.
+
+**`WORKER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets heartbeat interval for `worker-server`. The default value is `10`.
+
+**`WORKER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max CPU load avg for `worker-server`. The default value is `-1`.
+
+**`WORKER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `worker-server`, the unit is G. The default value is `0.3`.
+
+**`WORKER_GROUPS`**
+
+This environment variable sets groups for `worker-server`. The default value is `default`.
+
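+As an illustration, a few of these master and worker values could be overridden together in `config.env.sh` like the sketch below (the numbers and the extra worker group are placeholders, not tuning recommendations):
+
+```
+MASTER_EXEC_THREADS=100
+MASTER_MAX_CPULOAD_AVG=-1
+WORKER_EXEC_THREADS=100
+WORKER_GROUPS=default,test
+```
+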
+### Alert Server
+
+**`ALERT_SERVER_OPTS`**
+
+This environment variable sets JVM options for `alert-server`. The default value is `-Xms512m -Xmx512m -Xmn256m`.
+
+**`XLS_FILE_PATH`**
+
+This environment variable sets xls file path for `alert-server`. The default value is `/tmp/xls`.
+
+**`MAIL_SERVER_HOST`**
+
+This environment variable sets mail server host for `alert-server`. The default value is empty.
+
+**`MAIL_SERVER_PORT`**
+
+This environment variable sets mail server port for `alert-server`. The default value is empty.
+
+**`MAIL_SENDER`**
+
+This environment variable sets mail sender for `alert-server`. The default value is empty.
+
+**`MAIL_USER`**
+
+This environment variable sets mail user for `alert-server`. The default value is empty.
+
+**`MAIL_PASSWD`**
+
+This environment variable sets mail password for `alert-server`. The default value is empty.
+
+**`MAIL_SMTP_STARTTLS_ENABLE`**
+
+This environment variable sets SMTP tls for `alert-server`. The default value is `true`.
+
+**`MAIL_SMTP_SSL_ENABLE`**
+
+This environment variable sets SMTP ssl for `alert-server`. The default value is `false`.
+
+**`MAIL_SMTP_SSL_TRUST`**
+
+This environment variable sets SMTP SSL trust for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_ENABLE`**
+
+This environment variable sets whether to enable Enterprise WeChat for `alert-server`. The default value is `false`.
+
+**`ENTERPRISE_WECHAT_CORP_ID`**
+
+This environment variable sets enterprise wechat corp id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_SECRET`**
+
+This environment variable sets enterprise wechat secret for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_AGENT_ID`**
+
+This environment variable sets enterprise wechat agent id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_USERS`**
+
+This environment variable sets enterprise wechat users for `alert-server`. The default value is empty.
+
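+A minimal outgoing-mail sketch in `config.env.sh` might look like the following (server, sender and credentials are placeholders; enable either STARTTLS or SSL according to your mail provider):
+
+```
+MAIL_SERVER_HOST=smtp.example.com
+MAIL_SERVER_PORT=25
+MAIL_SENDER=dolphinscheduler@example.com
+MAIL_USER=dolphinscheduler@example.com
+MAIL_PASSWD=xxxxxx
+MAIL_SMTP_STARTTLS_ENABLE=true
+MAIL_SMTP_SSL_ENABLE=false
+MAIL_SMTP_SSL_TRUST=smtp.example.com
+```
+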
+### Api Server
+
+**`API_SERVER_OPTS`**
+
+This environment variable sets JVM options for `api-server`. The default value is `-Xms512m -Xmx512m -Xmn256m`.
+
+### Logger Server
+
+**`LOGGER_SERVER_OPTS`**
+
+This environment variable sets JVM options for `logger-server`. The default value is `-Xms512m -Xmx512m -Xmn256m`.
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/hardware.md b/docs/en-us/2.0.2/user_doc/guide/installation/hardware.md
new file mode 100644
index 0000000..0c5df7f
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/hardware.md
@@ -0,0 +1,47 @@
+# Hardware Environment
+
+DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
+
+## 1. Linux Operating System Version Requirements
+
+| OS       | Version         |
+| :----------------------- | :----------: |
+| Red Hat Enterprise Linux | 7.0 and above   |
+| CentOS                   | 7.0 and above   |
+| Oracle Enterprise Linux  | 7.0 and above   |
+| Ubuntu LTS               | 16.04 and above |
+
+> **Attention:**
+>The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
+
+## 2. Recommended Server Configuration
+DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
+### Production Environment
+
+| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
+| --- | --- | --- | --- | --- |
+| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
+
+> **Attention:**
+> - The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.
+> - A hard disk of more than 50 GB is recommended, with the system disk and data disk separated.
+
+
+## 3. Network Requirements
+
+DolphinScheduler provides the following network port configurations for normal operation:
+
+| Server | Port | Desc |
+|  --- | --- | --- |
+| MasterServer |  5678  | Not a communication port; only requires that the local port does not conflict |
+| WorkerServer | 1234  | Not a communication port; only requires that the local port does not conflict |
+| ApiApplicationServer |  12345 | Backend communication port |
+
+> **Attention:**
+> - MasterServer and WorkerServer do not need network access to each other; it is enough that their local ports do not conflict.
+> - Administrators can adjust relevant ports on the network side and host-side according to the deployment plan of DolphinScheduler components in the actual environment.
+
+## 4. Browser Requirements
+
+DolphinScheduler recommends Chrome or the latest browsers based on the Chrome kernel to access the front-end visual operation page.
+
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/kubernetes.md b/docs/en-us/2.0.2/user_doc/guide/installation/kubernetes.md
new file mode 100644
index 0000000..bb9070e
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/kubernetes.md
@@ -0,0 +1,755 @@
+# QuickStart in Kubernetes
+
+Kubernetes deployment means deploying DolphinScheduler in a Kubernetes cluster, which can schedule a large number of tasks and can be used in production.
+
+If you are new to DolphinScheduler and just want to try it out, we recommend the [Standalone](standalone.md) deployment. If you want to experience more complete functions or schedule a larger number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to use DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or this [Kubernetes deployment](kubernetes.md).
+
+## Prerequisites
+
+ - [Helm](https://helm.sh/) 3.1.0+
+ - [Kubernetes](https://kubernetes.io/) 1.12+
+ - PV provisioner support in the underlying infrastructure
+
+## Installing the Chart
+
+Please download the source code package `apache-dolphinscheduler-2.0.2-src.tar.gz` from the [download page](/en-us/download/download.html).
+
+To install the chart with the release name `dolphinscheduler`, please execute the following commands:
+
+```
+$ tar -zxvf apache-dolphinscheduler-2.0.2-src.tar.gz
+$ cd apache-dolphinscheduler-2.0.2-src/docker/kubernetes/dolphinscheduler
+$ helm repo add bitnami https://charts.bitnami.com/bitnami
+$ helm dependency update .
+$ helm install dolphinscheduler . --set image.tag=2.0.2
+```
+
+To install the chart with a namespace named `test`:
+
+```bash
+$ helm install dolphinscheduler . -n test
+```
+
+> **Tip**: If a namespace named `test` is used, the option `-n test` needs to be added to the `helm` and `kubectl` commands
+
+These commands deploy DolphinScheduler on the Kubernetes cluster in the default configuration. The [Appendix-Configuration](#appendix-configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default
+
+## Access DolphinScheduler UI
+
+If `ingress.enabled` in `values.yaml` is set to `true`, you can simply access `http://${ingress.host}/dolphinscheduler` in your browser.
+
+> **Tip**: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+
+Otherwise, when `api.service.type=ClusterIP`, you need to execute a port-forward command like:
+
+```bash
+$ kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
+$ kubectl port-forward --address 0.0.0.0 -n test svc/dolphinscheduler-api 12345:12345 # with test namespace
+```
+
+> **Tip**: If the error `unable to do port forwarding: socat not found` appears, you need to install `socat` first
+
+And then access the web: http://192.168.xx.xx:12345/dolphinscheduler (The local address is http://127.0.0.1:12345/dolphinscheduler)
+
+Or when `api.service.type=NodePort` you need to execute the command:
+
+```bash
+NODE_IP=$(kubectl get no -n {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+NODE_PORT=$(kubectl get svc {{ template "dolphinscheduler.fullname" . }}-api -n {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}")
+echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
+```
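+
+The snippet above contains Helm template placeholders; with the release name `dolphinscheduler` used in this guide and the `default` namespace, it resolves to roughly the following (a sketch, adjust the namespace to your own):
+
+```bash
+NODE_IP=$(kubectl get no -n default -o jsonpath="{.items[0].status.addresses[0].address}")
+NODE_PORT=$(kubectl get svc dolphinscheduler-api -n default -o jsonpath="{.spec.ports[0].nodePort}")
+echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
+```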
+
+And then access the web: http://$NODE_IP:$NODE_PORT/dolphinscheduler
+
+The default username is `admin` and the default password is `dolphinscheduler123`
+
+Please refer to the `Quick Start` in the chapter [User Manual](/en-us/docs/2.0.2/user_doc/guide/quick-start.html) to explore how to use DolphinScheduler
+
+## Uninstalling the Chart
+
+To uninstall/delete the `dolphinscheduler` deployment:
+
+```bash
+$ helm uninstall dolphinscheduler
+```
+
+The command removes all the Kubernetes components associated with the chart (except the PVCs) and deletes the release.
+
+To delete the PVCs associated with `dolphinscheduler`:
+
+```bash
+$ kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler
+```
+
+> **Note**: Deleting the PVCs will delete all data as well. Please be cautious before doing it.
+
+## Configuration
+
+The configuration file is `values.yaml`, and the [Appendix-Configuration](#appendix-configuration) table lists the configurable parameters of DolphinScheduler and their default values.
+
+## Support Matrix
+
+| Type                                                         | Support      | Notes                                 |
+| ------------------------------------------------------------ | ------------ | ------------------------------------- |
+| Shell                                                        | Yes          |                                       |
+| Python2                                                      | Yes          |                                       |
+| Python3                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop2                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop3                                                      | Not Sure     | Not tested                            |
+| Spark-Local(client)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-YARN(cluster)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-Standalone(cluster)                                    | Not Yet      |                                       |
+| Spark-Kubernetes(cluster)                                    | Not Yet      |                                       |
+| Flink-Local(local>=1.11)                                     | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-YARN(yarn-cluster)                                     | Indirect Yes | Refer to FAQ                          |
+| Flink-YARN(yarn-session/yarn-per-job/yarn-application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Standalone(default)                                    | Not Yet      |                                       |
+| Flink-Standalone(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Kubernetes(default)                                    | Not Yet      |                                       |
+| Flink-Kubernetes(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-NativeKubernetes(kubernetes-session/application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| MapReduce                                                    | Indirect Yes | Refer to FAQ                          |
+| Kerberos                                                     | Indirect Yes | Refer to FAQ                          |
+| HTTP                                                         | Yes          |                                       |
+| DataX                                                        | Indirect Yes | Refer to FAQ                          |
+| Sqoop                                                        | Indirect Yes | Refer to FAQ                          |
+| SQL-MySQL                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-PostgreSQL                                               | Yes          |                                       |
+| SQL-Hive                                                     | Indirect Yes | Refer to FAQ                          |
+| SQL-Spark                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-ClickHouse                                               | Indirect Yes | Refer to FAQ                          |
+| SQL-Oracle                                                   | Indirect Yes | Refer to FAQ                          |
+| SQL-SQLServer                                                | Indirect Yes | Refer to FAQ                          |
+| SQL-DB2                                                      | Indirect Yes | Refer to FAQ                          |
+
+## FAQ
+
+### How to view the logs of a pod container?
+
+List all pods (aka `po`):
+
+```
+kubectl get po
+kubectl get po -n test # with test namespace
+```
+
+View the logs of a pod container named dolphinscheduler-master-0:
+
+```
+kubectl logs dolphinscheduler-master-0
+kubectl logs -f dolphinscheduler-master-0 # follow log output
+kubectl logs --tail 10 dolphinscheduler-master-0 -n test # show last 10 lines from the end of the logs
+```
+
+### How to scale api, master and worker on Kubernetes?
+
+List all deployments (aka `deploy`):
+
+```
+kubectl get deploy
+kubectl get deploy -n test # with test namespace
+```
+
+Scale api to 3 replicas:
+
+```
+kubectl scale --replicas=3 deploy dolphinscheduler-api
+kubectl scale --replicas=3 deploy dolphinscheduler-api -n test # with test namespace
+```
+
+List all stateful sets (aka `sts`):
+
+```
+kubectl get sts
+kubectl get sts -n test # with test namespace
+```
+
+Scale master to 2 replicas:
+
+```
+kubectl scale --replicas=2 sts dolphinscheduler-master
+kubectl scale --replicas=2 sts dolphinscheduler-master -n test # with test namespace
+```
+
+Scale worker to 6 replicas:
+
+```
+kubectl scale --replicas=6 sts dolphinscheduler-worker
+kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test namespace
+```
+
+### How to use MySQL as the DolphinScheduler's database instead of PostgreSQL?
+
+> Because of licensing restrictions, we cannot bundle the MySQL driver directly.
+>
+> If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry
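+
+For example (the registry address below is only a placeholder; replace it with your own registry):
+
+```bash
+docker tag apache/dolphinscheduler:mysql-driver registry.example.com/apache/dolphinscheduler:mysql-driver
+docker push registry.example.com/apache/dolphinscheduler:mysql-driver
+```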
+
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`
+
+6. Modify postgresql `enabled` to `false` in `values.yaml`
+
+7. Modify externalDatabase (especially modify `host`, `username` and `password`) in `values.yaml`:
+
+```yaml
+externalDatabase:
+  type: "mysql"
+  driver: "com.mysql.jdbc.Driver"
+  host: "localhost"
+  port: "3306"
+  username: "root"
+  password: "root"
+  database: "dolphinscheduler"
+  params: "useUnicode=true&characterEncoding=UTF-8"
+```
+
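+Alternatively, steps 5, 6 and 7 can be expressed as command-line overrides instead of editing `values.yaml`; the sketch below uses the parameter names from the Appendix-Configuration table, with placeholder values (the registry address is hypothetical). `externalDatabase.params` can be passed the same way if needed; quote the value because of the `&`:
+
+```bash
+helm install dolphinscheduler . \
+  --set image.repository=registry.example.com/apache/dolphinscheduler \
+  --set image.tag=mysql-driver \
+  --set postgresql.enabled=false \
+  --set externalDatabase.type=mysql \
+  --set externalDatabase.driver=com.mysql.jdbc.Driver \
+  --set externalDatabase.host=mysql-host \
+  --set-string externalDatabase.port=3306 \
+  --set externalDatabase.username=root \
+  --set externalDatabase.password=root \
+  --set externalDatabase.database=dolphinscheduler
+```
+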
+8. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+### How to support MySQL datasource in `Datasource manage`?
+
+> Because of licensing restrictions, we cannot bundle the MySQL driver directly.
+>
+> If you want to add MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry
+
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`
+
+6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+7. Add a MySQL datasource in `Datasource manage`
+
+### How to support Oracle datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot bundle the Oracle driver directly.
+>
+> If you want to add Oracle datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
+
+1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`)
+
+2. Create a new `Dockerfile` to add Oracle driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including Oracle driver:
+
+```
+docker build -t apache/dolphinscheduler:oracle-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:oracle-driver` to a docker registry
+
+5. Modify image `repository` and update `tag` to `oracle-driver` in `values.yaml`
+
+6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+7. Add an Oracle datasource in `Datasource manage`
+
+### How to support Python 2 pip and custom requirements.txt?
+
+1. Create a new `Dockerfile` to install pip:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+COPY requirements.txt /tmp
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python-pip && \
+    pip install --no-cache-dir -r /tmp/requirements.txt && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **pip 18.1**. If you want to upgrade pip, just add one more line
+
+```
+    pip install --no-cache-dir -U pip && \
+```
+
+2. Build a new docker image including pip:
+
+```
+docker build -t apache/dolphinscheduler:pip .
+```
+
+3. Push the docker image `apache/dolphinscheduler:pip` to a docker registry
+
+4. Modify image `repository` and update `tag` to `pip` in `values.yaml`
+
+5. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+6. Verify pip under a new Python task
+
+### How to support Python 3?
+
+1. Create a new `Dockerfile` to install Python 3:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.2
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python3 && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like
+
+```
+    apt-get install -y --no-install-recommends python3-pip && \
+```
+
+2. Build a new docker image including Python 3:
+
+```
+docker build -t apache/dolphinscheduler:python3 .
+```
+
+3. Push the docker image `apache/dolphinscheduler:python3` to a docker registry
+
+4. Modify image `repository` and update `tag` to `python3` in `values.yaml`
+
+5. Modify `PYTHON_HOME` to `/usr/bin/python3` in `values.yaml`
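+
+With the Helm chart this maps to `common.configmap.PYTHON_HOME` (see the Appendix-Configuration table), so as an alternative to editing `values.yaml` it can also be overridden at install time, for example together with the image overrides from the previous steps:
+
+```bash
+helm install dolphinscheduler . --set common.configmap.PYTHON_HOME=/usr/bin/python3
+```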
+
+6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+7. Verify Python 3 under a new Python task
+
+### How to support Hadoop, Spark, Flink, Hive or DataX?
+
+Take Spark 2.4.7 as an example:
+
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`
+
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on
+
+3. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+4. Copy the Spark 2.4.7 release binary into the Docker container
+
+```bash
+kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
+kubectl cp -n test spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
+```
+
+Because the volume `sharedStoragePersistence` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
+
+5. Attach the container and ensure that `SPARK_HOME2` exists
+
+```bash
+kubectl exec -it dolphinscheduler-worker-0 bash
+kubectl exec -n test -it dolphinscheduler-worker-0 bash # with test namespace
+cd /opt/soft
+tar zxf spark-2.4.7-bin-hadoop2.7.tgz
+rm -f spark-2.4.7-bin-hadoop2.7.tgz
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well
+
+6. Verify Spark under a Shell task
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`
+
+7. Verify Spark under a Spark task
+
+The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first, and then create a Spark task with:
+
+- Spark Version: `SPARK2`
+- Main Class: `org.apache.spark.examples.SparkPi`
+- Main Package: `spark-examples_2.11-2.4.7.jar`
+- Deploy Mode: `local`
+
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`
+
+8. Verify Spark on YARN
+
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Adding Hadoop support follows almost the same steps as the Spark support above
+
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exist
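+
+As a quick check (assuming the worker pod name used in the steps above), the relevant environment can be inspected with:
+
+```bash
+kubectl exec -it dolphinscheduler-worker-0 -- bash -c 'echo $HADOOP_HOME && ls $HADOOP_CONF_DIR'
+```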
+
+### How to support Spark 3?
+
+In fact, the way to submit applications with `spark-submit` is the same for Spark 1, 2 and 3. In other words, `SPARK_HOME2` means the second `SPARK_HOME`, not the `HOME` of Spark 2, so just set `SPARK_HOME2=/path/to/spark3`
+
+Take Spark 3.1.1 as an example:
+
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`
+
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on
+
+3. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+
+4. Copy the Spark 3.1.1 release binary into the Docker container
+
+```bash
+kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
+kubectl cp -n test spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
+```
+
+5. Attach the container and ensure that `SPARK_HOME2` exists
+
+```bash
+kubectl exec -it dolphinscheduler-worker-0 bash
+kubectl exec -n test -it dolphinscheduler-worker-0 bash # with test namespace
+cd /opt/soft
+tar zxf spark-3.1.1-bin-hadoop2.7.tgz
+rm -f spark-3.1.1-bin-hadoop2.7.tgz
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well
+
+6. Verify Spark under a Shell task
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`
+
+### How to support shared storage between Master, Worker and Api server?
+
+For example, Master, Worker and API server may use Hadoop at the same time
+
+1. Modify the following configurations in `values.yaml`
+
+```yaml
+common:
+  sharedStoragePersistence:
+    enabled: true
+    mountPath: "/opt/soft"
+    accessModes:
+    - "ReadWriteMany"
+    storageClassName: "-"
+    storage: "20Gi"
+```
+
+`storageClassName` and `storage` need to be modified to actual values
+
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`
+
+2. Copy the Hadoop into the directory `/opt/soft`
+
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
+
+### How to support local file resource storage instead of HDFS and S3?
+
+Modify the following configurations in `values.yaml`
+
+```yaml
+common:
+  configmap:
+    RESOURCE_STORAGE_TYPE: "HDFS"
+    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
+    FS_DEFAULT_FS: "file:///"
+  fsFileResourcePersistence:
+    enabled: true
+    accessModes:
+    - "ReadWriteMany"
+    storageClassName: "-"
+    storage: "20Gi"
+```
+
+`storageClassName` and `storage` need to be modified to actual values
+
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`
+
+### How to support S3 resource storage like MinIO?
+
+Take MinIO as an example: Modify the following configurations in `values.yaml`
+
+```yaml
+common:
+  configmap:
+    RESOURCE_STORAGE_TYPE: "S3"
+    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
+    FS_DEFAULT_FS: "s3a://BUCKET_NAME"
+    FS_S3A_ENDPOINT: "http://MINIO_IP:9000"
+    FS_S3A_ACCESS_KEY: "MINIO_ACCESS_KEY"
+    FS_S3A_SECRET_KEY: "MINIO_SECRET_KEY"
+```
+
+`BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` need to be modified to actual values
+
+> **Note**: `MINIO_IP` can only use IP instead of domain name, because DolphinScheduler currently doesn't support S3 path style access
+
+### How to configure SkyWalking?
+
+Modify SKYWALKING configurations in `values.yaml`:
+
+```yaml
+common:
+  configmap:
+    SKYWALKING_ENABLE: "true"
+    SW_AGENT_COLLECTOR_BACKEND_SERVICES: "127.0.0.1:11800"
+    SW_GRPC_LOG_SERVER_HOST: "127.0.0.1"
+    SW_GRPC_LOG_SERVER_PORT: "11800"
+```
+
+## Appendix-Configuration
+
+| Parameter                                                                         | Description                                                                                                                    | Default                                               |
+| --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `timezone`                                                                        | World time and date for cities in all time zones                                                                               | `Asia/Shanghai`                                       |
+|                                                                                   |                                                                                                                                |                                                       |
+| `image.repository`                                                                | Docker image repository for the DolphinScheduler                                                                               | `apache/dolphinscheduler`                             |
+| `image.tag`                                                                       | Docker image version for the DolphinScheduler                                                                                  | `latest`                                              |
+| `image.pullPolicy`                                                                | Image pull policy. One of Always, Never, IfNotPresent                                                                          | `IfNotPresent`                                        |
+| `image.pullSecret`                                                                | Image pull secret. An optional reference to secret in the same namespace to use for pulling any of the images                  | `nil`                                                 |
+|                                                                                   |                                                                                                                                |                                                       |
+| `postgresql.enabled`                                                              | If there is no external PostgreSQL, DolphinScheduler will use an internal PostgreSQL by default                                | `true`                                                |
+| `postgresql.postgresqlUsername`                                                   | The username for internal PostgreSQL                                                                                           | `root`                                                |
+| `postgresql.postgresqlPassword`                                                   | The password for internal PostgreSQL                                                                                           | `root`                                                |
+| `postgresql.postgresqlDatabase`                                                   | The database for internal PostgreSQL                                                                                           | `dolphinscheduler`                                    |
+| `postgresql.persistence.enabled`                                                  | Set `postgresql.persistence.enabled` to `true` to mount a new volume for internal PostgreSQL                                   | `false`                                               |
+| `postgresql.persistence.size`                                                     | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `postgresql.persistence.storageClass`                                             | PostgreSQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning      | `-`                                                   |
+| `externalDatabase.type`                                                           | The type of the external database, used when `postgresql.enabled` is set to `false`                                           | `postgresql`                                          |
+| `externalDatabase.driver`                                                         | The driver of the external database, used when `postgresql.enabled` is set to `false`                                         | `org.postgresql.Driver`                               |
+| `externalDatabase.host`                                                           | The host of the external database, used when `postgresql.enabled` is set to `false`                                           | `localhost`                                           |
+| `externalDatabase.port`                                                           | The port of the external database, used when `postgresql.enabled` is set to `false`                                           | `5432`                                                |
+| `externalDatabase.username`                                                       | The username of the external database, used when `postgresql.enabled` is set to `false`                                       | `root`                                                |
+| `externalDatabase.password`                                                       | The password of the external database, used when `postgresql.enabled` is set to `false`                                       | `root`                                                |
+| `externalDatabase.database`                                                       | The database name of the external database, used when `postgresql.enabled` is set to `false`                                  | `dolphinscheduler`                                    |
+| `externalDatabase.params`                                                         | The connection parameters of the external database, used when `postgresql.enabled` is set to `false`                          | `characterEncoding=utf8`                              |
+|                                                                                   |                                                                                                                                |                                                       |
+| `zookeeper.enabled`                                                               | If there is no external ZooKeeper, DolphinScheduler will use an internal ZooKeeper by default                                  | `true`                                                |
+| `zookeeper.fourlwCommandsWhitelist`                                               | A list of comma separated Four Letter Words commands to use                                                                    | `srvr,ruok,wchs,cons`                                 |
+| `zookeeper.persistence.enabled`                                                   | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for internal Zookeeper                                     | `false`                                               |
+| `zookeeper.persistence.size`                                                      | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `zookeeper.persistence.storageClass`                                              | Zookeeper data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning       | `-`                                                   |
+| `zookeeper.zookeeperRoot`                                                         | Specify dolphinscheduler root directory in Zookeeper                                                                           | `/dolphinscheduler`                                   |
+| `externalZookeeper.zookeeperQuorum`                                               | The ZooKeeper quorum of the external ZooKeeper, used when `zookeeper.enabled` is set to `false`                                | `127.0.0.1:2181`                                      |
+| `externalZookeeper.zookeeperRoot`                                                 | The dolphinscheduler root directory in the external ZooKeeper, used when `zookeeper.enabled` is set to `false`                 | `/dolphinscheduler`                                   |
+|                                                                                   |                                                                                                                                |                                                       |
+| `common.configmap.DOLPHINSCHEDULER_OPTS`                                          | The jvm options for dolphinscheduler, suitable for all servers                                                                 | `""`                                                  |
+| `common.configmap.DATA_BASEDIR_PATH`                                              | User data directory path, self configuration, please make sure the directory exists and has read-write permissions             | `/tmp/dolphinscheduler`                               |
+| `common.configmap.RESOURCE_STORAGE_TYPE`                                          | Resource storage type: HDFS, S3, NONE                                                                                          | `HDFS`                                                |
+| `common.configmap.RESOURCE_UPLOAD_PATH`                                           | Resource store on HDFS/S3 path, please make sure the directory exists on hdfs and have read write permissions                  | `/dolphinscheduler`                                   |
+| `common.configmap.FS_DEFAULT_FS`                                                  | Resource storage file system like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`                              | `file:///`                                            |
+| `common.configmap.FS_S3A_ENDPOINT`                                                | S3 endpoint when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                       | `s3.xxx.amazonaws.com`                                |
+| `common.configmap.FS_S3A_ACCESS_KEY`                                              | S3 access key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                     | `xxxxxxx`                                             |
+| `common.configmap.FS_S3A_SECRET_KEY`                                              | S3 secret key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                     | `xxxxxxx`                                             |
+| `common.configmap.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE`                   | Whether to startup kerberos                                                                                                    | `false`                                               |
+| `common.configmap.JAVA_SECURITY_KRB5_CONF_PATH`                                   | The java.security.krb5.conf path                                                                                               | `/opt/krb5.conf`                                      |
+| `common.configmap.LOGIN_USER_KEYTAB_USERNAME`                                     | The login user from keytab username                                                                                            | `hdfs@HADOOP.COM`                                     |
+| `common.configmap.LOGIN_USER_KEYTAB_PATH`                                         | The login user from keytab path                                                                                                | `/opt/hdfs.keytab`                                    |
+| `common.configmap.KERBEROS_EXPIRE_TIME`                                           | The kerberos expire time, the unit is hour                                                                                     | `2`                                                   |
+| `common.configmap.HDFS_ROOT_USER`                                                 | The HDFS root user who must have the permission to create directories under the HDFS root path                                 | `hdfs`                                                |
+| `common.configmap.RESOURCE_MANAGER_HTTPADDRESS_PORT`                              | Set resource manager httpaddress port for yarn                                                                                 | `8088`                                                |
+| `common.configmap.YARN_RESOURCEMANAGER_HA_RM_IDS`                                 | If resourcemanager HA is enabled, please set the HA IPs                                                                        | `nil`                                                 |
+| `common.configmap.YARN_APPLICATION_STATUS_ADDRESS`                                | If the resourcemanager is single, you only need to replace `ds1` with the actual resourcemanager hostname, otherwise keep the default | `http://ds1:%s/ws/v1/cluster/apps/%s`               |
+| `common.configmap.SKYWALKING_ENABLE`                                              | Set whether to enable skywalking                                                                                               | `false`                                               |
+| `common.configmap.SW_AGENT_COLLECTOR_BACKEND_SERVICES`                            | Set agent collector backend services for skywalking                                                                            | `127.0.0.1:11800`                                     |
+| `common.configmap.SW_GRPC_LOG_SERVER_HOST`                                        | Set grpc log server host for skywalking                                                                                        | `127.0.0.1`                                           |
+| `common.configmap.SW_GRPC_LOG_SERVER_PORT`                                        | Set grpc log server port for skywalking                                                                                        | `11800`                                               |
+| `common.configmap.HADOOP_HOME`                                                    | Set `HADOOP_HOME` for DolphinScheduler's task environment                                                                      | `/opt/soft/hadoop`                                    |
+| `common.configmap.HADOOP_CONF_DIR`                                                | Set `HADOOP_CONF_DIR` for DolphinScheduler's task environment                                                                  | `/opt/soft/hadoop/etc/hadoop`                         |
+| `common.configmap.SPARK_HOME1`                                                    | Set `SPARK_HOME1` for DolphinScheduler's task environment                                                                      | `/opt/soft/spark1`                                    |
+| `common.configmap.SPARK_HOME2`                                                    | Set `SPARK_HOME2` for DolphinScheduler's task environment                                                                      | `/opt/soft/spark2`                                    |
+| `common.configmap.PYTHON_HOME`                                                    | Set `PYTHON_HOME` for DolphinScheduler's task environment                                                                      | `/usr/bin/python`                                     |
+| `common.configmap.JAVA_HOME`                                                      | Set `JAVA_HOME` for DolphinScheduler's task environment                                                                        | `/usr/local/openjdk-8`                                |
+| `common.configmap.HIVE_HOME`                                                      | Set `HIVE_HOME` for DolphinScheduler's task environment                                                                        | `/opt/soft/hive`                                      |
+| `common.configmap.FLINK_HOME`                                                     | Set `FLINK_HOME` for DolphinScheduler's task environment                                                                       | `/opt/soft/flink`                                     |
+| `common.configmap.DATAX_HOME`                                                     | Set `DATAX_HOME` for DolphinScheduler's task environment                                                                       | `/opt/soft/datax`                                     |
+| `common.sharedStoragePersistence.enabled`                                         | Set `common.sharedStoragePersistence.enabled` to `true` to mount a shared storage volume for Hadoop, Spark binary and etc      | `false`                                               |
+| `common.sharedStoragePersistence.mountPath`                                       | The mount path for the shared storage volume                                                                                   | `/opt/soft`                                           |
+| `common.sharedStoragePersistence.accessModes`                                     | `PersistentVolumeClaim` access modes, must be `ReadWriteMany`                                                                  | `[ReadWriteMany]`                                     |
+| `common.sharedStoragePersistence.storageClassName`                                | Shared Storage persistent volume storage class, must support the access mode: ReadWriteMany                                    | `-`                                                   |
+| `common.sharedStoragePersistence.storage`                                         | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `common.fsFileResourcePersistence.enabled`                                        | Set `common.fsFileResourcePersistence.enabled` to `true` to mount a new file resource volume for `api` and `worker`            | `false`                                               |
+| `common.fsFileResourcePersistence.accessModes`                                    | `PersistentVolumeClaim` access modes, must be `ReadWriteMany`                                                                  | `[ReadWriteMany]`                                     |
+| `common.fsFileResourcePersistence.storageClassName`                               | Resource persistent volume storage class, must support the access mode: ReadWriteMany                                          | `-`                                                   |
+| `common.fsFileResourcePersistence.storage`                                        | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `master.podManagementPolicy`                                                      | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down  | `Parallel`                                            |
+| `master.replicas`                                                                 | Replicas is the desired number of replicas of the given Template                                                               | `3`                                                   |
+| `master.annotations`                                                              | The `annotations` for master server                                                                                            | `{}`                                                  |
+| `master.affinity`                                                                 | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `master.nodeSelector`                                                             | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `master.tolerations`                                                              | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `master.resources`                                                                | The `resource` limit and request config for master server                                                                      | `{}`                                                  |
+| `master.configmap.MASTER_SERVER_OPTS`                                             | The jvm options for master server                                                                                              | `-Xms1g -Xmx1g -Xmn512m`                              |
+| `master.configmap.MASTER_EXEC_THREADS`                                            | Master execute thread number to limit process instances                                                                        | `100`                                                 |
+| `master.configmap.MASTER_EXEC_TASK_NUM`                                           | Master execute task number in parallel per process instance                                                                    | `20`                                                  |
+| `master.configmap.MASTER_DISPATCH_TASK_NUM`                                       | Master dispatch task number per batch                                                                                          | `3`                                                   |
+| `master.configmap.MASTER_HOST_SELECTOR`                                           | Master host selector to select a suitable worker, optional values include Random, RoundRobin, LowerWeight                      | `LowerWeight`                                         |
+| `master.configmap.MASTER_HEARTBEAT_INTERVAL`                                      | Master heartbeat interval, the unit is second                                                                                  | `10`                                                  |
+| `master.configmap.MASTER_TASK_COMMIT_RETRYTIMES`                                  | Master commit task retry times                                                                                                 | `5`                                                   |
+| `master.configmap.MASTER_TASK_COMMIT_INTERVAL`                                    | Master commit task interval, the unit is second                                                                                | `1`                                                   |
+| `master.configmap.MASTER_MAX_CPULOAD_AVG`                                         | Master max cpuload avg; the master server can schedule only when the current system cpu load average is lower than this value  | `-1` (`the number of cpu cores * 2`)                  |
+| `master.configmap.MASTER_RESERVED_MEMORY`                                         | Master reserved memory; the master server can schedule only when the system available memory is higher than this value, in G   | `0.3`                                                 |
+| `master.livenessProbe.enabled`                                                    | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `master.livenessProbe.initialDelaySeconds`                                        | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `master.livenessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `master.livenessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `master.livenessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `master.livenessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `master.readinessProbe.enabled`                                                   | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `master.readinessProbe.initialDelaySeconds`                                       | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `master.readinessProbe.periodSeconds`                                             | How often to perform the probe                                                                                                 | `30`                                                  |
+| `master.readinessProbe.timeoutSeconds`                                            | When the probe times out                                                                                                       | `5`                                                   |
+| `master.readinessProbe.failureThreshold`                                          | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `master.readinessProbe.successThreshold`                                          | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `master.persistentVolumeClaim.enabled`                                            | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master`                                        | `false`                                               |
+| `master.persistentVolumeClaim.accessModes`                                        | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `master.persistentVolumeClaim.storageClassName`                                   | `Master` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning   | `-`                                                   |
+| `master.persistentVolumeClaim.storage`                                            | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `worker.podManagementPolicy`                                                      | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down  | `Parallel`                                            |
+| `worker.replicas`                                                                 | Replicas is the desired number of replicas of the given Template                                                               | `3`                                                   |
+| `worker.annotations`                                                              | The `annotations` for worker server                                                                                            | `{}`                                                  |
+| `worker.affinity`                                                                 | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `worker.nodeSelector`                                                             | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `worker.tolerations`                                                              | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `worker.resources`                                                                | The `resource` limit and request config for worker server                                                                      | `{}`                                                  |
+| `worker.configmap.LOGGER_SERVER_OPTS`                                             | The jvm options for logger server                                                                                              | `-Xms512m -Xmx512m -Xmn256m`                          |
+| `worker.configmap.WORKER_SERVER_OPTS`                                             | The jvm options for worker server                                                                                              | `-Xms1g -Xmx1g -Xmn512m`                              |
+| `worker.configmap.WORKER_EXEC_THREADS`                                            | Worker execute thread number to limit task instances                                                                           | `100`                                                 |
+| `worker.configmap.WORKER_HEARTBEAT_INTERVAL`                                      | Worker heartbeat interval, the unit is second                                                                                  | `10`                                                  |
+| `worker.configmap.WORKER_MAX_CPULOAD_AVG`                                         | Worker max cpuload avg; tasks can be dispatched to the worker only when the current system cpu load average is lower than this value | `-1` (`the number of cpu cores * 2`)            |
+| `worker.configmap.WORKER_RESERVED_MEMORY`                                         | Worker reserved memory; tasks can be dispatched to the worker only when the system available memory is higher than this value, in G  | `0.3`                                           |
+| `worker.configmap.WORKER_GROUPS`                                                  | Worker groups                                                                                                                  | `default`                                             |
+| `worker.livenessProbe.enabled`                                                    | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `worker.livenessProbe.initialDelaySeconds`                                        | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `worker.livenessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `worker.livenessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `worker.livenessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `worker.livenessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `worker.readinessProbe.enabled`                                                   | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `worker.readinessProbe.initialDelaySeconds`                                       | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `worker.readinessProbe.periodSeconds`                                             | How often to perform the probe                                                                                                 | `30`                                                  |
+| `worker.readinessProbe.timeoutSeconds`                                            | When the probe times out                                                                                                       | `5`                                                   |
+| `worker.readinessProbe.failureThreshold`                                          | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `worker.readinessProbe.successThreshold`                                          | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `worker.persistentVolumeClaim.enabled`                                            | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker`                            | `false`                                               |
+| `worker.persistentVolumeClaim.dataPersistentVolume.enabled`                       | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker`                  | `false`                                               |
+| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes`                   | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName`              | `Worker` data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning        | `-`                                                   |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storage`                       | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `worker.persistentVolumeClaim.logsPersistentVolume.enabled`                       | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker`                  | `false`                                               |
+| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes`                   | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName`              | `Worker` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning   | `-`                                                   |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storage`                       | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `alert.replicas`                                                                  | Replicas is the desired number of replicas of the given Template                                                               | `1`                                                   |
+| `alert.strategy.type`                                                             | Type of deployment. Can be "Recreate" or "RollingUpdate"                                                                       | `RollingUpdate`                                       |
+| `alert.strategy.rollingUpdate.maxSurge`                                           | The maximum number of pods that can be scheduled above the desired number of pods                                              | `25%`                                                 |
+| `alert.strategy.rollingUpdate.maxUnavailable`                                     | The maximum number of pods that can be unavailable during the update                                                           | `25%`                                                 |
+| `alert.annotations`                                                               | The `annotations` for alert server                                                                                             | `{}`                                                  |
+| `alert.affinity`                                                                  | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `alert.nodeSelector`                                                              | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `alert.tolerations`                                                               | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `alert.resources`                                                                 | The `resource` limit and request config for alert server                                                                       | `{}`                                                  |
+| `alert.configmap.ALERT_SERVER_OPTS`                                               | The jvm options for alert server                                                                                               | `-Xms512m -Xmx512m -Xmn256m`                          |
+| `alert.configmap.XLS_FILE_PATH`                                                   | XLS file path                                                                                                                  | `/tmp/xls`                                            |
+| `alert.configmap.MAIL_SERVER_HOST`                                                | Mail `SERVER HOST`                                                                                                             | `nil`                                                 |
+| `alert.configmap.MAIL_SERVER_PORT`                                                | Mail `SERVER PORT`                                                                                                             | `nil`                                                 |
+| `alert.configmap.MAIL_SENDER`                                                     | Mail `SENDER`                                                                                                                  | `nil`                                                 |
+| `alert.configmap.MAIL_USER`                                                       | Mail `USER`                                                                                                                    | `nil`                                                 |
+| `alert.configmap.MAIL_PASSWD`                                                     | Mail `PASSWORD`                                                                                                                | `nil`                                                 |
+| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE`                                       | Mail `SMTP STARTTLS` enable                                                                                                    | `false`                                               |
+| `alert.configmap.MAIL_SMTP_SSL_ENABLE`                                            | Mail `SMTP SSL` enable                                                                                                         | `false`                                               |
+| `alert.configmap.MAIL_SMTP_SSL_TRUST`                                             | Mail `SMTP SSL TRUST`                                                                                                          | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_ENABLE`                                        | `Enterprise Wechat` enable                                                                                                     | `false`                                               |
+| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID`                                       | `Enterprise Wechat` corp id                                                                                                    | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_SECRET`                                        | `Enterprise Wechat` secret                                                                                                     | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID`                                      | `Enterprise Wechat` agent id                                                                                                   | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_USERS`                                         | `Enterprise Wechat` users                                                                                                      | `nil`                                                 |
+| `alert.livenessProbe.enabled`                                                     | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `alert.livenessProbe.initialDelaySeconds`                                         | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `alert.livenessProbe.periodSeconds`                                               | How often to perform the probe                                                                                                 | `30`                                                  |
+| `alert.livenessProbe.timeoutSeconds`                                              | When the probe times out                                                                                                       | `5`                                                   |
+| `alert.livenessProbe.failureThreshold`                                            | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `alert.livenessProbe.successThreshold`                                            | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `alert.readinessProbe.enabled`                                                    | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `alert.readinessProbe.initialDelaySeconds`                                        | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `alert.readinessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `alert.readinessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `alert.readinessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `alert.readinessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `alert.persistentVolumeClaim.enabled`                                             | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert`                                          | `false`                                               |
+| `alert.persistentVolumeClaim.accessModes`                                         | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `alert.persistentVolumeClaim.storageClassName`                                    | `Alert` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning    | `-`                                                   |
+| `alert.persistentVolumeClaim.storage`                                             | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `api.replicas`                                                                    | Replicas is the desired number of replicas of the given Template                                                               | `1`                                                   |
+| `api.strategy.type`                                                               | Type of deployment. Can be "Recreate" or "RollingUpdate"                                                                       | `RollingUpdate`                                       |
+| `api.strategy.rollingUpdate.maxSurge`                                             | The maximum number of pods that can be scheduled above the desired number of pods                                              | `25%`                                                 |
+| `api.strategy.rollingUpdate.maxUnavailable`                                       | The maximum number of pods that can be unavailable during the update                                                           | `25%`                                                 |
+| `api.annotations`                                                                 | The `annotations` for api server                                                                                               | `{}`                                                  |
+| `api.affinity`                                                                    | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `api.nodeSelector`                                                                | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `api.tolerations`                                                                 | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `api.resources`                                                                   | The `resource` limit and request config for api server                                                                         | `{}`                                                  |
+| `api.configmap.API_SERVER_OPTS`                                                   | The jvm options for api server                                                                                                 | `-Xms512m -Xmx512m -Xmn256m`                          |
+| `api.livenessProbe.enabled`                                                       | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `api.livenessProbe.initialDelaySeconds`                                           | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `api.livenessProbe.periodSeconds`                                                 | How often to perform the probe                                                                                                 | `30`                                                  |
+| `api.livenessProbe.timeoutSeconds`                                                | When the probe times out                                                                                                       | `5`                                                   |
+| `api.livenessProbe.failureThreshold`                                              | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `api.livenessProbe.successThreshold`                                              | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `api.readinessProbe.enabled`                                                      | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `api.readinessProbe.initialDelaySeconds`                                          | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `api.readinessProbe.periodSeconds`                                                | How often to perform the probe                                                                                                 | `30`                                                  |
+| `api.readinessProbe.timeoutSeconds`                                               | When the probe times out                                                                                                       | `5`                                                   |
+| `api.readinessProbe.failureThreshold`                                             | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `api.readinessProbe.successThreshold`                                             | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `api.persistentVolumeClaim.enabled`                                               | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api`                                              | `false`                                               |
+| `api.persistentVolumeClaim.accessModes`                                           | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `api.persistentVolumeClaim.storageClassName`                                      | `api` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning      | `-`                                                   |
+| `api.persistentVolumeClaim.storage`                                               | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `api.service.type`                                                                | `type` determines how the Service is exposed. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer            | `ClusterIP`                                           |
+| `api.service.clusterIP`                                                           | `clusterIP` is the IP address of the service and is usually assigned randomly by the master                                    | `nil`                                                 |
+| `api.service.nodePort`                                                            | `nodePort` is the port on each node on which this service is exposed when type=NodePort                                        | `nil`                                                 |
+| `api.service.externalIPs`                                                         | `externalIPs` is a list of IP addresses for which nodes in the cluster will also accept traffic for this service               | `[]`                                                  |
+| `api.service.externalName`                                                        | `externalName` is the external reference that kubedns or equivalent will return as a CNAME record for this service             | `nil`                                                 |
+| `api.service.loadBalancerIP`                                                      | `loadBalancerIP` when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field          | `nil`                                                 |
+| `api.service.annotations`                                                         | `annotations` may need to be set when service.type is LoadBalancer                                                             | `{}`                                                  |
+|                                                                                   |                                                                                                                                |                                                       |
+| `ingress.enabled`                                                                 | Enable ingress                                                                                                                 | `false`                                               |
+| `ingress.host`                                                                    | Ingress host                                                                                                                   | `dolphinscheduler.org`                                |
+| `ingress.path`                                                                    | Ingress path                                                                                                                   | `/dolphinscheduler`                                   |
+| `ingress.tls.enabled`                                                             | Enable ingress tls                                                                                                             | `false`                                               |
+| `ingress.tls.secretName`                                                          | Ingress tls secret name                                                                                                        | `dolphinscheduler-tls`                                |
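+
+As with any Helm chart, the values above can be overridden at install or upgrade time with `--set` flags or a custom values file. The snippet below is only a sketch: it assumes the chart is installed from its local directory as a release named `dolphinscheduler` into an existing `test` namespace, and the overridden values are arbitrary examples.
+
+```shell
+# Override a few chart values at install time (the values shown are examples only)
+helm install dolphinscheduler . \
+  --namespace test \
+  --set master.replicas=3 \
+  --set ingress.enabled=true \
+  --set ingress.host="dolphinscheduler.org"
+
+# Alternatively, keep the overrides in a custom values file (file name is arbitrary)
+helm upgrade dolphinscheduler . --namespace test -f values-override.yaml
+```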
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/pseudo-cluster.md b/docs/en-us/2.0.2/user_doc/guide/installation/pseudo-cluster.md
new file mode 100644
index 0000000..51d059d
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/pseudo-cluster.md
@@ -0,0 +1,192 @@
+# Pseudo-Cluster Deployment
+
+The purpose of pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, api server, and logger server are all on the same machine.
+
+If you are new to DolphinScheduler and just want to try it out, we recommend the [Standalone](standalone.md) deployment. If you want to experience more complete functions or schedule a larger number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to use DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md) deployment.
+
+## Prepare
+
+Pseudo-cluster deployment of DolphinScheduler requires the following external software support:
+
+* JDK: Download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` environment variables. You can skip this step if JDK already exists in your environment.
+* Binary package: Download the DolphinScheduler binary package at [download page](https://dolphinscheduler.apache.org/en-us/download/download.html)
+* Database: PostgreSQL (8.2.15+) or MySQL (5.7+); choose either one. For MySQL, JDBC Driver 8.0.16 is required
+* Registry Center: ZooKeeper (3.4.6+), [download link][zookeeper]
+* Process tree analysis
+  * `pstree` for macOS
+  * `psmisc` for Fedora/Red Hat/CentOS/Ubuntu/Debian
+
+> **_Note:_** DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but if you need to run tasks that depend on them, you need to have the corresponding environment support
+
+## DolphinScheduler startup environment
+
+### Configure user exemption and permissions
+
+Create a deployment user, and make sure to configure passwordless `sudo` for it. Here we take the user `dolphinscheduler` as an example.
+
+```shell
+# To create a user, login as root
+useradd dolphinscheduler
+
+# Add password
+echo "dolphinscheduler" | passwd --stdin dolphinscheduler
+
+# Configure sudo without password
+sed -i '$adolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+# Modify directory permissions, granting ownership to the user created above
+chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
+```
+
+> **_NOTICE:_**
+>
+> * Because DolphinScheduler's multi-tenant feature switches users with the command `sudo -u {linux-user}` when running tasks, the deployment user needs passwordless sudo privileges. If you are a beginner and do not understand this yet, you can ignore it for the time being.
+> * If you find the line `Defaults requiretty` in the `/etc/sudoers` file, please comment it out
+
+### Configure machine SSH password-free login
+
+Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The steps to configure password-free login are as follows
+
+```shell
+su dolphinscheduler
+
+ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+```
+
+> **_Notice:_** After the configuration is complete, you can run the command `ssh localhost` to verify it: if you can log in via SSH without entering a password, the configuration is successful.
+
+### Start ZooKeeper
+
+Go to the ZooKeeper installation directory, copy the configuration file `conf/zoo_sample.cfg` to `conf/zoo.cfg`, and change the value of `dataDir` in `conf/zoo.cfg` to `dataDir=./tmp/zookeeper`.
+
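+For example, the copy and the `dataDir` change might look like the following sketch, run from inside the ZooKeeper installation directory (the `sed` expression is just one way to do it):
+
+```shell
+# Create conf/zoo.cfg from the sample and point dataDir at a local directory
+cp conf/zoo_sample.cfg conf/zoo.cfg
+sed -i 's|^dataDir=.*|dataDir=./tmp/zookeeper|' conf/zoo.cfg
+```
+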
+```shell
+# Start zookeeper
+./bin/zkServer.sh start
+```
+
+## Modify configuration
+
+After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is located at `conf/config/install_config.conf`. Generally, you only need to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, and Registry Server** parts to complete the deployment. The parameters that must be modified are described below
+
+```shell
+# ---------------------------------------------------------
+# INSTALL MACHINE
+# ---------------------------------------------------------
+# Because the master, worker, and API server are deployed on a single node, the IP of the server is the machine IP or localhost
+ips="localhost"
+masters="localhost"
+workers="localhost:default"
+alertServer="localhost"
+apiServers="localhost"
+pythonGatewayServers="localhost"
+
+# DolphinScheduler installation path, it will be created automatically if it does not exist
+installPath="~/dolphinscheduler"
+
+# Deploy user, use the user you created in the section **Configure machine SSH password-free login**
+deployUser="dolphinscheduler"
+
+# ---------------------------------------------------------
+# DolphinScheduler ENV
+# ---------------------------------------------------------
+# The path of JAVA_HOME, the JDK installation path set up in the section **Prepare**
+javaHome="/your/java/home/here"
+
+# ---------------------------------------------------------
+# Database
+# ---------------------------------------------------------
+# Database type, username, password, IP, port and metadata. For now DATABASE_TYPE supports `mysql`, `postgresql` and `H2`
+# Please make sure the configuration values are quoted in double quotation marks, otherwise they may not take effect
+DATABASE_TYPE="mysql"
+SPRING_DATASOURCE_URL="jdbc:mysql://ds1:3306/ds_201_doc?useUnicode=true&characterEncoding=UTF-8"
+# Have to modify if you are not using dolphinscheduler/dolphinscheduler as your username and password
+SPRING_DATASOURCE_USERNAME="dolphinscheduler"
+SPRING_DATASOURCE_PASSWORD="dolphinscheduler"
+
+# ---------------------------------------------------------
+# Registry Server
+# ---------------------------------------------------------
+# Registration center address, the address of zookeeper service
+registryServers="localhost:2181"
+```
+
+## Initialize the database
+
+DolphinScheduler metadata is stored in a relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the `lib` directory of DolphinScheduler. Let's take MySQL as an example of how to initialize the database
+
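+Placing the driver might look like the sketch below; the download URL and the target path are assumptions that depend on your environment and where DolphinScheduler is extracted:
+
+```shell
+# Download the MySQL JDBC driver (8.0.16) and copy it into DolphinScheduler's lib directory
+wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar
+cp mysql-connector-java-8.0.16.jar /path/to/dolphinscheduler-bin/lib/
+```
+
+Then log in to MySQL and create the database and user:
+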
+```shell
+mysql -uroot -p
+
+mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+
+# Replace {user} and {password} with your actual username and password
+mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+
+mysql> flush privileges;
+```
+
+After the steps above are done, you have created a new database for DolphinScheduler; then run the shortcut shell script to initialize the database
+
+```shell
+sh script/create-dolphinscheduler.sh
+```
+
+## Start DolphinScheduler
+
+Using the deployment user you created above, run the following command to complete the deployment; the server logs will be stored in the `logs` folder
+
+```shell
+sh install.sh
+```
+
+> **_Note:_** For the first deployment, the message `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear several times in the terminal; this is harmless and you can safely ignore it.
+
+## Login DolphinScheduler
+
+Access http://localhost:12345/dolphinscheduler in your browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
+
+## Start or stop server
+
+```shell
+# Stop all DolphinScheduler servers
+sh ./bin/stop-all.sh
+
+# Start all DolphinScheduler servers
+sh ./bin/start-all.sh
+
+# Start or stop DolphinScheduler Master
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+
+# Start or stop DolphinScheduler Worker
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+
+# Start or stop DolphinScheduler Api
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+
+# Start or stop Logger
+sh ./bin/dolphinscheduler-daemon.sh start logger-server
+sh ./bin/dolphinscheduler-daemon.sh stop logger-server
+
+# Start or stop Alert
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+
+# Start or stop Python Gateway Server
+sh ./bin/dolphinscheduler-daemon.sh start python-gateway-server
+sh ./bin/dolphinscheduler-daemon.sh stop python-gateway-server
+```
+
+> **_Note:_** Please refer to the section "System Architecture Design" for the usage of each service
+
+[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
+[zookeeper]: https://zookeeper.apache.org/releases.html
+[mysql]: https://downloads.MySQL.com/archives/c-j/
+[issue]: https://github.com/apache/dolphinscheduler/issues/6597
diff --git a/docs/en-us/2.0.2/user_doc/guide/installation/standalone.md b/docs/en-us/2.0.2/user_doc/guide/installation/standalone.md
new file mode 100644
index 0000000..9ab7b79
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/installation/standalone.md
@@ -0,0 +1,42 @@
+# Standalone
+
+Standalone mode is only for a quick first look at DolphinScheduler.
+
+If you are new to DolphinScheduler and just want to try it out, we recommend the [Standalone](standalone.md) deployment. If you want to experience more complete functions or schedule a larger number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to use DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md) deployment.
+
+> **_Note:_** Standalone is only recommended for fewer than 20 workflows, because it uses an H2 database and a ZooKeeper testing server; too many tasks may cause instability
+
+## Prepare
+
+* JDK: Download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` environment variables. You can skip this step if JDK already exists in your environment.
+* Binary package: Download the DolphinScheduler binary package at [download page](https://dolphinscheduler.apache.org/en-us/download/download.html)
+
+## Start DolphinScheduler Standalone Server
+
+### Extract and start DolphinScheduler
+
+The binary package contains a standalone startup script that can be started quickly after extraction. Switch to a user with sudo permission and run the script
+
+```shell
+# Extract and start Standalone Server
+tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
+cd apache-dolphinscheduler-*-bin
+sh ./bin/dolphinscheduler-daemon.sh start standalone-server
+```
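+
+To confirm the standalone server is running, a quick check might look like the sketch below; the process name and log file locations are assumptions and may differ between releases:
+
+```shell
+# The standalone server runs as a single Java process
+jps | grep -i standalone
+
+# Follow the most recent logs (log file names may vary)
+tail -f logs/*.log
+```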
+
+### Login DolphinScheduler
+
+Access http://localhost:12345/dolphinscheduler in your browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
+
+## Start or stop server
+
+The script `./bin/dolphinscheduler-daemon.sh` can not only quickly start the standalone server, but also stop it. All the commands are as follows
+
+```shell
+# Start Standalone Server
+sh ./bin/dolphinscheduler-daemon.sh start standalone-server
+# Stop Standalone Server
+sh ./bin/dolphinscheduler-daemon.sh stop standalone-server
+```
+
+[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
diff --git a/docs/en-us/2.0.2/user_doc/guide/introduction.md b/docs/en-us/2.0.2/user_doc/guide/introduction.md
new file mode 100644
index 0000000..b34f1de
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/introduction.md
@@ -0,0 +1,3 @@
+# User Manual
+
+The User Manual shows you how to use DolphinScheduler. If you have not installed it yet, please see [Quick Start](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/quick-start.html) to install DolphinScheduler before going forward.
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/monitor.md b/docs/en-us/2.0.2/user_doc/guide/monitor.md
new file mode 100644
index 0000000..2bad35e
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/monitor.md
@@ -0,0 +1,48 @@
+
+# Monitor
+
+## Service management
+
+- Service management mainly monitors and displays the health status and basic information of each service in the system
+
+## Master monitoring
+
+- Mainly related to master information.
+<p align="center">
+   <img src="/img/master-jk-en.png" width="80%" />
+ </p>
+
+## Worker monitoring
+
+- Mainly related to worker information.
+
+<p align="center">
+   <img src="/img/worker-jk-en.png" width="80%" />
+ </p>
+
+## Zookeeper monitoring
+
+- Mainly the configuration information of each worker and master registered in ZooKeeper.
+
+<p align="center">
+   <img src="/img/zookeeper-monitor-en.png" width="80%" />
+ </p>
+
+## DB monitoring
+
+- Mainly the health of the DB
+
+<p align="center">
+   <img src="/img/mysql-jk-en.png" width="80%" />
+ </p>
+
+## Statistics management
+
+<p align="center">
+   <img src="/img/statistics-en.png" width="80%" />
+ </p>
+
+- Number of commands to be executed: statistics on the `t_ds_command` table (see the sketch after this list)
+- Number of failed commands: statistics on the `t_ds_error_command` table
+- Number of tasks to run: Count the data of task_queue in Zookeeper
+- Number of tasks to be killed: Count the data of task_kill in Zookeeper
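+
+For reference, the first two counters correspond to simple row counts on the metadata tables. A sketch, assuming MySQL is used and the metadata database is named `dolphinscheduler` (both are assumptions that depend on your installation):
+
+```shell
+# Count commands waiting to be executed and failed commands (database name is an assumption)
+mysql -u root -p -e "SELECT COUNT(*) FROM dolphinscheduler.t_ds_command;"
+mysql -u root -p -e "SELECT COUNT(*) FROM dolphinscheduler.t_ds_error_command;"
+```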
diff --git a/docs/en-us/2.0.2/user_doc/guide/observability/skywalking-agent.md b/docs/en-us/2.0.2/user_doc/guide/observability/skywalking-agent.md
new file mode 100644
index 0000000..9c00285
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/observability/skywalking-agent.md
@@ -0,0 +1,74 @@
+SkyWalking Agent
+=============================
+
+The dolphinscheduler-skywalking module provides a [SkyWalking](https://skywalking.apache.org/) monitoring agent for the DolphinScheduler project.
+
+This document describes how to enable SkyWalking 8.4+ support with this module (SkyWalking 8.5.0 is recommended).
+
+# Installation
+
+The following configurations are used to enable the SkyWalking agent.
+
+### Through environment variable configuration (for Docker Compose)
+
+Modify SkyWalking environment variables in `docker/docker-swarm/config.env.sh`:
+
+```
+SKYWALKING_ENABLE=true
+SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
+SW_GRPC_LOG_SERVER_HOST=127.0.0.1
+SW_GRPC_LOG_SERVER_PORT=11800
+```
+
+And run
+
+```shell
+$ docker-compose up -d
+```
+
+### Through environment variable configuration (for Docker)
+
+```shell
+$ docker run -d --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-e SKYWALKING_ENABLE="true" \
+-e SW_AGENT_COLLECTOR_BACKEND_SERVICES="your.skywalking-oap-server.com:11800" \
+-e SW_GRPC_LOG_SERVER_HOST="your.skywalking-log-reporter.com" \
+-e SW_GRPC_LOG_SERVER_PORT="11800" \
+-p 12345:12345 \
+apache/dolphinscheduler:2.0.2 all
+```
+
+### Through install_config.conf configuration (for DolphinScheduler install.sh)
+
+Add the following configurations to `${workDir}/conf/config/install_config.conf`.
+
+```properties
+
+# SkyWalking config
+# note: enable SkyWalking tracking plugin
+enableSkywalking="true"
+# note: configure SkyWalking backend service address
+skywalkingServers="your.skywalking-oap-server.com:11800"
+# note: configure SkyWalking log reporter host
+skywalkingLogReporterHost="your.skywalking-log-reporter.com"
+# note: configure SkyWalking log reporter port
+skywalkingLogReporterPort="11800"
+
+```
+
+# Usage
+
+### Import Dashboard
+
+#### Import the DolphinScheduler Dashboard into the SkyWalking Server
+
+Copy the `${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml` file into `${skywalking-oap-server.home}/config/ui-initialized-templates/` directory, and restart SkyWalking oap-server.
+
+#### View DolphinScheduler Dashboard
+
+If you have opened SkyWalking dashboard with a browser before, you need to clear the browser cache.
+
+![img1](/img/skywalking/import-dashboard-1.jpg)
diff --git a/docs/en-us/2.0.2/user_doc/guide/open-api.md b/docs/en-us/2.0.2/user_doc/guide/open-api.md
new file mode 100644
index 0000000..e93737a
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/open-api.md
@@ -0,0 +1,64 @@
+# Open API
+
+## Background
+Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.
+
+## The Operation Steps of DolphinScheduler API Calls
+
+### Create a token
+1. Log in to the scheduling system, click "Security", then click "Token manage" on the left, and click "Create token" to create a token.
+
+<p align="center">
+   <img src="/img/token-management-en.png" width="80%" />
+ </p>
+
+2. Select the "Expiration time" (Token validity), select "User" (to perform the API operation with the specified user), click "Generate token", copy the Token string, and click "Submit"
+
+<p align="center">
+   <img src="/img/create-token-en1.png" width="80%" />
+ </p>
+
+### Use token
+1. Open the API documentation page
+    > Address:http://{api server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
+<p align="center">
+   <img src="/img/api-documentation-en.png" width="80%" />
+ </p>
+ 
+2. Select a test API. The API selected for this test is queryAllProjectList:
+    > projects/query-project-list
+3. Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result
+    ```
+    token: The Token just generated
+    ```
+<p align="center">
+   <img src="/img/test-api.png" width="80%" />
+ </p>  
+
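+If you prefer the command line to Postman, the same request can be sent with curl (a hedged sketch; replace the host and token with your own values, and take the exact path from the API documentation page above):
+
+```shell
+curl -H "token: <the token just generated>" \
+  "http://192.168.xx.xx:12345/dolphinscheduler/projects/query-project-list"
+```
+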
+### Create a project
+Here is an example of creating a project named "wudl-flink-test":
+<p align="center">
+   <img src="/img/api/create_project1.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_project2.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_project3.png" width="80%" />
+ </p>
+The returned msg is "success", indicating that we have successfully created the project through the API.
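+
+A hedged curl equivalent of the request shown in the screenshots (the path and parameter names follow the create-project example used elsewhere in this manual; verify them on the API documentation page before use):
+
+```shell
+curl -X POST -H "token: <the token just generated>" \
+  --data "projectName=wudl-flink-test" \
+  --data "description=created through the open API" \
+  "http://192.168.xx.xx:12345/dolphinscheduler/projects/create"
+```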
+
+If you are interested in the source code of this operation, please continue reading:
+
+### Appendix: The source code of creating a project
+<p align="center">
+   <img src="/img/api/create_source1.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_source2.png" width="80%" />
+ </p>
+
+
diff --git a/docs/en-us/2.0.2/user_doc/guide/parameter/built-in.md b/docs/en-us/2.0.2/user_doc/guide/parameter/built-in.md
new file mode 100644
index 0000000..2c88bed
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/parameter/built-in.md
@@ -0,0 +1,48 @@
+# Built-in Parameter
+
+## Basic Built-in Parameter
+
+<table>
+    <tr><th>variable</th><th>declaration method</th><th>meaning</th></tr>
+    <tr>
+        <td>system.biz.date</td>
+        <td>${system.biz.date}</td>
+        <td>The day before the scheduled time of the daily scheduling instance, the format is yyyyMMdd</td>
+    </tr>
+    <tr>
+        <td>system.biz.curdate</td>
+        <td>${system.biz.curdate}</td>
+        <td>The timing time of the daily scheduling instance, the format is yyyyMMdd</td>
+    </tr>
+    <tr>
+        <td>system.datetime</td>
+        <td>${system.datetime}</td>
+        <td>The timing time of the daily scheduling instance, the format is yyyyMMddHHmmss</td>
+    </tr>
+</table>
+
+## Extended Built-in Parameter
+
+- Custom variable names are supported in the code. The declaration method is \${variable name}, and the value can refer to a [basic built-in parameter](#basic-built-in-parameter) or a user-specified "constant".
+
+- We define the benchmark variable in the \$[...] format. \$[yyyyMMddHHmmss] can be decomposed and combined arbitrarily, for example: \$[yyyyMMdd], \$[HHmmss], \$[yyyy-MM-dd], etc.
+
+- Alternatively, the following two methods may be useful:
+
+      1. Use the add_months(yyyyMMdd, offset) function to add or subtract a number of months.
+      The first parameter is the time format the user will get, e.g. yyyyMMdd;
+      the second parameter is the offset, i.e. the number of months to add or subtract.
+      * Next N years: $[add_months(yyyyMMdd,12*N)]
+      * N years before: $[add_months(yyyyMMdd,-12*N)]
+      * Next N months: $[add_months(yyyyMMdd,N)]
+      * N months before: $[add_months(yyyyMMdd,-N)]
+
+      2. Add or subtract numbers directly after the time format.
+      * Next N weeks: $[yyyyMMdd+7*N]
+      * N weeks before: $[yyyyMMdd-7*N]
+      * Next N days: $[yyyyMMdd+N]
+      * N days before: $[yyyyMMdd-N]
+      * Next N hours: $[HHmmss+N/24]
+      * N hours before: $[HHmmss-N/24]
+      * Next N minutes: $[HHmmss+N/24/60]
+      * N minutes before: $[HHmmss-N/24/60]
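+
+As a hedged example of using one of the patterns above: suppose a node defines a local parameter named `last_day` (a hypothetical name) whose value is set to `$[yyyyMMdd-1]`; a shell task can then read the substituted value:
+
+```shell
+#!/bin/bash
+# DolphinScheduler substitutes ${last_day} with the resolved value of $[yyyyMMdd-1]
+# (the day before the schedule time) before the script runs.
+echo "yesterday in yyyyMMdd format: ${last_day}"
+```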
diff --git a/docs/en-us/2.0.2/user_doc/guide/parameter/context.md b/docs/en-us/2.0.2/user_doc/guide/parameter/context.md
new file mode 100644
index 0000000..90cb368
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/parameter/context.md
@@ -0,0 +1,63 @@
+# Parameter Context
+
+DolphinScheduler allows parameters to refer to each other, including local parameters referring to global parameters and upstream-to-downstream parameter transfer. Because of these references, parameter priority becomes an issue when parameter names are the same; see also [Parameter Priority](priority.md).
+
+## Local task use global parameter
+
+The premise of a local task referencing a global parameter is that you have already defined a [Global Parameter](global.md). The usage is similar to that of [local parameters](local.md), but the value of the parameter needs to be configured as the key of the global parameter.
+
+![parameter-call-global-in-local](/img/global_parameter.png)
+
+As shown in the figure above, `${biz_date}` and `${curdate}` are examples of local parameters referencing global parameters. Looking at the last line of the figure, local_param_bizdate uses \${global_bizdate} to refer to the global parameter; in the shell script you can use \${local_param_bizdate} to access the value of the global variable global_bizdate, or set the value of local_param_bizdate directly through JDBC. In the same way, local_param refers to the global parameters defined above.
+
+## Pass parameter from upstream task to downstream
+
+DolphinScheduler allows parameter transfer between tasks; currently, only one-way transfer from upstream to downstream is supported. The task types that support this feature are:
+
+* [Shell](../task/shell.md)
+* [SQL](../task/sql.md)
+* [Procedure](../task/stored-procedure.md)
+
+When defining an upstream node, if the result of that node needs to be passed to a dependent downstream node, you need to set a parameter whose direction is OUT in [Custom Parameters] of [Current Node Settings]. At present, we mainly focus on the SQL and SHELL nodes, which can output parameters.
+
+prop is user-specified. The direction must be selected as OUT; a parameter is defined as output only when its direction is OUT. The data type can be chosen from different data structures as needed, and the value part is not required.
+
+If the result of the SQL node has only one row and one or more fields, the name of the prop needs to be the same as the field name. The data type can be anything other than LIST. The parameter takes the value of the column in the SQL query result whose name matches the parameter name.
+
+If the result of the SQL node has multiple rows and one or more fields, the name of the prop needs to be the same as the field name. Choose LIST as the data type; the SQL query result will be converted to a LIST and then serialized to JSON as the value of the corresponding parameter.
+
+Let's take another example of the process that contains the SQL node in the above picture:
+
+The [createParam1] node in the above figure is defined as follows:
+
+![png05](/img/globalParam/image-20210723104957031.png)
+
+The [createParam2] node is defined as follows:
+
+![png06](/img/globalParam/image-20210723105026924.png)
+
+You can find the corresponding node instance on the [Workflow Instance] page to check the value of the variable.
+
+Node instance [createparam1] is as follows:
+
+![png07](/img/globalParam/image-20210723105131381.png)
+
+Here, the value of "id" is equal to 12.
+
+Let's see the case of the node instance [createparam2].
+
+![png08](/img/globalParam/image-20210723105255850.png)
+
+There is only the value of "id". Although the user-defined SQL queries both the "id" and "database_name" fields, only one parameter is set because only "id" is defined as an OUT parameter. For display reasons, the length of the list shown here is limited to 10.
+
+### SHELL
+
+prop is user-specified. The direction must be selected as OUT; the output is defined as a parameter only when the direction is OUT. The data type can be chosen from different data structures as needed, and the value part is not required. To pass a parameter, the shell script must print output in the format of the ${setValue(key=value)} statement, where key is the prop of the corresponding parameter and value is the value of that parameter.
+
+For example, `echo '${setValue(trans=Hello trans)}'` sets "trans" to "Hello trans", and the variable trans can then be used in downstream tasks:
+
+<img src="/img/globalParam/trans-shell.png" alt="trans-shell" style="zoom:50%;" />
+
+When the shell node runs and the log contains output in the format ${setValue(key=value1)}, value1 is assigned to key, and downstream nodes can use the value of the variable key directly. Similarly, you can find the corresponding node instance on the Workflow Instance page to view the value of the variable.
+
+<img src="/img/globalParam/use-parameter-shell.png" alt="use-parameter-shell" style="zoom:50%;" />
diff --git a/docs/en-us/2.0.2/user_doc/guide/parameter/global.md b/docs/en-us/2.0.2/user_doc/guide/parameter/global.md
new file mode 100644
index 0000000..e9db8e7
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/parameter/global.md
@@ -0,0 +1,19 @@
+# Global Parameter
+
+## Scope
+
+Parameters configured in the workflow definition dialog are scoped to the whole workflow: every task in the workflow can reference them.
+
+## Usage
+
+To set global parameters: after defining the workflow, click the 'Save' button, then click the '+' button below 'Set global':
+
+<p align="center">
+   <img src="/img/supplement_global_parameter_en.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/local_parameter_en.png" width="80%" />
+ </p>
+
+The global_bizdate parameter defined here can be referenced by the local parameters of any other task node, and its value is obtained by referencing the built-in parameter system.biz.date.
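+
+A minimal sketch of referencing this global parameter from a shell task (assuming the workflow defines `global_bizdate` as in the figure above):
+
+```shell
+# DolphinScheduler replaces ${global_bizdate} with the value of system.biz.date
+# before the script is executed.
+echo "business date is ${global_bizdate}"
+```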
diff --git a/docs/en-us/2.0.2/user_doc/guide/parameter/local.md b/docs/en-us/2.0.2/user_doc/guide/parameter/local.md
new file mode 100644
index 0000000..41e74f3
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/parameter/local.md
@@ -0,0 +1,19 @@
+# Local Parameter
+
+## Scope
+
+Parameters configured in the task definition dialog are scoped to that task only. However, if you configure them as described in [Parameter Context](context.md), they can be passed to downstream tasks.
+
+## Usage
+
+The approach to set local parameters is, double-click on any node while defining the workflow and click the '+' button next to the 'Custom Parameters':
+
+<p align="center">
+     <img src="/img/supplement_local_parameter_en.png" width="80%" />
+</p>
+
+<p align="center">
+     <img src="/img/global_parameter_en.png" width="80%" />
+</p>
+
+If you want to use a [built-in parameter](built-in.md) in a local parameter, fill in the corresponding built-in parameter in `value`, as with `${biz_date}` and `${curdate}` in the figure above.
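+
+For example, with a local parameter named `biz_date` whose value is `${system.biz.date}` as in the figure above, a shell task can read the substituted value directly (a minimal sketch):
+
+```shell
+# ${biz_date} is replaced with the resolved value of system.biz.date (yyyyMMdd)
+# before the script runs.
+echo "local biz_date is ${biz_date}"
+```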
diff --git a/docs/en-us/2.0.2/user_doc/guide/parameter/priority.md b/docs/en-us/2.0.2/user_doc/guide/parameter/priority.md
new file mode 100644
index 0000000..e2ae733
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/parameter/priority.md
@@ -0,0 +1,40 @@
+# Parameter Priority
+
+The parameter values involved in a DolphinScheduler workflow definition may come from three sources:
+
+* [Global Parameter](global.md): parameters defined when saving the workflow definition page
+* [Parameter Context](context.md): parameters passed by upstream nodes
+* [Local Parameter](local.md): the node's own parameters, i.e. the parameters defined by the user in [Custom Parameters]; their values are set when the workflow is defined
+
+Because the value of a parameter can come from multiple sources, a priority rule is needed when parameter names collide. The priority of DolphinScheduler parameters from high to low is: `Global Parameter > Parameter Context > Local Parameter`
+
+For parameters passed by upstream tasks, there may be multiple upstream tasks passing parameters to the same downstream task. When the parameter names passed from upstream are the same:
+
+* Downstream nodes will preferentially use parameters with non-empty values
+* If there are multiple parameters with non-empty values, sort according to the completion time of the upstream task, and select the parameter corresponding to the upstream task with the earliest completion time
+
+## Example
+
+For example, the relationships are shown in the figures below:
+
+1: The first case is explained by the shell nodes.
+
+![png01](/img/globalParam/image-20210723102938239.png)
+
+The [useParam] node can use the parameters which are set in the [createParam] node. The [useParam] node does not have a dependency on the [noUseParam] node, so it does not get the parameters of the [noUseParam] node. The above picture is just an example of shell nodes; other types of nodes have the same usage rules.
+
+![png02](/img/globalParam/image-20210723103316896.png)
+
+Among them, the [createParam] node can use parameters directly. In addition, the node sets two parameters named "key" and "key1". Here the user defines a parameter named "key1" with the same name as the one passed by the upstream node and assigns it the value "12". However, due to the priority rule above, the value "12" is discarded and the parameter value set by the upstream node is used instead.
+
+2: Let's explain another situation in SQL nodes.
+
+![png03](/img/globalParam/image-20210723103937052.png)
+
+The definition of the [use_create] node is as follows:
+
+![png04](/img/globalParam/image-20210723104411489.png)
+
+"status" is the own parameters of the node set by the current node. However, the user also sets the "status" parameter when saving, assigning its value to -1. Then the value of status will be -1 with higher priority when the SQL is executed. The value of the node's own variable is discarded.
+
+The "ID" here is the parameter set by the upstream node. The user sets the parameters of the same parameter name "ID" for the [createparam1] node and [createparam2] node. And the [use_create] node uses the value of [createParam1] which is finished first.
diff --git a/docs/en-us/2.0.2/user_doc/guide/project.md b/docs/en-us/2.0.2/user_doc/guide/project.md
new file mode 100644
index 0000000..37c7b9f
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/project.md
@@ -0,0 +1,21 @@
+# Project
+
+## Create project
+
+- Click "Project Management" to enter the project management page, click the "Create Project" button, enter the project name, project description, and click "Submit" to create a new project.
+
+  <p align="center">
+      <img src="/img/create_project_en1.png" width="80%" />
+  </p>
+
+## Project home
+
+- Click the project name link on the project management page to enter the project home page. As shown in the figure below, the project home page contains the project's task status statistics, process status statistics, and workflow definition statistics. An introduction to these metrics:
+
+- Task status statistics: within the specified time range, count the number of task instances in the states submitted successfully, running, ready to pause, pause, ready to stop, stop, failure, success, fault tolerance, kill, and waiting for threads
+- Process status statistics: within the specified time range, count the number of workflow instances in the states submitted successfully, running, ready to pause, pause, ready to stop, stop, failure, success, fault tolerance, kill, and waiting for threads
+- Workflow definition statistics: count the workflow definitions created by this user and the workflow definitions granted to this user by the administrator
+
+  <p align="center">
+     <img src="/img/project_home_en.png" width="80%" />
+  </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/quick-start.md b/docs/en-us/2.0.2/user_doc/guide/quick-start.md
new file mode 100644
index 0000000..418248a
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/quick-start.md
@@ -0,0 +1,71 @@
+# Quick Start
+
+* Administrator user login
+
+  > Address:http://192.168.xx.xx:12345/dolphinscheduler  Username and password: admin/dolphinscheduler123
+
+<p align="center">
+   <img src="/img/login_en.png" width="60%" />
+ </p>
+
+* Create queue
+
+<p align="center">
+   <img src="/img/create-queue-en.png" width="60%" />
+ </p>
+
+* Create tenant
+
+<p align="center">
+   <img src="/img/create-tenant-en.png" width="60%" />
+ </p>
+
+* Create ordinary users
+
+<p align="center">
+   <img src="/img/create-user-en.png" width="60%" />
+ </p>
+
+* Create an alarm group
+
+<p align="center">
+   <img src="/img/alarm-group-en.png" width="60%" />
+ </p>
+
+* Create a worker group
+
+<p align="center">
+   <img src="/img/worker-group-en.png" width="60%" />
+ </p>
+
+* Create environment
+
+<p align="center">
+   <img src="/img/create-environment.png" width="60%" />
+ </p>
+
+* Create a token
+
+<p align="center">
+   <img src="/img/token-en.png" width="60%" />
+ </p>
+
+* Log in with a regular user
+  > Click the user name in the upper right corner to "Exit", then log in again with the regular user account.
+
+* Project Management -> Create Project -> Click on Project Name
+
+<p align="center">
+   <img src="/img/create_project_en.png" width="60%" />
+ </p>
+
+* Click Workflow Definition -> Create Workflow Definition -> Put the Workflow Definition Online
+
+<p align="center">
+   <img src="/img/process_definition_en.png" width="60%" />
+ </p>
+
+* Run the Workflow Definition -> Click Workflow Instance -> Click the Process Instance Name -> Double-click a Task Node -> View the Task Execution Log
+
+ <p align="center">
+   <img src="/img/log_en.png" width="60%" />
+</p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/resource.md b/docs/en-us/2.0.2/user_doc/guide/resource.md
new file mode 100644
index 0000000..58e6fc0
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/resource.md
@@ -0,0 +1,112 @@
+# Resource Center
+
+If you want to use the resource upload function, you can use the local file system of a single machine (this does not require deploying Hadoop). You can also upload to a Hadoop or MinIO cluster, in which case you need Hadoop (2.6+), MinIO, and the related environment.
+
+> **_Note:_**
+>
+> * If the resource upload function is used, the deployment user in [installation and deployment](installation/standalone.md) must have operation authority
+> * If you are using a Hadoop cluster with HA, you need to enable HDFS resource upload and copy the `core-site.xml` and `hdfs-site.xml` from the Hadoop cluster to `/opt/dolphinscheduler/conf`; otherwise, skip this step
+
+## HDFS resource configuration
+
+- Uploaded resource files and UDF functions are stored on HDFS, so the following configuration items are required:
+
+```
+conf/common/common.properties
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # base data dir, resource files will be stored under this HDFS path; make sure the directory exists on HDFS and has read/write permissions. "/dolphinscheduler" is recommended
+    data.store2hdfs.basepath=/dolphinscheduler
+    # resource upload startup type : HDFS,S3,NONE
+    res.upload.startup.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+
+conf/common/hadoop.properties
+    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
+    # to the conf directory,support s3,for example : s3a://dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020
+    # resourcemanager HA: configure the IPs of the resourcemanagers; leave this empty for a single resourcemanager
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
+```
+
+- Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured: for a single ResourceManager, set the host name in yarn.application.status.address and leave yarn.resourcemanager.ha.rm.ids empty; for ResourceManager HA, set yarn.resourcemanager.ha.rm.ids and keep the default yarn.application.status.address.
+- You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project, and restart the api-server service.
+
+## File management
+
+> File management covers various resource files, including creating basic txt/log/sh/conf/py/java files, uploading jar packages and other types of files, and supports edit, rename, download, delete and other operations.
+
+  <p align="center">
+   <img src="/img/file-manage-en.png" width="80%" />
+ </p>
+
+- Create a file
+  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties
+
+<p align="center">
+   <img src="/img/file_create_en.png" width="80%" />
+ </p>
+
+- Upload files
+
+> Upload file: click the "Upload File" button or drag the file to the upload area; the file name will be automatically filled in with the uploaded file name
+
+<p align="center">
+   <img src="/img/file-upload-en.png" width="80%" />
+ </p>
+
+- File View
+
+> For the file types that can be viewed, click the file name to view the file details
+
+<p align="center">
+   <img src="/img/file_detail_en.png" width="80%" />
+ </p>
+
+- Download file
+
+> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file
+
+- File rename
+
+<p align="center">
+   <img src="/img/file_rename_en.png" width="80%" />
+ </p>
+
+- Delete
+  > File list -> Click the "Delete" button to delete the specified file
+
+## UDF management
+
+### Resource management
+
+> Resource management is similar to file management; the difference is that resource management holds the uploaded UDF resources, while file management holds user programs, scripts and configuration files.
+> Supported operations: rename, download, delete.
+
+- Upload udf resources
+  > Same as uploading files.
+
+### Function management
+
+- Create UDF function
+  > Click "Create UDF Function", enter the UDF function parameters, select the UDF resource, and click "Submit" to create the UDF function.
+
+> Currently only temporary UDF functions of HIVE are supported
+
+- UDF function name: the name used when the UDF function is called
+- Package name Class name: enter the full class path of the UDF function
+- UDF resource: set the resource file corresponding to the created UDF
+
+<p align="center">
+   <img src="/img/udf_edit_en.png" width="80%" />
+ </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/security.md b/docs/en-us/2.0.2/user_doc/guide/security.md
new file mode 100644
index 0000000..bbab492
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/security.md
@@ -0,0 +1,163 @@
+
+# Security
+
+* Only the administrator account has the authority to operate the Security Center. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management, and token management. In the user management module, resources, data sources, projects, etc. can be authorized.
+* Administrator login, default user name and password: admin/dolphinscheduler123
+
+## Create queue
+
+- Queues are used when executing programs such as Spark and MapReduce that require the "queue" parameter.
+- The administrator enters the Security Center->Queue Management page and clicks the "Create Queue" button to create a queue.
+<p align="center">
+   <img src="/img/create-queue-en.png" width="80%" />
+ </p>
+
+## Add tenant
+
+- The tenant corresponds to the Linux user used by the worker to submit jobs. A task will fail if this Linux user does not exist. You can set the parameter `worker.tenant.auto.create` to `true` in the configuration file `worker.properties`; DolphinScheduler will then create the user automatically if it does not exist. Note that `worker.tenant.auto.create=true` requires that the worker can run the `sudo` command without a password.
+- Tenant Code: **the tenant code must be a unique user on Linux and cannot be repeated**
+- The administrator enters the Security Center->Tenant Management page and clicks the "Create Tenant" button to create a tenant.
+
+ <p align="center">
+    <img src="/img/addtenant-en.png" width="80%" />
+  </p>
+
+## Create normal user
+
+- Users are divided into **administrator users** and **normal users**
+
+  - The administrator has authorization and user management authority, but does not have the authority to create projects or workflow definitions.
+  - Ordinary users can create projects and create, edit, and execute workflow definitions.
+  - Note: if a user switches tenants, all resources under the user's previous tenant will be copied to the new tenant.
+
+- The administrator enters the Security Center -> User Management page and clicks the "Create User" button to create a user.
+<p align="center">
+   <img src="/img/user-en.png" width="80%" />
+ </p>
+
+> **Edit user information**
+
+- The administrator enters the Security Center->User Management page and clicks the "Edit" button to edit user information.
+- After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the "Edit" button to edit the user information.
+
+> **Modify user password**
+
+- The administrator enters the Security Center->User Management page and clicks the "Edit" button. When editing user information, enter the new password to modify the user password.
+- After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the "Edit" button, then the password modification is successful.
+
+## Create alarm group
+
+- The alarm group is a parameter set when starting a workflow. After the process ends, its status and other information will be sent to the alarm group by email.
+
+* The administrator enters the Security Center -> Alarm Group Management page and clicks the "Create Alarm Group" button to create an alarm group.
+
+  <p align="center">
+    <img src="/img/mail-en.png" width="80%" />
+  </p>
+
+## Token management
+
+> Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.
+
+- The administrator enters the Security Center -> Token Management page, clicks the "Create Token" button, selects the expiration time and user, clicks the "Generate Token" button, and clicks the "Submit" button, then the selected user's token is created successfully.
+
+  <p align="center">
+      <img src="/img/create-token-en.png" width="80%" />
+   </p>
+
+- After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the "generate token" button, and click the "submit" button, then the user creates a token successfully.
+- Call example:
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.http.NameValuePair;
+import org.apache.http.client.entity.UrlEncodedFormEntity;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.http.message.BasicNameValuePair;
+import org.apache.http.util.EntityUtils;
+
+    /**
+     * test token
+     */
+    public  void doPOSTParam()throws Exception{
+        // create HttpClient
+        CloseableHttpClient httpclient = HttpClients.createDefault();
+
+        // create the HTTP POST request; the context path is /dolphinscheduler
+        // (verify the exact resource path on the API documentation page)
+        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/dolphinscheduler/projects/create");
+        // pass the generated token in the "token" request header
+        httpPost.setHeader("token", "123");
+        // set parameters
+        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
+        parameters.add(new BasicNameValuePair("projectName", "qzw"));
+        parameters.add(new BasicNameValuePair("desc", "qzw"));
+        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
+        httpPost.setEntity(formEntity);
+        CloseableHttpResponse response = null;
+        try {
+            // execute
+            response = httpclient.execute(httpPost);
+            // response status code 200
+            if (response.getStatusLine().getStatusCode() == 200) {
+                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
+                System.out.println(content);
+            }
+        } finally {
+            if (response != null) {
+                response.close();
+            }
+            httpclient.close();
+        }
+    }
+```
+
+## Granted permission
+
+* Granted permissions include project permissions, resource permissions, data source permissions, and UDF function permissions.
+* The administrator can authorize projects, resources, data sources and UDF functions that were not created by ordinary users. Because the authorization methods for projects, resources, data sources and UDF functions are the same, we take project authorization as an example.
+* Note: for projects created by users themselves, the user has all permissions, so such projects are not displayed in the project list and the selected project list.
+
+- The administrator enters the Security Center -> User Management page and clicks the "Authorize" button of the user who needs to be authorized, as shown in the figure below:
+ <p align="center">
+  <img src="/img/auth-en.png" width="80%" />
+</p>
+
+- Select the project to authorize the project.
+
+<p align="center">
+   <img src="/img/authproject-en.png" width="80%" />
+ </p>
+
+- Resources, data sources, and UDF function authorization are the same as project authorization.
+
+## Worker grouping
+
+Each worker node will belong to its own worker group, and the default group is "default".
+
+When the task is executed, the task can be assigned to the specified worker group, and the task will be executed by the worker node in the group.
+
+> Add/Update worker group
+
+- Open the "conf/worker.properties" configuration file on the worker node where you want to set the groups, and modify the "worker.groups" parameter
+- The "worker.groups" parameter is followed by the name of the group corresponding to the worker node, which is “default”.
+- If the worker node corresponds to more than one group, they are separated by commas
+
+```conf
+worker.groups=default,test
+```
+- You can also modify the worker group that a worker is assigned to from the web UI; if the modification succeeds, the worker uses the new group and ignores the configuration in `worker.properties`. The steps are: "Security Center -> Worker group management -> click 'Create worker group' -> enter 'group name' -> select existing workers -> click 'Submit'".
+
+## Environmental Management
+
+* Configure the Worker operating environment online. A Worker can specify multiple environments, and each environment is equivalent to the dolphinscheduler_env.sh file.
+
+* The default environment is the dolphinscheduler_env.sh file.
+
+* When the task is executed, the task can be assigned to the designated worker group, and the corresponding environment can be selected according to the worker group. Finally, the worker node executes the environment first and then executes the task.
+
+> Add/Update environment
+
+- The environment configuration is equivalent to the configuration in the dolphinscheduler_env.sh file.
+
+  <p align="center">
+      <img src="/img/create-environment.png" width="80%" />
+  </p>
+
+> Use environment
+
+- When creating a task node in the workflow definition, select the worker group and the environment corresponding to that worker group. When the task is executed, the worker will load the environment first and then execute the task.
+
+    <p align="center">
+        <img src="/img/use-environment.png" width="80%" />
+    </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/task-instance.md b/docs/en-us/2.0.2/user_doc/guide/task-instance.md
new file mode 100644
index 0000000..6d02cdc
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task-instance.md
@@ -0,0 +1,12 @@
+
+# Task Instance
+
+- Click Project Management -> Workflow -> Task Instance to enter the task instance page, as shown in the figure below, click the name of the workflow instance, you can jump to the workflow instance DAG chart to view the task status.
+     <p align="center">
+        <img src="/img/task-list-en.png" width="80%" />
+     </p>
+
+- <span id=taskLog>View log:</span>Click the "view log" button in the operation column to view the log of task execution.
+     <p align="center">
+        <img src="/img/task-log2-en.png" width="80%" />
+     </p>
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/conditions.md b/docs/en-us/2.0.2/user_doc/guide/task/conditions.md
new file mode 100644
index 0000000..345bee8
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/conditions.md
@@ -0,0 +1,36 @@
+# Conditions
+
+Conditions is a conditional node that determines which downstream task should run based on the conditions set on it. For now, the Conditions task supports multiple upstream tasks but only two downstream tasks. When there is more than one upstream task, complex upstream dependencies can be expressed with the `and` and `or` operators.
+
+## Create
+
+Drag in the toolbar<img src="/img/conditions.png" width="20"/>The task node to the drawing board to create a new Conditions task, as shown in the figure below:
+
+  <p align="center">
+   <img src="/img/condition_dag_en.png" width="80%" />
+  </p>
+
+  <p align="center">
+   <img src="/img/condition_task_en.png" width="80%" />
+  </p>
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Downstream tasks: supports two branches for now, success and failure
+  - Success: when the Conditions task succeeds, run this downstream task
+  - Failure: when the Conditions task fails, run this downstream task
+- Upstream condition selection: one or more upstream tasks can be selected as conditions
+  - Add an upstream dependency: use the first parameter to choose the task name, and the second parameter for the status of the upstream task.
+  - Upstream task relationship: the `and` and `or` operators are used to handle complex relationships when the Conditions task has multiple upstream tasks
+
+## Related task
+
+[Switch](switch.md): the [Conditions](conditions.md) task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream node, while the [Switch](switch.md) task mainly executes the corresponding branch based on the value of a [global parameter](../parameter/global.md) and the result of a judgment expression written by the user.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/datax.md b/docs/en-us/2.0.2/user_doc/guide/task/datax.md
new file mode 100644
index 0000000..f6436bc
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/datax.md
@@ -0,0 +1,18 @@
+
+# DATAX
+
+- Drag in the toolbar<img src="/img/datax.png" width="35"/>Task node into the drawing board
+
+  <p align="center">
+   <img src="/img/datax-en.png" width="80%" />
+  </p>
+
+- Custom template: When you turn on the custom template switch, you can customize the content of the json configuration file of the datax node (applicable when the control configuration does not meet the requirements)
+- Data source: select the data source to extract the data
+- sql statement: the sql statement used to extract data from the source database; the sql query column names are automatically parsed when the node is executed and mapped to the column names of the target table. When the source table and target table column names are inconsistent, they can be converted by column alias (as)
+- Target library: select the target library for data synchronization
+- Target table: the name of the target table for data synchronization
+- Pre-sql: Pre-sql is executed before the sql statement (executed by the target library).
+- Post-sql: Post-sql is executed after the sql statement (executed by the target library).
+- json: json configuration file for datax synchronization
+- Custom parameters: SQL task type, and stored procedure is a custom parameter order to set values for the method. The custom parameter type and data type are the same as the stored procedure task type. The difference is that the SQL task type custom parameter will replace the \${variable} in the SQL statement.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/dependent.md b/docs/en-us/2.0.2/user_doc/guide/task/dependent.md
new file mode 100644
index 0000000..97c2940
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/dependent.md
@@ -0,0 +1,27 @@
+# DEPENDENT
+
+- Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node will check whether process B has a successful execution yesterday.
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node in the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/dependent-nodes-en.png" width="80%" />
+ </p>
+
+> The dependent node provides a logical judgment function, such as checking whether the B process was successful yesterday, or whether the C process was executed successfully.
+
+  <p align="center">
+   <img src="/img/depend-node-en.png" width="80%" />
+ </p>
+
+> For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:
+
+ <p align="center">
+   <img src="/img/depend-node1-en.png" width="80%" />
+ </p>
+
+> If the weekly report A also needs to be executed successfully last Tuesday:
+
+ <p align="center">
+   <img src="/img/depend-node3-en.png" width="80%" />
+ </p>
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/flink.md b/docs/en-us/2.0.2/user_doc/guide/task/flink.md
new file mode 100644
index 0000000..164826f
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/flink.md
@@ -0,0 +1,23 @@
+
+# Flink
+
+- Drag in the toolbar<img src="/img/flink.png" width="35"/>The task node to the drawing board, as shown in the following figure:
+
+<p align="center">
+  <img src="/img/flink-en.png" width="80%" />
+</p>
+
+- Program type: supports the JAVA, Scala and Python languages
+- The class of the main function: the full path of the Main Class, the entry point of the Flink program
+- Main jar package: the Flink jar package
+- Deployment mode: supports the cluster and local deployment modes
+- Number of slots: you can set the number of slots
+- Number of TaskManagers: you can set the number of TaskManagers
+- JobManager memory size: you can set the JobManager memory size
+- TaskManager memory size: you can set the TaskManager memory size
+- Command line parameters: set the input parameters of the Flink program; the substitution of custom parameter variables is supported.
+- Other parameters: support --jars, --files, --archives, --conf format
+- Resource: if a resource file is referenced in the other parameters, you need to select and specify it here
+- Custom parameter: a local user-defined parameter of Flink, which will replace the content matching \${variable} in the script
+
+Note: JAVA and Scala are only used for identification and there is no difference between them. If the Flink job is developed in Python, there is no main function class; the other settings are the same.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/http.md b/docs/en-us/2.0.2/user_doc/guide/task/http.md
new file mode 100644
index 0000000..6072e66
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/http.md
@@ -0,0 +1,23 @@
+
+# HTTP
+
+- Drag in the toolbar<img src="/img/http.png" width="35"/>The task node to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/http-en.png" width="80%" />
+ </p>
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Request address: http request URL.
+- Request type: supports GET, POST, HEAD, PUT, DELETE.
+- Request parameters: Support Parameter, Body, Headers.
+- Verification conditions: support default response code, custom response code, content included, content not included.
+- Verification content: When the verification condition selects a custom response code, the content contains, and the content does not contain, the verification content is required.
+- Custom parameter: It is a user-defined parameter of http part, which will replace the content with \${variable} in the script.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/map-reduce.md b/docs/en-us/2.0.2/user_doc/guide/task/map-reduce.md
new file mode 100644
index 0000000..e79cb79
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/map-reduce.md
@@ -0,0 +1,33 @@
+# MapReduce
+
+- Using the MR node, you can directly execute the MR program. For the mr node, the worker will use the `hadoop jar` method to submit tasks
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png) task node in the toolbar to the drawing board, as shown in the following figure:
+
+## JAVA program
+
+ <p align="center">
+   <img src="/img/mr_java_en.png" width="80%" />
+ </p>
+
+- The class of the main function: is the full path of the Main Class, the entry point of the MR program
+- Program type: select JAVA language
+- Main jar package: is the MR jar package
+- Command line parameters: set the input parameters of the MR program and support the substitution of custom parameter variables
+- Other parameters: support -D, -files, -libjars, -archives format
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with \${variable} in the script
+
+## Python program
+
+<p align="center">
+   <img src="/img/mr_edit_en.png" width="80%" />
+ </p>
+
+- Program type: select Python language
+- Main jar package: is the Python jar package for running MR
+- Other parameters: support the -D, -mapper, -reducer, -input and -output formats; here you can set user-defined parameters, for example:
+  - -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/\${currentTimeMillis}
+  - "mapper.py 1" after -mapper consists of two parts: the first is mapper.py and the second is the argument 1
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with \${variable} in the script
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/pigeon.md b/docs/en-us/2.0.2/user_doc/guide/task/pigeon.md
new file mode 100644
index 0000000..b50e1c1
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/pigeon.md
@@ -0,0 +1,19 @@
+# Pigeon
+
+Pigeon is a general WebSocket service tracking task for DolphinScheduler. It can trigger a remote WebSocket service, check its status, and fetch its logs.
+
+## Create
+
+Drag in the toolbar<img src="/img/pigeon.png" width="20"/>The task node to the drawing board to create a new Conditions task
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Target task name: Pigeon websocket service name.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/python.md b/docs/en-us/2.0.2/user_doc/guide/task/python.md
new file mode 100644
index 0000000..d70ef5c
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/python.md
@@ -0,0 +1,15 @@
+# Python
+
+- Using python nodes, you can directly execute python scripts. For python nodes, workers use the `python` command to submit tasks.
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/python-en.png" width="80%" />
+ </p>
+
+- Script: Python program developed by the user
+- Environment Name: specifies which Python interpreter is used to run `Script`. If you want to use a Python virtualenv, you should create a separate environment for each virtualenv.
+- Resources: refers to the list of resource files that need to be called in the script
+- User-defined parameter: It is a local user-defined parameter of Python, which will replace the content with \${variable} in the script
+- Note: If you import the python file under the resource directory tree, you need to add the `__init__.py` file
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/shell.md b/docs/en-us/2.0.2/user_doc/guide/task/shell.md
new file mode 100644
index 0000000..bae4cde
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/shell.md
@@ -0,0 +1,22 @@
+# Shell node
+
+> Shell node: when the task is executed, the worker generates a temporary shell script, which is executed by the Linux user with the same name as the tenant.
+
+- Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag <img src="/img/shell.png" width="35"/> from the toolbar to the drawing board, as shown in the figure below:
+
+  <p align="center">
+      <img src="/img/shell-en.png" width="80%" />
+  </p>
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- Descriptive information: describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Script: SHELL program developed by users.
+- Resource: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
+- User-defined parameters: It is a user-defined parameter that is part of SHELL, which will replace the content with \${variable} in the script.
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/spark.md b/docs/en-us/2.0.2/user_doc/guide/task/spark.md
new file mode 100644
index 0000000..71f400f
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/spark.md
@@ -0,0 +1,22 @@
+# SPARK
+
+- Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the `spark-submit` method to submit tasks
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/spark-submit-en.png" width="80%" />
+ </p>
+
+- Program type: supports JAVA, Scala and Python three languages
+- The class of the main function: is the full path of the Spark program’s entry Main Class
+- Main jar package: Spark jar package
+- Deployment mode: support three modes of yarn-cluster, yarn-client and local
+- Driver core number: you can set the number of Driver cores and the Driver memory size
+- Number of Executors: you can set the number of Executors, the Executor memory size, and the number of Executor cores
+- Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.
+- Other parameters: support --jars, --files, --archives, --conf format
+- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
+- User-defined parameter: a user-defined parameter of the Spark task, which will replace the content matching \${variable} in the script
+
+Note: JAVA and Scala are only used for identification and there is no difference between them. If the Spark job is developed in Python, there is no main function class; the other settings are the same.
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/sql.md b/docs/en-us/2.0.2/user_doc/guide/task/sql.md
new file mode 100644
index 0000000..1a0fbbf
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/sql.md
@@ -0,0 +1,22 @@
+# SQL
+
+- Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png) task node from the toolbar to the drawing board
+- Non-query SQL function: edit non-query SQL task information, select non-query for sql type, as shown in the figure below:
+ <p align="center">
+  <img src="/img/sql-en.png" width="80%" />
+</p>
+
+- Query SQL function: edit the query SQL task information, select query as the sql type, and choose form or attachment to send the result by mail to the specified recipient, as shown in the figure below.
+
+<p align="center">
+   <img src="/img/sql-node-en.png" width="80%" />
+ </p>
+
+- Data source: select the corresponding data source
+- sql type: supports query and non-query. The query is a select type query, which is returned with a result set. You can specify three templates for email notification as form, attachment or form attachment. Non-queries are returned without a result set, and are for three types of operations: update, delete, and insert.
+- sql parameter: the input parameter format is key1=value1;key2=value2...
+- sql statement: SQL statement
+- UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.
+- Custom parameters: SQL task type, and stored procedure is a custom parameter order to set values for the method. The custom parameter type and data type are the same as the stored procedure task type. The difference is that the SQL task type custom parameter will replace the \${variable} in the SQL statement.
+- Pre-sql: Pre-sql is executed before the sql statement.
+- Post-sql: Post-sql is executed after the sql statement.
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/stored-procedure.md b/docs/en-us/2.0.2/user_doc/guide/task/stored-procedure.md
new file mode 100644
index 0000000..92bcc80
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/stored-procedure.md
@@ -0,0 +1,13 @@
+# Stored Procedure
+
+- According to the selected data source, execute the stored procedure.
+
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node from the toolbar to the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/procedure-en.png" width="80%" />
+ </p>
+
+- Data source: the data source type of the stored procedure supports MySQL and PostgreSQL; select the corresponding data source
+- Method: the method name of the stored procedure
+- Custom parameters: the custom parameter types of the stored procedure support IN and OUT, and nine data types are supported: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN (see the sketch below)
\ No newline at end of file
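+
+A minimal illustration (the procedure name and parameter names are hypothetical, not part of the product):
+
+```shell
+# Hypothetical settings for a stored procedure task against a MySQL data source:
+#   Method            : refresh_daily_report
+#   Custom parameters : dt  (IN,  DATE)     -> passed into the procedure
+#                       msg (OUT, VARCHAR)  -> result returned by the procedure
+# The task invokes the procedure with the IN values and reads back the OUT values after execution.
+```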
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/sub-process.md b/docs/en-us/2.0.2/user_doc/guide/task/sub-process.md
new file mode 100644
index 0000000..f8ac1a5
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/sub-process.md
@@ -0,0 +1,14 @@
+# SubProcess
+
+- The sub-process node executes an external workflow definition as a task node.
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node from the toolbar onto the drawing board, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/sub-process-en.png" width="80%" />
+ </p>
+
+- Node name: the node name must be unique within a workflow definition
+- Run flag: indicates whether this node can be scheduled normally
+- Descriptive information: describes the function of the node
+- Timeout alarm: check timeout alarm and timeout failure; when the task runs longer than the "timeout period", an alarm email will be sent and the task execution will fail
+- Sub-node: the workflow definition of the selected sub-process. Use "Enter the sub-node" in the upper right corner to jump to the workflow definition of the selected sub-process
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/task/switch.md b/docs/en-us/2.0.2/user_doc/guide/task/switch.md
new file mode 100644
index 0000000..7dc71d5
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/task/switch.md
@@ -0,0 +1,37 @@
+# Switch
+
+Switch is a conditional judgment node: the branch to execute is determined by the value of a [global variable](../parameter/global.md) and the result of the judgment expression written by the user.
+
+## Create
+
+Drag the <img src="/img/switch.png" width="20"/> from the toolbar to create the task. **Note**: after the switch task is created, you must configure its downstream tasks so that the `Branch flow` parameter can take effect.
+
+## Parameter
+
+- Node name: the node name must be unique within a workflow definition.
+- Run flag: indicates whether this node can be scheduled normally; if it does not need to be executed, turn on the prohibition switch.
+- Descriptive information: describes the function of the node.
+- Task priority: when the number of worker threads is insufficient, tasks are executed in order from high priority to low; tasks with the same priority are executed on a first-in, first-out basis.
+- Worker grouping: tasks are assigned to the machines of the worker group for execution. If Default is selected, a worker machine is chosen at random.
+- Number of failed retry attempts: the number of times a failed task is resubmitted. It can be selected from the drop-down list or filled in manually.
+- Failed retry interval: the time interval before a failed task is resubmitted. It can be selected from the drop-down list or filled in manually.
+- Timeout alarm: check timeout alarm and timeout failure; when the task runs longer than the "timeout period", an alarm email will be sent and the task execution will fail.
+- Condition: you can configure multiple conditions for the switch task. When a condition evaluates to true, the branch configured for it will be executed. Multiple different conditions can be configured to satisfy different business needs.
+- Branch flow: the default branch flow; when all conditions are false, this branch flow is executed.
+
+## Detail
+
+Here we have four tasks with the dependencies `A -> B -> [C, D]`, where task A is a shell task and task B is a switch task.
+
+- In task A, a global variable named `id` is declared through a [global variable](../parameter/global.md); the declaration syntax is `${setValue(id=1)}` (see the sketch below)
+- Task B adds conditions and uses the global variable declared upstream for the conditional judgment (note that the global variable only has to exist when the switch runs, which means the switch task can also use global variables that are not set by its direct upstream). We want the workflow to execute task C when `id = 1` and task D otherwise
+  - To run task C when the global variable is `id=1`, edit `${id} == 1` as a condition of task B and select `C` as its branch flow
+  - For all other cases, select `D` as the default branch flow
+
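+A minimal sketch of the shell script in task A (the `echo` form shown here is one common way to emit the declaration; the variable name and value are examples only):
+
+```shell
+#!/bin/bash
+# Declare the global variable id=1 so that the downstream switch task can evaluate ${id} == 1
+echo '${setValue(id=1)}'
+```
+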
+Switch task configuration is as follows
+
+![task-switch-configure](../../../../../../img/switch_configure.jpg)
+
+## Related Task
+
+[Condition](conditions.md): the condition task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream node, whereas the [Switch](switch.md) task executes the corresponding branch based on the value of a [global parameter](../parameter/global.md) and the result of the judgment expression written by the user.
\ No newline at end of file
diff --git a/docs/en-us/2.0.2/user_doc/guide/workflow-definition.md b/docs/en-us/2.0.2/user_doc/guide/workflow-definition.md
new file mode 100644
index 0000000..47f616b
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/workflow-definition.md
@@ -0,0 +1,114 @@
+# Workflow definition
+
+## <span id=creatDag> Create workflow definition</span>
+
+- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, and click the "Create Workflow" button to enter the **workflow DAG edit** page, as shown in the following figure:
+  <p align="center">
+      <img src="/img/dag5.png" width="80%" />
+  </p>
+- Drag the <img src="/img/shell.png" width="35"/> icon from the toolbar onto the drawing board to add a Shell task, as shown in the figure below:
+  <p align="center">
+      <img src="/img/shell-en.png" width="80%" />
+  </p>
+- **Add parameter settings for this shell task:**
+
+1. Fill in the "Node Name", "Description", and "Script" fields;
+2. Check “Normal” for “Run Flag”. If “Prohibit Execution” is checked, the task will not be executed when the workflow runs;
+3. Select "Task Priority": When the number of worker threads is insufficient, high-level tasks will be executed first in the execution queue, and tasks with the same priority will be executed in the order of first in, first out;
+4. Timeout alarm (optional): check timeout alarm and timeout failure, and fill in the "timeout period". When the task execution time exceeds the **timeout period**, an alarm email will be sent and the task will fail due to timeout;
+5. Resources (optional). Resource files are files created or uploaded on the Resource Center -> File Management page. For example, the file name is `test.sh`, and the command to call the resource in the script is `sh test.sh`;
+6. Custom parameters (optional), refer to [Custom Parameters](#UserDefinedParameters);
+7. Click the "Confirm Add" button to save the task settings.
+
+- **Set the task execution order:** Click the <img src="/img/line.png" width="35"/> icon in the upper right corner to connect tasks. As shown in the figure below, tasks 2 and 3 run in parallel: when task 1 finishes executing, tasks 2 and 3 will be executed simultaneously.
+
+  <p align="center">
+     <img src="/img/dag6.png" width="80%" />
+  </p>
+
+- **Delete dependencies:** Click the "arrow" icon <img src="/img/arrow.png" width="35"/> in the upper right corner, select the connection line, and click the "Delete" icon <img src="/img/delete.png" width="35"/> in the upper right corner to delete the dependency between tasks.
+  <p align="center">
+     <img src="/img/dag7.png" width="80%" />
+  </p>
+
+- **Save workflow definition:** Click the "Save" button, and the "Set DAG chart name" dialog will pop up, as shown in the figure below. Enter the workflow definition name and description, set global parameters if needed (optional, refer to [Custom parameters](#UserDefinedParameters)), and click the "Add" button; the workflow definition is then created successfully.
+  <p align="center">
+     <img src="/img/dag8.png" width="80%" />
+   </p>
+> For other types of tasks, please refer to [Task Node Type and Parameter Settings](#TaskParamers).
+
+## Workflow definition operation function
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown below:
+
+<p align="center">
+<img src="/img/work_list_en.png" width="80%" />
+</p>
+The operation functions of the workflow definition list are as follows:
+
+- **Edit:** Only "offline" workflow definitions can be edited. Workflow DAG editing is the same as [Create Workflow Definition](#creatDag).
+- **Online:** When the workflow status is "Offline", this brings the workflow online. Only a workflow in the "Online" state can run, and it cannot be edited.
+- **Offline:** When the workflow status is "Online", this takes the workflow offline. Only a workflow in the "Offline" state can be edited, and it cannot run.
+- **Run:** Only a workflow in the online state can run. See [Run the workflow](#runWorkflow) for the operation steps.
+- **Timing:** Timing can only be set for online workflows; the system then schedules the workflow to run automatically on a regular basis. The status after creating a timing is "offline", and the timing must be brought online on the timing management page to take effect. See [Workflow timing](#creatTiming) for the operation steps.
+- **Timing Management:** On the timing management page, timings can be edited, brought online/offline, and deleted.
+- **Delete:** Delete the workflow definition.
+- **Download:** Download workflow definition to local.
+- **Tree Diagram:** Display the task node type and task status in a tree structure, as shown in the figure below:
+  <p align="center">
+      <img src="/img/tree_en.png" width="80%" />
+  </p>
+
+## <span id=runWorkflow>Run the workflow</span>
+
+- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the "Go Online" button <img src="/img/online.png" width="35"/> to bring the workflow online.
+  <p align="center">
+      <img src="/img/work_list_en.png" width="80%" />
+  </p>
+
+- Click the "Run" button to pop up the startup parameter setting pop-up box, as shown in the figure below, set the startup parameters, click the "Run" button in the pop-up box, the workflow starts running, and the workflow instance page generates a workflow instance.
+     <p align="center">
+       <img src="/img/run_work_en.png" width="80%" />
+     </p>  
+  <span id=runParamers>Description of workflow operating parameters:</span>
+
+      * Failure strategy: the strategy applied to the other parallel task nodes when a task node fails. "Continue" means that after a task fails, the other task nodes continue to execute normally; "End" means that all tasks being executed are terminated and the entire process ends.
+      * Notification strategy: when the process ends, a process execution notification email is sent according to the process status; the options are: send nothing, send on success, send on failure, send on success or failure.
+      * Process priority: the priority of process execution, divided into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in, first-out order.
+      * Worker group: the process can only be executed in the specified worker machine group. The default is Default, which allows execution on any worker.
+      * Notification group: when the notification strategy is triggered, a timeout alarm occurs, or fault tolerance occurs, process information or alarm emails are sent to all members of the notification group.
+      * Recipient: when the notification strategy is triggered, a timeout alarm occurs, or fault tolerance occurs, process information or alarm emails are sent to the recipient list.
+      * Cc: when the notification strategy is triggered, a timeout alarm occurs, or fault tolerance occurs, process information or alarm emails are copied to the CC list.
+      * Startup parameter: set or overwrite global parameter values when starting a new process instance.
+      * Complement: two modes are supported, serial complement and parallel complement. Serial complement: within the specified time range, the complement runs from the start date to the end date, generating N process instances in turn; parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances.
+    * For example, you need to fill in the data from May 1 to May 10.
+
+    <p align="center">
+        <img src="/img/complement_en1.png" width="80%" />
+    </p>
+
+  > Serial mode: The complement is executed sequentially from May 1 to May 10, and 10 process instances are generated on the process instance page;
+
+  > Parallel mode: The tasks from May 1 to May 10 are executed simultaneously, and 10 process instances are generated on the process instance page.
+
+## <span id=creatTiming>Workflow timing</span>
+
+- Create timing: Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, bring the workflow online, and click the "timing" button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog pops up, as shown in the figure below:
+  <p align="center">
+      <img src="/img/time_schedule_en.png" width="80%" />
+  </p>
+- Choose the start and end time. Within the start and end time range, the workflow runs on schedule; outside this range, no more scheduled workflow instances are generated.
+- Add a timing that is executed once every day at 5 AM, as shown in the following figure (and in the cron sketch after it):
+  <p align="center">
+      <img src="/img/timer-en.png" width="80%" />
+  </p>
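+  The timing is expressed as a Quartz crontab expression. A "daily at 05:00" schedule would, as a sketch, look like the following (field order: second, minute, hour, day of month, month, day of week, year):
+
+  ```shell
+  # Quartz crontab expression for "every day at 05:00"
+  0 0 5 * * ? *
+  ```
+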
+- Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as [workflow running parameters](#runParamers).
+- Click the "Create" button to create the timing successfully. At this time, the timing status is "**Offline**" and the timing needs to be **Online** to take effect.
+- Timing online: Click the "timing management" button <img src="/img/timeManagement.png" width="35"/> to enter the timing management page, then click the "online" button; the timing status changes to "online", as shown in the figure below, and the workflow now runs regularly as scheduled.
+  <p align="center">
+      <img src="/img/time-manage-list-en.png" width="80%" />
+  </p>
+
+## Import workflow
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, and click the "Import Workflow" button to import a local workflow file; the workflow definition list displays the imported workflow, and its status is offline.
diff --git a/docs/en-us/2.0.2/user_doc/guide/workflow-instance.md b/docs/en-us/2.0.2/user_doc/guide/workflow-instance.md
new file mode 100644
index 0000000..ac65ebe
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/guide/workflow-instance.md
@@ -0,0 +1,62 @@
+# Workflow instance
+
+## View workflow instance
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:
+     <p align="center">
+        <img src="/img/instance-list-en.png" width="80%" />
+     </p>
+- Click the workflow name to enter the DAG view page to view the task execution status, as shown in the figure below.
+  <p align="center">
+    <img src="/img/instance-runs-en.png" width="80%" />
+  </p>
+
+## View task log
+
+- Enter the workflow instance page, click the workflow name to enter the DAG view page, and double-click the task node, as shown in the following figure:
+   <p align="center">
+     <img src="/img/instanceViewLog-en.png" width="80%" />
+   </p>
+- Click "View Log", a log pop-up box will pop up, as shown in the figure below, the task log can also be viewed on the task instance page, refer to [Task View Log](#taskLog)。
+   <p align="center">
+     <img src="/img/task-log-en.png" width="80%" />
+   </p>
+
+## View task history
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Double-click the task node, as shown in the figure below, and click "View History" to jump to the task instance page, which displays the list of task instances run by this workflow instance
+   <p align="center">
+     <img src="/img/task_history_en.png" width="80%" />
+   </p>
+
+## View operating parameters
+
+- Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Click the <img src="/img/run_params_button.png" width="35"/> icon in the upper left corner to view the startup parameters of the workflow instance; click the <img src="/img/global_param.png" width="35"/> icon to view the global and local parameters of the workflow instance, as shown in the following figure:
+   <p align="center">
+     <img src="/img/run_params_en.png" width="80%" />
+   </p>
+
+## Workflow instance operation function
+
+Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:
+
+  <p align="center">
+    <img src="/img/instance-list-en.png" width="80%" />
+  </p>
+
+- **Edit:** Only terminated processes can be edited. Click the "Edit" button or the name of the workflow instance to enter the DAG edit page. After editing, click the "Save" button; the Save DAG dialog pops up, as shown in the figure below. If "Whether to update to workflow definition" is checked in the dialog, the workflow definition will be updated on save; if it is not checked, the workflow definition will not be updated.
+     <p align="center">
+       <img src="/img/editDag-en.png" width="80%" />
+     </p>
+- **Rerun:** Re-execute the terminated process.
+- **Recovery failed:** For failed processes, you can perform recovery operations, starting from the failed node.
+- **Stop:** **Stops** the running process: the background first sends `kill` to the worker process, and then executes `kill -9`.
+- **Pause:** **Pauses** the running process: the status changes to **waiting for execution**, the tasks currently being executed are allowed to finish, and the next tasks to be executed are paused.
+- **Resume pause:** Resumes the paused process, starting directly from the **paused node**.
+- **Delete:** Deletes the workflow instance and the task instances under it.
+- **Gantt chart:** The vertical axis of the Gantt chart is the topological sorting of task instances under a certain workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:
+     <p align="center">
+         <img src="/img/gantt-en.png" width="80%" />
+     </p>
diff --git a/docs/en-us/2.0.2/user_doc/upgrade.md b/docs/en-us/2.0.2/user_doc/upgrade.md
new file mode 100644
index 0000000..54161d5
--- /dev/null
+++ b/docs/en-us/2.0.2/user_doc/upgrade.md
@@ -0,0 +1,63 @@
+
+# DolphinScheduler upgrade documentation
+
+## 1. Back Up Previous Version's Files and Database.
+
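+The release does not ship a dedicated backup command, so back up the metadata database and the installation directory with your usual tools. A minimal sketch, assuming a MySQL metadata database named `dolphinscheduler` and an installation under `/opt/dolphinscheduler` (adjust hosts, credentials, and paths to your environment):
+
+```shell
+# Dump the metadata database (you will be prompted for the password)
+mysqldump -h 127.0.0.1 -u dolphinscheduler -p dolphinscheduler > dolphinscheduler-$(date +%F).sql
+
+# Archive the previous installation directory, including conf/
+tar -czf dolphinscheduler-backup-$(date +%F).tar.gz /opt/dolphinscheduler
+```
+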
+## 2. Stop All Services of DolphinScheduler.
+
+ `sh ./script/stop-all.sh`
+
+## 3. Download the New Version's Installation Package.
+
+- [Download](/en-us/download/download.html) the latest version of the installation packages.
+- The following upgrade operations need to be performed in the new version's directory.
+
+## 4. Database Upgrade
+- Modify the following properties in `conf/config/install_config.conf`.
+
+- If you use MySQL as the database to run DolphinScheduler, please comment out PostgreSQL related configurations, and add mysql connector jar into lib dir, here we download mysql-connector-java-8.0.16.jar, and then correctly config database connect information. You can download mysql connector jar [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use Postgres as database, you just need to comment out Mysql related configurations, and correctly config database conne [...]
+
+```conf
+# Database type, username, password, IP, port, metadata. For now dbtype supports `mysql`, `postgresql` and `H2`
+# Please make sure that the value of configuration is quoted in double quotation marks, otherwise may not take effect
+DATABASE_TYPE="mysql"
+SPRING_DATASOURCE_URL="jdbc:mysql://ds1:3306/ds_201_doc?useUnicode=true&characterEncoding=UTF-8"
+# Have to modify if you are not using dolphinscheduler/dolphinscheduler as your username and password
+SPRING_DATASOURCE_USERNAME="dolphinscheduler"
+SPRING_DATASOURCE_PASSWORD="dolphinscheduler"
+```
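+
+If MySQL is used, the JDBC driver also has to be placed into the `lib` directory of the new version manually, as noted above. A sketch, assuming the connector was downloaded to the current directory and the new version is unpacked under `/opt/dolphinscheduler` (both paths are examples):
+
+```shell
+# Copy the MySQL JDBC driver into the lib directory of the new DolphinScheduler version
+cp mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib/
+```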
+
+- Execute database upgrade script
+
+    `sh ./script/create-dolphinscheduler.sh`
+
+## 5. Backend Service Upgrade.
+
+### 5.1 Modify the Content in `conf/config/install_config.conf` File.
+- For standalone deployment, please refer to [6. Modify running arguments] in [Standalone-Deployment](/en-us/docs/2.0.2/user_doc/guide/installation/standalone.html).
+- For cluster deployment, please refer to [6. Modify running arguments] in [Cluster-Deployment](/en-us/docs/2.0.2/user_doc/guide/installation/cluster.html).
+
+#### Masters Need Attention
+
+1. Modify the `workers` config item in the `conf/config/install_config.conf` file.
+
+Imagine the worker services are to be deployed on the machines below:
+
+| hostname | ip |
+| :---  | :---:  |
+| ds1   | 192.168.xx.10     |
+| ds2   | 192.168.xx.11     |
+| ds3   | 192.168.xx.12     |
+
+To keep the worker group configuration consistent with the previous version, modify the `workers` config item as below:
+
+```shell
+# Which machines the worker service is deployed on, and which worker group each worker belongs to.
+workers="ds1:service1,ds2:service2,ds3:service2"
+```
+
+### 5.2 Execute Deploy Script.
+```shell
+sh install.sh
+```
+
+
diff --git a/docs/en-us/dev/user_doc/architecture/design.md b/docs/en-us/dev/user_doc/architecture/design.md
index 7dec83c..c2c89c9 100644
--- a/docs/en-us/dev/user_doc/architecture/design.md
+++ b/docs/en-us/dev/user_doc/architecture/design.md
@@ -100,7 +100,8 @@ Before explaining the architecture of the scheduling system, let's first underst
 
 * **UI** 
 
-    The front-end page of the system provides various visual operation interfaces of the system,See more at [User Manual](../guide/introduction.md) section。
+  The front-end page of the system provides various visual operation interfaces; see more in
+  the [Introduction to Functions](../guide/homepage.md) section.
 
 #### 2.3 Architecture design ideas
 
diff --git a/docs/en-us/dev/user_doc/integration/ambari.md b/docs/en-us/dev/user_doc/integration/ambari.md
deleted file mode 100644
index bbc4f85..0000000
--- a/docs/en-us/dev/user_doc/integration/ambari.md
+++ /dev/null
@@ -1,128 +0,0 @@
-### Instructions for using the DolphinScheduler's Ambari plug-in
-
-#### Note
-
-1. This document is intended for users with a basic understanding of Ambari
-2. This document is a description of adding the DolphinScheduler service to the installed Ambari service
-3. This document is based on version 2.5.2 of Ambari 
-
-#### Installation preparation
-
-1. Prepare the RPM packages
-
-   - It is generated by executing the command `mvn -U clean install -Prpmbuild -Dmaven.test.skip=true -X` in the project root directory (In the directory: dolphinscheduler-dist/target/rpm/apache-dolphinscheduler/RPMS/noarch)
-
-2. Create an installation for DolphinScheduler with the user has read and write access to the installation directory (/opt/soft)
-
-3. Install with rpm package
-
-   - Manual installation (recommended):
-      - Copy the prepared RPM packages to each node of the cluster.
-      - Execute with DolphinScheduler installation user: `rpm -ivh apache-dolphinscheduler-xxx.noarch.rpm`
-      - Mysql-connector-java packaged using the default POM file will not be included.
-      - The RPM package was packaged in the project with the installation path of /opt/soft. 
-        If you use MySQL as the database, you need to add it manually.
-      
-   - Automatic installation with Ambari
-      - Each node of the cluster needs to be configured the local yum source
-      - Copy the prepared RPM packages to each node local yum source
-
-4. Copy plug-in directory
-
-   - copy directory ambari_plugin/common-services/DOLPHIN to ambari-server/resources/common-services/
-   - copy directory ambari_plugin/statcks/DOLPHIN to ambari-server/resources/stacks/HDP/2.6/services/--stack version is selected based on the actual situation
-
-5. Initializes the database information
-
-   ```sql
-   -- Create the database for the DolphinScheduler:dolphinscheduler
-   CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-   
-   -- Initialize the user and password for the dolphinscheduler database and assign permissions
-   -- Replace the {user} in the SQL statement below with the user of the dolphinscheduler database
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-   GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-   flush privileges;
-   ```
-
-#### Ambari Install DolphinScheduler
-- **NOTE: You have to install Zookeeper first**
-
-1. Install DolphinScheduler on Ambari web interface
-
-   ![](/img/ambari-plugin/DS2_AMBARI_001.png)
-
-2. Select the nodes for the DolphinScheduler's Master installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_002.png)
-
-3. Configure the DolphinScheduler's nodes for Worker, Api, Logger, Alert installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_003.png)
-
-4. Set the installation users of the DolphinScheduler service (created in step 1) and the user groups they belong to
-
-   ![](/img/ambari-plugin/DS2_AMBARI_004.png)
-
-5. System Env Optimization will export some system environment config. Modify according to the actual situation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_020.png)
-   
-6. Configure the database information (same as in the initialization database in step 1)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_005.png)
-
-7. Configure additional information if needed
-
-   ![](/img/ambari-plugin/DS2_AMBARI_006.png)
-
-   ![](/img/ambari-plugin/DS2_AMBARI_007.png)
-
-8. Perform the next steps as normal
-
-   ![](/img/ambari-plugin/DS2_AMBARI_008.png)
-
-9. The interface after successful installation
-
-   ![](/img/ambari-plugin/DS2_AMBARI_009.png)
-   
-   
-
-------
-
-
-
-#### Add components to the node through Ambari -- for example, add a DolphinScheduler Worker
-
-***NOTE***: DolphinScheduler Logger is the installation dependent component of DS Worker in Dolphin's Ambari installation (need to add installation first; Prevent the Job log on the corresponding Worker from being checked)
-
-1. Locate the component node to add -- for example, node ark3
-
-   ![DS2_AMBARI_011](/img/ambari-plugin/DS2_AMBARI_011.png)
-
-2. Add components -- the drop-down list is all addable
-
-   ![DS2_AMBARI_012](/img/ambari-plugin/DS2_AMBARI_012.png)
-
-3. Confirm component addition
-
-   ![DS2_AMBARI_013](/img/ambari-plugin/DS2_AMBARI_013.png)
-
-4. After adding DolphinScheduler Worker and DolphinScheduler Logger components
-
-   ![DS2_AMBARI_015](/img/ambari-plugin/DS2_AMBARI_015.png)
-
-5. Start the component
-
-   ![DS2_AMBARI_016](/img/ambari-plugin/DS2_AMBARI_016.png)
-
-
-#### Remove the component from the node with Ambari
-
-1. Stop the component in the corresponding node
-
-   ![DS2_AMBARI_018](/img/ambari-plugin/DS2_AMBARI_018.png)
-
-2. Remove components
-
-   ![DS2_AMBARI_019](/img/ambari-plugin/DS2_AMBARI_019.png)
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/configuration.md b/docs/zh-cn/2.0.2/user_doc/architecture/configuration.md
new file mode 100644
index 0000000..ceb99a2
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/configuration.md
@@ -0,0 +1,406 @@
+<!-- markdown-link-check-disable -->
+
+# 前言
+本文档为dolphinscheduler配置文件说明文档,针对版本为 dolphinscheduler-1.3.x 版本.
+
+# 目录结构
+目前dolphinscheduler 所有的配置文件都在 [conf ] 目录中.
+为了更直观的了解[conf]目录所在的位置以及包含的配置文件,请查看下面dolphinscheduler安装目录的简化说明.
+本文主要讲述dolphinscheduler的配置文件.其他部分先不做赘述.
+
+[注:以下 dolphinscheduler 简称为DS.]
+```
+
+├─bin                               DS命令存放目录
+│  ├─dolphinscheduler-daemon.sh         启动/关闭DS服务脚本
+│  ├─start-all.sh                       根据配置文件启动所有DS服务
+│  ├─stop-all.sh                        根据配置文件关闭所有DS服务
+├─conf                              配置文件目录
+│  ├─application-api.properties         api服务配置文件
+│  ├─datasource.properties              数据库配置文件
+│  ├─zookeeper.properties               zookeeper配置文件
+│  ├─master.properties                  master服务配置文件
+│  ├─worker.properties                  worker服务配置文件
+│  ├─quartz.properties                  quartz服务配置文件
+│  ├─common.properties                  公共服务[存储]配置文件
+│  ├─alert.properties                   alert服务配置文件
+│  ├─config                             环境变量配置文件夹
+│      ├─install_config.conf                DS环境变量配置脚本[用于DS安装/启动]
+│  ├─env                                运行脚本环境变量配置目录
+│      ├─dolphinscheduler_env.sh            运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper文件目录
+│  ├─i18n                               i18n配置文件目录
+│  ├─logback-api.xml                    api服务日志配置文件
+│  ├─logback-master.xml                 master服务日志配置文件
+│  ├─logback-worker.xml                 worker服务日志配置文件
+│  ├─logback-alert.xml                  alert服务日志配置文件
+├─sql                               DS的元数据创建升级sql文件
+│  ├─create                             创建SQL脚本目录
+│  ├─upgrade                            升级SQL脚本目录
+│  ├─dolphinscheduler_postgre.sql       postgre数据库初始化脚本
+│  ├─dolphinscheduler_mysql.sql         mysql数据库初始化脚本
+│  ├─soft_version                       当前DS版本标识文件
+├─script                            DS服务部署,数据库创建/升级脚本目录
+│  ├─create-dolphinscheduler.sh         DS数据库初始化脚本      
+│  ├─upgrade-dolphinscheduler.sh        DS数据库升级脚本                
+│  ├─monitor-server.sh                  DS服务监控启动脚本               
+│  ├─scp-hosts.sh                       安装文件传输脚本                                                    
+│  ├─remove-zk-node.sh                  清理zookeeper缓存文件脚本       
+├─ui                                前端WEB资源目录
+├─lib                               DS依赖的jar存放目录
+├─install.sh                        自动安装DS服务脚本
+
+
+```
+
+
+# 配置文件详解
+
+序号| 服务分类 |  配置文件|
+|--|--|--|
+1|启动/关闭DS服务脚本|dolphinscheduler-daemon.sh
+2|数据库连接配置 | datasource.properties
+3|zookeeper连接配置|zookeeper.properties
+4|公共[存储]配置|common.properties
+5|API服务配置|application-api.properties
+6|Master服务配置|master.properties
+7|Worker服务配置|worker.properties
+8|Alert 服务配置|alert.properties
+9|Quartz配置|quartz.properties
+10|DS环境变量配置脚本[用于DS安装/启动]|install_config.conf
+11|运行脚本加载环境变量配置文件 <br />[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|各服务日志配置文件|api服务日志配置文件 : logback-api.xml  <br /> master服务日志配置文件  : logback-master.xml    <br /> worker服务日志配置文件 : logback-worker.xml  <br /> alert服务日志配置文件 : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [启动/关闭DS服务脚本]
+dolphinscheduler-daemon.sh脚本负责DS的启动&关闭. 
+start-all.sh/stop-all.sh最终也是通过dolphinscheduler-daemon.sh对集群进行启动/关闭操作.
+目前DS只是做了一个基本的设置,JVM参数请根据各自资源的实际情况自行设置.
+
+默认简化参数如下:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> 不建议设置"-XX:DisableExplicitGC" , DS使用Netty进行通讯,设置该参数,可能会导致内存泄漏.
+
+## 2.datasource.properties [数据库连接]
+在DS中使用Druid对数据库连接进行管理,默认简化配置如下.
+|参数 | 默认值| 描述|
+|--|--|--|
+spring.datasource.driver-class-name| |数据库驱动
+spring.datasource.url||数据库连接地址
+spring.datasource.username||数据库用户名
+spring.datasource.password||数据库密码
+spring.datasource.initialSize|5| 初始连接池数量
+spring.datasource.minIdle|5| 最小连接池数量
+spring.datasource.maxActive|5| 最大连接池数量
+spring.datasource.maxWait|60000| 最大等待时长
+spring.datasource.timeBetweenEvictionRunsMillis|60000| 连接检测周期
+spring.datasource.timeBetweenConnectErrorMillis|60000| 重试间隔
+spring.datasource.minEvictableIdleTimeMillis|300000| 连接保持空闲而不被驱逐的最小时间
+spring.datasource.validationQuery|SELECT 1|检测连接是否有效的sql
+spring.datasource.validationQueryTimeout|3| 检测连接是否有效的超时时间[seconds]
+spring.datasource.testWhileIdle|true| 申请连接的时候检测,如果空闲时间大于timeBetweenEvictionRunsMillis,执行validationQuery检测连接是否有效。
+spring.datasource.testOnBorrow|true| 申请连接时执行validationQuery检测连接是否有效
+spring.datasource.testOnReturn|false| 归还连接时执行validationQuery检测连接是否有效
+spring.datasource.defaultAutoCommit|true| 是否开启自动提交
+spring.datasource.keepAlive|true| 连接池中的minIdle数量以内的连接,空闲时间超过minEvictableIdleTimeMillis,则会执行keepAlive操作。
+spring.datasource.poolPreparedStatements|true| 开启PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| 要启用PSCache,必须配置大于0,当大于0时,poolPreparedStatements自动触发修改为true。
+
+
+## 3.zookeeper.properties [zookeeper连接配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+zookeeper.quorum|localhost:2181| zk集群连接信息
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS在zookeeper存储根目录
+zookeeper.session.timeout|60000|  session 超时
+zookeeper.connection.timeout|30000|  连接超时
+zookeeper.retry.base.sleep|100| 基本重试时间差
+zookeeper.retry.max.sleep|30000| 最大重试时间
+zookeeper.retry.maxtime|10|最大重试次数
+
+
+## 4.common.properties [hadoop、s3、yarn配置]
+common.properties配置文件目前主要是配置hadoop/s3a相关的配置. 
+|参数 |默认值| 描述| 
+|--|--|--|
+data.basedir.path|/tmp/dolphinscheduler|本地工作目录,用于存放临时文件
+resource.storage.type|NONE|资源文件存储类型: HDFS,S3,NONE
+resource.upload.path|/dolphinscheduler|资源文件存储路径
+hadoop.security.authentication.startup.state|false|hadoop是否开启kerberos权限
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos配置目录
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos登录用户
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos登录用户keytab
+kerberos.expire.time|2|kerberos过期时间,整数,单位为小时
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|资源中心支持的文件格式
+hdfs.root.user|hdfs|如果存储类型为HDFS,需要配置拥有对应操作权限的用户
+fs.defaultFS|hdfs://mycluster:8020|请求地址如果resource.storage.type=S3,该值类似为: s3a://dolphinscheduler. 如果resource.storage.type=HDFS, 如果 hadoop 配置了 HA,需要复制core-site.xml 和 hdfs-site.xml 文件到conf目录
+fs.s3a.endpoint||s3 endpoint地址
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||yarn resourcemanager 地址, 如果resourcemanager开启了HA, 输入HA的IP地址(以逗号分隔),如果resourcemanager为单节点, 该值为空即可
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|如果resourcemanager开启了HA或者没有使用resourcemanager,保持默认值即可. 如果resourcemanager为单节点,你需要将ds1 配置为resourcemanager对应的hostname
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false|是否处于开发模式
+
+
+## 5.application-api.properties [API服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+server.port|12345|api服务通讯端口
+server.servlet.session.timeout|7200|session超时时间
+server.servlet.context-path|/dolphinscheduler |请求路径
+spring.servlet.multipart.max-file-size|1024MB|最大上传文件大小
+spring.servlet.multipart.max-request-size|1024MB|最大请求大小
+server.jetty.max-http-post-size|5000000|jetty服务最大发送请求大小
+spring.messages.encoding|UTF-8|请求编码
+spring.jackson.time-zone|GMT+8|设置时区
+spring.messages.basename|i18n/messages|i18n配置
+security.authentication.type|PASSWORD|权限校验类型
+
+
+## 6.master.properties [Master服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+master.listen.port|5678|master监听端口
+master.exec.threads|100|master工作线程数量,用于限制并行的流程实例数量
+master.exec.task.num|20|master每个流程实例的并行任务数量
+master.dispatch.task.num|3|master每个批次的派发任务数量
+master.host.selector|LowerWeight|master host选择器,用于选择合适的worker执行任务,可选值: Random, RoundRobin, LowerWeight
+master.heartbeat.interval|10|master心跳间隔,单位为秒
+master.task.commit.retryTimes|5|任务重试次数
+master.task.commit.interval|1000|任务提交间隔,单位为毫秒
+master.max.cpuload.avg|-1|master最大cpuload均值,只有高于系统cpuload均值时,master服务才能调度任务. 默认值为-1: cpu cores * 2
+master.reserved.memory|0.3|master预留内存,只有低于系统可用内存时,master服务才能调度任务,单位为G
+
+
+## 7.worker.properties [Worker服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+worker.listen.port|1234|worker监听端口
+worker.exec.threads|100|worker工作线程数量,用于限制并行的任务实例数量
+worker.heartbeat.interval|10|worker心跳间隔,单位为秒
+worker.max.cpuload.avg|-1|worker最大cpuload均值,只有高于系统cpuload均值时,worker服务才能被派发任务. 默认值为-1: cpu cores * 2
+worker.reserved.memory|0.3|worker预留内存,只有低于系统可用内存时,worker服务才能被派发任务,单位为G
+worker.groups|default|worker分组配置,逗号分隔,例如'worker.groups=default,test' <br> worker启动时会根据该配置自动加入对应的分组
+
+
+## 8.alert.properties [Alert 告警服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+alert.type|EMAIL|告警类型|
+mail.protocol|SMTP| 邮件服务器协议
+mail.server.host|xxx.xxx.com|邮件服务器地址
+mail.server.port|25|邮件服务器端口
+mail.sender|xxx@xxx.com|发送人邮箱
+mail.user|xxx@xxx.com|发送人邮箱名称
+mail.passwd|111111|发送人邮箱密码
+mail.smtp.starttls.enable|true|邮箱是否开启tls
+mail.smtp.ssl.enable|false|邮箱是否开启ssl
+mail.smtp.ssl.trust|xxx.xxx.com|邮箱ssl白名单
+xls.file.path|/tmp/xls|邮箱附件临时工作目录
+||以下为企业微信配置[选填]|
+enterprise.wechat.enable|false|企业微信是否启用
+enterprise.wechat.corp.id|xxxxxxx|
+enterprise.wechat.secret|xxxxxxx|
+enterprise.wechat.agent.id|xxxxxxx|
+enterprise.wechat.users|xxxxxxx|
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
+enterprise.wechat.user.send.msg||发送消息格式
+enterprise.wechat.team.send.msg||群发消息格式
+plugin.dir|/Users/xx/your/path/to/plugin/dir|插件目录
+
+
+## 9.quartz.properties [Quartz配置]
+这里面主要是quartz配置,请结合实际业务场景&资源进行配置,本文暂时不做展开.
+|参数 |默认值| 描述| 
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS环境变量配置脚本[用于DS安装/启动]]
+install_config.conf这个配置文件比较繁琐,这个文件主要有两个地方会用到.
+* 1.DS集群的自动安装. 
+
+> 调用install.sh脚本会自动加载该文件中的配置.并根据该文件中的内容自动配置上述的配置文件中的内容. 
+> 比如:dolphinscheduler-daemon.sh、datasource.properties、zookeeper.properties、common.properties、application-api.properties、master.properties、worker.properties、alert.properties、quartz.properties 等文件.
+
+
+* 2.DS集群的启动&关闭.
+>DS集群在启动&关闭的时候,会加载该配置文件中的masters,workers,alertServer,apiServers等参数,启动/关闭DS集群.
+
+文件内容如下:
+```bash
+
+# 注意: 该配置文件中如果包含特殊字符,如: `.*[]^${}\+?|()@#&`, 请转义,
+#      示例: `[` 转义为 `\[`
+
+# 数据库类型, 目前仅支持 postgresql 或者 mysql
+dbtype="mysql"
+
+# 数据库 地址 & 端口
+dbhost="192.168.xx.xx:3306"
+
+# 数据库 名称
+dbname="dolphinscheduler"
+
+
+# 数据库 用户名
+username="xx"
+
+# 数据库 密码
+password="xx"
+
+# Zookeeper地址
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# 将DS安装到哪个目录,如: /data1_1T/dolphinscheduler,
+installPath="/data1_1T/dolphinscheduler"
+
+# 使用哪个用户部署
+# 注意: 部署用户需要sudo 权限, 并且可以操作 hdfs .
+#     如果使用hdfs的话,根目录必须使用该用户进行创建.否则会有权限相关的问题.
+deployUser="dolphinscheduler"
+
+
+# 以下为告警服务配置
+# 邮件服务器地址
+mailServerHost="smtp.exmail.qq.com"
+
+# 邮件服务器 端口
+mailServerPort="25"
+
+# 发送者
+mailSender="xxxxxxxxxx"
+
+# 发送用户
+mailUser="xxxxxxxxxx"
+
+# 邮箱密码
+mailPassword="xxxxxxxxxx"
+
+# TLS协议的邮箱设置为true,否则设置为false
+starttlsEnable="true"
+
+# 开启SSL协议的邮箱配置为true,否则为false。注意: starttlsEnable和sslEnable不能同时为true
+sslEnable="false"
+
+# 邮件服务地址值,同 mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+#业务用到的比如sql等资源文件上传到哪里,可以设置:HDFS,S3,NONE。如果想上传到HDFS,请配置为HDFS;如果不需要资源上传功能请选择NONE。
+resourceStorageType="NONE"
+
+# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
+# Note,s3 be sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# 如果resourceStorageType 为S3 需要配置的参数如下:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# 如果ResourceManager是HA,则配置为ResourceManager节点的主备ip或者hostname,比如"192.168.xx.xx,192.168.xx.xx",否则如果是单ResourceManager或者根本没用到yarn,请配置yarnHaIps=""即可,如果没用到yarn,配置为""
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+# 如果是单ResourceManager,则配置为ResourceManager节点ip或主机名,否则保持默认值即可。
+singleYarnIp="yarnIp1"
+
+# 资源文件在 HDFS/S3  存储路径
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3  操作用户
+hdfsRootUser="hdfs"
+
+# 以下为 kerberos 配置
+
+# kerberos是否开启
+kerberosStartUp="false"
+# kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+# keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+# username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# api 服务端口
+apiServerPort="12345"
+
+
+# 部署DS的所有主机hostname
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# ssh 端口 , 默认 22
+sshPort="22"
+
+# 部署master服务主机
+masters="ds1,ds2"
+
+# 部署 worker服务的主机
+# 注意: 每一个worker都需要设置一个worker 分组的名称,默认值为 "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  部署alert服务主机
+alertServer="ds3"
+
+# 部署api服务主机 
+apiServers="ds1"
+```
+
+## 11.dolphinscheduler_env.sh [环境变量配置]
+通过类似shell方式提交任务的的时候,会加载该配置文件中的环境变量到主机中.
+涉及到的任务类型有: Shell任务、Python任务、Spark任务、Flink任务、Datax任务等等
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+## 12.各服务日志配置文件
+| 对应服务 | 日志配置文件 |
+|--|--|
+| api服务 | logback-api.xml |
+| master服务 | logback-master.xml |
+| worker服务 | logback-worker.xml |
+| alert服务 | logback-alert.xml |
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/design.md b/docs/zh-cn/2.0.2/user_doc/architecture/design.md
new file mode 100644
index 0000000..4418be1
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/design.md
@@ -0,0 +1,267 @@
+## 系统架构设计
+本章节介绍Apache DolphinScheduler调度系统架构
+
+
+### 1.系统架构
+
+#### 1.1 系统架构图
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="系统架构图"  width="70%" />
+  <p align="center">
+        <em>系统架构图</em>
+  </p>
+</p>
+
+#### 1.2 启动流程活动图
+<p align="center">
+  <img src="/img/master-process-2.0-zh_cn.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>启动流程活动图</em>
+  </p>
+</p>
+
+#### 1.3 架构说明
+
+* **MasterServer** 
+
+    MasterServer采用分布式无中心设计理念,MasterServer主要负责 DAG 任务切分、任务提交监控,并同时监听其它MasterServer和WorkerServer的健康状态。
+    MasterServer服务启动时向Zookeeper注册临时节点,通过监听Zookeeper临时节点变化来进行容错处理。
+    MasterServer基于netty提供监听服务。
+
+    ##### 该服务内主要包含:
+
+    - **Distributed Quartz**分布式调度组件,主要负责定时任务的启停操作,当quartz调起任务后,Master内部会有线程池具体负责处理任务的后续操作
+
+    - **MasterSchedulerService**是一个扫描线程,定时扫描数据库中的 **command** 表,生成工作流实例,根据不同的**命令类型**进行不同的业务操作
+
+    - **WorkflowExecuteThread**主要是负责DAG任务切分、任务提交、各种不同命令类型的逻辑处理,处理任务状态和工作流状态事件
+
+    - **EventExecuteService**处理master负责的工作流实例所有的状态变化事件,使用线程池处理工作流的状态事件
+    
+    - **StateWheelExecuteThread**处理依赖任务和超时任务的定时状态更新
+
+* **WorkerServer** 
+
+     WorkerServer也采用分布式无中心设计理念,支持自定义任务插件,主要负责任务的执行和提供日志服务。
+     WorkerServer服务启动时向Zookeeper注册临时节点,并维持心跳。
+     
+     ##### 该服务包含:
+     
+     - **WorkerManagerThread**主要通过netty领取master发送过来的任务,并根据不同任务类型调用**TaskExecuteThread**对应执行器。
+     
+     - **RetryReportTaskStatusThread**主要通过netty向master汇报任务状态,如果汇报失败,会一直重试汇报
+
+     - **LoggerServer**是一个日志服务,提供日志分片查看、刷新和下载等功能
+     
+* **Registry** 
+
+    注册中心,使用插件化实现,默认支持Zookeeper, 系统中的MasterServer和WorkerServer节点通过注册中心来进行集群管理和容错。另外系统还基于注册中心进行事件监听和分布式锁。
+    
+* **Alert** 
+
+    提供告警相关功能,仅支持单机服务。支持自定义告警插件。
+
+* **API** 
+
+    API接口层,主要负责处理前端UI层的请求。该服务统一提供RESTful api向外部提供请求服务。
+    接口包括工作流的创建、定义、查询、修改、发布、下线、手工启动、停止、暂停、恢复、从该节点开始执行等等。
+
+* **UI** 
+
+  系统的前端页面,提供系统的各种可视化操作界面,详见[功能介绍](../guide/homepage.md)部分。
+
+#### 1.4 架构设计思想
+
+##### 一、去中心化vs中心化 
+
+###### 中心化思想
+
+中心化的设计理念比较简单,分布式集群中的节点按照角色分工,大体上分为两种角色:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave角色"  width="50%" />
+ </p>
+
+- Master的角色主要负责任务分发并监督Slave的健康状态,可以动态的将任务均衡到Slave上,以致Slave节点不至于“忙死”或”闲死”的状态。
+- Worker的角色主要负责任务的执行工作并维护和Master的心跳,以便Master可以分配任务给Slave。
+
+
+
+中心化思想设计存在的问题:
+
+- 一旦Master出现了问题,则群龙无首,整个集群就会崩溃。为了解决这个问题,大多数Master/Slave架构模式都采用了主备Master的设计方案,可以是热备或者冷备,也可以是自动切换或手动切换,而且越来越多的新系统都开始具备自动选举切换Master的能力,以提升系统的可用性。
+- 另外一个问题是如果Scheduler在Master上,虽然可以支持一个DAG中不同的任务运行在不同的机器上,但是会产生Master的过负载。如果Scheduler在Slave上,则一个DAG中所有的任务都只能在某一台机器上进行作业提交,则并行任务比较多的时候,Slave的压力可能会比较大。
+
+
+
+###### 去中心化
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="去中心化"  width="50%" />
+ </p>
+
+- 在去中心化设计里,通常没有Master/Slave的概念,所有的角色都是一样的,地位是平等的,全球互联网就是一个典型的去中心化的分布式系统,联网的任意节点设备down机,都只会影响很小范围的功能。
+- 去中心化设计的核心设计在于整个分布式系统中不存在一个区别于其他节点的”管理者”,因此不存在单点故障问题。但由于不存在” 管理者”节点所以每个节点都需要跟其他节点通信才得到必须要的机器信息,而分布式系统通信的不可靠性,则大大增加了上述功能的实现难度。
+- 实际上,真正去中心化的分布式系统并不多见。反而动态中心化分布式系统正在不断涌出。在这种架构下,集群中的管理者是被动态选择出来的,而不是预置的,并且集群在发生故障的时候,集群的节点会自发的举行"会议"来选举新的"管理者"去主持工作。最典型的案例就是ZooKeeper及Go语言实现的Etcd。
+
+
+- DolphinScheduler的去中心化是Master/Worker注册到Zookeeper中,实现Master集群和Worker集群无中心,使用分片机制,公平分配工作流在master上执行,并通过不同的发送策略将任务发送给worker执行具体的任务
+
+#####  二、Master执行流程
+
+1. DolphinScheduler使用分片算法将command取模,根据master的排序id分配,master将拿到的command转换成工作流实例,使用线程池处理工作流实例
+
+
+2. DolphinScheduler对工作流的处理流程:
+
+  - 通过UI或者API调用,启动工作流,持久化一条command到数据库中
+  - Master通过分片算法,扫描Command表,生成工作流实例ProcessInstance,同时删除Command数据
+  - Master使用线程池运行WorkflowExecuteThread,执行工作流实例的流程,包括构建DAG,创建任务实例TaskInstance,将TaskInstance通过netty发送给worker
+  - Worker收到任务以后,修改任务状态,并将执行信息返回Master
+  - Master收到任务信息,持久化到数据库,并且将状态变化事件存入EventExecuteService事件队列
+  - EventExecuteService根据事件队列调用WorkflowExecuteThread进行后续任务的提交和工作流状态的修改
+
+
+##### 三、容错设计
+容错分为服务宕机容错和任务重试,服务宕机容错又分为Master容错和Worker容错两种情况
+
+###### 1. 宕机容错
+
+服务容错设计依赖于ZooKeeper的Watcher机制,实现原理如图:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler容错设计"  width="40%" />
+ </p>
+其中Master监控其他Master和Worker的目录,如果监听到remove事件,则会根据具体的业务逻辑进行流程实例容错或者任务实例容错。
+
+- Master容错流程:
+
+<p align="center">
+   <img src="/img/failover-master.jpg" alt="容错流程"  width="50%" />
+ </p>
+
+容错范围:从host的维度来看,Master的容错范围包括:自身host+注册中心上不存在的节点host,容错的整个过程会加锁;
+
+容错内容:Master的容错内容包括:容错工作流实例和任务实例,在容错前会比较实例的开始时间和服务节点的启动时间,在服务启动时间之后的则跳过容错;
+
+容错后处理:ZooKeeper Master容错完成之后则重新由DolphinScheduler中Scheduler线程调度,遍历 DAG 找到”正在运行”和“提交成功”的任务,对”正在运行”的任务监控其任务实例的状态,对”提交成功”的任务需要判断Task Queue中是否已经存在,如果存在则同样监控任务实例的状态,如果不存在则重新提交任务实例。
+
+
+
+- Worker容错流程:
+
+<p align="center">
+   <img src="/img/failover-worker.jpg" alt="容错流程"  width="50%" />
+ </p>
+
+容错范围:从工作流实例的维度看,每个Master只负责容错自己的工作流实例;只有在`handleDeadServer`时会加锁;
+
+容错内容:当发送Worker节点的remove事件时,Master只容错任务实例,在容错前会比较实例的开始时间和服务节点的启动时间,在服务启动时间之后的则跳过容错;
+
+容错后处理:Master Scheduler线程一旦发现任务实例为” 需要容错”状态,则接管任务并进行重新提交。
+
+注意:由于” 网络抖动”可能会使得节点短时间内失去和ZooKeeper的心跳,从而发生节点的remove事件。对于这种情况,我们使用最简单的方式,那就是节点一旦和ZooKeeper发生超时连接,则直接将Master或Worker服务停掉。
+
+###### 2.任务失败重试
+
+这里首先要区分任务失败重试、流程失败恢复、流程失败重跑的概念:
+
+- 任务失败重试是任务级别的,是调度系统自动进行的,比如一个Shell任务设置重试次数为3次,那么在Shell任务运行失败后会自己再最多尝试运行3次
+- 流程失败恢复是流程级别的,是手动进行的,恢复是从只能**从失败的节点开始执行**或**从当前节点开始执行**
+- 流程失败重跑也是流程级别的,是手动进行的,重跑是从开始节点进行
+
+
+
+接下来说正题,我们将工作流中的任务节点分了两种类型。
+
+- 一种是业务节点,这种节点都对应一个实际的脚本或者处理语句,比如Shell节点,MR节点、Spark节点、依赖节点等。
+
+- 还有一种是逻辑节点,这种节点不做实际的脚本或语句处理,只是整个流程流转的逻辑处理,比如子流程节等。
+
+所有任务都可以配置失败重试的次数,当该任务节点失败,会自动重试,直到成功或者超过配置的重试次数。
+
+如果工作流中有任务失败达到最大重试次数,工作流就会失败停止,失败的工作流可以手动进行重跑操作或者流程恢复操作
+
+
+
+##### 四、任务优先级设计
+在早期调度设计中,如果没有优先级设计,采用公平调度设计的话,会遇到先行提交的任务可能会和后继提交的任务同时完成的情况,而不能做到设置流程或者任务的优先级,因此我们对此进行了重新设计,目前我们设计如下:
+
+-  按照**不同流程实例优先级**优先于**同一个流程实例优先级**优先于**同一流程内任务优先级**优先于**同一流程内任务**提交顺序依次从高到低进行任务处理。
+    - 具体实现是根据任务实例的json解析优先级,然后把**流程实例优先级_流程实例id_任务优先级_任务id**信息保存在ZooKeeper任务队列中,当从任务队列获取的时候,通过字符串比较即可得出最需要优先执行的任务
+
+        - 其中流程定义的优先级是考虑到有些流程需要先于其他流程进行处理,这个可以在流程启动或者定时启动时配置,共有5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="流程优先级配置"  width="40%" />
+             </p>
+
+        - 任务的优先级也分为5级,依次为HIGHEST、HIGH、MEDIUM、LOW、LOWEST。如下图
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="任务优先级配置"  width="35%" />
+             </p>
+
+
+##### 五、Logback和netty实现日志访问
+
+-  由于Web(UI)和Worker不一定在同一台机器上,所以查看日志不能像查询本地文件那样。有两种方案:
+  -  将日志放到ES搜索引擎上
+  -  通过netty通信获取远程日志信息
+
+-  介于考虑到尽可能的DolphinScheduler的轻量级性,所以选择了gRPC实现远程访问日志信息。
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc远程访问"  width="50%" />
+ </p>
+
+
+- 我们使用自定义Logback的FileAppender和Filter功能,实现每个任务实例生成一个日志文件。
+- FileAppender主要实现如下:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+
+以/流程定义id/流程实例id/任务实例id.log的形式生成日志
+
+- 过滤匹配以TaskLogInfo开始的线程名称:
+
+- TaskLogFilter实现如下:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/designplus.md b/docs/zh-cn/2.0.2/user_doc/architecture/designplus.md
new file mode 100644
index 0000000..bcd28ac
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/designplus.md
@@ -0,0 +1,58 @@
+## 名词解释
+
+在对Apache DolphinScheduler了解之前,我们先来认识一下调度系统常用的名词
+
+### 1.名词解释
+
+**DAG:** 全称Directed Acyclic Graph,简称DAG。工作流中的Task任务以有向无环图的形式组装起来,从入度为零的节点进行拓扑遍历,直到无后继节点为止。举例如下图:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag示例"  width="60%" />
+  <p align="center">
+        <em>dag示例</em>
+  </p>
+</p>
+
+**流程定义**:通过拖拽任务节点并建立任务节点的关联所形成的可视化**DAG**
+
+**流程实例**:流程实例是流程定义的实例化,可以通过手动启动或定时调度生成,流程定义每运行一次,产生一个流程实例
+
+**任务实例**:任务实例是流程定义中任务节点的实例化,标识着具体的任务执行状态
+
+**任务类型**:目前支持有SHELL、SQL、SUB_PROCESS(子流程)、PROCEDURE、MR、SPARK、PYTHON、DEPENDENT(依赖),同时计划支持动态插件扩展。注意:其中 **SUB_PROCESS**
+也是一个单独的流程定义,是可以单独启动执行的
+
+**调度方式**:系统支持基于cron表达式的定时调度和手动调度。命令类型支持:启动工作流、从当前节点开始执行、恢复被容错的工作流、恢复暂停流程、从失败节点开始执行、补数、定时、重跑、暂停、停止、恢复等待线程。
+其中 **恢复被容错的工作流** 和 **恢复等待线程** 两种命令类型是由调度内部控制使用,外部无法调用
+
+**定时调度**:系统采用 **quartz** 分布式调度器,并同时支持cron表达式可视化的生成
+
+**依赖**:系统不单单支持 **DAG** 简单的前驱和后继节点之间的依赖,同时还提供**任务依赖**节点,支持**流程间的自定义任务依赖**
+
+**优先级** :支持流程实例和任务实例的优先级,如果流程实例和任务实例的优先级不设置,则默认是先进先出
+
+**邮件告警**:支持 **SQL任务** 查询结果邮件发送,流程实例运行结果邮件告警及容错告警通知
+
+**失败策略**:对于并行运行的任务,如果有任务失败,提供两种失败策略处理方式,**继续**是指不管并行运行任务的状态,直到流程失败结束。**结束**是指一旦发现失败任务,则同时Kill掉正在运行的并行任务,流程失败结束
+
+**补数**:补历史数据,支持**区间并行和串行**两种补数方式
+
+### 2.模块介绍
+
+- dolphinscheduler-alert 告警模块,提供 AlertServer 服务。
+
+- dolphinscheduler-api web应用模块,提供 ApiServer 服务。
+
+- dolphinscheduler-common 通用的常量枚举、工具类、数据结构或者基类
+
+- dolphinscheduler-dao 提供数据库访问等操作。
+
+- dolphinscheduler-remote 基于 netty 的客户端、服务端
+
+- dolphinscheduler-server MasterServer 和 WorkerServer 服务
+
+- dolphinscheduler-service service模块,包含Quartz、Zookeeper、日志客户端访问服务,便于server模块和api模块调用
+
+- dolphinscheduler-ui 前端模块
+
+
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/listdocs.md b/docs/zh-cn/2.0.2/user_doc/architecture/listdocs.md
new file mode 100644
index 0000000..dcf5b5b
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/listdocs.md
@@ -0,0 +1,62 @@
+# 历史版本:
+#### 以下是Apache DolphinScheduler每个稳定版本的设置说明。
+
+### 版本:2.0.2
+
+#### 地址:[2.0.2 文档](/zh-cn/docs/2.0.2/user_doc/guide/quick-start.html)
+
+### 版本:2.0.1
+
+#### 地址:[2.0.1 文档](/zh-cn/docs/2.0.1/user_doc/guide/quick-start.html)
+
+### 版本:2.0.0
+
+#### 地址:[2.0.0 文档](/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html)
+
+### 版本:1.3.9
+
+#### 地址:[1.3.9 文档](/zh-cn/docs/1.3.9/user_doc/quick-start.html)
+
+### 版本:1.3.8
+
+#### 地址:[1.3.8 文档](/zh-cn/docs/1.3.8/user_doc/quick-start.html)
+
+### 版本:1.3.6
+
+#### 地址:[1.3.6 文档](/zh-cn/docs/1.3.6/user_doc/quick-start.html)
+
+### 版本:1.3.5
+
+#### 地址:[1.3.5 文档](/zh-cn/docs/1.3.5/user_doc/quick-start.html)
+
+### 版本:1.3.4
+
+##### 地址:[1.3.4 文档](/zh-cn/docs/1.3.4/user_doc/quick-start.html)
+
+### 版本:1.3.3
+
+#### 地址:[1.3.3 文档](/zh-cn/docs/1.3.4/user_doc/quick-start.html)
+
+### 版本:1.3.2
+
+#### 地址:[1.3.2 文档](/zh-cn/docs/1.3.2/user_doc/quick-start.html)
+
+### 版本:1.3.1
+
+#### 地址:[1.3.1 文档](/zh-cn/docs/1.3.1/user_doc/quick-start.html)
+
+### 版本:1.2.1
+
+#### 地址:[1.2.1 文档](/zh-cn/docs/1.2.1/user_doc/quick-start.html)
+
+### 版本:1.2.0
+
+#### 地址:[1.2.0 文档](/zh-cn/docs/1.2.0/user_doc/quick-start.html)
+
+### 版本:1.1.0
+
+#### 地址:[1.1.0 文档](/zh-cn/docs/1.2.0/user_doc/quick-start.html)
+
+### 版本:Dev
+
+#### 地址:[Dev 文档](/zh-cn/docs/dev/user_doc/guide/quick-start.html)
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/load-balance.md b/docs/zh-cn/2.0.2/user_doc/architecture/load-balance.md
new file mode 100644
index 0000000..cb381f3
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/load-balance.md
@@ -0,0 +1,58 @@
+### 负载均衡
+负载均衡即通过路由算法(通常是集群环境),合理的分摊服务器压力,达到服务器性能的最大优化。
+
+### DolphinScheduler-Worker 负载均衡算法
+
+DolphinScheduler-Master 分配任务至 worker,默认提供了三种算法:
+
+加权随机(random)
+
+平滑轮询(roundrobin)
+
+线性负载(lowerweight)
+
+默认配置为线性加权负载。
+
+由于路由是在客户端做的,即 master 服务,因此你可以更改 master.properties 中的 master.host.selector 来配置你所想要的算法。
+
+eg:master.host.selector=random(不区分大小写)
+
+### Worker 负载均衡配置
+
+配置文件 worker.properties
+
+#### 权重
+
+上述所有的负载算法都是基于权重来进行加权分配的,权重影响分流结果。你可以在 修改 worker.weight 的值来给不同的机器设置不同的权重。
+
+#### 预热
+
+考虑到 JIT 优化,我们会让 worker 在启动后低功率的运行一段时间,使其逐渐达到最佳状态,这段过程我们称之为预热。感兴趣的同学可以去阅读 JIT 相关的文章。
+
+因此 worker 在启动后,他的权重会随着时间逐渐达到最大(默认十分钟,我们没有提供配置项,如果需要,你可以修改并提交相关的 PR)。
+
+### 负载均衡算法细述
+
+#### 随机(加权)
+
+该算法比较简单,即在符合的 worker 中随机选取一台(权重会影响他的比重)。
+
+#### 平滑轮询(加权)
+
+加权轮询算法有一个明显的缺陷,即在某些特殊的权重下,加权轮询调度会生成不均匀的实例序列,这种不平滑的负载可能会使某些实例出现瞬时高负载的现象,导致系统存在宕机的风险。为了解决这个调度缺陷,我们提供了平滑加权轮询算法。
+
+每台 worker 都有两个权重,即 weight(预热完成后保持不变)和 current_weight(动态变化)。每次路由时,都会遍历所有的 worker,使其 current_weight+weight,同时累加所有 worker 的 weight,计为 total_weight,然后挑选 current_weight 最大的作为本次执行任务的 worker,与此同时,将这台 worker 的 current_weight 减去 total_weight。
+
+#### 线性加权(默认算法)
+
+该算法每隔一段时间会向注册中心上报自己的负载信息。我们主要根据两个信息来进行判断
+
+* load 平均值(默认是 CPU 核数 *2)
+* 可用物理内存(默认是 0.3,单位是 G)
+
+如果两者任何一个低于配置项,那么这台 worker 将不参与负载。(即不分配流量)
+
+你可以在 worker.properties 修改下面的属性来自定义配置
+
+* worker.max.cpuload.avg=-1 (worker最大cpuload均值,只有高于系统cpuload均值时,worker服务才能被派发任务. 默认值为-1: cpu cores * 2)
+* worker.reserved.memory=0.3 (worker预留内存,只有低于系统可用内存时,worker服务才能被派发任务,单位为G)
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/metadata.md b/docs/zh-cn/2.0.2/user_doc/architecture/metadata.md
new file mode 100644
index 0000000..6cda094
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/metadata.md
@@ -0,0 +1,185 @@
+# Dolphin Scheduler 2.0元数据文档
+
+<a name="25Ald"></a>
+### 表概览
+| 表名 | 表信息 |
+| :---: | :---: |
+| t_ds_access_token | 访问ds后端的token |
+| t_ds_alert | 告警信息 |
+| t_ds_alertgroup | 告警组 |
+| t_ds_command | 执行命令 |
+| t_ds_datasource | 数据源 |
+| t_ds_error_command | 错误命令 |
+| t_ds_process_definition | 流程定义 |
+| t_ds_process_instance | 流程实例 |
+| t_ds_project | 项目 |
+| t_ds_queue | 队列 |
+| t_ds_relation_datasource_user | 用户关联数据源 |
+| t_ds_relation_process_instance | 子流程 |
+| t_ds_relation_project_user | 用户关联项目 |
+| t_ds_relation_resources_user | 用户关联资源 |
+| t_ds_relation_udfs_user | 用户关联UDF函数 |
+| t_ds_relation_user_alertgroup | 用户关联告警组 |
+| t_ds_resources | 资源文件 |
+| t_ds_schedules | 流程定时调度 |
+| t_ds_session | 用户登录的session |
+| t_ds_task_instance | 任务实例 |
+| t_ds_tenant | 租户 |
+| t_ds_udfs | UDF资源 |
+| t_ds_user | 用户 |
+| t_ds_version | ds版本信息 |
+
+<a name="VNVGr"></a>
+### 用户	队列	数据源
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- 一个租户下可以有多个用户<br />
+- t_ds_user中的queue字段存储的是队列表中的queue_name信息,t_ds_tenant下存的是queue_id,在流程定义执行过程中,用户队列优先级最高,用户队列为空则采用租户队列<br />
+- t_ds_datasource表中的user_id字段表示创建该数据源的用户,t_ds_relation_datasource_user中的user_id表示对数据源有权限的用户<br />
+<a name="HHyGV"></a>
+### 项目	资源	告警
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- 一个用户可以有多个项目,用户项目授权通过t_ds_relation_project_user表完成project_id和user_id的关系绑定<br />
+- t_ds_project表中的user_id表示创建该项目的用户,t_ds_relation_project_user表中的user_id表示对项目有权限的用户<br />
+- t_ds_resources表中的user_id表示创建该资源的用户,t_ds_relation_resources_user中的user_id表示对资源有权限的用户<br />
+- t_ds_udfs表中的user_id表示创建该UDF的用户,t_ds_relation_udfs_user表中的user_id表示对UDF有权限的用户<br />
+<a name="Bg2Sn"></a>
+### 命令	流程	任务
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
+
+- 一个项目有多个流程定义,一个流程定义可以生成多个流程实例,一个流程实例可以生成多个任务实例<br />
+- t_ds_schedules表存放流程定义的定时调度信息<br />
+- t_ds_relation_process_instance表存放的数据用于处理流程定义中含有子流程的情况,parent_process_instance_id表示含有子流程的主流程实例id,process_instance_id表示子流程实例的id,parent_task_instance_id表示子流程节点的任务实例id,流程实例表和任务实例表分别对应t_ds_process_instance表和t_ds_task_instance表
+<a name="Pv25P"></a>
+### 核心表Schema
+<a name="32Jzd"></a>
+#### t_ds_process_definition
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 流程定义名称 |
+| version | int | 流程定义版本 |
+| release_state | tinyint | 流程定义的发布状态:0 未上线  1已上线 |
+| project_id | int | 项目id |
+| user_id | int | 流程定义所属用户id |
+| process_definition_json | longtext | 流程定义json串 |
+| description | text | 流程定义描述 |
+| global_params | text | 全局参数 |
+| flag | tinyint | 流程是否可用:0 不可用,1 可用 |
+| locations | text | 节点坐标信息 |
+| connects | text | 节点连线信息 |
+| receivers | text | 收件人 |
+| receivers_cc | text | 抄送人 |
+| create_time | datetime | 创建时间 |
+| timeout | int | 超时时间 |
+| tenant_id | int | 租户id |
+| update_time | datetime | 更新时间 |
+| modify_by | varchar | 修改用户 |
+| resource_ids | varchar | 资源id集 |
+
+<a name="e6jfz"></a>
+#### t_ds_process_instance
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 流程实例名称 |
+| process_definition_id | int | 流程定义id |
+| state | tinyint | 流程实例状态:0 提交成功,1 正在运行,2 准备暂停,3 暂停,4 准备停止,5 停止,6 失败,7 成功,8 需要容错,9 kill,10 等待线程,11 等待依赖完成 |
+| recovery | tinyint | 流程实例容错标识:0 正常,1 需要被容错重启 |
+| start_time | datetime | 流程实例开始时间 |
+| end_time | datetime | 流程实例结束时间 |
+| run_times | int | 流程实例运行次数 |
+| host | varchar | 流程实例所在的机器 |
+| command_type | tinyint | 命令类型:0 启动工作流,1 从当前节点开始执行,2 恢复被容错的工作流,3 恢复暂停流程,4 从失败节点开始执行,5 补数,6 调度,7 重跑,8 暂停,9 停止,10 恢复等待线程 |
+| command_param | text | 命令的参数(json格式) |
+| task_depend_type | tinyint | 节点依赖类型:0 当前节点,1 向前执行,2 向后执行 |
+| max_try_times | tinyint | 最大重试次数 |
+| failure_strategy | tinyint | 失败策略 0 失败后结束,1 失败后继续 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组id |
+| schedule_time | datetime | 预期运行时间 |
+| command_start_time | datetime | 开始命令时间 |
+| global_params | text | 全局参数(固化流程定义的参数) |
+| process_instance_json | longtext | 流程实例json(copy的流程定义的json) |
+| flag | tinyint | 是否可用,1 可用,0不可用 |
+| update_time | timestamp | 更新时间 |
+| is_sub_process | int | 是否是子工作流 1 是,0 不是 |
+| executor_id | int | 命令执行用户 |
+| locations | text | 节点坐标信息 |
+| connects | text | 节点连线信息 |
+| history_cmd | text | 历史命令,记录所有对流程实例的操作 |
+| dependence_schedule_times | text | 依赖节点的预估时间 |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+| timeout | int | 超时时间 |
+| tenant_id | int | 租户id |
+
+<a name="IvHEc"></a>
+#### t_ds_task_instance
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| name | varchar | 任务名称 |
+| task_type | varchar | 任务类型 |
+| process_definition_id | int | 流程定义id |
+| process_instance_id | int | 流程实例id |
+| task_json | longtext | 任务节点json |
+| state | tinyint | 任务实例状态:0 提交成功,1 正在运行,2 准备暂停,3 暂停,4 准备停止,5 停止,6 失败,7 成功,8 需要容错,9 kill,10 等待线程,11 等待依赖完成 |
+| submit_time | datetime | 任务提交时间 |
+| start_time | datetime | 任务开始时间 |
+| end_time | datetime | 任务结束时间 |
+| host | varchar | 执行任务的机器 |
+| execute_path | varchar | 任务执行路径 |
+| log_path | varchar | 任务日志路径 |
+| alert_flag | tinyint | 是否告警 |
+| retry_times | int | 重试次数 |
+| pid | int | 进程pid |
+| app_link | varchar | yarn app id |
+| flag | tinyint | 是否可用:0 不可用,1 可用 |
+| retry_interval | int | 重试间隔 |
+| max_retry_times | int | 最大重试次数 |
+| task_instance_priority | int | 任务实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+
+<a name="pPQkU"></a>
+#### t_ds_schedules
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| process_definition_id | int | 流程定义id |
+| start_time | datetime | 调度开始时间 |
+| end_time | datetime | 调度结束时间 |
+| crontab | varchar | crontab 表达式 |
+| failure_strategy | tinyint | 失败策略: 0 结束,1 继续 |
+| user_id | int | 用户id |
+| release_state | tinyint | 状态:0 未上线,1 上线 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组id |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+| create_time | datetime | 创建时间 |
+| update_time | datetime | 更新时间 |
+
+<a name="TkQzn"></a>
+#### t_ds_command
+| 字段 | 类型 | 注释 |
+| --- | --- | --- |
+| id | int | 主键 |
+| command_type | tinyint | 命令类型:0 启动工作流,1 从当前节点开始执行,2 恢复被容错的工作流,3 恢复暂停流程,4 从失败节点开始执行,5 补数,6 调度,7 重跑,8 暂停,9 停止,10 恢复等待线程 |
+| process_definition_id | int | 流程定义id |
+| command_param | text | 命令的参数(json格式) |
+| task_depend_type | tinyint | 节点依赖类型:0 当前节点,1 向前执行,2 向后执行 |
+| failure_strategy | tinyint | 失败策略:0结束,1继续 |
+| warning_type | tinyint | 告警类型:0 不发,1 流程成功发,2 流程失败发,3 成功失败都发 |
+| warning_group_id | int | 告警组 |
+| schedule_time | datetime | 预期运行时间 |
+| start_time | datetime | 开始时间 |
+| executor_id | int | 执行用户id |
+| dependence | varchar | 依赖字段 |
+| update_time | datetime | 更新时间 |
+| process_instance_priority | int | 流程实例优先级:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
+| worker_group | varchar | 任务指定运行的worker分组 |
+
+
+
diff --git a/docs/zh-cn/2.0.2/user_doc/architecture/task-structure.md b/docs/zh-cn/2.0.2/user_doc/architecture/task-structure.md
new file mode 100644
index 0000000..f369116
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/architecture/task-structure.md
@@ -0,0 +1,1134 @@
+
+# 任务总体存储结构
+在dolphinscheduler中创建的所有任务都保存在t_ds_process_definition 表中.
+
+该数据库表结构如下表所示:
+
+
+序号 | 字段  | 类型  |  描述
+-------- | ---------| -------- | ---------
+1|id|int(11)|主键
+2|name|varchar(255)|流程定义名称
+3|version|int(11)|流程定义版本
+4|release_state|tinyint(4)|流程定义的发布状态:0 未上线 ,  1已上线
+5|project_id|int(11)|项目id
+6|user_id|int(11)|流程定义所属用户id
+7|process_definition_json|longtext|流程定义JSON
+8|description|text|流程定义描述
+9|global_params|text|全局参数
+10|flag|tinyint(4)|流程是否可用:0 不可用,1 可用
+11|locations|text|节点坐标信息
+12|connects|text|节点连线信息
+13|receivers|text|收件人
+14|receivers_cc|text|抄送人
+15|create_time|datetime|创建时间
+16|timeout|int(11) |超时时间
+17|tenant_id|int(11) |租户id
+18|update_time|datetime|更新时间
+19|modify_by|varchar(36)|修改用户
+20|resource_ids|varchar(255)|资源ids
+
+其中process_definition_json 字段为核心字段, 定义了 DAG 图中的任务信息.该数据以JSON 的方式进行存储.
+
+公共的数据结构如下表.
+序号 | 字段  | 类型  |  描述
+-------- | ---------| -------- | ---------
+1|globalParams|Array|全局参数
+2|tasks|Array|流程中的任务集合  [ 各个类型的结构请参考如下章节]
+3|tenantId|int|租户id
+4|timeout|int|超时时间
+
+数据示例:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"global_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
+
+# 各任务类型存储结构详解
+
+## Shell节点
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SHELL
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |rawScript |String| Shell脚本 |
+6| | localParams| Array|自定义参数||
+7| | resourceList| Array|资源文件||
+8|description | |String|描述 | |
+9|runFlag | |String |运行标识| |
+10|conditionResult | |Object|条件分支 | |
+11| | successNode| Array|成功跳转节点| |
+12| | failedNode|Array|失败跳转节点 | 
+13| dependence| |Object |任务依赖 |与params互斥
+14|maxRetryTimes | |String|最大重试次数 | |
+15|retryInterval | |String |重试间隔| |
+16|timeout | |Object|超时控制 | |
+17| taskInstancePriority| |String|任务优先级 | |
+18|workerGroup | |String |Worker 分组| |
+19|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo \"This is a shell script\""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
+
+
+## SQL节点
+通过 SQL对指定的数据源进行数据查询、更新操作.
+
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SQL
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |type |String | 数据库类型
+6| |datasource |Int | 数据源id
+7| |sql |String | 查询SQL语句
+8| |udfs | String| udf函数|UDF函数id,以逗号分隔.
+9| |sqlType | String| SQL节点类型 |0 查询  , 1 非查询
+10| |title |String | 邮件标题
+11| |receivers |String | 收件人
+12| |receiversCc |String | 抄送人
+13| |showType | String| 邮件显示类型|TABLE 表格  ,  ATTACHMENT附件
+14| |connParams | String| 连接参数
+15| |preStatements | Array| 前置SQL
+16| | postStatements| Array|后置SQL||
+17| | localParams| Array|自定义参数||
+18|description | |String|描述 | |
+19|runFlag | |String |运行标识| |
+20|conditionResult | |Object|条件分支 | |
+21| | successNode| Array|成功跳转节点| |
+22| | failedNode|Array|失败跳转节点 | 
+23| dependence| |Object |任务依赖 |与params互斥
+24|maxRetryTimes | |String|最大重试次数 | |
+25|retryInterval | |String |重试间隔| |
+26|timeout | |Object|超时控制 | |
+27| taskInstancePriority| |String|任务优先级 | |
+28|workerGroup | |String |Worker 分组| |
+29|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , name , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE[存储过程]节点
+**节点数据结构如下:**
+**节点数据样例:**
+
+## SPARK节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SPARK
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |deployMode |String | 部署模式  |local,client,cluster
+10| |driverCores | String| driver核数
+11| |driverMemory | String| driver 内存数
+12| |numExecutors |String | executor数量
+13| |executorMemory |String | executor内存
+14| |executorCores |String | executor核数
+15| |programType | String| 程序类型|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark 版本| SPARK1 , SPARK2
+17| | localParams| Array|自定义参数
+18| | resourceList| Array|资源文件
+19|description | |String|描述 | |
+20|runFlag | |String |运行标识| |
+21|conditionResult | |Object|条件分支 | |
+22| | successNode| Array|成功跳转节点| |
+23| | failedNode|Array|失败跳转节点 | 
+24| dependence| |Object |任务依赖 |与params互斥
+25|maxRetryTimes | |String|最大重试次数 | |
+26|retryInterval | |String |重试间隔| |
+27|timeout | |Object|超时控制 | |
+28| taskInstancePriority| |String|任务优先级 | |
+29|workerGroup | |String |Worker 分组| |
+30|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## MapReduce(MR)节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |MR
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |programType | String| 程序类型|JAVA,PYTHON
+10| | localParams| Array|自定义参数
+11| | resourceList| Array|资源文件
+12|description | |String|描述 | |
+13|runFlag | |String |运行标识| |
+14|conditionResult | |Object|条件分支 | |
+15| | successNode| Array|成功跳转节点| |
+16| | failedNode|Array|失败跳转节点 | 
+17| dependence| |Object |任务依赖 |与params互斥
+18|maxRetryTimes | |String|最大重试次数 | |
+19|retryInterval | |String |重试间隔| |
+20|timeout | |Object|超时控制 | |
+21| taskInstancePriority| |String|任务优先级 | |
+22|workerGroup | |String |Worker 分组| |
+23|preTasks | |Array|前置任务 | |
+
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python节点
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |PYTHON
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |rawScript |String| Python脚本 |
+6| | localParams| Array|自定义参数||
+7| | resourceList| Array|资源文件||
+8|description | |String|描述 | |
+9|runFlag | |String |运行标识| |
+10|conditionResult | |Object|条件分支 | |
+11| | successNode| Array|成功跳转节点| |
+12| | failedNode|Array|失败跳转节点 | 
+13| dependence| |Object |任务依赖 |与params互斥
+14|maxRetryTimes | |String|最大重试次数 | |
+15|retryInterval | |String |重试间隔| |
+16|timeout | |Object|超时控制 | |
+17| taskInstancePriority| |String|任务优先级 | |
+18|workerGroup | |String |Worker 分组| |
+19|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print(\"This is a python script\")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+
+## Flink节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |FLINK
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |mainClass |String | 运行主类
+6| |mainArgs | String| 运行参数
+7| |others | String| 其他参数
+8| |mainJar |Object | 程序 jar 包
+9| |deployMode |String | 部署模式  |local,client,cluster
+10| |slot | String| slot数量
+11| |taskManager |String | taskManager数量
+12| |taskManagerMemory |String | taskManager内存数
+13| |jobManagerMemory |String | jobManager内存数
+14| |programType | String| 程序类型|JAVA,SCALA,PYTHON
+15| | localParams| Array|自定义参数
+16| | resourceList| Array|资源文件
+17|description | |String|描述 | |
+18|runFlag | |String |运行标识| |
+19|conditionResult | |Object|条件分支 | |
+20| | successNode| Array|成功跳转节点| |
+21| | failedNode|Array|失败跳转节点 | 
+22| dependence| |Object |任务依赖 |与params互斥
+23|maxRetryTimes | |String|最大重试次数 | |
+24|retryInterval | |String |重试间隔| |
+25|timeout | |Object|超时控制 | |
+26| taskInstancePriority| |String|任务优先级 | |
+27|workerGroup | |String |Worker 分组| |
+28|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP节点
+**节点数据结构如下:**
+
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |HTTP
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |url |String | 请求地址
+6| |httpMethod | String| 请求方式|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|请求参数
+8| |httpCheckCondition | String| 校验条件|默认响应码200
+9| |condition |String | 校验内容
+10| | localParams| Array|自定义参数
+11|description | |String|描述 | |
+12|runFlag | |String |运行标识| |
+13|conditionResult | |Object|条件分支 | |
+14| | successNode| Array|成功跳转节点| |
+15| | failedNode|Array|失败跳转节点 | 
+16| dependence| |Object |任务依赖 |与params互斥
+17|maxRetryTimes | |String|最大重试次数 | |
+18|retryInterval | |String |重试间隔| |
+19|timeout | |Object|超时控制 | |
+20| taskInstancePriority| |String|任务优先级 | |
+21|workerGroup | |String |Worker 分组| |
+22|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## DataX节点
+
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |DATAX
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |customConfig |Int | 自定义类型| 0定制 , 1自定义
+6| |dsType |String | 源数据库类型
+7| |dataSource |Int | 源数据库ID
+8| |dtType | String| 目标数据库类型
+9| |dataTarget | Int| 目标数据库ID 
+10| |sql |String | SQL语句
+11| |targetTable |String | 目标表
+12| |jobSpeedByte |Int | 限流(字节数)
+13| |jobSpeedRecord | Int| 限流(记录数)
+14| |preStatements | Array| 前置SQL
+15| | postStatements| Array|后置SQL
+16| | json| String|自定义配置|customConfig=1时生效
+17| | localParams| Array|自定义参数|customConfig=1时生效
+18|description | |String|描述 | |
+19|runFlag | |String |运行标识| |
+20|conditionResult | |Object|条件分支 | |
+21| | successNode| Array|成功跳转节点| |
+22| | failedNode|Array|失败跳转节点 | 
+23| dependence| |Object |任务依赖 |与params互斥
+24|maxRetryTimes | |String|最大重试次数 | |
+25|retryInterval | |String |重试间隔| |
+26|timeout | |Object|超时控制 | |
+27| taskInstancePriority| |String|任务优先级 | |
+28|workerGroup | |String |Worker 分组| |
+29|preTasks | |Array|前置任务 | |
+
+
+
+**节点数据样例:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## Sqoop节点
+
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SQOOP
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |JSON 格式
+5| | concurrency| Int|并发度
+6| | modelType|String |流向|import,export
+7| |sourceType|String |数据源类型 |
+8| |sourceParams |String| 数据源参数| JSON格式
+9| | targetType|String |目标数据类型
+10| |targetParams | String|目标数据参数|JSON格式
+11| |localParams |Array |自定义参数
+12|description | |String|描述 | |
+13|runFlag | |String |运行标识| |
+14|conditionResult | |Object|条件分支 | |
+15| | successNode| Array|成功跳转节点| |
+16| | failedNode|Array|失败跳转节点 | 
+17| dependence| |Object |任务依赖 |与params互斥
+18|maxRetryTimes | |String|最大重试次数 | |
+19|retryInterval | |String |重试间隔| |
+20|timeout | |Object|超时控制 | |
+21| taskInstancePriority| |String|任务优先级 | |
+22|workerGroup | |String |Worker 分组| |
+23|preTasks | |Array|前置任务 | |
+
+
+
+
+**节点数据样例:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{\"srcType\":\"MYSQL\",\"srcDatasource\":1,\"srcTable\":\"\",\"srcQueryType\":\"1\",\"srcQuerySql\":\"select id , name from user\",\"srcColumnType\":\"0\",\"srcColumns\":\"\",\"srcConditionList\":[],\"mapColumnHive\":[{\"prop\":\"hivetype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"hivetype-value\"}],\"mapColumnJava\":[{\"prop\":\"javatype-key\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"javatype-value\"}]}",
+                "targetParams":"{\"targetPath\":\"/user/hive/warehouse/ods.db/user\",\"deleteTargetDir\":false,\"fileType\":\"--as-avrodatafile\",\"compressionCodec\":\"snappy\",\"fieldsTerminated\":\",\",\"linesTerminated\":\"@\"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+## 条件分支节点
+
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |CONDITIONS
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 | null
+5|description | |String|描述 | |
+6|runFlag | |String |运行标识| |
+7|conditionResult | |Object|条件分支 | |
+8| | successNode| Array|成功跳转节点| |
+9| | failedNode|Array|失败跳转节点 | 
+10| dependence| |Object |任务依赖 |与params互斥
+11|maxRetryTimes | |String|最大重试次数 | |
+12|retryInterval | |String |重试间隔| |
+13|timeout | |Object|超时控制 | |
+14| taskInstancePriority| |String|任务优先级 | |
+15|workerGroup | |String |Worker 分组| |
+16|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
+
+
+## 子流程节点
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |SUB_PROCESS
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |processDefinitionId |Int| 流程定义id
+6|description | |String|描述 | |
+7|runFlag | |String |运行标识| |
+8|conditionResult | |Object|条件分支 | |
+9| | successNode| Array|成功跳转节点| |
+10| | failedNode|Array|失败跳转节点 | 
+11| dependence| |Object |任务依赖 |与params互斥
+12|maxRetryTimes | |String|最大重试次数 | |
+13|retryInterval | |String |重试间隔| |
+14|timeout | |Object|超时控制 | |
+15| taskInstancePriority| |String|任务优先级 | |
+16|workerGroup | |String |Worker 分组| |
+17|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## 依赖(DEPENDENT)节点
+**节点数据结构如下:**
+序号|参数名||类型|描述|备注
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| 任务编码|
+2|type ||String |类型 |DEPENDENT
+3| name| |String|名称 |
+4| params| |Object| 自定义参数 |Json 格式
+5| |rawScript |String| Shell脚本 |
+6| | localParams| Array|自定义参数||
+7| | resourceList| Array|资源文件||
+8|description | |String|描述 | |
+9|runFlag | |String |运行标识| |
+10|conditionResult | |Object|条件分支 | |
+11| | successNode| Array|成功跳转节点| |
+12| | failedNode|Array|失败跳转节点 | 
+13| dependence| |Object |任务依赖 |与params互斥
+14| | relation|String |关系 |AND,OR
+15| | dependTaskList|Array |依赖任务清单 |
+16|maxRetryTimes | |String|最大重试次数 | |
+17|retryInterval | |String |重试间隔| |
+18|timeout | |Object|超时控制 | |
+19| taskInstancePriority| |String|任务优先级 | |
+20|workerGroup | |String |Worker 分组| |
+21|preTasks | |Array|前置任务 | |
+
+
+**节点数据样例:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DependentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
diff --git a/docs/zh-cn/2.0.2/user_doc/expansion-reduction.md b/docs/zh-cn/2.0.2/user_doc/expansion-reduction.md
new file mode 100644
index 0000000..b3e17c2
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/expansion-reduction.md
@@ -0,0 +1,252 @@
+# DolphinScheduler扩容/缩容 文档
+
+
+## 1. DolphinScheduler扩容文档
+本文中的扩容,是指向现有的DolphinScheduler集群中添加新的master或者worker节点的操作说明.
+
+```
+ 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.
+       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.
+```
+
+### 1.1. 基础软件安装(必装项请自行安装)
+
+* [必装] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  必装,请安装好后在/etc/profile下配置 JAVA_HOME 及 PATH 变量
+* [可选] 如果扩容的是worker类型的节点,需要考虑是否要安装外部客户端,比如Hadoop、Hive、Spark 的Client.
+
+
+```markdown
+ 注意:DolphinScheduler本身不依赖Hadoop、Hive、Spark,仅是会调用他们的Client,用于对应任务的提交。
+```
+
+### 1.2. 获取安装包
+- 确认现有环境使用的DolphinScheduler是哪个版本,获取对应版本的安装包,如果版本不同,可能存在兼容性的问题.
+- 确认其他节点的统一安装目录,本文假设DolphinScheduler统一安装在 /opt/ 目录中,安装全路径为/opt/dolphinscheduler.
+- 请下载对应版本的安装包至服务器安装目录,解压并重命名为dolphinscheduler存放在/opt目录中.
+- 添加数据库依赖包,本文使用MySQL数据库,添加mysql-connector-java驱动包到/opt/dolphinscheduler/lib目录中
+```shell
+# 创建安装目录,安装目录请不要创建在/root、/home等高权限目录 
+mkdir -p /opt
+cd /opt
+# 解压缩
+tar -zxvf apache-dolphinscheduler-2.0.2-bin.tar.gz -C /opt 
+cd /opt
+mv apache-dolphinscheduler-2.0.2-bin  dolphinscheduler
+```
+
+```markdown
+ 注意:安装包可以从现有的环境直接复制到扩容的物理机上使用.
+```
+
+### 1.3. 创建部署用户
+
+- 在**所有**扩容的机器上创建部署用户,并且一定要配置sudo免密。假如我们计划在ds1,ds2,ds3,ds4这四台扩容机器上部署调度,首先需要在每台机器上都创建部署用户
+
+```shell
+# 创建用户需使用root登录,设置部署用户名,请自行修改,后面以dolphinscheduler为例
+useradd dolphinscheduler;
+
+# 设置用户密码,请自行修改,后面以dolphinscheduler123为例
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# 配置sudo免密
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```markdown
+ 注意:
+ - 因为是以 sudo -u {linux-user} 切换不同linux用户的方式来实现多租户运行作业,所以部署用户需要有 sudo 权限,而且是免密的。
+ - 如果发现/etc/sudoers文件中有"Defaults requiretty"这行,也请注释掉
+ - 如果用到资源上传的话,还需要在`HDFS或者MinIO`上给该部署用户分配读写的权限
+```
+
+### 1.4. 修改配置
+
+- 从现有的节点比如Master/Worker节点,直接拷贝conf目录替换掉新增节点中的conf目录.拷贝之后检查一下配置项是否正确.
+    
+    ```markdown
+    重点检查:
+    datasource.properties 中的数据库连接信息. 
+    zookeeper.properties 中的连接zk的信息.
+    common.properties 中关于资源存储的配置信息(如果设置了hadoop,请检查是否存在core-site.xml和hdfs-site.xml配置文件).
+    env/dolphinscheduler_env.sh 中的环境变量
+    ```
+
+- 根据机器配置,修改 conf/env 目录下的 `dolphinscheduler_env.sh` 环境变量(以相关用到的软件都安装在/opt/soft下为例)
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        #export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    
+    ```
+
+     `注: 这一步非常重要,例如 JAVA_HOME 和 PATH 是必须要配置的,没有用到的可以忽略或者注释掉`
+
+
+- 将jdk软链到/usr/bin/java下(仍以 JAVA_HOME=/opt/soft/java 为例)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+    
+    * 新增的master节点, 需要修改 ips 和 masters 参数.
+    * 新增的worker节点, 需要修改 ips 和  workers 参数.
+
+```shell
+#在哪些机器上新增部署DS服务,多个物理机之间用逗号隔开.
+ips="ds1,ds2,ds3,ds4"
+
+#ssh端口,默认22
+sshPort="22"
+
+#master服务部署在哪台机器上
+masters="现有master01,现有master02,ds1,ds2"
+
+#worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,下面示例的default即为组名
+workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
+
+```
+- 如果扩容的是worker节点,需要设置worker分组.请参考用户手册[5.7 创建worker分组 ](https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html)
+
+- 在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限
+
+```shell
+sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
+```
+
+
+
+### 1.5. 重启集群&验证
+
+- 重启集群
+
+```shell
+停止命令:
+bin/stop-all.sh 停止所有服务
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  停止 master 服务
+sh bin/dolphinscheduler-daemon.sh stop worker-server  停止 worker 服务
+sh bin/dolphinscheduler-daemon.sh stop logger-server  停止 logger  服务
+sh bin/dolphinscheduler-daemon.sh stop api-server     停止 api    服务
+sh bin/dolphinscheduler-daemon.sh stop alert-server   停止 alert  服务
+
+
+启动命令:
+bin/start-all.sh 启动所有服务
+
+sh bin/dolphinscheduler-daemon.sh start master-server  启动 master 服务
+sh bin/dolphinscheduler-daemon.sh start worker-server  启动 worker 服务
+sh bin/dolphinscheduler-daemon.sh start logger-server  启动 logger  服务
+sh bin/dolphinscheduler-daemon.sh start api-server     启动 api    服务
+sh bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
+
+```
+
+```
+ 注意: 使用stop-all.sh或者start-all.sh的时候,如果执行该命令的物理机没有配置到所有机器的ssh免密登录的话,会提示输入密码
+```
+
+
+- 脚本完成后,使用`jps`命令查看各个节点服务是否启动(`jps`为`java JDK`自带)
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+
+启动成功后,可以进行日志查看,日志统一存放于logs文件夹内
+
+```日志路径
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+    └── dolphinscheduler-logger-server.log
+```
+如果以上服务都正常启动且调度系统页面正常,在web系统的[监控中心]查看是否有扩容的Master或者Worker服务.如果存在,则扩容完成
+
+-----------------------------------------------------------------------------
+
+## 2. 缩容
+缩容是针对现有的DolphinScheduler集群减少master或者worker服务,
+缩容一共分两个步骤,执行完以下两步,即可完成缩容操作.
+
+### 2.1 停止缩容节点上的服务
+ * 如果缩容master节点,要确定要缩容master服务所在的物理机,并在物理机上停止该master服务.
+ * 如果缩容worker节点,要确定要缩容worker服务所在的物理机,并在物理机上停止worker和logger服务.
+ 
+```shell
+停止命令:
+bin/stop-all.sh 停止所有服务
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  停止 master 服务
+sh bin/dolphinscheduler-daemon.sh stop worker-server  停止 worker 服务
+sh bin/dolphinscheduler-daemon.sh stop logger-server  停止 logger  服务
+sh bin/dolphinscheduler-daemon.sh stop api-server     停止 api    服务
+sh bin/dolphinscheduler-daemon.sh stop alert-server   停止 alert  服务
+
+
+启动命令:
+bin/start-all.sh 启动所有服务
+
+sh bin/dolphinscheduler-daemon.sh start master-server  启动 master 服务
+sh bin/dolphinscheduler-daemon.sh start worker-server  启动 worker 服务
+sh bin/dolphinscheduler-daemon.sh start logger-server  启动 logger  服务
+sh bin/dolphinscheduler-daemon.sh start api-server     启动 api    服务
+sh bin/dolphinscheduler-daemon.sh start alert-server   启动 alert  服务
+
+```
+
+```
+ 注意: 使用stop-all.sh或者start-all.sh的时候,如果执行该命令的机器没有配置到所有机器的ssh免密登录的话,会提示输入密码
+```
+
+- 脚本完成后,使用`jps`命令查看各个节点服务是否成功关闭(`jps`为`java JDK`自带)
+
+```
+    MasterServer         ----- master服务
+    WorkerServer         ----- worker服务
+    LoggerServer         ----- logger服务
+    ApiApplicationServer ----- api服务
+    AlertServer          ----- alert服务
+```
+如果对应的master服务或者worker服务不存在,则代表master/worker服务成功关闭.
+
+
+### 2.2 修改配置文件
+
+ - 修改 **所有** 节点上的配置文件 `conf/config/install_config.conf`, 同步修改以下配置.
+    
+    * 缩容master节点, 需要修改 ips 和 masters 参数.
+    * 缩容worker节点, 需要修改 ips 和  workers 参数.
+
+```shell
+#在哪些机器上部署DS服务,本机选localhost
+ips="ds1,ds2,ds3,ds4"
+
+#ssh端口,默认22
+sshPort="22"
+
+#master服务部署在哪台机器上
+masters="现有master01,现有master02,ds1,ds2"
+
+#worker服务部署在哪台机器上,并指定此worker属于哪一个worker组,下面示例的default即为组名
+workers="现有worker01:default,现有worker02:default,ds3:default,ds4:default"
+
+```
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md b/docs/zh-cn/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md
new file mode 100644
index 0000000..a2659b7
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/alert/alert_plugin_user_guide.md
@@ -0,0 +1,12 @@
+## 如何创建告警插件以及告警组
+
+在2.0.2版本中,用户需要创建告警实例,然后同告警组进行关联,一个告警组可以使用多个告警实例,我们会逐一进行告警通知。
+
+首先需要进入安全中心,点击左侧的告警实例管理,创建一个告警实例,然后选择对应的告警插件,并填写相关的告警参数。
+
+然后选择告警组管理,创建告警组,选择相应的告警实例即可。
+
+<img src="/img/alert/alert_step_1.png">
+<img src="/img/alert/alert_step_2.png">
+<img src="/img/alert/alert_step_3.png">
+<img src="/img/alert/alert_step_4.png">
\ No newline at end of file
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/alert/enterprise-wechat.md b/docs/zh-cn/2.0.2/user_doc/guide/alert/enterprise-wechat.md
new file mode 100644
index 0000000..ec3de42
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/alert/enterprise-wechat.md
@@ -0,0 +1,11 @@
+# 企业微信
+
+如果您需要使用到企业微信进行告警,请在告警实例管理里创建告警实例,选择 WeChat 插件。企业微信的配置样例如下
+
+![enterprise-wechat-plugin](/img/alert/enterprise-wechat-plugin.png)
+
+其中 send.type 分别对应企微文档:
+应用:https://work.weixin.qq.com/api/doc/90000/90135/90236
+群聊:https://work.weixin.qq.com/api/doc/90000/90135/90248
+
+user.send.msg 对应文档中的 content,其中消息内容对应的变量为 {msg}
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/datasource/hive.md b/docs/zh-cn/2.0.2/user_doc/guide/datasource/hive.md
new file mode 100644
index 0000000..59c3c23
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/datasource/hive.md
@@ -0,0 +1,42 @@
+# HIVE数据源
+
+## 使用HiveServer2
+
+ <p align="center">
+    <img src="/img/hive_edit.png" width="80%" />
+  </p>
+
+- 数据源:选择HIVE
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接HIVE的IP
+- 端口:输入连接HIVE的端口
+- 用户名:设置连接HIVE的用户名
+- 密码:设置连接HIVE的密码
+- 数据库名:输入连接HIVE的数据库名称
+- Jdbc连接参数:用于HIVE连接的参数设置,以JSON形式填写
+
+## 使用HiveServer2 HA Zookeeper
+
+ <p align="center">
+    <img src="/img/hive_edit2.png" width="80%" />
+  </p>
+
+
+注意:如果没有开启kerberos,请保证参数 `hadoop.security.authentication.startup.state` 值为 `false`,
+参数 `java.security.krb5.conf.path` 值为空. 如果开启了**kerberos**,则需要在 `common.properties` 配置以下参数
+
+```conf
+# whether to startup kerberos
+hadoop.security.authentication.startup.state=true
+
+# java.security.krb5.conf path
+java.security.krb5.conf.path=/opt/krb5.conf
+
+# login user from keytab username
+login.user.keytab.username=hdfs-mycluster@ESZ.COM
+
+# login user from keytab path
+login.user.keytab.path=/opt/hdfs.headless.keytab
+```
+
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/datasource/introduction.md b/docs/zh-cn/2.0.2/user_doc/guide/datasource/introduction.md
new file mode 100644
index 0000000..77e91bc
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/datasource/introduction.md
@@ -0,0 +1,6 @@
+# 数据源
+
+数据源中心支持MySQL、POSTGRESQL、HIVE/IMPALA、SPARK、CLICKHOUSE、ORACLE、SQLSERVER等数据源。
+
+- 点击“数据源中心->创建数据源”,根据需求创建不同类型的数据源。
+- 点击“测试连接”,测试数据源是否可以连接成功。
\ No newline at end of file
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/datasource/mysql.md b/docs/zh-cn/2.0.2/user_doc/guide/datasource/mysql.md
new file mode 100644
index 0000000..0b0affd
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/datasource/mysql.md
@@ -0,0 +1,15 @@
+# MySQL数据源
+
+- 数据源:选择MYSQL
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接MySQL的IP
+- 端口:输入连接MySQL的端口
+- 用户名:设置连接MySQL的用户名
+- 密码:设置连接MySQL的密码
+- 数据库名:输入连接MySQL的数据库名称
+- Jdbc连接参数:用于MySQL连接的参数设置,以JSON形式填写
+
+<p align="center">
+   <img src="/img/mysql_edit.png" width="80%" />
+</p>
\ No newline at end of file
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/datasource/postgresql.md b/docs/zh-cn/2.0.2/user_doc/guide/datasource/postgresql.md
new file mode 100644
index 0000000..ea9c94d
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/datasource/postgresql.md
@@ -0,0 +1,15 @@
+# POSTGRESQL数据源
+
+- 数据源:选择POSTGRESQL
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接POSTGRESQL的IP
+- 端口:输入连接POSTGRESQL的端口
+- 用户名:设置连接POSTGRESQL的用户名
+- 密码:设置连接POSTGRESQL的密码
+- 数据库名:输入连接POSTGRESQL的数据库名称
+- Jdbc连接参数:用于POSTGRESQL连接的参数设置,以JSON形式填写
+
+<p align="center">
+   <img src="/img/postgresql_edit.png" width="80%" />
+ </p>
\ No newline at end of file
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/datasource/spark.md b/docs/zh-cn/2.0.2/user_doc/guide/datasource/spark.md
new file mode 100644
index 0000000..5145406
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/datasource/spark.md
@@ -0,0 +1,21 @@
+# Spark数据源
+
+<p align="center">
+   <img src="/img/spark_datesource.png" width="80%" />
+ </p>
+
+- 数据源:选择Spark
+- 数据源名称:输入数据源的名称
+- 描述:输入数据源的描述
+- IP/主机名:输入连接Spark的IP
+- 端口:输入连接Spark的端口
+- 用户名:设置连接Spark的用户名
+- 密码:设置连接Spark的密码
+- 数据库名:输入连接Spark的数据库名称
+- Jdbc连接参数:用于Spark连接的参数设置,以JSON形式填写
+
+注意:如果开启了**kerberos**,则需要填写 **Principal**
+
+<p align="center">
+    <img src="/img/sparksql_kerberos.png" width="80%" />
+  </p>
diff --git a/docs/zh-cn/2.0.2/user_doc/guide/flink-call.md b/docs/zh-cn/2.0.2/user_doc/guide/flink-call.md
new file mode 100644
index 0000000..547efa0
--- /dev/null
+++ b/docs/zh-cn/2.0.2/user_doc/guide/flink-call.md
@@ -0,0 +1,150 @@
+# 调用 flink 操作步骤
+
+### 创建队列
+
+1. 登录调度系统,点击 "安全中心",再点击左侧的 "队列管理",然后点击 "创建队列" 创建队列
+2. 填写队列名称和队列值,然后点击 "提交" 
+
+<p align="center">
+   <img src="/img/api/create_queue.png" width="80%" />
+ </p>
+
+
+
+
+### 创建租户
+
+```
+1.租户对应的是 linux 用户,是 worker 提交作业所使用的用户,如果 linux 没有这个用户,worker 会在执行脚本的时候创建这个用户
+2.租户和租户编码都是唯一不能重复,好比一个人有名字有身份证号。
+3.创建完租户会在 hdfs 对应的目录上有相关的文件夹。
+```
+
+<p align="center">
+   <img src="/img/api/create_tenant.png" width="80%" />
+ </p>
+
+
+
+
+### 创建用户
+
+<p align="center">
+   <img src="/img/api/create_user.png" width="80%" />
+ </p>
+
+
+
+
+### 创建 Token
+
+1. 登录调度系统,点击 "安全中心",再点击左侧的 "令牌管理",然后点击 "创建令牌" 创建令牌
+
+<p align="center">
+   <img src="/img/token-management.png" width="80%" />
+ </p>
+
+
+2. 选择 "失效时间" (Token有效期),选择 "用户" (以指定的用户执行接口操作),点击 "生成令牌" ,拷贝 Token 字符串,然后点击 "提交" 
+
+<p align="center">
+   <img src="/img/create-token.png" width="80%" />
+ </p>
+
+
+### 使用 Token
+
+1. 打开 API文档页面
+
+   > 地址:http://{api server ip}:12345/dolphinscheduler/doc.html?language=zh_CN&lang=cn
+
+<p align="center">
+   <img src="/img/api-documentation.png" width="80%" />
+ </p>
+
+
+2. 选一个测试的接口,本次测试选取的接口是:查询所有项目
+
+   > projects/query-project-list
+
+3. 打开 Postman,填写接口地址,并在 Headers 中填写 Token,发送请求后即可查看结果
+
+   ```
... 4443 lines suppressed ...