Posted to commits@dolphinscheduler.apache.org by ki...@apache.org on 2021/10/26 03:30:08 UTC

[dolphinscheduler-website] branch master updated: Move dev doc from docs to development (#472)

This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 0e9e6f1  Move dev doc from docs to development (#472)
0e9e6f1 is described below

commit 0e9e6f1e06eeaa9d819b6eeea2484cc645f5c954
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Tue Oct 26 11:30:01 2021 +0800

    Move dev doc from docs to development (#472)
    
    * Unify development setup, and delete dev_run
    * Add mechanism to keep DS design detail
---
 development/en-us/backend/backend-development.md   |  52 ------
 .../en-us/backend/mechanism/global-parameter.md    |  61 +++++++
 development/en-us/backend/mechanism/overview.md    |   6 +
 development/en-us/backend/mechanism/task/switch.md |   8 +
 development/en-us/development-environment-setup.md | 202 ++++++++++-----------
 development/zh-cn/backend/backend-development.md   |  52 ------
 .../zh-cn/backend/mechanism/global-parameter.md    |  61 +++++++
 development/zh-cn/backend/mechanism/overview.md    |   6 +
 development/zh-cn/backend/mechanism/task/switch.md |   8 +
 development/zh-cn/development-environment-setup.md | 197 +++++++++-----------
 docs/en-us/dev/user_doc/globalParams.md            |  69 -------
 docs/zh-cn/dev/user_doc/dev_run.md                 | 125 -------------
 docs/zh-cn/dev/user_doc/globalParams.md            |  73 --------
 docs/zh-cn/dev/user_doc/switch_node.md             |  15 --
 site_config/development.js                         |  16 +-
 site_config/docsdev.js                             |  30 ---
 16 files changed, 341 insertions(+), 640 deletions(-)

diff --git a/development/en-us/backend/backend-development.md b/development/en-us/backend/backend-development.md
deleted file mode 100644
index 1b13823..0000000
--- a/development/en-us/backend/backend-development.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# Backend development documentation
-
-## Environmental requirements
-
- * MySQL (5.5+) : Must be installed
- * JDK (1.8+) : Must be installed
- * ZooKeeper (3.4.6+) : Must be installed
- * Maven (3.3+) : Must be installed
-
-Because the dolphinscheduler-rpc module in DolphinScheduler uses Grpc, you need to use Maven to compile the generated classes.
-For those who are not familiar with maven, please refer to: [maven in five minutes](http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)(3.3+)
-
-http://maven.apache.org/install.html
-
-## Project compilation
-After importing the DolphinScheduler source code into the development tools such as Idea, first convert to the Maven project (right click and select "Add Framework Support")
-
-* Execute the compile command:
-
-when deploy version >= 1.2.0 , please use:
-```
- mvn -U clean package -Prelease -Dmaven.test.skip=true
-```
-
-before 1.2.0, please use:
-```
- mvn -U clean package assembly:assembly -Dmaven.test.skip=true
-```
-
-* View directory
-
-After normal compilation, it will generate `./dolphinscheduler-dist/target/apache-dolphinscheduler-{version}-bin.tar.gz` in the current directory.
-
-```
-bin
-conf
-lib
-script
-sql
-install.sh
-```
-
-- Description
-
-```
-bin : basic service startup script
-conf : project configuration file
-lib : the project depends on the jar package, including the various module jars and third-party jars
-script : cluster start, stop, and service monitoring start and stop scripts
-sql : project depends on sql file
-install.sh : one-click deployment script
-```
diff --git a/development/en-us/backend/mechanism/global-parameter.md b/development/en-us/backend/mechanism/global-parameter.md
new file mode 100644
index 0000000..53b7374
--- /dev/null
+++ b/development/en-us/backend/mechanism/global-parameter.md
@@ -0,0 +1,61 @@
+# Global Parameter development document
+
+After the user defines a parameter with direction OUT, it is saved in the `localParam` of the task.
+
+## Usage of parameters
+
+Get the direct predecessor nodes `preTasks` of the `taskInstance` to be created from the DAG, collect the `varPool` of each of the `preTasks`, and merge these varPools (`List<Property>`) into one `varPool`. During the merge, if parameters with the same name are found, they are handled according to the following rules:
+
+* If all the values are null, the merged value is null
+* If one and only one value is non-null, then the merged value is the non-null value
+* If all the values are non-null, the merged value is taken from the varPool of the taskInstance with the earliest end time.
+
+The direction of all the merged properties is updated to IN during the merge process.
+
+The result of the merge is saved in taskInstance.varPool.
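The merge rules above can be sketched as follows. The real implementation lives in DolphinScheduler's Java backend; this is a simplified, hypothetical Python sketch (names such as `merge_var_pools` and the `prop`/`value`/`end_time` fields are illustrative, not the project's API):

```python
def merge_var_pools(var_pools):
    """Merge the varPools of all predecessor tasks into one varPool.

    var_pools: list of varPools, each a list of dicts like
        {"prop": name, "value": value_or_None, "end_time": sortable}
    Returns a dict mapping prop -> merged property, direction forced to IN.
    """
    by_name = {}
    for pool in var_pools:
        for p in pool:
            by_name.setdefault(p["prop"], []).append(p)

    result = {}
    for name, candidates in by_name.items():
        non_null = [c for c in candidates if c["value"] is not None]
        if not non_null:                 # all values are null -> null
            value = None
        elif len(non_null) == 1:         # exactly one non-null -> that value
            value = non_null[0]["value"]
        else:                            # several non-null -> the value from the
            # taskInstance with the earliest end time wins
            value = min(non_null, key=lambda c: c["end_time"])["value"]
        result[name] = {"prop": name, "value": value, "direct": "IN"}
    return result
```
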
+
+The worker receives and parses the varPool into the format of `Map<String,Property>`, where the key of the map is property.prop, which is the parameter name.
+
+When the processor handles the parameters, it merges the varPool, localParam and globalParam parameters; if parameter names are duplicated during the merge, they are replaced according to the following priorities, with the higher priority retained and the lower priority replaced:
+
+* globalParam: high
+* varPool: middle
+* localParam: low
+
+Before the node content is executed, occurrences of `${parameter name}` are matched with a regular expression and replaced with the corresponding values.
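The priority merge and the `${parameter name}` substitution can be sketched together. This is an illustrative Python sketch, not the actual Java code; the function name `resolve_params` is hypothetical:

```python
import re

def resolve_params(global_param, var_pool, local_param, content):
    """Merge the three parameter maps by priority and substitute ${name}.

    Lower-priority maps are written first so higher-priority entries
    overwrite them: localParam (low) < varPool (middle) < globalParam (high).
    """
    merged = {}
    merged.update(local_param)   # low priority
    merged.update(var_pool)      # middle priority
    merged.update(global_param)  # high priority
    # replace every ${name} in the node content with the merged value;
    # unknown names are left untouched
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: merged.get(m.group(1), m.group(0)),
                  content)
```
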
+
+## Parameter setting
+
+Currently, only SQL and SHELL nodes support getting parameters.
+
+Get the parameters with direction OUT from `localParam`, and process them as follows according to the node type.
+
+### SQL node
+
+The structure returned by the query is `List<Map<String,String>>`, where each element of the List is a row of data, the key of the Map is the column name, and the value is the value of that column.
+
+* If the SQL statement returns one row of data, match the column names against the OUT parameter names the user defined in the task; discard any parameter that does not match.
+* If the SQL statement returns multiple rows of data, match the column names against the OUT parameter names of type LIST the user defined in the task, convert all rows of the matching column to `List<String>` as the value of that parameter, and discard any parameter that does not match.
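The two SQL cases above can be sketched like this. A simplified, hypothetical Python sketch (the real code is Java; `sql_out_params` and the `"LIST"` type tag are illustrative):

```python
def sql_out_params(rows, out_params):
    """Match SQL query results against user-defined OUT parameters.

    rows: List of dicts, one per row, column name -> value.
    out_params: dict of OUT parameter name -> declared type (e.g. "LIST").
    Returns parameter name -> value; unmatched parameters are discarded.
    """
    result = {}
    if not rows:
        return result
    if len(rows) == 1:
        # single row: OUT parameter name must match a column name
        for name in out_params:
            if name in rows[0]:
                result[name] = rows[0][name]
    else:
        # multiple rows: only OUT parameters of type LIST are matched,
        # and all rows of the matching column become a list value
        for name, ptype in out_params.items():
            if ptype == "LIST" and name in rows[0]:
                result[name] = [row[name] for row in rows]
    return result
```
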
+
+### SHELL node
+
+The result of the processor execution is returned as `Map<String,String>`.
+
+The user needs to define `${setValue(key=value)}` in the output when defining the shell script.
+
+When processing parameters, strip the `${setValue()}` wrapper and split the content by "=": the 0th part is the key and the 1st part is the value.
+
+Then match the key against the OUT parameter names the user defined in the task, and use the value as the value of that parameter.
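The `${setValue(key=value)}` parsing can be sketched as follows. An illustrative Python sketch under the stated assumptions (the actual implementation is Java; `parse_set_value` and `shell_out_params` are hypothetical names):

```python
def parse_set_value(line):
    """Parse one '${setValue(key=value)}' line into (key, value), else None."""
    inner = line.strip()
    prefix, suffix = "${setValue(", ")}"
    if not (inner.startswith(prefix) and inner.endswith(suffix)):
        return None
    body = inner[len(prefix):-len(suffix)]
    # split on the first "=": the 0th part is the key, the rest is the value
    key, _, value = body.partition("=")
    return key, value

def shell_out_params(output_lines, out_param_names):
    """Keep only parsed key/value pairs whose key is a user-defined OUT name."""
    result = {}
    for line in output_lines:
        kv = parse_set_value(line)
        if kv and kv[0] in out_param_names:
            result[kv[0]] = kv[1]
    return result
```
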
+
+Return parameter processing:
+
+* The result acquired from the processor is a String.
+* Check whether the processor result is empty; exit if it is.
+* Check whether the localParam is empty; exit if it is.
+* Get the parameters of localParam whose direction is OUT; exit if there are none.
+* Format the String as per the formats above (`List<Map<String,String>>` for SQL, `Map<String,String>` for SHELL).
+
+Assign the parameters with matched values to the varPool (`List<Property>`, which also contains the original IN parameters).
+
+* Format the varPool as json and pass it to the master.
+* After the master receives the varPool, the parameters whose direction is OUT are written back into the localParam.
diff --git a/development/en-us/backend/mechanism/overview.md b/development/en-us/backend/mechanism/overview.md
new file mode 100644
index 0000000..4f0d592
--- /dev/null
+++ b/development/en-us/backend/mechanism/overview.md
@@ -0,0 +1,6 @@
+# Overview
+
+<!-- TODO Since the side menu does not support multiple levels, add new page to keep all sub page here -->
+
+* [Global Parameter](global-parameter.md)
+* [Switch Task type](task/switch.md)
diff --git a/development/en-us/backend/mechanism/task/switch.md b/development/en-us/backend/mechanism/task/switch.md
new file mode 100644
index 0000000..4905104
--- /dev/null
+++ b/development/en-us/backend/mechanism/task/switch.md
@@ -0,0 +1,8 @@
+# SWITCH Task development
+
+The switch task workflow is as follows:
+
+* The user-defined expressions and branch information are stored in `taskParams` in `taskdefinition`. When the switch is executed, it is formatted as `SwitchParameters`.
+* `SwitchTaskExecThread` processes the expressions defined in `switch` from top to bottom, obtains the values of the variables from `varPool`, and evaluates each expression through `javascript`. If an expression returns true, it stops checking and records the index of that expression, which we call `resultConditionLocation` here. The work of `SwitchTaskExecThread` is then done.
+* After the `switch` task runs, if no error occurred (most commonly, a user-defined expression is out of specification or a parameter name is wrong), `MasterExecThread.submitPostNode` obtains the downstream nodes of the `DAG` and continues execution.
+* If `DagHelper.parsePostNodes` finds that the current node (the node that has just completed) is a `switch` node, it obtains `resultConditionLocation` and skips all branches in the `SwitchParameters` except `resultConditionLocation`. In this way, only the branches that need to be executed are left.
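The top-to-bottom evaluation that produces `resultConditionLocation` can be sketched as below. A hedged Python sketch: the real implementation is Java and evaluates expressions with a JavaScript engine, so the `evaluate` callable here stands in for that engine, and `find_result_condition_location` is an illustrative name:

```python
def find_result_condition_location(conditions, var_pool, evaluate):
    """Evaluate switch branch expressions from top to bottom.

    conditions: expression strings in the order the user defined them.
    var_pool: variable name -> value, resolved from the varPool.
    evaluate: callable(expression, variables) -> bool; in DolphinScheduler
        this role is played by a JavaScript engine.
    Returns the index of the first expression that evaluates to true
    (the resultConditionLocation), or None if no expression matches.
    """
    for location, expression in enumerate(conditions):
        if evaluate(expression, var_pool):
            return location   # stop checking at the first true expression
    return None
```

All later branches are then skipped except the one at the returned index.
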
diff --git a/development/en-us/development-environment-setup.md b/development/en-us/development-environment-setup.md
index bfb7a43..153e3c4 100644
--- a/development/en-us/development-environment-setup.md
+++ b/development/en-us/development-environment-setup.md
@@ -1,157 +1,141 @@
-## Development Environment Setup
+# DolphinScheduler development
 
-#### Preparation
+## Software Requirements
 
-1. First, fork the [dolphinscheduler](https://github.com/apache/dolphinscheduler) code from the remote repository to your local repository.
+Before setting up the DolphinScheduler development environment, please make sure you have installed the software below:
 
-2. Install MySQL/PostgreSQL, JDK and MAVEN in your own software development environment.
+* [Git](https://git-scm.com/downloads): DolphinScheduler version control system
+* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html): DolphinScheduler backend language
+* [Maven](http://maven.apache.org/download.cgi): Java Package Management System
+* [Node](https://nodejs.org/en/download): DolphinScheduler frontend language
 
-3. Clone your forked repository to the local file system.
+### Clone Git Repository
 
-```bash
-git clone https://github.com/apache/dolphinscheduler.git
+Download the git repository through your git management tool; here we use git-core as an example:
+
+```shell
+mkdir dolphinscheduler
+cd dolphinscheduler
+git clone git@github.com:apache/dolphinscheduler.git
 ```
 
-4. After finished the clone, go into the project directory and execute the following commands:
+## Notice
 
-```bash
-git branch -a #check the branch
-git checkout dev #switch to the dev branch
-git pull #sychronize the branch with the remote branch
-mvn -U clean package -Prelease -Dmaven.test.skip=true #mvn package
-```
+There are two ways to configure the DolphinScheduler development environment: standalone mode and normal mode.
 
-#### Install node
+* [Standalone mode](#dolphinscheduler-standalone-quick-start): **Recommended**; the more convenient way to build a development environment, and it covers most scenarios.
+* [Normal mode](#dolphinscheduler-normal-mode): start master, worker, api and logger as separate servers; it covers more test scenarios than standalone, and is closer to a real production environment.
 
-1. Install nvm  
-    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
+## DolphinScheduler Standalone Quick Start
 
-2. Refresh the environment variables  
-    source ~/.bash_profile
+> **_Note:_** The standalone server is only for development and debugging, because it uses an H2 database and a ZooKeeper testing server, which may not be stable in production.
+> If you want to test a plugin, you can modify `plugin.bind` in the StandaloneServer class or modify the configuration file yourself.
+> Standalone is only supported in DolphinScheduler 1.3.9 and later versions
 
-3. Install node  
-    nvm install v12.20.2
-    note: Mac users could install npm through brew: brew install npm
+### Git Branch Choice
 
-4. Validate the node installation  
-    node --version
+Use different Git branches to develop different versions of the code:
 
-#### Install zookeeper
+* If you want to develop based on a binary package, switch to the corresponding release branch; for example, to develop based on 1.3.9, choose branch `1.3.9-release`.
+* If you want to develop the latest code, choose branch `dev`.
 
-1. Download zookeeper  
-    https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
+### Start backend server
 
-2. Copy the zookeeper config file  
-    cp conf/zoo_sample.cfg conf/zoo.cfg
+Compile backend code
 
-3. Modify zookepper config  
-    vi conf/zoo.cfg  
-    dataDir=./tmp/zookeeper
+```shell
+mvn install -DskipTests
+```
 
-4. Start/stop zookeeper  
-    ./bin/zkServer.sh start  
-    ./bin/zkServer.sh stop
+Find the class `org.apache.dolphinscheduler.server.StandaloneServer` in Intellij IDEA and click run on its `main` function to start it up.
 
-#### Create database
+### Start frontend server
 
-Create user, user name: ds_user, password: dolphinscheduler
+Install frontend dependencies and run it
 
-```
-mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds_user'@'%' IDENTIFIED BY 'dolphinscheduler';
-mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds_user'@'localhost' IDENTIFIED BY 'dolphinscheduler';
-mysql> flush privileges;
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
 ```
 
-#### Set up the front-end
+Open http://localhost:12345/dolphinscheduler in the browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
 
-1. Enter the dolphinscheduler-ui directory  
-    cd dolphinscheduler-ui
+## DolphinScheduler Normal Mode
 
-2. Run npm install
+### Prepare
 
-#### Set up the back-end
+#### zookeeper
 
-1. Import the project to IDEA  
-    file-->open
+Download [ZooKeeper](https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3), and extract it.
 
-2. Modify the database configuration in the datasource.properties file in the resource directory of the dao module
+* Create the directories `zkData` and `zkLog`
+* Go to the ZooKeeper installation directory, copy the configuration file `zoo_sample.cfg` to `conf/zoo.cfg`, and change the values of `dataDir` and `dataLogDir` in `conf/zoo.cfg`:
 
-```
-spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
-spring.datasource.username=ds_user
-spring.datasource.password=dolphinscheduler
-```
+    ```shell
+    # We use path /data/zookeeper/data and /data/zookeeper/datalog here as example
+    dataDir=/data/zookeeper/data
+    dataLogDir=/data/zookeeper/datalog
+    ```
 
-3. Modify pom.xml in the root directory and modify the scope of mysql-connector-java to compile
+* Start ZooKeeper in the terminal with the command `./bin/zkServer.sh start`.
 
-4. Refresh the dao module, run the main method of org.apache.dolphinscheduler.dao.upgrade.shell.CreateDolphinScheduler to automatically insert the tables and data required by the project.
+#### Database
 
-5. Modify the service module
-    try to change the zookeeper.quorum part of the zookeeper.properties file
-    zookeeper.quorum=localhost:2181
+The DolphinScheduler metadata is stored in a relational database; currently MySQL and PostgreSQL are supported. We use MySQL as an example. Start the database and create a new database named dolphinscheduler as the DolphinScheduler metadata database.
 
-#### Start the project
+After creating the new database, run the SQL file `dolphinscheduler/sql/dolphinscheduler_mysql.sql` directly in MySQL to complete the database initialization.
 
-1. Start zookeeper  
-    ./bin/zkServer.sh start
+#### Start Backend Server
 
-2. Start MasterServer  
-    run the main method of org.apache.dolphinscheduler.server.master.MasterServer, you need to set the following VM options:
+The following steps will guide you through starting the DolphinScheduler backend services.
 
-```
--Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false
-```
+##### Backend Start Prepare
 
-3. Start WorkerServer  
-    run the main method of org.apache.dolphinscheduler.server.worker.WorkerServer, you need to set the following VM options:
+* Open the project: use an IDE to open the project; here we use Intellij IDEA as an example. After opening, it will take a while for Intellij IDEA to finish downloading the dependencies.
+* Plugin installation (**only required for 2.0 or later**): compile the plugins with the command `mvn -U clean install package -Prelease -Dmaven.test.skip=true`
+* File changes
+  * If you use MySQL as your metadata database, you need to modify `dolphinscheduler/pom.xml` and change the `scope` of the `mysql-connector-java` dependency to `compile`. This step is not necessary for PostgreSQL.
+  * Modify the database configuration in `dolphinscheduler/dolphinscheduler-dao/datasource.properties`
 
-```
--Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false
-```
+  ```properties
+  # We here use MySQL with database, username, password named dolphinscheduler as an example
+  spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+  spring.datasource.url=jdbc:mysql://localhost:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+  spring.datasource.username=dolphinscheduler
+  spring.datasource.password=dolphinscheduler
+  ```
 
-4. Start ApiApplicationServer  
-    run the main method of org.apache.dolphinscheduler.api.ApiApplicationServer, you need to set the following VM options:
+* Log level: add the single line `<appender-ref ref="STDOUT"/>` to the files `dolphinscheduler-server/src/main/resources/logback-worker.xml`, `dolphinscheduler-server/src/main/resources/logback-master.xml` and `dolphinscheduler-api/src/main/resources/logback-api.xml` to show the log in the console; the result after the modification is shown below
 
-```
--Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api
-```
+  ```diff
+  <root level="INFO">
+  +  <appender-ref ref="STDOUT"/>
+    <appender-ref ref="APILOGFILE"/>
+    <appender-ref ref="SKYWALKING-LOG"/>
+  </root>
+  ```
 
-5. We are not going to start the other modules. if they are required to be started, check script/dolphinscheduler-daemon.sh and set them the same VM Options.
+> **_Note:_** Only DolphinScheduler 2.0 and later versions need to install the plugins before starting the server; it is not needed before version 2.0.
 
-```
-if [ "$command" = "api-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api"
-  CLASS=org.apache.dolphinscheduler.api.ApiApplicationServer
-elif [ "$command" = "master-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false"
-  CLASS=org.apache.dolphinscheduler.server.master.MasterServer
-elif [ "$command" = "worker-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false"
-  CLASS=org.apache.dolphinscheduler.server.worker.WorkerServer
-elif [ "$command" = "alert-server" ]; then
-  LOG_FILE="-Dlogback.configurationFile=conf/logback-alert.xml"
-  CLASS=org.apache.dolphinscheduler.alert.AlertServer
-elif [ "$command" = "logger-server" ]; then
-  CLASS=org.apache.dolphinscheduler.server.log.LoggerServer
-else
-  echo "Error: No command named '$command' was found."
-  exit 1
-fi
-```
+##### Server start
 
-6. Start web ui
+There are three necessary servers to start: MasterServer, WorkerServer and ApiApplicationServer, plus an optional server, LoggerServer, that you can start if you need it.
 
-```bash
+* MasterServer: execute the `main` function of the class `org.apache.dolphinscheduler.server.master.MasterServer` in Intellij IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false`
+* WorkerServer: execute the `main` function of the class `org.apache.dolphinscheduler.server.worker.WorkerServer` in Intellij IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false`
+* ApiApplicationServer: execute the `main` function of the class `org.apache.dolphinscheduler.api.ApiApplicationServer` in Intellij IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api`. After it starts, you can find the Open API documentation at http://localhost:12345/dolphinscheduler/doc.html
+* LoggerServer: **optional, only start it if you need it**; execute the `main` function of the class `org.apache.dolphinscheduler.server.log.LoggerServer` in Intellij IDEA
+
+### Start Frontend Server
+
+Install frontend dependencies and run it
+
+```shell
 cd dolphinscheduler-ui
+npm install
 npm run start
 ```
 
-#### Visit the project
-
-1. Visit http://localhost:8888
-
-2. Sign in with the administrator account  
-    username: admin  
-    password: dolphinscheduler123
+Open http://localhost:12345/dolphinscheduler in the browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
diff --git a/development/zh-cn/backend/backend-development.md b/development/zh-cn/backend/backend-development.md
deleted file mode 100644
index ed9d157..0000000
--- a/development/zh-cn/backend/backend-development.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# 后端开发文档
-
-## 环境要求
-
- * MySQL (5.5+) : 必装
- * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : 必装
- * ZooKeeper(3.4.6+) : 必装 
- * [Maven](http://maven.apache.org/download.cgi)(3.3+) : 必装 
-
-因DolphinScheduler中dolphinscheduler-rpc模块使用到Grpc,需要用到Maven编译生成所需要的类
-对maven不熟的伙伴请参考: [maven in five minutes](http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)(3.3+)
-
-http://maven.apache.org/install.html
-
-## 项目编译
-将DolphinScheduler源码下载导入Idea开发工具后,首先转为Maven项目(右键点击后选择"Add Framework Support")
-
-* 执行编译命令:
-
-当部署的版本 >= 1.2.0 , 请使用:
-```
- mvn -U clean package -Prelease -Dmaven.test.skip=true
-```
-
-1.2.0以前的版本, 请使用:
-```
- mvn -U clean package assembly:assembly -Dmaven.test.skip=true
-```
-
-* 查看目录
-
-正常编译完后,会在当前目录生成 `./dolphinscheduler-dist/target/apache-dolphinscheduler-{version}-bin.tar.gz`,解压该gz包得到以下目录:
-
-```
-bin
-conf
-lib
-script
-sql
-install.sh
-```
-
-- 说明
-
-```
-bin : 基础服务启动脚本
-conf : 项目配置文件
-lib : 项目依赖jar包,包括各个模块jar和第三方jar
-script : 集群启动、停止和服务监控启停脚本
-sql : 项目依赖sql文件
-install.sh : 一键部署脚本
-```
diff --git a/development/zh-cn/backend/mechanism/global-parameter.md b/development/zh-cn/backend/mechanism/global-parameter.md
new file mode 100644
index 0000000..7df22bc
--- /dev/null
+++ b/development/zh-cn/backend/mechanism/global-parameter.md
@@ -0,0 +1,61 @@
+# 全局参数开发文档
+
+用户在定义方向为 OUT 的参数后,会保存在 task 的 localParam 中。
+
+## 参数的使用
+
+从 DAG 中获取当前需要创建的 taskInstance 的直接前置节点 preTasks,获取 preTasks 的 varPool,将该 `varPool(List<Property>)`合并为一个 varPool,在合并过程中,如果发现有相同的变量名的变量,按照以下逻辑处理
+
+* 若所有的值都是 null,则合并后的值为 null
+* 若有且只有一个值为非 null,则合并后的值为该非 null 值
+* 若所有的值都不是 null,则取 varPool 来源的 taskInstance 中 endtime 最早的一个值
+
+在合并过程中将所有的合并过来的 Property 的方向更新为 IN
+
+合并后的结果保存在 taskInstance.varPool 中。
+
+Worker 收到后将 varPool 解析为 Map<String,Property> 的格式,其中 map 的 key 为 property.prop 也就是变量名。
+
+在 processor 处理参数时,会将 varPool 和 localParam 和 globalParam 三个变量池参数合并,合并过程中若有参数名重复的参数,按照以下优先级进行替换,高优先级保留,低优先级被替换:
+
+* `globalParam` :高
+* `varPool` :中
+* `localParam` :低
+
+参数会在节点内容执行之前利用正则表达式匹配到 ${变量名},替换为对应的值。
+
+## 参数的设置
+
+目前仅支持 SQL 和 SHELL 节点的参数获取。
+从 localParam 中获取方向为 OUT 的参数,根据不同节点的类型做以下方式处理。
+
+### SQL 节点
+
+参数返回的结构为 List<Map<String,String>>
+
+其中,List 的元素为每行数据,Map 的 key 为列名,value 为该列对应的值
+
+* 若 SQL 语句返回为有一行数据,则根据用户在定义 task 时定义的 OUT 参数名匹配列名,若没有匹配到则放弃。
+* 若 SQL 语句返回多行,按照根据用户在定义 task 时定义的类型为 LIST 的 OUT 参数名匹配列名,将对应列的所有行数据转换为 `List<String>`,作为该参数的值。若没有匹配到则放弃。
+
+### SHELL 节点
+
+processor 执行后的结果返回为 `Map<String,String>`
+
+用户在定义 shell 脚本时需要在输出中定义 `${setValue(key=value)}`
+
+在参数处理时去掉 ${setValue()},按照 “=” 进行拆分,第 0 个为 key,第 1 个为 value。
+
+同样匹配用户定义 task 时定义的 OUT 参数名与 key,将 value 作为该参数的值。
+
+返回参数处理
+
+* 获取到的 processor 的结果为 String
+* 判断 processor 是否为空,为空退出
+* 判断 localParam 是否为空,为空退出
+* 获取 localParam 中为 OUT 的参数,为空退出
+* 将 String 按照上述格式格式化(SQL 为 `List<Map<String,String>>`,shell 为 `Map<String,String>`)
+* 将匹配好值的参数赋值给 varPool(List<Property>,其中包含原有 IN 的参数)
+
+varPool 格式化为 json,传递给 master。
+Master 接收到 varPool 后,将其中为 OUT 的参数回写到 localParam 中。
diff --git a/development/zh-cn/backend/mechanism/overview.md b/development/zh-cn/backend/mechanism/overview.md
new file mode 100644
index 0000000..22bed27
--- /dev/null
+++ b/development/zh-cn/backend/mechanism/overview.md
@@ -0,0 +1,6 @@
+# 综述
+
+<!-- TODO 由于 side menu 不支持多个等级,所以新建了一个leading page存放 -->
+
+* [全局参数](global-parameter.md)
+* [switch任务类型](task/switch.md)
diff --git a/development/zh-cn/backend/mechanism/task/switch.md b/development/zh-cn/backend/mechanism/task/switch.md
new file mode 100644
index 0000000..27ed7f9
--- /dev/null
+++ b/development/zh-cn/backend/mechanism/task/switch.md
@@ -0,0 +1,8 @@
+# SWITCH 任务类型开发文档
+
+Switch任务类型的工作流程如下
+
+* 用户定义的表达式和分支流转的信息存在了taskdefinition中的taskParams中,当switch被执行到时,会被格式化为SwitchParameters。
+* SwitchTaskExecThread从上到下(用户在页面上定义的表达式顺序)处理switch中定义的表达式,从varPool中获取变量的值,通过js解析表达式,如果表达式返回true,则停止检查,并且记录该表达式的顺序,这里我们记录为resultConditionLocation。SwitchTaskExecThread的任务便结束了。
+* 当switch节点运行结束之后,如果没有发生错误(较为常见的是用户定义的表达式不合规范或参数名有问题),这个时候MasterExecThread.submitPostNode会获取DAG的下游节点继续执行。
+* DagHelper.parsePostNodes中如果发现当前节点(刚刚运行成功的节点)是switch节点的话,会获取resultConditionLocation,将SwitchParameters中除了resultConditionLocation以外的其他分支全部skip掉。这样留下来的就只有需要执行的分支了。
diff --git a/development/zh-cn/development-environment-setup.md b/development/zh-cn/development-environment-setup.md
index 522d474..e763663 100644
--- a/development/zh-cn/development-environment-setup.md
+++ b/development/zh-cn/development-environment-setup.md
@@ -1,156 +1,139 @@
-## 环境搭建
+# DolphinScheduler 开发手册
 
-如果您对本地开发的视频教程感兴趣的话,也可以跟着视频来一步一步操作:
-[![ DolphinScheduler 本地开发搭建 ](/img/build_dev_video.png)](https://www.bilibili.com/video/BV1hf4y1b7sX)
+## 前置条件
 
-#### 准备工作
+在搭建 DolphinScheduler 开发环境之前请确保你已经安装以下软件
 
-1. 首先从远端仓库 fork [dolphinscheduler](https://github.com/apache/dolphinscheduler) 一份代码到自己的仓库中
+* [Git](https://git-scm.com/downloads): 版本控制系统
+* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html): 后端开发
+* [Maven](http://maven.apache.org/download.cgi): Java包管理系统
+* [Node](https://nodejs.org/en/download): 前端开发
 
-2. 在开发环境中安装好 MySQL/PostgreSQL、JDK、MAVEN
+### 克隆代码库
 
-3. 把自己仓库 clone 到本地
+通过你的 git 管理工具下载代码,下面以 git-core 为例
 
-```bash
-git clone https://github.com/apache/dolphinscheduler.git
+```shell
+mkdir dolphinscheduler
+cd dolphinscheduler
+git clone git@github.com:apache/dolphinscheduler.git
 ```
 
-4. git clone 项目后,进入项目目录,执行以下命令
+## 开发者须知
 
-```bash
-1. git branch -a #查看分支
-2. git checkout dev #切换到dev分支
-3. git pull #同步分支
-4. mvn -U clean package -Prelease -Dmaven.test.skip=true #mvn打包
-```
+DolphinScheduler 开发环境配置有两个方式,分别是standalone模式,以及普通模式
 
-#### 安装node
+* [standalone模式](#dolphinscheduler-standalone快速开发模式):**推荐使用,但仅支持 1.3.9 及以后的版本**,方便快速的开发环境搭建,能解决大部分场景的开发。
+* [普通模式](#dolphinscheduler-普通开发模式):master、worker、api、logger等单独启动,能更好地模拟真实生产环境,可以覆盖的测试环境更多。
 
-1. 安装nvm  
-    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
+## DolphinScheduler Standalone快速开发模式
 
-2. 刷新环境变量  
-    source ~/.bash_profile
+> **_注意:_** 仅供单机开发调试使用,默认使用 H2 Database,Zookeeper Testing Server。
+> 如需测试插件,可自行修改 StandaloneServer 中的 `plugin.bind`,亦或修改配置文件。具体请查看插件说明。
+> Standalone 仅在 DolphinScheduler 1.3.9 及以后的版本支持
 
-3. 安装node  
-    nvm install v12.20.2
-    备注: Mac用户还可以通过brew安装npm: brew install npm
+### 分支选择
 
-4. 验证node安装成功  
-    node --version  
+开发不同的代码需要基于不同的分支
 
-#### 安装 zookeeper
+* 如果想基于二进制包开发,切换到对应版本的代码,如 1.3.9 则是 `1.3.9-release`
+* 如果想要开发最新代码,切换到 `dev` 分支
 
-1. 下载 zookeeper  
-    https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
+### 启动后端
 
-2. 复制配置文件  
-    cp conf/zoo_sample.cfg conf/zoo.cfg
+编译后端相关依赖
 
-3. 修改配置  
-    vi conf/zoo.cfg  
-    dataDir=./tmp/zookeeper
+```shell
+mvn install -DskipTests
+```
 
-4. 启动/停止 zookeeper  
-    ./bin/zkServer.sh start
-    ./bin/zkServer.sh stop
+在 Intellij IDEA 找到并启动类 `org.apache.dolphinscheduler.server.StandaloneServer` 即可完成后端启动
 
-#### 创建数据库
+### 启动前端
 
-1. 创建用户名为 ds_user,密码为 dolphinscheduler 的用户  
+安装前端依赖并运行前端组件
 
-```
-mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds_user'@'%' IDENTIFIED BY 'dolphinscheduler';
-mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds_user'@'localhost' IDENTIFIED BY 'dolphinscheduler';
-mysql> flush privileges;
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
 ```
 
-#### 搭建前端
+截至目前,前后端已成功运行起来,浏览器访问[http://localhost:8888](http://localhost:8888),并使用默认账户密码 **admin/dolphinscheduler123** 即可完成登录
 
-1. 进入 dolphinscheduler-ui 的目录  
-    cd dolphinscheduler-ui
+## DolphinScheduler 普通开发模式
 
-2. 执行 npm install  
+### 必要软件安装
 
-#### 搭建后端
+#### zookeeper
 
-1. 将项目导入到 idea 中  
-    file-->open
+下载 [ZooKeeper](https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3),解压
 
-2. 修改 dao 模块 resource 目录下 datasource.properties 文件中的数据库配置信息     
+* 在 ZooKeeper 的目录下新建 zkData、zkLog文件夹
+* 将 conf 目录下的 `zoo_sample.cfg` 文件,复制一份,重命名为 `zoo.cfg`,修改其中数据和日志的配置,如:
 
-```
-spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
-spring.datasource.username=ds_user
-spring.datasource.password=dolphinscheduler
-```
+    ```shell
+    dataDir=/data/zookeeper/data ## 此处使用绝对路径
+    dataLogDir=/data/zookeeper/datalog
+    ```
 
-3. 修改根项目中 pom.xml,将 mysql-connector-java 依赖的 scope 修改为 compile  
+* 运行 `./bin/zkServer.sh start` 启动。
 
-4. 刷新 dao 模块,运行 org.apache.dolphinscheduler.dao.upgrade.shell.CreateDolphinScheduler 的 main 方法,自动插入项目所需的表和数据  
+#### 数据库
 
-5. 修改 service 模块 zookeeper.properties 中 quorum 配置 
-    zookeeper.quorum=localhost:2181
+DolphinScheduler 的元数据存储在关系型数据库中,目前支持的关系型数据库包括 MySQL 以及 Postgresql。下面以MySQL为例,启动数据库并创建新database作为 DolphinScheduler 元数据库,这里以数据库名 dolphinscheduler 为例。
 
-#### 启动项目
+创建完新数据库后,将 `dolphinscheduler/sql/dolphinscheduler_mysql.sql` 下的 sql 文件直接在mysql中运行,完成数据库初始化。
 
-1. 启动 zookeeper   
-    ./bin/zkServer.sh start
+#### 启动后端
 
-2. 启动 MasterServer,执行 org.apache.dolphinscheduler.server.master.MasterServer 的 main 方法,需要设置 VM Options:  
+下面步骤将引导如何启动 DolphinScheduler 后端服务。
 
-```
--Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false
-```
+##### 必要的准备工作
 
-3. 启动WorkerServer,执行org.apache.dolphinscheduler.server.worker.WorkerServer的 main方法,需要设置 VM Options:  
+* 打开项目:使用开发工具打开项目,这里以 Intellij IDEA 为例,打开后需要一段时间,让 Intellij IDEA 完成依赖的下载。
+* 插件的安装(**仅 2.0 及以后的版本需要**):编译对应的插件,在项目目录执行 `mvn -U clean install package -Prelease -Dmaven.test.skip=true` 完成注册插件的安装
+* 必要的修改
+  * 如果使用 mysql 作为元数据库,需要先修改 `dolphinscheduler/pom.xml`,将 `mysql-connector-java` 依赖从 `scope` 改为 `compile`,使用 postgresql 则不需要。
+  * 修改数据库配置,修改 `dolphinscheduler/dolphinscheduler-dao/datasource.properties` 文件中的数据库配置
 
-```
--Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false
-```
+  ```properties
+  # This example uses MySQL: the database name is dolphinscheduler, and both the username and password are dolphinscheduler
+  spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+  spring.datasource.url=jdbc:mysql://localhost:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+  spring.datasource.username=dolphinscheduler
+  spring.datasource.password=dolphinscheduler
+  ```
 
-4. Start ApiApplicationServer by running the main method of org.apache.dolphinscheduler.api.ApiApplicationServer; the following VM Options are required:
+* Enable console logging: add the line `<appender-ref ref="STDOUT"/>` to the following files so that logs are printed to the console: `dolphinscheduler-server/src/main/resources/logback-worker.xml`, `dolphinscheduler-server/src/main/resources/logback-master.xml`, and `dolphinscheduler-api/src/main/resources/logback-api.xml`. The result looks like this:
 
-```
--Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api
-```
+  ```diff
+  <root level="INFO">
+  +  <appender-ref ref="STDOUT"/>
+    <appender-ref ref="APILOGFILE"/>
+    <appender-ref ref="SKYWALKING-LOG"/>
+  </root>
+  ```
 
-5. The other modules are not started for now; if you want to start them, look at the script/dolphinscheduler-daemon.sh file and set the corresponding VM Options
+> **_Note:_** of the preparations above, the plugin installation step is only required for DolphinScheduler 2.0 and later; versions before 2.0 do not need to run that command
 
-```
-if [ "$command" = "api-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api"
-  CLASS=org.apache.dolphinscheduler.api.ApiApplicationServer
-elif [ "$command" = "master-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false"
-  CLASS=org.apache.dolphinscheduler.server.master.MasterServer
-elif [ "$command" = "worker-server" ]; then
-  LOG_FILE="-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false"
-  CLASS=org.apache.dolphinscheduler.server.worker.WorkerServer
-elif [ "$command" = "alert-server" ]; then
-  LOG_FILE="-Dlogback.configurationFile=conf/logback-alert.xml"
-  CLASS=org.apache.dolphinscheduler.alert.AlertServer
-elif [ "$command" = "logger-server" ]; then
-  CLASS=org.apache.dolphinscheduler.server.log.LoggerServer
-else
-  echo "Error: No command named '$command' was found."
-  exit 1
-fi
-```
+##### Start the services
 
-6. Start the frontend ui module
+We need to start three required services: MasterServer, WorkerServer, and ApiApplicationServer. Optionally, you can also start LoggerServer.
 
-```bash
+* MasterServer: run the `main` method of `org.apache.dolphinscheduler.server.master.MasterServer` in Intellij IDEA, with *VM Options* `-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false`
+* WorkerServer: run the `main` method of `org.apache.dolphinscheduler.server.worker.WorkerServer` in Intellij IDEA, with *VM Options* `-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false`
+* ApiApplicationServer: run the `main` method of `org.apache.dolphinscheduler.api.ApiApplicationServer` in Intellij IDEA, with *VM Options* `-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api`. Once started, you can browse the Open API documentation at http://localhost:12345/dolphinscheduler/doc.html
+* LoggerServer: **this service is optional and can be left off**; run the `main` method of `org.apache.dolphinscheduler.server.log.LoggerServer` in Intellij IDEA
+
+### Start the frontend
+
+Install the frontend dependencies and run the frontend component:
+
+```shell
 cd dolphinscheduler-ui
+npm install
 npm run start
 ```
 
-#### Access the project
-
-1. Visit http://localhost:8888
-
-2. Log in with the administrator account
-    User: admin
-    Password: dolphinscheduler123
+At this point, both the backend and the frontend are up and running. Visit [http://localhost:8888](http://localhost:8888) in a browser and log in with the default credentials **admin/dolphinscheduler123**.
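As a quick sanity check after the setup steps above, you can probe whether each component is actually listening on its port. This is a convenience sketch, not part of DolphinScheduler; the port numbers are the defaults mentioned in this guide, and the helper names are invented:

```python
import socket

# Default ports referenced in this guide (assumed; adjust to your setup):
# ZooKeeper 2181, MySQL 3306, ApiApplicationServer 12345, frontend 8888.
PORTS = {"zookeeper": 2181, "mysql": 3306, "api": 12345, "ui": 8888}

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_all(host: str = "localhost") -> dict:
    """Probe every component port and report which ones are reachable."""
    return {name: check_port(host, port) for name, port in PORTS.items()}
```

Running `check_all()` after starting the services should report every component as reachable; any `False` entry points at the service that failed to start.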
diff --git a/docs/en-us/dev/user_doc/globalParams.md b/docs/en-us/dev/user_doc/globalParams.md
deleted file mode 100644
index 7da8954..0000000
--- a/docs/en-us/dev/user_doc/globalParams.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Development documentation
-
-After the user defines the parameter with the direction OUT, it is saved in the localParam of the task.
-
-##### The use of parameters:
-
-Get the direct predecessor nodes (preTasks) of the taskInstance to be created from the DAG, get the varPool of each preTask, and merge these varPools (List<Property>) into one varPool. If parameters with the same name are found during the merge, they are handled by the following logic:
-
-1. If all the values are null, the merged value is null
-2. If one and only one value is non-null, then the merged value is the non-null value
-3. If none of the values are null, the merged value is taken from the varPool of the taskInstance with the earliest endtime.
-
-The direction of all the merged properties is updated to IN during the merge process.
-
-The result of the merge is saved in taskInstance.varPool.
-
-The worker receives and parses the varPool into the format of Map<String,Property>, where the key of the map is property.prop, which is the parameter name.
-
-When the processor processes the parameters, it will merge the varPool and localParam and globalParam parameters, and if there are parameters with duplicate names during the merging process, they will be replaced according to the following priorities, with the higher priority being retained and the lower priority being replaced:
-
-- globalParam: high
-- varPool: middle
-- localParam: low
-
-The parameters are replaced with the corresponding values using regular expressions compared to ${parameter name} before the node content is executed.
-
-##### Parameter setting:
-
-Currently, only SQL and SHELL nodes are supported to get parameters.
-
-- Get the parameter with direction OUT from localParam, and do the following way according to the type of different nodes.
-
-###### SQL node:
-
-The structure returned by the parameter is List<Map<String,String>>, where the elements of List are each row of data, the key of Map is the column name, and the value is the value corresponding to the column.
-
-    (1) If the SQL statement returns one row of data, match the column names against the OUT parameter names the user defined for the task; discard any parameter with no match.
-
-    (2) If the SQL statement returns multiple rows of data, match the column names against the user-defined OUT parameter names of type LIST. All rows of the matching column are converted to List<String> as the value of that parameter. If there is no match, it is discarded.
-
-###### SHELL node
-
-The result of the processor execution is returned as Map<String,String>.
-
-The user needs to define ${setValue(key=value)} in the output when defining the shell script.
-
-Remove ${setValue()} when processing parameters, split by "=", with the 0th being the key and the 1st being the value.
-
-Similarly match the OUT parameter name and key defined by the user when defining the task, and use value as the value of that parameter.
-
--    Return parameter processing
-
-The result obtained from the Processor is a String.
-
-Determine whether the processor is empty or not, and exit if it is empty.
-
-Determine whether the localParam is empty or not, and exit if it is empty.
-
-Get the parameter of localParam which is OUT, and exit if it is empty.
-
-Format the String according to the formats above (List<Map<String,String>> for SQL, Map<String,String> for shell).
-
-Assign the parameters with matching values to varPool (List, which contains the original IN's parameters)
-
--    Format the varPool as json and pass it to master.
-
--    The parameters that are OUT would be written into the localParam after the master has received the varPool.
-
-     
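The parameter-priority rule the removed page above documents (globalParam over varPool over localParam) amounts to a layered dictionary merge. A minimal Python sketch of that rule, with invented names, not DolphinScheduler's actual Java implementation:

```python
def merge_params(global_param: dict, var_pool: dict, local_param: dict) -> dict:
    """Merge the three parameter pools; on duplicate names the
    higher-priority pool wins: globalParam > varPool > localParam."""
    merged = dict(local_param)   # lowest priority goes in first...
    merged.update(var_pool)      # ...is overridden by the middle pool...
    merged.update(global_param)  # ...which is overridden by the highest pool
    return merged
```

For example, a name defined in all three pools resolves to the globalParam value, while names unique to varPool or localParam survive untouched.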
diff --git a/docs/zh-cn/dev/user_doc/dev_run.md b/docs/zh-cn/dev/user_doc/dev_run.md
deleted file mode 100644
index 9a26319..0000000
--- a/docs/zh-cn/dev/user_doc/dev_run.md
+++ /dev/null
@@ -1,125 +0,0 @@
-#### Setting up the dev branch development environment
-
-1. **Download the source code**
-
-     GitHub: https://github.com/apache/dolphinscheduler
-
-     ```shell
-     mkdir dolphinscheduler
-     cd dolphinscheduler
-     git clone git@github.com:apache/dolphinscheduler.git
-     ```
-
-     The dev branch is used here.
-
-2. **Install ZooKeeper**
-
-     1.   Download ZooKeeper: https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
-
-     2.   Extract apache-zookeeper-3.6.3-bin.tar.gz
-
-     3.   Create zkData and zkLog folders under the ZooKeeper directory
-
-     4.   Copy the zoo_sample.cfg file in the conf directory, rename the copy to zoo.cfg, and modify its data and log settings, for example:
-
-          ```shell
-          dataDir=/data/zookeeper/data ## use an absolute path here
-          dataLogDir=/data/zookeeper/datalog
-          ```
-
-     5.   Run zkServer.sh in bin, then run zkCli.sh to check the ZooKeeper status; if you can see the ZooKeeper node information, the installation succeeded.
-
-3. **Set up the backend environment**
-
-     1.   Create a local database for debugging. DolphinScheduler supports MySQL and PostgreSQL; MySQL is used here, and the database can be named dolphinscheduler.
-
-     2.   Import the code into IDEA and modify pom.xml in the root project, changing the scope of the mysql-connector-java dependency to compile.
-
-     3.   Run `mvn -U clean install package -Prelease -Dmaven.test.skip=true` in the terminal to install the required registry plugins.
-
-     4.   Modify datasource.properties of the dolphinscheduler-dao module:
-
-          ```properties
-          # mysql
-          spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-          spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # replace with your database IP address
-          spring.datasource.username=xxx						# replace with your database username
-          spring.datasource.password=xxx						# replace with your database password
-          ```
-
-     5.   Load the tables and data the project needs. Database fields change frequently on the dev branch, so run the SQL file for your database from the `dolphinscheduler\sql` folder in the project root directly against the database; for MySQL, run `dolphinscheduler_mysql.sql` directly.
-
-     6.   Modify registry.properties of the dolphinscheduler-service module and worker.properties of the dolphinscheduler-server module. Note: the `1.3.6-SNAPSHOT` here must match the files that were actually generated.
-
-          ```properties
-          #registry.plugin.dir config the Registry Plugin dir.
-          registry.plugin.dir=./dolphinscheduler-dist/target/dolphinscheduler-dist-1.3.6-SNAPSHOT/lib/plugin/registry/zookeeper
-          
-          registry.plugin.name=zookeeper
-          registry.servers=127.0.0.1:2181
-          ```
-          
-          ```properties
-          #task.plugin.dir config the Task Plugin dir. WorkerServer will find and load the Task Plugin Jar from this dir when you deploy and start WorkerServer on the server.
-          task.plugin.dir=./dolphinscheduler-task-plugin/dolphinscheduler-task-shell/target/dolphinscheduler-task-shell-1.3.6-SNAPSHOT
-          ```
-
-     7.   Add console output in logback-worker.xml, logback-master.xml, and logback-api.xml:
-
-          ```xml
-          <root level="INFO">
-              <appender-ref ref="STDOUT"/>  <!-- add console output -->
-              <appender-ref ref="APILOGFILE"/>
-              <appender-ref ref="SKYWALKING-LOG"/>
-          </root>
-          ```
-
-     8.   Start MasterServer by running the main method of org.apache.dolphinscheduler.server.master.MasterServer; the following VM Options are required:
-
-          ```
-          -Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false
-          ```
-
-     9.   Start WorkerServer by running the main method of org.apache.dolphinscheduler.server.worker.WorkerServer; the following VM Options are required:
-
-          ```
-          -Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false
-          ```
-
-     10.   Start ApiApplicationServer by running the main method of org.apache.dolphinscheduler.api.ApiApplicationServer; the following VM Options are required:
-
-           ```
-           -Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api
-           ```
-
-     11.   If you need the logging feature, run the main method of org.apache.dolphinscheduler.server.log.LoggerServer.
-
-     12.   The backend Open API documentation is available at http://localhost:12345/dolphinscheduler/doc.html?language=zh_CN&lang=cn
-
-4.   **Set up the frontend environment**
-
-     1.   **Install node**
-
-          1.   Install nvm:
-               curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
-          2.   Reload the environment variables:
-               source ~/.bash_profile
-          3.   Install node:
-               nvm install v12.20.2 (note: Mac users can also install npm via brew: brew install npm)
-          4.   Verify that node was installed successfully:
-               node --version
-
-     2.   Enter dolphinscheduler-ui and run:
-
-          ```shell
-          npm install
-          npm run start
-          ```
-
-     3.   Visit [http://localhost:8888](http://localhost:8888/)
-
-     4.   Log in with the administrator account
-
-          >    User: admin
-          >
-          >    Password: dolphinscheduler123
diff --git a/docs/zh-cn/dev/user_doc/globalParams.md b/docs/zh-cn/dev/user_doc/globalParams.md
deleted file mode 100644
index bab18c1..0000000
--- a/docs/zh-cn/dev/user_doc/globalParams.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# Development documentation
-
-After the user defines a parameter with direction OUT, it is saved in the localParam of the task.
-
-##### Using the parameters:
-
-Get the direct predecessor nodes (preTasks) of the taskInstance to be created from the DAG, get the varPool of each preTask, and merge these varPools (List<Property>) into one varPool. If variables with the same name are found during the merge, they are handled by the following logic:
-
-1. If all the values are null, the merged value is null
-
-2. If exactly one value is non-null, the merged value is that non-null value
-
-3. If none of the values are null, the merged value is taken from the varPool of the taskInstance with the earliest endtime
-
-
-During the merge, the direction of every merged Property is updated to IN
-
-The merged result is saved in taskInstance.varPool.
-
-After receiving it, the Worker parses the varPool into the format Map<String,Property>, where the map key is property.prop, i.e. the variable name.
-
-When the processor handles the parameters, it merges the varPool, localParam, and globalParam parameter pools. If parameter names collide during the merge, they are replaced according to the following priority, with higher priority retained and lower priority replaced:
-
-- globalParam: high
-- varPool: middle
-- localParam: low
-
-Before the node content is executed, a regular expression matches ${variable name} and replaces it with the corresponding value.
-
-##### Setting the parameters:
-
-Currently only SQL and SHELL nodes support getting parameters.
-
-- Get the parameters with direction OUT from localParam and handle them as follows depending on the node type.
-
-###### SQL node:
-
-The structure returned by the parameter is List<Map<String,String>>
-
-where the List elements are the rows of data, the Map key is the column name, and the value is the value of that column
-
-     (1) If the SQL statement returns one row of data, match the column names against the OUT parameter names the user defined for the task; discard any parameter with no match.
-
-     (2) If the SQL statement returns multiple rows, match the column names against the user-defined OUT parameter names of type LIST, and convert all rows of the matching column into a List<String> as the value of that parameter. Discard it if there is no match.
-
-###### SHELL node:
-
-The result of the processor execution is returned as Map<String,String>
-
-
-When defining the shell script, the user needs to define ${setValue(key=value)} in the output
-
-When the parameters are processed, ${setValue()} is stripped and the content is split on "=", with the 0th part as the key and the 1st part as the value.
-
-Likewise, match the OUT parameter names the user defined for the task against the keys, and use the value as the value of that parameter.
-
-- Returned parameter processing
-
-     The processor result obtained is a String
-
-     Check whether the processor result is empty; exit if it is
-
-     Check whether localParam is empty; exit if it is
-
-     Get the OUT parameters in localParam; exit if there are none
-
-     Format the String according to the formats above (List<Map<String,String>> for SQL, Map<String,String> for shell)
-
-     Assign the matched values to the varPool (List<Property>, which contains the original IN parameters)
-
-- The varPool is formatted as json and passed to the master.
-
-- After the master receives the varPool, the OUT parameters in it are written back to localParam.
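The ${setValue(key=value)} convention described above can be illustrated with a small parser: strip the ${setValue()} wrapper, then split on "=" into key and value. A sketch under the stated format only, not the actual worker code:

```python
import re

def parse_set_values(output: str) -> dict:
    """Find every ${setValue(key=value)} in the task output, strip the
    ${setValue()} wrapper, and split on the first '=': the 0th part is
    the key and the 1st part is the value."""
    values = {}
    for body in re.findall(r"\$\{setValue\(([^)]*)\)\}", output):
        key, sep, value = body.partition("=")
        if sep:  # ignore malformed entries that contain no '='
            values[key] = value
    return values
```

For example, a script that echoes `${setValue(num=3)}` yields the OUT parameter `num` with value `"3"`, which is then matched against the user-defined OUT parameter names.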
diff --git a/docs/zh-cn/dev/user_doc/switch_node.md b/docs/zh-cn/dev/user_doc/switch_node.md
deleted file mode 100644
index b5cdb64..0000000
--- a/docs/zh-cn/dev/user_doc/switch_node.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### DolphinScheduler SWITCH node
-
-DolphinScheduler currently has two node types that do conditional branching: the condition node and the switch node. The condition node chooses a branch based on the execution status (success or failure) of its upstream nodes, while the switch node chooses a branch based on the values of global variables and the result of user-written expressions. This article covers the switch node.
-
-#### Development documentation
-
-1. The user-defined expressions and branch-routing information are stored in taskParams of the task definition; when the switch node is executed, they are formatted into SwitchParameters.
-
-2. SwitchTaskExecThread processes the expressions defined in the switch from top to bottom (the order the user defined them on the page): it fetches the variable values from the varPool and evaluates each expression with js. If an expression returns true, checking stops and the position of that expression is recorded, here called resultConditionLocation. The work of SwitchTaskExecThread then ends.
-
-3. After the switch node finishes, if no error occurred (the common ones are user-defined expressions that do not conform to the syntax, or bad parameter names), MasterExecThread.submitPostNode fetches the downstream nodes of the DAG and continues execution.
-
-4. If DagHelper.parsePostNodes finds that the current node (the node that just finished successfully) is a switch node, it takes resultConditionLocation and skips every branch in SwitchParameters except resultConditionLocation. What remains is exactly the branch that needs to be executed.
-
-That is how the switch node works.
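The top-to-bottom expression check described in step 2 can be sketched roughly as follows. Note the assumptions: DolphinScheduler evaluates the expressions with a js engine, for which Python's eval stands in here, and the function name is invented:

```python
def find_result_branch(conditions, var_pool):
    """Evaluate the switch expressions from top to bottom against the
    varPool variables; return the index of the first expression that is
    true (the resultConditionLocation), or None if none match."""
    for location, expression in enumerate(conditions):
        if eval(expression, {"__builtins__": {}}, dict(var_pool)):
            return location  # stop checking at the first true expression
    return None
```

All branches other than the returned location would then be skipped, leaving only the branch that should run.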
diff --git a/site_config/development.js b/site_config/development.js
index d82d7cc..93cd127 100644
--- a/site_config/development.js
+++ b/site_config/development.js
@@ -15,10 +15,6 @@ export default {
           {
             title: 'Backend Development',
             children: [
-              {
-                title: 'Overview',
-                link: '/en-us/development/back/backend-development.html',
-              },
               // TODO not suppor multiply level for now
               // {
                 // title: 'SPI',
@@ -41,6 +37,10 @@ export default {
                   },
                 // ],
               // }
+              {
+                title: 'Mechanism Design',
+                link: '/en-us/development/backend/mechanism/overview.html',
+              },
             ],
           },
           {
@@ -72,10 +72,6 @@ export default {
           {
             title: '后端开发',
             children: [
-              {
-                title: '综述',
-                link: '/zh-cn/development/backend/backend-development.html',
-              },
               // TODO not suppor multiply level for now
               // {
                 // title: 'SPI相关',
@@ -98,6 +94,10 @@ export default {
                   },
                 // ],
               // },
+              {
+                title: '组件设计',
+                link: '/zh-cn/development/backend/mechanism/overview.html',
+              },
             ],
           },
           {
diff --git a/site_config/docsdev.js b/site_config/docsdev.js
index 7a6d385..03c9377 100644
--- a/site_config/docsdev.js
+++ b/site_config/docsdev.js
@@ -260,19 +260,6 @@ export default {
           },
         ],
       },
-      {
-        title: 'To be Classification',
-        children: [
-          {
-            title: 'Global-Params',
-            link: '/en-us/docs/dev/user_doc/globalParams.html',
-          },
-          {
-            title: 'Dev-Run',
-            link: '/en-us/docs/dev/user_doc/dev_run.html',
-          },
-        ],
-      },
     ],
     barText: 'Documentation',
   },
@@ -528,23 +515,6 @@ export default {
           },
         ],
       },
-      {
-        title: '待分类文档',
-        children: [
-          {
-            title: 'Global-Params',
-            link: '/zh-cn/docs/dev/user_doc/globalParams.html',
-          },
-          {
-            title: 'Switch-Node',
-            link: '/zh-cn/docs/dev/user_doc/switch_node.html',
-          },
-          {
-            title: 'Dev-Run',
-            link: '/zh-cn/docs/dev/user_doc/dev_run.html',
-          },
-        ],
-      },
     ],
     barText: '文档',
   },