Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/04/02 14:00:37 UTC

[dolphinscheduler-website] branch master updated: [refactor] Migrate docs to history branch (#729)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 29a0a12  [refactor] Migrate docs to history branch (#729)
29a0a12 is described below

commit 29a0a125c383d508889ae8dd058f4183010debd7
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Sat Apr 2 22:00:31 2022 +0800

    [refactor] Migrate docs to history branch (#729)
    
    * [refactor] Migrate docs to history branch
    
    * Migrate all docs except the dev doc to branch
      `history-docs`, and the dev doc to the main repo
    * Migrate all configs except the dev doc to branch
      `history-docs`, and the dev doc to the main repo
    * Add build script to fetch history docs or docs from the
      main repository and website repository (see the sketch below)
    * Change CI workflow
    * Add ignore rules for docs*js and site.js
    * Change shell script mode
    * Uncomment rsync img from main repo
    * Temporarily skip `docs-check` CI
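    A minimal sketch of what the fetch step of such a build script could look
    like, assuming a bash script: the `history-docs` branch name and the
    website repository URL are taken from this commit, while the main
    repository URL, the target directories, and the rsync paths are
    illustrative assumptions rather than the actual script contents.

        #!/usr/bin/env bash
        # Hypothetical sketch of the docs fetch step; paths and layout are
        # assumptions, not copied from the real build script.
        set -euo pipefail

        WEBSITE_REPO="https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git"
        MAIN_REPO="https://gitbox.apache.org/repos/asf/dolphinscheduler.git"  # assumed URL

        # History (versioned) docs come from the dedicated `history-docs` branch.
        git clone --depth 1 --branch history-docs "${WEBSITE_REPO}" history-docs

        # Dev docs now live in the main repository.
        git clone --depth 1 "${MAIN_REPO}" dolphinscheduler

        # Assemble the docs tree the site build expects (directory names are illustrative).
        mkdir -p build/docs
        rsync -av history-docs/docs/ build/docs/
        rsync -av dolphinscheduler/docs/ build/docs/dev/
        # rsync img from the main repo, as noted in the commit message above.
        rsync -av dolphinscheduler/docs/img/ build/docs/img/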
---
 .github/workflows/change-docs.yaml                 |    2 +-
 .github/workflows/dead-link-checker.yaml           |    5 +
 .github/workflows/website.yml                      |    3 +
 .gitignore                                         |    6 +
 README.md                                          |   20 +
 docs/en-us/1.2.0/user_doc/backend-deployment.md    |  261 -----
 docs/en-us/1.2.0/user_doc/cluster-deployment.md    |  500 ---------
 docs/en-us/1.2.0/user_doc/frontend-deployment.md   |  128 ---
 docs/en-us/1.2.0/user_doc/hardware-environment.md  |   48 -
 docs/en-us/1.2.0/user_doc/metadata-1.2.md          |  174 ---
 docs/en-us/1.2.0/user_doc/quick-start.md           |   65 --
 docs/en-us/1.2.0/user_doc/standalone-deployment.md |  456 --------
 docs/en-us/1.2.0/user_doc/system-manual.md         |  736 -------------
 docs/en-us/1.2.0/user_doc/upgrade.md               |   39 -
 docs/en-us/1.2.1/user_doc/architecture-design.md   |  316 ------
 docs/en-us/1.2.1/user_doc/backend-deployment.md    |  261 -----
 docs/en-us/1.2.1/user_doc/frontend-deployment.md   |  130 ---
 docs/en-us/1.2.1/user_doc/hardware-environment.md  |   48 -
 docs/en-us/1.2.1/user_doc/metadata-1.2.md          |  174 ---
 docs/en-us/1.2.1/user_doc/plugin-development.md    |   54 -
 docs/en-us/1.2.1/user_doc/quick-start.md           |   65 --
 docs/en-us/1.2.1/user_doc/system-manual.md         |  737 -------------
 docs/en-us/1.2.1/user_doc/upgrade.md               |   39 -
 docs/en-us/1.3.1/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.1/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.1/user_doc/configuration-file.md    |  406 -------
 docs/en-us/1.3.1/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.1/user_doc/metadata-1.3.md          |  185 ----
 docs/en-us/1.3.1/user_doc/quick-start.md           |   65 --
 docs/en-us/1.3.1/user_doc/standalone-deployment.md |  400 -------
 docs/en-us/1.3.1/user_doc/system-manual.md         |  836 --------------
 docs/en-us/1.3.1/user_doc/task-structure.md        | 1136 --------------------
 docs/en-us/1.3.1/user_doc/upgrade.md               |   78 --
 docs/en-us/1.3.2/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.2/user_doc/cluster-deployment.md    |  405 -------
 docs/en-us/1.3.2/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.2/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.2/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.2/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.2/user_doc/quick-start.md           |   65 --
 docs/en-us/1.3.2/user_doc/standalone-deployment.md |  342 ------
 docs/en-us/1.3.2/user_doc/system-manual.md         |  886 ---------------
 docs/en-us/1.3.2/user_doc/task-structure.md        | 1136 --------------------
 docs/en-us/1.3.2/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.3/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.3/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.3/user_doc/configuration-file.md    |  407 -------
 docs/en-us/1.3.3/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.3/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.3/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.3/user_doc/quick-start.md           |   65 --
 docs/en-us/1.3.3/user_doc/standalone-deployment.md |  342 ------
 docs/en-us/1.3.3/user_doc/system-manual.md         |  887 ---------------
 docs/en-us/1.3.3/user_doc/task-structure.md        | 1136 --------------------
 docs/en-us/1.3.3/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.4/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.4/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.4/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.4/user_doc/docker-deployment.md     |  148 ---
 docs/en-us/1.3.4/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.4/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.4/user_doc/load-balance.md          |   61 --
 docs/en-us/1.3.4/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.4/user_doc/quick-start.md           |   65 --
 docs/en-us/1.3.4/user_doc/standalone-deployment.md |  342 ------
 docs/en-us/1.3.4/user_doc/system-manual.md         |  888 ---------------
 docs/en-us/1.3.4/user_doc/task-structure.md        | 1131 -------------------
 docs/en-us/1.3.4/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.5/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.5/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.5/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.5/user_doc/docker-deployment.md     |  408 -------
 docs/en-us/1.3.5/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.5/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.5/user_doc/kubernetes-deployment.md |  196 ----
 docs/en-us/1.3.5/user_doc/load-balance.md          |   61 --
 docs/en-us/1.3.5/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.5/user_doc/open-api.md              |   38 -
 docs/en-us/1.3.5/user_doc/quick-start.md           |   65 --
 docs/en-us/1.3.5/user_doc/standalone-deployment.md |  342 ------
 docs/en-us/1.3.5/user_doc/system-manual.md         |  912 ----------------
 docs/en-us/1.3.5/user_doc/task-structure.md        | 1131 -------------------
 docs/en-us/1.3.5/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.6/user_doc/ambari-integration.md    |  132 ---
 docs/en-us/1.3.6/user_doc/architecture-design.md   |  332 ------
 docs/en-us/1.3.6/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.6/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.6/user_doc/docker-deployment.md     | 1019 ------------------
 docs/en-us/1.3.6/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.6/user_doc/flink-call.md            |  152 ---
 docs/en-us/1.3.6/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.6/user_doc/kubernetes-deployment.md |  751 -------------
 docs/en-us/1.3.6/user_doc/load-balance.md          |   61 --
 docs/en-us/1.3.6/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.6/user_doc/open-api.md              |   64 --
 docs/en-us/1.3.6/user_doc/quick-start.md           |   65 --
 .../1.3.6/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/en-us/1.3.6/user_doc/standalone-deployment.md |  345 ------
 docs/en-us/1.3.6/user_doc/system-manual.md         |  903 ----------------
 docs/en-us/1.3.6/user_doc/task-structure.md        | 1131 -------------------
 docs/en-us/1.3.6/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.8/user_doc/ambari-integration.md    |  128 ---
 docs/en-us/1.3.8/user_doc/architecture-design.md   |  331 ------
 docs/en-us/1.3.8/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.8/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.8/user_doc/docker-deployment.md     | 1033 ------------------
 docs/en-us/1.3.8/user_doc/expansion-reduction.md   |  250 -----
 docs/en-us/1.3.8/user_doc/flink-call.md            |  152 ---
 docs/en-us/1.3.8/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.8/user_doc/kubernetes-deployment.md |  751 -------------
 docs/en-us/1.3.8/user_doc/load-balance.md          |   61 --
 docs/en-us/1.3.8/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.8/user_doc/open-api.md              |   64 --
 .../1.3.8/user_doc/parameters-introduction.md      |   80 --
 docs/en-us/1.3.8/user_doc/quick-start.md           |   65 --
 .../1.3.8/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/en-us/1.3.8/user_doc/standalone-deployment.md |  345 ------
 docs/en-us/1.3.8/user_doc/system-manual.md         |  905 ----------------
 docs/en-us/1.3.8/user_doc/task-structure.md        | 1131 -------------------
 docs/en-us/1.3.8/user_doc/upgrade.md               |   80 --
 docs/en-us/1.3.9/user_doc/ambari-integration.md    |  128 ---
 docs/en-us/1.3.9/user_doc/architecture-design.md   |  331 ------
 docs/en-us/1.3.9/user_doc/cluster-deployment.md    |  406 -------
 docs/en-us/1.3.9/user_doc/configuration-file.md    |  408 -------
 docs/en-us/1.3.9/user_doc/docker-deployment.md     | 1034 ------------------
 docs/en-us/1.3.9/user_doc/expansion-reduction.md   |  249 -----
 docs/en-us/1.3.9/user_doc/flink-call.md            |  152 ---
 docs/en-us/1.3.9/user_doc/hardware-environment.md  |   47 -
 docs/en-us/1.3.9/user_doc/kubernetes-deployment.md |  751 -------------
 docs/en-us/1.3.9/user_doc/load-balance.md          |   61 --
 docs/en-us/1.3.9/user_doc/metadata-1.3.md          |  173 ---
 docs/en-us/1.3.9/user_doc/open-api.md              |   64 --
 .../1.3.9/user_doc/parameters-introduction.md      |   80 --
 docs/en-us/1.3.9/user_doc/quick-start.md           |   65 --
 .../1.3.9/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/en-us/1.3.9/user_doc/standalone-deployment.md |  345 ------
 docs/en-us/1.3.9/user_doc/standalone-server.md     |   42 -
 docs/en-us/1.3.9/user_doc/system-manual.md         |  904 ----------------
 docs/en-us/1.3.9/user_doc/task-structure.md        | 1131 -------------------
 docs/en-us/1.3.9/user_doc/upgrade.md               |   80 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.0/user_doc/architecture/configuration.md   |  409 -------
 docs/en-us/2.0.0/user_doc/architecture/design.md   |  332 ------
 .../2.0.0/user_doc/architecture/designplus.md      |   79 --
 .../2.0.0/user_doc/architecture/load-balance.md    |   61 --
 docs/en-us/2.0.0/user_doc/architecture/metadata.md |  173 ---
 .../2.0.0/user_doc/architecture/task-structure.md  | 1131 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   29 -
 docs/en-us/2.0.0/user_doc/guide/datasource/hive.md |   29 -
 .../user_doc/guide/datasource/introduction.md      |    7 -
 .../en-us/2.0.0/user_doc/guide/datasource/mysql.md |   16 -
 .../2.0.0/user_doc/guide/datasource/postgresql.md  |   15 -
 .../en-us/2.0.0/user_doc/guide/datasource/spark.md |   15 -
 .../2.0.0/user_doc/guide/expansion-reduction.md    |  250 -----
 docs/en-us/2.0.0/user_doc/guide/flink-call.md      |  152 ---
 docs/en-us/2.0.0/user_doc/guide/homepage.md        |    7 -
 .../2.0.0/user_doc/guide/installation/cluster.md   |   35 -
 .../2.0.0/user_doc/guide/installation/docker.md    | 1033 ------------------
 .../2.0.0/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  203 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/en-us/2.0.0/user_doc/guide/introduction.md    |    3 -
 docs/en-us/2.0.0/user_doc/guide/monitor.md         |   48 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/en-us/2.0.0/user_doc/guide/open-api.md        |   64 --
 .../2.0.0/user_doc/guide/parameter/built-in.md     |   48 -
 .../2.0.0/user_doc/guide/parameter/context.md      |   63 --
 .../en-us/2.0.0/user_doc/guide/parameter/global.md |   19 -
 docs/en-us/2.0.0/user_doc/guide/parameter/local.md |   19 -
 .../2.0.0/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.0/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.0/user_doc/guide/project/task-instance.md  |   12 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/2.0.0/user_doc/guide/quick-start.md     |   71 --
 docs/en-us/2.0.0/user_doc/guide/resource.md        |  112 --
 docs/en-us/2.0.0/user_doc/guide/security.md        |  163 ---
 docs/en-us/2.0.0/user_doc/guide/task/conditions.md |   36 -
 docs/en-us/2.0.0/user_doc/guide/task/datax.md      |   18 -
 docs/en-us/2.0.0/user_doc/guide/task/dependent.md  |   27 -
 docs/en-us/2.0.0/user_doc/guide/task/flink.md      |   23 -
 docs/en-us/2.0.0/user_doc/guide/task/http.md       |   23 -
 docs/en-us/2.0.0/user_doc/guide/task/map-reduce.md |   33 -
 docs/en-us/2.0.0/user_doc/guide/task/pigeon.md     |   19 -
 docs/en-us/2.0.0/user_doc/guide/task/python.md     |   15 -
 docs/en-us/2.0.0/user_doc/guide/task/shell.md      |   47 -
 docs/en-us/2.0.0/user_doc/guide/task/spark.md      |   22 -
 docs/en-us/2.0.0/user_doc/guide/task/sql.md        |   43 -
 .../2.0.0/user_doc/guide/task/stored-procedure.md  |   13 -
 .../en-us/2.0.0/user_doc/guide/task/sub-process.md |   14 -
 docs/en-us/2.0.0/user_doc/guide/task/switch.md     |   37 -
 docs/en-us/2.0.0/user_doc/guide/upgrade.md         |   64 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.1/user_doc/architecture/configuration.md   |  409 -------
 docs/en-us/2.0.1/user_doc/architecture/design.md   |  332 ------
 .../2.0.1/user_doc/architecture/designplus.md      |   79 --
 .../2.0.1/user_doc/architecture/load-balance.md    |   61 --
 docs/en-us/2.0.1/user_doc/architecture/metadata.md |  173 ---
 .../2.0.1/user_doc/architecture/task-structure.md  | 1131 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   29 -
 docs/en-us/2.0.1/user_doc/guide/datasource/hive.md |   38 -
 .../user_doc/guide/datasource/introduction.md      |    7 -
 .../en-us/2.0.1/user_doc/guide/datasource/mysql.md |   16 -
 .../2.0.1/user_doc/guide/datasource/postgresql.md  |   15 -
 .../en-us/2.0.1/user_doc/guide/datasource/spark.md |   15 -
 .../2.0.1/user_doc/guide/expansion-reduction.md    |  251 -----
 docs/en-us/2.0.1/user_doc/guide/flink-call.md      |  152 ---
 docs/en-us/2.0.1/user_doc/guide/homepage.md        |    7 -
 .../2.0.1/user_doc/guide/installation/cluster.md   |   35 -
 .../2.0.1/user_doc/guide/installation/docker.md    | 1033 ------------------
 .../2.0.1/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  203 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/en-us/2.0.1/user_doc/guide/introduction.md    |    3 -
 docs/en-us/2.0.1/user_doc/guide/monitor.md         |   48 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/en-us/2.0.1/user_doc/guide/open-api.md        |   64 --
 .../2.0.1/user_doc/guide/parameter/built-in.md     |   48 -
 .../2.0.1/user_doc/guide/parameter/context.md      |   63 --
 .../en-us/2.0.1/user_doc/guide/parameter/global.md |   19 -
 docs/en-us/2.0.1/user_doc/guide/parameter/local.md |   19 -
 .../2.0.1/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.1/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.1/user_doc/guide/project/task-instance.md  |   12 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/2.0.1/user_doc/guide/quick-start.md     |   71 --
 docs/en-us/2.0.1/user_doc/guide/resource.md        |  112 --
 docs/en-us/2.0.1/user_doc/guide/security.md        |  163 ---
 docs/en-us/2.0.1/user_doc/guide/task/conditions.md |   36 -
 docs/en-us/2.0.1/user_doc/guide/task/datax.md      |   18 -
 docs/en-us/2.0.1/user_doc/guide/task/dependent.md  |   27 -
 docs/en-us/2.0.1/user_doc/guide/task/flink.md      |   23 -
 docs/en-us/2.0.1/user_doc/guide/task/http.md       |   23 -
 docs/en-us/2.0.1/user_doc/guide/task/map-reduce.md |   33 -
 docs/en-us/2.0.1/user_doc/guide/task/pigeon.md     |   19 -
 docs/en-us/2.0.1/user_doc/guide/task/python.md     |   15 -
 docs/en-us/2.0.1/user_doc/guide/task/shell.md      |   47 -
 docs/en-us/2.0.1/user_doc/guide/task/spark.md      |   22 -
 docs/en-us/2.0.1/user_doc/guide/task/sql.md        |   43 -
 .../2.0.1/user_doc/guide/task/stored-procedure.md  |   13 -
 .../en-us/2.0.1/user_doc/guide/task/sub-process.md |   14 -
 docs/en-us/2.0.1/user_doc/guide/task/switch.md     |   37 -
 docs/en-us/2.0.1/user_doc/guide/upgrade.md         |   63 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.2/user_doc/architecture/configuration.md   |  409 -------
 docs/en-us/2.0.2/user_doc/architecture/design.md   |  339 ------
 .../2.0.2/user_doc/architecture/designplus.md      |   79 --
 .../2.0.2/user_doc/architecture/load-balance.md    |   61 --
 docs/en-us/2.0.2/user_doc/architecture/metadata.md |  173 ---
 .../2.0.2/user_doc/architecture/task-structure.md  | 1131 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   13 -
 docs/en-us/2.0.2/user_doc/guide/datasource/hive.md |   38 -
 .../user_doc/guide/datasource/introduction.md      |    7 -
 .../en-us/2.0.2/user_doc/guide/datasource/mysql.md |   16 -
 .../2.0.2/user_doc/guide/datasource/postgresql.md  |   15 -
 .../en-us/2.0.2/user_doc/guide/datasource/spark.md |   15 -
 .../2.0.2/user_doc/guide/expansion-reduction.md    |  251 -----
 docs/en-us/2.0.2/user_doc/guide/flink-call.md      |  152 ---
 docs/en-us/2.0.2/user_doc/guide/homepage.md        |    7 -
 .../2.0.2/user_doc/guide/installation/cluster.md   |   36 -
 .../2.0.2/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.2/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  192 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/en-us/2.0.2/user_doc/guide/introduction.md    |    3 -
 docs/en-us/2.0.2/user_doc/guide/monitor.md         |   48 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/en-us/2.0.2/user_doc/guide/open-api.md        |   64 --
 .../2.0.2/user_doc/guide/parameter/built-in.md     |   48 -
 .../2.0.2/user_doc/guide/parameter/context.md      |   63 --
 .../en-us/2.0.2/user_doc/guide/parameter/global.md |   19 -
 docs/en-us/2.0.2/user_doc/guide/parameter/local.md |   19 -
 .../2.0.2/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.2/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.2/user_doc/guide/project/task-instance.md  |   12 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/2.0.2/user_doc/guide/quick-start.md     |   71 --
 docs/en-us/2.0.2/user_doc/guide/resource.md        |  112 --
 docs/en-us/2.0.2/user_doc/guide/security.md        |  163 ---
 docs/en-us/2.0.2/user_doc/guide/task/conditions.md |   36 -
 docs/en-us/2.0.2/user_doc/guide/task/datax.md      |   18 -
 docs/en-us/2.0.2/user_doc/guide/task/dependent.md  |   27 -
 docs/en-us/2.0.2/user_doc/guide/task/flink.md      |   23 -
 docs/en-us/2.0.2/user_doc/guide/task/http.md       |   23 -
 docs/en-us/2.0.2/user_doc/guide/task/map-reduce.md |   33 -
 docs/en-us/2.0.2/user_doc/guide/task/pigeon.md     |   19 -
 docs/en-us/2.0.2/user_doc/guide/task/python.md     |   15 -
 docs/en-us/2.0.2/user_doc/guide/task/shell.md      |   47 -
 docs/en-us/2.0.2/user_doc/guide/task/spark.md      |   22 -
 docs/en-us/2.0.2/user_doc/guide/task/sql.md        |   43 -
 .../2.0.2/user_doc/guide/task/stored-procedure.md  |   13 -
 .../en-us/2.0.2/user_doc/guide/task/sub-process.md |   14 -
 docs/en-us/2.0.2/user_doc/guide/task/switch.md     |   37 -
 docs/en-us/2.0.2/user_doc/guide/upgrade.md         |   63 --
 .../About_DolphinScheduler.md                      |   19 -
 docs/en-us/2.0.3/user_doc/architecture/cache.md    |   42 -
 .../2.0.3/user_doc/architecture/configuration.md   |  422 --------
 docs/en-us/2.0.3/user_doc/architecture/design.md   |  338 ------
 .../2.0.3/user_doc/architecture/designplus.md      |   79 --
 .../2.0.3/user_doc/architecture/load-balance.md    |   59 -
 docs/en-us/2.0.3/user_doc/architecture/metadata.md |  189 ----
 .../2.0.3/user_doc/architecture/task-structure.md  | 1114 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   14 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   15 -
 docs/en-us/2.0.3/user_doc/guide/datasource/hive.md |   42 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../en-us/2.0.3/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.3/user_doc/guide/datasource/postgresql.md  |   15 -
 .../en-us/2.0.3/user_doc/guide/datasource/spark.md |   15 -
 .../2.0.3/user_doc/guide/expansion-reduction.md    |  251 -----
 docs/en-us/2.0.3/user_doc/guide/flink-call.md      |  152 ---
 docs/en-us/2.0.3/user_doc/guide/homepage.md        |    7 -
 .../2.0.3/user_doc/guide/installation/cluster.md   |   40 -
 .../2.0.3/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.3/user_doc/guide/installation/hardware.md  |   49 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  192 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/en-us/2.0.3/user_doc/guide/introduction.md    |    3 -
 docs/en-us/2.0.3/user_doc/guide/monitor.md         |   47 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/en-us/2.0.3/user_doc/guide/open-api.md        |   64 --
 .../2.0.3/user_doc/guide/parameter/built-in.md     |   48 -
 .../2.0.3/user_doc/guide/parameter/context.md      |   63 --
 .../en-us/2.0.3/user_doc/guide/parameter/global.md |   19 -
 docs/en-us/2.0.3/user_doc/guide/parameter/local.md |   19 -
 .../2.0.3/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.3/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.3/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/2.0.3/user_doc/guide/quick-start.md     |   71 --
 docs/en-us/2.0.3/user_doc/guide/resource.md        |  112 --
 docs/en-us/2.0.3/user_doc/guide/security.md        |  162 ---
 docs/en-us/2.0.3/user_doc/guide/task/conditions.md |   36 -
 docs/en-us/2.0.3/user_doc/guide/task/datax.md      |   17 -
 docs/en-us/2.0.3/user_doc/guide/task/dependent.md  |   27 -
 docs/en-us/2.0.3/user_doc/guide/task/flink.md      |   65 --
 docs/en-us/2.0.3/user_doc/guide/task/http.md       |   22 -
 docs/en-us/2.0.3/user_doc/guide/task/map-reduce.md |   67 --
 docs/en-us/2.0.3/user_doc/guide/task/pigeon.md     |   19 -
 docs/en-us/2.0.3/user_doc/guide/task/python.md     |   15 -
 docs/en-us/2.0.3/user_doc/guide/task/shell.md      |   47 -
 docs/en-us/2.0.3/user_doc/guide/task/spark.md      |   62 --
 docs/en-us/2.0.3/user_doc/guide/task/sql.md        |   43 -
 .../2.0.3/user_doc/guide/task/stored-procedure.md  |   13 -
 .../en-us/2.0.3/user_doc/guide/task/sub-process.md |   14 -
 docs/en-us/2.0.3/user_doc/guide/task/switch.md     |   37 -
 docs/en-us/2.0.3/user_doc/guide/upgrade.md         |   62 --
 .../About_DolphinScheduler.md                      |   12 -
 docs/en-us/2.0.5/user_doc/architecture/cache.md    |   42 -
 .../2.0.5/user_doc/architecture/configuration.md   |  409 -------
 docs/en-us/2.0.5/user_doc/architecture/design.md   |  339 ------
 .../2.0.5/user_doc/architecture/designplus.md      |   79 --
 .../2.0.5/user_doc/architecture/load-balance.md    |   61 --
 docs/en-us/2.0.5/user_doc/architecture/metadata.md |  173 ---
 .../2.0.5/user_doc/architecture/task-structure.md  | 1131 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 docs/en-us/2.0.5/user_doc/guide/alert/dingtalk.md  |   26 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   13 -
 docs/en-us/2.0.5/user_doc/guide/datasource/hive.md |   42 -
 .../user_doc/guide/datasource/introduction.md      |    7 -
 .../en-us/2.0.5/user_doc/guide/datasource/mysql.md |   16 -
 .../2.0.5/user_doc/guide/datasource/postgresql.md  |   15 -
 .../en-us/2.0.5/user_doc/guide/datasource/spark.md |   15 -
 .../2.0.5/user_doc/guide/expansion-reduction.md    |  251 -----
 docs/en-us/2.0.5/user_doc/guide/flink-call.md      |  152 ---
 docs/en-us/2.0.5/user_doc/guide/homepage.md        |    7 -
 .../2.0.5/user_doc/guide/installation/cluster.md   |   36 -
 .../2.0.5/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.5/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  192 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/en-us/2.0.5/user_doc/guide/introduction.md    |    3 -
 docs/en-us/2.0.5/user_doc/guide/monitor.md         |   48 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/en-us/2.0.5/user_doc/guide/open-api.md        |   64 --
 .../2.0.5/user_doc/guide/parameter/built-in.md     |   48 -
 .../2.0.5/user_doc/guide/parameter/context.md      |   63 --
 .../en-us/2.0.5/user_doc/guide/parameter/global.md |   19 -
 docs/en-us/2.0.5/user_doc/guide/parameter/local.md |   19 -
 .../2.0.5/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.5/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.5/user_doc/guide/project/task-instance.md  |   12 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/2.0.5/user_doc/guide/quick-start.md     |   71 --
 docs/en-us/2.0.5/user_doc/guide/resource.md        |  120 ---
 docs/en-us/2.0.5/user_doc/guide/security.md        |  163 ---
 docs/en-us/2.0.5/user_doc/guide/task/conditions.md |   36 -
 docs/en-us/2.0.5/user_doc/guide/task/datax.md      |   18 -
 docs/en-us/2.0.5/user_doc/guide/task/dependent.md  |   27 -
 docs/en-us/2.0.5/user_doc/guide/task/flink.md      |   65 --
 docs/en-us/2.0.5/user_doc/guide/task/http.md       |   23 -
 docs/en-us/2.0.5/user_doc/guide/task/map-reduce.md |   66 --
 docs/en-us/2.0.5/user_doc/guide/task/pigeon.md     |   19 -
 docs/en-us/2.0.5/user_doc/guide/task/python.md     |   55 -
 docs/en-us/2.0.5/user_doc/guide/task/shell.md      |   47 -
 docs/en-us/2.0.5/user_doc/guide/task/spark.md      |   62 --
 docs/en-us/2.0.5/user_doc/guide/task/sql.md        |   43 -
 .../2.0.5/user_doc/guide/task/stored-procedure.md  |   13 -
 .../en-us/2.0.5/user_doc/guide/task/sub-process.md |   14 -
 docs/en-us/2.0.5/user_doc/guide/task/switch.md     |   37 -
 docs/en-us/2.0.5/user_doc/guide/upgrade.md         |   63 --
 docs/en-us/dev/user_doc/about/glossary.md          |   79 --
 docs/en-us/dev/user_doc/about/hardware.md          |   48 -
 docs/en-us/dev/user_doc/about/introduction.md      |   19 -
 docs/en-us/dev/user_doc/architecture/cache.md      |   42 -
 .../dev/user_doc/architecture/configuration.md     |  424 --------
 docs/en-us/dev/user_doc/architecture/design.md     |  282 -----
 .../dev/user_doc/architecture/load-balance.md      |   59 -
 docs/en-us/dev/user_doc/architecture/metadata.md   |  193 ----
 .../dev/user_doc/architecture/task-structure.md    | 1114 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   14 -
 docs/en-us/dev/user_doc/guide/alert/dingtalk.md    |   27 -
 .../user_doc/guide/alert/enterprise-webexteams.md  |   64 --
 .../dev/user_doc/guide/alert/enterprise-wechat.md  |   14 -
 docs/en-us/dev/user_doc/guide/alert/telegram.md    |   42 -
 docs/en-us/dev/user_doc/guide/datasource/hive.md   |   39 -
 .../dev/user_doc/guide/datasource/introduction.md  |    6 -
 docs/en-us/dev/user_doc/guide/datasource/mysql.md  |   14 -
 .../dev/user_doc/guide/datasource/postgresql.md    |   13 -
 docs/en-us/dev/user_doc/guide/datasource/spark.md  |   13 -
 .../dev/user_doc/guide/expansion-reduction.md      |  248 -----
 docs/en-us/dev/user_doc/guide/flink-call.md        |  123 ---
 docs/en-us/dev/user_doc/guide/homepage.md          |    5 -
 .../dev/user_doc/guide/installation/cluster.md     |   39 -
 .../dev/user_doc/guide/installation/docker.md      | 1025 ------------------
 .../dev/user_doc/guide/installation/kubernetes.md  |  754 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  201 ----
 .../guide/installation/skywalking-agent.md         |   74 --
 .../dev/user_doc/guide/installation/standalone.md  |   42 -
 docs/en-us/dev/user_doc/guide/monitor.md           |   32 -
 docs/en-us/dev/user_doc/guide/open-api.md          |   69 --
 .../en-us/dev/user_doc/guide/parameter/built-in.md |   48 -
 docs/en-us/dev/user_doc/guide/parameter/context.md |   66 --
 docs/en-us/dev/user_doc/guide/parameter/global.md  |   19 -
 docs/en-us/dev/user_doc/guide/parameter/local.md   |   19 -
 .../en-us/dev/user_doc/guide/parameter/priority.md |   40 -
 .../dev/user_doc/guide/project/project-list.md     |   18 -
 .../dev/user_doc/guide/project/task-instance.md    |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  114 --
 .../user_doc/guide/project/workflow-instance.md    |   62 --
 docs/en-us/dev/user_doc/guide/resource.md          |  165 ---
 docs/en-us/dev/user_doc/guide/security.md          |  151 ---
 docs/en-us/dev/user_doc/guide/start/docker.md      | 1024 ------------------
 docs/en-us/dev/user_doc/guide/start/quick-start.md |   62 --
 docs/en-us/dev/user_doc/guide/task/conditions.md   |   36 -
 docs/en-us/dev/user_doc/guide/task/datax.md        |   63 --
 docs/en-us/dev/user_doc/guide/task/dependent.md    |   27 -
 docs/en-us/dev/user_doc/guide/task/emr.md          |   60 --
 docs/en-us/dev/user_doc/guide/task/flink.md        |   69 --
 docs/en-us/dev/user_doc/guide/task/http.md         |   47 -
 docs/en-us/dev/user_doc/guide/task/map-reduce.md   |   73 --
 docs/en-us/dev/user_doc/guide/task/pigeon.md       |   19 -
 docs/en-us/dev/user_doc/guide/task/python.md       |   55 -
 docs/en-us/dev/user_doc/guide/task/shell.md        |   43 -
 docs/en-us/dev/user_doc/guide/task/spark.md        |   68 --
 docs/en-us/dev/user_doc/guide/task/sql.md          |   43 -
 .../dev/user_doc/guide/task/stored-procedure.md    |   13 -
 docs/en-us/dev/user_doc/guide/task/sub-process.md  |   46 -
 docs/en-us/dev/user_doc/guide/task/switch.md       |   39 -
 docs/en-us/dev/user_doc/guide/upgrade.md           |   84 --
 docs/en-us/release/faq.md                          |  708 ------------
 docs/en-us/release/history-versions.md             |   71 --
 docs/zh-cn/1.2.0/user_doc/backend-deployment.md    |  253 -----
 docs/zh-cn/1.2.0/user_doc/cluster-deployment.md    |  503 ---------
 docs/zh-cn/1.2.0/user_doc/deployparam.md           |  388 -------
 docs/zh-cn/1.2.0/user_doc/frontend-deployment.md   |  116 --
 docs/zh-cn/1.2.0/user_doc/hardware-environment.md  |   48 -
 .../1.2.0/user_doc/masterserver-code-analysis.md   |  388 -------
 docs/zh-cn/1.2.0/user_doc/metadata-1.2.md          |  184 ----
 docs/zh-cn/1.2.0/user_doc/quick-start.md           |   58 -
 docs/zh-cn/1.2.0/user_doc/standalone-deployment.md |  457 --------
 docs/zh-cn/1.2.0/user_doc/system-manual.md         |  842 ---------------
 docs/zh-cn/1.2.0/user_doc/upgrade.md               |   38 -
 docs/zh-cn/1.2.1/user_doc/architecture-design.md   |  304 ------
 docs/zh-cn/1.2.1/user_doc/backend-deployment.md    |  253 -----
 docs/zh-cn/1.2.1/user_doc/cluster-deployment.md    |  381 -------
 docs/zh-cn/1.2.1/user_doc/deployparam.md           |  286 -----
 docs/zh-cn/1.2.1/user_doc/frontend-deployment.md   |  116 --
 docs/zh-cn/1.2.1/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.2.1/user_doc/metadata-1.2.md          |  184 ----
 docs/zh-cn/1.2.1/user_doc/microbench.md            |   97 --
 docs/zh-cn/1.2.1/user_doc/plugin-development.md    |   54 -
 docs/zh-cn/1.2.1/user_doc/quick-start.md           |   58 -
 docs/zh-cn/1.2.1/user_doc/standalone-deployment.md |  453 --------
 docs/zh-cn/1.2.1/user_doc/system-manual.md         |  842 ---------------
 docs/zh-cn/1.2.1/user_doc/upgrade.md               |   38 -
 docs/zh-cn/1.3.1/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.1/user_doc/cluster-deployment.md    |  463 --------
 docs/zh-cn/1.3.1/user_doc/configuration-file.md    |  406 -------
 docs/zh-cn/1.3.1/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.3.1/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.1/user_doc/quick-start.md           |   58 -
 docs/zh-cn/1.3.1/user_doc/standalone-deployment.md |  338 ------
 docs/zh-cn/1.3.1/user_doc/system-manual.md         |  849 ---------------
 docs/zh-cn/1.3.1/user_doc/task-structure.md        | 1136 --------------------
 docs/zh-cn/1.3.1/user_doc/upgrade.md               |   78 --
 docs/zh-cn/1.3.2/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.2/user_doc/cluster-deployment.md    |  463 --------
 docs/zh-cn/1.3.2/user_doc/configuration-file.md    |  405 -------
 docs/zh-cn/1.3.2/user_doc/expansion-reduction.md   |  269 -----
 docs/zh-cn/1.3.2/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.3.2/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.2/user_doc/quick-start.md           |   60 --
 docs/zh-cn/1.3.2/user_doc/standalone-deployment.md |  338 ------
 docs/zh-cn/1.3.2/user_doc/system-manual.md         |  862 ---------------
 docs/zh-cn/1.3.2/user_doc/task-structure.md        | 1136 --------------------
 docs/zh-cn/1.3.2/user_doc/upgrade.md               |   83 --
 docs/zh-cn/1.3.3/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.3/user_doc/cluster-deployment.md    |  473 --------
 docs/zh-cn/1.3.3/user_doc/configuration-file.md    |  405 -------
 docs/zh-cn/1.3.3/user_doc/expansion-reduction.md   |  252 -----
 docs/zh-cn/1.3.3/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.3.3/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.3/user_doc/quick-start.md           |   58 -
 docs/zh-cn/1.3.3/user_doc/standalone-deployment.md |  350 ------
 docs/zh-cn/1.3.3/user_doc/system-manual.md         |  864 ---------------
 docs/zh-cn/1.3.3/user_doc/task-structure.md        | 1136 --------------------
 docs/zh-cn/1.3.3/user_doc/upgrade.md               |   82 --
 docs/zh-cn/1.3.4/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.4/user_doc/cluster-deployment.md    |  473 --------
 docs/zh-cn/1.3.4/user_doc/configuration-file.md    |  406 -------
 docs/zh-cn/1.3.4/user_doc/docker-deployment.md     |  149 ---
 docs/zh-cn/1.3.4/user_doc/expansion-reduction.md   |  252 -----
 docs/zh-cn/1.3.4/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.3.4/user_doc/load-balance.md          |   58 -
 docs/zh-cn/1.3.4/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.4/user_doc/quick-start.md           |   58 -
 docs/zh-cn/1.3.4/user_doc/standalone-deployment.md |  338 ------
 docs/zh-cn/1.3.4/user_doc/system-manual.md         |  863 ---------------
 docs/zh-cn/1.3.4/user_doc/task-structure.md        | 1134 -------------------
 docs/zh-cn/1.3.4/user_doc/upgrade.md               |   82 --
 docs/zh-cn/1.3.5/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.5/user_doc/cluster-deployment.md    |  473 --------
 docs/zh-cn/1.3.5/user_doc/configuration-file.md    |  406 -------
 docs/zh-cn/1.3.5/user_doc/docker-deployment.md     |  409 -------
 docs/zh-cn/1.3.5/user_doc/expansion-reduction.md   |  253 -----
 docs/zh-cn/1.3.5/user_doc/hardware-environment.md  |   48 -
 docs/zh-cn/1.3.5/user_doc/kubernetes-deployment.md |  196 ----
 docs/zh-cn/1.3.5/user_doc/load-balance.md          |   58 -
 docs/zh-cn/1.3.5/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.5/user_doc/open-api.md              |   38 -
 docs/zh-cn/1.3.5/user_doc/quick-start.md           |   59 -
 docs/zh-cn/1.3.5/user_doc/standalone-deployment.md |  338 ------
 docs/zh-cn/1.3.5/user_doc/system-manual.md         |  865 ---------------
 docs/zh-cn/1.3.5/user_doc/task-structure.md        | 1134 -------------------
 docs/zh-cn/1.3.5/user_doc/upgrade.md               |   83 --
 docs/zh-cn/1.3.6/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.6/user_doc/cluster-deployment.md    |  486 ---------
 docs/zh-cn/1.3.6/user_doc/configuration-file.md    |  405 -------
 docs/zh-cn/1.3.6/user_doc/docker-deployment.md     | 1019 ------------------
 docs/zh-cn/1.3.6/user_doc/expansion-reduction.md   |  252 -----
 docs/zh-cn/1.3.6/user_doc/flink-call.md            |  150 ---
 docs/zh-cn/1.3.6/user_doc/hardware-environment.md  |   47 -
 docs/zh-cn/1.3.6/user_doc/kubernetes-deployment.md |  751 -------------
 docs/zh-cn/1.3.6/user_doc/load-balance.md          |   58 -
 docs/zh-cn/1.3.6/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.6/user_doc/open-api.md              |   65 --
 docs/zh-cn/1.3.6/user_doc/quick-start.md           |   61 --
 .../1.3.6/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/zh-cn/1.3.6/user_doc/standalone-deployment.md |  341 ------
 docs/zh-cn/1.3.6/user_doc/system-manual.md         |  863 ---------------
 docs/zh-cn/1.3.6/user_doc/task-structure.md        | 1134 -------------------
 docs/zh-cn/1.3.6/user_doc/upgrade.md               |   82 --
 docs/zh-cn/1.3.8/user_doc/architecture-design.md   |  331 ------
 docs/zh-cn/1.3.8/user_doc/cluster-deployment.md    |  486 ---------
 docs/zh-cn/1.3.8/user_doc/configuration-file.md    |  405 -------
 docs/zh-cn/1.3.8/user_doc/docker-deployment.md     | 1033 ------------------
 docs/zh-cn/1.3.8/user_doc/expansion-reduction.md   |  253 -----
 docs/zh-cn/1.3.8/user_doc/flink-call.md            |  150 ---
 docs/zh-cn/1.3.8/user_doc/hardware-environment.md  |   47 -
 docs/zh-cn/1.3.8/user_doc/kubernetes-deployment.md |  751 -------------
 docs/zh-cn/1.3.8/user_doc/load-balance.md          |   58 -
 docs/zh-cn/1.3.8/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.8/user_doc/open-api.md              |   65 --
 .../1.3.8/user_doc/parameters-introduction.md      |   86 --
 docs/zh-cn/1.3.8/user_doc/quick-start.md           |   61 --
 .../1.3.8/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/zh-cn/1.3.8/user_doc/standalone-deployment.md |  341 ------
 docs/zh-cn/1.3.8/user_doc/system-manual.md         |  866 ---------------
 docs/zh-cn/1.3.8/user_doc/task-structure.md        | 1134 -------------------
 docs/zh-cn/1.3.8/user_doc/upgrade.md               |   82 --
 docs/zh-cn/1.3.9/user_doc/architecture-design.md   |  332 ------
 docs/zh-cn/1.3.9/user_doc/cluster-deployment.md    |  486 ---------
 docs/zh-cn/1.3.9/user_doc/configuration-file.md    |  406 -------
 docs/zh-cn/1.3.9/user_doc/docker-deployment.md     | 1033 ------------------
 docs/zh-cn/1.3.9/user_doc/expansion-reduction.md   |  252 -----
 docs/zh-cn/1.3.9/user_doc/flink-call.md            |  150 ---
 docs/zh-cn/1.3.9/user_doc/hardware-environment.md  |   47 -
 docs/zh-cn/1.3.9/user_doc/kubernetes-deployment.md |  751 -------------
 docs/zh-cn/1.3.9/user_doc/load-balance.md          |   58 -
 docs/zh-cn/1.3.9/user_doc/metadata-1.3.md          |  185 ----
 docs/zh-cn/1.3.9/user_doc/open-api.md              |   65 --
 .../1.3.9/user_doc/parameters-introduction.md      |   86 --
 docs/zh-cn/1.3.9/user_doc/quick-start.md           |   61 --
 .../1.3.9/user_doc/skywalking-agent-deployment.md  |   74 --
 docs/zh-cn/1.3.9/user_doc/standalone-deployment.md |  341 ------
 docs/zh-cn/1.3.9/user_doc/standalone-server.md     |   43 -
 docs/zh-cn/1.3.9/user_doc/system-manual.md         |  863 ---------------
 docs/zh-cn/1.3.9/user_doc/task-structure.md        | 1134 -------------------
 docs/zh-cn/1.3.9/user_doc/upgrade.md               |   82 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.0/user_doc/architecture/configuration.md   |  407 -------
 docs/zh-cn/2.0.0/user_doc/architecture/design.md   |  260 -----
 .../2.0.0/user_doc/architecture/designplus.md      |   58 -
 .../2.0.0/user_doc/architecture/load-balance.md    |   58 -
 docs/zh-cn/2.0.0/user_doc/architecture/metadata.md |  185 ----
 .../2.0.0/user_doc/architecture/task-structure.md  | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   29 -
 docs/zh-cn/2.0.0/user_doc/guide/datasource/hive.md |   29 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../zh-cn/2.0.0/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.0/user_doc/guide/datasource/postgresql.md  |   15 -
 .../zh-cn/2.0.0/user_doc/guide/datasource/spark.md |   21 -
 .../2.0.0/user_doc/guide/expansion-reduction.md    |  252 -----
 docs/zh-cn/2.0.0/user_doc/guide/flink-call.md      |  150 ---
 docs/zh-cn/2.0.0/user_doc/guide/homepage.md        |    7 -
 .../2.0.0/user_doc/guide/installation/cluster.md   |   35 -
 .../2.0.0/user_doc/guide/installation/docker.md    | 1033 ------------------
 .../2.0.0/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  202 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/zh-cn/2.0.0/user_doc/guide/introduction.md    |    3 -
 docs/zh-cn/2.0.0/user_doc/guide/monitor.md         |   49 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/zh-cn/2.0.0/user_doc/guide/open-api.md        |   65 --
 .../2.0.0/user_doc/guide/parameter/built-in.md     |   49 -
 .../2.0.0/user_doc/guide/parameter/context.md      |   69 --
 .../zh-cn/2.0.0/user_doc/guide/parameter/global.md |   19 -
 docs/zh-cn/2.0.0/user_doc/guide/parameter/local.md |   19 -
 .../2.0.0/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.0/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.0/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  111 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/2.0.0/user_doc/guide/quick-start.md     |   66 --
 docs/zh-cn/2.0.0/user_doc/guide/resource.md        |  109 --
 docs/zh-cn/2.0.0/user_doc/guide/security.md        |  166 ---
 docs/zh-cn/2.0.0/user_doc/guide/task/conditions.md |   36 -
 docs/zh-cn/2.0.0/user_doc/guide/task/datax.md      |   17 -
 docs/zh-cn/2.0.0/user_doc/guide/task/dependent.md  |   27 -
 docs/zh-cn/2.0.0/user_doc/guide/task/flink.md      |   23 -
 docs/zh-cn/2.0.0/user_doc/guide/task/http.md       |   22 -
 docs/zh-cn/2.0.0/user_doc/guide/task/map-reduce.md |   34 -
 docs/zh-cn/2.0.0/user_doc/guide/task/pigeon.md     |   19 -
 docs/zh-cn/2.0.0/user_doc/guide/task/python.md     |   15 -
 docs/zh-cn/2.0.0/user_doc/guide/task/shell.md      |   48 -
 docs/zh-cn/2.0.0/user_doc/guide/task/spark.md      |   22 -
 docs/zh-cn/2.0.0/user_doc/guide/task/sql.md        |   43 -
 .../2.0.0/user_doc/guide/task/stored-procedure.md  |   12 -
 .../zh-cn/2.0.0/user_doc/guide/task/sub-process.md |   14 -
 docs/zh-cn/2.0.0/user_doc/guide/task/switch.md     |   37 -
 docs/zh-cn/2.0.0/user_doc/guide/upgrade.md         |   67 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.1/user_doc/architecture/configuration.md   |  407 -------
 docs/zh-cn/2.0.1/user_doc/architecture/design.md   |  260 -----
 .../2.0.1/user_doc/architecture/designplus.md      |   58 -
 .../2.0.1/user_doc/architecture/load-balance.md    |   58 -
 docs/zh-cn/2.0.1/user_doc/architecture/metadata.md |  185 ----
 .../2.0.1/user_doc/architecture/task-structure.md  | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   29 -
 docs/zh-cn/2.0.1/user_doc/guide/datasource/hive.md |   42 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../zh-cn/2.0.1/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.1/user_doc/guide/datasource/postgresql.md  |   15 -
 .../zh-cn/2.0.1/user_doc/guide/datasource/spark.md |   21 -
 .../2.0.1/user_doc/guide/expansion-reduction.md    |  252 -----
 docs/zh-cn/2.0.1/user_doc/guide/flink-call.md      |  150 ---
 docs/zh-cn/2.0.1/user_doc/guide/homepage.md        |    7 -
 .../2.0.1/user_doc/guide/installation/cluster.md   |   35 -
 .../2.0.1/user_doc/guide/installation/docker.md    | 1033 ------------------
 .../2.0.1/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  203 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/zh-cn/2.0.1/user_doc/guide/introduction.md    |    3 -
 docs/zh-cn/2.0.1/user_doc/guide/monitor.md         |   49 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/zh-cn/2.0.1/user_doc/guide/open-api.md        |   65 --
 .../2.0.1/user_doc/guide/parameter/built-in.md     |   49 -
 .../2.0.1/user_doc/guide/parameter/context.md      |   69 --
 .../zh-cn/2.0.1/user_doc/guide/parameter/global.md |   19 -
 docs/zh-cn/2.0.1/user_doc/guide/parameter/local.md |   19 -
 .../2.0.1/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.1/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.1/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  111 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/2.0.1/user_doc/guide/quick-start.md     |   66 --
 docs/zh-cn/2.0.1/user_doc/guide/resource.md        |  109 --
 docs/zh-cn/2.0.1/user_doc/guide/security.md        |  166 ---
 docs/zh-cn/2.0.1/user_doc/guide/task/conditions.md |   36 -
 docs/zh-cn/2.0.1/user_doc/guide/task/datax.md      |   17 -
 docs/zh-cn/2.0.1/user_doc/guide/task/dependent.md  |   27 -
 docs/zh-cn/2.0.1/user_doc/guide/task/flink.md      |   23 -
 docs/zh-cn/2.0.1/user_doc/guide/task/http.md       |   22 -
 docs/zh-cn/2.0.1/user_doc/guide/task/map-reduce.md |   34 -
 docs/zh-cn/2.0.1/user_doc/guide/task/pigeon.md     |   19 -
 docs/zh-cn/2.0.1/user_doc/guide/task/python.md     |   15 -
 docs/zh-cn/2.0.1/user_doc/guide/task/shell.md      |   48 -
 docs/zh-cn/2.0.1/user_doc/guide/task/spark.md      |   22 -
 docs/zh-cn/2.0.1/user_doc/guide/task/sql.md        |   43 -
 .../2.0.1/user_doc/guide/task/stored-procedure.md  |   12 -
 .../zh-cn/2.0.1/user_doc/guide/task/sub-process.md |   14 -
 docs/zh-cn/2.0.1/user_doc/guide/task/switch.md     |   37 -
 docs/zh-cn/2.0.1/user_doc/guide/upgrade.md         |   67 --
 .../About_DolphinScheduler.md                      |   10 -
 .../2.0.2/user_doc/architecture/configuration.md   |  407 -------
 docs/zh-cn/2.0.2/user_doc/architecture/design.md   |  267 -----
 .../2.0.2/user_doc/architecture/designplus.md      |   58 -
 .../2.0.2/user_doc/architecture/load-balance.md    |   58 -
 docs/zh-cn/2.0.2/user_doc/architecture/metadata.md |  185 ----
 .../2.0.2/user_doc/architecture/task-structure.md  | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   13 -
 docs/zh-cn/2.0.2/user_doc/guide/datasource/hive.md |   42 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../zh-cn/2.0.2/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.2/user_doc/guide/datasource/postgresql.md  |   15 -
 .../zh-cn/2.0.2/user_doc/guide/datasource/spark.md |   21 -
 .../2.0.2/user_doc/guide/expansion-reduction.md    |  252 -----
 docs/zh-cn/2.0.2/user_doc/guide/flink-call.md      |  150 ---
 docs/zh-cn/2.0.2/user_doc/guide/homepage.md        |    7 -
 .../2.0.2/user_doc/guide/installation/cluster.md   |   36 -
 .../2.0.2/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.2/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  191 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/zh-cn/2.0.2/user_doc/guide/introduction.md    |    3 -
 docs/zh-cn/2.0.2/user_doc/guide/monitor.md         |   49 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/zh-cn/2.0.2/user_doc/guide/open-api.md        |   65 --
 .../2.0.2/user_doc/guide/parameter/built-in.md     |   49 -
 .../2.0.2/user_doc/guide/parameter/context.md      |   69 --
 .../zh-cn/2.0.2/user_doc/guide/parameter/global.md |   19 -
 docs/zh-cn/2.0.2/user_doc/guide/parameter/local.md |   19 -
 .../2.0.2/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.2/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.2/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  111 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/2.0.2/user_doc/guide/quick-start.md     |   66 --
 docs/zh-cn/2.0.2/user_doc/guide/resource.md        |  109 --
 docs/zh-cn/2.0.2/user_doc/guide/security.md        |  166 ---
 docs/zh-cn/2.0.2/user_doc/guide/task/conditions.md |   36 -
 docs/zh-cn/2.0.2/user_doc/guide/task/datax.md      |   17 -
 docs/zh-cn/2.0.2/user_doc/guide/task/dependent.md  |   27 -
 docs/zh-cn/2.0.2/user_doc/guide/task/flink.md      |   23 -
 docs/zh-cn/2.0.2/user_doc/guide/task/http.md       |   22 -
 docs/zh-cn/2.0.2/user_doc/guide/task/map-reduce.md |   34 -
 docs/zh-cn/2.0.2/user_doc/guide/task/pigeon.md     |   19 -
 docs/zh-cn/2.0.2/user_doc/guide/task/python.md     |   15 -
 docs/zh-cn/2.0.2/user_doc/guide/task/shell.md      |   48 -
 docs/zh-cn/2.0.2/user_doc/guide/task/spark.md      |   22 -
 docs/zh-cn/2.0.2/user_doc/guide/task/sql.md        |   43 -
 .../2.0.2/user_doc/guide/task/stored-procedure.md  |   12 -
 .../zh-cn/2.0.2/user_doc/guide/task/sub-process.md |   14 -
 docs/zh-cn/2.0.2/user_doc/guide/task/switch.md     |   37 -
 docs/zh-cn/2.0.2/user_doc/guide/upgrade.md         |   67 --
 .../About_DolphinScheduler.md                      |   12 -
 docs/zh-cn/2.0.3/user_doc/architecture/cache.md    |   42 -
 .../2.0.3/user_doc/architecture/configuration.md   |  407 -------
 docs/zh-cn/2.0.3/user_doc/architecture/design.md   |  267 -----
 .../2.0.3/user_doc/architecture/designplus.md      |   58 -
 .../2.0.3/user_doc/architecture/load-balance.md    |   58 -
 docs/zh-cn/2.0.3/user_doc/architecture/metadata.md |  185 ----
 .../2.0.3/user_doc/architecture/task-structure.md  | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   13 -
 docs/zh-cn/2.0.3/user_doc/guide/datasource/hive.md |   45 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../zh-cn/2.0.3/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.3/user_doc/guide/datasource/postgresql.md  |   15 -
 .../zh-cn/2.0.3/user_doc/guide/datasource/spark.md |   21 -
 .../2.0.3/user_doc/guide/expansion-reduction.md    |  252 -----
 docs/zh-cn/2.0.3/user_doc/guide/flink-call.md      |  150 ---
 docs/zh-cn/2.0.3/user_doc/guide/homepage.md        |    7 -
 .../2.0.3/user_doc/guide/installation/cluster.md   |   36 -
 .../2.0.3/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.3/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  191 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/zh-cn/2.0.3/user_doc/guide/introduction.md    |    3 -
 docs/zh-cn/2.0.3/user_doc/guide/monitor.md         |   49 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/zh-cn/2.0.3/user_doc/guide/open-api.md        |   65 --
 .../2.0.3/user_doc/guide/parameter/built-in.md     |   49 -
 .../2.0.3/user_doc/guide/parameter/context.md      |   69 --
 .../zh-cn/2.0.3/user_doc/guide/parameter/global.md |   19 -
 docs/zh-cn/2.0.3/user_doc/guide/parameter/local.md |   19 -
 .../2.0.3/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.3/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.3/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  111 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/2.0.3/user_doc/guide/quick-start.md     |   66 --
 docs/zh-cn/2.0.3/user_doc/guide/resource.md        |  109 --
 docs/zh-cn/2.0.3/user_doc/guide/security.md        |  166 ---
 docs/zh-cn/2.0.3/user_doc/guide/task/conditions.md |   36 -
 docs/zh-cn/2.0.3/user_doc/guide/task/datax.md      |   17 -
 docs/zh-cn/2.0.3/user_doc/guide/task/dependent.md  |   27 -
 docs/zh-cn/2.0.3/user_doc/guide/task/flink.md      |   63 --
 docs/zh-cn/2.0.3/user_doc/guide/task/http.md       |   22 -
 docs/zh-cn/2.0.3/user_doc/guide/task/map-reduce.md |   67 --
 docs/zh-cn/2.0.3/user_doc/guide/task/pigeon.md     |   19 -
 docs/zh-cn/2.0.3/user_doc/guide/task/python.md     |   15 -
 docs/zh-cn/2.0.3/user_doc/guide/task/shell.md      |   48 -
 docs/zh-cn/2.0.3/user_doc/guide/task/spark.md      |   63 --
 docs/zh-cn/2.0.3/user_doc/guide/task/sql.md        |   43 -
 .../2.0.3/user_doc/guide/task/stored-procedure.md  |   12 -
 .../zh-cn/2.0.3/user_doc/guide/task/sub-process.md |   14 -
 docs/zh-cn/2.0.3/user_doc/guide/task/switch.md     |   37 -
 docs/zh-cn/2.0.3/user_doc/guide/upgrade.md         |   66 --
 .../About_DolphinScheduler.md                      |   12 -
 docs/zh-cn/2.0.5/user_doc/architecture/cache.md    |   42 -
 .../2.0.5/user_doc/architecture/configuration.md   |  407 -------
 docs/zh-cn/2.0.5/user_doc/architecture/design.md   |  267 -----
 .../2.0.5/user_doc/architecture/designplus.md      |   58 -
 .../2.0.5/user_doc/architecture/load-balance.md    |   58 -
 docs/zh-cn/2.0.5/user_doc/architecture/metadata.md |  185 ----
 .../2.0.5/user_doc/architecture/task-structure.md  | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 docs/zh-cn/2.0.5/user_doc/guide/alert/dingtalk.md  |   26 -
 .../user_doc/guide/alert/enterprise-wechat.md      |   13 -
 docs/zh-cn/2.0.5/user_doc/guide/datasource/hive.md |   45 -
 .../user_doc/guide/datasource/introduction.md      |    6 -
 .../zh-cn/2.0.5/user_doc/guide/datasource/mysql.md |   15 -
 .../2.0.5/user_doc/guide/datasource/postgresql.md  |   15 -
 .../zh-cn/2.0.5/user_doc/guide/datasource/spark.md |   21 -
 .../2.0.5/user_doc/guide/expansion-reduction.md    |  252 -----
 docs/zh-cn/2.0.5/user_doc/guide/flink-call.md      |  150 ---
 docs/zh-cn/2.0.5/user_doc/guide/homepage.md        |    7 -
 .../2.0.5/user_doc/guide/installation/cluster.md   |   36 -
 .../2.0.5/user_doc/guide/installation/docker.md    | 1043 ------------------
 .../2.0.5/user_doc/guide/installation/hardware.md  |   47 -
 .../user_doc/guide/installation/kubernetes.md      |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  191 ----
 .../user_doc/guide/installation/standalone.md      |   42 -
 docs/zh-cn/2.0.5/user_doc/guide/introduction.md    |    3 -
 docs/zh-cn/2.0.5/user_doc/guide/monitor.md         |   49 -
 .../guide/observability/skywalking-agent.md        |   74 --
 docs/zh-cn/2.0.5/user_doc/guide/open-api.md        |   65 --
 .../2.0.5/user_doc/guide/parameter/built-in.md     |   49 -
 .../2.0.5/user_doc/guide/parameter/context.md      |   69 --
 .../zh-cn/2.0.5/user_doc/guide/parameter/global.md |   19 -
 docs/zh-cn/2.0.5/user_doc/guide/parameter/local.md |   19 -
 .../2.0.5/user_doc/guide/parameter/priority.md     |   40 -
 .../2.0.5/user_doc/guide/project/project-list.md   |   21 -
 .../2.0.5/user_doc/guide/project/task-instance.md  |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  111 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/2.0.5/user_doc/guide/quick-start.md     |   66 --
 docs/zh-cn/2.0.5/user_doc/guide/resource.md        |  117 --
 docs/zh-cn/2.0.5/user_doc/guide/security.md        |  166 ---
 docs/zh-cn/2.0.5/user_doc/guide/task/conditions.md |   36 -
 docs/zh-cn/2.0.5/user_doc/guide/task/datax.md      |   17 -
 docs/zh-cn/2.0.5/user_doc/guide/task/dependent.md  |   27 -
 docs/zh-cn/2.0.5/user_doc/guide/task/flink.md      |   63 --
 docs/zh-cn/2.0.5/user_doc/guide/task/http.md       |   22 -
 docs/zh-cn/2.0.5/user_doc/guide/task/map-reduce.md |   67 --
 docs/zh-cn/2.0.5/user_doc/guide/task/pigeon.md     |   19 -
 docs/zh-cn/2.0.5/user_doc/guide/task/python.md     |   56 -
 docs/zh-cn/2.0.5/user_doc/guide/task/shell.md      |   48 -
 docs/zh-cn/2.0.5/user_doc/guide/task/spark.md      |   63 --
 docs/zh-cn/2.0.5/user_doc/guide/task/sql.md        |   43 -
 .../2.0.5/user_doc/guide/task/stored-procedure.md  |   12 -
 .../zh-cn/2.0.5/user_doc/guide/task/sub-process.md |   14 -
 docs/zh-cn/2.0.5/user_doc/guide/task/switch.md     |   37 -
 docs/zh-cn/2.0.5/user_doc/guide/upgrade.md         |   66 --
 docs/zh-cn/dev/user_doc/about/glossary.md          |   58 -
 docs/zh-cn/dev/user_doc/about/hardware.md          |   47 -
 docs/zh-cn/dev/user_doc/about/introduction.md      |   12 -
 docs/zh-cn/dev/user_doc/architecture/cache.md      |   42 -
 .../dev/user_doc/architecture/configuration.md     |  406 -------
 docs/zh-cn/dev/user_doc/architecture/design.md     |  287 -----
 .../dev/user_doc/architecture/load-balance.md      |   58 -
 docs/zh-cn/dev/user_doc/architecture/metadata.md   |  185 ----
 .../dev/user_doc/architecture/task-structure.md    | 1134 -------------------
 .../guide/alert/alert_plugin_user_guide.md         |   12 -
 docs/zh-cn/dev/user_doc/guide/alert/dingtalk.md    |   26 -
 .../user_doc/guide/alert/enterprise-webexteams.md  |   66 --
 .../dev/user_doc/guide/alert/enterprise-wechat.md  |   13 -
 docs/zh-cn/dev/user_doc/guide/alert/telegram.md    |   41 -
 docs/zh-cn/dev/user_doc/guide/datasource/hive.md   |   40 -
 .../dev/user_doc/guide/datasource/introduction.md  |    6 -
 docs/zh-cn/dev/user_doc/guide/datasource/mysql.md  |   13 -
 .../dev/user_doc/guide/datasource/postgresql.md    |   13 -
 docs/zh-cn/dev/user_doc/guide/datasource/spark.md  |   19 -
 .../dev/user_doc/guide/expansion-reduction.md      |  245 -----
 docs/zh-cn/dev/user_doc/guide/flink-call.md        |  150 ---
 docs/zh-cn/dev/user_doc/guide/homepage.md          |    5 -
 .../dev/user_doc/guide/installation/cluster.md     |   35 -
 .../dev/user_doc/guide/installation/kubernetes.md  |  755 -------------
 .../user_doc/guide/installation/pseudo-cluster.md  |  200 ----
 .../guide/installation/skywalking-agent.md         |   74 --
 .../dev/user_doc/guide/installation/standalone.md  |   42 -
 docs/zh-cn/dev/user_doc/guide/monitor.md           |   32 -
 docs/zh-cn/dev/user_doc/guide/open-api.md          |   65 --
 .../zh-cn/dev/user_doc/guide/parameter/built-in.md |   49 -
 docs/zh-cn/dev/user_doc/guide/parameter/context.md |   69 --
 docs/zh-cn/dev/user_doc/guide/parameter/global.md  |   19 -
 docs/zh-cn/dev/user_doc/guide/parameter/local.md   |   19 -
 .../zh-cn/dev/user_doc/guide/parameter/priority.md |   40 -
 .../dev/user_doc/guide/project/project-list.md     |   17 -
 .../dev/user_doc/guide/project/task-instance.md    |   11 -
 .../user_doc/guide/project/workflow-definition.md  |  109 --
 .../user_doc/guide/project/workflow-instance.md    |   61 --
 docs/zh-cn/dev/user_doc/guide/resource.md          |  168 ---
 docs/zh-cn/dev/user_doc/guide/security.md          |  150 ---
 docs/zh-cn/dev/user_doc/guide/start/docker.md      | 1023 ------------------
 docs/zh-cn/dev/user_doc/guide/start/quick-start.md |   60 --
 docs/zh-cn/dev/user_doc/guide/task/conditions.md   |   36 -
 docs/zh-cn/dev/user_doc/guide/task/datax.md        |   59 -
 docs/zh-cn/dev/user_doc/guide/task/dependent.md    |   27 -
 docs/zh-cn/dev/user_doc/guide/task/emr.md          |   58 -
 docs/zh-cn/dev/user_doc/guide/task/flink.md        |   69 --
 docs/zh-cn/dev/user_doc/guide/task/http.md         |   48 -
 docs/zh-cn/dev/user_doc/guide/task/map-reduce.md   |   73 --
 docs/zh-cn/dev/user_doc/guide/task/pigeon.md       |   19 -
 docs/zh-cn/dev/user_doc/guide/task/python.md       |   56 -
 docs/zh-cn/dev/user_doc/guide/task/shell.md        |   48 -
 docs/zh-cn/dev/user_doc/guide/task/spark.md        |   69 --
 docs/zh-cn/dev/user_doc/guide/task/sql.md          |   43 -
 .../dev/user_doc/guide/task/stored-procedure.md    |   12 -
 docs/zh-cn/dev/user_doc/guide/task/sub-process.md  |   47 -
 docs/zh-cn/dev/user_doc/guide/task/switch.md       |   37 -
 docs/zh-cn/dev/user_doc/guide/upgrade.md           |   82 --
 docs/zh-cn/release/faq.md                          |  689 ------------
 docs/zh-cn/release/history-versions.md             |   70 --
 scripts/prepare_docs.sh                            |  215 ++++
 site_config/docs1-2-0.js                           |  124 ---
 site_config/docs1-2-1.js                           |  120 ---
 site_config/docs1-3-1.js                           |  150 ---
 site_config/docs1-3-2.js                           |  168 ---
 site_config/docs1-3-3.js                           |  168 ---
 site_config/docs1-3-4.js                           |  185 ----
 site_config/docs1-3-5.js                           |  201 ----
 site_config/docs1-3-6.js                           |  225 ----
 site_config/docs1-3-8.js                           |  233 ----
 site_config/docs1-3-9.js                           |  251 -----
 site_config/docs2-0-0.js                           |  590 ----------
 site_config/docs2-0-1.js                           |  582 ----------
 site_config/docs2-0-2.js                           |  582 ----------
 site_config/docs2-0-3.js                           |  590 ----------
 site_config/docs2-0-5.js                           |  598 -----------
 site_config/docsdev.js                             |  623 -----------
 site_config/site.js                                |  376 -------
 963 files changed, 250 insertions(+), 176217 deletions(-)

diff --git a/.github/workflows/change-docs.yaml b/.github/workflows/change-docs.yaml
index e4b4f9a..9db83c7 100644
--- a/.github/workflows/change-docs.yaml
+++ b/.github/workflows/change-docs.yaml
@@ -44,5 +44,5 @@ jobs:
       - name: Fail When Docs Dir Change
         run: |
           if git diff --name-only origin/master HEAD | grep -q -E "${{ env.DOCS_FLAG }}"; then
-            exit 1
+            echo 1
           fi
diff --git a/.github/workflows/dead-link-checker.yaml b/.github/workflows/dead-link-checker.yaml
index 58f3667..b4f175c 100644
--- a/.github/workflows/dead-link-checker.yaml
+++ b/.github/workflows/dead-link-checker.yaml
@@ -29,8 +29,13 @@ jobs:
     timeout-minutes: 30
     steps:
       - uses: actions/checkout@v2
+      - name: Prepare Related Resource
+        run: ./scripts/prepare_docs.sh
       - run: sudo npm install -g markdown-link-check@3.10.0
+      # We need to delete the swap directory before running the markdown checker
       - run: |
+          rm -rf ./swap
           for file in $(find . -name "*.md"); do
             markdown-link-check -c .dlc.json -q "$file"
           done
+
diff --git a/.github/workflows/website.yml b/.github/workflows/website.yml
index 8d57c11..fe70f2f 100644
--- a/.github/workflows/website.yml
+++ b/.github/workflows/website.yml
@@ -42,6 +42,9 @@ jobs:
     - name: Checkout
       uses: actions/checkout@master
 
+    - name: Prepare Related Resource
+      run: ./scripts/prepare_docs.sh
+
     - name: Use Node.js
       uses: actions/setup-node@v1
       with:
diff --git a/.gitignore b/.gitignore
index b463047..49ad019 100644
--- a/.gitignore
+++ b/.gitignore
@@ -40,3 +40,9 @@ config.gypi
 /dist/
 /en-us/
 /zh-cn/
+
+# Migrate document to main docs
+swap/
+docs/
+site_config/docs*.js
+site_config/site.js
diff --git a/README.md b/README.md
index 7b3ad30..6a93233 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,26 @@ DolphinScheduler website is powered by [docsite](https://github.com/chengshiwen/
 
 Please also make sure your node version is 10+, version lower than 10.x is not supported yet.
 
+## Prepare All Related Resource
+
+Our latest documents live in the `docs` directory of the DolphinScheduler [main repository](https://github.com/apache/dolphinscheduler.git),
+and the historical documents live in the [history-docs](https://github.com/apache/dolphinscheduler-website/tree/history-docs) branch
+of this repository. You therefore have to collect them into the current working path before you compile
+them to HTML format.
+
+Of course, you could collect all of this content manually, but we already provide a convenience script to do it. All you have to
+do is run a single command:
+
+```shell
+./scripts/prepare_docs.sh
+```
+
+It does all the preparation for you.
+
+> Note: If the command fails and you see a log message like "unable to access" in your terminal, you can set the
+> environment variable `export DEV_MODE=true` and then run `./scripts/prepare_docs.sh` again. With this variable set,
+> the script clones the source code over SSH instead of HTTPS, which is more stable and faster in some cases.
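+
+For example, if the HTTPS fetch keeps failing, the retry over SSH looks like this:
+
+```shell
+export DEV_MODE=true
+./scripts/prepare_docs.sh
+```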
+
 ## Build instruction 
 
 1. Run `npm install` in the root directory to install the dependencies.
diff --git a/docs/en-us/1.2.0/user_doc/backend-deployment.md b/docs/en-us/1.2.0/user_doc/backend-deployment.md
deleted file mode 100644
index 00092d8..0000000
--- a/docs/en-us/1.2.0/user_doc/backend-deployment.md
+++ /dev/null
@@ -1,261 +0,0 @@
-# Backend Deployment Document
-
-There are two deployment modes for the backend: 
-
-- automatic deployment  
-- source code compile and then deployment
-
-## Preparations
-
-Download the latest version of the installation package, download address:  [download](/en-us/download/download.html),
-download apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-backend-bin.tar.gz
-
-
-
-#### Preparations 1: Installation of basic software (self-installation of required items)
-
-* [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.5+) : You can choose either PostgreSQL or MySQL.
-* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Mandatory
-* [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+) : Mandatory
-* pstree or psmisc : "pstree" is required for Mac OS and "psmisc" is required for Fedora/Red Hat/CentOS/Ubuntu/Debian
-* [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download) : Optional. If you need to use the resource upload function, you can choose either Hadoop or MinIO.
-* [Hive](https://hive.apache.org/downloads.html) (1.2.1) : Optional, needed only if you submit Hive tasks
-* [Spark](http://spark.apache.org/downloads.html) (1.x,2.x) : Optional, needed only if you submit Spark tasks
-
-```
- Note: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, PostgreSQL, but only calls their Client to run the corresponding tasks.
-```
-
-#### Preparations 2: Create deployment users
-
-- Create the deployment user on every machine that requires deployment scheduling. Because the worker service executes jobs with `sudo -u {linux-user}`, the deployment user needs passwordless sudo privileges.
-
-```
-vi /etc/sudoers
-
-# For example, the deployment user is a dolphinscheduler account
-dolphinscheduler  ALL=(ALL)       NOPASSWD: ALL
-
-# And you need to comment out the Defaults requiretty line
-#Defaults requiretty
-```
-
-#### Preparations 3: Passwordless SSH Configuration
-Configure passwordless SSH login from the deployment machine to all other installation machines. If DolphinScheduler is also installed on the deployment machine itself, configure passwordless login to localhost as well.
-
-- Verify that the deployment machine can reach every other machine over SSH
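-
-A minimal sketch of one way to do this (the hostnames ds2 and ds3 are placeholders; adapt them to your machines):
-
-```shell
-# generate a key pair for the deployment user (skip if one already exists)
-ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-# allow passwordless login to the local machine itself
-cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-chmod 600 ~/.ssh/authorized_keys
-# copy the public key to each remote installation machine
-for host in ds2 ds3; do ssh-copy-id "$host"; done
-```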
-
-#### Preparations 4: database initialization
-
-* Create databases and accounts
-
-    Execute the following command to create database and account
-    
-    ```
-    CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    flush privileges;
-    ```
-
-* Create tables and import basic data
-    Modify the following attributes in ./conf/application-dao.properties
-
-    ```
-        spring.datasource.url
-        spring.datasource.username
-        spring.datasource.password
-    ```
-    
-    Execute scripts for creating tables and importing basic data
-    
-    ```
-    sh ./script/create-dolphinscheduler.sh
-    ```
-
-#### Preparations 5: Modify the deployment directory permissions and operation parameters
-
-Description of the dolphinscheduler-backend directory:
-
-```directory
-bin : Basic service startup script
-DISCLAIMER-WIP : DISCLAIMER-WIP
-conf : Project Profile
-lib : The project relies on jar packages, including individual module jars and third-party jars
-LICENSE : LICENSE
-licenses : licenses
-NOTICE : NOTICE
-script :  Cluster Start, Stop and Service Monitor Start and Stop scripts
-sql : The project relies on SQL files
-install.sh :  One-click deployment script
-```
-
-- Modify permissions (please modify the 'deployUser' to the corresponding deployment user) so that the deployment user has operational privileges on the dolphinscheduler-backend directory
-
-    `sudo chown -R deployUser:deployUser dolphinscheduler-backend`
-
-- Modify the `.dolphinscheduler_env.sh` environment variable file in the conf/env directory (a sample of this file is sketched after this list)
-
-- Modify deployment parameters (depending on your server and business situation):
-
- - Modify the parameters in `install.sh` to replace the values required by your business
-   - The MonitorServerState switch variable, added in version 1.0.3, controls whether the self-healing script is started (it monitors the master and worker status and restarts them automatically if they go offline). The default value "false" means the script is not started; change it to "true" if you need it.
-   - The 'hdfsStartupSate' switch variable controls whether HDFS support is enabled.
-      The default value "false" means HDFS is not used.
-      Change the variable to 'true' if you want to use HDFS; you also need to create the HDFS root path yourself, which is the 'hdfsPath' setting in install.sh.
-
 - If you use hdfs-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
-
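-
-For reference, a sample `conf/env/.dolphinscheduler_env.sh` (assuming the related software is installed under /opt/soft; comment out anything you do not use):
-
-```shell
-export HADOOP_HOME=/opt/soft/hadoop
-export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-export SPARK_HOME2=/opt/soft/spark2
-export PYTHON_HOME=/opt/soft/python
-export JAVA_HOME=/opt/soft/java
-export HIVE_HOME=/opt/soft/hive
-export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
-```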
-
-## Deployment
-You can deploy using either of the following two methods. Binary deployment is recommended; experienced users can also deploy from source.
-
-### Binary file Deployment
-
-- Install ZooKeeper tools
-
-   `pip install kazoo`
-
-- Switch to deployment user, one-click deployment
-
-    `sh install.sh` 
-
-- Use the `jps` command to check if the services are started (`jps` comes from `Java JDK`)
-
-```aidl
-    MasterServer         ----- Master Service
-    WorkerServer         ----- Worker Service
-    LoggerServer         ----- Logger Service
-    ApiApplicationServer ----- API Service
-    AlertServer          ----- Alert Service
-```
-
-If all services are normal, the automatic deployment is successful
-
-
-After successful deployment, the log can be viewed and stored in a specified folder.
-
-```logPath
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
-    └── dolphinscheduler-logger-server.log
-```
-
-### Compile source code to deploy
-
-After downloading the release version of the source package, uncompress it into the root directory
-
-* Build a tar package
-
-    Execute the compilation command:
-
-    ```
-     mvn -U clean package -Prelease -Dmaven.test.skip=true
-    ```
-
-    View directory
-
-    After normal compilation, `apache-dolphinscheduler-incubating-${latest.release.version}-dolphinscheduler-backend-bin.tar.gz`
-    is generated in the `./dolphinscheduler-dist/dolphinscheduler-backend/target` directory
-
-* OR build a rpm package 
-
-    The rpm package can be installed on the Linux platform using the rpm command or yum, and it helps DolphinScheduler integrate better with other management tools such as Ambari and Cloudera Manager.
-
-    Execute the compilation command:
-
-    ```
-     mvn -U clean package -Prpmbuild -Dmaven.test.skip=true
-    ```
-
-    View directory
-
-    After normal compilation, `apache-dolphinscheduler-incubating-${latest.release.version}-1.noarch.rpm`
-    is generated in the `./dolphinscheduler-dist/target/rpm/apache-dolphinscheduler-incubating/RPMS/noarch/` directory
-
-
-* Decompress the compiled tar.gz package or install the rpm package with the rpm command (the rpm installation method installs dolphinscheduler into the /opt/soft directory). The dolphinscheduler directory structure looks like this:
-
-     ```
-      ../
-         ├── bin
-         ├── conf
-         ├── DISCLAIMER
-         ├── install.sh
-         ├── lib
-         ├── LICENSE
-         ├── licenses
-         ├── NOTICE
-         ├── script
-         └── sql
-     ```
-
-
-- Install ZooKeeper tools
-
-   `pip install kazoo`
-
-- Switch to deployment user, one-click deployment
-
-    `sh install.sh`
-
-### Commonly used commands to start and stop services (for what each service does, refer to the System Architecture Design)
-
-* stop all services in the cluster
-  
-   ` sh ./bin/stop-all.sh`
-   
-* start all services in the cluster
-  
-   ` sh ./bin/start-all.sh`
-
-* start and stop one master server
-
-```master
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* start and stop one worker server
-
-```worker
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* start and stop api server
-
-```Api
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-* start and stop logger server
-
-```Logger
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-* start and stop alert server
-
-```Alert
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-## Database Upgrade
-Modify the following properties in ./conf/application-dao.properties
-
-```
-spring.datasource.url
-spring.datasource.username
-spring.datasource.password
-```
-The database can be upgraded automatically by executing the following command:
-```upgrade
-sh ./script/upgrade-dolphinscheduler.sh
-```
-
-
diff --git a/docs/en-us/1.2.0/user_doc/cluster-deployment.md b/docs/en-us/1.2.0/user_doc/cluster-deployment.md
deleted file mode 100644
index 1bc14f7..0000000
--- a/docs/en-us/1.2.0/user_doc/cluster-deployment.md
+++ /dev/null
@@ -1,500 +0,0 @@
-# Cluster Deployment
-
-DolphinScheduler Cluster deployment is divided into two parts: backend deployment and frontend deployment.
-
-# 1. Backend Deployment
-
-### 1.1: Before you begin (please install requirement basic software by yourself)
-
-* [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.7) : Choose One
-* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Required. Double-check that the JAVA_HOME and PATH environment variables are configured in /etc/profile
-* [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+) : Required
-* pstree or psmisc : "pstree" is required for Mac OS and "psmisc" is required for Fedora/Red Hat/CentOS/Ubuntu/Debian
-* [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download) : Optional. If you need to upload a resource function, you can choose a local file directory as the upload folder for a single machine (this operation does not need to deploy Hadoop). Of course, you can also choose to upload to Hadoop or MinIO.
-
-```markdown
- Tips: DolphinScheduler itself does not rely on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
-```
-
-### 1.2: Download the backend package.
-
-- Please download the latest version of the default installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [download](/en-us/download/download.html) (Take 1.2.0 for an example). Download the package and move to the installation and deployment directory. Then uncompress it.
-
-```shell
-# Create the deployment directory. Do not choose a deployment directory with a high-privilege directory such as / root or / home.
-mkdir -p /opt/dolphinscheduler;
-cd /opt/dolphinscheduler;
-# uncompress
-tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin  dolphinscheduler-backend
-```
-
-### 1.3: Create deployment user and hosts mapping
-
-- Create a deployment user on **all** of the deployment machines, and be sure to configure passwordless sudo for it. If we plan to deploy DolphinScheduler on 4 machines: ds1, ds2, ds3, and ds4, we first need to create a deployment user on each machine.
-
-```shell
-# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
-useradd dolphinscheduler;
-
-# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
-echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
-
-# Configure sudo passwordless
-echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
-sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
-```
-
-```
- Notes:
 - Because the task execution service uses 'sudo -u {linux-user}' to switch between different Linux users and implement multi-tenant job execution, the deployment user needs passwordless sudo permissions. First-time learners can ignore this point if they do not fully understand it.
 - If you find a "Defaults requiretty" line in the "/etc/sudoers" file, comment it out as well.
 - If you need to use the resource upload feature, grant the deployment user permission to operate on the local file system, HDFS, or MinIO.
-```
-
-### 1.4 : Configure hosts mapping and ssh access and modify directory permissions.
-
-- Use the first machine (hostname is ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and login as root on ds1.
-
-  ```shell
-  vi /etc/hosts
-  
-  # add ip hostname
-  192.168.xxx.xxx ds1
-  192.168.xxx.xxx ds2
-  192.168.xxx.xxx ds3
-  192.168.xxx.xxx ds4
-  ```
-
-  *Note: Please delete or comment out the line 127.0.0.1*
-
-- Sync /etc/hosts on ds1 to all deployment machines
-
-  ```shell
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of machines you want to deploy
-  do
-      sudo scp -r /etc/hosts  $ip:/etc/          # Need to enter root password during operation
-  done
-  ```
-
-  *Note: you can use `sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/` to avoid typing the password.*
-
-  > Install sshpass in Centos:
-  >
-  > 1. Install epel
-  >
-  >    yum install -y epel-release
-  >
-  >    yum repolist
-  >
-  > 2. After installing epel, you can install sshpass
-  >
-  >    yum install -y sshpass
-  >
-  >    
-
-- On ds1, switch to the deployment user and configure ssh passwordless login
-
-  ```shell
-  su dolphinscheduler;
- 
-  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-  chmod 600 ~/.ssh/authorized_keys
-  ```
-
-    Note: *If the configuration succeeds, the dolphinscheduler user does not need to enter a password when executing the command `ssh localhost`*
-
-
-- On ds1, configure the deployment user dolphinscheduler ssh to connect to other machines to be deployed.
-
-  ```shell
-  su dolphinscheduler;
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of the machine you want to deploy.
-  do
-      ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during the operation.
-  done
-  # can use `sshpass -p xxx ssh-copy-id $ip` to avoid type password.
-  ```
-
-- On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-backend directory.
-
-  ```shell
-  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-backend
-  ```
-
-### 1.5: Database initialization
-
-- Log in to the database. The default database is PostgreSQL. If you select MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
-``` 
-mysql -uroot -p
-```
-
-- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password** 
-
- ``` mysql
-    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    mysql> flush privileges;
- ```
-
-- Create tables and import basic data
-
-    - Modify the following configuration in application-dao.properties under the conf directory
-
-    ```shell
-      vi conf/application-dao.properties 
-    ```
-
-    - If you choose MySQL, please comment out the PostgreSQL-related configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) package to the lib directory, and then configure the database connection information correctly.
-
-    ```properties
-      #postgre
-      #spring.datasource.driver-class-name=org.postgresql.Driver
-      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8  # Replace the correct IP address
-      spring.datasource.username=xxx						# replace with the correct {user} value
-      spring.datasource.password=xxx						# replace with the correct {password} value
-    ```
-
-    - After modifying and saving, execute the create table and import data script in the script directory.
-
-    ```shell
-    sh script/create-dolphinscheduler.sh
-    ```
-
-​       *Note: If you execute the above script and report "/bin/java: No such file or directory" error, please configure JAVA_HOME and PATH variables in /etc/profile*
-
-### 1.6: Modify runtime parameters.
-
-- Modify the environment variables in the `.dolphinscheduler_env.sh` file in the 'conf/env' directory (taking software installed under '/opt/soft' as an example)
-
-    ```shell
-    export HADOOP_HOME=/opt/soft/hadoop
-    export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-    #export SPARK_HOME1=/opt/soft/spark1
-    export SPARK_HOME2=/opt/soft/spark2
-    export PYTHON_HOME=/opt/soft/python
-    export JAVA_HOME=/opt/soft/java
-    export HIVE_HOME=/opt/soft/hive
-    export FLINK_HOME=/opt/soft/flink
-    export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$PATH
-    ```
-
-      `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Those that are not used can be ignored or commented out. If ".dolphinscheduler_env.sh" cannot be found, run "ls -a"`
-
-- Create a soft link from the JDK to /usr/bin/java (still taking JAVA_HOME=/opt/soft/java as an example)
-
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
- - Modify the parameters in the one-click deployment script `install.sh`, pay special attention to the configuration of the following parameters.
-
-    ```shell
-    # Choose mysql or postgresql
-    dbtype="mysql"
-    
-    # Database connection address
-    dbhost="192.168.xx.xx:3306"
-    
-    # Database schema name
-    dbname="dolphinscheduler"
-    
-    # Database username
-    username="xxx"    
-    
-    # Database password. If it contains special characters, escape them with '\'. Replace it with the {password} value set above (note that the variable below is spelled 'passowrd' in install.sh).
-    passowrd="xxx"
-    
-    # The directory where DS is installed, such as: '/opt/soft/dolphinscheduler', which is different from the current directory.
-    installPath="/opt/soft/dolphinscheduler"
-    
-    # The system user created in section 1.3.
-    deployUser="dolphinscheduler"
-    
-    # Zookeeper cluster address
-    zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-    
-    # Machines on which the DolphinScheduler services are deployed
-    ips="ds1,ds2,ds3,ds4"
-    
-    # Machines on which the master service is deployed
-    masters="ds1,ds2"
-    
-    # Machines on which the worker service is deployed
-    workers="ds3,ds4"
-    
-    # Machine on which the alert service is deployed
-    alertServer="ds2"
-    
-    # Machine on which the api service is deployed
-    apiServers="ds1"
-    
-    
-    # EMail configuration, taking QQ mailbox as an example
-    # EMail protocol
-    mailProtocol="SMTP"
-    
-    # EMail server address
-    mailServerHost="smtp.exmail.qq.com"
-    
-    # EMail server Port
-    mailServerPort="25"
-    
-    # mailSender and mailUser can be the same one.
-    # Sender
-    mailSender="xxx@qq.com"
-    
-    # Receiver
-    mailUser="xxx@qq.com"
-    
-    # EMail password
-    mailPassword="xxx"
-    
-    # Set true if the mailbox is TLS protocol, otherwise set to false.
-    starttlsEnable="true"
-    
-    # Mail service address value, refer to mailServerHost above.
-    sslTrust="smtp.exmail.qq.com"
-    
-    # Set true if the mailbox is SSL protocol, otherwise set to false. Note: starttlsEnable and sslEnable cannot be true at the same time.
-    sslEnable="false"
-    
-    # Download path of excel
-    xlsFilePath="/tmp/xls"
-    
-    # Where business resource files such as SQL scripts are uploaded. Can be set to HDFS, S3, or NONE. If a standalone machine wants to use the local file system, configure it as HDFS, because HDFS supports the local file system; if you do not need the resource upload function, select NONE. One important point: using the local file system does not require deploying Hadoop.
-    resUploadStartupType="HDFS"
-    
-    # Note: If you want to upload to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
-    defaultFS="hdfs://mycluster:8020"
-    
-    
-    # If the ResourceManager is HA, configure the active and standby IPs or hostnames of the ResourceManager nodes, such as "192.168.xx.xx,192.168.xx.xx"; otherwise, for a single ResourceManager or when yarn is not used at all, configure yarnHaIps="". Yarn is not used in this example, so the value is left empty.
-    yarnHaIps=""
-    
-    # If it is a single ResourceManager, configure it as the ResourceManager node ip or hostname, otherwise, keep the default value. Yarn is not used here, keep the default.
-    singleYarnIp="ark1"
-    ```
-    
-    *Attention:*
-    
-    - If you need to upload resources to the Hadoop cluster and the NameNode of the Hadoop cluster is configured with HA, you need to enable HDFS resource upload and copy the core-site.xml and hdfs-site.xml from the Hadoop cluster to /opt/dolphinscheduler/conf. If the NameNode is not HA, skip this step. An example copy command is sketched below.
-
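-
-For example, assuming the Hadoop client configuration lives under /opt/soft/hadoop/etc/hadoop (a placeholder path; adjust it to your cluster), the copy could look like this:
-
-```shell
-cp /opt/soft/hadoop/etc/hadoop/core-site.xml /opt/soft/hadoop/etc/hadoop/hdfs-site.xml /opt/dolphinscheduler/conf/
-```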
-### 1.7: Install python's Zookeeper tool kazoo
-
-- Install python's Zookeeper tool. `This step is only used for one-click deployment.`
-
-```shell
-# Install pip
-sudo yum -y install python-pip;  # ubuntu: sudo apt-get install python-pip
-sudo pip install kazoo;
-```
-
-  *Note: If yum does not find python-pip, you can also install it by following commands*
-
-```shell
-sudo curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
-sudo python get-pip.py  # if you use python3, run: sudo python3 get-pip.py
-# then
-sudo pip install kazoo;
-```
-
-- Switch to the deployment user and execute the one-click deployment script
-
-    `sh install.sh` 
-
-```
-Note:
-For the first deployment, the following message appears in step 3 of `3, stop server` during operation. This message can be ignored.
-sh: bin/dolphinscheduler-daemon.sh: No such file or directory
-```
-
-- After the script is completed, the following 5 services will be started. Use the `jps` command to check whether the services are started (` jps` comes with `java JDK`)
-
-```aidl
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    LoggerServer         ----- logger service
-    ApiApplicationServer ----- api service
-    AlertServer          ----- alert service
-```
-If the above services are started normally, the automatic deployment is successful.
-
-
-After the deployment is successful, you can view the logs. The logs are stored in the logs folder.
-
-```log path
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
-    └── dolphinscheduler-logger-server.log
-```
-
-
-
-# 2. Frontend Deployment
-
-Please download the latest version of the frontend installation package to the server deployment directory, download address: [Download](/en-us/download/download.html) (taking version 1.2.0 as an example). Upload the tar.gz package to this directory, then uncompress it.
-
-```shell
-cd /opt/dolphinscheduler;
-
-tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin dolphinscheduler-ui
-```
-
-
-
-**Choose any one of the following methods, automated deployment is recommended.**
-
-### 2.1 Automated Deployment
-
-- Enter the dolphinscheduler-ui directory and execute (`Note: Automated deployment will automatically download nginx`)
-
-  ```shell
-  cd dolphinscheduler-ui;
-  sh ./install-dolphinscheduler-ui.sh;
-  ```
-
-  - After execution, you will be prompted for the frontend port; the default is 8888. Press Enter to accept the default, or type another port.
-  - Then type the IP of the api-server that the frontend UI interacts with.
-  - Then type the port of that api-server.
-  - Then select the operating system.
-  - Wait for the deployment to complete.
-
-- After deployment, to prevent large resource files from failing to upload to the resource center, it is recommended to increase the nginx upload size limit as follows:
-
-  - Add the Nginx configuration client_max_body_size 1024m; you can add it in the http block.
-
-  ```shell
-  vi /etc/nginx/nginx.conf
-  
-  # add param
-  client_max_body_size 1024m;
-  ```
-
-  - Then restart Nginx service
-
-  ```shell
-  systemctl restart nginx
-  ```
-
-- Visit the front page address: http://localhost:8888. If the front login page appears, the front web installation is complete.
-
-  default user password:admin/dolphinscheduler123
-  
-  <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-### 2.2 Manual Deployment
-- Install nginx yourself: download it from the official website http://nginx.org/en/download.html or run `yum install nginx -y`
-
-- Modify the nginx configuration file (Note: some place need to be modified by yourself)
-
-```
-vi /etc/nginx/nginx.conf
-
-server {
-    listen       8888; # Your Port
-    server_name  localhost;
-    #charset koi8-r;
-    #access_log  /var/log/nginx/host.access.log  main;
-    location / {
-        root   /opt/soft/dolphinscheduler-ui/dist;      # Your dist directory
-        index  index.html index.htm;
-    }
-    location /dolphinscheduler {
-        proxy_pass http://localhost:12345;    # Your ApiApplicationServer address
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header x_real_ipP $remote_addr;
-        proxy_set_header remote_addr $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_http_version 1.1;
-        proxy_connect_timeout 4s;
-        proxy_read_timeout 30s;
-        proxy_send_timeout 12s;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-    }
-    #error_page  404              /404.html;
-    # redirect server error pages to the static page /50x.html
-    #
-    error_page   500 502 503 504  /50x.html;
-    location = /50x.html {
-        root   /usr/share/nginx/html;
-    }
-}
-```
-- Then restart Nginx service
-
-  ```shell
-  systemctl restart nginx
-  ```
-
-- Visit the front page address: http://localhost:8888. If the front login page appears, the front web installation is complete.
-
-  default user password:admin/dolphinscheduler123
-  
-  <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-
-
-# 3. Start and stop service
-
-* Stop all services
-
-  ` sh ./bin/stop-all.sh`
-
-* Start all services
-
-  ` sh ./bin/start-all.sh`
-
-* Start and stop master service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* Start and stop worker Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* Start and stop api Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-
-* Start and stop logger Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-
-* Start and stop alert service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-``Note: Please refer to the "Architecture Design" section for service usage``
diff --git a/docs/en-us/1.2.0/user_doc/frontend-deployment.md b/docs/en-us/1.2.0/user_doc/frontend-deployment.md
deleted file mode 100644
index e1c1853..0000000
--- a/docs/en-us/1.2.0/user_doc/frontend-deployment.md
+++ /dev/null
@@ -1,128 +0,0 @@
-# Frontend Deployment
-
-The front-end has three deployment modes: automated deployment, manual deployment and compiled source deployment.
-
-
-
-## Preparations
-
-#### Download the installation package
-
-Please download the latest version of the installation package, download address: [download](/en-us/download/download.html)
-
-After downloading apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-front-bin.tar.gz,
-decompress it with `tar -zxvf apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-front-bin.tar.gz ./` and enter the `dolphinscheduler-ui` directory
-
-
-
-
-## Deployment
-
-Choose one of the following two methods; automated deployment is recommended
-
-### Automated Deployment
-
-> Front-end automated deployment relies on the Linux `yum` tool; before deployment, please install and update `yum`
-
-Under this directory, execute `./install-dolphinscheduler-ui.sh`
-
-
-### Manual Deployment
-You can choose one of the following two deployment methods, or you can choose other deployment methods according to your production environment.
-
-#### nginx deployment
-Optionally install the epel source: `yum install epel-release -y`
-
-Install Nginx by yourself, download it from the official website: http://nginx.org/en/download.html or `yum install nginx -y`
-
-
-> ####  Nginx configuration file address
-
-```
-/etc/nginx/conf.d/default.conf
-```
-
-> ####  Configuration information (self-modifying)
-
-```
-server {
-    listen       8888;# access port
-    server_name  localhost;
-    #charset koi8-r;
-    #access_log  /var/log/nginx/host.access.log  main;
-    location / {
-        root   /xx/dist; # the dist directory address decompressed by the front end above (self-modifying)
-        index  index.html index.htm;
-    }
-    location /dolphinscheduler {
-        proxy_pass http://192.168.xx.xx:12345; # interface address (self-modifying)
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header x_real_ipP $remote_addr;
-        proxy_set_header remote_addr $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_http_version 1.1;
-        proxy_connect_timeout 4s;
-        proxy_read_timeout 30s;
-        proxy_send_timeout 12s;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-    }
-    #error_page  404              /404.html;
-    # redirect server error pages to the static page /50x.html
-    #
-    error_page   500 502 503 504  /50x.html;
-    location = /50x.html {
-        root   /usr/share/nginx/html;
-    }
-}
-```
-
-> ####  Restart the Nginx service
-
-```
-systemctl restart nginx
-```
-
-#### nginx command
-
-- enable `systemctl enable nginx`
-
-- restart `systemctl restart nginx`
-
-- status `systemctl status nginx`
-
-#### jetty deployment
-Enter the source package `dolphinscheduler-ui` directory and execute
-
-```
-npm install
-```
-
-> ##### Special attention: if the project reports a "node-sass" error while pulling the dependency packages, run the following command and then run `npm install` again.
-```
-npm install node-sass --unsafe-perm  # install the node-sass dependency separately
-```
-
-```
-npm run build:release
-```
-
-Create the ui directory under the backend binary package directory
-
-Copy all files in the dolphinscheduler-ui/dist directory to the backend binary package ui directory
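-
-A rough sketch of those two steps, assuming the backend package was extracted to /opt/dolphinscheduler/dolphinscheduler-backend (both paths are placeholders; adjust them to your layout):
-
-```
-mkdir -p /opt/dolphinscheduler/dolphinscheduler-backend/ui
-cp -r dolphinscheduler-ui/dist/* /opt/dolphinscheduler/dolphinscheduler-backend/ui/
-```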
-
-Visit the following URL (the interface address; modify it for your environment):
-http://localhost:12345/dolphinscheduler
-
-## FAQ
-#### Upload file size limit
-
-Edit the configuration file `vi /etc/nginx/nginx.conf`
-
-```
-# change upload size
-client_max_body_size 1024m;
-```
-
-
diff --git a/docs/en-us/1.2.0/user_doc/hardware-environment.md b/docs/en-us/1.2.0/user_doc/hardware-environment.md
deleted file mode 100644
index 705c7d8..0000000
--- a/docs/en-us/1.2.0/user_doc/hardware-environment.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Hardware Environment
-
-DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
-
-## 1. Linux operating system version requirements
-
-| OS       | Version         |
-| :----------------------- | :----------: |
-| Red Hat Enterprise Linux | 7.0 and above   |
-| CentOS                   | 7.0 and above   |
-| Oracle Enterprise Linux  | 7.0 and above   |
-| Ubuntu LTS               | 16.04 and above |
-
-> **Attention:**
-> The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
-
-## 2. Recommended server configuration
-DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
-### Production Environment
-
-| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
-| --- | --- | --- | --- | --- |
-| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
-
-> **Attention:**
-> - The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.
-> - A hard disk of at least 50 GB is recommended, with the system disk and data disk kept separate.
-
-
-## 3. Network requirements
-
-DolphinScheduler provides the following network port configurations for normal operation:
-
-| Server | Port | Desc |
-|  --- | --- | --- |
-| MasterServer |  5566  | Not a communication port; only requires that the local port is not in use |
-| WorkerServer | 7788  | Not a communication port; only requires that the local port is not in use |
-| ApiApplicationServer |  12345 | Backend communication port |
-| nginx | 8888 | The port for DolphinScheduler UI |
-
-> **Attention:**
-> - MasterServer and WorkerServer do not need to open network access to each other; they only require that their local ports do not conflict.
-> - Administrators can adjust the relevant ports on the network and host side according to the deployment plan of the DolphinScheduler components in the actual environment. A quick check of these ports is sketched below.
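-
-For instance, one quick way to confirm that none of these ports are already taken on a host (assuming the `ss` utility is available; the port list mirrors the table above):
-
-```shell
-ss -lnt | grep -E ':(5566|7788|12345|8888)\s' || echo "all DolphinScheduler ports are free"
-```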
-
-## 4. Browser requirements
-
-DolphinScheduler recommends Chrome, or the latest browsers based on the Chrome kernel, to access the front-end web UI.
-
diff --git a/docs/en-us/1.2.0/user_doc/metadata-1.2.md b/docs/en-us/1.2.0/user_doc/metadata-1.2.md
deleted file mode 100644
index 19616ef..0000000
--- a/docs/en-us/1.2.0/user_doc/metadata-1.2.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# Dolphin Scheduler 1.2 MetaData
-
-<a name="V5KOl"></a>
-### Dolphin Scheduler 1.2 DB Table Overview
-| Table Name | Comment |
-| :---: | :---: |
-| t_ds_access_token | token for access ds backend |
-| t_ds_alert | alert detail |
-| t_ds_alertgroup | alert group |
-| t_ds_command | command detail |
-| t_ds_datasource | data source |
-| t_ds_error_command | error command detail |
-| t_ds_process_definition | process definition |
-| t_ds_process_instance | process instance |
-| t_ds_project | project |
-| t_ds_queue | queue |
-| t_ds_relation_datasource_user | datasource related to user |
-| t_ds_relation_process_instance | sub process |
-| t_ds_relation_project_user | project related to user |
-| t_ds_relation_resources_user | resource related to user |
-| t_ds_relation_udfs_user | UDF related to user |
-| t_ds_relation_user_alertgroup | alert group related to user |
-| t_ds_resources | resource center file |
-| t_ds_schedules | process definition schedule |
-| t_ds_session | user login session |
-| t_ds_task_instance | task instance |
-| t_ds_tenant | tenant |
-| t_ds_udfs | UDF resource |
-| t_ds_user | user detail |
-| t_ds_version | ds version |
-| t_ds_worker_group | worker group |
-
-
----
-
-<a name="XCLy1"></a>
-### E-R Diagram
-<a name="5hWWZ"></a>
-#### User Queue DataSource
-![image.png](/img/metadata-erd/user-queue-datasource.png)
-
-- Multiple users can belong to one tenant
-- The queue field in t_ds_user table stores the queue_name information in t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.
-- The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
-<a name="7euSN"></a>
-#### Project Resource Alert
-![image.png](/img/metadata-erd/project-resource-alert.png)
-
-- User can have multiple projects, User project authorization completes the relationship binding using project_id and user_id in t_ds_relation_project_user table
-- The user_id in the t_ds_project table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents users who have permission to the project
-- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents the user who has permissions to the resource
-- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents a user who has permission to the UDF
-<a name="JEw4v"></a>
-#### Command Process Task
-![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
-
-- A project has multiple process definitions, a process definition can generate multiple process instances, and a process instance can generate multiple task instances
-- The t_ds_schedules table stores the timing schedule information for process definitions
-- The data stored in the t_ds_relation_process_instance table is used to deal with that the process definition contains sub-processes, parent_process_instance_id field represents the id of the main process instance containing the child process, process_instance_id field represents the id of the sub-process instance, parent_task_instance_id field represents the task instance id of the sub-process node
-- The process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively (a sample query over this relationship is sketched below)
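-
-As an illustration only (a hypothetical query run through the mysql client against the schema above; the database credentials and the instance id 1 are placeholders):
-
-```shell
-mysql -u{user} -p -D dolphinscheduler -e "
-  SELECT pi.name AS process_instance, ti.name AS task, ti.state
-  FROM t_ds_process_instance pi
-  JOIN t_ds_task_instance ti ON ti.process_instance_id = pi.id
-  WHERE pi.id = 1;"
-```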
-
----
-
-<a name="yd79T"></a>
-### Core Table Schema
-<a name="6bVhH"></a>
-#### t_ds_process_definition
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | process definition name |
-| version | int | process definition version |
-| release_state | tinyint | process definition release state:0:offline,1:online |
-| project_id | int | project id |
-| user_id | int | process definition creator id |
-| process_definition_json | longtext | process definition json content |
-| description | text | process definition description |
-| global_params | text | global parameters |
-| flag | tinyint | process is available: 0 not available, 1 available |
-| locations | text | Node location information |
-| connects | text | Node connection information |
-| receivers | text | receivers |
-| receivers_cc | text | carbon copy list |
-| create_time | datetime | create time |
-| timeout | int | timeout |
-| tenant_id | int | tenant id |
-| update_time | datetime | update time |
-
-<a name="t5uxM"></a>
-#### t_ds_process_instance
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | process instance name |
-| process_definition_id | int | process definition id |
-| state | tinyint | process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
-| recovery | tinyint | process instance failover flag:0:normal,1:failover instance |
-| start_time | datetime | process instance start time |
-| end_time | datetime | process instance end time |
-| run_times | int | process instance run times |
-| host | varchar | process instance host |
-| command_type | tinyint | command type:0 start ,1 Start from the current node,2 Resume a fault-tolerant process,3 Resume Pause Process, 4 Execute from the failed node,5 Complement, 6 dispatch, 7 re-run, 8 pause, 9 stop ,10 Resume waiting thread |
-| command_param | text | json command parameters |
-| task_depend_type | tinyint | task depend type. 0: only current node,1:before the node,2:later nodes |
-| max_try_times | tinyint | max try times |
-| failure_strategy | tinyint | failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed |
-| warning_type | tinyint | warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success |
-| warning_group_id | int | warning group id |
-| schedule_time | datetime | schedule time |
-| command_start_time | datetime | command start time |
-| global_params | text | global parameters |
-| process_instance_json | longtext | process instance json (a copy of the process definition json) |
-| flag | tinyint | process instance is available: 0 not available, 1 available |
-| update_time | timestamp | update time |
-| is_sub_process | int | whether the process is sub process:  1 sub-process,0 not sub-process |
-| executor_id | int | executor id |
-| locations | text | Node location information |
-| connects | text | Node connection information |
-| history_cmd | text | history commands of process instance operation |
-| dependence_schedule_times | text | depend schedule fire time |
-| process_instance_priority | int | process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-| timeout | int | time out |
-| tenant_id | int | tenant id |
-
-<a name="tHZsY"></a>
-#### t_ds_task_instance
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | task name |
-| task_type | varchar | task type |
-| process_definition_id | int | process definition id |
-| process_instance_id | int | process instance id |
-| task_json | longtext | task content json |
-| state | tinyint | Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
-| submit_time | datetime | task submit time |
-| start_time | datetime | task start time |
-| end_time | datetime | task end time |
-| host | varchar | host of task running on |
-| execute_path | varchar | task execute path in the host |
-| log_path | varchar | task log path |
-| alert_flag | tinyint | whether alert |
-| retry_times | int | task retry times |
-| pid | int | pid of task |
-| app_link | varchar | yarn app id |
-| flag | tinyint | taskinstance is available: 0 not available, 1 available |
-| retry_interval | int | retry interval when task failed  |
-| max_retry_times | int | max retry times |
-| task_instance_priority | int | task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-
-<a name="gLGtm"></a>
-#### t_ds_command
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| command_type | tinyint | Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread |
-| process_definition_id | int | process definition id |
-| command_param | text | json command parameters |
-| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward, 2 backward |
-| failure_strategy | tinyint | Failed policy: 0 end, 1 continue |
-| warning_type | tinyint | Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent |
-| warning_group_id | int | warning group |
-| schedule_time | datetime | schedule time |
-| start_time | datetime | start time |
-| executor_id | int | executor id |
-| dependence | varchar | dependence |
-| update_time | datetime | update time |
-| process_instance_priority | int | process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-
-
-
diff --git a/docs/en-us/1.2.0/user_doc/quick-start.md b/docs/en-us/1.2.0/user_doc/quick-start.md
deleted file mode 100644
index 7e4ac7d..0000000
--- a/docs/en-us/1.2.0/user_doc/quick-start.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Quick Start
-
-* Administrator user login
-
-  > Address:192.168.xx.xx:8888  Username and password:admin/dolphinscheduler123
-
-<p align="center">
-   <img src="/img/login_en.png" width="60%" />
- </p>
-
-* Create queue
-
-<p align="center">
-   <img src="/img/create-queue-en.png" width="60%" />
- </p>
-
-  * Create tenant
-      <p align="center">
-    <img src="/img/create-tenant-en.png" width="60%" />
-  </p>
-
-  * Creating Ordinary Users
-<p align="center">
-      <img src="/img/create-user-en.png" width="60%" />
- </p>
-
-  * Create an alarm group
-
- <p align="center">
-    <img src="/img/alarm-group-en.png" width="60%" />
-  </p>
-
-  
-  * Create an worker group
-  
-   <p align="center">
-      <img src="/img/worker-group-en.png" width="60%" />
-    </p>
-    
- * Create an token
-  
-   <p align="center">
-      <img src="/img/token-en.png" width="60%" />
-    </p>
-     
-  
-  * Log in with regular users
-  > Click the username in the upper right corner, choose "exit", and then log back in as the ordinary user.
-
-  * Project Management - > Create Project - > Click on Project Name
-<p align="center">
-      <img src="/img/create_project_en.png" width="60%" />
- </p>
-
-  * Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
-
-<p align="center">
-   <img src="/img/process_definition_en.png" width="60%" />
- </p>
-
-  * Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
-
- <p align="center">
-   <img src="/img/log_en.png" width="60%" />
-</p>
diff --git a/docs/en-us/1.2.0/user_doc/standalone-deployment.md b/docs/en-us/1.2.0/user_doc/standalone-deployment.md
deleted file mode 100644
index 79b8935..0000000
--- a/docs/en-us/1.2.0/user_doc/standalone-deployment.md
+++ /dev/null
@@ -1,456 +0,0 @@
-# Standalone Deployment
-
-DolphinScheduler Standalone deployment is divided into two parts: backend deployment and frontend deployment.
-
-# 1. Backend Deployment
-
-### 1.1: Before you begin (please install the required basic software yourself)
-
- * [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.6 or 5.7): Choose One
- * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): Required. Make sure the JAVA_HOME and PATH environment variables are configured in /etc/profile
- * [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+): Required
- * [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download): Optional. If you need the resource upload function, a local file directory can be used as the upload folder for a single machine (this does not require deploying Hadoop), or you can upload to Hadoop or MinIO.
-
-```markdown
- Tip: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
-```
-
-### 1.2: Download the backend package.
-
-- Please download the latest default installation package to the server deployment directory, for example /opt/dolphinscheduler. Download address: [download](/en-us/download/download.html) (1.2.0 is used as the example). After downloading, move the package to the deployment directory and uncompress it.
-
-```shell
-# Create the deployment directory. Do not place it under a privileged directory such as /root or /home.
-mkdir -p /opt/dolphinscheduler;
-cd /opt/dolphinscheduler;
-# uncompress
-tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin.tar.gz -C /opt/dolphinscheduler;
- 
-mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin  dolphinscheduler-backend
-```
-
-### 1.3: Create an individual user for deployment and grant directory operation permissions
-
-- Create an individual user and be sure to configure passwordless sudo for it. Creating the 'dolphinscheduler' user is used as the example.
-
-```shell
-# useradd need root permission
-useradd dolphinscheduler;
-
-# setup password
-echo "dolphinscheduler" | passwd --stdin dolphinscheduler
-
-# setup sudo passwordless
-sed -i '$adolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' /etc/sudoers
-sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
-
-# Modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-backend directory 
-chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-backend
-```
-
-```
- Notes:
- - The task execution service uses 'sudo -u {linux-user}' to switch between different Linux users to run multi-tenant jobs, so the deployment user needs passwordless sudo permission. First-time users can ignore this point if they don't fully understand it yet.
- - If you find "Defaults requiretty" in the "/etc/sudoers" file, comment it out as well.
- - If you need resource upload, grant the deployment user permission to operate on the local file system, HDFS or MinIO.
-```
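-
-As an optional sanity check: if passwordless sudo is configured correctly, running the following as the 'dolphinscheduler' user should print `root` without prompting for a password.
-
-```shell
-# run as the dolphinscheduler deployment user created above
-sudo whoami
-```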
-
-### 1.4: ssh passwordless configuration
-
-- Switch to deployment user and configure ssh passwordless login
-
-```shell
-su dolphinscheduler;
-
-ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-chmod 600 ~/.ssh/authorized_keys
-```
-
-Note: *If configure success, the dolphinscheduler user does not need to enter a password when executing the command `ssh localhost`*
-
-### 1.5: Database initialization
-
-- Log in to the database. The default database is PostgreSQL. If you select MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
-``` 
-mysql -uroot -p
-```
-
-- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password** 
-
-    ``` mysql
-    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    mysql> flush privileges;
-    ```
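-
-    As an optional check ({user} and {password} are the placeholders used above), verify that the new account can reach the database before continuing:
-
-    ```shell
-    # lists the (currently empty) table set; an "Access denied" error means the GRANT step did not take effect
-    mysql -u{user} -p'{password}' -h localhost dolphinscheduler -e "SHOW TABLES;"
-    ```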
-
-
-- Create tables and import basic data
-
-    - Modify the following configuration in application-dao.properties under the conf directory
-
-      - ```shell
-        vi conf/application-dao.properties 
-        ```
-
-    - If you choose MySQL, please comment out the PostgreSQL configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) package to the lib directory, and then configure the database connection information correctly.
-    
-    ```properties
-      # postgre
-      # spring.datasource.driver-class-name=org.postgresql.Driver
-      # spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8     # replace with the correct IP address
-      spring.datasource.username=xxx						# replace with the correct {user} value
-      spring.datasource.password=xxx						# replace with the correct {password} value
-    ```
-
-    - After modifying and saving, execute the create table and import data script in the script directory.
-
-    ```shell
-    sh script/create-dolphinscheduler.sh
-    ```
-
-​       *Note: If you execute the above script and report "/bin/java: No such file or directory" error, please configure JAVA_HOME and PATH variables in /etc/profile*
-
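-Assuming the JDK is installed under /opt/soft/java (the same example path used in section 1.6 below), the variables can be appended to /etc/profile roughly like this:
-
-```shell
-# append to /etc/profile, then run `source /etc/profile` to load it into the current shell
-export JAVA_HOME=/opt/soft/java
-export PATH=$JAVA_HOME/bin:$PATH
-```
-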
-### 1.6: Modify runtime parameters.
-
-- Modify the environment variables in the `.dolphinscheduler_env.sh` file under the 'conf/env' directory (the example below assumes the relevant software is installed under '/opt/soft')
-
-    ```shell
-    export HADOOP_HOME=/opt/soft/hadoop
-    export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-    #export SPARK_HOME1=/opt/soft/spark1
-    export SPARK_HOME2=/opt/soft/spark2
-    export PYTHON_HOME=/opt/soft/python
-    export JAVA_HOME=/opt/soft/java
-    export HIVE_HOME=/opt/soft/hive
-    export FLINK_HOME=/opt/soft/flink
-    export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$PATH
-    ```
-
-     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Those that are not used can be ignored or commented out. If ".dolphinscheduler_env.sh" cannot be found, run "ls -a"`
-
-    
-
-- Create a soft link from the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as the example)
-
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
- - Modify the parameters in the one-click deployment script `install.sh`, pay special attention to the configuration of the following parameters.
-
-    ```shell
-    # Choose mysql or postgresql
-    dbtype="mysql"
-    
-    # Database connection address
-    dbhost="localhost:3306"
-    
-    # Database schema name
-    dbname="dolphinscheduler"
-    
-    # Database username
-    username="xxx"    
-    
-    # Database password. If it contains special characters, escape them with '\'. Set it to the {password} value configured above.
-    passowrd="xxx"
-    
-    # The directory where DS is installed, such as: '/opt/soft/dolphinscheduler', which is different from the current directory.
-    installPath="/opt/soft/dolphinscheduler"
-    
-    # The system user created in section 1.3.
-    deployUser="dolphinscheduler"
-    
-    # Zookeeper connection address, standalone machine is localhost:2181, port must be provided.
-    zkQuorum="localhost:2181"
-    
-    # On machine which the DS service is deployed, set localhost
-    ips="localhost"
-    
-    # On machine which the master service is deployed, set localhost
-    masters="localhost"
-    
-    # On machine which the worker service is deployed, set localhost
-    workers="localhost"
-    
-    # On machine which the alert service is deployed, set localhost
-    alertServer="localhost"
-    
-    # On machine which the api service is deployed, set localhost
-    apiServers="localhost"
-    
-    
-    # EMail configuration, taking QQ mailbox as an example
-    # EMail protocol
-    mailProtocol="SMTP"
-    
-    # EMail server address
-    mailServerHost="smtp.exmail.qq.com"
-    
-    # EMail server Port
-    mailServerPort="25"
-    
-    # mailSender and mailUser can be the same one.
-    # Sender
-    mailSender="xxx@qq.com"
-    
-    # Receiver
-    mailUser="xxx@qq.com"
-    
-    # EMail password
-    mailPassword="xxx"
-    
-    # Set true if the mailbox is TLS protocol, otherwise set to false.
-    starttlsEnable="true"
-    
-    # Mail service address value, refer to mailServerHost above.
-    sslTrust="smtp.exmail.qq.com"
-    
-    # Set true if the mailbox is SSL protocol, otherwise set to false. Note: starttlsEnable and sslEnable cannot be true at the same time.
-    sslEnable="false"
-    
-    # Download path of excel
-    xlsFilePath="/tmp/xls"
-    
-    # Where resource files (SQL scripts and other business files) are uploaded. Can be set to HDFS, S3 or NONE. For a standalone deployment that wants to use the local file system, configure it as HDFS, because the HDFS implementation also supports the local file system; if you do not need the resource upload function, select NONE. Important: using the local file system does not require deploying Hadoop.
-    resUploadStartupType="HDFS"
-    
-    # Take the local file system as an example.
-    # Note: If you want to upload resource files to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
-    defaultFS="file:///data/dolphinscheduler"    # hdfs://{ip|hostname}:8020
-    
-    
-    # If the ResourceManager is HA, configure the active and standby IPs or hostnames of the ResourceManager nodes, such as "192.168.xx.xx,192.168.xx.xx"; if there is a single ResourceManager or yarn is not used at all, set yarnHaIps="". Yarn is not used in this example, so the value is "".
-    yarnHaIps=""
-    
-    # If it is a single ResourceManager, configure it as the ResourceManager node ip or hostname, otherwise, keep the default value. Yarn is not used here, keep the default.
-    singleYarnIp="ark1"
-    
-    # Since HDFS supports the local file system, you need to ensure that the local folder exists and has read and write permissions.
-    hdfsPath="/data/dolphinscheduler"
-    ```
-    
-    *Note: If you plan to use the `Resource Center` function, execute the following command:*
-    
-    ```shell
-    sudo mkdir /data/dolphinscheduler
-    sudo chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler
-    ```
-
-### 1.7: Install python's Zookeeper tool kazoo
-
-- Install python's Zookeeper tool. `This step is only used for one-click deployment.`
-
-```shell
-# Install pip
-sudo yum -y install python-pip;  # ubuntu: sudo apt-get install python-pip
-sudo pip install kazoo;
-```
-
-  *Note: If yum does not find python-pip, you can also install it by following commands*
-
-```shell
-sudo curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
-sudo python get-pip.py  # python3: sudo python3 get-pip.py 
-# then
-sudo pip install kazoo;
-```
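-
-Optionally, confirm that kazoo is importable and can reach ZooKeeper (assuming ZooKeeper is listening on localhost:2181, as configured in install.sh above):
-
-```shell
-python -c "from kazoo.client import KazooClient; c = KazooClient(hosts='localhost:2181'); c.start(); print('zookeeper ok'); c.stop()"
-```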
-
-- Switch to the deployment user and execute the one-click deployment script
-
-    `sh install.sh` 
-
-```
-Note:
-For the first deployment, the following message may appear during step `3, stop server`. It can be ignored.
-sh: bin/dolphinscheduler-daemon.sh: No such file or directory
-```
-
-- After the script is completed, the following 5 services will be started. Use the `jps` command to check whether the services are started (` jps` comes with `java JDK`)
-
-```
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    LoggerServer         ----- logger service
-    ApiApplicationServer ----- api service
-    AlertServer          ----- alert service
-```
-If the above services are started normally, the automatic deployment is successful.
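-
-For example, a quick count of the expected DolphinScheduler processes (names as listed above) could look like this:
-
-```shell
-# prints the number of DolphinScheduler JVM processes; expect 5 on a standalone deployment
-jps | grep -cE 'MasterServer|WorkerServer|LoggerServer|ApiApplicationServer|AlertServer'
-```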
-
-
-After the deployment is successful, you can view the logs. The logs are stored in the logs folder.
-
-```
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
-    └── dolphinscheduler-logger-server.log
-```
-
-
-
-# 2. Frontend Deployment
-
-Please download the latest version of the frontend installation package to the server deployment directory. Download address: [Download](/en-us/download/download.html) (1.2.0 is used as the example). Upload the tar.gz package to this directory after downloading and uncompress it.
-
-```shell
-cd /opt/dolphinscheduler;
-
-tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin dolphinscheduler-ui
-```
-
-
-
-**Choose any one of the following methods, automated deployment is recommended.**
-
-### 2.1 Automated Deployment
-
-- Enter the dolphinscheduler-ui directory and execute (`Note: Automated deployment will automatically download nginx`)
-
-  ```shell
-  cd dolphinscheduler-ui;
-  sh ./install-dolphinscheduler-ui.sh;
-  ```
-
-  - During execution you will be asked for the frontend port; the default is 8888. Press Enter to accept the default, or type another port.
-  - Next, enter the IP of the api-server that the frontend UI interacts with.
-  - Next, enter the port of that api-server.
-  - Next, select the operating system.
-  - Wait for deployment to complete.
-
-- After deployment, in order to prevent too large resources from uploading to the resource center, it is recommended to modify the nginx upload size parameters, as follows:
-
-  - Add Nginx configuration client_max_body_size 1024m, you can add it in the http method body.
-
-  ```shell
-  vi /etc/nginx/nginx.conf
-  
-  # add param
-  client_max_body_size 1024m;
-  ```
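-
-  - Optionally, validate the configuration before restarting:
-
-  ```shell
-  # standard nginx configuration syntax check
-  sudo nginx -t
-  ```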
-
-  - Then restart Nginx service
-
-  ```shell
-  systemctl restart nginx
-  ```
-
-- Visit the front page address: http://localhost:8888. If the front login page appears, the front web installation is complete.
-
-  default user password:admin/dolphinscheduler123
-  
-  <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-### 2.2 Manual Deployment
-- Install nginx yourself: download it from the official website, or run `yum install nginx -y`.
-
-- Modify the nginx configuration file (Note: some place need to be modified by yourself)
-
-```
-vi /etc/nginx/nginx.conf
-
-server {
-    listen       8888; # Your Port
-    server_name  localhost;
-    #charset koi8-r;
-    #access_log  /var/log/nginx/host.access.log  main;
-    location / {
-        root   /opt/soft/dolphinscheduler-ui/dist;      # Your dist directory which you uncompress
-        index  index.html index.html;
-    }
-    location /dolphinscheduler {
-        proxy_pass http://localhost:12345;    # Your ApiApplicationServer address
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header x_real_ipP $remote_addr;
-        proxy_set_header remote_addr $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_http_version 1.1;
-        proxy_connect_timeout 4s;
-        proxy_read_timeout 30s;
-        proxy_send_timeout 12s;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-    }
-    #error_page  404              /404.html;
-    # redirect server error pages to the static page /50x.html
-    #
-    error_page   500 502 503 504  /50x.html;
-    location = /50x.html {
-        root   /usr/share/nginx/html;
-    }
-}
-```
-- Then restart Nginx service
-
-  ```shell
-  systemctl restart nginx
-  ```
-
-- Visit the front page address: http://localhost:8888. If the front login page appears, the front web installation is complete.
-
-  default user password:admin/dolphinscheduler123
-  
-  <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-
-
-# 3. Start and stop service
-
-* Stop all services
-
-  ` sh ./bin/stop-all.sh`
-
-* Start all services
-
-  ` sh ./bin/start-all.sh`
-
-* Start and stop master service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* Start and stop worker Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* Start and stop api Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-
-* Start and stop logger Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-
-* Start and stop alert service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-`Note: Please refer to the "Architecture Design" section for service usage`
-
diff --git a/docs/en-us/1.2.0/user_doc/system-manual.md b/docs/en-us/1.2.0/user_doc/system-manual.md
deleted file mode 100644
index 6ac831d..0000000
--- a/docs/en-us/1.2.0/user_doc/system-manual.md
+++ /dev/null
@@ -1,736 +0,0 @@
-# System Use Manual
-
-## Operational Guidelines
-
-### Home page
-The homepage contains task status statistics, process status statistics, and workflow definition statistics for all user projects.
-
-<p align="center">
-      <img src="/img/home_en.png" width="80%" />
- </p>
-
-### Create a project
-
-  - Click "Project - > Create Project", enter project name,  description, and click "Submit" to create a new project.
-  - Click on the project name to enter the project home page.
-<p align="center">
-      <img src="/img/project_home_en.png" width="80%" />
- </p>
-
-> The project home page contains task status statistics, process status statistics, and workflow definition statistics for the project.
-
- - Task State Statistics: It refers to the statistics of the number of tasks to be run, failed, running, completed and succeeded in a given time frame.
- - Process State Statistics: It refers to the statistics of the number of waiting, failing, running, completing and succeeding process instances in a specified time range.
- - Process Definition Statistics: The process definition created by the user and the process definition granted by the administrator to the user are counted.
-
-
-### Creating Process definitions
-  - Go to the project home page, click "Process definitions" and enter the list page of process definition.
-  - Click "Create process" to create a new process definition.
-  - Drag the "SHELL" node to the canvas and add a shell task.
-  - Fill in the Node Name, Description, and Script fields.
-  - Selecting "task priority" will give priority to high-level tasks in the execution queue. Tasks with the same priority will be executed in the first-in-first-out order.
-  - Timeout alarm: fill in the "Overtime Time". When the task execution time exceeds this timeout, the task raises an alarm and fails due to timeout.
-  - Fill in "Custom Parameters" and refer to [Custom Parameters](#CustomParameters) <!-- markdown-link-check-disable-line -->
-    <p align="center">
-    <img src="/img/process_definitions_en.png" width="80%" />
-      </p>
-  - Increase the order of execution between nodes: click "line connection". As shown, task 2 and task 3 are executed in parallel. When task 1 is executed, task 2 and task 3 are executed simultaneously.
-
-<p align="center">
-   <img src="/img/task_en.png" width="80%" />
- </p>
-
-  - Delete dependencies: Click on the arrow icon to "drag nodes and select items", select the connection line, click on the delete icon to delete dependencies between nodes.
-<p align="center">
-      <img src="/img/delete_dependencies_en.png" width="80%" />
- </p>
-
-  - Click "Save", enter the name of the process definition, the description of the process definition, and set the global parameters.
-
-<p align="center">
-   <img src="/img/global_parameters_en.png" width="80%" />
- </p>
-
-  - For other types of nodes, refer to [task node types and parameter settings](#TaskNodeType) <!-- markdown-link-check-disable-line -->
-
-### Execution process definition
-  - **A process definition in the offline state can be edited, but not run**, so bringing the workflow online is the first step.
-  > Click the process definition to return to the process definition list, then click the "online" icon to bring the process definition online.
-
-  > Before taking a workflow offline, the timed tasks in timing management should be taken offline first, so that the workflow definition can be set offline successfully.
-
-  - Click "Run" to execute the process. Description of operation parameters:
-    * Failure strategy: **the strategy applied to other parallel task nodes when a task node fails**. "Continue" means the other task nodes keep running normally; "End" means all running tasks are terminated and the whole process ends.
-    * Notification strategy: when the process finishes, send a notification mail with the process execution information according to the process status.
-    * Process priority: the priority of the running process has five levels: Highest, High, Medium, Low and Lowest. Higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in-first-out order.
-    * Worker group: the process can only be executed on the specified group of machines. By default, it can be executed on any worker.
-    * Notification group: when the process ends or fault tolerance occurs, the process information is mailed to all members of the notification group.
-    * Recipient: enter a mailbox and press Enter to save. When the process ends or fault tolerance occurs, an alert mail is sent to the recipient list.
-    * Cc: enter a mailbox and press Enter to save. When the process ends or fault tolerance occurs, the alert mail is also copied to the Cc list.
-    
-<p align="center">
-   <img src="/img/start-process-en.png" width="80%" />
- </p>
-
-  * Complement: runs the workflow definition for a specified date range. Select the complement time range (currently only continuous days are supported), for example the data from May 1 to May 10, as shown in the figure:
-  
-<p align="center">
-   <img src="/img/complement-en.png" width="80%" />
- </p>
-
-> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement will be executed sequentially from May 1 to May 10. In parallel mode, the tasks from May 1 to May 10 will be executed simultaneously.
-
-### Timing Process Definition
-  - Create timing: "Process Definition -> Timing"
-  - Choose a start and end time. Within this range, timed workflow instances are generated on schedule; outside of it, no more timed instances are produced.
-  
-<p align="center">
-   <img src="/img/timing-en.png" width="80%" />
- </p>
-
-  - Add a timer to be executed once a day at 5:00 a.m. as shown below:
-<p align="center">
-      <img src="/img/timer-en.png" width="80%" />
- </p>
-
-  - Bring the timer online: **a newly created timer is offline. You need to click "Timing Management -> Online" for it to take effect.**
-
-### View process instances
-  > Click on "Process Instances" to view the list of process instances.
-
-  > Click on the process name to see the status of task execution.
-
-  <p align="center">
-   <img src="/img/process-instances-en.png" width="80%" />
- </p>
-
-  > Click on the task node, click "View Log" to view the task execution log.
-
-  <p align="center">
-   <img src="/img/view-log-en.png" width="80%" />
- </p>
-
- > Click on the task instance node, click **View History** to view the list of task instances that the process instance runs.
-
- <p align="center">
-    <img src="/img/instance-runs-en.png" width="80%" />
-  </p>
-
-
-  > Operations on workflow instances:
-
-<p align="center">
-   <img src="/img/workflow-instances-en.png" width="80%" />
-</p>
-
-  * Edit: a terminated process can be edited. When saving after editing, you can choose whether to update the process definition.
-  * Rerun: a terminated process can be re-executed.
-  * Recover failure: for a failed process, a recover-failure operation can be performed, starting from the failed node.
-  * Stop: stop a running process; the backend `kill`s the worker process first, then performs a `kill -9`.
-  * Pause: a running process can be **suspended**; its state becomes **waiting to be executed**. The tasks currently being executed finish, and the next tasks to be executed are suspended.
-  * Resume pause: **a suspended process** can be resumed and run directly from the suspended node.
-  * Delete: delete the process instance and the task instances under it.
-  * Gantt diagram: The vertical axis of Gantt diagram is the topological ordering of task instances under a process instance, and the horizontal axis is the running time of task instances, as shown in the figure:
-<p align="center">
-      <img src="/img/gantt-en.png" width="80%" />
-</p>
-
-### View task instances
-  > Click on "Task Instance" to enter the Task List page and query the performance of the task.
-  >
-  >
-
-<p align="center">
-   <img src="/img/task-instances-en.png" width="80%" />
-</p>
-
-  > Click "View Log" in the action column to view the log of task execution.
-
-<p align="center">
-   <img src="/img/task-execution-en.png" width="80%" />
-</p>
-
-### Create data source
-  > Data Source Center supports MySQL, POSTGRESQL, HIVE and Spark data sources.
-
-#### Create and edit MySQL data source
-
-  - Click on "Datasource - > Create Datasources" to create different types of datasources according to requirements.
-- Datasource: Select MYSQL
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to MySQL
-- Port: Enter the port to connect MySQL
-- User name: Set the username to connect to MySQL
-- Password: Set the password to connect to MySQL
-- Database name: Enter the name of the database connecting MySQL
-- Jdbc connection parameters: parameter settings for MySQL connections, filled in as JSON
-
-<p align="center">
-   <img src="/img/mysql-en.png" width="80%" />
- </p>
-
-  > Click "Test Connect" to test whether the data source can be successfully connected.
-  >
-  >
-
-#### Create and edit POSTGRESQL data source
-
-- Datasource: Select POSTGRESQL
-- Datasource Name: Name of Input Data Source
-- Description: Description of input data sources
-- IP: Enter IP to connect to POSTGRESQL
-- Port: Input port to connect POSTGRESQL
-- Username: Set the username to connect to POSTGRESQL
-- Password: Set the password to connect to POSTGRESQL
-- Database name: Enter the name of the database connecting to POSTGRESQL
-- Jdbc connection parameters: parameter settings for POSTGRESQL connections, filled in as JSON
-
-<p align="center">
-   <img src="/img/create-datasource-en.png" width="80%" />
- </p>
-
-#### Create and edit HIVE data source
-
-1.Connect with HiveServer 2
-
- <p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-
-  - Datasource: Select HIVE
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter IP to connect to HIVE
-- Port: Input port to connect to HIVE
-- Username: Set the username to connect to HIVE
-- Password: Set the password to connect to HIVE
-- Database Name: Enter the name of the database connecting to HIVE
-- Jdbc connection parameters: parameter settings for HIVE connections, filled in as JSON
-
-2.Connect using Hive Server 2 HA Zookeeper mode
-
- <p align="center">
-    <img src="/img/zookeeper-en.png" width="80%" />
-  </p>
-
-
-Note: If **kerberos** is turned on, you need to fill in **Principal**
-<p align="center">
-    <img src="/img/principal-en.png" width="80%" />
-  </p>
-
-
-
-
-#### Create and Edit Spark Datasource
-
-<p align="center">
-   <img src="/img/edit-datasource-en.png" width="80%" />
- </p>
-
-- Datasource: Select Spark
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to Spark
-- Port: Input port to connect Spark
-- Username: Set the username to connect to Spark
-- Password: Set the password to connect to Spark
-- Database name: Enter the name of the database connecting to Spark
-- Jdbc Connection Parameters: Parameter settings for Spark Connections, filled in as JSON
-
-
-
-Note: If **Kerberos** is turned on, you need to fill in **Principal**
-
-<p align="center">
-    <img src="/img/kerberos-en.png" width="80%" />
-  </p>
-
-### Upload Resources
-  - Upload resource files and udf functions, all uploaded files and resources will be stored on hdfs, so the following configuration items are required:
-
-```
-conf/common/common.properties  
-    # Users who have permission to create directories under the HDFS root path
-    hdfs.root.user=hdfs
-    # base data directory: resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. "/dolphinscheduler" is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
-    # whether kerberos starts
-    hadoop.security.authentication.startup.state=false
-    # java.security.krb5.conf path
-    java.security.krb5.conf.path=/opt/krb5.conf
-    # loginUserFromKeytab user
-    login.user.keytab.username=hdfs-mycluster@ESZ.COM
-    # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-    
-conf/common/hadoop.properties      
-    # HA or single namenode. For namenode HA, you need to copy core-site.xml and hdfs-site.xml
-    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020    
-    # resourcemanager HA needs the IPs; leave this empty for a single resourcemanager
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
-    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
-    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-
-```
-- Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured; leave the other empty.
-- You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project and restart the api-server service.
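-
-A sketch of that copy step, assuming the Hadoop client configuration lives under /etc/hadoop/conf and DolphinScheduler is installed under /opt/soft/dolphinscheduler (adjust both paths to your environment):
-
-```shell
-cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /opt/soft/dolphinscheduler/conf/
-# restart the api-server so it picks up the new configuration
-sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh stop api-server
-sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh start api-server
-```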
-
-#### File Manage
-
-  > It is the management of various resource files, including creating basic txt/log/sh/conf files, uploading jar packages and other types of files, editing, downloading, deleting and other operations.
-  >
-  >
-  > <p align="center">
-  >  <img src="/img/file-manage-en.png" width="80%" />
-  > </p>
-
-  * Create file
- > File formats support the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql
-
-<p align="center">
-   <img src="/img/create-file.png" width="80%" />
- </p>
-
-  * Upload Files
-
-> Upload files: click the upload button or drag the file into the upload area; the file name field is automatically filled with the name of the uploaded file.
-
-<p align="center">
-   <img src="/img/file-upload-en.png" width="80%" />
- </p>
-
-
-  * File View
-
-> For viewable file types, click on the file name to view file details
-
-<p align="center">
-   <img src="/img/file-view-en.png" width="80%" />
- </p>
-
-  * Download files
-
-> You can download a file by clicking the download button in the top right corner of the file details page, or by using the download button in the file list.
-
-  * File rename
-
-<p align="center">
-   <img src="/img/rename-en.png" width="80%" />
- </p>
-
-#### Delete
->  File List - > Click the Delete button to delete the specified file
-
-#### Resource management
-  > Resource management and file management are similar. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts and configuration files.
-
-  * Upload UDF resources
-  > The same as uploading files.
-
-#### Function management
-
-  * Create UDF Functions
-  > Click "Create UDF Function", enter parameters of udf function, select UDF resources, and click "Submit" to create udf function.
-  >
-  >
-  >
-  > Currently only temporary udf functions for HIVE are supported
-  >
-  > 
-  >
-  > - UDF function name: name when entering UDF Function
-  > - Package Name: Full Path of Input UDF Function
-  > - Parameter: Input parameters used to annotate functions
-  > - Database Name: Reserved Field for Creating Permanent UDF Functions
-  > - UDF Resources: Set up the resource files corresponding to the created UDF
-  >
-  > 
-
-<p align="center">
-   <img src="/img/udf-function.png" width="80%" />
- </p>
-
-## Security
-
-  - The Security module provides queue management, tenant management, user management, warning group management, worker group management, token management and other functions. It can also authorize resources, data sources, projects, etc.
-- Administrator login, default username password: admin/dolphinscheduler123
-
-
-
-### Create queues
-
-
-
-  - Queues are used to execute spark, mapreduce and other programs, which require the use of "queue" parameters.
-- "Security" - > "Queue Manage" - > "Create Queue" 
-     <p align="center">
-    <img src="/img/create-queue-en.png" width="80%" />
-  </p>
-
-
-### Create Tenants
-  - The tenant corresponds to the account of Linux, which is used by the worker server to submit jobs. If Linux does not have this user, the worker would create the account when executing the task.
-  - Tenant Code: **the tenant code is the user account on Linux; it must be unique and cannot be duplicated.**
-
- <p align="center">
-    <img src="/img/create-tenant-en.png" width="80%" />
-  </p>
-
-### Create Ordinary Users
-  -  User types are **ordinary users** and **administrator users**.
-    * Administrators have **authorization and user management** privileges, but no privileges to **create projects or perform process definition operations**.
-    * Ordinary users can **create projects and create, edit, and execute process definitions**.
-    * Note: **if a user switches tenant, all resources under the old tenant are copied to the new tenant.**
-<p align="center">
-      <img src="/img/create-user-en.png" width="80%" />
- </p>
-
-### Create alarm group
-  * The alarm group is a parameter set at start-up. After the process is finished, the status of the process and other information will be sent to the alarm group by mail.
-  * Create and edit warning groups
-    <p align="center">
-    <img src="/img/alarm-group-en.png" width="80%" />
-    </p>
-
-### Create Worker Group
-  - Worker group provides a mechanism for tasks to run on a specified worker. Administrators create worker groups, which can be specified in task nodes and operation parameters. If the specified grouping is deleted or no grouping is specified, the task will run on any worker.
-- A worker group can contain multiple IP addresses (**host aliases cannot be used**), separated by **half-width (English) commas**
-
-  <p align="center">
-    <img src="/img/worker-group-en.png" width="80%" />
-  </p>
-
-### Token manage
-  - Because the back-end interfaces require a login check, token management provides a way to operate the system by calling the interfaces directly.
-    <p align="center">
-      <img src="/img/token-en.png" width="80%" />
-    </p>
-- Call examples:
-
-```java
-    /**
-     * test token
-     */
-    public  void doPOSTParam()throws Exception{
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/dolphinscheduler/projects/create");
-        httpPost.setHeader("token", "123");
-        // set parameters
-        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
-        parameters.add(new BasicNameValuePair("projectName", "qzw"));
-        parameters.add(new BasicNameValuePair("desc", "qzw"));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-```
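-
-The same call can also be made from the command line; the sketch below mirrors the Java example above (same endpoint, token header and form parameters):
-
-```shell
-curl -X POST "http://127.0.0.1:12345/dolphinscheduler/projects/create" \
-     -H "token: 123" \
-     -d "projectName=qzw" \
-     -d "desc=qzw"
-```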
-
-### Grant authority
-  - Granting permissions includes project permissions, resource permissions, datasource permissions, UDF Function permissions.
-> Administrators can authorize projects, resources, data sources and UDF Functions that are not created by ordinary users. Because project, resource, data source and UDF Function are all authorized in the same way, the project authorization is introduced as an example.
-
-> Note: for projects created by the user, the user has all permissions, so these projects are not shown in the project list or the selected-project list.
-
-  - 1.Click on the authorization button of the designated person as follows:
-    <p align="center">
-      <img src="/img/operation-en.png" width="80%" />
- </p>
-
-- 2.Select the project button to authorize the project
-
-<p align="center">
-   <img src="/img/auth-project-en.png" width="80%" />
- </p>
-
-### Monitor center
-  - Service management is mainly to monitor and display the health status and basic information of each service in the system.
-
-#### Master monitor
-  - Mainly related information about master.
-<p align="center">
-      <img src="/img/master-monitor-en.png" width="80%" />
- </p>
-
-#### Worker monitor
-  - Mainly related information of worker.
-
-<p align="center">
-   <img src="/img/worker-monitor-en.png" width="80%" />
- </p>
-
-#### Zookeeper monitor
-  - Mainly the configuration information of each worker and master in ZooKeeper.
-
-<p align="center">
-   <img src="/img/zookeeper-monitor-en.png" width="80%" />
- </p>
-
-#### DB monitor
-  - Mainly the health status of DB
-
-<p align="center">
-   <img src="/img/db-monitor-en.png" width="80%" />
- </p>
- 
-#### statistics Manage
- <p align="center">
-   <img src="/img/statistics-en.png" width="80%" />
- </p>
-  
-  -  Commands to be executed: statistics on t_ds_command table
-  -  Number of commands that failed to execute: statistics on the t_ds_error_command table
-  -  Number of tasks to run: statistics of task_queue data in Zookeeper
-  -  Number of tasks to be killed: statistics of task_kill in Zookeeper
-
-## <span id=TaskNodeType>Task Node Type and Parameter Setting</span>
-
-### Shell
-
-  - When the worker executes a shell node, it generates a temporary shell script, which is executed by the Linux user with the same name as the tenant.
-> Drag the <img src="/img/tasks/icons/shell.png" width="15"/> task node in the toolbar onto the palette and double-click the task node as follows:
-
-![demo-shell-simple](/img/tasks/demo/shell.jpg)
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node can be scheduled properly, and if it does not need to be executed, you can turn on the forbidden execution switch.
-- Description : Describes the function of the node
-- Number of failed retries: Number of failed task submissions, support drop-down and manual filling
-- Failure Retry Interval: Interval between tasks that fail to resubmit tasks, support drop-down and manual filling
-- Script: User-developed SHELL program
-- Resources: A list of resource files that need to be invoked in a script
-- Custom parameters: user-defined parameters local to the SHELL task that replace the ${variables} in the script (see the sketch below)
-
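-A minimal illustration (the parameter name `table` is hypothetical): if a custom parameter named table is defined on the node, the script runs with ${table} already substituted.
-
-```shell
-#!/bin/bash
-# ${table} is replaced by the value of the custom parameter "table" before execution
-echo "exporting table: ${table}"
-```
-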
-### SUB_PROCESS
-  - The sub-process node executes an external workflow definition as a task node.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/sub-process-en.png" width="80%" />
- </p>
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node is scheduled properly
-- Description: Describes the function of the node
-- Sub-node: The process definition of the selected sub-process is selected, and the process definition of the selected sub-process can be jumped to by entering the sub-node in the upper right corner.
-
-### DEPENDENT
-
-  - Dependent nodes are **dependent checking nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node checks whether process B has a successful execution instance yesterday.
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_DEPENDENT.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/current-node-en.png" width="80%" />
- </p>
-
-  > Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process was successful or whether the C process was successfully executed.
-
-  <p align="center">
-   <img src="/img/weekly-A-en.png" width="80%" />
- </p>
-
-  > For example, process A is a weekly task and process B and C are daily tasks. Task A requires that task B and C be successfully executed every day of the last week, as shown in the figure:
-
- <p align="center">
-   <img src="/img/weekly-A1-en.png" width="80%" />
- </p>
-
-  > If weekly A also needs to be implemented successfully on Tuesday:
-
- <p align="center">
-   <img src="/img/weekly-A2-en.png" width="80%" />
- </p>
-
-###  PROCEDURE
-  - The procedure is executed according to the selected data source.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PROCEDURE.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/node-setting-en.png" width="80%" />
- </p>
-
-- Datasource: The data source type of stored procedure supports MySQL and POSTGRESQL, and chooses the corresponding data source.
-- Method: The method name of the stored procedure
-- Custom parameters: Custom parameter types of stored procedures support IN and OUT, and data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN.
-
-### SQL
-  - Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png) task node in the toolbar onto the palette.
-  - Execute non-query SQL functionality
-    <p align="center">
-      <img src="/img/dependent-nodes-en.png" width="80%" />
- </p>
-
-  - Executing the query SQL function, you can choose to send mail in the form of tables and attachments to the designated recipients.
-
-<p align="center">
-   <img src="/img/double-click-en.png" width="80%" />
- </p>
-
-- Datasource: Select the corresponding datasource
-- sql type: supports query and non-query. Query is a select-type query that returns a result set; you can choose one of three mail notification templates: table, attachment, or table plus attachment. Non-query returns no result set and is for update, delete and insert operations.
-- sql parameter: the input parameter format is key1=value1;key2=value2...
-- sql statement: SQL statement
-- UDF function: For HIVE type data sources, you can refer to UDF functions created in the resource center, other types of data sources do not support UDF functions for the time being.
-- Custom parameters: SQL task type, and stored procedure is to customize the order of parameters to set values for methods. Custom parameter type and data type are the same as stored procedure task type. The difference is that the custom parameter of the SQL task type replaces the ${variable} in the SQL statement.
-- Pre Statement: Pre-sql is executed before the sql statement
-- Post Statement: Post-sql is executed after the sql statement
-
-
-
-### SPARK 
-
-  - Through SPARK node, SPARK program can be directly executed. For spark node, worker will use `spark-submit` mode to submit tasks.
-
-> Drag the   ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SPARK.png)  task node in the toolbar onto the palette and double-click the task node as follows:
->
-> 
-
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
-
-- Program Type: Support JAVA, Scala and Python
-- Class of the main function: The full path of Main Class, the entry to the Spark program
-- Master jar package: It's Spark's jar package
-- Deployment: support three modes: yarn-cluster, yarn-client, and local
-- Driver Kernel Number: Driver Kernel Number and Memory Number can be set
-- Executor Number: Executor Number, Executor Memory Number and Executor Kernel Number can be set
-- Command Line Parameters: Setting the input parameters of Spark program to support the replacement of custom parameter variables.
-- Other parameters: support - jars, - files, - archives, - conf format
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the SPARK task that replace the ${variables} in the script
-
-Note: JAVA and Scala are only used for identification; there is no difference between them. For a Spark program developed in Python, there is no main-function class, and everything else is the same.
-
-### MapReduce(MR)
-  - Using MR nodes, MR programs can be executed directly. For MR nodes, the worker submits tasks using `hadoop jar`
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_MR.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
- 1. JAVA program
-
- <p align="center">
-    <img src="/img/java-program-en.png" width="80%" />
-  </p>
-
-- Class of the main function: The full path of the MR program's entry Main Class
-- Program Type: Select JAVA Language
-- Master jar package: MR jar package
-- Command Line Parameters: Setting the input parameters of MR program to support the replacement of custom parameter variables
-- Other parameters: support - D, - files, - libjars, - archives format
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the MR task that replace the ${variables} in the script
-
-2. Python program
-
-<p align="center">
-   <img src="/img/python-program-en.png" width="80%" />
- </p>
-
-- Program Type: Select Python Language
-- Main jar package: Python jar package running MR
-- Other parameters: support the -D, -mapper, -reducer, -input and -output formats, where user-defined parameters can be set, for example:
-- -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}
-- Among them, "mapper.py 1" after -mapper contains two arguments: the first is mapper.py and the second is 1.
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the MR task that replace the ${variables} in the script
-
-### Python
-  - With Python nodes, Python scripts can be executed directly. For Python nodes, worker will use `python ** `to submit tasks.
-
-
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PYTHON.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/python-en1-2.png" width="80%" />
- </p>
-
-- Script: User-developed Python program
-- Resource: A list of resource files that need to be invoked in a script
-- Custom parameters: User-defined parameters that are part of Python that replace the contents in the script with ${variables}
-
-### System parameter
-
-<table>
-    <tr><th>variable</th><th>meaning</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>The day before the scheduled time of the routine scheduling instance, in yyyyMMdd format; when complementing data, this date + 1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>The scheduled time of the routine scheduling instance, in yyyyMMdd format; when complementing data, this date + 1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>The scheduled time of the routine scheduling instance, in yyyyMMddHHmmss format; when complementing data, this date + 1</td>
-    </tr>
-</table>
-
-
-### Time Customization Parameters
-
- -  Custom variable names are supported in code; the declaration format is ${variable name}. A custom variable can refer to "system parameters" or specify "constants".
-
- -  When the benchmark variable is defined in the $[...] format, [yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.
-
- -  Can also do this:
- 
-
-
-    *  N years later: $[add_months(yyyyMMdd,12*N)]
-    *  N years earlier: $[add_months(yyyyMMdd,-12*N)]
-    *  N months later: $[add_months(yyyyMMdd,N)]
-    *  N months earlier: $[add_months(yyyyMMdd,-N)]
-    *  N weeks later: $[yyyyMMdd+7*N]
-    *  N weeks earlier: $[yyyyMMdd-7*N]
-    *  N days later: $[yyyyMMdd+N]
-    *  N days earlier: $[yyyyMMdd-N]
-    *  N hours later: $[HHmmss+N/24]
-    *  N hours earlier: $[HHmmss-N/24]
-    *  N minutes later: $[HHmmss+N/24/60]
-    *  N minutes earlier: $[HHmmss-N/24/60]
-
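-For example, a local task parameter dt (name chosen here for illustration) defined as $[yyyyMMdd-1] resolves to the day before the benchmark time and can be used in a shell task script:
-
-```shell
-# dt is substituted by the scheduler before the script runs, e.g. 20191231
-echo "processing partition dt=${dt}"
-```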
-
-### <span id=CustomParameters>User-defined parameters</span>
-
- - User-defined parameters are divided into global parameters and local parameters. Global parameters are the global parameters passed when the process definition and process instance are saved. Global parameters can be referenced by local parameters of any task node in the whole process.
-
-  For example:
-<p align="center">
-   <img src="/img/user-defined-en.png" width="80%" />
- </p>
-
- - global_bizdate is a global parameter, referring to system parameters.
-
-<p align="center">
-   <img src="/img/user-defined1-en.png" width="80%" />
- </p>
-
- - In a task, the local parameter local_param_bizdate references the global parameter via \${global_bizdate}. In scripts, the value of local_param_bizdate can be referenced with \${local_param_bizdate}, or its value can be set directly via JDBC.
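-
- - A minimal sketch of the above, using the parameter names from the screenshots: with a global parameter global_bizdate and a task-local parameter local_param_bizdate whose value is \${global_bizdate}, a shell script only needs to reference the local one:
-
-```shell
-# local_param_bizdate resolves to the value of global_bizdate before the task runs
-echo "bizdate is ${local_param_bizdate}"
-```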
diff --git a/docs/en-us/1.2.0/user_doc/upgrade.md b/docs/en-us/1.2.0/user_doc/upgrade.md
deleted file mode 100644
index 35ae8b2..0000000
--- a/docs/en-us/1.2.0/user_doc/upgrade.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-# DolphinScheduler upgrade documentation
-
-## 1. Back up the previous version of the files and database
-
-## 2. Stop all services of dolphinscheduler
-
- `sh ./script/stop-all.sh`
-
-## 3. Download the new version of the installation package
-
-- [download](/en-us/download/download.html), download the latest version of the front and back installation packages (backend referred to as dolphinscheduler-backend, front end referred to as dolphinscheduler-front)
-- The following upgrade operations need to be performed in the new version of the directory
-
-## 4. Database upgrade
-- Modify the following properties in conf/application-dao.properties
-
-```
-    spring.datasource.url
-    spring.datasource.username
-    spring.datasource.password
-```
-
-- Execute database upgrade script
-
-`sh ./script/upgrade-dolphinscheduler.sh`
-
-## 5. Backend service upgrade
-
-- Modify the content of the install.sh configuration and execute the upgrade script
-  
-  `sh install.sh`
-
-## 6. Frontend service upgrade
-
-- Overwrite the previous version of the dist directory
-- Restart the nginx service
-  
-    `systemctl restart nginx`
diff --git a/docs/en-us/1.2.1/user_doc/architecture-design.md b/docs/en-us/1.2.1/user_doc/architecture-design.md
deleted file mode 100644
index bdf8166..0000000
--- a/docs/en-us/1.2.1/user_doc/architecture-design.md
+++ /dev/null
@@ -1,316 +0,0 @@
-## Architecture Design
-Before explaining the architecture of the schedule system, let us first understand the common nouns of the schedule system.
-
-### 1.Noun Interpretation
-
-**DAG:** Full name Directed Acyclic Graph, abbreviated as DAG. Tasks in a workflow are assembled as a directed acyclic graph, which is traversed topologically from the nodes with zero in-degree until there are no successor nodes. For example, the following picture:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**: A visual **DAG** formed by dragging task nodes and establishing associations between them
-
-**Process instance**: A process instance is an instantiation of a process definition, generated by manual startup or scheduling. Each run of a process definition generates a new process instance
-
-**Task instance**: A task instance is the instantiation of a specific task node when a process instance runs; it indicates the specific task execution status
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON and DEPENDENT (dependency), with dynamic plug-in extension planned. Note: a **SUB_PROCESS** is itself a separate process definition that can be launched separately
-
-**Schedule mode**: The system supports timing schedule and manual schedule based on cron expressions. Supported command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume paused process, start execution from the failed node, complement, timer, rerun, pause, stop, resume waiting thread. Among them, **resume fault-tolerant workflow** and **resume waiting thread** are two command types used by internal scheduling control and cannot be ca [...]
-
-**Timed schedule**: The system uses **quartz** distributed scheduler and supports the generation of cron expression visualization
-
-**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependency** nodes, supporting **custom task dependencies between processes**
-
-**Priority**: Supports the priority of process instances and task instances. If the process instance and task instance priority are not set, the default is first in, first out.
-
-**Mail Alert**: Support **SQL Task** Query Result Email Send, Process Instance Run Result Email Alert and Fault Tolerant Alert Notification
-
-**Failure policy**: For tasks running in parallel, if one of them fails, two handling methods are provided. **Continue** means that, regardless of the failed task's status, the parallel tasks keep running until the process finishes. **End** means that once a failed task is found, the running parallel tasks are also killed and the process ends.
-
-**Complement**: Complements historical data, supporting two complement modes within an interval: **parallel and serial**
-
-
-
-### 2.System architecture
-
-#### 2.1 System Architecture Diagram
-<p align="center">
-  <img src="/img/architecture.jpg" alt="System Architecture Diagram"  />
-  <p align="center">
-        <em>System Architecture Diagram</em>
-  </p>
-</p>
-
-
-
-#### 2.2 Architectural description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed, non-central design. It is mainly responsible for splitting DAG tasks, submitting and monitoring tasks, and monitoring the health of the other MasterServers and WorkerServers.
-    When the MasterServer service starts, it registers a temporary node with ZooKeeper and performs fault tolerance by listening for changes to ZooKeeper temporary nodes.
-
-    
-
-    ##### The service mainly contains:
-
-    - **Distributed Quartz**: the distributed scheduling component, mainly responsible for starting and stopping scheduled tasks. When Quartz picks up a task, an internal thread pool in the Master handles the task's subsequent operations.
-
-    - **MasterSchedulerThread** is a scan thread that periodically scans the **command** table in the database and performs different business operations according to the **command type**
-
-    - **MasterExecThread** is mainly responsible for DAG task splitting, task submission monitoring, and the processing logic of the various command types
-
-    - **MasterTaskExecThread** is mainly responsible for task persistence
-
-      
-
-* **WorkerServer** 
-
-     - WorkerServer also adopts a distributed, non-central design. It is mainly responsible for executing tasks and providing log services. When the WorkerServer service starts, it registers a temporary node with ZooKeeper and maintains a heartbeat.
-
-       ##### This service contains:
-
-       - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and, according to the task type, invoking the corresponding executor through **TaskScheduleThread**.
-       - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refresh and download.
-
-     - **ZooKeeper**
-
-       The MasterServer and WorkerServer nodes in the system both use ZooKeeper for cluster management and fault tolerance. In addition, the system performs event monitoring and distributed locking based on ZooKeeper.
-       We also implemented the queue on Redis at one point, but we want DolphinScheduler to depend on as few components as possible, so the Redis implementation was eventually removed.
-
-     - **Task Queue**
-
-       Provides the task queue operations. The queue is currently also implemented on ZooKeeper. Since the information stored per queue entry is small, there is no need to worry about the queue holding too much data; we have stress-tested the queue with millions of entries and observed no impact on system stability or performance.
-
-     - **Alert**
-
-       Provides the alarm-related interfaces, which mainly cover the storage, query, and notification of the two types of alarm data. Notification supports **mail notification** and **SNMP (not yet implemented)**.
-
-     - **API**
-
-       The API interface layer is mainly responsible for handling requests from the front-end UI layer. The service exposes a RESTful API to the outside.
-       Interfaces include workflow creation, definition, query, modification, release, offline, manual start, stop, pause, resume, start execution from this node, and more.
-
-     - **UI**
-
-       The front-end page of the system provides various visual operation interfaces of the system. For details, see the [System User Manual](./system-manual.md) section.
-
-     
-
-#### 2.3 Architectural Design Ideas
-
-##### I. Decentralization vs. centralization
-
-###### The idea of centralization
-
-The centralized design concept is relatively simple: the nodes of a distributed cluster are divided into two roles:
-
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave role" width="50%" />
- </p>
-
-- The Master is mainly responsible for distributing tasks and supervising the health of the Slaves, and it can dynamically balance tasks across the Slaves so that no Slave node is either overloaded ("busy") or idle ("free").
-- The Worker is mainly responsible for executing tasks and maintaining a heartbeat with the Master so that the Master can assign tasks to it.
-
-Problems with the centralized design:
-
-- Once the Master has a problem, the cluster is leaderless and will collapse. To solve this, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby and switched automatically or manually, and more and more new systems support electing a new Master automatically to improve availability.
-- Another problem is that if the Scheduler runs on the Master, then although different tasks of one DAG can run on different machines, the Master can become overloaded. If the Scheduler runs on the Slave, all tasks of a DAG can only be submitted on one machine, and with many parallel tasks the pressure on that Slave may be high.
-
-###### Decentralization
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="decentralized" width="50%" />
- </p>
-
-- In a decentralized design there is usually no Master/Slave concept: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system, where any node going down affects only a small range of functionality.
-- The core of decentralized design is that there is no "manager" distinct from the other nodes in the whole distributed system, so there is no single point of failure. However, because there is no "manager" node, each node must communicate with other nodes to obtain the necessary machine information, and the unreliable communication links of distributed systems greatly increase the difficulty of implementing the functions above.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging: the managers of the cluster are elected dynamically rather than preset, and when the cluster fails, its nodes spontaneously hold "meetings" to elect a new "manager" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in Go.
-
-- Decentralization in DolphinScheduler means that the Master and Worker register themselves with ZooKeeper. The Master cluster and Worker cluster have no center, and a ZooKeeper distributed lock is used to elect one Master or Worker as the "manager" to perform the task.
-
-#####  II. Distributed lock practice
-
-DolphinScheduler uses ZooKeeper distributed locks so that only one Master executes the Scheduler at a time, and only one Worker performs task submission at a time.
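-
-A minimal sketch of this pattern, using Apache Curator's `InterProcessMutex` (Curator and the lock path below are assumptions for illustration; DolphinScheduler has its own ZooKeeper lock utility):
-
-```java
-import org.apache.curator.framework.CuratorFramework;
-import org.apache.curator.framework.CuratorFrameworkFactory;
-import org.apache.curator.framework.recipes.locks.InterProcessMutex;
-import org.apache.curator.retry.ExponentialBackoffRetry;
-
-public class SchedulerLockSketch {
-    public static void main(String[] args) throws Exception {
-        CuratorFramework client = CuratorFrameworkFactory.newClient(
-                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
-        client.start();
-
-        // Only the process holding this lock runs the Scheduler logic at any moment
-        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
-        lock.acquire();
-        try {
-            // scan the command table and submit DAGs here
-        } finally {
-            lock.release();
-            client.close();
-        }
-    }
-}
-```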
-
-1. The core process algorithm for obtaining distributed locks is as follows
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Get Distributed Lock Process" width="50%" />
- </p>
-
-2. Scheduler thread distributed lock implementation flow chart in DolphinScheduler:
-
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Get Distributed Lock Process" width="50%" />
- </p>
-
-##### III. The insufficient-thread loop-waiting problem
-
-- If a DAG has no sub-processes and the number of Commands exceeds the threshold set for the thread pool, the process simply waits or fails.
-- If a large DAG nests many sub-processes, the situation shown in the following figure results in a "dead" state:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Thread is not enough to wait for loop" width="50%" />
- </p>
-
-In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread in the thread pool. The whole DAG can therefore never finish, so no thread can be released, and the child and parent processes end up waiting for each other in a loop. At this point the scheduling cluster is no longer usable unless a new Master is started to add threads and break such a stalemate.
-
-Starting a new Master just to break the deadlock seems unsatisfactory, so we proposed the following three options to reduce this risk:
-
-1. Calculate the total number of threads of all Masters and then the number of threads each DAG needs, i.e. pre-compute before the DAG process is executed. Because the thread pools are spread across multiple Masters, the total number of threads is hard to obtain in real time.
-2. Check the single Master's thread pool: if the pool is full, let the thread fail directly.
-3. Add a "resource insufficient" Command type: if the thread pool is insufficient, suspend the main process. Once the thread pool has a free thread again, the process that was suspended for lack of resources is woken up.
-
-Note: the Master Scheduler thread fetches Commands in FIFO order.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-##### IV. Fault Tolerant Design
-
-Fault tolerance is divided into service fault tolerance and task retry. Service fault tolerance is divided into two types: Master Fault Tolerance and Worker Fault Tolerance.
-
-###### 1. Downtime fault tolerance
-
-Service fault tolerance design relies on ZooKeeper's Watcher mechanism. The implementation principle is as follows:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler Fault Tolerant Design" width="40%" />
- </p>
-
-The Master monitors the directories of the other Masters and Workers. If a remove event is detected, fault tolerance for the affected process instances or task instances is performed according to the specific business logic.
-
-
-
-- Master fault tolerance flow chart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master Fault Tolerance Flowchart" width="40%" />
- </p>
-
-After Master fault tolerance, the workflow is rescheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "running" and "submitted successfully" tasks. For "running" tasks it monitors the status of their task instances; for "submitted successfully" tasks it checks whether the task already exists in the Task Queue: if it does, it monitors the task instance status, and if it does not, it resubmits the task instance.
-
-
-
-- Worker fault tolerance flow chart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker Fault Tolerance Flowchart" width="40%" />
- </p>
-
-Once the Master Scheduler thread finds a task instance marked "needs fault tolerance", it takes over the task and resubmits it.
-
- Note: because "network jitter" may cause a node to lose its ZooKeeper heartbeat for a short time and trigger a remove event, we take the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service stops itself directly.
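-
-A minimal sketch of this "stop on session loss" behaviour, using Apache Curator's connection-state listener (Curator and the `System.exit` reaction are illustrative assumptions; the real services go through their own shutdown path):
-
-```java
-import org.apache.curator.framework.CuratorFramework;
-import org.apache.curator.framework.CuratorFrameworkFactory;
-import org.apache.curator.framework.state.ConnectionState;
-import org.apache.curator.retry.ExponentialBackoffRetry;
-
-public class SessionLossSketch {
-    public static void main(String[] args) {
-        CuratorFramework client = CuratorFrameworkFactory.newClient(
-                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
-        // Stop the whole service once the ZooKeeper session is lost, rather than
-        // trying to keep working through "network jitter" in a half-connected state.
-        client.getConnectionStateListenable().addListener((c, newState) -> {
-            if (newState == ConnectionState.LOST) {
-                System.exit(1);
-            }
-        });
-        client.start();
-    }
-}
-```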
-
-###### 2. Task failure retry
-
-Here we must first distinguish between the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is performed automatically by the scheduling system. For example, if a shell task is configured with 3 retries, the shell task will be run at most 3 more times after it fails.
-- Process failure recovery is at the process level and is done manually; recovery can only start **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is done manually; a rerun starts from the start node
-
-
-
-Back to the topic: we divide the task nodes of a workflow into two types.
-
-- One is the business node, which corresponds to an actual script or processing statement, such as a Shell node, an MR node, a Spark node, or a dependent node.
-- The other is the logical node, which does not execute an actual script or statement but handles the logic of the overall process flow, such as the sub-process node.
-
-Every **business node** can be configured with a number of failure retries. When the task node fails, it is retried automatically until it succeeds or the configured number of retries is exceeded. A **logical node** does not support failure retry itself, but the tasks inside a logical node do.
-
-If a task in the workflow fails and reaches its maximum number of retries, the workflow is marked failed and stops; the failed workflow can then be manually rerun or resumed.
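-
-As a rough illustration of the retry semantics described above (the parameter names mirror the `max_retry_times` and `retry_interval` fields of the task instance; this is not the scheduler's actual code):
-
-```java
-import java.util.concurrent.Callable;
-
-public class RetrySketch {
-    // Run the task until it succeeds or maxRetryTimes extra attempts are exhausted,
-    // sleeping retryIntervalMinutes between attempts.
-    static boolean runWithRetry(Callable<Boolean> task, int maxRetryTimes,
-                                long retryIntervalMinutes) throws Exception {
-        for (int attempt = 0; attempt <= maxRetryTimes; attempt++) {
-            if (task.call()) {
-                return true;                 // task succeeded
-            }
-            if (attempt < maxRetryTimes) {
-                Thread.sleep(retryIntervalMinutes * 60_000L);
-            }
-        }
-        return false;                        // retries exhausted -> workflow is marked failed
-    }
-}
-```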
-
-
-
-##### V. Task priority design
-
-In the early scheduling design, without priority and fair-scheduling design, a task submitted first could end up finishing at the same time as a task submitted later, and the priority of a process or task could not be set. We have redesigned this, and the current design is as follows:
-
-- Tasks are processed in the following order: the priority of **different process instances** takes precedence over the **task priority within the same process instance**, which takes precedence over the **submission order of tasks within the same process**, from high to low.
-
-  - The concrete implementation is to resolve the priority from the task instance's json and then store the **process instance priority_process instance id_task priority_task id** string in the ZooKeeper task queue; when fetching from the queue, a simple string comparison yields the task that should be executed first (see the sketch after this list).
-
-    - The priority of the process definition reflects that some processes need to be handled before others. It can be configured when the process is started or when a timed start is scheduled. There are 5 levels, in order: HIGHEST, HIGH, MEDIUM, LOW and LOWEST. As shown below
-
-      <p align="center">
-         <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process Priority Configuration" width="40%" />
-       </p>
-
-    - The priority of the task is also divided into 5 levels, in order: HIGHEST, HIGH, MEDIUM, LOW and LOWEST. As shown below
-
-      <p align="center">
-         <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="task priority configuration" width="35%" />
-       </p>
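-
-A hypothetical sketch of the string comparison mentioned above (the zero-padding and field widths are assumptions; the exact key layout in 1.2.1 may differ):
-
-```java
-public class TaskPriorityKeySketch {
-    // Build the "process instance priority_process instance id_task priority_task id" key.
-    // With fixed-width, zero-padded numeric fields, a plain string comparison orders tasks
-    // exactly as described above (0 = HIGHEST sorts first).
-    static String key(int processPriority, int processInstanceId, int taskPriority, int taskId) {
-        return String.format("%02d_%010d_%02d_%010d",
-                processPriority, processInstanceId, taskPriority, taskId);
-    }
-
-    public static void main(String[] args) {
-        String a = key(0, 100, 1, 7);   // HIGHEST process instance priority
-        String b = key(2, 100, 0, 8);   // MEDIUM process instance priority
-        System.out.println(a.compareTo(b) < 0);   // true: task a is taken from the queue first
-    }
-}
-```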
-
-##### VI. Logback and gRPC implement log access
-
-- Since the Web (UI) and Worker are not necessarily on the same machine, viewing a log is not the same as reading a local file. There are two options:
-  - Put the logs on the ES search engine
-  - Obtain remote log information through gRPC communication
-- To keep DolphinScheduler as lightweight as possible, gRPC was chosen to access remote log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access" width="50%" />
- </p>
-
-- We use a custom Logback FileAppender and Filter function to generate a log file for each task instance.
-- The main implementation of FileAppender is as follows:
-
-```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
-```
-
-A log file is generated for each task instance in the form /process definition id/process instance id/task instance id.log
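-
-For example, with the thread-name convention shown above, a hypothetical helper could derive the log path like this (the `logs/` prefix and the helper itself are assumptions for illustration):
-
-```java
-public class TaskLogPathSketch {
-    // Derive the per-task log path from a thread name of the form
-    // "TaskLogInfo-<processDefineId>_<processInstanceId>_<taskInstanceId>"
-    static String logPath(String threadName) {
-        String logId = threadName.split("-")[1];   // e.g. "1_2_3"
-        String[] parts = logId.split("_");
-        return String.format("logs/%s/%s/%s.log", parts[0], parts[1], parts[2]);
-    }
-
-    public static void main(String[] args) {
-        System.out.println(logPath("TaskLogInfo-1_2_3"));   // logs/1/2/3.log
-    }
-}
-```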
-
-- Filter matches the thread name starting with TaskLogInfo:
-- TaskLogFilter is implemented as follows:
-
-```java
- /**
- * task log filter
- */
-public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
-}
-```
-
-
-
-### Summary
-
-Starting from scheduling, this article has introduced the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued.
diff --git a/docs/en-us/1.2.1/user_doc/backend-deployment.md b/docs/en-us/1.2.1/user_doc/backend-deployment.md
deleted file mode 100644
index 7141f52..0000000
--- a/docs/en-us/1.2.1/user_doc/backend-deployment.md
+++ /dev/null
@@ -1,261 +0,0 @@
-# Backend Deployment Document
-
-There are two deployment modes for the backend: 
-
-- automatic deployment  
-- source code compile and then deployment
-
-## Preparations
-
-Download the latest version of the installation package, download address:  [download](/en-us/download/download.html),
-download apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-backend-bin.tar.gz
-
-
-
-#### Preparations 1: Installation of basic software (self-installation of required items)
-
- * [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.5+) : You can choose either PostgreSQL or MySQL.
- * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Mandatory
- * [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+) : Mandatory
- * pstree or psmisc : "pstree" is required for macOS and "psmisc" is required for Fedora/Red Hat/CentOS/Ubuntu/Debian
- * [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download) : Optional. If you need the resource upload function, you can choose either Hadoop or MinIO.
- * [Hive](https://hive.apache.org/downloads.html) (1.2.1) : Optional, required only if Hive tasks are submitted
- * [Spark](http://spark.apache.org/downloads.html) (1.x,2.x) : Optional, required only if Spark tasks are submitted
-
-```
- Note: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, PostgreSQL, but only calls their Client to run the corresponding tasks.
-```
-
-#### Preparations 2: Create deployment users
-
-- Create a deployment user on every machine that requires scheduling deployment, because the worker service executes jobs with `sudo -u {linux-user}`; the deployment user therefore needs password-free sudo privileges.
-
-```
-vi /etc/sudoers
-
-# For example, the deployment user is an dolphinscheduler account
-dolphinscheduler  ALL=(ALL)       NOPASSWD: ALL
-
-# And you need to comment out the Defaults requiretty line
-#Defaults requiretty
-```
-
-#### Preparations 3: SSH Secret-Free Configuration
-Configure password-free SSH login from the deployment machine to the other installation machines. If you install DolphinScheduler on the deployment machine itself, you also need to configure password-free SSH login to localhost.
-
-- Make sure the deployment host can reach the other machines over SSH
-
-#### Preparations 4: database initialization
-
-* Create databases and accounts
-
-    Execute the following command to create database and account
-    
-    ```
-    CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    flush privileges;
-    ```
-
-* Create tables and import basic data
-    Modify the following attributes in ./conf/application-dao.properties
-
-    ```
-        spring.datasource.url
-        spring.datasource.username
-        spring.datasource.password
-    ```
-    
-    Execute scripts for creating tables and importing basic data
-    
-    ```
-    sh ./script/create-dolphinscheduler.sh
-    ```
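-
-    Optionally, you can sanity-check the values you put into application-dao.properties with a few lines of JDBC before running the script (host, database and credentials below are placeholders, and the MySQL driver must be on the classpath):
-
-    ```java
-    import java.sql.Connection;
-    import java.sql.DriverManager;
-
-    public class DbConnectionCheck {
-        public static void main(String[] args) throws Exception {
-            // Use the same values as spring.datasource.url / username / password
-            String url = "jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8";
-            try (Connection conn = DriverManager.getConnection(url, "{user}", "{password}")) {
-                System.out.println("connected: " + !conn.isClosed());
-            }
-        }
-    }
-    ```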
-
-#### Preparations 5: Modify the deployment directory permissions and operation parameters
-
-     Description of the dolphinscheduler-backend directory
-
-```directory
-bin : Basic service startup script
-DISCLAIMER-WIP : DISCLAIMER-WIP
-conf : Project Profile
-lib : Jar packages the project depends on, including individual module jars and third-party jars
-LICENSE : LICENSE
-licenses : licenses
-NOTICE : NOTICE
-script :  Cluster Start, Stop and Service Monitor Start and Stop scripts
-sql : SQL files the project depends on
-install.sh :  One-click deployment script
-```
-
-- Modify permissions (please modify the 'deployUser' to the corresponding deployment user) so that the deployment user has operational privileges on the dolphinscheduler-backend directory
-
-    `sudo chown -R deployUser:deployUser dolphinscheduler-backend`
-
-- Modify the environment variables in `.dolphinscheduler_env.sh` under the conf/env/ directory
-
-- Modify deployment parameters (depending on your server and business situation):
-
- - Modify the parameters in `install.sh` to replace the values required by your business
-   - The MonitorServerState switch variable, added in version 1.0.3, controls whether to start the self-monitoring script (it monitors the Master and Worker status and restarts them automatically if they go offline). The default value "false" means the self-monitoring script is not started; change it to "true" if you need it.
-   - The 'hdfsStartupSate' switch variable controls whether HDFS is used.
-      The default value "false" means HDFS is not used.
-      Change the variable to 'true' if you want to use HDFS; you also need to create the HDFS root path yourself, that is, 'hdfsPath' in install.sh.
-
- - If you use HDFS-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
-
-
-## Deployment
-Either of the following two methods can be used; binary deployment is recommended, and experienced users can also deploy from source.
-
-### Binary file Deployment
-
-- Install ZooKeeper tools
-
-   `pip install kazoo`
-
-- Switch to deployment user, one-click deployment
-
-    `sh install.sh` 
-
-- Use the `jps` command to check if the services are started (`jps` comes from `Java JDK`)
-
-```aidl
-    MasterServer         ----- Master Service
-    WorkerServer         ----- Worker Service
-    LoggerServer         ----- Logger Service
-    ApiApplicationServer ----- API Service
-    AlertServer          ----- Alert Service
-```
-
-If all services are normal, the automatic deployment is successful
-
-
-After successful deployment, logs are stored in the folder below and can be viewed there.
-
-```logPath
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    |—— dolphinscheduler-worker-server.log
-    |—— dolphinscheduler-api-server.log
-    |—— dolphinscheduler-logger-server.log
-```
-
-### Compile source code to deploy
-
-After downloading the release version of the source package, uncompress it into the root directory
-
-* Build a tar package
-
-    Execute the compilation command:
-
-    ```
-     mvn -U clean package -Prelease -Dmaven.test.skip=true
-    ```
-
-    View directory
-
-    After normal compilation, `apache-dolphinscheduler-incubating-${latest.release.version}-dolphinscheduler-backend-bin.tar.gz`
-    is generated in the `./dolphinscheduler-dist/dolphinscheduler-backend/target` directory
-
-* OR build a rpm package 
-
-    The rpm package can be installed on Linux using the rpm command or yum. It helps DolphinScheduler integrate better with other management tools, such as Ambari and Cloudera Manager.
-
-    Execute the compilation command:
-
-    ```
-     mvn -U clean package -Prpmbuild -Dmaven.test.skip=true
-    ```
-
-    View directory
-
-    After normal compilation, `apache-dolphinscheduler-incubating-${latest.release.version}-1.noarch.rpm`
-    is generated in the `./dolphinscheduler-dist/target/rpm/apache-dolphinscheduler-incubating/RPMS/noarch/` directory
-
-
-* Decompress the compiled tar.gz package or install it with the rpm command (the rpm installation installs DolphinScheduler under the /opt/soft directory). The dolphinscheduler directory structure is as follows:
-
-     ```
-      ../
-         ├── bin
-         ├── conf
-         |── DISCLAIMER
-         |—— install.sh
-         |—— lib
-         |—— LICENSE
-         |—— licenses
-         |—— NOTICE
-         |—— script
-         |—— sql
-     ```
-
-
-- Install ZooKeeper tools
-
-   `pip install kazoo`
-
-- Switch to deployment user, one-click deployment
-
-    `sh install.sh`
-
-### Commonly used start and stop commands (for what each service does, refer to the System Architecture Design)
-
-* stop all services in the cluster
-  
-   `sh ./bin/stop-all.sh`
-   
-* start all services in the cluster
-  
-   `sh ./bin/start-all.sh`
-
-* start and stop one master server
-
-```master
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* start and stop one worker server
-
-```worker
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* start and stop api server
-
-```Api
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-* start and stop logger server
-
-```Logger
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-* start and stop alert server
-
-```Alert
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-## Database Upgrade
-Modify the following properties in ./conf/application-dao.properties
-
-```
-spring.datasource.url
-spring.datasource.username
-spring.datasource.password
-```
-The database can be upgraded automatically by executing the following command:
-```upgrade
-sh ./script/upgrade-dolphinscheduler.sh
-```
-
-
diff --git a/docs/en-us/1.2.1/user_doc/frontend-deployment.md b/docs/en-us/1.2.1/user_doc/frontend-deployment.md
deleted file mode 100644
index 883ce04..0000000
--- a/docs/en-us/1.2.1/user_doc/frontend-deployment.md
+++ /dev/null
@@ -1,130 +0,0 @@
-# frontend-deployment
-
-The front-end has three deployment modes: automated deployment, manual deployment and compiled source deployment.
-
-
-
-## Preparations
-
-#### Download the installation package
-
-Please download the latest version of the installation package, download address: [download](/en-us/download/download.html)
-
-After downloading apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-front-bin.tar.gz,
-decompress it with `tar -zxvf apache-dolphinscheduler-incubating-x.x.x-dolphinscheduler-front-bin.tar.gz ./` and enter the `dolphinscheduler-ui` directory
-
-
-
-
-## Deployment
-
-You can use either of the following two methods; automated deployment is recommended
-
-### Automated Deployment
-
->Front-end automated deployment relies on the Linux `yum` tool; please install and update `yum` before deployment
-
-Under this directory, execute `./install-dolphinscheduler-ui.sh`
-
-
-### Manual Deployment
-You can choose one of the following two deployment methods, or you can choose other deployment methods according to your production environment.
-
-#### nginx deployment
-Optionally install the EPEL repository: `yum install epel-release -y`
-
-Install Nginx yourself by downloading it from the official website http://nginx.org/en/download.html, or run `yum install nginx -y`
-
-
-> ####  Nginx configuration file address
-
-```
-/etc/nginx/conf.d/default.conf
-```
-
-> ####  Configuration information (self-modifying)
-
-```
-server {
-    listen       8888;# access port
-    server_name  localhost;
-    #charset koi8-r;
-    #access_log  /var/log/nginx/host.access.log  main;
-    location / {
-        root   /xx/dist; # the dist directory address decompressed by the front end above (self-modifying)
-        index  index.html index.htm;
-    }
-    location /dolphinscheduler {
-        proxy_pass http://192.168.xx.xx:12345; # interface address (self-modifying)
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header x_real_ipP $remote_addr;
-        proxy_set_header remote_addr $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_http_version 1.1;
-        proxy_connect_timeout 4s;
-        proxy_read_timeout 30s;
-        proxy_send_timeout 12s;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-    }
-    #error_page  404              /404.html;
-    # redirect server error pages to the static page /50x.html
-    #
-    error_page   500 502 503 504  /50x.html;
-    location = /50x.html {
-        root   /usr/share/nginx/html;
-    }
-}
-```
-
-> ####  Restart the Nginx service
-
-```
-systemctl restart nginx
-```
-
-#### nginx command
-
-- enable `systemctl enable nginx`
-
-- restart `systemctl restart nginx`
-
-- status `systemctl status nginx`
-
-#### jetty deployment
-Enter the source package `dolphinscheduler-ui` directory and execute
-
-```
-npm install
-```
-
-> #####  ! ! ! Note: if the project reports a "node-sass" error while pulling dependencies, install node-sass separately with the following command and then run `npm install` again.
-```
-npm install node-sass --unsafe-perm # install the node-sass dependency separately
-```
-```
-
-```
-npm run build:release
-```
-
-Create a ui directory under the backend binary package directory
-
-Copy all files from the dolphinscheduler-ui/dist directory to the ui directory of the backend binary package
-
-Visit the following URL (adjust the interface address to your own environment):
-http://localhost:12345/dolphinscheduler
-
-Default username/password: admin/dolphinscheduler123
-
-## FAQ
-#### Upload file size limit
-
-Edit the configuration file `vi /etc/nginx/nginx.conf`
-
-```
-# change upload size
-client_max_body_size 1024m;
-```
-
-
diff --git a/docs/en-us/1.2.1/user_doc/hardware-environment.md b/docs/en-us/1.2.1/user_doc/hardware-environment.md
deleted file mode 100644
index 705c7d8..0000000
--- a/docs/en-us/1.2.1/user_doc/hardware-environment.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Hardware Environment
-
-DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
-
-## 1. Linux operating system version requirements
-
-| OS       | Version         |
-| :----------------------- | :----------: |
-| Red Hat Enterprise Linux | 7.0 and above   |
-| CentOS                   | 7.0 and above   |
-| Oracle Enterprise Linux  | 7.0 and above   |
-| Ubuntu LTS               | 16.04 and above |
-
-> **Attention:**
->The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
-
-## 2. Recommended server configuration
-DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
-### Production Environment
-
-| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
-| --- | --- | --- | --- | --- |
-| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
-
-> **Attention:**
-> - The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.
-> - The hard disk size configuration is recommended by more than 50GB. The system disk and data disk are separated.
-
-
-## 3. Network requirements
-
-DolphinScheduler provides the following network port configurations for normal operation:
-
-| Server | Port | Desc |
-|  --- | --- | --- |
-| MasterServer |  5566  | Not a communication port; only requires that the local port does not conflict |
-| WorkerServer | 7788  | Not a communication port; only requires that the local port does not conflict |
-| ApiApplicationServer |  12345 | Backend communication port |
-| nginx | 8888 | The port for DolphinScheduler UI |
-
-> **Attention:**
-> - MasterServer and WorkerServer do not need to communicate with each other over the network; they only require that their local ports do not conflict.
-> - Administrators can adjust relevant ports on the network side and host-side according to the deployment plan of DolphinScheduler components in the actual environment.
-
-## 4. Browser requirements
-
-DolphinScheduler recommends Chrome, or the latest browsers that use the Chrome kernel, for accessing the front-end visual operation pages.
-
diff --git a/docs/en-us/1.2.1/user_doc/metadata-1.2.md b/docs/en-us/1.2.1/user_doc/metadata-1.2.md
deleted file mode 100644
index 19616ef..0000000
--- a/docs/en-us/1.2.1/user_doc/metadata-1.2.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# Dolphin Scheduler 1.2 MetaData
-
-<a name="V5KOl"></a>
-### Dolphin Scheduler 1.2 DB Table Overview
-| Table Name | Comment |
-| :---: | :---: |
-| t_ds_access_token | token for access ds backend |
-| t_ds_alert | alert detail |
-| t_ds_alertgroup | alert group |
-| t_ds_command | command detail |
-| t_ds_datasource | data source |
-| t_ds_error_command | error command detail |
-| t_ds_process_definition | process definition |
-| t_ds_process_instance | process instance |
-| t_ds_project | project |
-| t_ds_queue | queue |
-| t_ds_relation_datasource_user | datasource related to user |
-| t_ds_relation_process_instance | sub process |
-| t_ds_relation_project_user | project related to user |
-| t_ds_relation_resources_user | resource related to user |
-| t_ds_relation_udfs_user | UDF related to user |
-| t_ds_relation_user_alertgroup | alert group related to user |
-| t_ds_resources | resource center file |
-| t_ds_schedules | process definition schedule |
-| t_ds_session | user login session |
-| t_ds_task_instance | task instance |
-| t_ds_tenant | tenant |
-| t_ds_udfs | UDF resource |
-| t_ds_user | user detail |
-| t_ds_version | ds version |
-| t_ds_worker_group | worker group |
-
-
----
-
-<a name="XCLy1"></a>
-### E-R Diagram
-<a name="5hWWZ"></a>
-#### User Queue DataSource
-![image.png](/img/metadata-erd/user-queue-datasource.png)
-
-- Multiple users can belong to one tenant
-- The queue field in t_ds_user table stores the queue_name information in t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.
-- The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
-<a name="7euSN"></a>
-#### Project Resource Alert
-![image.png](/img/metadata-erd/project-resource-alert.png)
-
-- User can have multiple projects, User project authorization completes the relationship binding using project_id and user_id in t_ds_relation_project_user table
-- The user_id in the t_ds_project table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents users who have permission to the project
-- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents the user who has permissions to the resource
-- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents a user who has permission to the UDF
-<a name="JEw4v"></a>
-#### Command Process Task
-![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
-
-- A project has multiple process definitions, a process definition can generate multiple process instances, and a process instance can generate multiple task instances
-- The t_ds_schedules table stores the timing schedule information for process definitions
-- The data stored in the t_ds_relation_process_instance table is used to deal with that the process definition contains sub-processes, parent_process_instance_id field represents the id of the main process instance containing the child process, process_instance_id field represents the id of the sub-process instance, parent_task_instance_id field represents the task instance id of the sub-process node
-- The process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively.
-
----
-
-<a name="yd79T"></a>
-### Core Table Schema
-<a name="6bVhH"></a>
-#### t_ds_process_definition
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | process definition name |
-| version | int | process definition version |
-| release_state | tinyint | process definition release state:0:offline,1:online |
-| project_id | int | project id |
-| user_id | int | process definition creator id |
-| process_definition_json | longtext | process definition json content |
-| description | text | process definition description |
-| global_params | text | global parameters |
-| flag | tinyint | process is available: 0 not available, 1 available |
-| locations | text | Node location information |
-| connects | text | Node connection information |
-| receivers | text | receivers |
-| receivers_cc | text | carbon copy list |
-| create_time | datetime | create time |
-| timeout | int | timeout |
-| tenant_id | int | tenant id |
-| update_time | datetime | update time |
-
-<a name="t5uxM"></a>
-#### t_ds_process_instance
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | process instance name |
-| process_definition_id | int | process definition id |
-| state | tinyint | process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
-| recovery | tinyint | process instance failover flag:0:normal,1:failover instance |
-| start_time | datetime | process instance start time |
-| end_time | datetime | process instance end time |
-| run_times | int | process instance run times |
-| host | varchar | process instance host |
-| command_type | tinyint | command type:0 start ,1 Start from the current node,2 Resume a fault-tolerant process,3 Resume Pause Process, 4 Execute from the failed node,5 Complement, 6 dispatch, 7 re-run, 8 pause, 9 stop ,10 Resume waiting thread |
-| command_param | text | json command parameters |
-| task_depend_type | tinyint | task depend type. 0: only current node,1:before the node,2:later nodes |
-| max_try_times | tinyint | max try times |
-| failure_strategy | tinyint | failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed |
-| warning_type | tinyint | warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success |
-| warning_group_id | int | warning group id |
-| schedule_time | datetime | schedule time |
-| command_start_time | datetime | command start time |
-| global_params | text | global parameters |
-| process_instance_json | longtext | process instance json (a copy of the process definition json) |
-| flag | tinyint | process instance is available: 0 not available, 1 available |
-| update_time | timestamp | update time |
-| is_sub_process | int | whether the process is sub process:  1 sub-process,0 not sub-process |
-| executor_id | int | executor id |
-| locations | text | Node location information |
-| connects | text | Node connection information |
-| history_cmd | text | history commands of process instance operation |
-| dependence_schedule_times | text | depend schedule fire time |
-| process_instance_priority | int | process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-| timeout | int | time out |
-| tenant_id | int | tenant id |
-
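-A hypothetical helper for readers who query these tables directly: the tinyint state codes above (used by both process instances and task instances) can be mirrored in a small enum. This is illustrative only and is not part of DolphinScheduler:
-
-```java
-// Ordinal positions follow the numeric codes documented in the tables above.
-public enum ExecutionStateSketch {
-    SUBMITTED_SUCCESS,      // 0
-    RUNNING,                // 1
-    READY_PAUSE,            // 2
-    PAUSE,                  // 3
-    READY_STOP,             // 4
-    STOP,                   // 5
-    FAILURE,                // 6
-    SUCCESS,                // 7
-    NEED_FAULT_TOLERANCE,   // 8
-    KILL,                   // 9
-    WAITING_THREAD,         // 10
-    WAITING_DEPEND;         // 11
-
-    public static ExecutionStateSketch of(int code) {
-        return values()[code];
-    }
-}
-```
-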
-<a name="tHZsY"></a>
-#### t_ds_task_instance
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| name | varchar | task name |
-| task_type | varchar | task type |
-| process_definition_id | int | process definition id |
-| process_instance_id | int | process instance id |
-| task_json | longtext | task content json |
-| state | tinyint | Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
-| submit_time | datetime | task submit time |
-| start_time | datetime | task start time |
-| end_time | datetime | task end time |
-| host | varchar | host of task running on |
-| execute_path | varchar | task execute path in the host |
-| log_path | varchar | task log path |
-| alert_flag | tinyint | whether alert |
-| retry_times | int | task retry times |
-| pid | int | pid of task |
-| app_link | varchar | yarn app id |
-| flag | tinyint | taskinstance is available: 0 not available, 1 available |
-| retry_interval | int | retry interval when task failed  |
-| max_retry_times | int | max retry times |
-| task_instance_priority | int | task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-
-<a name="gLGtm"></a>
-#### t_ds_command
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | primary key |
-| command_type | tinyint | Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread |
-| process_definition_id | int | process definition id |
-| command_param | text | json command parameters |
-| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward, 2 backward |
-| failure_strategy | tinyint | Failed policy: 0 end, 1 continue |
-| warning_type | tinyint | Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent |
-| warning_group_id | int | warning group |
-| schedule_time | datetime | schedule time |
-| start_time | datetime | start time |
-| executor_id | int | executor id |
-| dependence | varchar | dependence |
-| update_time | datetime | update time |
-| process_instance_priority | int | process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-
-
-
diff --git a/docs/en-us/1.2.1/user_doc/plugin-development.md b/docs/en-us/1.2.1/user_doc/plugin-development.md
deleted file mode 100644
index 46050a9..0000000
--- a/docs/en-us/1.2.1/user_doc/plugin-development.md
+++ /dev/null
@@ -1,54 +0,0 @@
-## Task Plugin Development
-
-Note: currently, task plugin development does not support hot deployment.
-
-### Shell-based tasks
-
-#### YARN-based calculations (see MapReduceTask)
-
-- Create a custom task in the **TaskManager** class under **org.apache.dolphinscheduler.server.worker.task** (the corresponding task type also needs to be registered in TaskType)
-- Inherit from **AbstractYarnTask** under **org.apache.dolphinscheduler.server.worker.task**
-- Call the **AbstractYarnTask** constructor from your own constructor
-- Define a custom task parameter entity that inherits **AbstractParameters**
-- Override the **init** method of **AbstractTask** to parse the **custom task parameters**
-- Override **buildCommand** to build the command
-
-
-
-#### Non-YARN-based calculations (see ShellTask)
-- Create a custom task in the **TaskManager** class under **org.apache.dolphinscheduler.server.worker.task**
-
-- Inherit from **AbstractTask** under **org.apache.dolphinscheduler.server.worker.task**
-
-- Instantiate a **ShellCommandExecutor** in the constructor
-
-  ```
-  public ShellTask(TaskProps props, Logger logger) {
-    super(props, logger);
-  
-    this.taskDir = props.getTaskDir();
-  
-    this.processTask = new ShellCommandExecutor(this::logHandle,
-        props.getTaskDir(), props.getTaskAppId(),
-        props.getTenantCode(), props.getEnvFile(), props.getTaskStartTime(),
-        props.getTaskTimeout(), logger);
-    this.processDao = DaoFactory.getDaoInstance(ProcessDao.class);
-  }
-  ```
-
-  Pass in the task's **TaskProps** and a custom **Logger**; TaskProps encapsulates the task information, and the Logger carries the task-specific log information
-
-- Define a custom task parameter entity that inherits **AbstractParameters**
-
-- Override the **init** method of **AbstractTask** to parse the **custom task parameter entity**
-
-- Override the **handle** method and call the **run** method of **ShellCommandExecutor**, passing the **command** as the first parameter and the ProcessDao as the second, and set the corresponding **exitStatusCode**
-
-### Non-SHELL-based tasks (see SqlTask)
-
-- Create a custom task in the **TaskManager** class under **org.apache.dolphinscheduler.server.worker.task**
-- Inherit from **AbstractTask** under **org.apache.dolphinscheduler.server.worker.task**
-- Define a custom task parameter entity that inherits **AbstractParameters**
-- Parse the custom task parameter entity in the constructor or in the overridden **init** method of **AbstractTask**
-- Override the **handle** method to implement the business logic and set the corresponding **exitStatusCode** (a rough sketch follows this list)
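-
-Putting the steps above together, a bare-bones non-SHELL task might look roughly like this. This is a sketch only: the class names `DemoTask`/`DemoParameters` are hypothetical, and the exact signatures of `AbstractTask`, `TaskProps` and the JSON utility may differ in the 1.2.1 code base:
-
-```java
-public class DemoTask extends AbstractTask {
-
-    // custom parameter entity extending AbstractParameters (hypothetical)
-    private DemoParameters demoParameters;
-
-    public DemoTask(TaskProps props, Logger logger) {
-        super(props, logger);
-    }
-
-    @Override
-    public void init() {
-        // parse the custom task parameter entity from the task parameter json
-        this.demoParameters = JSONUtils.parseObject(taskProps.getTaskParams(), DemoParameters.class);
-    }
-
-    @Override
-    public void handle() throws Exception {
-        // business logic goes here; report the result through the exit status code
-        exitStatusCode = 0;
-    }
-
-    @Override
-    public AbstractParameters getParameters() {
-        return demoParameters;
-    }
-}
-```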
-
diff --git a/docs/en-us/1.2.1/user_doc/quick-start.md b/docs/en-us/1.2.1/user_doc/quick-start.md
deleted file mode 100644
index 7e4ac7d..0000000
--- a/docs/en-us/1.2.1/user_doc/quick-start.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Quick Start
-
-* Administrator user login
-
-  > Address: 192.168.xx.xx:8888  Username and password: admin/dolphinscheduler123
-
-<p align="center">
-   <img src="/img/login_en.png" width="60%" />
- </p>
-
-* Create queue
-
-<p align="center">
-   <img src="/img/create-queue-en.png" width="60%" />
- </p>
-
-  * Create tenant
-      <p align="center">
-    <img src="/img/create-tenant-en.png" width="60%" />
-  </p>
-
-  * Create an ordinary user
-<p align="center">
-      <img src="/img/create-user-en.png" width="60%" />
- </p>
-
-  * Create an alarm group
-
- <p align="center">
-    <img src="/img/alarm-group-en.png" width="60%" />
-  </p>
-
-  
-  * Create a worker group
-  
-   <p align="center">
-      <img src="/img/worker-group-en.png" width="60%" />
-    </p>
-    
- * Create a token
-  
-   <p align="center">
-      <img src="/img/token-en.png" width="60%" />
-    </p>
-     
-  
-  * Log in as an ordinary user
-  > Click the user name in the upper right corner to log out, then log in again as the ordinary user.
-
-  * Project Management - > Create Project - > Click on Project Name
-<p align="center">
-      <img src="/img/create_project_en.png" width="60%" />
- </p>
-
-  * Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
-
-<p align="center">
-   <img src="/img/process_definition_en.png" width="60%" />
- </p>
-
-  * Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
-
- <p align="center">
-   <img src="/img/log_en.png" width="60%" />
-</p>
diff --git a/docs/en-us/1.2.1/user_doc/system-manual.md b/docs/en-us/1.2.1/user_doc/system-manual.md
deleted file mode 100644
index 4d975f5..0000000
--- a/docs/en-us/1.2.1/user_doc/system-manual.md
+++ /dev/null
@@ -1,737 +0,0 @@
-# System Use Manual
-
-## Operational Guidelines
-
-### Home page
-The homepage contains task status statistics, process status statistics, and workflow definition statistics for all user projects.
-
-<p align="center">
-      <img src="/img/home_en.png" width="80%" />
- </p>
-
-### Create a project
-
-  - Click "Project - > Create Project", enter project name,  description, and click "Submit" to create a new project.
-  - Click on the project name to enter the project home page.
-<p align="center">
-      <img src="/img/project_home_en.png" width="80%" />
- </p>
-
-> The project home page contains task status statistics, process status statistics, and workflow definition statistics for the project.
-
- - Task State Statistics: It refers to the statistics of the number of tasks to be run, failed, running, completed and succeeded in a given time frame.
- - Process State Statistics: It refers to the statistics of the number of waiting, failing, running, completing and succeeding process instances in a specified time range.
- - Process Definition Statistics: The process definition created by the user and the process definition granted by the administrator to the user are counted.
-
-
-### Creating Process definitions
-  - Go to the project home page, click "Process definitions" and enter the list page of process definition.
-  - Click "Create process" to create a new process definition.
-  - Drag the "SHELL" node to the canvas and add a shell task.
-  - Fill in the Node Name, Description, and Script fields.
-  - Selecting "task priority" will give priority to high-level tasks in the execution queue. Tasks with the same priority will be executed in the first-in-first-out order.
-  - Timeout alarm: fill in the "timeout" value; when the task execution time exceeds it, the task can raise an alarm and fail due to timeout.
-  - Fill in "Custom Parameters" and refer to [Custom Parameters](#CustomParameters)  <!-- markdown-link-check-disable-line -->
-    <p align="center">
-    <img src="/img/process_definitions_en.png" width="80%" />
-      </p>
-  - Set the execution order between nodes by clicking "line connection". As shown, task 2 and task 3 run in parallel: after task 1 completes, task 2 and task 3 are executed simultaneously.
-
-<p align="center">
-   <img src="/img/task_en.png" width="80%" />
- </p>
-
-  - Delete dependencies: Click on the arrow icon to "drag nodes and select items", select the connection line, click on the delete icon to delete dependencies between nodes.
-<p align="center">
-      <img src="/img/delete_dependencies_en.png" width="80%" />
- </p>
-
-  - Click "Save", enter the name of the process definition, the description of the process definition, and set the global parameters.
-
-<p align="center">
-   <img src="/img/global_parameters_en.png" width="80%" />
- </p>
-
-  - For other types of nodes, refer to [task node types and parameter settings](#TaskNodeType) <!-- markdown-link-check-disable-line -->
-
-### Execution process definition
-  - **A process definition in the offline state can be edited but not run**, so bringing the workflow online is the first step.
-  > Click the process definition to return to the process definition list, then click the "online" icon to bring the process definition online.
-
-  > Before taking a workflow offline, the timed tasks under timing management must be taken offline first; only then can the workflow definition be taken offline successfully.
-
-  - Click "Run" to execute the process. Description of operation parameters:
-    * Failure strategy: **the strategy applied to other parallel task nodes when one task node fails**. "Continue" means the other task nodes keep running normally; "End" means all running tasks are terminated and the whole process ends.
-    * Notification strategy: when the process ends, a process execution notification email is sent according to the process status.
-    * Process priority: the priority of a process run is divided into five levels: highest, high, medium, low and lowest. High-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in-first-out order.
-    * Worker group: the process can only be executed on the specified group of machines. "Default" means it can be executed on any worker.
-    * Notification group: When the process ends or fault tolerance occurs, process information is sent to all members of the notification group by mail.
-    * Recipient: Enter the mailbox and press Enter key to save. When the process ends and fault tolerance occurs, an alert message is sent to the recipient list.
-    * Cc: enter the mailbox and press Enter to save. When the process ends or fault tolerance occurs, alarm messages are also copied to the Cc list.
-    
-<p align="center">
-   <img src="/img/start-process-en.png" width="80%" />
- </p>
-
-  * Complement: runs the workflow definition for a specified date range. Select the complement time range (currently only continuous days are supported), for example the data from May 1 to May 10, as shown in the figure:
-  
-<p align="center">
-   <img src="/img/complement-en.png" width="80%" />
- </p>
-
-> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement will be executed sequentially from May 1 to May 10. In parallel mode, the tasks from May 1 to May 10 will be executed simultaneously.
-
-### Timing Process Definition
-  - Create Timing: "Process Definition - > Timing"
-  - Choose the start and end time. Within this range, timed workflow instances are generated regularly; outside this range, no more timed workflow instances are produced.
-  
-<p align="center">
-   <img src="/img/timing-en.png" width="80%" />
- </p>
-
-  - Add a timer to be executed once a day at 5:00 a.m. as shown below:
-<p align="center">
-      <img src="/img/timer-en.png" width="80%" />
- </p>
-
-  - Bring the timer online: **a newly created timer is offline, and you need to click "Timing Management -> online" for it to take effect.**
-
-### View process instances
-  > Click on "Process Instances" to view the list of process instances.
-
-  > Click on the process name to see the status of task execution.
-
-  <p align="center">
-   <img src="/img/process-instances-en.png" width="80%" />
- </p>
-
-  > Click on the task node, click "View Log" to view the task execution log.
-
-  <p align="center">
-   <img src="/img/view-log-en.png" width="80%" />
- </p>
-
- > Click on the task instance node, click **View History** to view the list of task instances that the process instance runs.
-
- <p align="center">
-    <img src="/img/instance-runs-en.png" width="80%" />
-  </p>
-
-
-  > Operations on workflow instances:
-
-<p align="center">
-   <img src="/img/workflow-instances-en.png" width="80%" />
-</p>
-
-  * Edit: you can edit a terminated process. When saving after editing, you can choose whether or not to update the process definition.
-  * Rerun: A process that has been terminated can be re-executed.
-  * Recovery failure: For a failed process, a recovery failure operation can be performed, starting at the failed node.
-  * Stop: stop the running process; the backend first sends `kill` to the worker process and then performs a `kill -9`.
-  * Pause: the running process can be **paused**; the system state becomes **waiting to be executed**, waiting for the currently executing tasks to finish and pausing the next task that would be executed.
-  * Restore pause: **The suspended process** can be restored and run directly from the suspended node
-  * Delete: Delete process instances and task instances under process instances
-  * Gantt diagram: The vertical axis of Gantt diagram is the topological ordering of task instances under a process instance, and the horizontal axis is the running time of task instances, as shown in the figure:
-<p align="center">
-      <img src="/img/gantt-en.png" width="80%" />
-</p>
-
-### View task instances
-  > Click on "Task Instance" to enter the Task List page and query the performance of the task.
-  >
-  >
-
-<p align="center">
-   <img src="/img/task-instances-en.png" width="80%" />
-</p>
-
-  > Click "View Log" in the action column to view the log of task execution.
-
-<p align="center">
-   <img src="/img/task-execution-en.png" width="80%" />
-</p>
-
-### Create data source
-  > Data Source Center supports MySQL, POSTGRESQL, HIVE and Spark data sources.
-
-#### Create and edit MySQL data source
-
-  - Click on "Datasource - > Create Datasources" to create different types of datasources according to requirements.
-- Datasource: Select MYSQL
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to MySQL
-- Port: Enter the port to connect MySQL
-- User name: Set the username to connect to MySQL
-- Password: Set the password to connect to MySQL
-- Database name: Enter the name of the database connecting MySQL
-- Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON (see the example below)
-
-<p align="center">
-   <img src="/img/mysql-en.png" width="80%" />
- </p>
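-
-  The "Jdbc connection parameters" field expects a JSON object. For example (the keys shown are common MySQL Connector/J options and are only illustrative; adjust them to your environment):
-
-```
-{"useUnicode":"true","characterEncoding":"UTF-8","useSSL":"false"}
-```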
-
-  > Click "Test Connect" to test whether the data source can be successfully connected.
-  >
-  >
-
-#### Create and edit POSTGRESQL data source
-
-- Datasource: Select POSTGRESQL
-- Datasource Name: Name of Input Data Source
-- Description: Description of input data sources
-- IP: Enter IP to connect to POSTGRESQL
-- Port: Input port to connect POSTGRESQL
-- Username: Set the username to connect to POSTGRESQL
-- Password: Set the password to connect to POSTGRESQL
-- Database name: Enter the name of the database connecting to POSTGRESQL
-- Jdbc connection parameters: parameter settings for POSTGRESQL connections, filled in as JSON
-
-<p align="center">
-   <img src="/img/create-datasource-en.png" width="80%" />
- </p>
-
-#### Create and edit HIVE data source
-
-1.Connect with HiveServer 2
-
- <p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-
-  - Datasource: Select HIVE
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter IP to connect to HIVE
-- Port: Input port to connect to HIVE
-- Username: Set the username to connect to HIVE
-- Password: Set the password to connect to HIVE
-- Database Name: Enter the name of the database connecting to HIVE
-- Jdbc connection parameters: parameter settings for HIVE connections, filled in as JSON
-
-2.Connect using Hive Server 2 HA Zookeeper mode
-
- <p align="center">
-    <img src="/img/zookeeper-en.png" width="80%" />
-  </p>
-
-
-Note: If **kerberos** is turned on, you need to fill in **Principal**
-<p align="center">
-    <img src="/img/principal-en.png" width="80%" />
-  </p>
-
-
-
-
-#### Create and Edit Spark Datasource
-
-<p align="center">
-   <img src="/img/edit-datasource-en.png" width="80%" />
- </p>
-
-- Datasource: Select Spark
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to Spark
-- Port: Input port to connect Spark
-- Username: Set the username to connect to Spark
-- Password: Set the password to connect to Spark
-- Database name: Enter the name of the database connecting to Spark
-- Jdbc Connection Parameters: Parameter settings for Spark Connections, filled in as JSON
-
-
-
-Note: If **Kerberos** is turned on, you need to fill in **Principal**
-
-<p align="center">
-    <img src="/img/kerberos-en.png" width="80%" />
-  </p>
-
-### Upload Resources
-  - Upload resource files and UDF functions. All uploaded files and resources are stored on HDFS, so the following configuration items are required:
-
-```
-conf/common/common.properties  
-    # Users who have permission to create directories under the HDFS root path
-    hdfs.root.user=hdfs
-    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/escheduler" is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
-    # whether kerberos starts
-    hadoop.security.authentication.startup.state=false
-    # java.security.krb5.conf path
-    java.security.krb5.conf.path=/opt/krb5.conf
-    # loginUserFromKeytab user
-    login.user.keytab.username=hdfs-mycluster@ESZ.COM
-    # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-    
-conf/common/hadoop.properties      
-    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
-    # to the conf directory,support s3,for example : s3a://dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020    
-    #resourcemanager ha note this need ips , this empty if single
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
-    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
-    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-
-```
-- Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured, depending on whether ResourceManager HA is enabled; leave the other one empty.
-- You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project and restart the api-server service.
-
-#### File Manage
-
-  > It is the management of various resource files, including creating basic txt/log/sh/conf files, uploading jar packages and other types of files, editing, downloading, deleting and other operations.
-  >
-  >
-  > <p align="center">
-  >  <img src="/img/file-manage-en.png" width="80%" />
-  > </p>
-
-  * Create file
- > File formats support the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql
-
-<p align="center">
-   <img src="/img/create-file.png" width="80%" />
- </p>
-
-  * Upload Files
-
-> Upload files: Click the "Upload" button or drag the file to the upload area; the file name field is automatically completed with the name of the uploaded file.
-
-<p align="center">
-   <img src="/img/file-upload-en.png" width="80%" />
- </p>
-
-
-  * File View
-
-> For viewable file types, click on the file name to view file details
-
-<p align="center">
-   <img src="/img/file-view-en.png" width="80%" />
- </p>
-
-  * Download files
-
-> You can download a file by clicking the download button in the top right corner of the file details page, or by clicking the download button in the file list.
-
-  * File rename
-
-<p align="center">
-   <img src="/img/rename-en.png" width="80%" />
- </p>
-
-#### Delete
->  File list -> Click the "Delete" button to delete the specified file.
-
-#### Resource management
-  > Resource management is similar to file management. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts and configuration files.
-
-  * Upload UDF resources
-  > The same as uploading files.
-
-#### Function management
-
-  * Create UDF Functions
-  > Click "Create UDF Function", enter the parameters of the UDF function, select the UDF resource, and click "Submit" to create the UDF function.
-  >
-  > Currently only temporary UDF functions for HIVE are supported.
-  >
-  > - UDF function name: the name of the UDF function
-  > - Package Name: the full class path of the UDF function
-  > - Parameter: input parameters used to annotate the function
-  > - Database Name: reserved field for creating permanent UDF functions
-  > - UDF Resources: the resource file corresponding to the created UDF function
-
-<p align="center">
-   <img src="/img/udf-function.png" width="80%" />
- </p>
-
-## Security
-
-  - The Security module provides queue management, tenant management, user management, alert group management, worker group management, token management and other functions. It can also authorize resources, data sources, projects, etc.
-- Log in as administrator; the default username/password is admin/dolphinscheduler123
-
-
-
-### Create queues
-
-
-
-  - Queues are used when executing Spark, MapReduce and other programs that require the "queue" parameter.
-- "Security" -> "Queue Manage" -> "Create Queue" 
-     <p align="center">
-    <img src="/img/create-queue-en.png" width="80%" />
-  </p>
-
-
-### Create Tenants
-  - The tenant corresponds to a Linux account, which is used by the worker server to submit jobs. If the Linux user does not exist, the worker will create the account when executing the task.
-  - Tenant Code: **the tenant code is the Linux user name; it must be unique and cannot be duplicated.**
-
- <p align="center">
-    <img src="/img/create-tenant-en.png" width="80%" />
-  </p>
-
-### Create Ordinary Users
-  -  User types are **ordinary users** and **administrator users**.
-    * Administrators have **authorization and user management** privileges, but no privileges to **create projects or define processes**.
-    * Ordinary users can **create projects and create, edit, and execute process definitions**.
-    * Note: **If the user switches the tenant, all resources under the old tenant will be copied to the new tenant.**
-<p align="center">
-      <img src="/img/create-user-en.png" width="80%" />
- </p>
-
-### Create alarm group
-  * The alarm group is a parameter set at start-up. After the process finishes, the status of the process and other information is sent to the alarm group by email.
-  * Create and edit alarm groups:
-    <p align="center">
-    <img src="/img/alarm-group-en.png" width="80%" />
-    </p>
-
-### Create Worker Group
-  - The worker group provides a mechanism for tasks to run on specified workers. Administrators create worker groups, which can be specified in task nodes and run parameters. If the specified group is deleted or no group is specified, the task runs on any worker.
-- A worker group can contain multiple IP addresses (**host aliases cannot be used**), separated by **English commas**; for example, see below:
-
-  <p align="center">
-    <img src="/img/worker-group-en.png" width="80%" />
-  </p>
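-
-  For example, a worker group containing two workers would be entered as (placeholder addresses):
-
-```
-192.168.xx.xx,192.168.xx.xx
-```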
-
-### Token manage
-  - Because the back-end interfaces require login verification, token management provides a way to operate the system by calling the interfaces directly.
-    <p align="center">
-      <img src="/img/token-en.png" width="80%" />
-    </p>
-- Call examples:
-
-```java
-    /**
-     * test token
-     */
-    public  void doPOSTParam()throws Exception{
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/dolphinscheduler/projects/create");
-        httpPost.setHeader("token", "123");
-        // set parameters
-        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
-        parameters.add(new BasicNameValuePair("projectName", "qzw"));
-        parameters.add(new BasicNameValuePair("desc", "qzw"));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-```
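-
-The same call can be made from the command line. A minimal sketch using `curl` (the host, port, endpoint and parameters mirror the Java snippet above; replace `123` with a token generated in "Token manage"):
-
-```shell
-# POST form-encoded parameters with the token passed in the request header
-curl -X POST "http://127.0.0.1:12345/dolphinscheduler/projects/create" \
-     -H "token: 123" \
-     -d "projectName=qzw" \
-     -d "desc=qzw"
-```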
-
-### Grant authority
-  - Granting permissions includes project permissions, resource permissions, datasource permissions and UDF function permissions.
-> Administrators can grant ordinary users permissions on projects, resources, data sources and UDF functions that those users did not create. Since projects, resources, data sources and UDF functions are all authorized in the same way, project authorization is described here as an example.
-
-> Note: Users already have all permissions on the projects they created themselves, so those projects do not appear in the project list or the selected project list.
-
-  - 1.Click on the authorization button of the designated person as follows:
-    <p align="center">
-      <img src="/img/operation-en.png" width="80%" />
- </p>
-
-- 2.Select the project button to authorize the project
-
-<p align="center">
-   <img src="/img/auth-project-en.png" width="80%" />
- </p>
-
-### Monitor center
-  - Service management is mainly to monitor and display the health status and basic information of each service in the system.
-
-#### Master monitor
-  - Mainly related information about master.
-<p align="center">
-      <img src="/img/master-monitor-en.png" width="80%" />
- </p>
-
-#### Worker monitor
-  - Mainly related information of worker.
-
-<p align="center">
-   <img src="/img/worker-monitor-en.png" width="80%" />
- </p>
-
-#### Zookeeper monitor
-  - Mainly the configuration information of each worker and master in ZooKeeper.
-
-<p align="center">
-   <img src="/img/zookeeper-monitor-en.png" width="80%" />
- </p>
-
-#### DB monitor
-  - Mainly the health status of DB
-
-<p align="center">
-   <img src="/img/db-monitor-en.png" width="80%" />
- </p>
- 
-#### Statistics Manage
- <p align="center">
-   <img src="/img/statistics-en.png" width="80%" />
- </p>
-  
-  -  Commands to be executed: statistics on t_ds_command table
-  -  Number of commands that failed to execute: statistics on the t_ds_error_command table
-  -  Number of tasks to run: statistics of task_queue data in Zookeeper
-  -  Number of tasks to be killed: statistics of task_kill in Zookeeper
-
-## <span id=TaskNodeType>Task Node Type and Parameter Setting</span>
-
-### Shell
-
-- When the worker executes a shell node, it generates a temporary shell script, which is executed by a Linux user with the same name as the tenant.
-
-Drag the <img src="/img/tasks/icons/shell.png" width="15"/> task node in the toolbar onto the palette and double-click the task node as follows:
-
-![demo-shell-simple](/img/tasks/demo/shell.jpg)
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node can be scheduled properly, and if it does not need to be executed, you can turn on the forbidden execution switch.
-- Description : Describes the function of the node
-- Number of failed retries: the number of times a failed task will be resubmitted; supports drop-down selection and manual input
-- Failure retry interval: the interval between resubmissions of a failed task; supports drop-down selection and manual input
-- Script: the SHELL program developed by the user
-- Resources: the list of resource files that need to be invoked in the script
-- Custom parameters: user-defined local parameters of the SHELL task that replace ${variable} placeholders in the script (see the sketch below)
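-
-A minimal sketch of such a script, assuming a custom parameter named `bizdate` has been added in the "Custom parameters" section (the name is hypothetical); the scheduler replaces `${bizdate}` before the script runs:
-
-```shell
-#!/bin/bash
-# ${bizdate} is substituted with the value of the custom parameter "bizdate"
-echo "processing data for ${bizdate}"
-```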
-
-### SUB_PROCESS
-  - The sub-process node executes an external workflow definition as a task node.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/sub-process-en.png" width="80%" />
- </p>
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node is scheduled properly
-- Description: Describes the function of the node
-- Sub-node: select the process definition of the sub-process; you can jump to the selected sub-process definition by clicking "Enter the sub-node" in the upper right corner.
-
-### DEPENDENT
-
-  - Dependent nodes are **dependent checking nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node checks whether process B has a successful execution instance yesterday.
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_DEPENDENT.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/current-node-en.png" width="80%" />
- </p>
-
-  > Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process was successful or whether the C process was successfully executed.
-
-  <p align="center">
-   <img src="/img/weekly-A-en.png" width="80%" />
- </p>
-
-  > For example, process A is a weekly task and process B and C are daily tasks. Task A requires that task B and C be successfully executed every day of the last week, as shown in the figure:
-
- <p align="center">
-   <img src="/img/weekly-A1-en.png" width="80%" />
- </p>
-
-  > If weekly A also needs to be implemented successfully on Tuesday:
-
- <p align="center">
-   <img src="/img/weekly-A2-en.png" width="80%" />
- </p>
-
-###  PROCEDURE
-  - The procedure is executed according to the selected data source.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PROCEDURE.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/node-setting-en.png" width="80%" />
- </p>
-
-- Datasource: The data source type of stored procedure supports MySQL and POSTGRESQL, and chooses the corresponding data source.
-- Method: The method name of the stored procedure
-- Custom parameters: Custom parameter types of stored procedures support IN and OUT, and data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN.
-
-### SQL
-  - Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png) task node in the toolbar onto the palette.
-  - Execute non-query SQL functionality
-    <p align="center">
-      <img src="/img/dependent-nodes-en.png" width="80%" />
- </p>
-
-  - When executing a query SQL, you can choose to send the result by mail to the designated recipients in the form of a table or attachments.
-
-<p align="center">
-   <img src="/img/double-click-en.png" width="80%" />
- </p>
-
-- Datasource: Select the corresponding datasource
-- sql type: supports query and non-query. A query is a select-type statement that returns a result set; the mail notification can be sent as a table, an attachment, or a table plus attachment. A non-query statement returns no result set and is used for update, delete and insert operations.
-- sql parameter: input parameter format is key1 = value1; key2 = value2...
-- sql statement: SQL statement
-- UDF function: For HIVE type data sources, you can refer to UDF functions created in the resource center, other types of data sources do not support UDF functions for the time being.
-- Custom parameters: for the stored procedure task type, custom parameters set values for the method in order; for the SQL task type, custom parameters replace the ${variable} placeholders in the SQL statement (see the example below). The custom parameter types and data types are the same as for the stored procedure task type.
-- Pre Statement: Pre-sql is executed before the sql statement
-- Post Statement: Post-sql is executed after the sql statement
-
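-For illustration only (the table name and parameter name below are hypothetical), a query-type SQL task might be filled in as follows:
-
-```
-sql statement    : SELECT name, total FROM daily_report WHERE dt='${bizdate}'
-custom parameter : bizdate = $[yyyy-MM-dd]
-```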
-
-
-### SPARK 
-
-  - Through the SPARK node, Spark programs can be executed directly. For the Spark node, the worker uses `spark-submit` to submit tasks.
-
-> Drag the   ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SPARK.png)  task node in the toolbar onto the palette and double-click the task node as follows:
->
-> 
-
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
-
-- Program Type: supports JAVA, Scala and Python
-- Class of the main function: the full path of the Main Class, the entry point of the Spark program
-- Main jar package: the jar package of the Spark program
-- Deployment: supports three modes: yarn-cluster, yarn-client and local
-- Driver cores and memory: the number of driver cores and the amount of driver memory can be set
-- Executor settings: the number of executors, executor memory and executor cores can be set
-- Command line parameters: set the input parameters of the Spark program; replacement of custom parameter variables is supported
-- Other parameters: supports the --jars, --files, --archives and --conf formats (see the example below)
-- Resource: if a resource file is referenced in the other parameters, you need to select the specified resource
-- Custom parameters: user-defined local parameters of the task that replace ${variable} placeholders in the script
-
-Note: JAVA and Scala are only used for identification; there is no difference between them. If the Spark program is developed in Python, there is no main-function class, and everything else is the same.
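-
-For example (the values and paths are placeholders only), the "Other parameters" field accepts the usual spark-submit options:
-
-```
---conf spark.yarn.maxAppAttempts=1 --files /path/to/app.conf --jars /path/to/dep.jar
-```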
-
-### MapReduce(MR)
-  - Using MR nodes, MR programs can be executed directly. For MR nodes, the worker submits tasks using `hadoop jar`.
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_MR.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
- 1. JAVA program
-
- <p align="center">
-    <img src="/img/java-program-en.png" width="80%" />
-  </p>
-
-- Class of the main function: the full path of the MR program's entry Main Class
-- Program Type: select JAVA Language
-- Main jar package: the MR jar package
-- Command line parameters: set the input parameters of the MR program; replacement of custom parameter variables is supported
-- Other parameters: supports the -D, -files, -libjars and -archives formats
-- Resource: if a resource file is referenced in the other parameters, you need to select the specified resource
-- Custom parameters: user-defined local parameters of the task that replace ${variable} placeholders in the script
-
-2. Python program
-
-<p align="center">
-   <img src="/img/python-program-en.png" width="80%" />
- </p>
-
-- Program Type: select Python Language
-- Main jar package: the Python jar package for running MR
-- Other parameters: supports the -D, -mapper, -reducer, -input and -output formats, where user-defined parameters can be set, for example:
-  `-mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}`
-  Here the `"mapper.py 1"` after `-mapper` contains two arguments: the first is `mapper.py` and the second is `1`.
-- Resource: if a resource file is referenced in the other parameters, you need to select the specified resource
-- Custom parameters: user-defined local parameters of the task that replace ${variable} placeholders in the script
-
-### Python
-  - With Python nodes, Python scripts can be executed directly. For Python nodes, the worker uses `python **` to submit tasks.
-
-
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PYTHON.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="/img/python-en1-2.png" width="80%" />
- </p>
-
-- Script: User-developed Python program
-- Resource: A list of resource files that need to be invoked in a script
-- Custom parameters: User-defined parameters that are part of Python that replace the contents in the script with ${variables}
-
-### System parameter
-
-<table>
-    <tr><th>variable</th><th>meaning</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>The day before the scheduled time of the routine scheduling instance, in yyyyMMdd format; when data is complemented, this date is + 1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>The scheduled time of the routine scheduling instance, in yyyyMMdd format; when data is complemented, this date is + 1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>The scheduled time of the routine scheduling instance, in yyyyMMddHHmmss format; when data is complemented, this date is + 1</td>
-    </tr>
-</table>
-
-
-### Time Customization Parameters
-
- -  Supports custom variable names in the code; the declaration is ${variable name}. The value can refer to a "system parameter" or be specified as a "constant".
-
- -  When the benchmark variable is defined in the form $[...], $[yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.
-
- -  The following forms are also supported:
-
-    *  N years later: $[add_months(yyyyMMdd, 12*N)]
-    *  N years earlier: $[add_months(yyyyMMdd, -12*N)]
-    *  N months later: $[add_months(yyyyMMdd, N)]
-    *  N months earlier: $[add_months(yyyyMMdd, -N)]
-    *  N weeks later: $[yyyyMMdd+7*N]
-    *  N weeks earlier: $[yyyyMMdd-7*N]
-    *  N days later: $[yyyyMMdd+N]
-    *  N days earlier: $[yyyyMMdd-N]
-    *  N hours later: $[HHmmss+N/24]
-    *  N hours earlier: $[HHmmss-N/24]
-    *  N minutes later: $[HHmmss+N/24/60]
-    *  N minutes earlier: $[HHmmss-N/24/60]
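-
-  As a purely illustrative example, assuming the benchmark time resolves to 2022-04-02 (the actual base time depends on how the instance is scheduled), the expressions above would expand roughly as follows:
-
-```
-$[yyyyMMdd]                 -> 20220402
-$[yyyy-MM-dd]               -> 2022-04-02
-$[yyyyMMdd-7]               -> 20220326   (one week earlier)
-$[add_months(yyyyMMdd,-1)]  -> 20220302   (one month earlier)
-```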
-
-
-### <span id=CustomParameters>User-defined parameters</span>
-
- - User-defined parameters are divided into global parameters and local parameters. Global parameters are the global parameters passed when the process definition and process instance are saved. Global parameters can be referenced by local parameters of any task node in the whole process.
-
-  For example:
-<p align="center">
-   <img src="/img/user-defined-en.png" width="80%" />
- </p>
-
- - global_bizdate is a global parameter, referring to system parameters.
-
-<p align="center">
-   <img src="/img/user-defined1-en.png" width="80%" />
- </p>
-
- - In a task, local_param_bizdate refers to the global parameter through \${global_bizdate}. In scripts, the value of local_param_bizdate can be referenced via \${local_param_bizdate}, or the value of local_param_bizdate can be set directly through JDBC.
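-
- As a minimal sketch (the parameter names follow the screenshots above), a shell task could consume these parameters like this:
-
-```shell
-#!/bin/bash
-# global_bizdate is a global parameter; local_param_bizdate is a local parameter that takes its value from ${global_bizdate}
-echo "global_bizdate is ${global_bizdate}"
-echo "local_param_bizdate is ${local_param_bizdate}"
-```
-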
diff --git a/docs/en-us/1.2.1/user_doc/upgrade.md b/docs/en-us/1.2.1/user_doc/upgrade.md
deleted file mode 100644
index 35ae8b2..0000000
--- a/docs/en-us/1.2.1/user_doc/upgrade.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-# DolphinScheduler upgrade documentation
-
-## 1. Back up the previous version of the files and database
-
-## 2. Stop all services of dolphinscheduler
-
- `sh ./script/stop-all.sh`
-
-## 3. Download the new version of the installation package
-
-- [download](/en-us/download/download.html), download the latest version of the front and back installation packages (backend referred to as dolphinscheduler-backend, front end referred to as dolphinscheduler-front)
-- The following upgrade operations need to be performed in the new version of the directory
-
-## 4. Database upgrade
-- Modify the following properties in conf/application-dao.properties
-
-```
-    spring.datasource.url
-    spring.datasource.username
-    spring.datasource.password
-```
-
-- Execute database upgrade script
-
-`sh ./script/upgrade-dolphinscheduler.sh`
-
-## 5. Backend service upgrade
-
-- Modify the content of the install.sh configuration and execute the upgrade script
-  
-  `sh install.sh`
-
-## 6. Frontend service upgrade
-
-- Overwrite the previous version of the dist directory
-- Restart the nginx service
-  
-    `systemctl restart nginx`
diff --git a/docs/en-us/1.3.1/user_doc/architecture-design.md b/docs/en-us/1.3.1/user_doc/architecture-design.md
deleted file mode 100644
index 29f4ae5..0000000
--- a/docs/en-us/1.3.1/user_doc/architecture-design.md
+++ /dev/null
@@ -1,332 +0,0 @@
-## System Architecture Design
-Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
-
-### 1.Glossary
-**DAG:** The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes. Examples are as follows:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**: The visual **DAG** formed by dragging task nodes and establishing associations between them
-
-**Process instance**:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
-
-**Task instance**:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion. Note: **SUB_PROCESS** is itself a separate process definition that can be started and executed independently
-
-**Scheduling method:** The system supports scheduled scheduling based on cron expressions and manual scheduling. Command types supported: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them, **Resume fault-tolerant workflow** and **Resume waiting thread** are command types used internally by the scheduler and cannot be called from the outside
-
-**Scheduled**: The system adopts the **quartz** distributed scheduler and supports visual generation of cron expressions
-
-**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependent** nodes, supporting custom task dependencies **between processes**
-
-**Priority** :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out
-
-**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance alert notifications
-
-**Failure strategy**: For tasks running in parallel, if a task fails, two failure strategies are provided. **Continue** means that, regardless of the status of the tasks running in parallel, the process runs until it ends and then fails. **End** means that once a failed task is found, the tasks running in parallel are killed as well, and the process fails and ends
-
-**Complement**: Supplements historical data; supports two complement modes within an interval, **parallel and serial**
-
-### 2.System Structure
-
-#### 2.1 System architecture diagram
-<p align="center">
-  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
-  <p align="center">
-        <em>System architecture diagram</em>
-  </p>
-</p>
-
-#### 2.2 Start process activity diagram
-<p align="center">
-  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
-  <p align="center">
-        <em>Start process activity diagram</em>
-  </p>
-</p>
-
-#### 2.3 Architecture description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with Zookeeper, and perform fault tolerance by monitoring changes in the temporary node of Zookeeper.
-    MasterServer provides monitoring services based on netty.
-
-    ##### The service mainly includes:
-
-    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When Quartz starts the task, there will be a thread pool inside the Master that is specifically responsible for the follow-up operation of the processing task
-
-    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
-
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
-
-    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
-
-* **WorkerServer** 
-
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
-
-     When the WorkerServer service starts, register a temporary node with Zookeeper and maintain a heartbeat.
-     WorkerServer provides monitoring services based on netty.
-     ##### The service mainly includes:
-     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the corresponding **TaskScheduleThread** executor according to the task type.
-
-     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
-
-* **ZooKeeper** 
-
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
-
-    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
-
-* **Task Queue** 
-
-    Provide task queue operation, the current queue is also implemented based on Zookeeper. Because there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have tested the millions of data storage queues, which has no impact on system stability and performance.
-
-* **Alert** 
-
-    Provides alarm-related interfaces, which mainly include storage, query and notification of the two types of alarm data. The notification functions include **email notification** and **SNMP (not yet implemented)**.
-
-* **API** 
-
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
-
-* **UI** 
-
-    The front-end pages of the system provide its various visual operation interfaces. See the [System User Manual](./system-manual.md) section for more details.
-
-#### 2.3 Architecture design ideas
-
-##### One、Decentralization VS centralization 
-
-###### Centralized thinking
-
-The centralized design concept is relatively simple. The nodes in the distributed cluster are divided into roles according to roles, which are roughly divided into two roles:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
- </p>
-
-- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node will not be in a "busy dead" or "idle dead" state.
-- The Worker role is mainly responsible for task execution and for maintaining the heartbeat with the Master, so that the Master can assign tasks to the Slave.
-
-
-
-Problems in centralized thought design:
-
-- Once there is a problem with the Master, the cluster is left without a leader and the entire cluster will collapse. In order to solve this problem, most Master/Slave architecture models adopt an active/standby Master design, which can be hot standby or cold standby, with automatic or manual switching, and more and more new systems are beginning to provide the ability to automatically elect and switch the Master to improve the availability of the system.
-- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will cause the Master to be overloaded. If the Scheduler is on the slave, all tasks in a DAG can only submit jobs on a certain machine. When there are more parallel tasks, the pressure on the slave may be greater.
-
-
-
-###### Decentralized
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
- </p>
-
-- In a decentralized design there is usually no concept of Master/Slave: all roles are the same and their status is equal. The global Internet is a typical decentralized distributed system; if any node connected to the network goes down, only a small range of functions is affected.
-- The core design of decentralized design is that there is no "manager" different from other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed system communication greatly increases the difficulty of implementing the above functions.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems are constantly emerging. Under this architecture, the managers in the cluster are dynamically elected rather than preset, and when the cluster fails, the nodes of the cluster automatically hold "meetings" to elect new "managers" to preside over the work. The most typical cases are ZooKeeper and Etcd implemented in the Go language.
-
-
-
-- The decentralization of DolphinScheduler is that the Master/Worker is registered in Zookeeper, and the Master cluster and Worker cluster are centerless, and the Zookeeper distributed lock is used to elect one of the Master or Worker as the "manager" to perform the task.
-
-#####  Two、Distributed lock practice
-
-DolphinScheduler uses ZooKeeper distributed lock to realize that only one Master executes Scheduler at the same time, or only one Worker executes the submission of tasks.
-1. The core process algorithm for acquiring distributed locks is as follows:
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-
-##### Three、Insufficient thread loop waiting problem
-
--  If a DAG has no sub-processes and the number of Commands exceeds the threshold set by the thread pool, the process waits directly or fails.
--  If many sub-processes are nested in a large DAG, a "deadlock" state, as shown in the following figure, can occur:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
- </p>
-In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot end, so that the threads cannot be released. In this way, the state of the child-parent process loop waiting is formed. At this time, unless a new Master is started to add threads to break such a "stalemate", the sched [...]
-
-It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
-
-1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
-2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
-3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
-
-Note: The Master Scheduler thread acquires Commands in FIFO order.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-
-##### Four、Fault-tolerant design
-Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
-
-###### 1. Downtime fault tolerance
-
-The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
- </p>
-Among them, the Master monitors the directories of other Masters and Workers. If the remove event is heard, fault tolerance of the process instance or task instance will be performed according to the specific business logic.
-
-
-
-- Master fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
- </p>
-After ZooKeeper Master fault tolerance is completed, the process is re-scheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "running" and "submitted successfully" tasks. For "running" tasks it monitors the status of their task instances; for "submitted successfully" tasks it checks whether the task already exists in the task queue: if it exists, the status of the task instance is monitored as well, and if it does not exist, the task instance is resubmitted.
-
-
-
-- Worker fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
- </p>
-
-Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
-
- Note: Due to "network jitter", a node may lose its heartbeat with ZooKeeper for a short period of time, triggering a remove event for that node. For this situation, we use the simplest approach: once a connection timeout with ZooKeeper occurs, the Master or Worker service is stopped directly.
-
-###### 2.Task failed and try again
-
-Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
-- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
-
-
-
-Back to the topic, we divide the task nodes in the workflow into two types.
-
-- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node, Spark node, and dependent node.
-
-- The other is a logical node, which does not execute an actual script or statement, but only performs logical processing of the overall process flow, such as the sub-process node.
-
-Each **business node** can be configured with a number of failed retries. When the task node fails, it is automatically retried until it succeeds or the configured number of retries is exceeded. **Logical nodes** do not support failure retry, but the tasks inside a logical node do.
-
-If a task in the workflow fails and reaches its maximum number of retries, the workflow fails and stops. The failed workflow can then be rerun manually or recovered.
-
-
-
-##### Five、Task priority design
-In the early scheduling design, if there is no priority design and fair scheduling is used, a task submitted first may complete at the same time as a task submitted later, and neither process nor task priority can be set. So we redesigned this, and our current design is as follows:
-
--  Tasks are processed from high to low priority in the order: **priority of different process instances** takes precedence over **priority within the same process instance**, which takes precedence over **priority of tasks within the same process**, which takes precedence over **the submission order of tasks within the same process**.
-    - The specific implementation is to parse the priority from the task instance's JSON and then save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue; when tasks are fetched from the queue, string comparison yields the tasks that need to be executed first.
-
-        - The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
-             </p>
-
-        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
-             </p>
-
-
-##### Six、Logback and netty implement log access
-
--  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
-  -  Put logs on ES search engine
-  -  Obtain remote log information through netty communication
-
--  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to achieve remote access to log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
- </p>
-
-
-- We use custom Logback FileAppender and Filter implementations so that each task instance generates its own log file.
-- FileAppender is mainly implemented as follows:
-
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
- ```
-
-
-Generate logs in the form of /process definition id/process instance id/task instance id.log
-
-- The Filter matches thread names starting with TaskLogInfo:
-
-- TaskLogFilter is implemented as follows:
-
- ```java
- /**
- *  task log filter
- */
-public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
-}
- ```
-
-### 3.Module introduction
-- dolphinscheduler-alert alarm module, providing AlertServer service.
-
-- dolphinscheduler-api web application module, providing ApiServer service.
-
-- dolphinscheduler-common General constant enumeration, utility class, data structure or base class
-
-- dolphinscheduler-dao provides operations such as database access.
-
-- dolphinscheduler-remote client and server based on netty
-
-- dolphinscheduler-server MasterServer and WorkerServer services
-
-- dolphinscheduler-service service module, including Quartz, Zookeeper, log client access service, easy to call server module and api module
-
-- dolphinscheduler-ui front-end module
-### Summary
-From the perspective of scheduling, this article gives a preliminary introduction to the architecture principles and implementation ideas of the big data distributed workflow scheduling system DolphinScheduler. To be continued
-
-
diff --git a/docs/en-us/1.3.1/user_doc/cluster-deployment.md b/docs/en-us/1.3.1/user_doc/cluster-deployment.md
deleted file mode 100644
index 9dc9d83..0000000
--- a/docs/en-us/1.3.1/user_doc/cluster-deployment.md
+++ /dev/null
@@ -1,406 +0,0 @@
-# Cluster Deployment
-
-# 1、Before you begin (please install the required basic software yourself)
-
-* [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.7) : Choose One
-* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Required. Make sure the JAVA_HOME and PATH environment variables are configured in /etc/profile
-* [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+) : Required
-* pstree or psmisc : "pstree" is required for Mac OS and "psmisc" is required for Fedora/Red Hat/CentOS/Ubuntu/Debian
-* [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download) : Optional. If you need to upload a resource function, you can choose a local file directory as the upload folder for a single machine (this operation does not need to deploy Hadoop). Of course, you can also choose to upload to Hadoop or MinIO.
-
-```markdown
- Tips: DolphinScheduler itself does not rely on Hadoop, Hive or Spark; it only uses their clients to run the corresponding tasks.
-```
-
-# 2、Download the binary package.
-
-- Please download the latest version of the default installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [download](/en-us/download/download.html). Download the package, move it to the installation and deployment directory and uncompress it.
-
-```shell
-# Create the deployment directory. Do not choose a high-privilege directory such as /root or /home as the deployment directory.
-mkdir -p /opt/dolphinscheduler;
-cd /opt/dolphinscheduler;
-# uncompress
-tar -zxvf apache-dolphinscheduler-incubating-1.3.1-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.3.1-dolphinscheduler-bin  dolphinscheduler-bin
-```
-
-# 3、Create deployment user and hosts mapping
-
-- Create a deployment user on **all** deployment machines, and be sure to configure passwordless sudo. If we plan to deploy DolphinScheduler on 4 machines: ds1, ds2, ds3 and ds4, we first need to create a deployment user on each machine.
-
-```shell
-# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
-useradd dolphinscheduler;
-
-# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
-echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
-
-# Configure sudo passwordless
-echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
-sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
-
-```
-
-```
- Notes:
- - Because the task execution service switches between different Linux users via 'sudo -u {linux-user}' to implement multi-tenant job running, the deployment user needs to have passwordless sudo permission. First-time learners can ignore this if they don't understand it.
- - If you find "Defaults    requiretty" in the "/etc/sudoers" file, comment it out as well.
- - If you need to use the resource upload function, you need to grant the user permission to operate on the local file system, HDFS or MinIO.
-```
-
-# 4、Configure hosts mapping and ssh access and modify directory permissions.
-
-- Use the first machine (hostname is ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and login as root on ds1.
-
-  ```shell
-  vi /etc/hosts
-
-  #add ip hostname
-  192.168.xxx.xxx ds1
-  192.168.xxx.xxx ds2
-  192.168.xxx.xxx ds3
-  192.168.xxx.xxx ds4
-  ```
-
-  *Note: Please delete or comment out the line 127.0.0.1*
-
-- Sync /etc/hosts on ds1 to all deployment machines
-
-  ```shell
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of machines you want to deploy
-  do
-      sudo scp -r /etc/hosts  $ip:/etc/          # Need to enter root password during operation
-  done
-  ```
-
-  *Note: you can use `sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/` to avoid typing the password.*
-
-  > Install sshpass in Centos:
-  >
-  > 1. Install epel
-  >
-  >    yum install -y epel-release
-  >
-  >    yum repolist
-  >
-  > 2. After installing epel, you can install sshpass
-  >
-  >    yum install -y sshpass
-  >
-  >
-
-- On ds1, switch to the deployment user and configure ssh passwordless login
-
-  ```shell
-   su dolphinscheduler;
-
-  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-  chmod 600 ~/.ssh/authorized_keys
-  ```
-      Note: *If the configuration succeeds, the dolphinscheduler user does not need to enter a password when executing the command `ssh localhost`*
-
-
-
-- On ds1, configure the deployment user dolphinscheduler ssh to connect to other machines to be deployed.
-
-  ```shell
-  su dolphinscheduler;
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of the machine you want to deploy.
-  do
-      ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during the operation.
-  done
-  # can use `sshpass -p xxx ssh-copy-id $ip` to avoid type password.
-  ```
-
-- On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory.
-
-  ```shell
-  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
-  ```
-
-# 5、Database initialization
-
-- Log in to the database. The default database is PostgreSQL. If you select MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
-```
-mysql -h192.168.xx.xx -P3306 -uroot -p
-```
-
-- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password**
-
- ``` mysql
-    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    mysql> flush privileges;
- ```
-
-- Create tables and import basic data
-
-    - Modify the following configuration in datasource.properties under the conf directory
-
-    ```shell
-      vi conf/datasource.properties
-    ```
-
-    - If you choose MySQL, please comment out the PostgreSQL-related configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) package to the lib directory, and then configure the database connection information correctly.
-
-    ```properties
-      #postgre
-      #spring.datasource.driver-class-name=org.postgresql.Driver
-      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # Replace the correct IP address
-      spring.datasource.username=xxx						# replace the correct {user} value
-      spring.datasource.password=xxx						# replace the correct {password} value
-    ```
-
-    - After modifying and saving, execute the create table and import data script in the script directory.
-
-    ```shell
-    sh script/create-dolphinscheduler.sh
-    ```
-
-​       *Note: If you execute the above script and report "/bin/java: No such file or directory" error, please configure JAVA_HOME and PATH variables in /etc/profile*
-
-# 6、Modify runtime parameters.
-
-- Modify the environment variables in the `dolphinscheduler_env.sh` file in the `conf/env` directory (taking software installed under `/opt/soft` as an example)
-
-    ```shell
-        export HADOOP_HOME=/opt/soft/hadoop
-        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-        #export SPARK_HOME1=/opt/soft/spark1
-        export SPARK_HOME2=/opt/soft/spark2
-        export PYTHON_HOME=/opt/soft/python
-        export JAVA_HOME=/opt/soft/java
-        export HIVE_HOME=/opt/soft/hive
-        export FLINK_HOME=/opt/soft/flink
-        export DATAX_HOME=/opt/soft/datax/bin/datax.py
-        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-    ```
-
-     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Those that are not used can be ignored or commented out.`
-
-
-
-- Create a soft link from the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example):
-
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
-- Modify the parameters in the one-click deployment config file `conf/config/install_config.conf`, paying special attention to the following parameters.
-
-    ```shell
-    # choose mysql or postgresql
-    dbtype="mysql"
-
-    # Database connection address and port
-    dbhost="192.168.xx.xx:3306"
-
-    # database name
-    dbname="dolphinscheduler"
-
-    # database username
-    username="xxx"
-
-    # database password
-    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
-    password="xxx"
-
-    #Zookeeper cluster
-    zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-
-    # Note: the target installation path for dolphinscheduler; do not configure it to be the same as the current path (pwd)
-    installPath="/opt/soft/dolphinscheduler"
-
-    # deployment user
-    # Note: the deployment user needs to have sudo privileges and permission to operate hdfs. If hdfs is enabled, you need to create the root directory yourself
-    deployUser="dolphinscheduler"
-
-    # alert config,take QQ email for example
-    # mail protocol
-    mailProtocol="SMTP"
-
-    # mail server host
-    mailServerHost="smtp.qq.com"
-
-    # mail server port
-    # note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
-    mailServerPort="25"
-
-    # mail sender
-    mailSender="xxx@qq.com"
-
-    # mail user
-    mailUser="xxx@qq.com"
-
-    # mail sender password
-    # note: The mail.passwd is email service authorization code, not the email login password.
-    mailPassword="xxx"
-
-    # Whether the TLS mail protocol is supported, true is supported and false is not supported
-    starttlsEnable="true"
-
-    # Whether the SSL mail protocol is supported, true is supported and false is not supported
-    # note: only one of TLS and SSL can be set to true.
-    sslEnable="false"
-
-    # note: sslTrust is the same as mailServerHost
-    sslTrust="smtp.qq.com"
-
-
-    # resource storage type:HDFS,S3,NONE
-    resourceStorageType="HDFS"
-
-    # If resourceStorageType = HDFS and your Hadoop cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, set it to a specific IP or hostname.
-    # If S3, set the S3 address, for example: s3a://dolphinscheduler
-    # Note: for S3, be sure to create the root directory /dolphinscheduler
-    defaultFS="hdfs://mycluster:8020"
-
-
-    # if not use hadoop resourcemanager, please keep default value; if resourcemanager HA enable, please type the HA ips ; if resourcemanager is single, make this value empty
-    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-    # if resourcemanager HA is enabled or resourcemanager is not used, please skip this setting; if resourcemanager is a single node, replace yarnIp1 with the actual resourcemanager hostname.
-    singleYarnIp="yarnIp1"
-
-    # resource storage path on HDFS/S3; resource files will be stored under this path. Please make sure the directory exists on hdfs and has read/write permissions. /dolphinscheduler is recommended
-    resourceUploadPath="/dolphinscheduler"
-
-    # who have permissions to create directory under HDFS/S3 root path
-    # Note: if kerberos is enabled, please config hdfsRootUser=
-    hdfsRootUser="hdfs"
-
-
-
-    # install hosts
-    # Note: install the scheduled hostname list. If it is pseudo-distributed, just write a pseudo-distributed hostname
-    ips="ds1,ds2,ds3,ds4"
-
-    # ssh port, default 22
-    # Note: if ssh port is not default, modify here
-    sshPort="22"
-
-    # run master machine
-    # Note: list of hosts hostname for deploying master
-    masters="ds1,ds2"
-
-    # run worker machine
-    # note: need to write the worker group name of each worker, the default value is "default"
-    workers="ds3:default,ds4:default"
-
-    # run alert machine
-    # note: list of machine hostnames for deploying alert server
-    alertServer="ds2"
-
-    # run api machine
-    # note: list of machine hostnames for deploying api server
-    apiServers="ds1"
-
-    ```
-
-    *Attention:*
-
-    - If you need to upload resources to the Hadoop cluster and the NameNode of the Hadoop cluster is configured with HA, you need to enable HDFS resource upload and copy core-site.xml and hdfs-site.xml from the Hadoop cluster to /opt/dolphinscheduler/conf. If the NameNode is not HA, skip this step.
-
-# 7、Automated Deployment
-
-- Switch to the deployment user and execute the one-click deployment script
-
-    `sh install.sh`
-
-   ```
-   Note:
-   For the first deployment, the following message appears during the `3, stop server` step; it can be ignored.
-   sh: bin/dolphinscheduler-daemon.sh: No such file or directory
-   ```
-
-- After the script completes, the following 5 services will be started. Use the `jps` command (which ships with the Java JDK) to check whether the services are running:
-
-```
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    LoggerServer         ----- logger service
-    ApiApplicationServer ----- api service
-    AlertServer          ----- alert service
-```
-If the above services are started normally, the automatic deployment is successful.
-
-
-After the deployment is successful, you can view the logs. The logs are stored in the logs folder.
-
-```
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
-    └── dolphinscheduler-logger-server.log
-```
-
-
-
-# 8、Login
-
-- Access the front-end page at the following address (replace the IP with your own):
-  http://localhost:12345/dolphinscheduler
-
-   <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-
-
-# 9、Start and stop service
-
-* Stop all services
-
-  ` sh ./bin/stop-all.sh`
-
-* Start all services
-
-  ` sh ./bin/start-all.sh`
-
-* Start and stop master service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* Start and stop worker Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* Start and stop api Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-
-* Start and stop logger Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-
-* Start and stop alert service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-`Note: Please refer to the "Architecture Design" section for service usage.`
-
diff --git a/docs/en-us/1.3.1/user_doc/configuration-file.md b/docs/en-us/1.3.1/user_doc/configuration-file.md
deleted file mode 100644
index db9b81c..0000000
--- a/docs/en-us/1.3.1/user_doc/configuration-file.md
+++ /dev/null
@@ -1,406 +0,0 @@
-<!-- markdown-link-check-disable -->
-# Foreword
-This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
-
-# Directory Structure
-All configuration files of dolphinscheduler are currently in the [conf] directory.
-
-For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, see the simplified description of the dolphinscheduler installation directory below.
-This article focuses on the dolphinscheduler configuration files; the other parts of the directory are not covered in detail.
-
-[Note: The following dolphinscheduler is referred to as DS.]
-```
-
-├─bin                               DS command storage directory
-│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
-│  ├─start-all.sh                       Start all DS services according to the configuration file
-│  ├─stop-all.sh                        Close all DS services according to the configuration file
-├─conf                              Configuration file directory
-│  ├─application-api.properties         api service configuration file
-│  ├─datasource.properties              Database configuration file
-│  ├─zookeeper.properties               zookeeper configuration file
-│  ├─master.properties                  Master service configuration file
-│  ├─worker.properties                  Worker service configuration file
-│  ├─quartz.properties                  Quartz service configuration file
-│  ├─common.properties                  Public service [storage] configuration file
-│  ├─alert.properties                   alert service configuration file
-│  ├─config                             Environment variable configuration folder
-│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
-│  ├─env                                Run script environment variable configuration directory
-│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-│  ├─org                                mybatis mapper file directory
-│  ├─i18n                               i18n configuration file directory
-│  ├─logback-api.xml                    api service log configuration file
-│  ├─logback-master.xml                 Master service log configuration file
-│  ├─logback-worker.xml                 Worker service log configuration file
-│  ├─logback-alert.xml                  alert service log configuration file
-├─sql                               DS metadata creation and upgrade sql file
-│  ├─create                             Create SQL script directory
-│  ├─upgrade                            Upgrade SQL script directory
-│  ├─dolphinscheduler-postgre.sql       Postgre database initialization script
-│  ├─dolphinscheduler_mysql.sql         mysql database initialization version
-│  ├─soft_version                       Current DS version identification file
-├─script                            DS service deployment, database creation/upgrade script directory
-│  ├─create-dolphinscheduler.sh         DS database initialization script      
-│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
-│  ├─monitor-server.sh                  DS service monitoring startup script               
-│  ├─scp-hosts.sh                       Install file transfer script                                                    
-│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
-├─ui                                Front-end WEB resource directory
-├─lib                               DS dependent jar storage directory
-├─install.sh                        Automatically install DS service script
-
-
-```
-
-
-# Detailed configuration file
-
-Serial number| Service classification |  Configuration file|
-|--|--|--|
-1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
-2|Database connection configuration | datasource.properties
-3|Zookeeper connection configuration|zookeeper.properties
-4|Common [storage] configuration|common.properties
-5|API service configuration|application-api.properties
-6|Master service configuration|master.properties
-7|Worker service configuration|worker.properties
-8|Alert service configuration|alert.properties
-9|Quartz configuration|quartz.properties
-10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
-11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
-12|Service log configuration files|api service log configuration file : logback-api.xml  <br /> Master service log configuration file  : logback-master.xml    <br /> Worker service log configuration file : logback-worker.xml  <br /> alertService log configuration file : logback-alert.xml 
-
-
-## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
-The dolphinscheduler-daemon.sh script is responsible for starting and stopping the DS services;
-start-all.sh/stop-all.sh eventually start and stop the cluster through dolphinscheduler-daemon.sh.
-At present, DS only ships with a basic setting; please tune the JVM parameters according to your actual resources.
-
-The default simplified parameters are as follows:
-```bash
-export DOLPHINSCHEDULER_OPTS="
--server 
--Xmx16g 
--Xms1g 
--Xss512k 
--XX:+UseConcMarkSweepGC 
--XX:+CMSParallelRemarkEnabled 
--XX:+UseFastAccessorMethods 
--XX:+UseCMSInitiatingOccupancyOnly 
--XX:CMSInitiatingOccupancyFraction=70
-"
-```
-
-> It is not recommended to set "-XX:DisableExplicitGC". DS uses Netty for communication, and setting this parameter may cause memory leaks.
-
-## 2.datasource.properties [Database Connectivity]
-DS uses Druid to manage database connections. The default simplified configuration is as follows.
-|Parameter | Defaults| Description|
-|--|--|--|
-spring.datasource.driver-class-name| |Database driver
-spring.datasource.url||Database connection address
-spring.datasource.username||Database username
-spring.datasource.password||Database password
-spring.datasource.initialSize|5| Number of initial connection pools
-spring.datasource.minIdle|5| Minimum number of connection pools
-spring.datasource.maxActive|5| Maximum number of connection pools
-spring.datasource.maxWait|60000| Maximum waiting time
-spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
-spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
-spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
-spring.datasource.validationQueryTimeout|3| Timeout to check if the connection is valid[seconds]
-spring.datasource.testWhileIdle|true| When borrowing a connection, if its idle time is greater than timeBetweenEvictionRunsMillis, run validationQuery to check whether the connection is valid.
-spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when applying for connection
-spring.datasource.testOnReturn|false| When returning the connection, execute validationQuery to check whether the connection is valid
-spring.datasource.defaultAutoCommit|true| Whether to enable automatic submission
-spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
-spring.datasource.poolPreparedStatements|true| Open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, this must be configured greater than 0; when it is greater than 0, poolPreparedStatements is automatically set to true.
-
-
-## 3.zookeeper.properties [Zookeeper connection configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-zookeeper.quorum|localhost:2181| zk cluster connection information
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
-zookeeper.session.timeout|60000| Session timeout
-zookeeper.connection.timeout|30000| Connection timeout
-zookeeper.retry.base.sleep|100| Basic retry time difference
-zookeeper.retry.max.sleep|30000| Maximum retry time
-zookeeper.retry.maxtime|10|Maximum number of retries
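-
-For reference, a minimal zookeeper.properties sketch using the defaults listed above (replace the quorum with your own zk cluster address):
-
-```properties
-zookeeper.quorum=localhost:2181
-zookeeper.dolphinscheduler.root=/dolphinscheduler
-zookeeper.session.timeout=60000
-zookeeper.connection.timeout=30000
-zookeeper.retry.base.sleep=100
-zookeeper.retry.max.sleep=30000
-zookeeper.retry.maxtime=10
-```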
-
-
-## 4.common.properties [hadoop, s3, yarn configuration]
-The common.properties configuration file is currently mainly used to configure hadoop/s3a related configurations. 
-|Parameter |Defaults| Description| 
-|--|--|--|
-resource.storage.type|NONE|Resource file storage type: HDFS,S3,NONE
-resource.upload.path|/dolphinscheduler|Resource file storage path
-data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
-hadoop.security.authentication.startup.state|false|hadoop enable kerberos permission
-java.security.krb5.conf.path|/opt/krb5.conf|kerberos configuration directory
-login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
-login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
-resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
-hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
-fs.defaultFS|hdfs://mycluster:8020|If resource.storage.type=S3, the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS and hadoop has HA configured, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
-fs.s3a.endpoint||s3 endpoint address
-fs.s3a.access.key||s3 access key
-fs.s3a.secret.key||s3 secret key
-yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address, If the resourcemanager has HA turned on, enter the IP address of the HA (separated by commas). If the resourcemanager is a single node, the value can be empty.
-yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If resourcemanager has HA enabled or resourcemanager is not used, keep the default value. If resourcemanager is a single node, you need to configure ds1 as the hostname corresponding to resourcemanager
-dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-development.state|false|Is it in development mode
-kerberos.expire.time|7|kerberos expire time,integer,the unit is day
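-
-A minimal common.properties sketch for an HDFS-backed resource center, using the defaults from the table above (the cluster name and paths are placeholders for your own environment):
-
-```properties
-resource.storage.type=HDFS
-resource.upload.path=/dolphinscheduler
-data.basedir.path=/tmp/dolphinscheduler
-hdfs.root.user=hdfs
-fs.defaultFS=hdfs://mycluster:8020
-yarn.application.status.address=http://ds1:8088/ws/v1/cluster/apps/%s
-dolphinscheduler.env.path=env/dolphinscheduler_env.sh
-```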
-
-
-## 5.application-api.properties [API service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-server.port|12345|API service communication port
-server.servlet.session.timeout|7200|session timeout
-server.servlet.context-path|/dolphinscheduler |Request path
-spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
-spring.servlet.multipart.max-request-size|1024MB|Maximum request size
-server.jetty.max-http-post-size|5000000|Jetty service maximum send request size
-spring.messages.encoding|UTF-8|Request encoding
-spring.jackson.time-zone|GMT+8|Set time zone
-spring.messages.basename|i18n/messages|i18n configuration
-security.authentication.type|PASSWORD|Permission verification type
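-
-For reference, an application-api.properties sketch with the defaults from the table above:
-
-```properties
-server.port=12345
-server.servlet.session.timeout=7200
-server.servlet.context-path=/dolphinscheduler
-spring.servlet.multipart.max-file-size=1024MB
-spring.servlet.multipart.max-request-size=1024MB
-spring.messages.encoding=UTF-8
-spring.jackson.time-zone=GMT+8
-spring.messages.basename=i18n/messages
-security.authentication.type=PASSWORD
-```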
-
-
-## 6.master.properties [Master service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-master.listen.port|5678|master listen port
-master.exec.threads|100|master execute thread number to limit process instances in parallel
-master.exec.task.num|20|master execute task number in parallel per process instance
-master.dispatch.task.num|3|master dispatch task number per batch
-master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
-master.heartbeat.interval|10|master heartbeat interval, the unit is second
-master.task.commit.retryTimes|5|master commit task retry times
-master.task.commit.interval|1000|master commit task interval, the unit is millisecond
-master.max.cpuload.avg|-1|master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
-master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
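-
-A minimal master.properties sketch; the values below are the defaults from the table above and can be tuned to the size of your cluster:
-
-```properties
-master.listen.port=5678
-master.exec.threads=100
-master.exec.task.num=20
-master.dispatch.task.num=3
-master.host.selector=LowerWeight
-master.heartbeat.interval=10
-master.max.cpuload.avg=-1
-master.reserved.memory=0.3
-```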
-
-
-## 7.worker.properties [Worker service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-worker.listen.port|1234|worker listen port
-worker.exec.threads|100|worker execute thread number to limit task instances in parallel
-worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
-worker.max.cpuload.avg|-1|worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2
-worker.reserved.memory|0.3|worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G
-worker.group|default|worker group config <br> worker will join corresponding group according to this config when startup
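-
-A minimal worker.properties sketch with the defaults from the table above; worker.group decides which group the worker joins on startup:
-
-```properties
-worker.listen.port=1234
-worker.exec.threads=100
-worker.heartbeat.interval=10
-worker.max.cpuload.avg=-1
-worker.reserved.memory=0.3
-worker.group=default
-```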
-
-
-## 8.alert.properties [Alert alert service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-alert.type|EMAIL|Alarm type|
-mail.protocol|SMTP| Mail server protocol
-mail.server.host|xxx.xxx.com|Mail server address
-mail.server.port|25|Mail server port
-mail.sender|xxx@xxx.com|Sender mailbox
-mail.user|xxx@xxx.com|Sender's email name
-mail.passwd|111111|Sender email password
-mail.smtp.starttls.enable|true|Whether the mailbox opens tls
-mail.smtp.ssl.enable|false|Whether the mailbox opens ssl
-mail.smtp.ssl.trust|xxx.xxx.com|Email ssl whitelist
-xls.file.path|/tmp/xls|Temporary working directory for mailbox attachments
-||The following is the enterprise WeChat configuration[Optional]|
-enterprise.wechat.enable|false|Whether the enterprise WeChat is enabled
-enterprise.wechat.corp.id|xxxxxxx|
-enterprise.wechat.secret|xxxxxxx|
-enterprise.wechat.agent.id|xxxxxxx|
-enterprise.wechat.users|xxxxxxx|
-enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
-enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
-enterprise.wechat.user.send.msg||Send message format
-enterprise.wechat.team.send.msg||Group message format
-plugin.dir|/Users/xx/your/path/to/plugin/dir|Plugin directory
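-
-A minimal alert.properties sketch for SMTP email alerts, based on the defaults in the table above (server, sender and password are placeholders; remember that starttls and ssl should not both be enabled):
-
-```properties
-alert.type=EMAIL
-mail.protocol=SMTP
-mail.server.host=xxx.xxx.com
-mail.server.port=25
-mail.sender=xxx@xxx.com
-mail.user=xxx@xxx.com
-mail.passwd=111111
-mail.smtp.starttls.enable=true
-mail.smtp.ssl.enable=false
-mail.smtp.ssl.trust=xxx.xxx.com
-xls.file.path=/tmp/xls
-```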
-
-
-## 9.quartz.properties [Quartz configuration]
-This section covers the quartz configuration. Please configure it according to your actual business scenario and resources; it is not expanded on here.
-|Parameter |Defaults| Description| 
-|--|--|--|
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
-org.quartz.scheduler.instanceName | DolphinScheduler
-org.quartz.scheduler.instanceId | AUTO
-org.quartz.scheduler.makeSchedulerThreadDaemon | true
-org.quartz.jobStore.useProperties | false
-org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
-org.quartz.threadPool.makeThreadsDaemons | true
-org.quartz.threadPool.threadCount | 25
-org.quartz.threadPool.threadPriority | 5
-org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
-org.quartz.jobStore.tablePrefix | QRTZ_
-org.quartz.jobStore.isClustered | true
-org.quartz.jobStore.misfireThreshold | 60000
-org.quartz.jobStore.clusterCheckinInterval | 5000
-org.quartz.jobStore.acquireTriggersWithinLock|true
-org.quartz.jobStore.dataSource | myDs
-org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
-
-
-## 10.install_config.conf [DS environment variable configuration script [for DS installation/startup]]
-The install_config.conf configuration file is more involved. It is mainly used in two places.
-* 1.Automatic installation of DS cluster.
-
-> Calling the install.sh script automatically loads the configuration in this file and fills in the configuration files above accordingly,
-> such as dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
-
-
-* 2.DS cluster startup and shutdown.
-> When the DS cluster is started or shut down, it loads the masters, workers, alertServer, apiServers and other parameters from this file to start/stop the DS services.
-
-The contents of the file are as follows:
-```bash
-
-# Note: If the configuration file contains special characters,such as: `.*[]^${}\+?|()@#&`, Please escape,
-#      Examples: `[` Escape to `\[`
-
-# Database type, currently only supports postgresql or mysql
-dbtype="mysql"
-
-# Database address & port
-dbhost="192.168.xx.xx:3306"
-
-# Database Name
-dbname="dolphinscheduler"
-
-
-# Database Username
-username="xx"
-
-# Database Password
-password="xx"
-
-# Zookeeper address
-zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-
-# Where to install DS, such as: /data1_1T/dolphinscheduler,
-installPath="/data1_1T/dolphinscheduler"
-
-# Which user to use for deployment
-# Note: The deployment user needs sudo permissions and can operate hdfs.
-#     If you use hdfs, the root directory must be created by the user. Otherwise, there will be permissions related issues.
-deployUser="dolphinscheduler"
-
-
-# The following is the alarm service configuration
-# Mail server address
-mailServerHost="smtp.exmail.qq.com"
-
-# Mail Server Port
-mailServerPort="25"
-
-# Sender
-mailSender="xxxxxxxxxx"
-
-# Sending user
-mailUser="xxxxxxxxxx"
-
-# email Password
-mailPassword="xxxxxxxxxx"
-
-# TLS protocol mailbox is set to true, otherwise set to false
-starttlsEnable="true"
-
-# The mailbox with SSL protocol enabled is set to true, otherwise it is false. Note: starttlsEnable and sslEnable cannot be true at the same time
-sslEnable="false"
-
-# Mail service address value, same as mailServerHost
-sslTrust="smtp.exmail.qq.com"
-
-# Where to upload resource files (such as SQL used by the business); options: HDFS, S3, NONE. If you want to upload to HDFS, configure HDFS; if you do not need the resource upload function, select NONE.
-resourceStorageType="NONE"
-
-# If S3, set the S3 address, for example: s3a://dolphinscheduler
-# Note: for S3, be sure to create the root directory /dolphinscheduler
-defaultFS="hdfs://mycluster:8020"
-
-# If the resourceStorageType is S3, the parameters to be configured are as follows:
-s3Endpoint="http://192.168.xx.xx:9010"
-s3AccessKey="xxxxxxxxxx"
-s3SecretKey="xxxxxxxxxx"
-
-# If the ResourceManager is HA, configure the primary and secondary ip or hostname of the ResourceManager nodes, such as "192.168.xx.xx,192.168.xx.xx"; otherwise, for a single ResourceManager or if yarn is not used at all, simply configure yarnHaIps=""
-yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-# If it is a single ResourceManager, configure it as the ResourceManager node ip or host name, otherwise keep the default value.
-singleYarnIp="yarnIp1"
-
-# The storage path of resource files in HDFS/S3
-resourceUploadPath="/dolphinscheduler"
-
-
-# HDFS/S3  Operating user
-hdfsRootUser="hdfs"
-
-# The following is the kerberos configuration
-
-# Whether kerberos is turned on
-kerberosStartUp="false"
-# kdc krb5 config file path
-krb5ConfPath="$installPath/conf/krb5.conf"
-# keytab username
-keytabUserName="hdfs-mycluster@ESZ.COM"
-# username keytab path
-keytabPath="$installPath/conf/hdfs.headless.keytab"
-
-
-# api service port
-apiServerPort="12345"
-
-
-# Hostname of all hosts where DS is deployed
-ips="ds1,ds2,ds3,ds4,ds5"
-
-# ssh port, default 22
-sshPort="22"
-
-# Deploy master service host
-masters="ds1,ds2"
-
-# The host where the worker service is deployed
-# Note: Each worker needs to set a worker group name, the default value is "default"
-workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
-
-#  Deploy the alert service host
-alertServer="ds3"
-
-# Deploy api service host
-apiServers="ds1"
-```
-
-## 11.dolphinscheduler_env.sh [Environment variable configuration]
-When submitting a task through a shell-like method, the environment variables in the configuration file are loaded into the host.
-The types of tasks involved are: Shell tasks, Python tasks, Spark tasks, Flink tasks, Datax tasks, etc.
-```bash
-export HADOOP_HOME=/opt/soft/hadoop
-export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-export SPARK_HOME1=/opt/soft/spark1
-export SPARK_HOME2=/opt/soft/spark2
-export PYTHON_HOME=/opt/soft/python
-export JAVA_HOME=/opt/soft/java
-export HIVE_HOME=/opt/soft/hive
-export FLINK_HOME=/opt/soft/flink
-export DATAX_HOME=/opt/soft/datax/bin/datax.py
-
-export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-```
-
-## 12.Service log configuration files
-|Corresponding service| Log configuration file |
-|--|--|
-api service log configuration file |logback-api.xml|
-Master service log configuration file|logback-master.xml |
-Worker service log configuration file|logback-worker.xml |
-alert service log configuration file|logback-alert.xml |
diff --git a/docs/en-us/1.3.1/user_doc/hardware-environment.md b/docs/en-us/1.3.1/user_doc/hardware-environment.md
deleted file mode 100644
index cc122c9..0000000
--- a/docs/en-us/1.3.1/user_doc/hardware-environment.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Hardware Environment
-
-DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
-
-## 1. Linux operating system version requirements
-
-| OS       | Version         |
-| :----------------------- | :----------: |
-| Red Hat Enterprise Linux | 7.0 and above   |
-| CentOS                   | 7.0 and above   |
-| Oracle Enterprise Linux  | 7.0 and above   |
-| Ubuntu LTS               | 16.04 and above |
-
-> **Attention:**
->The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN.
-
-## 2. Recommended server configuration
-DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
-### Production Environment
-
-| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
-| --- | --- | --- | --- | --- |
-| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
-
-> **Attention:**
-> - The above-recommended configuration is the minimum for deploying DolphinScheduler. A higher configuration is strongly recommended for production environments.
-> - A hard disk of more than 50 GB is recommended, with the system disk and data disk kept separate.
-
-
-## 3. Network requirements
-
-DolphinScheduler provides the following network port configurations for normal operation:
-
-| Server | Port | Desc |
-|  --- | --- | --- |
-| MasterServer |  5678  | Not a communication port; only requires that the local port is not in conflict |
-| WorkerServer | 1234  | Not a communication port; only requires that the local port is not in conflict |
-| ApiApplicationServer |  12345 | Backend communication port |
-
-> **Attention:**
-> - MasterServer and WorkerServer do not need to open these ports for cross-network communication; it is enough that the local ports do not conflict.
-> - Administrators can adjust relevant ports on the network side and host-side according to the deployment plan of DolphinScheduler components in the actual environment.
-
-## 4. Browser requirements
-
-DolphinScheduler recommends Chrome and the latest browsers using the Chrome kernel to access the front-end visual operation page.
-
diff --git a/docs/en-us/1.3.1/user_doc/metadata-1.3.md b/docs/en-us/1.3.1/user_doc/metadata-1.3.md
deleted file mode 100644
index 6cefed7..0000000
--- a/docs/en-us/1.3.1/user_doc/metadata-1.3.md
+++ /dev/null
@@ -1,185 +0,0 @@
-# Dolphin Scheduler 1.3 Metadata document
-
-<a name="25Ald"></a>
-### Table overview
-| Table Name | Table information |
-| :---: | :---: |
-| t_ds_access_token | Access token for the ds backend |
-| t_ds_alert | Alert message |
-| t_ds_alertgroup | Alert group |
-| t_ds_command | Command to execute |
-| t_ds_datasource | Data source |
-| t_ds_error_command | Error command |
-| t_ds_process_definition | Process definition |
-| t_ds_process_instance | Process instance |
-| t_ds_project | project |
-| t_ds_queue | queue |
-| t_ds_relation_datasource_user | User associated data source |
-| t_ds_relation_process_instance | Subprocess |
-| t_ds_relation_project_user | User-related projects |
-| t_ds_relation_resources_user | User associated resources |
-| t_ds_relation_udfs_user | User associated UDF function |
-| t_ds_relation_user_alertgroup | User associated alarm group |
-| t_ds_resources | resource |
-| t_ds_schedules | Process timing scheduling |
-| t_ds_session | User login session |
-| t_ds_task_instance | Task instance |
-| t_ds_tenant | Tenant |
-| t_ds_udfs | UDF resources |
-| t_ds_user | user |
-| t_ds_version | ds version information |
-
-<a name="VNVGr"></a>
-### User / Queue / Data source
-![image.png](/img/metadata-erd/user-queue-datasource.png)
-
-- There can be multiple users under a tenant<br />
-- The queue field in t_ds_user stores the queue_name information in the queue list, and t_ds_tenant stores queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used<br />
-- The user_id field in the t_ds_datasource table represents the user who created the data source, and the user_id in t_ds_relation_datasource_user represents the user who has permission to the data source<br />
-<a name="HHyGV"></a>
-### Project / Resource / Alert
-![image.png](/img/metadata-erd/project-resource-alert.png)
-
-- A user can have multiple projects, and the user project is authorized to complete the relationship binding between project_id and user_id through the t_ds_relation_project_user table<br />
-- The user_id in the t_ds_projcet table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents the user who has permission to the project<br />
-- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents the user who has permission to the resource<br />
-- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents the user who has permission to the UDF<br />
-<a name="Bg2Sn"></a>
-### Command / Process / Task
-![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
-
-- A project has multiple process definitions, one process definition can generate multiple process instances, and one process instance can generate multiple task instances<br />
-- The t_ds_schedulers table stores the timing scheduling information defined by the process<br />
-- The data stored in the t_ds_relation_process_instance table is used to handle the case where the process definition contains sub-processes. parent_process_instance_id represents the main process instance id containing the sub-process, process_instance_id represents the id of the sub-process instance, and parent_task_instance_id represents the task instance id of the sub-process node; the process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively<br />
-<a name="Pv25P"></a>
-### Core table schema
-<a name="32Jzd"></a>
-#### t_ds_process_definition
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | Primary key |
-| name | varchar | Process definition name |
-| version | int | Process definition version |
-| release_state | tinyint | Release status of the process definition: 0 Not online  1 Online |
-| project_id | int | project id |
-| user_id | int | User to whom the process definition belongs id |
-| process_definition_json | longtext | Process definition json string |
-| description | text | Process definition description |
-| global_params | text | Global parameters |
-| flag | tinyint | Whether the process is available: 0 is not available, 1 is available |
-| locations | text | Node coordinate information |
-| connects | text | Node connection information |
-| receivers | text | Recipient |
-| receivers_cc | text | Cc |
-| create_time | datetime | Creation time |
-| timeout | int | Timeout period |
-| tenant_id | int | queue id |
-| update_time | datetime | Update time |
-| modify_by | varchar | Modify user |
-| resource_ids | varchar | Resource id set |
-
-<a name="e6jfz"></a>
-#### t_ds_process_instance
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | Primary key |
-| name | varchar | Process instance name |
-| process_definition_id | int | Process definition id |
-| state | tinyint | Process instance status: 0 Submitted successfully,1 running,2 Ready to pause,3 time out,4 Ready to stop,5 stop,6 failure,7 success,8 Need for fault tolerance,9 kill,10 Waiting thread,11 Wait for dependencies to complete |
-| recovery | tinyint | Process instance fault tolerance ID: 0 normal,1 Need to be restarted by fault tolerance |
-| start_time | datetime | Process instance start time |
-| end_time | datetime | Process instance end time |
-| run_times | int | Number of process instance runs |
-| host | varchar | The machine where the process instance is located |
-| command_type | tinyint | Command type: 0 Start the workflow,1 Start execution from the current node,2 Restore a fault-tolerant workflow,3 Resume suspended process,4 Start execution from the failed node,5 Complement,6 Scheduling,7 Rerun,8 time out,9 stop,10 Resume waiting thread |
-| command_param | text | Command parameters (json format) |
-| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward execution, 2 backward execution |
-| max_try_times | tinyint | Maximum number of retries |
-| failure_strategy | tinyint | Failure strategy 0 ends after failure, 1 continues after failure |
-| warning_type | tinyint | Alarm type: 0 not sent, 1 sent if the process is successful, 2 sent if the process fails, 3 sent whether the process succeeds or fails |
-| warning_group_id | int | Alarm group id |
-| schedule_time | datetime | Expected running time |
-| command_start_time | datetime | Start command time |
-| global_params | text | Global parameters (parameters defined by the curing process) |
-| process_instance_json | longtext | Process instance json (json of the process definition of copy) |
-| flag | tinyint | Is it available, 1 is available, 0 is not available |
-| update_time | timestamp | Update time |
-| is_sub_process | int | Is it a sub-workflow 1 yes, 0 no |
-| executor_id | int | Command execution user |
-| locations | text | Node coordinate information |
-| connects | text | Node connection information |
-| history_cmd | text | Historical commands, record all operations on process instances |
-| dependence_schedule_times | text | Depend on the estimated time of the node |
-| process_instance_priority | int | Process instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest |
-| worker_group | varchar | Tasks specify the group of workers to run |
-| timeout | int | Timeout period |
-| tenant_id | int | queue id |
-
-<a name="IvHEc"></a>
-#### t_ds_task_instance
-|Field | Type | Comment |
-| --- | --- | --- |
-| id | int | Primary key |
-| name | varchar | Task name |
-| task_type | varchar | Task type |
-| process_definition_id | int | Process definition id |
-| process_instance_id | int | Process instance id |
-| task_json | longtext | Task node json |
-| state | tinyint | Task instance status: 0 submitted successfully, 1 running, 2 ready to be suspended, 3 suspended, 4 ready to stop, 5 stopped, 6 failed, 7 successful, 8 needs fault tolerance, 9 kill, 10 waiting for thread, 11 waiting for dependency to complete |
-| submit_time | datetime | Task submission time |
-| start_time | datetime | Task start time |
-| end_time | datetime | Task end time |
-| host | varchar | The machine performing the task |
-| execute_path | varchar | Task execution path |
-| log_path | varchar | Task log path |
-| alert_flag | tinyint | Whether to alert |
-| retry_times | int | number of retries |
-| pid | int | Process pid |
-| app_link | varchar | yarn app id |
-| flag | tinyint | Availability: 0 is not available, 1 is available |
-| retry_interval | int | Retry interval |
-| max_retry_times | int | Maximum number of retries |
-| task_instance_priority | int | Task instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest |
-| worker_group | varchar | Tasks specify the group of workers to run |
-
-<a name="pPQkU"></a>
-#### t_ds_schedules
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | Primary key |
-| process_definition_id | int | Process definition id |
-| start_time | datetime | Schedule start time |
-| end_time | datetime | Schedule end time |
-| crontab | varchar | crontab expression |
-| failure_strategy | tinyint | Failure strategy: 0 ends, 1 continues |
-| user_id | int | User id |
-| release_state | tinyint | Status: 0 not online, 1 online |
-| warning_type | tinyint | Alarm type: 0 not sent, 1 sent if the process is successful, 2 sent if the process fails, 3 sent whether the process succeeds or fails |
-| warning_group_id | int | Alarm group id |
-| process_instance_priority | int | Process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group | varchar | Tasks specify the group of workers to run |
-| create_time | datetime | Creation time |
-| update_time | datetime | Update time |
-
-<a name="TkQzn"></a>
-#### t_ds_command
-| Field | Type | Comment |
-| --- | --- | --- |
-| id | int | Primary key |
-| command_type | tinyint | Command type: 0 start the workflow, 1 start execution from the current node, 2 resume the fault-tolerant workflow, 3 resume the suspended process, 4 start execution from the failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 Stop, 10 resume waiting thread |
-| process_definition_id | int | Process definition id |
-| command_param | text | Command parameters (json format) |
-| task_depend_type | tinyint | Node dependency type: 0 current node, 1 forward execution, 2 backward execution |
-| failure_strategy | tinyint | Failure strategy: 0 ends, 1 continues |
-| warning_type | tinyint | Alarm type: 0 not sent, 1 sent if the process is successful, 2 sent if the process fails, 3 sent whether the process succeeds or fails |
-| warning_group_id | int | Alarm group |
-| schedule_time | datetime | Expected running time |
-| start_time | datetime | Starting time |
-| executor_id | int | Execute user id |
-| dependence | varchar | Dependent field |
-| update_time | datetime | Update time |
-| process_instance_priority | int | Process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest |
-| worker_group | varchar | Tasks specify the group of workers to run |
-
-
-
diff --git a/docs/en-us/1.3.1/user_doc/quick-start.md b/docs/en-us/1.3.1/user_doc/quick-start.md
deleted file mode 100644
index 8c48e74..0000000
--- a/docs/en-us/1.3.1/user_doc/quick-start.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Quick Start
-
-* Administrator user login
-
-  > Address: http://localhost:12345/dolphinscheduler  Username and password: admin/dolphinscheduler123
-
-<p align="center">
-   <img src="/img/login_en.png" width="60%" />
- </p>
-
-* Create queue
-
-<p align="center">
-   <img src="/img/create-queue-en.png" width="60%" />
- </p>
-
-  * Create tenant
-      <p align="center">
-    <img src="/img/create-tenant-en.png" width="60%" />
-  </p>
-
-  * Create an ordinary user
-<p align="center">
-      <img src="/img/create-user-en.png" width="60%" />
- </p>
-
-  * Create an alarm group
-
- <p align="center">
-    <img src="/img/alarm-group-en.png" width="60%" />
-  </p>
-
-  
-  * Create a worker group
-  
-   <p align="center">
-      <img src="/img/worker-group-en.png" width="60%" />
-    </p>
-    
- * Create a token
-  
-   <p align="center">
-      <img src="/img/token-en.png" width="60%" />
-    </p>
-     
-  
-  * Log in as an ordinary user
-  > Click the user name in the upper right corner, choose "Exit", and log in again as the ordinary user.
-
-  * Project Management - > Create Project - > Click on Project Name
-<p align="center">
-      <img src="/img/create_project_en.png" width="60%" />
- </p>
-
-  * Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
-
-<p align="center">
-   <img src="/img/process_definition_en.png" width="60%" />
- </p>
-
-  * Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
-
- <p align="center">
-   <img src="/img/log_en.png" width="60%" />
-</p>
diff --git a/docs/en-us/1.3.1/user_doc/standalone-deployment.md b/docs/en-us/1.3.1/user_doc/standalone-deployment.md
deleted file mode 100644
index 1a3c8cc..0000000
--- a/docs/en-us/1.3.1/user_doc/standalone-deployment.md
+++ /dev/null
@@ -1,400 +0,0 @@
-# Standalone Deployment
-
-# 1、Before you begin (please install the required basic software yourself)
-
-* [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.7) : Choose One
-* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Required. Make sure the JAVA_HOME and PATH environment variables are configured in /etc/profile
-* [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+) : Required
-* pstree or psmisc : "pstree" is required for Mac OS and "psmisc" is required for Fedora/RedHat/CentOS/Ubuntu/Debian
-* [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download) : Optional. If you need the resource upload function, for a single machine you can use a local directory as the upload folder (this does not require deploying Hadoop). Of course, you can also choose to upload to Hadoop or MinIO.
-
-```markdown
- Tip: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only uses their clients to run the corresponding tasks.
-```
-
-# 2、Download the binary package.
-
-- Please download the latest version of the default installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [download](/en-us/download/download.html). Move the package to the installation and deployment directory and uncompress it.
-
-```shell
-# Create the deployment directory. Do not choose a deployment directory with a high-privilege directory such as / root or / home.
-mkdir -p /opt/dolphinscheduler;
-cd /opt/dolphinscheduler;
-# uncompress
-tar -zxvf apache-dolphinscheduler-incubating-1.3.1-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.3.1-dolphinscheduler-bin  dolphinscheduler-bin
-```
-
-# 3、Create deployment user and hosts mapping
-
-- Create a deployment user on **all** deployment machines, and be sure to configure passwordless sudo. If we plan to deploy DolphinScheduler on 4 machines: ds1, ds2, ds3, and ds4, we first need to create a deployment user on each machine.
-
-```shell
-# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
-useradd dolphinscheduler;
-
-# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
-echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
-
-# Configure sudo passwordless
-echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
-sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
-
-```
-
-```
- Notes:
- - Because the task execution service uses 'sudo -u {linux-user}' to switch between different Linux users and run jobs for multiple tenants, the deployment user needs passwordless sudo permission. First-time learners can ignore this if they don't understand it yet.
- - If you find a "Defaults requiretty" line in the /etc/sudoers file, comment it out as well.
- - If you need to use resource upload, grant the deployment user permission to operate the local file system, HDFS, or MinIO.
-```
-
-# 4、Configure hosts mapping and ssh access and modify directory permissions.
-
-- Use the first machine (hostname is ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and login as root on ds1.
-
-  ```shell
-  vi /etc/hosts
-
-  #add ip hostname
-  192.168.xxx.xxx ds1
-  192.168.xxx.xxx ds2
-  192.168.xxx.xxx ds3
-  192.168.xxx.xxx ds4
-  ```
-
-  *Note: Please delete or comment out the line 127.0.0.1*
-
-- Sync /etc/hosts on ds1 to all deployment machines
-
-  ```shell
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of machines you want to deploy
-  do
-      sudo scp -r /etc/hosts  $ip:/etc/          # Need to enter root password during operation
-  done
-  ```
-
-  *Note: can use `sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/` to avoid type password.*
-
-  > Install sshpass in Centos:
-  >
-  > 1. Install epel
-  >
-  >    yum install -y epel-release
-  >
-  >    yum repolist
-  >
-  > 2. After installing epel, you can install sshpass
-  >
-  >    yum install -y sshpass
-  >
-  >
-
-- On ds1, switch to the deployment user and configure ssh passwordless login
-
-  ```shell
-   su dolphinscheduler;
-
-  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-  chmod 600 ~/.ssh/authorized_keys
-  ```
-  Note: *If the configuration succeeds, the dolphinscheduler user can run `ssh localhost` without entering a password.*
-
-
-
-- On ds1, configure the deployment user dolphinscheduler ssh to connect to other machines to be deployed.
-
-  ```shell
-  su dolphinscheduler;
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of the machine you want to deploy.
-  do
-      ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during the operation.
-  done
-  # can use `sshpass -p xxx ssh-copy-id $ip` to avoid type password.
-  ```
-
-- On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory.
-
-  ```shell
-  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
-  ```
-
-# 5、Database initialization
-
-- Log in to the database. The default database is PostgreSQL. If you choose MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.
-```
-mysql -uroot -p
-```
-
-- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password**
-
- ``` mysql
-    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    mysql> flush privileges;
- ```
-
-- Create tables and import basic data
-
-    - Modify the following configuration in datasource.properties under the conf directory
-
-    ```shell
-      vi conf/datasource.properties
-    ```
-
-    - If you choose MySQL, please comment out the PostgreSQL configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) package to the lib directory, and then configure the database connection information correctly.
-
-    ```properties
-      #postgre
-      #spring.datasource.driver-class-name=org.postgresql.Driver
-      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # Replace the correct IP address
-      spring.datasource.username=xxx						# replace the correct {user} value
-      spring.datasource.password=xxx						# replace the correct {password} value
-    ```
-
-    - After modifying and saving, execute the create table and import data script in the script directory.
-
-    ```shell
-    sh script/create-dolphinscheduler.sh
-    ```
-
-  *Note: If the above script reports a "/bin/java: No such file or directory" error, please configure the JAVA_HOME and PATH variables in /etc/profile.*
-
-# 6、Modify runtime parameters.
-
-- Modify the environment variables in the `dolphinscheduler_env.sh` file in the `conf/env` directory (taking the relevant software installed under `/opt/soft` as an example):
-
-    ```shell
-        export HADOOP_HOME=/opt/soft/hadoop
-        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-        #export SPARK_HOME1=/opt/soft/spark1
-        export SPARK_HOME2=/opt/soft/spark2
-        export PYTHON_HOME=/opt/soft/python
-        export JAVA_HOME=/opt/soft/java
-        export HIVE_HOME=/opt/soft/hive
-        export FLINK_HOME=/opt/soft/flink
-        export DATAX_HOME=/opt/soft/datax/bin/datax.py
-        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-    ```
-
-     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Those that are not used can be ignored or commented out.`
-
-
-
-- Create a soft link from the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example):
-
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
-- Modify the parameters in the one-click deployment config file `conf/config/install_config.conf`, paying special attention to the following parameters.
-
-    ```shell
-    # choose mysql or postgresql
-    dbtype="mysql"
-
-    # Database connection address and port
-    dbhost="localhost:3306"
-
-    # database name
-    dbname="dolphinscheduler"
-
-    # database username
-    username="xxx"
-
-    # database password
-    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
-    password="xxx"
-
-    # Zookeeper address, localhost:2181, remember the port 2181
-    zkQuorum="localhost:2181"
-
-    # Note: the target installation path for dolphinscheduler; do not configure it to be the same as the current path (pwd)
-    installPath="/opt/soft/dolphinscheduler"
-
-    # deployment user
-    # Note: the deployment user needs to have sudo privileges and permission to operate hdfs. If hdfs is enabled, you need to create the root directory yourself
-    deployUser="dolphinscheduler"
-
-    # alert config,take QQ email for example
-    # mail protocol
-    mailProtocol="SMTP"
-
-    # mail server host
-    mailServerHost="smtp.qq.com"
-
-    # mail server port
-    # note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
-    mailServerPort="25"
-
-    # mail sender
-    mailSender="xxx@qq.com"
-
-    # mail user
-    mailUser="xxx@qq.com"
-
-    # mail sender password
-    # note: The mail.passwd is email service authorization code, not the email login password.
-    mailPassword="xxx"
-
-    # Whether the TLS mail protocol is supported, true is supported and false is not supported
-    starttlsEnable="true"
-
-    # Whether the SSL mail protocol is supported, true is supported and false is not supported
-    # note: only one of TLS and SSL can be set to true.
-    sslEnable="false"
-
-    # note: sslTrust is the same as mailServerHost
-    sslTrust="smtp.qq.com"
-
-
-    # resource storage type:HDFS,S3,NONE
-    resourceStorageType="HDFS"
-
-    # here is an example of saving to a local file system
-    # Note: If you want to upload resource files to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or hostname.
-    defaultFS="file:///data/dolphinscheduler"
-
-
-    # if not use hadoop resourcemanager, please keep default value; if resourcemanager HA enable, please type the HA ips ; if resourcemanager is single, make this value empty
-    # Note: For tasks that depend on YARN to execute, you need to ensure that YARN information is configured correctly in order to ensure successful execution results.
-    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-    # if resourcemanager HA enable or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 to actual resourcemanager hostname.
-    singleYarnIp="yarnIp1"
-
-    # resource storage path on HDFS/S3; resource files will be stored under this path. Make sure the directory exists on HDFS and has read/write permissions. /dolphinscheduler is recommended
-    resourceUploadPath="/data/dolphinscheduler"
-
-    # specify the user who have permissions to create directory under HDFS/S3 root path
-    hdfsRootUser="hdfs"
-
-
-
-    # On which machines to deploy the DS service, choose localhost for this machine
-    ips="localhost"
-
-    # ssh port, default 22
-    # Note: if ssh port is not default, modify here
-    sshPort="22"
-
-    # run master machine
-    masters="localhost"
-
-    # run worker machine
-    workers="localhost"
-
-    # run alert machine
-    alertServer="localhost"
-
-    # run api machine
-    apiServers="localhost"
-
-    ```
-
-    *Attention:*
-
-    - If you need to upload resources to the Hadoop cluster and the NameNode of the Hadoop cluster is configured with HA, you need to enable HDFS resource upload and copy the core-site.xml and hdfs-site.xml from the Hadoop cluster to /opt/dolphinscheduler/conf, as sketched below. If the NameNode is not HA, skip this step.
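-
-      A minimal sketch of that copy step (assuming the Hadoop client configuration lives under /etc/hadoop/conf; adjust the source path to your cluster):
-
-      ```shell
-      # Copy the Hadoop HA client configuration into the DolphinScheduler conf directory
-      cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /opt/dolphinscheduler/conf/
-      ```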
-
-# 7. Automated Deployment
-
-- Switch to the deployment user and execute the one-click deployment script
-
-    `sh install.sh`
-
-   ```
-   Note:
-   For the first deployment, the following message may appear during step `3, stop server`; it can be safely ignored:
-   sh: bin/dolphinscheduler-daemon.sh: No such file or directory
-   ```
-
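-   For example (a sketch; `/opt/dolphinscheduler-bin` is an assumed name for the directory where the binary package was extracted):
-
-   ```shell
-   # Run the one-click deployment as the deployment user configured in install_config.conf
-   su - dolphinscheduler
-   cd /opt/dolphinscheduler-bin
-   sh install.sh
-   ```
-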
-- After the script completes, the following 5 services will be started. Use the `jps` command to check whether the services are running (`jps` comes with the JDK):
-
-```
-    MasterServer         ----- master service
-    WorkerServer         ----- worker service
-    LoggerServer         ----- logger service
-    ApiApplicationServer ----- api service
-    AlertServer          ----- alert service
-```
-If the above services are started normally, the automatic deployment is successful.
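-
-A quick check might look like this (a sketch; the service names are exactly those listed above):
-
-```shell
-# All five services should appear in the jps output
-jps | grep -E 'MasterServer|WorkerServer|LoggerServer|ApiApplicationServer|AlertServer'
-```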
-
-
-After the deployment is successful, you can view the logs. The logs are stored in the logs folder.
-
-```
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    ├── dolphinscheduler-worker-server.log
-    ├── dolphinscheduler-api-server.log
-    └── dolphinscheduler-logger-server.log
-```
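-
-For example, to follow the API server log while verifying the deployment (the path is relative to the installPath):
-
-```shell
-# Watch the API server log for startup errors
-tail -f logs/dolphinscheduler-api-server.log
-```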
-
-
-
-# 8. Login
-
-- Access the front-end page at the following address; replace the IP/host if you changed it:
-http://localhost:12345/dolphinscheduler
-
-   <p align="center">
-     <img src="/img/login.png" width="60%" />
-   </p>
-
-
-
-# 9. Start and Stop Services
-
-* Stop all services
-
-  ` sh ./bin/stop-all.sh`
-
-* Start all services
-
-  ` sh ./bin/start-all.sh`
-
-* Start and stop master service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* Start and stop worker Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* Start and stop api Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-
-* Start and stop logger Service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-
-* Start and stop alert service
-
-```shell
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-`Note: Please refer to the "Architecture Design" section for service usage.`
-
diff --git a/docs/en-us/1.3.1/user_doc/system-manual.md b/docs/en-us/1.3.1/user_doc/system-manual.md
deleted file mode 100644
index 945b2cb..0000000
--- a/docs/en-us/1.3.1/user_doc/system-manual.md
+++ /dev/null
@@ -1,836 +0,0 @@
-# System User Manual
-
-
-## Get started quickly
-
-> Please refer to [Get started quickly](./quick-start.md)
-
-## Operation guide
-### 1. Home
-The home page contains task status statistics, process status statistics, and workflow definition statistics for all of the user's projects.
-    <p align="center">
-        <img src="/img/home_en.png" width="80%" />
-    </p>
-
-### 2. Project management
-#### 2.1 Create project
-- Click "Project Management" to enter the project management page, click the "Create Project" button, enter the project name, project description, and click "Submit" to create a new project。
-
-    <p align="center">
-        <img src="/img/create_project_en.png" width="80%" />
-    </p>
-
-#### 2.2 Project Home
-- Click the project name link on the project management page to enter the project home page. As shown in the figure below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of the project.
-
-    <p align="center">
-        <img src="/img/project_home_en.png" width="80%" />
-     </p>
-
-- Task status statistics: within the specified time range, count the number of task instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault tolerance, killed, and waiting for thread
-- Process status statistics: within the specified time range, count the number of workflow instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault tolerance, killed, and waiting for thread
-- Workflow definition statistics: count the workflow definitions created by the user and the workflow definitions granted to the user by the administrator
-
-#### 2.3 Workflow definition
-#### <span id=creatDag>2.3.1 Create a workflow definition</span>
-- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, and click the "Create Workflow" button to enter the **workflow DAG editing** page, as shown in the figure below:
-    <p align="center">
-        <img src="/img/dag5.png" width="80%" />
-    </p>
-- Drag the <img src="/img/tasks/icons/shell.png" width="15"/> icon from the toolbar onto the canvas to add a Shell task, as shown in the figure below:
-
-![demo-shell-simple](/img/tasks/demo/shell.jpg)
-
-  - **Add parameter settings for shell tasks:**
-  1. Fill in the "Node Name", "Description", and "Script" fields;
-  2. Check “Normal” for “Run Flag”. If you check “Execution Prohibited”, the task will not be executed when running the workflow;
-  3. Select "Task Priority": When the number of worker threads is insufficient, high-level tasks will be executed first in the execution queue, and tasks with the same priority will be executed in the order of first in, first out;
-  4. Timeout alarm (not required): Check the timeout alarm and timeout failure, and fill in the "timeout period". When the task execution time exceeds the **timeout period**, an alert email will be sent and the task will fail due to timeout;
-  5. Resources (optional). The resource file is a file created or uploaded on the Resource Center -> File Management page. For example, the file name is `test.sh`, and the resource call command in the script is `sh test.sh`;
-  <!-- markdown-link-check-disable-next-line -->
-  6. Custom parameters (not required), refer to [Custom Parameters](#UserDefinedParameters);
-  7. Click the "Confirm Add" button to save the task settings.
-  
-  - **Set the task execution order:** click the <img src="/img/line.png" width="35"/> icon in the upper right corner to connect tasks. As shown in the figure below, task 2 and task 3 are executed in parallel: when task 1 completes, tasks 2 and 3 start at the same time.
-
-    <p align="center">
-        <img src="/img/dag6.png" width="80%" />
-    </p>
-
-- **Remove dependencies:** click the "arrow" icon <img src="/img/arrow.png" width="35"/> in the upper right corner, select the connecting line, and click the "delete" icon <img src="/img/delete.png" width="35"/> in the upper right corner to remove the dependency between tasks.
-    <p align="center">
-       <img src="/img/dag7.png" width="80%" />
-    </p>
-
-<!-- markdown-link-check-disable-next-line -->
-- **Save the workflow definition:** click the "Save" button, and the "Set DAG Diagram Name" dialog will appear, as shown in the figure below. Enter the workflow definition name, the workflow definition description, and set global parameters (optional, refer to [Custom Parameters](#UserDefinedParameters)), then click the "Add" button; the workflow definition is created successfully.
-    <p align="center">
-       <img src="/img/dag8.png" width="80%" />
-     </p>
-  > For other types of tasks, please refer to [Task Node Type and Parameter Settings](#TaskParamers). <!-- markdown-link-check-disable-line -->
-#### 2.3.2  Workflow definition operation function
-  Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown below:
-      <p align="center">
-          <img src="/img/work_list_en.png" width="80%" />
-      </p>
-  The operation functions of the workflow definition list are as follows:
-  - **Edit:** Only "offline" workflow definitions can be edited. Workflow DAG editing is the same as [Create Workflow Definition](#creatDag).<!-- markdown-link-check-disable-line -->
-  - **Online:** When the workflow status is "offline", bring the workflow online. Only workflows in the "online" state can run; they cannot be edited.
-  - **Offline:** When the workflow status is "online", take the workflow offline. An offline workflow can be edited but not run.
-  - **Run:** Only online workflows can run. See [2.3.3 Run Workflow](#runWorkflow) for the operation steps. <!-- markdown-link-check-disable-line -->
-  - **Timing:** Only an online workflow can be scheduled; the system then automatically runs the workflow on schedule. The status after creating a timing is "offline", and the timing must be brought online on the timing management page to take effect. For timing operation steps, please refer to [2.3.4 Workflow Timing](#creatTiming). <!-- markdown-link-check-disable-line -->
-  - **Timing management:** On the timing management page, timings can be edited, brought online/offline, and deleted.
-  - **Delete:** Delete the workflow definition.
-  - **Download:** Download the workflow definition to the local machine.
-  - **Tree diagram:** Display the task node type and task status in a tree structure, as shown in the figure below:
-    <p align="center">
-        <img src="/img/tree_en.png" width="80%" />
-    </p> 
-
-#### <span id=runWorkflow>2.3.3 Run the workflow</span>
-  - Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the "Go Online" button <img src="/img/online.png" width="35"/> to bring the workflow online.
-    <p align="center">
-        <img src="/img/work_list_en.png" width="80%" />
-    </p>
-
-  - Click the "Run" button to pop up the startup parameter setting pop-up box, as shown in the figure below, set the startup parameters, click the "Run" button in the pop-up box, the workflow starts running, and the workflow instance page generates a workflow instance.
-     <p align="center">
-       <img src="/img/run_work_en.png" width="80%" />
-     </p>  
-  <span id=runParamers>Description of workflow operating parameters:</span> 
-       
-    * Failure strategy: the strategy applied to other parallel task nodes when a task node fails. "Continue" means that after a task fails, the other task nodes continue to execute normally; "End" means that all running tasks are terminated and the entire process is terminated.
-    * Notification strategy: when the process ends, a process execution notification email is sent according to the process status; the options are: none, success, failure, success or failure.
-    * Process priority: the priority of process execution, divided into five levels: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. When the number of master threads is insufficient, higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in, first-out order.
-    * Worker group: the process can only be executed in the specified worker machine group. The default is Default, which can run on any worker.
-    * Notification group: when the notification strategy, a timeout alarm, or fault tolerance is triggered, process information or emails will be sent to all members of the notification group.
-    * Recipient: when the notification strategy, a timeout alarm, or fault tolerance is triggered, process information or alarm emails will be sent to the recipient list.
-    * Cc: when the notification strategy, a timeout alarm, or fault tolerance is triggered, process information or alarm emails will be copied to the Cc list.
-    * Complement: two modes are available, serial complement and parallel complement. Serial complement: within the specified time range, the complement is executed sequentially from the start date to the end date, generating one process instance; parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances.
-  * Complement: execute the workflow definition for the specified dates. You can choose the time range of the complement (currently only consecutive days are supported). For example, to backfill the data from May 1 to May 10, as shown in the following figure:
-    <p align="center">
-        <img src="/img/complement_en.png" width="80%" />
-    </p>
-
-    >Serial mode: The complement is executed sequentially from May 1 to May 10, and a process instance is generated on the process instance page;
-    
-    >Parallel mode: The tasks from May 1 to May 10 are executed simultaneously, and ten process instances are generated on the process instance page.
-
-#### <span id=creatTiming>2.3.4 Workflow timing</span>
-  - Create timing: click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, bring the workflow online, and click the "timing" button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog will pop up, as shown in the figure below:
-    <p align="center">
-        <img src="/img/time_schedule_en.png" width="80%" />
-    </p>
-  - Choose the start and end time. Within the start and end time range, the workflow runs on schedule; outside that range, no more scheduled workflow instances are generated.
-  - Add a timing that is executed once every day at 5 AM, as shown in the following figure:
-    <p align="center">
-        <img src="/img/timer-en.png" width="80%" />
-    </p>
-  - Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and Cc are the same as the [workflow running parameters](#runParamers). <!-- markdown-link-check-disable-line -->
-  - Click the "Create" button, and the timing is successfully created. At this time, the timing status is "**Offline**", and the timing needs to be **Online** to take effect.
-  - Timing online: click the "Timing Management" button <img src="/img/timeManagement.png" width="35"/> to enter the timing management page, and click the "online" button; the timing status changes to "online", as shown in the figure below, and the workflow now runs on schedule.
-    <p align="center">
-        <img src="/img/time-manage-list-en.png" width="80%" />
-    </p>
-
-#### 2.3.5 Import workflow
-  Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, click the "Import Workflow" button to import the local workflow file, the workflow definition list displays the imported workflow, and the status is offline.
-
-#### 2.4 Workflow instance
-#### 2.4.1 View workflow instance
-   - Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:
-        <p align="center">
-           <img src="/img/instance-list-en.png" width="80%" />
-        </p>           
-   -  Click the workflow name to enter the DAG view page to view the task execution status, as shown in the figure below.
-      <p align="center">
-        <img src="/img/instance-runs-en.png" width="80%" />
-      </p>
-#### 2.4.2 View task log
-   - Enter the workflow instance page, click the workflow name, enter the DAG view page, double-click the task node, as shown in the following figure:
-      <p align="center">
-        <img src="/img/instanceViewLog-en.png" width="80%" />
-      </p>
-   - Click "View Log", a log pop-up box will pop up, as shown in the figure below, the task log can also be viewed on the task instance page, refer to [Task View Log](#taskLog)。 <!-- markdown-link-check-disable-line -->
-      <p align="center">
-        <img src="/img/task-log-en.png" width="80%" />
-      </p>
-#### 2.4.3 View task history
-   - Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
-   - Double-click the task node, as shown in the figure below, and click "View History" to jump to the task instance page, which displays the list of task instances run by this workflow instance
-      <p align="center">
-        <img src="/img/task_history_en.png" width="80%" />
-      </p>
-      
-#### 2.4.4 View operating parameters
-   - Click Project Management -> Workflow -> Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
-   - Click the <img src="/img/run_params_button.png" width="35"/> icon in the upper left corner to view the startup parameters of the workflow instance; click the <img src="/img/global_param.png" width="35"/> icon to view the global and local parameters of the workflow instance, as shown in the following figure:
-      <p align="center">
-        <img src="/img/run_params_en.png" width="80%" />
-      </p>      
- 
-#### 2.4.5 Workflow instance operation function
-   Click Project Management -> Workflow -> Workflow Instance to enter the Workflow Instance page, as shown in the figure below:          
-      <p align="center">
-        <img src="/img/instance-list-en.png" width="80%" />
-      </p>
-
-  - **Edit:** only terminated processes can be edited. Click the "Edit" button or the name of the workflow instance to enter the DAG editing page. After editing, click the "Save" button and the Save DAG dialog will appear, as shown in the figure below. If you check "Whether to update to workflow definition" in the dialog and save, the workflow definition will be updated; if it is not checked, the workflow definition will not be updated.
-       <p align="center">
-         <img src="/img/editDag-en.png" width="80%" />
-       </p>
-  - **Rerun:** re-execute the terminated process.
-  - **Recovery failed:** for failed processes, a recovery operation can be performed, starting from the failed node.
-  - **Stop:** **stop** the running process; the background will first `kill` the worker process, and then execute a `kill -9` operation.
-  - **Pause:** **pause** the running process; the system status changes to **waiting for execution**, waits for the currently executing task to finish, and pauses the next task to be executed.
-  - **Resume pause:** resume the paused process, starting directly from the **paused node**.
-  - **Delete:** delete the workflow instance and the task instances under it.
-  - **Gantt chart:** The vertical axis of the Gantt chart is the topological sorting of task instances under a certain workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:         
-       <p align="center">
-           <img src="/img/gant-en.png" width="80%" />
-       </p>
-
-#### 2.5 Task instance
-  - Click Project Management -> Workflow -> Task Instance to enter the task instance page, as shown in the figure below. Click the name of a workflow instance to jump to its DAG view and check the task status.
-       <p align="center">
-          <img src="/img/task-list-en.png" width="80%" />
-       </p>
-
-  - <span id=taskLog>View log:</span> click the "View Log" button in the operation column to view the log of the task execution.
-       <p align="center">
-          <img src="/img/task-log2-en.png" width="80%" />
-       </p>
-
-### 3. Resource Center
-#### 3.1 HDFS resource configuration
-  - Upload resource files and UDF functions: all uploaded files and resources are stored on HDFS, so the following configuration items are required:
-  
-```  
-conf/common.properties  
-    # Users who have permission to create directories under the HDFS root path
-    hdfs.root.user=hdfs
-    # base dir: resource files will be stored under this HDFS path. Make sure the directory exists on HDFS and has read/write permissions. "/dolphinscheduler" is recommended
-    resource.upload.path=/dolphinscheduler
-    # resource storage type : HDFS,S3,NONE
-    resource.storage.type=HDFS
-    # whether kerberos starts
-    hadoop.security.authentication.startup.state=false
-    # java.security.krb5.conf path
-    java.security.krb5.conf.path=/opt/krb5.conf
-    # loginUserFromKeytab user
-    login.user.keytab.username=hdfs-mycluster@ESZ.COM
-    # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab    
-    # if resource.storage.type is HDFS,and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
-    # if resource.storage.type is S3, write the S3 address here, for example: s3a://dolphinscheduler
-    # Note: for S3, be sure to create the root directory /dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020    
-    # resourcemanager HA: list the HA IPs here; leave this empty for a single resourcemanager
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
-    # If there is a single resourcemanager, you only need to configure one host name; if resourcemanager HA is enabled, the default configuration is fine
-    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-
-```
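-
-As a sketch (assuming the `hdfs` client is available and `hdfs.root.user=hdfs` as configured above), the resource upload path can be prepared like this:
-
-```shell
-# Create the resource upload path on HDFS and grant read/write access to tenant users
-sudo -u hdfs hdfs dfs -mkdir -p /dolphinscheduler
-sudo -u hdfs hdfs dfs -chmod -R 775 /dolphinscheduler
-```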
-
-#### 3.2 File management
-
-  > It is the management of various resource files, including the creation of basic txt/log/sh/conf/py/java and other files, uploading jar packages and other types of files, which can be edited, renamed, downloaded, and deleted.
-  <p align="center">
-   <img src="/img/file-manage-en.png" width="80%" />
- </p>
-
-  * Create a file
- > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties
-
-<p align="center">
-   <img src="/img/file_create_en.png" width="80%" />
- </p>
-
-  * Upload files
-
-> Upload file: click the "Upload File" button or drag the file to the upload area; the file name will be automatically completed with the uploaded file name
-
-<p align="center">
-   <img src="/img/file-upload-en.png" width="80%" />
- </p>
-
- * File View
-
-> For the file types that can be viewed, click the file name to view the file details
-
-<p align="center">
-   <img src="/img/file_detail_en.png" width="80%" />
- </p>
-
-  * Download file
-
-> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file
-
-  * File rename
-
-<p align="center">
-   <img src="/img/file_rename_en.png" width="80%" />
- </p>
-
-  * Delete
->  File list -> Click the "Delete" button to delete the specified file
-
-#### 3.3 UDF management
-#### 3.3.1 Resource management
-  > Resource management is similar to file management. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts, and configuration files.
-  > Operation functions: rename, download, delete.
-
-  * Upload udf resources
-  > Same as uploading files.
-  
-
-#### 3.3.2 Function management
-
-  * Create UDF function
-  > Click "Create UDF Function", enter the udf function parameters, select the udf resource, and click "Submit" to create the udf function.
-
- > Currently only temporary HIVE UDF functions are supported
-
-  - UDF function name: the name used when the UDF function is called
-  - Package name Class name: enter the full class path of the UDF function
-  - UDF resource: Set the resource file corresponding to the created UDF
-
-<p align="center">
-   <img src="/img/udf_edit_en.png" width="80%" />
- </p>
-
-### 4. Create data source
-  > Data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, ORACLE, SQLSERVER and other data sources
-
-#### 4.1 Create/Edit MySQL data source
-
-  - Click "Data Source Center -> Create Data Source" to create different types of data sources according to requirements.
-
-  - Data source: select MYSQL
-  - Data source name: enter the name of the data source
-  - Description: Enter a description of the data source
-  - IP hostname: enter the IP to connect to MySQL
-  - Port: Enter the port to connect to MySQL
-  - Username: Set the username for connecting to MySQL
-  - Password: Set the password for connecting to MySQL
-  - Database name: Enter the name of the database connected to MySQL
-  - Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON
-
-<p align="center">
-   <img src="/img/mysql-en.png" width="80%" />
- </p>
-
-  > Click "Test Connection" to test whether the data source can be successfully connected.
-
-#### 4.2 Create/Edit POSTGRESQL data source
-
-- Data source: select POSTGRESQL
-- Data source name: enter the name of the data source
-- Description: Enter a description of the data source
-- IP/Host Name: Enter the IP to connect to POSTGRESQL
-- Port: Enter the port to connect to POSTGRESQL
-- Username: Set the username for connecting to POSTGRESQL
-- Password: Set the password for connecting to POSTGRESQL
-- Database name: Enter the name of the database connected to POSTGRESQL
-- Jdbc connection parameters: parameter settings for the POSTGRESQL connection, filled in as JSON
-
-<p align="center">
-   <img src="/img/postgresql-en.png" width="80%" />
- </p>
-
-#### 4.3 Create/Edit HIVE data source
-
-1. Use HiveServer2 to connect
-
- <p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-
-- Data source: select HIVE
-- Data source name: enter the name of the data source
-- Description: Enter a description of the data source
-- IP/Host Name: Enter the IP connected to HIVE
-- Port: Enter the port connected to HIVE
-- Username: Set the username for connecting to HIVE
-- Password: Set the password for connecting to HIVE
-- Database name: Enter the name of the database connected to HIVE
-- Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON
-
-2. Use HiveServer2 HA (ZooKeeper) to connect
-
- <p align="center">
-    <img src="/img/hive1-en.png" width="80%" />
-  </p>
-
-
-Note: If you enable **kerberos**, you need to fill in **Principal**
-<p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-
-#### 4.4 Create/Edit Spark data source
-
-<p align="center">
-   <img src="/img/spark-en.png" width="80%" />
- </p>
-
-- Data source: select Spark
-- Data source name: enter the name of the data source
-- Description: Enter a description of the data source
-- IP/Hostname: Enter the IP connected to Spark
-- Port: Enter the port connected to Spark
-- Username: Set the username for connecting to Spark
-- Password: Set the password for connecting to Spark
-- Database name: Enter the name of the database connected to Spark
-- Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON
-
-### 5. Security Center (Permission System)
-
-  - Only the administrator account in the Security Center has permission to operate it. It provides queue management, tenant management, user management, alarm group management, worker group management, token management, and other functions. In the user management module, resources, data sources, projects, etc. can be authorized.
-  - Administrator login, default username and password: admin/dolphinscheduler123
-
-#### 5.1 Create queue
-  - Queue is used when the "queue" parameter is needed to execute programs such as spark and mapreduce.
-  - The administrator enters the Security Center->Queue Management page and clicks the "Create Queue" button to create a queue.
- <p align="center">
-    <img src="/img/create-queue-en.png" width="80%" />
-  </p>
-
-#### 5.2 Add tenant
-  - The tenant corresponds to the Linux user, which is used by the worker to submit the job. If Linux does not have this user, the worker will create this user when executing the script.
-  - Tenant Code: **the tenant code is a unique Linux user and cannot be repeated**
-  - The administrator enters the Security Center->Tenant Management page and clicks the "Create Tenant" button to create a tenant.
-
- <p align="center">
-    <img src="/img/addtenant-en.png" width="80%" />
-  </p>
-
-#### 5.3 Create normal user
-  -  Users are divided into **administrator users** and **normal users**
-  
-    * The administrator has authorization and user management permissions, but does not have permission to create projects or workflow definitions.
-    * Ordinary users can create projects and create, edit, and execute workflow definitions.
-    * Note: If the user switches tenants, all resources under the tenant where the user belongs will be copied to the new tenant that is switched.
-  - The administrator enters the Security Center -> User Management page and clicks the "Create User" button to create a user.        
-<p align="center">
-   <img src="/img/user-en.png" width="80%" />
- </p>
-  
-  > **Edit user information** 
-   - The administrator enters the Security Center->User Management page and clicks the "Edit" button to edit user information.
-   - After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the "Edit" button to edit the user information.
-  
-  > **Modify user password** 
-   - The administrator enters the Security Center->User Management page and clicks the "Edit" button. When editing user information, enter the new password to modify the user password.
-   - After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the "Edit" button, then the password modification is successful.
-   
-
-#### 5.4 Create alarm group
-  * The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group in the form of email.
-  - The administrator enters the Security Center -> Alarm Group Management page and clicks the "Create Alarm Group" button to create an alarm group.
-
-  <p align="center">
-    <img src="/img/mail-en.png" width="80%" />
-  </p>
-
-
-#### 5.5 Token management
-  > Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.
-  - The administrator enters the Security Center -> Token Management page, clicks the "Create Token" button, selects the expiration time and user, clicks the "Generate Token" button, and clicks the "Submit" button, then the selected user's token is created successfully.
-
-  <p align="center">
-      <img src="/img/creat-token-en.png" width="80%" />
-   </p>
-  
-  - After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the "generate token" button, and click the "submit" button, then the user creates a token successfully.
-    
-  - Call example:
-  
-```java
-    // Token call example (Apache HttpClient 4.x); imports added for completeness, the enclosing class is omitted
-    import java.util.ArrayList;
-    import java.util.List;
-
-    import org.apache.http.NameValuePair;
-    import org.apache.http.client.entity.UrlEncodedFormEntity;
-    import org.apache.http.client.methods.CloseableHttpResponse;
-    import org.apache.http.client.methods.HttpPost;
-    import org.apache.http.impl.client.CloseableHttpClient;
-    import org.apache.http.impl.client.HttpClients;
-    import org.apache.http.message.BasicNameValuePair;
-    import org.apache.http.util.EntityUtils;
-
-    /**
-     * test token
-     */
-    public  void doPOSTParam()throws Exception{
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/escheduler/projects/create");
-        httpPost.setHeader("token", "123");
-        // set parameters
-        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
-        parameters.add(new BasicNameValuePair("projectName", "qzw"));
-        parameters.add(new BasicNameValuePair("desc", "qzw"));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-```
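-
-The same call can be made from the command line (a sketch; the host, port, and token value are the placeholders from the example above):
-
-```shell
-# Create a project through the API, authenticating with a token header
-curl -X POST "http://127.0.0.1:12345/escheduler/projects/create" \
-     -H "token: 123" \
-     -d "projectName=qzw" \
-     -d "desc=qzw"
-```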
-
-#### 5.6 Granted permission
-
-    * Granted permissions include project permissions, resource permissions, data source permissions, and UDF function permissions.
-    * The administrator can authorize projects, resources, data sources, and UDF functions that were not created by the ordinary user. Because the authorization methods for projects, resources, data sources, and UDF functions are the same, we take project authorization as an example.
-    * Note: the user has all permissions for projects created by the user himself, so such projects are not displayed in the project list or the selected project list.
- 
-  - The administrator enters the Security Center -> User Management page and clicks the "Authorize" button of the user who needs to be authorized, as shown in the figure below:
-  <p align="center">
-   <img src="/img/auth-en.png" width="80%" />
- </p>
-
-  - Select the project to authorize the project.
-
-<p align="center">
-   <img src="/img/authproject-en.png" width="80%" />
- </p>
-  
-  - Resources, data sources, and UDF function authorization are the same as project authorization.
-
-### 6. Monitoring Center
-
-#### 6.1 Service management
-  - Service management is mainly to monitor and display the health status and basic information of each service in the system
-
-#### 6.1.1 master monitoring
-  - Mainly related to master information.
-<p align="center">
-   <img src="/img/master-jk-en.png" width="80%" />
- </p>
-
-#### 6.1.2 worker monitoring
-  - Mainly related to worker information.
-
-<p align="center">
-   <img src="/img/worker-jk-en.png" width="80%" />
- </p>
-
-#### 6.1.3 Zookeeper monitoring
-  - Mainly the configuration information of each worker and master registered in ZooKeeper.
-
-<p align="center">
-   <img src="/img/zookeeper-monitor-en.png" width="80%" />
- </p>
-
-#### 6.1.4 DB monitoring
-  - Mainly the health of the DB
-
-<p align="center">
-   <img src="/img/mysql-jk-en.png" width="80%" />
- </p>
- 
-#### 6.2 Statistics management
-<p align="center">
-   <img src="/img/statistics-en.png" width="80%" />
- </p>
- 
-  - Number of commands to be executed: statistics on the t_ds_command table
-  - The number of failed commands: statistics on the t_ds_error_command table
-  - Number of tasks to run: Count the data of task_queue in Zookeeper
-  - Number of tasks to be killed: Count the data of task_kill in Zookeeper
- 
-### 7. <span id=TaskParamers>Task node type and parameter settings</span>
-
-#### 7.1 Shell node
-  > When a Shell node is executed, the worker generates a temporary shell script and executes it as the Linux user with the same name as the tenant.
-  - Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-  - Drag <img src="/img/tasks/icons/shell.png" width="15"/> from the toolbar to the drawing board, as shown in the figure below:
-
-    ![demo-shell-simple](/img/tasks/demo/shell.jpg)
-
-- Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retries: the number of times a failed task will be resubmitted. It can be selected from the drop-down or filled in manually.
-- Failed retry interval: the time interval before a failed task is resubmitted. It can be selected from the drop-down or filled in manually.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Script: SHELL program developed by users.
-- Resource: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
-- User-defined parameters: local parameters of the SHELL task; occurrences of ${variable} in the script will be replaced with the parameter values (see the sketch below).
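-
-A minimal sketch of such a script (`bizdate` is a hypothetical user-defined parameter; `test.sh` is the resource file example mentioned earlier):
-
-```shell
-# ${bizdate} is replaced with the value of the user-defined parameter before execution
-echo "running for business date ${bizdate}"
-sh test.sh ${bizdate}
-```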
-
-#### 7.2 Sub-process node
-  - The sub-process node is to execute a certain external workflow definition as a task node.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/sub-process-en.png" width="80%" />
- </p>
-
-- Node name: The node name in a workflow definition is unique
-- Run flag: identify whether this node can be scheduled normally
-- Descriptive information: describe the function of the node
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Sub-node: It is the workflow definition of the selected sub-process. Enter the sub-node in the upper right corner to jump to the workflow definition of the selected sub-process
-
-#### 7.3 DEPENDENT node
-  - Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node will check whether process B has a successful execution yesterday.
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node in the toolbar to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/dependent-nodes-en.png" width="80%" />
- </p>
-
-  > The dependent node provides a logical judgment function, such as checking whether the B process was successful yesterday, or whether the C process was executed successfully.
-
-  <p align="center">
-   <img src="/img/depend-node-en.png" width="80%" />
- </p>
-
-  > For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:
-
- <p align="center">
-   <img src="/img/depend-node1-en.png" width="80%" />
- </p>
-
-  > If the weekly report A also needs to be executed successfully last Tuesday:
-
- <p align="center">
-   <img src="/img/depend-node3-en.png" width="80%" />
- </p>
-
-#### 7.4 Stored procedure node
-  - According to the selected data source, execute the stored procedure.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node from the toolbar to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/procedure-en.png" width="80%" />
- </p>
-
-- Data source: The data source type of the stored procedure supports MySQL and POSTGRESQL, select the corresponding data source
-- Method: is the method name of the stored procedure
-- Custom parameters: The custom parameter types of the stored procedure support IN and OUT, and the data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN
-
-#### 7.5 SQL node
-  - Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png) task node from the toolbar to the drawing board
-  - Non-query SQL function: edit non-query SQL task information, select non-query for sql type, as shown in the figure below:
-  <p align="center">
-   <img src="/img/sql-en.png" width="80%" />
- </p>
-
-  - Query SQL function: edit query SQL task information; set the sql type to query, and choose form or attachment to send the result by email to the specified recipients, as shown in the figure below.
-
-<p align="center">
-   <img src="/img/sql-node-en.png" width="80%" />
- </p>
-
-- Data source: select the corresponding data source
-- sql type: supports query and non-query. A query is a SELECT-type query that returns a result set; you can specify one of three email notification templates: form, attachment, or form attachment. A non-query returns no result set and covers update, delete, and insert operations.
-- sql parameter: the input parameter format is key1=value1;key2=value2...
-- sql statement: SQL statement
-- UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.
-- Custom parameters: the custom parameter types and data types are the same as for the stored procedure task type. The difference is that the custom parameters of the SQL task type replace ${variable} in the SQL statement, whereas stored procedure custom parameters set values for the method in order.
-- Pre-sql: Pre-sql is executed before the sql statement.
-- Post-sql: Post-sql is executed after the sql statement.
-
-
-#### 7.6 SPARK node
-  - Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the `spark-submit` method to submit tasks
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png) task node from the toolbar to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/spark_edit.png" width="80%" />
- </p>
-
-- Program type: supports JAVA, Scala and Python three languages
-- The class of the main function: is the full path of the Spark program’s entry Main Class
-- Main jar package: Spark jar package
-- Deployment mode: support three modes of yarn-cluster, yarn-client and local
-- Driver cores and memory: you can set the number of Driver cores and the amount of Driver memory
-- Executors: you can set the number of Executors, the amount of Executor memory, and the number of Executor cores
-- Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.
-- Other parameters: support --jars, --files, --archives, --conf format
-- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- User-defined parameter: local user-defined parameters of the Spark task; occurrences of ${variable} in the script will be replaced with their values
-
- Note: JAVA and Scala are only used for identification; there is no difference between them. If the Spark program is developed in Python, there is no main function class, and everything else is the same.
-
-#### 7.7 MapReduce (MR) node
-  - Using the MR node, you can directly execute the MR program. For the mr node, the worker will use the `hadoop jar` method to submit tasks
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png) task node in the toolbar to the drawing board, as shown in the following figure:
-
- 1. JAVA program
-
- <p align="center">
-   <img src="/img/mr_java_en.png" width="80%" />
- </p>
- 
-- The class of the main function: is the full path of the Main Class, the entry point of the MR program
-- Program type: select JAVA language
-- Main jar package: is the MR jar package
-- Command line parameters: set the input parameters of the MR program and support the substitution of custom parameter variables
-- Other parameters: support -D, -files, -libjars, -archives format
-- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script
-
-2. Python program
-
-<p align="center">
-   <img src="/img/mr_edit_en.png" width="80%" />
- </p>
-
-- Program type: select Python language
-- Main jar package: the jar package used to run the MR Python program
-- Other parameters: support the -D, -mapper, -reducer, -input, and -output formats; user-defined parameters can be set here, for example:
-- -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}
-- The "mapper.py 1" after -mapper consists of two parameters: the first is mapper.py and the second is 1
-- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script
-
-#### 7.8 Python Node
-  - Using Python nodes, you can directly execute Python scripts. For a Python node, the worker uses the `python` command to submit the task.
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node from the toolbar to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/python-en.png" width="80%" />
- </p>
-
-- Script: Python program developed by the user
-- Resources: refers to the list of resource files that need to be called in the script
-- User-defined parameter: It is a local user-defined parameter of Python, which will replace the content with ${variable} in the script
-- Note: if you import a Python file from the resource directory tree, you need to add an `__init__.py` file
-
-#### 7.9 Flink Node
-  - Drag in the toolbar<img src="/img/flink.png" width="35"/>The task node to the drawing board, as shown in the following figure:
-
-<p align="center">
-  <img src="/img/flink-en.png" width="80%" />
-</p>
-
-
-- Program type: supports JAVA, Scala and Python three languages
-- The class of the main function: is the full path of the Main Class, the entry point of the Flink program
-- Main jar package: is the Flink jar package
-- Deployment mode: supports two modes, cluster and local
-- Number of slots: You can set the number of slots
-- Number of TaskManagers: you can set the number of TaskManagers
-- JobManager memory: you can set the JobManager memory size
-- TaskManager memory: you can set the TaskManager memory size
-- Command line parameters: set the input parameters of the Flink program; substitution of custom parameter variables is supported.
-- Other parameters: support --jars, --files, --archives, --conf format
-- Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- Custom parameter: It is a local user-defined parameter of Flink, which will replace the content with ${variable} in the script
-
- Note: JAVA and Scala are only used for identification; there is no difference between them. If the Flink program is developed in Python, there is no main function class, and everything else is the same.
-
-#### 7.10 HTTP Node
-
-  - Drag in the toolbar<img src="/img/http.png" width="35"/>The task node to the drawing board, as shown in the following figure:
-
-<p align="center">
-   <img src="/img/http-en.png" width="80%" />
- </p>
-
-- Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Request address: http request URL.
-- Request type: supports GET, POST, HEAD, PUT, and DELETE.
-- Request parameters: Support Parameter, Body, Headers.
-- Verification conditions: support default response code, custom response code, content included, content not included.
-- Verification content: required when the verification condition is custom response code, content included, or content not included.
-- Custom parameter: It is a user-defined parameter of http part, which will replace the content with ${variable} in the script.
-
-#### 7.11 DATAX Node
-
-  - Drag in the toolbar<img src="/img/datax.png" width="35"/>Task node into the drawing board
-
-  <p align="center">
-   <img src="/img/datax-en.png" width="80%" />
-  </p>
-
-- Custom template: When you turn on the custom template switch, you can customize the content of the json configuration file of the datax node (applicable when the control configuration does not meet the requirements)
-- Data source: select the data source to extract the data
-- sql statement: the sql statement used to extract data from the target database, the sql query column name is automatically parsed when the node is executed, and mapped to the target table synchronization column name. When the source table and target table column names are inconsistent, they can be converted by column alias (as)
-- Target library: select the target library for data synchronization
-- Target table: the name of the target table for data synchronization
-- Pre-sql: Pre-sql is executed before the sql statement (executed by the target library).
-- Post-sql: Post-sql is executed after the sql statement (executed by the target library).
-- json: json configuration file for datax synchronization
-- Custom parameters: the custom parameter types and data types are the same as for the stored procedure task type. The difference is that the custom parameters of the SQL task type replace ${variable} in the SQL statement.
-
-### 8. Parameters
-#### 8.1 System parameters
-
-<table>
-    <tr><th>variable</th><th>meaning</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>The day before the scheduled time of the daily scheduling instance, in yyyyMMdd format; when complementing data, the date is +1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>The scheduled time of the daily scheduling instance, in yyyyMMdd format; when complementing data, the date is +1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>The scheduled time of the daily scheduling instance, in yyyyMMddHHmmss format; when complementing data, the date is +1</td>
-    </tr>
-</table>
-
-
-#### 8.2 Time custom parameters
-
-  - Support custom variable names in the code, declaration method: ${variable name}. It can refer to "system parameters" or specify "constants".
-
-  - We define this benchmark variable as $[...] format, $[yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as: $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.
-
-  - The following formats can also be used (a usage sketch follows the list):
-  
-
-        * Next N years: $[add_months(yyyyMMdd,12*N)]
-        * N years before: $[add_months(yyyyMMdd,-12*N)]
-        * Next N months: $[add_months(yyyyMMdd,N)]
-        * N months before: $[add_months(yyyyMMdd,-N)]
-        * Next N weeks: $[yyyyMMdd+7*N]
-        * N weeks before: $[yyyyMMdd-7*N]
-        * Next N days: $[yyyyMMdd+N]
-        * N days before: $[yyyyMMdd-N]
-        * Next N hours: $[HHmmss+N/24]
-        * N hours before: $[HHmmss-N/24]
-        * Next N minutes: $[HHmmss+N/24/60]
-        * N minutes before: $[HHmmss-N/24/60]
-
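-For example (a sketch): a Shell node could define a local parameter `dt` whose value is `$[yyyyMMdd-1]`; the script then references it as `${dt}`:
-
-```shell
-# ${dt} is replaced with yesterday's date in yyyyMMdd format when the task runs
-echo "processing data for ${dt}"
-```
-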
-#### 8.3 <span id=UserDefinedParameters>User-defined parameters</span>
-
-  - User-defined parameters are divided into global parameters and local parameters. Global parameters are passed when saving workflow definitions and starting workflow instances, and they can be referenced by the local parameters of any task node in the entire process.
-    example:
-
-<p align="center">
-   <img src="/img/local_parameter_en.png" width="80%" />
- </p>
-
-  - global_bizdate is a global parameter, which refers to a system parameter.
-
-<p align="center">
-   <img src="/img/global_parameter_en.png" width="80%" />
- </p>
-
- - In the task, local_param_bizdate uses \${global_bizdate} to refer to the global parameter. In a script, you can use \${local_param_bizdate} to refer to the value of the global variable global_bizdate, or set the value of local_param_bizdate directly through JDBC (see the sketch below).
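-
-  A sketch of what the task script sees (using the parameter names from the example above):
-
-```shell
-# global_bizdate is a global parameter whose value is ${system.biz.date};
-# local_param_bizdate is a local parameter on this task whose value is ${global_bizdate}
-echo "local_param_bizdate = ${local_param_bizdate}"
-```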
-
diff --git a/docs/en-us/1.3.1/user_doc/task-structure.md b/docs/en-us/1.3.1/user_doc/task-structure.md
deleted file mode 100644
index 2442e7e..0000000
--- a/docs/en-us/1.3.1/user_doc/task-structure.md
+++ /dev/null
@@ -1,1136 +0,0 @@
-
-# Overall task storage structure
-All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.
-
-The database table structure is shown in the following table:
-
-
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|id|int(11)|Primary key
-2|name|varchar(255)|Process definition name
-3|version|int(11)|Process definition version
-4|release_state|tinyint(4)|Release status of the process definition: 0 not online, 1 online
-5|project_id|int(11)|Project id
-6|user_id|int(11)|User id of the process definition
-7|process_definition_json|longtext|Process definition JSON
-8|description|text|Process definition description
-9|global_params|text|Global parameters
-10|flag|tinyint(4)|Whether the process is available: 0 is not available, 1 is available
-11|locations|text|Node coordinate information
-12|connects|text|Node connection information
-13|receivers|text|Recipient
-14|receivers_cc|text|Cc
-15|create_time|datetime|Creation time
-16|timeout|int(11) |Timeout
-17|tenant_id|int(11) |Tenant id
-18|update_time|datetime|Update time
-19|modify_by|varchar(36)|Modify user
-20|resource_ids|varchar(255)|Resource ids
-
-The process_definition_json field is the core field, which defines the task information in the DAG diagram. The data is stored in JSON.
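-
-For example, this field can be inspected directly in the metadata database (a sketch, assuming a MySQL metadata database named `dolphinscheduler`; adjust the user and id to your environment):
-
-```shell
-# Dump the stored DAG JSON for one process definition
-mysql -u root -p dolphinscheduler \
-  -e "SELECT name, process_definition_json FROM t_ds_process_definition WHERE id = 1\G"
-```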
-
-The public data structure is as follows.
-
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|globalParams|Array|Global parameters
-2|tasks|Array|Task collection in the process  [ Please refer to the following chapters for the structure of each type]
-3|tenantId|int|Tenant id
-4|timeout|int|Timeout
-
-Data example:
-```bash
-{
-    "globalParams":[
-        {
-            "prop":"golbal_bizdate",
-            "direct":"IN",
-            "type":"VARCHAR",
-            "value":"${system.biz.date}"
-        }
-    ],
-    "tasks":Array[1],
-    "tenantId":0,
-    "timeout":0
-}
-```
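-
-To make the structure concrete, the minimal sketch below reads a process_definition_json value and prints the name and type of each task. It assumes Jackson is on the classpath and is only an illustration, not DolphinScheduler's own parsing code.
-
-```java
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.ObjectMapper;
-
-public class ProcessDefinitionJsonDemo {
-    public static void main(String[] args) throws Exception {
-        // A trimmed-down process_definition_json value with a single SHELL task
-        String json = "{\"globalParams\":[],\"tenantId\":0,\"timeout\":0,"
-                + "\"tasks\":[{\"type\":\"SHELL\",\"id\":\"tasks-80760\",\"name\":\"Shell Task\"}]}";
-
-        JsonNode root = new ObjectMapper().readTree(json);
-        for (JsonNode task : root.get("tasks")) {
-            // Each element follows the per-task structures described in the chapters below
-            System.out.println(task.get("name").asText() + " -> " + task.get("type").asText());
-        }
-    }
-}
-```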
-
-# Detailed explanation of the storage structure of each task type
-
-## Shell node
-**The node data structure is as follows:**
-Serial number|Field||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SHELL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SHELL",
-    "id":"tasks-80760",
-    "name":"Shell Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"echo "This is a shell script""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-
-```
-
-
-## SQL node
-Perform data query and update operations on the specified data source through SQL.
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |JSON format
-5| |type |String | Database type
-6| |datasource |Int | Data source id
-7| |sql |String | Query SQL statement
-8| |udfs | String| udf function|UDF function ids, separated by commas.
-9| |sqlType | String| SQL node type |0 query, 1 non query
-10| |title |String | Mail title
-11| |receivers |String | Recipient
-12| |receiversCc |String | Cc
-13| |showType | String| Mail display type|TABLE table  ,  ATTACHMENT attachment
-14| |connParams | String| Connection parameters
-15| |preStatements | Array| Pre-SQL
-16| | postStatements| Array|Post SQL||
-17| | localParams| Array|Custom parameters||
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SQL",
-    "id":"tasks-95648",
-    "name":"SqlTask-Query",
-    "params":{
-        "type":"MYSQL",
-        "datasource":1,
-        "sql":"select id , namge , age from emp where id =  ${id}",
-        "udfs":"",
-        "sqlType":"0",
-        "title":"xxxx@xxx.com",
-        "receivers":"xxxx@xxx.com",
-        "receiversCc":"",
-        "showType":"TABLE",
-        "localParams":[
-            {
-                "prop":"id",
-                "direct":"IN",
-                "type":"INTEGER",
-                "value":"1"
-            }
-        ],
-        "connParams":"",
-        "preStatements":[
-            "insert into emp ( id,name ) value (1,'Li' )"
-        ],
-        "postStatements":[
-
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## PROCEDURE [stored procedure] node
-**The node data structure is as follows:**
-**Sample node data:**
-
-## SPARK node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SPARK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |driverCores | String| Driver core
-11| |driverMemory | String| Driver memory
-12| |numExecutors |String | Number of executors
-13| |executorMemory |String | Executor memory
-14| |executorCores |String | Number of executor cores
-15| |programType | String| Program type|JAVA,SCALA,PYTHON
-16| | sparkVersion| String| Spark version| SPARK1 , SPARK2
-17| | localParams| Array|Custom parameters
-18| | resourceList| Array|Resource
-19|description | |String|Description | |
-20|runFlag | |String |Run ID| |
-21|conditionResult | |Object|Conditional branch | |
-22| | successNode| Array|Jump to node successfully| |
-23| | failedNode|Array|Failed jump node | 
-24| dependence| |Object |Task dependency |Mutually exclusive with params
-25|maxRetryTimes | |String|Maximum number of retries | |
-26|retryInterval | |String |Retry interval| |
-27|timeout | |Object|Timeout control | |
-28| taskInstancePriority| |String|Task priority | |
-29|workerGroup | |String |Worker Grouping| |
-30|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SPARK",
-    "id":"tasks-87430",
-    "name":"SparkTask",
-    "params":{
-        "mainClass":"org.apache.spark.examples.SparkPi",
-        "mainJar":{
-            "id":4
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "driverCores":1,
-        "driverMemory":"512M",
-        "numExecutors":2,
-        "executorMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"10",
-        "others":"",
-        "programType":"SCALA",
-        "sparkVersion":"SPARK2"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## MapReduce (MR) node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |MR
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |programType | String| Program type|JAVA,PYTHON
-10| | localParams| Array|Custom parameters
-11| | resourceList| Array|Resource
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"MR",
-    "id":"tasks-28997",
-    "name":"MRTask",
-    "params":{
-        "mainClass":"wordcount",
-        "mainJar":{
-            "id":5
-        },
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
-        "others":"",
-        "programType":"JAVA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## Python node
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |PYTHON
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Python script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"PYTHON",
-    "id":"tasks-5463",
-    "name":"Python Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"print("This is a python script")"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-
-## Flink node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |FLINK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |slot | String| Number of slots
-11| |taskManager |String | Number of TaskManagers
-12| |taskManagerMemory |String | TaskManager memory
-13| |jobManagerMemory |String | JobManager memory
-14| |programType | String| Program type|JAVA,SCALA,PYTHON
-15| | localParams| Array|Custom parameters
-16| | resourceList| Array|Resource
-17|description | |String|Description | |
-18|runFlag | |String |Run ID| |
-19|conditionResult | |Object|Conditional branch | |
-20| | successNode| Array|Jump to node successfully| |
-21| | failedNode|Array|Failed jump node | 
-22| dependence| |Object |Task dependency |Mutually exclusive with params
-23|maxRetryTimes | |String|Maximum number of retries | |
-24|retryInterval | |String |Retry interval| |
-25|timeout | |Object|Timeout control | |
-26| taskInstancePriority| |String|Task priority | |
-27|workerGroup | |String |Worker Grouping| |
-28|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"FLINK",
-    "id":"tasks-17135",
-    "name":"FlinkTask",
-    "params":{
-        "mainClass":"com.flink.demo",
-        "mainJar":{
-            "id":6
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "slot":1,
-        "taskManager":"2",
-        "jobManagerMemory":"1G",
-        "taskManagerMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"100",
-        "others":"",
-        "programType":"SCALA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## HTTP node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Type|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |HTTP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |url |String | Request address
-6| |httpMethod | String| Request method|GET,POST,HEAD,PUT,DELETE
-7| | httpParams| Array|Request parameter
-8| |httpCheckCondition | String| Check conditions|Default response code 200
-9| |condition |String | Check content
-10| | localParams| Array|Custom parameters
-11|description | |String|Description | |
-12|runFlag | |String |Run ID| |
-13|conditionResult | |Object|Conditional branch | |
-14| | successNode| Array|Jump to node successfully| |
-15| | failedNode|Array|Failed jump node | 
-16| dependence| |Object |Task dependency |Mutually exclusive with params
-17|maxRetryTimes | |String|Maximum number of retries | |
-18|retryInterval | |String |Retry interval| |
-19|timeout | |Object|Timeout control | |
-20| taskInstancePriority| |String|Task priority | |
-21|workerGroup | |String |Worker Grouping| |
-22|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"HTTP",
-    "id":"tasks-60499",
-    "name":"HttpTask",
-    "params":{
-        "localParams":[
-
-        ],
-        "httpParams":[
-            {
-                "prop":"id",
-                "httpParametersType":"PARAMETER",
-                "value":"1"
-            },
-            {
-                "prop":"name",
-                "httpParametersType":"PARAMETER",
-                "value":"Bo"
-            }
-        ],
-        "url":"https://www.xxxxx.com:9012",
-        "httpMethod":"POST",
-        "httpCheckCondition":"STATUS_CODE_DEFAULT",
-        "condition":""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## DataX node
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DATAX
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |customConfig |Int | Custom configuration flag| 0 not custom, 1 custom
-6| |dsType |String | Source database type
-7| |dataSource |Int | Source database ID
-8| |dtType | String| Target database type
-9| |dataTarget | Int| Target database ID 
-10| |sql |String | SQL statement
-11| |targetTable |String | Target table
-12| |jobSpeedByte |Int | Current limit (bytes)
-13| |jobSpeedRecord | Int| Current limit (number of records)
-14| |preStatements | Array| Pre-SQL
-15| | postStatements| Array|Post SQL
-16| | json| String|Custom configuration|Effective when customConfig=1
-17| | localParams| Array|Custom parameters|Effective when customConfig=1
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-
-```bash
-{
-    "type":"DATAX",
-    "id":"tasks-91196",
-    "name":"DataxTask-DB",
-    "params":{
-        "customConfig":0,
-        "dsType":"MYSQL",
-        "dataSource":1,
-        "dtType":"MYSQL",
-        "dataTarget":1,
-        "sql":"select id, name ,age from user ",
-        "targetTable":"emp",
-        "jobSpeedByte":524288,
-        "jobSpeedRecord":500,
-        "preStatements":[
-            "truncate table emp "
-        ],
-        "postStatements":[
-            "truncate table user"
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## Sqoop node
-
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQOOP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |JSON format
-5| | concurrency| Int|Concurrency
-6| | modelType|String |Flow direction|import,export
-7| |sourceType|String |Data source type |
-8| |sourceParams |String| Data source parameters| JSON format
-9| | targetType|String |Target data type
-10| |targetParams | String|Target data parameters|JSON format
-11| |localParams |Array |Custom parameters
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SQOOP",
-            "id":"tasks-82041",
-            "name":"Sqoop Task",
-            "params":{
-                "concurrency":1,
-                "modelType":"import",
-                "sourceType":"MYSQL",
-                "targetType":"HDFS",
-                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
-                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
-                "localParams":[
-
-                ]
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-## Conditional branch node
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |CONDITIONS
-3| name| |String|Name |
-4| params| |Object| Custom parameters | null
-5|description | |String|Description | |
-6|runFlag | |String |Run ID| |
-7|conditionResult | |Object|Conditional branch | |
-8| | successNode| Array|Jump to node successfully| |
-9| | failedNode|Array|Failed jump node | 
-10| dependence| |Object |Task dependency |Mutually exclusive with params
-11|maxRetryTimes | |String|Maximum number of retries | |
-12|retryInterval | |String |Retry interval| |
-13|timeout | |Object|Timeout control | |
-14| taskInstancePriority| |String|Task priority | |
-15|workerGroup | |String |Worker Grouping| |
-16|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"CONDITIONS",
-    "id":"tasks-96189",
-    "name":"条件",
-    "params":{
-
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            "test04"
-        ],
-        "failedNode":[
-            "test05"
-        ]
-    },
-    "dependence":{
-        "relation":"AND",
-        "dependTaskList":[
-
-        ]
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-        "test01",
-        "test02"
-    ]
-}
-```
-
-
-## Subprocess node
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SUB_PROCESS
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |processDefinitionId |Int| Process definition id
-6|description | |String|Description | |
-7|runFlag | |String |Run ID| |
-8|conditionResult | |Object|Conditional branch | |
-9| | successNode| Array|Jump to node successfully| |
-10| | failedNode|Array|Failed jump node | 
-11| dependence| |Object |Task dependency |Mutually exclusive with params
-12|maxRetryTimes | |String|Maximum number of retries | |
-13|retryInterval | |String |Retry interval| |
-14|timeout | |Object|Timeout control | |
-15| taskInstancePriority| |String|Task priority | |
-16|workerGroup | |String |Worker Grouping| |
-17|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SUB_PROCESS",
-            "id":"tasks-14806",
-            "name":"SubProcessTask",
-            "params":{
-                "processDefinitionId":2
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-
-
-## DEPENDENT node
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DEPENDENT
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14| | relation|String |Relationship |AND,OR
-15| | dependTaskList|Array |Dependent task list |
-16|maxRetryTimes | |String|Maximum number of retries | |
-17|retryInterval | |String |Retry interval| |
-18|timeout | |Object|Timeout control | |
-19| taskInstancePriority| |String|Task priority | |
-20|workerGroup | |String |Worker Grouping| |
-21|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"DEPENDENT",
-            "id":"tasks-57057",
-            "name":"DenpendentTask",
-            "params":{
-
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-                "relation":"AND",
-                "dependTaskList":[
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":7,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"ALL",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    },
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":5,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"SqlTask-Update",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    }
-                ]
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
diff --git a/docs/en-us/1.3.1/user_doc/upgrade.md b/docs/en-us/1.3.1/user_doc/upgrade.md
deleted file mode 100644
index 3f4a601..0000000
--- a/docs/en-us/1.3.1/user_doc/upgrade.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-# DolphinScheduler Upgrade document
-
-## 1. Back up files and databases of the previous version
-
-## 2. Stop all services of dolphinscheduler
-
- `sh ./script/stop-all.sh`
-
-## 3. Download the new version of the installation package
-
-- [download](/en-us/download/download.html), Download the latest version of the binary installation package
-- The following upgrade operations need to be performed in the new version directory
-
-## 4. Database upgrade
-- Modify the following properties in conf/datasource.properties
-
-- If you choose MySQL, please comment out the PostgreSQL related configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.MySQL.com/archives/) package to the lib directory (mysql-connector-java-5.1.47.jar is used here), and then configure the database connection information correctly
-
-    ```properties
-      # postgre
-      #spring.datasource.driver-class-name=org.postgresql.Driver
-      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     Need to modify the IP; for a local database, localhost works
-      spring.datasource.username=xxx						Needs to be modified to the {user} value above
-      spring.datasource.password=xxx						Needs to be modified to the {password} value above
-    ```
-
-- Execute database upgrade script
-
-`sh ./script/upgrade-dolphinscheduler.sh`
-
-## 5. Service upgrade
-
-### 5.1 Modify `conf/config/install_config.conf` configuration content
-For standalone deployment, please refer to section `6. Modify the running parameters` in [Standalone Deployment](./standalone-deployment.md)
-For cluster deployment, please refer to section `6. Modify the running parameters` in [Cluster Deployment](./cluster-deployment.md)
-
-### Precautions
-Worker group creation is designed differently in version 1.3.1 than in previous versions
-
-- Before version 1.3.1, worker groups were created through the UI
-- In version 1.3.1, worker groups are specified by modifying the worker configuration
-
-### How to keep the worker grouping the same as before during the upgrade
-
-1. Query the backed-up database, check the records of the t_ds_worker_group table, and focus on the three fields id, name and ip_list
-
-| id | name | ip_list    |
-| :---         |     :---:      |          ---: |
-| 1   | service1     | 192.168.xx.10    |
-| 2   | service2     | 192.168.xx.11,192.168.xx.12      |
-
-2. Modify the workers parameter in conf/config/install_config.conf
-
-Assume that the following is the correspondence between the hostnames and IPs of the workers to be deployed:
-
-| Hostname | IP |
-| :---  | :---:  |
-| ds1   | 192.168.xx.10     |
-| ds2   | 192.168.xx.11     |
-| ds3   | 192.168.xx.12     |
-
-In order to keep the grouping consistent with the previous version of the worker, you need to change the workers parameter to the following
-
-```shell
-# Specify which machines the worker service is deployed on, and which worker group each worker belongs to
-workers="ds1:service1,ds2:service2,ds3:service2"
-```
-
-  
-### 5.2 Execute deployment script
-```shell
-sh install.sh
-```
-
-
diff --git a/docs/en-us/1.3.2/user_doc/architecture-design.md b/docs/en-us/1.3.2/user_doc/architecture-design.md
deleted file mode 100644
index 29f4ae5..0000000
--- a/docs/en-us/1.3.2/user_doc/architecture-design.md
+++ /dev/null
@@ -1,332 +0,0 @@
-## System Architecture Design
-Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
-
-### 1.Glossary
-**DAG:** The full name is Directed Acyclic Graph, abbreviated as DAG. Tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero in-degree until there are no subsequent nodes. An example is as follows:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**: A visual **DAG** formed by dragging task nodes and establishing associations between them
-
-**Process instance**:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
-
-**Task instance**:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON and DEPENDENT (depends), with dynamic plug-in expansion planned. Note: **SUB_PROCESS** is itself a separate process definition that can be started and executed independently
-
-**Scheduling method:** The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement (backfill), timing, rerun, pause, stop, resume waiting thread. Among them, **resume fault-tolerant workflow** and **resume waiting thread** are used internally by the scheduler and cannot be called externally
-
-**Scheduled**:System adopts **quartz** distributed scheduler, and supports the visual generation of cron expressions
-
-**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependent** nodes, supporting custom task dependencies **between processes**
-
-**Priority** :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out
-
-**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance alert notifications
-
-**Failure strategy**: For tasks running in parallel, if one task fails, two failure strategies are provided. **Continue** means the parallel tasks keep running regardless of the failed task's status until the process ends. **End** means that once a failed task is found, the running parallel tasks are killed and the process fails and ends
-
-**Complement (backfill)**: Supplements historical data; supports both **parallel and serial** backfill within a date interval
-
-### 2.System Structure
-
-#### 2.1 System architecture diagram
-<p align="center">
-  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
-  <p align="center">
-        <em>System architecture diagram</em>
-  </p>
-</p>
-
-#### 2.2 Start process activity diagram
-<p align="center">
-  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
-  <p align="center">
-        <em>Start process activity diagram</em>
-  </p>
-</p>
-
-#### 2.3 Architecture description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with Zookeeper, and perform fault tolerance by monitoring changes in the temporary node of Zookeeper.
-    MasterServer provides monitoring services based on netty.
-
-    ##### The service mainly includes:
-
-    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When Quartz starts the task, there will be a thread pool inside the Master that is specifically responsible for the follow-up operation of the processing task
-
-    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
-
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
-
-    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
-
-* **WorkerServer** 
-
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
-
-     When the WorkerServer service starts, it registers a temporary node with Zookeeper and maintains a heartbeat.
-     WorkerServer provides monitoring services based on netty.
-     ##### The service mainly includes:
-     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the corresponding **TaskScheduleThread** executor according to the task type.
-
-     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
-
-* **ZooKeeper** 
-
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
-
-    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
-
-* **Task Queue** 
-
-    Provide task queue operation, the current queue is also implemented based on Zookeeper. Because there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have tested the millions of data storage queues, which has no impact on system stability and performance.
-
-* **Alert** 
-
-    Provides alarm-related interfaces, mainly covering the storage, query and notification of two types of alarm data. The notification functions include **email notification** and **SNMP (not yet implemented)**.
-
-* **API** 
-
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
-
-* **UI** 
-
-    The front-end page of the system, providing the various visual operation interfaces of the system. See the [System User Manual](./system-manual.md) section for more details.
-
-#### 2.4 Architecture design ideas
-
-##### 1. Decentralization vs. centralization
-
-###### Centralized thinking
-
-The centralized design concept is relatively simple. The nodes in a distributed cluster are divided into roughly two roles:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
- </p>
-
-- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node will not be in a "busy dead" or "idle dead" state.
-- The role of the Worker is mainly to execute tasks and maintain the heartbeat with the Master, so that the Master can assign tasks to the Slave.
-
-
-
-Problems with the centralized design:
-
-- Once there is a problem with the Master, the cluster has no leader and will collapse. To solve this problem, most Master/Slave architectures adopt an active/standby Master design, with hot or cold standby and automatic or manual switching, and more and more new systems are beginning to have the ability to automatically elect and switch the Master to improve availability.
-- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will overload the Master. If the Scheduler is on the Slave, all tasks in a DAG can only submit jobs on one machine; when there are many parallel tasks, the pressure on that Slave may be high.
-
-
-
-###### Decentralized
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
- </p>
-
-- In the decentralized design, there is usually no concept of Master/Slave: all roles are the same and of equal status. The global Internet is a typical decentralized distributed system; any node going down only affects a small range of functions.
-- The core of decentralized design is that there is no "manager" node different from the other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. Under this architecture, the manager in the cluster is dynamically elected rather than preset, and when the cluster fails, the nodes automatically hold a "meeting" to elect a new "manager" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in the Go language.
-
-
-
-- The decentralization of DolphinScheduler means that Masters and Workers register in Zookeeper, the Master cluster and Worker cluster have no center, and a Zookeeper distributed lock is used to elect one Master or Worker as the "manager" to perform the task.
-
-##### 2. Distributed lock practice
-
-DolphinScheduler uses the ZooKeeper distributed lock to ensure that only one Master executes the Scheduler at a time, and that only one Worker performs task submission at a time. A minimal sketch of the pattern follows the flow charts below.
-1. The core process algorithm for acquiring distributed locks is as follows:
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
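-As a rough illustration of the pattern (not the exact code used inside DolphinScheduler), the sketch below acquires a ZooKeeper-backed mutex with Apache Curator before doing scheduler work, so only one holder proceeds at a time; the connection string and lock path are assumptions made for the example.
-
-```java
-import org.apache.curator.framework.CuratorFramework;
-import org.apache.curator.framework.CuratorFrameworkFactory;
-import org.apache.curator.framework.recipes.locks.InterProcessMutex;
-import org.apache.curator.retry.ExponentialBackoffRetry;
-
-public class MasterLockDemo {
-    public static void main(String[] args) throws Exception {
-        CuratorFramework client = CuratorFrameworkFactory.newClient(
-                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
-        client.start();
-
-        // Illustrative lock path; the real path is defined by DolphinScheduler's ZooKeeper layout
-        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
-        lock.acquire();
-        try {
-            // Critical section: only one holder scans the command table at a time
-            System.out.println("scanning command table ...");
-        } finally {
-            lock.release();
-            client.close();
-        }
-    }
-}
-```
-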
-
-##### 3. Insufficient thread loop waiting problem
-
--  If a DAG has no sub-processes and the number of Commands exceeds the threshold set by the thread pool, the process waits or fails directly.
--  If a large DAG nests many sub-processes, the situation shown in the following figure produces a "deadlock" state:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
- </p>
-In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread in the thread pool, so the entire DAG process can never end and the threads cannot be released. In this way, a child-parent process loop waiting state is formed. At this point, unless a new Master is started to add threads to break such a "stalemate", the scheduling cluster can no longer be used.
-
-It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
-
-1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
-2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
-3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
-
-Note: The Master Scheduler thread acquires Commands in FIFO order.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-
-##### 4. Fault-tolerant design
-Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
-
-###### 1. Downtime fault tolerance
-
-The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
- </p>
-Among them, the Master monitors the directories of other Masters and Workers. If the remove event is heard, fault tolerance of the process instance or task instance will be performed according to the specific business logic.
-
-
-
-- Master fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
- </p>
-After ZooKeeper Master fault tolerance is completed, the process is rescheduled by the Scheduler thread in DolphinScheduler: it traverses the DAG to find the "running" and "submit successful" tasks, monitors the status of the task instances for the "running" tasks, and for the "submit successful" tasks checks whether the task already exists in the task queue. If it exists, the status of the task instance is also monitored; if it does not exist, the task instance is resubmitted.
-
-
-
-- Worker fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
- </p>
-
-Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
-
- Note: Due to "network jitter", the node may lose its heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.
-
-###### 2. Task failure retry
-
-Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
-- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
-
-
-
-Next to the topic, we divide the task nodes in the workflow into two types.
-
-- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node, Spark node, and dependent node.
-
-- There is also a logical node, which does not do actual script or statement processing, but only logical processing of the entire process flow, such as sub-process sections.
-
-Each **business node** can be configured with a number of failed retries. When the task node fails, it automatically retries until it succeeds or exceeds the configured number of retries. **Logical nodes** do not support failure retry, but the tasks inside a logical node do.
-
-If a task in the workflow fails and reaches the maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun manually or recovered from the failed node
-
-
-
-##### 5. Task priority design
-In the early scheduling design, with fair scheduling and no priority design, tasks submitted first might finish at the same time as tasks submitted later, and neither process nor task priority could be set. So we redesigned this, and our current design is as follows:
-
--  Tasks are processed from high to low priority: **priority of different process instances** takes precedence over **priority within the same process instance**, which takes precedence over **priority of tasks within the same process**, which takes precedence over the **submission order of tasks within the same process**.
-    - The specific implementation is to parse the priority from the task instance json and save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue; when fetching from the task queue, string comparison yields the tasks that need to be executed first (see the sketch after this list).
-
-        - The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
-             </p>
-
-        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
-             </p>
-
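-A minimal sketch of that key ordering follows; it is only an illustration, not the actual queue implementation, and it assumes the priority levels are encoded as digits 0 (HIGHEST) through 4 (LOWEST) so that a plain lexicographic sort puts higher-priority entries first (ids are kept the same width here for simplicity).
-
-```java
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-
-public class TaskQueueOrderDemo {
-    // Key layout: processInstancePriority_processInstanceId_taskPriority_taskId
-    static String key(int processInstancePriority, int processInstanceId, int taskPriority, int taskId) {
-        return processInstancePriority + "_" + processInstanceId + "_" + taskPriority + "_" + taskId;
-    }
-
-    public static void main(String[] args) {
-        List<String> queue = new ArrayList<>();
-        queue.add(key(2, 101, 3, 7)); // MEDIUM process instance, LOW task
-        queue.add(key(0, 102, 2, 9)); // HIGHEST process instance wins regardless of task priority
-        queue.add(key(2, 101, 1, 8)); // same process instance, higher task priority comes first
-
-        Collections.sort(queue);      // plain string comparison, as described above
-        queue.forEach(System.out::println);
-        // 0_102_2_9
-        // 2_101_1_8
-        // 2_101_3_7
-    }
-}
-```
-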
-
-##### 6. Logback and netty implement log access
-
--  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
-  -  Put logs on ES search engine
-  -  Obtain remote log information through netty communication
-
--  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to achieve remote access to log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
- </p>
-
-
-- We use a custom Logback FileAppender and Filter so that each task instance generates its own log file.
-- FileAppender is mainly implemented as follows:
-
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
- ```
-
-
-Generate logs in the form of /process definition id/process instance id/task instance id.log
-
-- Filter to match the thread name starting with TaskLogInfo:
-
-- TaskLogFilter is implemented as follows:
-
- ```java
- import ch.qos.logback.classic.spi.ILoggingEvent;
- import ch.qos.logback.core.filter.Filter;
- import ch.qos.logback.core.spi.FilterReply;
-
- /**
-  * task log filter
-  */
- public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-     @Override
-     public FilterReply decide(ILoggingEvent event) {
-         if (event.getThreadName().startsWith("TaskLogInfo-")) {
-             return FilterReply.ACCEPT;
-         }
-         return FilterReply.DENY;
-     }
- }
- ```
-
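-To tie the appender and the filter together, here is a hedged sketch (our own example, not project code) of how a worker thread could be named so that TaskLogFilter accepts its log events and TaskLogAppender can derive the per-task log file from the thread name:
-
- ```java
- public class TaskThreadNamingSketch {
-
-     public static void main(String[] args) throws InterruptedException {
-         int processDefineId = 1;
-         int processInstanceId = 100;
-         int taskInstanceId = 1000;
-
-         Runnable body = () -> {
-             // Anything logged from this thread passes TaskLogFilter (the name starts
-             // with "TaskLogInfo-") and is routed by TaskLogAppender, which takes the
-             // part after the '-' as the logId.
-             System.out.println("running as " + Thread.currentThread().getName());
-         };
-
-         // thread name: TaskLogInfo-processDefineId_processInstanceId_taskInstanceId
-         Thread worker = new Thread(body,
-                 "TaskLogInfo-" + processDefineId + "_" + processInstanceId + "_" + taskInstanceId);
-         worker.start();
-         worker.join();
-     }
- }
- ```
-
-The resulting logId ("1_100_1000" here) is what the appender maps to the /process definition id/process instance id/task instance id.log path described above.
-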
-### 3. Module introduction
-- dolphinscheduler-alert: alert module, providing the AlertServer service.
-
-- dolphinscheduler-api: web application module, providing the ApiServer service.
-
-- dolphinscheduler-common: common constants, enumerations, utility classes, data structures and base classes.
-
-- dolphinscheduler-dao: provides operations such as database access.
-
-- dolphinscheduler-remote: netty-based client and server.
-
-- dolphinscheduler-server: MasterServer and WorkerServer services.
-
-- dolphinscheduler-service: service module, including Quartz, ZooKeeper and log client access services, making them easy to call from the server and api modules.
-
-- dolphinscheduler-ui: front-end module.
-### Summary
-From the scheduling perspective, this article has given a preliminary introduction to the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued.
-
-
diff --git a/docs/en-us/1.3.2/user_doc/cluster-deployment.md b/docs/en-us/1.3.2/user_doc/cluster-deployment.md
deleted file mode 100644
index fed61ae..0000000
--- a/docs/en-us/1.3.2/user_doc/cluster-deployment.md
+++ /dev/null
@@ -1,405 +0,0 @@
-# Cluster Deployment
-
-# 1、Before you begin (please install the required basic software yourself)
-
-* [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.7): choose one. <font color="#dd0000">If you use MySQL, it is strongly recommended to use version 5.7 or higher.</font>
-* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): Required. Double-check that the JAVA_HOME and PATH environment variables are configured in /etc/profile.
-* [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+): Required
-* [Hadoop](https://hadoop.apache.org/releases.html) (2.6+) or [MinIO](https://min.io/download): Optional. If you need the resource upload function, on a single machine you can use a local file directory as the upload folder (this does not require deploying Hadoop); alternatively, you can upload to Hadoop or MinIO.
-
-```markdown
- Tips: DolphinScheduler itself does not rely on Hadoop, Hive or Spark; it only uses their clients to run the corresponding tasks.
-```
-
-# 2、Download the binary package.
-
-- Please download the latest version of the installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: [download](/en-us/download/download.html). Move the package to the installation and deployment directory and uncompress it.
-
-```shell
-# Create the deployment directory. Do not place it under a high-privilege directory such as /root or /home.
-mkdir -p /opt/dolphinscheduler;
-cd /opt/dolphinscheduler;
-# uncompress
-tar -zxvf apache-dolphinscheduler-incubating-1.3.2-dolphinscheduler-bin.tar.gz -C /opt/dolphinscheduler;
-
-mv apache-dolphinscheduler-incubating-1.3.2-dolphinscheduler-bin  dolphinscheduler-bin
-```
-
-# 3、Create deployment user and hosts mapping
-
-- Create a deployment user on **all** deployment machines, and be sure to configure passwordless sudo for it. If we plan to deploy DolphinScheduler on 4 machines (ds1, ds2, ds3 and ds4), we first need to create the deployment user on each machine.
-
-```shell
-# To create a user, you need to log in as root and set the deployment user name. Please modify it yourself. The following uses dolphinscheduler as an example.
-useradd dolphinscheduler;
-
-# Set the user password, please modify it yourself. The following takes dolphinscheduler123 as an example.
-echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
-
-# Configure passwordless sudo
-echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
-sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
-
-```
-
-```
- Notes:
- - Because the task execution service uses 'sudo -u {linux-user}' to switch between different Linux users and run jobs in a multi-tenant way, the deployment user needs sudo permission without a password (illustrated by the sketch after these notes). First-time learners can ignore this point if they do not understand it yet.
- - If you find a "Defaults requiretty" line in the "/etc/sudoers" file, comment it out as well.
- - If you need to use resource upload, the deployment user needs permission to operate the local file system, HDFS or MinIO.
-```
-
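-A schematic illustration (our own sketch, not DolphinScheduler code) of why passwordless sudo matters: the task execution service has to run a command as the tenant's Linux user without a password prompt, roughly like this. The tenant user name and command below are hypothetical placeholders.
-
-```java
-import java.io.IOException;
-
-public class RunAsTenantSketch {
-    public static void main(String[] args) throws IOException, InterruptedException {
-        String tenantUser = "tenant1";   // hypothetical tenant Linux user
-        String command = "whoami";       // the task's shell command
-        // Without passwordless sudo, this call would block waiting for a password.
-        Process p = new ProcessBuilder("sudo", "-u", tenantUser, "sh", "-c", command)
-                .inheritIO()             // prints the tenant user's name to the console
-                .start();
-        System.exit(p.waitFor());
-    }
-}
-```
-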
-# 4、Configure hosts mapping and ssh access and modify directory permissions.
-
-- Use the first machine (hostname ds1) as the deployment machine, configure the hosts of all machines to be deployed on ds1, and log in as root on ds1.
-
-  ```shell
-  vi /etc/hosts
-
-  #add ip hostname
-  192.168.xxx.xxx ds1
-  192.168.xxx.xxx ds2
-  192.168.xxx.xxx ds3
-  192.168.xxx.xxx ds4
-  ```
-
-  *Note: please delete or comment out the 127.0.0.1 line*
-
-- Sync /etc/hosts on ds1 to all deployment machines
-
-  ```shell
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of machines you want to deploy
-  do
-      sudo scp -r /etc/hosts  $ip:/etc/          # Need to enter root password during operation
-  done
-  ```
-
-  *Note: you can use `sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/` to avoid typing the password.*
-
-  > Install sshpass on CentOS:
-  >
-  > 1. Install epel
-  >
-  >    yum install -y epel-release
-  >
-  >    yum repolist
-  >
-  > 2. After installing epel, you can install sshpass
-  >
-  >    yum install -y sshpass
-  >
-  >
-
-- On ds1, switch to the deployment user and configure passwordless ssh login
-
-  ```shell
-   su dolphinscheduler;
-
-  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
-  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-  chmod 600 ~/.ssh/authorized_keys
-  ```
-  Note: *If the configuration succeeded, the dolphinscheduler user will not need to enter a password when executing the command `ssh localhost`.*
-
-
-
-- On ds1, configure the deployment user dolphinscheduler ssh to connect to other machines to be deployed.
-
-  ```shell
-  su dolphinscheduler;
-  for ip in ds2 ds3;     # Please replace ds2 ds3 here with the hostname of the machine you want to deploy.
-  do
-      ssh-copy-id  $ip   # You need to manually enter the password of the dolphinscheduler user during the operation.
-  done
-  # can use `sshpass -p xxx ssh-copy-id $ip` to avoid type password.
-  ```
-
-- On ds1, modify the directory permissions so that the deployment user has operation permissions on the dolphinscheduler-bin directory.
-
-  ```shell
-  sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
-  ```
-
-# 5、Database initialization
-
-- Log in to the database. The default database is PostgreSQL. If you choose MySQL, you need to add the mysql-connector-java driver jar to the lib directory of DolphinScheduler.
-```
-mysql -h192.168.xx.xx -P3306 -uroot -p
-```
-
-- After entering the database command line window, execute the database initialization command and set the user and password. **Note: {user} and {password} need to be replaced with a specific database username and password**
-
- ``` mysql
-    mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    mysql> flush privileges;
- ```
-
-- Create tables and import basic data
-
-    - Modify the following configuration in datasource.properties under the conf directory
-
-    ```shell
-      vi conf/datasource.properties
-    ```
-
-    - If you choose MySQL, please comment out the PostgreSQL configuration (and vice versa). You also need to manually add the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) to the lib directory, and then configure the database connection information correctly (an optional connection-check sketch is given at the end of this section).
-
-    ```properties
-      #postgre
-      #spring.datasource.driver-class-name=org.postgresql.Driver
-      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
-      # mysql
-      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true     # Replace the correct IP address
-      spring.datasource.username=xxx						# replace the correct {user} value
-      spring.datasource.password=xxx						# replace the correct {password} value
-    ```
-
-    - After modifying and saving, execute the table creation and data import script in the script directory.
-
-    ```shell
-    sh script/create-dolphinscheduler.sh
-    ```
-
-  *Note: If executing the above script reports a "/bin/java: No such file or directory" error, please configure the JAVA_HOME and PATH variables in /etc/profile.*
-
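-As an optional sanity check (a sketch of our own, not part of DolphinScheduler), the small program below verifies that the JDBC URL, username and password you put into datasource.properties actually work. It assumes the mysql-connector-java jar is on the classpath (for example the one copied into lib); the host, user and password are placeholders to replace.
-
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-
-public class DatasourceCheck {
-    public static void main(String[] args) throws Exception {
-        // Same driver class and URL style as datasource.properties; replace host, user and password.
-        Class.forName("com.mysql.jdbc.Driver");
-        String url = "jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler"
-                + "?useUnicode=true&characterEncoding=UTF-8";
-        try (Connection conn = DriverManager.getConnection(url, "{user}", "{password}")) {
-            System.out.println("connected to: " + conn.getMetaData().getDatabaseProductVersion());
-        }
-        // Compile and run with the connector jar on the classpath, e.g.:
-        //   javac DatasourceCheck.java
-        //   java -cp .:lib/mysql-connector-java-*.jar DatasourceCheck
-    }
-}
-```
-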
-# 6、Modify runtime parameters.
-
-- Modify the environment variables in the `dolphinscheduler_env.sh` file under the 'conf/env' directory (taking software installed under '/opt/soft' as an example)
-
-    ```shell
-        export HADOOP_HOME=/opt/soft/hadoop
-        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-        #export SPARK_HOME1=/opt/soft/spark1
-        export SPARK_HOME2=/opt/soft/spark2
-        export PYTHON_HOME=/opt/soft/python
-        export JAVA_HOME=/opt/soft/java
-        export HIVE_HOME=/opt/soft/hive
-        export FLINK_HOME=/opt/soft/flink
-        export DATAX_HOME=/opt/soft/datax/bin/datax.py
-        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-    ```
-
-     `Note: This step is very important. For example, JAVA_HOME and PATH must be configured. Variables that are not used can be ignored or commented out.`
-
-
-
-- Create a soft link from the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example)
-
-    ```shell
-    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
-    ```
-
- - Modify the parameters in the one-click deployment config file `conf/config/install_config.conf`, paying special attention to the following parameters.
-
-    ```shell
-    # choose mysql or postgresql
-    dbtype="mysql"
-
-    # Database connection address and port
-    dbhost="192.168.xx.xx:3306"
-
-    # database name
-    dbname="dolphinscheduler"
-
-    # database username
-    username="xxx"
-
-    # database password
-    # NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
-    password="xxx"
-
-    #Zookeeper cluster
-    zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-
-    # Note: the target installation path for dolphinscheduler; do not set it to the same directory as the current path (pwd)
-    installPath="/opt/soft/dolphinscheduler"
-
-    # deployment user
-    # Note: the deployment user needs sudo privileges and permission to operate HDFS. If HDFS is enabled, the root directory needs to be created manually
-    deployUser="dolphinscheduler"
-
-    # alert config, taking QQ email as an example
-    # mail protocol
-    mailProtocol="SMTP"
-
-    # mail server host
-    mailServerHost="smtp.qq.com"
-
-    # mail server port
-    # note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
-    mailServerPort="25"
-
-    # mail sender
-    mailSender="xxx@qq.com"
-
-    # mail user
-    mailUser="xxx@qq.com"
-
-    # mail sender password
-    # note: this is the email service authorization code, not the email login password.
-    mailPassword="xxx"
-
-    # Whether the TLS mail protocol is supported: true means supported, false means not supported
-    starttlsEnable="true"
-
-    # Whether the SSL mail protocol is supported: true means supported, false means not supported
-    # note: only one of TLS and SSL can be set to true
-    sslEnable="false"
-
-    # note: sslTrust is the same as mailServerHost
-    sslTrust="smtp.qq.com"
-
-
-    # resource storage type:HDFS,S3,NONE
-    resourceStorageType="HDFS"
-
-    # If resourceStorageType = HDFS and your Hadoop cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml into the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure defaultFS with the namenode cluster name; if the NameNode is not HA, set it to a specific IP or hostname.
-    # if S3, write the S3 address, for example: s3a://dolphinscheduler
-    # Note: for S3, be sure to create the root directory /dolphinscheduler
-    defaultFS="hdfs://mycluster:8020"
-
-
-    # if you do not use the Hadoop resourcemanager, keep the default value; if resourcemanager HA is enabled, enter the HA IPs; if there is a single resourcemanager, leave this value empty
-    yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-    # if resourcemanager HA is enabled, or the resourcemanager is not used, skip this setting; if there is a single resourcemanager, replace yarnIp1 with the actual resourcemanager hostname
-    singleYarnIp="yarnIp1"
-
-    # path for resource storage on HDFS/S3; resource files will be stored under this path. Please make sure the directory exists on HDFS and has read/write permissions. /dolphinscheduler is recommended
-    resourceUploadPath="/dolphinscheduler"
-
-    # who have permissions to create directory under HDFS/S3 root path
-    # Note: if kerberos is enabled, please config hdfsRootUser=
-    hdfsRootUser="hdfs"
-
-
-
-    # install hosts
-    # Note: the list of hostnames where DolphinScheduler will be installed. For a pseudo-distributed setup, just write the single pseudo-distributed hostname
-    ips="ds1,ds2,ds3,ds4"
-
-    # ssh port, default 22
-    # Note: if ssh port is not default, modify here
-    sshPort="22"
-
-    # run master machine
... 173453 lines suppressed ...