Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/03/09 14:02:19 UTC

[dolphinscheduler-website] 03/03: [init] Move all docs to content directory

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch history-docs
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git

commit c2e7207499a1eef60835cac943a91e50a10d8d6a
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Wed Mar 9 18:20:05 2022 +0800

    [init] Move all docs to content directory
    
    including the docs, development, blog, community, and download directories
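    Note on the rename pattern: every path in the diffstat below moves from
    "<category>/<locale>/<rest>" to "content/<locale>/<category>/<rest>", e.g.
    blog/en-us/DAG.md becomes content/en-us/blog/DAG.md. The Python sketch
    below only illustrates that mapping; it is an assumption about how the
    renames could be expressed, not the script (if any) this commit actually
    used, and the helper name new_location is made up for illustration.

    from pathlib import PurePosixPath

    def new_location(old_path: str) -> str:
        """Map an old docs path onto the consolidated content/ layout."""
        parts = PurePosixPath(old_path).parts
        category, locale, rest = parts[0], parts[1], parts[2:]
        # <category>/<locale>/<rest...> -> content/<locale>/<category>/<rest...>
        return str(PurePosixPath("content", locale, category, *rest))

    # Examples matching renames visible in the diffstat below.
    assert new_location("blog/en-us/DAG.md") == "content/en-us/blog/DAG.md"
    assert new_location("community/zh-cn/DSIP.md") == "content/zh-cn/community/DSIP.md"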
---
 .../en-us/blog}/Apache-DolphinScheduler-2.0.1.md   |    0
 .../en-us/blog}/Apache_dolphinScheduler_2.0.2.md   |    0
 .../en-us/blog}/Apache_dolphinScheduler_2.0.3.md   |    0
 .../blog}/Awarded_most_popular_project_in_2021.md  |    0
 .../en-us/blog}/Board_of_Directors_Report.md       |    0
 {blog/en-us => content/en-us/blog}/DAG.md          |    0
 .../en-us/blog}/DS-2.0-alpha-release.md            |    0
 .../en-us/blog}/DS_run_in_windows.md               |    0
 .../DolphinScheduler-Vulnerability-Explanation.md  |    0
 ...hinScheduler_Kubernetes_Technology_in_action.md |    0
 {blog/en-us => content/en-us/blog}/Eavy_Info.md    |    0
 {blog/en-us => content/en-us/blog}/FAQ.md          |    0
 .../Introducing-Apache-DolphinScheduler-1.3.9.md   |    0
 {blog/en-us => content/en-us/blog}/Json_Split.md   |    0
 .../en-us/blog}/Lizhi-case-study.md                |    0
 .../en-us/blog}/Meetup_2022_02_26.md               |    0
 {blog/en-us => content/en-us/blog}/Twos.md         |    0
 .../en-us/blog}/YouZan-case-study.md               |    0
 .../en-us/blog}/architecture-design.md             |    0
 .../en-us/blog}/meetup_2019_10_26.md               |    0
 .../en-us/blog}/meetup_2019_12_08.md               |    0
 .../en-us => content/en-us/community}/DSIP.md      |    0
 .../en-us/community}/development/DS-License.md     |    0
 .../community}/development/become-a-committer.md   |    0
 .../en-us/community}/development/code-conduct.md   |  136 +-
 .../en-us/community}/development/commit-message.md |    0
 .../en-us/community}/development/contribute.md     |    0
 .../en-us/community}/development/document.md       |    0
 .../en-us/community}/development/issue.md          |    0
 .../en-us/community}/development/microbench.md     |    0
 .../en-us/community}/development/pull-request.md   |    0
 .../en-us/community}/development/submit-code.md    |    0
 .../en-us/community}/development/subscribe.md      |    0
 .../en-us/community}/development/unit-test.md      |    0
 .../en-us/community}/join/e2e-guide.md             |    0
 .../en-us/community}/join/review.md                |    0
 .../en-us/community}/release-post.md               |    0
 .../en-us/community}/release-prepare.md            |    0
 .../en-us => content/en-us/community}/release.md   |    0
 .../en-us => content/en-us/community}/security.md  |    0
 .../en-us => content/en-us/community}/team.md      |    0
 .../en-us/development}/api-standard.md             |    0
 .../en-us/development}/architecture-design.md      |    0
 .../backend/mechanism/global-parameter.md          |    0
 .../development}/backend/mechanism/overview.md     |    0
 .../development}/backend/mechanism/task/switch.md  |    0
 .../en-us/development}/backend/spi/alert.md        |    0
 .../en-us/development}/backend/spi/datasource.md   |    0
 .../en-us/development}/backend/spi/registry.md     |    0
 .../en-us/development}/backend/spi/task.md         |    0
 .../development}/development-environment-setup.md  |    0
 .../en-us/development}/e2e-test.md                 |    0
 .../en-us/development}/frontend-development.md     |    0
 .../en-us/development}/have-questions.md           |    0
 .../docs}/1.2.0/user_doc/backend-deployment.md     |    0
 .../docs}/1.2.0/user_doc/cluster-deployment.md     |    0
 .../docs}/1.2.0/user_doc/frontend-deployment.md    |    0
 .../docs/1.2.0}/user_doc/hardware-environment.md   |    0
 .../en-us/docs/1.2.0}/user_doc/metadata-1.2.md     |    0
 .../en-us/docs/1.2.0}/user_doc/quick-start.md      |    0
 .../docs}/1.2.0/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.2.0/user_doc/system-manual.md    |    0
 .../en-us/docs/1.2.0}/user_doc/upgrade.md          |    0
 .../docs}/1.2.1/user_doc/architecture-design.md    |    0
 .../docs}/1.2.1/user_doc/backend-deployment.md     |    0
 .../docs}/1.2.1/user_doc/frontend-deployment.md    |    0
 .../docs/1.2.1}/user_doc/hardware-environment.md   |    0
 .../en-us/docs/1.2.1}/user_doc/metadata-1.2.md     |    0
 .../docs}/1.2.1/user_doc/plugin-development.md     |    0
 .../en-us/docs/1.2.1}/user_doc/quick-start.md      |    0
 .../en-us/docs}/1.2.1/user_doc/system-manual.md    |    0
 .../en-us/docs/1.2.1}/user_doc/upgrade.md          |    0
 .../docs/1.3.1}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.1/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.1/user_doc/configuration-file.md     |  812 +++----
 .../docs/1.3.1}/user_doc/hardware-environment.md   |    0
 .../en-us/docs}/1.3.1/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.1}/user_doc/quick-start.md      |    0
 .../docs}/1.3.1/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.1/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.1}/user_doc/task-structure.md   | 2272 ++++++++++----------
 .../en-us/docs}/1.3.1/user_doc/upgrade.md          |    0
 .../docs/1.3.2}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.2/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.2}/user_doc/configuration-file.md     |  815 +++----
 .../docs}/1.3.2/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.2}/user_doc/hardware-environment.md   |    0
 .../en-us/docs/1.3.2}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.2}/user_doc/quick-start.md      |    0
 .../docs}/1.3.2/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.2/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.2}/user_doc/task-structure.md   | 2272 ++++++++++----------
 .../en-us/docs/1.3.2}/user_doc/upgrade.md          |    0
 .../docs/1.3.3}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.3/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.3}/user_doc/configuration-file.md     |  815 ++++---
 .../docs}/1.3.3/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.3}/user_doc/hardware-environment.md   |    0
 .../en-us/docs/1.3.3}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.3}/user_doc/quick-start.md      |    0
 .../docs}/1.3.3/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.3/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.3}/user_doc/task-structure.md   | 2272 ++++++++++----------
 .../en-us/docs/1.3.3}/user_doc/upgrade.md          |    0
 .../docs/1.3.4}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.4/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.4}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.4/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.4/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.4}/user_doc/hardware-environment.md   |    0
 .../en-us/docs/1.3.4/user_doc}/load-balance.md     |    0
 .../en-us/docs}/1.3.4/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.4}/user_doc/quick-start.md      |    0
 .../docs}/1.3.4/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.4/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.4}/user_doc/task-structure.md   |    0
 .../en-us/docs}/1.3.4/user_doc/upgrade.md          |    0
 .../docs/1.3.5}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.5/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.5}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.5/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.5/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.5}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.5/user_doc/kubernetes-deployment.md  |    0
 .../en-us/docs/1.3.5/user_doc}/load-balance.md     |    0
 .../en-us/docs/1.3.5}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs}/1.3.5/user_doc/open-api.md         |    0
 .../en-us/docs/1.3.5}/user_doc/quick-start.md      |    0
 .../docs}/1.3.5/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.5/user_doc/system-manual.md    |    0
 .../en-us/docs}/1.3.5/user_doc/task-structure.md   |    0
 .../en-us/docs/1.3.5}/user_doc/upgrade.md          |    0
 .../docs}/1.3.6/user_doc/ambari-integration.md     |    0
 .../docs/1.3.6}/user_doc/architecture-design.md    |  664 +++---
 .../docs}/1.3.6/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.6/user_doc/configuration-file.md     |    0
 .../docs}/1.3.6/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.6/user_doc/expansion-reduction.md    |    0
 .../en-us/docs}/1.3.6/user_doc/flink-call.md       |    0
 .../docs/1.3.6}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.6/user_doc/kubernetes-deployment.md  |    0
 .../en-us/docs/1.3.6/user_doc}/load-balance.md     |    0
 .../en-us/docs/1.3.6}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.6/user_doc}/open-api.md         |    2 +-
 .../en-us/docs/1.3.6}/user_doc/quick-start.md      |    0
 .../1.3.6/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.6/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.6/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.6}/user_doc/task-structure.md   |    0
 .../en-us/docs/1.3.6}/user_doc/upgrade.md          |    0
 .../docs/1.3.8}/user_doc/ambari-integration.md     |    0
 .../docs/1.3.8}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.8/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.8}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.8/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.8/user_doc/expansion-reduction.md    |    0
 .../en-us/docs/1.3.8/user_doc}/flink-call.md       |    0
 .../docs/1.3.8}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.8/user_doc/kubernetes-deployment.md  |    0
 .../en-us/docs/1.3.8/user_doc}/load-balance.md     |    0
 .../en-us/docs/1.3.8}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.8/user_doc}/open-api.md         |    0
 .../1.3.8}/user_doc/parameters-introduction.md     |    0
 .../en-us/docs/1.3.8}/user_doc/quick-start.md      |    0
 .../1.3.8/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.8/user_doc/standalone-deployment.md  |    0
 .../en-us/docs}/1.3.8/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.8/user_doc}/task-structure.md   |    0
 .../en-us/docs/1.3.8}/user_doc/upgrade.md          |    0
 .../docs/1.3.9}/user_doc/ambari-integration.md     |    0
 .../docs/1.3.9}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.9/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.9}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.9/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.9/user_doc/expansion-reduction.md    |    0
 .../en-us/docs/1.3.9/user_doc}/flink-call.md       |    0
 .../docs/1.3.9}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.9/user_doc/kubernetes-deployment.md  |    0
 .../en-us/docs}/1.3.9/user_doc/load-balance.md     |    0
 .../en-us/docs/1.3.9}/user_doc/metadata-1.3.md     |    0
 .../en-us/docs/1.3.9/user_doc}/open-api.md         |    0
 .../1.3.9/user_doc/parameters-introduction.md      |    0
 .../en-us/docs/1.3.9}/user_doc/quick-start.md      |    0
 .../1.3.9/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.9/user_doc/standalone-deployment.md  |    0
 .../docs}/1.3.9/user_doc/standalone-server.md      |    0
 .../en-us/docs}/1.3.9/user_doc/system-manual.md    |    0
 .../en-us/docs/1.3.9/user_doc}/task-structure.md   |    0
 .../en-us/docs/1.3.9}/user_doc/upgrade.md          |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.0}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.0}/user_doc/architecture/design.md    |    0
 .../2.0.0}/user_doc/architecture/designplus.md     |    0
 .../2.0.0/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.0}/user_doc/architecture/metadata.md  |    0
 .../2.0.0}/user_doc/architecture/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs}/2.0.0/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.0}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.0}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.0}/user_doc/guide/datasource/spark.md |    0
 .../2.0.0/user_doc/guide/expansion-reduction.md    |    0
 .../en-us/docs/2.0.0}/user_doc/guide/flink-call.md |    0
 .../en-us/docs/2.0.0}/user_doc/guide/homepage.md   |    0
 .../2.0.0}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.0/user_doc/guide/installation/docker.md    |    0
 .../2.0.0}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.0}/user_doc/guide/introduction.md     |    0
 .../en-us/docs/2.0.0}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../en-us/docs/2.0.0}/user_doc/guide/open-api.md   |    0
 .../2.0.0}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.0}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.0}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.0}/user_doc/guide/parameter/local.md  |    0
 .../2.0.0}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.0}/user_doc/guide/project/project-list.md  |    0
 .../2.0.0}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.0}/user_doc/guide/quick-start.md      |    0
 .../en-us/docs/2.0.0}/user_doc/guide/resource.md   |    0
 .../en-us/docs/2.0.0}/user_doc/guide/security.md   |    0
 .../docs/2.0.0}/user_doc/guide/task/conditions.md  |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.0}/user_doc/guide/task/dependent.md   |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/flink.md |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/http.md  |    0
 .../docs/2.0.0}/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.0}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.0}/user_doc/guide/task/python.md      |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/shell.md |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/spark.md |    0
 .../en-us/docs/2.0.0}/user_doc/guide/task/sql.md   |    0
 .../2.0.0}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.0}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.0}/user_doc/guide/task/switch.md      |    0
 .../en-us/docs}/2.0.0/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.1}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.1}/user_doc/architecture/design.md    |    0
 .../2.0.1}/user_doc/architecture/designplus.md     |    0
 .../2.0.1/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.1}/user_doc/architecture/metadata.md  |    0
 .../2.0.1}/user_doc/architecture/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs/2.0.1}/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.1}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.1}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.1}/user_doc/guide/datasource/spark.md |    0
 .../2.0.1/user_doc/guide/expansion-reduction.md    |    0
 .../en-us/docs/2.0.1}/user_doc/guide/flink-call.md |    0
 .../en-us/docs/2.0.1}/user_doc/guide/homepage.md   |    0
 .../2.0.1}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.1/user_doc/guide/installation/docker.md    |    0
 .../2.0.1}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.1}/user_doc/guide/introduction.md     |    0
 .../en-us/docs/2.0.1}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../en-us/docs/2.0.1}/user_doc/guide/open-api.md   |    0
 .../2.0.1}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.1}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.1}/user_doc/guide/parameter/global.md |    0
 .../docs}/2.0.1/user_doc/guide/parameter/local.md  |    0
 .../2.0.1}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.1}/user_doc/guide/project/project-list.md  |    0
 .../2.0.1}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.1}/user_doc/guide/quick-start.md      |    0
 .../en-us/docs}/2.0.1/user_doc/guide/resource.md   |    0
 .../en-us/docs/2.0.1}/user_doc/guide/security.md   |    0
 .../docs/2.0.1}/user_doc/guide/task/conditions.md  |    0
 .../en-us/docs/2.0.1}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.1}/user_doc/guide/task/dependent.md   |    0
 .../en-us/docs}/2.0.1/user_doc/guide/task/flink.md |    0
 .../en-us/docs/2.0.1}/user_doc/guide/task/http.md  |    0
 .../docs}/2.0.1/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.1}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.1}/user_doc/guide/task/python.md      |    0
 .../en-us/docs/2.0.1}/user_doc/guide/task/shell.md |    0
 .../en-us/docs}/2.0.1/user_doc/guide/task/spark.md |    0
 .../en-us/docs/2.0.1}/user_doc/guide/task/sql.md   |    0
 .../2.0.1}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.1}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.1}/user_doc/guide/task/switch.md      |    0
 .../en-us/docs/2.0.1}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.2}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.2}/user_doc/architecture/design.md    |    0
 .../2.0.2}/user_doc/architecture/designplus.md     |    0
 .../2.0.2/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.2}/user_doc/architecture/metadata.md  |    0
 .../2.0.2/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs/2.0.2}/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.2}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.2}/user_doc/guide/datasource/postgresql.md |    0
 .../docs}/2.0.2/user_doc/guide/datasource/spark.md |    0
 .../2.0.2/user_doc/guide/expansion-reduction.md    |    0
 .../en-us/docs/2.0.2/user_doc/guide}/flink-call.md |    0
 .../en-us/docs}/2.0.2/user_doc/guide/homepage.md   |    0
 .../2.0.2}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.2/user_doc/guide/installation/docker.md    |    0
 .../2.0.2}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.2}/user_doc/guide/introduction.md     |    0
 .../en-us/docs/2.0.2}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../en-us/docs/2.0.2/user_doc/guide}/open-api.md   |    0
 .../2.0.2}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.2}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.2}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.2}/user_doc/guide/parameter/local.md  |    0
 .../2.0.2/user_doc/guide/parameter/priority.md     |    0
 .../2.0.2}/user_doc/guide/project/project-list.md  |    0
 .../2.0.2}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs}/2.0.2/user_doc/guide/quick-start.md      |    0
 .../en-us/docs/2.0.2}/user_doc/guide/resource.md   |    0
 .../en-us/docs/2.0.2}/user_doc/guide/security.md   |    0
 .../docs/2.0.2}/user_doc/guide/task/conditions.md  |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.2}/user_doc/guide/task/dependent.md   |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/flink.md |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/http.md  |    0
 .../docs/2.0.2}/user_doc/guide/task/map-reduce.md  |    0
 .../docs}/2.0.2/user_doc/guide/task/pigeon.md      |    0
 .../docs}/2.0.2/user_doc/guide/task/python.md      |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/shell.md |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/spark.md |    0
 .../en-us/docs/2.0.2}/user_doc/guide/task/sql.md   |    0
 .../2.0.2}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.2}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.2}/user_doc/guide/task/switch.md      |    0
 .../en-us/docs}/2.0.2/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../docs}/2.0.3/user_doc/architecture/cache.md     |    0
 .../2.0.3/user_doc/architecture/configuration.md   |    0
 .../docs}/2.0.3/user_doc/architecture/design.md    |    0
 .../2.0.3/user_doc/architecture/designplus.md      |    0
 .../2.0.3/user_doc/architecture/load-balance.md    |    0
 .../docs}/2.0.3/user_doc/architecture/metadata.md  |    0
 .../2.0.3/user_doc/architecture/task-structure.md  |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs}/2.0.3/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs}/2.0.3/user_doc/guide/datasource/mysql.md |    0
 .../2.0.3/user_doc/guide/datasource/postgresql.md  |    0
 .../docs/2.0.3}/user_doc/guide/datasource/spark.md |    0
 .../2.0.3/user_doc/guide/expansion-reduction.md    |    0
 .../en-us/docs}/2.0.3/user_doc/guide/flink-call.md |    0
 .../en-us/docs/2.0.3}/user_doc/guide/homepage.md   |    0
 .../2.0.3/user_doc/guide/installation/cluster.md   |    0
 .../2.0.3/user_doc/guide/installation/docker.md    |    0
 .../2.0.3}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.3}/user_doc/guide/introduction.md     |    0
 .../en-us/docs}/2.0.3/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../en-us/docs}/2.0.3/user_doc/guide/open-api.md   |    0
 .../2.0.3/user_doc/guide/parameter/built-in.md     |    0
 .../2.0.3/user_doc/guide/parameter/context.md      |    0
 .../docs/2.0.3}/user_doc/guide/parameter/global.md |    0
 .../docs}/2.0.3/user_doc/guide/parameter/local.md  |    0
 .../2.0.3}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.3}/user_doc/guide/project/project-list.md  |    0
 .../2.0.3/user_doc/guide/project/task-instance.md  |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.3}/user_doc/guide/quick-start.md      |    0
 .../en-us/docs}/2.0.3/user_doc/guide/resource.md   |    0
 .../en-us/docs}/2.0.3/user_doc/guide/security.md   |    0
 .../docs/2.0.3}/user_doc/guide/task/conditions.md  |    0
 .../en-us/docs/2.0.3}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.3}/user_doc/guide/task/dependent.md   |    0
 .../en-us/docs}/2.0.3/user_doc/guide/task/flink.md |    0
 .../en-us/docs/2.0.3}/user_doc/guide/task/http.md  |    0
 .../docs}/2.0.3/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.3}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.3}/user_doc/guide/task/python.md      |    0
 .../en-us/docs}/2.0.3/user_doc/guide/task/shell.md |    0
 .../en-us/docs}/2.0.3/user_doc/guide/task/spark.md |    0
 .../en-us/docs}/2.0.3/user_doc/guide/task/sql.md   |    0
 .../2.0.3}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.3}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.3}/user_doc/guide/task/switch.md      |    0
 .../en-us/docs}/2.0.3/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../docs}/2.0.5/user_doc/architecture/cache.md     |    0
 .../2.0.5}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.5}/user_doc/architecture/design.md    |    0
 .../2.0.5}/user_doc/architecture/designplus.md     |    0
 .../2.0.5/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.5}/user_doc/architecture/metadata.md  |    0
 .../2.0.5/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../docs}/2.0.5/user_doc/guide/alert/dingtalk.md   |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs}/2.0.5/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.5}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.5}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.5}/user_doc/guide/datasource/spark.md |    0
 .../2.0.5/user_doc/guide/expansion-reduction.md    |    0
 .../en-us/docs/2.0.5/user_doc/guide}/flink-call.md |    0
 .../en-us/docs/2.0.5}/user_doc/guide/homepage.md   |    0
 .../2.0.5}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.5/user_doc/guide/installation/docker.md    |    0
 .../2.0.5}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.5}/user_doc/guide/introduction.md     |    0
 .../en-us/docs/2.0.5}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../en-us/docs/2.0.5/user_doc/guide}/open-api.md   |    0
 .../2.0.5}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.5}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.5}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.5}/user_doc/guide/parameter/local.md  |    0
 .../2.0.5}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.5}/user_doc/guide/project/project-list.md  |    0
 .../2.0.5}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.5}/user_doc/guide/quick-start.md      |    0
 .../en-us/docs}/2.0.5/user_doc/guide/resource.md   |    0
 .../en-us/docs/2.0.5}/user_doc/guide/security.md   |    0
 .../docs/2.0.5}/user_doc/guide/task/conditions.md  |    0
 .../en-us/docs/2.0.5}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.5}/user_doc/guide/task/dependent.md   |    0
 .../en-us/docs}/2.0.5/user_doc/guide/task/flink.md |    0
 .../en-us/docs/2.0.5}/user_doc/guide/task/http.md  |    0
 .../docs}/2.0.5/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.5}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.5}/user_doc/guide/task/python.md      |    0
 .../en-us/docs/2.0.5}/user_doc/guide/task/shell.md |    0
 .../en-us/docs}/2.0.5/user_doc/guide/task/spark.md |    0
 .../en-us/docs/2.0.5}/user_doc/guide/task/sql.md   |    0
 .../2.0.5}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.5}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.5}/user_doc/guide/task/switch.md      |    0
 .../en-us/docs/2.0.5}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../en-us/docs}/dev/user_doc/architecture/cache.md |    0
 .../dev/user_doc/architecture/configuration.md     |    0
 .../docs}/dev/user_doc/architecture/design.md      |    0
 .../dev/user_doc/architecture/load-balance.md      |    0
 .../docs}/dev/user_doc/architecture/metadata.md    |    0
 .../dev/user_doc/architecture/task-structure.md    |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../docs}/dev/user_doc/guide/alert/dingtalk.md     |    0
 .../user_doc/guide/alert/enterprise-webexteams.md  |    0
 .../dev/user_doc/guide/alert/enterprise-wechat.md  |    0
 .../docs}/dev/user_doc/guide/alert/telegram.md     |    0
 .../docs}/dev/user_doc/guide/datasource/hive.md    |    0
 .../dev/user_doc/guide/datasource/introduction.md  |    0
 .../docs}/dev/user_doc/guide/datasource/mysql.md   |    0
 .../dev/user_doc/guide/datasource/postgresql.md    |    0
 .../docs}/dev/user_doc/guide/datasource/spark.md   |    0
 .../dev/user_doc/guide/expansion-reduction.md      |    0
 .../en-us/docs}/dev/user_doc/guide/flink-call.md   |    0
 .../en-us/docs}/dev/user_doc/guide/homepage.md     |    0
 .../dev/user_doc/guide/installation/cluster.md     |    0
 .../dev/user_doc/guide/installation/docker.md      |    0
 .../dev}/user_doc/guide/installation/hardware.md   |    0
 .../dev/user_doc/guide/installation/kubernetes.md  |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../guide/installation/skywalking-agent.md         |    0
 .../dev}/user_doc/guide/installation/standalone.md |    0
 .../en-us/docs/dev}/user_doc/guide/introduction.md |    0
 .../en-us/docs}/dev/user_doc/guide/monitor.md      |    0
 .../en-us/docs}/dev/user_doc/guide/open-api.md     |    0
 .../docs}/dev/user_doc/guide/parameter/built-in.md |    0
 .../docs}/dev/user_doc/guide/parameter/context.md  |    0
 .../docs/dev}/user_doc/guide/parameter/global.md   |    0
 .../docs/dev}/user_doc/guide/parameter/local.md    |    0
 .../docs}/dev/user_doc/guide/parameter/priority.md |    0
 .../dev}/user_doc/guide/project/project-list.md    |    0
 .../dev/user_doc/guide/project/task-instance.md    |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../en-us/docs}/dev/user_doc/guide/quick-start.md  |    0
 .../en-us/docs}/dev/user_doc/guide/resource.md     |    0
 .../en-us/docs}/dev/user_doc/guide/security.md     |    0
 .../docs/dev}/user_doc/guide/task/conditions.md    |    0
 .../en-us/docs/dev}/user_doc/guide/task/datax.md   |    0
 .../docs/dev}/user_doc/guide/task/dependent.md     |    0
 .../en-us/docs}/dev/user_doc/guide/task/emr.md     |    0
 .../en-us/docs}/dev/user_doc/guide/task/flink.md   |    0
 .../en-us/docs/dev}/user_doc/guide/task/http.md    |    0
 .../docs}/dev/user_doc/guide/task/map-reduce.md    |    0
 .../en-us/docs}/dev/user_doc/guide/task/pigeon.md  |    0
 .../en-us/docs}/dev/user_doc/guide/task/python.md  |    0
 .../en-us/docs}/dev/user_doc/guide/task/shell.md   |    0
 .../en-us/docs}/dev/user_doc/guide/task/spark.md   |    0
 .../en-us/docs}/dev/user_doc/guide/task/sql.md     |    0
 .../dev}/user_doc/guide/task/stored-procedure.md   |    0
 .../docs/dev}/user_doc/guide/task/sub-process.md   |    0
 .../en-us/docs/dev}/user_doc/guide/task/switch.md  |    0
 .../en-us/docs}/dev/user_doc/guide/upgrade.md      |    0
 {docs/en-us => content/en-us/docs}/release/faq.md  |    0
 .../en-us/docs}/release/history-versions.md        |    0
 .../en-us => content/en-us/download}/download.md   |    0
 .../en-us/download}/download_ppt.md                |    0
 .../zh-cn/blog}/Apache-DolphinScheduler-2.0.1.md   |    0
 .../zh-cn/blog}/Apache_dolphinScheduler_2.0.2.md   |    0
 .../zh-cn/blog}/Apache_dolphinScheduler_2.0.3.md   |    0
 .../blog}/Awarded_most_popular_project_in_2021.md  |    0
 .../zh-cn/blog}/Board_of_Directors_Report.md       |    0
 {blog/zh-cn => content/zh-cn/blog}/DAG.md          |    0
 .../zh-cn/blog}/DS-2.0-alpha-release.md            |    0
 .../zh-cn/blog}/DS_architecture_evolution.md       |  270 +--
 .../zh-cn/blog}/DS_run_in_windows.md               |    0
 ...hinScheduler_Kubernetes_Technology_in_action.md |  914 ++++----
 ...203\205\345\206\265\350\257\264\346\230\216.md" |    0
 {blog/zh-cn => content/zh-cn/blog}/Eavy_Info.md    |    0
 .../zh-cn/blog}/Lizhi-case-study.md                |    0
 .../zh-cn/blog}/Meetup_2022_02_26.md               |    0
 {blog/zh-cn => content/zh-cn/blog}/Twos.md         |    0
 .../zh-cn/blog}/YouZan-case-study.md               |    0
 .../zh-cn/blog}/about_blocking_task.md             |    0
 .../zh-cn/blog}/architecture-design.md             |    0
 .../zh-cn => content/zh-cn/blog}/cicd_workflow.md  |    0
 .../zh-cn/blog}/dolphinscheduler_json.md           | 1598 +++++++-------
 .../zh-cn/blog}/ipalfish_tech_platform.md          |    0
 {blog/zh-cn => content/zh-cn/blog}/json_split.md   |    0
 .../zh-cn/blog}/live_online_2020_05_26.md          |    0
 .../zh-cn/blog}/meetup_2019_10_26.md               |    0
 .../zh-cn/blog}/meetup_2019_12_08.md               |    0
 .../zh-cn/blog}/new_committer_wenjun.md            |    0
 {blog/zh-cn => content/zh-cn/blog}/ut-guideline.md |    0
 {blog/zh-cn => content/zh-cn/blog}/ut-template.md  |    0
 .../zh-cn => content/zh-cn/community}/DSIP.md      |    0
 .../zh-cn/community}/development/DS-License.md     |    0
 .../community}/development/become-a-committer.md   |    0
 .../zh-cn/community}/development/code-conduct.md   |    0
 .../zh-cn/community}/development/commit-message.md |    0
 .../zh-cn/community}/development/contribute.md     |    0
 .../zh-cn/community}/development/document.md       |    0
 .../zh-cn/community}/development/issue.md          |    0
 .../zh-cn/community}/development/microbench.md     |    0
 .../zh-cn/community}/development/pull-request.md   |    0
 .../zh-cn/community}/development/submit-code.md    |    0
 .../zh-cn/community}/development/subscribe.md      |    0
 .../zh-cn/community}/development/unit-test.md      |    0
 .../zh-cn/community}/join/e2e-guide.md             |    0
 .../zh-cn/community}/join/review.md                |    0
 .../zh-cn/community}/release-post.md               |    0
 .../zh-cn/community}/release-prepare.md            |    0
 .../zh-cn => content/zh-cn/community}/release.md   |    0
 .../zh-cn => content/zh-cn/community}/security.md  |    0
 .../zh-cn => content/zh-cn/community}/team.md      |    0
 .../zh-cn/development}/api-standard.md             |    0
 .../zh-cn/development}/architecture-design.md      |    0
 .../backend/mechanism/global-parameter.md          |    0
 .../development}/backend/mechanism/overview.md     |    0
 .../development}/backend/mechanism/task/switch.md  |    0
 .../zh-cn/development}/backend/spi/alert.md        |    0
 .../zh-cn/development}/backend/spi/datasource.md   |    0
 .../zh-cn/development}/backend/spi/registry.md     |    0
 .../zh-cn/development}/backend/spi/task.md         |    0
 .../development}/development-environment-setup.md  |    0
 .../zh-cn/development}/e2e-test.md                 |    0
 .../zh-cn/development}/frontend-development.md     |    0
 .../zh-cn/development}/have-questions.md           |    0
 .../docs}/1.2.0/user_doc/backend-deployment.md     |    0
 .../docs}/1.2.0/user_doc/cluster-deployment.md     |    0
 .../zh-cn/docs}/1.2.0/user_doc/deployparam.md      |    0
 .../docs/1.2.0}/user_doc/frontend-deployment.md    |    0
 .../docs/1.2.0}/user_doc/hardware-environment.md   |    0
 .../1.2.0/user_doc/masterserver-code-analysis.md   |    0
 .../zh-cn/docs/1.2.0}/user_doc/metadata-1.2.md     |    0
 .../zh-cn/docs/1.2.0}/user_doc/quick-start.md      |    0
 .../docs}/1.2.0/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.2.0/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.2.0}/user_doc/upgrade.md          |    0
 .../docs}/1.2.1/user_doc/architecture-design.md    |    0
 .../docs}/1.2.1/user_doc/backend-deployment.md     |    0
 .../docs}/1.2.1/user_doc/cluster-deployment.md     |    0
 .../zh-cn/docs}/1.2.1/user_doc/deployparam.md      |    0
 .../docs/1.2.1}/user_doc/frontend-deployment.md    |    0
 .../docs/1.2.1}/user_doc/hardware-environment.md   |    0
 .../zh-cn/docs/1.2.1}/user_doc/metadata-1.2.md     |    0
 .../zh-cn/docs}/1.2.1/user_doc/microbench.md       |    0
 .../docs}/1.2.1/user_doc/plugin-development.md     |    0
 .../zh-cn/docs/1.2.1}/user_doc/quick-start.md      |    0
 .../docs}/1.2.1/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.2.1/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.2.1}/user_doc/upgrade.md          |    0
 .../docs/1.3.1}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.1/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.1/user_doc/configuration-file.md     |    0
 .../docs/1.3.1}/user_doc/hardware-environment.md   |    0
 .../zh-cn/docs/1.3.1}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs}/1.3.1/user_doc/quick-start.md      |    0
 .../docs}/1.3.1/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.1/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.1}/user_doc/task-structure.md   |    0
 .../zh-cn/docs}/1.3.1/user_doc/upgrade.md          |    0
 .../docs/1.3.2}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.2/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.2/user_doc/configuration-file.md     |    0
 .../docs}/1.3.2/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.2}/user_doc/hardware-environment.md   |    0
 .../zh-cn/docs/1.3.2}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs}/1.3.2/user_doc/quick-start.md      |    0
 .../docs}/1.3.2/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.2/user_doc/system-manual.md    |    0
 .../zh-cn/docs}/1.3.2/user_doc/task-structure.md   |    0
 .../zh-cn/docs/1.3.2}/user_doc/upgrade.md          |    0
 .../docs/1.3.3}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.3/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.3/user_doc/configuration-file.md     |    0
 .../docs}/1.3.3/user_doc/expansion-reduction.md    |    0
 .../docs}/1.3.3/user_doc/hardware-environment.md   |    0
 .../zh-cn/docs/1.3.3}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs}/1.3.3/user_doc/quick-start.md      |    0
 .../docs}/1.3.3/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.3/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.3}/user_doc/task-structure.md   |    0
 .../zh-cn/docs/1.3.3}/user_doc/upgrade.md          |    0
 .../docs}/1.3.4/user_doc/architecture-design.md    |    0
 .../docs}/1.3.4/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.4/user_doc/configuration-file.md     |    0
 .../docs}/1.3.4/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.4/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.4}/user_doc/hardware-environment.md   |    0
 .../zh-cn/docs/1.3.4}/user_doc/load-balance.md     |    0
 .../zh-cn/docs/1.3.4}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs}/1.3.4/user_doc/quick-start.md      |    0
 .../docs}/1.3.4/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.4/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.4/user_doc}/task-structure.md   |    0
 .../zh-cn/docs/1.3.4}/user_doc/upgrade.md          |    0
 .../docs/1.3.5}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.5/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.5/user_doc/configuration-file.md     |    0
 .../docs}/1.3.5/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.5/user_doc/expansion-reduction.md    |    0
 .../docs/1.3.5}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.5/user_doc/kubernetes-deployment.md  |    0
 .../zh-cn/docs/1.3.5}/user_doc/load-balance.md     |    0
 .../zh-cn/docs/1.3.5}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs}/1.3.5/user_doc/open-api.md         |    0
 .../zh-cn/docs}/1.3.5/user_doc/quick-start.md      |    0
 .../docs}/1.3.5/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.5/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.5/user_doc}/task-structure.md   |    0
 .../zh-cn/docs/1.3.5}/user_doc/upgrade.md          |    0
 .../docs/1.3.6}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.6/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.6}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.6/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.6/user_doc/expansion-reduction.md    |    0
 .../zh-cn/docs/1.3.6/user_doc}/flink-call.md       |    0
 .../docs/1.3.6}/user_doc/hardware-environment.md   |    0
 .../docs}/1.3.6/user_doc/kubernetes-deployment.md  |    0
 .../zh-cn/docs/1.3.6/user_doc}/load-balance.md     |    0
 .../zh-cn/docs/1.3.6}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs/1.3.6/user_doc}/open-api.md         |    0
 .../zh-cn/docs/1.3.6}/user_doc/quick-start.md      |    0
 .../1.3.6/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.6/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs/1.3.6}/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.6/user_doc}/task-structure.md   |    0
 .../zh-cn/docs}/1.3.6/user_doc/upgrade.md          |    0
 .../docs/1.3.8}/user_doc/architecture-design.md    |    0
 .../docs}/1.3.8/user_doc/cluster-deployment.md     |    0
 .../docs/1.3.8}/user_doc/configuration-file.md     |    0
 .../docs}/1.3.8/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.8/user_doc/expansion-reduction.md    |    0
 .../zh-cn/docs/1.3.8/user_doc}/flink-call.md       |    0
 .../docs}/1.3.8/user_doc/hardware-environment.md   |    0
 .../docs/1.3.8}/user_doc/kubernetes-deployment.md  |    0
 .../zh-cn/docs/1.3.8/user_doc}/load-balance.md     |    0
 .../zh-cn/docs/1.3.8}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs/1.3.8/user_doc}/open-api.md         |    0
 .../1.3.8/user_doc/parameters-introduction.md      |  172 +-
 .../zh-cn/docs}/1.3.8/user_doc/quick-start.md      |    0
 .../1.3.8/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.8/user_doc/standalone-deployment.md  |    0
 .../zh-cn/docs}/1.3.8/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.8/user_doc}/task-structure.md   |    0
 .../zh-cn/docs/1.3.8}/user_doc/upgrade.md          |    0
 .../docs}/1.3.9/user_doc/architecture-design.md    |    0
 .../docs}/1.3.9/user_doc/cluster-deployment.md     |    0
 .../docs}/1.3.9/user_doc/configuration-file.md     |    0
 .../docs}/1.3.9/user_doc/docker-deployment.md      |    0
 .../docs}/1.3.9/user_doc/expansion-reduction.md    |    0
 .../zh-cn/docs/1.3.9/user_doc}/flink-call.md       |    0
 .../docs/1.3.9}/user_doc/hardware-environment.md   |    0
 .../docs/1.3.9}/user_doc/kubernetes-deployment.md  |    0
 .../zh-cn/docs/1.3.9/user_doc}/load-balance.md     |    0
 .../zh-cn/docs/1.3.9}/user_doc/metadata-1.3.md     |    0
 .../zh-cn/docs/1.3.9/user_doc}/open-api.md         |    0
 .../1.3.9/user_doc/parameters-introduction.md      |  172 +-
 .../zh-cn/docs/1.3.9}/user_doc/quick-start.md      |    0
 .../1.3.9/user_doc/skywalking-agent-deployment.md  |    0
 .../docs}/1.3.9/user_doc/standalone-deployment.md  |    0
 .../docs}/1.3.9/user_doc/standalone-server.md      |    0
 .../zh-cn/docs/1.3.9}/user_doc/system-manual.md    |    0
 .../zh-cn/docs/1.3.9/user_doc}/task-structure.md   |    0
 .../zh-cn/docs/1.3.9}/user_doc/upgrade.md          |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.0}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.0}/user_doc/architecture/design.md    |    0
 .../2.0.0}/user_doc/architecture/designplus.md     |    0
 .../2.0.0}/user_doc/architecture/load-balance.md   |    0
 .../docs/2.0.0}/user_doc/architecture/metadata.md  |    0
 .../2.0.0/user_doc/architecture/task-structure.md  |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs}/2.0.0/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.0}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.0}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.0}/user_doc/guide/datasource/spark.md |    0
 .../2.0.0/user_doc/guide/expansion-reduction.md    |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/flink-call.md |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/homepage.md   |    0
 .../2.0.0}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.0/user_doc/guide/installation/docker.md    |    0
 .../2.0.0}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.0}/user_doc/guide/introduction.md     |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/open-api.md   |    0
 .../2.0.0}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.0}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.0}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.0}/user_doc/guide/parameter/local.md  |    0
 .../2.0.0}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.0}/user_doc/guide/project/project-list.md  |    0
 .../2.0.0}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.0}/user_doc/guide/quick-start.md      |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/resource.md   |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/security.md   |    0
 .../docs/2.0.0}/user_doc/guide/task/conditions.md  |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.0}/user_doc/guide/task/dependent.md   |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/task/flink.md |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/task/http.md  |    0
 .../docs/2.0.0}/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.0}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.0}/user_doc/guide/task/python.md      |    0
 .../zh-cn/docs}/2.0.0/user_doc/guide/task/shell.md |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/task/spark.md |    0
 .../zh-cn/docs/2.0.0}/user_doc/guide/task/sql.md   |    0
 .../2.0.0}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.0}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.0}/user_doc/guide/task/switch.md      |    0
 .../zh-cn/docs}/2.0.0/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.1}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.1}/user_doc/architecture/design.md    |    0
 .../2.0.1}/user_doc/architecture/designplus.md     |    0
 .../2.0.1/user_doc/architecture/load-balance.md    |    0
 .../docs/2.0.1}/user_doc/architecture/metadata.md  |    0
 .../2.0.1/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs/2.0.1}/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.1}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.1}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.1}/user_doc/guide/datasource/spark.md |    0
 .../2.0.1/user_doc/guide/expansion-reduction.md    |    0
 .../zh-cn/docs}/2.0.1/user_doc/guide/flink-call.md |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/homepage.md   |    0
 .../2.0.1/user_doc/guide/installation/cluster.md   |    0
 .../2.0.1/user_doc/guide/installation/docker.md    |    0
 .../2.0.1}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.1}/user_doc/guide/introduction.md     |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../zh-cn/docs}/2.0.1/user_doc/guide/open-api.md   |    0
 .../2.0.1}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.1}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.1}/user_doc/guide/parameter/global.md |    0
 .../docs}/2.0.1/user_doc/guide/parameter/local.md  |    0
 .../2.0.1}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.1}/user_doc/guide/project/project-list.md  |    0
 .../2.0.1}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.1}/user_doc/guide/quick-start.md      |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/resource.md   |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/security.md   |    0
 .../docs/2.0.1}/user_doc/guide/task/conditions.md  |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.1}/user_doc/guide/task/dependent.md   |    0
 .../zh-cn/docs}/2.0.1/user_doc/guide/task/flink.md |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/task/http.md  |    0
 .../docs}/2.0.1/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.1}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.1}/user_doc/guide/task/python.md      |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/task/shell.md |    0
 .../zh-cn/docs}/2.0.1/user_doc/guide/task/spark.md |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/task/sql.md   |    0
 .../2.0.1}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.1}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.1}/user_doc/guide/task/switch.md      |    0
 .../zh-cn/docs/2.0.1}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../2.0.2}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.2}/user_doc/architecture/design.md    |    0
 .../2.0.2/user_doc/architecture/designplus.md      |    0
 .../2.0.2}/user_doc/architecture/load-balance.md   |    0
 .../docs}/2.0.2/user_doc/architecture/metadata.md  |    0
 .../2.0.2/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs/2.0.2}/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs}/2.0.2/user_doc/guide/datasource/mysql.md |    0
 .../2.0.2/user_doc/guide/datasource/postgresql.md  |    0
 .../docs/2.0.2}/user_doc/guide/datasource/spark.md |    0
 .../2.0.2/user_doc/guide/expansion-reduction.md    |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/flink-call.md |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/homepage.md   |    0
 .../2.0.2}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.2/user_doc/guide/installation/docker.md    |    0
 .../2.0.2}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs}/2.0.2/user_doc/guide/introduction.md     |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/open-api.md   |    0
 .../2.0.2}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.2}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.2}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.2}/user_doc/guide/parameter/local.md  |    0
 .../2.0.2}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.2}/user_doc/guide/project/project-list.md  |    0
 .../2.0.2}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.2}/user_doc/guide/quick-start.md      |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/resource.md   |    0
 .../zh-cn/docs}/2.0.2/user_doc/guide/security.md   |    0
 .../docs/2.0.2}/user_doc/guide/task/conditions.md  |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.2}/user_doc/guide/task/dependent.md   |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/flink.md |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/http.md  |    0
 .../docs/2.0.2}/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.2}/user_doc/guide/task/pigeon.md      |    0
 .../docs}/2.0.2/user_doc/guide/task/python.md      |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/shell.md |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/spark.md |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/task/sql.md   |    0
 .../2.0.2}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.2}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.2}/user_doc/guide/task/switch.md      |    0
 .../zh-cn/docs/2.0.2}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../docs/2.0.3}/user_doc/architecture/cache.md     |    0
 .../2.0.3/user_doc/architecture/configuration.md   |    0
 .../docs}/2.0.3/user_doc/architecture/design.md    |    0
 .../2.0.3}/user_doc/architecture/designplus.md     |    0
 .../2.0.3/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.3}/user_doc/architecture/metadata.md  |    0
 .../2.0.3/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs/2.0.3}/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.3}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.3}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.3}/user_doc/guide/datasource/spark.md |    0
 .../2.0.3/user_doc/guide/expansion-reduction.md    |    0
 .../zh-cn/docs/2.0.3/user_doc/guide}/flink-call.md |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/homepage.md   |    0
 .../2.0.3/user_doc/guide/installation/cluster.md   |    0
 .../2.0.3/user_doc/guide/installation/docker.md    |    0
 .../2.0.3}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.3}/user_doc/guide/introduction.md     |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../zh-cn/docs/2.0.3/user_doc/guide}/open-api.md   |    0
 .../2.0.3/user_doc/guide/parameter/built-in.md     |    0
 .../2.0.3/user_doc/guide/parameter/context.md      |    0
 .../docs/2.0.3}/user_doc/guide/parameter/global.md |    0
 .../docs}/2.0.3/user_doc/guide/parameter/local.md  |    0
 .../2.0.3}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.3}/user_doc/guide/project/project-list.md  |    0
 .../2.0.3}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.3}/user_doc/guide/quick-start.md      |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/resource.md   |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/security.md   |    0
 .../docs/2.0.3}/user_doc/guide/task/conditions.md  |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.3}/user_doc/guide/task/dependent.md   |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/task/flink.md |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/task/http.md  |    0
 .../docs/2.0.3}/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.3}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.3}/user_doc/guide/task/python.md      |    0
 .../zh-cn/docs}/2.0.3/user_doc/guide/task/shell.md |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/task/spark.md |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/task/sql.md   |    0
 .../2.0.3}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.3}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.3}/user_doc/guide/task/switch.md      |    0
 .../zh-cn/docs/2.0.3}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../docs}/2.0.5/user_doc/architecture/cache.md     |    0
 .../2.0.5}/user_doc/architecture/configuration.md  |    0
 .../docs/2.0.5}/user_doc/architecture/design.md    |    0
 .../2.0.5}/user_doc/architecture/designplus.md     |    0
 .../2.0.5/user_doc/architecture}/load-balance.md   |    0
 .../docs/2.0.5}/user_doc/architecture/metadata.md  |    0
 .../2.0.5/user_doc/architecture}/task-structure.md |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../docs/2.0.5}/user_doc/guide/alert/dingtalk.md   |    0
 .../user_doc/guide/alert/enterprise-wechat.md      |    0
 .../docs}/2.0.5/user_doc/guide/datasource/hive.md  |    0
 .../user_doc/guide/datasource/introduction.md      |    0
 .../docs/2.0.5}/user_doc/guide/datasource/mysql.md |    0
 .../2.0.5}/user_doc/guide/datasource/postgresql.md |    0
 .../docs/2.0.5}/user_doc/guide/datasource/spark.md |    0
 .../2.0.5/user_doc/guide/expansion-reduction.md    |    0
 .../zh-cn/docs/2.0.5/user_doc/guide}/flink-call.md |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/homepage.md   |    0
 .../2.0.5}/user_doc/guide/installation/cluster.md  |    0
 .../2.0.5/user_doc/guide/installation/docker.md    |    0
 .../2.0.5}/user_doc/guide/installation/hardware.md |    0
 .../user_doc/guide/installation/kubernetes.md      |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../user_doc/guide/installation/standalone.md      |    0
 .../docs/2.0.5}/user_doc/guide/introduction.md     |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/monitor.md    |    0
 .../guide/observability/skywalking-agent.md        |    0
 .../zh-cn/docs/2.0.5/user_doc/guide}/open-api.md   |    0
 .../2.0.5}/user_doc/guide/parameter/built-in.md    |    0
 .../2.0.5}/user_doc/guide/parameter/context.md     |    0
 .../docs/2.0.5}/user_doc/guide/parameter/global.md |    0
 .../docs/2.0.5}/user_doc/guide/parameter/local.md  |    0
 .../2.0.5}/user_doc/guide/parameter/priority.md    |    0
 .../2.0.5}/user_doc/guide/project/project-list.md  |    0
 .../2.0.5}/user_doc/guide/project/task-instance.md |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../docs/2.0.5}/user_doc/guide/quick-start.md      |    0
 .../zh-cn/docs}/2.0.5/user_doc/guide/resource.md   |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/security.md   |    0
 .../docs/2.0.5}/user_doc/guide/task/conditions.md  |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/task/datax.md |    0
 .../docs/2.0.5}/user_doc/guide/task/dependent.md   |    0
 .../zh-cn/docs}/2.0.5/user_doc/guide/task/flink.md |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/task/http.md  |    0
 .../docs}/2.0.5/user_doc/guide/task/map-reduce.md  |    0
 .../docs/2.0.5}/user_doc/guide/task/pigeon.md      |    0
 .../docs/2.0.5}/user_doc/guide/task/python.md      |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/task/shell.md |    0
 .../zh-cn/docs}/2.0.5/user_doc/guide/task/spark.md |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/task/sql.md   |    0
 .../2.0.5}/user_doc/guide/task/stored-procedure.md |    0
 .../docs/2.0.5}/user_doc/guide/task/sub-process.md |    0
 .../docs/2.0.5}/user_doc/guide/task/switch.md      |    0
 .../zh-cn/docs/2.0.5}/user_doc/guide/upgrade.md    |    0
 .../About_DolphinScheduler.md                      |    0
 .../zh-cn/docs/dev}/user_doc/architecture/cache.md |    0
 .../dev/user_doc/architecture/configuration.md     |    0
 .../docs}/dev/user_doc/architecture/design.md      |    0
 .../dev/user_doc/architecture}/load-balance.md     |    0
 .../docs}/dev/user_doc/architecture/metadata.md    |    0
 .../dev/user_doc/architecture}/task-structure.md   |    0
 .../guide/alert/alert_plugin_user_guide.md         |    0
 .../docs/dev}/user_doc/guide/alert/dingtalk.md     |    0
 .../user_doc/guide/alert/enterprise-webexteams.md  |    0
 .../dev/user_doc/guide/alert/enterprise-wechat.md  |    0
 .../docs}/dev/user_doc/guide/alert/telegram.md     |    0
 .../docs/dev}/user_doc/guide/datasource/hive.md    |    0
 .../dev}/user_doc/guide/datasource/introduction.md |    0
 .../docs}/dev/user_doc/guide/datasource/mysql.md   |    0
 .../dev/user_doc/guide/datasource/postgresql.md    |    0
 .../docs/dev}/user_doc/guide/datasource/spark.md   |    0
 .../dev/user_doc/guide/expansion-reduction.md      |    0
 .../zh-cn/docs/dev/user_doc/guide}/flink-call.md   |    0
 .../zh-cn/docs/dev}/user_doc/guide/homepage.md     |    0
 .../dev}/user_doc/guide/installation/cluster.md    |    0
 .../dev/user_doc/guide/installation/docker.md      |    0
 .../dev}/user_doc/guide/installation/hardware.md   |    0
 .../dev/user_doc/guide/installation/kubernetes.md  |    0
 .../user_doc/guide/installation/pseudo-cluster.md  |    0
 .../guide/installation/skywalking-agent.md         |    0
 .../dev}/user_doc/guide/installation/standalone.md |    0
 .../zh-cn/docs}/dev/user_doc/guide/introduction.md |    0
 .../zh-cn/docs/dev}/user_doc/guide/monitor.md      |    0
 .../zh-cn/docs}/dev/user_doc/guide/open-api.md     |    0
 .../docs}/dev/user_doc/guide/parameter/built-in.md |    0
 .../docs}/dev/user_doc/guide/parameter/context.md  |    0
 .../docs/dev}/user_doc/guide/parameter/global.md   |    0
 .../docs/dev}/user_doc/guide/parameter/local.md    |    0
 .../docs/dev}/user_doc/guide/parameter/priority.md |    0
 .../dev}/user_doc/guide/project/project-list.md    |    0
 .../dev}/user_doc/guide/project/task-instance.md   |    0
 .../user_doc/guide/project/workflow-definition.md  |    0
 .../user_doc/guide/project/workflow-instance.md    |    0
 .../zh-cn/docs/dev}/user_doc/guide/quick-start.md  |    0
 .../zh-cn/docs}/dev/user_doc/guide/resource.md     |    0
 .../zh-cn/docs}/dev/user_doc/guide/security.md     |    0
 .../docs/dev}/user_doc/guide/task/conditions.md    |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/datax.md   |    0
 .../docs/dev}/user_doc/guide/task/dependent.md     |    0
 .../zh-cn/docs}/dev/user_doc/guide/task/emr.md     |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/flink.md   |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/http.md    |    0
 .../docs/dev}/user_doc/guide/task/map-reduce.md    |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/pigeon.md  |    0
 .../zh-cn/docs}/dev/user_doc/guide/task/python.md  |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/shell.md   |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/spark.md   |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/sql.md     |    0
 .../dev}/user_doc/guide/task/stored-procedure.md   |    0
 .../docs/dev}/user_doc/guide/task/sub-process.md   |    0
 .../zh-cn/docs/dev}/user_doc/guide/task/switch.md  |    0
 .../zh-cn/docs}/dev/user_doc/guide/upgrade.md      |    0
 {docs/zh-cn => content/zh-cn/docs}/release/faq.md  |    0
 .../zh-cn/docs}/release/history-versions.md        |    0
 .../zh-cn => content/zh-cn/download}/download.md   |    0
 .../zh-cn/download}/download_ppt.md                |    0
 docs/en-us/1.3.5/user_doc/architecture-design.md   |  332 ---
 docs/en-us/1.3.6/user_doc/architecture-design.md   |  332 ---
 docs/en-us/1.3.6/user_doc/open-api.md              |   64 -
 .../1.3.8/user_doc/parameters-introduction.md      |   80 -
 docs/zh-cn/1.3.6/user_doc/open-api.md              |   65 -
 1063 files changed, 8253 insertions(+), 9126 deletions(-)

diff --git a/blog/en-us/Apache-DolphinScheduler-2.0.1.md b/content/en-us/blog/Apache-DolphinScheduler-2.0.1.md
similarity index 100%
rename from blog/en-us/Apache-DolphinScheduler-2.0.1.md
rename to content/en-us/blog/Apache-DolphinScheduler-2.0.1.md
diff --git a/blog/en-us/Apache_dolphinScheduler_2.0.2.md b/content/en-us/blog/Apache_dolphinScheduler_2.0.2.md
similarity index 100%
rename from blog/en-us/Apache_dolphinScheduler_2.0.2.md
rename to content/en-us/blog/Apache_dolphinScheduler_2.0.2.md
diff --git a/blog/en-us/Apache_dolphinScheduler_2.0.3.md b/content/en-us/blog/Apache_dolphinScheduler_2.0.3.md
similarity index 100%
rename from blog/en-us/Apache_dolphinScheduler_2.0.3.md
rename to content/en-us/blog/Apache_dolphinScheduler_2.0.3.md
diff --git a/blog/en-us/Awarded_most_popular_project_in_2021.md b/content/en-us/blog/Awarded_most_popular_project_in_2021.md
similarity index 100%
rename from blog/en-us/Awarded_most_popular_project_in_2021.md
rename to content/en-us/blog/Awarded_most_popular_project_in_2021.md
diff --git a/blog/en-us/Board_of_Directors_Report.md b/content/en-us/blog/Board_of_Directors_Report.md
similarity index 100%
rename from blog/en-us/Board_of_Directors_Report.md
rename to content/en-us/blog/Board_of_Directors_Report.md
diff --git a/blog/en-us/DAG.md b/content/en-us/blog/DAG.md
similarity index 100%
rename from blog/en-us/DAG.md
rename to content/en-us/blog/DAG.md
diff --git a/blog/en-us/DS-2.0-alpha-release.md b/content/en-us/blog/DS-2.0-alpha-release.md
similarity index 100%
rename from blog/en-us/DS-2.0-alpha-release.md
rename to content/en-us/blog/DS-2.0-alpha-release.md
diff --git a/blog/en-us/DS_run_in_windows.md b/content/en-us/blog/DS_run_in_windows.md
similarity index 100%
rename from blog/en-us/DS_run_in_windows.md
rename to content/en-us/blog/DS_run_in_windows.md
diff --git a/blog/en-us/DolphinScheduler-Vulnerability-Explanation.md b/content/en-us/blog/DolphinScheduler-Vulnerability-Explanation.md
similarity index 100%
rename from blog/en-us/DolphinScheduler-Vulnerability-Explanation.md
rename to content/en-us/blog/DolphinScheduler-Vulnerability-Explanation.md
diff --git a/blog/en-us/DolphinScheduler_Kubernetes_Technology_in_action.md b/content/en-us/blog/DolphinScheduler_Kubernetes_Technology_in_action.md
similarity index 100%
rename from blog/en-us/DolphinScheduler_Kubernetes_Technology_in_action.md
rename to content/en-us/blog/DolphinScheduler_Kubernetes_Technology_in_action.md
diff --git a/blog/en-us/Eavy_Info.md b/content/en-us/blog/Eavy_Info.md
similarity index 100%
rename from blog/en-us/Eavy_Info.md
rename to content/en-us/blog/Eavy_Info.md
diff --git a/blog/en-us/FAQ.md b/content/en-us/blog/FAQ.md
similarity index 100%
rename from blog/en-us/FAQ.md
rename to content/en-us/blog/FAQ.md
diff --git a/blog/en-us/Introducing-Apache-DolphinScheduler-1.3.9.md b/content/en-us/blog/Introducing-Apache-DolphinScheduler-1.3.9.md
similarity index 100%
rename from blog/en-us/Introducing-Apache-DolphinScheduler-1.3.9.md
rename to content/en-us/blog/Introducing-Apache-DolphinScheduler-1.3.9.md
diff --git a/blog/en-us/Json_Split.md b/content/en-us/blog/Json_Split.md
similarity index 100%
rename from blog/en-us/Json_Split.md
rename to content/en-us/blog/Json_Split.md
diff --git a/blog/en-us/Lizhi-case-study.md b/content/en-us/blog/Lizhi-case-study.md
similarity index 100%
rename from blog/en-us/Lizhi-case-study.md
rename to content/en-us/blog/Lizhi-case-study.md
diff --git a/blog/en-us/Meetup_2022_02_26.md b/content/en-us/blog/Meetup_2022_02_26.md
similarity index 100%
rename from blog/en-us/Meetup_2022_02_26.md
rename to content/en-us/blog/Meetup_2022_02_26.md
diff --git a/blog/en-us/Twos.md b/content/en-us/blog/Twos.md
similarity index 100%
rename from blog/en-us/Twos.md
rename to content/en-us/blog/Twos.md
diff --git a/blog/en-us/YouZan-case-study.md b/content/en-us/blog/YouZan-case-study.md
similarity index 100%
rename from blog/en-us/YouZan-case-study.md
rename to content/en-us/blog/YouZan-case-study.md
diff --git a/blog/en-us/architecture-design.md b/content/en-us/blog/architecture-design.md
similarity index 100%
rename from blog/en-us/architecture-design.md
rename to content/en-us/blog/architecture-design.md
diff --git a/blog/en-us/meetup_2019_10_26.md b/content/en-us/blog/meetup_2019_10_26.md
similarity index 100%
rename from blog/en-us/meetup_2019_10_26.md
rename to content/en-us/blog/meetup_2019_10_26.md
diff --git a/blog/en-us/meetup_2019_12_08.md b/content/en-us/blog/meetup_2019_12_08.md
similarity index 100%
rename from blog/en-us/meetup_2019_12_08.md
rename to content/en-us/blog/meetup_2019_12_08.md
diff --git a/community/en-us/DSIP.md b/content/en-us/community/DSIP.md
similarity index 100%
rename from community/en-us/DSIP.md
rename to content/en-us/community/DSIP.md
diff --git a/community/en-us/development/DS-License.md b/content/en-us/community/development/DS-License.md
similarity index 100%
rename from community/en-us/development/DS-License.md
rename to content/en-us/community/development/DS-License.md
diff --git a/community/en-us/development/become-a-committer.md b/content/en-us/community/development/become-a-committer.md
similarity index 100%
rename from community/en-us/development/become-a-committer.md
rename to content/en-us/community/development/become-a-committer.md
diff --git a/community/en-us/development/code-conduct.md b/content/en-us/community/development/code-conduct.md
similarity index 98%
rename from community/en-us/development/code-conduct.md
rename to content/en-us/community/development/code-conduct.md
index b3c0fbf..5505e95 100644
--- a/community/en-us/development/code-conduct.md
+++ b/content/en-us/community/development/code-conduct.md
@@ -1,68 +1,68 @@
-# Code of Conduct
-
-The following Code of Conduct is based on full compliance with the [Apache Software Foundation Code of Conduct](https://www.apache.org/foundation/policies/conduct.html).
-
-## Development philosophy
- - **Consistent**: code style, naming, and usage are consistent.
- - **Easy to read**: code is obvious and easy to read and understand; when debugging, the intent of the code is clear.
- - **Neat**: agree with the concepts of *Refactoring* and *Clean Code*, and pursue clean and elegant code.
- - **Abstract**: the abstraction hierarchy is clear and the concepts are refined and reasonable. Keep methods, classes, packages, and modules at the same level of abstraction.
- - **Heart**: maintain a sense of responsibility and keep refining the code with a craftsman's spirit.
- 
-## Development specifications
-
- - Executing `mvn -U clean package -Prelease` compiles successfully and passes all test cases.
- - Test coverage, as reported by the coverage tool, must be no lower than that of the dev branch.
- - Use Checkstyle to check your code; violating a validation rule requires a special reason. The template is located at ds_check_style.xml in the root directory.
- - Follow the coding specifications.
-
-## Coding specifications
-
- - Use Linux (LF) line breaks.
- - Indentation (including for empty lines) is consistent with the previous line.
- - An empty line is required between the class declaration and the following variable or method.
- - There should be no meaningless empty lines.
- - Classes, methods, and variables should be named descriptively; abbreviations should be avoided.
- - Return value variables are named `result`; loop variables are named `each`; `entry` is used instead of `each` when iterating over a map.
- - A caught exception is named `e`; if an exception is caught and deliberately ignored, it is named `ignored`.
- - Configuration files are named in camelCase, starting with a lowercase letter.
- - Code that requires comments to explain it should be kept to a minimum; intent should be conveyed by the method name instead.
- - For `equals` and `==` in a conditional expression, the constant goes on the left and the variable on the right; in greater-than/less-than comparisons, the variable goes on the left and the constant on the right.
- - Except for abstract classes designed for inheritance, try to declare classes as `final`.
- - Extract nested loops into methods as much as possible.
- - The order in which member variables are defined and the order in which parameters are passed is consistent across classes and methods.
- - Prefer guard statements.
- - Classes and methods should have the most restrictive access level possible.
- - A private method should be placed immediately after the method that uses it; if there are multiple private methods, they should appear in the same order in which they are called.
- - Method parameters and return values are not allowed to be `null`.
- - Prefer the ternary operator over if-else for return and assignment statements.
- - Prefer `LinkedList`, and use `ArrayList` only when you need to access elements in the collection by index.
- - Collection types that may expand, such as `ArrayList` and `HashMap`, must be created with an initial size to avoid resizing.
- - Logs and comments are always in English.
- - Comments can only contain `javadoc`, `todo` and `fixme`.
- - Exposed classes and methods must have javadoc; other classes and methods, as well as methods that override a parent class, do not require javadoc.
-
-## Unit test specifications
-
- - Test code and production code are subject to the same code specifications.
- - Unit tests follow the AIR (Automatic, Independent, Repeatable) design concept.
-   - Automatic: Unit tests should be fully automated, not interactive. Manually checking output results is prohibited; `System.out`, `log`, etc. are not allowed, and results must be verified with assertions.
-   - Independent: Unit test cases must not call each other or rely on execution order. Each unit test can be run independently.
-   - Repeatable: Unit tests must not be affected by the external environment and can be run repeatedly.
- - Unit tests follow the BCDE (Border, Correct, Design, Error) design principles.
-   - Border (boundary value test): The expected results are obtained with boundary inputs such as loop boundaries, special values, data order, etc.
-   - Correct (correctness test): The expected results are obtained with correct input.
-   - Design (rationality design): Design high-quality unit tests in combination with the production code design.
-   - Error (fault tolerance test): The expected results are obtained with incorrect input such as illegal data, abnormal flow, etc.
- - Unless there is a special reason, tests need full coverage.
- - Each test case needs precise assertions.
- - Keep environment preparation code separate from the test code.
- - Only JUnit `Assert`, Hamcrest `CoreMatchers`, and Mockito-related classes may be statically imported.
- - Single-value assertions should use `assertTrue`, `assertFalse`, `assertNull` and `assertNotNull`.
- - Multi-value assertions should use `assertThat`.
- - Assert precisely; try not to use the `not` or `containsString` assertions.
- - The actual value in a test case should be named actualXXX, and the expected value should be named expectedXXX.
- - Classes and methods annotated with `@Test` do not require javadoc.
-
- - Public specifications.
-   - Each line is no longer than `200` characters, ensuring that each line is semantically complete and easy to understand.
+# Code of Conduct
+
+The following Code of Conduct is based on full compliance with the [Apache Software Foundation Code of Conduct](https://www.apache.org/foundation/policies/conduct.html).
+
+## Development philosophy
+ - **Consistent**: code style, naming, and usage are consistent.
+ - **Easy to read**: code is obvious and easy to read and understand; when debugging, the intent of the code is clear.
+ - **Neat**: agree with the concepts of *Refactoring* and *Clean Code*, and pursue clean and elegant code.
+ - **Abstract**: the abstraction hierarchy is clear and the concepts are refined and reasonable. Keep methods, classes, packages, and modules at the same level of abstraction.
+ - **Heart**: maintain a sense of responsibility and keep refining the code with a craftsman's spirit.
+ 
+## Development specifications
+
+ - Executing `mvn -U clean package -Prelease` compiles successfully and passes all test cases.
+ - Test coverage, as reported by the coverage tool, must be no lower than that of the dev branch.
+ - Use Checkstyle to check your code; violating a validation rule requires a special reason. The template is located at ds_check_style.xml in the root directory.
+ - Follow the coding specifications.
+
+## Coding specifications
+
+ - Use Linux (LF) line breaks.
+ - Indentation (including for empty lines) is consistent with the previous line.
+ - An empty line is required between the class declaration and the following variable or method.
+ - There should be no meaningless empty lines.
+ - Classes, methods, and variables should be named descriptively; abbreviations should be avoided.
+ - Return value variables are named `result`; loop variables are named `each`; `entry` is used instead of `each` when iterating over a map.
+ - A caught exception is named `e`; if an exception is caught and deliberately ignored, it is named `ignored`.
+ - Configuration files are named in camelCase, starting with a lowercase letter.
+ - Code that requires comments to explain it should be kept to a minimum; intent should be conveyed by the method name instead.
+ - For `equals` and `==` in a conditional expression, the constant goes on the left and the variable on the right; in greater-than/less-than comparisons, the variable goes on the left and the constant on the right.
+ - Except for abstract classes designed for inheritance, try to declare classes as `final`.
+ - Extract nested loops into methods as much as possible.
+ - The order in which member variables are defined and the order in which parameters are passed is consistent across classes and methods.
+ - Prefer guard statements.
+ - Classes and methods should have the most restrictive access level possible.
+ - A private method should be placed immediately after the method that uses it; if there are multiple private methods, they should appear in the same order in which they are called.
+ - Method parameters and return values are not allowed to be `null`.
+ - Prefer the ternary operator over if-else for return and assignment statements (several of these rules are illustrated in the sketch after this list).
+ - Prefer `LinkedList`, and use `ArrayList` only when you need to access elements in the collection by index.
+ - Collection types that may expand, such as `ArrayList` and `HashMap`, must be created with an initial size to avoid resizing.
+ - Logs and comments are always in English.
+ - Comments can only contain `javadoc`, `todo` and `fixme`.
+ - Exposed classes and methods must have javadoc; other classes and methods, as well as methods that override a parent class, do not require javadoc.
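
As a purely illustrative aid (not code from the project), here is a minimal Java sketch of several of the rules above: guard statements, constants on the left of `equals`, the ternary operator for assignments and returns, and an explicit initial capacity for collections. The class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public final class CodingStyleExample {    // not designed for inheritance, so declared final

    private static final String DEFAULT_NAME = "default";

    private CodingStyleExample() {
    }

    public static String resolveName(String input) {
        // guard statement: handle the edge case first and return early
        if (input == null || input.isEmpty()) {
            return DEFAULT_NAME;
        }
        // constant on the left of equals in the conditional expression
        boolean isDefault = DEFAULT_NAME.equals(input);
        // ternary operator instead of an if-else assignment; return variable named result
        String result = isDefault ? DEFAULT_NAME : input.trim();
        return result;
    }

    public static List<String> copyNames(List<String> names) {
        if (names == null) {
            return new ArrayList<>(0);
        }
        // a collection that may expand is created with an explicit initial size
        List<String> result = new ArrayList<>(names.size());
        result.addAll(names);
        return result;
    }
}
```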
+
+## Unit test specifications
+
+ - Test code and production code are subject to the same code specifications.
+ - Unit tests follow the AIR (Automatic, Independent, Repeatable) design concept.
+   - Automatic: Unit tests should be fully automated, not interactive. Manually checking output results is prohibited; `System.out`, `log`, etc. are not allowed, and results must be verified with assertions.
+   - Independent: Unit test cases must not call each other or rely on execution order. Each unit test can be run independently.
+   - Repeatable: Unit tests must not be affected by the external environment and can be run repeatedly.
+ - Unit tests follow the BCDE (Border, Correct, Design, Error) design principles.
+   - Border (boundary value test): The expected results are obtained with boundary inputs such as loop boundaries, special values, data order, etc.
+   - Correct (correctness test): The expected results are obtained with correct input.
+   - Design (rationality design): Design high-quality unit tests in combination with the production code design.
+   - Error (fault tolerance test): The expected results are obtained with incorrect input such as illegal data, abnormal flow, etc.
+ - Unless there is a special reason, tests need full coverage.
+ - Each test case needs precise assertions.
+ - Keep environment preparation code separate from the test code.
+ - Only JUnit `Assert`, Hamcrest `CoreMatchers`, and Mockito-related classes may be statically imported.
+ - Single-value assertions should use `assertTrue`, `assertFalse`, `assertNull` and `assertNotNull`.
+ - Multi-value assertions should use `assertThat`.
+ - Assert precisely; try not to use the `not` or `containsString` assertions.
+ - The actual value in a test case should be named actualXXX, and the expected value should be named expectedXXX (see the sketch after this list).
+ - Classes and methods annotated with `@Test` do not require javadoc.
+
+ - Public specifications.
+   - Each line is no longer than `200` characters, ensuring that each line is semantically complete and easy to understand.
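
A minimal sketch of a unit test following these conventions; the `Calculator` class is hypothetical and defined inline only so that the example is self-contained.

```java
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class CalculatorTest {

    // no javadoc is required on @Test methods
    @Test
    public void testAddShouldReturnSumOfOperands() {
        Calculator calculator = new Calculator();

        int expectedSum = 5;
        int actualSum = calculator.add(2, 3);

        // multi-data / matcher-style assertion uses assertThat
        assertThat(actualSum, is(expectedSum));
        // single-data boolean assertion uses assertTrue
        assertTrue(calculator.isPositive(actualSum));
    }

    // hypothetical class under test, inlined to keep the sketch runnable
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }

        boolean isPositive(int value) {
            return value > 0;
        }
    }
}
```
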
diff --git a/community/en-us/development/commit-message.md b/content/en-us/community/development/commit-message.md
similarity index 100%
rename from community/en-us/development/commit-message.md
rename to content/en-us/community/development/commit-message.md
diff --git a/community/en-us/development/contribute.md b/content/en-us/community/development/contribute.md
similarity index 100%
rename from community/en-us/development/contribute.md
rename to content/en-us/community/development/contribute.md
diff --git a/community/en-us/development/document.md b/content/en-us/community/development/document.md
similarity index 100%
rename from community/en-us/development/document.md
rename to content/en-us/community/development/document.md
diff --git a/community/en-us/development/issue.md b/content/en-us/community/development/issue.md
similarity index 100%
rename from community/en-us/development/issue.md
rename to content/en-us/community/development/issue.md
diff --git a/community/en-us/development/microbench.md b/content/en-us/community/development/microbench.md
similarity index 100%
rename from community/en-us/development/microbench.md
rename to content/en-us/community/development/microbench.md
diff --git a/community/en-us/development/pull-request.md b/content/en-us/community/development/pull-request.md
similarity index 100%
rename from community/en-us/development/pull-request.md
rename to content/en-us/community/development/pull-request.md
diff --git a/community/en-us/development/submit-code.md b/content/en-us/community/development/submit-code.md
similarity index 100%
rename from community/en-us/development/submit-code.md
rename to content/en-us/community/development/submit-code.md
diff --git a/community/en-us/development/subscribe.md b/content/en-us/community/development/subscribe.md
similarity index 100%
rename from community/en-us/development/subscribe.md
rename to content/en-us/community/development/subscribe.md
diff --git a/community/en-us/development/unit-test.md b/content/en-us/community/development/unit-test.md
similarity index 100%
rename from community/en-us/development/unit-test.md
rename to content/en-us/community/development/unit-test.md
diff --git a/community/en-us/join/e2e-guide.md b/content/en-us/community/join/e2e-guide.md
similarity index 100%
rename from community/en-us/join/e2e-guide.md
rename to content/en-us/community/join/e2e-guide.md
diff --git a/community/en-us/join/review.md b/content/en-us/community/join/review.md
similarity index 100%
rename from community/en-us/join/review.md
rename to content/en-us/community/join/review.md
diff --git a/community/en-us/release-post.md b/content/en-us/community/release-post.md
similarity index 100%
rename from community/en-us/release-post.md
rename to content/en-us/community/release-post.md
diff --git a/community/en-us/release-prepare.md b/content/en-us/community/release-prepare.md
similarity index 100%
rename from community/en-us/release-prepare.md
rename to content/en-us/community/release-prepare.md
diff --git a/community/en-us/release.md b/content/en-us/community/release.md
similarity index 100%
rename from community/en-us/release.md
rename to content/en-us/community/release.md
diff --git a/community/en-us/security.md b/content/en-us/community/security.md
similarity index 100%
rename from community/en-us/security.md
rename to content/en-us/community/security.md
diff --git a/community/en-us/team.md b/content/en-us/community/team.md
similarity index 100%
rename from community/en-us/team.md
rename to content/en-us/community/team.md
diff --git a/development/en-us/api-standard.md b/content/en-us/development/api-standard.md
similarity index 100%
rename from development/en-us/api-standard.md
rename to content/en-us/development/api-standard.md
diff --git a/development/en-us/architecture-design.md b/content/en-us/development/architecture-design.md
similarity index 100%
rename from development/en-us/architecture-design.md
rename to content/en-us/development/architecture-design.md
diff --git a/development/en-us/backend/mechanism/global-parameter.md b/content/en-us/development/backend/mechanism/global-parameter.md
similarity index 100%
rename from development/en-us/backend/mechanism/global-parameter.md
rename to content/en-us/development/backend/mechanism/global-parameter.md
diff --git a/development/en-us/backend/mechanism/overview.md b/content/en-us/development/backend/mechanism/overview.md
similarity index 100%
rename from development/en-us/backend/mechanism/overview.md
rename to content/en-us/development/backend/mechanism/overview.md
diff --git a/development/en-us/backend/mechanism/task/switch.md b/content/en-us/development/backend/mechanism/task/switch.md
similarity index 100%
rename from development/en-us/backend/mechanism/task/switch.md
rename to content/en-us/development/backend/mechanism/task/switch.md
diff --git a/development/en-us/backend/spi/alert.md b/content/en-us/development/backend/spi/alert.md
similarity index 100%
rename from development/en-us/backend/spi/alert.md
rename to content/en-us/development/backend/spi/alert.md
diff --git a/development/en-us/backend/spi/datasource.md b/content/en-us/development/backend/spi/datasource.md
similarity index 100%
rename from development/en-us/backend/spi/datasource.md
rename to content/en-us/development/backend/spi/datasource.md
diff --git a/development/en-us/backend/spi/registry.md b/content/en-us/development/backend/spi/registry.md
similarity index 100%
rename from development/en-us/backend/spi/registry.md
rename to content/en-us/development/backend/spi/registry.md
diff --git a/development/en-us/backend/spi/task.md b/content/en-us/development/backend/spi/task.md
similarity index 100%
rename from development/en-us/backend/spi/task.md
rename to content/en-us/development/backend/spi/task.md
diff --git a/development/en-us/development-environment-setup.md b/content/en-us/development/development-environment-setup.md
similarity index 100%
rename from development/en-us/development-environment-setup.md
rename to content/en-us/development/development-environment-setup.md
diff --git a/development/en-us/e2e-test.md b/content/en-us/development/e2e-test.md
similarity index 100%
rename from development/en-us/e2e-test.md
rename to content/en-us/development/e2e-test.md
diff --git a/development/en-us/frontend-development.md b/content/en-us/development/frontend-development.md
similarity index 100%
rename from development/en-us/frontend-development.md
rename to content/en-us/development/frontend-development.md
diff --git a/development/en-us/have-questions.md b/content/en-us/development/have-questions.md
similarity index 100%
rename from development/en-us/have-questions.md
rename to content/en-us/development/have-questions.md
diff --git a/docs/en-us/1.2.0/user_doc/backend-deployment.md b/content/en-us/docs/1.2.0/user_doc/backend-deployment.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/backend-deployment.md
rename to content/en-us/docs/1.2.0/user_doc/backend-deployment.md
diff --git a/docs/en-us/1.2.0/user_doc/cluster-deployment.md b/content/en-us/docs/1.2.0/user_doc/cluster-deployment.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/cluster-deployment.md
rename to content/en-us/docs/1.2.0/user_doc/cluster-deployment.md
diff --git a/docs/en-us/1.2.0/user_doc/frontend-deployment.md b/content/en-us/docs/1.2.0/user_doc/frontend-deployment.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/frontend-deployment.md
rename to content/en-us/docs/1.2.0/user_doc/frontend-deployment.md
diff --git a/docs/en-us/1.2.1/user_doc/hardware-environment.md b/content/en-us/docs/1.2.0/user_doc/hardware-environment.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/hardware-environment.md
rename to content/en-us/docs/1.2.0/user_doc/hardware-environment.md
diff --git a/docs/en-us/1.2.1/user_doc/metadata-1.2.md b/content/en-us/docs/1.2.0/user_doc/metadata-1.2.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/metadata-1.2.md
rename to content/en-us/docs/1.2.0/user_doc/metadata-1.2.md
diff --git a/docs/en-us/1.2.1/user_doc/quick-start.md b/content/en-us/docs/1.2.0/user_doc/quick-start.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/quick-start.md
rename to content/en-us/docs/1.2.0/user_doc/quick-start.md
diff --git a/docs/en-us/1.2.0/user_doc/standalone-deployment.md b/content/en-us/docs/1.2.0/user_doc/standalone-deployment.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/standalone-deployment.md
rename to content/en-us/docs/1.2.0/user_doc/standalone-deployment.md
diff --git a/docs/en-us/1.2.0/user_doc/system-manual.md b/content/en-us/docs/1.2.0/user_doc/system-manual.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/system-manual.md
rename to content/en-us/docs/1.2.0/user_doc/system-manual.md
diff --git a/docs/en-us/1.2.1/user_doc/upgrade.md b/content/en-us/docs/1.2.0/user_doc/upgrade.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/upgrade.md
rename to content/en-us/docs/1.2.0/user_doc/upgrade.md
diff --git a/docs/en-us/1.2.1/user_doc/architecture-design.md b/content/en-us/docs/1.2.1/user_doc/architecture-design.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/architecture-design.md
rename to content/en-us/docs/1.2.1/user_doc/architecture-design.md
diff --git a/docs/en-us/1.2.1/user_doc/backend-deployment.md b/content/en-us/docs/1.2.1/user_doc/backend-deployment.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/backend-deployment.md
rename to content/en-us/docs/1.2.1/user_doc/backend-deployment.md
diff --git a/docs/en-us/1.2.1/user_doc/frontend-deployment.md b/content/en-us/docs/1.2.1/user_doc/frontend-deployment.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/frontend-deployment.md
rename to content/en-us/docs/1.2.1/user_doc/frontend-deployment.md
diff --git a/docs/en-us/1.2.0/user_doc/hardware-environment.md b/content/en-us/docs/1.2.1/user_doc/hardware-environment.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/hardware-environment.md
rename to content/en-us/docs/1.2.1/user_doc/hardware-environment.md
diff --git a/docs/en-us/1.2.0/user_doc/metadata-1.2.md b/content/en-us/docs/1.2.1/user_doc/metadata-1.2.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/metadata-1.2.md
rename to content/en-us/docs/1.2.1/user_doc/metadata-1.2.md
diff --git a/docs/en-us/1.2.1/user_doc/plugin-development.md b/content/en-us/docs/1.2.1/user_doc/plugin-development.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/plugin-development.md
rename to content/en-us/docs/1.2.1/user_doc/plugin-development.md
diff --git a/docs/en-us/1.2.0/user_doc/quick-start.md b/content/en-us/docs/1.2.1/user_doc/quick-start.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/quick-start.md
rename to content/en-us/docs/1.2.1/user_doc/quick-start.md
diff --git a/docs/en-us/1.2.1/user_doc/system-manual.md b/content/en-us/docs/1.2.1/user_doc/system-manual.md
similarity index 100%
rename from docs/en-us/1.2.1/user_doc/system-manual.md
rename to content/en-us/docs/1.2.1/user_doc/system-manual.md
diff --git a/docs/en-us/1.2.0/user_doc/upgrade.md b/content/en-us/docs/1.2.1/user_doc/upgrade.md
similarity index 100%
rename from docs/en-us/1.2.0/user_doc/upgrade.md
rename to content/en-us/docs/1.2.1/user_doc/upgrade.md
diff --git a/docs/en-us/1.3.2/user_doc/architecture-design.md b/content/en-us/docs/1.3.1/user_doc/architecture-design.md
similarity index 98%
copy from docs/en-us/1.3.2/user_doc/architecture-design.md
copy to content/en-us/docs/1.3.1/user_doc/architecture-design.md
index 29f4ae5..fe3beb7 100644
--- a/docs/en-us/1.3.2/user_doc/architecture-design.md
+++ b/content/en-us/docs/1.3.1/user_doc/architecture-design.md
@@ -1,332 +1,332 @@
-## System Architecture Design
-Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system.
-
-### 1. Glossary
-**DAG:** The full name is Directed Acyclic Graph, abbreviated as DAG. Tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero in-degree until there are no subsequent nodes. An example is shown below:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**: A visualized **DAG** formed by dragging task nodes and establishing associations between them
-
-**Process instance**: A process instance is the instantiation of a process definition, which can be generated by manual start or scheduled scheduling. Each run of a process definition generates one process instance
-
-**Task instance**: A task instance is the instantiation of a task node in a process definition, and identifies the execution status of a specific task
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT (depends), with dynamic plug-in expansion planned. Note: **SUB_PROCESS** is itself a separate process definition that can be started and executed separately
-
-**Scheduling method:** The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume paused process, start execution from the failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them, the two command types **Resume fault-tolerant workflow** and **Resume waiting thread** are used by the internal control of scheduling and cannot b [...]
-
-**Scheduled**: The system adopts the **quartz** distributed scheduler and supports the visual generation of cron expressions
-
-**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependent** nodes, supporting custom task dependencies **between processes**
-
-**Priority**: Supports priorities for process instances and task instances; if no priority is set, the default is first-in, first-out
-
-**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance alert notifications
-
-**Failure strategy**: For tasks running in parallel, if one task fails, two failure-handling strategies are provided. **Continue** means the other parallel tasks keep running regardless of the failed task's status until the process ends. **End** means that once a failed task is found, the running parallel tasks are killed at the same time and the process fails and ends
-
-**Complement**: Backfills historical data, supporting two complement methods: **interval parallel and serial**
-
-### 2. System Structure
-
-#### 2.1 System architecture diagram
-<p align="center">
-  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
-  <p align="center">
-        <em>System architecture diagram</em>
-  </p>
-</p>
-
-#### 2.2 Start process activity diagram
-<p align="center">
-  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
-  <p align="center">
-        <em>Start process activity diagram</em>
-  </p>
-</p>
-
-#### 2.3 Architecture description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, it registers a temporary node with ZooKeeper and performs fault tolerance by monitoring changes to ZooKeeper's temporary nodes.
-    MasterServer provides monitoring services based on netty.
-
-    ##### The service mainly includes:
-
-    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for starting and stopping scheduled tasks. When Quartz starts a task, a thread pool inside the Master is responsible for the subsequent processing of the task
-
-    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
-
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
-
-    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
-
-* **WorkerServer** 
-
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
-
-     When the WorkerServer service starts, it registers a temporary node with ZooKeeper and maintains a heartbeat.
-     WorkerServer provides monitoring services based on netty.
-     ##### The service mainly includes:
-     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the corresponding **TaskScheduleThread** executor according to the task type.
-
-     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
-
-* **ZooKeeper** 
-
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
-
-    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
-
-* **Task Queue** 
-
-    Provides task queue operations; the current queue is also implemented based on ZooKeeper. Because little information is stored in the queue, there is no need to worry about the queue holding too much data. In fact, we have stress-tested the queue with millions of entries, which had no impact on system stability or performance.
-
-* **Alert** 
-
-    Provides alert-related interfaces, which mainly include storage, query, and notification functions for the two types of **alert** data. The notification methods include **email notification** and **SNMP (not yet implemented)**.
-
-* **API** 
-
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
-
-* **UI** 
-
-    The front-end pages of the system provide its various visual operation interfaces. See the [System User Manual](./system-manual.md) section for details.
-
-#### 2.4 Architecture design ideas
-
-##### One. Decentralization vs. centralization
-
-###### Centralized thinking
-
-The centralized design concept is relatively simple. The nodes in the distributed cluster are divided into roughly two roles:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
- </p>
-
-- The role of the Master is mainly responsible for task distribution and for monitoring the health status of the Slaves, and it can dynamically balance tasks across the Slaves so that no Slave node is left in a "busy to death" or "idle to death" state.
-- The role of the Worker is mainly responsible for task execution and for maintaining the heartbeat with the Master, so that the Master can assign tasks to the Slave.
-
-
-
-Problems with the centralized design:
-
-- Once the Master has a problem, the cluster is leaderless and the entire cluster will collapse. To solve this problem, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switchover; more and more new systems are starting to support automatically electing and switching the Master to improve system availability.
-- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will overload the Master. If the Scheduler is on the Slave, all tasks in a DAG can only submit jobs on a certain machine; when there are more parallel tasks, the pressure on that Slave may be greater.
-
-
-
-###### Decentralized
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
- </p>
-
-- In a decentralized design, there is usually no concept of Master/Slave: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system; if any node connected to the network goes down, only a small range of functions is affected.
-- The core of decentralized design is that there is no "manager" different from the other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. Under this architecture, the managers in the cluster are dynamically elected rather than preset, and when the cluster fails, the nodes of the cluster automatically hold "meetings" to elect a new "manager" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in the Go language.
-
-
-
-- The decentralization of DolphinScheduler means that the Masters and Workers are registered in ZooKeeper, the Master cluster and Worker cluster are centerless, and a ZooKeeper distributed lock is used to elect one Master or Worker as the "manager" to perform the task.
-
-##### Two. Distributed lock practice
-
-DolphinScheduler uses a ZooKeeper distributed lock to ensure that only one Master executes the Scheduler at a time, and that only one Worker performs task submission at a time (a minimal sketch follows the flow charts below).
-1. The core process algorithm for acquiring distributed locks is as follows:
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
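
For illustration only, a minimal sketch of this pattern using Apache Curator's `InterProcessMutex`; the connection string and lock path are hypothetical, and Curator is assumed here purely as an example ZooKeeper client.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SchedulerLockSketch {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // only the instance currently holding this ZooKeeper lock runs the scheduler step
        InterProcessMutex schedulerLock = new InterProcessMutex(client, "/example/lock/master-scheduler");
        schedulerLock.acquire();
        try {
            System.out.println("acquired the scheduler lock, scanning commands ...");
            // scan the command table and submit DAG tasks here
        } finally {
            schedulerLock.release();
            client.close();
        }
    }
}
```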
-
-
-##### Three. Insufficient-thread loop waiting problem
-
--  If there is no sub-process in a DAG and the number of Commands exceeds the threshold set by the thread pool, the process directly waits or fails.
--  If many sub-processes are nested in a large DAG, a "deadlock" state as shown in the following figure can occur:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
- </p>
-In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot end, so that the threads cannot be released. In this way, the state of the child-parent process loop waiting is formed. At this time, unless a new Master is started to add threads to break such a "stalemate", the sched [...]
-
-It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
-
-1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
-2. Check the single Master's thread pool; if the thread pool is full, let the thread fail directly.
-3. Add a Command type for insufficient resources: if the thread pool is insufficient, suspend the main process. When new threads become available in the thread pool, the process suspended due to insufficient resources is woken up and executes again.
-
-Note: The Master Scheduler thread acquires Commands in FIFO order.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-
-##### Four. Fault-tolerant design
-Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
-
-###### 1. Downtime fault tolerance
-
-The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
- </p>
-Among them, the Master monitors the directories of other Masters and Workers. If a remove event is received, fault tolerance for the affected process instances or task instances is performed according to the specific business logic.
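
For illustration only, a minimal sketch of listening for such remove events with Curator's `PathChildrenCache`; the znode path is hypothetical and Curator is assumed purely as an example ZooKeeper client.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class NodeRemoveWatcherSketch {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // watch the directory where workers register their ephemeral nodes
        PathChildrenCache workerCache = new PathChildrenCache(client, "/example/nodes/worker", true);
        workerCache.getListenable().addListener((curator, event) -> {
            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
                // an ephemeral node disappeared: this is where fault tolerance would be triggered
                System.out.println("node removed: " + event.getData().getPath());
            }
        });
        workerCache.start();
        Thread.sleep(Long.MAX_VALUE);   // keep the watcher alive in this sketch
    }
}
```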
-
-
-
-- Master fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
- </p>
-After Master fault tolerance via ZooKeeper is completed, the workflow is re-scheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "running" and "submitted successfully" tasks, monitors the status of the task instances of the "running" tasks, and for the "submitted successfully" tasks determines whether the task already exists in the task queue. If it exists, the status of the task instance is likewise monitored; if it does not exist, the task instance is resubmitted.
-
-
-
-- Worker fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
- </p>
-
-Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
-
- Note: Due to "network jitter", a node may lose its heartbeat with ZooKeeper for a short period of time, causing a remove event for that node. For this situation, we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
-
-###### 2. Task failure retry
-
-Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
-- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is performed manually; the rerun starts from the start node
-
-
-
-Back to the topic: we divide the task nodes in the workflow into two types.
-
-- One is a business node, which corresponds to an actual script or processing statement, such as a Shell node, an MR node, a Spark node, or a dependent node.
-
-- The other is a logical node, which does not execute an actual script or statement but only handles the logic of the process flow, such as a sub-process node.
-
-Each **business node** can be configured with a number of failure retries. When the task node fails, it automatically retries until it succeeds or exceeds the configured number of retries. A **logical node** does not support failure retry, but the tasks inside a logical node do support retry.
-
-If a task in the workflow fails and reaches the maximum number of retries, the workflow fails and stops; the failed workflow can then be manually rerun or recovered.
-
-
-
-##### Five. Task priority design
-In the early scheduling design, without priorities and with fair scheduling, a task submitted first had no guarantee of completing before a task submitted later, and neither process nor task priority could be set. We have therefore redesigned this, and our current design is as follows:
-
--  Tasks are processed from high to low priority according to: **priority of different process instances** over **priority within the same process instance** over **priority of tasks within the same process** over **submission order of tasks within the same process**.
-    - The specific implementation parses the priority from the task instance's JSON, then saves the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue; when fetching from the task queue, a string comparison yields the tasks that need to be executed first (see the sketch after this list).
-
-        - The priority of the process definition takes into account that some processes need to be processed before others. It can be configured when the process is started or scheduled to start. There are 5 levels in total: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST, as shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
-             </p>
-
-        - The priority of the task is also divided into 5 levels: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST, as shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
-             </p>
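
For illustration only, a minimal sketch of building such queue entries and ordering them by plain string comparison, assuming smaller numbers encode higher priority (for example HIGHEST = 0); field widths are kept uniform here so that lexicographic order matches priority order.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TaskPriorityKeySketch {

    // entry format: process instance priority_process instance id_task priority_task id
    static String buildKey(int processInstancePriority, int processInstanceId, int taskPriority, int taskId) {
        return processInstancePriority + "_" + processInstanceId + "_" + taskPriority + "_" + taskId;
    }

    public static void main(String[] args) {
        List<String> queue = new ArrayList<>(3);
        queue.add(buildKey(2, 100, 1, 7));   // MEDIUM process instance, lower task priority
        queue.add(buildKey(0, 101, 0, 8));   // HIGHEST process instance and task
        queue.add(buildKey(2, 100, 0, 9));   // MEDIUM process instance, higher task priority

        // lexicographic sorting puts the smaller (higher-priority) keys first
        Collections.sort(queue);
        queue.forEach(System.out::println);  // 0_101_0_8, 2_100_0_9, 2_100_1_7
    }
}
```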
-
-
-##### Six. Logback and netty implement log access
-
--  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
-  -  Put logs on ES search engine
-  -  Obtain remote log information through netty communication
-
--  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to achieve remote access to log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
- </p>
-
-
-- We use a custom Logback FileAppender and Filter so that each task instance generates its own log file.
-- FileAppender is mainly implemented as follows:
-
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
- ```
-
-
-Logs are generated in the form of /process definition id/process instance id/task instance id.log
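
For illustration only, a minimal sketch of deriving that path from the thread name format described in the appender above (`taskThreadName-processDefineId_processInstanceId_taskInstanceId`); the helper method is hypothetical.

```java
public class TaskLogPathSketch {

    // example thread name: "TaskLogInfo-1_10_100"
    static String logPathFor(String threadName) {
        // logId = processDefineId_processInstanceId_taskInstanceId
        String logId = threadName.split("-")[1];
        // turn "1_10_100" into "/1/10/100.log"
        return "/" + logId.replace("_", "/") + ".log";
    }

    public static void main(String[] args) {
        System.out.println(logPathFor("TaskLogInfo-1_10_100"));   // prints /1/10/100.log
    }
}
```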
-
-- The Filter matches thread names starting with TaskLogInfo:
-
-- TaskLogFilter is implemented as follows:
-
- ```java
- /**
- *  task log filter
- */
-public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
-}
- ```
-
-### 3. Module introduction
-- dolphinscheduler-alert alert module, providing the AlertServer service.
-
-- dolphinscheduler-api web application module, providing the ApiServer service.
-
-- dolphinscheduler-common common module, providing constant enumerations, utility classes, data structures, and base classes.
-
-- dolphinscheduler-dao provides operations such as database access.
-
-- dolphinscheduler-remote netty-based client and server.
-
-- dolphinscheduler-server MasterServer and WorkerServer services.
-
-- dolphinscheduler-service service module, including Quartz, ZooKeeper, and log client access services, making them easy for the server and api modules to call.
-
-- dolphinscheduler-ui front-end module.
-### Summary
-From the perspective of scheduling, this article gives a preliminary introduction to the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued
-
-
+## System Architecture Design
+Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
+
+### 1.Glossary
+**DAG:** The full name is Directed Acyclic Graph, abbreviated as DAG. Tasks in a workflow are assembled in the form of a directed acyclic graph, and topological traversal starts from nodes with an in-degree of zero until there are no successor nodes. An example is shown below:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
+
+**Process definition**: A visual **DAG** formed by dragging task nodes onto the canvas and establishing the associations between them
+
+**Process instance**: The instantiation of a process definition, generated either by a manual start or by scheduled scheduling. Each run of a process definition produces one process instance
+
+**Task instance**: The instantiation of a task node in a process definition, which identifies the execution status of that specific task
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON and DEPENDENT (dependency) tasks, with dynamic plug-in expansion planned. Note: a **SUB_PROCESS** is itself a separate process definition that can be started and executed on its own
+
+**Scheduling method:** The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume paused process, start execution from the failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them, **Resume fault-tolerant workflow** and **Resume waiting thread** are command types used internally by the scheduler and cannot be called from the outside
+
+**Scheduled**: The system adopts the **quartz** distributed scheduler and supports visual generation of cron expressions
+
+**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependent** nodes that support dependencies **between processes**
+
+**Priority**: Supports setting the priority of process instances and task instances; if no priority is set, the default is first-in, first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance results and fault-tolerance alert notifications
+
+**Failure strategy**: For tasks running in parallel, two strategies are provided when a task fails. **Continue** means the remaining parallel tasks keep running regardless of the failed task's status until the process ends and is marked failed. **End** means that once a failed task is found, the parallel tasks still running are killed and the process fails and ends
+
+**Complement**: Backfills historical data, supporting both **parallel** and **serial** complement of a date interval
+
+### 2.System Structure
+
+#### 2.1 System architecture diagram
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+#### 2.2 Start process activity diagram
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+#### 2.3 Architecture description
+
+* **MasterServer** 
+
+    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
+    When the MasterServer service starts, it registers a temporary node with ZooKeeper and performs fault tolerance by monitoring changes to ZooKeeper temporary nodes.
+    MasterServer provides monitoring services based on netty.
+
+    ##### The service mainly includes:
+
+    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for starting and stopping scheduled tasks. When Quartz starts a task, a thread pool inside the Master is responsible for the subsequent processing of that task
+
+    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
+
+    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
+
+    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
+
+* **WorkerServer** 
+
+     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
+
+     When the WorkerServer service starts, it registers a temporary node with ZooKeeper and maintains a heartbeat.
+     WorkerServer provides monitoring services based on netty.
+     ##### The service mainly includes:
+     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the corresponding executor in **TaskScheduleThread** according to the task type.
+
+     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
+
+* **ZooKeeper** 
+
+    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
+
+    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+
+* **Task Queue** 
+
+    Provides task queue operations; the current queue is also implemented on ZooKeeper. Because little information is stored per queue entry, there is no need to worry about excessive data in the queue. In fact, we have tested queues storing millions of entries, with no impact on system stability or performance.
+
+* **Alert** 
+
+    Provides alarm-related interfaces, mainly covering the storage, query, and notification of the two types of alarm data. Notification methods include **email notification** and **SNMP (not yet implemented)**.
+
+* **API** 
+
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to the outside world. Interfaces cover creating, defining, querying, modifying, releasing and taking workflows offline, as well as manually starting, stopping, pausing, resuming and starting execution from a specified node, and so on.
+
+* **UI** 
+
+    The front-end pages of the system provide its various visual operation interfaces. See the [System User Manual](./system-manual.md) section for details.
+
+#### 2.4 Architecture design ideas
+
+##### One. Decentralization vs. centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple: the nodes of the distributed cluster are divided into roughly two roles:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
+ </p>
+
+- The Master is mainly responsible for task distribution and for monitoring the health of the Slaves, and it can dynamically balance tasks across Slaves so that no Slave node is "busy to death" or "idle to death".
+- The Worker is mainly responsible for executing tasks and maintaining the heartbeat with the Master, so that the Master can assign tasks to the Slave.
+
+
+
+Problems in centralized thought design:
+
+- Once there is a problem with the Master, the cluster is left without a leader and collapses entirely. To solve this problem, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switching; more and more new systems now have the ability to automatically elect and switch the Master to improve availability.
+- Another problem is where the Scheduler lives. If it is on the Master, although different tasks of a DAG can run on different machines, the Master may become overloaded. If it is on the Slave, all tasks of a DAG can only submit jobs from one machine, and with many parallel tasks the pressure on that Slave may be high.
+
+
+
+###### Decentralized
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
+ </p>
+
+- In a decentralized design there is usually no concept of Master or Slave: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system, where the failure of any networked node only affects a small range of functionality.
+- The core of decentralized design is that there is no "manager" node distinct from the other nodes, so there is no single point of failure. However, because there is no "manager", each node must communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the functions above.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. In such an architecture, the cluster's manager is elected dynamically rather than preset, and when the manager fails, the cluster nodes automatically hold a "meeting" to elect a new "manager" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in Go.
+
+
+
+- DolphinScheduler's decentralization means that Masters and Workers register themselves in ZooKeeper. The Master cluster and the Worker cluster have no center, and a ZooKeeper distributed lock is used to elect one Master or Worker as the "manager" to perform the task.
+
+##### Two. Distributed lock practice
+
+DolphinScheduler uses ZooKeeper distributed locks so that only one Master executes the Scheduler at a time, and only one Worker handles the submission of a given task; a minimal sketch follows the flow charts below.
+1. The core process algorithm for acquiring distributed locks is as follows:
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
+2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
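+
+For illustration only, here is a minimal sketch of this pattern using Apache Curator (the ZooKeeper address, lock path and Curator dependency are illustrative assumptions, not DolphinScheduler's actual implementation):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class MasterSchedulerLockSketch {
+    public static void main(String[] args) throws Exception {
+        // connect to ZooKeeper (address is illustrative)
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // all Masters contend for the same lock node; only the holder runs the scheduler scan
+        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters-demo");
+        lock.acquire();
+        try {
+            System.out.println("lock held: scanning the command table ...");
+        } finally {
+            lock.release();   // release so another Master can take over
+        }
+        client.close();
+    }
+}
+```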
+
+
+##### Three. Insufficient-thread loop waiting problem
+
+-  Even if a DAG has no sub-processes, when the number of Commands exceeds the threshold set by the thread pool, the process waits or fails directly.
+-  If many sub-processes are nested in a large DAG, the situation in the following figure produces a "deadlocked" state:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
+ </p>
+In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread from the thread pool, so the DAG process can never finish and none of the threads can be released. This forms a state in which child and parent processes wait on each other in a loop; at this point, unless a new Master is started to add threads and break the "stalemate", the scheduling cluster can no longer be used.
+
+It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
+
+1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
+2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
+3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
+
+Note: the Master Scheduler thread acquires Commands in FIFO order.
+
+So we chose the third way to solve the problem of insufficient threads.
+
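+A toy sketch of this third option (illustrative only, not the project's code): when the exec thread pool rejects a new process instance, a "waiting thread" command is written back to the command queue instead of blocking a thread, so a later scheduler scan can resume it.
+
+```java
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+public class WaitingThreadSketch {
+
+    // stands in for the Master exec thread pool (bounded, no task queue)
+    private final ThreadPoolExecutor execThreads =
+            new ThreadPoolExecutor(2, 2, 0L, TimeUnit.SECONDS, new SynchronousQueue<>());
+
+    // stands in for the command table that the Master Scheduler thread scans (FIFO)
+    private final Queue<String> commandQueue = new ConcurrentLinkedQueue<>();
+
+    void startProcessInstance(int processInstanceId) {
+        try {
+            execThreads.execute(() -> runDag(processInstanceId));
+        } catch (RejectedExecutionException poolFull) {
+            // no free thread: do not block; suspend by writing a "waiting thread" command
+            // back to the queue so a later scan can resume the process instance
+            commandQueue.add("RECOVER_WAITING_THREAD:" + processInstanceId);
+        }
+    }
+
+    private void runDag(int processInstanceId) {
+        // DAG execution would happen here
+    }
+}
+```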
+
+##### Four. Fault-tolerant design
+Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
+
+###### 1. Downtime fault tolerance
+
+The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
+ </p>
+Here, each Master monitors the directories of the other Masters and of the Workers. If a remove event is detected, fault tolerance for the affected process instances or task instances is performed according to the specific business logic.
+
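+As an illustration of this watcher idea (a sketch using Apache Curator with illustrative addresses and paths, not the project's actual code), a Master-side listener that reacts to the removal of a Worker's ephemeral node could look like this:
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.cache.PathChildrenCache;
+import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class WorkerRemovedWatcherSketch {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // watch the directory where workers register their ephemeral nodes
+        PathChildrenCache workers =
+                new PathChildrenCache(client, "/dolphinscheduler/nodes/worker", true);
+        workers.getListenable().addListener((c, event) -> {
+            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
+                // a worker's ephemeral node disappeared: start task-instance fault tolerance here
+                System.out.println("worker removed: " + event.getData().getPath());
+            }
+        });
+        workers.start();
+
+        Thread.sleep(Long.MAX_VALUE); // keep the demo process alive to observe events
+    }
+}
+```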
+
+
+- Master fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
+ </p>
+After ZooKeeper Master fault tolerance completes, the workflow is re-scheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "running" and "submitted successfully" tasks: for "running" tasks it monitors the status of their task instances, and for "submitted successfully" tasks it checks whether the task already exists in the task queue. If it exists, the status of the task instance is likewise monitored; if it does not, the task instance is resubmitted.
+
+
+
+- Worker fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
+ </p>
+
+Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
+
+ Note: Due to "network jitter", a node may briefly lose its heartbeat with ZooKeeper, which triggers a remove event for that node. For this situation we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service on that node is stopped directly.
+
+###### 2. Task failure retry
+
+Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
+- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
+- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
+
+
+
+Back to the topic: we divide the task nodes in a workflow into two types.
+
+- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node, Spark node, and dependent node.
+
+- There is also a logical node, which does not do actual script or statement processing, but only logical processing of the entire process flow, such as sub-process sections.
+
+Each **business node** can be configured with a number of failed retries; when such a task node fails, it is automatically retried until it succeeds or the configured retry count is exceeded. A **logical node** does not support failure retry itself, but the tasks inside it do.
+
+If a task in the workflow fails and reaches its maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun manually, or a process recovery operation can be performed.
+
+
+
+##### Five. Task priority design
+In the early scheduling design there was no priority design and fair scheduling was used, so a task submitted first might not finish any earlier than a task submitted later, and process or task priority could not be set. We have therefore redesigned this, and our current design is as follows:
+
+-  Tasks are processed from high to low priority in the order: **the priority of different process instances** takes precedence over **the priority within the same process instance**, which takes precedence over **the priority of tasks within the same process**, which takes precedence over **the submission order of tasks within the same process**.
+    - The specific implementation is to parse the priority from the JSON of the task instance and then save the **process instance priority_process instance id_task priority_task id** string in the ZooKeeper task queue; when entries are taken from the task queue, a plain string comparison yields the tasks that need to be executed first (see the sketch after this list)
+
+        - Process definition priority exists because some processes need to be handled before others; it can be configured when the process is started or scheduled to start. There are 5 levels in total: HIGHEST, HIGH, MEDIUM, LOW and LOWEST, as shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
+             </p>
+
+        - Task priority is also divided into 5 levels: HIGHEST, HIGH, MEDIUM, LOW and LOWEST, as shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
+             </p>
+
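+A minimal sketch of this key encoding (illustrative ids, and assuming HIGHEST maps to 0 and LOWEST to 4 so that smaller numbers sort first):
+
+```java
+public class TaskQueueKeySketch {
+
+    // lower number means higher priority (assumed mapping HIGHEST=0 .. LOWEST=4)
+    static String queueKey(int processInstancePriority, int processInstanceId,
+                           int taskPriority, int taskId) {
+        return processInstancePriority + "_" + processInstanceId
+                + "_" + taskPriority + "_" + taskId;
+    }
+
+    public static void main(String[] args) {
+        String a = queueKey(0, 101, 1, 7);  // HIGHEST process instance priority
+        String b = queueKey(2, 100, 0, 3);  // MEDIUM process instance priority
+        // plain string comparison puts the higher-priority task first
+        System.out.println(a.compareTo(b) < 0); // true: "0_..." sorts before "2_..."
+    }
+}
+```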
+
+##### Six. Logback and netty implement log access
+
+-  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
+  -  Put logs on ES search engine
+  -  Obtain remote log information through netty communication
+
+-  To keep DolphinScheduler as lightweight as possible, we chose gRPC to implement remote access to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+
+- We use custom Logback FileAppender and Filter components so that each task instance generates its own log file.
+- FileAppender is mainly implemented as follows:
+
+```java
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.FileAppender;
+
+/**
+ * task log appender: routes each task instance's log events to its own file
+ */
+public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+
+    ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        // derive the per-task log file path from logId, then write the event
+        ...
+        super.subAppend(event);
+    }
+}
+```
+
+
+Logs are generated under paths of the form /process definition id/process instance id/task instance id.log
+
+- A Filter matches thread names starting with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+```java
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.filter.Filter;
+import ch.qos.logback.core.spi.FilterReply;
+
+/**
+ * task log filter: only events logged from task threads are accepted
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        // task threads are named "TaskLogInfo-<logId>"; everything else is dropped
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+```
+
+### 3.Module introduction
+- dolphinscheduler-alert alarm module, providing AlertServer service.
+
+- dolphinscheduler-api web application module, providing ApiServer service.
+
+- dolphinscheduler-common General constant enumeration, utility class, data structure or base class
+
+- dolphinscheduler-dao provides operations such as database access.
+
+- dolphinscheduler-remote client and server based on netty
+
+- dolphinscheduler-server MasterServer and WorkerServer services
+
+- dolphinscheduler-service service module, including Quartz, Zookeeper, log client access service, easy to call server module and api module
+
+- dolphinscheduler-ui front-end module
+### Summary
+From the perspective of scheduling, this article has given a preliminary introduction to the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued.
+
+
diff --git a/docs/en-us/1.3.1/user_doc/cluster-deployment.md b/content/en-us/docs/1.3.1/user_doc/cluster-deployment.md
similarity index 100%
rename from docs/en-us/1.3.1/user_doc/cluster-deployment.md
rename to content/en-us/docs/1.3.1/user_doc/cluster-deployment.md
diff --git a/docs/en-us/1.3.1/user_doc/configuration-file.md b/content/en-us/docs/1.3.1/user_doc/configuration-file.md
similarity index 98%
rename from docs/en-us/1.3.1/user_doc/configuration-file.md
rename to content/en-us/docs/1.3.1/user_doc/configuration-file.md
index db9b81c..733ac29 100644
--- a/docs/en-us/1.3.1/user_doc/configuration-file.md
+++ b/content/en-us/docs/1.3.1/user_doc/configuration-file.md
@@ -1,406 +1,406 @@
-<!-- markdown-link-check-disable -->
-# Foreword
-This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
-
-# Directory Structure
-All configuration files of dolphinscheduler are currently in the [conf] directory.
-
-For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, please see the simplified description of the dolphinscheduler installation directory below.
-This article mainly talks about the configuration file of dolphinscheduler. I won't go into details in other parts.
-
-[Note: The following dolphinscheduler is referred to as DS.]
-```
-
-├─bin                               DS command storage directory
-│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
-│  ├─start-all.sh                       Start all DS services according to the configuration file
-│  ├─stop-all.sh                        Close all DS services according to the configuration file
-├─conf                              Configuration file directory
-│  ├─application-api.properties         api service configuration file
-│  ├─datasource.properties              Database configuration file
-│  ├─zookeeper.properties               zookeeper configuration file
-│  ├─master.properties                  Master service configuration file
-│  ├─worker.properties                  Worker service configuration file
-│  ├─quartz.properties                  Quartz service configuration file
-│  ├─common.properties                  Public service [storage] configuration file
-│  ├─alert.properties                   alert service configuration file
-│  ├─config                             Environment variable configuration folder
-│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
-│  ├─env                                Run script environment variable configuration directory
-│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-│  ├─org                                mybatis mapper file directory
-│  ├─i18n                               i18n configuration file directory
-│  ├─logback-api.xml                    api service log configuration file
-│  ├─logback-master.xml                 Master service log configuration file
-│  ├─logback-worker.xml                 Worker service log configuration file
-│  ├─logback-alert.xml                  alert service log configuration file
-├─sql                               DS metadata creation and upgrade sql file
-│  ├─create                             Create SQL script directory
-│  ├─upgrade                            Upgrade SQL script directory
-│  ├─dolphinscheduler-postgre.sql       Postgre database initialization script
-│  ├─dolphinscheduler_mysql.sql         mysql database initialization version
-│  ├─soft_version                       Current DS version identification file
-├─script                            DS service deployment, database creation/upgrade script directory
-│  ├─create-dolphinscheduler.sh         DS database initialization script      
-│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
-│  ├─monitor-server.sh                  DS service monitoring startup script               
-│  ├─scp-hosts.sh                       Install file transfer script                                                    
-│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
-├─ui                                Front-end WEB resource directory
-├─lib                               DS dependent jar storage directory
-├─install.sh                        Automatically install DS service script
-
-
-```
-
-
-# Detailed configuration file
-
-Serial number| Service classification |  Configuration file|
-|--|--|--|
-1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
-2|Database connection configuration | datasource.properties
-3|Zookeeper connection configuration|zookeeper.properties
-4|Common [storage] configuration|common.properties
-5|API service configuration|application-api.properties
-6|Master service configuration|master.properties
-7|Worker service configuration|worker.properties
-8|Alert service configuration|alert.properties
-9|Quartz configuration|quartz.properties
-10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
-11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...|dolphinscheduler_env.sh
-12|Service log configuration files|api service log configuration file : logback-api.xml  <br /> Master service log configuration file  : logback-master.xml    <br /> Worker service log configuration file : logback-worker.xml  <br /> alertService log configuration file : logback-alert.xml 
-
-
-## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
-The dolphinscheduler-daemon.sh script is responsible for DS startup & shutdown 
-start-all.sh/stop-all.sh eventually starts and shuts down the cluster through dolphinscheduler-daemon.sh.
-At present, DS has only made a basic setting. Please set the JVM parameters according to the actual situation of their resources.
-
-The default simplified parameters are as follows:
-```bash
-export DOLPHINSCHEDULER_OPTS="
--server 
--Xmx16g 
--Xms1g 
--Xss512k 
--XX:+UseConcMarkSweepGC 
--XX:+CMSParallelRemarkEnabled 
--XX:+UseFastAccessorMethods 
--XX:+UseCMSInitiatingOccupancyOnly 
--XX:CMSInitiatingOccupancyFraction=70
-"
-```
-
-> It is not recommended to set "-XX:DisableExplicitGC", DS uses Netty for communication. Setting this parameter may cause memory leaks.
-
-## 2.datasource.properties [Database Connectivity]
-Use Druid to manage the database connection in DS.The default simplified configuration is as follows.
-|Parameter | Defaults| Description|
-|--|--|--|
-spring.datasource.driver-class-name| |Database driver
-spring.datasource.url||Database connection address
-spring.datasource.username||Database username
-spring.datasource.password||Database password
-spring.datasource.initialSize|5| Number of initial connection pools
-spring.datasource.minIdle|5| Minimum number of connection pools
-spring.datasource.maxActive|5| Maximum number of connection pools
-spring.datasource.maxWait|60000| Maximum waiting time
-spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
-spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
-spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
-spring.datasource.validationQueryTimeout|3| Timeout to check if the connection is valid[seconds]
-spring.datasource.testWhileIdle|true| Check when applying for connection, if idle time is greater than timeBetweenEvictionRunsMillis,Run validationQuery to check whether the connection is valid.
-spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when applying for connection
-spring.datasource.testOnReturn|false| When returning the connection, execute validationQuery to check whether the connection is valid
-spring.datasource.defaultAutoCommit|true| Whether to enable automatic submission
-spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
-spring.datasource.poolPreparedStatements|true| Open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, you must configure greater than 0, when greater than 0,PoolPreparedStatements automatically trigger modification to true.
-
-
-## 3.zookeeper.properties [Zookeeper connection configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-zookeeper.quorum|localhost:2181| zk cluster connection information
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
-zookeeper.session.timeout|60000|  session time out
-zookeeper.connection.timeout|30000|  Connection timed out
-zookeeper.retry.base.sleep|100| Basic retry time difference
-zookeeper.retry.max.sleep|30000| Maximum retry time
-zookeeper.retry.maxtime|10|Maximum number of retries
-
-
-## 4.common.properties [hadoop, s3, yarn configuration]
-The common.properties configuration file is currently mainly used to configure hadoop/s3a related configurations. 
-|Parameter |Defaults| Description| 
-|--|--|--|
-resource.storage.type|NONE|Resource file storage type: HDFS,S3,NONE
-resource.upload.path|/dolphinscheduler|Resource file storage path
-data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
-hadoop.security.authentication.startup.state|false|hadoop enable kerberos permission
-java.security.krb5.conf.path|/opt/krb5.conf|kerberos configuration directory
-login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
-login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
-resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
-hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
-fs.defaultFS|hdfs://mycluster:8020|Request address if resource.storage.type=S3 ,the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS, If hadoop configured HA, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
-fs.s3a.endpoint||s3 endpoint address
-fs.s3a.access.key||s3 access key
-fs.s3a.secret.key||s3 secret key
-yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address, If the resourcemanager has HA turned on, enter the IP address of the HA (separated by commas). If the resourcemanager is a single node, the value can be empty.
-yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If resourcemanager has HA enabled or resourcemanager is not used, keep the default value. If resourcemanager is a single node, you need to configure ds1 as the hostname corresponding to resourcemanager
-dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-development.state|false|Is it in development mode
-kerberos.expire.time|7|kerberos expire time,integer,the unit is day
-
-
-## 5.application-api.properties [API service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-server.port|12345|API service communication port
-server.servlet.session.timeout|7200|session timeout
-server.servlet.context-path|/dolphinscheduler |Request path
-spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
-spring.servlet.multipart.max-request-size|1024MB|Maximum request size
-server.jetty.max-http-post-size|5000000|Jetty service maximum send request size
-spring.messages.encoding|UTF-8|Request encoding
-spring.jackson.time-zone|GMT+8|Set time zone
-spring.messages.basename|i18n/messages|i18n configuration
-security.authentication.type|PASSWORD|Permission verification type
-
-
-## 6.master.properties [Master service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-master.listen.port|5678|master listen port
-master.exec.threads|100|master execute thread number to limit process instances in parallel
-master.exec.task.num|20|master execute task number in parallel per process instance
-master.dispatch.task.num|3|master dispatch task number per batch
-master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
-master.heartbeat.interval|10|master heartbeat interval, the unit is second
-master.task.commit.retryTimes|5|master commit task retry times
-master.task.commit.interval|1000|master commit task interval, the unit is millisecond
-master.max.cpuload.avg|-1|master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
-master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
-
-
-## 7.worker.properties [Worker service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-worker.listen.port|1234|worker listen port
-worker.exec.threads|100|worker execute thread number to limit task instances in parallel
-worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
-worker.max.cpuload.avg|-1|worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2
-worker.reserved.memory|0.3|worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G
-worker.group|default|worker group config <br> worker will join corresponding group according to this config when startup
-
-
-## 8.alert.properties [Alert alert service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-alert.type|EMAIL|Alarm type|
-mail.protocol|SMTP| Mail server protocol
-mail.server.host|xxx.xxx.com|Mail server address
-mail.server.port|25|Mail server port
-mail.sender|xxx@xxx.com|Sender mailbox
-mail.user|xxx@xxx.com|Sender's email name
-mail.passwd|111111|Sender email password
-mail.smtp.starttls.enable|true|Whether the mailbox opens tls
-mail.smtp.ssl.enable|false|Whether the mailbox opens ssl
-mail.smtp.ssl.trust|xxx.xxx.com|Email ssl whitelist
-xls.file.path|/tmp/xls|Temporary working directory for mailbox attachments
-||The following is the enterprise WeChat configuration[Optional]|
-enterprise.wechat.enable|false|Whether the enterprise WeChat is enabled
-enterprise.wechat.corp.id|xxxxxxx|
-enterprise.wechat.secret|xxxxxxx|
-enterprise.wechat.agent.id|xxxxxxx|
-enterprise.wechat.users|xxxxxxx|
-enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
-enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
-enterprise.wechat.user.send.msg||Send message format
-enterprise.wechat.team.send.msg||Group message format
-plugin.dir|/Users/xx/your/path/to/plugin/dir|Plugin directory
-
-
-## 9.quartz.properties [Quartz configuration]
-This is mainly quartz configuration, please configure it in combination with actual business scenarios & resources, this article will not be expanded for the time being.
-|Parameter |Defaults| Description| 
-|--|--|--|
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
-org.quartz.scheduler.instanceName | DolphinScheduler
-org.quartz.scheduler.instanceId | AUTO
-org.quartz.scheduler.makeSchedulerThreadDaemon | true
-org.quartz.jobStore.useProperties | false
-org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
-org.quartz.threadPool.makeThreadsDaemons | true
-org.quartz.threadPool.threadCount | 25
-org.quartz.threadPool.threadPriority | 5
-org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
-org.quartz.jobStore.tablePrefix | QRTZ_
-org.quartz.jobStore.isClustered | true
-org.quartz.jobStore.misfireThreshold | 60000
-org.quartz.jobStore.clusterCheckinInterval | 5000
-org.quartz.jobStore.acquireTriggersWithinLock|true
-org.quartz.jobStore.dataSource | myDs
-org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
-
-
-## 10.install_config.conf [DS environment variable configuration script [for DS installation/startup]]
-The install_config.conf configuration file is more cumbersome.This file is mainly used in two places.
-* 1.Automatic installation of DS cluster.
-
-> Calling the install.sh script will automatically load the configuration in this file, and automatically configure the content in the above configuration file according to the content in this file.
-> Such as::dolphinscheduler-daemon.sh、datasource.properties、zookeeper.properties、common.properties、application-api.properties、master.properties、worker.properties、alert.properties、quartz.properties Etc..
-
-
-* 2.DS cluster startup and shutdown.
->When the DS cluster is started up and shut down, it will load the masters, workers, alertServer, apiServers and other parameters in the configuration file to start/close the DS cluster.
-
-The contents of the file are as follows:
-```bash
-
-# Note: If the configuration file contains special characters,such as: `.*[]^${}\+?|()@#&`, Please escape,
-#      Examples: `[` Escape to `\[`
-
-# Database type, currently only supports postgresql or mysql
-dbtype="mysql"
-
-# Database address & port
-dbhost="192.168.xx.xx:3306"
-
-# Database Name
-dbname="dolphinscheduler"
-
-
-# Database Username
-username="xx"
-
-# Database Password
-password="xx"
-
-# Zookeeper address
-zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-
-# Where to install DS, such as: /data1_1T/dolphinscheduler,
-installPath="/data1_1T/dolphinscheduler"
-
-# Which user to use for deployment
-# Note: The deployment user needs sudo permissions and can operate hdfs.
-#     If you use hdfs, the root directory must be created by the user. Otherwise, there will be permissions related issues.
-deployUser="dolphinscheduler"
-
-
-# The following is the alarm service configuration
-# Mail server address
-mailServerHost="smtp.exmail.qq.com"
-
-# Mail Server Port
-mailServerPort="25"
-
-# Sender
-mailSender="xxxxxxxxxx"
-
-# Sending user
-mailUser="xxxxxxxxxx"
-
-# email Password
-mailPassword="xxxxxxxxxx"
-
-# TLS protocol mailbox is set to true, otherwise set to false
-starttlsEnable="true"
-
-# The mailbox with SSL protocol enabled is set to true, otherwise it is false. Note: starttlsEnable and sslEnable cannot be true at the same time
-sslEnable="false"
-
-# Mail service address value, same as mailServerHost
-sslTrust="smtp.exmail.qq.com"
-
-#Where to upload resource files such as sql used for business, you can set: HDFS, S3, NONE. If you want to upload to HDFS, please configure as HDFS; if you do not need the resource upload function, please select NONE.
-resourceStorageType="NONE"
-
-# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
-# Note,s3 be sure to create the root directory /dolphinscheduler
-defaultFS="hdfs://mycluster:8020"
-
-# If the resourceStorageType is S3, the parameters to be configured are as follows:
-s3Endpoint="http://192.168.xx.xx:9010"
-s3AccessKey="xxxxxxxxxx"
-s3SecretKey="xxxxxxxxxx"
-
-# If the ResourceManager is HA, configure it as the primary and secondary ip or hostname of the ResourceManager node, such as "192.168.xx.xx, 192.168.xx.xx", otherwise if it is a single ResourceManager or yarn is not used at all, please configure yarnHaIps="" That’s it, if yarn is not used, configure it as ""
-yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-# If it is a single ResourceManager, configure it as the ResourceManager node ip or host name, otherwise keep the default value.
-singleYarnIp="yarnIp1"
-
-# The storage path of resource files in HDFS/S3
-resourceUploadPath="/dolphinscheduler"
-
-
-# HDFS/S3  Operating user
-hdfsRootUser="hdfs"
-
-# The following is the kerberos configuration
-
-# Whether kerberos is turned on
-kerberosStartUp="false"
-# kdc krb5 config file path
-krb5ConfPath="$installPath/conf/krb5.conf"
-# keytab username
-keytabUserName="hdfs-mycluster@ESZ.COM"
-# username keytab path
-keytabPath="$installPath/conf/hdfs.headless.keytab"
-
-
-# api service port
-apiServerPort="12345"
-
-
-# Hostname of all hosts where DS is deployed
-ips="ds1,ds2,ds3,ds4,ds5"
-
-# ssh port, default 22
-sshPort="22"
-
-# Deploy master service host
-masters="ds1,ds2"
-
-# The host where the worker service is deployed
-# Note: Each worker needs to set a worker group name, the default value is "default"
-workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
-
-#  Deploy the alert service host
-alertServer="ds3"
-
-# Deploy api service host
-apiServers="ds1"
-```
-
-## 11.dolphinscheduler_env.sh [Environment variable configuration]
-When submitting a task through a shell-like method, the environment variables in the configuration file are loaded into the host.
-The types of tasks involved are: Shell tasks, Python tasks, Spark tasks, Flink tasks, Datax tasks, etc.
-```bash
-export HADOOP_HOME=/opt/soft/hadoop
-export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-export SPARK_HOME1=/opt/soft/spark1
-export SPARK_HOME2=/opt/soft/spark2
-export PYTHON_HOME=/opt/soft/python
-export JAVA_HOME=/opt/soft/java
-export HIVE_HOME=/opt/soft/hive
-export FLINK_HOME=/opt/soft/flink
-export DATAX_HOME=/opt/soft/datax/bin/datax.py
-
-export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-```
-
-## 12.Service log configuration files
-Correspondence service| Log file name |
---|--|
-api service log configuration file |logback-api.xml|
-Master service log configuration file|logback-master.xml |
-Worker service log configuration file|logback-worker.xml |
-alert service log configuration file|logback-alert.xml |
+<!-- markdown-link-check-disable -->
+# Foreword
+This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
+
+# Directory Structure
+All configuration files of dolphinscheduler are currently in the [conf] directory.
+
+For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, please see the simplified description of the dolphinscheduler installation directory below.
+This article mainly covers the DolphinScheduler configuration files; other parts are not discussed in detail here.
+
+[Note: The following dolphinscheduler is referred to as DS.]
+```
+
+├─bin                               DS command storage directory
+│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
+│  ├─start-all.sh                       Start all DS services according to the configuration file
+│  ├─stop-all.sh                        Close all DS services according to the configuration file
+├─conf                              Configuration file directory
+│  ├─application-api.properties         api service configuration file
+│  ├─datasource.properties              Database configuration file
+│  ├─zookeeper.properties               zookeeper configuration file
+│  ├─master.properties                  Master service configuration file
+│  ├─worker.properties                  Worker service configuration file
+│  ├─quartz.properties                  Quartz service configuration file
+│  ├─common.properties                  Public service [storage] configuration file
+│  ├─alert.properties                   alert service configuration file
+│  ├─config                             Environment variable configuration folder
+│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
+│  ├─env                                Run script environment variable configuration directory
+│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper file directory
+│  ├─i18n                               i18n configuration file directory
+│  ├─logback-api.xml                    api service log configuration file
+│  ├─logback-master.xml                 Master service log configuration file
+│  ├─logback-worker.xml                 Worker service log configuration file
+│  ├─logback-alert.xml                  alert service log configuration file
+├─sql                               DS metadata creation and upgrade sql file
+│  ├─create                             Create SQL script directory
+│  ├─upgrade                            Upgrade SQL script directory
+│  ├─dolphinscheduler-postgre.sql       Postgre database initialization script
+│  ├─dolphinscheduler_mysql.sql         mysql database initialization version
+│  ├─soft_version                       Current DS version identification file
+├─script                            DS service deployment, database creation/upgrade script directory
+│  ├─create-dolphinscheduler.sh         DS database initialization script      
+│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
+│  ├─monitor-server.sh                  DS service monitoring startup script               
+│  ├─scp-hosts.sh                       Install file transfer script                                                    
+│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
+├─ui                                Front-end WEB resource directory
+├─lib                               DS dependent jar storage directory
+├─install.sh                        Automatically install DS service script
+
+
+```
+
+
+# Detailed configuration file
+
+Serial number| Service classification |  Configuration file|
+|--|--|--|
+1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
+2|Database connection configuration | datasource.properties
+3|Zookeeper connection configuration|zookeeper.properties
+4|Common [storage] configuration|common.properties
+5|API service configuration|application-api.properties
+6|Master service configuration|master.properties
+7|Worker service configuration|worker.properties
+8|Alert service configuration|alert.properties
+9|Quartz configuration|quartz.properties
+10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
+11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...|dolphinscheduler_env.sh
+12|Service log configuration files|api service log configuration file : logback-api.xml  <br /> Master service log configuration file  : logback-master.xml    <br /> Worker service log configuration file : logback-worker.xml  <br /> alertService log configuration file : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
+The dolphinscheduler-daemon.sh script is responsible for DS startup and shutdown.
+start-all.sh/stop-all.sh ultimately start and shut down the cluster through dolphinscheduler-daemon.sh.
+At present, DS only provides a basic setting; please tune the JVM parameters according to your actual resources.
+
+The default simplified parameters are as follows:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> It is not recommended to set "-XX:+DisableExplicitGC": DS uses Netty for communication, and setting this parameter may cause memory leaks.
+
+## 2.datasource.properties [Database Connectivity]
+Druid is used to manage database connections in DS. The default simplified configuration is as follows.
+|Parameter | Defaults| Description|
+|--|--|--|
+spring.datasource.driver-class-name| |Database driver
+spring.datasource.url||Database connection address
+spring.datasource.username||Database username
+spring.datasource.password||Database password
+spring.datasource.initialSize|5| Number of initial connection pools
+spring.datasource.minIdle|5| Minimum number of connection pools
+spring.datasource.maxActive|5| Maximum number of connection pools
+spring.datasource.maxWait|60000| Maximum waiting time
+spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
+spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
+spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
+spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
+spring.datasource.validationQueryTimeout|3| Timeout to check if the connection is valid[seconds]
+spring.datasource.testWhileIdle|true| Check when applying for connection, if idle time is greater than timeBetweenEvictionRunsMillis,Run validationQuery to check whether the connection is valid.
+spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when applying for connection
+spring.datasource.testOnReturn|false| When returning the connection, execute validationQuery to check whether the connection is valid
+spring.datasource.defaultAutoCommit|true| Whether to enable automatic submission
+spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
+spring.datasource.poolPreparedStatements|true| Open PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, you must configure greater than 0, when greater than 0,PoolPreparedStatements automatically trigger modification to true.
+
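+For reference, a minimal MySQL-flavoured example (driver class, host and credentials are placeholders; adjust them to your environment):
+
+```
+# illustrative values only
+spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+spring.datasource.username=xx
+spring.datasource.password=xx
+```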
+
+## 3.zookeeper.properties [Zookeeper connection configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+zookeeper.quorum|localhost:2181| zk cluster connection information
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
+zookeeper.session.timeout|60000|  session time out
+zookeeper.connection.timeout|30000|  Connection timed out
+zookeeper.retry.base.sleep|100| Basic retry time difference
+zookeeper.retry.max.sleep|30000| Maximum retry time
+zookeeper.retry.maxtime|10|Maximum number of retries
+
+
+## 4.common.properties [hadoop, s3, yarn configuration]
+The common.properties configuration file is currently mainly used to configure hadoop/s3a related configurations. 
+|Parameter |Defaults| Description| 
+|--|--|--|
+resource.storage.type|NONE|Resource file storage type: HDFS,S3,NONE
+resource.upload.path|/dolphinscheduler|Resource file storage path
+data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
+hadoop.security.authentication.startup.state|false|hadoop enable kerberos permission
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos configuration directory
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
+resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
+hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
+fs.defaultFS|hdfs://mycluster:8020|Request address if resource.storage.type=S3 ,the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS, If hadoop configured HA, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
+fs.s3a.endpoint||s3 endpoint address
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address, If the resourcemanager has HA turned on, enter the IP address of the HA (separated by commas). If the resourcemanager is a single node, the value can be empty.
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If resourcemanager has HA enabled or resourcemanager is not used, keep the default value. If resourcemanager is a single node, you need to configure ds1 as the hostname corresponding to resourcemanager
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
+development.state|false|Is it in development mode
+kerberos.expire.time|7|kerberos expire time,integer,the unit is day
+
+
+## 5.application-api.properties [API service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+server.port|12345|API service communication port
+server.servlet.session.timeout|7200|session timeout
+server.servlet.context-path|/dolphinscheduler |Request path
+spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
+spring.servlet.multipart.max-request-size|1024MB|Maximum request size
+server.jetty.max-http-post-size|5000000|Jetty service maximum send request size
+spring.messages.encoding|UTF-8|Request encoding
+spring.jackson.time-zone|GMT+8|Set time zone
+spring.messages.basename|i18n/messages|i18n configuration
+security.authentication.type|PASSWORD|Permission verification type
+
+
+## 6.master.properties [Master service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+master.listen.port|5678|master listen port
+master.exec.threads|100|master execute thread number to limit process instances in parallel
+master.exec.task.num|20|master execute task number in parallel per process instance
+master.dispatch.task.num|3|master dispatch task number per batch
+master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
+master.heartbeat.interval|10|master heartbeat interval, the unit is second
+master.task.commit.retryTimes|5|master commit task retry times
+master.task.commit.interval|1000|master commit task interval, the unit is millisecond
+master.max.cpuload.avg|-1|master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
+master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
+
+
+## 7.worker.properties [Worker service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+worker.listen.port|1234|worker listen port
+worker.exec.threads|100|worker execute thread number to limit task instances in parallel
+worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
+worker.max.cpuload.avg|-1|worker max cpuload avg; tasks can be dispatched to the worker only when this value is higher than the current system load average. Default value -1: the number of cpu cores * 2
+worker.reserved.memory|0.3|worker reserved memory; tasks can be dispatched to the worker only when this value is lower than the available system memory. Default value 0.3, the unit is G
+worker.group|default|worker group config <br> the worker joins the corresponding group according to this value at startup
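+
+For instance, a worker that should only receive tasks dispatched to a hypothetical "etl" worker group could be configured roughly as follows:
+```bash
+worker.listen.port=1234
+worker.group=etl
+worker.max.cpuload.avg=16
+worker.reserved.memory=0.3
+```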
+
+
+## 8.alert.properties [Alert service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+alert.type|EMAIL|Alarm type|
+mail.protocol|SMTP| Mail server protocol
+mail.server.host|xxx.xxx.com|Mail server address
+mail.server.port|25|Mail server port
+mail.sender|xxx@xxx.com|Sender mailbox
+mail.user|xxx@xxx.com|Sender's email name
+mail.passwd|111111|Sender email password
+mail.smtp.starttls.enable|true|Whether TLS is enabled for the mailbox
+mail.smtp.ssl.enable|false|Whether SSL is enabled for the mailbox
+mail.smtp.ssl.trust|xxx.xxx.com|Email ssl whitelist
+xls.file.path|/tmp/xls|Temporary working directory for mailbox attachments
+||The following is the enterprise WeChat configuration[Optional]|
+enterprise.wechat.enable|false|Whether the enterprise WeChat is enabled
+enterprise.wechat.corp.id|xxxxxxx|
+enterprise.wechat.secret|xxxxxxx|
+enterprise.wechat.agent.id|xxxxxxx|
+enterprise.wechat.users|xxxxxxx|
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
+enterprise.wechat.user.send.msg||Send message format
+enterprise.wechat.team.send.msg||Group message format
+plugin.dir|/Users/xx/your/path/to/plugin/dir|Plugin directory
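+
+A minimal email alert sketch, assuming a plain SMTP server with STARTTLS and enterprise WeChat disabled (addresses and password are placeholders):
+```bash
+alert.type=EMAIL
+mail.protocol=SMTP
+mail.server.host=smtp.exmail.qq.com
+mail.server.port=25
+mail.sender=alert@xxx.com
+mail.user=alert@xxx.com
+mail.passwd=xxxxxxxxxx
+mail.smtp.starttls.enable=true
+mail.smtp.ssl.enable=false
+mail.smtp.ssl.trust=smtp.exmail.qq.com
+enterprise.wechat.enable=false
+```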
+
+
+## 9.quartz.properties [Quartz configuration]
+This is mainly the Quartz configuration. Please configure it in combination with your actual business scenarios and resources; it is not expanded on further in this article.
+|Parameter |Defaults| Description| 
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS environment variable configuration script, used for DS installation/startup]
+The install_config.conf configuration file is relatively complex. This file is mainly used in two places.
+* 1.Automatic installation of DS cluster.
+
+> Calling the install.sh script will automatically load the configuration in this file and automatically configure the content of the above configuration files according to it,
+> such as: dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
+
+
+* 2.DS cluster startup and shutdown.
+>When the DS cluster is started up or shut down, it loads the masters, workers, alertServer, apiServers and other parameters in this configuration file to start/stop the DS cluster.
+
+The contents of the file are as follows:
+```bash
+
+# Note: If the configuration file contains special characters, such as: `.*[]^${}\+?|()@#&`, please escape them,
+#      for example: `[` escapes to `\[`
+
+# Database type, currently only supports postgresql or mysql
+dbtype="mysql"
+
+# Database address & port
+dbhost="192.168.xx.xx:3306"
+
+# Database Name
+dbname="dolphinscheduler"
+
+
+# Database Username
+username="xx"
+
+# Database Password
+password="xx"
+
+# Zookeeper address
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# Where to install DS, such as: /data1_1T/dolphinscheduler,
+installPath="/data1_1T/dolphinscheduler"
+
+# Which user to use for deployment
+# Note: The deployment user needs sudo permission and must be able to operate HDFS.
+#     If HDFS is used, the root directory must be created by this user, otherwise there will be permission-related issues.
+deployUser="dolphinscheduler"
+
+
+# The following is the alarm service configuration
+# Mail server address
+mailServerHost="smtp.exmail.qq.com"
+
+# Mail Server Port
+mailServerPort="25"
+
+# Sender
+mailSender="xxxxxxxxxx"
+
+# Sending user
+mailUser="xxxxxxxxxx"
+
+# email Password
+mailPassword="xxxxxxxxxx"
+
+# Set to true if the mailbox uses the TLS protocol, otherwise set to false
+starttlsEnable="true"
+
+# Set to true if the mailbox uses the SSL protocol, otherwise false. Note: starttlsEnable and sslEnable cannot both be true at the same time
+sslEnable="false"
+
+# Mail service address value, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+# Where to upload resource files (such as SQL scripts) used for business. Options: HDFS, S3, NONE. If you want to upload to HDFS, set it to HDFS; if you do not need the resource upload function, set it to NONE.
+resourceStorageType="NONE"
+
+# If S3 is used, write the S3 address, for example: s3a://dolphinscheduler,
+# Note: for S3, be sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# If the resourceStorageType is S3, the parameters to be configured are as follows:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# If the ResourceManager is HA, configure the primary and standby ResourceManager IPs or hostnames, such as "192.168.xx.xx,192.168.xx.xx". If it is a single ResourceManager or yarn is not used at all, set yarnHaIps=""
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+# If it is a single ResourceManager, configure it as the ResourceManager node ip or host name, otherwise keep the default value.
+singleYarnIp="yarnIp1"
+
+# The storage path of resource files in HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3  Operating user
+hdfsRootUser="hdfs"
+
+# The following is the kerberos configuration
+
+# Whether kerberos is turned on
+kerberosStartUp="false"
+# kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+# keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+# username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# api service port
+apiServerPort="12345"
+
+
+# Hostname of all hosts where DS is deployed
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# ssh port, default 22
+sshPort="22"
+
+# Deploy master service host
+masters="ds1,ds2"
+
+# The host where the worker service is deployed
+# Note: Each worker needs to set a worker group name, the default value is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  Deploy the alert service host
+alertServer="ds3"
+
+# Deploy api service host
+apiServers="ds1"
+```
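+
+After install_config.conf is filled in, the cluster is usually installed and operated with the scripts shipped in the release package. A rough sketch, assuming the commands are run as the deployUser:
+```bash
+# one-click install: distributes DS to installPath on all ips and starts the services
+sh install.sh
+
+# later start/stop of the whole cluster, executed from installPath
+sh ./bin/start-all.sh
+sh ./bin/stop-all.sh
+```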
+
+## 11.dolphinscheduler_env.sh [Environment variable configuration]
+When a task is submitted in a shell-like way, the environment variables in this configuration file are loaded on the host.
+The task types involved are: Shell tasks, Python tasks, Spark tasks, Flink tasks, DataX tasks, etc.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
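+
+As a hypothetical illustration, a Shell task script run on a worker can then rely on these variables being present:
+```bash
+# hypothetical Shell task script; the variables come from dolphinscheduler_env.sh
+echo "Using Spark at ${SPARK_HOME2}"
+${SPARK_HOME2}/bin/spark-submit --version
+```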
+
+## 12.Service log configuration files
+Service| Log configuration file name |
+--|--|
+API service|logback-api.xml|
+Master service|logback-master.xml|
+Worker service|logback-worker.xml|
+Alert service|logback-alert.xml|
diff --git a/docs/en-us/1.3.9/user_doc/hardware-environment.md b/content/en-us/docs/1.3.1/user_doc/hardware-environment.md
similarity index 100%
rename from docs/en-us/1.3.9/user_doc/hardware-environment.md
rename to content/en-us/docs/1.3.1/user_doc/hardware-environment.md
diff --git a/docs/en-us/1.3.1/user_doc/metadata-1.3.md b/content/en-us/docs/1.3.1/user_doc/metadata-1.3.md
similarity index 100%
rename from docs/en-us/1.3.1/user_doc/metadata-1.3.md
rename to content/en-us/docs/1.3.1/user_doc/metadata-1.3.md
diff --git a/docs/en-us/1.3.6/user_doc/quick-start.md b/content/en-us/docs/1.3.1/user_doc/quick-start.md
similarity index 100%
rename from docs/en-us/1.3.6/user_doc/quick-start.md
rename to content/en-us/docs/1.3.1/user_doc/quick-start.md
diff --git a/docs/en-us/1.3.1/user_doc/standalone-deployment.md b/content/en-us/docs/1.3.1/user_doc/standalone-deployment.md
similarity index 100%
rename from docs/en-us/1.3.1/user_doc/standalone-deployment.md
rename to content/en-us/docs/1.3.1/user_doc/standalone-deployment.md
diff --git a/docs/en-us/1.3.1/user_doc/system-manual.md b/content/en-us/docs/1.3.1/user_doc/system-manual.md
similarity index 100%
rename from docs/en-us/1.3.1/user_doc/system-manual.md
rename to content/en-us/docs/1.3.1/user_doc/system-manual.md
diff --git a/docs/en-us/1.3.2/user_doc/task-structure.md b/content/en-us/docs/1.3.1/user_doc/task-structure.md
similarity index 96%
rename from docs/en-us/1.3.2/user_doc/task-structure.md
rename to content/en-us/docs/1.3.1/user_doc/task-structure.md
index 2442e7e..98fc0b3 100644
--- a/docs/en-us/1.3.2/user_doc/task-structure.md
+++ b/content/en-us/docs/1.3.1/user_doc/task-structure.md
@@ -1,1136 +1,1136 @@
-
-# Overall task storage structure
-All tasks created in dolphinscheduler are saved in the t_ds_process_definition table.
-
-The database table structure is shown in the following table:
-
-
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|id|int(11)|Primary key
-2|name|varchar(255)|Process definition name
-3|version|int(11)|Process definition version
-4|release_state|tinyint(4)|Release status of the process definition:0 not online, 1 online
-5|project_id|int(11)|Project id
-6|user_id|int(11)|User id of the process definition
-7|process_definition_json|longtext|Process definition JSON
-8|description|text|Process definition description
-9|global_params|text|Global parameters
-10|flag|tinyint(4)|Whether the process is available: 0 is not available, 1 is available
-11|locations|text|Node coordinate information
-12|connects|text|Node connection information
-13|receivers|text|Recipient
-14|receivers_cc|text|Cc
-15|create_time|datetime|Creation time
-16|timeout|int(11) |Timeout duration
-17|tenant_id|int(11) |Tenant id
-18|update_time|datetime|Update time
-19|modify_by|varchar(36)|Modify user
-20|resource_ids|varchar(255)|Resource ids
-
-The process_definition_json field is the core field, which defines the task information in the DAG diagram. The data is stored in JSON.
-
-The public data structure is as follows.
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|globalParams|Array|Global parameters
-2|tasks|Array|Task collection in the process  [ Please refer to the following chapters for the structure of each type]
-3|tenantId|int|Tenant id
-4|timeout|int|Timeout duration
-
-Data example:
-```bash
-{
-    "globalParams":[
-        {
-            "prop":"golbal_bizdate",
-            "direct":"IN",
-            "type":"VARCHAR",
-            "value":"${system.biz.date}"
-        }
-    ],
-    "tasks":Array[1],
-    "tenantId":0,
-    "timeout":0
-}
-```
-
-# Detailed explanation of the storage structure of each task type
-
-## Shell node
-**The node data structure is as follows:**
-Serial number|Field||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SHELL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SHELL",
-    "id":"tasks-80760",
-    "name":"Shell Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"echo "This is a shell script""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-
-```
-
-
-## SQL node
-Perform data query and update operations on the specified data source through SQL.
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |type |String | Database type
-6| |datasource |Int | Data source id
-7| |sql |String | Query SQL statement
-8| |udfs | String| udf function|UDF function ids, separated by commas.
-9| |sqlType | String| SQL node type |0 query, 1 non query
-10| |title |String | Mail title
-11| |receivers |String | Recipient
-12| |receiversCc |String | Cc
-13| |showType | String| Mail display type|TABLE table  ,  ATTACHMENT attachment
-14| |connParams | String| Connection parameters
-15| |preStatements | Array| Pre-SQL
-16| | postStatements| Array|Post SQL||
-17| | localParams| Array|Custom parameters||
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SQL",
-    "id":"tasks-95648",
-    "name":"SqlTask-Query",
-    "params":{
-        "type":"MYSQL",
-        "datasource":1,
-        "sql":"select id , namge , age from emp where id =  ${id}",
-        "udfs":"",
-        "sqlType":"0",
-        "title":"xxxx@xxx.com",
-        "receivers":"xxxx@xxx.com",
-        "receiversCc":"",
-        "showType":"TABLE",
-        "localParams":[
-            {
-                "prop":"id",
-                "direct":"IN",
-                "type":"INTEGER",
-                "value":"1"
-            }
-        ],
-        "connParams":"",
-        "preStatements":[
-            "insert into emp ( id,name ) value (1,'Li' )"
-        ],
-        "postStatements":[
-
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## PROCEDURE [stored procedure] node
-**The node data structure is as follows:**
-**Sample node data:**
-
-## SPARK node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SPARK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |driverCores | String| Driver core
-11| |driverMemory | String| Driver memory
-12| |numExecutors |String | Number of executors
-13| |executorMemory |String | Executor memory
-14| |executorCores |String | Number of executor cores
-15| |programType | String| Program type|JAVA,SCALA,PYTHON
-16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
-17| | localParams| Array|Custom parameters
-18| | resourceList| Array|Resource
-19|description | |String|Description | |
-20|runFlag | |String |Run ID| |
-21|conditionResult | |Object|Conditional branch | |
-22| | successNode| Array|Jump to node successfully| |
-23| | failedNode|Array|Failed jump node | 
-24| dependence| |Object |Task dependency |Mutually exclusive with params
-25|maxRetryTimes | |String|Maximum number of retries | |
-26|retryInterval | |String |Retry interval| |
-27|timeout | |Object|Timeout control | |
-28| taskInstancePriority| |String|Task priority | |
-29|workerGroup | |String |Worker Grouping| |
-30|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SPARK",
-    "id":"tasks-87430",
-    "name":"SparkTask",
-    "params":{
-        "mainClass":"org.apache.spark.examples.SparkPi",
-        "mainJar":{
-            "id":4
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "driverCores":1,
-        "driverMemory":"512M",
-        "numExecutors":2,
-        "executorMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"10",
-        "others":"",
-        "programType":"SCALA",
-        "sparkVersion":"SPARK2"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## MapReduce (MR) node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |MR
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |programType | String| Program type|JAVA,PYTHON
-10| | localParams| Array|Custom parameters
-11| | resourceList| Array|Resource
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"MR",
-    "id":"tasks-28997",
-    "name":"MRTask",
-    "params":{
-        "mainClass":"wordcount",
-        "mainJar":{
-            "id":5
-        },
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
-        "others":"",
-        "programType":"JAVA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## Python node
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |PYTHON
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Python script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"PYTHON",
-    "id":"tasks-5463",
-    "name":"Python Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"print("This is a python script")"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-
-## Flink node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |FLINK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |slot | String| Number of slots
-11| |taskManager |String | Number of TaskManagers
-12| |taskManagerMemory |String | TaskManager memory
-13| |jobManagerMemory |String | JobManager memory
-14| |programType | String| Program type|JAVA,SCALA,PYTHON
-15| | localParams| Array|Custom parameters
-16| | resourceList| Array|Resource
-17|description | |String|Description | |
-18|runFlag | |String |Run ID| |
-19|conditionResult | |Object|Conditional branch | |
-20| | successNode| Array|Jump to node successfully| |
-21| | failedNode|Array|Failed jump node | 
-22| dependence| |Object |Task dependency |Mutually exclusive with params
-23|maxRetryTimes | |String|Maximum number of retries | |
-24|retryInterval | |String |Retry interval| |
-25|timeout | |Object|Timeout control | |
-26| taskInstancePriority| |String|Task priority | |
-27|workerGroup | |String |Worker Grouping| |
-28|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"FLINK",
-    "id":"tasks-17135",
-    "name":"FlinkTask",
-    "params":{
-        "mainClass":"com.flink.demo",
-        "mainJar":{
-            "id":6
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "slot":1,
-        "taskManager":"2",
-        "jobManagerMemory":"1G",
-        "taskManagerMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"100",
-        "others":"",
-        "programType":"SCALA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## HTTP node
-**The node data structure is as follows:**
-
-Serial number|Parameter name||Type|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |HTTP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |url |String | Request address
-6| |httpMethod | String| Request method|GET,POST,HEAD,PUT,DELETE
-7| | httpParams| Array|Request parameter
-8| |httpCheckCondition | String| Check conditions|Default response code 200
-9| |condition |String | Check content
-10| | localParams| Array|Custom parameters
-11|description | |String|Description | |
-12|runFlag | |String |Run ID| |
-13|conditionResult | |Object|Conditional branch | |
-14| | successNode| Array|Jump to node successfully| |
-15| | failedNode|Array|Failed jump node | 
-16| dependence| |Object |Task dependency |Mutually exclusive with params
-17|maxRetryTimes | |String|Maximum number of retries | |
-18|retryInterval | |String |Retry interval| |
-19|timeout | |Object|Timeout control | |
-20| taskInstancePriority| |String|Task priority | |
-21|workerGroup | |String |Worker Grouping| |
-22|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"HTTP",
-    "id":"tasks-60499",
-    "name":"HttpTask",
-    "params":{
-        "localParams":[
-
-        ],
-        "httpParams":[
-            {
-                "prop":"id",
-                "httpParametersType":"PARAMETER",
-                "value":"1"
-            },
-            {
-                "prop":"name",
-                "httpParametersType":"PARAMETER",
-                "value":"Bo"
-            }
-        ],
-        "url":"https://www.xxxxx.com:9012",
-        "httpMethod":"POST",
-        "httpCheckCondition":"STATUS_CODE_DEFAULT",
-        "condition":""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## DataX node
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DATAX
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |customConfig |Int | Whether to use a custom configuration| 0 no, 1 yes (custom JSON)
-6| |dsType |String | Source database type
-7| |dataSource |Int | Source database ID
-8| |dtType | String| Target database type
-9| |dataTarget | Int| Target database ID 
-10| |sql |String | SQL statement
-11| |targetTable |String | Target table
-12| |jobSpeedByte |Int | Current limit (bytes)
-13| |jobSpeedRecord | Int| Current limit (number of records)
-14| |preStatements | Array| Pre-SQL
-15| | postStatements| Array|Post SQL
-16| | json| String|Custom configuration|Effective when customConfig=1
-17| | localParams| Array|Custom parameters|Effective when customConfig=1
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-
-```bash
-{
-    "type":"DATAX",
-    "id":"tasks-91196",
-    "name":"DataxTask-DB",
-    "params":{
-        "customConfig":0,
-        "dsType":"MYSQL",
-        "dataSource":1,
-        "dtType":"MYSQL",
-        "dataTarget":1,
-        "sql":"select id, name ,age from user ",
-        "targetTable":"emp",
-        "jobSpeedByte":524288,
-        "jobSpeedRecord":500,
-        "preStatements":[
-            "truncate table emp "
-        ],
-        "postStatements":[
-            "truncate table user"
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## Sqoop node
-
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQOOP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |JSON format
-5| | concurrency| Int|Concurrency
-6| | modelType|String |Flow direction|import,export
-7| |sourceType|String |Data source type |
-8| |sourceParams |String| Data source parameters| JSON format
-9| | targetType|String |Target data type
-10| |targetParams | String|Target data parameters|JSON format
-11| |localParams |Array |Custom parameters
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SQOOP",
-            "id":"tasks-82041",
-            "name":"Sqoop Task",
-            "params":{
-                "concurrency":1,
-                "modelType":"import",
-                "sourceType":"MYSQL",
-                "targetType":"HDFS",
-                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
-                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
-                "localParams":[
-
-                ]
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-## Conditional branch node
-
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |CONDITIONS
-3| name| |String|Name |
-4| params| |Object| Custom parameters | null
-5|description | |String|Description | |
-6|runFlag | |String |Run ID| |
-7|conditionResult | |Object|Conditional branch | |
-8| | successNode| Array|Jump to node successfully| |
-9| | failedNode|Array|Failed jump node | 
-10| dependence| |Object |Task dependency |Mutually exclusive with params
-11|maxRetryTimes | |String|Maximum number of retries | |
-12|retryInterval | |String |Retry interval| |
-13|timeout | |Object|Timeout control | |
-14| taskInstancePriority| |String|Task priority | |
-15|workerGroup | |String |Worker Grouping| |
-16|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"CONDITIONS",
-    "id":"tasks-96189",
-    "name":"条件",
-    "params":{
-
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            "test04"
-        ],
-        "failedNode":[
-            "test05"
-        ]
-    },
-    "dependence":{
-        "relation":"AND",
-        "dependTaskList":[
-
-        ]
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-        "test01",
-        "test02"
-    ]
-}
-```
-
-
-## Subprocess node
-**The node data structure is as follows:**
-Serial number|Parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SUB_PROCESS
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |processDefinitionId |Int| Process definition id
-6|description | |String|Description | |
-7|runFlag | |String |Run ID| |
-8|conditionResult | |Object|Conditional branch | |
-9| | successNode| Array|Jump to node successfully| |
-10| | failedNode|Array|Failed jump node | 
-11| dependence| |Object |Task dependency |Mutually exclusive with params
-12|maxRetryTimes | |String|Maximum number of retries | |
-13|retryInterval | |String |Retry interval| |
-14|timeout | |Object|Timeout control | |
-15| taskInstancePriority| |String|Task priority | |
-16|workerGroup | |String |Worker Grouping| |
-17|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SUB_PROCESS",
-            "id":"tasks-14806",
-            "name":"SubProcessTask",
-            "params":{
-                "processDefinitionId":2
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-
-
-## DEPENDENT node
-**The node data structure is as follows:**
-Serial number|parameter name||Types|Description |Description
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DEPENDENT
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14| | relation|String |Relationship |AND,OR
-15| | dependTaskList|Array |Dependent task list |
-16|maxRetryTimes | |String|Maximum number of retries | |
-17|retryInterval | |String |Retry interval| |
-18|timeout | |Object|Timeout control | |
-19| taskInstancePriority| |String|Task priority | |
-20|workerGroup | |String |Worker Grouping| |
-21|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"DEPENDENT",
-            "id":"tasks-57057",
-            "name":"DenpendentTask",
-            "params":{
-
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-                "relation":"AND",
-                "dependTaskList":[
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":7,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"ALL",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    },
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":5,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"SqlTask-Update",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    }
-                ]
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
+
+# Overall task storage structure
+All tasks created in dolphinscheduler are saved in the t_ds_process_definition table.
+
+The database table structure is shown in the following table:
+
+
+Serial number | Field  | Types  |  Description
+-------- | ---------| -------- | ---------
+1|id|int(11)|Primary key
+2|name|varchar(255)|Process definition name
+3|version|int(11)|Process definition version
+4|release_state|tinyint(4)|Release status of the process definition:0 not online, 1 online
+5|project_id|int(11)|Project id
+6|user_id|int(11)|User id of the process definition
+7|process_definition_json|longtext|Process definition JSON
+8|description|text|Process definition description
+9|global_params|text|Global parameters
+10|flag|tinyint(4)|Whether the process is available: 0 is not available, 1 is available
+11|locations|text|Node coordinate information
+12|connects|text|Node connection information
+13|receivers|text|Recipient
+14|receivers_cc|text|Cc
+15|create_time|datetime|Creation time
+16|timeout|int(11) |Timeout duration
+17|tenant_id|int(11) |Tenant id
+18|update_time|datetime|Update time
+19|modify_by|varchar(36)|Modify user
+20|resource_ids|varchar(255)|Resource ids
+
+The process_definition_json field is the core field, which defines the task information in the DAG diagram. The data is stored in JSON.
+
+The public data structure is as follows.
+Serial number | Field  | Types  |  Description
+-------- | ---------| -------- | ---------
+1|globalParams|Array|Global parameters
+2|tasks|Array|Task collection in the process  [ Please refer to the following chapters for the structure of each type]
+3|tenantId|int|Tenant id
+4|timeout|int|Timeout duration
+
+Data example:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"golbal_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
+
+# Detailed explanation of the storage structure of each task type
+
+## Shell node
+**The node data structure is as follows:**
+Serial number|Field||Types|Description |Description
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SHELL
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource||
+8|description | |String|Description | |
+9|runFlag | |String |Run ID| |
+10|conditionResult | |Object|Conditional branch | |
+11| | successNode| Array|Jump to node successfully| |
+12| | failedNode|Array|Failed jump node | 
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14|maxRetryTimes | |String|Maximum number of retries | |
+15|retryInterval | |String |Retry interval| |
+16|timeout | |Object|Timeout control | |
+17| taskInstancePriority| |String|Task priority | |
+18|workerGroup | |String |Worker Grouping| |
+19|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
+
+
+## SQL node
+Perform data query and update operations on the specified data source through SQL.
+
+**The node data structure is as follows:**
+Serial number|Parameter name||Types|Description |Description
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SQL
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |type |String | Database type
+6| |datasource |Int | Data source id
+7| |sql |String | Query SQL statement
+8| |udfs | String| udf function|UDF function ids, separated by commas.
+9| |sqlType | String| SQL node type |0 query, 1 non query
+10| |title |String | Mail title
+11| |receivers |String | Recipient
+12| |receiversCc |String | Cc
+13| |showType | String| Mail display type|TABLE table  ,  ATTACHMENT attachment
+14| |connParams | String| Connection parameters
+15| |preStatements | Array| Pre-SQL
+16| | postStatements| Array|Post SQL||
+17| | localParams| Array|Custom parameters||
+18|description | |String|Description | |
+19|runFlag | |String |Run ID| |
+20|conditionResult | |Object|Conditional branch | |
+21| | successNode| Array|Jump to node successfully| |
+22| | failedNode|Array|Failed jump node | 
+23| dependence| |Object |Task dependency |Mutually exclusive with params
+24|maxRetryTimes | |String|Maximum number of retries | |
+25|retryInterval | |String |Retry interval| |
+26|timeout | |Object|Timeout control | |
+27| taskInstancePriority| |String|Task priority | |
+28|workerGroup | |String |Worker Grouping| |
+29|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE [stored procedure] node
+**The node data structure is as follows:**
+**Sample node data:**
+
+## SPARK node
+**The node data structure is as follows:**
+
+Serial number|Parameter name||Types|Description |Description
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SPARK
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |mainClass |String | Run the main class
+6| |mainArgs | String| Operating parameters
+7| |others | String| Other parameters
+8| |mainJar |Object | Program jar package
+9| |deployMode |String | Deployment mode  |local,client,cluster
+10| |driverCores | String| Driver core
+11| |driverMemory | String| Driver memory
+12| |numExecutors |String | Number of executors
+13| |executorMemory |String | Executor memory
+14| |executorCores |String | Number of executor cores
+15| |programType | String| Program type|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
+17| | localParams| Array|Custom parameters
+18| | resourceList| Array|Resource
+19|description | |String|Description | |
+20|runFlag | |String |Run ID| |
+21|conditionResult | |Object|Conditional branch | |
+22| | successNode| Array|Jump to node successfully| |
+23| | failedNode|Array|Failed jump node | 
+24| dependence| |Object |Task dependency |Mutually exclusive with params
+25|maxRetryTimes | |String|Maximum number of retries | |
+26|retryInterval | |String |Retry interval| |
+27|timeout | |Object|Timeout control | |
+28| taskInstancePriority| |String|Task priority | |
+29|workerGroup | |String |Worker Grouping| |
+30|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+## MapReduce (MR) node
+**The node data structure is as follows:**
+
+Serial number|Parameter name||Types|Description |Description
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |MR
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |mainClass |String | Run the main class
+6| |mainArgs | String| Operating parameters
+7| |others | String| Other parameters
+8| |mainJar |Object | Program jar package
+9| |programType | String| Program type|JAVA,PYTHON
+10| | localParams| Array|Custom parameters
+11| | resourceList| Array|Resource
+12|description | |String|Description | |
+13|runFlag | |String |Run ID| |
+14|conditionResult | |Object|Conditional branch | |
+15| | successNode| Array|Jump to node successfully| |
+16| | failedNode|Array|Failed jump node | 
+17| dependence| |Object |Task dependency |Mutually exclusive with params
+18|maxRetryTimes | |String|Maximum number of retries | |
+19|retryInterval | |String |Retry interval| |
+20|timeout | |Object|Timeout control | |
+21| taskInstancePriority| |String|Task priority | |
+22|workerGroup | |String |Worker Grouping| |
+23|preTasks | |Array|Predecessor | |
+
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python node
+**The node data structure is as follows:**
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |PYTHON
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |rawScript |String| Python script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource list||
+8|description | |String|Description | |
+9|runFlag | |String |Run flag| |
+10|conditionResult | |Object|Condition branch | |
+11| | successNode| Array|Nodes to jump to on success| |
+12| | failedNode|Array|Nodes to jump to on failure |
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14|maxRetryTimes | |String|Maximum number of retries | |
+15|retryInterval | |String |Retry interval| |
+16|timeout | |Object|Timeout control | |
+17| taskInstancePriority| |String|Task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
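+
+Since `rawScript` is carried as a single JSON string, any quotes or line breaks inside the Python code must be JSON-escaped (`\"`, `\n`). A minimal sketch with a two-line script whose content is purely illustrative:
+
+```json
+{
+    "rawScript":"import sys\nprint(\"argv: {}\".format(sys.argv))"
+}
+```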
+
+
+
+
+## Flink node
+**The node data structure is as follows:**
+
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |FLINK
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |mainClass |String | Main class to run
+6| |mainArgs | String| Main program arguments
+7| |others | String| Other arguments
+8| |mainJar |Object | Main jar package
+9| |deployMode |String | Deployment mode |local, client, cluster
+10| |slot | String| Number of slots
+11| |taskManager |String | Number of TaskManagers
+12| |taskManagerMemory |String | TaskManager memory
+13| |jobManagerMemory |String | JobManager memory
+14| |programType | String| Program type|JAVA, SCALA, PYTHON
+15| | localParams| Array|Custom parameters
+16| | resourceList| Array|Resource list
+17|description | |String|Description | |
+18|runFlag | |String |Run flag| |
+19|conditionResult | |Object|Condition branch | |
+20| | successNode| Array|Nodes to jump to on success| |
+21| | failedNode|Array|Nodes to jump to on failure |
+22| dependence| |Object |Task dependency |Mutually exclusive with params
+23|maxRetryTimes | |String|Maximum number of retries | |
+24|retryInterval | |String |Retry interval| |
+25|timeout | |Object|Timeout control | |
+26| taskInstancePriority| |String|Task priority | |
+27|workerGroup | |String |Worker group| |
+28|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP node
+**The node data structure is as follows:**
+
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |HTTP
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |url |String | Request URL
+6| |httpMethod | String| Request method|GET, POST, HEAD, PUT, DELETE
+7| | httpParams| Array|Request parameters
+8| |httpCheckCondition | String| Check condition|Defaults to response code 200
+9| |condition |String | Check content
+10| | localParams| Array|Custom parameters
+11|description | |String|Description | |
+12|runFlag | |String |Run flag| |
+13|conditionResult | |Object|Condition branch | |
+14| | successNode| Array|Nodes to jump to on success| |
+15| | failedNode|Array|Nodes to jump to on failure |
+16| dependence| |Object |Task dependency |Mutually exclusive with params
+17|maxRetryTimes | |String|Maximum number of retries | |
+18|retryInterval | |String |Retry interval| |
+19|timeout | |Object|Timeout control | |
+20| taskInstancePriority| |String|Task priority | |
+21|workerGroup | |String |Worker group| |
+22|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
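+
+For comparison, a minimal `params` sketch for a GET request that keeps the default response-code check; the URL and parameter values are placeholders rather than values from this document:
+
+```json
+{
+    "localParams":[],
+    "httpParams":[
+        {
+            "prop":"id",
+            "httpParametersType":"PARAMETER",
+            "value":"1"
+        }
+    ],
+    "url":"https://www.xxxxx.com:9012/query",
+    "httpMethod":"GET",
+    "httpCheckCondition":"STATUS_CODE_DEFAULT",
+    "condition":""
+}
+```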
+
+
+
+## DataX node
+
+**The node data structure is as follows:**
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |DATAX
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |customConfig |Int | Custom configuration flag| 0 not custom, 1 custom
+6| |dsType |String | Source database type
+7| |dataSource |Int | Source database ID
+8| |dtType | String| Target database type
+9| |dataTarget | Int| Target database ID
+10| |sql |String | SQL statement
+11| |targetTable |String | Target table
+12| |jobSpeedByte |Int | Rate limit (bytes)
+13| |jobSpeedRecord | Int| Rate limit (record count)
+14| |preStatements | Array| Pre-SQL
+15| | postStatements| Array|Post-SQL
+16| | json| String|Custom configuration|Effective when customConfig=1
+17| | localParams| Array|Custom parameters|Effective when customConfig=1
+18|description | |String|Description | |
+19|runFlag | |String |Run flag| |
+20|conditionResult | |Object|Condition branch | |
+21| | successNode| Array|Nodes to jump to on success| |
+22| | failedNode|Array|Nodes to jump to on failure |
+23| dependence| |Object |Task dependency |Mutually exclusive with params
+24|maxRetryTimes | |String|Maximum number of retries | |
+25|retryInterval | |String |Retry interval| |
+26|timeout | |Object|Timeout control | |
+27| taskInstancePriority| |String|Task priority | |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|Predecessor tasks | |
+
+
+
+**Sample node data:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
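+
+When `customConfig` is `1`, the form fields above are replaced by a hand-written DataX job definition carried in the `json` field (rows 16-17 of the table). A minimal sketch of that shape; the embedded job content is a placeholder, not a runnable DataX configuration:
+
+```json
+{
+    "customConfig":1,
+    "json":"{\"job\":{\"setting\":{},\"content\":[{\"reader\":{},\"writer\":{}}]}}",
+    "localParams":[]
+}
+```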
+
+## Sqoop node
+
+**The node data structure is as follows:**
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |SQOOP
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| | concurrency| Int|Concurrency
+6| | modelType|String |Flow direction|import, export
+7| |sourceType|String |Data source type |
+8| |sourceParams |String| Data source parameters| JSON format
+9| | targetType|String |Target data type
+10| |targetParams | String|Target data parameters|JSON format
+11| |localParams |Array |Custom parameters
+12|description | |String|Description | |
+13|runFlag | |String |Run flag| |
+14|conditionResult | |Object|Condition branch | |
+15| | successNode| Array|Nodes to jump to on success| |
+16| | failedNode|Array|Nodes to jump to on failure |
+17| dependence| |Object |Task dependency |Mutually exclusive with params
+18|maxRetryTimes | |String|Maximum number of retries | |
+19|retryInterval | |String |Retry interval| |
+20|timeout | |Object|Timeout control | |
+21| taskInstancePriority| |String|Task priority | |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|Predecessor tasks | |
+
+
+
+
+**Sample node data:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
+                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
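+
+Because `sourceParams` and `targetParams` are themselves JSON documents embedded as escaped strings, they are easier to read once decoded. The two strings from the sample above, pretty-printed:
+
+```json
+{
+    "sourceParams":{
+        "srcType":"MYSQL",
+        "srcDatasource":1,
+        "srcTable":"",
+        "srcQueryType":"1",
+        "srcQuerySql":"select id, name from user",
+        "srcColumnType":"0",
+        "srcColumns":"",
+        "srcConditionList":[],
+        "mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],
+        "mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]
+    },
+    "targetParams":{
+        "targetPath":"/user/hive/warehouse/ods.db/user",
+        "deleteTargetDir":false,
+        "fileType":"--as-avrodatafile",
+        "compressionCodec":"snappy",
+        "fieldsTerminated":",",
+        "linesTerminated":"@"
+    }
+}
+```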
+
+## Conditional branch node
+
+**The node data structure is as follows:**
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |CONDITIONS
+3| name| |String|Name |
+4| params| |Object| Custom parameters | null
+5|description | |String|Description | |
+6|runFlag | |String |Run flag| |
+7|conditionResult | |Object|Condition branch | |
+8| | successNode| Array|Nodes to jump to on success| |
+9| | failedNode|Array|Nodes to jump to on failure |
+10| dependence| |Object |Task dependency |Mutually exclusive with params
+11|maxRetryTimes | |String|Maximum number of retries | |
+12|retryInterval | |String |Retry interval| |
+13|timeout | |Object|Timeout control | |
+14| taskInstancePriority| |String|Task priority | |
+15|workerGroup | |String |Worker group| |
+16|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
+
+
+## Subprocess node
+**The node data structure is as follows:**
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |SUB_PROCESS
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |processDefinitionId |Int| Process definition ID
+6|description | |String|Description | |
+7|runFlag | |String |Run flag| |
+8|conditionResult | |Object|Condition branch | |
+9| | successNode| Array|Nodes to jump to on success| |
+10| | failedNode|Array|Nodes to jump to on failure |
+11| dependence| |Object |Task dependency |Mutually exclusive with params
+12|maxRetryTimes | |String|Maximum number of retries | |
+13|retryInterval | |String |Retry interval| |
+14|timeout | |Object|Timeout control | |
+15| taskInstancePriority| |String|Task priority | |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## DEPENDENT node
+**The node data structure is as follows:**
+
+Serial number|Parameter name||Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Task type |DEPENDENT
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource list||
+8|description | |String|Description | |
+9|runFlag | |String |Run flag| |
+10|conditionResult | |Object|Condition branch | |
+11| | successNode| Array|Nodes to jump to on success| |
+12| | failedNode|Array|Nodes to jump to on failure |
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14| | relation|String |Relationship |AND, OR
+15| | dependTaskList|Array |Dependent task list |
+16|maxRetryTimes | |String|Maximum number of retries | |
+17|retryInterval | |String |Retry interval| |
+18|timeout | |Object|Timeout control | |
+19| taskInstancePriority| |String|Task priority | |
+20|workerGroup | |String |Worker group| |
+21|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
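+
+The sample above is verbose mainly because `definitionList` echoes every workflow available in the project. Reduced to the fields that actually drive the dependency check, one item looks roughly like this (values taken from the first entry of the sample):
+
+```json
+{
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+            {
+                "relation":"AND",
+                "dependItemList":[
+                    {
+                        "projectId":1,
+                        "definitionId":7,
+                        "depTasks":"ALL",
+                        "cycle":"day",
+                        "dateValue":"today"
+                    }
+                ]
+            }
+        ]
+    }
+}
+```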
diff --git a/docs/en-us/1.3.1/user_doc/upgrade.md b/content/en-us/docs/1.3.1/user_doc/upgrade.md
similarity index 100%
rename from docs/en-us/1.3.1/user_doc/upgrade.md
rename to content/en-us/docs/1.3.1/user_doc/upgrade.md
diff --git a/docs/en-us/1.3.1/user_doc/architecture-design.md b/content/en-us/docs/1.3.2/user_doc/architecture-design.md
similarity index 98%
rename from docs/en-us/1.3.1/user_doc/architecture-design.md
rename to content/en-us/docs/1.3.2/user_doc/architecture-design.md
index 29f4ae5..fe3beb7 100644
--- a/docs/en-us/1.3.1/user_doc/architecture-design.md
+++ b/content/en-us/docs/1.3.2/user_doc/architecture-design.md
@@ -1,332 +1,332 @@
-## System Architecture Design
-Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
-
-### 1.Glossary
-**DAG:** The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes. Examples are as follows:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**:Visualization formed by dragging task nodes and establishing task node associations**DAG**
-
-**Process instance**:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
-
-**Task instance**:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion, note: 其中子 **SUB_PROCESS**  It is also a separate process definition that can be started and executed separately
-
-**Scheduling method:** The system supports scheduled scheduling and manual scheduling based on cron expressions. Command type support: start workflow, start execution from current node, resume fault-tolerant workflow, resume pause process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread。Among them **Resume fault-tolerant workflow** 和 **Resume waiting thread** The two command types are used by the internal control of scheduling, and cannot b [...]
-
-**Scheduled**:System adopts **quartz** distributed scheduler, and supports the visual generation of cron expressions
-
-**Rely**:The system not only supports **DAG** simple dependencies between the predecessor and successor nodes, but also provides **task dependent** nodes, supporting **between processes**
-
-**Priority** :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out
-
-**Email alert**:Support **SQL task** Query result email sending, process instance running result email alert and fault tolerance alert notification
-
-**Failure strategy**:For tasks running in parallel, if a task fails, two failure strategy processing methods are provided. **Continue** refers to regardless of the status of the task running in parallel until the end of the process failure. **End** means that once a failed task is found, Kill will also run the parallel task at the same time, and the process fails and ends
-
-**Complement**:Supplement historical data,Supports **interval parallel and serial** two complement methods
-
-### 2.System Structure
-
-#### 2.1 System architecture diagram
-<p align="center">
-  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
-  <p align="center">
-        <em>System architecture diagram</em>
-  </p>
-</p>
-
-#### 2.2 Start process activity diagram
-<p align="center">
-  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
-  <p align="center">
-        <em>Start process activity diagram</em>
-  </p>
-</p>
-
-#### 2.3 Architecture description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with Zookeeper, and perform fault tolerance by monitoring changes in the temporary node of Zookeeper.
-    MasterServer provides monitoring services based on netty.
-
-    ##### The service mainly includes:
-
-    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When Quartz starts the task, there will be a thread pool inside the Master that is specifically responsible for the follow-up operation of the processing task
-
-    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
-
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
-
-    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
-
-* **WorkerServer** 
-
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
-
-     When the WorkerServer service starts, register a temporary node with Zookeeper and maintain a heartbeat.
-     Server provides monitoring services based on netty. Worker
-     ##### The service mainly includes:
-     - **Fetch TaskThread** is mainly responsible for continuously getting tasks from **Task Queue**, and calling **TaskScheduleThread** corresponding executor according to different task types.
-
-     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
-
-* **ZooKeeper** 
-
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
-
-    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
-
-* **Task Queue** 
-
-    Provide task queue operation, the current queue is also implemented based on Zookeeper. Because there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have tested the millions of data storage queues, which has no impact on system stability and performance.
-
-* **Alert** 
-
-    Provide alarm related interface, the interface mainly includes **alarm** two types of alarm data storage, query and notification functions. Among them, there are **email notification** and **SNMP (not yet implemented)**.
-
-* **API** 
-
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
-
-* **UI** 
-
-    The front-end page of the system provides various visual operation interfaces of the system,See more at [System User Manual](./system-manual.md) section。
-
-#### 2.3 Architecture design ideas
-
-##### One、Decentralization VS centralization 
-
-###### Centralized thinking
-
-The centralized design concept is relatively simple. The nodes in the distributed cluster are divided into roles according to roles, which are roughly divided into two roles:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
- </p>
-
-- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node will not be in a "busy dead" or "idle dead" state.
-- The role of Worker is mainly responsible for task execution and maintenance and Master's heartbeat, so that Master can assign tasks to Slave.
-
-
-
-Problems in centralized thought design:
-
-- Once there is a problem with the Master, the dragons are headless and the entire cluster will collapse. In order to solve this problem, most of the Master/Slave architecture models adopt the design scheme of active and standby Master, which can be hot standby or cold standby, or automatic switching or manual switching, and more and more new systems are beginning to have The ability to automatically elect and switch Master to improve the availability of the system.
-- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will cause the Master to be overloaded. If the Scheduler is on the slave, all tasks in a DAG can only submit jobs on a certain machine. When there are more parallel tasks, the pressure on the slave may be greater.
-
-
-
-###### Decentralized
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
- </p>
-
-- In the decentralized design, there is usually no concept of Master/Slave, all roles are the same, the status is equal, the global Internet is a typical decentralized distributed system, any node equipment connected to the network is down, All will only affect a small range of functions.
-- The core design of decentralized design is that there is no "manager" different from other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed system communication greatly increases the difficulty of implementing the above functions.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamic centralized distributed systems are constantly pouring out. Under this architecture, the managers in the cluster are dynamically selected, rather than preset, and when the cluster fails, the nodes of the cluster will automatically hold "meetings" to elect new "managers" To preside over the work. The most typical case is Etcd implemented by ZooKeeper and Go language.
-
-
-
-- The decentralization of DolphinScheduler is that the Master/Worker is registered in Zookeeper, and the Master cluster and Worker cluster are centerless, and the Zookeeper distributed lock is used to elect one of the Master or Worker as the "manager" to perform the task.
-
-#####  Two、Distributed lock practice
-
-DolphinScheduler uses ZooKeeper distributed lock to realize that only one Master executes Scheduler at the same time, or only one Worker executes the submission of tasks.
-1. The core process algorithm for acquiring distributed locks is as follows:
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-
-##### Three、Insufficient thread loop waiting problem
-
--  If there is no sub-process in a DAG, if the number of data in the Command is greater than the threshold set by the thread pool, the process directly waits or fails.
--  If many sub-processes are nested in a large DAG, the following figure will produce a "dead" state:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
- </p>
-In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot end, so that the threads cannot be released. In this way, the state of the child-parent process loop waiting is formed. At this time, unless a new Master is started to add threads to break such a "stalemate", the sched [...]
-
-It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
-
-1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
-2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
-3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
-
-note:The Master Scheduler thread is executed by FIFO when acquiring the Command.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-
-##### Four、Fault-tolerant design
-Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
-
-###### 1. Downtime fault tolerance
-
-The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
- </p>
-Among them, the Master monitors the directories of other Masters and Workers. If the remove event is heard, fault tolerance of the process instance or task instance will be performed according to the specific business logic.
-
-
-
-- Master fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
- </p>
-After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled by the Scheduler thread in DolphinScheduler, traverses the DAG to find the "running" and "submit successful" tasks, monitors the status of its task instances for the "running" tasks, and "commits successful" tasks It is necessary to determine whether the task queue already exists. If it exists, the status of the task instance is also monitored. If it does not exist, resubmit the task instance.
-
-
-
-- Worker fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
- </p>
-
-Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
-
- Note: Due to "network jitter", the node may lose its heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.
-
-###### 2.Task failed and try again
-
-Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
-- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
-
-
-
-Next to the topic, we divide the task nodes in the workflow into two types.
-
-- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node, Spark node, and dependent node.
-
-- There is also a logical node, which does not do actual script or statement processing, but only logical processing of the entire process flow, such as sub-process sections.
-
-Each **business node** can be configured with the number of failed retries. When the task node fails, it will automatically retry until it succeeds or exceeds the configured number of retries. **Logical node** Failure retry is not supported. But the tasks in the logical node support retry.
-
-If there is a task failure in the workflow that reaches the maximum number of retries, the workflow will fail to stop, and the failed workflow can be manually rerun or process recovery operation
-
-
-
-##### Five、Task priority design
-In the early scheduling design, if there is no priority design and the fair scheduling design is used, the task submitted first may be completed at the same time as the task submitted later, and the process or task priority cannot be set, so We have redesigned this, and our current design is as follows:
-
--  According to **priority of different process instances** priority over **priority of the same process instance** priority over **priority of tasks within the same process**priority over **tasks within the same process**submission order from high to Low task processing.
-    - The specific implementation is to parse the priority according to the json of the task instance, and then save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue, when obtained from the task queue, pass String comparison can get the tasks that need to be executed first
-
-        - The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
-             </p>
-
-        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
-             </p>
-
-
-##### Six、Logback and netty implement log access
-
--  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
-  -  Put logs on ES search engine
-  -  Obtain remote log information through netty communication
-
--  In consideration of the lightness of DolphinScheduler as much as possible, so I chose gRPC to achieve remote access to log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
- </p>
-
-
-- We use the FileAppender and Filter functions of the custom Logback to realize that each task instance generates a log file.
-- FileAppender is mainly implemented as follows:
-
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
- ```
-
-
-Generate logs in the form of /process definition id/process instance id/task instance id.log
-
-- Filter to match the thread name starting with TaskLogInfo:
-
-- TaskLogFilter is implemented as follows:
-
- ```java
- /**
- *  task log filter
- */
-public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
-}
- ```
-
-### 3.Module introduction
-- dolphinscheduler-alert alarm module, providing AlertServer service.
-
-- dolphinscheduler-api web application module, providing ApiServer service.
-
-- dolphinscheduler-common General constant enumeration, utility class, data structure or base class
-
-- dolphinscheduler-dao provides operations such as database access.
-
-- dolphinscheduler-remote client and server based on netty
-
-- dolphinscheduler-server MasterServer and WorkerServer services
-
-- dolphinscheduler-service service module, including Quartz, Zookeeper, log client access service, easy to call server module and api module
-
-- dolphinscheduler-ui front-end module
-### Sum up
-From the perspective of scheduling, this article preliminarily introduces the architecture principles and implementation ideas of the big data distributed workflow scheduling system-DolphinScheduler. To be continued
-
-
+## System Architecture Design
+Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
+
+### 1.Glossary
+**DAG:** The full name is Directed Acyclic Graph, abbreviated as DAG. Tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero in-degree until there are no subsequent nodes. An example is as follows:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
+
+**Process definition**: A visual **DAG** formed by dragging task nodes and establishing associations between them
+
+**Process instance**:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
+
+**Task instance**:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON and DEPENDENT (dependent) tasks, and plans to support dynamic plug-in expansion. Note: a **SUB_PROCESS** is also a separate process definition that can be started and executed on its own
+
+**Scheduling method:** The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them, **Resume fault-tolerant workflow** and **Resume waiting thread** are command types used internally by the scheduler and cannot b [...]
+
+**Scheduled**: The system adopts the **quartz** distributed scheduler and supports visual generation of cron expressions
+
+**Dependency**: The system not only supports simple **DAG** dependencies between predecessor and successor nodes, but also provides **task dependent** nodes to support custom task dependencies **between processes**
+
+**Priority**: Supports the priority of process instances and task instances; if no priority is set, the default is first-in, first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance events
+
+**Failure strategy**: For tasks running in parallel, two strategies are provided when a task fails. **Continue** means the tasks running in parallel keep going regardless of the failed task until the process ends; **End** means that once a failed task is found, the tasks running in parallel are killed and the process fails and ends
+
+**Complement**: Backfills historical data; supports two complement modes, **interval parallel and serial**
+
+### 2.System Structure
+
+#### 2.1 System architecture diagram
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+#### 2.2 Start process activity diagram
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+#### 2.3 Architecture description
+
+* **MasterServer** 
+
+    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
+    When the MasterServer service starts, register a temporary node with Zookeeper, and perform fault tolerance by monitoring changes in the temporary node of Zookeeper.
+    MasterServer provides monitoring services based on netty.
+
+    ##### The service mainly includes:
+
+    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When Quartz starts the task, there will be a thread pool inside the Master that is specifically responsible for the follow-up operation of the processing task
+
+    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
+
+    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
+
+    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
+
+* **WorkerServer** 
+
+     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
+
+     When the WorkerServer service starts, register a temporary node with Zookeeper and maintain a heartbeat.
+     WorkerServer provides monitoring services based on netty.
+     ##### The service mainly includes:
+     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the corresponding **TaskScheduleThread** executor according to the task type.
+
+     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
+
+* **ZooKeeper** 
+
+    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
+
+    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+
+* **Task Queue** 
+
+    Provide task queue operation, the current queue is also implemented based on Zookeeper. Because there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have tested the millions of data storage queues, which has no impact on system stability and performance.
+
+* **Alert** 
+
+    Provides alarm-related interfaces, mainly covering the storage, query, and notification of two types of **alarm** data. The notification methods currently include **email notification** and **SNMP (not yet implemented)**.
+
+* **API** 
+
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
+
+* **UI** 
+
+    The front-end pages of the system provide its various visual operation interfaces. See the [System User Manual](./system-manual.md) section for more details.
+
+#### 2.4 Architecture design ideas
+
+##### One. Decentralization vs. centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple: the nodes in the distributed cluster are assigned roles, roughly divided into two:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
+ </p>
+
+- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node will not be in a "busy dead" or "idle dead" state.
+- The Worker role is mainly responsible for executing tasks and maintaining a heartbeat with the Master so that the Master can assign tasks to it.
+
+
+
+Problems in centralized thought design:
+
+- Once there is a problem with the Master, the cluster is leaderless and the entire cluster will collapse. To solve this problem, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switching, and more and more new systems have the ability to automatically elect and switch the Master to improve system availability.
+- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will cause the Master to be overloaded. If the Scheduler is on the slave, all tasks in a DAG can only submit jobs on a certain machine. When there are more parallel tasks, the pressure on the slave may be greater.
+
+
+
+###### Decentralized
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
+ </p>
+
+- In a decentralized design there is usually no Master/Slave concept: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system; if any node connected to the network goes down, only a small range of functions is affected.
+- The core design of decentralized design is that there is no "manager" different from other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed system communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. Under this architecture, the managers in the cluster are elected dynamically rather than preset, and when the cluster fails, the nodes automatically hold "meetings" to elect new "managers" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in Go.
+
+
+
+- The decentralization of DolphinScheduler is that the Master/Worker is registered in Zookeeper, and the Master cluster and Worker cluster are centerless, and the Zookeeper distributed lock is used to elect one of the Master or Worker as the "manager" to perform the task.
+
+##### Two. Distributed lock practice
+
+DolphinScheduler uses ZooKeeper distributed locks to ensure that only one Master executes the Scheduler at a time and that only one Worker handles the submission of tasks at a time.
+1. The core process algorithm for acquiring distributed locks is as follows:
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
+2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
+
+##### Three. Insufficient-thread loop waiting problem
+
+-  If a DAG has no sub-processes and the number of Commands exceeds the threshold set by the thread pool, the process directly waits or fails.
+-  If many sub-processes are nested in a large DAG, the "dead" state shown in the following figure can arise:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
+ </p>
+In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot end, so that the threads cannot be released. In this way, the state of the child-parent process loop waiting is formed. At this time, unless a new Master is started to add threads to break such a "stalemate", the sched [...]
+
+It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
+
+1. Calculate the total number of Master threads and the number of threads each DAG requires, that is, pre-calculate before the DAG process is executed. Because the thread pools are spread over multiple Masters, the total number of threads is unlikely to be obtained in real time.
+2. Check the single-Master thread pool and, if the pool is full, fail the thread directly.
+3. Add a Command type for insufficient resources: if the thread pool is insufficient, suspend the main process. When new threads become available in the pool, the process suspended for lack of resources is woken up and executed again.
+
+Note: The Master Scheduler thread acquires Commands in FIFO order.
+
+So we chose the third approach to solve the problem of insufficient threads; a rough sketch of the idea follows.
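+
+The command type, methods, and interfaces in the sketch below are invented for illustration and are not the project's actual API; it only shows the "suspend and enqueue a wake-up command" pattern described above.
+
+```java
+public class ThreadShortageSketch {
+
+    // Illustrative command type; a real implementation would persist it so any Master can pick it up.
+    enum CommandType { START_PROCESS, RECOVER_WAITING_THREAD }
+
+    interface ProcessInstance { int getId(); void markWaitingForThread(); }
+    interface MasterThreadPool { boolean hasFreeSlot(); void execute(ProcessInstance instance); }
+    interface CommandQueue { void offer(CommandType type, int processInstanceId); }
+
+    void submit(ProcessInstance instance, MasterThreadPool pool, CommandQueue commands) {
+        if (pool.hasFreeSlot()) {
+            // Enough threads: run the process instance immediately.
+            pool.execute(instance);
+        } else {
+            // No free threads: suspend the instance and enqueue a "wake me up" command.
+            // Commands are consumed in FIFO order, so the instance resumes once threads free up.
+            instance.markWaitingForThread();
+            commands.offer(CommandType.RECOVER_WAITING_THREAD, instance.getId());
+        }
+    }
+}
+```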
+
+
+##### Four. Fault-tolerant design
+Fault tolerance is divided into service downtime fault tolerance and task retry; service downtime fault tolerance is further divided into Master fault tolerance and Worker fault tolerance.
+
+###### 1. Downtime fault tolerance
+
+The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
+ </p>
+Here, the Master watches the registration directories of the other Masters and of the Workers. If a remove event is received, fault tolerance for the affected process instances or task instances is performed according to the specific business logic, as sketched below.
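+
+The sketch below illustrates the watching idea with Apache Curator's `PathChildrenCache`; the registration path and the fault-tolerance callback are assumptions for illustration, not the project's actual code.
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.cache.PathChildrenCache;
+import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class NodeRemovedWatcherSketch {
+
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // Watch the (assumed) worker registration directory; a CHILD_REMOVED event
+        // means a worker node has gone away and its tasks need fault tolerance.
+        PathChildrenCache watcher = new PathChildrenCache(client, "/dolphinscheduler/nodes/worker", true);
+        watcher.getListenable().addListener((c, event) -> {
+            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
+                String downNodePath = event.getData().getPath();
+                // Placeholder: look up the task instances of the dead node and resubmit them.
+                System.out.println("node removed, start fault tolerance for: " + downNodePath);
+            }
+        });
+        watcher.start();
+    }
+}
+```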
+
+
+
+- Master fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
+ </p>
+After Master fault tolerance in ZooKeeper is completed, the Scheduler thread in DolphinScheduler reschedules the affected process instances: it traverses the DAG to find the "running" and "submitted successfully" tasks. For "running" tasks it monitors the status of their task instances; for "submitted successfully" tasks it checks whether the task already exists in the task queue. If it exists, the status of the task instance is likewise monitored; if not, the task instance is resubmitted.
+
+
+
+- Worker fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
+ </p>
+
+Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
+
+ Note: Due to "network jitter", a node may briefly lose its heartbeat with ZooKeeper, triggering a remove event for that node. For this situation we take the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
+
+###### 2. Task failure retry
+
+Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
+- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**.
+- Process failure rerun is also at the process level and is performed manually; the rerun starts from the start node.
+
+
+
+Back to the topic: we divide the task nodes in a workflow into two types.
+
+- One is the business node, which corresponds to an actual script or processing statement, such as a Shell node, an MR node, a Spark node, or a dependent node.
+
+- The other is the logical node, which does not perform actual script or statement processing but only logical processing of the entire process flow, such as the sub-process node.
+
+Each **business node** can be configured with a number of failure retries. When the task node fails, it automatically retries until it succeeds or the configured number of retries is exceeded. **Logical nodes** do not support failure retry, but the tasks inside a logical node do.
+
+If a task in the workflow fails and reaches its maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun manually or recovered with the process recovery operation.
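+
+As an illustrative sketch only (not the scheduler's code), a business-node retry loop behaves roughly as below, with `maxRetryTimes` and `retryInterval` corresponding to the settings described above; the interval unit is assumed to be minutes here.
+
+```java
+public class RetrySketch {
+
+    interface Task { boolean run(); } // returns true on success
+
+    // Run the task, retrying up to maxRetryTimes with retryIntervalMinutes between attempts.
+    static boolean runWithRetry(Task task, int maxRetryTimes, long retryIntervalMinutes)
+            throws InterruptedException {
+        for (int attempt = 0; attempt <= maxRetryTimes; attempt++) {
+            if (task.run()) {
+                return true; // success, stop retrying
+            }
+            if (attempt < maxRetryTimes) {
+                Thread.sleep(retryIntervalMinutes * 60_000L); // wait before the next attempt
+            }
+        }
+        return false; // exceeded the configured number of retries -> workflow fails and stops
+    }
+}
+```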
+
+
+
+##### Five. Task priority design
+In the early scheduling design there was no priority design and fair scheduling was used, so a task submitted earlier might complete at the same time as a task submitted later, and neither process nor task priority could be set. We have redesigned this, and our current design is as follows:
+
+-  Tasks are processed from high to low according to **the priority of different process instances**, then **the priority within the same process instance**, then **the priority of tasks within the same process**, and finally **the submission order of tasks within the same process**.
+    - The specific implementation is to parse the priority from the task instance's JSON and save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue; when items are taken from the task queue, string comparison yields the tasks that need to be executed first (see the sketch after this list).
+
+        - The priority of the process definition exists because some processes need to be processed before others. It can be configured when the process is started or scheduled to start. There are 5 levels in total: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST, as shown below.
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
+             </p>
+
+        - The priority of the task is likewise divided into 5 levels: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST, as shown below.
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
+             </p>
+
+
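+As a minimal sketch of the queue-key idea (not the project's actual implementation), the snippet below builds **process instance priority_process instance id_task priority_task id** keys, assuming priorities are encoded as 0 (HIGHEST) through 4 (LOWEST) so that plain string comparison puts higher-priority tasks first.
+
+```java
+import java.util.TreeSet;
+
+public class TaskPriorityKeySketch {
+
+    // Build the composite key; smaller strings sort first in the queue.
+    static String key(int processInstancePriority, int processInstanceId, int taskPriority, int taskId) {
+        return processInstancePriority + "_" + processInstanceId + "_" + taskPriority + "_" + taskId;
+    }
+
+    public static void main(String[] args) {
+        TreeSet<String> queue = new TreeSet<>(); // ordered by string comparison, like the ZooKeeper queue
+        queue.add(key(1, 101, 2, 7)); // HIGH process instance, MEDIUM task
+        queue.add(key(0, 102, 4, 3)); // HIGHEST process instance, LOWEST task
+        queue.add(key(1, 101, 0, 9)); // HIGH process instance, HIGHEST task
+        // Prints the HIGHEST-priority process instance first, then orders by task priority.
+        queue.forEach(System.out::println);
+    }
+}
+```
+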
+##### Six. Logback and netty implement log access
+
+-  Since the Web (UI) and Worker are not necessarily on the same machine, viewing logs cannot work like querying a local file. There are two options:
+  -  Put logs on ES search engine
+  -  Obtain remote log information through netty communication
+
+-  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to achieve remote access to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+
+- We use a custom Logback FileAppender and Filter so that each task instance generates its own log file.
+- The FileAppender is mainly implemented as follows:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+
+Logs are generated in the form /process definition id/process instance id/task instance id.log.
+
+- The Filter matches thread names starting with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+### 3. Module introduction
+- dolphinscheduler-alert: alert module, providing the AlertServer service.
+
+- dolphinscheduler-api: web application module, providing the ApiServer service.
+
+- dolphinscheduler-common: general constants, enumerations, utility classes, data structures, and base classes.
+
+- dolphinscheduler-dao: provides operations such as database access.
+
+- dolphinscheduler-remote: netty-based client and server.
+
+- dolphinscheduler-server: MasterServer and WorkerServer services.
+
+- dolphinscheduler-service: service module, including Quartz, ZooKeeper, and log client access services, making it easy for the server and api modules to call.
+
+- dolphinscheduler-ui: front-end module.
+### Sum up
+From the perspective of scheduling, this article has given a preliminary introduction to the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued.
+
+
diff --git a/docs/en-us/1.3.2/user_doc/cluster-deployment.md b/content/en-us/docs/1.3.2/user_doc/cluster-deployment.md
similarity index 100%
rename from docs/en-us/1.3.2/user_doc/cluster-deployment.md
rename to content/en-us/docs/1.3.2/user_doc/cluster-deployment.md
diff --git a/docs/en-us/1.3.3/user_doc/configuration-file.md b/content/en-us/docs/1.3.2/user_doc/configuration-file.md
similarity index 98%
rename from docs/en-us/1.3.3/user_doc/configuration-file.md
rename to content/en-us/docs/1.3.2/user_doc/configuration-file.md
index e6d349d..8b2c216 100644
--- a/docs/en-us/1.3.3/user_doc/configuration-file.md
+++ b/content/en-us/docs/1.3.2/user_doc/configuration-file.md
@@ -1,407 +1,408 @@
-<!-- markdown-link-check-disable -->
-# Foreword
-This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
-
-# Directory Structure
-All configuration files of dolphinscheduler are currently in the [conf] directory.
-
-For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, please see the simplified description of the dolphinscheduler installation directory below.
-
-This article mainly talks about the configuration file of dolphinscheduler. I won't go into details in other parts.
-
-[Note: The following dolphinscheduler is referred to as DS.]
-```
-
-├─bin                               DS command storage directory
-│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
-│  ├─start-all.sh                       Start all DS services according to the configuration file
-│  ├─stop-all.sh                        Close all DS services according to the configuration file
-├─conf                              Configuration file directory
-│  ├─application-api.properties         api service configuration file
-│  ├─datasource.properties              Database configuration file
-│  ├─zookeeper.properties               zookeeper configuration file
-│  ├─master.properties                  Master service configuration file
-│  ├─worker.properties                  Worker service configuration file
-│  ├─quartz.properties                  Quartz service configuration file
-│  ├─common.properties                  Public service [storage] configuration file
-│  ├─alert.properties                   alert service configuration file
-│  ├─config                             Environment variable configuration folder
-│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
-│  ├─env                                Run script environment variable configuration directory
-│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-│  ├─org                                mybatis mapper file directory
-│  ├─i18n                               i18n configuration file directory
-│  ├─logback-api.xml                    api service log configuration file
-│  ├─logback-master.xml                 Master service log configuration file
-│  ├─logback-worker.xml                 Worker service log configuration file
-│  ├─logback-alert.xml                  alert service log configuration file
-├─sql                               DS metadata creation and upgrade sql file
-│  ├─create                             Create SQL script directory
-│  ├─upgrade                            Upgrade SQL script directory
-│  ├─dolphinscheduler-postgre.sql       Postgre database initialization script
-│  ├─dolphinscheduler_mysql.sql         mysql database initialization version
-│  ├─soft_version                       Current DS version identification file
-├─script                            DS service deployment, database creation/upgrade script directory
-│  ├─create-dolphinscheduler.sh         DS database initialization script      
-│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
-│  ├─monitor-server.sh                  DS service monitoring startup script               
-│  ├─scp-hosts.sh                       Install file transfer script                                                    
-│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
-├─ui                                Front-end WEB resource directory
-├─lib                               DS dependent jar storage directory
-├─install.sh                        Automatically install DS service script
-
-
-```
-
-
-# Detailed configuration file
-
-Serial number| Service classification |  Configuration file|
-|--|--|--|
-1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
-2|Database connection configuration | datasource.properties
-3|Zookeeper connection configuration|zookeeper.properties
-4|Common [storage] configuration|common.properties
-5|API service configuration|application-api.properties
-6|Master service configuration|master.properties
-7|Worker service configuration|worker.properties
-8|Alert service configuration|alert.properties
-9|Quartz configuration|quartz.properties
-10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
-11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...|dolphinscheduler_env.sh
-12|Service log configuration files|api service log configuration file : logback-api.xml  <br /> Master service log configuration file  : logback-master.xml    <br /> Worker service log configuration file : logback-worker.xml  <br /> alertService log configuration file : logback-alert.xml 
-
-
-## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
-The dolphinscheduler-daemon.sh script is responsible for DS startup & shutdown 
-start-all.sh/stop-all.sh eventually starts and shuts down the cluster through dolphinscheduler-daemon.sh.
-At present, DS has only made a basic setting. Please set the JVM parameters according to the actual situation of their resources.
-
-The default simplified parameters are as follows:
-```bash
-export DOLPHINSCHEDULER_OPTS="
--server 
--Xmx16g 
--Xms1g 
--Xss512k 
--XX:+UseConcMarkSweepGC 
--XX:+CMSParallelRemarkEnabled 
--XX:+UseFastAccessorMethods 
--XX:+UseCMSInitiatingOccupancyOnly 
--XX:CMSInitiatingOccupancyFraction=70
-"
-```
-
-> It is not recommended to set "-XX:DisableExplicitGC", DS uses Netty for communication. Setting this parameter may cause memory leaks.
-
-## 2.datasource.properties [Database Connectivity]
-Use Druid to manage the database connection in DS.The default simplified configuration is as follows.
-|Parameter | Defaults| Description|
-|--|--|--|
-spring.datasource.driver-class-name| |Database driver
-spring.datasource.url||Database connection address
-spring.datasource.username||Database username
-spring.datasource.password||Database password
-spring.datasource.initialSize|5| Number of initial connection pools
-spring.datasource.minIdle|5| Minimum number of connection pools
-spring.datasource.maxActive|5| Maximum number of connection pools
-spring.datasource.maxWait|60000| Maximum waiting time
-spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
-spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
-spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
-spring.datasource.validationQueryTimeout|3| Timeout to check if the connection is valid[seconds]
-spring.datasource.testWhileIdle|true| Check when applying for connection, if idle time is greater than timeBetweenEvictionRunsMillis,Run validationQuery to check whether the connection is valid.
-spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when applying for connection
-spring.datasource.testOnReturn|false| When returning the connection, execute validationQuery to check whether the connection is valid
-spring.datasource.defaultAutoCommit|true| Whether to enable automatic submission
-spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
-spring.datasource.poolPreparedStatements|true| Open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, you must configure greater than 0, when greater than 0,PoolPreparedStatements automatically trigger modification to true.
-
-
-## 3.zookeeper.properties [Zookeeper connection configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-zookeeper.quorum|localhost:2181| zk cluster connection information
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
-zookeeper.session.timeout|60000|  session time out
-zookeeper.connection.timeout|30000|  Connection timed out
-zookeeper.retry.base.sleep|100| Basic retry time difference
-zookeeper.retry.max.sleep|30000| Maximum retry time
-zookeeper.retry.maxtime|10|Maximum number of retries
-
-
-## 4.common.properties [hadoop, s3, yarn configuration]
-The common.properties configuration file is currently mainly used to configure hadoop/s3a related configurations. 
-|Parameter |Defaults| Description| 
-|--|--|--|
-resource.storage.type|NONE|Resource file storage type: HDFS,S3,NONE
-resource.upload.path|/dolphinscheduler|Resource file storage path
-data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
-hadoop.security.authentication.startup.state|false|hadoop enable kerberos permission
-java.security.krb5.conf.path|/opt/krb5.conf|kerberos configuration directory
-login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
-login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
-resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
-hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
-fs.defaultFS|hdfs://mycluster:8020|Request address if resource.storage.type=S3 ,the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS, If hadoop configured HA, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
-fs.s3a.endpoint||s3 endpoint address
-fs.s3a.access.key||s3 access key
-fs.s3a.secret.key||s3 secret key
-yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address, If the resourcemanager has HA turned on, enter the IP address of the HA (separated by commas). If the resourcemanager is a single node, the value can be empty.
-yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If resourcemanager has HA enabled or resourcemanager is not used, keep the default value. If resourcemanager is a single node, you need to configure ds1 as the hostname corresponding to resourcemanager
-dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-development.state|false|Is it in development mode
-kerberos.expire.time|7|kerberos expire time,integer,the unit is day
-
-
-## 5.application-api.properties [API service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-server.port|12345|API service communication port
-server.servlet.session.timeout|7200|session timeout
-server.servlet.context-path|/dolphinscheduler |Request path
-spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
-spring.servlet.multipart.max-request-size|1024MB|Maximum request size
-server.jetty.max-http-post-size|5000000|Jetty service maximum send request size
-spring.messages.encoding|UTF-8|Request encoding
-spring.jackson.time-zone|GMT+8|Set time zone
-spring.messages.basename|i18n/messages|i18n configuration
-security.authentication.type|PASSWORD|Permission verification type
-
-
-## 6.master.properties [Master service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-master.listen.port|5678|master listen port
-master.exec.threads|100|master execute thread number to limit process instances in parallel
-master.exec.task.num|20|master execute task number in parallel per process instance
-master.dispatch.task.num|3|master dispatch task number per batch
-master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
-master.heartbeat.interval|10|master heartbeat interval, the unit is second
-master.task.commit.retryTimes|5|master commit task retry times
-master.task.commit.interval|1000|master commit task interval, the unit is millisecond
-master.max.cpuload.avg|-1|master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
-master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
-
-
-## 7.worker.properties [Worker service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-worker.listen.port|1234|worker listen port
-worker.exec.threads|100|worker execute thread number to limit task instances in parallel
-worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
-worker.max.cpuload.avg|-1|worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2
-worker.reserved.memory|0.3|worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G
-worker.groups|default|worker groups separated by comma, like 'worker.groups=default,test' <br> worker will join corresponding group according to this config when startup
-
-
-## 8.alert.properties [Alert alert service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-alert.type|EMAIL|Alarm type|
-mail.protocol|SMTP| Mail server protocol
-mail.server.host|xxx.xxx.com|Mail server address
-mail.server.port|25|Mail server port
-mail.sender|xxx@xxx.com|Sender mailbox
-mail.user|xxx@xxx.com|Sender's email name
-mail.passwd|111111|Sender email password
-mail.smtp.starttls.enable|true|Whether the mailbox opens tls
-mail.smtp.ssl.enable|false|Whether the mailbox opens ssl
-mail.smtp.ssl.trust|xxx.xxx.com|Email ssl whitelist
-xls.file.path|/tmp/xls|Temporary working directory for mailbox attachments
-||The following is the enterprise WeChat configuration[Optional]|
-enterprise.wechat.enable|false|Whether the enterprise WeChat is enabled
-enterprise.wechat.corp.id|xxxxxxx|
-enterprise.wechat.secret|xxxxxxx|
-enterprise.wechat.agent.id|xxxxxxx|
-enterprise.wechat.users|xxxxxxx|
-enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
-enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
-enterprise.wechat.user.send.msg||Send message format
-enterprise.wechat.team.send.msg||Group message format
-plugin.dir|/Users/xx/your/path/to/plugin/dir|Plugin directory
-
-
-## 9.quartz.properties [Quartz configuration]
-This is mainly quartz configuration, please configure it in combination with actual business scenarios & resources, this article will not be expanded for the time being.
-|Parameter |Defaults| Description| 
-|--|--|--|
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
-org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
-org.quartz.scheduler.instanceName | DolphinScheduler
-org.quartz.scheduler.instanceId | AUTO
-org.quartz.scheduler.makeSchedulerThreadDaemon | true
-org.quartz.jobStore.useProperties | false
-org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
-org.quartz.threadPool.makeThreadsDaemons | true
-org.quartz.threadPool.threadCount | 25
-org.quartz.threadPool.threadPriority | 5
-org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
-org.quartz.jobStore.tablePrefix | QRTZ_
-org.quartz.jobStore.isClustered | true
-org.quartz.jobStore.misfireThreshold | 60000
-org.quartz.jobStore.clusterCheckinInterval | 5000
-org.quartz.jobStore.acquireTriggersWithinLock|true
-org.quartz.jobStore.dataSource | myDs
-org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
-
-
-## 10.install_config.conf [DS environment variable configuration script [for DS installation/startup]]
-The install_config.conf configuration file is more cumbersome.This file is mainly used in two places.
-* 1.Automatic installation of DS cluster.
-
-> Calling the install.sh script will automatically load the configuration in this file, and automatically configure the content in the above configuration file according to the content in this file.
-> Such as::dolphinscheduler-daemon.sh、datasource.properties、zookeeper.properties、common.properties、application-api.properties、master.properties、worker.properties、alert.properties、quartz.properties Etc..
-
-
-* 2.DS cluster startup and shutdown.
->When the DS cluster is started up and shut down, it will load the masters, workers, alertServer, apiServers and other parameters in the configuration file to start/close the DS cluster.
-
-The contents of the file are as follows:
-```bash
-
-# Note: If the configuration file contains special characters,such as: `.*[]^${}\+?|()@#&`, Please escape,
-#      Examples: `[` Escape to `\[`
-
-# Database type, currently only supports postgresql or mysql
-dbtype="mysql"
-
-# Database address & port
-dbhost="192.168.xx.xx:3306"
-
-# Database Name
-dbname="dolphinscheduler"
-
-
-# Database Username
-username="xx"
-
-# Database Password
-password="xx"
-
-# Zookeeper address
-zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
-
-# Where to install DS, such as: /data1_1T/dolphinscheduler,
-installPath="/data1_1T/dolphinscheduler"
-
-# Which user to use for deployment
-# Note: The deployment user needs sudo permissions and can operate hdfs.
-#     If you use hdfs, the root directory must be created by the user. Otherwise, there will be permissions related issues.
-deployUser="dolphinscheduler"
-
-
-# The following is the alarm service configuration
-# Mail server address
-mailServerHost="smtp.exmail.qq.com"
-
-# Mail Server Port
-mailServerPort="25"
-
-# Sender
-mailSender="xxxxxxxxxx"
-
-# Sending user
-mailUser="xxxxxxxxxx"
-
-# email Password
-mailPassword="xxxxxxxxxx"
-
-# TLS protocol mailbox is set to true, otherwise set to false
-starttlsEnable="true"
-
-# The mailbox with SSL protocol enabled is set to true, otherwise it is false. Note: starttlsEnable and sslEnable cannot be true at the same time
-sslEnable="false"
-
-# Mail service address value, same as mailServerHost
-sslTrust="smtp.exmail.qq.com"
-
-#Where to upload resource files such as sql used for business, you can set: HDFS, S3, NONE. If you want to upload to HDFS, please configure as HDFS; if you do not need the resource upload function, please select NONE.
-resourceStorageType="NONE"
-
-# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
-# Note,s3 be sure to create the root directory /dolphinscheduler
-defaultFS="hdfs://mycluster:8020"
-
-# If the resourceStorageType is S3, the parameters to be configured are as follows:
-s3Endpoint="http://192.168.xx.xx:9010"
-s3AccessKey="xxxxxxxxxx"
-s3SecretKey="xxxxxxxxxx"
-
-# If the ResourceManager is HA, configure it as the primary and secondary ip or hostname of the ResourceManager node, such as "192.168.xx.xx, 192.168.xx.xx", otherwise if it is a single ResourceManager or yarn is not used at all, please configure yarnHaIps="" That’s it, if yarn is not used, configure it as ""
-yarnHaIps="192.168.xx.xx,192.168.xx.xx"
-
-# If it is a single ResourceManager, configure it as the ResourceManager node ip or host name, otherwise keep the default value.
-singleYarnIp="yarnIp1"
-
-# The storage path of resource files in HDFS/S3
-resourceUploadPath="/dolphinscheduler"
-
-
-# HDFS/S3  Operating user
-hdfsRootUser="hdfs"
-
-# The following is the kerberos configuration
-
-# Whether kerberos is turned on
-kerberosStartUp="false"
-# kdc krb5 config file path
-krb5ConfPath="$installPath/conf/krb5.conf"
-# keytab username
-keytabUserName="hdfs-mycluster@ESZ.COM"
-# username keytab path
-keytabPath="$installPath/conf/hdfs.headless.keytab"
-
-
-# api service port
-apiServerPort="12345"
-
-
-# Hostname of all hosts where DS is deployed
-ips="ds1,ds2,ds3,ds4,ds5"
-
-# ssh port, default 22
-sshPort="22"
-
-# Deploy master service host
-masters="ds1,ds2"
-
-# The host where the worker service is deployed
-# Note: Each worker needs to set a worker group name, the default value is "default"
-workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
-
-#  Deploy the alert service host
-alertServer="ds3"
-
-# Deploy api service host
-apiServers="ds1"
-```
-
-## 11.dolphinscheduler_env.sh [Environment variable configuration]
-When submitting a task through a shell-like method, the environment variables in the configuration file are loaded into the host.
-The types of tasks involved are: Shell tasks, Python tasks, Spark tasks, Flink tasks, Datax tasks, etc.
-```bash
-export HADOOP_HOME=/opt/soft/hadoop
-export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-export SPARK_HOME1=/opt/soft/spark1
-export SPARK_HOME2=/opt/soft/spark2
-export PYTHON_HOME=/opt/soft/python
-export JAVA_HOME=/opt/soft/java
-export HIVE_HOME=/opt/soft/hive
-export FLINK_HOME=/opt/soft/flink
-export DATAX_HOME=/opt/soft/datax/bin/datax.py
-
-export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
-
-```
-
-## 12.Service log configuration files
-Correspondence service| Log file name |
---|--|
-api service log configuration file |logback-api.xml|
-Master service log configuration file|logback-master.xml |
-Worker service log configuration file|logback-worker.xml |
-alert service log configuration file|logback-alert.xml |
+<!-- markdown-link-check-disable -->
+
+# Foreword
+This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
+
+# Directory Structure
+All configuration files of dolphinscheduler are currently in the [conf] directory.
+
+For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, please see the simplified description of the dolphinscheduler installation directory below.
+
+This article mainly talks about the configuration file of dolphinscheduler. I won't go into details in other parts.
+
+[Note: The following dolphinscheduler is referred to as DS.]
+```
+
+├─bin                               DS command storage directory
+│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
+│  ├─start-all.sh                       Start all DS services according to the configuration file
+│  ├─stop-all.sh                        Close all DS services according to the configuration file
+├─conf                              Configuration file directory
+│  ├─application-api.properties         api service configuration file
+│  ├─datasource.properties              Database configuration file
+│  ├─zookeeper.properties               zookeeper configuration file
+│  ├─master.properties                  Master service configuration file
+│  ├─worker.properties                  Worker service configuration file
+│  ├─quartz.properties                  Quartz service configuration file
+│  ├─common.properties                  Public service [storage] configuration file
+│  ├─alert.properties                   alert service configuration file
+│  ├─config                             Environment variable configuration folder
+│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
+│  ├─env                                Run script environment variable configuration directory
+│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper file directory
+│  ├─i18n                               i18n configuration file directory
+│  ├─logback-api.xml                    api service log configuration file
+│  ├─logback-master.xml                 Master service log configuration file
+│  ├─logback-worker.xml                 Worker service log configuration file
+│  ├─logback-alert.xml                  alert service log configuration file
+├─sql                               DS metadata creation and upgrade sql file
+│  ├─create                             Create SQL script directory
+│  ├─upgrade                            Upgrade SQL script directory
+│  ├─dolphinscheduler-postgre.sql       Postgre database initialization script
+│  ├─dolphinscheduler_mysql.sql         mysql database initialization version
+│  ├─soft_version                       Current DS version identification file
+├─script                            DS service deployment, database creation/upgrade script directory
+│  ├─create-dolphinscheduler.sh         DS database initialization script      
+│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
+│  ├─monitor-server.sh                  DS service monitoring startup script               
+│  ├─scp-hosts.sh                       Install file transfer script                                                    
+│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
+├─ui                                Front-end WEB resource directory
+├─lib                               DS dependent jar storage directory
+├─install.sh                        Automatically install DS service script
+
+
+```
+
+
+# Detailed configuration file
+
+Serial number| Service classification |  Configuration file|
+|--|--|--|
+1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
+2|Database connection configuration | datasource.properties
+3|Zookeeper connection configuration|zookeeper.properties
+4|Common [storage] configuration|common.properties
+5|API service configuration|application-api.properties
+6|Master service configuration|master.properties
+7|Worker service configuration|worker.properties
+8|Alert service configuration|alert.properties
+9|Quartz configuration|quartz.properties
+10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
+11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...|dolphinscheduler_env.sh
+12|Service log configuration files|api service log configuration file : logback-api.xml  <br /> Master service log configuration file  : logback-master.xml    <br /> Worker service log configuration file : logback-worker.xml  <br /> alertService log configuration file : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
+The dolphinscheduler-daemon.sh script is responsible for DS startup & shutdown.
+start-all.sh/stop-all.sh eventually start and shut down the cluster through dolphinscheduler-daemon.sh.
+At present, DS only ships a basic setting; please tune the JVM parameters according to your actual resources.
+
+The default simplified parameters are as follows:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> It is not recommended to set "-XX:DisableExplicitGC", DS uses Netty for communication. Setting this parameter may cause memory leaks.
+
+## 2.datasource.properties [Database Connectivity]
+Druid is used to manage database connections in DS. The default simplified configuration is as follows.
+|Parameter | Defaults| Description|
+|--|--|--|
+spring.datasource.driver-class-name| |Database driver
+spring.datasource.url||Database connection address
+spring.datasource.username||Database username
+spring.datasource.password||Database password
+spring.datasource.initialSize|5| Number of initial connection pools
+spring.datasource.minIdle|5| Minimum number of connection pools
+spring.datasource.maxActive|5| Maximum number of connection pools
+spring.datasource.maxWait|60000| Maximum waiting time
+spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
+spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
+spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
+spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
+spring.datasource.validationQueryTimeout|3| Timeout to check if the connection is valid[seconds]
+spring.datasource.testWhileIdle|true| Check when a connection is requested: if its idle time is greater than timeBetweenEvictionRunsMillis, run validationQuery to check whether the connection is valid.
+spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when applying for connection
+spring.datasource.testOnReturn|false| When returning the connection, execute validationQuery to check whether the connection is valid
+spring.datasource.defaultAutoCommit|true| Whether to enable automatic submission
+spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
+spring.datasource.poolPreparedStatements|true| Open PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, this must be set to a value greater than 0; when greater than 0, poolPreparedStatements is automatically switched to true.
+
+
+## 3.zookeeper.properties [Zookeeper connection configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+zookeeper.quorum|localhost:2181| zk cluster connection information
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
+zookeeper.session.timeout|60000|  session time out
+zookeeper.connection.timeout|30000|  Connection timed out
+zookeeper.retry.base.sleep|100| Basic retry time difference
+zookeeper.retry.max.sleep|30000| Maximum retry time
+zookeeper.retry.maxtime|10|Maximum number of retries
+
+
+## 4.common.properties [hadoop, s3, yarn configuration]
+The common.properties configuration file is currently mainly used to configure hadoop/s3a related configurations. 
+|Parameter |Defaults| Description| 
+|--|--|--|
+resource.storage.type|NONE|Resource file storage type: HDFS,S3,NONE
+resource.upload.path|/dolphinscheduler|Resource file storage path
+data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
+hadoop.security.authentication.startup.state|false|hadoop enable kerberos permission
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos configuration directory
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
+resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
+hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
+fs.defaultFS|hdfs://mycluster:8020|Request address if resource.storage.type=S3 ,the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS, If hadoop configured HA, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
+fs.s3a.endpoint||s3 endpoint address
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address, If the resourcemanager has HA turned on, enter the IP address of the HA (separated by commas). If the resourcemanager is a single node, the value can be empty.
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If resourcemanager has HA enabled or resourcemanager is not used, keep the default value. If resourcemanager is a single node, you need to configure ds1 as the hostname corresponding to resourcemanager
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
+development.state|false|Is it in development mode
+kerberos.expire.time|7|kerberos expire time,integer,the unit is day
+
+
+## 5.application-api.properties [API service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+server.port|12345|API service communication port
+server.servlet.session.timeout|7200|session timeout
+server.servlet.context-path|/dolphinscheduler |Request path
+spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
+spring.servlet.multipart.max-request-size|1024MB|Maximum request size
+server.jetty.max-http-post-size|5000000|Jetty service maximum send request size
+spring.messages.encoding|UTF-8|Request encoding
+spring.jackson.time-zone|GMT+8|Set time zone
+spring.messages.basename|i18n/messages|i18n configuration
+security.authentication.type|PASSWORD|Permission verification type
+
+
+## 6.master.properties [Master service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+master.listen.port|5678|master listen port
+master.exec.threads|100|master execute thread number to limit process instances in parallel
+master.exec.task.num|20|master execute task number in parallel per process instance
+master.dispatch.task.num|3|master dispatch task number per batch
+master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
+master.heartbeat.interval|10|master heartbeat interval, the unit is second
+master.task.commit.retryTimes|5|master commit task retry times
+master.task.commit.interval|1000|master commit task interval, the unit is millisecond
+master.max.cpuload.avg|-1|master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
+master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
+
+
+## 7.worker.properties [Worker service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+worker.listen.port|1234|worker listen port
+worker.exec.threads|100|worker execute thread number to limit task instances in parallel
+worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
+worker.max.cpuload.avg|-1|worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2
+worker.reserved.memory|0.3|worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G
+worker.groups|default|worker groups separated by comma, like 'worker.groups=default,test' <br> worker will join corresponding group according to this config when startup
+
+
+## 8.alert.properties [Alert alert service configuration]
+|Parameter |Defaults| Description| 
+|--|--|--|
+alert.type|EMAIL|Alarm type|
+mail.protocol|SMTP| Mail server protocol
+mail.server.host|xxx.xxx.com|Mail server address
+mail.server.port|25|Mail server port
+mail.sender|xxx@xxx.com|Sender mailbox
+mail.user|xxx@xxx.com|Sender's email name
+mail.passwd|111111|Sender email password
+mail.smtp.starttls.enable|true|Whether the mailbox opens tls
+mail.smtp.ssl.enable|false|Whether the mailbox opens ssl
+mail.smtp.ssl.trust|xxx.xxx.com|Email ssl whitelist
+xls.file.path|/tmp/xls|Temporary working directory for mailbox attachments
+||The following is the enterprise WeChat configuration[Optional]|
+enterprise.wechat.enable|false|Whether the enterprise WeChat is enabled
+enterprise.wechat.corp.id|xxxxxxx|
+enterprise.wechat.secret|xxxxxxx|
+enterprise.wechat.agent.id|xxxxxxx|
+enterprise.wechat.users|xxxxxxx|
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
+enterprise.wechat.user.send.msg||Send message format
+enterprise.wechat.team.send.msg||Group message format
+plugin.dir|/Users/xx/your/path/to/plugin/dir|Plugin directory
+
+
+## 9.quartz.properties [Quartz configuration]
+This is mainly the Quartz configuration. Please configure it according to your actual business scenarios & resources; this article does not expand on it for now.
+|Parameter |Defaults| Description| 
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS environment variable configuration script [for DS installation/startup]]
+The install_config.conf configuration file is relatively cumbersome. This file is mainly used in two places.
+* 1.Automatic installation of DS cluster.
+
+> Calling the install.sh script will automatically load the configuration in this file, and automatically configure the content in the above configuration file according to the content in this file.
+> Such as: dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
+
+
+* 2.DS cluster startup and shutdown.
+>When the DS cluster is started up and shut down, it will load the masters, workers, alertServer, apiServers and other parameters in the configuration file to start/close the DS cluster.
+
+The contents of the file are as follows:
+```bash
+
+# Note: If the configuration file contains special characters,such as: `.*[]^${}\+?|()@#&`, Please escape,
+#      Examples: `[` Escape to `\[`
+
+# Database type, currently only supports postgresql or mysql
+dbtype="mysql"
+
+# Database address & port
+dbhost="192.168.xx.xx:3306"
+
+# Database Name
+dbname="dolphinscheduler"
+
+
+# Database Username
+username="xx"
+
+# Database Password
+password="xx"
+
+# Zookeeper address
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# Where to install DS, such as: /data1_1T/dolphinscheduler,
+installPath="/data1_1T/dolphinscheduler"
+
+# Which user to use for deployment
+# Note: The deployment user needs sudo permissions and can operate hdfs.
+#     If you use hdfs, the root directory must be created by the user. Otherwise, there will be permissions related issues.
+deployUser="dolphinscheduler"
+
+
+# The following is the alarm service configuration
+# Mail server address
+mailServerHost="smtp.exmail.qq.com"
+
+# Mail Server Port
+mailServerPort="25"
+
+# Sender
+mailSender="xxxxxxxxxx"
+
+# Sending user
+mailUser="xxxxxxxxxx"
+
+# email Password
+mailPassword="xxxxxxxxxx"
+
+# TLS protocol mailbox is set to true, otherwise set to false
+starttlsEnable="true"
+
+# The mailbox with SSL protocol enabled is set to true, otherwise it is false. Note: starttlsEnable and sslEnable cannot be true at the same time
+sslEnable="false"
+
+# Mail service address value, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+#Where to upload resource files such as sql used for business, you can set: HDFS, S3, NONE. If you want to upload to HDFS, please configure as HDFS; if you do not need the resource upload function, please select NONE.
+resourceStorageType="NONE"
+
+# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
+# Note,s3 be sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# If the resourceStorageType is S3, the parameters to be configured are as follows:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# If the ResourceManager is HA, configure the primary and secondary IPs or hostnames of the ResourceManager nodes, such as "192.168.xx.xx,192.168.xx.xx"; otherwise, if it is a single ResourceManager or yarn is not used at all, configure yarnHaIps=""
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+# If it is a single ResourceManager, configure it as the ResourceManager node ip or host name, otherwise keep the default value.
+singleYarnIp="yarnIp1"
+
+# The storage path of resource files in HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3  Operating user
+hdfsRootUser="hdfs"
+
+# The following is the kerberos configuration
+
+# Whether kerberos is turned on
+kerberosStartUp="false"
+# kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+# keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+# username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# api service port
+apiServerPort="12345"
+
+
+# Hostname of all hosts where DS is deployed
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# ssh port, default 22
+sshPort="22"
+
+# Deploy master service host
+masters="ds1,ds2"
+
+# The host where the worker service is deployed
+# Note: Each worker needs to set a worker group name, the default value is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  Deploy the alert service host
+alertServer="ds3"
+
+# Deploy api service host
+apiServers="ds1"
+```
+
+## 11.dolphinscheduler_env.sh [Environment variable configuration]
+When submitting a task through a shell-like method, the environment variables in the configuration file are loaded into the host.
+The types of tasks involved are: Shell tasks, Python tasks, Spark tasks, Flink tasks, Datax tasks, etc.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+## 12.Service log configuration files
+Correspondence service| Log file name |
+--|--|
+api service log configuration file |logback-api.xml|
+Master service log configuration file|logback-master.xml |
+Worker service log configuration file|logback-worker.xml |
+alert service log configuration file|logback-alert.xml |
diff --git a/docs/en-us/1.3.2/user_doc/expansion-reduction.md b/content/en-us/docs/1.3.2/user_doc/expansion-reduction.md
similarity index 100%
rename from docs/en-us/1.3.2/user_doc/expansion-reduction.md
rename to content/en-us/docs/1.3.2/user_doc/expansion-reduction.md
diff --git a/docs/en-us/1.3.8/user_doc/hardware-environment.md b/content/en-us/docs/1.3.2/user_doc/hardware-environment.md
similarity index 100%
rename from docs/en-us/1.3.8/user_doc/hardware-environment.md
rename to content/en-us/docs/1.3.2/user_doc/hardware-environment.md
diff --git a/docs/en-us/1.3.6/user_doc/metadata-1.3.md b/content/en-us/docs/1.3.2/user_doc/metadata-1.3.md
similarity index 100%
rename from docs/en-us/1.3.6/user_doc/metadata-1.3.md
rename to content/en-us/docs/1.3.2/user_doc/metadata-1.3.md
diff --git a/docs/en-us/1.3.5/user_doc/quick-start.md b/content/en-us/docs/1.3.2/user_doc/quick-start.md
similarity index 100%
rename from docs/en-us/1.3.5/user_doc/quick-start.md
rename to content/en-us/docs/1.3.2/user_doc/quick-start.md
diff --git a/docs/en-us/1.3.2/user_doc/standalone-deployment.md b/content/en-us/docs/1.3.2/user_doc/standalone-deployment.md
similarity index 100%
rename from docs/en-us/1.3.2/user_doc/standalone-deployment.md
rename to content/en-us/docs/1.3.2/user_doc/standalone-deployment.md
diff --git a/docs/en-us/1.3.2/user_doc/system-manual.md b/content/en-us/docs/1.3.2/user_doc/system-manual.md
similarity index 100%
rename from docs/en-us/1.3.2/user_doc/system-manual.md
rename to content/en-us/docs/1.3.2/user_doc/system-manual.md
diff --git a/docs/en-us/1.3.3/user_doc/task-structure.md b/content/en-us/docs/1.3.2/user_doc/task-structure.md
similarity index 96%
rename from docs/en-us/1.3.3/user_doc/task-structure.md
rename to content/en-us/docs/1.3.2/user_doc/task-structure.md
index 2442e7e..98fc0b3 100644
--- a/docs/en-us/1.3.3/user_doc/task-structure.md
+++ b/content/en-us/docs/1.3.2/user_doc/task-structure.md
@@ -1,1136 +1,1136 @@
-
-# Overall task storage structure
-All tasks created in dolphinscheduler are saved in the t_ds_process_definition table.
-
-The database table structure is shown in the following table:
-
-
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|id|int(11)|Primary key
-2|name|varchar(255)|Process definition name
-3|version|int(11)|Process definition version
-4|release_state|tinyint(4)|Release status of the process definition:0 not online, 1 online
-5|project_id|int(11)|Project id
-6|user_id|int(11)|User id of the process definition
-7|process_definition_json|longtext|Process definition JSON
-8|description|text|Process definition description
-9|global_params|text|Global parameters
-10|flag|tinyint(4)|Whether the process is available: 0 is not available, 1 is available
-11|locations|text|Node coordinate information
-12|connects|text|Node connection information
-13|receivers|text|Recipient
-14|receivers_cc|text|Cc
-15|create_time|datetime|Creation time
-16|timeout|int(11) |overtime time
-17|tenant_id|int(11) |Tenant id
-18|update_time|datetime|Update time
-19|modify_by|varchar(36)|Modify user
-20|resource_ids|varchar(255)|Resource ids
-
-The process_definition_json field is the core field, which defines the task information in the DAG diagram. The data is stored in JSON.
-
-The public data structure is as follows.
-Serial number | Field  | Types  |  Description
--------- | ---------| -------- | ---------
-1|globalParams|Array|Global parameters
-2|tasks|Array|Task collection in the process  [ Please refer to the following chapters for the structure of each type]
-3|tenantId|int|Tenant id
-4|timeout|int|overtime time
-
-Data example:
-```bash
-{
-    "globalParams":[
-        {
-            "prop":"golbal_bizdate",
-            "direct":"IN",
-            "type":"VARCHAR",
-            "value":"${system.biz.date}"
-        }
-    ],
-    "tasks":Array[1],
-    "tenantId":0,
-    "timeout":0
-}
-```
-
-# Detailed explanation of the storage structure of each task type
-
-## Shell node
-**The node data structure is as follows:**
-Serial number|Field|Sub-field|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SHELL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SHELL",
-    "id":"tasks-80760",
-    "name":"Shell Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"echo "This is a shell script""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-
-```
-
-
-## SQL node
-Perform data query and update operations on the specified data source through SQL.
-
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQL
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |type |String | Database type
-6| |datasource |Int | Data source id
-7| |sql |String | Query SQL statement
-8| |udfs | String| udf function|UDF function ids, separated by commas.
-9| |sqlType | String| SQL node type |0 query, 1 non query
-10| |title |String | Mail title
-11| |receivers |String | Recipient
-12| |receiversCc |String | Cc
-13| |showType | String| Mail display type|TABLE table  ,  ATTACHMENT attachment
-14| |connParams | String| Connection parameters
-15| |preStatements | Array| Pre-SQL
-16| | postStatements| Array|Post SQL||
-17| | localParams| Array|Custom parameters||
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SQL",
-    "id":"tasks-95648",
-    "name":"SqlTask-Query",
-    "params":{
-        "type":"MYSQL",
-        "datasource":1,
-        "sql":"select id , namge , age from emp where id =  ${id}",
-        "udfs":"",
-        "sqlType":"0",
-        "title":"xxxx@xxx.com",
-        "receivers":"xxxx@xxx.com",
-        "receiversCc":"",
-        "showType":"TABLE",
-        "localParams":[
-            {
-                "prop":"id",
-                "direct":"IN",
-                "type":"INTEGER",
-                "value":"1"
-            }
-        ],
-        "connParams":"",
-        "preStatements":[
-            "insert into emp ( id,name ) value (1,'Li' )"
-        ],
-        "postStatements":[
-
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## PROCEDURE [stored procedure] node
-**The node data structure is as follows:**
-**Sample node data:**
-
-## SPARK node
-**The node data structure is as follows:**
-
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SPARK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |driverCores | String| Driver core
-11| |driverMemory | String| Driver memory
-12| |numExecutors |String | Number of executors
-13| |executorMemory |String | Executor memory
-14| |executorCores |String | Number of executor cores
-15| |programType | String| Program type|JAVA,SCALA,PYTHON
-16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
-17| | localParams| Array|Custom parameters
-18| | resourceList| Array|Resource
-19|description | |String|Description | |
-20|runFlag | |String |Run ID| |
-21|conditionResult | |Object|Conditional branch | |
-22| | successNode| Array|Jump to node successfully| |
-23| | failedNode|Array|Failed jump node | 
-24| dependence| |Object |Task dependency |Mutually exclusive with params
-25|maxRetryTimes | |String|Maximum number of retries | |
-26|retryInterval | |String |Retry interval| |
-27|timeout | |Object|Timeout control | |
-28| taskInstancePriority| |String|Task priority | |
-29|workerGroup | |String |Worker Grouping| |
-30|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"SPARK",
-    "id":"tasks-87430",
-    "name":"SparkTask",
-    "params":{
-        "mainClass":"org.apache.spark.examples.SparkPi",
-        "mainJar":{
-            "id":4
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "driverCores":1,
-        "driverMemory":"512M",
-        "numExecutors":2,
-        "executorMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"10",
-        "others":"",
-        "programType":"SCALA",
-        "sparkVersion":"SPARK2"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## MapReduce (MR) node
-**The node data structure is as follows:**
-
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |MR
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |programType | String| Program type|JAVA,PYTHON
-10| | localParams| Array|Custom parameters
-11| | resourceList| Array|Resource
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"MR",
-    "id":"tasks-28997",
-    "name":"MRTask",
-    "params":{
-        "mainClass":"wordcount",
-        "mainJar":{
-            "id":5
-        },
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
-        "others":"",
-        "programType":"JAVA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-## Python node
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |PYTHON
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Python script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14|maxRetryTimes | |String|Maximum number of retries | |
-15|retryInterval | |String |Retry interval| |
-16|timeout | |Object|Timeout control | |
-17| taskInstancePriority| |String|Task priority | |
-18|workerGroup | |String |Worker Grouping| |
-19|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"PYTHON",
-    "id":"tasks-5463",
-    "name":"Python Task",
-    "params":{
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "rawScript":"print("This is a python script")"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-
-## Flink node
-**The node data structure is as follows:**
-
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |FLINK
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |mainClass |String | Run the main class
-6| |mainArgs | String| Operating parameters
-7| |others | String| Other parameters
-8| |mainJar |Object | Program jar package
-9| |deployMode |String | Deployment mode  |local,client,cluster
-10| |slot | String| Number of slots
-11| |taskManager |String | Number of TaskManagers
-12| |taskManagerMemory |String | TaskManager memory
-13| |jobManagerMemory |String | JobManager memory
-14| |programType | String| Program type|JAVA,SCALA,PYTHON
-15| | localParams| Array|Custom parameters
-16| | resourceList| Array|Resource
-17|description | |String|Description | |
-18|runFlag | |String |Run ID| |
-19|conditionResult | |Object|Conditional branch | |
-20| | successNode| Array|Jump to node successfully| |
-21| | failedNode|Array|Failed jump node | 
-22| dependence| |Object |Task dependency |Mutually exclusive with params
-23|maxRetryTimes | |String|Maximum number of retries | |
-24|retryInterval | |String |Retry interval| |
-25|timeout | |Object|Timeout control | |
-26| taskInstancePriority| |String|Task priority | |
-27|workerGroup | |String |Worker Grouping| |
-28|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"FLINK",
-    "id":"tasks-17135",
-    "name":"FlinkTask",
-    "params":{
-        "mainClass":"com.flink.demo",
-        "mainJar":{
-            "id":6
-        },
-        "deployMode":"cluster",
-        "resourceList":[
-            {
-                "id":3,
-                "name":"run.sh",
-                "res":"run.sh"
-            }
-        ],
-        "localParams":[
-
-        ],
-        "slot":1,
-        "taskManager":"2",
-        "jobManagerMemory":"1G",
-        "taskManagerMemory":"2G",
-        "executorCores":2,
-        "mainArgs":"100",
-        "others":"",
-        "programType":"SCALA"
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## HTTP node
-**The node data structure is as follows:**
-
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |HTTP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |url |String | Request address
-6| |httpMethod | String| Request method|GET,POST,HEAD,PUT,DELETE
-7| | httpParams| Array|Request parameter
-8| |httpCheckCondition | String| Check conditions|Default response code 200
-9| |condition |String | Check content
-10| | localParams| Array|Custom parameters
-11|description | |String|Description | |
-12|runFlag | |String |Run ID| |
-13|conditionResult | |Object|Conditional branch | |
-14| | successNode| Array|Jump to node successfully| |
-15| | failedNode|Array|Failed jump node | 
-16| dependence| |Object |Task dependency |Mutually exclusive with params
-17|maxRetryTimes | |String|Maximum number of retries | |
-18|retryInterval | |String |Retry interval| |
-19|timeout | |Object|Timeout control | |
-20| taskInstancePriority| |String|Task priority | |
-21|workerGroup | |String |Worker Grouping| |
-22|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"HTTP",
-    "id":"tasks-60499",
-    "name":"HttpTask",
-    "params":{
-        "localParams":[
-
-        ],
-        "httpParams":[
-            {
-                "prop":"id",
-                "httpParametersType":"PARAMETER",
-                "value":"1"
-            },
-            {
-                "prop":"name",
-                "httpParametersType":"PARAMETER",
-                "value":"Bo"
-            }
-        ],
-        "url":"https://www.xxxxx.com:9012",
-        "httpMethod":"POST",
-        "httpCheckCondition":"STATUS_CODE_DEFAULT",
-        "condition":""
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-
-
-## DataX node
-
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DATAX
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |customConfig |Int | Custom type| 0 not custom, 1 custom
-6| |dsType |String | Source database type
-7| |dataSource |Int | Source database ID
-8| |dtType | String| Target database type
-9| |dataTarget | Int| Target database ID 
-10| |sql |String | SQL statement
-11| |targetTable |String | Target table
-12| |jobSpeedByte |Int | Rate limit (bytes)
-13| |jobSpeedRecord | Int| Rate limit (record count)
-14| |preStatements | Array| Pre-SQL
-15| | postStatements| Array|Post SQL
-16| | json| String|Custom configuration|Effective when customConfig=1
-17| | localParams| Array|Custom parameters|Effective when customConfig=1
-18|description | |String|Description | |
-19|runFlag | |String |Run ID| |
-20|conditionResult | |Object|Conditional branch | |
-21| | successNode| Array|Jump to node successfully| |
-22| | failedNode|Array|Failed jump node | 
-23| dependence| |Object |Task dependency |Mutually exclusive with params
-24|maxRetryTimes | |String|Maximum number of retries | |
-25|retryInterval | |String |Retry interval| |
-26|timeout | |Object|Timeout control | |
-27| taskInstancePriority| |String|Task priority | |
-28|workerGroup | |String |Worker Grouping| |
-29|preTasks | |Array|Predecessor | |
-
-
-
-**Sample node data:**
-
-
-```bash
-{
-    "type":"DATAX",
-    "id":"tasks-91196",
-    "name":"DataxTask-DB",
-    "params":{
-        "customConfig":0,
-        "dsType":"MYSQL",
-        "dataSource":1,
-        "dtType":"MYSQL",
-        "dataTarget":1,
-        "sql":"select id, name ,age from user ",
-        "targetTable":"emp",
-        "jobSpeedByte":524288,
-        "jobSpeedRecord":500,
-        "preStatements":[
-            "truncate table emp "
-        ],
-        "postStatements":[
-            "truncate table user"
-        ]
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            ""
-        ],
-        "failedNode":[
-            ""
-        ]
-    },
-    "dependence":{
-
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-
-    ]
-}
-```
-
-## Sqoop node
-
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SQOOP
-3| name| |String|Name |
-4| params| |Object| Custom parameters |JSON format
-5| | concurrency| Int|Concurrency
-6| | modelType|String |Flow direction|import,export
-7| |sourceType|String |Data source type |
-8| |sourceParams |String| Data source parameters| JSON format
-9| | targetType|String |Target data type
-10| |targetParams | String|Target data parameters|JSON format
-11| |localParams |Array |Custom parameters
-12|description | |String|Description | |
-13|runFlag | |String |Run ID| |
-14|conditionResult | |Object|Conditional branch | |
-15| | successNode| Array|Jump to node successfully| |
-16| | failedNode|Array|Failed jump node | 
-17| dependence| |Object |Task dependency |Mutually exclusive with params
-18|maxRetryTimes | |String|Maximum number of retries | |
-19|retryInterval | |String |Retry interval| |
-20|timeout | |Object|Timeout control | |
-21| taskInstancePriority| |String|Task priority | |
-22|workerGroup | |String |Worker Grouping| |
-23|preTasks | |Array|Predecessor | |
-
-
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SQOOP",
-            "id":"tasks-82041",
-            "name":"Sqoop Task",
-            "params":{
-                "concurrency":1,
-                "modelType":"import",
-                "sourceType":"MYSQL",
-                "targetType":"HDFS",
-                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
-                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
-                "localParams":[
-
-                ]
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-## Conditional branch node
-
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |CONDITIONS
-3| name| |String|Name |
-4| params| |Object| Custom parameters | null
-5|description | |String|Description | |
-6|runFlag | |String |Run ID| |
-7|conditionResult | |Object|Conditional branch | |
-8| | successNode| Array|Jump to node successfully| |
-9| | failedNode|Array|Failed jump node | 
-10| dependence| |Object |Task dependency |Mutually exclusive with params
-11|maxRetryTimes | |String|Maximum number of retries | |
-12|retryInterval | |String |Retry interval| |
-13|timeout | |Object|Timeout control | |
-14| taskInstancePriority| |String|Task priority | |
-15|workerGroup | |String |Worker Grouping| |
-16|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-    "type":"CONDITIONS",
-    "id":"tasks-96189",
-    "name":"条件",
-    "params":{
-
-    },
-    "description":"",
-    "runFlag":"NORMAL",
-    "conditionResult":{
-        "successNode":[
-            "test04"
-        ],
-        "failedNode":[
-            "test05"
-        ]
-    },
-    "dependence":{
-        "relation":"AND",
-        "dependTaskList":[
-
-        ]
-    },
-    "maxRetryTimes":"0",
-    "retryInterval":"1",
-    "timeout":{
-        "strategy":"",
-        "interval":null,
-        "enable":false
-    },
-    "taskInstancePriority":"MEDIUM",
-    "workerGroup":"default",
-    "preTasks":[
-        "test01",
-        "test02"
-    ]
-}
-```
-
-
-## Subprocess node
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |SUB_PROCESS
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |processDefinitionId |Int| Process definition id
-6|description | |String|Description | |
-7|runFlag | |String |Run ID| |
-8|conditionResult | |Object|Conditional branch | |
-9| | successNode| Array|Jump to node successfully| |
-10| | failedNode|Array|Failed jump node | 
-11| dependence| |Object |Task dependency |Mutually exclusive with params
-12|maxRetryTimes | |String|Maximum number of retries | |
-13|retryInterval | |String |Retry interval| |
-14|timeout | |Object|Timeout control | |
-15| taskInstancePriority| |String|Task priority | |
-16|workerGroup | |String |Worker Grouping| |
-17|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"SUB_PROCESS",
-            "id":"tasks-14806",
-            "name":"SubProcessTask",
-            "params":{
-                "processDefinitionId":2
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-
-            },
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
-
-
-
-## DEPENDENT node
-**The node data structure is as follows:**
-Serial number|Parameter name|Sub-parameter|Type|Description|Notes
--------- | ---------| ---------| -------- | --------- | ---------
-1|id | |String| Task code|
-2|type ||String |Type |DEPENDENT
-3| name| |String|Name |
-4| params| |Object| Custom parameters |Json format
-5| |rawScript |String| Shell script |
-6| | localParams| Array|Custom parameters||
-7| | resourceList| Array|Resource||
-8|description | |String|Description | |
-9|runFlag | |String |Run ID| |
-10|conditionResult | |Object|Conditional branch | |
-11| | successNode| Array|Jump to node successfully| |
-12| | failedNode|Array|Failed jump node | 
-13| dependence| |Object |Task dependency |Mutually exclusive with params
-14| | relation|String |Relationship |AND,OR
-15| | dependTaskList|Array |Dependent task list |
-16|maxRetryTimes | |String|Maximum number of retries | |
-17|retryInterval | |String |Retry interval| |
-18|timeout | |Object|Timeout control | |
-19| taskInstancePriority| |String|Task priority | |
-20|workerGroup | |String |Worker Grouping| |
-21|preTasks | |Array|Predecessor | |
-
-
-**Sample node data:**
-
-```bash
-{
-            "type":"DEPENDENT",
-            "id":"tasks-57057",
-            "name":"DenpendentTask",
-            "params":{
-
-            },
-            "description":"",
-            "runFlag":"NORMAL",
-            "conditionResult":{
-                "successNode":[
-                    ""
-                ],
-                "failedNode":[
-                    ""
-                ]
-            },
-            "dependence":{
-                "relation":"AND",
-                "dependTaskList":[
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":7,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"ALL",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    },
-                    {
-                        "relation":"AND",
-                        "dependItemList":[
-                            {
-                                "projectId":1,
-                                "definitionId":5,
-                                "definitionList":[
-                                    {
-                                        "value":8,
-                                        "label":"MRTask"
-                                    },
-                                    {
-                                        "value":7,
-                                        "label":"FlinkTask"
-                                    },
-                                    {
-                                        "value":6,
-                                        "label":"SparkTask"
-                                    },
-                                    {
-                                        "value":5,
-                                        "label":"SqlTask-Update"
-                                    },
-                                    {
-                                        "value":4,
-                                        "label":"SqlTask-Query"
-                                    },
-                                    {
-                                        "value":3,
-                                        "label":"SubProcessTask"
-                                    },
-                                    {
-                                        "value":2,
-                                        "label":"Python Task"
-                                    },
-                                    {
-                                        "value":1,
-                                        "label":"Shell Task"
-                                    }
-                                ],
-                                "depTasks":"SqlTask-Update",
-                                "cycle":"day",
-                                "dateValue":"today"
-                            }
-                        ]
-                    }
-                ]
-            },
-            "maxRetryTimes":"0",
-            "retryInterval":"1",
-            "timeout":{
-                "strategy":"",
-                "interval":null,
-                "enable":false
-            },
-            "taskInstancePriority":"MEDIUM",
-            "workerGroup":"default",
-            "preTasks":[
-
-            ]
-        }
-```
+
+# Overall task storage structure
+All tasks created in dolphinscheduler are saved in the t_ds_process_definition table.
+
+The database table structure is shown in the following table:
+
+
+Serial number | Field  | Types  |  Description
+-------- | ---------| -------- | ---------
+1|id|int(11)|Primary key
+2|name|varchar(255)|Process definition name
+3|version|int(11)|Process definition version
+4|release_state|tinyint(4)|Release status of the process definition:0 not online, 1 online
+5|project_id|int(11)|Project id
+6|user_id|int(11)|User id of the process definition
+7|process_definition_json|longtext|Process definition JSON
+8|description|text|Process definition description
+9|global_params|text|Global parameters
+10|flag|tinyint(4)|Whether the process is available: 0 is not available, 1 is available
+11|locations|text|Node coordinate information
+12|connects|text|Node connection information
+13|receivers|text|Recipient
+14|receivers_cc|text|Cc
+15|create_time|datetime|Creation time
+16|timeout|int(11) |Timeout duration
+17|tenant_id|int(11) |Tenant id
+18|update_time|datetime|Update time
+19|modify_by|varchar(36)|Modify user
+20|resource_ids|varchar(255)|Resource ids
+
+The process_definition_json field is the core field, which defines the task information in the DAG diagram. The data is stored in JSON.
+
+The public data structure is as follows.
+Serial number | Field  | Types  |  Description
+-------- | ---------| -------- | ---------
+1|globalParams|Array|Global parameters
+2|tasks|Array|Task collection in the process  [ Please refer to the following chapters for the structure of each type]
+3|tenantId|int|Tenant id
+4|timeout|int|Timeout duration
+
+Data example:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"golbal_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
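+
+For readers who want to inspect these definitions programmatically, the minimal sketch below (illustrative only; it assumes the JSON text has already been read from the process_definition_json column of t_ds_process_definition) parses a valid instance of the structure above and lists the tasks it contains.
+
+```python
+import json
+
+# A small, valid instance of the structure shown above; the real value comes
+# from the process_definition_json column of t_ds_process_definition.
+process_definition_json = """
+{
+    "globalParams": [
+        {"prop": "global_bizdate", "direct": "IN", "type": "VARCHAR", "value": "${system.biz.date}"}
+    ],
+    "tasks": [
+        {"type": "SHELL", "id": "tasks-80760", "name": "Shell Task", "preTasks": []}
+    ],
+    "tenantId": 0,
+    "timeout": 0
+}
+"""
+
+definition = json.loads(process_definition_json)
+print("tenantId:", definition["tenantId"], "timeout:", definition["timeout"])
+for task in definition["tasks"]:
+    # Each element follows one of the per-type structures described below.
+    print(task["type"], task["id"], task["name"], "depends on", task["preTasks"])
+```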
+
+# Detailed explanation of the storage structure of each task type
+
+## Shell node
+**The node data structure is as follows:**
+Serial number|Field|Sub-field|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SHELL
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource||
+8|description | |String|Description | |
+9|runFlag | |String |Run ID| |
+10|conditionResult | |Object|Conditional branch | |
+11| | successNode| Array|Jump to node successfully| |
+12| | failedNode|Array|Failed jump node | 
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14|maxRetryTimes | |String|Maximum number of retries | |
+15|retryInterval | |String |Retry interval| |
+16|timeout | |Object|Timeout control | |
+17| taskInstancePriority| |String|Task priority | |
+18|workerGroup | |String |Worker Grouping| |
+19|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+
+```
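+
+As a quick illustration of the quoting rules above, the hedged sketch below builds the same kind of SHELL node as a Python dict and serializes it; json.dumps escapes the inner double quotes of rawScript automatically, producing the escaped form required in the stored JSON.
+
+```python
+import json
+
+# Illustrative only: a minimal SHELL node following the field table above.
+shell_task = {
+    "type": "SHELL",
+    "id": "tasks-80760",
+    "name": "Shell Task",
+    "params": {
+        "resourceList": [{"id": 3, "name": "run.sh", "res": "run.sh"}],
+        "localParams": [],
+        "rawScript": 'echo "This is a shell script"',  # inner quotes are escaped on dump
+    },
+    "description": "",
+    "runFlag": "NORMAL",
+    "conditionResult": {"successNode": [""], "failedNode": [""]},
+    "dependence": {},
+    "maxRetryTimes": "0",
+    "retryInterval": "1",
+    "timeout": {"strategy": "", "interval": None, "enable": False},
+    "taskInstancePriority": "MEDIUM",
+    "workerGroup": "default",
+    "preTasks": [],
+}
+print(json.dumps(shell_task, indent=4))
+```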
+
+
+## SQL node
+Perform data query and update operations on the specified data source through SQL.
+
+**The node data structure is as follows:**
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SQL
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |type |String | Database type
+6| |datasource |Int | Data source id
+7| |sql |String | Query SQL statement
+8| |udfs | String| udf function|UDF function ids, separated by commas.
+9| |sqlType | String| SQL node type |0 query, 1 non query
+10| |title |String | Mail title
+11| |receivers |String | Recipient
+12| |receiversCc |String | Cc
+13| |showType | String| Mail display type|TABLE table  ,  ATTACHMENT attachment
+14| |connParams | String| Connection parameters
+15| |preStatements | Array| Pre-SQL
+16| | postStatements| Array|Post SQL||
+17| | localParams| Array|Custom parameters||
+18|description | |String|Description | |
+19|runFlag | |String |Run ID| |
+20|conditionResult | |Object|Conditional branch | |
+21| | successNode| Array|Jump to node successfully| |
+22| | failedNode|Array|Failed jump node | 
+23| dependence| |Object |Task dependency |Mutually exclusive with params
+24|maxRetryTimes | |String|Maximum number of retries | |
+25|retryInterval | |String |Retry interval| |
+26|timeout | |Object|Timeout control | |
+27| taskInstancePriority| |String|Task priority | |
+28|workerGroup | |String |Worker Grouping| |
+29|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## PROCEDURE [stored procedure] node
+**The node data structure is as follows:**
+**Sample node data:**
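+
+As a rough, hypothetical placeholder (an assumption modelled on the SQL node rather than a documented structure), a PROCEDURE node would share the common fields above, with params carrying the data source and the stored-procedure call:
+
+```python
+# Hypothetical sketch only -- the PROCEDURE structure is not documented here.
+# The params field names (type, datasource, method, localParams) are assumptions.
+procedure_task = {
+    "type": "PROCEDURE",
+    "id": "tasks-10001",                      # hypothetical task code
+    "name": "ProcedureTask",
+    "params": {
+        "type": "MYSQL",                      # database type, assumed analogous to the SQL node
+        "datasource": 1,                      # data source id, assumed analogous to the SQL node
+        "method": "call test_procedure(${id})",  # stored-procedure call statement (assumed field)
+        "localParams": [],
+    },
+    # remaining common fields (runFlag, conditionResult, dependence, timeout, ...)
+    # follow the same shape as the other node types in this document
+}
+```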
+
+## SPARK node
+**The node data structure is as follows:**
+
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SPARK
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |mainClass |String | Run the main class
+6| |mainArgs | String| Operating parameters
+7| |others | String| Other parameters
+8| |mainJar |Object | Program jar package
+9| |deployMode |String | Deployment mode  |local,client,cluster
+10| |driverCores | String| Driver core
+11| |driverMemory | String| Driver memory
+12| |numExecutors |String | Number of executors
+13| |executorMemory |String | Executor memory
+14| |executorCores |String | Number of executor cores
+15| |programType | String| Program type|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
+17| | localParams| Array|Custom parameters
+18| | resourceList| Array|Resource
+19|description | |String|Description | |
+20|runFlag | |String |Run ID| |
+21|conditionResult | |Object|Conditional branch | |
+22| | successNode| Array|Jump to node successfully| |
+23| | failedNode|Array|Failed jump node | 
+24| dependence| |Object |Task dependency |Mutually exclusive with params
+25|maxRetryTimes | |String|Maximum number of retries | |
+26|retryInterval | |String |Retry interval| |
+27|timeout | |Object|Timeout control | |
+28| taskInstancePriority| |String|Task priority | |
+29|workerGroup | |String |Worker Grouping| |
+30|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
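+
+For orientation only, the sketch below shows roughly how these parameters could map onto a spark-submit command line; it assumes a YARN master and an already-resolved jar path, and the command actually assembled by the worker may differ.
+
+```python
+# Rough, assumed mapping from the node parameters above to spark-submit flags.
+params = {
+    "mainClass": "org.apache.spark.examples.SparkPi",
+    "deployMode": "cluster",
+    "driverCores": 1,
+    "driverMemory": "512M",
+    "numExecutors": 2,
+    "executorMemory": "2G",
+    "executorCores": 2,
+    "mainArgs": "10",
+}
+main_jar = "spark-examples.jar"  # hypothetical path resolved from mainJar.id
+
+command = (
+    f"spark-submit --master yarn --deploy-mode {params['deployMode']} "
+    f"--class {params['mainClass']} "
+    f"--driver-cores {params['driverCores']} --driver-memory {params['driverMemory']} "
+    f"--num-executors {params['numExecutors']} "
+    f"--executor-memory {params['executorMemory']} --executor-cores {params['executorCores']} "
+    f"{main_jar} {params['mainArgs']}"
+)
+print(command)
+```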
+
+
+
+## MapReduce (MR) node
+**The node data structure is as follows:**
+
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |MR
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |mainClass |String | Run the main class
+6| |mainArgs | String| Operating parameters
+7| |others | String| Other parameters
+8| |mainJar |Object | Program jar package
+9| |programType | String| Program type|JAVA,PYTHON
+10| | localParams| Array|Custom parameters
+11| | resourceList| Array|Resource
+12|description | |String|Description | |
+13|runFlag | |String |Run ID| |
+14|conditionResult | |Object|Conditional branch | |
+15| | successNode| Array|Jump to node successfully| |
+16| | failedNode|Array|Failed jump node | 
+17| dependence| |Object |Task dependency |Mutually exclusive with params
+18|maxRetryTimes | |String|Maximum number of retries | |
+19|retryInterval | |String |Retry interval| |
+20|timeout | |Object|Timeout control | |
+21| taskInstancePriority| |String|Task priority | |
+22|workerGroup | |String |Worker Grouping| |
+23|preTasks | |Array|Predecessor | |
+
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+## Python node
+**The node data structure is as follows:**
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |PYTHON
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |rawScript |String| Python script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource||
+8|description | |String|Description | |
+9|runFlag | |String |Run ID| |
+10|conditionResult | |Object|Conditional branch | |
+11| | successNode| Array|Jump to node successfully| |
+12| | failedNode|Array|Failed jump node | 
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14|maxRetryTimes | |String|Maximum number of retries | |
+15|retryInterval | |String |Retry interval| |
+16|timeout | |Object|Timeout control | |
+17| taskInstancePriority| |String|Task priority | |
+18|workerGroup | |String |Worker Grouping| |
+19|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+
+
+
+## Flink node
+**The node data structure is as follows:**
+
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |FLINK
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |mainClass |String | Run the main class
+6| |mainArgs | String| Operating parameters
+7| |others | String| Other parameters
+8| |mainJar |Object | Program jar package
+9| |deployMode |String | Deployment mode  |local,client,cluster
+10| |slot | String| Number of slots
+11| |taskManager |String | Number of TaskManagers
+12| |taskManagerMemory |String | TaskManager memory
+13| |jobManagerMemory |String | JobManager memory
+14| |programType | String| Program type|JAVA,SCALA,PYTHON
+15| | localParams| Array|Custom parameters
+16| | resourceList| Array|Resource
+17|description | |String|Description | |
+18|runFlag | |String |Run ID| |
+19|conditionResult | |Object|Conditional branch | |
+20| | successNode| Array|Jump to node successfully| |
+21| | failedNode|Array|Failed jump node | 
+22| dependence| |Object |Task dependency |Mutually exclusive with params
+23|maxRetryTimes | |String|Maximum number of retries | |
+24|retryInterval | |String |Retry interval| |
+25|timeout | |Object|Timeout control | |
+26| taskInstancePriority| |String|Task priority | |
+27|workerGroup | |String |Worker Grouping| |
+28|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## HTTP node
+**The node data structure is as follows:**
+
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |HTTP
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |url |String | Request address
+6| |httpMethod | String| Request method|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|Request parameter
+8| |httpCheckCondition | String| Check conditions|Default response code 200
+9| |condition |String | Check content
+10| | localParams| Array|Custom parameters
+11|description | |String|Description | |
+12|runFlag | |String |Run ID| |
+13|conditionResult | |Object|Conditional branch | |
+14| | successNode| Array|Jump to node successfully| |
+15| | failedNode|Array|Failed jump node | 
+16| dependence| |Object |Task dependency |Mutually exclusive with params
+17|maxRetryTimes | |String|Maximum number of retries | |
+18|retryInterval | |String |Retry interval| |
+19|timeout | |Object|Timeout control | |
+20| taskInstancePriority| |String|Task priority | |
+21|workerGroup | |String |Worker Grouping| |
+22|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
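+
+A hedged sketch of how such a check could be evaluated is shown below; STATUS_CODE_DEFAULT is taken from the sample above, while the other condition names are assumptions used only to illustrate checking the response body against the condition field.
+
+```python
+import urllib.request
+
+def check_http_task(url, http_method, http_check_condition, condition=""):
+    """Illustrative check only; not the scheduler's actual implementation."""
+    req = urllib.request.Request(url, method=http_method)
+    with urllib.request.urlopen(req, timeout=10) as resp:
+        status = resp.status
+        body = resp.read().decode("utf-8", errors="replace")
+
+    if http_check_condition == "STATUS_CODE_DEFAULT":
+        return status == 200                      # default: response code 200 means success
+    if http_check_condition == "BODY_CONTAINS":       # assumed condition name
+        return condition in body
+    if http_check_condition == "BODY_NOT_CONTAINS":   # assumed condition name
+        return condition not in body
+    return False
+```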
+
+
+
+## DataX node
+
+**The node data structure is as follows:**
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |DATAX
+3| name| |String|Name |
+4| params| |Object| Custom parameters |Json format
+5| |customConfig |Int | Custom type| 0 not custom, 1 custom
+6| |dsType |String | Source database type
+7| |dataSource |Int | Source database ID
+8| |dtType | String| Target database type
+9| |dataTarget | Int| Target database ID 
+10| |sql |String | SQL statement
+11| |targetTable |String | Target table
+12| |jobSpeedByte |Int | Rate limit (bytes)
+13| |jobSpeedRecord | Int| Rate limit (record count)
+14| |preStatements | Array| Pre-SQL
+15| | postStatements| Array|Post SQL
+16| | json| String|Custom configuration|Effective when customConfig=1
+17| | localParams| Array|Custom parameters|Effective when customConfig=1
+18|description | |String|Description | |
+19|runFlag | |String |Run ID| |
+20|conditionResult | |Object|Conditional branch | |
+21| | successNode| Array|Jump to node successfully| |
+22| | failedNode|Array|Failed jump node | 
+23| dependence| |Object |Task dependency |Mutually exclusive with params
+24|maxRetryTimes | |String|Maximum number of retries | |
+25|retryInterval | |String |Retry interval| |
+26|timeout | |Object|Timeout control | |
+27| taskInstancePriority| |String|Task priority | |
+28|workerGroup | |String |Worker Grouping| |
+29|preTasks | |Array|Predecessor | |
+
+
+
+**Sample node data:**
+
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+## Sqoop node
+
+**The node data structure is as follows:**
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SQOOP
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| | concurrency| Int|Concurrency
+6| | modelType|String |Flow direction|import,export
+7| |sourceType|String |Data source type |
+8| |sourceParams |String| Data source parameters| JSON format
+9| | targetType|String |Target data type
+10| |targetParams | String|Target data parameters|JSON format
+11| |localParams |Array |Custom parameters
+12|description | |String|Description | |
+13|runFlag | |String |Run ID| |
+14|conditionResult | |Object|Conditional branch | |
+15| | successNode| Array|Jump to node successfully| |
+16| | failedNode|Array|Failed jump node | 
+17| dependence| |Object |Task dependency |Mutually exclusive with params
+18|maxRetryTimes | |String|Maximum number of retries | |
+19|retryInterval | |String |Retry interval| |
+20|timeout | |Object|Timeout control | |
+21| taskInstancePriority| |String|Task priority | |
+22|workerGroup | |String |Worker Grouping| |
+23|preTasks | |Array|Predecessor | |
+
+
+
+
+**Sample node data:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
+                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
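+
+Note that sourceParams and targetParams are themselves JSON documents stored as strings, which is why their inner quotes appear escaped in the sample above. A minimal sketch of building them safely:
+
+```python
+import json
+
+# Build the nested parameter documents as plain dicts first.
+source_params = {
+    "srcType": "MYSQL",
+    "srcDatasource": 1,
+    "srcQueryType": "1",
+    "srcQuerySql": "select id , name from user",
+}
+target_params = {
+    "targetPath": "/user/hive/warehouse/ods.db/user",
+    "deleteTargetDir": False,
+    "fileType": "--as-avrodatafile",
+    "compressionCodec": "snappy",
+    "fieldsTerminated": ",",
+    "linesTerminated": "@",
+}
+
+sqoop_params = {
+    "concurrency": 1,
+    "modelType": "import",
+    "sourceType": "MYSQL",
+    "targetType": "HDFS",
+    # json.dumps produces the escaped string form shown in the sample above
+    "sourceParams": json.dumps(source_params),
+    "targetParams": json.dumps(target_params),
+    "localParams": [],
+}
+print(json.dumps(sqoop_params, indent=4))
+```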
+
+## Conditional branch node
+
+**The node data structure is as follows:**
+Serial number|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |CONDITIONS
+3| name| |String|Name |
+4| params| |Object| Custom parameters | null
+5|description | |String|Description | |
+6|runFlag | |String |Run ID| |
+7|conditionResult | |Object|Conditional branch | |
+8| | successNode| Array|Jump to node successfully| |
+9| | failedNode|Array|Failed jump node | 
+10| dependence| |Object |Task dependency |Mutually exclusive with params
+11|maxRetryTimes | |String|Maximum number of retries | |
+12|retryInterval | |String |Retry interval| |
+13|timeout | |Object|Timeout control | |
+14| taskInstancePriority| |String|Task priority | |
+15|workerGroup | |String |Worker Grouping| |
+16|preTasks | |Array|Predecessor | |
+
+
+**Sample node data:**
+
+```json
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
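+
+As a rough illustration of how `conditionResult` drives the branching (a simplified sketch, not the actual scheduler code), the downstream nodes are taken from `successNode` or `failedNode` depending on how the condition task finished:
+
+```java
+import java.util.List;
+
+public class ConditionRouting {
+
+    // Pick the downstream nodes based on the condition outcome
+    static List<String> nextNodes(boolean conditionSucceeded,
+                                  List<String> successNode,
+                                  List<String> failedNode) {
+        return conditionSucceeded ? successNode : failedNode;
+    }
+
+    public static void main(String[] args) {
+        // Values taken from the sample above
+        List<String> success = List.of("test04");
+        List<String> failed = List.of("test05");
+        System.out.println(nextNodes(true, success, failed));   // [test04]
+        System.out.println(nextNodes(false, success, failed));  // [test05]
+    }
+}
+```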
+
+
+## Subprocess node
+**The node data structure is as follows:**
+No.|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |SUB_PROCESS
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |processDefinitionId |Int| Process definition ID
+6|description | |String|Description | |
+7|runFlag | |String |Run flag| |
+8|conditionResult | |Object|Conditional branch | |
+9| | successNode| Array|Downstream node on success| |
+10| | failedNode|Array|Downstream node on failure |
+11| dependence| |Object |Task dependency |Mutually exclusive with params
+12|maxRetryTimes | |String|Maximum number of retries | |
+13|retryInterval | |String |Retry interval| |
+14|timeout | |Object|Timeout control | |
+15| taskInstancePriority| |String|Task priority | |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```json
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+
+
+## DEPENDENT node
+**The node data structure is as follows:**
+No.|Parameter name|Sub-parameter|Type|Description|Notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| Task code|
+2|type ||String |Type |DEPENDENT
+3| name| |String|Name |
+4| params| |Object| Custom parameters |JSON format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|Custom parameters||
+7| | resourceList| Array|Resource||
+8|description | |String|Description | |
+9|runFlag | |String |Run flag| |
+10|conditionResult | |Object|Conditional branch | |
+11| | successNode| Array|Downstream node on success| |
+12| | failedNode|Array|Downstream node on failure |
+13| dependence| |Object |Task dependency |Mutually exclusive with params
+14| | relation|String |Relation |AND, OR
+15| | dependTaskList|Array |Dependent task list |
+16|maxRetryTimes | |String|Maximum number of retries | |
+17|retryInterval | |String |Retry interval| |
+18|timeout | |Object|Timeout control | |
+19| taskInstancePriority| |String|Task priority | |
+20|workerGroup | |String |Worker group| |
+21|preTasks | |Array|Predecessor tasks | |
+
+
+**Sample node data:**
+
+```json
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
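+
+The `dependence` block is a tree of AND/OR relations over `dependTaskList` and `dependItemList`. A simplified sketch of how such a tree could be evaluated once every dependent item has been checked (a hypothetical helper, not the DolphinScheduler implementation):
+
+```java
+import java.util.List;
+
+public class DependenceEvaluator {
+
+    // Combine child results under an AND / OR relation
+    static boolean combine(String relation, List<Boolean> results) {
+        return "AND".equalsIgnoreCase(relation)
+                ? results.stream().allMatch(Boolean::booleanValue)
+                : results.stream().anyMatch(Boolean::booleanValue);
+    }
+
+    public static void main(String[] args) {
+        // Two dependent-task groups joined by the outer AND relation, as in the sample above
+        boolean group1 = combine("AND", List.of(true));              // "ALL" tasks of definition 7 succeeded
+        boolean group2 = combine("AND", List.of(false));             // "SqlTask-Update" of definition 5 not finished yet
+        boolean satisfied = combine("AND", List.of(group1, group2));
+        System.out.println(satisfied);                               // false: the DEPENDENT node keeps waiting
+    }
+}
+```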
diff --git a/docs/en-us/1.3.6/user_doc/upgrade.md b/content/en-us/docs/1.3.2/user_doc/upgrade.md
similarity index 100%
rename from docs/en-us/1.3.6/user_doc/upgrade.md
rename to content/en-us/docs/1.3.2/user_doc/upgrade.md
diff --git a/docs/en-us/1.3.4/user_doc/architecture-design.md b/content/en-us/docs/1.3.3/user_doc/architecture-design.md
similarity index 98%
rename from docs/en-us/1.3.4/user_doc/architecture-design.md
rename to content/en-us/docs/1.3.3/user_doc/architecture-design.md
index 29f4ae5..fe3beb7 100644
--- a/docs/en-us/1.3.4/user_doc/architecture-design.md
+++ b/content/en-us/docs/1.3.3/user_doc/architecture-design.md
@@ -1,332 +1,332 @@
-## System Architecture Design
-Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
-
-### 1.Glossary
-**DAG:** The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes. Examples are as follows:
-
-<p align="center">
-  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
-  <p align="center">
-        <em>dag example</em>
-  </p>
-</p>
-
-**Process definition**:Visualization formed by dragging task nodes and establishing task node associations**DAG**
-
-**Process instance**:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated
-
-**Task instance**:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status
-
-**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion, note: 其中子 **SUB_PROCESS**  It is also a separate process definition that can be started and executed separately
-
-**Scheduling method:** The system supports scheduled scheduling and manual scheduling based on cron expressions. Command type support: start workflow, start execution from current node, resume fault-tolerant workflow, resume pause process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread。Among them **Resume fault-tolerant workflow** 和 **Resume waiting thread** The two command types are used by the internal control of scheduling, and cannot b [...]
-
-**Scheduled**:System adopts **quartz** distributed scheduler, and supports the visual generation of cron expressions
-
-**Rely**:The system not only supports **DAG** simple dependencies between the predecessor and successor nodes, but also provides **task dependent** nodes, supporting **between processes**
-
-**Priority** :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out
-
-**Email alert**:Support **SQL task** Query result email sending, process instance running result email alert and fault tolerance alert notification
-
-**Failure strategy**:For tasks running in parallel, if a task fails, two failure strategy processing methods are provided. **Continue** refers to regardless of the status of the task running in parallel until the end of the process failure. **End** means that once a failed task is found, Kill will also run the parallel task at the same time, and the process fails and ends
-
-**Complement**:Supplement historical data,Supports **interval parallel and serial** two complement methods
-
-### 2.System Structure
-
-#### 2.1 System architecture diagram
-<p align="center">
-  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
-  <p align="center">
-        <em>System architecture diagram</em>
-  </p>
-</p>
-
-#### 2.2 Start process activity diagram
-<p align="center">
-  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
-  <p align="center">
-        <em>Start process activity diagram</em>
-  </p>
-</p>
-
-#### 2.3 Architecture description
-
-* **MasterServer** 
-
-    MasterServer adopts a distributed and centerless design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with Zookeeper, and perform fault tolerance by monitoring changes in the temporary node of Zookeeper.
-    MasterServer provides monitoring services based on netty.
-
-    ##### The service mainly includes:
-
-    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of scheduled tasks. When Quartz starts the task, there will be a thread pool inside the Master that is specifically responsible for the follow-up operation of the processing task
-
-    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to different **command types**
-
-    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing of various command types
-
-    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks
-
-* **WorkerServer** 
-
-     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
-
-     When the WorkerServer service starts, register a temporary node with Zookeeper and maintain a heartbeat.
-     Server provides monitoring services based on netty. Worker
-     ##### The service mainly includes:
-     - **Fetch TaskThread** is mainly responsible for continuously getting tasks from **Task Queue**, and calling **TaskScheduleThread** corresponding executor according to different task types.
-
-     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
-
-* **ZooKeeper** 
-
-    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system is based on ZooKeeper for event monitoring and distributed locks.
-
-    We have also implemented queues based on Redis, but we hope that DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
-
-* **Task Queue** 
-
-    Provide task queue operation, the current queue is also implemented based on Zookeeper. Because there is less information stored in the queue, there is no need to worry about too much data in the queue. In fact, we have tested the millions of data storage queues, which has no impact on system stability and performance.
-
-* **Alert** 
-
-    Provide alarm related interface, the interface mainly includes **alarm** two types of alarm data storage, query and notification functions. Among them, there are **email notification** and **SNMP (not yet implemented)**.
-
-* **API** 
-
-    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to provide request services to the outside world. Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from the node and so on.
-
-* **UI** 
-
-    The front-end page of the system provides various visual operation interfaces of the system,See more at [System User Manual](./system-manual.md) section。
-
-#### 2.3 Architecture design ideas
-
-##### One、Decentralization VS centralization 
-
-###### Centralized thinking
-
-The centralized design concept is relatively simple. The nodes in the distributed cluster are divided into roles according to roles, which are roughly divided into two roles:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
- </p>
-
-- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node will not be in a "busy dead" or "idle dead" state.
-- The role of Worker is mainly responsible for task execution and maintenance and Master's heartbeat, so that Master can assign tasks to Slave.
-
-
-
-Problems in centralized thought design:
-
-- Once there is a problem with the Master, the dragons are headless and the entire cluster will collapse. In order to solve this problem, most of the Master/Slave architecture models adopt the design scheme of active and standby Master, which can be hot standby or cold standby, or automatic switching or manual switching, and more and more new systems are beginning to have The ability to automatically elect and switch Master to improve the availability of the system.
-- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will cause the Master to be overloaded. If the Scheduler is on the slave, all tasks in a DAG can only submit jobs on a certain machine. When there are more parallel tasks, the pressure on the slave may be greater.
-
-
-
-###### Decentralized
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
- </p>
-
-- In the decentralized design, there is usually no concept of Master/Slave, all roles are the same, the status is equal, the global Internet is a typical decentralized distributed system, any node equipment connected to the network is down, All will only affect a small range of functions.
-- The core design of decentralized design is that there is no "manager" different from other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed system communication greatly increases the difficulty of implementing the above functions.
-- In fact, truly decentralized distributed systems are rare. Instead, dynamic centralized distributed systems are constantly pouring out. Under this architecture, the managers in the cluster are dynamically selected, rather than preset, and when the cluster fails, the nodes of the cluster will automatically hold "meetings" to elect new "managers" To preside over the work. The most typical case is Etcd implemented by ZooKeeper and Go language.
-
-
-
-- The decentralization of DolphinScheduler is that the Master/Worker is registered in Zookeeper, and the Master cluster and Worker cluster are centerless, and the Zookeeper distributed lock is used to elect one of the Master or Worker as the "manager" to perform the task.
-
-#####  Two、Distributed lock practice
-
-DolphinScheduler uses ZooKeeper distributed lock to realize that only one Master executes Scheduler at the same time, or only one Worker executes the submission of tasks.
-1. The core process algorithm for acquiring distributed locks is as follows:
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
- <p align="center">
-   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
- </p>
-
-
-##### Three、Insufficient thread loop waiting problem
-
--  If there is no sub-process in a DAG, if the number of data in the Command is greater than the threshold set by the thread pool, the process directly waits or fails.
--  If many sub-processes are nested in a large DAG, the following figure will produce a "dead" state:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
- </p>
-In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot end, so that the threads cannot be released. In this way, the state of the child-parent process loop waiting is formed. At this time, unless a new Master is started to add threads to break such a "stalemate", the sched [...]
-
-It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
-
-1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process is executed. Because it is a multi-master thread pool, the total number of threads is unlikely to be obtained in real time. 
-2. Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.
-3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
-
-note:The Master Scheduler thread is executed by FIFO when acquiring the Command.
-
-So we chose the third way to solve the problem of insufficient threads.
-
-
-##### Four、Fault-tolerant design
-Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.
-
-###### 1. Downtime fault tolerance
-
-The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
- </p>
-Among them, the Master monitors the directories of other Masters and Workers. If the remove event is heard, fault tolerance of the process instance or task instance will be performed according to the specific business logic.
-
-
-
-- Master fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
- </p>
-After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled by the Scheduler thread in DolphinScheduler, traverses the DAG to find the "running" and "submit successful" tasks, monitors the status of its task instances for the "running" tasks, and "commits successful" tasks It is necessary to determine whether the task queue already exists. If it exists, the status of the task instance is also monitored. If it does not exist, resubmit the task instance.
-
-
-
-- Worker fault tolerance flowchart:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
- </p>
-
-Once the Master Scheduler thread finds that the task instance is in the "fault tolerant" state, it takes over the task and resubmits it.
-
- Note: Due to "network jitter", the node may lose its heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.
-
-###### 2.Task failed and try again
-
-Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
-
-- Task failure retry is at the task level and is automatically performed by the scheduling system. For example, if a Shell task is set to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
-- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**
-- Process failure rerun is also at the process level and is performed manually, rerun is performed from the start node
-
-
-
-Next to the topic, we divide the task nodes in the workflow into two types.
-
-- One is a business node, which corresponds to an actual script or processing statement, such as Shell node, MR node, Spark node, and dependent node.
-
-- There is also a logical node, which does not do actual script or statement processing, but only logical processing of the entire process flow, such as sub-process sections.
-
-Each **business node** can be configured with the number of failed retries. When the task node fails, it will automatically retry until it succeeds or exceeds the configured number of retries. **Logical node** Failure retry is not supported. But the tasks in the logical node support retry.
-
-If there is a task failure in the workflow that reaches the maximum number of retries, the workflow will fail to stop, and the failed workflow can be manually rerun or process recovery operation
-
-
-
-##### Five、Task priority design
-In the early scheduling design, if there is no priority design and the fair scheduling design is used, the task submitted first may be completed at the same time as the task submitted later, and the process or task priority cannot be set, so We have redesigned this, and our current design is as follows:
-
--  According to **priority of different process instances** priority over **priority of the same process instance** priority over **priority of tasks within the same process**priority over **tasks within the same process**submission order from high to Low task processing.
-    - The specific implementation is to parse the priority according to the json of the task instance, and then save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue, when obtained from the task queue, pass String comparison can get the tasks that need to be executed first
-
-        - The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
-             </p>
-
-        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
-            <p align="center">
-               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
-             </p>
-
-
-##### Six、Logback and netty implement log access
-
--  Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:
-  -  Put logs on ES search engine
-  -  Obtain remote log information through netty communication
-
--  In consideration of the lightness of DolphinScheduler as much as possible, so I chose gRPC to achieve remote access to log information.
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
- </p>
-
-
-- We use the FileAppender and Filter functions of the custom Logback to realize that each task instance generates a log file.
-- FileAppender is mainly implemented as follows:
-
- ```java
- /**
-  * task log appender
-  */
- public class TaskLogAppender extends FileAppender<ILoggingEvent> {
- 
-     ...
-
-    @Override
-    protected void append(ILoggingEvent event) {
-
-        if (currentlyActiveFile == null){
-            currentlyActiveFile = getFile();
-        }
-        String activeFile = currentlyActiveFile;
-        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
-        String threadName = event.getThreadName();
-        String[] threadNameArr = threadName.split("-");
-        // logId = processDefineId_processInstanceId_taskInstanceId
-        String logId = threadNameArr[1];
-        ...
-        super.subAppend(event);
-    }
-}
- ```
-
-
-Generate logs in the form of /process definition id/process instance id/task instance id.log
-
-- Filter to match the thread name starting with TaskLogInfo:
-
-- TaskLogFilter is implemented as follows:
-
- ```java
- /**
- *  task log filter
- */
-public class TaskLogFilter extends Filter<ILoggingEvent> {
-
-    @Override
-    public FilterReply decide(ILoggingEvent event) {
-        if (event.getThreadName().startsWith("TaskLogInfo-")){
-            return FilterReply.ACCEPT;
-        }
-        return FilterReply.DENY;
-    }
-}
- ```
-
-### 3.Module introduction
-- dolphinscheduler-alert alarm module, providing AlertServer service.
-
-- dolphinscheduler-api web application module, providing ApiServer service.
-
-- dolphinscheduler-common General constant enumeration, utility class, data structure or base class
-
-- dolphinscheduler-dao provides operations such as database access.
-
-- dolphinscheduler-remote client and server based on netty
-
-- dolphinscheduler-server MasterServer and WorkerServer services
-
-- dolphinscheduler-service service module, including Quartz, Zookeeper, log client access service, easy to call server module and api module
-
-- dolphinscheduler-ui front-end module
-### Sum up
-From the perspective of scheduling, this article preliminarily introduces the architecture principles and implementation ideas of the big data distributed workflow scheduling system-DolphinScheduler. To be continued
-
-
+## System Architecture Design
+Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system
+
+### 1.Glossary
+**DAG:** Full name Directed Acyclic Graph, abbreviated as DAG. Tasks in a workflow are assembled as a directed acyclic graph, and topological traversal proceeds from the nodes with zero in-degree until there are no successor nodes. An example is shown below:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
+
+**Process definition**: A visual **DAG** assembled by dragging task nodes and establishing associations between them
+
+**Process instance**: A process instance is an instantiation of a process definition, generated by a manual start or by scheduled scheduling. Each run of a process definition generates one process instance
+
+**Task instance**: A task instance is an instantiation of a task node in a process definition; it identifies the execution status of a specific task
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON and DEPENDENT (dependency) tasks, with dynamic plug-in expansion planned. Note: a **SUB_PROCESS** is itself a separate process definition that can be started and executed independently
+
+**Scheduling method:** The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume paused process, start execution from the failed node, backfill, schedule, rerun, pause, stop, and resume waiting thread. Among them, **resume fault-tolerant workflow** and **resume waiting thread** are used internally by the scheduler and cannot be called externally
+
+**Scheduled**: The system uses the **Quartz** distributed scheduler and supports visual generation of cron expressions
+
+**Dependency**: Besides simple **DAG** dependencies between predecessor and successor nodes, the system also provides **task dependent** nodes to support custom dependencies **between processes**
+
+**Priority**: Supports priorities for process instances and task instances; if no priority is set, the default is first-in, first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance alerts
+
+**Failure strategy**: For tasks running in parallel, two failure strategies are provided when a task fails. **Continue** means the statuses of the parallel tasks are ignored and the process runs on until it ends in failure. **End** means that once a failed task is found, the parallel running tasks are killed and the process fails and ends
+
+**Backfill (complement)**: Backfills historical data; supports **parallel and serial** backfill over an interval
+
+### 2.System Structure
+
+#### 2.1 System architecture diagram
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+#### 2.2 Start process activity diagram
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+#### 2.3 Architecture description
+
+* **MasterServer** 
+
+    MasterServer adopts a distributed, decentralized design. The MasterServer is mainly responsible for DAG task segmentation and task submission monitoring, and it also monitors the health status of other MasterServers and WorkerServers.
+    When the MasterServer service starts, it registers a temporary node with ZooKeeper and performs fault tolerance by watching changes to ZooKeeper temporary nodes.
+    MasterServer provides monitoring services based on Netty.
+
+    ##### The service mainly includes:
+
+    - **Distributed Quartz** distributed scheduling component, mainly responsible for starting and stopping scheduled tasks. When Quartz triggers a task, a thread pool inside the Master handles the subsequent processing of the task
+
+    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and performs different business operations according to the **command type**
+
+    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and the logical processing of the various command types
+
+    - **MasterTaskExecThread** is mainly responsible for task persistence
+
+* **WorkerServer** 
+
+     WorkerServer also adopts a distributed, decentralized design. The WorkerServer is mainly responsible for task execution and for providing log services.
+
+     When the WorkerServer service starts, it registers a temporary node with ZooKeeper and maintains a heartbeat.
+     WorkerServer provides monitoring services based on Netty.
+     ##### The service mainly includes:
+     - **FetchTaskThread** is mainly responsible for continuously fetching tasks from the **Task Queue** and calling the **TaskScheduleThread** executor that corresponds to each task type.
+
+     - **LoggerServer** is an RPC service that provides functions such as log fragment viewing, refreshing and downloading
+
+* **ZooKeeper** 
+
+    The MasterServer and WorkerServer nodes in the system both use the ZooKeeper service for cluster management and fault tolerance. In addition, the system uses ZooKeeper for event monitoring and distributed locks.
+
+    We also implemented queues based on Redis, but since we want DolphinScheduler to depend on as few components as possible, the Redis implementation was eventually removed.
+
+* **Task Queue** 
+
+    Provides task queue operations. The queue is currently also implemented on ZooKeeper. Since the information stored per entry is small, there is no need to worry about the queue holding too much data; we have in fact tested queues storing millions of entries with no impact on system stability or performance.
+
+* **Alert** 
+
+    Provides alert-related interfaces, mainly covering the storage, query and notification of two types of alert data. Notification methods include **email notification** and **SNMP (not yet implemented)**.
+
+* **API** 
+
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to the outside world. Interfaces include workflow creation, definition, query, modification, release, taking offline, manual start, stop, pause, resume, starting execution from a given node, and so on.
+
+* **UI** 
+
+    The front-end pages of the system provide its various visual operation interfaces. See the [System User Manual](./system-manual.md) section for details.
+
+#### 2.4 Architecture design ideas
+
+##### One. Decentralization vs. centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple. The nodes of the distributed cluster are divided into roughly two roles:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
+ </p>
+
+- The Master is mainly responsible for distributing tasks and monitoring the health of the Slaves, and it can dynamically balance tasks across Slaves so that no Slave node is either overloaded ("busy to death") or idle ("idle to death").
+- The Worker (Slave) is mainly responsible for executing tasks and maintaining a heartbeat with the Master so that the Master can assign tasks to it.
+
+
+
+Problems in centralized thought design:
+
+- Once there is a problem with the Master, the cluster has no leader and will collapse. To solve this, most Master/Slave architectures adopt an active/standby Master design, with hot or cold standby and automatic or manual switching, and more and more new systems can automatically elect and switch the Master to improve availability.
+- Another problem is that if the Scheduler runs on the Master, although it can support different tasks of one DAG running on different machines, it can overload the Master. If the Scheduler runs on the Slave, all tasks of one DAG can only submit jobs from a single machine, so when there are many parallel tasks the pressure on that Slave may be high.
+
+
+
+###### Decentralized
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
+ </p>
+
+- In a decentralized design there is usually no Master/Slave concept: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system; any networked node going down only affects a small range of functions.
+- The core of decentralized design is that there is no "manager" distinct from the other nodes in the distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. In this architecture the managers in the cluster are dynamically selected rather than preset, and when the cluster fails the nodes automatically hold "meetings" to elect new "managers" to preside over the work. Typical examples are ZooKeeper and Etcd, which is implemented in Go.
+
+
+
+- The decentralization of DolphinScheduler means that Masters and Workers both register themselves in ZooKeeper, the Master cluster and the Worker cluster have no center, and a ZooKeeper distributed lock is used to elect one Master or Worker as the "manager" to perform the task.
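+
+As a rough illustration of this registration mechanism, the sketch below registers a Master as an ephemeral ZooKeeper node using Apache Curator (the client library and the node layout are assumptions for illustration, not the project's actual code):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.zookeeper.CreateMode;
+
+public class MasterRegistrationExample {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // Register this Master as an ephemeral node; if the process dies or loses its
+        // session, ZooKeeper removes the node and the peers can react to that event.
+        String path = "/dolphinscheduler/nodes/master/192.168.0.1:5678";  // assumed layout
+        client.create()
+              .creatingParentsIfNeeded()
+              .withMode(CreateMode.EPHEMERAL)
+              .forPath(path);
+
+        Thread.sleep(Long.MAX_VALUE);  // keep the session (and the ephemeral node) alive
+    }
+}
+```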
+
+##### Two. Distributed lock practice
+
+DolphinScheduler uses ZooKeeper distributed locks to ensure that only one Master runs the Scheduler at a time, and that only one Worker performs the submission of a given task at a time.
+1. The core process algorithm for acquiring distributed locks is as follows:
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
+2. Flow chart of implementation of Scheduler thread distributed lock in DolphinScheduler:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
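+For reference, a minimal sketch of acquiring such a ZooKeeper distributed lock with Apache Curator's `InterProcessMutex` (illustrative only; the lock path and the use of Curator are assumptions, not necessarily the exact mechanism inside DolphinScheduler):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class MasterLockExample {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // Only the Master that holds this lock runs the Scheduler loop
+        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        lock.acquire();
+        try {
+            // ... scan the command table and submit process instances ...
+        } finally {
+            lock.release();
+            client.close();
+        }
+    }
+}
+```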
+
+##### Three. The insufficient-thread loop-waiting problem
+
+-  If a DAG has no sub-processes, then when the number of Commands exceeds the threshold set for the thread pool, the process simply waits or fails.
+-  If many sub-processes are nested in a large DAG, the situation in the following figure produces a "deadlocked" state:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
+ </p>
+In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread from the thread pool. The whole DAG process can therefore never end, and the threads are never released: the child and parent processes end up waiting on each other in a loop. At that point, unless a new Master is started to add threads and break the "stalemate", the scheduling cluster can no longer be used normally.
+
+Starting a new Master just to break the deadlock seems unsatisfactory, so we proposed the following three options to reduce this risk:
+
+1. Calculate the sum of all Master threads, and then calculate the number of threads required by each DAG, i.e. pre-calculate before the DAG process executes. Because the thread pools span multiple Masters, the total number of threads is unlikely to be obtained in real time.
+2. Check the single-Master thread pool: if the pool is already full, let the thread fail directly.
+3. Add a Command type for insufficient resources: if the thread pool is insufficient, suspend the main process. Later, when the thread pool has free threads, the process suspended for insufficient resources can be woken up and executed again.
+
+Note: the Master Scheduler thread acquires Commands in FIFO order.
+
+So we chose the third option to solve the problem of insufficient threads. A simplified sketch of the idea is shown below.
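+
+The sketch below illustrates the idea of option 3 in plain Java (all names here are hypothetical and this is not the real DolphinScheduler code): when the thread pool has no free capacity, the process instance is parked as a "waiting thread" command instead of occupying a thread, and can be woken up later.
+
+```java
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+public class WaitingThreadSketch {
+    enum Decision { SUBMITTED, PARKED_AS_WAITING_THREAD }
+
+    // Hypothetical helper: if no thread is free, park the process instance
+    // as a "waiting thread" command instead of blocking a thread.
+    static Decision submitOrPark(ThreadPoolExecutor pool, Runnable processInstance) {
+        if (pool.getActiveCount() >= pool.getMaximumPoolSize()) {
+            return Decision.PARKED_AS_WAITING_THREAD;
+        }
+        pool.execute(processInstance);
+        return Decision.SUBMITTED;
+    }
+
+    public static void main(String[] args) throws InterruptedException {
+        ThreadPoolExecutor pool = new ThreadPoolExecutor(
+                1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
+        System.out.println(submitOrPark(pool, () -> sleep(500)));  // SUBMITTED
+        Thread.sleep(100);                                         // let the first instance start
+        System.out.println(submitOrPark(pool, () -> sleep(500)));  // PARKED_AS_WAITING_THREAD
+        pool.shutdown();
+    }
+
+    static void sleep(long ms) {
+        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
+    }
+}
+```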
+
+
+##### Four. Fault-tolerant design
+Fault tolerance is divided into service downtime fault tolerance and task retry; service downtime fault tolerance is further divided into Master fault tolerance and Worker fault tolerance.
+
+###### 1. Downtime fault tolerance
+
+The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle is shown in the figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
+ </p>
+Here, the Master watches the ZooKeeper directories of the other Masters and the Workers. If a remove event is received, fault tolerance for the affected process instances or task instances is performed according to the specific business logic.
+
+
+
+- Master fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_master.png" alt="Master fault tolerance flowchart"  width="40%" />
+ </p>
+After ZooKeeper Master fault tolerance completes, the process is rescheduled by the Scheduler thread in DolphinScheduler: it traverses the DAG to find the "running" and "submitted successfully" tasks. For "running" tasks it monitors the status of their task instances; for "submitted successfully" tasks it checks whether the task already exists in the task queue. If it does, the task instance status is likewise monitored; if it does not, the task instance is resubmitted.
+
+
+
+- Worker fault tolerance flowchart:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
+ </p>
+
+Once the Master Scheduler thread finds a task instance in the "fault tolerant" state, it takes over the task and resubmits it.
+
+Note: Due to "network jitter", a node may briefly lose its heartbeat with ZooKeeper and a remove event for that node may be triggered. For this situation we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
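+
+For reference, a minimal sketch of reacting to a removed registration node with Apache Curator's `PathChildrenCache` (illustrative only; the watched path and the handler are assumptions, not the actual DolphinScheduler code):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.cache.PathChildrenCache;
+import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class WorkerRemovedListener {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // Watch the (assumed) directory where Workers register their ephemeral nodes
+        PathChildrenCache cache = new PathChildrenCache(client, "/dolphinscheduler/nodes/worker", true);
+        cache.getListenable().addListener((c, event) -> {
+            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
+                String deadWorker = event.getData().getPath();
+                // Here the Master would mark that Worker's task instances as
+                // "fault tolerant" so they can be taken over and resubmitted.
+                System.out.println("Worker removed, start fault tolerance for: " + deadWorker);
+            }
+        });
+        cache.start();
+        Thread.sleep(Long.MAX_VALUE);  // keep watching
+    }
+}
+```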
+
+###### 2. Task failure retry
+
+Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is performed automatically by the scheduling system. For example, if a Shell task is configured with 3 retries, the system will try to run it up to 3 more times after it fails.
+- Process failure recovery is at the process level and is performed manually. Recovery can only start **from the failed node** or **from the current node**
+- Process failure rerun is also at the process level and is performed manually; the rerun starts from the start node
+
+
+
+Back to the topic: we divide the task nodes in a workflow into two types.
+
+- One is the business node, which corresponds to an actual script or processing statement, such as the Shell node, MR node, Spark node, and dependent node.
+
+- The other is the logical node, which does not run an actual script or statement but only handles the logic of the process flow, such as the sub-process node.
+
+Every **business node** can be configured with a number of failure retries. When such a task node fails, it is retried automatically until it succeeds or the configured number of retries is exceeded. A **logical node** does not support failure retry itself, but the tasks inside it do.
+
+If a task in the workflow fails and reaches its maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun manually or recovered.
+
+
+
+##### Five. Task priority design
+In the early scheduling design, without priorities and with fair scheduling, a task submitted first might finish at the same time as a task submitted later, and neither process nor task priority could be set. We have therefore redesigned this, and the current design is as follows:
+
+-  Tasks are processed from high to low in this order: the **priority of different process instances** takes precedence over the **priority of tasks within the same process instance**, which in turn takes precedence over the **submission order of tasks within the same process**.
+    - The concrete implementation parses the priority from the task instance JSON and then stores a **process instance priority_process instance id_task priority_task id** key in the ZooKeeper task queue; when reading from the task queue, a string comparison yields the tasks that should execute first (see the sketch after this list)
+
+        - The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="Process priority configuration"  width="40%" />
+             </p>
+
+        - The priority of the task is also divided into 5 levels, followed by HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="Task priority configuration"  width="35%" />
+             </p>
+
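+The sketch below illustrates the queue-key idea referenced above (an illustration only, assuming priorities are encoded as digits, e.g. HIGHEST=0 through LOWEST=4, so that plain string comparison orders the keys correctly):
+
+```java
+import java.util.TreeSet;
+
+public class TaskQueueKeyExample {
+
+    // Assumed encoding: priorities mapped to digits (HIGHEST=0 ... LOWEST=4) so that a plain
+    // string comparison of the whole key sorts higher-priority tasks first.
+    // (Ids of different digit lengths would need zero-padding for strict correctness.)
+    static String key(int processPriority, long processInstanceId, int taskPriority, long taskId) {
+        return processPriority + "_" + processInstanceId + "_" + taskPriority + "_" + taskId;
+    }
+
+    public static void main(String[] args) {
+        TreeSet<String> queue = new TreeSet<>();       // lexicographically sorted, like the queue keys in ZooKeeper
+        queue.add(key(2, 101, 2, 7));                  // MEDIUM process, MEDIUM task
+        queue.add(key(0, 102, 3, 8));                  // HIGHEST process, LOW task
+        queue.add(key(2, 101, 0, 9));                  // MEDIUM process, HIGHEST task
+
+        System.out.println(queue.first());             // 0_102_3_8 -> the highest-priority process goes first
+    }
+}
+```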
+
+##### Six. Log access with Logback and Netty
+
+-  Since the Web (UI) and the Worker are not necessarily on the same machine, viewing a log cannot be done like querying a local file. There are two options:
+  -  Put the logs into an ES search engine
+  -  Obtain remote log information through Netty communication
+
+-  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to implement remote access to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+
+- We use custom Logback FileAppender and Filter implementations so that each task instance generates its own log file.
+- The FileAppender is mainly implemented as follows:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+
+Logs are generated in the form /process definition id/process instance id/task instance id.log
+
+- The Filter matches thread names starting with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+### 3.Module introduction
+- dolphinscheduler-alert: alert module, providing the AlertServer service.
+
+- dolphinscheduler-api: web application module, providing the ApiServer service.
+
+- dolphinscheduler-common: common constants, enumerations, utility classes, data structures and base classes.
+
+- dolphinscheduler-dao: provides operations such as database access.
+
+- dolphinscheduler-remote: Netty-based client and server.
+
+- dolphinscheduler-server: the MasterServer and WorkerServer services.
+
+- dolphinscheduler-service: service module, including Quartz, ZooKeeper and log client access services, making it easy for the server and api modules to call them.
+
+- dolphinscheduler-ui: front-end module.
+### Summary
+From the scheduling perspective, this article gives a preliminary introduction to the architecture principles and implementation ideas of the distributed big-data workflow scheduling system DolphinScheduler. To be continued.
+
+
diff --git a/docs/en-us/1.3.3/user_doc/cluster-deployment.md b/content/en-us/docs/1.3.3/user_doc/cluster-deployment.md
similarity index 100%
rename from docs/en-us/1.3.3/user_doc/cluster-deployment.md
rename to content/en-us/docs/1.3.3/user_doc/cluster-deployment.md
diff --git a/docs/en-us/1.3.2/user_doc/configuration-file.md b/content/en-us/docs/1.3.3/user_doc/configuration-file.md
similarity index 98%
rename from docs/en-us/1.3.2/user_doc/configuration-file.md
rename to content/en-us/docs/1.3.3/user_doc/configuration-file.md
index 6a5ab10..4d3a155 100644
--- a/docs/en-us/1.3.2/user_doc/configuration-file.md
+++ b/content/en-us/docs/1.3.3/user_doc/configuration-file.md
@@ -1,408 +1,407 @@
-<!-- markdown-link-check-disable -->
-
-# Foreword
-This document is a description of the dolphinscheduler configuration file, and the version is for dolphinscheduler-1.3.x.
-
-# Directory Structure
-All configuration files of dolphinscheduler are currently in the [conf] directory.
-
-For a more intuitive understanding of the location of the [conf] directory and the configuration files it contains, please see the simplified description of the dolphinscheduler installation directory below.
-
-This article mainly talks about the configuration file of dolphinscheduler. I won't go into details in other parts.
-
-[Note: The following dolphinscheduler is referred to as DS.]
-```
-
-├─bin                               DS command storage directory
-│  ├─dolphinscheduler-daemon.sh         Activate/deactivate DS service script
-│  ├─start-all.sh                       Start all DS services according to the configuration file
-│  ├─stop-all.sh                        Close all DS services according to the configuration file
-├─conf                              Configuration file directory
-│  ├─application-api.properties         api service configuration file
-│  ├─datasource.properties              Database configuration file
-│  ├─zookeeper.properties               zookeeper configuration file
-│  ├─master.properties                  Master service configuration file
-│  ├─worker.properties                  Worker service configuration file
-│  ├─quartz.properties                  Quartz service configuration file
-│  ├─common.properties                  Public service [storage] configuration file
-│  ├─alert.properties                   alert service configuration file
-│  ├─config                             Environment variable configuration folder
-│      ├─install_config.conf                DS environment variable configuration script [for DS installation/startup]
-│  ├─env                                Run script environment variable configuration directory
-│      ├─dolphinscheduler_env.sh            Run the script to load the environment variable configuration file [such as: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-│  ├─org                                mybatis mapper file directory
-│  ├─i18n                               i18n configuration file directory
-│  ├─logback-api.xml                    api service log configuration file
-│  ├─logback-master.xml                 Master service log configuration file
-│  ├─logback-worker.xml                 Worker service log configuration file
-│  ├─logback-alert.xml                  alert service log configuration file
-├─sql                               DS metadata creation and upgrade sql file
-│  ├─create                             Create SQL script directory
-│  ├─upgrade                            Upgrade SQL script directory
-│  ├─dolphinscheduler-postgre.sql       PostgreSQL database initialization script
-│  ├─dolphinscheduler_mysql.sql         MySQL database initialization script
-│  ├─soft_version                       Current DS version identification file
-├─script                            DS service deployment, database creation/upgrade script directory
-│  ├─create-dolphinscheduler.sh         DS database initialization script      
-│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script                
-│  ├─monitor-server.sh                  DS service monitoring startup script               
-│  ├─scp-hosts.sh                       Install file transfer script                                                    
-│  ├─remove-zk-node.sh                  Clean Zookeeper cache file script       
-├─ui                                Front-end WEB resource directory
-├─lib                               DS dependent jar storage directory
-├─install.sh                        Automatically install DS service script
-
-
-```
-
-
-# Detailed configuration file
-
-Serial number| Service classification |  Configuration file|
-|--|--|--|
-1|Activate/deactivate DS service script|dolphinscheduler-daemon.sh
-2|Database connection configuration | datasource.properties
-3|Zookeeper connection configuration|zookeeper.properties
-4|Common [storage] configuration|common.properties
-5|API service configuration|application-api.properties
-6|Master service configuration|master.properties
-7|Worker service configuration|worker.properties
-8|Alert service configuration|alert.properties
-9|Quartz configuration|quartz.properties
-10|DS environment variable configuration script [for DS installation/startup]|install_config.conf
-11|Run the script to load the environment variable configuration file <br />[for example: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
-12|Service log configuration files|api service log configuration file: logback-api.xml <br /> Master service log configuration file: logback-master.xml <br /> Worker service log configuration file: logback-worker.xml <br /> alert service log configuration file: logback-alert.xml
-
-
-## 1.dolphinscheduler-daemon.sh [Activate/deactivate DS service script]
-The dolphinscheduler-daemon.sh script is responsible for starting and stopping DS services.
-start-all.sh and stop-all.sh ultimately start and stop the whole cluster through dolphinscheduler-daemon.sh.
-Currently, DS ships with only basic JVM settings; please tune the JVM parameters according to your actual resources.
-
-The default simplified parameters are as follows:
-```bash
-export DOLPHINSCHEDULER_OPTS="
--server 
--Xmx16g 
--Xms1g 
--Xss512k 
--XX:+UseConcMarkSweepGC 
--XX:+CMSParallelRemarkEnabled 
--XX:+UseFastAccessorMethods 
--XX:+UseCMSInitiatingOccupancyOnly 
--XX:CMSInitiatingOccupancyFraction=70
-"
-```
-
-> It is not recommended to set "-XX:DisableExplicitGC". DS uses Netty for communication, and setting this parameter may cause memory leaks.
-
-## 2.datasource.properties [Database Connectivity]
-DS uses Druid to manage database connections. The default simplified configuration is as follows; an example snippet is given after the table.
-|Parameter | Defaults| Description|
-|--|--|--|
-spring.datasource.driver-class-name| |Database driver
-spring.datasource.url||Database connection address
-spring.datasource.username||Database username
-spring.datasource.password||Database password
-spring.datasource.initialSize|5| Initial number of connections in the pool
-spring.datasource.minIdle|5| Minimum number of idle connections in the pool
-spring.datasource.maxActive|5| Maximum number of active connections in the pool
-spring.datasource.maxWait|60000| Maximum waiting time
-spring.datasource.timeBetweenEvictionRunsMillis|60000| Connection detection cycle
-spring.datasource.timeBetweenConnectErrorMillis|60000| Retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| The minimum time a connection remains idle without being evicted
-spring.datasource.validationQuery|SELECT 1|SQL to check whether the connection is valid
-spring.datasource.validationQueryTimeout|3| Timeout for checking whether the connection is valid [seconds]
-spring.datasource.testWhileIdle|true| Check the connection when it is borrowed: if its idle time is greater than timeBetweenEvictionRunsMillis, run validationQuery to check whether the connection is valid
-spring.datasource.testOnBorrow|true| Execute validationQuery to check whether the connection is valid when borrowing a connection
-spring.datasource.testOnReturn|false| Execute validationQuery to check whether the connection is valid when returning a connection
-spring.datasource.defaultAutoCommit|true| Whether to enable auto-commit
-spring.datasource.keepAlive|true| For connections within the minIdle number in the connection pool, if the idle time exceeds minEvictableIdleTimeMillis, the keepAlive operation will be performed.
-spring.datasource.poolPreparedStatements|true| Open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| To enable PSCache, this must be set to a value greater than 0; when it is greater than 0, poolPreparedStatements is automatically switched to true
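-
-For reference, here is a minimal `datasource.properties` sketch for a MySQL metadata database; the driver class, JDBC URL, schema name and credentials are placeholders for illustration, and the pool values simply mirror the defaults listed above:
-
-```properties
-# Placeholder JDBC driver and connection for the DS metadata database
-spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
-spring.datasource.username=ds_user
-spring.datasource.password=your_password
-# Druid pool settings, mirroring the documented defaults
-spring.datasource.initialSize=5
-spring.datasource.minIdle=5
-spring.datasource.maxActive=5
-spring.datasource.validationQuery=SELECT 1
-```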
-
-
-## 3.zookeeper.properties [Zookeeper connection configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-zookeeper.quorum|localhost:2181| zk cluster connection information
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS stores root directory in zookeeper
-zookeeper.session.timeout|60000|  Session timeout
-zookeeper.connection.timeout|30000|  Connection timeout
-zookeeper.retry.base.sleep|100| Base sleep time between retries
-zookeeper.retry.max.sleep|30000| Maximum sleep time between retries
-zookeeper.retry.maxtime|10|Maximum number of retries
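-
-As an illustration only (the hosts below are placeholders for your own zookeeper ensemble), a typical `zookeeper.properties` might look like:
-
-```properties
-# Comma-separated zookeeper ensemble addresses (placeholder hosts)
-zookeeper.quorum=zk1:2181,zk2:2181,zk3:2181
-# Root znode under which DS stores its data
-zookeeper.dolphinscheduler.root=/dolphinscheduler
-# Timeouts in milliseconds, matching the documented defaults
-zookeeper.session.timeout=60000
-zookeeper.connection.timeout=30000
-```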
-
-
-## 4.common.properties [hadoop, s3, yarn configuration]
-The common.properties file is currently mainly used for hadoop/s3a related configuration; an example snippet is given after the table.
-|Parameter |Defaults| Description| 
-|--|--|--|
-resource.storage.type|NONE|Resource file storage type: HDFS, S3, NONE
-resource.upload.path|/dolphinscheduler|Resource file storage path
-data.basedir.path|/tmp/dolphinscheduler|Local working directory for storing temporary files
-hadoop.security.authentication.startup.state|false|Whether to enable kerberos authentication for hadoop
-java.security.krb5.conf.path|/opt/krb5.conf|Path to the kerberos configuration file
-login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos login user
-login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos login user keytab
-resource.view.suffixs|txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|File formats supported by the resource center
-hdfs.root.user|hdfs|If the storage type is HDFS, you need to configure users with corresponding operation permissions
-fs.defaultFS|hdfs://mycluster:8020|Request address. If resource.storage.type=S3, the value is similar to: s3a://dolphinscheduler. If resource.storage.type=HDFS and hadoop is configured with HA, you need to copy the core-site.xml and hdfs-site.xml files to the conf directory
-fs.s3a.endpoint||s3 endpoint address
-fs.s3a.access.key||s3 access key
-fs.s3a.secret.key||s3 secret key
-yarn.resourcemanager.ha.rm.ids||yarn resourcemanager address. If the resourcemanager has HA enabled, enter the IP addresses of the HA nodes (separated by commas); if the resourcemanager is a single node, the value can be empty
-yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|If the resourcemanager has HA enabled or no resourcemanager is used, keep the default value; if the resourcemanager is a single node, replace ds1 with the hostname of the resourcemanager
-dolphinscheduler.env.path|env/dolphinscheduler_env.sh|Run the script to load the environment variable configuration file [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...]
-development.state|false|Whether development mode is enabled
-kerberos.expire.time|7|kerberos expiration time, an integer in days
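-
-The sketch below shows only the HDFS-related part of `common.properties` as an example; the cluster name, hostname and user are placeholders, and entries that do not apply to your environment should be omitted or adjusted:
-
-```properties
-# Store resource files in HDFS
-resource.storage.type=HDFS
-resource.upload.path=/dolphinscheduler
-# HDFS user with permission on the upload path (placeholder)
-hdfs.root.user=hdfs
-# NameNode address; for an HA cluster, copy core-site.xml and hdfs-site.xml to the conf directory instead
-fs.defaultFS=hdfs://mycluster:8020
-# Single-node resourcemanager example (placeholder hostname ds1)
-yarn.application.status.address=http://ds1:8088/ws/v1/cluster/apps/%s
-```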
-
-
-## 5.application-api.properties [API service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-server.port|12345|API service communication port
-server.servlet.session.timeout|7200|session timeout
-server.servlet.context-path|/dolphinscheduler |Request path
-spring.servlet.multipart.max-file-size|1024MB|Maximum upload file size
-spring.servlet.multipart.max-request-size|1024MB|Maximum request size
-server.jetty.max-http-post-size|5000000|Maximum HTTP POST size accepted by the Jetty service
-spring.messages.encoding|UTF-8|Request encoding
-spring.jackson.time-zone|GMT+8|Set time zone
-spring.messages.basename|i18n/messages|i18n configuration
-security.authentication.type|PASSWORD|Authentication type
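-
-For example, the core lines of `application-api.properties` with the documented defaults would look roughly as follows (only a sketch; adjust the port and upload limits to your needs):
-
-```properties
-# API service listen port and request path
-server.port=12345
-server.servlet.context-path=/dolphinscheduler
-# Upload limits for resource files
-spring.servlet.multipart.max-file-size=1024MB
-spring.servlet.multipart.max-request-size=1024MB
-```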
-
-
-## 6.master.properties [Master service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-master.listen.port|5678|master listen port
-master.exec.threads|100|master execute thread number to limit process instances in parallel
-master.exec.task.num|20|master execute task number in parallel per process instance
-master.dispatch.task.num|3|master dispatch task number per batch
-master.host.selector|LowerWeight|master host selector to select a suitable worker, default value: LowerWeight. Optional values include Random, RoundRobin, LowerWeight
-master.heartbeat.interval|10|master heartbeat interval, the unit is second
-master.task.commit.retryTimes|5|master commit task retry times
-master.task.commit.interval|1000|master commit task interval, the unit is millisecond
-master.max.cpuload.avg|-1|master max cpuload avg; the master can schedule only while this value is higher than the system cpu load average. The default -1 means the number of cpu cores * 2
-master.reserved.memory|0.3|master reserved memory; the master can schedule only while this value is lower than the system's available memory. The default is 0.3, the unit is G
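-
-The snippet below sketches a `master.properties` that lowers the parallelism for a small machine; the numbers are illustrative only, not recommended values:
-
-```properties
-master.listen.port=5678
-# Limit the number of process instances handled in parallel (illustrative value)
-master.exec.threads=50
-# Tasks executed in parallel per process instance (illustrative value)
-master.exec.task.num=10
-# Pick workers by the lower-weight strategy described above
-master.host.selector=LowerWeight
-```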
-
-
-## 7.worker.properties [Worker service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-worker.listen.port|1234|worker listen port
-worker.exec.threads|100|worker execute thread number to limit task instances in parallel
-worker.heartbeat.interval|10|worker heartbeat interval, the unit is second
-worker.max.cpuload.avg|-1|worker max cpuload avg; tasks can be dispatched to the worker only while this value is higher than the system cpu load average. The default -1 means the number of cpu cores * 2
-worker.reserved.memory|0.3|worker reserved memory; tasks can be dispatched to the worker only while this value is lower than the system's available memory. The default is 0.3, the unit is G
-worker.groups|default|worker groups separated by comma, like 'worker.groups=default,test' <br> worker will join corresponding group according to this config when startup
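-
-Similarly, a sketch of a `worker.properties` that registers the worker in two groups (the group name `test` is a placeholder):
-
-```properties
-worker.listen.port=1234
-# Threads available for task instances (documented default)
-worker.exec.threads=100
-# The worker joins these groups at startup (placeholder names)
-worker.groups=default,test
-```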
-
-
-## 8.alert.properties [Alert service configuration]
-|Parameter |Defaults| Description| 
-|--|--|--|
-alert.type|EMAIL|Alarm type
-mail.protocol|SMTP| Mail server protocol
-mail.server.host|xxx.xxx.com|Mail server address
-mail.server.port|25|Mail server port
... 12796 lines suppressed ...