Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/03/30 10:56:43 UTC

[dolphinscheduler] branch migrate-dev-docs-to-main created (now a0b43fe)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a change to branch migrate-dev-docs-to-main
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git.


      at a0b43fe  [doc] Migrate dev doc from website repository

This branch includes the following new commits:

     new a0b43fe  [doc] Migrate dev doc from website repository

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


[dolphinscheduler] 01/01: [doc] Migrate dev doc from website repository

Posted by zh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch migrate-dev-docs-to-main
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git

commit a0b43feb0cfe4419ed93766c403d1114067d2a84
Author: Jiajie Zhong <zh...@hotmail.com>
AuthorDate: Wed Mar 30 18:56:18 2022 +0800

    [doc] Migrate dev doc from website repository
---
 docs/config/docsdev.js                             |  623 +++++++++++
 docs/config/index.md.jsx                           |  107 ++
 docs/config/site.js                                |  376 +++++++
 docs/docs/en/about/glossary.md                     |   79 ++
 docs/docs/en/about/hardware.md                     |   48 +
 docs/docs/en/about/introduction.md                 |   19 +
 docs/docs/en/architecture/cache.md                 |   42 +
 docs/docs/en/architecture/configuration.md         |  424 ++++++++
 docs/docs/en/architecture/design.md                |  282 +++++
 docs/docs/en/architecture/load-balance.md          |   59 +
 docs/docs/en/architecture/metadata.md              |  193 ++++
 docs/docs/en/architecture/task-structure.md        | 1114 +++++++++++++++++++
 docs/docs/en/faq.md                                |  689 ++++++++++++
 .../docs/en/guide/alert/alert_plugin_user_guide.md |   14 +
 docs/docs/en/guide/alert/dingtalk.md               |   27 +
 docs/docs/en/guide/alert/enterprise-webexteams.md  |   24 +
 docs/docs/en/guide/alert/enterprise-wechat.md      |   14 +
 docs/docs/en/guide/alert/telegram.md               |   42 +
 docs/docs/en/guide/datasource/hive.md              |   39 +
 docs/docs/en/guide/datasource/introduction.md      |    6 +
 docs/docs/en/guide/datasource/mysql.md             |   14 +
 docs/docs/en/guide/datasource/postgresql.md        |   13 +
 docs/docs/en/guide/datasource/spark.md             |   13 +
 docs/docs/en/guide/expansion-reduction.md          |  248 +++++
 docs/docs/en/guide/flink-call.md                   |  123 +++
 docs/docs/en/guide/homepage.md                     |    5 +
 docs/docs/en/guide/installation/cluster.md         |   39 +
 docs/docs/en/guide/installation/docker.md          | 1025 ++++++++++++++++++
 docs/docs/en/guide/installation/kubernetes.md      |  754 +++++++++++++
 docs/docs/en/guide/installation/pseudo-cluster.md  |  201 ++++
 .../docs/en/guide/installation/skywalking-agent.md |   74 ++
 docs/docs/en/guide/installation/standalone.md      |   42 +
 docs/docs/en/guide/monitor.md                      |   32 +
 docs/docs/en/guide/open-api.md                     |   69 ++
 docs/docs/en/guide/parameter/built-in.md           |   48 +
 docs/docs/en/guide/parameter/context.md            |   66 ++
 docs/docs/en/guide/parameter/global.md             |   19 +
 docs/docs/en/guide/parameter/local.md              |   19 +
 docs/docs/en/guide/parameter/priority.md           |   40 +
 docs/docs/en/guide/project/project-list.md         |   18 +
 docs/docs/en/guide/project/task-instance.md        |   11 +
 docs/docs/en/guide/project/workflow-definition.md  |  114 ++
 docs/docs/en/guide/project/workflow-instance.md    |   62 ++
 docs/docs/en/guide/resource.md                     |  165 +++
 docs/docs/en/guide/security.md                     |  151 +++
 docs/docs/en/guide/start/quick-start.md            |   62 ++
 docs/docs/en/guide/task/conditions.md              |   36 +
 docs/docs/en/guide/task/datax.md                   |   63 ++
 docs/docs/en/guide/task/dependent.md               |   27 +
 docs/docs/en/guide/task/emr.md                     |   60 ++
 docs/docs/en/guide/task/flink.md                   |   69 ++
 docs/docs/en/guide/task/http.md                    |   47 +
 docs/docs/en/guide/task/map-reduce.md              |   73 ++
 docs/docs/en/guide/task/pigeon.md                  |   19 +
 docs/docs/en/guide/task/python.md                  |   15 +
 docs/docs/en/guide/task/shell.md                   |   43 +
 docs/docs/en/guide/task/spark.md                   |   68 ++
 docs/docs/en/guide/task/sql.md                     |   43 +
 docs/docs/en/guide/task/stored-procedure.md        |   13 +
 docs/docs/en/guide/task/sub-process.md             |   46 +
 docs/docs/en/guide/task/switch.md                  |   39 +
 docs/docs/en/guide/upgrade.md                      |   84 ++
 docs/docs/en/history-versions.md                   |   70 ++
 docs/docs/zh/about/glossary.md                     |   58 +
 docs/docs/zh/about/hardware.md                     |   47 +
 docs/docs/zh/about/introduction.md                 |   12 +
 docs/docs/zh/architecture/cache.md                 |   42 +
 docs/docs/zh/architecture/configuration.md         |  406 +++++++
 docs/docs/zh/architecture/design.md                |  287 +++++
 docs/docs/zh/architecture/load-balance.md          |   58 +
 docs/docs/zh/architecture/metadata.md              |  185 ++++
 docs/docs/zh/architecture/task-structure.md        | 1134 ++++++++++++++++++++
 docs/docs/zh/faq.md                                |  689 ++++++++++++
 .../docs/zh/guide/alert/alert_plugin_user_guide.md |   12 +
 docs/docs/zh/guide/alert/dingtalk.md               |   26 +
 docs/docs/zh/guide/alert/enterprise-webexteams.md  |   25 +
 docs/docs/zh/guide/alert/enterprise-wechat.md      |   13 +
 docs/docs/zh/guide/alert/telegram.md               |   41 +
 docs/docs/zh/guide/datasource/hive.md              |   40 +
 docs/docs/zh/guide/datasource/introduction.md      |    6 +
 docs/docs/zh/guide/datasource/mysql.md             |   13 +
 docs/docs/zh/guide/datasource/postgresql.md        |   13 +
 docs/docs/zh/guide/datasource/spark.md             |   19 +
 docs/docs/zh/guide/expansion-reduction.md          |  245 +++++
 docs/docs/zh/guide/flink-call.md                   |  150 +++
 docs/docs/zh/guide/homepage.md                     |    5 +
 docs/docs/zh/guide/installation/cluster.md         |   35 +
 docs/docs/zh/guide/installation/kubernetes.md      |  755 +++++++++++++
 docs/docs/zh/guide/installation/pseudo-cluster.md  |  200 ++++
 .../docs/zh/guide/installation/skywalking-agent.md |   74 ++
 docs/docs/zh/guide/installation/standalone.md      |   42 +
 docs/docs/zh/guide/monitor.md                      |   32 +
 docs/docs/zh/guide/open-api.md                     |   65 ++
 docs/docs/zh/guide/parameter/built-in.md           |   49 +
 docs/docs/zh/guide/parameter/context.md            |   69 ++
 docs/docs/zh/guide/parameter/global.md             |   19 +
 docs/docs/zh/guide/parameter/local.md              |   19 +
 docs/docs/zh/guide/parameter/priority.md           |   40 +
 docs/docs/zh/guide/project/project-list.md         |   17 +
 docs/docs/zh/guide/project/task-instance.md        |   11 +
 docs/docs/zh/guide/project/workflow-definition.md  |  109 ++
 docs/docs/zh/guide/project/workflow-instance.md    |   61 ++
 docs/docs/zh/guide/resource.md                     |  168 +++
 docs/docs/zh/guide/security.md                     |  150 +++
 docs/docs/zh/guide/start/docker.md                 | 1024 ++++++++++++++++++
 docs/docs/zh/guide/start/quick-start.md            |   60 ++
 docs/docs/zh/guide/task/conditions.md              |   36 +
 docs/docs/zh/guide/task/datax.md                   |   59 +
 docs/docs/zh/guide/task/dependent.md               |   27 +
 docs/docs/zh/guide/task/emr.md                     |   58 +
 docs/docs/zh/guide/task/flink.md                   |   69 ++
 docs/docs/zh/guide/task/http.md                    |   48 +
 docs/docs/zh/guide/task/map-reduce.md              |   73 ++
 docs/docs/zh/guide/task/pigeon.md                  |   19 +
 docs/docs/zh/guide/task/python.md                  |   15 +
 docs/docs/zh/guide/task/shell.md                   |   48 +
 docs/docs/zh/guide/task/spark.md                   |   69 ++
 docs/docs/zh/guide/task/sql.md                     |   43 +
 docs/docs/zh/guide/task/stored-procedure.md        |   12 +
 docs/docs/zh/guide/task/sub-process.md             |   47 +
 docs/docs/zh/guide/task/switch.md                  |   37 +
 docs/docs/zh/guide/upgrade.md                      |   82 ++
 docs/docs/zh/history-versions.md                   |   70 ++
 123 files changed, 15800 insertions(+)

diff --git a/docs/config/docsdev.js b/docs/config/docsdev.js
new file mode 100644
index 0000000..4155faf
--- /dev/null
+++ b/docs/config/docsdev.js
@@ -0,0 +1,623 @@
+export default {
+    'en-us': {
+        sidemenu: [
+            {
+                title: 'About',
+                children: [
+                    {
+                        title: 'Introduction',
+                        link: '/en-us/docs/dev/user_doc/about/introduction.html',
+                    },
+                    {
+                        title: 'Hardware Environment',
+                        link: '/en-us/docs/dev/user_doc/about/hardware.html',
+                    },
+                    {
+                        title: 'Glossary',
+                        link: '/en-us/docs/dev/user_doc/about/glossary.html',
+                    },
+                ],
+            },
+            {
+                title: 'Quick Start',
+                children: [
+                    {
+                        title: 'Quick Start',
+                        link: '/en-us/docs/dev/user_doc/guide/start/quick-start.html',
+                    },
+                    {
+                        title: 'Docker Deployment',
+                        link: '/en-us/docs/dev/user_doc/guide/start/docker.html',
+                    },
+                ],
+            },
+            {
+                title: 'Installation',
+                children: [
+                    {
+                        title: 'Standalone Deployment',
+                        link: '/en-us/docs/dev/user_doc/guide/installation/standalone.html',
+                    },
+                    {
+                        title: 'Pseudo Cluster Deployment',
+                        link: '/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html',
+                    },
+                    {
+                        title: 'Cluster Deployment',
+                        link: '/en-us/docs/dev/user_doc/guide/installation/cluster.html',
+                    },
+                    {
+                        title: 'Kubernetes Deployment',
+                        link: '/en-us/docs/dev/user_doc/guide/installation/kubernetes.html',
+                    },
+                ],
+            },
+            {
+                title: 'Introduction to Functions',
+                children: [
+                    {
+                        title: 'Workflow Overview',
+                        link: '/en-us/docs/dev/user_doc/guide/homepage.html',
+                    },
+                    {
+                        title: 'Project',
+                        children: [
+                            {
+                                title: 'Project List',
+                                link: '/en-us/docs/dev/user_doc/guide/project/project-list.html',
+                            },
+                            {
+                                title: 'Workflow Definition',
+                                link: '/en-us/docs/dev/user_doc/guide/project/workflow-definition.html',
+                            },
+                            {
+                                title: 'Workflow Instance',
+                                link: '/en-us/docs/dev/user_doc/guide/project/workflow-instance.html',
+                            },
+                            {
+                                title: 'Task Instance',
+                                link: '/en-us/docs/dev/user_doc/guide/project/task-instance.html',
+                            },
+                        ]
+                    },
+                    {
+                        title: 'Task',
+                        children: [
+                            {
+                                title: 'Shell',
+                                link: '/en-us/docs/dev/user_doc/guide/task/shell.html',
+                            },
+                            {
+                                title: 'SubProcess',
+                                link: '/en-us/docs/dev/user_doc/guide/task/sub-process.html',
+                            },
+                            {
+                                title: 'Dependent',
+                                link: '/en-us/docs/dev/user_doc/guide/task/dependent.html',
+                            },
+                            {
+                                title: 'Stored Procedure',
+                                link: '/en-us/docs/dev/user_doc/guide/task/stored-procedure.html',
+                            },
+                            {
+                                title: 'SQL',
+                                link: '/en-us/docs/dev/user_doc/guide/task/sql.html',
+                            },
+                            {
+                                title: 'Spark',
+                                link: '/en-us/docs/dev/user_doc/guide/task/spark.html',
+                            },
+                            {
+                                title: 'MapReduce',
+                                link: '/en-us/docs/dev/user_doc/guide/task/map-reduce.html',
+                            },
+                            {
+                                title: 'Python',
+                                link: '/en-us/docs/dev/user_doc/guide/task/python.html',
+                            },
+                            {
+                                title: 'Flink',
+                                link: '/en-us/docs/dev/user_doc/guide/task/flink.html',
+                            },
+                            {
+                                title: 'HTTP',
+                                link: '/en-us/docs/dev/user_doc/guide/task/http.html',
+                            },
+                            {
+                                title: 'DataX',
+                                link: '/en-us/docs/dev/user_doc/guide/task/datax.html',
+                            },
+                            {
+                                title: 'Pigeon',
+                                link: '/en-us/docs/dev/user_doc/guide/task/pigeon.html',
+                            },
+                            {
+                                title: 'Conditions',
+                                link: '/en-us/docs/dev/user_doc/guide/task/conditions.html',
+                            },
+                            {
+                                title: 'Switch',
+                                link: '/en-us/docs/dev/user_doc/guide/task/switch.html',
+                            },
+                            {
+                                title: 'Amazon EMR',
+                                link: '/en-us/docs/dev/user_doc/guide/task/emr.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: 'Parameter',
+                        children: [
+                            {
+                                title: 'Built-in Parameter',
+                                link: '/en-us/docs/dev/user_doc/guide/parameter/built-in.html',
+                            },
+                            {
+                                title: 'Global Parameter',
+                                link: '/en-us/docs/dev/user_doc/guide/parameter/global.html',
+                            },
+                            {
+                                title: 'Local Parameter',
+                                link: '/en-us/docs/dev/user_doc/guide/parameter/local.html',
+                            },
+                            {
+                                title: 'Parameter Context',
+                                link: '/en-us/docs/dev/user_doc/guide/parameter/context.html',
+                            },
+                            {
+                                title: 'Parameter Priority',
+                                link: '/en-us/docs/dev/user_doc/guide/parameter/priority.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: 'Data Source',
+                        children: [
+                            {
+                                title: 'Introduction',
+                                link: '/en-us/docs/dev/user_doc/guide/datasource/introduction.html',
+                            },
+                            {
+                                title: 'MySQL',
+                                link: '/en-us/docs/dev/user_doc/guide/datasource/mysql.html',
+                            },
+                            {
+                                title: 'PostgreSQL',
+                                link: '/en-us/docs/dev/user_doc/guide/datasource/postgresql.html',
+                            },
+                            {
+                                title: 'HIVE',
+                                link: '/en-us/docs/dev/user_doc/guide/datasource/hive.html',
+                            },
+                            {
+                                title: 'Spark',
+                                link: '/en-us/docs/dev/user_doc/guide/datasource/spark.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: 'Alert',
+                        children: [
+                            {
+                                title: 'Alert Component User Guide ',
+                                link: '/en-us/docs/dev/user_doc/guide/alert/alert_plugin_user_guide.html',
+                            },
+                            {
+                                title: 'Telegram',
+                                link: '/en-us/docs/dev/user_doc/guide/alert/telegram.html',
+                            },
+                            {
+                                title: 'Ding Talk',
+                                link: '/en-us/docs/dev/user_doc/guide/alert/dingtalk.html',
+                            },
+                            {
+                                title: 'Enterprise Wechat',
+                                link: '/en-us/docs/dev/user_doc/guide/alert/enterprise-wechat.html',
+                            },
+                            {
+                                title: 'Enterprise Webexteams',
+                                link: '/en-us/docs/dev/user_doc/guide/alert/enterprise-webexteams.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: 'Resource',
+                        link: '/en-us/docs/dev/user_doc/guide/resource.html',
+                    },
+                    {
+                        title: 'Monitor',
+                        link: '/en-us/docs/dev/user_doc/guide/monitor.html',
+                    },
+                    {
+                        title: 'Security',
+                        link: '/en-us/docs/dev/user_doc/guide/security.html',
+                    },
+                    {
+                        title: 'Open API',
+                        link: '/en-us/docs/dev/user_doc/guide/open-api.html',
+                    },
+                    {
+                        title: 'Flink',
+                        link: '/en-us/docs/dev/user_doc/guide/flink-call.html',
+                    },
+                    {
+                        title: 'Upgrade',
+                        link: '/en-us/docs/dev/user_doc/guide/upgrade.html',
+                    },
+                    {
+                        title: 'Expansion and Reduction',
+                        link: '/en-us/docs/dev/user_doc/guide/expansion-reduction.html',
+                    },
+                ],
+            },
+            {
+                title: 'Architecture Guide',
+                children: [
+                    {
+                        title: 'Architecture Design',
+                        link: '/en-us/docs/dev/user_doc/architecture/design.html',
+                    },
+                    {
+                        title: 'Metadata',
+                        link: '/en-us/docs/dev/user_doc/architecture/metadata.html',
+                    },
+                    {
+                        title: 'Configuration File',
+                        link: '/en-us/docs/dev/user_doc/architecture/configuration.html',
+                    },
+                    {
+                        title: 'Task Structure',
+                        link: '/en-us/docs/dev/user_doc/architecture/task-structure.html',
+                    },
+                    {
+                        title: 'Load Balance',
+                        link: '/en-us/docs/dev/user_doc/architecture/load-balance.html',
+                    },
+                    {
+                        title: 'Cache',
+                        link: '/en-us/docs/dev/user_doc/architecture/cache.html',
+                    },
+                ],
+            },
+            {
+                title: 'Observability',
+                children: [
+
+                    {
+                        title: 'SkyWalking-Agent',
+                        link: '/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html',
+                    },
+                ],
+            },
+            {
+                title: 'FAQ',
+                children: [
+                    {
+                        title: 'FAQ',
+                        link: '/en-us/docs/release/faq.html',
+                    },
+                ],
+            },
+            {
+                title: 'Older Versions',
+                children: [
+                    {
+                        title: 'Older Versions',
+                        link: '/en-us/docs/release/history-versions.html',
+                    },
+                ],
+            },
+        ],
+        barText: 'Documentation',
+    },
+    'zh-cn': {
+        sidemenu: [
+            {
+                title: '关于Apache DolphinScheduler',
+                children: [
+                    {
+                        title: '简介',
+                        link: '/zh-cn/docs/dev/user_doc/about/introduction.html',
+                    },
+                    {
+                        title: '建议配置',
+                        link: '/zh-cn/docs/dev/user_doc/about/hardware.html',
+                    },
+                    {
+                        title: '名词解释',
+                        link: '/zh-cn/docs/dev/user_doc/about/glossary.html',
+                    },
+                ],
+            },
+            {
+                title: '快速上手',
+                children: [
+                    {
+                        title: '快速上手',
+                        link: '/zh-cn/docs/dev/user_doc/guide/start/quick-start.html',
+                    },
+                    {
+                        title: 'Docker部署(Docker)',
+                        link: '/zh-cn/docs/dev/user_doc/guide/start/docker.html',
+                    },
+                ],
+            },
+            {
+                title: '部署指南',
+                children: [
+                    {
+                        title: '单机部署(Standalone)',
+                        link: '/zh-cn/docs/dev/user_doc/guide/installation/standalone.html',
+                    },
+                    {
+                        title: '伪集群部署(Pseudo-Cluster)',
+                        link: '/zh-cn/docs/dev/user_doc/guide/installation/pseudo-cluster.html',
+                    },
+                    {
+                        title: '集群部署(Cluster)',
+                        link: '/zh-cn/docs/dev/user_doc/guide/installation/cluster.html',
+                    },
+                    {
+                        title: 'Kubernetes部署(Kubernetes)',
+                        link: '/zh-cn/docs/dev/user_doc/guide/installation/kubernetes.html',
+                    },
+                ],
+            },
+            {
+                title: '功能介绍',
+                children: [
+                    {
+                        title: '指标总览',
+                        link: '/zh-cn/docs/dev/user_doc/guide/homepage.html',
+                    },
+                    {
+                        title: '项目管理',
+                        children: [
+                            {
+                                title: '项目列表',
+                                link: '/zh-cn/docs/dev/user_doc/guide/project/project-list.html',
+                            },
+                            {
+                                title: '工作流定义',
+                                link: '/zh-cn/docs/dev/user_doc/guide/project/workflow-definition.html',
+                            },
+                            {
+                                title: '工作流实例',
+                                link: '/zh-cn/docs/dev/user_doc/guide/project/workflow-instance.html',
+                            },
+                            {
+                                title: '任务实例',
+                                link: '/zh-cn/docs/dev/user_doc/guide/project/task-instance.html',
+                            },
+                        ]
+                    },
+                    {
+                        title: '任务类型',
+                        children: [
+                            {
+                                title: 'Shell',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/shell.html',
+                            },
+                            {
+                                title: 'SubProcess',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/sub-process.html',
+                            },
+                            {
+                                title: 'Dependent',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/dependent.html',
+                            },
+                            {
+                                title: 'Stored Procedure',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/stored-procedure.html',
+                            },
+                            {
+                                title: 'SQL',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/sql.html',
+                            },
+                            {
+                                title: 'Spark',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/spark.html',
+                            },
+                            {
+                                title: 'MapReduce',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/map-reduce.html',
+                            },
+                            {
+                                title: 'Python',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/python.html',
+                            },
+                            {
+                                title: 'Flink',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/flink.html',
+                            },
+                            {
+                                title: 'HTTP',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/http.html',
+                            },
+                            {
+                                title: 'DataX',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/datax.html',
+                            },
+                            {
+                                title: 'Pigeon',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/pigeon.html',
+                            },
+                            {
+                                title: 'Conditions',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/conditions.html',
+                            },
+                            {
+                                title: 'Switch',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/switch.html',
+                            },
+                            {
+                                title: 'Amazon EMR',
+                                link: '/zh-cn/docs/dev/user_doc/guide/task/emr.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: '参数',
+                        children: [
+                            {
+                                title: '内置参数',
+                                link: '/zh-cn/docs/dev/user_doc/guide/parameter/built-in.html',
+                            },
+                            {
+                                title: '全局参数',
+                                link: '/zh-cn/docs/dev/user_doc/guide/parameter/global.html',
+                            },
+                            {
+                                title: '本地参数',
+                                link: '/zh-cn/docs/dev/user_doc/guide/parameter/local.html',
+                            },
+                            {
+                                title: '参数传递',
+                                link: '/zh-cn/docs/dev/user_doc/guide/parameter/context.html',
+                            },
+                            {
+                                title: '参数优先级',
+                                link: '/zh-cn/docs/dev/user_doc/guide/parameter/priority.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: '数据源中心',
+                        children: [
+                            {
+                                title: '简介',
+                                link: '/zh-cn/docs/dev/user_doc/guide/datasource/introduction.html',
+                            },
+                            {
+                                title: 'MySQL',
+                                link: '/zh-cn/docs/dev/user_doc/guide/datasource/mysql.html',
+                            },
+                            {
+                                title: 'PostgreSQL',
+                                link: '/zh-cn/docs/dev/user_doc/guide/datasource/postgresql.html',
+                            },
+                            {
+                                title: 'HIVE',
+                                link: '/zh-cn/docs/dev/user_doc/guide/datasource/hive.html',
+                            },
+                            {
+                                title: 'Spark',
+                                link: '/zh-cn/docs/dev/user_doc/guide/datasource/spark.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: '告警',
+                        children: [
+                            {
+                                title: '告警组件向导',
+                                link: '/zh-cn/docs/dev/user_doc/guide/alert/alert_plugin_user_guide.html',
+                            },
+                            {
+                                title: 'Telegram',
+                                link: '/zh-cn/docs/dev/user_doc/guide/alert/telegram.html',
+                            },
+                            {
+                                title: '钉钉告警',
+                                link: '/zh-cn/docs/dev/user_doc/guide/alert/dingtalk.html',
+                            },
+                            {
+                                title: '企业微信',
+                                link: '/zh-cn/docs/dev/user_doc/guide/alert/enterprise-wechat.html',
+                            },
+                            {
+                                title: 'Webexteams',
+                                link: '/zh-cn/docs/dev/user_doc/guide/alert/enterprise-webexteams.html',
+                            },
+                        ],
+                    },
+                    {
+                        title: '资源中心',
+                        link: '/zh-cn/docs/dev/user_doc/guide/resource.html',
+                    },
+                    {
+                        title: '监控中心',
+                        link: '/zh-cn/docs/dev/user_doc/guide/monitor.html',
+                    },
+                    {
+                        title: '安全中心',
+                        link: '/zh-cn/docs/dev/user_doc/guide/security.html',
+                    },
+                    {
+                        title: 'API调用',
+                        link: '/zh-cn/docs/dev/user_doc/guide/open-api.html',
+                    },
+                    {
+                        title: 'Flink调用',
+                        link: '/zh-cn/docs/dev/user_doc/guide/flink-call.html',
+                    },
+                    {
+                        title: '升级',
+                        link: '/zh-cn/docs/dev/user_doc/guide/upgrade.html',
+                    },
+                    {
+                        title: '扩/缩容',
+                        link: '/zh-cn/docs/dev/user_doc/guide/expansion-reduction.html',
+                    },
+                ],
+            },
+            {
+                title: '架构设计',
+                children: [
+                    {
+                        title: '元数据文档',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/metadata.html',
+                    },
+                    {
+                        title: '架构设计',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/design.html',
+                    },
+                    {
+                        title: '配置文件',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/configuration.html',
+                    },
+                    {
+                        title: '任务结构',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/task-structure.html',
+                    },
+                    {
+                        title: '负载均衡',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/load-balance.html',
+                    },
+                    {
+                        title: '缓存',
+                        link: '/zh-cn/docs/dev/user_doc/architecture/cache.html',
+                    },
+                ],
+            },
+            {
+                title: '可观测性',
+                children: [
+                    {
+                        title: 'SkyWalking-Agent',
+                        link: '/zh-cn/docs/dev/user_doc/guide/installation/skywalking-agent.html',
+                    },
+                ],
+            },
+            {
+                title: 'FAQ',
+                children: [
+                    {
+                        title: 'FAQ',
+                        link: '/zh-cn/docs/release/faq.html',
+                    },
+                ],
+            },
+            {
+                title: '历史版本',
+                children: [
+                    {
+                        title: '历史版本',
+                        link: '/zh-cn/docs/release/history-versions.html',
+                    },
+                ],
+            },
+        ],
+        barText: '文档',
+    },
+};
diff --git a/docs/config/index.md.jsx b/docs/config/index.md.jsx
new file mode 100644
index 0000000..5e30f33
--- /dev/null
+++ b/docs/config/index.md.jsx
@@ -0,0 +1,107 @@
+import React from 'react';
+import ReactDOM from 'react-dom';
+import cookie from 'js-cookie';
+import Language from '../../components/language';
+import Header from '../../components/header';
+import Footer from '../../components/footer';
+import Md2Html from '../../components/md2html';
+import Sidemenu from '../../components/sidemenu';
+import siteConfig from '../../../site_config/site';
+import docs120Config from '../../../site_config/docs1-2-0';
+import docs121Config from '../../../site_config/docs1-2-1';
+import docs131Config from '../../../site_config/docs1-3-1';
+import docs132Config from '../../../site_config/docs1-3-2';
+import docs133Config from '../../../site_config/docs1-3-3';
+import docs134Config from '../../../site_config/docs1-3-4';
+import docs135Config from '../../../site_config/docs1-3-5';
+import docs136Config from '../../../site_config/docs1-3-6';
+import docs138Config from '../../../site_config/docs1-3-8';
+import docs139Config from '../../../site_config/docs1-3-9';
+import docs200Config from '../../../site_config/docs2-0-0';
+import docs201Config from '../../../site_config/docs2-0-1';
+import docs202Config from '../../../site_config/docs2-0-2';
+import docs203Config from '../../../site_config/docs2-0-3';
+import docs205Config from '../../../site_config/docs2-0-5';
+import docsDevConfig from '../../../site_config/docsdev';
+
+const docsSource = {
+  '1.2.0': docs120Config,
+  '1.2.1': docs121Config,
+  '1.3.1': docs131Config,
+  '1.3.2': docs132Config,
+  '1.3.3': docs133Config,
+  '1.3.4': docs134Config,
+  '1.3.5': docs135Config,
+  '1.3.6': docs136Config,
+  '1.3.8': docs138Config,
+  '1.3.9': docs139Config,
+  '2.0.0': docs200Config,
+  '2.0.1': docs201Config,
+  '2.0.2': docs202Config,
+  '2.0.3': docs203Config,
+  '2.0.5': docs205Config,
+  dev: docsDevConfig,
+};
+
+const isValidVersion = version => version && docsSource.hasOwnProperty(version);
+
+class Docs extends Md2Html(Language) {
+  render() {
+    const language = this.getLanguage();
+    let dataSource = {};
+    // from location path
+    let version = window.location.pathname.split('/')[3];
+    if (isValidVersion(version) || version === 'latest') {
+      cookie.set('docs_version', version);
+    }
+    // from rendering html
+    if (!version && this.props.subdir) {
+      version = this.props.subdir.split('/')[0];
+    }
+    if (isValidVersion(version)) {
+      dataSource = docsSource[version][language];
+    } else if (isValidVersion(cookie.get('docs_version'))) {
+      dataSource = docsSource[cookie.get('docs_version')][language];
+    } else if (isValidVersion(siteConfig.docsLatest)) {
+      dataSource = docsSource[siteConfig.docsLatest][language];
+      dataSource.sidemenu.forEach((menu) => {
+        menu.children.forEach((submenu) => {
+          if (!submenu.children) {
+            submenu.link = submenu.link.replace(`docs/${siteConfig.docsLatest}`, 'docs/latest');
+          } else {
+            submenu.children.forEach((menuLevel3) => {
+              menuLevel3.link = menuLevel3.link.replace(`docs/${siteConfig.docsLatest}`, 'docs/latest');
+            });
+          }
+        });
+      });
+    } else {
+      return null;
+    }
+    const __html = this.props.__html || this.state.__html;
+    return (
+      <div className="md2html docs-page">
+        <Header
+          currentKey="docs"
+          type="dark"
+          logo="/img/hlogo_white.svg"
+          language={language}
+          onLanguageChange={this.onLanguageChange}
+        />
+        <section className="content-section">
+          <Sidemenu dataSource={dataSource.sidemenu} />
+          <div
+            className="doc-content markdown-body"
+            ref={(node) => { this.markdownContainer = node; }}
+            dangerouslySetInnerHTML={{ __html }}
+          />
+        </section>
+        <Footer logo="/img/ds_gray.svg" language={language} />
+      </div>
+    );
+  }
+}
+
+document.getElementById('root') && ReactDOM.render(<Docs />, document.getElementById('root'));
+
+export default Docs;
diff --git a/docs/config/site.js b/docs/config/site.js
new file mode 100644
index 0000000..304cc6d
--- /dev/null
+++ b/docs/config/site.js
@@ -0,0 +1,376 @@
+// Global site configuration
+export default {
+  rootPath: '',
+  port: 8080,
+  domain: 'dolphinscheduler.apache.org',
+  copyToDist: ['asset', 'img', 'file', '.asf.yaml', 'sitemap.xml', '.nojekyll', '.htaccess', 'googled0df7b96f277a143.html'],
+  docsLatest: '2.0.5',
+  defaultSearch: 'google', // default search engine
+  defaultLanguage: 'en-us',
+  'en-us': {
+    pageMenu: [
+      {
+        key: 'home',
+        text: 'HOME',
+        link: '/en-us/index.html',
+      },
+      {
+        key: 'docs',
+        text: 'DOCS',
+        link: '/en-us/docs/latest/user_doc/guide/quick-start.html',
+        children: [
+          {
+            key: 'docs0',
+            text: 'latest(2.0.5)',
+            link: '/en-us/docs/latest/user_doc/guide/quick-start.html',
+          },
+          {
+            key: 'docs1',
+            text: '1.3.9',
+            link: '/en-us/docs/1.3.9/user_doc/quick-start.html',
+          },
+          {
+            key: 'docsHistory',
+            text: 'Older Versions',
+            link: '/en-us/docs/release/history-versions.html',
+          },
+          {
+            key: 'pythonAPI',
+            text: 'PyDolphinScheduler',
+            link: '/python/index.html',
+          },
+          {
+            key: 'docsdev',
+            text: 'dev',
+            link: '/en-us/docs/dev/user_doc/about/introduction.html',
+          },
+        ],
+      },
+      {
+        key: 'download',
+        text: 'DOWNLOAD',
+        link: '/en-us/download/download.html',
+      },
+      { key: 'blog',
+        text: 'BLOG',
+        link: '/en-us/blog/index.html',
+      },
+      {
+        key: 'development',
+        text: 'DEVELOPMENT',
+        link: '/en-us/development/development-environment-setup.html',
+      },
+      {
+        key: 'community',
+        text: 'COMMUNITY',
+        link: '/en-us/community/team.html',
+      },
+      {
+        key: 'ASF',
+        text: 'ASF',
+        target: '_blank',
+        link: 'https://www.apache.org/',
+        children: [
+          {
+            key: 'Foundation',
+            text: 'Foundation',
+            target: '_blank',
+            link: 'https://www.apache.org/',
+          },
+          {
+            key: 'License',
+            text: 'License',
+            target: '_blank',
+            link: 'https://www.apache.org/licenses/',
+          },
+          {
+            key: 'Events',
+            text: 'Events',
+            target: '_blank',
+            link: 'https://www.apache.org/events/current-event',
+          },
+          {
+            key: 'Security',
+            text: 'Security',
+            target: '_blank',
+            link: 'https://www.apache.org/security/',
+          },
+          {
+            key: 'Sponsorship',
+            text: 'Sponsorship',
+            target: '_blank',
+            link: 'https://www.apache.org/foundation/sponsorship.html',
+          },
+          {
+            key: 'Thanks',
+            text: 'Thanks',
+            target: '_blank',
+            link: 'https://www.apache.org/foundation/thanks.html',
+          },
+        ],
+      },
+      {
+        key: 'user',
+        text: 'USER',
+        link: '/en-us/user/index.html',
+      },
+    ],
+    documentation: {
+      title: 'Documentation',
+      list: [
+        {
+          text: 'Overview',
+          link: '/en-us/development/architecture-design.html',
+        },
+        {
+          text: 'Quick start',
+          link: '/en-us/docs/latest/user_doc/guide/quick-start.html',
+        },
+        {
+          text: 'Developer guide',
+          link: '/en-us/development/development-environment-setup.html',
+        },
+      ],
+    },
+    asf: {
+      title: 'ASF',
+      list: [
+        {
+          text: 'Foundation',
+          link: 'http://www.apache.org',
+        },
+        {
+          text: 'License',
+          link: 'http://www.apache.org/licenses/',
+        },
+        {
+          text: 'Events',
+          link: 'http://www.apache.org/events/current-event',
+        },
+        {
+          text: 'Sponsorship',
+          link: 'http://www.apache.org/foundation/sponsorship.html',
+        },
+        {
+          text: 'Thanks',
+          link: 'http://www.apache.org/foundation/thanks.html',
+        },
+      ],
+    },
+    contact: {
+      title: 'About us',
+      content: 'Do you need feedback? Please contact us through the following ways.',
+      list: [
+        {
+          name: 'Email List',
+          img1: '/img/emailgray.png',
+          img2: '/img/emailblue.png',
+          link: '/en-us/community/development/subscribe.html',
+        },
+        {
+          name: 'Twitter',
+          img1: '/img/twittergray.png',
+          img2: '/img/twitterblue.png',
+          link: 'https://twitter.com/dolphinschedule',
+        },
+        {
+          name: 'Stack Overflow',
+          img1: '/img/stackoverflow.png',
+          img2: '/img/stackoverflow-selected.png',
+          link: 'https://stackoverflow.com/questions/tagged/apache-dolphinscheduler',
+        },
+        {
+          name: 'Slack',
+          img1: '/img/slack.png',
+          img2: '/img/slack-selected.png',
+          link: 'https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw',
+        },
+      ],
+    },
+    copyright: 'Copyright © 2019-2021 The Apache Software Foundation. Apache DolphinScheduler, DolphinScheduler, and its feather logo are trademarks of The Apache Software Foundation.',
+  },
+  'zh-cn': {
+    pageMenu: [
+      {
+        key: 'home',
+        text: '首页',
+        link: '/zh-cn/index.html',
+      },
+      {
+        key: 'docs',
+        text: '文档',
+        link: '/zh-cn/docs/latest/user_doc/guide/quick-start.html',
+        children: [
+          {
+            key: 'docs0',
+            text: '最新版本latest(2.0.5)',
+            link: '/zh-cn/docs/latest/user_doc/guide/quick-start.html',
+          },
+          {
+            key: 'docs1',
+            text: '1.3.9',
+            link: '/zh-cn/docs/1.3.9/user_doc/quick-start.html',
+          },
+          {
+            key: 'docsHistory',
+            text: '历史版本',
+            link: '/zh-cn/docs/release/history-versions.html',
+          },
+          {
+            key: 'pythonAPI',
+            text: 'PyDolphinScheduler',
+            link: '/python/index.html',
+          },
+          {
+            key: 'docsdev',
+            text: 'dev',
+            link: '/zh-cn/docs/dev/user_doc/about/introduction.html',
+          },
+        ],
+      },
+      {
+        key: 'download',
+        text: '下载',
+        link: '/zh-cn/download/download.html',
+      },
+      {
+        key: 'blog',
+        text: '博客',
+        link: '/zh-cn/blog/index.html',
+      },
+      {
+        key: 'development',
+        text: '开发者',
+        link: '/zh-cn/development/development-environment-setup.html',
+      },
+      {
+        key: 'community',
+        text: '社区',
+        link: '/zh-cn/community/team.html',
+      },
+      {
+        key: 'ASF',
+        text: 'ASF',
+        target: '_blank',
+        link: 'https://www.apache.org/',
+        children: [
+          {
+            key: 'Foundation',
+            text: 'Foundation',
+            target: '_blank',
+            link: 'https://www.apache.org/',
+          },
+          {
+            key: 'License',
+            text: 'License',
+            target: '_blank',
+            link: 'https://www.apache.org/licenses/',
+          },
+          {
+            key: 'Events',
+            text: 'Events',
+            target: '_blank',
+            link: 'https://www.apache.org/events/current-event',
+          },
+          {
+            key: 'Security',
+            text: 'Security',
+            target: '_blank',
+            link: 'https://www.apache.org/security/',
+          },
+          {
+            key: 'Sponsorship',
+            text: 'Sponsorship',
+            target: '_blank',
+            link: 'https://www.apache.org/foundation/sponsorship.html',
+          },
+          {
+            key: 'Thanks',
+            text: 'Thanks',
+            target: '_blank',
+            link: 'https://www.apache.org/foundation/thanks.html',
+          },
+        ],
+      },
+      {
+        key: 'user',
+        text: '用户',
+        // link: '',
+        link: '/zh-cn/user/index.html',
+      },
+    ],
+    documentation: {
+      title: '文档',
+      list: [
+        {
+          text: '概览',
+          link: '/zh-cn/development/architecture-design.html',
+        },
+        {
+          text: '快速开始',
+          link: '/zh-cn/docs/latest/user_doc/guide/quick-start.html',
+        },
+        {
+          text: '开发者指南',
+          link: '/zh-cn/development/development-environment-setup.html',
+        },
+      ],
+    },
+    asf: {
+      title: 'ASF',
+      list: [
+        {
+          text: '基金会',
+          link: 'http://www.apache.org',
+        },
+        {
+          text: '证书',
+          link: 'http://www.apache.org/licenses/',
+        },
+        {
+          text: '事件',
+          link: 'http://www.apache.org/events/current-event',
+        },
+        {
+          text: '赞助',
+          link: 'http://www.apache.org/foundation/sponsorship.html',
+        },
+        {
+          text: '致谢',
+          link: 'http://www.apache.org/foundation/thanks.html',
+        },
+      ],
+    },
+    contact: {
+      title: '联系我们',
+      content: '有问题需要反馈?请通过以下方式联系我们。',
+      list: [
+        {
+          name: '邮件列表',
+          img1: '/img/emailgray.png',
+          img2: '/img/emailblue.png',
+          link: '/zh-cn/community/development/subscribe.html',
+        },
+        {
+          name: 'Twitter',
+          img1: '/img/twittergray.png',
+          img2: '/img/twitterblue.png',
+          link: 'https://twitter.com/dolphinschedule',
+        },
+        {
+          name: 'Stack Overflow',
+          img1: '/img/stackoverflow.png',
+          img2: '/img/stackoverflow-selected.png',
+          link: 'https://stackoverflow.com/questions/tagged/apache-dolphinscheduler',
+        },
+        {
+          name: 'Slack',
+          img1: '/img/slack.png',
+          img2: '/img/slack-selected.png',
+          link: 'https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw',
+        },
+      ],
+    },
+    copyright: 'Copyright © 2019-2021 The Apache Software Foundation. Apache DolphinScheduler, DolphinScheduler, and its feather logo are trademarks of The Apache Software Foundation.',
+  },
+};
diff --git a/docs/docs/en/about/glossary.md b/docs/docs/en/about/glossary.md
new file mode 100644
index 0000000..541d572
--- /dev/null
+++ b/docs/docs/en/about/glossary.md
@@ -0,0 +1,79 @@
+## System Architecture Design
+
+Before explaining the architecture of the scheduling system, let's first go over the terms commonly used in
+scheduling systems.
+
+### 1.Glossary
+
+**DAG:** Short for Directed Acyclic Graph. Tasks in a workflow are assembled as a directed acyclic graph, and
+topological traversal starts from nodes with zero in-degree and proceeds until there are no successor nodes left.
+An example is shown below:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag example"  width="60%" />
+  <p align="center">
+        <em>dag example</em>
+  </p>
+</p>
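+
+To make the traversal concrete, below is a minimal illustrative sketch (a hypothetical example, not part of the
+DolphinScheduler codebase): repeatedly take nodes whose in-degree is zero, emit them, and decrease the in-degree of
+their successors.
+
+```javascript
+// Illustrative sketch of the topological traversal described above (Kahn's algorithm).
+// `edges` maps each task to its downstream tasks, e.g. { A: ['B', 'C'], B: ['D'], C: ['D'], D: [] }.
+function topologicalOrder(edges) {
+  const inDegree = {};
+  Object.keys(edges).forEach((node) => { inDegree[node] = inDegree[node] || 0; });
+  Object.values(edges).forEach((targets) => {
+    targets.forEach((t) => { inDegree[t] = (inDegree[t] || 0) + 1; });
+  });
+  // Start from nodes with zero in-degree.
+  const queue = Object.keys(inDegree).filter((n) => inDegree[n] === 0);
+  const order = [];
+  while (queue.length > 0) {
+    const node = queue.shift();
+    order.push(node);
+    (edges[node] || []).forEach((next) => {
+      inDegree[next] -= 1;
+      if (inDegree[next] === 0) queue.push(next);
+    });
+  }
+  // If `order` is shorter than the number of nodes, the graph contains a cycle and is not a valid DAG.
+  return order;
+}
+```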
+
+**Process definition**: A visual **DAG** formed by dragging task nodes onto the canvas and establishing associations
+between them
+
+**Process instance**: An instantiation of a process definition, created either by a manual start or by scheduled
+triggering. Each run of a process definition generates one process instance
+
+**Task instance**: An instantiation of a task node in a process definition, which records the execution status of
+that specific task
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT,
+with dynamic plug-in expansion planned. Note: a **SUB_PROCESS** is itself a separate process definition that can be
+started and executed on its own
+
+**Scheduling method**: The system supports cron-based scheduled scheduling as well as manual scheduling. Supported
+command types: start workflow, start execution from the current node, resume fault-tolerant workflow, resume paused
+process, start execution from the failed node, complement, timing, rerun, pause, stop, and resume waiting thread.
+Among them, **resume fault-tolerant workflow** and **resume waiting thread** are used internally by the scheduler and
+cannot be called from the outside
+
+**Scheduled**: The system adopts the **Quartz** distributed scheduler and supports visual generation of cron expressions
+
+**Dependency**: The system supports not only simple **DAG** dependencies between predecessor and successor nodes, but
+also provides **dependent** task nodes, which support dependencies **between processes**
+
+**Priority**: Supports setting priorities for process instances and task instances; if no priority is set, the
+default is first-in, first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance
+run results and fault-tolerance notifications
+
+**Failure strategy**: For tasks running in parallel, two strategies are provided when one task fails. **Continue**
+means the remaining parallel tasks keep running regardless of the failed task's status until the process ends, after
+which the process is marked as failed. **End** means that once a failed task is detected, the running parallel tasks
+are killed immediately and the process fails and ends
+
+**Complement**: Backfills historical data. Supports two complement modes over a date interval: **parallel** and **serial**
+
+### 2.Module introduction
+
+- dolphinscheduler-alert alarm module, providing AlertServer service.
+
+- dolphinscheduler-api web application module, providing ApiServer service.
+
+- dolphinscheduler-common general module: constants, enumerations, utility classes, data structures, and base classes.
+
+- dolphinscheduler-dao provides operations such as database access.
+
+- dolphinscheduler-remote Netty-based client and server for internal communication.
+
+- dolphinscheduler-server provides the MasterServer and WorkerServer services.
+
+- dolphinscheduler-service service module, including Quartz, ZooKeeper, and log client access services, for use by the
+  server module and api module.
+
+- dolphinscheduler-ui front-end module.
+
+### Summary
+
+From the perspective of scheduling, this article gives a preliminary introduction to the architecture principles and
+implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued
+
+
diff --git a/docs/docs/en/about/hardware.md b/docs/docs/en/about/hardware.md
new file mode 100644
index 0000000..82e8313
--- /dev/null
+++ b/docs/docs/en/about/hardware.md
@@ -0,0 +1,48 @@
+# Hardware Environment
+
+As an open-source distributed workflow task scheduling system, DolphinScheduler can be deployed and run smoothly in Intel-architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating systems.
+
+## Linux Operating System Version Requirements
+
+| OS       | Version         |
+| :----------------------- | :----------: |
+| Red Hat Enterprise Linux | 7.0 and above   |
+| CentOS                   | 7.0 and above   |
+| Oracle Enterprise Linux  | 7.0 and above   |
+| Ubuntu LTS               | 16.04 and above |
+
+> **Attention:**
+> The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and Xen.
+
+## Recommended Server Configuration
+
+DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following shows the recommended server requirements in a production environment:
+
+### Production Environment
+
+| **CPU** | **MEM** | **HD** | **NIC** | **Num** |
+| --- | --- | --- | --- | --- |
+| 4 core+ | 8 GB+ | SAS | GbE | 1+ |
+
+> **Attention:**
+> - The above recommended configuration is the minimum configuration for deploying DolphinScheduler. Higher configuration is strongly recommended for production environments.
+> - The recommended hard disk size is more than 50GB and separate the system disk and data disk.
+
+
+## Network Requirements
+
+DolphinScheduler provides the following network port configurations for normal operation:
+
+| Server | Port | Desc |
+|  --- | --- | --- |
+| MasterServer |  5678  | not a communication port; only requires that the local port does not conflict |
+| WorkerServer | 1234  | not a communication port; only requires that the local port does not conflict |
+| ApiApplicationServer |  12345 | backend communication port |
+
+> **Attention:**
+> - MasterServer and WorkerServer do not need to enable network communication between each other; they only require that their local ports do not conflict.
+> - Administrators can adjust relevant ports on the network side and host-side according to the deployment plan of DolphinScheduler components in the actual environment.
+
+## Browser Requirements
+
+DolphinScheduler recommends Chrome or the latest browsers based on the Chrome kernel to access the front-end UI page.
\ No newline at end of file
diff --git a/docs/docs/en/about/introduction.md b/docs/docs/en/about/introduction.md
new file mode 100644
index 0000000..a0d314e
--- /dev/null
+++ b/docs/docs/en/about/introduction.md
@@ -0,0 +1,19 @@
+# About DolphinScheduler
+
+Apache DolphinScheduler is a distributed, easily extensible, visual DAG workflow task scheduling open-source system. It solves the intricate dependencies of data R&D ETL and the inability to monitor the health status of tasks. DolphinScheduler assembles tasks in a DAG streaming way, which can monitor the execution status of tasks in time, and supports operations like retry, recovery from a specified failed node, pause, resume and kill tasks, etc.
+
+## Simple to Use
+
+- DolphinScheduler has a DAG monitoring user interface where users can customize the DAG by dragging and dropping. All process definitions are visualized, and it supports rich third-party system APIs and one-click deployment.
+
+## High Reliability
+
+- Decentralized multi-master and multi-worker architecture, supports HA, and queues can be selected to avoid overload.
+
+## Rich Scenarios
+
+- Supports features like multi-tenancy and suspend/resume operations to cope with big data scenarios. Supports many task types like Spark, Flink, Hive, MR, Shell, Python and sub_process.
+
+## High Scalability
+
+- Supports customized task types, distributed scheduling, and the overall scheduling capability increases linearly with the scale of the cluster.
\ No newline at end of file
diff --git a/docs/docs/en/architecture/cache.md b/docs/docs/en/architecture/cache.md
new file mode 100644
index 0000000..a0251e3
--- /dev/null
+++ b/docs/docs/en/architecture/cache.md
@@ -0,0 +1,42 @@
+# Cache
+
+## Purpose
+
+The master-server performs a large number of database read operations during scheduling, such as reading tables like `tenant`, `user`, `processDefinition`, etc. These reads put pressure on the DB and slow down the entire core scheduling process.
+
+Considering that this part of the business data is a high-read, low-write scenario, a cache module is introduced to reduce the DB read pressure and speed up the core scheduling process.
+
+## Cache Settings
+
+```yaml
+spring:
+  cache:
+    # default disable cache, you can enable by `type: caffeine`
+    type: none
+    cache-names:
+      - tenant
+      - user
+      - processDefinition
+      - processTaskRelation
+      - taskDefinition
+    caffeine:
+      spec: maximumSize=100,expireAfterWrite=300s,recordStats
+```
+
+The cache module uses [spring-cache](https://spring.io/guides/gs/caching/), so you can set the cache configuration, such as whether to enable the cache (`none` disables it, which is the default) and the cache type, directly in the spring `application.yaml`.
+
+Currently, the module implements the config of [caffeine](https://github.com/ben-manes/caffeine), so you can assign caffeine-specific configs such as cache size, expire time, etc.
+
+## Cache Read
+
+The cache module adopts the `@Cacheable` annotation from spring-cache, and you can apply the annotation in the related mapper layer. Refer to the `TenantMapper` for an example.
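+
+For illustration, the following is a minimal sketch of what such an annotated mapper method can look like (the method name and the use of `@CacheConfig` are assumptions, not the actual `TenantMapper` source):
+
+```java
+import org.apache.dolphinscheduler.dao.entity.Tenant;
+import org.apache.ibatis.annotations.Param;
+import org.springframework.cache.annotation.CacheConfig;
+import org.springframework.cache.annotation.Cacheable;
+
+// Hedged sketch: caching a mapper query with spring-cache (illustrative, not the real TenantMapper).
+@CacheConfig(cacheNames = "tenant")
+public interface TenantMapper {
+
+    // The query result is stored in the "tenant" cache keyed by the method arguments;
+    // repeated lookups with the same id are served from caffeine instead of the database.
+    @Cacheable(sync = true)
+    Tenant queryById(@Param("tenantId") int tenantId);
+}
+```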
+
+## Cache Evict
+
+Business data updates come from the api-server, while the cache sits in the master-server, so it is necessary to monitor data updates on the api-server (using an aspect point cut interceptor on `@CacheEvict`) and notify the master-server with a `cacheEvictCommand` so that it processes the cache eviction.
+
+Note: the final consistency of the cache relies on the expiration strategy configured in caffeine, so configure it according to your business scenarios.
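+
+As a hedged illustration (class and method names are assumptions, not DolphinScheduler's actual code), an update path on the api-server side can be annotated so that the aspect can intercept it and broadcast the eviction:
+
+```java
+import org.apache.dolphinscheduler.dao.entity.Tenant;
+import org.springframework.cache.annotation.CacheEvict;
+import org.springframework.stereotype.Service;
+
+// Hedged sketch: evicting a cached tenant entry when it is updated on the api-server side.
+@Service
+public class TenantUpdateService {
+
+    // @CacheEvict marks the update; an aspect around such methods can then send a
+    // cacheEvictCommand to the master-server so its local caffeine entry is dropped as well.
+    @CacheEvict(cacheNames = "tenant", key = "#tenant.id")
+    public void updateTenant(Tenant tenant) {
+        // ... persist the change through the mapper/DAO layer
+    }
+}
+```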
+
+The sequence diagram shows below:
+
+<img src="/img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
\ No newline at end of file
diff --git a/docs/docs/en/architecture/configuration.md b/docs/docs/en/architecture/configuration.md
new file mode 100644
index 0000000..5caf1fc
--- /dev/null
+++ b/docs/docs/en/architecture/configuration.md
@@ -0,0 +1,424 @@
+<!-- markdown-link-check-disable -->
+
+# Configuration
+
+## Preface
+
+This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.
+
+## Directory Structure
+
+Currently, all the configuration files are under the [conf] directory.
+Check the following simplified DolphinScheduler installation directories to get a direct view of the location of the [conf] directory and the configuration files it contains.
+This document only describes DolphinScheduler configurations; other topics are not covered here.
+
+[Note: DolphinScheduler is hereinafter referred to as 'DS'.]
+
+```
+├─bin                               DS application commands directory
+│  ├─dolphinscheduler-daemon.sh         startup or shutdown DS application 
+│  ├─start-all.sh                       startup all DS services with configurations
+│  ├─stop-all.sh                        shutdown all DS services with configurations
+├─conf                              configurations directory
+│  ├─application-api.properties         API-service config properties
+│  ├─datasource.properties              datasource config properties
+│  ├─zookeeper.properties               ZooKeeper config properties
+│  ├─master.properties                  master-service config properties
+│  ├─worker.properties                  worker-service config properties
+│  ├─quartz.properties                  quartz config properties
+│  ├─common.properties                  common-service [storage] config properties
+│  ├─alert.properties                   alert-service config properties
+│  ├─config                             environment variables config directory
+│      ├─install_config.conf                DS environment variables configuration script [install or start DS]
+│  ├─env                                load environment variables configs script directory
+│      ├─dolphinscheduler_env.sh            load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper files directory
+│  ├─i18n                               i18n configs directory
+│  ├─logback-api.xml                    API-service log config
+│  ├─logback-master.xml                 master-service log config
+│  ├─logback-worker.xml                 worker-service log config
+│  ├─logback-alert.xml                  alert-service log config
+├─sql                                   .sql files to create or upgrade DS metadata
+│  ├─create                             create SQL scripts directory
+│  ├─upgrade                            upgrade SQL scripts directory
+│  ├─dolphinscheduler_postgre.sql       PostgreSQL database init script
+│  ├─dolphinscheduler_mysql.sql         MySQL database init script
+│  ├─soft_version                       current DS version-id file
+├─script                            DS services deployment, database create or upgrade scripts directory
+│  ├─create-dolphinscheduler.sh         DS database init script
+│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script
+│  ├─monitor-server.sh                  DS monitor-server start script       
+│  ├─scp-hosts.sh                       transfer installation files script                                     
+│  ├─remove-zk-node.sh                  cleanup ZooKeeper caches script       
+├─ui                                front-end web resources directory
+├─lib                               DS .jar dependencies directory
+├─install.sh                        auto-setup DS services script
+```
+
+## Configurations in Details
+
+serial number| service classification| config file|
+|--|--|--|
+1|startup or shutdown DS application|dolphinscheduler-daemon.sh
+2|datasource config properties|datasource.properties
+3|ZooKeeper config properties|zookeeper.properties
+4|common-service[storage] config properties|common.properties
+5|API-service config properties|application-api.properties
+6|master-service config properties|master.properties
+7|worker-service config properties|worker.properties
+8|alert-service config properties|alert.properties
+9|quartz config properties|quartz.properties
+10|DS environment variables configuration script[install/start DS]|install_config.conf
+11|load environment variables configs <br /> [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|services log config files|API-service log config : logback-api.xml  <br /> master-service log config  : logback-master.xml    <br /> worker-service log config : logback-worker.xml  <br /> alert-service log config : logback-alert.xml
+
+
+### dolphinscheduler-daemon.sh [startup or shutdown DS application]
+
+dolphinscheduler-daemon.sh is responsible for DS startup and shutdown.
+Essentially, start-all.sh and stop-all.sh start up and shut down the cluster via dolphinscheduler-daemon.sh.
+Currently, DS only provides a basic configuration; remember to configure further JVM options based on your actual resources.
+
+Default simplified parameters are:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> "-XX:DisableExplicitGC" is not recommended due to may lead to memory link (DS dependent on Netty to communicate).
+
+### datasource.properties [datasource config properties]
+
+DS uses Druid to manage database connections and the default simplified configs are:
+
+|Parameters | Default value| Description|
+|--|--|--|
+spring.datasource.driver-class-name||datasource driver
+spring.datasource.url||datasource connection url
+spring.datasource.username||datasource username
+spring.datasource.password||datasource password
+spring.datasource.initialSize|5| initial connection pool size number
+spring.datasource.minIdle|5| minimum connection pool size number
+spring.datasource.maxActive|5| maximum connection pool size number
+spring.datasource.maxWait|60000| max wait milliseconds
+spring.datasource.timeBetweenEvictionRunsMillis|60000| idle connection check interval
+spring.datasource.timeBetweenConnectErrorMillis|60000| retry interval
+spring.datasource.minEvictableIdleTimeMillis|300000| connections over minEvictableIdleTimeMillis will be collect when idle check
+spring.datasource.validationQuery|SELECT 1| validate connection by running the SQL
+spring.datasource.validationQueryTimeout|3| validate connection timeout[seconds]
+spring.datasource.testWhileIdle|true| set whether the pool validates the allocated connection when a new connection request comes
+spring.datasource.testOnBorrow|true| validity check when the program requests a new connection
+spring.datasource.testOnReturn|false| validity check when the program recalls a connection
+spring.datasource.defaultAutoCommit|true| whether auto commit
+spring.datasource.keepAlive|true| runs validationQuery SQL to avoid the connection closed by pool when the connection idles over minEvictableIdleTimeMillis
+spring.datasource.poolPreparedStatements|true| open PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the size of PSCache on each connection
+
+
+### zookeeper.properties [zookeeper config properties]
+
+|Parameters | Default value| Description|
+|--|--|--|
+zookeeper.quorum|localhost:2181| ZooKeeper cluster connection info
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under ZooKeeper root directory
+zookeeper.session.timeout|60000|  session timeout
+zookeeper.connection.timeout|30000| connection timeout
+zookeeper.retry.base.sleep|100| time to wait between subsequent retries
+zookeeper.retry.max.sleep|30000| maximum time to wait between subsequent retries
+zookeeper.retry.maxtime|10| maximum retry times
+
+
+### common.properties [hadoop, s3, yarn config properties]
+
+Currently, common.properties mainly configures Hadoop and s3a related configurations.
+
+|Parameters | Default value| Description|
+|--|--|--|
+data.basedir.path|/tmp/dolphinscheduler| local directory used to store temp files
+resource.storage.type|NONE| type of resource files: HDFS, S3, NONE
+resource.upload.path|/dolphinscheduler| storage path of resource files
+hadoop.security.authentication.startup.state|false| whether hadoop grant kerberos permission
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos config directory
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos username
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos user keytab
+kerberos.expire.time|2|kerberos expire time, integer, the unit is hour
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties| file types supported by resource center
+hdfs.root.user|hdfs| configure users with corresponding permissions if storage type is HDFS
+fs.defaultFS|hdfs://mycluster:8020|If resource.storage.type=S3, then the request url would be similar to 's3a://dolphinscheduler'. Otherwise if resource.storage.type=HDFS and hadoop supports HA, copy core-site.xml and hdfs-site.xml into 'conf' directory
+fs.s3a.endpoint||s3 endpoint url
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||specify the yarn resourcemanager url. if resourcemanager supports HA, input HA IP addresses (separated by comma), or input null for standalone
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|keep default if ResourceManager supports HA or not use ResourceManager, or replace ds1 with corresponding hostname if ResourceManager in standalone mode
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false| specify whether in development state
+
+
+### application-api.properties [API-service config properties]
+
+|Parameters | Default value| Description|
+|--|--|--|
+server.port|12345|api service communication port
+server.servlet.session.timeout|7200|session timeout
+server.servlet.context-path|/dolphinscheduler | request path
+spring.servlet.multipart.max-file-size|1024MB| maximum file size
+spring.servlet.multipart.max-request-size|1024MB| maximum request size
+server.jetty.max-http-post-size|5000000| jetty maximum post size
+spring.messages.encoding|UTF-8| message encoding
+spring.jackson.time-zone|GMT+8| time zone
+spring.messages.basename|i18n/messages| i18n config
+security.authentication.type|PASSWORD| authentication type
+
+
+### master.properties [master-service config properties]
+
+|Parameters | Default value| Description|
+|--|--|--|
+master.listen.port|5678|master listen port
+master.exec.threads|100|master-service execute thread number, used to limit the number of process instances in parallel
+master.exec.task.num|20|defines the number of parallel tasks for each process instance of the master-service
+master.dispatch.task.num|3|defines the number of dispatch tasks for each batch of the master-service
+master.host.selector|LowerWeight|master host selector, to select a suitable worker to run the task, optional value: random, round-robin, lower weight
+master.heartbeat.interval|10|master heartbeat interval, the unit is second
+master.task.commit.retryTimes|5|master commit task retry times
+master.task.commit.interval|1000|master commit task interval, the unit is millisecond
+master.max.cpuload.avg|-1|master max CPU load avg, only higher than the system CPU load average, master server can schedule. default value -1: the number of CPU cores * 2
+master.reserved.memory|0.3|master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
+
+
+### worker.properties [worker-service config properties]
+
+|Parameters | Default value| Description|
+|--|--|--|
+worker.listen.port|1234|worker-service listen port
+worker.exec.threads|100|worker-service execute thread number, used to limit the number of task instances in parallel
+worker.heartbeat.interval|10|worker-service heartbeat interval, the unit is second
+worker.max.cpuload.avg|-1|worker max CPU load avg, only higher than the system CPU load average, worker server can be dispatched tasks. default value -1: the number of CPU cores * 2
+worker.reserved.memory|0.3|worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G
+worker.groups|default|worker groups separated by comma, e.g., 'worker.groups=default,test' <br> worker will join corresponding group according to this config when startup
+
+
+### alert.properties [alert-service config properties]
+
+|Parameters | Default value| Description|
+|--|--|--|
+alert.type|EMAIL|alert type|
+mail.protocol|SMTP|mail server protocol
+mail.server.host|xxx.xxx.com|mail server host
+mail.server.port|25|mail server port
+mail.sender|xxx@xxx.com|mail sender email
+mail.user|xxx@xxx.com|mail sender email name
+mail.passwd|111111|mail sender email password
+mail.smtp.starttls.enable|true|specify mail whether open tls
+mail.smtp.ssl.enable|false|specify mail whether open ssl
+mail.smtp.ssl.trust|xxx.xxx.com|specify mail ssl trust list
+xls.file.path|/tmp/xls|mail attachment temp storage directory
+||the following configures WeCom [optional]|
+enterprise.wechat.enable|false|specify whether enable WeCom
+enterprise.wechat.corp.id|xxxxxxx|WeCom corp id
+enterprise.wechat.secret|xxxxxxx|WeCom secret
+enterprise.wechat.agent.id|xxxxxxx|WeCom agent id
+enterprise.wechat.users|xxxxxxx|WeCom users
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|WeCom token url
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|WeCom push url
+enterprise.wechat.user.send.msg||send message format
+enterprise.wechat.team.send.msg||group message format
+plugin.dir|/Users/xx/your/path/to/plugin/dir|plugin directory
+
+
+### quartz.properties [quartz config properties]
+
+This part describes quartz configs; configure them based on your actual situation and resources.
+
+|Parameters | Default value| Description|
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate |
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate |
+org.quartz.scheduler.instanceName | DolphinScheduler |
+org.quartz.scheduler.instanceId | AUTO |
+org.quartz.scheduler.makeSchedulerThreadDaemon | true |
+org.quartz.jobStore.useProperties | false |
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool |
+org.quartz.threadPool.makeThreadsDaemons | true |
+org.quartz.threadPool.threadCount | 25 |
+org.quartz.threadPool.threadPriority | 5 |
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX |
+org.quartz.jobStore.tablePrefix | QRTZ_ |
+org.quartz.jobStore.isClustered | true |
+org.quartz.jobStore.misfireThreshold | 60000 |
+org.quartz.jobStore.clusterCheckinInterval | 5000 |
+org.quartz.jobStore.acquireTriggersWithinLock|true |
+org.quartz.jobStore.dataSource | myDs |
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider |
+
+
+### install_config.conf [DS environment variables configuration script[install or start DS]]
+
+install_config.conf is a bit complicated and is mainly used in the following two places.
+* DS Cluster Auto Installation.
+
+> When executing 'install.sh', the system will load the configs in install_config.conf and auto-configure the files below based on its content.
+> Files such as dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.
+
+* Startup and Shutdown DS Cluster.
+
+> The system will load masters, workers, alert-server, API-servers and other parameters inside the file to startup or shutdown DS cluster.
+
+#### File Content
+
+```bash
+
+# Note:  please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&`.
+#   eg: `[` escape to `\[`
+
+# Database type (DS currently only supports PostgreSQL and MySQL)
+dbtype="mysql"
+
+# Database url and port
+dbhost="192.168.xx.xx:3306"
+
+# Database name
+dbname="dolphinscheduler"
+
+
+# Database username
+username="xx"
+
+# Database password
+password="xx"
+
+# ZooKeeper url
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# DS installation path, such as '/data1_1T/dolphinscheduler'
+installPath="/data1_1T/dolphinscheduler"
+
+# Deployment user
+# Note: Deployment user needs 'sudo' privilege and has rights to operate HDFS.
+#     Root directory must be created by the same user if using HDFS, otherwise permission related issues will be raised.
+deployUser="dolphinscheduler"
+
+
+# Followings are alert-service configs
+# Mail server host
+mailServerHost="smtp.exmail.qq.com"
+
+# Mail server port
+mailServerPort="25"
+
+# Mail sender
+mailSender="xxxxxxxxxx"
+
+# Mail user
+mailUser="xxxxxxxxxx"
+
+# Mail password
+mailPassword="xxxxxxxxxx"
+
+# Whether mail supports TLS
+starttlsEnable="true"
+
+# Whether mail supports SSL. Note: starttlsEnable and sslEnable cannot both be set to true.
+sslEnable="false"
+
+# Mail server host, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+# Specify which storage type to use for resource upload (e.g. sql files). Supported options are HDFS, S3 and NONE; NONE means the resource upload function is not used.
+resourceStorageType="NONE"
+
+# If S3, write the S3 address, for example: s3a://dolphinscheduler
+# Note: for S3, make sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# If parameter 'resourceStorageType' is S3, following configs are needed:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# If ResourceManager supports HA, input the active and standby node IPs or hostnames, eg: '192.168.xx.xx,192.168.xx.xx'. If ResourceManager runs in standalone mode or yarn is not used, set yarnHaIps="".
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+
+# If ResourceManager runs in standalone, then set ResourceManager node ip or hostname, or else remain default.
+singleYarnIp="yarnIp1"
+
+# Storage path when using HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3 root user
+hdfsRootUser="hdfs"
+
+# Followings are Kerberos configs
+
+# Specify Kerberos enable or not
+kerberosStartUp="false"
+
+# Kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+
+# Keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+
+# Username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# API-service port
+apiServerPort="12345"
+
+
+# All hosts deploy DS
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# Ssh port, default 22
+sshPort="22"
+
+# Master service hosts
+masters="ds1,ds2"
+
+# All hosts deploy worker service
+# Note: Each worker needs to set a worker group name and default name is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+#  Host deploy alert-service
+alertServer="ds3"
+
+# Host deploy API-service
+apiServers="ds1"
+```
+
+### dolphinscheduler_env.sh [load environment variables configs]
+
+When using shell to submit tasks, DS will load the environment variables defined in dolphinscheduler_env.sh on the host.
+Types of tasks involved are: Shell, Python, Spark, Flink, DataX, etc.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+### Services logback configs
+
+Services name| logback config name |
+--|--|
+API-service logback config |logback-api.xml|
+master-service logback config|logback-master.xml |
+worker-service logback config|logback-worker.xml |
+alert-service logback config|logback-alert.xml |
diff --git a/docs/docs/en/architecture/design.md b/docs/docs/en/architecture/design.md
new file mode 100644
index 0000000..d2b8be2
--- /dev/null
+++ b/docs/docs/en/architecture/design.md
@@ -0,0 +1,282 @@
+# System Architecture Design
+
+## System Structure
+
+### System Architecture Diagram
+
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+### Start Process Activity Diagram
+
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+### Architecture Description
+
+* **MasterServer** 
+
+    MasterServer adopts a distributed and decentralized design concept. MasterServer is mainly responsible for DAG task segmentation, task submission monitoring, and monitoring the health status of other MasterServer and WorkerServer at the same time.
+    When the MasterServer service starts, it registers a temporary node with ZooKeeper, and performs fault tolerance by monitoring changes to ZooKeeper temporary nodes.
+    MasterServer provides monitoring services based on netty.
+
+    #### The Service Mainly Includes:
+
+    - **Distributed Quartz** distributed scheduling component, which is mainly responsible for the start and stop operations of schedule tasks. When Quartz starts the task, there will be a thread pool inside the Master responsible for the follow-up operation of the processing task.
+
+    - **MasterSchedulerThread** is a scanning thread that regularly scans the **command** table in the database and runs different business operations according to different **command types**.
+
+    - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, and logical processing to different command types.
+
+    - **MasterTaskExecThread** is mainly responsible for the persistence of tasks.
+
+* **WorkerServer** 
+
+     WorkerServer also adopts a distributed and decentralized design concept. WorkerServer is mainly responsible for task execution and providing log services.
+
+     When the WorkerServer service starts, it registers a temporary node with ZooKeeper and maintains a heartbeat.
+     WorkerServer provides monitoring services based on netty.
+  
+     #### The Service Mainly Includes:
+  
+     - **Fetch TaskThread** is mainly responsible for continuously getting tasks from the **Task Queue** and calling the corresponding **TaskScheduleThread** executor according to the task type.
+
+* **ZooKeeper** 
+
+    ZooKeeper service, MasterServer and WorkerServer nodes in the system all use ZooKeeper for cluster management and fault tolerance. In addition, the system implements event monitoring and distributed locks based on ZooKeeper.
+
+    We have also implemented queues based on Redis, but we hope DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+
+* **Task Queue** 
+
+    Provides task queue operations; the current queue is also implemented based on ZooKeeper. Because little information is stored in the queue, there is no need to worry about excessive data in the queue. In fact, we have tested storing millions of entries in the queue, which has no impact on system stability and performance.
+
+* **Alert** 
+
+    Provides alarm-related interfaces, which mainly include the storage, query and notification functions for two types of alarm data. Among the notification channels, there are **email notification** and **SNMP (not yet implemented)**.
+
+* **API** 
+
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service uniformly provides RESTful APIs to serve external requests.
+    Interfaces include workflow creation, definition, query, modification, release, logoff, manual start, stop, pause, resume, start execution from specific node, etc.
+
+* **UI** 
+
+  The front-end page of the system provides various visual operation interfaces of the system, see more at [Introduction to Functions](../guide/homepage.md) section.
+
+### Architecture Design Ideas
+
+#### Decentralization VS Centralization
+
+##### Centralized Thinking
+
+The centralized design concept is relatively simple. The nodes in the distributed cluster are roughly divided into two roles according to responsibilities:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave character"  width="50%" />
+ </p>
+
+- The role of the master is mainly responsible for task distribution and monitoring the health status of the slave, and can dynamically balance the task to the slave, so that the slave node won't be in a "busy dead" or "idle dead" state.
+- The role of the Worker (Slave) is mainly responsible for task execution and maintaining a heartbeat with the Master, so that the Master can assign tasks to it.
+
+Problems in centralized thought design:
+
+- Once there is a problem with the Master, the cluster is left without a commander and may collapse entirely. To solve this problem, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switching. More and more new systems are beginning to have the ability to automatically elect and switch the Master to improve system availability.
+- Another problem is that if the Scheduler is on the Master, although it can support different tasks in a DAG running on different machines, it will cause the Master to be overloaded. If the Scheduler is on the slave, all tasks in a DAG can only submit jobs on a certain machine. When there are more parallel tasks, the pressure on the slave may be greater.
+
+##### Decentralized
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="Decentralization"  width="50%" />
+ </p>
+
+- In the decentralized design, there is usually no concept of Master or Slave. All roles are the same, the status is equal, the global Internet is a typical decentralized distributed system. Any node connected to the network goes down, will only affect a small range of functions.
+- The core idea of the decentralized design is that there is no distinct "manager" node different from the other nodes in the entire distributed system, so there is no single point of failure. However, because there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems are constantly emerging. Under this architecture, the managers in the cluster are dynamically elected rather than preset, and when the cluster fails, the nodes of the cluster automatically hold "meetings" to elect new "managers" to preside over the work. The most typical cases are ZooKeeper and Etcd (implemented in the Go language).
+- The decentralization of DolphinScheduler means that Masters and Workers register themselves in ZooKeeper, implementing a center-less Master cluster and Worker cluster, and a ZooKeeper distributed lock is used to elect one of the Masters or Workers as the "manager" to perform the task.
+
+#### Distributed Lock Practice
+
+DolphinScheduler uses the ZooKeeper distributed lock to ensure that only one Master executes the Scheduler at the same time, and that only one Worker executes the submission of a task.
+1. The following shows the core process algorithm for acquiring distributed locks:
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
+
+2. Flow diagram of implementation of Scheduler thread distributed lock in DolphinScheduler:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="Obtain distributed lock process"  width="50%" />
+ </p>
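+
+For illustration only, the following is a minimal sketch of acquiring such a ZooKeeper distributed lock with Apache Curator (the connection string and lock path are assumptions, not DolphinScheduler's actual code):
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class SchedulerLockExample {
+
+    public static void main(String[] args) throws Exception {
+        // connect to the ZooKeeper cluster used by the scheduler (address is illustrative)
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // a mutex on a shared path: only one Master (or Worker) holds it at any moment
+        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        lock.acquire();
+        try {
+            // critical section: e.g. scan the command table and schedule the next process instance
+        } finally {
+            lock.release();
+            client.close();
+        }
+    }
+}
+```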
+
+
+#### Insufficient Thread Loop Waiting Problem
+
+-  If there is no sub-process in a DAG, when the number of Commands exceeds the threshold set by the thread pool, the process directly waits or fails.
+-  If a large DAG nests many sub-processes, a "deadlock" state can be produced, as shown in the following figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="Insufficient threads waiting loop problem"  width="50%" />
+ </p>
+In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlowThread1 waits for the end of SubFlowThread2, SubFlowThread2 waits for the end of SubFlowThread3, and SubFlowThread3 waits for a new thread in the thread pool, then the entire DAG process cannot finish, and the threads cannot be released. In this situation, the state of the child-parent process loop waiting is formed. At this moment, unless a new Master is started and add threads to break such a "stalemate", t [...]
+
+It seems a bit unsatisfactory to start a new Master to break the deadlock, so we proposed the following three solutions to reduce this risk:
+
+1. Calculate the sum of all Master threads, and then calculate the number of threads required for each DAG, that is, pre-calculate before the DAG process executes. Because it is a multi-master thread pool, it is unlikely to obtain the total number of threads in real time. 
+2. Judge whether the single-Master thread pool is full, and let the thread fail directly when it is full.
+3. Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.
+
+Note: The Master Scheduler thread executes by FIFO when acquiring the Command.
+
+So we choose the third way to solve the problem of insufficient threads.
+
+
+#### Fault-Tolerant Design
+
+Fault tolerance divides into service downtime fault tolerance and task retry, and service downtime fault tolerance divides into master fault tolerance and worker fault tolerance.
+
+##### Downtime Fault Tolerance
+
+The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, and the implementation principle shows in the figure:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault-tolerant design"  width="40%" />
+ </p>
+Among them, the Master monitors the directories of other Masters and Workers. If the remove event is triggered, perform fault tolerance of the process instance or task instance according to the specific business logic.
+
+
+
+- Master fault tolerance:
+
+<p align="center">
+   <img src="/img/failover-master.jpg" alt="failover-master"  width="50%" />
+ </p>
+
+Fault tolerance range: from the perspective of hosts, the fault tolerance range of the Master includes its own host and hosts that no longer exist in the registry; the entire fault tolerance process is locked;
+
+Fault-tolerant content: the Master's fault-tolerant content includes fault-tolerant process instances and task instances. Before fault tolerance, the start time of the instance is compared with the server start-up time, and fault tolerance is skipped for instances started after the server start time;
+
+Fault-tolerant post-processing: after the ZooKeeper Master fault tolerance completes, re-scheduling is done by the Scheduler thread in DolphinScheduler, which traverses the DAG to find the "running" and "submit successful" tasks. For "running" tasks, it monitors the status of their task instances; for "submit successful" tasks, it needs to find out whether the task already exists in the task queue. If it exists, it monitors the status of the task instance; otherwise, it resubmits the task instance.
+
+- Worker fault tolerance:
+
+<p align="center">
+   <img src="/img/failover-worker.jpg" alt="failover-worker"  width="50%" />
+ </p>
+
+Fault tolerance range: from the perspective of process instances, each Master is only responsible for fault tolerance of its own process instances; it locks only in `handleDeadServer`;
+
+Fault-tolerant content: when a Worker node's remove event is received, the Master only fault-tolerates task instances. Before fault tolerance, the start time of the instance is compared with the server start-up time, and fault tolerance is skipped for instances started after the server start time;
+
+Fault-tolerant post-processing: Once the Master Scheduler thread finds that the task instance is in the "fault-tolerant" state, it takes over the task and resubmits it.
+
+ Note: Due to "network jitter", a node may lose its heartbeat with ZooKeeper for a short period of time, so the node's remove event may occur. For this situation, we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
+
+##### Task Failed and Try Again
+
+Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure re-run:
+
+- Task failure retry is at the task level and is automatically performed by the schedule system. For example, if a Shell task sets to retry for 3 times, it will try to run it again up to 3 times after the Shell task fails.
+- Process failure recovery is at the process level and is performed manually. Recovery can only be performed **from the failed node** or **from the current node**.
+- Process failure re-run is also at the process level and is performed manually; the re-run starts from the beginning node.
+
+Next to the main point, we divide the task nodes in the workflow into two types.
+
+- One is a business node, which corresponds to an actual script or process command, such as shell node, MR node, Spark node, and dependent node.
+
+- Another is a logical node, which does not operate actual script or process command, but only logical processing to the entire process flow, such as sub-process sections.
+
+Each **business node** can configure the number of failed retries. When the task node fails, it will automatically retry until it succeeds or exceeds the retry times. **Logical node** failure retry is not supported, but the tasks inside a logical node do support retry.
+
+If a task in the workflow fails and reaches the maximum retry times, the workflow fails and stops, and the failed workflow can be manually re-run or recovered.
+
+#### Task Priority Design
+
+In the early scheduling design, without a priority design and with fair scheduling, a task submitted first might finish at the same time as a task submitted later, making it impossible to set the priority of processes or tasks. So we have re-designed this, and the following is our current design:
+
+-  Task processing follows this order from high to low: **the priority of different process instances** takes precedence over **the priority within the same process instance**, which takes precedence over **the priority of tasks within the same process**, which takes precedence over **the submission order of tasks within the same process**.
+    - The specific implementation is to parse the priority according to the JSON of the task instance, and then save the **process instance priority_process instance id_task priority_task id** information to the ZooKeeper task queue. When obtaining tasks from the queue, the highest priority task can be found by comparing strings (see the sketch after this list).
+
+        - The priority of the process definition considers that some processes need to be processed before other processes. Configure the priority when the process starts or is scheduled. There are 5 levels in total: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below:
+            <p align="center">
+               <img src="https://user-images.githubusercontent.com/10797147/146744784-eb351b14-c94a-4ed6-8ba4-5132c2a3d116.png" alt="Process priority configuration"  width="40%" />
+             </p>
+
+        - The priority of the task is also divided into 5 levels, ordered as HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below:
+            <p align="center">
+               <img src="https://user-images.githubusercontent.com/10797147/146744830-5eac611f-5933-4f53-a0c6-31613c283708.png" alt="Task priority configuration"  width="35%" />
+             </p>
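+
+A minimal sketch of the string-key comparison described above (the key format and class names are assumptions for illustration, not DolphinScheduler's actual implementation):
+
+```java
+import java.util.Comparator;
+import java.util.PriorityQueue;
+
+// Illustrative only: ordering queue keys of the form
+// "processInstancePriority_processInstanceId_taskPriority_taskId" (0 = HIGHEST ... 4 = LOWEST).
+public class TaskPriorityExample {
+
+    static final Comparator<String> BY_PRIORITY = Comparator
+            .comparingInt((String key) -> Integer.parseInt(key.split("_")[0]))   // process instance priority first
+            .thenComparingInt(key -> Integer.parseInt(key.split("_")[2]));       // then task priority
+
+    public static void main(String[] args) {
+        PriorityQueue<String> queue = new PriorityQueue<>(BY_PRIORITY);
+        queue.add("2_1001_1_77");   // MEDIUM process instance, HIGH task
+        queue.add("0_1002_3_78");   // HIGHEST process instance, LOW task
+        queue.add("2_1001_0_79");   // MEDIUM process instance, HIGHEST task
+
+        System.out.println(queue.poll());   // 0_1002_3_78 -- highest process instance priority wins
+        System.out.println(queue.poll());   // 2_1001_0_79 -- same process priority, higher task priority
+    }
+}
+```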
+
+#### Logback and Netty Implement Log Access
+
+-  Since the Web (UI) and Worker are not necessarily on the same machine, viewing the logs is not like querying a local file. There are two options:
+  -  Put logs on the ES search engine.
+  -  Obtain remote log information through netty communication.
+
+-  To keep DolphinScheduler as lightweight as possible, gRPC was chosen to achieve remote access to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+- We use the customized FileAppender and Filter functions of Logback to generate one log file per task instance.
+- The following is the FileAppender implementation:
+
+```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+```
+
+Logs are generated in the form of /process definition id/process instance id/task instance id.log
+
+- Filter to match the thread name starting with TaskLogInfo:
+
+- The following shows the TaskLogFilter implementation:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+```
+
+## Sum Up
+
+From the perspective of scheduling, this article preliminarily introduces the architecture principles and implementation ideas of the big data distributed workflow scheduling system: DolphinScheduler. To be continued.
+
diff --git a/docs/docs/en/architecture/load-balance.md b/docs/docs/en/architecture/load-balance.md
new file mode 100644
index 0000000..5cf2d4e
--- /dev/null
+++ b/docs/docs/en/architecture/load-balance.md
@@ -0,0 +1,59 @@
+# Load Balance
+
+Load balancing refers to the reasonable allocation of server pressure through routing algorithms (usually in cluster environments) to achieve the maximum optimization of server performance.
+
+## DolphinScheduler-Worker Load Balancing Algorithms
+
+DolphinScheduler-Master allocates tasks to workers, and by default provides three algorithms:
+
+- Weighted random (random)
+
+- Smoothing polling (round-robin)
+
+- Linear load (lower weight)
+
+The default configuration is the linear load.
+
+Since the routing is done on the client side, that is, the master service, you can change `master.host.selector` in `master.properties` to configure the algorithm.
+
+e.g. master.host.selector=random (case-insensitive)
+
+## Worker Load Balancing Configuration
+
+The configuration file is worker.properties
+
+### Weight
+
+All the load algorithms above are weighted based on weights, which affect the routing outcome. You can set different weights for different machines by modifying the `worker.weight` value.
+
+### Preheating
+
+Considering JIT optimization, the worker runs at low power for a period of time after startup so that it can gradually reach its optimal state, a process we call preheating. If you are interested, you can read some articles about JIT.
+
+So the worker gradually reaches its maximum weight over time after startup (by default ten minutes; there is no configuration for the preheating duration, and it is recommended to submit a PR if you need to change it).
+
+## Load Balancing Algorithm in Details
+
+### Random (Weighted)
+
+This algorithm is relatively simple: a worker is selected at random, with the weight affecting the probability of selection.
+
+### Smoothed Polling (Weighted)
+
+An obvious drawback of the weighted polling algorithm is that, under particular weight combinations, it generates an imbalanced sequence of instances, and this unsmooth load may cause some instances to experience transient high load, leading to a risk of system crash. To address this scheduling flaw, we provide a smooth weighted polling algorithm.
+
+Each worker has two weight parameters: weight (which remains constant after warm-up completes) and current_weight (which changes dynamically). For every route, each worker's current_weight is increased by its weight; the weights of all workers are summed up as total_weight; then the worker with the largest current_weight is selected for this task, and the selected worker's current_weight is decreased by total_weight.
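+
+A minimal sketch of this selection step, assuming a hypothetical `Worker` holder class (an illustration of the algorithm, not DolphinScheduler's actual selector code):
+
+```java
+import java.util.List;
+
+// Illustrative smooth weighted round-robin selection over a hypothetical Worker holder.
+public class SmoothWeightedSelector {
+
+    public static class Worker {
+        final String host;
+        final int weight;          // fixed after warm-up
+        int currentWeight = 0;     // changes on every selection
+
+        Worker(String host, int weight) {
+            this.host = host;
+            this.weight = weight;
+        }
+    }
+
+    public Worker select(List<Worker> workers) {
+        int totalWeight = 0;
+        Worker best = null;
+        for (Worker w : workers) {
+            w.currentWeight += w.weight;           // step 1: raise every current_weight by its weight
+            totalWeight += w.weight;               // step 2: accumulate total_weight
+            if (best == null || w.currentWeight > best.currentWeight) {
+                best = w;                          // step 3: pick the largest current_weight
+            }
+        }
+        if (best != null) {
+            best.currentWeight -= totalWeight;     // step 4: penalize the chosen worker
+        }
+        return best;
+    }
+}
+```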
+
+### Linear Weighting (Default Algorithm)
+
+With this algorithm, each worker reports its own load information to the registry at regular intervals, and the decision is made based on two main pieces of information:
+
+- load average (default is the number of CPU cores * 2)
+- available physical memory (default is 0.3, in G)
+
+If either threshold is violated (the load average exceeds the configured maximum, or the available physical memory falls below the configured reserved amount), the worker will not participate in load balancing (no traffic will be allocated to it).
+
+You can customise the configuration by changing the following properties in worker.properties
+
+- worker.max.cpuload.avg=-1 (worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2)
+- worker.reserved.memory=0.3 (worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G)
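+
+A hedged sketch of this eligibility check (field values and the way the current load and memory are obtained are assumptions for illustration):
+
+```java
+import java.lang.management.ManagementFactory;
+import java.lang.management.OperatingSystemMXBean;
+
+// Illustrative check: a worker is skipped when its load average exceeds the configured maximum
+// or its available physical memory drops below the configured reserved amount.
+public class WorkerEligibilityExample {
+
+    public static boolean canBeDispatched(double maxCpuLoadAvg, double reservedMemoryG,
+                                          double currentLoadAvg, double availableMemoryG) {
+        return currentLoadAvg < maxCpuLoadAvg && availableMemoryG > reservedMemoryG;
+    }
+
+    public static void main(String[] args) {
+        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
+        double loadAvg = os.getSystemLoadAverage();                                   // current load average
+        double maxCpuLoadAvg = Runtime.getRuntime().availableProcessors() * 2.0;      // default: CPU cores * 2
+        double reservedMemoryG = 0.3;                                                 // default reserved memory, in G
+        double availableMemoryG = 8.0;                                                // placeholder: read from the OS in practice
+
+        System.out.println(canBeDispatched(maxCpuLoadAvg, reservedMemoryG, loadAvg, availableMemoryG));
+    }
+}
+```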
diff --git a/docs/docs/en/architecture/metadata.md b/docs/docs/en/architecture/metadata.md
new file mode 100644
index 0000000..54ebc56
--- /dev/null
+++ b/docs/docs/en/architecture/metadata.md
@@ -0,0 +1,193 @@
+# MetaData
+
+## DolphinScheduler DB Table Overview
+
+| Table Name | Comment |
+| :---: | :---: |
+| t_ds_access_token | token for access DolphinScheduler backend |
+| t_ds_alert | alert detail |
+| t_ds_alertgroup | alert group |
+| t_ds_command | command detail |
+| t_ds_datasource | data source |
+| t_ds_error_command | error command detail |
+| t_ds_process_definition | process definition |
+| t_ds_process_instance | process instance |
+| t_ds_project | project |
+| t_ds_queue | queue |
+| t_ds_relation_datasource_user | datasource related to user |
+| t_ds_relation_process_instance | sub process |
+| t_ds_relation_project_user | project related to user |
+| t_ds_relation_resources_user | resource related to user |
+| t_ds_relation_udfs_user | UDF functions related to user |
+| t_ds_relation_user_alertgroup | alert group related to user |
+| t_ds_resources | resource center file |
+| t_ds_schedules | process definition schedule |
+| t_ds_session | user login session |
+| t_ds_task_instance | task instance |
+| t_ds_tenant | tenant |
+| t_ds_udfs | UDF resource |
+| t_ds_user | user detail |
+| t_ds_version | DolphinScheduler version |
+
+
+---
+
+## E-R Diagram
+
+### User Queue DataSource
+
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- One tenant can own multiple users.
+- The queue field in the t_ds_user table stores the queue_name information from the t_ds_queue table, while t_ds_tenant stores queue information using the queue_id column. During the execution of a process definition, the user queue has the highest priority; if the user queue is null, the tenant queue is used.
+- The user_id field in the t_ds_datasource table represents the user who created the data source, and the user_id in t_ds_relation_datasource_user represents a user who has permission to the data source.
+  
+### Project Resource Alert
+
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- A user can have multiple projects; project authorization binds the relationship using project_id and user_id in the t_ds_relation_project_user table.
+- The user_id in the t_ds_project table represents the user who created the project, and the user_id in the t_ds_relation_project_user table represents users who have permission to the project.
+- The user_id in the t_ds_resources table represents the user who created the resource, and the user_id in t_ds_relation_resources_user represents a user who has permission to the resource.
+- The user_id in the t_ds_udfs table represents the user who created the UDF, and the user_id in the t_ds_relation_udfs_user table represents a user who has permission to the UDF.
+  
+### Command Process Task
+
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
+
+- A project has multiple process definitions, a process definition can generate multiple process instances, and a process instance can generate multiple task instances.
+- The t_ds_schedules table stores the scheduled time information for process definitions.
+- The data stored in the t_ds_relation_process_instance table is used to deal with the sub-processes of a process definition, parent_process_instance_id field represents the id of the main process instance who contains child processes, process_instance_id field represents the id of the sub-process instance, parent_task_instance_id field represents the task instance id of the sub-process node.
+- The process instance table and the task instance table correspond to the t_ds_process_instance table and the t_ds_task_instance table, respectively.
+
+---
+
+## Core Table Schema
+
+### t_ds_process_definition
+
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process definition name |
+| version | int | process definition version |
+| release_state | tinyint | process definition release state:0:offline,1:online |
+| project_id | int | project id |
+| user_id | int | process definition creator id |
+| process_definition_json | longtext | process definition JSON content |
+| description | text | process definition description |
+| global_params | text | global parameters |
+| flag | tinyint | whether process available: 0 not available, 1 available |
+| locations | text | Node location information |
+| connects | text | Node connection information |
+| receivers | text | receivers |
+| receivers_cc | text | carbon copy list |
+| create_time | datetime | create time |
+| timeout | int | timeout |
+| tenant_id | int | tenant id |
+| update_time | datetime | update time |
+| modify_by | varchar | define user modify the process |
+| resource_ids | varchar | resource id set |
+
+### t_ds_process_instance
+
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | process instance name |
+| process_definition_id | int | process definition id |
+| state | tinyint | process instance Status: 0 successful commit, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| recovery | tinyint | process instance failover flag:0: normal,1: failover instance needs restart |
+| start_time | datetime | process instance start time |
+| end_time | datetime | process instance end time |
+| run_times | int | process instance run times |
+| host | varchar | process instance host |
+| command_type | tinyint | command type:0 start ,1 start from the current node,2 resume a fault-tolerant process,3 resume from pause process, 4 execute from the failed node,5 complement, 6 dispatch, 7 re-run, 8 pause, 9 stop, 10 resume waiting thread |
+| command_param | text | JSON command parameters |
+| task_depend_type | tinyint | node dependency type: 0 current node, 1 forward, 2 backward |
+| max_try_times | tinyint | max try times |
+| failure_strategy | tinyint | failure strategy, 0: end the process when node failed,1: continue run the other nodes when failed |
+| warning_type | tinyint | warning type 0: no warning, 1: warning if process success, 2: warning if process failed, 3: warning whatever results |
+| warning_group_id | int | warning group id |
+| schedule_time | datetime | schedule time |
+| command_start_time | datetime | command start time |
+| global_params | text | global parameters |
+| process_instance_json | longtext | process instance JSON |
+| flag | tinyint | whether process instance is available: 0 not available, 1 available |
+| update_time | timestamp | update time |
+| is_sub_process | int | whether the process is sub process: 1 sub-process, 0 not sub-process |
+| executor_id | int | executor id |
+| locations | text | node location information |
+| connects | text | node connection information |
+| history_cmd | text | history commands; records all the commands of an instance |
+| dependence_schedule_times | text | depend schedule estimate time |
+| process_instance_priority | int | process instance priority. 0 highest,1 high,2 medium,3 low,4 lowest |
+| worker_group | varchar | worker group who assign the task |
+| timeout | int | timeout |
+| tenant_id | int | tenant id |
+
+### t_ds_task_instance
+
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| name | varchar | task name |
+| task_type | varchar | task type |
+| process_definition_id | int | process definition id |
+| process_instance_id | int | process instance id |
+| task_json | longtext | task content JSON |
+| state | tinyint | Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete |
+| submit_time | datetime | task submit time |
+| start_time | datetime | task start time |
+| end_time | datetime | task end time |
+| host | varchar | host of task running on |
+| execute_path | varchar | task execute path in the host |
+| log_path | varchar | task log path |
+| alert_flag | tinyint | whether alert |
+| retry_times | int | task retry times |
+| pid | int | pid of task |
+| app_link | varchar | Yarn app id |
+| flag | tinyint | task instance is available : 0 not available, 1 available |
+| retry_interval | int | retry interval when task failed |
+| max_retry_times | int | max retry times |
+| task_instance_priority | int | task instance priority:0 highest,1 high,2 medium,3 low,4 lowest |
+| worker_group | varchar | worker group who assign the task |
+
+### t_ds_schedules
+
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| process_definition_id | int | process definition id |
+| start_time | datetime | schedule start time |
+| end_time | datetime | schedule end time |
+| crontab | varchar | crontab expression |
+| failure_strategy | tinyint | failure strategy: 0 end, 1 continue |
+| user_id | int | user id |
+| release_state | tinyint | release status: 0 not yet released, 1 released |
+| warning_type | tinyint | warning type: 0 no warning, 1 warn if the process succeeds, 2 warn if the process fails, 3 warn regardless of the result |
+| warning_group_id | int | warning group id |
+| process_instance_priority | int | process instance priority: 0 highest, 1 high, 2 medium, 3 low, 4 lowest |
+| worker_group | varchar | worker group that the task is assigned to |
+| create_time | datetime | create time |
+| update_time | datetime | update time |
+
+### t_ds_command
+
+| Field | Type | Comment |
+| --- | --- | --- |
+| id | int | primary key |
+| command_type | tinyint | command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume paused process, 4 start execution from failed node, 5 complement, 6 schedule, 7 re-run, 8 pause, 9 stop, 10 resume waiting thread |
+| process_definition_id | int | process definition id |
+| command_param | text | JSON command parameters |
+| task_depend_type | tinyint | node dependency type: 0 current node, 1 forward, 2 backward |
+| failure_strategy | tinyint | failure strategy: 0 end, 1 continue |
+| warning_type | tinyint | warning type: 0 no warning, 1 warn if the process succeeds, 2 warn if the process fails, 3 warn regardless of the result |
+| warning_group_id | int | warning group id |
+| schedule_time | datetime | schedule time |
+| start_time | datetime | start time |
+| executor_id | int | executor id |
+| dependence | varchar | dependence information |
+| update_time | datetime | update time |
+| process_instance_priority | int | process instance priority: 0 highest, 1 high, 2 medium, 3 low, 4 lowest |
+| worker_group_id | int | worker group that the task is assigned to |
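+
+As a quick illustration of how these tables are used, the sketch below lists the most recently issued commands; it only touches columns documented above and assumes direct read access to the DolphinScheduler metadata database:
+
+```sql
+-- ten most recent commands and the process definitions they target
+SELECT id,
+       command_type,
+       process_definition_id,
+       schedule_time,
+       start_time,
+       process_instance_priority
+FROM t_ds_command
+ORDER BY start_time DESC
+LIMIT 10;
+```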
\ No newline at end of file
diff --git a/docs/docs/en/architecture/task-structure.md b/docs/docs/en/architecture/task-structure.md
new file mode 100644
index 0000000..bd64824
--- /dev/null
+++ b/docs/docs/en/architecture/task-structure.md
@@ -0,0 +1,1114 @@
+# Task Structure
+
+## Overall Tasks Storage Structure
+
+All tasks in DolphinScheduler are saved in the `t_ds_process_definition` table.
+
+The following shows the `t_ds_process_definition` table structure:
+
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|id|int(11)|primary key
+2|name|varchar(255)|process definition name
+3|version|int(11)|process definition version
+4|release_state|tinyint(4)|release status of process definition: 0 not released, 1 released
+5|project_id|int(11)|project id
+6|user_id|int(11)|user id of the process definition
+7|process_definition_json|longtext|process definition JSON
+8|description|text|process definition description
+9|global_params|text|global parameters
+10|flag|tinyint(4)|specify whether the process is available: 0 is not available, 1 is available
+11|locations|text|node location information
+12|connects|text|node connectivity info
+13|receivers|text|receivers
+14|receivers_cc|text|CC receivers
+15|create_time|datetime|create time
+16|timeout|int(11) |timeout
+17|tenant_id|int(11) |tenant id
+18|update_time|datetime|update time
+19|modify_by|varchar(36)|specify the user that made the modification
+20|resource_ids|varchar(255)|resource ids
+
+The `process_definition_json` field is the core field, which defines the task information in the DAG diagram, and it is stored in JSON format.
+
+The following table describes the common data structure.
+No. | field  | type  |  description
+-------- | ---------| -------- | ---------
+1|globalParams|Array|global parameters
+2|tasks|Array|task collections in the process [for the structure of each type, please refer to the following sections]
+3|tenantId|int|tenant ID
+4|timeout|int|timeout
+
+Data example:
+```bash
+{
+    "globalParams":[
+        {
+            "prop":"global_bizdate",
+            "direct":"IN",
+            "type":"VARCHAR",
+            "value":"${system.biz.date}"
+        }
+    ],
+    "tasks":Array[1],
+    "tenantId":0,
+    "timeout":0
+}
+```
+
+## The Detailed Explanation of The Storage Structure of Each Task Type
+
+### Shell Nodes
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type | |String |task type |SHELL
+3| name| |String|task name |
+4| params| |Object|customized parameters |JSON format
+5| |rawScript |String| Shell script |
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch | |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks | |
+
+**Node data example:**
+
+```bash
+{
+    "type":"SHELL",
+    "id":"tasks-80760",
+    "name":"Shell Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"echo "This is a shell script""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### SQL Node
+
+Perform data query and update operations on the specified datasource through SQL.
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |note
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task id|
+2|type ||String |task type |SQL
+3| name| |String|task name|
+4| params| |Object|customized parameters|JSON format
+5| |type |String |database type
+6| |datasource |Int |datasource id
+7| |sql |String |query SQL statement
+8| |udfs | String| udf functions|specify UDF function ids, separate by comma
+9| |sqlType | String| SQL node type |0 for query and 1 for non-query SQL
+10| |title |String | mail title
+11| |receivers |String |receivers
+12| |receiversCc |String |CC receivers
+13| |showType | String|display type of mail|options: TABLE or ATTACHMENT
+14| |connParams | String|connect parameters
+15| |preStatements | Array|preposition SQL statements
+16| | postStatements| Array|post-position SQL statements||
+17| | localParams| Array|customized parameters||
+18|description | |String|description | |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch  | |
+21| | successNode| Array|jump to node if success| |
+22| | failedNode|Array|jump to node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times | |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority | |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks | |
+
+**Node data example:**
+
+```bash
+{
+    "type":"SQL",
+    "id":"tasks-95648",
+    "name":"SqlTask-Query",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "sql":"select id , namge , age from emp where id =  ${id}",
+        "udfs":"",
+        "sqlType":"0",
+        "title":"xxxx@xxx.com",
+        "receivers":"xxxx@xxx.com",
+        "receiversCc":"",
+        "showType":"TABLE",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ],
+        "connParams":"",
+        "preStatements":[
+            "insert into emp ( id,name ) value (1,'Li' )"
+        ],
+        "postStatements":[
+
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### Procedure [stored procedures] Node
+
+**The following shows the node data structure:**
+**Node data example:**
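+
+The upstream document leaves this section empty. As a placeholder, the sketch below follows the pattern of the other node types; the `type`, `datasource` and `method` parameter names are assumptions and should be checked against your DolphinScheduler version:
+
+```bash
+{
+    "type":"PROCEDURE",
+    "id":"tasks-33589",
+    "name":"ProcedureTask",
+    "params":{
+        "type":"MYSQL",
+        "datasource":1,
+        "method":"{call query_user(${id})}",
+        "localParams":[
+            {
+                "prop":"id",
+                "direct":"IN",
+                "type":"INTEGER",
+                "value":"1"
+            }
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```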
+
+### Spark Node
+
+**The following shows the node data structure:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |SPARK
+3| name| |String|task name |
+4| params| |Object|customized parameters |JSON format
+5| |mainClass |String | main class
+6| |mainArgs | String| execution arguments
+7| |others | String| other arguments
+8| |mainJar |Object | application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |driverCores | String| driver cores
+11| |driverMemory | String| driver memory
+12| |numExecutors |String | executor count
+13| |executorMemory |String | executor memory
+14| |executorCores |String | executor cores
+15| |programType | String| program type|JAVA,SCALA,PYTHON
+16| | sparkVersion| String|	Spark version| SPARK1 , SPARK2
+17| | localParams| Array|customized local parameters
+18| | resourceList| Array|resource files
+19|description | |String|description | |
+20|runFlag | |String |execution flag| |
+21|conditionResult | |Object|condition branch| |
+22| | successNode| Array|jump to node if success| |
+23| | failedNode|Array|jump to node if failure| 
+24| dependence| |Object |task dependency |mutual exclusion with params
+25|maxRetryTimes | |String|max retry times | |
+26|retryInterval | |String |retry interval| |
+27|timeout | |Object|timeout | |
+28| taskInstancePriority| |String|task priority | |
+29|workerGroup | |String |Worker group| |
+30|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"SPARK",
+    "id":"tasks-87430",
+    "name":"SparkTask",
+    "params":{
+        "mainClass":"org.apache.spark.examples.SparkPi",
+        "mainJar":{
+            "id":4
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "driverCores":1,
+        "driverMemory":"512M",
+        "numExecutors":2,
+        "executorMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"10",
+        "others":"",
+        "programType":"SCALA",
+        "sparkVersion":"SPARK2"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### MapReduce(MR) Node
+
+**The following shows the node data structure:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type |MR
+3| name| |String|task name |
+4| params| |Object|customized parameters |JSON format
+5| |mainClass |String | main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object | application jar package
+9| |programType | String|program type|JAVA,PYTHON
+10| | localParams| Array|customized local parameters
+11| | resourceList| Array|resource files
+12|description | |String|description | |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump to node if success| |
+16| | failedNode|Array|jump to node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times | |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"MR",
+    "id":"tasks-28997",
+    "name":"MRTask",
+    "params":{
+        "mainClass":"wordcount",
+        "mainJar":{
+            "id":5
+        },
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "mainArgs":"/tmp/wordcount/input /tmp/wordcount/output/",
+        "others":"",
+        "programType":"JAVA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### Python Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|  task Id|
+2|type ||String |task type|PYTHON
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |rawScript |String| Python script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description | |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure | 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14|maxRetryTimes | |String|max retry times | |
+15|retryInterval | |String |retry interval| |
+16|timeout | |Object|timeout | |
+17| taskInstancePriority| |String|task priority | |
+18|workerGroup | |String |Worker group| |
+19|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"PYTHON",
+    "id":"tasks-5463",
+    "name":"Python Task",
+    "params":{
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "rawScript":"print("This is a python script")"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### Flink Node
+
+**The following shows the node data structure:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|FLINK
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |mainClass |String |main class
+6| |mainArgs | String|execution arguments
+7| |others | String|other arguments
+8| |mainJar |Object |application jar package
+9| |deployMode |String |deployment mode |local,client,cluster
+10| |slot | String| slot count
+11| |taskManager |String | taskManager count
+12| |taskManagerMemory |String |taskManager memory size
+13| |jobManagerMemory |String | jobManager memory size
+14| |programType | String| program type|JAVA,SCALA,PYTHON
+15| | localParams| Array|local parameters
+16| | resourceList| Array|resource files
+17|description | |String|description | |
+18|runFlag | |String |execution flag| |
+19|conditionResult | |Object|condition branch| |
+20| | successNode| Array|jump node if success| |
+21| | failedNode|Array|jump node if failure| 
+22| dependence| |Object |task dependency |mutual exclusion with params
+23|maxRetryTimes | |String|max retry times| |
+24|retryInterval | |String |retry interval| |
+25|timeout | |Object|timeout | |
+26| taskInstancePriority| |String|task priority| |
+27|workerGroup | |String |Worker group| |
+28|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"FLINK",
+    "id":"tasks-17135",
+    "name":"FlinkTask",
+    "params":{
+        "mainClass":"com.flink.demo",
+        "mainJar":{
+            "id":6
+        },
+        "deployMode":"cluster",
+        "resourceList":[
+            {
+                "id":3,
+                "name":"run.sh",
+                "res":"run.sh"
+            }
+        ],
+        "localParams":[
+
+        ],
+        "slot":1,
+        "taskManager":"2",
+        "jobManagerMemory":"1G",
+        "taskManagerMemory":"2G",
+        "executorCores":2,
+        "mainArgs":"100",
+        "others":"",
+        "programType":"SCALA"
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### HTTP Node
+
+**The following shows the node data structure:**
+
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task Id|
+2|type ||String |task type|HTTP
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |url |String |request url
+6| |httpMethod | String|http method|GET,POST,HEAD,PUT,DELETE
+7| | httpParams| Array|http parameters
+8| |httpCheckCondition | String|validation of HTTP code status|default code 200
+9| |condition |String |validation conditions
+10| | localParams| Array|customized local parameters
+11|description | |String|description| |
+12|runFlag | |String |execution flag| |
+13|conditionResult | |Object|condition branch| |
+14| | successNode| Array|jump node if success| |
+15| | failedNode|Array|jump node if failure| 
+16| dependence| |Object |task dependency |mutual exclusion with params
+17|maxRetryTimes | |String|max retry times | |
+18|retryInterval | |String |retry interval| |
+19|timeout | |Object|timeout | |
+20| taskInstancePriority| |String|task priority| |
+21|workerGroup | |String |Worker group| |
+22|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"HTTP",
+    "id":"tasks-60499",
+    "name":"HttpTask",
+    "params":{
+        "localParams":[
+
+        ],
+        "httpParams":[
+            {
+                "prop":"id",
+                "httpParametersType":"PARAMETER",
+                "value":"1"
+            },
+            {
+                "prop":"name",
+                "httpParametersType":"PARAMETER",
+                "value":"Bo"
+            }
+        ],
+        "url":"https://www.xxxxx.com:9012",
+        "httpMethod":"POST",
+        "httpCheckCondition":"STATUS_CODE_DEFAULT",
+        "condition":""
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### DataX Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task Id|
+2|type ||String |task type|DATAX
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |customConfig |Int |specify whether use customized config| 0 none customized, 1 customized
+6| |dsType |String | datasource type
+7| |dataSource |Int | datasource ID
+8| |dtType | String|target database type
+9| |dataTarget | Int|target database ID 
+10| |sql |String | SQL statements
+11| |targetTable |String |target table
+12| |jobSpeedByte |Int |job speed limiting(bytes)
+13| |jobSpeedRecord | Int|job speed limiting(records)
+14| |preStatements | Array|preposition SQL
+15| | postStatements| Array|post-position SQL
+16| | json| String|customized configs|valid if customConfig=1
+17| | localParams| Array|customized parameters|valid if customConfig=1
+18|description | |String|description| |
+19|runFlag | |String |execution flag| |
+20|conditionResult | |Object|condition branch| |
+21| | successNode| Array|jump node if success| |
+22| | failedNode|Array|jump node if failure| 
+23| dependence| |Object |task dependency |mutual exclusion with params
+24|maxRetryTimes | |String|max retry times| |
+25|retryInterval | |String |retry interval| |
+26|timeout | |Object|timeout | |
+27| taskInstancePriority| |String|task priority| |
+28|workerGroup | |String |Worker group| |
+29|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"DATAX",
+    "id":"tasks-91196",
+    "name":"DataxTask-DB",
+    "params":{
+        "customConfig":0,
+        "dsType":"MYSQL",
+        "dataSource":1,
+        "dtType":"MYSQL",
+        "dataTarget":1,
+        "sql":"select id, name ,age from user ",
+        "targetTable":"emp",
+        "jobSpeedByte":524288,
+        "jobSpeedRecord":500,
+        "preStatements":[
+            "truncate table emp "
+        ],
+        "postStatements":[
+            "truncate table user"
+        ]
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            ""
+        ],
+        "failedNode":[
+            ""
+        ]
+    },
+    "dependence":{
+
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+
+    ]
+}
+```
+
+### Sqoop Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String|task ID|
+2|type ||String |task type|SQOOP
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| | concurrency| Int|concurrency rate
+6| | modelType|String |flow direction|import,export
+7| |sourceType|String |datasource type|
+8| |sourceParams |String|datasource parameters| JSON format
+9| | targetType|String |target datasource type
+10| |targetParams | String|target datasource parameters|JSON format
+11| |localParams |Array |customized local parameters
+12|description | |String|description| |
+13|runFlag | |String |execution flag| |
+14|conditionResult | |Object|condition branch| |
+15| | successNode| Array|jump node if success| |
+16| | failedNode|Array|jump node if failure| 
+17| dependence| |Object |task dependency |mutual exclusion with params
+18|maxRetryTimes | |String|max retry times| |
+19|retryInterval | |String |retry interval| |
+20|timeout | |Object|timeout | |
+21| taskInstancePriority| |String|task priority| |
+22|workerGroup | |String |Worker group| |
+23|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+            "type":"SQOOP",
+            "id":"tasks-82041",
+            "name":"Sqoop Task",
+            "params":{
+                "concurrency":1,
+                "modelType":"import",
+                "sourceType":"MYSQL",
+                "targetType":"HDFS",
+                "sourceParams":"{"srcType":"MYSQL","srcDatasource":1,"srcTable":"","srcQueryType":"1","srcQuerySql":"selec id , name from user","srcColumnType":"0","srcColumns":"","srcConditionList":[],"mapColumnHive":[{"prop":"hivetype-key","direct":"IN","type":"VARCHAR","value":"hivetype-value"}],"mapColumnJava":[{"prop":"javatype-key","direct":"IN","type":"VARCHAR","value":"javatype-value"}]}",
+                "targetParams":"{"targetPath":"/user/hive/warehouse/ods.db/user","deleteTargetDir":false,"fileType":"--as-avrodatafile","compressionCodec":"snappy","fieldsTerminated":",","linesTerminated":"@"}",
+                "localParams":[
+
+                ]
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+### Condition Branch Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type |SHELL
+3| name| |String|task name |
+4| params| |Object|customized parameters | null
+5|description | |String|description| |
+6|runFlag | |String |execution flag| |
+7|conditionResult | |Object|condition branch | |
+8| | successNode| Array|jump to node if success| |
+9| | failedNode|Array|jump to node if failure| 
+10| dependence| |Object |task dependency |mutual exclusion with params
+11|maxRetryTimes | |String|max retry times | |
+12|retryInterval | |String |retry interval| |
+13|timeout | |Object|timeout | |
+14| taskInstancePriority| |String|task priority | |
+15|workerGroup | |String |Worker group| |
+16|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+    "type":"CONDITIONS",
+    "id":"tasks-96189",
+    "name":"条件",
+    "params":{
+
+    },
+    "description":"",
+    "runFlag":"NORMAL",
+    "conditionResult":{
+        "successNode":[
+            "test04"
+        ],
+        "failedNode":[
+            "test05"
+        ]
+    },
+    "dependence":{
+        "relation":"AND",
+        "dependTaskList":[
+
+        ]
+    },
+    "maxRetryTimes":"0",
+    "retryInterval":"1",
+    "timeout":{
+        "strategy":"",
+        "interval":null,
+        "enable":false
+    },
+    "taskInstancePriority":"MEDIUM",
+    "workerGroup":"default",
+    "preTasks":[
+        "test01",
+        "test02"
+    ]
+}
+```
+
+### Subprocess Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|SHELL
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |processDefinitionId |Int| process definition ID
+6|description | |String|description | |
+7|runFlag | |String |execution flag| |
+8|conditionResult | |Object|condition branch | |
+9| | successNode| Array|jump to node if success| |
+10| | failedNode|Array|jump to node if failure| 
+11| dependence| |Object |task dependency |mutual exclusion with params
+12|maxRetryTimes | |String|max retry times| |
+13|retryInterval | |String |retry interval| |
+14|timeout | |Object|timeout| |
+15| taskInstancePriority| |String|task priority| |
+16|workerGroup | |String |Worker group| |
+17|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+            "type":"SUB_PROCESS",
+            "id":"tasks-14806",
+            "name":"SubProcessTask",
+            "params":{
+                "processDefinitionId":2
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+
+            },
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
+
+### DEPENDENT Node
+
+**The following shows the node data structure:**
+No.|parameter name||type|description |notes
+-------- | ---------| ---------| -------- | --------- | ---------
+1|id | |String| task ID|
+2|type ||String |task type|DEPENDENT
+3| name| |String|task name|
+4| params| |Object|customized parameters |JSON format
+5| |rawScript |String|Shell script|
+6| | localParams| Array|customized local parameters||
+7| | resourceList| Array|resource files||
+8|description | |String|description| |
+9|runFlag | |String |execution flag| |
+10|conditionResult | |Object|condition branch| |
+11| | successNode| Array|jump to node if success| |
+12| | failedNode|Array|jump to node if failure| 
+13| dependence| |Object |task dependency |mutual exclusion with params
+14| | relation|String |relation|AND,OR
+15| | dependTaskList|Array |dependent task list|
+16|maxRetryTimes | |String|max retry times| |
+17|retryInterval | |String |retry interval| |
+18|timeout | |Object|timeout| |
+19| taskInstancePriority| |String|task priority| |
+20|workerGroup | |String |Worker group| |
+21|preTasks | |Array|preposition tasks| |
+
+**Node data example:**
+
+```bash
+{
+            "type":"DEPENDENT",
+            "id":"tasks-57057",
+            "name":"DenpendentTask",
+            "params":{
+
+            },
+            "description":"",
+            "runFlag":"NORMAL",
+            "conditionResult":{
+                "successNode":[
+                    ""
+                ],
+                "failedNode":[
+                    ""
+                ]
+            },
+            "dependence":{
+                "relation":"AND",
+                "dependTaskList":[
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":7,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"ALL",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    },
+                    {
+                        "relation":"AND",
+                        "dependItemList":[
+                            {
+                                "projectId":1,
+                                "definitionId":5,
+                                "definitionList":[
+                                    {
+                                        "value":8,
+                                        "label":"MRTask"
+                                    },
+                                    {
+                                        "value":7,
+                                        "label":"FlinkTask"
+                                    },
+                                    {
+                                        "value":6,
+                                        "label":"SparkTask"
+                                    },
+                                    {
+                                        "value":5,
+                                        "label":"SqlTask-Update"
+                                    },
+                                    {
+                                        "value":4,
+                                        "label":"SqlTask-Query"
+                                    },
+                                    {
+                                        "value":3,
+                                        "label":"SubProcessTask"
+                                    },
+                                    {
+                                        "value":2,
+                                        "label":"Python Task"
+                                    },
+                                    {
+                                        "value":1,
+                                        "label":"Shell Task"
+                                    }
+                                ],
+                                "depTasks":"SqlTask-Update",
+                                "cycle":"day",
+                                "dateValue":"today"
+                            }
+                        ]
+                    }
+                ]
+            },
+            "maxRetryTimes":"0",
+            "retryInterval":"1",
+            "timeout":{
+                "strategy":"",
+                "interval":null,
+                "enable":false
+            },
+            "taskInstancePriority":"MEDIUM",
+            "workerGroup":"default",
+            "preTasks":[
+
+            ]
+        }
+```
\ No newline at end of file
diff --git a/docs/docs/en/faq.md b/docs/docs/en/faq.md
new file mode 100644
index 0000000..45b9e11
--- /dev/null
+++ b/docs/docs/en/faq.md
@@ -0,0 +1,689 @@
+<!-- markdown-link-check-disable -->
+## Q: What is the name of the project?
+
+A: DolphinScheduler
+
+---
+
+## Q: What services does DolphinScheduler consist of, and how much memory is recommended?
+
+A: DolphinScheduler consists of the following services: MasterServer, WorkerServer, ApiServer, AlertServer, LoggerServer and UI.
+
+| Service                   | Description                                                  |
+| ------------------------- | ------------------------------------------------------------ |
+| MasterServer              | Mainly responsible for splitting the **DAG** and monitoring task status |
+| WorkerServer/LoggerServer | Mainly responsible for submitting and executing tasks and updating task status. LoggerServer is used by the Rest API to view logs over **RPC** |
+| ApiServer                 | Provides the Rest API consumed by the UI                     |
+| AlertServer               | Provides the alerting service                                |
+| UI                        | Front-end pages                                              |
+
+Note: **since there are quite a few services, a single-machine deployment should preferably have at least 4 cores and 16G of memory**
+
+---
+
+
+## Q: Which mail providers are supported?
+
+A: Most mail providers are supported, including qq, 163, 126, 139, outlook, aliyun and so on. Both **TLS and SSL** protocols are supported and can be configured as needed in alert.properties
+
+---
+
+## Q: What are the commonly used built-in time parameters, and how are they used?
+
+A: Please refer to section 8 of the [user manual](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/guide/parameter/built-in.html)
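+
+For example (a sketch based on the built-in parameters described in that manual; verify the exact parameter names against your version), a shell task can reference the business date like this:
+
+```shell
+# ${system.biz.date}: built-in business date (the day before the schedule time, yyyyMMdd)
+# $[yyyy-MM-dd]: custom date format derived from the schedule time
+echo "business date: ${system.biz.date}"
+echo "today:         $[yyyy-MM-dd]"
+```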
+
+---
+
+## Q: `pip install kazoo` reports an error. Is it required?
+
+A: It is used by Python to connect to Zookeeper in order to delete the master/worker ephemeral node information in Zookeeper. For a first-time installation the error can be ignored. Since 1.3.0, kazoo is no longer needed; its functionality has been replaced by our own code
+
+---
+
+## Q: How to make a task run on a specific machine
+
+A: Use an **administrator** account to create a Worker group, then **specify the Worker group** when **starting the process definition** or **on the task node**. If no group is specified, the Default group is used; **the Default group randomly picks one of all Workers in the cluster to submit and execute the task**
+
+---
+
+## Q: Task priority
+
+A: We support **priorities for both processes and tasks**. There are five levels: **HIGHEST, HIGH, MEDIUM, LOW and LOWEST**. **You can set priorities between different process instances, and between different task instances within the same process instance**. For details please refer to the task priority design: https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.html#%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1
+
+----
+
+## Q: dolphinscheduler-grpc reports an error
+
+A: For version 1.2 and earlier, run in the project root directory: mvn -U clean package assembly:assembly -Dmaven.test.skip=true, then refresh the whole project. Since 1.3, GRPC is no longer used for communication
+
+----
+
+## Q: Does DolphinScheduler run on Windows?
+
+A: In theory, **only the Worker needs to run on Linux**; the other services can run on Windows. Nevertheless it is recommended to deploy on Linux
+
+-----
+
+## Q: Compiling the UI on Linux, node-sass reports: Error:EACCESS:permission denied,mkdir xxxx
+
+A: Install node-sass separately with **npm install node-sass --unsafe-perm**, then run **npm install**
+
+---
+
+## Q: The UI cannot be logged into or accessed
+
+A: 1. If the UI is started with node, check whether the API_BASE setting in the .env file under dolphinscheduler-ui points to the Api Server address
+
+2. If the UI is served by nginx and was installed via **install-dolphinscheduler-ui.sh**, check whether the proxy_pass setting in **/etc/nginx/conf.d/dolphinscheduler.conf** points to the Api Server address
+
+3. If the above settings are correct, check whether the Api Server itself is healthy: run curl http://localhost:12345/dolphinscheduler/users/get-user-info and check the Api Server log; if it logs cn.dolphinscheduler.api.interceptor.LoginHandlerInterceptor:[76] - session info is null, the Api Server is working
+
+4. If none of the above reveals a problem, check whether **server.context-path and server.port** in **application.properties** are configured correctly
+
+Note: version 1.3 serves the front-end code directly with Jetty, so installing and configuring nginx is no longer necessary
+
+---
+
+## Q: No process instance is generated after a process definition is started manually or by the scheduler
+
+A: 1. First check with **jps whether the MasterServer process exists**, or check directly in the service monitor whether a master service is registered in zk
+
+2. If a master service exists, check the **command status statistics** or whether a new record was added to **t_ds_error_command**; if so, **check the message field to locate the cause of the failure**
+
+---
+
+## Q: A task stays in the "submit success" state
+
+A: 1. First check with **jps whether the WorkerServer process exists**, or check directly in the service monitor whether a worker service is registered in zk
+
+2. If the **WorkerServer** is healthy, **check whether the MasterServer put the task into the zk queue**; **check the MasterServer log and whether tasks are blocked in the zk queue**
+
+3. If none of the above reveals a problem, check whether a Worker group was specified whose **machines are not online**
+
+---
+
+## Q: Things to watch out for in install.sh
+
+A: 1. If a replacement value contains special characters, **escape them with the \ escape character**
+
+2. installPath="/data1_1T/dolphinscheduler": **this directory must not be the same directory the install.sh being run lives in**
+
+3. deployUser="dolphinscheduler": **the deployment user must have sudo privileges**, because the worker runs tasks via sudo -u tenant sh xxx.command
+
+4. monitorServerState="false": whether to start the service-monitoring script; it is disabled by default. **If enabled, it checks every 5 minutes whether the master and worker services are down and restarts them automatically**
+
+5. hdfsStartupSate="false": whether to enable HDFS resource uploading. It is disabled by default; **if disabled, the resource center cannot be used**. If enabled, configure fs.defaultFS and the Yarn settings in conf/common/hadoop/hadoop.properties; when using namenode HA, copy core-site.xml and hdfs-site.xml into the conf root directory (see the configuration sketch after this list)
+
+Note: **versions 1.0.x do not create the HDFS root directory automatically; create it yourself, and the deployment user needs HDFS operation permissions**
+
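+Putting the items above together, the relevant block of the one-click install configuration typically looks like the sketch below (variable names as mentioned above; the values are placeholders, not recommendations):
+
+```shell
+# excerpt from the install configuration (sketch)
+installPath="/data1_1T/dolphinscheduler"   # must not be the directory install.sh itself is run from
+deployUser="dolphinscheduler"              # deployment user, must have sudo privileges
+monitorServerState="false"                 # "true" restarts master/worker automatically every 5 minutes
+hdfsStartupSate="false"                    # "true" enables the HDFS-backed resource center
+```
+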
+---
+
+## Q: A process definition or process instance cannot be taken offline
+
+A: For versions **before 1.0.4**, modify the code under the dolphinscheduler-api cn.dolphinscheduler.api.quartz package as follows
+
+```
+public boolean deleteJob(String jobName, String jobGroupName) {
+    lock.writeLock().lock();
+    try {
+      JobKey jobKey = new JobKey(jobName,jobGroupName);
+      if(scheduler.checkExists(jobKey)){
+        logger.info("try to delete job, job name: {}, job group name: {},", jobName, jobGroupName);
+        return scheduler.deleteJob(jobKey);
+      }else {
+        return true;
+      }
+
+    } catch (SchedulerException e) {
+      logger.error(String.format("delete job : %s failed",jobName), e);
+    } finally {
+      lock.writeLock().unlock();
+    }
+    return false;
+  }
+```
+
+---
+
+## Q: Can a tenant created before HDFS was enabled use the resource center normally?
+
+A: No. A tenant created before HDFS was enabled has no tenant directory registered in HDFS, so uploading resources will fail
+
+## Q: With multiple Masters and multiple Workers, how is fault tolerance handled when a service goes down?
+
+A: **Note: the Master monitors both Master and Worker services.**
+
+1. If a Master goes down, another Master takes over the processes of the failed Master and continues to monitor the Worker task status
+
+2. If a Worker goes down, the Master detects it; if the task submitted a Yarn job, the Yarn job is killed and the task is retried
+
+For details please see the fault tolerance design: https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.html#%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1
+
+---
+
+## Q: Fault tolerance for a pseudo-cluster with Master and Worker on the same machine
+
+A: Version 1.0.3 only implements fault tolerance for processes started by the Master, not Worker fault tolerance. That is, if the Worker goes down while no Master is alive, that process will run into problems. We will add self fault tolerance for Master and Worker startup in **1.1.0** to fix this. To fix it manually: for **running processes that span the restart** whose **running Worker tasks were lost, set those tasks to failed**, **set the running processes that span the restart to failed as well**, and then recover the process from the failed node
+
+---
+
+## Q: A schedule is easily set to run every second by mistake
+
+A: Be careful when configuring a schedule: if the first field of (* * * * * ? *) is set to \*, it fires every second. **We will show a list of the next scheduled times in version 1.1.0.** You can use http://cron.qqe2.com/ to preview the next 5 run times online
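+
+For reference, a couple of Quartz-style expressions (a sketch; the fields are second, minute, hour, day-of-month, month, day-of-week, year):
+
+```
+* * * * * ? *    # the seconds field is *, so this fires every second
+0 0 1 * * ? *    # fires once a day at 01:00:00
+```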
+
+
+
+## Q: Does a schedule have a valid time range?
+
+A: Yes. **If the start and end time of a schedule are identical, the schedule is invalid.** **If the end time is earlier than the current time, the schedule is very likely to be deleted automatically**
+
+
+
+## Q: How many kinds of task dependencies are there?
+
+A: 1. Task dependencies **inside a DAG**: the DAG is split starting from the nodes with **zero in-degree**
+
+2. The **DEPENDENT task node** implements dependencies on tasks or processes across workflows; see the DEPENDENT node documentation: https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8C.html#%E4%BB%BB%E5%8A%A1%E8%8A%82%E7%82%B9%E7%B1%BB%E5%9E%8B%E5%92%8C%E5%8F%82%E6%95%B0%E8%AE%BE%E7%BD%AE
+
+
+## Q: In how many ways can a process definition be started?
+
+A: 1. In the **process definition list**, click the **Start** button
+
+2. **Add a schedule in the process definition list** so the scheduler starts the process definition
+
+3. In the **view or edit** DAG page of a process definition, **right-click any task node** to start the process definition
+
+4. You can edit the DAG of a process definition and set the run flag of certain tasks to **forbidden**; when the process definition is started, the connections of those nodes are removed from the DAG
+
+## Q: Setting the Python version for Python tasks
+
+A: Just modify PYTHON_HOME in conf/env/dolphinscheduler_env.sh
+
+```
+export PYTHON_HOME=/bin/python
+```
+
+Note: **PYTHON_HOME** here is the absolute path of the python command, not the usual PYTHON_HOME. Also note that when exporting PATH, PYTHON_HOME needs to be referenced directly:
+
+```
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
+```
+
+
+## Q: A Worker task runs via sudo -u tenant sh xxx.command and spawns child processes; are they killed when the task is killed?
+
+A: In 1.0.4 we will kill all child processes spawned by a task when the task is killed
+
+
+
+## Q: How are queues used in DolphinScheduler? What do user queues and tenant queues mean?
+
+A: A queue in DolphinScheduler can be specified on the user or on the tenant. **The queue specified on the user takes precedence over the tenant queue.** For example, for an MR task the queue is specified via mapreduce.job.queuename.
+
+Note: when specifying the queue for MR in this way, pass the parameters as follows:
+
+```
+	Configuration conf = new Configuration();
+        GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
+        String[] remainingArgs = optionParser.getRemainingArgs();
+```
+
+
+For Spark tasks, specify the queue with --queue
+
+
+
+## Q: Master or Worker reports the following alert
+
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_worker_lack_res.png" width="60%" />
+ </p>
+
+
+A: Change the value of **master.reserved.memory** in conf/master.properties to a smaller value, for example 0.1, or
+
+change the value of **worker.reserved.memory** in conf/worker.properties to a smaller value, for example 0.1
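+
+For example (a sketch; tune the value to your machines):
+
+```
+# conf/master.properties
+master.reserved.memory=0.1
+
+# conf/worker.properties
+worker.reserved.memory=0.1
+```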
+
+
+
+## Q: Hive version is 1.1.0+cdh5.15.0 and the SQL Hive task connection reports an error
+
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/cdh_hive_error.png" width="60%" />
+ </p>
+
+
+
+A: Change the hive pom dependency
+
+```
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>2.1.0</version>
+</dependency>
+```
+
+to
+
+```
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-jdbc</artifactId>
+    <version>1.1.0</version>
+</dependency>
+```
+
+---
+
+## Q: How to add a worker machine
+A: 1. Refer to section 1.3 of the official [deployment documentation](https://dolphinscheduler.apache.org/zh-cn/docs/laster/user_doc/installation/cluster.html) to create the deployment user and the hosts mapping
+
+2. Refer to section 1.4 of the official [deployment documentation](https://dolphinscheduler.apache.org/zh-cn/docs/laster/user_doc/installation/cluster.html) to configure the hosts mapping, passwordless ssh and directory permissions.
+   The last step of section 1.4 is executed on the newly added machine, i.e. the deployment directory must be owned by the deployment user
+
+3. Copy the deployment directory from a running server to the same path on the new machine
+
+4. Go to the bin directory and start the worker server and logger server
+```
+        ./dolphinscheduler-daemon.sh start worker-server
+        ./dolphinscheduler-daemon.sh start logger-server
+```
+
+---
+
+## Q: When does DolphinScheduler release new versions? What are the differences between versions, how to upgrade, and what is the version number convention?
+A: 1. Apache projects release through their mailing lists. You can subscribe to the DolphinScheduler mailing list and you will receive an email whenever there is a release. Please follow this [guide](https://github.com/apache/dolphinscheduler#get-help) to subscribe to the DolphinScheduler mailing list.
+
+   2. Every release comes with release notes describing the changes, together with an upgrade document describing how to upgrade from the previous version.
+
+   3. The version number is x.y.z: an increase of x means a brand new architecture; an increase of y means incompatibility with earlier y versions, requiring upgrade scripts or other manual handling; an increase of z means a bug-fix release that is fully compatible and needs no extra handling. There was an earlier case where upgrading from 1.0.1 to 1.0.2 was incompatible and required an upgrade script.
+
+---
+
+## Q: Can downstream tasks still run when an upstream task has failed?
+A: When starting a workflow, you can choose the failure strategy: continue or end.
+![Set the failure strategy](https://user-images.githubusercontent.com/15833811/80368215-ee378080-88be-11ea-9074-01a33d012b23.png)
+
+---
+
+## Q: What is the relationship between the workflow DAG template, workflow instances, tasks and task instances? A DAG supports a maximum concurrency of 100: does that mean 100 workflow instances running concurrently? Task nodes in a DAG also have a concurrency setting: does that mean a task can run in multiple concurrent threads, up to 100?
+A:
+
+1.2.1 version
+```
+   master.properties
+   Set the maximum number of workflows the master node executes concurrently
+   master.exec.threads=100
+   
+   Control the number of parallel tasks in each workflow
+   master.exec.task.number=20
+   
+   worker.properties
+   Set the maximum number of tasks the worker node executes concurrently
+   worker.exec.threads=100
+```
+
+---
+
+## Q: The worker group management page shows no buttons
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/39816903/81903776-d8cb9180-95f4-11ea-98cb-94ca1e6a1db5.png" width="60%" />
+</p>
+A: In version 1.3.0, to support k8s where worker IPs keep changing, worker groups can no longer be configured on the UI; instead the worker group name is configured in worker.properties.
+
+---
+
+## Q: Why is the MySQL JDBC driver not included in the docker image?
+A: The license of the MySQL JDBC driver is incompatible with the Apache v2 license, so it cannot be included in the docker image.
+
+---
+
+## Q: A task that submits multiple Yarn applications often fails
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/16174111/81312485-476e9380-90b9-11ea-9aad-ed009db899b1.png" width="60%" />
+</p>
+A: This bug has been fixed on the dev branch and added to the requirements/to-do list.
+
+---
+
+## Q: The Master and Worker services stopped after running for a few days
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/18378986/81293969-c3101680-90a0-11ea-87e5-ac9f0dd53f5e.png" width="60%" />
+</p>
+A: The session timeout is too short, only 0.3 seconds. Modify the configuration items in zookeeper.properties:
+
+```
+   zookeeper.session.timeout=60000
+   zookeeper.connection.timeout=30000
+```
+
+---
+
+## Q: Starting with the default docker-compose configuration reports zookeeper errors
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/42579056/80374318-13c98780-88c9-11ea-8d5f-53448b957f02.png" width="60%" />
+ </p>
+A: This problem is solved in dev-1.3.0. This [pr](https://github.com/apache/dolphinscheduler/pull/2595) has fixed the bug; the main changes are:
+
+```
+    Add the zookeeper environment variable ZOO_4LW_COMMANDS_WHITELIST to the docker-compose.yml file.
+    Change the type of minLatency, avgLatency and maxLatency from int to float.
+```
+
+---
+
+## Q: The UI shows a task as running and it never finishes, while the log says the task instance is null
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/51871547/80302626-b1478d00-87dd-11ea-97d4-08aa2244a6d0.jpg" width="60%" />
+ </p>
+A: This [bug](https://github.com/apache/dolphinscheduler/issues/1477) describes the details, and it has been fixed in version 1.2.1.
+For versions earlier than 1.2.1, some hints for this situation:
+
+```
+1. Clear the tasks under this zk path: /dolphinscheduler/task_queue
+2. Change the task state to failed (int value 6)
+3. Run the workflow to recover from the failure
+```
+
+---
+
+## Q: The master registers in zk with IP 127.0.0.1 instead of the IP resolved from the configured hostname, which may make task logs unviewable
+A: To fix it:
+```
+   1、confirm hostname
+   $hostname
+   hadoop1
+   2、hostname -i
+   127.0.0.1 10.3.57.15
+   3、edit /etc/hosts,delete hadoop1 from 127.0.0.1 record
+   $cat /etc/hosts
+   127.0.0.1 localhost
+   10.3.57.15 ds1 hadoop1
+   4、hostname -i
+   10.3.57.15
+```
+   The hostname command returns the server hostname, and hostname -i returns all IP addresses matching that hostname in /etc/hosts. So simply delete the hostname from the 127.0.0.1 record in /etc/hosts and keep only the intranet IP resolution; there is no need to comment out the whole 127.0.0.1 line, as long as the hostname resolves to the correct intranet IP in /etc/hosts. The DS program takes the first value; arguably DS should not rely on hostname -i, since many companies have hostnames configured by operations staff, and it would be more accurate to resolve the configured domain name directly, or to store the domain name in the znode instead of relying on /etc/hosts.
+
+---
+
+## Q: A schedule with second-level granularity brought the system down
+A: The scheduling system does not support second-level tasks.
+
+---
+
+## Q: Compiling the front-end (dolphinscheduler-ui) fails to download "https://github.com/sass/node-sass/releases/download/v4.13.1/darwin-x64-72_binding.node"
+A: 1. cd dolphinscheduler-ui and delete the node_modules directory
+```
+sudo rm -rf node_modules
+```
+   2. Download node-sass via npm.taobao.org
+```
+sudo npm uninstall node-sass
+sudo npm i node-sass --sass_binary_site=https://npm.taobao.org/mirrors/node-sass/
+```
+   3. If step 2 fails, rebuild node-sass, see this [reference](https://dolphinscheduler.apache.org/en-us/development/frontend-development.html)
+```
+sudo npm rebuild node-sass
+```
+Once the problem is solved, if you do not want to download this node file on every build, you can set the system environment variable: SASS_BINARY_PATH=/xxx/xxx/xxx/xxx.node.
+
+---
+
+## Q: How to configure MySQL as the DS database
+A: 1. Modify the maven configuration in the project root and remove the `test` scope attribute so that the MySQL driver is available in other phases
+```
+<dependency>
+	<groupId>mysql</groupId>
+	<artifactId>mysql-connector-java</artifactId>
+	<version>${mysql.connector.version}</version>
+	<scope>test</scope>
+</dependency>
+```
+   2. Modify application-dao.properties and quartz.properties to use the MySQL driver.
+   The default driver is postgres, mainly for license reasons.
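+
+A sketch of the datasource settings after switching to MySQL (property names follow the 1.3.x layout and may differ slightly in other versions; host, database and credentials are placeholders):
+
+```
+# application-dao.properties / datasource.properties (sketch)
+spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+spring.datasource.username=dolphinscheduler
+spring.datasource.password=dolphinscheduler
+```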
+
+---
+
+## Q: How are shell tasks executed?
+A: 1. Where is the executing server configured, and which server actually runs the task? To pin execution to a specific worker, configure a worker group containing a fixed IP, so paths can be hard-coded. If the configured worker group contains several workers, the actual server is chosen by the scheduler and is effectively random.
+
+2. How to point to a shell file located at some path on a server? Referring to a file at an arbitrary server path raises permission issues and is not recommended. Instead, use the storage of the resource center and reference the resource in the shell editor; the system downloads the script into the execution directory for you. With HDFS as the resource center, the scheduler pulls the dependent jars and files onto the worker's execution directory at run time, in my case /tmp/escheduler/exec/process; this path can be set in install.sh.
+
+3. Which user runs the task? The scheduler runs tasks as sudo -u tenant, where the tenant is a Linux user.
+
+---
+
+## Q: Is there a recommended best practice for production deployment?
+A: 1. If there are not many tasks to run, we recommend 3 nodes for stability, preferably with the Master and Worker services deployed on different nodes. If you only have one node, you naturally have to deploy all services on that node. Generally speaking, how many nodes you need depends on your business; DolphinScheduler itself does not need a lot of resources. After thorough testing you will find a suitable deployment that uses fewer nodes.
+
+---
+
+## Q: DEPENDENT node
+A: 1. The DEPENDENT node has no execution body of its own; it is used to configure periodic data dependency logic, with execution nodes attached after it, to implement periodic dependencies between tasks.
+
+---
+
+## Q: How to change the port the Master service listens on
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/8263441/62352160-0f3e9100-b53a-11e9-95ba-3ae3dde49c72.png" width="60%" />
+ </p>
+A: 1. Modify the application_master.properties configuration file, e.g. server.port=12345.
+
+---
+
+## Q: A schedule cannot be brought online
+A: 1. We can create the schedule successfully and a record is added to t_scheduler_schedules, but after clicking "online" the front end does not respond and the t_scheduler_schedules table gets locked. As a test, manually updating the release_state field in t_scheduler_schedules to 1 makes the front end show the schedule as online. For DS 1.2+ the table name is t_ds_schedules; for other versions it is t_scheduler_schedules.
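+
+A sketch of the manual workaround described above for DS 1.2+ (back up the table first and replace the id with your own; use t_scheduler_schedules on older versions):
+
+```SQL
+UPDATE t_ds_schedules SET release_state = 1 WHERE id = <your_schedule_id>;
+```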
+
+---
+
+## Q: What is the address of the swagger UI?
+A: 1. For version 1.2+ the address is http://apiServerIp:apiServerPort/dolphinscheduler/doc.html?language=zh_CN&lang=cn; for other versions it is http://apiServerIp:apiServerPort/escheduler/doc.html?language=zh_CN&lang=cn.
+
+---
+
+## Q: Files are missing from the front-end installation package
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/41460919/61437083-d960b080-a96e-11e9-87f1-297ba3aca5e3.png" width="60%" />
+ </p>
+ <p align="center">
+    <img src="https://user-images.githubusercontent.com/41460919/61437218-1b89f200-a96f-11e9-8e48-3fac47eb2389.png" width="60%" />
+  </p>
+A: 1. The user changed the ![apiServerContextPath](https://user-images.githubusercontent.com/41460919/61678323-1b09a680-ad35-11e9-9707-3ba68bbc70d6.png) setting in the api server configuration file, which caused this problem; restoring the default configuration solved it.
+
+---
+
+## Q: Uploading a large file gets stuck
+<p align="center">
+   <img src="https://user-images.githubusercontent.com/21357069/58231400-805b0e80-7d69-11e9-8107-7f37b06a95df.png" width="60%" />
+ </p>
+A: 1. Edit the nginx configuration file vi /etc/nginx/nginx.conf and increase the allowed upload size: client_max_body_size 1024m.
+
+2. Update google chrome to the latest version.
+
+---
+
+## Q: Creating a Spark datasource and clicking "Test Connection" sends the system back to the login page
+A: 1. Edit /etc/nginx/conf.d/escheduler.conf
+```
+     proxy_connect_timeout 300s;
+     proxy_read_timeout 300s;
+     proxy_send_timeout 300s;
+```
+
+---
+
+## Q: Subscribe to the DolphinScheduler development mailing list
+A: While using DolphinScheduler, if you have any questions, ideas or suggestions, you can take part in building the DolphinScheduler community through the Apache mailing list.
+   Subscribing is simple; the steps are as follows:
+
+   1. Send an email from your own mailbox to dev-subscribe@dolphinscheduler.apache.org with any subject and content.
+
+   2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@dolphinscheduler.apache.org (if it does not arrive, check whether it was filed into spam, promotions or subscription folders). Reply to that email directly, or click the link in it to reply quickly, again with any subject and content.
+
+   3. Receive the welcome email. After the steps above, you will receive a welcome email with the subject WELCOME to dev@dolphinscheduler.apache.org, which means you have successfully subscribed to the Apache DolphinScheduler mailing list.
+
+---
+
+## Q: Workflow dependency
+A: 1. Dependencies are currently evaluated by natural days. End of last month: check that workflow A has an instance with start_time/scheduler_time between '2019-05-31 00:00:00' and '2019-05-31 23:59:59'. Last month: every day from the 1st to the end of last month must have a completed instance of A. Last week: all 7 days of last week must have a completed instance of A. The previous two days: both yesterday and the day before must have a completed instance of A.
+
+---
+
+## Q: DS backend API documentation
+A: 1. http://106.75.43.194:8888/dolphinscheduler/doc.html?language=zh_CN&lang=zh.
+
+
+## Incorrect IP address acquired while dolphinscheduler is running
+
+When the master and worker services register in zookeeper, they create their entries in the form ip:port
+
+If the acquired IP address is wrong, check the network configuration, for example on Linux with the `ifconfig` command, as shown in the figure below:
+
+<p align="center">
+  <img src="/img/network/network_config.png" width="60%" />
+</p>
+
+You can use one of the three strategies provided by dolphinscheduler to obtain a usable IP:
+
+* default: try the intranet NIC first, then the external NIC; if both fail, use the address of the first usable NIC
+* inner: use the intranet NIC to obtain the IP address; throw an exception if it fails
+* outer: use the external NIC to obtain the IP address; throw an exception if it fails
+
+Configure it by modifying the corresponding setting in `common.properties`:
+
+```shell
+# network IP gets priority, default: inner outer
+# dolphin.scheduler.network.priority.strategy=default
+```
+
+Restart the services after changing the configuration for it to take effect
+
+If the acquired IP address is still wrong, download [dolphinscheduler-netutils.jar](/asset/dolphinscheduler-netutils.jar) to the machine in question, run the following command to investigate further, and report the result to the community developers:
+
+```shell
+java -jar target/dolphinscheduler-netutils.jar
+```
+
+## Configure passwordless sudo to avoid granting overly broad sudo privileges or requiring root
+
+Configure the sudo privileges of the dolphinscheduler OS account as a manager limited to a set of ordinary users, restricting which commands that account may run on which hosts; see the sudo privilege management documentation for details.
+For example, configure the dolphinscheduler OS account so that it can only act as the users userA, userB and userC (these users are used by multi-tenancy to submit jobs to the big data cluster)
+
+```shell
+echo 'dolphinscheduler  ALL=(userA,userB,userC)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+```
+
+---
+
+## Q: Multiple Yarn cluster support
+A: Deploy Worker nodes to the Master nodes of the respective Yarn clusters. The steps are as follows (taking AWS EMR as an example); a configuration sketch follows the list:
+
+   1. Deploy the Worker node to the Master node of the EMR cluster
+   
+   2. Change `yarn.application.status.address` in `conf/common.properties` to point to the Yarn of that cluster
+   
+   3. Start the worker-server and logger-server with `bin/dolphinscheduler-daemon.sh start worker-server` and `bin/dolphinscheduler-daemon.sh start logger-server`
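+
+A sketch of the relevant `conf/common.properties` entry on such a Worker (the host is a placeholder; point it at the ResourceManager of the cluster the Worker lives in):
+
+```shell
+# Yarn of the cluster this worker is deployed in
+yarn.application.status.address=http://<emr-resourcemanager-host>:8088/ws/v1/cluster/apps/%s
+```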
+
+---
+
+## Q: Update process definition error: Duplicate key TaskDefinition
+
+A: Before DS 2.0.4 (and after 2.0.0-alpha), switching versions could produce duplicate keys, which makes updating the workflow fail. The following SQL can be used to delete the duplicate data, using MySQL as an example. Note: **be sure to back up the original data before running it**; the SQL comes from pr [#8408](https://github.com/apache/dolphinscheduler/pull/8408)
+
+```SQL
+DELETE FROM t_ds_process_task_relation_log WHERE id IN
+(
+ SELECT
+     x.id
+ FROM
+     (
+         SELECT
+             aa.id
+         FROM
+             t_ds_process_task_relation_log aa
+                 JOIN
+             (
+                 SELECT
+                     a.process_definition_code
+                      ,MAX(a.id) as min_id
+                      ,a.pre_task_code
+                      ,a.pre_task_version
+                      ,a.post_task_code
+                      ,a.post_task_version
+                      ,a.process_definition_version
+                      ,COUNT(*) cnt
+                 FROM
+                     t_ds_process_task_relation_log a
+                         JOIN (
+                         SELECT
+                             code
+                         FROM
+                             t_ds_process_definition
+                         GROUP BY code
+                     )b ON b.code = a.process_definition_code
+                 WHERE 1=1
+                 GROUP BY a.pre_task_code
+                        ,a.post_task_code
+                        ,a.pre_task_version
+                        ,a.post_task_version
+                        ,a.process_definition_code
+                        ,a.process_definition_version
+                 HAVING COUNT(*) > 1
+             )bb ON bb.process_definition_code = aa.process_definition_code
+                 AND bb.pre_task_code = aa.pre_task_code
+                 AND bb.post_task_code = aa.post_task_code
+                 AND bb.process_definition_version = aa.process_definition_version
+                 AND bb.pre_task_version = aa.pre_task_version
+                 AND bb.post_task_version = aa.post_task_version
+                 AND bb.min_id != aa.id
+     )x
+)
+;
+
+DELETE FROM t_ds_task_definition_log WHERE id IN
+(
+   SELECT
+       x.id
+   FROM
+       (
+           SELECT
+               a.id
+           FROM
+               t_ds_task_definition_log a
+                   JOIN
+               (
+                   SELECT
+                       code
+                        ,name
+                        ,version
+                        ,MAX(id) AS min_id
+                   FROM
+                       t_ds_task_definition_log
+                   GROUP BY code
+                          ,name
+                          ,version
+                   HAVING COUNT(*) > 1
+               )b ON b.code = a.code
+                   AND b.name = a.name
+                   AND b.version = a.version
+                   AND b.min_id != a.id
+       )x
+)
+;
+```
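+
+Before running the cleanup statements above, it is worth backing up the two affected tables. A minimal sketch, assuming the MySQL client tools are installed and the database is named `dolphinscheduler`:
+
+```shell
+mysqldump -u root -p dolphinscheduler \
+  t_ds_process_task_relation_log t_ds_task_definition_log > ds_dedup_backup.sql
+```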
+
+---
+
+We will continue to collect more FAQs.
\ No newline at end of file
diff --git a/docs/docs/en/guide/alert/alert_plugin_user_guide.md b/docs/docs/en/guide/alert/alert_plugin_user_guide.md
new file mode 100644
index 0000000..8ea72f8
--- /dev/null
+++ b/docs/docs/en/guide/alert/alert_plugin_user_guide.md
@@ -0,0 +1,14 @@
+# Alert Component User Guide
+
+## How to Create Alert Plugins and Alert Groups
+
+In version 2.0.0, users need to create alert instances and then associate them with alert groups. An alert group can use multiple alert instances, which are notified one by one.
+
+First, go to the Security Center page. Select Alarm Group Management, click Alarm Instance Management on the left and create an alarm instance. Select the corresponding alarm plug-in and fill in the relevant alarm parameters.
+
+Then select Alarm Group Management, create an alarm group, and choose the corresponding alarm instance.
+
+<img src="/img/alert/alert_step_1.png">
+<img src="/img/alert/alert_step_2.png">
+<img src="/img/alert/alert_step_3.png">
+<img src="/img/alert/alert_step_4.png">
\ No newline at end of file
diff --git a/docs/docs/en/guide/alert/dingtalk.md b/docs/docs/en/guide/alert/dingtalk.md
new file mode 100644
index 0000000..2cc3043
--- /dev/null
+++ b/docs/docs/en/guide/alert/dingtalk.md
@@ -0,0 +1,27 @@
+# DingTalk
+
+If you need to use `DingTalk` for alerting, create an alert instance in the alert instance management and select the DingTalk plugin. 
+The following shows the `DingTalk` configuration example:
+
+![dingtalk-plugin](/img/alert/dingtalk-plugin.png)
+
+## Parameter Configuration
+
+* Webhook
+  > The format is: https://oapi.dingtalk.com/robot/send?access_token=XXXXXX
+* Keyword
+  > Custom keywords for security settings
+* Secret
+  > Signature of security settings
+* MessageType
+  > Support both text and markdown types
+
+When the custom robot sends a message, you can specify the people to mention by their mobile phone numbers or user IDs. The mentioned people receive an `@` reminder when the message arrives; the reminder is delivered even in `Do not disturb` mode, and "someone @ you" is shown in the message.
+* @Mobiles
+  > The mobile phone numbers of the people to @
+* @UserIds
+  > The user IDs of the people to @
+* @All
+  > @Everyone
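+
+As a quick sanity check of the Webhook, Keyword and access token (the token and keyword below are placeholders), you can post a test message to the robot directly, following the DingTalk custom robot API:
+
+```shell
+curl -X POST 'https://oapi.dingtalk.com/robot/send?access_token=XXXXXX' \
+  -H 'Content-Type: application/json' \
+  -d '{"msgtype": "text", "text": {"content": "your-keyword: DolphinScheduler test alert"}}'
+```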
+
+[DingTalk Custom Robot Access Development Documentation](https://open.dingtalk.com/document/robots/custom-robot-access)
diff --git a/docs/docs/en/guide/alert/enterprise-webexteams.md b/docs/docs/en/guide/alert/enterprise-webexteams.md
new file mode 100644
index 0000000..5acb0b1
--- /dev/null
+++ b/docs/docs/en/guide/alert/enterprise-webexteams.md
@@ -0,0 +1,24 @@
+# Webex Teams
+
+If you need to use `Webex Teams` to alert, create an alert instance in the alert instance management, and choose the WebexTeams plugin.
+The following is the `WebexTeams` configuration example:
+
+![enterprise-webexteams-plugin](/img/alert/enterprise-webexteams-plugin.png)
+
+## Parameter Configuration
+
+* botAccessToken
+  > The robot's access token
+* roomID
+  > The ID of the room that receives the message (only one room ID is supported)
+* toPersonId
+  > The person ID of the recipient when sending a private 1:1 message
+* toPersonEmail
+  > The email address of the recipient when sending a private 1:1 message
+* atSomeoneInRoom
+  > If the message destination is a room, the email addresses of the people to @mention; use `,` (English commas) to separate multiple emails
+* destination
+  > The destination of the message (one message supports only one destination)
+
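+A quick way to verify the `botAccessToken` and `roomID` outside DolphinScheduler (values below are placeholders) is to call the Webex "create a message" API linked at the end of this page:
+
+```shell
+curl -X POST 'https://webexapis.com/v1/messages' \
+  -H 'Authorization: Bearer <botAccessToken>' \
+  -H 'Content-Type: application/json' \
+  -d '{"roomId": "<roomID>", "text": "DolphinScheduler test alert"}'
+```
+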
+[WebexTeams Application Bot Guide](https://developer.webex.com/docs/bots)
+[WebexTeams Message Guide](https://developer.webex.com/docs/api/v1/messages/create-a-message)
diff --git a/docs/docs/en/guide/alert/enterprise-wechat.md b/docs/docs/en/guide/alert/enterprise-wechat.md
new file mode 100644
index 0000000..750c10c
--- /dev/null
+++ b/docs/docs/en/guide/alert/enterprise-wechat.md
@@ -0,0 +1,14 @@
+# Enterprise WeChat
+
+If you need to use `Enterprise WeChat` to alert, create an alert instance in the alert instance management, and choose the WeChat plugin.
+The following is the `WeChat` configuration example:
+
+![enterprise-wechat-plugin](/img/alert/enterprise-wechat-plugin.png)
+
+The parameter `send.type` corresponds to app and group chat respectively:
+
+APP: https://work.weixin.qq.com/api/doc/90000/90135/90236
+
+Group Chat: https://work.weixin.qq.com/api/doc/90000/90135/90248
+
+The parameter `user.send.msg` corresponds to the `content` in the document, and the corresponding variable is `{msg}`.
\ No newline at end of file
diff --git a/docs/docs/en/guide/alert/telegram.md b/docs/docs/en/guide/alert/telegram.md
new file mode 100644
index 0000000..203ab4f
--- /dev/null
+++ b/docs/docs/en/guide/alert/telegram.md
@@ -0,0 +1,42 @@
+# Telegram
+
+If you need `Telegram` to alert, create an alert instance in the alert instance management, and choose the `Telegram` plugin.
+The following shows the `Telegram` configuration example:
+
+![telegram-plugin](/img/alert/telegram-plugin.png)
+
+## Parameter Configuration
+
+* WebHook
+  > The webhook URL used when the robot sends messages
+* botToken
+  > The robot's access token
+* chatId
+  > The chat ID of the subscribed Telegram channel or group
+* parseMode
+  > Message parse type (support txt, markdown, markdownV2, html)
+* EnableProxy
+  > Enable the proxy server
+* Proxy
+  > The address of the proxy server
+* Port
+  > The port of the proxy server
+* User
+  > Authentication(Username) for the proxy server
+* Password
+  > Authentication(Password) for the proxy server
+
+**NOTICE**: The webhook must be able to receive and parse the same JSON body that DolphinScheduler constructs for the HTTP POST request; the JSON body is shown below:
+
+```json
+{
+    "text": "[{\"projectId\":1,\"projectName\":\"p1\",\"owner\":\"admin\",\"processId\":35,\"processDefinitionCode\":4928367293568,\"processName\":\"s11-3-20220324084708668\",\"taskCode\":4928359068928,\"taskName\":\"s1\",\"taskType\":\"SHELL\",\"taskState\":\"FAILURE\",\"taskStartTime\":\"2022-03-24 08:47:08\",\"taskEndTime\":\"2022-03-24 08:47:09\",\"taskHost\":\"192.168.1.103:1234\",\"logPath\":\"\"}]",
+    "chat_id": "chat id number"
+}
+```
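+
+A hedged example: if the `WebHook` field points at the bot's `sendMessage` endpoint, you can replay a body like the one above with `curl` to verify the `botToken` and `chatId` (placeholders below):
+
+```shell
+curl -X POST 'https://api.telegram.org/bot<botToken>/sendMessage' \
+  -H 'Content-Type: application/json' \
+  -d '{"chat_id": "<chatId>", "text": "DolphinScheduler test alert"}'
+```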
+
+References:
+- [Telegram Application Bot Guide](https://core.telegram.org/bots)
+- [Telegram Bots Api](https://core.telegram.org/bots/api)
+- [Telegram SendMessage Api](https://core.telegram.org/bots/api#sendmessage)
+
diff --git a/docs/docs/en/guide/datasource/hive.md b/docs/docs/en/guide/datasource/hive.md
new file mode 100644
index 0000000..9eff601
--- /dev/null
+++ b/docs/docs/en/guide/datasource/hive.md
@@ -0,0 +1,39 @@
+# HIVE
+
+## Use HiveServer2
+
+![hive](/img/new_ui/dev/datasource/hive.png)
+
+- Datasource: select `HIVE`
+- Datasource name: enter the name of the DataSource
+- Description: enter a description of the DataSource
+- IP/Host Name: enter the HIVE service IP
+- Port: enter the HIVE service port
+- Username: set the username for HIVE connection
+- Password: set the password for HIVE connection
+- Database name: enter the database name of the HIVE connection
+- Jdbc connection parameters: parameter settings for HIVE connection, in JSON format
+
+> NOTICE: If you wish to execute multiple HIVE SQL in the same session, you could set `support.hive.oneSession = true` in `common.properties`. 
+> It is helpful when you try to set env variables before running HIVE SQL. Default value of `support.hive.oneSession` is `false` and multi-SQLs run in different sessions.
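+
+A one-line sketch of the corresponding entry in `common.properties`:
+
+```conf
+# run all HIVE SQL statements of a task in one session
+support.hive.oneSession=true
+```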
+
+## Use HiveServer2 HA ZooKeeper
+
+![hive-server2](/img/new_ui/dev/datasource/hiveserver2.png)
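+
+A hedged illustration of the connection fields for ZooKeeper-based HiveServer2 discovery (host names and namespace are placeholders; whether extra JDBC properties are honored depends on your Hive and DolphinScheduler versions, so confirm with "Test Connection"):
+
+- IP/Host Name: the ZooKeeper quorum, e.g. `zk1,zk2,zk3`
+- Port: `2181`
+- Jdbc connection parameters:
+
+```json
+{"serviceDiscoveryMode": "zooKeeper", "zooKeeperNamespace": "hiveserver2"}
+```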
+
+NOTICE: If Kerberos is disabled, ensure the parameter `hadoop.security.authentication.startup.state` is `false` and the value of `java.security.krb5.conf.path` is left empty.
+If **Kerberos** is enabled, you need to set the following parameters in `common.properties`:
+
+```conf
+# whether to startup kerberos
+hadoop.security.authentication.startup.state=true
+
+# java.security.krb5.conf path
+java.security.krb5.conf.path=/opt/krb5.conf
+
+# login user from keytab username
+login.user.keytab.username=hdfs-mycluster@ESZ.COM
+
+# login user from keytab path
+login.user.keytab.path=/opt/hdfs.headless.keytab
+```
diff --git a/docs/docs/en/guide/datasource/introduction.md b/docs/docs/en/guide/datasource/introduction.md
new file mode 100644
index 0000000..5095cd6
--- /dev/null
+++ b/docs/docs/en/guide/datasource/introduction.md
@@ -0,0 +1,6 @@
+# DataSource
+
+DataSource supports MySQL, PostgreSQL, Hive/Impala, Spark, ClickHouse, Oracle, SQL Server and other DataSources.
+
+- Click "Data Source Center -> Create Data Source" to create different types of DataSources according to requirements.
+- Click "Test Connection" to test whether the DataSource can connect successfully.
\ No newline at end of file
diff --git a/docs/docs/en/guide/datasource/mysql.md b/docs/docs/en/guide/datasource/mysql.md
new file mode 100644
index 0000000..0ac7cda
--- /dev/null
+++ b/docs/docs/en/guide/datasource/mysql.md
@@ -0,0 +1,14 @@
+# MySQL
+
+![mysql](/img/new_ui/dev/datasource/mysql.png)
+
+- Datasource: select MYSQL
+- Datasource name: enter the name of the DataSource
+- Description: enter a description of the DataSource
+- IP/Host Name: enter the MYSQL service IP
+- Port: enter the MYSQL service port
+- Username: set the username for MYSQL connection
+- Password: set the password for MYSQL connection
+- Database name: enter the database name of the MYSQL connection
+- Jdbc connection parameters: parameter settings for MYSQL connection, in JSON format
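+
+A hedged illustration of the `Jdbc connection parameters` field (the values are common MySQL driver options and only an example):
+
+```json
+{"useUnicode": "true", "characterEncoding": "UTF-8"}
+```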
+
diff --git a/docs/docs/en/guide/datasource/postgresql.md b/docs/docs/en/guide/datasource/postgresql.md
new file mode 100644
index 0000000..fe05169
--- /dev/null
+++ b/docs/docs/en/guide/datasource/postgresql.md
@@ -0,0 +1,13 @@
+# PostgreSQL
+
+![postgresql](/img/new_ui/dev/datasource/postgresql.png)
+
+- Datasource: select POSTGRESQL
+- Datasource name: enter the name of the DataSource
+- Description: enter a description of the DataSource
+- IP/Host Name: enter the PostgreSQL service IP
+- Port: enter the PostgreSQL service port
+- Username: set the username for PostgreSQL connection
+- Password: set the password for PostgreSQL connection
+- Database name: enter the database name of the PostgreSQL connection
+- Jdbc connection parameters: parameter settings for PostgreSQL connection, in JSON format
diff --git a/docs/docs/en/guide/datasource/spark.md b/docs/docs/en/guide/datasource/spark.md
new file mode 100644
index 0000000..61e2c7d
--- /dev/null
+++ b/docs/docs/en/guide/datasource/spark.md
@@ -0,0 +1,13 @@
+# Spark
+
+![sparksql](/img/new_ui/dev/datasource/sparksql.png)
+
+- Datasource: select Spark
+- Datasource name: enter the name of the DataSource
+- Description: enter a description of the DataSource
+- IP/Host Name: enter the Spark service IP
+- Port: enter the Spark service port
+- Username: set the username for Spark connection
+- Password: set the password for Spark connection
+- Database name: enter the database name of the Spark connection
+- Jdbc connection parameters: parameter settings for Spark connection, in JSON format
diff --git a/docs/docs/en/guide/expansion-reduction.md b/docs/docs/en/guide/expansion-reduction.md
new file mode 100644
index 0000000..f34acb5
--- /dev/null
+++ b/docs/docs/en/guide/expansion-reduction.md
@@ -0,0 +1,248 @@
+# DolphinScheduler Expansion and Reduction
+
+## Expansion 
+
+This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.
+
+```
+ Attention: There cannot be more than one master service process or worker service process on a physical machine.
+       If the physical machine hosting the new master or worker node already has a DolphinScheduler service installed, refer to the [Modify Configuration] section below, edit the configuration file `conf/config/install_config.conf` on all nodes to add the new masters or workers parameters, and restart the scheduling cluster.
+```
+
+### Basic software installation
+
+* [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (version 1.8+): must install, install and configure `JAVA_HOME` and `PATH` variables under `/etc/profile`
+* [optional] If the expansion is a worker node, you need to consider whether to install an external client, such as Hadoop, Hive, Spark Client.
+
+
+```markdown
+ Attention: DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but will only call their Client for the corresponding task submission.
+```
+
+### Get Installation Package
+
+- Check the version of DolphinScheduler used in your existing environment and get the installation package of the same version; if the versions differ, there may be compatibility problems.
+- Confirm the unified installation directory of the other nodes. This article assumes that DolphinScheduler is installed in the `/opt/` directory, with the full path `/opt/dolphinscheduler`.
+- Download the installation package of the corresponding version to the server installation directory, uncompress it, rename it to `dolphinscheduler` and store it in the `/opt` directory.
+- Add the database dependency package. This document uses a MySQL database, so add the `mysql-connector-java` driver package to the `/opt/dolphinscheduler/lib` directory.
+
+```shell
+# create the installation directory, please do not create the installation directory in /root, /home and other high privilege directories 
+mkdir -p /opt
+cd /opt
+# decompress
+tar -zxvf apache-dolphinscheduler-1.3.8-bin.tar.gz -C /opt 
+cd /opt
+mv apache-dolphinscheduler-1.3.8-bin  dolphinscheduler
+```
+
+```markdown
+ Attention: You can copy the installation package directly from an existing environment to an expanded physical machine.
+```
+
+### Create Deployment Users
+
+- Create a deployment user on **all** expansion machines, and make sure to configure passwordless sudo for it. If we plan to deploy scheduling on four expansion machines, ds1, ds2, ds3, and ds4, creating deployment users on each machine is a prerequisite.
+
+```shell
+# to create a user, you need to log in with root and set the deployment user name, modify it by yourself, the following take `dolphinscheduler` as an example:
+useradd dolphinscheduler;
+
+# set the user password, please change it by yourself, the following take `dolphinscheduler123` as an example
+echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
+
+# configure sudo password-free
+echo 'dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+```
+
+```markdown
+ Attention:
+ - Since DolphinScheduler uses `sudo -u {linux-user}` to switch between different Linux users to run multi-tenant jobs, the deployment user needs passwordless sudo privileges.
+ - If you find the line `Defaults requiretty` in the `/etc/sudoers` file, please also comment it out.
+ - If you need to use resource uploads, you also need to grant the deployment user read and write permissions on `HDFS or MinIO`.
+```
+
+### Modify Configuration
+
+- From an existing node such as `Master/Worker`, copy the configuration directory directly to replace the configuration directory in the new node. After finishing the file copy, check whether the configuration items are correct.
+    
+    ```markdown
+    Highlights:
+    datasource.properties: database connection information 
+    zookeeper.properties: information for connecting zk 
+    common.properties: Configuration information about the resource store (if hadoop is set up, please check if the core-site.xml and hdfs-site.xml configuration files exist).
+    env/dolphinscheduler_env.sh: environment Variables
+    ```
+
+- Modify the `dolphinscheduler_env.sh` environment variables in the `conf/env` directory according to the machine configuration (the following example assumes that all the software used is installed under `/opt/soft`):
+
+    ```shell
+        export HADOOP_HOME=/opt/soft/hadoop
+        export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+        # export SPARK_HOME1=/opt/soft/spark1
+        export SPARK_HOME2=/opt/soft/spark2
+        export PYTHON_HOME=/opt/soft/python
+        export JAVA_HOME=/opt/soft/java
+        export HIVE_HOME=/opt/soft/hive
+        export FLINK_HOME=/opt/soft/flink
+        export DATAX_HOME=/opt/soft/datax/bin/datax.py
+        export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+    
+    ```
+
+    `Attention`: This step is very important. For example, `JAVA_HOME` and `PATH` must be configured; variables for software you have not installed can be ignored or commented out.
+
+
+- Soft link the `JDK` to `/usr/bin/java` (still using `JAVA_HOME=/opt/soft/java` as an example)
+
+    ```shell
+    sudo ln -s /opt/soft/java/bin/java /usr/bin/java
+    ```
+
+ - Modify the configuration file `conf/config/install_config.conf` on **all** nodes, synchronizing the following configuration.
+    
+    * To add a new master node, you need to modify the IPs and masters parameters.
+    * To add a new worker node, modify the IPs and workers parameters.
+
+```shell
+# which machines to deploy DS services on, separated by commas between multiple physical machines
+ips="ds1,ds2,ds3,ds4"
+
+# ssh port,default 22
+sshPort="22"
+
+# which machine the master service is deployed on
+masters="existing master01,existing master02,ds1,ds2"
+
+# the worker service is deployed on which machine, and specify the worker belongs to which worker group, the following example of "default" is the group name
+workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
+
+```
+- If the expansion is for worker nodes, you need to set the worker group; refer to [Worker grouping](./security.md) in the security section.
+
+- On all new nodes, change the directory permissions so that the deployment user has access to the DolphinScheduler directory
+
+```shell
+sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
+```
+
+### Restart the Cluster and Verify
+
+- Restart the cluster
+
+```shell
+# stop command:
+
+bin/stop-all.sh # stop all services
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  # stop master service
+sh bin/dolphinscheduler-daemon.sh stop worker-server  # stop worker service
+sh bin/dolphinscheduler-daemon.sh stop api-server     # stop api    service
+sh bin/dolphinscheduler-daemon.sh stop alert-server   # stop alert  service
+
+
+# start command::
+bin/start-all.sh # start all services
+
+sh bin/dolphinscheduler-daemon.sh start master-server  # start master service
+sh bin/dolphinscheduler-daemon.sh start worker-server  # start worker service
+sh bin/dolphinscheduler-daemon.sh start api-server     # start api    service
+sh bin/dolphinscheduler-daemon.sh start alert-server   # start alert  service
+
+```
+
+```
+ Attention: When using `start-all.sh` or `stop-all.sh`, if the machine executing the command does not have passwordless SSH configured to all machines, you will be prompted for passwords.
+```
+
+- After completing the script, use the `jps` command to see if every node service is started (`jps` comes with the `Java JDK`)
+
+```
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    ApiApplicationServer ----- api    service
+    AlertServer          ----- alert  service
+```
+
+After successful startup, you can view the logs, which are stored in the `logs` folder.
+
+```Log Path
+ logs/
+    ├── dolphinscheduler-alert-server.log
+    ├── dolphinscheduler-master-server.log
+    ├── dolphinscheduler-worker-server.log
+    ├── dolphinscheduler-api-server.log
+```
+
+If the above services start normally and the scheduling system page is normal, check whether there is an expanded Master or Worker service in the [Monitor] of the web system. If it exists, the expansion is complete.
+
+-----------------------------------------------------------------------------
+
+## Reduction
+
+Reduction means removing master or worker services from an existing DolphinScheduler cluster.
+There are two steps for scaling down; after performing them, the reduction is complete.
+
+### Stop the Service on the Scaled-Down Node
+
+ * If you are scaling down a master node, identify the physical machine where the master service is located, and stop the master service on that machine.
+ * If you are scaling down a worker node, identify the physical machine where the worker service to be removed is running, and stop the worker service on that machine.
+ 
+```shell
+# stop command:
+bin/stop-all.sh # stop all services
+
+sh bin/dolphinscheduler-daemon.sh stop master-server  # stop master service
+sh bin/dolphinscheduler-daemon.sh stop worker-server  # stop worker service
+sh bin/dolphinscheduler-daemon.sh stop api-server     # stop api    service
+sh bin/dolphinscheduler-daemon.sh stop alert-server   # stop alert  service
+
+
+# start command:
+bin/start-all.sh # start all services
+
+sh bin/dolphinscheduler-daemon.sh start master-server # start master service
+sh bin/dolphinscheduler-daemon.sh start worker-server # start worker service
+sh bin/dolphinscheduler-daemon.sh start api-server    # start api    service
+sh bin/dolphinscheduler-daemon.sh start alert-server  # start alert  service
+
+```
+
+```
+ Attention: When using `start-all.sh` or `stop-all.sh`, if the machine executing the command does not have passwordless SSH configured to all machines, you will be prompted for passwords.
+```
+
+- After the script is completed, use the `jps` command to see if every node service was successfully shut down (`jps` comes with the `Java JDK`)
+
+```
+    MasterServer         ----- master service
+    WorkerServer         ----- worker service
+    ApiApplicationServer ----- api    service
+    AlertServer          ----- alert  service
+```
+If the corresponding master service or worker service does not exist, then the master or worker service is successfully shut down.
+
+
+### Modify the Configuration File
+
+ - Modify the configuration file `conf/config/install_config.conf` on **all** nodes, synchronizing the following configuration.
+
+    * To scale down master nodes, modify the IPs and masters parameters.
+    * To scale down worker nodes, modify the IPs and workers parameters.
+
+```shell
+# which machines to deploy DS services on, "localhost" for this machine
+ips="ds1,ds2,ds3,ds4"
+
+# ssh port,default: 22
+sshPort="22"
+
+# which machine the master service is deployed on
+masters="existing master01,existing master02,ds1,ds2"
+
+# The worker service is deployed on which machine, and specify which worker group this worker belongs to, the following example of "default" is the group name
+workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
+
+```
diff --git a/docs/docs/en/guide/flink-call.md b/docs/docs/en/guide/flink-call.md
new file mode 100644
index 0000000..5f71b69
--- /dev/null
+++ b/docs/docs/en/guide/flink-call.md
@@ -0,0 +1,123 @@
+# Flink Calls Operating Steps
+
+## Create a Queue
+
+1. Log in to the scheduling system, click `Security`, then click `Queue manage` on the left, and click `Create queue` to create a queue.
+2. Fill in the name and value of the queue, and click "Submit" 
+
+<p align="center">
+   <img src="/img/api/create_queue.png" width="80%" />
+ </p>
+
+## Create a Tenant 
+
+```
+1. The tenant corresponds to a Linux user, which the worker uses to submit jobs. If the Linux OS environment does not have this user, the worker will create it when executing the script.
+2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person has only one name and one ID number.
+3. After creating a tenant, a corresponding folder will be created in the relevant HDFS directory.
+```
+
+<p align="center">
+   <img src="/img/api/create_tenant.png" width="80%" />
+ </p>
+
+## Create a User
+
+<p align="center">
+   <img src="/img/api/create_user.png" width="80%" />
+ </p>
+
+## Create a Token
+
+1. Log in to the scheduling system, click `Security`, then click `Token manage` on the left, and click `Create token` to create a token.
+
+<p align="center">
+   <img src="/img/token-management-en.png" width="80%" />
+ </p>
+
+
+2. Select the `Expiration time` (token validity time), select `User` (choose the specified user to perform the API operation), click "Generate token", copy the `Token` string, and click "Submit".
+
+<p align="center">
+   <img src="/img/create-token-en1.png" width="80%" />
+ </p>
+
+## Token Usage
+
+1. Open the API documentation page
+
+   > Address: http://{api server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
+
+<p align="center">
+   <img src="/img/api-documentation-en.png" width="80%" />
+ </p>
+
+
+2. Select a test API, the API selected for this test is `queryAllProjectList`
+
+   > projects/query-project-list
+
+3. Open `Postman`, fill in the API address, and enter the `Token` in `Headers`, and then send the request to view the result:
+
+   ```
+   token: The Token just generated
+   ```
+
+<p align="center">
+   <img src="/img/test-api.png" width="80%" />
+ </p>  
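+
+Equivalently, the same call can be made from the command line (a sketch; replace the host and token with your own values):
+
+```shell
+curl --request GET \
+  --url 'http://{api server ip}:12345/dolphinscheduler/projects/query-project-list' \
+  --header 'token: <the token generated above>'
+```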
+
+## User Authorization
+
+<p align="center">
+   <img src="/img/api/user_authorization.png" width="80%" />
+ </p>
+
+## User Login
+
+```
+http://192.168.1.163:12345/dolphinscheduler/ui/#/monitor/servers/master
+```
+
+<p align="center">
+   <img src="/img/api/user_login.png" width="80%" />
+ </p>
+
+## Upload the Resource
+
+<p align="center">
+   <img src="/img/api/upload_resource.png" width="80%" />
+ </p>
+
+## Create a Workflow
+
+<p align="center">
+   <img src="/img/api/create_workflow1.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow2.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow3.png" width="80%" />
+ </p>
+
+
+<p align="center">
+   <img src="/img/api/create_workflow4.png" width="80%" />
+ </p>
+
+## View the Execution Result
+
+<p align="center">
+   <img src="/img/api/execution_result.png" width="80%" />
+ </p>
+
+## View Log
+
+<p align="center">
+   <img src="/img/api/log.png" width="80%" />
+ </p>
\ No newline at end of file
diff --git a/docs/docs/en/guide/homepage.md b/docs/docs/en/guide/homepage.md
new file mode 100644
index 0000000..f8db29d
--- /dev/null
+++ b/docs/docs/en/guide/homepage.md
@@ -0,0 +1,5 @@
+# Home Page
+
+The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.
+
+![homepage](/img/new_ui/dev/homepage/homepage.png)
diff --git a/docs/docs/en/guide/installation/cluster.md b/docs/docs/en/guide/installation/cluster.md
new file mode 100644
index 0000000..4f1ead9
--- /dev/null
+++ b/docs/docs/en/guide/installation/cluster.md
@@ -0,0 +1,39 @@
+# Cluster Deployment
+
+Cluster deployment is to deploy the DolphinScheduler on multiple machines for running massive tasks in production.
+
+If you are a newcomer and want to experience DolphinScheduler functions, we recommend the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
+
+## Deployment Steps
+
+Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and deployment steps are the same as pseudo-cluster deployment. The difference is that [pseudo-cluster deployment](pseudo-cluster.md) is for one machine, while cluster deployment (Cluster) is for multiple machines. And steps of "Modify Configuration" are quite different between pseudo-cluster deployment and cluster deployment.
+
+### Prerequisites and DolphinScheduler Startup Environment Preparations
+
+Complete all the configuration described in [pseudo-cluster deployment](pseudo-cluster.md) on every machine, except for the `Prerequisites`, `Start ZooKeeper` and `Initialize the Database` sections of the `DolphinScheduler Startup Environment`.
+
+### Modify Configuration
+
+This step differs quite a lot from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the required resources for installation to each deployment machine by using `scp`. So we only need to modify the configuration of the machine that runs `install.sh` script and configurations will dispatch to cluster by `scp`. The configuration file is under the path `conf/config/install_config.conf`, here we only need to modify section **INSTALL MACHINE**, **DolphinS [...]
+
+```shell
+# ---------------------------------------------------------
+# INSTALL MACHINE
+# ---------------------------------------------------------
+# Using IP or machine hostname for the server going to deploy master, worker, API server, the IP of the server
+# If you using a hostname, make sure machines could connect each other by hostname
+# As below, the hostname of the machine deploying DolphinScheduler is ds1, ds2, ds3, ds4, ds5, where ds1, ds2 install the master server, ds3, ds4, and ds5 installs worker server, the alert server is installed in ds4, and the API server is installed in ds5
+ips="ds1,ds2,ds3,ds4,ds5"
+masters="ds1,ds2"
+workers="ds3:default,ds4:default,ds5:default"
+alertServer="ds4"
+apiServers="ds5"
+```
+
+## Start and Login DolphinScheduler
+
+Same as [pseudo-cluster deployment](pseudo-cluster.md).
+
+## Start and Stop Server
+
+Same as [pseudo-cluster deployment](pseudo-cluster.md).
\ No newline at end of file
diff --git a/docs/docs/en/guide/installation/docker.md b/docs/docs/en/guide/installation/docker.md
new file mode 100644
index 0000000..e74d43d
--- /dev/null
+++ b/docs/docs/en/guide/installation/docker.md
@@ -0,0 +1,1025 @@
+# QuickStart in Docker
+
+## Prerequisites
+
+ - [Docker](https://docs.docker.com/engine/install/) version: 1.13.1+
+ - [Docker Compose](https://docs.docker.com/compose/) version: 1.11.0+
+
+## How to Use this Docker Image
+
+Here are 3 ways to quickly install DolphinScheduler:
+
+### Start DolphinScheduler by Docker Compose (Recommended)
+
+In this way, you need to install [docker-compose](https://docs.docker.com/compose/) as a prerequisite; please install it by following the official installation guide for your platform.
+
+For Windows 7-10, you can install [Docker Toolbox](https://github.com/docker/toolbox/releases). For Windows 10 64-bit, you can install [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), and meet the [system requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements).
+
+#### Configure Memory not Less Than 4GB
+
+For Mac user, click `Docker Desktop -> Preferences -> Resources -> Memory`.
+
+For Windows Docker Toolbox users, configure the following two settings:
+
+ - **Memory**: Open Oracle VirtualBox Manager, if you double-click `Docker Quickstart Terminal` and successfully run `Docker Toolbox`, you will see a Virtual Machine named `default`. And click `Settings -> System -> Motherboard -> Base Memory`
+ - **Port Forwarding**: Click `Settings -> Network -> Advanced -> Port Forwarding -> Add`. fill `Name`, `Host Port` and `Guest Port` forms with `12345`, regardless of `Host IP` and `Guest IP`
+
+For Windows Docker Desktop users:
+ - **Hyper-V Mode**: Click `Docker Desktop -> Settings -> Resources -> Memory`
+ - **WSL 2 Mode**: Refer to [WSL 2 utility VM](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
+
+#### Download the Source Code Package
+
+Please download the source code package `apache-dolphinscheduler-1.3.8-src.tar.gz`, download address: [download address](/en-us/download/download.html).
+
+#### Pull Image and Start the Service
+
+> For Mac and Linux users, open **Terminal**
+> For Windows Docker Toolbox user, open **Docker Quickstart Terminal**
+> For Windows Docker Desktop user, open **Windows PowerShell**
+
+```
+$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
+$ cd apache-dolphinscheduler-1.3.8-src/docker/docker-swarm
+$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+$ docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest
+$ docker-compose up -d
+```
+
+> PowerShell should run `cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm`
+
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default.
+
+#### Login
+
+Visit the Web UI: http://localhost:12345/dolphinscheduler (Modify the IP address if needed).
+
+The default username is `admin` and the default password is `dolphinscheduler123`.
+
+<p align="center">
+  <img src="/img/login_en.png" width="60%" />
+</p>
+
+Please refer to the [Quick Start](../start/quick-start.md) to explore how to use DolphinScheduler.
+
+### Start via Existing PostgreSQL and ZooKeeper Service
+
+In this way, you need to install [docker](https://docs.docker.com/engine/install/) as a prerequisite; please install it by following the official installation guide for your platform.
+
+#### Basic Required Software
+
+ - [PostgreSQL](https://www.postgresql.org/download/) (version 8.2.15+)
+ - [ZooKeeper](https://zookeeper.apache.org/releases.html) (version 3.4.6+)
+ - [Docker](https://docs.docker.com/engine/install/) (version 1.13.1+)
+
+#### Login to the PostgreSQL Database and Create a Database Named `dolphinscheduler`
+
+#### Initialize the Database, Import `sql/dolphinscheduler_postgre.sql` to Create Tables and Initial Data
+
+#### Download the DolphinScheduler Image
+
+We have already uploaded the user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:
+
+```
+docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+```
+
+#### Run a DolphinScheduler Instance
+
+```
+$ docker run -d --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-p 12345:12345 \
+apache/dolphinscheduler:1.3.8 all
+```
+
+Note: replace the database username and password `test` with your actual PostgreSQL username and password, and replace `192.168.x.x` with the host IPs of your PostgreSQL and ZooKeeper services.
+
+#### Login
+
+Same as above
+
+### Start a Standalone DolphinScheduler Server
+
+The following services automatically start when the container starts:
+
+```
+     MasterServer         ----- master service
+     WorkerServer         ----- worker service
+     ApiApplicationServer ----- api service
+     AlertServer          ----- alert service
+```
+
+If you just want to run part of the services in the DolphinScheduler, you can start a single service in DolphinScheduler by running the following commands.
+
+* Start a **master server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-master \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+apache/dolphinscheduler:1.3.8 master-server
+```
+
+* Start a **worker server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-worker \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+apache/dolphinscheduler:1.3.8 worker-server
+```
+
+* Start an **api server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-api \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-p 12345:12345 \
+apache/dolphinscheduler:1.3.8 api-server
+```
+
+* Start an **alert server**, For example:
+
+```
+$ docker run -d --name dolphinscheduler-alert \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+apache/dolphinscheduler:1.3.8 alert-server
+```
+
+**Note**: You must specify the environment variables `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD`, `ZOOKEEPER_QUORUM` when starting a single DolphinScheduler server.
+
+## Environment Variables
+
+The Docker container is configured through environment variables, and the [Appendix-Environment Variables](#appendix-environment-variables) lists the configurable environment variables of the DolphinScheduler and their default values.
+
+Especially, it can be configured through the environment variable configuration file `config.env.sh` in Docker Compose and Docker Swarm.
+
+## Support Matrix
+
+| Type                                                         | Support      | Notes                                 |
+| ------------------------------------------------------------ | ------------ | ------------------------------------- |
+| Shell                                                        | Yes          |                                       |
+| Python2                                                      | Yes          |                                       |
+| Python3                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop2                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop3                                                      | Not Sure     | Not tested                            |
+| Spark-Local(client)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-YARN(cluster)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-Standalone(cluster)                                    | Not Yet      |                                       |
+| Spark-Kubernetes(cluster)                                    | Not Yet      |                                       |
+| Flink-Local(local>=1.11)                                     | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-YARN(yarn-cluster)                                     | Indirect Yes | Refer to FAQ                          |
+| Flink-YARN(yarn-session/yarn-per-job/yarn-application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Standalone(default)                                    | Not Yet      |                                       |
+| Flink-Standalone(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Kubernetes(default)                                    | Not Yet      |                                       |
+| Flink-Kubernetes(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-NativeKubernetes(kubernetes-session/application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| MapReduce                                                    | Indirect Yes | Refer to FAQ                          |
+| Kerberos                                                     | Indirect Yes | Refer to FAQ                          |
+| HTTP                                                         | Yes          |                                       |
+| DataX                                                        | Indirect Yes | Refer to FAQ                          |
+| Sqoop                                                        | Indirect Yes | Refer to FAQ                          |
+| SQL-MySQL                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-PostgreSQL                                               | Yes          |                                       |
+| SQL-Hive                                                     | Indirect Yes | Refer to FAQ                          |
+| SQL-Spark                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-ClickHouse                                               | Indirect Yes | Refer to FAQ                          |
+| SQL-Oracle                                                   | Indirect Yes | Refer to FAQ                          |
+| SQL-SQLServer                                                | Indirect Yes | Refer to FAQ                          |
+| SQL-DB2                                                      | Indirect Yes | Refer to FAQ                          |
+
+## FAQ
+
+### How to Manage DolphinScheduler by Docker Compose?
+
+Start, restart, stop or list containers:
+
+```
+docker-compose start
+docker-compose restart
+docker-compose stop
+docker-compose ps
+```
+
+Stop containers and remove containers, networks:
+
+```
+docker-compose down
+```
+
+Stop containers and remove containers, networks and volumes:
+
+```
+docker-compose down -v
+```
+
+### How to View the Logs of a Container?
+
+List all running containers logs:
+
+```
+docker ps
+docker ps --format "{{.Names}}" # only print names
+```
+
+View the logs of a container named docker-swarm_dolphinscheduler-api_1:
+
+```
+docker logs docker-swarm_dolphinscheduler-api_1
+docker logs -f docker-swarm_dolphinscheduler-api_1 # follow log output
+docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # show last 10 lines from the end of the logs
+```
+
+### How to Scale Master and Worker by Docker Compose?
+
+Scale master to 2 instances:
+
+```
+docker-compose up -d --scale dolphinscheduler-master=2 dolphinscheduler-master
+```
+
+Scale worker to 3 instances:
+
+```
+docker-compose up -d --scale dolphinscheduler-worker=3 dolphinscheduler-worker
+```
+
+### How to Deploy DolphinScheduler on Docker Swarm?
+
+Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to [create-swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/)).
+
+Start a stack named `dolphinscheduler`:
+
+```
+docker stack deploy -c docker-stack.yml dolphinscheduler
+```
+
+List the services in the stack named `dolphinscheduler`:
+
+```
+docker stack services dolphinscheduler
+```
+
+Stop and remove the stack named `dolphinscheduler`:
+
+```
+docker stack rm dolphinscheduler
+```
+
+Remove the volumes of the stack named `dolphinscheduler`:
+
+```
+docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e "^dolphinscheduler")
+```
+
+### How to Scale Master and Worker on Docker Swarm?
+
+Scale master of the stack named `dolphinscheduler` to 2 instances:
+
+```
+docker service scale dolphinscheduler_dolphinscheduler-master=2
+```
+
+Scale worker of the stack named `dolphinscheduler` to 3 instances:
+
+```
+docker service scale dolphinscheduler_dolphinscheduler-worker=3
+```
+
+### How to Build a Docker Image?
+
+#### Build From the Source Code (Require Maven 3.3+ and JDK 1.8+)
+
+In Unix-Like, execute in Terminal:
+
+```bash
+$ bash ./docker/build/hooks/build
+```
+
+In Windows, execute in cmd or PowerShell:
+
+```bat
+C:\dolphinscheduler-src>.\docker\build\hooks\build.bat
+```
+
+Please read the `./docker/build/hooks/build` and `./docker/build/hooks/build.bat` script files if you want to understand what they do.
+
+#### Build From the Binary Distribution (Not require Maven 3.3+ and JDK 1.8+)
+
+Please download the binary distribution package `apache-dolphinscheduler-1.3.8-bin.tar.gz`, download address: [download address](/en-us/download/download.html). And put `apache-dolphinscheduler-1.3.8-bin.tar.gz` into the `apache-dolphinscheduler-1.3.8-src/docker/build` directory, execute in Terminal or PowerShell:
+
+```
+$ cd apache-dolphinscheduler-1.3.8-src/docker/build
+$ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
+```
+
+> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src/docker/build`
+
+#### Build Multi-Platform Images
+
+Currently, support build images including `linux/amd64` and `linux/arm64` platform architecture, requirements:
+
+1. Support [docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/)
+2. Own the push permission of `https://hub.docker.com/r/apache/dolphinscheduler` (**Be cautious**: The build command will automatically push the multi-platform architecture images to the docker hub of `apache/dolphinscheduler` by default)
+
+Execute:
+
+```bash
+$ docker login # login to push apache/dolphinscheduler
+$ bash ./docker/build/hooks/build x
+```
+
+### How to Add an Environment Variable for Docker?
+
+If you would like to do additional initialization or add environment variables at build or run time, you can add one or more environment variables in the script `/root/start-init-conf.sh`. If the change involves configuration modification, also modify the corresponding template file under `/opt/dolphinscheduler/conf/*.tpl`.
+
+For example, add an environment variable `SECURITY_AUTHENTICATION_TYPE` in `/root/start-init-conf.sh`:
+
+```
+export SECURITY_AUTHENTICATION_TYPE=PASSWORD
+```
+
+Add the `SECURITY_AUTHENTICATION_TYPE` to the template file `application-api.properties.tpl`:
+
+```
+security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}
+```
+
+`/root/start-init-conf.sh` will dynamically generate config file:
+
+```sh
+echo "generate dolphinscheduler config"
+ls ${DOLPHINSCHEDULER_HOME}/conf/ | grep ".tpl" | while read line; do
+eval "cat << EOF
+$(cat ${DOLPHINSCHEDULER_HOME}/conf/${line})
+EOF
+" > ${DOLPHINSCHEDULER_HOME}/conf/${line%.*}
+done
+```
+
+### How to Use MySQL as the DolphinScheduler's Database Instead of PostgreSQL?
+
+> Because of the commercial license, we cannot directly use the driver of MySQL.
+>
+> If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Modify all the `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`
+
+5. Comment the `dolphinscheduler-postgresql` block in `docker-compose.yml`.
+
+6. Add `dolphinscheduler-mysql` service in `docker-compose.yml` (**Optional**, you can directly use an external MySQL database).
+
+7. Modify DATABASE environment variables in `config.env.sh`:
+
+```
+DATABASE_TYPE=mysql
+DATABASE_DRIVER=com.mysql.jdbc.Driver
+DATABASE_HOST=dolphinscheduler-mysql
+DATABASE_PORT=3306
+DATABASE_USERNAME=root
+DATABASE_PASSWORD=root
+DATABASE_DATABASE=dolphinscheduler
+DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
+```
+
+> If you have added `dolphinscheduler-mysql` service in `docker-compose.yml`, just set `DATABASE_HOST` to `dolphinscheduler-mysql`
+
+8. Run the DolphinScheduler (See **How to use this docker image**)
+
+### How to Support MySQL Datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot directly use the driver of MySQL.
+>
+> If you want to add a MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+5. Run the DolphinScheduler (See **How to use this docker image**).
+
+6. Add a MySQL datasource in `Datasource manage`.
+
+### How to Support Oracle Datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot directly use the driver of Oracle.
+>
+> If you want to add an Oracle datasource, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`).
+
+2. Create a new `Dockerfile` to add Oracle driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including Oracle driver:
+
+```
+docker build -t apache/dolphinscheduler:oracle-driver .
+```
+
+4. Modify all `image` fields to `apache/dolphinscheduler:oracle-driver` in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+5. Run the DolphinScheduler (See **How to use this docker image**).
+
+6. Add an Oracle datasource in `Datasource manage`.
+
+### How to Support Python 2 pip and Custom requirements.txt?
+
+1. Create a new `Dockerfile` to install pip:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY requirements.txt /tmp
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python-pip && \
+    pip install --no-cache-dir -r /tmp/requirements.txt && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **pip 18.1**. If you need to upgrade the pip, just add one more line.
+
+```
+    pip install --no-cache-dir -U pip && \
+```
+
+2. Build a new docker image including pip:
+
+```
+docker build -t apache/dolphinscheduler:pip .
+```
+
+3. Modify all `image` fields to `apache/dolphinscheduler:pip` in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+4. Run the DolphinScheduler (See **How to use this docker image**).
+
+5. Verify pip under a new Python task.
+
+### How to Support Python 3?
+
+1. Create a new `Dockerfile` to install Python 3:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python3 && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip`.
+
+```
+    apt-get install -y --no-install-recommends python3-pip && \
+```
+
+2. Build a new docker image including Python 3:
+
+```
+docker build -t apache/dolphinscheduler:python3 .
+```
+
+3. Modify all `image` fields to `apache/dolphinscheduler:python3` in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+4. Modify `PYTHON_HOME` to `/usr/bin/python3` in `config.env.sh`.
+
+5. Run the DolphinScheduler (See **How to use this docker image**).
+
+6. Verify Python 3 under a new Python task.
+
+### How to Support Hadoop, Spark, Flink, Hive or DataX?
+
+Take Spark 2.4.7 as an example:
+
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`.
+
+2. Run the DolphinScheduler (See **How to use this docker image**).
+
+3. Copy the Spark 2.4.7 release binary into the Docker container.
+
+```bash
+docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
+```
+
+Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost.
+
+4. Attach the container and ensure that `SPARK_HOME2` exists.
+
+```bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
+cd /opt/soft
+tar zxf spark-2.4.7-bin-hadoop2.7.tgz
+rm -f spark-2.4.7-bin-hadoop2.7.tgz
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well.
+
+5. Verify Spark under a Shell task.
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+6. Verify Spark under a Spark task.
+
+The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first, and then create a Spark task with:
+
+- Spark Version: `SPARK2`
+- Main Class: `org.apache.spark.examples.SparkPi`
+- Main Package: `spark-examples_2.11-2.4.7.jar`
+- Deploy Mode: `local`
+
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+7. Verify Spark on YARN.
+
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps.
+
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exists.
+
+### How to Support Spark 3?
+
+In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`.
+
+Take Spark 3.1.1 as an example:
+
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`.
+
+2. Run the DolphinScheduler (See **How to use this docker image**).
+
+3. Copy the Spark 3.1.1 release binary into the Docker container.
+
+```bash
+docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
+```
+
+4. Attach the container and ensure that `SPARK_HOME2` exists.
+
+```bash
+docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
+cd /opt/soft
+tar zxf spark-3.1.1-bin-hadoop2.7.tgz
+rm -f spark-3.1.1-bin-hadoop2.7.tgz
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well.
+
+5. Verify Spark under a Shell task.
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+### How to Support Shared Storage between Master, Worker and API server?
+
+> **Note**: If it is deployed on a single machine by `docker-compose`, step 1 and 2 can be skipped directly, and execute the command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container.
+
+For example, Master, Worker and API servers may use Hadoop at the same time.
+
+1. Modify the volume `dolphinscheduler-shared-local` to support NFS in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+```yaml
+volumes:
+  dolphinscheduler-shared-local:
+    driver_opts:
+      type: "nfs"
+      o: "addr=10.40.0.199,nolock,soft,rw"
+      device: ":/path/to/shared/dir"
+```
+
+2. Put the Hadoop into the NFS.
+
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct.
+
+### How to Support Local File Resource Storage Instead of HDFS and S3?
+
+> **Note**: If it is deployed on a single machine by `docker-compose`, step 2 can be skipped directly.
+
+1. Modify the following environment variables in `config.env.sh`:
+
+```
+RESOURCE_STORAGE_TYPE=HDFS
+FS_DEFAULT_FS=file:///
+```
+
+2. Modify the volume `dolphinscheduler-resource-local` to support NFS in `docker-compose.yml`.
+
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
+
+```yaml
+volumes:
+  dolphinscheduler-resource-local:
+    driver_opts:
+      type: "nfs"
+      o: "addr=10.40.0.199,nolock,soft,rw"
+      device: ":/path/to/resource/dir"
+```
+
+### How to Support S3 Resource Storage Like MinIO?
+
+Take MinIO as an example: modify the following environment variables in `config.env.sh`.
+
+```
+RESOURCE_STORAGE_TYPE=S3
+RESOURCE_UPLOAD_PATH=/dolphinscheduler
+FS_DEFAULT_FS=s3a://BUCKET_NAME
+FS_S3A_ENDPOINT=http://MINIO_IP:9000
+FS_S3A_ACCESS_KEY=MINIO_ACCESS_KEY
+FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
+```
+
+Modify `BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` to actual values.
+
+> **Note**: `MINIO_IP` must be an IP address rather than a domain name, because DolphinScheduler currently doesn't support S3 path-style access.
+
+### How to Configure SkyWalking?
+
+Modify SkyWalking environment variables in `config.env.sh`:
+
+```
+SKYWALKING_ENABLE=true
+SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
+SW_GRPC_LOG_SERVER_HOST=127.0.0.1
+SW_GRPC_LOG_SERVER_PORT=11800
+```
+
+## Appendix-Environment Variables
+
+### Database
+
+**`DATABASE_TYPE`**
+
+This environment variable sets the `TYPE` for the `database`. The default value is `postgresql`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_DRIVER`**
+
+This environment variable sets the `DRIVER` for the `database`. The default value is `org.postgresql.Driver`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_HOST`**
+
+This environment variable sets the `HOST` for the `database`. The default value is `127.0.0.1`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PORT`**
+
+This environment variable sets the `PORT` for the `database`. The default value is `5432`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_USERNAME`**
+
+This environment variable sets the `USERNAME` for the `database`. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PASSWORD`**
+
+This environment variable sets the `PASSWORD` for the `database`. The default value is `root`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_DATABASE`**
+
+This environment variable sets the `DATABASE` for the `database`. The default value is `dolphinscheduler`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
+
+**`DATABASE_PARAMS`**
+
+This environment variable sets the `PARAMS` for the `database`. The default value is `characterEncoding=utf8`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server` or `alert-server`.
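+
+As a hedged sketch of how these variables are supplied when running a single standalone server (the exact `docker run` form is described in **How to use this docker image**; the addresses and credentials below are placeholders):
+
+```bash
+# Start only the master-server against an external PostgreSQL and ZooKeeper
+docker run -d --name dolphinscheduler-master \
+  -e DATABASE_TYPE="postgresql" \
+  -e DATABASE_DRIVER="org.postgresql.Driver" \
+  -e DATABASE_HOST="192.168.x.x" \
+  -e DATABASE_PORT="5432" \
+  -e DATABASE_USERNAME="root" \
+  -e DATABASE_PASSWORD="root" \
+  -e DATABASE_DATABASE="dolphinscheduler" \
+  -e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+  apache/dolphinscheduler:1.3.8 master-server
+```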
+
+### ZooKeeper
+
+**`ZOOKEEPER_QUORUM`**
+
+This environment variable sets ZooKeeper quorum. The default value is `127.0.0.1:2181`.
+
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server` or `api-server`.
+
+**`ZOOKEEPER_ROOT`**
+
+This environment variable sets the ZooKeeper root directory for DolphinScheduler. The default value is `/dolphinscheduler`.
+
+### Common
+
+**`DOLPHINSCHEDULER_OPTS`**
+
+This environment variable sets JVM options for DolphinScheduler, suitable for `master-server`, `worker-server`, `api-server`, `alert-server`. The default value is empty.
+
+**`DATA_BASEDIR_PATH`**
+
+This environment variable sets the user data directory. If you customize it, please make sure the directory exists and has read-write permissions. The default value is `/tmp/dolphinscheduler`.
+
+**`RESOURCE_STORAGE_TYPE`**
+
+This environment variable sets the resource storage type for DolphinScheduler: `HDFS`, `S3` or `NONE`. The default value is `HDFS`.
+
+**`RESOURCE_UPLOAD_PATH`**
+
+This environment variable sets the resource storage path on `HDFS` or `S3`. The default value is `/dolphinscheduler`.
+
+**`FS_DEFAULT_FS`**
+
+This environment variable sets `fs.defaultFS` for resource storage like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`. The default value is `file:///`.
+
+**`FS_S3A_ENDPOINT`**
+
+This environment variable sets `s3` endpoint for resource storage. The default value is `s3.xxx.amazonaws.com`.
+
+**`FS_S3A_ACCESS_KEY`**
+
+This environment variable sets `s3` access key for resource storage. The default value is `xxxxxxx`.
+
+**`FS_S3A_SECRET_KEY`**
+
+This environment variable sets `s3` secret key for resource storage. The default value is `xxxxxxx`.
+
+**`HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE`**
+
+This environment variable sets whether to start up Kerberos authentication. The default value is `false`.
+
+**`JAVA_SECURITY_KRB5_CONF_PATH`**
+
+This environment variable sets `java.security.krb5.conf` path. The default value is `/opt/krb5.conf`.
+
+**`LOGIN_USER_KEYTAB_USERNAME`**
+
+This environment variable sets the `keytab` username for the login user. The default value is `hdfs@HADOOP.COM`.
+
+**`LOGIN_USER_KEYTAB_PATH`**
+
+This environment variable sets the `keytab` path for the login user. The default value is `/opt/hdfs.keytab`.
+
+**`KERBEROS_EXPIRE_TIME`**
+
+This environment variable sets the Kerberos expiration time in hours. The default value is `2`.
+
+**`HDFS_ROOT_USER`**
+
+This environment variable sets HDFS root user when `resource.storage.type=HDFS`. The default value is `hdfs`.
+
+**`RESOURCE_MANAGER_HTTPADDRESS_PORT`**
+
+This environment variable sets resource manager HTTP address port. The default value is `8088`.
+
+**`YARN_RESOURCEMANAGER_HA_RM_IDS`**
+
+This environment variable sets the YARN ResourceManager HA RM IDs. The default value is empty.
+
+**`YARN_APPLICATION_STATUS_ADDRESS`**
+
+This environment variable sets the YARN application status address. The default value is `http://ds1:%s/ws/v1/cluster/apps/%s`.
+
+**`SKYWALKING_ENABLE`**
+
+This environment variable sets whether to enable SkyWalking. The default value is `false`.
+
+**`SW_AGENT_COLLECTOR_BACKEND_SERVICES`**
+
+This environment variable sets agent collector backend services for SkyWalking. The default value is `127.0.0.1:11800`.
+
+**`SW_GRPC_LOG_SERVER_HOST`**
+
+This environment variable sets `gRPC` log server host for SkyWalking. The default value is `127.0.0.1`.
+
+**`SW_GRPC_LOG_SERVER_PORT`**
+
+This environment variable sets `gRPC` log server port for SkyWalking. The default value is `11800`.
+
+**`HADOOP_HOME`**
+
+This environment variable sets `HADOOP_HOME`. The default value is `/opt/soft/hadoop`.
+
+**`HADOOP_CONF_DIR`**
+
+This environment variable sets `HADOOP_CONF_DIR`. The default value is `/opt/soft/hadoop/etc/hadoop`.
+
+**`SPARK_HOME1`**
+
+This environment variable sets `SPARK_HOME1`. The default value is `/opt/soft/spark1`.
+
+**`SPARK_HOME2`**
+
+This environment variable sets `SPARK_HOME2`. The default value is `/opt/soft/spark2`.
+
+**`PYTHON_HOME`**
+
+This environment variable sets `PYTHON_HOME`. The default value is `/usr/bin/python`.
+
+**`JAVA_HOME`**
+
+This environment variable sets `JAVA_HOME`. The default value is `/usr/local/openjdk-8`.
+
+**`HIVE_HOME`**
+
+This environment variable sets `HIVE_HOME`. The default value is `/opt/soft/hive`.
+
+**`FLINK_HOME`**
+
+This environment variable sets `FLINK_HOME`. The default value is `/opt/soft/flink`.
+
+**`DATAX_HOME`**
+
+This environment variable sets `DATAX_HOME`. The default value is `/opt/soft/datax`.
+
+### Master Server
+
+**`MASTER_SERVER_OPTS`**
+
+This environment variable sets JVM options for `master-server`. The default value is `-Xms1g -Xmx1g -Xmn512m`.
+
+**`MASTER_EXEC_THREADS`**
+
+This environment variable sets the execute thread number for `master-server`, which limits the number of process instances handled in parallel. The default value is `100`.
+
+**`MASTER_EXEC_TASK_NUM`**
+
+This environment variable sets the number of tasks executed in parallel per process instance for `master-server`. The default value is `20`.
+
+**`MASTER_DISPATCH_TASK_NUM`**
+
+This environment variable sets the number of tasks dispatched per batch for `master-server`. The default value is `3`.
+
+**`MASTER_HOST_SELECTOR`**
+
+This environment variable sets host selector for `master-server`. Optional values include `Random`, `RoundRobin` and `LowerWeight`. The default value is `LowerWeight`.
+
+**`MASTER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets the heartbeat interval for `master-server`, in seconds. The default value is `10`.
+
+**`MASTER_TASK_COMMIT_RETRYTIMES`**
+
+This environment variable sets task commit retry times for `master-server`. The default value is `5`.
+
+**`MASTER_TASK_COMMIT_INTERVAL`**
+
+This environment variable sets the task commit interval for `master-server`, in seconds. The default value is `1`.
+
+**`MASTER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max CPU load avg for `master-server`. The default value is `-1`.
+
+**`MASTER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `master-server`, the unit is G. The default value is `0.3`.
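+
+A minimal sketch of overriding some of these settings in `config.env.sh` (the values below are illustrative, not tuned recommendations):
+
+```bash
+# Illustrative master overrides appended to config.env.sh
+MASTER_EXEC_THREADS=200
+MASTER_EXEC_TASK_NUM=40
+MASTER_HOST_SELECTOR=RoundRobin
+MASTER_MAX_CPULOAD_AVG=16
+MASTER_RESERVED_MEMORY=0.5
+```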
+
+### Worker Server
+
+**`WORKER_SERVER_OPTS`**
+
+This environment variable sets JVM options for `worker-server`. The default value is `-Xms1g -Xmx1g -Xmn512m`.
+
+**`WORKER_EXEC_THREADS`**
+
+This environment variable sets the execute thread number for `worker-server`. The default value is `100`.
+
+**`WORKER_HEARTBEAT_INTERVAL`**
+
+This environment variable sets heartbeat interval for `worker-server`. The default value is `10`.
+
+**`WORKER_MAX_CPULOAD_AVG`**
+
+This environment variable sets max CPU load avg for `worker-server`. The default value is `-1`.
+
+**`WORKER_RESERVED_MEMORY`**
+
+This environment variable sets reserved memory for `worker-server`, the unit is G. The default value is `0.3`.
+
+**`WORKER_GROUPS`**
+
+This environment variable sets groups for `worker-server`. The default value is `default`.
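+
+Similarly, a minimal sketch of worker tuning in `config.env.sh` (the extra `gpu` group is a hypothetical example):
+
+```bash
+# Illustrative worker overrides in config.env.sh; `gpu` is a hypothetical extra group
+WORKER_EXEC_THREADS=100
+WORKER_MAX_CPULOAD_AVG=16
+WORKER_RESERVED_MEMORY=0.5
+WORKER_GROUPS=default,gpu
+```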
+
+### Alert Server
+
+**`ALERT_SERVER_OPTS`**
+
+This environment variable sets JVM options for `alert-server`. The default value is `-Xms512m -Xmx512m -Xmn256m`.
+
+**`XLS_FILE_PATH`**
+
+This environment variable sets `xls` file path for `alert-server`. The default value is `/tmp/xls`.
+
+**`MAIL_SERVER_HOST`**
+
+This environment variable sets mail server host for `alert-server`. The default value is empty.
+
+**`MAIL_SERVER_PORT`**
+
+This environment variable sets mail server port for `alert-server`. The default value is empty.
+
+**`MAIL_SENDER`**
+
+This environment variable sets mail sender for `alert-server`. The default value is empty.
+
+**`MAIL_USER`**
+
+This environment variable sets mail user for `alert-server`. The default value is empty.
+
+**`MAIL_PASSWD`**
+
+This environment variable sets mail password for `alert-server`. The default value is empty.
+
+**`MAIL_SMTP_STARTTLS_ENABLE`**
+
+This environment variable sets whether to enable SMTP STARTTLS for `alert-server`. The default value is `true`.
+
+**`MAIL_SMTP_SSL_ENABLE`**
+
+This environment variable sets whether to enable SMTP SSL for `alert-server`. The default value is `false`.
+
+**`MAIL_SMTP_SSL_TRUST`**
+
+This environment variable sets the SMTP SSL trusted host for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_ENABLE`**
+
+This environment variable sets whether to enable Enterprise WeChat for `alert-server`. The default value is `false`.
+
+**`ENTERPRISE_WECHAT_CORP_ID`**
+
+This environment variable sets enterprise WeChat corp id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_SECRET`**
+
+This environment variable sets enterprise WeChat secret for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_AGENT_ID`**
+
+This environment variable sets enterprise WeChat agent id for `alert-server`. The default value is empty.
+
+**`ENTERPRISE_WECHAT_USERS`**
+
+This environment variable sets enterprise WeChat users for `alert-server`. The default value is empty.
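+
+A minimal sketch of enabling email alerts in `config.env.sh` (the SMTP host and credentials are hypothetical):
+
+```bash
+# Hypothetical SMTP settings in config.env.sh
+XLS_FILE_PATH=/tmp/xls
+MAIL_SERVER_HOST=smtp.example.com
+MAIL_SERVER_PORT=25
+MAIL_SENDER=dolphinscheduler@example.com
+MAIL_USER=dolphinscheduler@example.com
+MAIL_PASSWD=xxxxxxx
+MAIL_SMTP_STARTTLS_ENABLE=true
+MAIL_SMTP_SSL_ENABLE=false
+MAIL_SMTP_SSL_TRUST=smtp.example.com
+```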
+
+### API Server
+
+**`API_SERVER_OPTS`**
+
+This environment variable sets JVM options for `api-server`. The default value is `-Xms512m -Xmx512m -Xmn256m`.
diff --git a/docs/docs/en/guide/installation/kubernetes.md b/docs/docs/en/guide/installation/kubernetes.md
new file mode 100644
index 0000000..5cd9c1f
--- /dev/null
+++ b/docs/docs/en/guide/installation/kubernetes.md
@@ -0,0 +1,754 @@
+# QuickStart in Kubernetes
+
+Kubernetes deployment is DolphinScheduler deployment in a Kubernetes cluster, which can schedule massive tasks and can be used in production.
+
+If you are a new hand and want to experience DolphinScheduler functions, we recommend you follow the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend you follow the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
+
+## Prerequisites
+
+ - [Helm](https://helm.sh/) version 3.1.0+
+ - [Kubernetes](https://kubernetes.io/) version 1.12+
+ - PV provisioner support in the underlying infrastructure
+
+## Install DolphinScheduler
+
+Please download the source code package `apache-dolphinscheduler-1.3.8-src.tar.gz` from the [download page](/en-us/download/download.html).
+
+To install the chart with the release name `dolphinscheduler`, please execute the following commands:
+
+```
+$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
+$ cd apache-dolphinscheduler-1.3.8-src/docker/kubernetes/dolphinscheduler
+$ helm repo add bitnami https://charts.bitnami.com/bitnami
+$ helm dependency update .
+$ helm install dolphinscheduler . --set image.tag=1.3.8
+```
+
+To install the release named `dolphinscheduler` into the `test` namespace:
+
+```bash
+$ helm install dolphinscheduler . -n test
+```
+
+> **Tip**: If a namespace named `test` is used, the optional parameter `-n test` needs to be added to the `helm` and `kubectl` commands.
+
+These commands are used to deploy DolphinScheduler on the Kubernetes cluster by default. The [Appendix-Configuration](#appendix-configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default.
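+
+For example, a sketch of overriding individual chart values from the [Appendix-Configuration](#appendix-configuration) at install time:
+
+```bash
+helm install dolphinscheduler . \
+  --set image.tag=1.3.8 \
+  --set timezone=America/New_York \
+  --set master.replicas=2
+```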
+
+## Access DolphinScheduler UI
+
+If `ingress.enabled` in `values.yaml` is set to `true`, you could access `http://${ingress.host}/dolphinscheduler` in browser.
+
+> **Tip**: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
+
+Otherwise, when `api.service.type=ClusterIP` you need to execute `port-forward` commands:
+
+```bash
+$ kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
+$ kubectl port-forward --address 0.0.0.0 -n test svc/dolphinscheduler-api 12345:12345 # with test namespace
+```
+
+> **Tip**: If the error of `unable to do port forwarding: socat not found` appears, you need to install `socat` first.
+
+Access the web: `http://localhost:12345/dolphinscheduler` (Modify the IP address if needed).
+
+Or when `api.service.type=NodePort` you need to execute the command:
+
+```bash
+NODE_IP=$(kubectl get no -n {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+NODE_PORT=$(kubectl get svc {{ template "dolphinscheduler.fullname" . }}-api -n {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}")
+echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
+```
+
+Access the web: `http://$NODE_IP:$NODE_PORT/dolphinscheduler`.
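+
+The commands above use the chart's Helm template placeholders; as a concrete sketch (assuming the release name `dolphinscheduler` in the `default` namespace), they expand to roughly:
+
+```bash
+NODE_IP=$(kubectl get no -o jsonpath="{.items[0].status.addresses[0].address}")
+NODE_PORT=$(kubectl get svc dolphinscheduler-api -o jsonpath="{.spec.ports[0].nodePort}")
+echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
+```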
+
+The default username is `admin` and the default password is `dolphinscheduler123`.
+
+Please refer to the [Quick Start](../start/quick-start.md) chapter to explore how to use DolphinScheduler.
+
+## Uninstall the Chart
+
+To uninstall or delete the `dolphinscheduler` deployment:
+
+```bash
+$ helm uninstall dolphinscheduler
+```
+
+The command removes all the Kubernetes components (except PVCs) associated with `dolphinscheduler` and deletes the release.
+
+Run the command below to delete the PVCs associated with `dolphinscheduler`:
+
+```bash
+$ kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler
+```
+
+> **Note**: Deleting the PVCs will delete all data as well. Please be cautious before doing so.
+
+## Configuration
+
+The configuration file is `values.yaml`, and the [Appendix-Configuration](#appendix-configuration) table lists the configurable parameters of DolphinScheduler and their default values.
+
+## Support Matrix
+
+| Type                                                         | Support      | Notes                                 |
+| ------------------------------------------------------------ | ------------ | ------------------------------------- |
+| Shell                                                        | Yes          |                                       |
+| Python2                                                      | Yes          |                                       |
+| Python3                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop2                                                      | Indirect Yes | Refer to FAQ                          |
+| Hadoop3                                                      | Not Sure     | Not tested                            |
+| Spark-Local(client)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-YARN(cluster)                                          | Indirect Yes | Refer to FAQ                          |
+| Spark-Standalone(cluster)                                    | Not Yet      |                                       |
+| Spark-Kubernetes(cluster)                                    | Not Yet      |                                       |
+| Flink-Local(local>=1.11)                                     | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-YARN(yarn-cluster)                                     | Indirect Yes | Refer to FAQ                          |
+| Flink-YARN(yarn-session/yarn-per-job/yarn-application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Standalone(default)                                    | Not Yet      |                                       |
+| Flink-Standalone(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-Kubernetes(default)                                    | Not Yet      |                                       |
+| Flink-Kubernetes(remote>=1.11)                               | Not Yet      | Generic CLI mode is not yet supported |
+| Flink-NativeKubernetes(kubernetes-session/application>=1.11) | Not Yet      | Generic CLI mode is not yet supported |
+| MapReduce                                                    | Indirect Yes | Refer to FAQ                          |
+| Kerberos                                                     | Indirect Yes | Refer to FAQ                          |
+| HTTP                                                         | Yes          |                                       |
+| DataX                                                        | Indirect Yes | Refer to FAQ                          |
+| Sqoop                                                        | Indirect Yes | Refer to FAQ                          |
+| SQL-MySQL                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-PostgreSQL                                               | Yes          |                                       |
+| SQL-Hive                                                     | Indirect Yes | Refer to FAQ                          |
+| SQL-Spark                                                    | Indirect Yes | Refer to FAQ                          |
+| SQL-ClickHouse                                               | Indirect Yes | Refer to FAQ                          |
+| SQL-Oracle                                                   | Indirect Yes | Refer to FAQ                          |
+| SQL-SQLServer                                                | Indirect Yes | Refer to FAQ                          |
+| SQL-DB2                                                      | Indirect Yes | Refer to FAQ                          |
+
+## FAQ
+
+### How to View the Logs of a Pod Container?
+
+List all pods (aka `po`):
+
+```
+kubectl get po
+kubectl get po -n test # with test namespace
+```
+
+View the logs of a pod container named `dolphinscheduler-master-0`:
+
+```
+kubectl logs dolphinscheduler-master-0
+kubectl logs -f dolphinscheduler-master-0 # follow log output
+kubectl logs --tail 10 dolphinscheduler-master-0 -n test # show last 10 lines from the end of the logs
+```
+
+### How to Scale API, Master and Worker on Kubernetes?
+
+List all deployments (aka `deploy`):
+
+```
+kubectl get deploy
+kubectl get deploy -n test # with test namespace
+```
+
+Scale api to 3 replicas:
+
+```
+kubectl scale --replicas=3 deploy dolphinscheduler-api
+kubectl scale --replicas=3 deploy dolphinscheduler-api -n test # with test namespace
+```
+
+List all stateful sets (aka `sts`):
+
+```
+kubectl get sts
+kubectl get sts -n test # with test namespace
+```
+
+Scale master to 2 replicas:
+
+```
+kubectl scale --replicas=2 sts dolphinscheduler-master
+kubectl scale --replicas=2 sts dolphinscheduler-master -n test # with test namespace
+```
+
+Scale worker to 6 replicas:
+
+```
+kubectl scale --replicas=6 sts dolphinscheduler-worker
+kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test namespace
+```
+
+### How to Use MySQL as the DolphinScheduler's Database Instead of PostgreSQL?
+
+> Because of the commercial license, we cannot bundle the MySQL driver directly.
+>
+> If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry.
+
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`.
+
+6. Modify postgresql `enabled` to `false` in `values.yaml`.
+
+7. Modify externalDatabase (especially modify `host`, `username` and `password`) in `values.yaml`:
+
+```yaml
+externalDatabase:
+  type: "mysql"
+  driver: "com.mysql.jdbc.Driver"
+  host: "localhost"
+  port: "3306"
+  username: "root"
+  password: "root"
+  database: "dolphinscheduler"
+  params: "useUnicode=true&characterEncoding=UTF-8"
+```
+
+8. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
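+
+As an alternative to editing `values.yaml` in steps 5-7 above, a sketch of passing the same overrides on the command line (the image repository is a hypothetical private registry; values with special characters such as `externalDatabase.params` are easier to keep in `values.yaml`):
+
+```bash
+helm install dolphinscheduler . \
+  --set image.repository=registry.example.com/apache/dolphinscheduler \
+  --set image.tag=mysql-driver \
+  --set postgresql.enabled=false \
+  --set externalDatabase.type=mysql \
+  --set externalDatabase.driver=com.mysql.jdbc.Driver \
+  --set externalDatabase.host=mysql.example.com \
+  --set externalDatabase.username=root \
+  --set externalDatabase.password=root \
+  --set externalDatabase.database=dolphinscheduler
+```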
+
+### How to Support MySQL Datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot bundle the MySQL driver directly.
+>
+> If you want to add a MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
+
+2. Create a new `Dockerfile` to add MySQL driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including MySQL driver:
+
+```
+docker build -t apache/dolphinscheduler:mysql-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry.
+
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`.
+
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+7. Add a MySQL datasource in `Datasource manage`.
+
+### How to Support Oracle Datasource in `Datasource manage`?
+
+> Because of the commercial license, we cannot bundle the Oracle driver directly.
+>
+> If you want to add an Oracle datasource, you can build a new image based on the `apache/dolphinscheduler` image by following the instructions below:
+
+1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`).
+
+2. Create a new `Dockerfile` to add Oracle driver:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
+```
+
+3. Build a new docker image including Oracle driver:
+
+```
+docker build -t apache/dolphinscheduler:oracle-driver .
+```
+
+4. Push the docker image `apache/dolphinscheduler:oracle-driver` to a docker registry.
+
+5. Modify image `repository` and update `tag` to `oracle-driver` in `values.yaml`.
+
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+7. Add an Oracle datasource in `Datasource manage`.
+
+### How to Support Python 2 pip and Custom requirements.txt?
+
+1. Create a new `Dockerfile` to install pip:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+COPY requirements.txt /tmp
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python-pip && \
+    pip install --no-cache-dir -r /tmp/requirements.txt && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **pip 18.1**. If you want to upgrade pip first, just add the following line before the `pip install -r` step.
+
+```
+    pip install --no-cache-dir -U pip && \
+```
+
+2. Build a new docker image including pip:
+
+```
+docker build -t apache/dolphinscheduler:pip .
+```
+
+3. Push the docker image `apache/dolphinscheduler:pip` to a docker registry.
+
+4. Modify image `repository` and update `tag` to `pip` in `values.yaml`.
+
+5. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+6. Verify pip under a new Python task.
+
+### How to Support Python 3?
+
+1. Create a new `Dockerfile` to install Python 3:
+
+```
+FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends python3 && \
+    rm -rf /var/lib/apt/lists/*
+```
+
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like:
+
+```
+    apt-get install -y --no-install-recommends python3-pip && \
+```
+
+2. Build a new docker image including Python 3:
+
+```
+docker build -t apache/dolphinscheduler:python3 .
+```
+
+3. Push the docker image `apache/dolphinscheduler:python3` to a docker registry.
+
+4. Modify image `repository` and update `tag` to `python3` in `values.yaml`.
+
+5. Modify `PYTHON_HOME` to `/usr/bin/python3` in `values.yaml`.
+
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+7. Verify Python 3 under a new Python task.
+
+### How to Support Hadoop, Spark, Flink, Hive or DataX?
+
+Take Spark 2.4.7 as an example:
+
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`.
+
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on.
+
+3. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+4. Copy the Spark 2.4.7 release binary into the Docker container.
+
+```bash
+kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
+kubectl cp -n test spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
+```
+
+Because the volume `sharedStoragePersistence` is mounted at `/opt/soft`, the files in `/opt/soft` will not be lost.
+
+5. Attach the container and ensure that `SPARK_HOME2` exists.
+
+```bash
+kubectl exec -it dolphinscheduler-worker-0 bash
+kubectl exec -n test -it dolphinscheduler-worker-0 bash # with test namespace
+cd /opt/soft
+tar zxf spark-2.4.7-bin-hadoop2.7.tgz
+rm -f spark-2.4.7-bin-hadoop2.7.tgz
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well.
+
+6. Verify Spark under a Shell task.
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+7. Verify Spark under a Spark task.
+
+The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first; then create a Spark task with:
+
+- Spark Version: `SPARK2`
+- Main Class: `org.apache.spark.examples.SparkPi`
+- Main Package: `spark-examples_2.11-2.4.7.jar`
+- Deploy Mode: `local`
+
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+8. Verify Spark on YARN.
+
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Adding Hadoop support works almost the same way as the Spark steps above.
+
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exist.
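+
+A minimal sketch of verifying Spark on YARN from a Shell task once Hadoop is in place (reusing the example jar above):
+
+```bash
+$SPARK_HOME2/bin/spark-submit \
+  --master yarn \
+  --deploy-mode cluster \
+  --class org.apache.spark.examples.SparkPi \
+  $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
+```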
+
+### How to Support Spark 3?
+
+In fact, the way to submit applications with `spark-submit` is the same for Spark 1, 2 and 3. In other words, `SPARK_HOME2` means "the second `SPARK_HOME`", not "the `HOME` of `SPARK2`", so simply set `SPARK_HOME2=/path/to/spark3`.
+
+Take Spark 3.1.1 as an example:
+
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`.
+
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on.
+
+3. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
+
+4. Copy the Spark 3.1.1 release binary into the Docker container.
+
+```bash
+kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
+kubectl cp -n test spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
+```
+
+5. Attach the container and ensure that `SPARK_HOME2` exists.
+
+```bash
+kubectl exec -it dolphinscheduler-worker-0 bash
+kubectl exec -n test -it dolphinscheduler-worker-0 bash # with test namespace
+cd /opt/soft
+tar zxf spark-3.1.1-bin-hadoop2.7.tgz
+rm -f spark-3.1.1-bin-hadoop2.7.tgz
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
+$SPARK_HOME2/bin/spark-submit --version
+```
+
+The last command will print the Spark version if everything goes well.
+
+6. Verify Spark under a Shell task.
+
+```
+$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
+```
+
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
+
+### How to Support Shared Storage Between Master, Worker and API Server?
+
+For example, Master, Worker and API server may use Hadoop at the same time.
+
+1. Modify the following configurations in `values.yaml`
+
+```yaml
+common:
+  sharedStoragePersistence:
+    enabled: true
+    mountPath: "/opt/soft"
+    accessModes:
+    - "ReadWriteMany"
+    storageClassName: "-"
+    storage: "20Gi"
+```
+
+Modify `storageClassName` and `storage` to actual environment values.
+
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`.
+
+2. Copy the Hadoop into the directory `/opt/soft`.
+
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct.
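+
+A minimal sketch of steps 2 and 3 (the `hadoop-3.2.2.tar.gz` tarball is an assumed example; add `-n test` to the `kubectl` commands for the `test` namespace):
+
+```bash
+kubectl cp hadoop-3.2.2.tar.gz dolphinscheduler-worker-0:/opt/soft
+kubectl exec -it dolphinscheduler-worker-0 bash
+cd /opt/soft
+tar zxf hadoop-3.2.2.tar.gz
+rm -f hadoop-3.2.2.tar.gz
+ln -s hadoop-3.2.2 hadoop # matches the default HADOOP_HOME=/opt/soft/hadoop
+$HADOOP_HOME/bin/hadoop version
+```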
+
+### How to Support Local File Resource Storage Instead of HDFS and S3?
+
+Modify the following configurations in `values.yaml`:
+
+```yaml
+common:
+  configmap:
+    RESOURCE_STORAGE_TYPE: "HDFS"
+    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
+    FS_DEFAULT_FS: "file:///"
+  fsFileResourcePersistence:
+    enabled: true
+    accessModes:
+    - "ReadWriteMany"
+    storageClassName: "-"
+    storage: "20Gi"
+```
+
+Modify `storageClassName` and `storage` to actual environment values.
+
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`.
+
+### How to Support S3 Resource Storage Like MinIO?
+
+Take MinIO as an example: Modify the following configurations in `values.yaml`:
+
+```yaml
+common:
+  configmap:
+    RESOURCE_STORAGE_TYPE: "S3"
+    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
+    FS_DEFAULT_FS: "s3a://BUCKET_NAME"
+    FS_S3A_ENDPOINT: "http://MINIO_IP:9000"
+    FS_S3A_ACCESS_KEY: "MINIO_ACCESS_KEY"
+    FS_S3A_SECRET_KEY: "MINIO_SECRET_KEY"
+```
+
+Modify `BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` to actual environment values.
+
+> **Note**: `MINIO_IP` must be an IP address rather than a domain name, because DolphinScheduler currently doesn't support S3 path-style access.
+
+### How to Configure SkyWalking?
+
+Modify SkyWalking configurations in `values.yaml`:
+
+```yaml
+common:
+  configmap:
+    SKYWALKING_ENABLE: "true"
+    SW_AGENT_COLLECTOR_BACKEND_SERVICES: "127.0.0.1:11800"
+    SW_GRPC_LOG_SERVER_HOST: "127.0.0.1"
+    SW_GRPC_LOG_SERVER_PORT: "11800"
+```
+
+## Appendix-Configuration
+
+| Parameter                                                                         | Description                                                                                                                    | Default                                               |
+| --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `timezone`                                                                        | World time and date for cities in all time zones                                                                               | `Asia/Shanghai`                                       |
+|                                                                                   |                                                                                                                                |                                                       |
+| `image.repository`                                                                | Docker image repository for the DolphinScheduler                                                                               | `apache/dolphinscheduler`                             |
+| `image.tag`                                                                       | Docker image version for the DolphinScheduler                                                                                  | `latest`                                              |
+| `image.pullPolicy`                                                                | Image pull policy. Options: Always, Never, IfNotPresent                                                                          | `IfNotPresent`                                        |
+| `image.pullSecret`                                                                | Image pull secret. An optional reference to secret in the same namespace to use for pulling any of the images                  | `nil`                                                 |
+|                                                                                   |                                                                                                                                |                                                       |
+| `postgresql.enabled`                                                              | If there is no external PostgreSQL, DolphinScheduler will use an internal PostgreSQL by default                                | `true`                                                |
+| `postgresql.postgresqlUsername`                                                   | The username for internal PostgreSQL                                                                                           | `root`                                                |
+| `postgresql.postgresqlPassword`                                                   | The password for internal PostgreSQL                                                                                           | `root`                                                |
+| `postgresql.postgresqlDatabase`                                                   | The database for internal PostgreSQL                                                                                           | `dolphinscheduler`                                    |
+| `postgresql.persistence.enabled`                                                  | Set `postgresql.persistence.enabled` to `true` to mount a new volume for internal PostgreSQL                                   | `false`                                               |
+| `postgresql.persistence.size`                                                     | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `postgresql.persistence.storageClass`                                             | PostgreSQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning      | `-`                                                   |
+| `externalDatabase.type`                                                           | The type of the external database, used when `postgresql.enabled` is `false`                                                   | `postgresql`                                          |
+| `externalDatabase.driver`                                                         | The driver of the external database, used when `postgresql.enabled` is `false`                                                 | `org.postgresql.Driver`                               |
+| `externalDatabase.host`                                                           | The host of the external database, used when `postgresql.enabled` is `false`                                                   | `localhost`                                           |
+| `externalDatabase.port`                                                           | The port of the external database, used when `postgresql.enabled` is `false`                                                   | `5432`                                                |
+| `externalDatabase.username`                                                       | The username of the external database, used when `postgresql.enabled` is `false`                                               | `root`                                                |
+| `externalDatabase.password`                                                       | The password of the external database, used when `postgresql.enabled` is `false`                                               | `root`                                                |
+| `externalDatabase.database`                                                       | The database name of the external database, used when `postgresql.enabled` is `false`                                          | `dolphinscheduler`                                    |
+| `externalDatabase.params`                                                         | The connection params of the external database, used when `postgresql.enabled` is `false`                                      | `characterEncoding=utf8`                              |
+|                                                                                   |                                                                                                                                |                                                       |
+| `zookeeper.enabled`                                                               | If there is no external ZooKeeper, DolphinScheduler will use an internal ZooKeeper by default                                  | `true`                                                |
+| `zookeeper.fourlwCommandsWhitelist`                                               | A list of comma separated Four Letter Words commands to use                                                                    | `srvr,ruok,wchs,cons`                                 |
+| `zookeeper.persistence.enabled`                                                   | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for internal ZooKeeper                                     | `false`                                               |
+| `zookeeper.persistence.size`                                                      | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `zookeeper.persistence.storageClass`                                              | ZooKeeper data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning       | `-`                                                   |
+| `zookeeper.zookeeperRoot`                                                         | Specify dolphinscheduler root directory in ZooKeeper                                                                           | `/dolphinscheduler`                                   |
+| `externalZookeeper.zookeeperQuorum`                                               | The quorum of the external ZooKeeper, used when `zookeeper.enabled` is `false`                                                 | `127.0.0.1:2181`                                      |
+| `externalZookeeper.zookeeperRoot`                                                 | The DolphinScheduler root directory in the external ZooKeeper, used when `zookeeper.enabled` is `false`                        | `/dolphinscheduler`                                   |
+|                                                                                   |                                                                                                                                |                                                       |
+| `common.configmap.DOLPHINSCHEDULER_OPTS`                                          | The jvm options for dolphinscheduler, suitable for all servers                                                                 | `""`                                                  |
+| `common.configmap.DATA_BASEDIR_PATH`                                              | User data directory path; if you configure it yourself, please make sure the directory exists and has read-write permissions   | `/tmp/dolphinscheduler`                               |
+| `common.configmap.RESOURCE_STORAGE_TYPE`                                          | Resource storage type: HDFS, S3, NONE                                                                                          | `HDFS`                                                |
+| `common.configmap.RESOURCE_UPLOAD_PATH`                                           | Resource store on HDFS/S3 path, please make sure the directory exists on hdfs and have read write permissions                  | `/dolphinscheduler`                                   |
+| `common.configmap.FS_DEFAULT_FS`                                                  | Resource storage file system like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`                              | `file:///`                                            |
+| `common.configmap.FS_S3A_ENDPOINT`                                                | S3 endpoint when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                       | `s3.xxx.amazonaws.com`                                |
+| `common.configmap.FS_S3A_ACCESS_KEY`                                              | S3 access key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                     | `xxxxxxx`                                             |
+| `common.configmap.FS_S3A_SECRET_KEY`                                              | S3 secret key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3`                                                     | `xxxxxxx`                                             |
+| `common.configmap.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE`                   | Whether to startup kerberos                                                                                                    | `false`                                               |
+| `common.configmap.JAVA_SECURITY_KRB5_CONF_PATH`                                   | The java.security.krb5.conf path                                                                                               | `/opt/krb5.conf`                                      |
+| `common.configmap.LOGIN_USER_KEYTAB_USERNAME`                                     | The login user from keytab username                                                                                            | `hdfs@HADOOP.COM`                                     |
+| `common.configmap.LOGIN_USER_KEYTAB_PATH`                                         | The login user from keytab path                                                                                                | `/opt/hdfs.keytab`                                    |
+| `common.configmap.KERBEROS_EXPIRE_TIME`                                           | The kerberos expire time, the unit is hour                                                                                     | `2`                                                   |
+| `common.configmap.HDFS_ROOT_USER`                                                 | The HDFS root user who must have the permission to create directories under the HDFS root path                                 | `hdfs`                                                |
+| `common.configmap.RESOURCE_MANAGER_HTTPADDRESS_PORT`                              | Set resource manager httpaddress port for yarn                                                                                 | `8088`                                                |
+| `common.configmap.YARN_RESOURCEMANAGER_HA_RM_IDS`                                 | If resourcemanager HA is enabled, please set the HA IPs                                                                        | `nil`                                                 |
+| `common.configmap.YARN_APPLICATION_STATUS_ADDRESS`                                | If resourcemanager is single, you only need to replace ds1 with the actual resourcemanager hostname, otherwise keep the default | `http://ds1:%s/ws/v1/cluster/apps/%s`               |
+| `common.configmap.SKYWALKING_ENABLE`                                              | Set whether to enable skywalking                                                                                               | `false`                                               |
+| `common.configmap.SW_AGENT_COLLECTOR_BACKEND_SERVICES`                            | Set agent collector backend services for skywalking                                                                            | `127.0.0.1:11800`                                     |
+| `common.configmap.SW_GRPC_LOG_SERVER_HOST`                                        | Set grpc log server host for skywalking                                                                                        | `127.0.0.1`                                           |
+| `common.configmap.SW_GRPC_LOG_SERVER_PORT`                                        | Set grpc log server port for skywalking                                                                                        | `11800`                                               |
+| `common.configmap.HADOOP_HOME`                                                    | Set `HADOOP_HOME` for DolphinScheduler's task environment                                                                      | `/opt/soft/hadoop`                                    |
+| `common.configmap.HADOOP_CONF_DIR`                                                | Set `HADOOP_CONF_DIR` for DolphinScheduler's task environment                                                                  | `/opt/soft/hadoop/etc/hadoop`                         |
+| `common.configmap.SPARK_HOME1`                                                    | Set `SPARK_HOME1` for DolphinScheduler's task environment                                                                      | `/opt/soft/spark1`                                    |
+| `common.configmap.SPARK_HOME2`                                                    | Set `SPARK_HOME2` for DolphinScheduler's task environment                                                                      | `/opt/soft/spark2`                                    |
+| `common.configmap.PYTHON_HOME`                                                    | Set `PYTHON_HOME` for DolphinScheduler's task environment                                                                      | `/usr/bin/python`                                     |
+| `common.configmap.JAVA_HOME`                                                      | Set `JAVA_HOME` for DolphinScheduler's task environment                                                                        | `/usr/local/openjdk-8`                                |
+| `common.configmap.HIVE_HOME`                                                      | Set `HIVE_HOME` for DolphinScheduler's task environment                                                                        | `/opt/soft/hive`                                      |
+| `common.configmap.FLINK_HOME`                                                     | Set `FLINK_HOME` for DolphinScheduler's task environment                                                                       | `/opt/soft/flink`                                     |
+| `common.configmap.DATAX_HOME`                                                     | Set `DATAX_HOME` for DolphinScheduler's task environment                                                                       | `/opt/soft/datax`                                     |
+| `common.sharedStoragePersistence.enabled`                                         | Set `common.sharedStoragePersistence.enabled` to `true` to mount a shared storage volume for Hadoop, Spark binary and etc      | `false`                                               |
+| `common.sharedStoragePersistence.mountPath`                                       | The mount path for the shared storage volume                                                                                   | `/opt/soft`                                           |
+| `common.sharedStoragePersistence.accessModes`                                     | `PersistentVolumeClaim` access modes, must be `ReadWriteMany`                                                                  | `[ReadWriteMany]`                                     |
+| `common.sharedStoragePersistence.storageClassName`                                | Shared Storage persistent volume storage class, must support the access mode: ReadWriteMany                                    | `-`                                                   |
+| `common.sharedStoragePersistence.storage`                                         | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `common.fsFileResourcePersistence.enabled`                                        | Set `common.fsFileResourcePersistence.enabled` to `true` to mount a new file resource volume for `api` and `worker`            | `false`                                               |
+| `common.fsFileResourcePersistence.accessModes`                                    | `PersistentVolumeClaim` access modes, must be `ReadWriteMany`                                                                  | `[ReadWriteMany]`                                     |
+| `common.fsFileResourcePersistence.storageClassName`                               | Resource persistent volume storage class, must support the access mode: ReadWriteMany                                          | `-`                                                   |
+| `common.fsFileResourcePersistence.storage`                                        | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `master.podManagementPolicy`                                                      | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down  | `Parallel`                                            |
+| `master.replicas`                                                                 | Replicas is the desired number of replicas of the given Template                                                               | `3`                                                   |
+| `master.annotations`                                                              | The `annotations` for master server                                                                                            | `{}`                                                  |
+| `master.affinity`                                                                 | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `master.nodeSelector`                                                             | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `master.tolerations`                                                              | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `master.resources`                                                                | The `resource` limit and request config for master server                                                                      | `{}`                                                  |
+| `master.configmap.MASTER_SERVER_OPTS`                                             | The jvm options for master server                                                                                              | `-Xms1g -Xmx1g -Xmn512m`                              |
+| `master.configmap.MASTER_EXEC_THREADS`                                            | Master execute thread number to limit process instances                                                                        | `100`                                                 |
+| `master.configmap.MASTER_EXEC_TASK_NUM`                                           | Master execute task number in parallel per process instance                                                                    | `20`                                                  |
+| `master.configmap.MASTER_DISPATCH_TASK_NUM`                                       | Master dispatch task number per batch                                                                                          | `3`                                                   |
+| `master.configmap.MASTER_HOST_SELECTOR`                                           | Master host selector to select a suitable worker, optional values include Random, RoundRobin, LowerWeight                      | `LowerWeight`                                         |
+| `master.configmap.MASTER_HEARTBEAT_INTERVAL`                                      | Master heartbeat interval, the unit is second                                                                                  | `10`                                                  |
+| `master.configmap.MASTER_TASK_COMMIT_RETRYTIMES`                                  | Master commit task retry times                                                                                                 | `5`                                                   |
+| `master.configmap.MASTER_TASK_COMMIT_INTERVAL`                                    | master commit task interval, the unit is second                                                                                | `1`                                                   |
+| `master.configmap.MASTER_MAX_CPULOAD_AVG`                                        | Master max CPU load average; the master server can only schedule when the system CPU load average is below this value         | `-1` (`the number of cpu cores * 2`)                  |
+| `master.configmap.MASTER_RESERVED_MEMORY`                                        | Master reserved memory (G); the master server can only schedule when the system available memory is above this value          | `0.3`                                                 |
+| `master.livenessProbe.enabled`                                                    | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `master.livenessProbe.initialDelaySeconds`                                        | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `master.livenessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `master.livenessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `master.livenessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `master.livenessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `master.readinessProbe.enabled`                                                   | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `master.readinessProbe.initialDelaySeconds`                                       | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `master.readinessProbe.periodSeconds`                                             | How often to perform the probe                                                                                                 | `30`                                                  |
+| `master.readinessProbe.timeoutSeconds`                                            | When the probe times out                                                                                                       | `5`                                                   |
+| `master.readinessProbe.failureThreshold`                                          | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `master.readinessProbe.successThreshold`                                          | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `master.persistentVolumeClaim.enabled`                                            | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master`                                        | `false`                                               |
+| `master.persistentVolumeClaim.accessModes`                                        | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `master.persistentVolumeClaim.storageClassName`                                   | `Master` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning   | `-`                                                   |
+| `master.persistentVolumeClaim.storage`                                            | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `worker.podManagementPolicy`                                                      | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down  | `Parallel`                                            |
+| `worker.replicas`                                                                 | Replicas is the desired number of replicas of the given Template                                                               | `3`                                                   |
+| `worker.annotations`                                                              | The `annotations` for worker server                                                                                            | `{}`                                                  |
+| `worker.affinity`                                                                 | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `worker.nodeSelector`                                                             | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `worker.tolerations`                                                              | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `worker.resources`                                                                | The `resource` limit and request config for worker server                                                                      | `{}`                                                  |
+| `worker.configmap.WORKER_SERVER_OPTS`                                             | The jvm options for worker server                                                                                              | `-Xms1g -Xmx1g -Xmn512m`                              |
+| `worker.configmap.WORKER_EXEC_THREADS`                                            | Worker execute thread number to limit task instances                                                                           | `100`                                                 |
+| `worker.configmap.WORKER_HEARTBEAT_INTERVAL`                                      | Worker heartbeat interval, the unit is second                                                                                  | `10`                                                  |
+| `worker.configmap.WORKER_MAX_CPULOAD_AVG`                                        | Worker max CPU load average; tasks can only be dispatched to the worker when the system CPU load average is below this value   | `-1` (`the number of cpu cores * 2`)                  |
+| `worker.configmap.WORKER_RESERVED_MEMORY`                                        | Worker reserved memory (G); tasks can only be dispatched to the worker when the system available memory is above this value    | `0.3`                                                 |
+| `worker.configmap.WORKER_GROUPS`                                                  | Worker groups                                                                                                                  | `default`                                             |
+| `worker.livenessProbe.enabled`                                                    | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `worker.livenessProbe.initialDelaySeconds`                                        | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `worker.livenessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `worker.livenessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `worker.livenessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `worker.livenessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `worker.readinessProbe.enabled`                                                   | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `worker.readinessProbe.initialDelaySeconds`                                       | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `worker.readinessProbe.periodSeconds`                                             | How often to perform the probe                                                                                                 | `30`                                                  |
+| `worker.readinessProbe.timeoutSeconds`                                            | When the probe times out                                                                                                       | `5`                                                   |
+| `worker.readinessProbe.failureThreshold`                                          | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `worker.readinessProbe.successThreshold`                                          | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `worker.persistentVolumeClaim.enabled`                                            | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker`                            | `false`                                               |
+| `worker.persistentVolumeClaim.dataPersistentVolume.enabled`                       | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker`                  | `false`                                               |
+| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes`                   | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName`              | `Worker` data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning        | `-`                                                   |
+| `worker.persistentVolumeClaim.dataPersistentVolume.storage`                       | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `worker.persistentVolumeClaim.logsPersistentVolume.enabled`                       | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker`                  | `false`                                               |
+| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes`                   | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName`              | `Worker` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning   | `-`                                                   |
+| `worker.persistentVolumeClaim.logsPersistentVolume.storage`                       | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `alert.replicas`                                                                  | Replicas is the desired number of replicas of the given Template                                                               | `1`                                                   |
+| `alert.strategy.type`                                                             | Type of deployment. Can be "Recreate" or "RollingUpdate"                                                                       | `RollingUpdate`                                       |
+| `alert.strategy.rollingUpdate.maxSurge`                                           | The maximum number of pods that can be scheduled above the desired number of pods                                              | `25%`                                                 |
+| `alert.strategy.rollingUpdate.maxUnavailable`                                     | The maximum number of pods that can be unavailable during the update                                                           | `25%`                                                 |
+| `alert.annotations`                                                               | The `annotations` for alert server                                                                                             | `{}`                                                  |
+| `alert.affinity`                                                                  | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `alert.nodeSelector`                                                              | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `alert.tolerations`                                                               | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `alert.resources`                                                                 | The `resource` limit and request config for alert server                                                                       | `{}`                                                  |
+| `alert.configmap.ALERT_SERVER_OPTS`                                               | The jvm options for alert server                                                                                               | `-Xms512m -Xmx512m -Xmn256m`                          |
+| `alert.configmap.XLS_FILE_PATH`                                                   | XLS file path                                                                                                                  | `/tmp/xls`                                            |
+| `alert.configmap.MAIL_SERVER_HOST`                                                | Mail `SERVER HOST `                                                                                                            | `nil`                                                 |
+| `alert.configmap.MAIL_SERVER_PORT`                                                | Mail `SERVER PORT`                                                                                                             | `nil`                                                 |
+| `alert.configmap.MAIL_SENDER`                                                     | Mail `SENDER`                                                                                                                  | `nil`                                                 |
+| `alert.configmap.MAIL_USER`                                                       | Mail `USER`                                                                                                                    | `nil`                                                 |
+| `alert.configmap.MAIL_PASSWD`                                                     | Mail `PASSWORD`                                                                                                                | `nil`                                                 |
+| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE`                                       | Mail `SMTP STARTTLS` enable                                                                                                    | `false`                                               |
+| `alert.configmap.MAIL_SMTP_SSL_ENABLE`                                            | Mail `SMTP SSL` enable                                                                                                         | `false`                                               |
+| `alert.configmap.MAIL_SMTP_SSL_TRUST`                                             | Mail `SMTP SSL TRUST`                                                                                                          | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_ENABLE`                                        | `Enterprise Wechat` enable                                                                                                     | `false`                                               |
+| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID`                                       | `Enterprise Wechat` corp id                                                                                                    | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_SECRET`                                        | `Enterprise Wechat` secret                                                                                                     | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID`                                      | `Enterprise Wechat` agent id                                                                                                   | `nil`                                                 |
+| `alert.configmap.ENTERPRISE_WECHAT_USERS`                                         | `Enterprise Wechat` users                                                                                                      | `nil`                                                 |
+| `alert.livenessProbe.enabled`                                                     | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `alert.livenessProbe.initialDelaySeconds`                                         | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `alert.livenessProbe.periodSeconds`                                               | How often to perform the probe                                                                                                 | `30`                                                  |
+| `alert.livenessProbe.timeoutSeconds`                                              | When the probe times out                                                                                                       | `5`                                                   |
+| `alert.livenessProbe.failureThreshold`                                            | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `alert.livenessProbe.successThreshold`                                            | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `alert.readinessProbe.enabled`                                                    | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `alert.readinessProbe.initialDelaySeconds`                                        | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `alert.readinessProbe.periodSeconds`                                              | How often to perform the probe                                                                                                 | `30`                                                  |
+| `alert.readinessProbe.timeoutSeconds`                                             | When the probe times out                                                                                                       | `5`                                                   |
+| `alert.readinessProbe.failureThreshold`                                           | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `alert.readinessProbe.successThreshold`                                           | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `alert.persistentVolumeClaim.enabled`                                             | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert`                                          | `false`                                               |
+| `alert.persistentVolumeClaim.accessModes`                                         | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `alert.persistentVolumeClaim.storageClassName`                                    | `Alert` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning    | `-`                                                   |
+| `alert.persistentVolumeClaim.storage`                                             | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+|                                                                                   |                                                                                                                                |                                                       |
+| `api.replicas`                                                                    | Replicas is the desired number of replicas of the given Template                                                               | `1`                                                   |
+| `api.strategy.type`                                                               | Type of deployment. Can be "Recreate" or "RollingUpdate"                                                                       | `RollingUpdate`                                       |
+| `api.strategy.rollingUpdate.maxSurge`                                             | The maximum number of pods that can be scheduled above the desired number of pods                                              | `25%`                                                 |
+| `api.strategy.rollingUpdate.maxUnavailable`                                       | The maximum number of pods that can be unavailable during the update                                                           | `25%`                                                 |
+| `api.annotations`                                                                 | The `annotations` for api server                                                                                               | `{}`                                                  |
+| `api.affinity`                                                                    | If specified, the pod's scheduling constraints                                                                                 | `{}`                                                  |
+| `api.nodeSelector`                                                                | NodeSelector is a selector which must be true for the pod to fit on a node                                                     | `{}`                                                  |
+| `api.tolerations`                                                                 | If specified, the pod's tolerations                                                                                            | `{}`                                                  |
+| `api.resources`                                                                   | The `resource` limit and request config for api server                                                                         | `{}`                                                  |
+| `api.configmap.API_SERVER_OPTS`                                                   | The jvm options for api server                                                                                                 | `-Xms512m -Xmx512m -Xmn256m`                          |
+| `api.livenessProbe.enabled`                                                       | Turn on and off liveness probe                                                                                                 | `true`                                                |
+| `api.livenessProbe.initialDelaySeconds`                                           | Delay before liveness probe is initiated                                                                                       | `30`                                                  |
+| `api.livenessProbe.periodSeconds`                                                 | How often to perform the probe                                                                                                 | `30`                                                  |
+| `api.livenessProbe.timeoutSeconds`                                                | When the probe times out                                                                                                       | `5`                                                   |
+| `api.livenessProbe.failureThreshold`                                              | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `api.livenessProbe.successThreshold`                                              | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `api.readinessProbe.enabled`                                                      | Turn on and off readiness probe                                                                                                | `true`                                                |
+| `api.readinessProbe.initialDelaySeconds`                                          | Delay before readiness probe is initiated                                                                                      | `30`                                                  |
+| `api.readinessProbe.periodSeconds`                                                | How often to perform the probe                                                                                                 | `30`                                                  |
+| `api.readinessProbe.timeoutSeconds`                                               | When the probe times out                                                                                                       | `5`                                                   |
+| `api.readinessProbe.failureThreshold`                                             | Minimum consecutive failures for the probe                                                                                     | `3`                                                   |
+| `api.readinessProbe.successThreshold`                                             | Minimum consecutive successes for the probe                                                                                    | `1`                                                   |
+| `api.persistentVolumeClaim.enabled`                                               | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api`                                              | `false`                                               |
+| `api.persistentVolumeClaim.accessModes`                                           | `PersistentVolumeClaim` access modes                                                                                           | `[ReadWriteOnce]`                                     |
+| `api.persistentVolumeClaim.storageClassName`                                      | `api` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning      | `-`                                                   |
+| `api.persistentVolumeClaim.storage`                                               | `PersistentVolumeClaim` size                                                                                                   | `20Gi`                                                |
+| `api.service.type`                                                                | `type` determines how the Service is exposed. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer            | `ClusterIP`                                           |
+| `api.service.clusterIP`                                                           | `clusterIP` is the IP address of the service and is usually assigned randomly by the master                                    | `nil`                                                 |
+| `api.service.nodePort`                                                            | `nodePort` is the port on each node on which this service is exposed when type=NodePort                                        | `nil`                                                 |
+| `api.service.externalIPs`                                                         | `externalIPs` is a list of IP addresses for which nodes in the cluster will also accept traffic for this service               | `[]`                                                  |
+| `api.service.externalName`                                                        | `externalName` is the external reference that kubedns or equivalent will return as a CNAME record for this service             | `nil`                                                 |
+| `api.service.loadBalancerIP`                                                      | `loadBalancerIP` when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field          | `nil`                                                 |
+| `api.service.annotations`                                                         | `annotations` may need to be set when service.type is LoadBalancer                                                             | `{}`                                                  |
+|                                                                                   |                                                                                                                                |                                                       |
+| `ingress.enabled`                                                                 | Enable ingress                                                                                                                 | `false`                                               |
+| `ingress.host`                                                                    | Ingress host                                                                                                                   | `dolphinscheduler.org`                                |
+| `ingress.path`                                                                    | Ingress path                                                                                                                   | `/dolphinscheduler`                                   |
+| `ingress.tls.enabled`                                                             | Enable ingress tls                                                                                                             | `false`                                               |
+| `ingress.tls.secretName`                                                          | Ingress tls secret name                                                                                                        | `dolphinscheduler-tls`                                |
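+
+Most of these values can be overridden at `helm install` or `helm upgrade` time. A hedged example, assuming you run it from the unpacked Helm chart directory and use the release name `dolphinscheduler`; any parameter in the tables above can be set the same way, or collected in a custom values file:
+
+```shell
+# Override a few chart parameters on the command line (the values here are illustrative)
+helm install dolphinscheduler . \
+  --set master.replicas=3 \
+  --set worker.configmap.WORKER_GROUPS=default \
+  --set api.service.type=NodePort \
+  --set ingress.enabled=false
+```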
diff --git a/docs/docs/en/guide/installation/pseudo-cluster.md b/docs/docs/en/guide/installation/pseudo-cluster.md
new file mode 100644
index 0000000..b887030
--- /dev/null
+++ b/docs/docs/en/guide/installation/pseudo-cluster.md
@@ -0,0 +1,201 @@
+# Pseudo-Cluster Deployment
+
+The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, and API server all run on the same machine.
+
+If you are new to DolphinScheduler and just want to try out its features, we recommend the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule a large number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
+
+## Preparation
+
+The pseudo-cluster deployment of DolphinScheduler requires the following external software:
+
+* JDK: Download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` environment variables. You can skip this step if JDK is already installed in your environment.
+* Binary package: Download the DolphinScheduler binary package from the [download page](https://dolphinscheduler.apache.org/en-us/download/download.html).
+* Database: [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+) or [MySQL](https://dev.mysql.com/downloads/mysql/) (5.7+). You can choose either of the two; note that MySQL requires JDBC Driver 8.0.16.
+* Registry Center: [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+), [download link][zookeeper].
+* Process tree analysis
+  * `pstree` for macOS
+  * `psmisc` for Fedora/Red Hat/CentOS/Ubuntu/Debian
+
+> **_Note:_** DolphinScheduler itself does not depend on Hadoop, Hive, or Spark, but if the tasks you run require them, the corresponding environment support is needed.
+
+## DolphinScheduler Startup Environment
+
+### Configure User Exemption and Permissions
+
+Create a deployment user, and make sure to configure passwordless `sudo` for it. The following example creates the user `dolphinscheduler`:
+
+```shell
+# To create a user, login as root
+useradd dolphinscheduler
+
+# Add password
+echo "dolphinscheduler" | passwd --stdin dolphinscheduler
+
+# Configure sudo without password
+sed -i '$adolphinscheduler  ALL=(ALL)  NOPASSWD: ALL' /etc/sudoers
+sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
+
+# Modify directory permissions and grant permissions for user you created above
+chown -R dolphinscheduler:dolphinscheduler apache-dolphinscheduler-*-bin
+```
+
+> **_NOTICE:_**
+>
+> * Because DolphinScheduler switches to the tenant user with the command `sudo -u {linux-user}` when running multi-tenant tasks, the deployment user needs passwordless `sudo` privileges. If you are a beginner and do not fully understand this, you can ignore it for now.
+> * If you find the line "Defaults requiretty" in the `/etc/sudoers` file, please comment it out.
+
+### Configure Machine SSH Password-Free Login
+
+Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The following shows the steps to configure password-free login:
+
+```shell
+su dolphinscheduler
+
+ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+```
+
+> **_Notice:_** After the configuration is complete, you can run `ssh localhost` to verify that it works. If you can log in via SSH without entering a password, the configuration is successful.
+
+### Start ZooKeeper
+
+Go to the ZooKeeper installation directory, copy the configuration file `conf/zoo_sample.cfg` to `conf/zoo.cfg`, and change the value of `dataDir` in `conf/zoo.cfg` to `dataDir=./tmp/zookeeper`.
+
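+For reference, a minimal sketch of this configuration step, run from the ZooKeeper installation directory (the `sed` expression is just one way to change `dataDir`, and assumes GNU `sed`):
+
+```shell
+# Copy the sample configuration and point dataDir at a local directory
+cp conf/zoo_sample.cfg conf/zoo.cfg
+sed -i 's!^dataDir=.*!dataDir=./tmp/zookeeper!' conf/zoo.cfg
+```
+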
+```shell
+# Start ZooKeeper
+./bin/zkServer.sh start
+```
+
+<!--
+Modify the database configuration and initialize
+
+```properties
+spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+spring.datasource.url=jdbc:mysql://localhost:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+# Modify it if you are not using dolphinscheduler/dolphinscheduler as your username and password
+spring.datasource.username=dolphinscheduler
+spring.datasource.password=dolphinscheduler
+```
+
+After modifying and saving, execute the following command to create database tables and init basic data.
+
+```shell
+sh script/create-dolphinscheduler.sh
+```
+-->
+
+## Modify Configuration
+
+After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is located at `conf/config/install_config.conf`. Generally, you only need to modify the **INSTALL MACHINE**, **DolphinScheduler ENV**, **Database**, and **Registry Server** parts to complete the deployment. The following describes the parameters that must be modified:
+
+```shell
+# ---------------------------------------------------------
+# INSTALL MACHINE
+# ---------------------------------------------------------
+# Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost
+ips="localhost"
+masters="localhost"
+workers="localhost:default"
+alertServer="localhost"
+apiServers="localhost"
+
+# DolphinScheduler installation path, it will auto-create if not exists
+installPath="~/dolphinscheduler"
+
+# Deploy user, use the user you create in section **Configure machine SSH password-free login**
+deployUser="dolphinscheduler"
+
+# ---------------------------------------------------------
+# DolphinScheduler ENV
+# ---------------------------------------------------------
+# The path of JAVA_HOME, which JDK install path in section **Preparation**
+javaHome="/your/java/home/here"
+
+# ---------------------------------------------------------
+# Database
+# ---------------------------------------------------------
+# Database type, username, password, IP, port, metadata. For now `dbtype` supports `mysql` and `postgresql`
+dbtype="mysql"
+dbhost="localhost:3306"
+# Need to modify if you are not using `dolphinscheduler/dolphinscheduler` as your username and password
+username="dolphinscheduler"
+password="dolphinscheduler"
+dbname="dolphinscheduler"
+
+# ---------------------------------------------------------
+# Registry Server
+# ---------------------------------------------------------
+# Registration center address, the address of ZooKeeper service
+registryServers="localhost:2181"
+```
+
+## Initialize the Database
+
+DolphinScheduler metadata is stored in a relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the `lib` directory of DolphinScheduler. The following takes MySQL as an example to show how to initialize the database:
+
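+For example, a hedged sketch of fetching the driver; the download URL and the target `lib` path are assumptions, so adjust them to your environment and extracted package layout:
+
+```shell
+# Download mysql-connector-java 8.0.16 and copy it into DolphinScheduler's lib directory
+wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar
+cp mysql-connector-java-8.0.16.jar apache-dolphinscheduler-*-bin/lib/
+```
+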
+```shell
+mysql -uroot -p
+
+mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
+
+# Change {user} and {password} by requests
+mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
+mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
+
+mysql> flush privileges;
+```
+
+After completing the above steps, you have created a new database for DolphinScheduler. Then run the following shell script to initialize the database schema and basic data:
+
+```shell
+sh script/create-dolphinscheduler.sh
+```
+
+## Start DolphinScheduler
+
+Use the **deployment user** you created above to run the following command to complete the deployment. The server logs will be stored in the `logs` folder.
+
+```shell
+sh install.sh
+```
+
+> **_Note:_** For the first deployment, the message `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear in the terminal up to five times. This is unimportant information that you can safely ignore.
+
+## Login DolphinScheduler
+
+Access `http://localhost:12345/dolphinscheduler` in your browser and log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
+
+## Start or Stop Server
+
+```shell
+# Stop all DolphinScheduler server
+sh ./bin/stop-all.sh
+
+# Start all DolphinScheduler server
+sh ./bin/start-all.sh
+
+# Start or stop DolphinScheduler Master
+sh ./bin/dolphinscheduler-daemon.sh stop master-server
+sh ./bin/dolphinscheduler-daemon.sh start master-server
+
+# Start or stop DolphinScheduler Worker
+sh ./bin/dolphinscheduler-daemon.sh start worker-server
+sh ./bin/dolphinscheduler-daemon.sh stop worker-server
+
+# Start or stop DolphinScheduler Api
+sh ./bin/dolphinscheduler-daemon.sh start api-server
+sh ./bin/dolphinscheduler-daemon.sh stop api-server
+
+# Start or stop Alert
+sh ./bin/dolphinscheduler-daemon.sh start alert-server
+sh ./bin/dolphinscheduler-daemon.sh stop alert-server
+```
+
+> **_Note:_** Please refer to the "System Architecture Design" section for details on how each service is used.
+
+[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
+[zookeeper]: https://zookeeper.apache.org/releases.html
+[mysql]: https://downloads.mysql.com/archives/c-j/
+[issue]: https://github.com/apache/dolphinscheduler/issues/6597
diff --git a/docs/docs/en/guide/installation/skywalking-agent.md b/docs/docs/en/guide/installation/skywalking-agent.md
new file mode 100644
index 0000000..fedec67
--- /dev/null
+++ b/docs/docs/en/guide/installation/skywalking-agent.md
@@ -0,0 +1,74 @@
+SkyWalking Agent Deployment
+=============================
+
+The `dolphinscheduler-skywalking` module provides [SkyWalking](https://skywalking.apache.org/) monitor agent for the DolphinScheduler project.
+
+This document describes how to enable SkyWalking 8.4+ support with this module (SkyWalking 8.5.0 is recommended).
+
+## Installation
+
+The following configuration is used to enable the SkyWalking agent.
+
+### Through Environment Variable Configuration (for Docker Compose)
+
+Modify SkyWalking environment variables in `docker/docker-swarm/config.env.sh`:
+
+```
+SKYWALKING_ENABLE=true
+SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
+SW_GRPC_LOG_SERVER_HOST=127.0.0.1
+SW_GRPC_LOG_SERVER_PORT=11800
+```
+
+And run:
+
+```shell
+$ docker-compose up -d
+```
+
+### Through Environment Variable Configuration (for Docker)
+
+```shell
+$ docker run -d --name dolphinscheduler \
+-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
+-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
+-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
+-e SKYWALKING_ENABLE="true" \
+-e SW_AGENT_COLLECTOR_BACKEND_SERVICES="your.skywalking-oap-server.com:11800" \
+-e SW_GRPC_LOG_SERVER_HOST="your.skywalking-log-reporter.com" \
+-e SW_GRPC_LOG_SERVER_PORT="11800" \
+-p 12345:12345 \
+apache/dolphinscheduler:1.3.8 all
+```
+
+### Through install_config.conf Configuration (for DolphinScheduler install.sh)
+
+Add the following configurations to `${workDir}/conf/config/install_config.conf`.
+
+```properties
+
+# SkyWalking config
+# note: enable SkyWalking tracking plugin
+enableSkywalking="true"
+# note: configure SkyWalking backend service address
+skywalkingServers="your.skywalking-oap-server.com:11800"
+# note: configure SkyWalking log reporter host
+skywalkingLogReporterHost="your.skywalking-log-reporter.com"
+# note: configure SkyWalking log reporter port
+skywalkingLogReporterPort="11800"
+
+```
+
+## Usage
+
+### Import Dashboard
+
+#### Import DolphinScheduler Dashboard to SkyWalking Server
+
+Copy the `${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml` file into the `${skywalking-oap-server.home}/config/ui-initialized-templates/` directory, and restart the SkyWalking OAP server.
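+
+For example, a minimal sketch of this copy step, assuming `DOLPHINSCHEDULER_HOME` and `SKYWALKING_OAP_HOME` are shell variables pointing at the two installation directories mentioned above:
+
+```shell
+# Copy the DolphinScheduler dashboard template into the SkyWalking OAP server config
+cp "${DOLPHINSCHEDULER_HOME}/ext/skywalking-agent/dashboard/dolphinscheduler.yml" \
+   "${SKYWALKING_OAP_HOME}/config/ui-initialized-templates/"
+```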
+
+#### View DolphinScheduler Dashboard
+
+If you have opened the SkyWalking dashboard with a browser before, you need to clear the browser cache.
+
+![img1](/img/skywalking/import-dashboard-1.jpg)
diff --git a/docs/docs/en/guide/installation/standalone.md b/docs/docs/en/guide/installation/standalone.md
new file mode 100644
index 0000000..a03944c
--- /dev/null
+++ b/docs/docs/en/guide/installation/standalone.md
@@ -0,0 +1,42 @@
+# Standalone
+
+Standalone deployment is only for a quick experience of DolphinScheduler.
+
+If you are new to DolphinScheduler and just want to try out its features, we recommend the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule a large number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
+
+> **_Note:_** Standalone mode is only recommended for fewer than 20 workflows, because it uses the H2 database and a ZooKeeper testing server; running too many tasks may cause instability.
+
+## Preparation
+
+* JDK: Download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` environment variables. You can skip this step if JDK is already installed in your environment.
+* Binary package: Download the DolphinScheduler binary package from the [download page](https://dolphinscheduler.apache.org/en-us/download/download.html).
+
+## Start DolphinScheduler Standalone Server
+
+### Extract and Start DolphinScheduler
+
+The binary package includes a standalone startup script, so the server can be started quickly after extraction. Switch to a user with `sudo` permission and run the script:
+
+```shell
+# Extract and start Standalone Server
+tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
+cd apache-dolphinscheduler-*-bin
+sh ./bin/dolphinscheduler-daemon.sh start standalone-server
+```
+
+### Login DolphinScheduler
+
+Access `http://localhost:12345/dolphinscheduler` in your browser and log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
+
+### Start or Stop Server
+
+The script `./bin/dolphinscheduler-daemon.sh` can be used not only to quickly start the standalone server, but also to stop it. All the commands are listed below:
+
+```shell
+# Start Standalone Server
+sh ./bin/dolphinscheduler-daemon.sh start standalone-server
+# Stop Standalone Server
+sh ./bin/dolphinscheduler-daemon.sh stop standalone-server
+```
+
+[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
diff --git a/docs/docs/en/guide/monitor.md b/docs/docs/en/guide/monitor.md
new file mode 100644
index 0000000..420a190
--- /dev/null
+++ b/docs/docs/en/guide/monitor.md
@@ -0,0 +1,32 @@
+# Monitor
+
+## Service Management
+
+- Service management mainly monitors and displays the health status and basic information of each service in the system.
+
+## Monitor Master Server
+
+- Mainly related to master information.
+
+![master](/img/new_ui/dev/monitor/master.png)
+
+## Monitor Worker Server
+
+- Mainly related to worker information.
+
+![worker](/img/new_ui/dev/monitor/worker.png)
+
+## Monitor DB
+
+- Mainly the health status of the DB.
+
+![db](/img/new_ui/dev/monitor/db.png)
+
+## Statistics Management
+
+![statistics](/img/new_ui/dev/monitor/statistics.png)
+
+- Number of commands waiting to be executed: statistics based on the `t_ds_command` table (a cross-check sketch follows this list).
+- Number of failed commands: statistics based on the `t_ds_error_command` table.
+- Number of tasks waiting to run: count of the `task_queue` data in ZooKeeper.
+- Number of tasks waiting to be killed: count of the `task_kill` data in ZooKeeper.
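+
+A hedged way to cross-check the first two numbers directly against the metadata database, assuming a MySQL metadata store and the default database name `dolphinscheduler`:
+
+```shell
+# Count pending and failed commands straight from the metadata tables
+mysql -u dolphinscheduler -p -D dolphinscheduler \
+  -e "SELECT COUNT(*) FROM t_ds_command; SELECT COUNT(*) FROM t_ds_error_command;"
+```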
diff --git a/docs/docs/en/guide/open-api.md b/docs/docs/en/guide/open-api.md
new file mode 100644
index 0000000..1bac59b
--- /dev/null
+++ b/docs/docs/en/guide/open-api.md
@@ -0,0 +1,69 @@
+# Open API
+
+## Background
+
+Generally, projects and processes are created through web pages. However, integrating with third-party systems requires calling the API to manage projects and workflows.
+
+## The Operation Steps of DolphinScheduler API Calls
+
+### Create a Token
+
+1. Log in to the scheduling system, click "Security", then click "Token manage" on the left, and click "Create token" to create a token.
+
+<p align="center">
+   <img src="/img/token-management-en.png" width="80%" />
+ </p>
+
+2. Select the "Expiration time" (Token validity time), select "User" (choose the specified user to perform the API operation), click "Generate token", copy the `Token` string, and click "Submit".
+
+<p align="center">
+   <img src="/img/create-token-en1.png" width="80%" />
+ </p>
+
+### Token Usage
+
+1. Open the API documentation page
+    > Address: http://{API server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
+<p align="center">
+   <img src="/img/api-documentation-en.png" width="80%" />
+ </p>
+ 
+2. Select a test API; the one used in this test is `queryAllProjectList`
+    > projects/query-project-list
+3. Open `Postman`, fill in the API address, enter the `Token` in `Headers`, and then send the request to view the result (a command-line equivalent is sketched after this list):
+    ```
+    token: The Token just generated
+    ```
+<p align="center">
+   <img src="/img/test-api.png" width="80%" />
+ </p>  
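+
+As an alternative to `Postman`, the same request can be issued from the command line. A minimal sketch, assuming the documentation address shown above and the `token` header used in the screenshot:
+
+```shell
+# Replace {API server ip} and {token} with your API server address and the generated token
+curl --request GET \
+     --header "token: {token}" \
+     "http://{API server ip}:12345/dolphinscheduler/projects/query-project-list"
+```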
+
+### Create a Project
+
+Here is an example of creating a project named "wudl-flink-test":
+<p align="center">
+   <img src="/img/api/create_project1.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_project2.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_project3.png" width="80%" />
+ </p>
+The returned `msg` information is "success", indicating that we have successfully created the project through the API.
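+
+A hedged command-line sketch of the same call follows; the endpoint path and parameter names are assumptions based on the `projects` API group shown above and may differ between versions, so verify them on the API documentation page of your release:
+
+```shell
+# Replace the placeholders with your API server address, token, and project details
+curl --request POST \
+     --header "token: {token}" \
+     --data "projectName=wudl-flink-test" \
+     --data "description=project created through the API" \
+     "http://{API server ip}:12345/dolphinscheduler/projects/create"
+```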
+
+If you are interested in the source code for creating a project, please continue reading:
+
+### Appendix: The Source Code of Creating a Project
+
+<p align="center">
+   <img src="/img/api/create_source1.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/api/create_source2.png" width="80%" />
+ </p>
+
+
diff --git a/docs/docs/en/guide/parameter/built-in.md b/docs/docs/en/guide/parameter/built-in.md
new file mode 100644
index 0000000..ddb2edb
--- /dev/null
+++ b/docs/docs/en/guide/parameter/built-in.md
@@ -0,0 +1,48 @@
+# Built-in Parameter
+
+## Basic Built-in Parameter
+
+<table>
+    <tr><th>Variable</th><th>Declaration Method</th><th>Meaning</th></tr>
+    <tr>
+        <td>system.biz.date</td>
+        <td>${system.biz.date}</td>
+        <td>The day before the schedule time of the daily scheduling instance, the format is yyyyMMdd</td>
+    </tr>
+    <tr>
+        <td>system.biz.curdate</td>
+        <td>${system.biz.curdate}</td>
+        <td>The schedule time of the daily scheduling instance, the format is yyyyMMdd</td>
+    </tr>
+    <tr>
+        <td>system.datetime</td>
+        <td>${system.datetime}</td>
+        <td>The schedule time of the daily scheduling instance, the format is yyyyMMddHHmmss</td>
+    </tr>
+</table>
+
+## Extended Built-in Parameter
+
+- Custom variables are supported in the code, declared as `${variable name}`. Refer to the "System Parameter" section.
+
+- The benchmark variable is defined in the `$[...]` format. The time format `$[yyyyMMddHHmmss]` can be decomposed and combined arbitrarily, for example: `$[yyyyMMdd]`, `$[HHmmss]`, `$[yyyy-MM-dd]`, etc.
+
+- Or define by the 2 following ways:
+
+      1. Use add_month(yyyyMMdd, offset) function to add or minus number of months
+      the first parameter of this function is [yyyyMMdd], represents the time format
+      the second parameter is offset, represents the number of months the user wants to add or minus
+      * Next N years:$[add_months(yyyyMMdd,12*N)]
+      * N years before:$[add_months(yyyyMMdd,-12*N)]
+      * Next N months:$[add_months(yyyyMMdd,N)]
+      * N months before:$[add_months(yyyyMMdd,-N)]
+      *********************************************************************************************************
+      1. Add or minus numbers directly after the time format
+      * Next N weeks:$[yyyyMMdd+7*N]
+      * First N weeks:$[yyyyMMdd-7*N]
+      * Next N days:$[yyyyMMdd+N]
+      * N days before:$[yyyyMMdd-N]
+      * Next N hours:$[HHmmss+N/24]
+      * First N hours:$[HHmmss-N/24]
+      * Next N minutes:$[HHmmss+N/24/60]
+      * First N minutes:$[HHmmss-N/24/60]
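+
+For intuition, the sketch below approximates a few of these expressions with plain GNU `date`. This is only an illustration of the semantics (assuming a schedule time of `2022-03-15 00:00:00`), not how DolphinScheduler evaluates them:
+
+```shell
+# Illustration only: approximate some benchmark-variable expressions with GNU date,
+# assuming the schedule time is 2022-03-15 00:00:00.
+SCHEDULE_TIME="2022-03-15 00:00:00"
+
+date -d "$SCHEDULE_TIME"             +%Y%m%d   # $[yyyyMMdd]                -> 20220315
+date -d "$SCHEDULE_TIME 1 day ago"   +%Y%m%d   # $[yyyyMMdd-1]              -> 20220314
+date -d "$SCHEDULE_TIME + 7 day"     +%Y%m%d   # $[yyyyMMdd+7*1]            -> 20220322
+date -d "$SCHEDULE_TIME 1 month ago" +%Y%m%d   # $[add_months(yyyyMMdd,-1)] -> 20220215
+date -d "$SCHEDULE_TIME + 1 hour"    +%H%M%S   # $[HHmmss+1/24]             -> 010000
+```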
\ No newline at end of file
diff --git a/docs/docs/en/guide/parameter/context.md b/docs/docs/en/guide/parameter/context.md
new file mode 100644
index 0000000..e211e4b
--- /dev/null
+++ b/docs/docs/en/guide/parameter/context.md
@@ -0,0 +1,66 @@
+# Refer to Parameter Context
+
+DolphinScheduler provides the ability for parameters to refer to each other, including local parameters referring to global parameters and parameter transfer between upstream and downstream tasks. Because of these references, parameter priority becomes an issue when parameter names are the same; see also [Parameter Priority](priority.md).
+
+## Local Task Refers to Global Parameter
+
+The premise of a local task referring to a global parameter is that you have already defined a [Global Parameter](global.md). The usage is similar to that of [local parameters](local.md), but the value of the parameter needs to be configured as the key of the global parameter.
+
+![parameter-call-global-in-local](/img/global_parameter.png)
+
+As the figure above shows, `${biz_date}` and `${curdate}` are examples of local parameters that refer to global parameters. Observe the last line of the above figure, `local_param_bizdate` uses `${global_bizdate}` to refer to the global parameter. In the shell script, you can use `${local_param_bizdate}` to refer to the value of the global variable `global_bizdate`, or set the value of `local_param_bizdate` directly through JDBC. Similarly, `local_param` refers to the global parameters d [...]
+
+## Pass Parameter From Upstream Task to Downstream
+
+DolphinScheduler allows parameter transfer between tasks. Currently, transfer direction only supports one-way transfer from upstream to downstream. The task types that support this feature are:
+
+* [Shell](../task/shell.md)
+* [SQL](../task/sql.md)
+* [Procedure](../task/stored-procedure.md)
+
+When defining an upstream node, if you need to transmit the result of that node to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly focus on the SQL and shell nodes to pass parameters downstream.
+
+### SQL
+
+`prop` is user-specified; select `OUT` for the direction, and the parameter is exported only when the direction is `OUT`. Choose the data type according to the scenario and leave the value part blank.
+
+If the result of the SQL node has only one row with one or more fields, the name of the `prop` needs to be the same as the field name. Any data type except `LIST` can be chosen. The parameter is assigned the value of the column with the same name in the SQL query result.
+
+If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type; the SQL query result is converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+
+Let's take the SQL node workflow as an example:
+
+The following defines the [createParam1] node in the above figure:
+
+![png05](/img/globalParam/image-20210723104957031.png)
+
+The following defines the [createParam2] node:
+
+![png06](/img/globalParam/image-20210723105026924.png)
+
+Find the value of the variable in the [Workflow Instance] page corresponding to the node instance.
+
+The following shows the Node instance [createparam1]:
+
+![png07](/img/globalParam/image-20210723105131381.png)
+
+Here, the value of "id" is 12.
+
+Let's see the case of the node instance [createparam2].
+
+![png08](/img/globalParam/image-20210723105255850.png)
+
+Only the value of "id" is shown. Although the user-defined SQL queries both the "id" and "database_name" fields, only the `OUT` parameter `id` is set, because only the single parameter "id" is defined for output. The result list has a length of 10 for display reasons.
+
+### SHELL
+
+`prop` is user-specified and the direction is `OUT`. The output is defined as an export parameter only when the direction is `OUT`. Choose the data type according to the scenario and leave the value part blank.
+To pass the parameter, the shell script needs to print an output statement in the format `${setValue(key=value)}`, where `key` is the `prop` of the corresponding parameter and `value` is the value of the parameter.
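+
+As a concrete sketch, the body of an upstream shell node could look like the following. The parameter name `trans_date` is only an example and must match a `prop` declared with direction `OUT` in [Custom Parameters]:
+
+```shell
+#!/bin/bash
+# Compute a value and hand it to downstream tasks through the OUT parameter "trans_date".
+today=$(date +%Y%m%d)
+
+# Print the ${setValue(key=value)} pattern literally: the single quotes keep bash from
+# expanding it, and DolphinScheduler picks the pattern up from the task log.
+echo '${setValue(trans_date='"${today}"')}'
+```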
+
+For example, in the figure below:
+
+![png09](/img/globalParam/image-20210723101242216.png)
+
+When the `${setValue(key=value1)}` pattern is detected in the shell node's log, `value1` is assigned to `key`, and downstream nodes can use the variable `key` directly. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to see the value of the variable.
+
+![png10](/img/globalParam/image-20210723102522383.png)
diff --git a/docs/docs/en/guide/parameter/global.md b/docs/docs/en/guide/parameter/global.md
new file mode 100644
index 0000000..bd4b25d
--- /dev/null
+++ b/docs/docs/en/guide/parameter/global.md
@@ -0,0 +1,19 @@
+# Global Parameter
+
+## Scope
+
+Parameters defined on the process definition page apply to all tasks of that process.
+
+## Usage
+
+To use a global parameter: on the process definition page, click the '+' beside 'Set global', fill in the key and value, and save:
+
+<p align="center">
+   <img src="/img/supplement_global_parameter_en.png" width="80%" />
+ </p>
+
+<p align="center">
+   <img src="/img/local_parameter_en.png" width="80%" />
+ </p>
+
+The `global_bizdate` parameter defined here can be referenced by the local parameters of any other task node, and its value is set by referencing the system parameter `system.biz.date`.
\ No newline at end of file
diff --git a/docs/docs/en/guide/parameter/local.md b/docs/docs/en/guide/parameter/local.md
new file mode 100644
index 0000000..a764e7b
--- /dev/null
+++ b/docs/docs/en/guide/parameter/local.md
@@ -0,0 +1,19 @@
+# Local Parameter
+
+## Scope
+
+Parameters configured on the task definition page are scoped to that task only. However, if you configure them as described in [Refer to Parameter Context](context.md), they can be passed to downstream tasks.
+
+## Usage
+
+To use a local parameter: on the task definition page, click the '+' beside 'Custom Parameters', fill in the key and value, and save:
+
+<p align="center">
+     <img src="/img/supplement_local_parameter_en.png" width="80%" />
+</p>
+
+<p align="center">
+     <img src="/img/global_parameter_en.png" width="80%" />
+</p>
+
+If you want to call a [built-in parameter](built-in.md) in a local parameter, fill in the value of the built-in parameter in `value`, as `${biz_date}` and `${curdate}` in the figure above.
\ No newline at end of file
diff --git a/docs/docs/en/guide/parameter/priority.md b/docs/docs/en/guide/parameter/priority.md
new file mode 100644
index 0000000..560d8f0
--- /dev/null
+++ b/docs/docs/en/guide/parameter/priority.md
@@ -0,0 +1,40 @@
+# Parameter Priority
+
+DolphinScheduler has three parameter types:
+
+* [Global Parameter](global.md): parameters defined on the workflow definition page
+* [Parameter Context](context.md): parameters passed by upstream task nodes
+* [Local Parameter](local.md): parameters that belong to the node itself, i.e. the parameters defined by the user in [Custom Parameters]. The user can define some of them when creating the workflow definition.
+
+Because a parameter value can come from multiple sources, priority issues arise when parameters share the same name. The priority of DolphinScheduler parameters from high to low is: `Global Parameter > Parameter Context > Local Parameter`
+
+When upstream tasks pass parameters to downstream tasks, multiple upstream tasks may pass parameters with the same name:
+
+* Downstream nodes prefer to use parameters with non-empty values
+* If there are multiple parameters with non-empty values, select the value from the upstream task with the earliest completion time
+
+## Example
+
+The following examples show task parameter priority problems:
+
+1: Use shell nodes to explain the first case.
+
+![png01](/img/globalParam/image-20210723102938239.png)
+
+The [useParam] node can use the parameters set in the [createParam] node. The [useParam] node cannot obtain the parameters of the [noUseParam] node because there is no dependency between them. Other task node types follow the same usage rules as the shell example here.
+
+![png02](/img/globalParam/image-20210723103316896.png)
+
+The [createParam] node can use parameters directly. In addition, the node creates two parameters named "key" and "key1"; "key1" has the same name as the parameter passed by the upstream node and is assigned the value "12". However, due to the priority rules, the assigned value "12" is discarded and the value from the upstream node is used instead.
+
+2: Use SQL nodes to explain another case.
+
+![png03](/img/globalParam/image-20210723103937052.png)
+
+The following shows the definition of the [use_create] node:
+
+![png04](/img/globalParam/image-20210723104411489.png)
+
+"status" is the own parameters of the node and set by the current node. However, the user also sets the "status" parameter when saving, assigning its value to -1. Then the value of status will be -1 with higher priority when the SQL is executed. The value of the node's own variable is discarded.
+
+The "ID" here is the parameter set by the upstream node. The user sets the parameters of the same parameter name "ID" for the [createparam1] node and [createparam2] node. And the [use_create] node uses the value of [createParam1] which is finished first.
\ No newline at end of file
diff --git a/docs/docs/en/guide/project/project-list.md b/docs/docs/en/guide/project/project-list.md
new file mode 100644
index 0000000..38f7191
--- /dev/null
+++ b/docs/docs/en/guide/project/project-list.md
@@ -0,0 +1,18 @@
+# Project Management
+
+## Create Project
+
+- Click "Project Management" to enter the project management page, click the "Create Project" button, enter the project name, project description, and click "Submit" to create a new project.
+
+![project-list](/img/new_ui/dev/project/project-list.png)
+
+## Project Home
+
+- Click the project name link on the project management page to enter the project home page. As shown in the figure below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of the project. The metrics are introduced below:
+
+- Task status statistics: within the specified time range, count the number of task instances in the states submitted successfully, running, ready to pause, pause, ready to stop, stop, failure, success, need fault tolerance, kill and waiting for thread
+- Process status statistics: within the specified time range, count the number of workflow instances in the states submitted successfully, running, ready to pause, pause, ready to stop, stop, failure, success, need fault tolerance, kill and waiting for thread
+- Workflow definition statistics: count the workflow definitions created by this user and those granted to the user by the administrator
+
+![project-overview](/img/new_ui/dev/project/project-overview.png)
+
diff --git a/docs/docs/en/guide/project/task-instance.md b/docs/docs/en/guide/project/task-instance.md
new file mode 100644
index 0000000..bafa588
--- /dev/null
+++ b/docs/docs/en/guide/project/task-instance.md
@@ -0,0 +1,11 @@
+# Task Instance
+
+- Click Project Management -> Workflow -> Task Instance to enter the task instance page, as shown in the figure below. Click a workflow instance name to jump to the workflow instance DAG chart and view the task status.
+     <p align="center">
+        <img src="/img/task-list-en.png" width="80%" />
+     </p>
+
+- View log: Click the "View Log" button in the operation column to view the task execution log.
+     <p align="center">
+        <img src="/img/task-log2-en.png" width="80%" />
+     </p>
diff --git a/docs/docs/en/guide/project/workflow-definition.md b/docs/docs/en/guide/project/workflow-definition.md
new file mode 100644
index 0000000..400c17e
--- /dev/null
+++ b/docs/docs/en/guide/project/workflow-definition.md
@@ -0,0 +1,114 @@
+# Workflow Definition
+
+## Create workflow definition
+
+- Click Project Management -> Workflow -> Workflow Definition, enter the workflow definition page, and click the "Create Workflow" button to enter the **workflow DAG edit** page, as shown in the following figure:
+  <p align="center">
+      <img src="/img/dag5.png" width="80%" />
+  </p>
+- Drag <img src="/img/tasks/icons/shell.png" width="15"/> from the toolbar to the canvas to add a shell task, as shown in the figure below:
+
+  ![demo-shell-simple](/img/tasks/demo/shell.jpg)
+
+- **Add parameter settings for shell task:**
+
+1. Fill in the "Node Name", "Description" and "Script" fields;
+2. Check “Normal” for “Run Flag”. If “Prohibit Execution” is checked, the task will not execute when the workflow runs;
+3. Select "Task Priority": when the number of worker threads is insufficient, high priority tasks will execute first in the execution queue, and tasks with the same priority will execute in the order of first in, first out;
+4. Timeout alarm (optional): check the timeout alarm and timeout failure, and fill in the "timeout period". When the task execution time exceeds the **timeout period**, an alert email will be sent and the task will fail due to timeout;
+5. Resources (optional). Resources are files created or uploaded on the Resource Center -> File Management page. For example, if the file name is `test.sh`, the command to call the resource in the script is `sh test.sh`;
+6. Customize parameters (optional);
+7. Click the "Confirm Add" button to save the task settings.
+
+- **Set dependencies between tasks:** Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect tasks. As shown in the figure below, task 2 and task 3 execute in parallel: when task 1 finishes, tasks 2 and 3 execute simultaneously.
+
+  <p align="center">
+     <img src="/img/dag6.png" width="80%" />
+  </p>
+
+- **Delete dependencies:** Click the "arrow" icon in the upper right corner <img src="/img/arrow.png" width="35"/>, select the connection line, and click the "Delete" icon in the upper right corner <img src="/img/delete.png" width="35"/> to delete the dependency between tasks.
+  <p align="center">
+     <img src="/img/dag7.png" width="80%" />
+  </p>
+
+- **Save workflow definition:** Click the "Save" button, and the "Set DAG chart name" window pops up, as shown in the figure below. Enter the workflow definition name, workflow definition description, and set global parameters (optional, refer to [global parameters](../parameter/global.md)), click the "Add" button to finish workflow definition creation.
+  <p align="center">
+     <img src="/img/dag8.png" width="80%" />
+   </p>
+> For other types of tasks, please refer to [Task Node Type and Parameter Settings](#TaskParamers). <!-- markdown-link-check-disable-line -->
+
+## Workflow Definition Operation Function
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown below:
+
+<p align="center">
+<img src="/img/work_list_en.png" width="80%" />
+</p>
+The following are the operation functions of the workflow definition list:
+
+- **Edit:** Only "Offline" workflow definitions can be edited. Workflow DAG editing is the same as [Create Workflow Definition](#creatDag) <!-- markdown-link-check-disable-line -->
+- **Online:** When the workflow status is "Offline", used to make workflow online. Only the workflow in the "Online" state can run, but cannot edit
+- **Offline:** When the workflow status is "Online", used to make workflow offline. Only the workflow in the "Offline" state can be edited, but cannot run
+- **Run:** Only workflow in the online state can run. See [2.3.3 Run Workflow](#run-the-workflow) for the operation steps.
+- **Timing:** Timing can only set to online workflows, and the system automatically schedules to run the workflow on time. The status after creating a timing setting is "offline", and the timing must set online on the timing management page to make effect. See [2.3.4 Workflow Timing](#workflow-timing) for timing operation steps
+- **Timing Management:** The timing management page can edit, online or offline and delete timing
+- **Delete:** Delete the workflow definition
+- **Download:** Download workflow definition to local
+- **Tree Diagram:** Display the task node type and task status in a tree structure, as shown in the figure below:
+  <p align="center">
+      <img src="/img/tree_en.png" width="80%" />
+  </p>
+
+## Run the Workflow
+
+- Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the "Go Online" button <img src="/img/online.png" width="35"/> to bring the workflow online.
+  <p align="center">
+      <img src="/img/work_list_en.png" width="80%" />
+  </p>
+
+- Click the "Run" button to pop up the startup parameter setting window, as shown in the figure below, set the startup parameters, click the "Run" button in the pop-up box, the workflow starts running, and the workflow instance page generates a workflow instance.
+     <p align="center">
+       <img src="/img/run_work_en.png" width="80%" />
+     </p>  
+  Description of workflow operating parameters:
+
+  * Failure strategy: when a task node fails to execute, the strategy for other parallel task nodes. "Continue" means that after a task fails, other task nodes execute normally; "End" means terminate all running tasks and terminate the entire process
+  * Notification strategy: when the process is over, send the process execution result notification email according to the process status; the options are: do not send, send on success, send on failure, send regardless of result
+  * Process priority: the priority of process execution, divided into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, higher priority processes execute first in the execution queue, and processes with the same priority execute in first-in, first-out order
+  * Worker group: the process can only be executed in the specified worker machine group. The default is `Default`, which can execute on any worker
+  * Notification group: when the notification strategy, timeout alarm or fault tolerance is triggered, process result information or emails are sent to all members of the notification group
+  * Recipient: when the notification strategy, timeout alarm or fault tolerance is triggered, process result information or alarm emails are sent to the recipient list
+  * Cc: when the notification strategy, timeout alarm or fault tolerance is triggered, process result information or alarm emails are copied to the CC list
+  * Startup parameter: set or overwrite global parameter values when starting a new process instance
+  * Complement: two modes, serial complement and parallel complement. Serial complement: within the specified time range, the complement runs from the start date to the end date, generating N process instances in turn; parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances
+  * You can select a complement time range (only continuous dates are supported) when executing a timed workflow definition. For example, to backfill the data from 1st May to 10th May, as shown in the figure below:
+
+    <p align="center">
+        <img src="/img/complement_en1.png" width="80%" />
+    </p>
+
+  > Serial mode: the complement executes sequentially from 1st May to 10th May, and the process instance page generates 10 process instances;
+
+  > Parallel mode: the tasks from 1st May to 10th May execute simultaneously, and the process instance page generates 10 process instances;
+
+## Workflow Timing
+
+- Create timing: Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, bring the workflow online, and click the "timing" button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog box pops up, as shown in the figure below:
+  <p align="center">
+      <img src="/img/time_schedule_en.png" width="80%" />
+  </p>
+- Choose the start and end time. Within the time range, the workflow runs at regular intervals; outside the time range, no scheduled workflow instances are generated.
+- Add a timing that executes once a day at 5 AM, as shown in the following figure (a sample crontab expression is sketched after this list):
+  <p align="center">
+      <img src="/img/timer-en.png" width="80%" />
+  </p>
+- Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as the workflow running parameters.
+- Click the "Create" button to create the timing. The timing status is now "**Offline**" and the timing needs to be brought **Online** to take effect.
+- Timing online: Click the "Timing Management" button <img src="/img/timeManagement.png" width="35"/> to enter the timing management page, then click the "online" button; the timing status changes to "online", as shown in the figure below, and the workflow runs on schedule.
+  <p align="center">
+      <img src="/img/time-manage-list-en.png" width="80%" />
+  </p>
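+
+As referenced above, a sketch of the "once a day at 5 AM" schedule as a Quartz-style crontab expression (fields: second, minute, hour, day of month, month, day of week, year); confirm the exact format against the hints in the timing dialog:
+
+```
+0 0 5 * * ? *
+```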
+
+## Import Workflow
+
+Click Project Management -> Workflow -> Workflow Definition to enter the workflow definition page, click the "Import Workflow" button to import the local workflow file, the workflow definition list displays the imported workflow and the status is offline.
diff --git a/docs/docs/en/guide/project/workflow-instance.md b/docs/docs/en/guide/project/workflow-instance.md
new file mode 100644
index 0000000..51670a3
--- /dev/null
+++ b/docs/docs/en/guide/project/workflow-instance.md
@@ -0,0 +1,62 @@
+# Workflow Instance
+
+## View Workflow Instance
+
+- Click Project Management -> Workflow -> Workflow Instance, enter the Workflow Instance page, as shown in the figure below:
+     <p align="center">
+        <img src="/img/instance-list-en.png" width="80%" />
+     </p>
+- Click the workflow name to enter the DAG view page, and check the task execution status, as shown in the figure below:
+  <p align="center">
+    <img src="/img/instance-runs-en.png" width="80%" />
+  </p>
+
+## View Task Log
+
+- Enter the workflow instance page, click the workflow name to enter the DAG view page, and double-click the task node, as shown in the figure below:
+   <p align="center">
+     <img src="/img/instanceViewLog-en.png" width="80%" />
+   </p>
+- Click "View Log", a log window pops up, as shown in the figure below, you can also view the task log on the task instance page, refer to [Task View Log](./task-instance.md)
+   <p align="center">
+     <img src="/img/task-log-en.png" width="80%" />
+   </p>
+
+## View Task History
+
+- Click Project Management -> Workflow -> Workflow Instance, enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Double-click the task node, as shown in the figure below, and click "View History" to jump to the task instance page, which displays the list of task instances run by the workflow instance
+   <p align="center">
+     <img src="/img/task_history_en.png" width="80%" />
+   </p>
+
+## View Operation Parameters
+
+- Click Project Management -> Workflow -> Workflow Instance, enter the workflow instance page, and click the workflow name to enter the workflow DAG page;
+- Click the icon <img src="/img/run_params_button.png" width="35"/> in the upper left corner to view the startup parameters of the workflow instance; click the icon <img src="/img/global_param.png" width="35"/> to view the global and local parameters of the workflow instance, as shown in the following figure:
+   <p align="center">
+     <img src="/img/run_params_en.png" width="80%" />
+   </p>
+
+## Workflow Instance Operation Function
+
+Click Project Management -> Workflow -> Workflow Instance, enter the workflow instance page, as shown in the figure below:
+
+  <p align="center">
+    <img src="/img/instance-list-en.png" width="80%" />
+  </p>
+
+- **Edit:** Only terminated processes can be edited. Click the "Edit" button or the workflow instance name to enter the DAG edit page. After editing, click the "Save" button to confirm, as shown in the figure below. In the pop-up box, check "Whether to update to workflow definition" to update the workflow definition; if not checked, the workflow definition is not affected
+     <p align="center">
+       <img src="/img/editDag-en.png" width="80%" />
+     </p>
+- **Rerun:** re-execute a terminated process
+- **Recovery failed:** for failed processes, you can perform a failure recovery operation, starting from the failed node
+- **Stop:** **stop** a running process; the background code first sends a `kill` to the worker process and then executes a `kill -9` operation
+- **Pause:** perform a **pause** operation on a running process; the system status changes to **waiting for execution**, waits for the running task to finish, and pauses the next task in sequence
+- **Resume pause:** resume a paused process, starting directly from the **paused node**
+- **Delete:** delete the workflow instance and the task instances under the workflow instance
+- **Gantt chart:** the vertical axis of the Gantt chart is the topological sorting of task instances of the workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure below:
+     <p align="center">
+         <img src="/img/gantt-en.png" width="80%" />
+     </p>
diff --git a/docs/docs/en/guide/resource.md b/docs/docs/en/guide/resource.md
new file mode 100644
index 0000000..c6cf07d
--- /dev/null
+++ b/docs/docs/en/guide/resource.md
@@ -0,0 +1,165 @@
+# Resource Center
+
+If you want to use the resource upload function, you can use the local file system as the upload directory for a single machine (this does not require deploying Hadoop). You can also upload to a Hadoop or MinIO cluster, in which case you need Hadoop (2.6+), MinIO or another related environment.
+
+> **_Note:_**
+>
+> * If you want to use the resource upload function, the deployment user in [installation and deployment](installation/standalone.md) must have the relevant operation authority.
+> * If you are using a Hadoop cluster with HA, you need to enable HDFS resource upload and copy the `core-site.xml` and `hdfs-site.xml` of the Hadoop cluster to `/opt/dolphinscheduler/conf`; otherwise, skip this copy step.
+
+## HDFS Resource Configuration
+
+- To upload resource files and UDF functions: all uploaded files and resources are stored on HDFS, so the following configurations are required:
+
+```
+conf/common.properties
+    # user who has permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # resource files will be stored in this HDFS path; make sure the directory exists on HDFS and has read/write permissions. "/dolphinscheduler" is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type: HDFS, S3, NONE
+    resource.storage.type=HDFS
+    # whether kerberos is enabled
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    # if resource.storage.type is HDFS and the Hadoop cluster NameNode has HA enabled, put core-site.xml and hdfs-site.xml into the installPath/conf directory (in this example /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, change it to a specific IP or host name.
+    # if resource.storage.type is S3, write the S3 address, for example: s3a://dolphinscheduler
+    # note: for S3, be sure to create the root directory /dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020
+    # for ResourceManager HA, list the IPs here; leave it empty for a single ResourceManager
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+    # for a single ResourceManager, configure only one host name; for ResourceManager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+```
+
+## File Management
+
+> File management covers various resource files, including creating basic file types such as `txt/log/sh/conf/py/java`, uploading jar packages and other types of files, and performing operations such as edit, rename, download and delete on them.
+
+![file-manage](/img/new_ui/dev/resource/file-manage.png)
+
+- Create a file
+  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties.
+
+![create-file](/img/new_ui/dev/resource/create-file.png)
+
+- upload files
+
+> Upload file: Click the "Upload File" button to upload, or drag the file to the upload area; the file name is automatically filled in with the uploaded file name.
+
+![upload-file](/img/new_ui/dev/resource/upload-file.png)
+
+- File View
+
+> For the files that can be viewed, click the file name to view the file details.
+
+<p align="center">
+   <img src="/img/file_detail_en.png" width="80%" />
+ </p>
+
+- Download file
+
+> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file.
+
+- File rename
+
+![rename-file](/img/new_ui/dev/resource/rename-file.png)
+
+- delete
+  > File list -> Click the "Delete" button to delete the specified file.
+
+- Re-upload file
+
+  > Re-upload file: Click the "Re-upload File" button to upload a new file that replaces the old one, or drag the file to the re-upload area; the file name is automatically filled in with the newly uploaded file name.
+
+    <p align="center">
+      <img src="/img/reupload_file_en.png" width="80%" />
+    </p>
+
+
+## UDF Management
+
+### Resource Management
+
+> The resource management and file management functions are similar. The difference is that resource management is for uploading UDF resources, while file management is for uploading user programs, scripts and configuration files.
+> Operation functions: rename, download, delete.
+
+- Upload UDF resources
+  > Same as uploading files.
+
+### Function Management
+
+- Create UDF function
+  > Click "Create UDF Function", enter the UDF function parameters, select the UDF resource, and click "Submit" to create the UDF function.
+
+> Currently, only temporary Hive UDF functions are supported.
+
+- UDF function name: enter the name of the UDF function.
+- Package name Class name: enter the full path of the UDF function.
+- UDF resource: set the resource file corresponding to the created UDF function.
+
+![create-udf](/img/new_ui/dev/resource/create-udf.png)
+ 
+## Task Group Settings
+
+The task group is mainly used to control the concurrency of task instances and is designed to control the pressure on other resources (it can also control the pressure on the Hadoop cluster, though the cluster has its own queue control). When creating a new task definition, you can configure the corresponding task group and configure the priority of the task running in the task group.
+
+### Task Group Configuration 
+
+#### Create Task Group 
+
+The user clicks [Resources] - [Task Group Management] - [Task Group option] - [Create Task Group].
+
+![create-taskGroup](/img/new_ui/dev/resource/create-taskGroup.png)
+
+You need to enter the information inside the picture:
+
+- Task group name: the displayed name of the task group
+
+- Project name: the scope of projects in which the task group works. This item is optional; if no project is selected, all projects in the system can use this task group.
+
+- Resource pool size: the maximum number of concurrent task instances allowed.
+
+#### View Task Group Queue 
+
+![view-queue](/img/new_ui/dev/resource/view-queue.png) 
+
+Click the button to view task group usage information:
+
+![view-queue](/img/new_ui/dev/resource/view-groupQueue.png) 
+
+#### Use of Task Groups 
+
+**Note**: Task groups apply to tasks executed by workers; node types executed by the master, such as [switch] nodes, [condition] nodes and [sub_process], are not controlled by the task group. Let's take the shell node as an example:
+
+![use-queue](/img/new_ui/dev/resource/use-queue.png)                 
+
+Regarding the configuration of the task group, all you need to do is to configure these parts in the red box:
+
+- Task group name: the task group name displayed on the task group configuration page. Here you can only see the task groups that the project has permission to access (a project was selected when creating the task group) or task groups with global scope (no project was selected when creating the task group).
+
+- Priority: when tasks are waiting for resources, the master distributes tasks with higher priority to workers first. The larger the value, the higher the priority.
+
+### Implementation Logic of Task Group 
+
+#### Get Task Group Resources: 
+
+When distributing a task, the master checks whether the task is configured with a task group. If not, the task is sent to the worker to run as usual. If a task group is configured, the master checks whether the remaining size of the task group resource pool can accommodate the current task before sending it to the worker: if decrementing the resource pool by 1 succeeds, the task continues to run; if not, task distribution exits and the task waits for other tasks to wake it up.
+
+#### Release and Wake Up: 
+
+When a task that occupies task group resources finishes, those resources are released. The system then checks whether any task is waiting in the current task group. If there is, the waiting task with the highest priority is marked to run and a new executable event is created. The event stores the ID of the task marked to acquire the resource; that task then obtains the task group resource and runs.
+
+#### Task Group Flowchart 
+
+<p align="center">
+    <img src="/img/task_group_process.png" width="80%" />
+</p>        
diff --git a/docs/docs/en/guide/security.md b/docs/docs/en/guide/security.md
new file mode 100644
index 0000000..44e2cf4
--- /dev/null
+++ b/docs/docs/en/guide/security.md
@@ -0,0 +1,151 @@
+# Security (Authorization System)
+
+* Only the administrator account has the authority to operate the Security Center. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management and token management. In the user management module, resources, data sources, projects, etc. can be authorized to users.
+* Administrator login: the default username and password are `admin/dolphinscheduler123`
+
+## Create Queue
+
+- Configure `queue` parameter to execute programs such as Spark and MapReduce.
+- The administrator enters the `Security Center->Queue Management` page and clicks the "Create Queue" button to create a new queue.
+
+![create-queue](/img/new_ui/dev/security/create-queue.png)
+
+## Add Tenant
+
+- The tenant corresponds to a Linux user, which the worker uses to submit jobs. A task will fail if this Linux user does not exist. You can set the parameter `worker.tenant.auto.create` to `true` in the configuration file `worker.properties` (see the snippet after the figure below); DolphinScheduler will then create the user if it does not exist. Note that `worker.tenant.auto.create=true` requires that the worker can run `sudo` without a password.
+- Tenant Code: **the tenant code is the Linux user and must be unique; it cannot be repeated**
+- The administrator enters the `Security Center->Tenant Management` page and clicks the `Create Tenant` button to create a tenant.
+
+![create-tenant](/img/new_ui/dev/security/create-tenant.png)
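+
+A minimal sketch of the relevant `worker.properties` entry (the property name comes from the note above; adjust it to your deployment):
+
+```conf
+# conf/worker.properties -- let the worker create the tenant's Linux user automatically
+# (requires that the deployment user can run sudo without a password)
+worker.tenant.auto.create=true
+```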
+
+## Create Normal User
+
+- Users are divided into **administrator users** and **normal users**
+
+  - The administrator has authorization and user management authority, but does not have the authority to create projects or workflow definitions.
+  - Normal users can create projects and create, edit and execute workflow definitions.
+  - **Note**: If a user switches tenants, all resources under the user's previous tenant will be copied to the new tenant.
+
+- The administrator enters the `Security Center -> User Management` page and clicks the `Create User` button to create a user.
+
+![create-user](/img/new_ui/dev/security/create-user.png)
+
+> **Edit user information**
+
+- The administrator enters the `Security Center->User Management` page and clicks the `Edit` button to edit user information.
+- After a normal user logs in, click the user information in the username drop-down box to enter the user information page, and click the `Edit` button to edit the user information.
+
+> **Modify user password**
+
+- The administrator enters the `Security Center->User Management` page and clicks the `Edit` button. When editing user information, enter the new password to modify the user password.
+- After a normal user logs in, click the user information in the username drop-down box to enter the password modification page, enter the password and confirm the password and click the `Edit` button, then the password modification is a success.
+
+## Create Alarm Group
+
+- The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group by email.
+
+* The administrator enters the `Security Center -> Alarm Group Management` page and clicks the `Create Alarm Group` button to create an alarm group.
+
+![create-alarmInstance](/img/new_ui/dev/security/create-alarmInstance.png)
+
+## Token Management
+
+> Since the back-end interfaces require a login check, token management provides a way to perform operations on the system by calling the interfaces directly.
+
+- The administrator enters the `Security Center -> Token Management page`, clicks the `Create Token` button, selects the expiration time and user, clicks the `Generate Token` button, and clicks the `Submit` button, then create the selected user's token successfully.
+
+![create-token](/img/new_ui/dev/security/create-token.png)
+
+- After a normal user logs in, click the user information in the username drop-down box, enter the token management page, select the expiration time, click the `Generate Token` button, and click the `Submit` button, then the user creates a token successfully.
+- Call example:
+
+```java
+    /**
+     * test token
+     */
+    public  void doPOSTParam()throws Exception{
+        // create HttpClient
+        CloseableHttpClient httpclient = HttpClients.createDefault();
+
+        // create http post request
+        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/escheduler/projects/create");
+        httpPost.setHeader("token", "123");
+        // set parameters
+        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
+        parameters.add(new BasicNameValuePair("projectName", "qzw"));
+        parameters.add(new BasicNameValuePair("desc", "qzw"));
+        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
+        httpPost.setEntity(formEntity);
+        CloseableHttpResponse response = null;
+        try {
+            // execute
+            response = httpclient.execute(httpPost);
+            // response status code 200
+            if (response.getStatusLine().getStatusCode() == 200) {
+                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
+                System.out.println(content);
+            }
+        } finally {
+            if (response != null) {
+                response.close();
+            }
+            httpclient.close();
+        }
+    }
+```
+
+## Granted Permissions
+
+- Granted permissions include project permissions, resource permissions, data source permissions and UDF function permissions.
+- The administrator can authorize projects, resources, data sources and UDF functions to normal users who did not create them. Because projects, resources, data sources and UDF functions are authorized to users in the same way, we take project authorization as an example.
+- Note: Users have all permissions to the projects they create themselves, so those projects are not displayed in the project list and the selected project list.
+
+- The administrator enters the `Security Center -> User Management` page and clicks the `Authorize` button of the user who needs to be authorized, as shown in the figure below:
+ <p align="center">
+  <img src="/img/auth-en.png" width="80%" />
+</p>
+
+- Select the project and authorize the project.
+
+<p align="center">
+   <img src="/img/authproject-en.png" width="80%" />
+ </p>
+
+- Resources, data sources, and UDF function authorization are the same as project authorization.
+
+## Worker Grouping
+
+Each worker node will belong to its own worker group, and the default group is "default".
+
+When executing a task, the task can be assigned to the specified worker group, and the task will be executed by the worker node in the group.
+
+> Add or update worker group
+
+- Open the `conf/worker.properties` configuration file on the worker node where you want to configure the groups and modify the `worker.groups` parameter.
+- The value of `worker.groups` is the name of the group(s) the worker node belongs to; the default is `default`.
+- If the worker node corresponds to more than one group, they are separated by commas.
+
+```conf
+worker.groups=default,test
+```
+- You can also change a worker's group at runtime; if the modification succeeds, the worker uses the new group and ignores the configuration in `worker.properties`. The steps to modify a worker group are: `Security Center -> Worker Group Management -> click 'New Worker Group' -> enter 'Group Name' -> select existing workers -> click 'Submit'`.
+
+## Environmental Management
+
+* Configure the Worker operating environment online. A Worker can specify multiple environments, and each environment is equivalent to the `dolphinscheduler_env.sh` file.
+
+* The default environment is the `dolphinscheduler_env.sh` file.
+
+* When executing a task, the task can be assigned to a specified worker group, and the environment is selected according to the worker group. The worker node loads the environment first and then executes the task.
+
+> Add or update environment
+
+- The environment configuration is equivalent to the configuration in the `dolphinscheduler_env.sh` file.
+
+![create-environment](/img/new_ui/dev/security/create-environment.png)
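+
+As a rough sketch, an environment body mirrors what `dolphinscheduler_env.sh` usually contains; all paths below are placeholders for your own installation:
+
+```shell
+# Example environment body (paths are placeholders; adjust them to your installation).
+export JAVA_HOME=/usr/local/jdk
+export HADOOP_HOME=/opt/soft/hadoop
+export SPARK_HOME=/opt/soft/spark
+export PYTHON_HOME=/opt/soft/python
+export DATAX_HOME=/opt/soft/datax
+export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$PYTHON_HOME/bin:$PATH
+```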
+
+> Usage environment
+
+- Create a task node in the workflow definition and select the worker group and the environment corresponding to that worker group. When executing the task, the worker loads the environment first and then executes the task.
+
+![use-environment](/img/new_ui/dev/security/use-environment.png)
diff --git a/docs/docs/en/guide/start/quick-start.md b/docs/docs/en/guide/start/quick-start.md
new file mode 100644
index 0000000..9310f9d
--- /dev/null
+++ b/docs/docs/en/guide/start/quick-start.md
@@ -0,0 +1,62 @@
+# Quick Start
+
+* Watch the Apache DolphinScheduler Quick Start Tutorial here:
+  [![image](/img/video_cover/quick-use.png)](https://www.youtube.com/watch?v=nrF20hpCkug)
+
+
+* Administrator user login
+
+  > Address: http://localhost:12345/dolphinscheduler  Username and password: `admin/dolphinscheduler123`
+
+![login](/img/new_ui/dev/quick-start/login.png)
+
+* Create a queue
+
+![create-queue](/img/new_ui/dev/quick-start/create-queue.png)
+
+* Create a tenant
+
+![create-tenant](/img/new_ui/dev/quick-start/create-tenant.png)
+
+* Create Ordinary Users
+
+![create-user](/img/new_ui/dev/quick-start/create-user.png)
+
+* Create an alarm instance
+
+![create-alarmInstance](/img/new_ui/dev/quick-start/create-alarmInstance.png)
+
+* Create an alarm group
+
+![create-alarmGroup](/img/new_ui/dev/quick-start/create-alarmGroup.png)
+  
+* Create a worker group
+
+![create-workerGroup](/img/new_ui/dev/quick-start/create-workerGroup.png)
+
+ * Create environment
+
+![create-environment](/img/new_ui/dev/quick-start/create-environment.png)
+    
+* Create a token
+  
+![create-token](/img/new_ui/dev/quick-start/create-token.png)
+     
+* Log in as a regular user
+  > Click the username in the upper right corner to "exit", then log in again as the normal user.
+
+* `Project Management -> Create Project -> Click on Project Name`
+
+![project](/img/new_ui/dev/quick-start/project.png)
+
+* `Click Workflow Definition -> Create Workflow Definition -> Online Process Definition`
+
+<p align="center">
+   <img src="/img/process_definition_en.png" width="60%" />
+ </p>
+
+* `Running Process Definition -> Click Workflow Instance -> Click Process Instance Name -> Double-click Task Node -> View Task Execution Log`
+
+ <p align="center">
+   <img src="/img/log_en.png" width="60%" />
+</p>
diff --git a/docs/docs/en/guide/task/conditions.md b/docs/docs/en/guide/task/conditions.md
new file mode 100644
index 0000000..6e404a9
--- /dev/null
+++ b/docs/docs/en/guide/task/conditions.md
@@ -0,0 +1,36 @@
+# Conditions
+
+Conditions is a conditional node that determines which downstream task should run based on the status of the upstream tasks. Currently, Conditions supports multiple upstream tasks but only two downstream tasks. When there is more than one upstream task, complex upstream dependencies can be achieved with the `and` and `or` operators.
+
+## Create Task
+
+Drag from the toolbar <img src="/img/conditions.png" width="20"/> task node to canvas to create a new Conditions task, as shown in the figure below:
+
+  <p align="center">
+   <img src="/img/condition_dag_en.png" width="80%" />
+  </p>
+
+  <p align="center">
+   <img src="/img/condition_task_en.png" width="80%" />
+  </p>
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times the task failed to resubmit. You can select from drop-down or fill-in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. You can select from drop-down or fill-in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs longer than the "timeout period", an alarm email will be sent and the task execution will fail.
+- Downstream tasks selection: supports two branches success and failure.
+  - Success: When the upstream task runs successfully, run the success branch.
+  - Failure: When the upstream task runs failed, run the failure branch.
+- Upstream condition selection: can select one or more upstream tasks for conditions.
+  - Add an upstream dependency: the first parameter is to choose a specified task name, and the second parameter is to choose the upstream task status to trigger conditions.
+  - Select upstream task relationship: use `and` and `or` operators to handle the complex relationship of upstream when there are multiple upstream tasks for conditions.
+
+## Related Task
+
+[switch](switch.md): The Conditions task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream nodes, while the [Switch](switch.md) task node mainly executes the corresponding branch based on the value of a [global parameter](../parameter/global.md) and the result of a user-written expression.
diff --git a/docs/docs/en/guide/task/datax.md b/docs/docs/en/guide/task/datax.md
new file mode 100644
index 0000000..8f43510
--- /dev/null
+++ b/docs/docs/en/guide/task/datax.md
@@ -0,0 +1,63 @@
+# DataX
+
+## Overview
+
+DataX task type for executing DataX programs. For DataX nodes, the worker will execute `${DATAX_HOME}/bin/datax.py` to analyze the input json file.
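+
+Conceptually, the worker runs something close to the command below for each DataX node (a simplified sketch; the actual command line and the JSON path are generated by DolphinScheduler at runtime):
+
+```shell
+# Rough equivalent of what the worker executes for a DataX node.
+# The JSON path is illustrative; DolphinScheduler generates it per task instance.
+python ${DATAX_HOME}/bin/datax.py /tmp/dolphinscheduler/exec/datax_job.json
+```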
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/datax.png" width="15"/> task node to canvas.
+
+## Task Parameter
+
+- **Node name**: The node name in a workflow definition is unique.
+- **Run flag**: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which run the script.
+- **Times of failed retry attempts**: The number of times the task failed to resubmit.
+- **Failed retry interval**: The time interval (unit minute) for resubmitting the task after a failed task.
+- **Delayed execution time**: The time (unit minute) that a task delays in execution.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task runs longer than the "timeout period", an alarm email will be sent and the task execution will fail.
+- **Custom template**: Customize the content of the DataX node's JSON profile when the default DataSource provided does not meet the requirements.
+- **JSON**: JSON configuration file for DataX synchronization.
+- **Custom parameters**: SQL task type, and stored procedure is a custom parameter order, to set customized parameter type and data type for the method is the same as the stored procedure task type. The difference is that the custom parameter of the SQL task type replaces the `${variable}` in the SQL statement.
+- **Data source**: Select the data source to extract data.
+- **SQL statement**: The SQL statement used to extract data from the target database; the SQL query column names are automatically parsed when the node executes and mapped to the target table's column names for synchronization. When the column names of the source table and the target table are inconsistent, they can be converted using a column alias (`AS`).
+- **Target library**: Select the target library for data synchronization.
+- **Pre-SQL**: Pre-SQL executes before the SQL statement (executed by the target database).
+- **Post-SQL**: Post-SQL executes after the SQL statement (executed by the target database).
+- **Stream limit (number of bytes)**: Limit the number of bytes for a query.
+- **Limit flow (number of records)**: Limit the number of records for a query.
+- **Running memory**: Set the minimum and maximum memory required, which can be set according to the actual production environment.
+- **Predecessor task**: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+## Task Example
+
+This example demonstrates how to import data from Hive into MySQL.
+
+### Configure the DataX environment in DolphinScheduler
+
+If you are using the DataX task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
+
+![datax_task01](/img/tasks/demo/datax_task01.png)
+
+After finishing the environment configuration, you need to restart DolphinScheduler.
+
+### Configure DataX Task Node
+
+Since the default data source does not support reading data from Hive, a custom JSON is required; refer to [HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md). Note: partition directories exist on the HDFS path; when importing data in real-world situations, it is recommended to pass the partition as a custom parameter.
+
+After finishing the required JSON file, you can configure the node by following the steps in the diagram below:
+
+![datax_task02](/img/tasks/demo/datax_task02.png)
+
+### View Execution Result
+
+![datax_task03](/img/tasks/demo/datax_task03.png)
+
+### Notice
+
+If the default DataSource provided does not meet your needs, you can configure the writer and reader of the DataX according to the actual usage environment in the custom template options, available at [DataX](https://github.com/alibaba/DataX).
diff --git a/docs/docs/en/guide/task/dependent.md b/docs/docs/en/guide/task/dependent.md
new file mode 100644
index 0000000..ff86207
--- /dev/null
+++ b/docs/docs/en/guide/task/dependent.md
@@ -0,0 +1,27 @@
+# Dependent Node
+
+- Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B from yesterday, and the dependent node will check whether process B ran successfully yesterday.
+
+> Drag from the toolbar ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node to the canvas, as shown in the figure below:
+
+<p align="center">
+   <img src="/img/dependent-nodes-en.png" width="80%" />
+ </p>
+
+> The dependent node provides a logical judgment function, such as checking whether the B process was successful yesterday, or whether the C process was executed successfully.
+
+  <p align="center">
+   <img src="/img/depend-node-en.png" width="80%" />
+ </p>
+
+> For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:
+
+ <p align="center">
+   <img src="/img/depend-node1-en.png" width="80%" />
+ </p>
+
+> If the weekly report A also needs to be executed successfully last Tuesday:
+
+ <p align="center">
+   <img src="/img/depend-node3-en.png" width="80%" />
+ </p>
\ No newline at end of file
diff --git a/docs/docs/en/guide/task/emr.md b/docs/docs/en/guide/task/emr.md
new file mode 100644
index 0000000..354cc7b
--- /dev/null
+++ b/docs/docs/en/guide/task/emr.md
@@ -0,0 +1,60 @@
+# Amazon EMR
+
+## Overview
+
+Amazon EMR task type, for creating EMR clusters on AWS and running computing tasks. The background code uses the [aws-java-sdk](https://aws.amazon.com/cn/sdk-for-java/) to convert the JSON parameters into a [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object and submit it to AWS.
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times the task failed to resubmit. You can select from drop-down or fill-in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. You can select from drop-down or fill-in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- JSON: JSON corresponding to the [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object, for details refer to [API_RunJobFlow_Examples](https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html#API_RunJobFlow_Examples).
+
+## JSON example
+
+```json
+{
+  "Name": "SparkPi",
+  "ReleaseLabel": "emr-5.34.0",
+  "Applications": [
+    {
+      "Name": "Spark"
+    }
+  ],
+  "Instances": {
+    "InstanceGroups": [
+      {
+        "Name": "Primary node",
+        "InstanceRole": "MASTER",
+        "InstanceType": "m4.xlarge",
+        "InstanceCount": 1
+      }
+    ],
+    "KeepJobFlowAliveWhenNoSteps": false,
+    "TerminationProtected": false
+  },
+  "Steps": [
+    {
+      "Name": "calculate_pi",
+      "ActionOnFailure": "CONTINUE",
+      "HadoopJarStep": {
+        "Jar": "command-runner.jar",
+        "Args": [
+          "/usr/lib/spark/bin/run-example",
+          "SparkPi",
+          "15"
+        ]
+      }
+    }
+  ],
+  "JobFlowRole": "EMR_EC2_DefaultRole",
+  "ServiceRole": "EMR_DefaultRole"
+}
+```
+
diff --git a/docs/docs/en/guide/task/flink.md b/docs/docs/en/guide/task/flink.md
new file mode 100644
index 0000000..ab4f76b
--- /dev/null
+++ b/docs/docs/en/guide/task/flink.md
@@ -0,0 +1,69 @@
+# Flink Node
+
+## Overview
+
+Flink task type for executing Flink programs. For Flink nodes, the worker submits the task by using the Flink command `flink run`. See [flink cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/) for more details.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/flink.png" width="15"/> task node to the canvas.
+
+## Task Parameter
+
+- **Node name**: The node name in a workflow definition is unique.
+- **Run flag**: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which run the script.
+- **Times of failed retry attempts**: The number of times the task failed to resubmit.
+- **Failed retry interval**: The time interval (unit minute) for resubmitting the task after a failed task.
+- **Delayed execution time**: The time (unit minute) that a task delays in execution.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- **Program type**: Supports Java, Scala and Python.
+- **The class of main function**: The **full path** of Main Class, the entry point of the Flink program.
+- **Main jar package**: The jar package of the Flink program (upload by Resource Center).
+- **Deployment mode**: Support 2 deployment modes: cluster and local.
+- **Flink version**: Select version according to the execution env.
+- **Task name** (optional): Flink task name.
+- **JobManager memory size**: Used to set the size of jobManager memories, which can be set according to the actual production environment.
+- **Number of slots**: Used to set the number of slots, which can be set according to the actual production environment.
+- **TaskManager memory size**: Used to set the size of taskManager memories, which can be set according to the actual production environment.
+- **Number of TaskManager**: Used to set the number of taskManagers, which can be set according to the actual production environment.
+- **Parallelism**: Used to set the degree of parallelism for executing Flink tasks.
+- **Main program parameters**: Set the input parameters for the Flink program and support the substitution of custom parameter variables.
+- **Optional parameters**: Support `--jar`, `--files`, `--archives`, `--conf` format.
+- **Resource**: Appoint resource files in the `Resource` if parameters refer to them.
+- **Custom parameter**: It is a local user-defined parameter for Flink, and will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+## Task Example
+
+### Execute the WordCount Program
+
+This is a common introductory case in the big data ecosystem, which often applies to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text. (Flink releases ship with this example job.)
+
+#### Configure the Flink Environment in DolphinScheduler
+
+If you are using the Flink task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
+
+![demo-flink-simple](/img/tasks/demo/flink_task01.png)
+
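+A minimal sketch of the Flink-related entries in this file is shown below; the installation path is an assumption and should match your actual environment:
+
+```shell
+# Sketch of dolphinscheduler_env.sh entries for the Flink task type (path is an example)
+export FLINK_HOME=/opt/soft/flink
+export PATH=$FLINK_HOME/bin:$PATH
+```
+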
+#### Upload the Main Package
+
+When using the Flink task node, you need to upload the jar package to the Resource Center before execution; refer to the [resource center](../resource.md).
+
+After finishing the Resource Center configuration, upload the required target files directly by dragging and dropping.
+
+![resource_upload](/img/tasks/demo/upload_jar.png)
+
+#### Configure Flink Nodes
+
+Configure the required content according to the parameter descriptions above.
+
+![demo-flink-simple](/img/tasks/demo/flink_task02.png)
+
+## Notice
+
+Java and Scala are only used for identification, and there is no difference between them. If you use Python to develop the Flink program, there is no class of the main function and the rest is the same.
diff --git a/docs/docs/en/guide/task/http.md b/docs/docs/en/guide/task/http.md
new file mode 100644
index 0000000..3034913
--- /dev/null
+++ b/docs/docs/en/guide/task/http.md
@@ -0,0 +1,47 @@
+# HTTP Node
+
+## Overview
+
+This node is used to perform HTTP tasks such as the common POST and GET request types, and also supports validation of the HTTP response and other functions.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag the <img src="/img/tasks/icons/http.png" width="15"/> task node from the toolbar to the canvas.
+
+## Task Parameter
+
+- **Node name**: The node name in a workflow definition is unique.
+- **Run flag**: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
+- **Descriptive information**: describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
+- **Worker grouping**: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
+- **Environment Name**: Configure the environment name in which to run the script.
+- **Number of failed retry attempts**: The number of times the task failed to be resubmitted.
+- **Failed retry interval**: The time interval, in minutes, for resubmitting the task after a failure.
+- **Delayed execution time**: The time, in minutes, that a task is delayed in execution.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- **Request address**: HTTP request URL.
+- **Request type**: Supports GET, POST, HEAD, PUT, and DELETE.
+- **Request parameters**: Support Parameter, Body, Headers.
+- **Verification conditions**: Supports default response code, custom response code, content included, and content not included.
+- **Verification content**: When the verification condition is custom response code, content included, or content not included, the verification content is required.
+- **Custom parameter**: It is a user-defined parameter of http part, which will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+
+## Example
+
+HTTP defines different methods of interacting with the server; the most basic are GET, POST, PUT and DELETE. Here we use the HTTP task node to demonstrate the use of POST to send a request to the system's login page and submit data.
+
+The main configuration parameters are as follows:
+
+- URL: Address to access the target resource. Here is the system's login page.
+- HTTP Parameters:
+     - userName: Username
+     - userPassword: User login password
+
+![http_task](/img/tasks/demo/http_task01.png)
+
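+For reference, the request this task issues is roughly equivalent to the following `curl` call; the host, port, endpoint path and parameter values below are placeholders, not the actual API of a given deployment:
+
+```shell
+# Hedged equivalent of the configured HTTP task: POST the two form parameters to the login page
+curl -X POST "http://<host>:<port>/<login-endpoint>" \
+     -d "userName=<user>" \
+     -d "userPassword=<password>"
+```
+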
+## Notice
+
+None
diff --git a/docs/docs/en/guide/task/map-reduce.md b/docs/docs/en/guide/task/map-reduce.md
new file mode 100644
index 0000000..5ea69ba
--- /dev/null
+++ b/docs/docs/en/guide/task/map-reduce.md
@@ -0,0 +1,73 @@
+# MapReduce Node
+
+## Overview
+
+The MapReduce (MR) task type is used for executing MapReduce programs. For MapReduce nodes, the worker submits the task by using the Hadoop command `hadoop jar`. See [Hadoop Command Manual](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#jar) for more details.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/mr.png" width="15"/> to the canvas.
+
+## Task Parameter
+
+- **Node name**: The node name in a workflow definition is unique.
+- **Run flag**: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- **Worker grouping**:  Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which run the script.
+- **Times of failed retry attempts**: The number of times the task failed to resubmit.
+- **Failed retry interval**: The time interval (unit minute) for resubmitting the task after a failed task.
+- **Delayed execution time**: The time (unit minute) that a task delays in execution.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- **Resource**: Refers to the list of resource files that called in the script, and upload or create files by the Resource Center file management.
+- **Custom parameters**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+### JAVA or SCALA Program
+
+- **Program type**: Select JAVA or SCALA program.
+- **The class of the main function**: The **full path** of Main Class, the entry point of the MapReduce program.
+- **Main jar package**: The jar package of the MapReduce program.
+- **Task name** (optional): MapReduce task name.
+- **Command line parameters**: Set the input parameters of the MapReduce program and support the substitution of custom parameter variables.
+- **Other parameters**: support `-D`, `-files`, `-libjars`, `-archives` format.
+- **Resource**: Appoint resource files in the `Resource` if parameters refer to them.
+- **User-defined parameter**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
+
+### Python Program
+
+- **Program type**: Select Python language.
+- **Main jar package**: The Python jar package for running MapReduce.
+- **Other parameters**: Support `-D`, `-mapper`, `-reducer`, `-input`, `-output` format, and you can set the input of user-defined parameters, such as:
+- `-mapper "mapper.py 1"` `-file mapper.py` `-reducer reducer.py` `-file reducer.py` `-input /journey/words.txt` `-output /journey/out/mr/\${currentTimeMillis}`
+- The `mapper.py 1` after `-mapper` is two parameters: the first parameter is `mapper.py`, and the second parameter is `1` (a hedged sketch of the full command follows this list).
+- **Resource**: Appoint resource files in the `Resource` if parameters refer to them.
+- **User-defined parameter**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
+
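+Putting the options above together, the command the worker ends up running is conceptually similar to the sketch below; the streaming jar name and file paths are assumptions that depend on your Hadoop distribution:
+
+```shell
+# Conceptual sketch of the resulting Hadoop streaming invocation (jar name and paths are examples);
+# ${currentTimeMillis} is substituted by DolphinScheduler before execution
+hadoop jar hadoop-streaming.jar \
+  -D mapreduce.job.name=wordcount \
+  -mapper "mapper.py 1" -file mapper.py \
+  -reducer reducer.py -file reducer.py \
+  -input /journey/words.txt \
+  -output /journey/out/mr/${currentTimeMillis}
+```
+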
+## Task Example
+
+### Execute the WordCount Program
+
+This example is a common introductory type of MapReduce application, which is used to count the number of identical words in the input text.
+
+#### Configure the MapReduce Environment in DolphinScheduler
+
+If you are using the MapReduce task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
+
+![mr_configure](/img/tasks/demo/mr_task01.png)
+
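+A minimal sketch of the Hadoop-related entries in this file is shown below; the installation paths are assumptions and should match your actual environment:
+
+```shell
+# Sketch of dolphinscheduler_env.sh entries for the MapReduce task type (paths are examples)
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export PATH=$HADOOP_HOME/bin:$PATH
+```
+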
+#### Upload the Main Package
+
+When using the MapReduce task node, you need to upload the jar package to the Resource Center before execution. Refer to the [resource center](../resource.md).
+
+After finishing the Resource Center configuration, upload the required target files directly by dragging and dropping.
+
+![resource_upload](/img/tasks/demo/upload_jar.png)
+
+#### Configure MapReduce Nodes
+
+Configure the required content according to the parameter descriptions above.
+
+![demo-mr-simple](/img/tasks/demo/mr_task02.png)
diff --git a/docs/docs/en/guide/task/pigeon.md b/docs/docs/en/guide/task/pigeon.md
new file mode 100644
index 0000000..a726f1a
--- /dev/null
+++ b/docs/docs/en/guide/task/pigeon.md
@@ -0,0 +1,19 @@
+# Pigeon
+
+Pigeon is a task type used to trigger remote tasks and acquire logs or status by calling a remote WebSocket service. In other words, DolphinScheduler uses a remote WebSocket service to call the task.
+
+## Create
+
+Drag from the toolbar <img src="/img/pigeon.png" width="20"/> to the canvas to create a new Pigeon task.
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times the task failed to resubmit. You can select from drop-down or fill-in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. You can select from drop-down or fill-in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- Target task name: Target task name of this Pigeon node.
\ No newline at end of file
diff --git a/docs/docs/en/guide/task/python.md b/docs/docs/en/guide/task/python.md
new file mode 100644
index 0000000..56df104
--- /dev/null
+++ b/docs/docs/en/guide/task/python.md
@@ -0,0 +1,15 @@
+# Python Node
+
+- Using Python nodes, you can directly execute Python scripts. For Python nodes, the worker submits the task by running the `python` command.
+
+> Drag from the toolbar ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node to the canvas, as shown in the following figure:
+
+<p align="center">
+   <img src="/img/python-en.png" width="80%" />
+ </p>
+
+- Script: Python program developed by the user.
+- Environment Name: The specific Python interpreter path for running the script. If you need to use a Python **virtualenv**, you should create a separate environment for each **virtualenv**.
+- Resources: Refers to the list of resource files that need to be called in the script.
+- User-defined parameter: It is a user-defined local parameter of Python, and will replace the content with `${variable}` in the script.
+- Note: If you import a Python file from the resource directory tree, you need to add an `__init__.py` file.
diff --git a/docs/docs/en/guide/task/shell.md b/docs/docs/en/guide/task/shell.md
new file mode 100644
index 0000000..a026d2f
--- /dev/null
+++ b/docs/docs/en/guide/task/shell.md
@@ -0,0 +1,43 @@
+# Shell
+
+## Overview
+
+The shell task type is used to execute a series of shell scripts. When the worker runs a shell task, a temporary shell script is generated and executed by the Linux user with the same name as the tenant.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/shell.png" width="15"/> to the canvas.
+
+## Task Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Environment Name: Configure the environment name in which run the script.
+- Times of failed retry attempts: The number of times the task failed to resubmit. You can select from drop-down or fill-in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. You can select from drop-down or fill-in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- Script: Shell program developed by users.
+- Resource: Refers to the list of resource files that called in the script, and upload or create files by the Resource Center file management.
+- Custom parameters: It is a user-defined local parameter of Shell, and will replace the content with `${variable}` in the script.
+- Predecessor task: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+## Task Example
+
+### Simply Print
+
+This example simulates a common task that runs a single command: printing the line "This is a demo of shell task" to the log, as shown in the following figure.
+
+![demo-shell-simple](/img/tasks/demo/shell.jpg)
+
+### Custom Parameters
+
+This example simulates a custom parameter task. We use custom parameters to reuse existing tasks as templates or to cope with dynamic tasks. In this case,
+we declare a custom parameter named "param_key" with the value "param_val", then use `echo` to print the parameter "${param_key}" we just declared.
+After running this example, we should see "param_val" printed in the log (a minimal script sketch follows the screenshot).
+
+![demo-shell-custom-param](/img/tasks/demo/shell_custom_param.jpg)
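+
+The script body of such a task can be as small as the sketch below; `param_key` is the custom parameter declared on the task:
+
+```shell
+# Print the declared custom parameter; DolphinScheduler substitutes ${param_key} before execution
+echo "${param_key}"
+```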
diff --git a/docs/docs/en/guide/task/spark.md b/docs/docs/en/guide/task/spark.md
new file mode 100644
index 0000000..7e02544
--- /dev/null
+++ b/docs/docs/en/guide/task/spark.md
@@ -0,0 +1,68 @@
+# Spark Node
+
+## Overview
+
+The Spark task type is used to execute Spark programs. For Spark nodes, the worker submits the task by using the Spark command `spark-submit`. See [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit) for more details.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/spark.png" width="15"/> to the canvas.
+
+## Task Parameter
+
+- **Node name**: The node name in a workflow definition is unique.
+- **Run flag**: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which run the script.
+- **Times of failed retry attempts**: The number of times the task failed to resubmit.
+- **Failed retry interval**: The time interval (unit minute) for resubmitting the task after a failed task.
+- **Delayed execution time**: The time (unit minute) that a task delays in execution.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- **Program type**: Supports Java, Scala and Python.
+- **Spark version**: Support Spark1 and Spark2.
+- **The class of main function**: The **full path** of Main Class, the entry point of the Spark program.
+- **Main jar package**: The Spark jar package (upload by Resource Center).
+- **Deployment mode**: Support 3 deployment modes: yarn-cluster, yarn-client and local. 
+- **Task name** (optional): Spark task name.
+- **Driver core number**: Set the number of Driver core, which can be set according to the actual production environment.
+- **Driver memory size**: Set the size of Driver memories, which can be set according to the actual production environment.
+- **Number of Executor**: Set the number of Executor, which can be set according to the actual production environment.
+- **Executor memory size**: Set the size of Executor memories, which can be set according to the actual production environment.
+- **Main program parameters**: Set the input parameters of the Spark program and support the substitution of custom parameter variables.
+- **Optional parameters**: Support `--jars`, `--files`, `--archives`, `--conf` format.
+- **Resource**: Appoint resource files in the `Resource` if parameters refer to them.
+- **Custom parameter**: It is a local user-defined parameter for Spark, and will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+## Task Example
+
+### Execute the WordCount Program
+
+This is a common introductory case in the big data ecosystem, which often applies to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text. (The Spark distribution ships with this example job.)
+
+#### Configure the Spark Environment in DolphinScheduler
+
+If you are using the Spark task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
+
+![spark_configure](/img/tasks/demo/spark_task01.png)
+
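+A minimal sketch of the Spark-related entries in this file is shown below; the installation paths are assumptions and should match your actual environment:
+
+```shell
+# Sketch of dolphinscheduler_env.sh entries for the Spark task type (paths are examples)
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PATH=$SPARK_HOME2/bin:$PATH
+```
+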
+#### Upload the Main Package
+
+When using the Spark task node, you need to upload the jar package to the Resource Center before execution; refer to the [resource center](../resource.md).
+
+After finishing the Resource Center configuration, upload the required target files directly by dragging and dropping.
+
+![resource_upload](/img/tasks/demo/upload_jar.png)
+
+#### Configure Spark Nodes
+
+Configure the required content according to the parameter descriptions above.
+
+![demo-spark-simple](/img/tasks/demo/spark_task02.png)
+
+## Notice
+
+Java and Scala are only used for identification, and there is no difference between them. If you use Python to develop the Spark program, there is no class of the main function and the rest is the same.
diff --git a/docs/docs/en/guide/task/sql.md b/docs/docs/en/guide/task/sql.md
new file mode 100644
index 0000000..9c62245
--- /dev/null
+++ b/docs/docs/en/guide/task/sql.md
@@ -0,0 +1,43 @@
+# SQL
+
+## Overview
+
+The SQL task type is used to connect to databases and execute SQL.
+
+## Create DataSource
+
+Refer to [DataSource](../datasource/introduction.md)
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/sql.png" width="25"/> to the canvas.
+
+## Task Parameter
+
+- Data source: Select the corresponding DataSource.
+- SQL type: Supports query and non-query. A query is a `select`-type statement that returns a result set; you can specify one of three templates for the email notification: form, attachment, or form plus attachment. Non-query statements return no result set and cover three types of operations: update, delete, and insert.
+- SQL parameter: The input parameter format is `key1=value1;key2=value2...`.
+- SQL statement: SQL statement.
+- UDF function: For Hive DataSources, you can refer to UDF functions created in the resource center; other DataSources do not support UDF functions.
+- Custom parameters: The way to set custom parameter types and data types is the same as for the stored procedure task type. The difference is that the custom parameters of the SQL task type replace the `${variable}` placeholders in the SQL statement.
+- Pre-SQL: Pre-SQL executes before the SQL statement.
+- Post-SQL: Post-SQL executes after the SQL statement.
+
+## Task Example
+
+### Create a Temporary Table in Hive and Write Data
+
+This example creates a temporary table `tmp_hello_world` in Hive and writes a row of data. Before creating the temporary table, we need to ensure that it does not exist, so we use a custom parameter to obtain the current date as the suffix of the table name each time the task runs; this way the task can run on different days. The created table name has the format `tmp_hello_world_{yyyyMMdd}`.
+
+![hive-sql](/img/tasks/demo/hive-sql.png)
+
+### After Running the Task Successfully, Query the Results in Hive
+
+Log in to the big data cluster and connect to Apache Hive through the `hive` command, `beeline`, JDBC, or other methods to run the query. The query SQL is `select * from tmp_hello_world_{yyyyMMdd}`; please replace `{yyyyMMdd}` with the date of the running day. The following shows the query screenshot (a command-line sketch follows it):
+
+![hive-sql](/img/tasks/demo/hive-result.png)
+
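+From a shell on the cluster, the check could look like the sketch below; the date suffix and the HiveServer2 address are assumptions that depend on when and where the task ran:
+
+```shell
+# Query the temporary table created by the task (replace the date suffix with the actual run day)
+hive -e "select * from tmp_hello_world_20220330"
+# or, through beeline (the JDBC URL depends on your HiveServer2 setup)
+# beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "select * from tmp_hello_world_20220330"
+```
+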
+## Notice
+
+Pay attention to the selection of the SQL type. If it is an insert operation, you need to change the type to "Non-Query".
diff --git a/docs/docs/en/guide/task/stored-procedure.md b/docs/docs/en/guide/task/stored-procedure.md
new file mode 100644
index 0000000..ea2ed48
--- /dev/null
+++ b/docs/docs/en/guide/task/stored-procedure.md
@@ -0,0 +1,13 @@
+# Stored Procedure
+
+- Execute the stored procedure according to the selected DataSource.
+
+> Drag from the toolbar ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node into the canvas, as shown in the figure below:
+
+<p align="center">
+   <img src="/img/procedure-en.png" width="80%" />
+ </p>
+
+- DataSource: The DataSource type of the stored procedure supports MySQL and PostgreSQL; select the corresponding DataSource.
+- Method: The method name of the stored procedure.
+- Custom parameters: The custom parameter types of the stored procedure support `IN` and `OUT`, and the data types support: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN.
\ No newline at end of file
diff --git a/docs/docs/en/guide/task/sub-process.md b/docs/docs/en/guide/task/sub-process.md
new file mode 100644
index 0000000..c1d3b2f
--- /dev/null
+++ b/docs/docs/en/guide/task/sub-process.md
@@ -0,0 +1,46 @@
+# SubProcess Node
+
+## Overview
+
+The sub-process node is to execute an external workflow definition as a task node.
+
+## Create Task
+
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/sub_process.png" width="15"/> task node to canvas to create a new SubProcess task.
+
+## Task Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Environment Name: Configure the environment name in which run the script.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- Sub-node: It is the workflow definition of the selected sub-process. Enter the sub-node in the upper right corner to jump to the workflow definition of the selected sub-process.
+- Predecessor task: Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task.
+
+## Task Example
+
+This example simulates a common task type: here we use a sub-process task to call the [Shell](shell.md) task and print out "hello". That is, we execute a shell task as a child node.
+
+### Create a Shell task
+
+Create a shell task to print "hello" and define the workflow as `test_dag01`.
+
+![subprocess_task01](/img/tasks/demo/subprocess_task01.png)
+
+### Create the Sub_process Task
+
+To use the sub_process, you need to create the sub-node task, which is the shell task we created in the first step. After that, as shown in the diagram below, select the corresponding sub-node in position ⑤.
+
+![subprocess_task02](/img/tasks/demo/subprocess_task02.png)
+
+After creating the sub_process, create a corresponding shell task for printing "world" and link both together. Save the current workflow and run it to get the expected result.
+
+![subprocess_task03](/img/tasks/demo/subprocess_task03.png)
+
+## Notice
+
+When using `sub_process` to call a sub-node task, you need to ensure that the defined sub-node is in online status; otherwise, the sub_process workflow will not work properly.
diff --git a/docs/docs/en/guide/task/switch.md b/docs/docs/en/guide/task/switch.md
new file mode 100644
index 0000000..0f911ed
--- /dev/null
+++ b/docs/docs/en/guide/task/switch.md
@@ -0,0 +1,39 @@
+# Switch
+
+The switch task is a conditional judgment node that decides which branch to execute according to the value of a [global variable](../parameter/global.md) and the result of the judgment expression written by the user.
+
+## Create
+
+Drag from the toolbar <img src="/img/switch.png" width="20"/>  task node to canvas to create a task. 
+**Note** After created a switch task, you must first configure the upstream and downstream, then configure the parameter of task branches.
+
+## Parameter
+
+- Node name: The node name in a workflow definition is unique.
+- Run flag: Identifies whether this node schedules normally, if it does not need to execute, select the `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, execute in the order of priority from high to low, and tasks with the same priority will execute in a first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times the task failed to resubmit. You can select from drop-down or fill-in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failed task. You can select from drop-down or fill-in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runs exceed the "timeout", an alarm email will send and the task execution will fail.
+- Condition: You can configure multiple conditions for the switch task. When the conditions are satisfied, execute the configured branch. You can configure multiple different conditions to satisfy different businesses.
+- Branch flow: The default branch flow, when all the conditions are not satisfied, execute this branch flow.
+
+## Detail
+
+Here we have four tasks whose dependencies are `A -> B -> [C, D]`, where `task_a` is a shell task and `task_b` is a switch task.
+
+- In task A, a global variable named `id` is defined through the [global variable](../parameter/global.md) mechanism, and the declaration method is `${setValue(id=1)}` (a minimal script sketch follows this list)
+- Task B adds conditions and uses global variables declared upstream to achieve conditional judgment (Note: switch can get the global variables value, as long as its direct or indirect upstream have already assigned the global variables before switch acquires). We want to execute task C when `id = 1`, otherwise run task D
+  - Configure task C to run when the global variable `id=1`. Then edit `${id} == 1` in the condition of task B, and select `C` as branch flow
+  - For other tasks, select `D` as branch flow
+
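+A minimal sketch of what the shell script of task A could look like is shown below; the variable name and value follow the example above:
+
+```shell
+# task_a: declare the global variable `id` so that the downstream switch task can evaluate `${id} == 1`
+echo '${setValue(id=1)}'
+```
+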
+The following shows the switch task configuration:
+
+![task-switch-configure](../../../../../../img/switch_configure.jpg)
+
+## Related Task
+
+The [Condition](conditions.md) task mainly executes the corresponding branch based on the execution result status (success or failure) of the upstream node.
+The [Switch](switch.md) task mainly executes the corresponding branch based on the value of the [global parameter](../parameter/global.md) and the result of the judgment expression written by the user.
\ No newline at end of file
diff --git a/docs/docs/en/guide/upgrade.md b/docs/docs/en/guide/upgrade.md
new file mode 100644
index 0000000..b401ce1
--- /dev/null
+++ b/docs/docs/en/guide/upgrade.md
@@ -0,0 +1,84 @@
+# DolphinScheduler Upgrade Documentation
+
+## Back-Up Previous Version's Files and Database
+
+## Stop All Services of DolphinScheduler
+
+ `sh ./script/stop-all.sh`
+
+## Download the Newest Version Installation Package
+
+- [download](/en-us/download/download.html) the latest version of the installation packages.
+- The following upgrade operations need to be performed in the new version's directory.
+
+## Database Upgrade
+
+- Modify the following properties in `conf/datasource.properties`.
+
+- If using MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL related configurations, add the MySQL connector jar into the lib directory (here we use `mysql-connector-java-8.0.16.jar`; a sketch follows the properties block below), and then correctly configure the database connection information. You can download the MySQL connector jar from [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use PostgreSQL as the database, you just need to comment out the MySQL related configurations and correctly configure the database connection information.
+
+    ```properties
+      # postgre
+      #spring.datasource.driver-class-name=org.postgresql.Driver
+      #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
+      # mysql
+      spring.datasource.driver-class-name=com.mysql.jdbc.Driver
+      spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
+      spring.datasource.username=xxx
+      spring.datasource.password=xxx
+    ```
+
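+- If you use MySQL, place the connector jar before running the upgrade script. A minimal sketch, assuming the jar has been downloaded into the root directory of the new version:
+
+    ```shell
+    # Copy the MySQL connector into the lib directory of the new DolphinScheduler version
+    cp mysql-connector-java-8.0.16.jar ./lib/
+    ```
+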
+- Execute database upgrade script:
+
+    `sh ./script/upgrade-dolphinscheduler.sh`
+
+## Backend Service Upgrade
+
+### Modify the Content in `conf/config/install_config.conf` File
+
+- Standalone Deployment please refer to the [Standalone-Deployment](./installation/standalone.md).
+- Cluster Deployment please refer to the [Cluster-Deployment](./installation/cluster.md).
+
+#### Notes for Masters
+
+Creating worker groups has a different design in version 1.3.1 and later:
+
+- Before version 1.3.1, worker groups could be created through the UI.
+- Since version 1.3.1, worker groups are created by modifying the worker configuration.
+
+#### When Upgrading from a Version Before 1.3.1 to 1.3.2, Do the Following to Keep the Worker Group Configuration Consistent with the Previous Version
+
+1. Go to the backup database and search the records in the `t_ds_worker_group` table, focusing mainly on the `id`, `name` and `ip_list` columns.
+
+| id | name | ip_list    |
+| :---         |     :---:      |          ---: |
+| 1   | service1     | 192.168.xx.10    |
+| 2   | service2     | 192.168.xx.11,192.168.xx.12      |
+
+2. Modify the worker configuration in the `conf/config/install_config.conf` file.
+
+Assume below are the machines where the worker service is to be deployed:
+
+| hostname | ip |
+| :---  | :---:  |
+| ds1   | 192.168.xx.10     |
+| ds2   | 192.168.xx.11     |
+| ds3   | 192.168.xx.12     |
+
+To keep worker group config consistent with the previous version, we need to modify workers configuration as below:
+
+```shell
+# Specify which machine each worker service is deployed on, and which worker group the worker belongs to.
+workers="ds1:service1,ds2:service2,ds3:service2"
+```
+
+#### The Worker Group has Been Enhanced in Version 1.3.2
+
+In 1.3.1, a worker cannot belong to more than one worker group; in 1.3.2 this is supported. So `workers="ds1:service1,ds1:service2"` is not supported in 1.3.1 but is supported in 1.3.2.
+
+### Execute Deploy Script
+
+```shell
+sh install.sh
+```
+
+
diff --git a/docs/docs/en/history-versions.md b/docs/docs/en/history-versions.md
new file mode 100644
index 0000000..bf2218e
--- /dev/null
+++ b/docs/docs/en/history-versions.md
@@ -0,0 +1,70 @@
+# History Versions
+#### Below are the setup instructions for each stable version of Apache DolphinScheduler.
+
+### Version: 2.0.5
+
+#### Link: [2.0.5 Documentation](../2.0.5/user_doc/guide/quick-start.md)
+
+### Version: 2.0.3
+
+#### Link: [2.0.3 Documentation](../2.0.3/user_doc/guide/quick-start.md)
+
+### Version: 2.0.2
+
+#### Link: [2.0.2 Documentation](../2.0.2/user_doc/guide/quick-start.md)
+
+### Version: 2.0.1
+
+#### Link: [2.0.1 Documentation](../2.0.1/user_doc/guide/quick-start.md)
+
+### Version: 2.0.0
+
+#### Link: [2.0.0 Documentation](../2.0.0/user_doc/guide/quick-start.md)
+
+### Version: 1.3.9
+
+#### Link: [1.3.9 Documentation](../1.3.9/user_doc/quick-start.md)
+
+### Version: 1.3.8
+
+#### Link: [1.3.8 Documentation](../1.3.8/user_doc/quick-start.md)
+
+### Version: 1.3.6
+
+#### Link: [1.3.6 Documentation](../1.3.6/user_doc/quick-start.md)
+
+### Version: 1.3.5
+
+#### Link: [1.3.5 Documentation](../1.3.5/user_doc/quick-start.md)
+
+### Version: 1.3.4
+
+#### Link: [1.3.4 Documentation](../1.3.4/user_doc/quick-start.md)
+
+### Version: 1.3.3
+
+#### Link: [1.3.3 Documentation](../1.3.4/user_doc/quick-start.md)
+
+### Version: 1.3.2
+
+#### Link: [1.3.2 Documentation](../1.3.2/user_doc/quick-start.md)
+
+### Version: 1.3.1
+
+#### Link: [1.3.1 Documentation](../1.3.1/user_doc/quick-start.md)
+
+### Version: 1.2.1
+
+#### Link: [1.2.1 Documentation](../1.2.1/user_doc/quick-start.md)
+
+### Version: 1.2.0
+
+#### Link: [1.2.0 Documentation](../1.2.0/user_doc/quick-start.md)
+
+### Version: 1.1.0
+
+#### Link: [1.1.0 Documentation](../1.2.0/user_doc/quick-start.md)
+
+### Version: Dev
+
+#### Link: [Dev Documentation](../dev/user_doc/about/introduction.md)
diff --git a/docs/docs/zh/about/glossary.md b/docs/docs/zh/about/glossary.md
new file mode 100644
index 0000000..bcd28ac
--- /dev/null
+++ b/docs/docs/zh/about/glossary.md
@@ -0,0 +1,58 @@
+## 名词解释
+
+在对Apache DolphinScheduler了解之前,我们先来认识一下调度系统常用的名词
+
+### 1.名词解释
+
+**DAG:** 全称Directed Acyclic Graph,简称DAG。工作流中的Task任务以有向无环图的形式组装起来,从入度为零的节点进行拓扑遍历,直到无后继节点为止。举例如下图:
+
+<p align="center">
+  <img src="/img/dag_examples_cn.jpg" alt="dag示例"  width="60%" />
+  <p align="center">
+        <em>dag示例</em>
+  </p>
+</p>
+
+**流程定义**:通过拖拽任务节点并建立任务节点的关联所形成的可视化**DAG**
+
+**流程实例**:流程实例是流程定义的实例化,可以通过手动启动或定时调度生成,流程定义每运行一次,产生一个流程实例
+
+**任务实例**:任务实例是流程定义中任务节点的实例化,标识着具体的任务执行状态
+
+**任务类型**:目前支持有SHELL、SQL、SUB_PROCESS(子流程)、PROCEDURE、MR、SPARK、PYTHON、DEPENDENT(依赖),同时计划支持动态插件扩展,注意:其中 **SUB_PROCESS**
+也是一个单独的流程定义,是可以单独启动执行的
+
+**调度方式**:系统支持基于cron表达式的定时调度和手动调度。命令类型支持:启动工作流、从当前节点开始执行、恢复被容错的工作流、恢复暂停流程、从失败节点开始执行、补数、定时、重跑、暂停、停止、恢复等待线程。
+其中 **恢复被容错的工作流** 和 **恢复等待线程** 两种命令类型是由调度内部控制使用,外部无法调用
+
+**定时调度**:系统采用 **quartz** 分布式调度器,并同时支持cron表达式可视化的生成
+
+**依赖**:系统不单单支持 **DAG** 简单的前驱和后继节点之间的依赖,同时还提供**任务依赖**节点,支持**流程间的自定义任务依赖**
+
+**优先级** :支持流程实例和任务实例的优先级,如果流程实例和任务实例的优先级不设置,则默认是先进先出
+
+**邮件告警**:支持 **SQL任务** 查询结果邮件发送,流程实例运行结果邮件告警及容错告警通知
+
+**失败策略**:对于并行运行的任务,如果有任务失败,提供两种失败策略处理方式,**继续**是指不管并行运行任务的状态,直到流程失败结束。**结束**是指一旦发现失败任务,则同时Kill掉正在运行的并行任务,流程失败结束
+
+**补数**:补历史数据,支持**区间并行和串行**两种补数方式
+
+### 2.模块介绍
+
+- dolphinscheduler-alert 告警模块,提供 AlertServer 服务。
+
+- dolphinscheduler-api web应用模块,提供 ApiServer 服务。
+
+- dolphinscheduler-common 通用的常量枚举、工具类、数据结构或者基类
+
+- dolphinscheduler-dao 提供数据库访问等操作。
+
+- dolphinscheduler-remote 基于 netty 的客户端、服务端
+
+- dolphinscheduler-server MasterServer 和 WorkerServer 服务
+
+- dolphinscheduler-service service模块,包含Quartz、Zookeeper、日志客户端访问服务,便于server模块和api模块调用
+
+- dolphinscheduler-ui 前端模块
+
+
diff --git a/docs/docs/zh/about/hardware.md b/docs/docs/zh/about/hardware.md
new file mode 100644
index 0000000..1ec2f24
--- /dev/null
+++ b/docs/docs/zh/about/hardware.md
@@ -0,0 +1,47 @@
+# 软硬件环境建议配置
+
+DolphinScheduler 作为一款开源分布式工作流任务调度系统,可以很好地部署和运行在 Intel 架构服务器及主流虚拟化环境下,并支持主流的Linux操作系统环境
+
+## 1. Linux 操作系统版本要求
+
+| 操作系统       | 版本         |
+| :----------------------- | :----------: |
+| Red Hat Enterprise Linux | 7.0 及以上   |
+| CentOS                   | 7.0 及以上   |
+| Oracle Enterprise Linux  | 7.0 及以上   |
+| Ubuntu LTS               | 16.04 及以上 |
+
+> **注意:**
+>以上 Linux 操作系统可运行在物理服务器以及 VMware、KVM、XEN 主流虚拟化环境上
+
+## 2. 服务器建议配置
+DolphinScheduler 支持运行在 Intel x86-64 架构的 64 位通用硬件服务器平台。对生产环境的服务器硬件配置有以下建议:
+### 生产环境
+
+| **CPU** | **内存** | **硬盘类型** | **网络** | **实例数量** |
+| --- | --- | --- | --- | --- |
+| 4核+ | 8 GB+ | SAS | 千兆网卡 | 1+ |
+
+> **注意:**
+> - 以上建议配置为部署 DolphinScheduler 的最低配置,生产环境强烈推荐使用更高的配置
+> - 硬盘大小配置建议 50GB+ ,系统盘和数据盘分开
+
+
+## 3. 网络要求
+
+DolphinScheduler正常运行提供如下的网络端口配置:
+
+| 组件 | 默认端口 | 说明 |
+|  --- | --- | --- |
+| MasterServer |  5678  | 非通信端口,只需本机端口不冲突即可 |
+| WorkerServer | 1234  | 非通信端口,只需本机端口不冲突即可 |
+| ApiApplicationServer |  12345 | 提供后端通信端口 |
+
+
+> **注意:**
+> - MasterServer 和 WorkerServer 不需要开启网络间通信,只需本机端口不冲突即可
+> - 管理员可根据实际环境中 DolphinScheduler 组件部署方案,在网络侧和主机侧开放相关端口
+
+## 4. 客户端 Web 浏览器要求
+
+DolphinScheduler 推荐 Chrome 以及使用 Chromium 内核的较新版本浏览器访问前端可视化操作界面
\ No newline at end of file
diff --git a/docs/docs/zh/about/introduction.md b/docs/docs/zh/about/introduction.md
new file mode 100644
index 0000000..578ce51
--- /dev/null
+++ b/docs/docs/zh/about/introduction.md
@@ -0,0 +1,12 @@
+# 关于DolphinScheduler
+
+Apache DolphinScheduler是一个分布式易扩展的可视化DAG工作流任务调度开源系统。解决数据研发ETL 错综复杂的依赖关系,不能直观监控任务健康状态等问题。DolphinScheduler以DAG流式的方式将Task组装起来,可实时监控任务的运行状态,同时支持重试、从指定节点恢复失败、暂停及Kill任务等操作
+
+# 简单易用
+DAG监控界面,所有流程定义都是可视化,通过拖拽任务定制DAG,通过API方式与第三方系统对接, 一键部署
+# 高可靠性
+去中心化的多Master和多Worker, 自身支持HA功能, 采用任务队列来避免过载,不会造成机器卡死
+# 丰富的使用场景
+支持暂停恢复操作.支持多租户,更好的应对大数据的使用场景. 支持更多的任务类型,如 spark, hive, mr, python, sub_process, shell
+# 高扩展性
+支持自定义任务类型,调度器使用分布式调度,调度能力随集群线性增长,Master和Worker支持动态上下线
diff --git a/docs/docs/zh/architecture/cache.md b/docs/docs/zh/architecture/cache.md
new file mode 100644
index 0000000..8b4804d
--- /dev/null
+++ b/docs/docs/zh/architecture/cache.md
@@ -0,0 +1,42 @@
+### 缓存
+
+#### 缓存目的
+
+由于在master-server调度过程中,会产生大量的数据库读取操作,如tenant,user,processDefinition等,一方面对DB产生很大的读压力,另一方面则会使整个核心调度流程变得缓慢;
+
+考虑到这部分业务数据是读多写少的场景,故引入了缓存模块,以减少DB读压力,加快核心调度流程;
+
+#### 缓存设置
+
+```yaml
+spring:
+  cache:
+    # default enable cache, you can disable by `type: none`
+    type: none
+    cache-names:
+      - tenant
+      - user
+      - processDefinition
+      - processTaskRelation
+      - taskDefinition
+    caffeine:
+      spec: maximumSize=100,expireAfterWrite=300s,recordStats
+```
+
+缓存模块采用[spring-cache](https://spring.io/guides/gs/caching/)机制,可直接在spring配置文件中配置是否开启缓存(默认`none`关闭), 缓存类型;
+
+目前采用[caffeine](https://github.com/ben-manes/caffeine)进行缓存管理,可自由设置缓存相关配置,如缓存大小、过期时间等;
+
+#### 缓存读取
+
+缓存采用spring-cache的注解,配置在相关的mapper层,可参考如:`TenantMapper`.
+
+#### 缓存更新
+
+业务数据的更新来自于api-server, 而缓存端在master-server, 故需要对api-server的数据更新做监听(aspect切面拦截`@CacheEvict`),当需要进行缓存驱逐时会通知master-server,master-server接收到cacheEvictCommand后进行缓存驱逐;
+
+需要注意的是:缓存更新的兜底策略来自于用户在caffeine中的过期策略配置,请结合业务进行配置;
+
+时序图如下图所示:
+
+<img src="/img/cache-evict.png" alt="cache-evict" style="zoom: 67%;" />
\ No newline at end of file
diff --git a/docs/docs/zh/architecture/configuration.md b/docs/docs/zh/architecture/configuration.md
new file mode 100644
index 0000000..ceb99a2
--- /dev/null
+++ b/docs/docs/zh/architecture/configuration.md
@@ -0,0 +1,406 @@
+<!-- markdown-link-check-disable -->
+
+# 前言
+本文档为dolphinscheduler配置文件说明文档,针对版本为 dolphinscheduler-1.3.x 版本.
+
+# 目录结构
+目前dolphinscheduler 所有的配置文件都在 [conf ] 目录中.
+为了更直观的了解[conf]目录所在的位置以及包含的配置文件,请查看下面dolphinscheduler安装目录的简化说明.
+本文主要讲述dolphinscheduler的配置文件.其他部分先不做赘述.
+
+[注:以下 dolphinscheduler 简称为DS.]
+```
+
+├─bin                               DS命令存放目录
+│  ├─dolphinscheduler-daemon.sh         启动/关闭DS服务脚本
+│  ├─start-all.sh                       根据配置文件启动所有DS服务
+│  ├─stop-all.sh                        根据配置文件关闭所有DS服务
+├─conf                              配置文件目录
+│  ├─application-api.properties         api服务配置文件
+│  ├─datasource.properties              数据库配置文件
+│  ├─zookeeper.properties               zookeeper配置文件
+│  ├─master.properties                  master服务配置文件
+│  ├─worker.properties                  worker服务配置文件
+│  ├─quartz.properties                  quartz服务配置文件
+│  ├─common.properties                  公共服务[存储]配置文件
+│  ├─alert.properties                   alert服务配置文件
+│  ├─config                             环境变量配置文件夹
+│      ├─install_config.conf                DS环境变量配置脚本[用于DS安装/启动]
+│  ├─env                                运行脚本环境变量配置目录
+│      ├─dolphinscheduler_env.sh            运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+│  ├─org                                mybatis mapper文件目录
+│  ├─i18n                               i18n配置文件目录
+│  ├─logback-api.xml                    api服务日志配置文件
+│  ├─logback-master.xml                 master服务日志配置文件
+│  ├─logback-worker.xml                 worker服务日志配置文件
+│  ├─logback-alert.xml                  alert服务日志配置文件
+├─sql                               DS的元数据创建升级sql文件
+│  ├─create                             创建SQL脚本目录
+│  ├─upgrade                            升级SQL脚本目录
+│  ├─dolphinscheduler_postgre.sql       postgre数据库初始化脚本
+│  ├─dolphinscheduler_mysql.sql         mysql数据库初始化脚本
+│  ├─soft_version                       当前DS版本标识文件
+├─script                            DS服务部署,数据库创建/升级脚本目录
+│  ├─create-dolphinscheduler.sh         DS数据库初始化脚本      
+│  ├─upgrade-dolphinscheduler.sh        DS数据库升级脚本                
+│  ├─monitor-server.sh                  DS服务监控启动脚本               
+│  ├─scp-hosts.sh                       安装文件传输脚本                                                    
+│  ├─remove-zk-node.sh                  清理zookeeper缓存文件脚本       
+├─ui                                前端WEB资源目录
+├─lib                               DS依赖的jar存放目录
+├─install.sh                        自动安装DS服务脚本
+
+
+```
+
+
+# 配置文件详解
+
+序号| 服务分类 |  配置文件|
+|--|--|--|
+1|启动/关闭DS服务脚本|dolphinscheduler-daemon.sh
+2|数据库连接配置 | datasource.properties
+3|zookeeper连接配置|zookeeper.properties
+4|公共[存储]配置|common.properties
+5|API服务配置|application-api.properties
+6|Master服务配置|master.properties
+7|Worker服务配置|worker.properties
+8|Alert 服务配置|alert.properties
+9|Quartz配置|quartz.properties
+10|DS环境变量配置脚本[用于DS安装/启动]|install_config.conf
+11|运行脚本加载环境变量配置文件 <br />[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]|dolphinscheduler_env.sh
+12|各服务日志配置文件|api服务日志配置文件 : logback-api.xml  <br /> master服务日志配置文件  : logback-master.xml    <br /> worker服务日志配置文件 : logback-worker.xml  <br /> alert服务日志配置文件 : logback-alert.xml 
+
+
+## 1.dolphinscheduler-daemon.sh [启动/关闭DS服务脚本]
+dolphinscheduler-daemon.sh脚本负责DS的启动&关闭. 
+start-all.sh/stop-all.sh最终也是通过dolphinscheduler-daemon.sh对集群进行启动/关闭操作.
+目前DS只是做了一个基本的设置,JVM参数请根据各自资源的实际情况自行设置.
+
+默认简化参数如下:
+```bash
+export DOLPHINSCHEDULER_OPTS="
+-server 
+-Xmx16g 
+-Xms1g 
+-Xss512k 
+-XX:+UseConcMarkSweepGC 
+-XX:+CMSParallelRemarkEnabled 
+-XX:+UseFastAccessorMethods 
+-XX:+UseCMSInitiatingOccupancyOnly 
+-XX:CMSInitiatingOccupancyFraction=70
+"
+```
+
+> 不建议设置"-XX:DisableExplicitGC" , DS使用Netty进行通讯,设置该参数,可能会导致内存泄漏.
+
+## 2.datasource.properties [数据库连接]
+在DS中使用Druid对数据库连接进行管理,默认简化配置如下.
+|参数 | 默认值| 描述|
+|--|--|--|
+spring.datasource.driver-class-name| |数据库驱动
+spring.datasource.url||数据库连接地址
+spring.datasource.username||数据库用户名
+spring.datasource.password||数据库密码
+spring.datasource.initialSize|5| 初始连接池数量
+spring.datasource.minIdle|5| 最小连接池数量
+spring.datasource.maxActive|5| 最大连接池数量
+spring.datasource.maxWait|60000| 最大等待时长
+spring.datasource.timeBetweenEvictionRunsMillis|60000| 连接检测周期
+spring.datasource.timeBetweenConnectErrorMillis|60000| 重试间隔
+spring.datasource.minEvictableIdleTimeMillis|300000| 连接保持空闲而不被驱逐的最小时间
+spring.datasource.validationQuery|SELECT 1|检测连接是否有效的sql
+spring.datasource.validationQueryTimeout|3| 检测连接是否有效的超时时间[seconds]
+spring.datasource.testWhileIdle|true| 申请连接的时候检测,如果空闲时间大于timeBetweenEvictionRunsMillis,执行validationQuery检测连接是否有效。
+spring.datasource.testOnBorrow|true| 申请连接时执行validationQuery检测连接是否有效
+spring.datasource.testOnReturn|false| 归还连接时执行validationQuery检测连接是否有效
+spring.datasource.defaultAutoCommit|true| 是否开启自动提交
+spring.datasource.keepAlive|true| 连接池中的minIdle数量以内的连接,空闲时间超过minEvictableIdleTimeMillis,则会执行keepAlive操作。
+spring.datasource.poolPreparedStatements|true| 开启PSCache
+spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| 要启用PSCache,必须配置大于0,当大于0时,poolPreparedStatements自动触发修改为true。
+
+
+## 3.zookeeper.properties [zookeeper连接配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+zookeeper.quorum|localhost:2181| zk集群连接信息
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS在zookeeper存储根目录
+zookeeper.session.timeout|60000|  session 超时
+zookeeper.connection.timeout|30000|  连接超时
+zookeeper.retry.base.sleep|100| 基本重试时间差
+zookeeper.retry.max.sleep|30000| 最大重试时间
+zookeeper.retry.maxtime|10|最大重试次数
+
+
+## 4.common.properties [hadoop、s3、yarn配置]
+common.properties配置文件目前主要是配置hadoop/s3a相关的配置. 
+|参数 |默认值| 描述| 
+|--|--|--|
+data.basedir.path|/tmp/dolphinscheduler|本地工作目录,用于存放临时文件
+resource.storage.type|NONE|资源文件存储类型: HDFS,S3,NONE
+resource.upload.path|/dolphinscheduler|资源文件存储路径
+hadoop.security.authentication.startup.state|false|hadoop是否开启kerberos权限
+java.security.krb5.conf.path|/opt/krb5.conf|kerberos配置目录
+login.user.keytab.username|hdfs-mycluster@ESZ.COM|kerberos登录用户
+login.user.keytab.path|/opt/hdfs.headless.keytab|kerberos登录用户keytab
+kerberos.expire.time|2|kerberos过期时间,整数,单位为小时
+resource.view.suffixs| txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties|资源中心支持的文件格式
+hdfs.root.user|hdfs|如果存储类型为HDFS,需要配置拥有对应操作权限的用户
+fs.defaultFS|hdfs://mycluster:8020|请求地址如果resource.storage.type=S3,该值类似为: s3a://dolphinscheduler. 如果resource.storage.type=HDFS, 如果 hadoop 配置了 HA,需要复制core-site.xml 和 hdfs-site.xml 文件到conf目录
+fs.s3a.endpoint||s3 endpoint地址
+fs.s3a.access.key||s3 access key
+fs.s3a.secret.key||s3 secret key
+yarn.resourcemanager.ha.rm.ids||yarn resourcemanager 地址, 如果resourcemanager开启了HA, 输入HA的IP地址(以逗号分隔),如果resourcemanager为单节点, 该值为空即可
+yarn.application.status.address|http://ds1:8088/ws/v1/cluster/apps/%s|如果resourcemanager开启了HA或者没有使用resourcemanager,保持默认值即可. 如果resourcemanager为单节点,你需要将ds1 配置为resourcemanager对应的hostname
+dolphinscheduler.env.path|env/dolphinscheduler_env.sh|运行脚本加载环境变量配置文件[如: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
+development.state|false|是否处于开发模式
+
+
+## 5.application-api.properties [API服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+server.port|12345|api服务通讯端口
+server.servlet.session.timeout|7200|session超时时间
+server.servlet.context-path|/dolphinscheduler |请求路径
+spring.servlet.multipart.max-file-size|1024MB|最大上传文件大小
+spring.servlet.multipart.max-request-size|1024MB|最大请求大小
+server.jetty.max-http-post-size|5000000|jetty服务最大发送请求大小
+spring.messages.encoding|UTF-8|请求编码
+spring.jackson.time-zone|GMT+8|设置时区
+spring.messages.basename|i18n/messages|i18n配置
+security.authentication.type|PASSWORD|权限校验类型
+
+
+## 6.master.properties [Master服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+master.listen.port|5678|master监听端口
+master.exec.threads|100|master工作线程数量,用于限制并行的流程实例数量
+master.exec.task.num|20|master每个流程实例的并行任务数量
+master.dispatch.task.num|3|master每个批次的派发任务数量
+master.host.selector|LowerWeight|master host选择器,用于选择合适的worker执行任务,可选值: Random, RoundRobin, LowerWeight
+master.heartbeat.interval|10|master心跳间隔,单位为秒
+master.task.commit.retryTimes|5|任务重试次数
+master.task.commit.interval|1000|任务提交间隔,单位为毫秒
+master.max.cpuload.avg|-1|master最大cpuload均值,只有高于系统cpuload均值时,master服务才能调度任务. 默认值为-1: cpu cores * 2
+master.reserved.memory|0.3|master预留内存,只有低于系统可用内存时,master服务才能调度任务,单位为G
+
+
+## 7.worker.properties [Worker服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+worker.listen.port|1234|worker监听端口
+worker.exec.threads|100|worker工作线程数量,用于限制并行的任务实例数量
+worker.heartbeat.interval|10|worker心跳间隔,单位为秒
+worker.max.cpuload.avg|-1|worker最大cpuload均值,只有高于系统cpuload均值时,worker服务才能被派发任务. 默认值为-1: cpu cores * 2
+worker.reserved.memory|0.3|worker预留内存,只有低于系统可用内存时,worker服务才能被派发任务,单位为G
+worker.groups|default|worker分组配置,逗号分隔,例如'worker.groups=default,test' <br> worker启动时会根据该配置自动加入对应的分组
+
+
+## 8.alert.properties [Alert 告警服务配置]
+|参数 |默认值| 描述| 
+|--|--|--|
+alert.type|EMAIL|告警类型|
+mail.protocol|SMTP| 邮件服务器协议
+mail.server.host|xxx.xxx.com|邮件服务器地址
+mail.server.port|25|邮件服务器端口
+mail.sender|xxx@xxx.com|发送人邮箱
+mail.user|xxx@xxx.com|发送人邮箱名称
+mail.passwd|111111|发送人邮箱密码
+mail.smtp.starttls.enable|true|邮箱是否开启tls
+mail.smtp.ssl.enable|false|邮箱是否开启ssl
+mail.smtp.ssl.trust|xxx.xxx.com|邮箱ssl白名单
+xls.file.path|/tmp/xls|邮箱附件临时工作目录
+||以下为企业微信配置[选填]|
+enterprise.wechat.enable|false|企业微信是否启用
+enterprise.wechat.corp.id|xxxxxxx|
+enterprise.wechat.secret|xxxxxxx|
+enterprise.wechat.agent.id|xxxxxxx|
+enterprise.wechat.users|xxxxxxx|
+enterprise.wechat.token.url|https://qyapi.weixin.qq.com/cgi-bin/gettoken?  <br /> corpid=$corpId&corpsecret=$secret|
+enterprise.wechat.push.url|https://qyapi.weixin.qq.com/cgi-bin/message/send?  <br /> access_token=$token|
+enterprise.wechat.user.send.msg||发送消息格式
+enterprise.wechat.team.send.msg||群发消息格式
+plugin.dir|/Users/xx/your/path/to/plugin/dir|插件目录
+
+
+## 9.quartz.properties [Quartz配置]
+这里面主要是quartz配置,请结合实际业务场景&资源进行配置,本文暂时不做展开.
+|参数 |默认值| 描述| 
+|--|--|--|
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate
+org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
+org.quartz.scheduler.instanceName | DolphinScheduler
+org.quartz.scheduler.instanceId | AUTO
+org.quartz.scheduler.makeSchedulerThreadDaemon | true
+org.quartz.jobStore.useProperties | false
+org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool
+org.quartz.threadPool.makeThreadsDaemons | true
+org.quartz.threadPool.threadCount | 25
+org.quartz.threadPool.threadPriority | 5
+org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX
+org.quartz.jobStore.tablePrefix | QRTZ_
+org.quartz.jobStore.isClustered | true
+org.quartz.jobStore.misfireThreshold | 60000
+org.quartz.jobStore.clusterCheckinInterval | 5000
+org.quartz.jobStore.acquireTriggersWithinLock|true
+org.quartz.jobStore.dataSource | myDs
+org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
+
+
+## 10.install_config.conf [DS environment configuration script [for DS installation/startup]]
+The install_config.conf file is fairly verbose and is used in two places.
+* 1. Automated installation of a DS cluster.
+
+> The install.sh script automatically loads the settings in this file and uses them to populate the configuration files described above,
+> such as dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties and so on.
+
+
+* 2. Starting & stopping a DS cluster.
+> When the DS cluster starts or stops, it loads the masters, workers, alertServer, apiServers and other parameters from this file to bring the cluster up or down.
+
+The file content is as follows:
+```bash
+
+# NOTE: if this configuration file contains special characters such as `.*[]^${}\+?|()@#&`, please escape them,
+#       for example: escape `[` as `\[`
+
+# database type, currently only postgresql or mysql is supported
+dbtype="mysql"
+
+# database host & port
+dbhost="192.168.xx.xx:3306"
+
+# database name
+dbname="dolphinscheduler"
+
+
+# database username
+username="xx"
+
+# database password
+password="xx"
+
+# Zookeeper quorum addresses
+zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
+
+# directory where DS will be installed, e.g. /data1_1T/dolphinscheduler
+installPath="/data1_1T/dolphinscheduler"
+
+# which user to deploy with
+# NOTE: the deployment user needs sudo privileges and permission to operate HDFS.
+#       If HDFS is used, the root directory must be created by this user, otherwise permission problems will occur.
+deployUser="dolphinscheduler"
+
+
+# alert service configuration follows
+# mail server host
+mailServerHost="smtp.exmail.qq.com"
+
+# mail server port
+mailServerPort="25"
+
+# mail sender
+mailSender="xxxxxxxxxx"
+
+# mail user
+mailUser="xxxxxxxxxx"
+
+# mail password
+mailPassword="xxxxxxxxxx"
+
+# set to true for mailboxes using the TLS protocol, otherwise false
+starttlsEnable="true"
+
+# set to true for mailboxes using the SSL protocol, otherwise false. NOTE: starttlsEnable and sslEnable cannot both be true
+sslEnable="false"
+
+# mail server host for SSL trust, same as mailServerHost
+sslTrust="smtp.exmail.qq.com"
+
+# where to upload resource files (e.g. SQL scripts) used by your jobs; options: HDFS, S3, NONE. Set HDFS to upload to HDFS; choose NONE if the resource upload feature is not needed.
+resourceStorageType="NONE"
+
+# if S3, write the S3 address, HA, for example: s3a://dolphinscheduler
+# Note: for S3, be sure to create the root directory /dolphinscheduler
+defaultFS="hdfs://mycluster:8020"
+
+# parameters required when resourceStorageType is S3:
+s3Endpoint="http://192.168.xx.xx:9010"
+s3AccessKey="xxxxxxxxxx"
+s3SecretKey="xxxxxxxxxx"
+
+# if ResourceManager is HA, set this to the active/standby ResourceManager IPs or hostnames, e.g. "192.168.xx.xx,192.168.xx.xx"; for a single ResourceManager, or if YARN is not used at all, set yarnHaIps=""
+yarnHaIps="192.168.xx.xx,192.168.xx.xx"
+
+# for a single ResourceManager, set this to the ResourceManager IP or hostname; otherwise keep the default value
+singleYarnIp="yarnIp1"
+
+# storage path for resource files on HDFS/S3
+resourceUploadPath="/dolphinscheduler"
+
+
+# HDFS/S3 operation user
+hdfsRootUser="hdfs"
+
+# kerberos configuration follows
+
+# whether kerberos is enabled
+kerberosStartUp="false"
+# kdc krb5 config file path
+krb5ConfPath="$installPath/conf/krb5.conf"
+# keytab username
+keytabUserName="hdfs-mycluster@ESZ.COM"
+# username keytab path
+keytabPath="$installPath/conf/hdfs.headless.keytab"
+
+
+# api service port
+apiServerPort="12345"
+
+
+# hostnames of all machines where DS will be deployed
+ips="ds1,ds2,ds3,ds4,ds5"
+
+# ssh port, default 22
+sshPort="22"
+
+# hosts running the master service
+masters="ds1,ds2"
+
+# hosts running the worker service
+# NOTE: every worker needs a worker group name; the default value is "default"
+workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"
+
+# host running the alert service
+alertServer="ds3"
+
+# hosts running the api service
+apiServers="ds1"
+```
+
+## 11.dolphinscheduler_env.sh [environment variable configuration]
+When tasks are submitted in a shell-like way, the environment variables in this file are loaded on the host.
+The task types involved include: Shell, Python, Spark, Flink, DataX tasks and so on.
+```bash
+export HADOOP_HOME=/opt/soft/hadoop
+export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PYTHON_HOME=/opt/soft/python
+export JAVA_HOME=/opt/soft/java
+export HIVE_HOME=/opt/soft/hive
+export FLINK_HOME=/opt/soft/flink
+export DATAX_HOME=/opt/soft/datax/bin/datax.py
+
+export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
+
+```
+
+## 12. Log configuration files of each service
+|Service| Log configuration file |
+|--|--|
+|api service|logback-api.xml|
+|master service|logback-master.xml|
+|worker service|logback-worker.xml|
+|alert service|logback-alert.xml|
diff --git a/docs/docs/zh/architecture/design.md b/docs/docs/zh/architecture/design.md
new file mode 100644
index 0000000..a2183b0
--- /dev/null
+++ b/docs/docs/zh/architecture/design.md
@@ -0,0 +1,287 @@
+## System Architecture Design
+
+### 2. System Architecture
+
+#### 2.1 System Architecture Diagram
+<p align="center">
+  <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  width="70%" />
+  <p align="center">
+        <em>System architecture diagram</em>
+  </p>
+</p>
+
+#### 2.2 Start Process Activity Diagram
+<p align="center">
+  <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity diagram"  width="70%" />
+  <p align="center">
+        <em>Start process activity diagram</em>
+  </p>
+</p>
+
+#### 2.3 Architecture Description
+
+* **MasterServer** 
+
+    MasterServer adopts a distributed, decentralized design. It is mainly responsible for splitting DAG tasks and monitoring task submission, while also watching the health status of the other MasterServers and WorkerServers.
+    On startup, MasterServer registers an ephemeral node in ZooKeeper and performs fault tolerance by watching changes to ZooKeeper ephemeral nodes.
+    MasterServer provides its listening service based on Netty.
+
+    ##### The service mainly contains:
+
+    - **Distributed Quartz**, the distributed scheduling component, is mainly responsible for starting and stopping scheduled tasks. Once Quartz triggers a task, an internal thread pool in the Master handles the task's subsequent processing.
+
+    - **MasterSchedulerThread** is a scanning thread that periodically scans the **command** table in the database and performs different business operations according to the **command type**.
+
+    - **MasterExecThread** is mainly responsible for DAG task splitting, task submission monitoring, and the logic for the various command types.
+
+    - **MasterTaskExecThread** is mainly responsible for task persistence.
+
+* **WorkerServer** 
+
+     WorkerServer also adopts a distributed, decentralized design. It is mainly responsible for executing tasks and providing log services.
+     On startup, WorkerServer registers an ephemeral node in ZooKeeper and maintains a heartbeat.
+     WorkerServer provides its listening service based on Netty.
+     ##### The service contains:
+     - **FetchTaskThread** continuously fetches tasks from the **Task Queue** and, according to the task type, invokes the corresponding executor through **TaskScheduleThread**.
+
+* **ZooKeeper** 
+
+    The ZooKeeper service: the MasterServer and WorkerServer nodes in the system use ZooKeeper for cluster management and fault tolerance. The system also relies on ZooKeeper for event listening and distributed locks.
+    We once implemented the queue on Redis as well, but since we want DolphinScheduler to depend on as few components as possible, the Redis implementation was eventually removed.
+
+* **Task Queue** 
+
+    Provides task queue operations; the queue is currently also implemented on ZooKeeper. Since each queue entry holds little information, there is no need to worry about too much data in the queue; we have in fact stress-tested storing millions of entries in the queue with no impact on system stability or performance.
+
+* **Alert** 
+
+    Provides alert-related interfaces, mainly covering the storage, query, and notification of the two types of **alert** data. Notification comes in two forms: **email notification** and **SNMP (not yet implemented)**.
+
+* **API** 
+
+    The API layer mainly handles requests from the front-end UI. The service uniformly exposes RESTful APIs to external callers.
+    The interfaces cover creating, defining, querying, modifying, publishing, taking offline, manually starting, stopping, pausing, resuming, and running a workflow from a given node, among others.
+
+* **UI** 
+
+    The front-end pages of the system, providing its various visual operation interfaces.
+
+#### 2.4 Architecture Design Ideas
+
+##### 1. Decentralization vs. Centralization
+
+###### Centralized Design
+
+The centralized design is relatively simple: nodes in a distributed cluster are divided by role into roughly two kinds:
+<p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_slave.png" alt="master-slave roles"  width="50%" />
+ </p>
+
+- The Master is mainly responsible for distributing tasks and monitoring the health of the Slaves, and can dynamically balance tasks across the Slaves so that no Slave node is either overwhelmed or left idle.
+- The Worker is mainly responsible for executing tasks and maintaining a heartbeat with the Master so that the Master can assign tasks to it.
+
+
+
+Problems with the centralized design:
+
+- Once the Master fails, the cluster has no leader and collapses. To solve this, most Master/Slave architectures adopt an active/standby Master design, either hot or cold standby, with automatic or manual switchover, and more and more new systems can automatically elect and switch the Master to improve availability.
+- Another problem is that if the Scheduler runs on the Master, then although different tasks of one DAG can run on different machines, the Master can become overloaded. If the Scheduler runs on the Slave, all tasks of one DAG can only be submitted from a single machine, so when there are many parallel tasks the pressure on that Slave may be high.
+
+
+
+###### Decentralization
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png" alt="decentralization"  width="50%" />
+ </p>
+
+- In a decentralized design there is usually no Master/Slave concept: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system, where the failure of any connected node only affects a very small range of functionality.
+- The core of decentralized design is that no node in the distributed system acts as a distinguished "manager", so there is no single point of failure. However, because there is no "manager" node, every node must communicate with the others to obtain the machine information it needs, and the unreliability of distributed communication greatly increases the difficulty of implementing this.
+- In practice, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. In such an architecture, the cluster manager is elected dynamically rather than preset, and when a failure occurs the cluster nodes spontaneously hold an "election" to choose a new "manager" to take over. The most typical examples are ZooKeeper and Etcd, which is implemented in Go.
+
+
+
+- DolphinScheduler's decentralization means Masters/Workers register themselves in ZooKeeper, so that the Master cluster and Worker cluster have no center, and ZooKeeper distributed locks are used to elect one Master or Worker as the "manager" to perform a task.
+
+#####  2. Distributed Lock Practice
+
+DolphinScheduler uses ZooKeeper distributed locks so that at any moment only one Master runs the Scheduler, and only one Worker handles the submission of a given task.
+1. The core flow of acquiring a distributed lock is as follows:
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/distributed_lock.png" alt="acquire distributed lock flow"  width="50%" />
+ </p>
+
+2. Flow chart of the distributed lock used by the Scheduler thread in DolphinScheduler:
+ <p align="center">
+   <img src="/img/distributed_lock_procss.png" alt="acquire distributed lock flow"  width="50%" />
+ </p>
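+
+As a rough, self-contained illustration of this pattern (not DolphinScheduler's actual code), the sketch below acquires a ZooKeeper distributed lock with Apache Curator's `InterProcessMutex`; the ZooKeeper address and lock path are assumptions made for the example.
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.framework.recipes.locks.InterProcessMutex;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+
+public class MasterSchedulerLockSketch {
+    public static void main(String[] args) throws Exception {
+        CuratorFramework client = CuratorFrameworkFactory.newClient(
+                "192.168.xx.xx:2181", new ExponentialBackoffRetry(1000, 3));
+        client.start();
+
+        // hypothetical lock node; the real path is managed by DolphinScheduler itself
+        InterProcessMutex lock = new InterProcessMutex(client, "/dolphinscheduler/lock/masters");
+        lock.acquire();                 // blocks until this Master holds the lock
+        try {
+            // only the lock holder scans the command table and submits process instances
+        } finally {
+            lock.release();             // release so another Master can take over
+        }
+        client.close();
+    }
+}
+```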
+
+
+##### 3. Insufficient Threads and Circular Waiting
+
+-  If a DAG contains no sub-processes, then when the number of records in the Command table exceeds the threshold set for the thread pool, the process simply waits or fails.
+-  If a large DAG nests many sub-processes, a "dead wait" state can arise, as shown below:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/lack_thread.png" alt="insufficient threads causing circular waiting"  width="50%" />
+ </p>
+In the figure above, MainFlowThread waits for SubFlowThread1 to finish, SubFlowThread1 waits for SubFlowThread2 to finish, SubFlowThread2 waits for SubFlowThread3 to finish, and SubFlowThread3 waits for a free thread in the thread pool, so the whole DAG can never finish and none of its threads can be released. This creates a circular wait between parent and child processes. Unless a new Master is started to add threads and break the "deadlock", the scheduling cluster can no longer be used.
+
+Starting a new Master just to break the deadlock is hardly satisfactory, so we proposed the following three options to reduce this risk:
+
+1. Sum the threads of all Masters, and pre-compute the number of threads each DAG needs before the DAG runs. Since the thread pools are spread across multiple Masters, the total thread count is unlikely to be obtainable in real time.
+2. Check the single Master's thread pool and fail the thread immediately if the pool is already full.
+3. Add an "insufficient resources" Command type: if the thread pool is insufficient, the main process is suspended. The pool then gains free threads, and a process suspended for lack of resources can be woken up and resumed.
+
+Note: the Master Scheduler thread fetches Commands in FIFO order.
+
+So we chose the third option to solve the thread shortage problem.
+
+
+##### 4. Fault Tolerance Design
+Fault tolerance covers service downtime fault tolerance and task retry; downtime fault tolerance is further divided into Master fault tolerance and Worker fault tolerance.
+
+###### 1. Downtime Fault Tolerance
+
+Service fault tolerance relies on ZooKeeper's Watcher mechanism; the principle is shown below:
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant.png" alt="DolphinScheduler fault tolerance design"  width="40%" />
+ </p>
+Each Master watches the directories of the other Masters and the Workers; when it receives a remove event, it performs process-instance or task-instance failover according to the specific business logic.
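+
+As a hedged sketch of how such a watcher could look (the registry path, class name, and failover hook are assumptions for illustration, not the project's real code), Curator's `PathChildrenCache` can react to a worker node disappearing from the registry:
+
+```java
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.recipes.cache.PathChildrenCache;
+import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
+
+public class WorkerRemoveWatcherSketch {
+    public void watch(CuratorFramework client) throws Exception {
+        // hypothetical registry path holding the workers' ephemeral nodes
+        PathChildrenCache cache = new PathChildrenCache(client, "/dolphinscheduler/nodes/worker", true);
+        cache.getListenable().addListener((c, event) -> {
+            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
+                String deadWorker = event.getData().getPath();
+                // here a Master would trigger task-instance failover for tasks that ran on deadWorker
+                System.out.println("worker removed: " + deadWorker);
+            }
+        });
+        cache.start();
+    }
+}
+```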
+
+- Master failover flow:
+
+<p align="center">
+   <img src="/img/failover-master.jpg" alt="failover flow"  width="50%" />
+ </p>
+
+Failover scope: from the host dimension, a Master's failover scope covers its own host plus the hosts of nodes that no longer exist in the registry; the whole failover process runs under a lock.
+
+Failover content: a Master fails over both workflow instances and task instances. Before failover it compares an instance's start time with the service node's start time, and instances started after the service started are skipped.
+
+Post-failover handling: after ZooKeeper Master failover completes, scheduling is taken over again by the DolphinScheduler Scheduler thread, which traverses the DAG to find tasks that are "running" or "submitted successfully". For "running" tasks it monitors the task instance's status; for "submitted successfully" tasks it checks whether the task already exists in the Task Queue: if so it likewise monitors the task instance's status, and if not it resubmits the task instance.
+
+
+
+- Worker failover flow:
+
+<p align="center">
+   <img src="/img/failover-worker.jpg" alt="failover flow"  width="50%" />
+ </p>
+
+Failover scope: from the workflow-instance dimension, each Master is only responsible for failing over its own workflow instances; a lock is only taken in `handleDeadServer`.
+
+Failover content: when a Worker node's remove event occurs, the Master only fails over task instances. Before failover it compares an instance's start time with the service node's start time, and instances started after the service started are skipped.
+
+Post-failover handling: once the Master Scheduler thread finds a task instance in the "needs failover" state, it takes over the task and resubmits it.
+
+ Note: "network jitter" may cause a node to briefly lose its heartbeat with ZooKeeper, which triggers a remove event for that node. For this case we take the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service on that node is stopped outright.
+
+###### 2. Task Failure Retry
+
+First, the concepts of task failure retry, process failure recovery, and process failure rerun need to be distinguished:
+
+- Task failure retry is at the task level and performed automatically by the scheduling system; for example, if a Shell task is configured with 3 retries, it will be retried at most 3 more times after it fails.
+- Process failure recovery is at the process level and triggered manually; recovery can only run **from the failed node** or **from the current node**.
+- Process failure rerun is also at the process level and triggered manually; a rerun starts from the beginning node.
+
+
+
+Back to the main point: we divide the task nodes in a workflow into two types.
+
+- Business nodes, each of which corresponds to an actual script or processing statement, such as Shell, MR, Spark, and dependency nodes.
+
+- Logic nodes, which do not run any actual script or statement but only handle the logical flow of the whole process, such as sub-process nodes.
+
+Every **business node** can be configured with a number of failure retries; when the task node fails, it is retried automatically until it succeeds or exceeds the configured retry count. **Logic nodes** do not support failure retry, but the tasks inside a logic node do.
+
+If a task in a workflow fails and reaches its maximum retry count, the workflow fails and stops; the failed workflow can then be rerun manually or recovered.
+
+
+
+##### 5. Task Priority Design
+In the early scheduler design, without priorities and with fair scheduling, a task submitted earlier could end up finishing at the same time as one submitted later, and there was no way to set the priority of a process or task. We therefore redesigned this part, and the current design is as follows:
+
+-  Tasks are processed from high to low in the order: **priority across different process instances**, then **priority within the same process instance**, then **task priority within the same process**, and finally **submission order of tasks within the same process**.
+    - Concretely, the priority is parsed from the task instance's JSON, and the string **process instance priority_process instance id_task priority_task id** is stored in the ZooKeeper task queue; when an item is taken from the queue, a simple string comparison yields the task that most needs to run first (a simplified sketch of this encoding follows the list below).
+
+        - The process definition priority exists because some processes need to be handled before others; it can be configured when the process is started or scheduled, and has 5 levels: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. See the figure below.
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/process_priority.png" alt="process priority configuration"  width="40%" />
+             </p>
+
+        - Task priority also has 5 levels: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. See the figure below.
+            <p align="center">
+               <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task_priority.png" alt="task priority configuration"  width="35%" />
+             </p>
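+
+The sketch below is a simplified, self-contained illustration of the "compare by string" idea: encode the priority fields into one key so that natural string ordering puts the most urgent task first. The field widths, the enum mapping (HIGHEST=0 ... LOWEST=4), and the class name are assumptions, not the exact internal format.
+
+```java
+import java.util.TreeSet;
+
+public class TaskPriorityKeySketch {
+    // a smaller numeric prefix sorts first, so HIGHEST=0 is taken before LOWEST=4
+    static String key(int processPriority, int processInstanceId, int taskPriority, int taskId) {
+        return String.format("%d_%010d_%d_%010d", processPriority, processInstanceId, taskPriority, taskId);
+    }
+
+    public static void main(String[] args) {
+        TreeSet<String> queue = new TreeSet<>();          // sorted set, standing in for the ZooKeeper queue
+        queue.add(key(2, 100, 3, 7));                     // MEDIUM process, LOW task
+        queue.add(key(0, 101, 4, 9));                     // HIGHEST process wins regardless of task priority
+        queue.add(key(2, 100, 1, 8));                     // same process, HIGH task beats LOW task
+        System.out.println(queue.first());                // -> 0_0000000101_4_0000000009
+    }
+}
+```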
+
+
+##### 6. Log Access with Logback and Netty
+
+-  Since the Web (UI) and the Worker are not necessarily on the same machine, viewing logs is not as simple as reading a local file. There are two options:
+  -  Put the logs into an ES search engine
+  -  Fetch remote log information over Netty communication
+
+-  To keep DolphinScheduler as lightweight as possible, gRPC was chosen for remote access to log information.
+
+ <p align="center">
+   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" alt="grpc remote access"  width="50%" />
+ </p>
+
+
+- We use a custom Logback FileAppender and Filter to generate one log file per task instance.
+- The core of the FileAppender is implemented as follows:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+ 
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+
+Logs are generated in the form /process definition id/process instance id/task instance id.log
+
+- Filter threads whose names start with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+ ```java
+ /**
+ *  task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+### Summary
+Starting from scheduling, this article gives an initial introduction to the architecture and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued.
+
+
diff --git a/docs/docs/zh/architecture/load-balance.md b/docs/docs/zh/architecture/load-balance.md
new file mode 100644
index 0000000..cb381f3
--- /dev/null
+++ b/docs/docs/zh/architecture/load-balance.md
@@ -0,0 +1,58 @@
+### Load Balancing
+Load balancing spreads the load across servers in a reasonable way through a routing algorithm (usually in a cluster environment) to get the most out of server performance.
+
+### DolphinScheduler-Worker Load Balancing Algorithms
+
+DolphinScheduler-Master dispatches tasks to workers and provides three algorithms by default:
+
+Weighted random (random)
+
+Smooth round-robin (roundrobin)
+
+Linear load (lowerweight)
+
+The default configuration is the linear weighted load.
+
+Since routing is done on the client side, i.e. the master service, you can change master.host.selector in master.properties to pick the algorithm you want.
+
+e.g. master.host.selector=random (case-insensitive)
+
+### Worker Load Balancing Configuration
+
+Configuration file: worker.properties
+
+#### Weight
+
+All of the load balancing algorithms above distribute tasks based on weights, and the weight affects how traffic is split. You can modify the value of worker.weight to give different machines different weights.
+
+#### Warm-up
+
+With JIT optimization in mind, we let a worker run at low capacity for a while after startup so that it gradually reaches its best state; we call this process warm-up. Readers who are interested can look up articles about the JIT.
+
+So after a worker starts, its weight grows over time until it reaches the maximum (ten minutes by default; there is no configuration option for this, but you can modify it and submit a PR if needed). A sketch of this ramp-up follows.
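+
+The snippet below is a minimal sketch of the warm-up idea only, assuming a linear ramp: the configured weight is scaled by the worker's uptime until the warm-up window has passed. The class, method, and constant names are illustrative, not the project's own.
+
+```java
+public class WarmUpWeightSketch {
+    private static final long WARM_UP_MILLIS = 10 * 60 * 1000L;   // ten-minute warm-up window
+
+    static double effectiveWeight(double configuredWeight, long startTimeMillis, long nowMillis) {
+        long uptime = nowMillis - startTimeMillis;
+        if (uptime >= WARM_UP_MILLIS) {
+            return configuredWeight;                              // fully warmed up
+        }
+        double ratio = Math.max(uptime, 0) / (double) WARM_UP_MILLIS;
+        return configuredWeight * ratio;                          // ramps linearly up to the full weight
+    }
+}
+```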
+
+### Load Balancing Algorithms in Detail
+
+#### Random (weighted)
+
+This algorithm is fairly simple: randomly pick one of the eligible workers (the weight affects its share of selections).
+
+#### Smooth round-robin (weighted)
+
+Weighted round-robin has an obvious flaw: with certain weight combinations it produces an uneven sequence of instances, and this unsmooth load can cause momentary load spikes on some instances, putting the system at risk of going down. To fix this scheduling flaw, we provide a smooth weighted round-robin algorithm.
+
+Each worker has two weights: weight (fixed once warm-up is complete) and current_weight (changes dynamically). On every routing decision, all workers are traversed and each one's current_weight is increased by its weight; at the same time the weights of all workers are summed up as total_weight. The worker with the largest current_weight is then chosen to run this task, and that worker's current_weight is reduced by total_weight. The sketch below walks through this rule.
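+
+The following self-contained sketch illustrates the rule just described; `HostWeight` is a hypothetical holder class for the example, not DolphinScheduler's own type.
+
+```java
+import java.util.Arrays;
+import java.util.List;
+
+public class SmoothRoundRobinSketch {
+    static class HostWeight {
+        final String host;
+        final int weight;          // fixed weight (after warm-up)
+        int currentWeight;         // dynamic weight, changes every round
+        HostWeight(String host, int weight) { this.host = host; this.weight = weight; }
+    }
+
+    static HostWeight select(List<HostWeight> hosts) {
+        int totalWeight = 0;
+        HostWeight best = null;
+        for (HostWeight h : hosts) {
+            h.currentWeight += h.weight;                   // step 1: grow every current_weight by its weight
+            totalWeight += h.weight;                       // step 2: accumulate total_weight
+            if (best == null || h.currentWeight > best.currentWeight) {
+                best = h;                                  // step 3: pick the largest current_weight
+            }
+        }
+        best.currentWeight -= totalWeight;                 // step 4: penalize the chosen worker
+        return best;
+    }
+
+    public static void main(String[] args) {
+        List<HostWeight> hosts = Arrays.asList(
+                new HostWeight("worker-1", 5), new HostWeight("worker-2", 1), new HostWeight("worker-3", 1));
+        for (int i = 0; i < 7; i++) {
+            System.out.print(select(hosts).host + " ");    // worker-1 is picked 5 times out of 7
+        }
+    }
+}
+```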
+
+#### Linear weighting (default algorithm)
+
+With this algorithm, every worker reports its load information to the registry at regular intervals. The decision is mainly based on two metrics:
+
+* load average (the default threshold is the number of CPU cores * 2)
+* available physical memory (the default threshold is 0.3, in GB)
+
+If either metric is outside its configured threshold, the worker does not take part in load balancing (i.e. it receives no traffic). A sketch of this filter follows.
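+
+A hedged sketch of the filtering rule, assuming the worker heartbeat carries the two metrics above; the `WorkerHeartbeat` fields and method names are made up for the illustration.
+
+```java
+public class LowerWeightFilterSketch {
+    static class WorkerHeartbeat {
+        double loadAverage;        // reported system load average
+        double availableMemoryG;   // reported available physical memory, in GB
+    }
+
+    static boolean canDispatch(WorkerHeartbeat hb, double maxCpuLoadAvg, double reservedMemoryG) {
+        // maxCpuLoadAvg defaults to cpu cores * 2 when configured as -1
+        return hb.loadAverage < maxCpuLoadAvg && hb.availableMemoryG > reservedMemoryG;
+    }
+}
+```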
+
+You can customize this by modifying the following properties in worker.properties:
+
+* worker.max.cpuload.avg=-1 (maximum cpu load average; the worker is only dispatched tasks while this value is higher than the current system load average; the default -1 means cpu cores * 2)
+* worker.reserved.memory=0.3 (reserved memory in GB; the worker is only dispatched tasks while this value is lower than the available system memory)
diff --git a/docs/docs/zh/architecture/metadata.md b/docs/docs/zh/architecture/metadata.md
new file mode 100644
index 0000000..e298b48
--- /dev/null
+++ b/docs/docs/zh/architecture/metadata.md
@@ -0,0 +1,185 @@
+# Dolphin Scheduler 1.3 Metadata Documentation
+
+<a name="25Ald"></a>
+### Table Overview
+| Table Name | Description |
+| :---: | :---: |
+| t_ds_access_token | token for accessing the DS backend |
+| t_ds_alert | alert information |
+| t_ds_alertgroup | alert group |
+| t_ds_command | execution command |
+| t_ds_datasource | data source |
+| t_ds_error_command | error command |
+| t_ds_process_definition | process definition |
+| t_ds_process_instance | process instance |
+| t_ds_project | project |
+| t_ds_queue | queue |
+| t_ds_relation_datasource_user | user-datasource relation |
+| t_ds_relation_process_instance | sub-process |
+| t_ds_relation_project_user | user-project relation |
+| t_ds_relation_resources_user | user-resource relation |
+| t_ds_relation_udfs_user | user-UDF relation |
+| t_ds_relation_user_alertgroup | user-alert group relation |
+| t_ds_resources | resource file |
+| t_ds_schedules | process schedule |
+| t_ds_session | user login session |
+| t_ds_task_instance | task instance |
+| t_ds_tenant | tenant |
+| t_ds_udfs | UDF resource |
+| t_ds_user | user |
+| t_ds_version | DS version information |
+
+<a name="VNVGr"></a>
+### User - Queue - Data Source
+![image.png](/img/metadata-erd/user-queue-datasource.png)
+
+- One tenant can have multiple users.<br />
+- The queue field in t_ds_user stores the queue_name from the queue table, while t_ds_tenant stores the queue_id. During process definition execution, the user queue has the highest priority; if the user queue is empty, the tenant queue is used.<br />
+- The user_id field in the t_ds_datasource table indicates the user who created the data source, and user_id in t_ds_relation_datasource_user indicates users who have permission on the data source.<br />
+<a name="HHyGV"></a>
+### Project - Resource - Alert
+![image.png](/img/metadata-erd/project-resource-alert.png)
+
+- One user can have multiple projects; user-project authorization binds project_id and user_id through the t_ds_relation_project_user table.<br />
+- The user_id in the t_ds_project table indicates the user who created the project, and user_id in t_ds_relation_project_user indicates users who have permission on the project.<br />
+- The user_id in the t_ds_resources table indicates the user who created the resource, and user_id in t_ds_relation_resources_user indicates users who have permission on the resource.<br />
+- The user_id in the t_ds_udfs table indicates the user who created the UDF, and user_id in t_ds_relation_udfs_user indicates users who have permission on the UDF.<br />
+<a name="Bg2Sn"></a>
+### Command - Process - Task
+![image.png](/img/metadata-erd/command.png)<br />![image.png](/img/metadata-erd/process-task.png)
... 6701 lines suppressed ...