Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2019/12/05 02:33:16 UTC

[incubator-dolphinscheduler-website] branch asf-site updated: Automated deployment: Thu Dec 5 02:33:05 UTC 2019 82ebc1d1968df28505d0c01c36a901b68cacdec3

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 2a601d2  Automated deployment: Thu Dec  5 02:33:05 UTC 2019 82ebc1d1968df28505d0c01c36a901b68cacdec3
2a601d2 is described below

commit 2a601d29082f4977a96ad82e0d0963a7ec250bec
Author: qiaozhanwei <qi...@users.noreply.github.com>
AuthorDate: Thu Dec 5 02:33:05 2019 +0000

    Automated deployment: Thu Dec  5 02:33:05 UTC 2019 82ebc1d1968df28505d0c01c36a901b68cacdec3
---
 en-us/docs/user_doc/quick-start.html     |  24 +-
 en-us/docs/user_doc/quick-start.json     |   2 +-
 en-us/docs/user_doc/system-manual.html   | 207 +++++---
 en-us/docs/user_doc/system-manual.json   |   2 +-
 img/Statistics.png                       | Bin 0 -> 69251 bytes
 img/addtenant.png                        | Bin 15441 -> 85215 bytes
 img/alarm-group-en.png                   | Bin 22743 -> 85472 bytes
 img/arrow.png                            | Bin 0 -> 3729 bytes
 img/auth-project-en.png                  | Bin 38090 -> 97132 bytes
 img/auth_user.png                        | Bin 10254 -> 107501 bytes
 img/creat_token.png                      | Bin 0 -> 84027 bytes
 img/create-datasource-en.png             | Bin 63342 -> 137955 bytes
 img/create-file.png                      | Bin 257575 -> 197478 bytes
 img/create-queue-en.png                  | Bin 256748 -> 79506 bytes
 img/create-queue.png                     | Bin 254254 -> 78280 bytes
 img/create-tenant-en.png                 | Bin 23181 -> 91406 bytes
 img/create-user-en.png                   | Bin 29137 -> 111260 bytes
 img/create_group_en.png                  | Bin 21948 -> 0 bytes
 img/create_queue_en.png                  | Bin 15455 -> 0 bytes
 img/create_tenant_en.png                 | Bin 23262 -> 0 bytes
 img/create_user_en.png                   | Bin 29065 -> 0 bytes
 img/current-node-en.png                  | Bin 50278 -> 151620 bytes
 img/dag0.png                             | Bin 0 -> 156182 bytes
 img/dag4.png                             | Bin 24671 -> 96284 bytes
 img/delete.png                           | Bin 0 -> 3996 bytes
 img/dependent-nodes-en.png               | Bin 474946 -> 268570 bytes
 img/double-click-en.png                  | Bin 514531 -> 348709 bytes
 img/edit-datasource-en.png               | Bin 140793 -> 135608 bytes
 img/editDag.png                          | Bin 0 -> 107545 bytes
 img/file-upload-en.png                   | Bin 23380 -> 113567 bytes
 img/file-view-en.png                     | Bin 412574 -> 182123 bytes
 img/file_create.png                      | Bin 48006 -> 196033 bytes
 img/file_rename.png                      | Bin 9581 -> 87874 bytes
 img/file_upload.png                      | Bin 14500 -> 99982 bytes
 img/flink.png                            | Bin 0 -> 35897 bytes
 img/flink_edit.png                       | Bin 0 -> 157665 bytes
 img/global_param.png                     | Bin 0 -> 11035 bytes
 img/global_parameter.png                 | Bin 116141 -> 395176 bytes
 img/global_parameters_en.png             | Bin 297256 -> 92345 bytes
 img/hell_dag.png                         | Bin 0 -> 195097 bytes
 img/hive-en.png                          | Bin 63662 -> 136652 bytes
 img/hive_edit.png                        | Bin 46641 -> 146315 bytes
 img/hive_edit2.png                       | Bin 48423 -> 143592 bytes
 img/home.png                             | Bin 0 -> 70928 bytes
 img/home_en.png                          | Bin 0 -> 80943 bytes
 img/http.png                             | Bin 0 -> 19862 bytes
 img/http_edit.png                        | Bin 0 -> 155809 bytes
 img/incubator-dolphinscheduler-1.1.0.png | Bin 0 -> 146531 bytes
 img/instanceViewLog.png                  | Bin 0 -> 122912 bytes
 img/java-program-en.png                  | Bin 66504 -> 258687 bytes
 img/line.png                             | Bin 0 -> 3720 bytes
 img/local_parameter.png                  | Bin 25661 -> 146531 bytes
 img/login.jpg                            | Bin 49053 -> 0 bytes
 img/login.png                            | Bin 0 -> 130863 bytes
 img/login_en.png                         | Bin 62592 -> 155386 bytes
 img/mail_edit.png                        | Bin 14438 -> 81205 bytes
 img/mr_edit.png                          | Bin 136183 -> 236993 bytes
 img/mr_java.png                          | Bin 0 -> 197796 bytes
 img/mysql-en.png                         | Bin 63979 -> 141245 bytes
 img/mysql_edit.png                       | Bin 48390 -> 103910 bytes
 img/node-setting-en.png                  | Bin 60223 -> 179675 bytes
 img/online.png                           | Bin 0 -> 9868 bytes
 img/postgresql_edit.png                  | Bin 32368 -> 113864 bytes
 img/procedure_edit.png                   | Bin 89355 -> 169101 bytes
 img/project-home.png                     | Bin 0 -> 85207 bytes
 img/python-en.png                        | Bin 92943 -> 220242 bytes
 img/python-program-en.png                | Bin 67233 -> 264774 bytes
 img/python_edit.png                      | Bin 467741 -> 188037 bytes
 img/redirect.png                         | Bin 0 -> 145894 bytes
 img/run_params.png                       | Bin 0 -> 138944 bytes
 img/run_params_button.png                | Bin 0 -> 18919 bytes
 img/shell-en.png                         | Bin 90325 -> 228311 bytes
 img/shell.png                            | Bin 0 -> 7798 bytes
 img/shell_dag.png                        | Bin 0 -> 195097 bytes
 img/shell_edit.png                       | Bin 157618 -> 0 bytes
 img/spark-submit-en.png                  | Bin 218246 -> 310471 bytes
 img/spark_datesource.png                 | Bin 29955 -> 102067 bytes
 img/spark_edit.png                       | Bin 123946 -> 276893 bytes
 img/sql-node.png                         | Bin 477610 -> 277611 bytes
 img/sql-node2.png                        | Bin 505130 -> 322814 bytes
 img/start-process-en.png                 | Bin 348124 -> 89147 bytes
 img/statistics-en.png                    | Bin 0 -> 71466 bytes
 img/sub-process-en.png                   | Bin 43911 -> 91597 bytes
 img/subprocess_edit.png                  | Bin 76964 -> 109610 bytes
 img/task_history.png                     | Bin 109027 -> 192196 bytes
 img/time-schedule3.png                   | Bin 0 -> 39720 bytes
 img/timeManagement.png                   | Bin 0 -> 27044 bytes
 img/timer-en.png                         | Bin 44114 -> 132972 bytes
 img/timing-en.png                        | Bin 103700 -> 113125 bytes
 img/timing.png                           | Bin 0 -> 27550 bytes
 img/token-en.png                         | Bin 0 -> 86232 bytes
 img/tree.png                             | Bin 0 -> 126010 bytes
 img/udf-function.png                     | Bin 42571 -> 167569 bytes
 img/udf_edit.png                         | Bin 26472 -> 92950 bytes
 img/user-defined-en.png                  | Bin 84667 -> 152237 bytes
 img/user-defined1-en.png                 | Bin 105812 -> 376676 bytes
 img/useredit2.png                        | Bin 17285 -> 106651 bytes
 img/work_list.png                        | Bin 0 -> 127465 bytes
 img/worker-group-en.png                  | Bin 275270 -> 85539 bytes
 img/worker1.png                          | Bin 253566 -> 81758 bytes
 img/worker_group.png                     | Bin 0 -> 85119 bytes
 img/worker_group_en.png                  | Bin 0 -> 88694 bytes
 img/zookeeper-en.png                     | Bin 64497 -> 137255 bytes
 zh-cn/docs/user_doc/quick-start.html     |  14 +-
 zh-cn/docs/user_doc/quick-start.json     |   2 +-
 zh-cn/docs/user_doc/system-manual.html   | 880 +++++++++++++++++++------------
 zh-cn/docs/user_doc/system-manual.json   |   2 +-
 107 files changed, 707 insertions(+), 426 deletions(-)

diff --git a/en-us/docs/user_doc/quick-start.html b/en-us/docs/user_doc/quick-start.html
index afad9a5..d1c4791 100644
--- a/en-us/docs/user_doc/quick-start.html
+++ b/en-us/docs/user_doc/quick-start.html
@@ -28,11 +28,11 @@
 <li>Create queue</li>
 </ul>
 <p align="center">
-   <img src="/img/create_queue_en.png" width="60%" />
+   <img src="/img/create-queue-en.png" width="60%" />
  </p>
 <ul>
 <li>Create tenant  <p align="center">
-<img src="/img/create_tenant_en.png" width="60%" />
+<img src="/img/create-tenant-en.png" width="60%" />
 </li>
 </ul>
   </p>
@@ -40,16 +40,30 @@
 <li>Creating Ordinary Users</li>
 </ul>
 <p align="center">
-      <img src="/img/create_user_en.png" width="60%" />
+      <img src="/img/create-user-en.png" width="60%" />
  </p>
 <ul>
 <li>Create an alarm group</li>
 </ul>
  <p align="center">
-    <img src="/img/create_group_en.png" width="60%" />
+    <img src="/img/alarm-group-en.png" width="60%" />
   </p>
 <ul>
-<li>Log in with regular users</li>
+<li>Create a worker group</li>
+</ul>
+   <p align="center">
+      <img src="/img/worker-group-en.png" width="60%" />
+    </p>
+<ul>
+<li>
+<p>Create a token</p>
+<p align="center">
+   <img src="/img/token-en.png" width="60%" />
+ </p>
+</li>
+<li>
+<p>Log in with regular users</p>
+</li>
 </ul>
 <blockquote>
<p>Click the user name in the upper right corner to &quot;exit&quot;, then log in again as the normal user.</p>
diff --git a/en-us/docs/user_doc/quick-start.json b/en-us/docs/user_doc/quick-start.json
index 3c73c12..bb427ed 100644
--- a/en-us/docs/user_doc/quick-start.json
+++ b/en-us/docs/user_doc/quick-start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick-start.md",
-  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:192.168.xx.xx:8888  Username and password:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create_queue_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create tenant  <p align=\"center\">\n<img src=\"/img/create_tenant_en. [...]
+  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:192.168.xx.xx:8888  Username and password:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue-en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create tenant  <p align=\"center\">\n<img src=\"/img/create-tenant-en. [...]
   "link": "/en-us/docs/user_doc/quick-start.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/user_doc/system-manual.html b/en-us/docs/user_doc/system-manual.html
index 1f39a8f..01cbb82 100644
--- a/en-us/docs/user_doc/system-manual.html
+++ b/en-us/docs/user_doc/system-manual.html
@@ -14,16 +14,21 @@
 <body>
 	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_colorful.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class="language-switch language-switch-normal">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_gray.png"/><div><ul class="ant-menu blackClass ant [...]
 <h2>Operational Guidelines</h2>
+<h3>Home page</h3>
+<p>The homepage contains task status statistics, process status statistics, and workflow definition statistics for all user projects.</p>
+<p align="center">
+      <img src="/img/home_en.png" width="80%" />
+ </p>
 <h3>Create a project</h3>
 <ul>
 <li>Click &quot;Project - &gt; Create Project&quot;, enter project name,  description, and click &quot;Submit&quot; to create a new project.</li>
 <li>Click on the project name to enter the project home page.</li>
 </ul>
 <p align="center">
-      <img src="/img/project_home_en.png" width="60%" />
+      <img src="/img/project_home_en.png" width="80%" />
  </p>
 <blockquote>
-<p>Project Home Page contains task status statistics, process status statistics.</p>
+<p>The project home page contains task status statistics, process status statistics, and workflow definition statistics for the project.</p>
 </blockquote>
 <ul>
 <li>Task State Statistics: It refers to the statistics of the number of tasks to be run, failed, running, completed and succeeded in a given time frame.</li>
@@ -39,25 +44,25 @@
 <li>Selecting &quot;task priority&quot; will give priority to high-level tasks in the execution queue. Tasks with the same priority will be executed in the first-in-first-out order.</li>
<li>Timeout alarm: fill in &quot;Overtime Time&quot;; when the task execution time exceeds this timeout, the task can raise an alarm and fail as timed out.</li>
 <li>Fill in &quot;Custom Parameters&quot; and refer to [Custom Parameters](#Custom Parameters)<p align="center">
-<img src="/img/process_definitions_en.png" width="60%" />
+<img src="/img/process_definitions_en.png" width="80%" />
   </p>
 </li>
<li>Add execution ordering between nodes: click &quot;line connection&quot;. As shown, task 2 and task 3 are executed in parallel: once task 1 has finished, task 2 and task 3 run simultaneously.</li>
 </ul>
 <p align="center">
-   <img src="/img/task_en.png" width="60%" />
+   <img src="/img/task_en.png" width="80%" />
  </p>
 <ul>
 <li>Delete dependencies: Click on the arrow icon to &quot;drag nodes and select items&quot;, select the connection line, click on the delete icon to delete dependencies between nodes.</li>
 </ul>
 <p align="center">
-      <img src="/img/delete_dependencies_en.png" width="60%" />
+      <img src="/img/delete_dependencies_en.png" width="80%" />
  </p>
 <ul>
 <li>Click &quot;Save&quot;, enter the name of the process definition, the description of the process definition, and set the global parameters.</li>
 </ul>
 <p align="center">
-   <img src="/img/global_parameters_en.png" width="60%" />
+   <img src="/img/global_parameters_en.png" width="80%" />
  </p>
 <ul>
 <li>For other types of nodes, refer to [task node types and parameter settings](#task node types and parameter settings)</li>
@@ -86,13 +91,13 @@
 </li>
 </ul>
 <p align="center">
-   <img src="/img/start-process-en.png" width="60%" />
+   <img src="/img/start-process-en.png" width="80%" />
  </p>
 <ul>
<li>Complement: To run the workflow definition for a specified date range, select the complement time range (currently only continuous days are supported), for example the data from May 1 to May 10, as shown in the figure:</li>
 </ul>
 <p align="center">
-   <img src="/img/complement-en.png" width="60%" />
+   <img src="/img/complement-en.png" width="80%" />
  </p>
 <blockquote>
 <p>Complement execution mode includes serial execution and parallel execution. In serial mode, the complement will be executed sequentially from May 1 to May 10. In parallel mode, the tasks from May 1 to May 10 will be executed simultaneously.</p>
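The serial/parallel distinction amounts to how the complement date range is expanded into runs; a minimal Python sketch (the dates and batch model here are illustrative, not the scheduler's actual implementation):

```python
from datetime import date, timedelta

def complement_dates(start: date, end: date) -> list:
    """Expand a continuous complement range into one business date per run."""
    days = (end - start).days + 1
    return [start + timedelta(days=i) for i in range(days)]

dates = complement_dates(date(2019, 5, 1), date(2019, 5, 10))
# Serial mode: one workflow instance per date, executed in order.
serial_batches = [[d] for d in dates]
# Parallel mode: all dates are submitted at once as a single batch.
parallel_batches = [dates]
```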
@@ -103,13 +108,13 @@
<li>Choose the start-stop time. Within this range the timer fires normally; outside it, no further timed workflow instances are produced.</li>
 </ul>
 <p align="center">
-   <img src="/img/timing-en.png" width="60%" />
+   <img src="/img/timing-en.png" width="80%" />
  </p>
 <ul>
 <li>Add a timer to be executed once a day at 5:00 a.m. as shown below:</li>
 </ul>
 <p align="center">
-      <img src="/img/timer-en.png" width="60%" />
+      <img src="/img/timer-en.png" width="80%" />
  </p>
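Firing once a day at 5:00 a.m. amounts to computing the next 05:00 after the current moment; a minimal Python sketch of that calculation (illustrative only, the scheduler itself drives this from the timer you configure):

```python
from datetime import datetime, timedelta

def next_5am(now: datetime) -> datetime:
    """Return the next 05:00 strictly after `now`."""
    candidate = now.replace(hour=5, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

For example, `next_5am(datetime(2019, 12, 5, 2, 33))` is 05:00 on the same day, while any time at or past 05:00 rolls over to the next day.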
 <ul>
<li>Timing online: <strong>a newly created timer is offline; you need to click &quot;Timing Management -&gt; online&quot; for it to take effect.</strong></li>
@@ -122,25 +127,25 @@
 <p>Click on the process name to see the status of task execution.</p>
 </blockquote>
   <p align="center">
-   <img src="/img/process-instances-en.png" width="60%" />
+   <img src="/img/process-instances-en.png" width="80%" />
  </p>
 <blockquote>
 <p>Click on the task node, click &quot;View Log&quot; to view the task execution log.</p>
 </blockquote>
   <p align="center">
-   <img src="/img/view-log-en.png" width="60%" />
+   <img src="/img/view-log-en.png" width="80%" />
  </p>
 <blockquote>
 <p>Click on the task instance node, click <strong>View History</strong> to view the list of task instances that the process instance runs.</p>
 </blockquote>
  <p align="center">
-    <img src="/img/instance-runs-en.png" width="60%" />
+    <img src="/img/instance-runs-en.png" width="80%" />
   </p>
 <blockquote>
 <p>Operations on workflow instances:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/workflow-instances-en.png" width="60%" />
+   <img src="/img/workflow-instances-en.png" width="80%" />
 </p>
 <ul>
 <li>Editor: You can edit the terminated process. When you save it after editing, you can choose whether to update the process definition or not.</li>
@@ -153,20 +158,20 @@
 <li>Gantt diagram: The vertical axis of Gantt diagram is the topological ordering of task instances under a process instance, and the horizontal axis is the running time of task instances, as shown in the figure:</li>
 </ul>
 <p align="center">
-      <img src="/img/gantt-en.png" width="60%" />
+      <img src="/img/gantt-en.png" width="80%" />
 </p>
 <h3>View task instances</h3>
 <blockquote>
 <p>Click on &quot;Task Instance&quot; to enter the Task List page and query the performance of the task.</p>
 </blockquote>
 <p align="center">
-   <img src="/img/task-instances-en.png" width="60%" />
+   <img src="/img/task-instances-en.png" width="80%" />
 </p>
 <blockquote>
 <p>Click &quot;View Log&quot; in the action column to view the log of task execution.</p>
 </blockquote>
 <p align="center">
-   <img src="/img/task-execution-en.png" width="60%" />
+   <img src="/img/task-execution-en.png" width="80%" />
 </p>
 <h3>Create data source</h3>
 <blockquote>
@@ -186,7 +191,7 @@
 <li>Jdbc connection parameters: parameter settings for MySQL connections, filled in as JSON</li>
 </ul>
 <p align="center">
-   <img src="/img/mysql-en.png" width="60%" />
+   <img src="/img/mysql-en.png" width="80%" />
  </p>
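The JDBC connection parameters field takes a JSON object; for illustration (these particular MySQL driver options are common examples, not required values):

```python
import json

# Hypothetical example of MySQL JDBC options entered in the
# "Jdbc connection parameters" field as a JSON object.
jdbc_params = {
    "useUnicode": "true",
    "characterEncoding": "utf8",
    "useSSL": "false",
}
params_json = json.dumps(jdbc_params)
```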
 <blockquote>
 <p>Click &quot;Test Connect&quot; to test whether the data source can be successfully connected.</p>
@@ -204,12 +209,12 @@
 <li>Jdbc connection parameters: parameter settings for POSTGRESQL connections, filled in as JSON</li>
 </ul>
 <p align="center">
-   <img src="/img/create-datasource-en.png" width="60%" />
+   <img src="/img/create-datasource-en.png" width="80%" />
  </p>
 <h4>Create and edit HIVE data source</h4>
 <p>1.Connect with HiveServer 2</p>
  <p align="center">
-    <img src="/img/hive-en.png" width="60%" />
+    <img src="/img/hive-en.png" width="80%" />
   </p>
 <ul>
 <li>Datasource: Select HIVE</li>
@@ -224,15 +229,15 @@
 </ul>
 <p>2.Connect using Hive Server 2 HA Zookeeper mode</p>
  <p align="center">
-    <img src="/img/zookeeper-en.png" width="60%" />
+    <img src="/img/zookeeper-en.png" width="80%" />
   </p>
 <p>Note: If <strong>kerberos</strong> is turned on, you need to fill in <strong>Principal</strong></p>
 <p align="center">
-    <img src="/img/principal-en.png" width="60%" />
+    <img src="/img/principal-en.png" width="80%" />
   </p>
-<h4>Create and Edit Datasource</h4>
+<h4>Create and Edit Spark Datasource</h4>
 <p align="center">
-   <img src="/img/edit-datasource-en.png" width="60%" />
+   <img src="/img/edit-datasource-en.png" width="80%" />
  </p>
 <ul>
 <li>Datasource: Select Spark</li>
@@ -247,24 +252,47 @@
 </ul>
<p>Note: If <strong>Kerberos</strong> is turned on, you need to fill in  <strong>Principal</strong></p>
 <p align="center">
-    <img src="/img/kerberos-en.png" width="60%" />
+    <img src="/img/kerberos-en.png" width="80%" />
   </p>
 <h3>Upload Resources</h3>
 <ul>
<li>Upload resource files and UDF functions. All uploaded files and resources are stored on HDFS, so the following configuration items are required:</li>
 </ul>
-<pre><code>conf/common/common.properties
-    -- hdfs.startup.state=true
-conf/common/hadoop.properties  
-    -- fs.defaultFS=hdfs://xxxx:8020  
-    -- yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-    -- yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+<pre><code>conf/common/common.properties  
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # base data dir; resource files are stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. &quot;/escheduler&quot; is recommended
+    data.store2hdfs.basepath=/dolphinscheduler
+    # resource upload startup type : HDFS,S3,NONE
+    res.upload.startup.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    
+conf/common/hadoop.properties      
+    # HA or single namenode. For namenode HA you need to copy core-site.xml and hdfs-site.xml
+    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020    
+    # resourcemanager HA: list the resourcemanager IPs here; leave empty for a single resourcemanager
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
+    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
 </code></pre>
+<ul>
+<li>Only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address needs to be configured; leave the other empty.</li>
+<li>You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project and restart the api-server service.</li>
+</ul>
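A quick way to sanity-check the properties above before restarting the api-server service is to parse and validate them; a minimal sketch (the keys mirror those listed above, the parser itself is illustrative):

```python
def parse_properties(text: str) -> dict:
    """Parse simple Java-style key=value properties, skipping comments and blanks."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

common = parse_properties("""
# resource upload startup type : HDFS,S3,NONE
res.upload.startup.type=HDFS
data.store2hdfs.basepath=/dolphinscheduler
hadoop.security.authentication.startup.state=false
""")
# The upload type must be one of the three supported values.
assert common["res.upload.startup.type"] in {"HDFS", "S3", "NONE"}
```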
 <h4>File Manage</h4>
 <blockquote>
 <p>It is the management of various resource files, including creating basic txt/log/sh/conf files, uploading jar packages and other types of files, editing, downloading, deleting and other operations.</p>
 <p align="center">
- <img src="/img/file-manage-en.png" width="60%" />
+ <img src="/img/file-manage-en.png" width="80%" />
 </p>
 </blockquote>
 <ul>
@@ -274,7 +302,7 @@ conf/common/hadoop.properties
<p>File formats support the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql</p>
 </blockquote>
 <p align="center">
-   <img src="/img/create-file.png" width="60%" />
+   <img src="/img/create-file.png" width="80%" />
  </p>
 <ul>
 <li>Upload Files</li>
@@ -283,7 +311,7 @@ conf/common/hadoop.properties
<p>Upload files: click the Upload button or drag the file to the upload area; the file name field is automatically filled with the uploaded file's name.</p>
 </blockquote>
 <p align="center">
-   <img src="/img/file-upload-en.png" width="60%" />
+   <img src="/img/file-upload-en.png" width="80%" />
  </p>
 <ul>
 <li>File View</li>
@@ -292,7 +320,7 @@ conf/common/hadoop.properties
 <p>For viewable file types, click on the file name to view file details</p>
 </blockquote>
 <p align="center">
-   <img src="/img/file-view-en.png" width="60%" />
+   <img src="/img/file-view-en.png" width="80%" />
  </p>
 <ul>
 <li>Download files</li>
@@ -304,7 +332,7 @@ conf/common/hadoop.properties
 <li>File rename</li>
 </ul>
 <p align="center">
-   <img src="/img/rename-en.png" width="60%" />
+   <img src="/img/rename-en.png" width="80%" />
  </p>
 <h4>Delete</h4>
 <blockquote>
@@ -336,18 +364,18 @@ conf/common/hadoop.properties
 </ul>
 </blockquote>
 <p align="center">
-   <img src="/img/udf-function.png" width="60%" />
+   <img src="/img/udf-function.png" width="80%" />
  </p>
 <h2>Security</h2>
 <ul>
 <li>The security has the functions of queue management, tenant management, user management, warning group management, worker group manager, token manage and other functions. It can also authorize resources, data sources, projects, etc.</li>
-<li>Administrator login, default username password: admin/dolphinscheduler 123</li>
+<li>Administrator login, default username password: admin/dolphinscheduler123</li>
 </ul>
 <h3>Create queues</h3>
 <ul>
 <li>Queues are used to execute spark, mapreduce and other programs, which require the use of &quot;queue&quot; parameters.</li>
 <li>&quot;Security&quot; - &gt; &quot;Queue Manage&quot; - &gt; &quot;Create Queue&quot;   <p align="center">
-  <img src="/img/create-queue-en.png" width="60%" />
+  <img src="/img/create-queue-en.png" width="80%" />
 </p>
 </li>
 </ul>
@@ -357,7 +385,7 @@ conf/common/hadoop.properties
<li>Tenant Code: <strong>the tenant code is a Linux account and must be unique.</strong></li>
 </ul>
  <p align="center">
-    <img src="/img/create-tenant-en.png" width="60%" />
+    <img src="/img/create-tenant-en.png" width="80%" />
   </p>
 <h3>Create Ordinary Users</h3>
 <ul>
@@ -368,13 +396,13 @@ conf/common/hadoop.properties
 * Note: **If the user switches the tenant, all resources under the tenant will be copied to the switched new tenant.**
 </code></pre>
 <p align="center">
-      <img src="/img/create-user-en.png" width="60%" />
+      <img src="/img/create-user-en.png" width="80%" />
  </p>
 <h3>Create alarm group</h3>
 <ul>
 <li>The alarm group is a parameter set at start-up. After the process is finished, the status of the process and other information will be sent to the alarm group by mail.</li>
<li>Create and edit alarm groups<p align="center">
-<img src="/img/alarm-group-en.png" width="60%" />
+<img src="/img/alarm-group-en.png" width="80%" />
 </p>
 </li>
 </ul>
@@ -386,13 +414,16 @@ conf/common/hadoop.properties
 <li>
<p>Multiple IP addresses within a worker group (<strong>aliases cannot be used</strong>), separated by <strong>English commas</strong></p>
 <p align="center">
-  <img src="/img/worker-group-en.png" width="60%" />
+  <img src="/img/worker-group-en.png" width="80%" />
 </p>
 </li>
 </ul>
 <h3>Token manage</h3>
 <ul>
-<li>Because the back-end interface has login check and token management, it provides a way to operate the system by calling the interface.</li>
+<li>Because the back-end interface has login check and token management, it provides a way to operate the system by calling the interface.<p align="center">
+  <img src="/img/token-en.png" width="80%" />
+</p>
+</li>
 <li>Call examples:</li>
 </ul>
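In outline, a client passes the generated token with each HTTP request to the back-end interface; a minimal Python sketch (the endpoint path and the `token` header name are assumptions for illustration):

```python
import urllib.request

API_BASE = "http://192.168.xx.xx:12345/dolphinscheduler"  # hypothetical address
TOKEN = "your-generated-token"  # created under Security -> Token manage

# Build (but do not send) a request that authenticates via the token header.
req = urllib.request.Request(
    API_BASE + "/projects/query-project-list",  # hypothetical endpoint
    headers={"token": TOKEN},
)
```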
<pre><code class="language-java">    /**
@@ -440,7 +471,7 @@ conf/common/hadoop.properties
 </blockquote>
 <ul>
 <li>1.Click on the authorization button of the designated person as follows:<p align="center">
-  <img src="/img/operation-en.png" width="60%" />
+  <img src="/img/operation-en.png" width="80%" />
 </li>
 </ul>
  </p>
@@ -448,7 +479,7 @@ conf/common/hadoop.properties
 <li>2.Select the project button to authorize the project</li>
 </ul>
 <p align="center">
-   <img src="/img/auth-project-en.png" width="60%" />
+   <img src="/img/auth-project-en.png" width="80%" />
  </p>
 <h3>Monitor center</h3>
 <ul>
@@ -459,29 +490,39 @@ conf/common/hadoop.properties
 <li>Mainly related information about master.</li>
 </ul>
 <p align="center">
-      <img src="/img/master-monitor-en.png" width="60%" />
+      <img src="/img/master-monitor-en.png" width="80%" />
  </p>
 <h4>Worker monitor</h4>
 <ul>
 <li>Mainly related information of worker.</li>
 </ul>
 <p align="center">
-   <img src="/img/worker-monitor-en.png" width="60%" />
+   <img src="/img/worker-monitor-en.png" width="80%" />
  </p>
 <h4>Zookeeper monitor</h4>
 <ul>
<li>Mainly the configuration information of each worker and master in zookeeper.</li>
 </ul>
 <p align="center">
-   <img src="/img/zookeeper-monitor-en.png" width="60%" />
+   <img src="/img/zookeeper-monitor-en.png" width="80%" />
  </p>
 <h4>DB monitor</h4>
 <ul>
-<li>Mainly the health status of mysql</li>
+<li>Mainly the health status of DB</li>
 </ul>
 <p align="center">
-   <img src="/img/db-monitor-en.png" width="60%" />
+   <img src="/img/db-monitor-en.png" width="80%" />
+ </p>
+<h4>Statistics Manage</h4>
+ <p align="center">
+   <img src="/img/statistics-en.png" width="80%" />
  </p>
+<ul>
+<li>Commands to be executed: statistics on the t_ds_command table</li>
+<li>Number of commands that failed to execute: statistics on the t_ds_error_command table</li>
+<li>Number of tasks to run: statistics of task_queue data in zookeeper</li>
+<li>Number of tasks to be killed: statistics of task_kill data in zookeeper</li>
+</ul>
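Each statistic above is essentially a row count; an illustrative stand-in using an in-memory SQLite table (the real deployment queries the database tables such as t_ds_command, and ZooKeeper paths, rather than SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in for the scheduler's command table.
conn.execute(
    "CREATE TABLE t_ds_command (id INTEGER PRIMARY KEY, command_type INTEGER)"
)
conn.executemany(
    "INSERT INTO t_ds_command (command_type) VALUES (?)",
    [(0,), (1,), (0,)],
)
# "Commands to be executed" is a count over the command table.
pending_commands = conn.execute("SELECT COUNT(*) FROM t_ds_command").fetchone()[0]
```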
 <h2>Task Node Type and Parameter Setting</h2>
 <h3>Shell</h3>
 <ul>
@@ -491,7 +532,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_SHELL.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/shell-en.png" width="60%" />
+   <img src="/img/shell-en.png" width="80%" />
 </p>
 <ul>
 <li>Node name: The node name in a process definition is unique</li>
@@ -511,7 +552,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/sub-process-en.png" width="60%" />
+   <img src="/img/sub-process-en.png" width="80%" />
  </p>
 <ul>
 <li>Node name: The node name in a process definition is unique</li>
@@ -527,7 +568,7 @@ conf/common/hadoop.properties
<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_DEPENDENT.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/current-node-en.png" width="60%" />
+   <img src="/img/current-node-en.png" width="80%" />
  </p>
 <blockquote>
 <p>Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process was successful or whether the C process was successfully executed.</p>
@@ -555,7 +596,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_PROCEDURE.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/node-setting-en.png" width="60%" />
+   <img src="/img/node-setting-en.png" width="80%" />
  </p>
 <ul>
 <li>Datasource: The data source type of stored procedure supports MySQL and POSTGRESQL, and chooses the corresponding data source.</li>
@@ -564,19 +605,17 @@ conf/common/hadoop.properties
 </ul>
 <h3>SQL</h3>
 <ul>
+<li>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png" alt="PNG"> task node in the toolbar onto the palette.</li>
 <li>Execute non-query SQL functionality<p align="center">
-  <img src="/img/dependent-nodes-en.png" width="60%" />
+  <img src="/img/dependent-nodes-en.png" width="80%" />
 </li>
 </ul>
  </p>
 <ul>
 <li>Executing the query SQL function, you can choose to send mail in the form of tables and attachments to the designated recipients.</li>
 </ul>
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
-</blockquote>
 <p align="center">
-   <img src="/img/double-click-en.png" width="60%" />
+   <img src="/img/double-click-en.png" width="80%" />
  </p>
 <ul>
 <li>Datasource: Select the corresponding datasource</li>
@@ -585,6 +624,8 @@ conf/common/hadoop.properties
 <li>sql statement: SQL statement</li>
 <li>UDF function: For HIVE type data sources, you can refer to UDF functions created in the resource center, other types of data sources do not support UDF functions for the time being.</li>
 <li>Custom parameters: For the SQL task type and stored procedures, custom parameters set values for the method in a specified order. Custom parameter types and data types are the same as for the stored procedure task type. The difference is that the custom parameters of the SQL task type replace the ${variable} placeholders in the SQL statement.</li>
+<li>Pre Statement: pre-SQL, executed before the main SQL statement</li>
+<li>Post Statement: post-SQL, executed after the main SQL statement</li>
 </ul>
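The substitution described above — SQL-task custom parameters replacing `${variable}` placeholders in the statement — can be sketched as follows. This is an illustrative Python sketch with hypothetical names (`render_sql`, `bizdate`, `limit`), not DolphinScheduler's actual implementation:

```python
import re

def render_sql(sql: str, params: dict) -> str:
    """Replace each ${name} placeholder with its custom-parameter value,
    mirroring how the SQL task type substitutes ${variable} before execution
    (illustrative sketch only)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: str(params[m.group(1)]), sql)

sql = "SELECT * FROM orders WHERE dt = '${bizdate}' LIMIT ${limit}"
print(render_sql(sql, {"bizdate": "2019-12-04", "limit": 10}))
# SELECT * FROM orders WHERE dt = '2019-12-04' LIMIT 10
```

The same placeholder syntax is used by the stored-procedure task; only the substitution target (statement text vs. ordered method arguments) differs.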
 <h3>SPARK</h3>
 <ul>
@@ -594,7 +635,7 @@ conf/common/hadoop.properties
 <p>Drag the   <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_SPARK.png" alt="PNG">  task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/spark-submit-en.png" width="60%" />
+   <img src="/img/spark-submit-en.png" width="80%" />
  </p>
 <ul>
 <li>Program Type: Support JAVA, Scala and Python</li>
@@ -620,7 +661,7 @@ conf/common/hadoop.properties
 <li>JAVA program</li>
 </ol>
  <p align="center">
-    <img src="/img/java-program-en.png" width="60%" />
+    <img src="/img/java-program-en.png" width="80%" />
   </p>
 <ul>
 <li>Class of the main function: The full path of the MR program's entry Main Class</li>
@@ -635,7 +676,7 @@ conf/common/hadoop.properties
 <li>Python program</li>
 </ol>
 <p align="center">
-   <img src="/img/python-program-en.png" width="60%" />
+   <img src="/img/python-program-en.png" width="80%" />
  </p>
 <ul>
 <li>Program Type: Select Python Language</li>
@@ -654,7 +695,7 @@ conf/common/hadoop.properties
 <p>Drag the <img src="https://analysys.github.io/easyscheduler_docs/images/toolbar_PYTHON.png" alt="PNG"> task node in the toolbar onto the palette and double-click the task node as follows:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/python-en.png" width="60%" />
+   <img src="/img/python-en.png" width="80%" />
  </p>
 <ul>
 <li>Script: User-developed Python program</li>
@@ -678,15 +719,15 @@ conf/common/hadoop.properties
     </tr>
 </table>
 <h3>Time Customization Parameters</h3>
-<blockquote>
+<ul>
+<li>
 <p>Supports customizing variable names in code, declared as ${variable name}. A variable can refer to &quot;system parameters&quot; or specify &quot;constants&quot;.</p>
-</blockquote>
-<blockquote>
+</li>
+<li>
 <p>When we define this benchmark variable as [...], [yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as: $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.</p>
-</blockquote>
-<blockquote>
+</li>
+<li>
 <p>You can also do the following:</p>
-</blockquote>
 <ul>
 <li>Next N years: $[add_months(yyyyMMdd, 12*N)]</li>
 <li>Previous N years: $[add_months(yyyyMMdd, -12*N)]</li>
@@ -701,25 +742,25 @@ conf/common/hadoop.properties
 <li>After N minutes: $[HHmmss + N/24/60]</li>
 <li>Previous N minutes: $[HHmmss-N/24/60]</li>
 </ul>
+</li>
+</ul>
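A rough model of how these date offsets behave, assuming the usual calendar semantics for `add_months` (day clamped to the length of the target month). This is a sketch for intuition, not the scheduler's actual time-parameter parser:

```python
from datetime import date, timedelta

def add_months(d: date, n: int) -> date:
    """Shift a date by n months, clamping the day to the target month's
    length (approximates the $[add_months(yyyyMMdd, N)] built-in)."""
    month_index = d.month - 1 + n
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # last day of the target month, used to clamp e.g. Jan 31 + 1 month
    last = (date(year + month // 12, month % 12 + 1, 1) - timedelta(days=1)).day
    return d.replace(year=year, month=month, day=min(d.day, last))

base = date(2019, 12, 5)                               # the scheduled date, $[yyyyMMdd]
print(add_months(base, 12).strftime("%Y%m%d"))         # next 1 year   -> 20201205
print((base - timedelta(days=1)).strftime("%Y%m%d"))   # previous day  -> 20191204
```

Day- and hour-level offsets such as $[yyyyMMdd-N] and $[HHmmss+N/24] are plain additions of days or fractions of a day on the same baseline.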
 <h3>User-defined parameters</h3>
-<blockquote>
-<p>User-defined parameters are divided into global parameters and local parameters. Global parameters are the global parameters passed when the process definition and process instance are saved. Global parameters can be referenced by local parameters of any task node in the whole process.</p>
-</blockquote>
-<blockquote>
+<ul>
+<li>User-defined parameters are divided into global parameters and local parameters. Global parameters are passed down when the process definition and process instance are saved, and can be referenced by the local parameters of any task node in the whole process.</li>
+</ul>
 <p>For example:</p>
-</blockquote>
 <p align="center">
-   <img src="/img/user-defined-en.png" width="60%" />
+   <img src="/img/user-defined-en.png" width="80%" />
  </p>
-<blockquote>
-<p>global_bizdate is a global parameter, referring to system parameters.</p>
-</blockquote>
+<ul>
+<li>global_bizdate is a global parameter, referring to a system parameter.</li>
+</ul>
 <p align="center">
-   <img src="/img/user-defined1-en.png" width="60%" />
+   <img src="/img/user-defined1-en.png" width="80%" />
  </p>
-<blockquote>
-<p>In tasks, local_param_bizdate refers to global parameters by  <span class="katex"><span class="katex-mathml"><math><semantics><mrow><mrow><mi>g</mi><mi>l</mi><mi>o</mi><mi>b</mi><mi>a</mi><msub><mi>l</mi><mi>b</mi></msub><mi>i</mi><mi>z</mi><mi>d</mi><mi>a</mi><mi>t</mi><mi>e</mi></mrow><mi>f</mi><mi>o</mi><mi>r</mi><mi>s</mi><mi>c</mi><mi>r</mi><mi>i</mi><mi>p</mi><mi>t</mi><mi>s</mi><mo separator="true">,</mo><mi>t</mi><mi>h</mi><mi>e</mi><mi>v</mi><mi>a</mi><mi>l</mi><mi>u</mi><mi> [...]
-</blockquote>
+<ul>
+<li>In tasks, local_param_bizdate refers to the global parameter via ${global_bizdate}; in scripts, the value of the variable local_param_bizdate can be referenced by ${local_param_bizdate}, or the value of local_param_bizdate can be set directly through JDBC.</li>
+</ul>
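The global-to-local parameter flow above can be sketched as a two-step substitution. The names (`global_bizdate`, `local_param_bizdate`) come from the example; the resolution order shown is illustrative, not the scheduler's exact mechanism:

```python
# Global parameters are set when the process definition is saved;
# a task-level local parameter references one via ${...}.
global_params = {"global_bizdate": "2019-12-04"}
local_params = {"local_param_bizdate": "${global_bizdate}"}

def resolve(value: str, scope: dict) -> str:
    """Replace every ${name} occurrence with its value from the given scope."""
    for name, v in scope.items():
        value = value.replace("${" + name + "}", v)
    return value

# Step 1: local parameters pick up their values from the global scope.
resolved = {k: resolve(v, global_params) for k, v in local_params.items()}
# Step 2: the task script references the resolved local parameter.
script = resolve("echo ${local_param_bizdate}", resolved)
print(script)  # echo 2019-12-04
```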
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
 Incubation is required of all newly accepted projects until a further review indicates 
 that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
diff --git a/en-us/docs/user_doc/system-manual.json b/en-us/docs/user_doc/system-manual.json
index ba29335..582eca1 100644
--- a/en-us/docs/user_doc/system-manual.json
+++ b/en-us/docs/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>System Use Manual</h1>\n<h2>Operational Guidelines</h2>\n<h3>Create a project</h3>\n<ul>\n<li>Click &quot;Project - &gt; Create Project&quot;, enter project name,  description, and click &quot;Submit&quot; to create a new project.</li>\n<li>Click on the project name to enter the project home page.</li>\n</ul>\n<p align=\"center\">\n      <img src=\"/img/project_home_en.png\" width=\"60%\" />\n </p>\n<blockquote>\n<p>Project Home Page contains task status statistics, proc [...]
+  "__html": "<h1>System Use Manual</h1>\n<h2>Operational Guidelines</h2>\n<h3>Home page</h3>\n<p>The homepage contains task status statistics, process status statistics, and workflow definition statistics for all user projects.</p>\n<p align=\"center\">\n      <img src=\"/img/home_en.png\" width=\"80%\" />\n </p>\n<h3>Create a project</h3>\n<ul>\n<li>Click &quot;Project - &gt; Create Project&quot;, enter project name,  description, and click &quot;Submit&quot; to create a new project.</l [...]
   "link": "/en-us/docs/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/img/Statistics.png b/img/Statistics.png
new file mode 100644
index 0000000..245127f
Binary files /dev/null and b/img/Statistics.png differ
diff --git a/img/addtenant.png b/img/addtenant.png
index c3909ec..78f2be0 100755
Binary files a/img/addtenant.png and b/img/addtenant.png differ
diff --git a/img/alarm-group-en.png b/img/alarm-group-en.png
index 948213f..529072a 100644
Binary files a/img/alarm-group-en.png and b/img/alarm-group-en.png differ
diff --git a/img/arrow.png b/img/arrow.png
new file mode 100644
index 0000000..7feb83c
Binary files /dev/null and b/img/arrow.png differ
diff --git a/img/auth-project-en.png b/img/auth-project-en.png
index 40c6fec..2d71448 100644
Binary files a/img/auth-project-en.png and b/img/auth-project-en.png differ
diff --git a/img/auth_user.png b/img/auth_user.png
index e03f00c..ebcbc24 100755
Binary files a/img/auth_user.png and b/img/auth_user.png differ
diff --git a/img/creat_token.png b/img/creat_token.png
new file mode 100644
index 0000000..9dcbe8a
Binary files /dev/null and b/img/creat_token.png differ
diff --git a/img/create-datasource-en.png b/img/create-datasource-en.png
index c9649a4..d1058ff 100644
Binary files a/img/create-datasource-en.png and b/img/create-datasource-en.png differ
diff --git a/img/create-file.png b/img/create-file.png
index 83bc395..55dd23d 100644
Binary files a/img/create-file.png and b/img/create-file.png differ
diff --git a/img/create-queue-en.png b/img/create-queue-en.png
index f95af54..6974e71 100644
Binary files a/img/create-queue-en.png and b/img/create-queue-en.png differ
diff --git a/img/create-queue.png b/img/create-queue.png
index 537df26..265a39d 100644
Binary files a/img/create-queue.png and b/img/create-queue.png differ
diff --git a/img/create-tenant-en.png b/img/create-tenant-en.png
index 9cdda93..bae5a11 100644
Binary files a/img/create-tenant-en.png and b/img/create-tenant-en.png differ
diff --git a/img/create-user-en.png b/img/create-user-en.png
index 7303985..35789aa 100644
Binary files a/img/create-user-en.png and b/img/create-user-en.png differ
diff --git a/img/create_group_en.png b/img/create_group_en.png
deleted file mode 100644
index 5a97cbf..0000000
Binary files a/img/create_group_en.png and /dev/null differ
diff --git a/img/create_queue_en.png b/img/create_queue_en.png
deleted file mode 100644
index 498b86e..0000000
Binary files a/img/create_queue_en.png and /dev/null differ
diff --git a/img/create_tenant_en.png b/img/create_tenant_en.png
deleted file mode 100644
index 079ee14..0000000
Binary files a/img/create_tenant_en.png and /dev/null differ
diff --git a/img/create_user_en.png b/img/create_user_en.png
deleted file mode 100644
index 028f347..0000000
Binary files a/img/create_user_en.png and /dev/null differ
diff --git a/img/current-node-en.png b/img/current-node-en.png
index be59f4a..153ece3 100644
Binary files a/img/current-node-en.png and b/img/current-node-en.png differ
diff --git a/img/dag0.png b/img/dag0.png
new file mode 100644
index 0000000..58b6569
Binary files /dev/null and b/img/dag0.png differ
diff --git a/img/dag4.png b/img/dag4.png
index b3ec5a8..f315d44 100755
Binary files a/img/dag4.png and b/img/dag4.png differ
diff --git a/img/delete.png b/img/delete.png
new file mode 100644
index 0000000..7fe978f
Binary files /dev/null and b/img/delete.png differ
diff --git a/img/dependent-nodes-en.png b/img/dependent-nodes-en.png
index e9b2f3e..97bbb08 100644
Binary files a/img/dependent-nodes-en.png and b/img/dependent-nodes-en.png differ
diff --git a/img/double-click-en.png b/img/double-click-en.png
index 2a02627..886849b 100644
Binary files a/img/double-click-en.png and b/img/double-click-en.png differ
diff --git a/img/edit-datasource-en.png b/img/edit-datasource-en.png
index b6935df..55d2cd9 100644
Binary files a/img/edit-datasource-en.png and b/img/edit-datasource-en.png differ
diff --git a/img/editDag.png b/img/editDag.png
new file mode 100644
index 0000000..caade91
Binary files /dev/null and b/img/editDag.png differ
diff --git a/img/file-upload-en.png b/img/file-upload-en.png
index f10ecb7..8604dd1 100644
Binary files a/img/file-upload-en.png and b/img/file-upload-en.png differ
diff --git a/img/file-view-en.png b/img/file-view-en.png
index 6f81aaa..cbf7719 100644
Binary files a/img/file-view-en.png and b/img/file-view-en.png differ
diff --git a/img/file_create.png b/img/file_create.png
index 464b179..55abdf8 100755
Binary files a/img/file_create.png and b/img/file_create.png differ
diff --git a/img/file_rename.png b/img/file_rename.png
index bcbc6da..63ad8a6 100755
Binary files a/img/file_rename.png and b/img/file_rename.png differ
diff --git a/img/file_upload.png b/img/file_upload.png
index b2f36ea..1bb6dad 100755
Binary files a/img/file_upload.png and b/img/file_upload.png differ
diff --git a/img/flink.png b/img/flink.png
new file mode 100644
index 0000000..7d5cdff
Binary files /dev/null and b/img/flink.png differ
diff --git a/img/flink_edit.png b/img/flink_edit.png
new file mode 100644
index 0000000..afd1fa5
Binary files /dev/null and b/img/flink_edit.png differ
diff --git a/img/global_param.png b/img/global_param.png
new file mode 100644
index 0000000..379b8b7
Binary files /dev/null and b/img/global_param.png differ
diff --git a/img/global_parameter.png b/img/global_parameter.png
index 9fb415c..34572aa 100755
Binary files a/img/global_parameter.png and b/img/global_parameter.png differ
diff --git a/img/global_parameters_en.png b/img/global_parameters_en.png
index 88d6ac2..d2a84af 100644
Binary files a/img/global_parameters_en.png and b/img/global_parameters_en.png differ
diff --git a/img/hell_dag.png b/img/hell_dag.png
new file mode 100644
index 0000000..a0599b4
Binary files /dev/null and b/img/hell_dag.png differ
diff --git a/img/hive-en.png b/img/hive-en.png
index f1bf032..01c54ee 100644
Binary files a/img/hive-en.png and b/img/hive-en.png differ
diff --git a/img/hive_edit.png b/img/hive_edit.png
index 50d0eed..2a419d0 100755
Binary files a/img/hive_edit.png and b/img/hive_edit.png differ
diff --git a/img/hive_edit2.png b/img/hive_edit2.png
index 789d65f..8edf932 100755
Binary files a/img/hive_edit2.png and b/img/hive_edit2.png differ
diff --git a/img/home.png b/img/home.png
new file mode 100644
index 0000000..5b5ce0a
Binary files /dev/null and b/img/home.png differ
diff --git a/img/home_en.png b/img/home_en.png
new file mode 100644
index 0000000..efaf2b8
Binary files /dev/null and b/img/home_en.png differ
diff --git a/img/http.png b/img/http.png
new file mode 100644
index 0000000..c96781c
Binary files /dev/null and b/img/http.png differ
diff --git a/img/http_edit.png b/img/http_edit.png
new file mode 100644
index 0000000..f3d4aaa
Binary files /dev/null and b/img/http_edit.png differ
diff --git a/img/incubator-dolphinscheduler-1.1.0.png b/img/incubator-dolphinscheduler-1.1.0.png
new file mode 100644
index 0000000..63129a0
Binary files /dev/null and b/img/incubator-dolphinscheduler-1.1.0.png differ
diff --git a/img/instanceViewLog.png b/img/instanceViewLog.png
new file mode 100644
index 0000000..dfc8a56
Binary files /dev/null and b/img/instanceViewLog.png differ
diff --git a/img/java-program-en.png b/img/java-program-en.png
index f28a141..30cde03 100644
Binary files a/img/java-program-en.png and b/img/java-program-en.png differ
diff --git a/img/line.png b/img/line.png
new file mode 100644
index 0000000..50cae52
Binary files /dev/null and b/img/line.png differ
diff --git a/img/local_parameter.png b/img/local_parameter.png
index 1eac919..63129a0 100755
Binary files a/img/local_parameter.png and b/img/local_parameter.png differ
diff --git a/img/login.jpg b/img/login.jpg
deleted file mode 100755
index b6574e1..0000000
Binary files a/img/login.jpg and /dev/null differ
diff --git a/img/login.png b/img/login.png
new file mode 100644
index 0000000..d406085
Binary files /dev/null and b/img/login.png differ
diff --git a/img/login_en.png b/img/login_en.png
index 4441500..a134738 100644
Binary files a/img/login_en.png and b/img/login_en.png differ
diff --git a/img/mail_edit.png b/img/mail_edit.png
index a7ca3f7..13445fa 100755
Binary files a/img/mail_edit.png and b/img/mail_edit.png differ
diff --git a/img/mr_edit.png b/img/mr_edit.png
index 1fa8549..854dea7 100755
Binary files a/img/mr_edit.png and b/img/mr_edit.png differ
diff --git a/img/mr_java.png b/img/mr_java.png
new file mode 100644
index 0000000..af21b3c
Binary files /dev/null and b/img/mr_java.png differ
diff --git a/img/mysql-en.png b/img/mysql-en.png
index 5f8313d..8754f56 100644
Binary files a/img/mysql-en.png and b/img/mysql-en.png differ
diff --git a/img/mysql_edit.png b/img/mysql_edit.png
index 1ae75cb..d5ffb58 100755
Binary files a/img/mysql_edit.png and b/img/mysql_edit.png differ
diff --git a/img/node-setting-en.png b/img/node-setting-en.png
index 7f2dfd3..22b9b0c 100644
Binary files a/img/node-setting-en.png and b/img/node-setting-en.png differ
diff --git a/img/online.png b/img/online.png
new file mode 100644
index 0000000..2a723fc
Binary files /dev/null and b/img/online.png differ
diff --git a/img/postgresql_edit.png b/img/postgresql_edit.png
index 79c1eec..2b70d1d 100755
Binary files a/img/postgresql_edit.png and b/img/postgresql_edit.png differ
diff --git a/img/procedure_edit.png b/img/procedure_edit.png
index e6d31ab..470e8ca 100755
Binary files a/img/procedure_edit.png and b/img/procedure_edit.png differ
diff --git a/img/project-home.png b/img/project-home.png
new file mode 100644
index 0000000..25c5c73
Binary files /dev/null and b/img/project-home.png differ
diff --git a/img/python-en.png b/img/python-en.png
index 618072c..c0111e7 100644
Binary files a/img/python-en.png and b/img/python-en.png differ
diff --git a/img/python-program-en.png b/img/python-program-en.png
index 9273290..35b5dfb 100644
Binary files a/img/python-program-en.png and b/img/python-program-en.png differ
diff --git a/img/python_edit.png b/img/python_edit.png
index e2f6380..e24c81a 100755
Binary files a/img/python_edit.png and b/img/python_edit.png differ
diff --git a/img/redirect.png b/img/redirect.png
new file mode 100644
index 0000000..1eca997
Binary files /dev/null and b/img/redirect.png differ
diff --git a/img/run_params.png b/img/run_params.png
new file mode 100644
index 0000000..d8fc4fd
Binary files /dev/null and b/img/run_params.png differ
diff --git a/img/run_params_button.png b/img/run_params_button.png
new file mode 100644
index 0000000..b1b3a80
Binary files /dev/null and b/img/run_params_button.png differ
diff --git a/img/shell-en.png b/img/shell-en.png
index 7745176..4fc5034 100644
Binary files a/img/shell-en.png and b/img/shell-en.png differ
diff --git a/img/shell.png b/img/shell.png
new file mode 100644
index 0000000..b7359f3
Binary files /dev/null and b/img/shell.png differ
diff --git a/img/shell_dag.png b/img/shell_dag.png
new file mode 100644
index 0000000..a0599b4
Binary files /dev/null and b/img/shell_dag.png differ
diff --git a/img/shell_edit.png b/img/shell_edit.png
deleted file mode 100755
index 1fe8870..0000000
Binary files a/img/shell_edit.png and /dev/null differ
diff --git a/img/spark-submit-en.png b/img/spark-submit-en.png
index 2372405..f7a4308 100644
Binary files a/img/spark-submit-en.png and b/img/spark-submit-en.png differ
diff --git a/img/spark_datesource.png b/img/spark_datesource.png
index ac30d9f..14f621d 100755
Binary files a/img/spark_datesource.png and b/img/spark_datesource.png differ
diff --git a/img/spark_edit.png b/img/spark_edit.png
index b7c2321..c62c58b 100755
Binary files a/img/spark_edit.png and b/img/spark_edit.png differ
diff --git a/img/sql-node.png b/img/sql-node.png
index 97260ef..d179612 100644
Binary files a/img/sql-node.png and b/img/sql-node.png differ
diff --git a/img/sql-node2.png b/img/sql-node2.png
index 0163d5d..fe1896e 100644
Binary files a/img/sql-node2.png and b/img/sql-node2.png differ
diff --git a/img/start-process-en.png b/img/start-process-en.png
index e4cff8c..67bb390 100644
Binary files a/img/start-process-en.png and b/img/start-process-en.png differ
diff --git a/img/statistics-en.png b/img/statistics-en.png
new file mode 100644
index 0000000..a454343
Binary files /dev/null and b/img/statistics-en.png differ
diff --git a/img/sub-process-en.png b/img/sub-process-en.png
index f8d8e50..a95c885 100644
Binary files a/img/sub-process-en.png and b/img/sub-process-en.png differ
diff --git a/img/subprocess_edit.png b/img/subprocess_edit.png
index 6a2152a..67c921f 100755
Binary files a/img/subprocess_edit.png and b/img/subprocess_edit.png differ
diff --git a/img/task_history.png b/img/task_history.png
index 07f8ad6..9ff35d1 100755
Binary files a/img/task_history.png and b/img/task_history.png differ
diff --git a/img/time-schedule3.png b/img/time-schedule3.png
new file mode 100644
index 0000000..dc032b9
Binary files /dev/null and b/img/time-schedule3.png differ
diff --git a/img/timeManagement.png b/img/timeManagement.png
new file mode 100644
index 0000000..d9c3160
Binary files /dev/null and b/img/timeManagement.png differ
diff --git a/img/timer-en.png b/img/timer-en.png
index 72eab0a..b815c32 100644
Binary files a/img/timer-en.png and b/img/timer-en.png differ
diff --git a/img/timing-en.png b/img/timing-en.png
index a1642b7..6341d82 100644
Binary files a/img/timing-en.png and b/img/timing-en.png differ
diff --git a/img/timing.png b/img/timing.png
new file mode 100644
index 0000000..f364cf0
Binary files /dev/null and b/img/timing.png differ
diff --git a/img/token-en.png b/img/token-en.png
new file mode 100644
index 0000000..fea1b1f
Binary files /dev/null and b/img/token-en.png differ
diff --git a/img/tree.png b/img/tree.png
new file mode 100644
index 0000000..d9446c3
Binary files /dev/null and b/img/tree.png differ
diff --git a/img/udf-function.png b/img/udf-function.png
index 4c81761..b06a1a0 100644
Binary files a/img/udf-function.png and b/img/udf-function.png differ
diff --git a/img/udf_edit.png b/img/udf_edit.png
index eb5df04..e6fa212 100755
Binary files a/img/udf_edit.png and b/img/udf_edit.png differ
diff --git a/img/user-defined-en.png b/img/user-defined-en.png
index 33454b6..d9d9190 100644
Binary files a/img/user-defined-en.png and b/img/user-defined-en.png differ
diff --git a/img/user-defined1-en.png b/img/user-defined1-en.png
index 9f9e9f5..89145ff 100644
Binary files a/img/user-defined1-en.png and b/img/user-defined1-en.png differ
diff --git a/img/useredit2.png b/img/useredit2.png
index 0e9f5d7..dac4869 100755
Binary files a/img/useredit2.png and b/img/useredit2.png differ
diff --git a/img/work_list.png b/img/work_list.png
new file mode 100644
index 0000000..5cbf652
Binary files /dev/null and b/img/work_list.png differ
diff --git a/img/worker-group-en.png b/img/worker-group-en.png
index 48235bf..f31959b 100644
Binary files a/img/worker-group-en.png and b/img/worker-group-en.png differ
diff --git a/img/worker1.png b/img/worker1.png
index 03d4e00..acf491d 100644
Binary files a/img/worker1.png and b/img/worker1.png differ
diff --git a/img/worker_group.png b/img/worker_group.png
new file mode 100644
index 0000000..8c6a474
Binary files /dev/null and b/img/worker_group.png differ
diff --git a/img/worker_group_en.png b/img/worker_group_en.png
new file mode 100644
index 0000000..34cadeb
Binary files /dev/null and b/img/worker_group_en.png differ
diff --git a/img/zookeeper-en.png b/img/zookeeper-en.png
index f7d7f01..b982fd3 100644
Binary files a/img/zookeeper-en.png and b/img/zookeeper-en.png differ
diff --git a/zh-cn/docs/user_doc/quick-start.html b/zh-cn/docs/user_doc/quick-start.html
index e766651..4967781 100644
--- a/zh-cn/docs/user_doc/quick-start.html
+++ b/zh-cn/docs/user_doc/quick-start.html
@@ -21,7 +21,7 @@
 </li>
 </ul>
 <p align="center">
-   <img src="/img/login.jpg" width="60%" />
+   <img src="/img/login.png" width="60%" />
  </p>
 <ul>
 <li>Create a queue</li>
@@ -48,6 +48,18 @@
     <img src="/img/mail_edit.png" width="60%" />
   </p>
 <ul>
+<li>Create a Worker group</li>
+</ul>
+ <p align="center">
+    <img src="/img/worker_group.png" width="60%" />
+  </p>
+<ul>
+<li>Create a token</li>
+</ul>
+ <p align="center">
+    <img src="/img/creat_token.png" width="60%" />
+  </p>
+<ul>
 <li>Log in as a regular user</li>
 </ul>
 <blockquote>
diff --git a/zh-cn/docs/user_doc/quick-start.json b/zh-cn/docs/user_doc/quick-start.json
index f13003a..fa89deb 100644
--- a/zh-cn/docs/user_doc/quick-start.json
+++ b/zh-cn/docs/user_doc/quick-start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick-start.md",
-  "__html": "<h1>快速上手</h1>\n<ul>\n<li>管理员用户登录\n<blockquote>\n<p>地址:192.168.xx.xx:8888 用户名密码:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login.jpg\" width=\"60%\" />\n </p>\n<ul>\n<li>创建队列</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue.png\" width=\"60%\" />\n </p>\n<ul>\n<li>创建租户</li>\n</ul>\n   <p align=\"center\">\n    <img src=\"/img/addtenant.png\" width=\"60%\" />\n  </p>\n<ul>\n<li>创建普通用户</li>\n</ul>\n<p a [...]
+  "__html": "<h1>快速上手</h1>\n<ul>\n<li>管理员用户登录\n<blockquote>\n<p>地址:192.168.xx.xx:8888 用户名密码:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login.png\" width=\"60%\" />\n </p>\n<ul>\n<li>创建队列</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue.png\" width=\"60%\" />\n </p>\n<ul>\n<li>创建租户</li>\n</ul>\n   <p align=\"center\">\n    <img src=\"/img/addtenant.png\" width=\"60%\" />\n  </p>\n<ul>\n<li>创建普通用户</li>\n</ul>\n<p a [...]
   "link": "/zh-cn/docs/user_doc/quick-start.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/user_doc/system-manual.html b/zh-cn/docs/user_doc/system-manual.html
index f88861c..6371a9d 100644
--- a/zh-cn/docs/user_doc/system-manual.html
+++ b/zh-cn/docs/user_doc/system-manual.html
@@ -18,167 +18,360 @@
 <p>Please refer to the <a href="quick-start.html">Quick Start</a></p>
 </blockquote>
 <h2>Operational Guide</h2>
-<h3>Create a project</h3>
-<ul>
-<li>Click "Project Management -&gt; Create Project", enter the project name and description, and click "Submit" to create a new project.</li>
-<li>Click the project name to enter the project home page.</li>
-</ul>
+<h3>1. Home page</h3>
+<p>The home page contains the task status statistics, process status statistics, and workflow definition statistics of all the user's projects.
 <p align="center">
-   <img src="/img/project.png" width="60%" />
- </p>
-<blockquote>
-<p>The project home page contains the task status statistics, process status statistics, and workflow definition statistics</p>
-</blockquote>
+<img src="/img/home.png" width="80%" />
+</p></p>
+<h3>2. Project management</h3>
+<h4>2.1 Create a project</h4>
 <ul>
-<li>Task status statistics: within the specified time range, count the number of to-run, failed, running, completed, and successful task instances</li>
-<li>Process status statistics: within the specified time range, count the number of to-run, failed, running, completed, and successful workflow instances</li>
-<li>Workflow definition statistics: count the workflow definitions created by this user and the workflow definitions granted to this user by the administrator</li>
+<li>
+<p>Click &quot;Project Management&quot; to enter the project management page, click the &quot;Create Project&quot; button, enter the project name and description, and click &quot;Submit&quot; to create a new project.</p>
+<p align="center">
+    <img src="/img/project.png" width="80%" />
+</p>
+</li>
 </ul>
-<h3>Create a workflow definition</h3>
+<h4>2.2 Project home page</h4>
 <ul>
-<li>Enter the project home page and click "Workflow Definition" to enter the workflow definition list page.</li>
-<li>Click "Create Workflow" to create a new workflow definition.</li>
-<li>Drag a &quot;SHELL&quot; node onto the canvas to add a Shell task.</li>
-<li>Fill in the "Node Name", "Description", and "Script" fields.</li>
-<li>Select the "Task Priority": tasks with a higher priority execute first in the execution queue, and tasks with the same priority execute in first-in, first-out order.</li>
-<li>Timeout alarm: fill in the "Timeout Duration"; when the task execution time exceeds the <strong>timeout duration</strong>, an alarm can be raised and the task fails due to timeout.</li>
-<li>Fill in the &quot;Custom Parameters&quot;; refer to <a href="#%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%8F%82%E6%95%B0">Custom Parameters</a></li>
-</ul>
+<li>
+<p>On the project management page, click the project name link to enter the project home page. As shown below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of this project.</p>
 <p align="center">
-   <img src="/img/dag1.png" width="60%" />
- </p>
-<ul>
-<li>Add execution order between nodes: click "Line Connection"; as shown, Task 2 and Task 3 execute in parallel, and when Task 1 finishes, Tasks 2 and 3 execute at the same time.</li>
+   <img src="/img/project-home.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Task status statistics: within the specified time range, count the number of task instances in the states submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, successful, need fault tolerance, kill, and waiting for thread</p>
+</li>
+<li>
+<p>Process status statistics: within the specified time range, count the number of workflow instances in the states submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, successful, need fault tolerance, kill, and waiting for thread</p>
+</li>
+<li>
+<p>Workflow definition statistics: count the workflow definitions created by the user and the workflow definitions granted to the user by the administrator</p>
+</li>
 </ul>
-<p align="center">
-   <img src="/img/dag2.png" width="60%" />
- </p>
+<h4>2.3 Workflow definition</h4>
+<h4><span id=creatDag>2.3.1 Create a workflow definition</span></h4>
 <ul>
-<li>Delete a dependency: click the arrow icon "Drag nodes and select items", select the connection line, and click the delete icon to delete the dependency between nodes.</li>
+<li>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, and click the &quot;Create Workflow&quot; button to enter the <strong>workflow DAG edit</strong> page, as shown below:<p align="center">
+    <img src="/img/dag0.png" width="80%" />
+</p>  
+</li>
+<li>Drag <img src="/img/shell.png" width="35"/> from the toolbar onto the canvas to add a Shell task, as shown below:<p align="center">
+    <img src="/img/shell_dag.png" width="80%" />
+</p>  
+</li>
+<li><strong>Parameter settings for adding a shell task:</strong></li>
 </ul>
+<ol>
+<li>Fill in the "Node Name", "Description", and "Script" fields;</li>
+<li>For the "Run Flag", check "Normal"; if "Prohibit Execution" is checked, the task will not be executed when the workflow runs;</li>
+<li>Select the "Task Priority": when the number of worker threads is insufficient, tasks with a higher priority execute first in the execution queue, and tasks with the same priority execute in first-in, first-out order;</li>
+<li>Timeout alarm (optional): check timeout alarm and timeout failure and fill in the "Timeout Duration"; when the task execution time exceeds the <strong>timeout duration</strong>, an alarm email is sent and the task fails due to timeout;</li>
+<li>Resources (optional). Resource files are files created or uploaded on the Resource Center -&gt; File Management page. For example, for a file named <code>test.sh</code>, the command to call the resource in the script is <code>sh test.sh</code>;</li>
+<li>Custom parameters (optional); refer to <a href="#UserDefinedParameters">Custom Parameters</a>;</li>
+<li>Click the &quot;Confirm Add&quot; button to save the task settings.</li>
+</ol>
+<ul>
+<li>
+<p><strong>Add execution order between tasks:</strong> click the icon <img src="/img/line.png" width="35"/> in the upper right corner to connect tasks; as shown below, Task 2 and Task 3 execute in parallel, and when Task 1 finishes, Tasks 2 and 3 execute at the same time.</p>
+<p align="center">
+   <img src="/img/dag2.png" width="80%" />
+</p>
+</li>
+<li>
+<p><strong>Delete a dependency:</strong> click the &quot;arrow&quot; icon <img src="/img/arrow.png" width="35"/> in the upper right corner, select the connection line, and click the &quot;delete&quot; icon <img src="/img/delete.png" width="35"/> in the upper right corner to delete the dependency between tasks.</p>
+<p align="center">
+   <img src="/img/dag3.png" width="80%" />
+</p>
+</li>
+<li>
+<p><strong>Save a workflow definition:</strong> click the &quot;Save&quot; button, and the &quot;Set DAG name&quot; dialog pops up, as shown below. Enter the workflow definition name and description, set the global parameters (optional; refer to <a href="#UserDefinedParameters">Custom Parameters</a>), and click the &quot;Add&quot; button to create the workflow definition.</p>
 <p align="center">
-   <img src="/img/dag3.png" width="60%" />
+   <img src="/img/dag4.png" width="80%" />
  </p>
-<ul>
-<li>Click "Save", enter the workflow definition name and description, and set the global parameters; refer to <a href="#%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%8F%82%E6%95%B0">Custom Parameters</a>.</li>
+</li>
 </ul>
+<blockquote>
+<p>For other task types, please refer to <a href="#TaskParamers">Task node types and parameter settings</a>.</p>
+</blockquote>
+<h4>2.3.2  Workflow definition operations</h4>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown below:
 <p align="center">
-   <img src="/img/dag4.png" width="60%" />
- </p>
-<ul>
-<li>For other node types, refer to <a href="#%E4%BB%BB%E5%8A%A1%E8%8A%82%E7%82%B9%E7%B1%BB%E5%9E%8B%E5%92%8C%E5%8F%82%E6%95%B0%E8%AE%BE%E7%BD%AE">Task node types and parameter settings</a></li>
+<img src="/img/work_list.png" width="80%" />
+</p>
+The operations available in the workflow definition list are as follows:</p>
+<ul>
+<li><strong>Edit:</strong> only &quot;offline&quot; workflow definitions can be edited. Editing the workflow DAG is the same as <a href="#creatDag">creating a workflow definition</a>.</li>
+<li><strong>Online:</strong> when the workflow status is &quot;offline&quot;, bring the workflow online. Only &quot;online&quot; workflows can run, but they cannot be edited.</li>
+<li><strong>Offline:</strong> when the workflow status is &quot;online&quot;, take the workflow offline. Offline workflows can be edited, but cannot run.</li>
+<li><strong>Run:</strong> only online workflows can run. See <a href="#runWorkflow">2.3.3 Run a workflow</a> for the steps.</li>
+<li><strong>Timing:</strong> timing can only be set on online workflows, and the system then schedules the workflow to run automatically. The status after creating a timing is &quot;offline&quot;; the timing must be brought online on the timing management page to take effect. See <a href="#creatTiming">2.3.4 Workflow timing</a> for the steps.</li>
+<li><strong>Timing management:</strong> the timing management page supports editing, bringing online/offline, and deleting timings.</li>
+<li><strong>Delete:</strong> delete the workflow definition.</li>
+<li><strong>Download:</strong> download the workflow definition locally.</li>
+<li><strong>Tree view:</strong> display the task node types and task statuses in a tree structure, as shown below:<p align="center">
+    <img src="/img/tree.png" width="80%" />
+</p>  
+</li>
 </ul>
-<h3>Execute a workflow definition</h3>
+<h4><span id=runWorkflow>2.3.3 Run a workflow</span></h4>
 <ul>
-<li><strong>A workflow definition that is not online can be edited but cannot run</strong>, so bring the workflow online first</li>
+<li>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown below, and click the &quot;Online&quot; button <img src="/img/online.png" width="35"/> to bring the workflow online.</p>
+<p align="center">
+    <img src="/img/work_list.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Click the &quot;Run&quot; button, and the start parameter settings dialog pops up, as shown below. Set the start parameters and click the &quot;Run&quot; button in the dialog. The workflow starts running, and a workflow instance is generated on the workflow instance page.</p>
+ <p align="center">
+   <img src="/img/run-work.png" width="80%" />
+ </p>  
+</li>
 </ul>
+<p><span id=runParamers>Workflow run parameters:</span></p>
+<pre><code>* Failure strategy: when a task node fails, the strategy the other parallel task nodes follow. "Continue" means that after a task fails, the other task nodes execute normally; "End" means that all running tasks are terminated and the whole process is terminated.
+* Notification strategy: when the process ends, send a process execution notification email according to the process status: send none, send on success, send on failure, or send on success or failure.
+* Process priority: the priority of the process run, in five levels: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. When the number of master threads is insufficient, processes with a higher priority execute first in the execution queue, and processes with the same priority execute in first-in, first-out order.
+* Worker group: the process can only execute on the specified worker group. The default is Default, which can execute on any worker.
+* Notification group: when the notification strategy, timeout alarm, or fault tolerance applies, process information or emails are sent to all members of the notification group.
+* Recipients: when the notification strategy, timeout alarm, or fault tolerance applies, process information or alarm emails are sent to the recipient list.
+* Cc: when the notification strategy, timeout alarm, or fault tolerance applies, process information or alarm emails are copied to the cc list.
+* Complement: includes two modes, serial complement and parallel complement. Serial complement: within the specified date range, run the complement from the start date to the end date in sequence, generating one process instance; parallel complement: within the specified date range, run the complement for multiple days at the same time, generating N process instances. 
+</code></pre>
+<ul>
+<li>
+<p>Complement: run the workflow definition for specified dates. You can select the complement date range (currently only consecutive days are supported). For example, to backfill data from May 1 to May 10, as shown below:</p>
+<p align="center">
+    <img src="/img/complement.png" width="80%" />
+</p>
 <blockquote>
-<p>Click the workflow definition to return to the workflow definition list, and click the "Online" icon to bring the workflow definition online.</p>
+<p>Serial mode: the complement runs in sequence from May 1 to May 10, and one process instance is generated on the process instance page;</p>
 </blockquote>
 <blockquote>
-<p>When taking a workflow definition offline, take its timed tasks in timing management offline first, so that the workflow definition can be taken offline successfully</p>
+<p>Parallel mode: the tasks from May 1 to May 10 run at the same time, and ten process instances are generated on the process instance page.</p>
 </blockquote>
+</li>
+</ul>
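The date expansion behind serial vs. parallel complement can be sketched as follows (an illustrative Python sketch of the May 1 to May 10 example above, not the scheduler's code):

```python
from datetime import date, timedelta

def complement_dates(start: date, end: date):
    """Yield every date covered by a complement (backfill) run, inclusive."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

days = list(complement_dates(date(2019, 5, 1), date(2019, 5, 10)))
# Serial mode runs these 10 dates in sequence inside 1 process instance;
# parallel mode runs them at the same time as 10 process instances.
print(len(days))  # 10
```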
+<h4><span id=creatTiming>2.3.4 Workflow timing</span></h4>
 <ul>
-<li>Click "Run" to execute the workflow. Run parameters:
+<li>Create a timing: click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, bring the workflow online, click the &quot;Timing&quot; button <img src="/img/timing.png" width="35"/>, and the timing parameter settings dialog pops up, as shown below:<p align="center">
+    <img src="/img/time-schedule.png" width="80%" />
+</p>
+</li>
+<li>Select the start and end time. Within the start-end time range, the workflow runs on schedule; outside the range, no more scheduled workflow instances are generated.</li>
+<li>Add a timing that runs once every day at 5 AM, as shown below:<p align="center">
+    <img src="/img/time-schedule2.png" width="80%" />
+</p>
+</li>
+<li>失败策略、通知策略、流程优先级、Worker分组、通知组、收件人、抄送人同<a href="#runParamers">工作流运行参数</a>。</li>
+<li>点击&quot;创建&quot;按钮,创建定时成功,此时定时状态为&quot;<strong>下线</strong>&quot;,定时需<strong>上线</strong>才生效。</li>
+<li>定时上线:点击&quot;定时管理&quot;按钮<img src="/img/timeManagement.png" width="35"/>,进入定时管理页面,点击&quot;上线&quot;按钮,定时状态变为&quot;上线&quot;,如下图所示,工作流定时生效。<p align="center">
+    <img src="/img/time-schedule3.png" width="80%" />
+</p>
+</li>
+</ul>
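定时起止时间的判断行为,可以用下面的 Python 草图近似描述(仅为行为示意,真实触发逻辑以调度器源码为准):

```python
from datetime import datetime, timedelta

def next_run(after, start, end):
    """计算在起止时间范围内、"每天凌晨 5 点执行一次"定时的下一次触发时间;
    超出结束时间则返回 None,即不再产生定时工作流实例。"""
    candidate = after.replace(hour=5, minute=0, second=0, microsecond=0)
    # 向后推进,直到触发时间晚于当前时刻且进入起止时间范围
    while candidate <= after or candidate < start:
        candidate += timedelta(days=1)
    return candidate if candidate <= end else None

# 起止时间 2019-12-01 ~ 2019-12-31,当前时刻为 12 月 5 日早上 6 点
nxt = next_run(datetime(2019, 12, 5, 6, 0),
               datetime(2019, 12, 1), datetime(2019, 12, 31))
```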
+<h4>2.3.5 导入工作流</h4>
+<p>点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,点击&quot;导入工作流&quot;按钮,导入本地工作流文件,工作流定义列表显示导入的工作流,状态为下线。</p>
+<h4>2.4 工作流实例</h4>
+<h4>2.4.1 查看工作流实例</h4>
 <ul>
-<li>失败策略:<strong>当某一个任务节点执行失败时,其他并行的任务节点需要执行的策略</strong>。”继续“表示:其他任务节点正常执行,”结束“表示:终止所有正在执行的任务,并终止整个流程。</li>
-<li>通知策略:当流程结束,根据流程状态发送流程执行信息通知邮件。</li>
-<li>流程优先级:流程运行的优先级,分五个等级:最高(HIGHEST),高(HIGH),中(MEDIUM),低(LOW),最低(LOWEST)。级别高的流程在执行队列中会优先执行,相同优先级的流程按照先进先出的顺序执行。</li>
-<li>worker分组: 这个流程只能在指定的机器组里执行。默认是Default,可以在任一worker上执行。</li>
-<li>通知组: 当流程结束,或者发生容错时,会发送流程信息邮件到通知组里所有成员。</li>
-<li>收件人:输入邮箱后按回车键保存。当流程结束、发生容错时,会发送告警邮件到收件人列表。</li>
-<li>抄送人:输入邮箱后按回车键保存。当流程结束、发生容错时,会抄送告警邮件到抄送人列表。</li>
+<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,如下图所示:   <p align="center">
+      <img src="/img/instance-list.png" width="80%" />
+   </p>           
+</li>
+<li>点击工作流名称,进入DAG查看页面,查看任务执行状态,如下图所示。<p align="center">
+  <img src="/img/instance-detail.png" width="80%" />
+</p>
+</li>
 </ul>
+<h4>2.4.2 查看任务日志</h4>
+<ul>
+<li>进入工作流实例页面,点击工作流名称,进入DAG查看页面,双击任务节点,如下图所示: <p align="center">
+   <img src="/img/instanceViewLog.png" width="80%" />
+ </p>
+</li>
+<li>点击&quot;查看日志&quot;,弹出日志弹框,如下图所示,任务实例页面也可查看任务日志,参考<a href="#taskLog">任务查看日志</a>。 <p align="center">
+   <img src="/img/task-log.png" width="80%" />
+ </p>
 </li>
 </ul>
-  <p align="center">
-   <img src="/img/run-work.png" width="60%" />
+<h4>2.4.3 查看任务历史记录</h4>
+<ul>
+<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面;</li>
+<li>双击任务节点,如下图所示,点击&quot;查看历史&quot;,跳转到任务实例页面,并展示该工作流实例运行的任务实例列表 <p align="center">
+   <img src="/img/task_history.png" width="80%" />
  </p>
+</li>
+</ul>
+<h4>2.4.4 查看运行参数</h4>
 <ul>
-<li>补数: 执行指定日期的工作流定义,可以选择补数时间范围(目前只支持针对连续的天进行补数),比如要补5月1号到5月10号的数据,如图示:</li>
+<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面;</li>
+<li>点击左上角图标<img src="/img/run_params_button.png" width="35"/>,查看工作流实例的启动参数;点击图标<img src="/img/global_param.png" width="35"/>,查看工作流实例的全局参数和局部参数,如下图所示: <p align="center">
+   <img src="/img/run_params.png" width="80%" />
+ </p>      
+</li>
 </ul>
+<h4>2.4.5 工作流实例操作功能</h4>
+<p>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,如下图所示:</p>
 <p align="center">
-   <img src="/img/instance-list.png" width="60%" />
- </p>
+<img src="/img/instance-list.png" width="80%" />
+</p>
+<ul>
+<li><strong>编辑:</strong> 只能编辑已终止的流程。点击&quot;编辑&quot;按钮或工作流实例名称进入DAG编辑页面,编辑后点击&quot;保存&quot;按钮,弹出保存DAG弹框,如下图所示,在弹框中勾选&quot;是否更新到工作流定义&quot;,保存后则更新工作流定义;若不勾选,则不更新工作流定义。   <p align="center">
+     <img src="/img/editDag.png" width="80%" />
+   </p>
+</li>
+<li><strong>重跑:</strong> 重新执行已经终止的流程。</li>
+<li><strong>恢复失败:</strong> 针对失败的流程,可以执行恢复失败操作,从失败的节点开始执行。</li>
+<li><strong>停止:</strong> 对正在运行的流程进行<strong>停止</strong>操作,后台会先对worker进程执行<code>kill</code>,再执行<code>kill -9</code>操作。</li>
+<li><strong>暂停:</strong> 对正在运行的流程进行<strong>暂停</strong>操作,系统状态变为<strong>等待执行</strong>,会等待正在执行的任务结束,暂停下一个要执行的任务。</li>
+<li><strong>恢复暂停:</strong> 对暂停的流程恢复,直接从<strong>暂停的节点</strong>开始运行</li>
+<li><strong>删除:</strong> 删除工作流实例及工作流实例下的任务实例</li>
+<li><strong>甘特图:</strong> Gantt图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:   <p align="center">
+       <img src="/img/gant-pic.png" width="80%" />
+   </p>
+</li>
+</ul>
+<h4>2.5 任务实例</h4>
+<ul>
+<li>
+<p>点击项目管理-&gt;工作流-&gt;任务实例,进入任务实例页面,如下图所示,点击工作流实例名称,可跳转到工作流实例DAG图查看任务状态。</p>
+   <p align="center">
+      <img src="/img/task-list.png" width="80%" />
+   </p>
+</li>
+<li>
+<p><span id=taskLog>查看日志:</span>点击操作列中的“查看日志”按钮,可以查看任务执行的日志情况。</p>
+   <p align="center">
+      <img src="/img/task-log2.png" width="80%" />
+   </p>
+</li>
+</ul>
+<h3>3. 资源中心</h3>
+<h4>3.1 hdfs资源配置</h4>
+<ul>
+<li>上传资源文件和udf函数,所有上传的文件和资源都会被存储到hdfs上,所以需要以下配置项:</li>
+</ul>
+<pre><code>conf/common/common.properties  
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。&quot;/escheduler&quot; is recommended
+    data.store2hdfs.basepath=/dolphinscheduler
+    # resource upload startup type : HDFS,S3,NONE
+    res.upload.startup.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    
+conf/common/hadoop.properties      
+    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
+    # to the conf directory,support s3,for example : s3a://dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020    
+    #resourcemanager ha note this need ips , this empty if single
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
+    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
+</code></pre>
+<ul>
+<li>yarn.resourcemanager.ha.rm.ids与yarn.application.status.address只需配置其中一个地址,另一个地址配置为空。</li>
+<li>需要从Hadoop集群的conf目录下复制core-site.xml、hdfs-site.xml到dolphinscheduler项目的conf目录下,重启api-server服务。</li>
+</ul>
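上述 properties 配置的读取方式,可以用下面的 Python 草图示意(解析逻辑为通用写法,并非项目源码):

```python
def parse_properties(text):
    """解析 Java 风格的 properties 文本为 dict(忽略注释与空行)。"""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition('=')
        conf[key.strip()] = value.strip()
    return conf

sample = """
# resource upload startup type : HDFS,S3,NONE
res.upload.startup.type=HDFS
data.store2hdfs.basepath=/dolphinscheduler
"""
conf = parse_properties(sample)
```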
+<h4>3.2 文件管理</h4>
 <blockquote>
-<p>补数执行模式有<strong>串行执行、并行执行</strong>,串行模式下,补数会从5月1号到5月10号依次执行;并行模式下,会同时执行5月1号到5月10号的任务。</p>
+<p>是对各种资源文件的管理,包括创建基本的txt/log/sh/conf/py/java等文件、上传jar包等各种类型文件,可进行编辑、重命名、下载、删除等操作。</p>
 </blockquote>
-<h3>定时工作流定义</h3>
+  <p align="center">
+   <img src="/img/file-manage.png" width="80%" />
+ </p>
 <ul>
-<li>创建定时:&quot;工作流定义-&gt;定时”</li>
-<li>选择起止时间,在起止时间范围内,定时正常工作,超过范围,就不会再继续产生定时工作流实例了。</li>
+<li>创建文件</li>
 </ul>
+<blockquote>
+<p>文件格式支持以下几种类型:txt、log、sh、conf、cfg、py、java、sql、xml、hql、properties</p>
+</blockquote>
 <p align="center">
-   <img src="/img/time-schedule.png" width="60%" />
+   <img src="/img/file_create.png" width="80%" />
  </p>
 <ul>
-<li>添加一个每天凌晨5点执行一次的定时,如图示:</li>
+<li>上传文件</li>
 </ul>
+<blockquote>
+<p>上传文件:点击&quot;上传文件&quot;按钮进行上传,将文件拖拽到上传区域,文件名会自动以上传的文件名称补全</p>
+</blockquote>
 <p align="center">
-   <img src="/img/time-schedule2.png" width="60%" />
+   <img src="/img/file_upload.png" width="80%" />
  </p>
 <ul>
-<li>定时上线,<strong>新创建的定时是下线状态,需要点击“定时管理-&gt;上线”,定时才能正常工作</strong>。</li>
+<li>文件查看</li>
 </ul>
-<h3>查看工作流实例</h3>
 <blockquote>
-<p>点击“工作流实例”,查看工作流实例列表。</p>
+<p>对可查看的文件类型,点击文件名称,可查看文件详情</p>
 </blockquote>
+<p align="center">
+   <img src="/img/file_detail.png" width="80%" />
+ </p>
+<ul>
+<li>下载文件</li>
+</ul>
 <blockquote>
-<p>点击工作流名称,查看任务执行状态。</p>
+<p>点击文件列表的&quot;下载&quot;按钮下载文件或者在文件详情中点击右上角&quot;下载&quot;按钮下载文件</p>
 </blockquote>
-  <p align="center">
-   <img src="/img/instance-detail.png" width="60%" />
+<ul>
+<li>文件重命名</li>
+</ul>
+<p align="center">
+   <img src="/img/file_rename.png" width="80%" />
  </p>
+<ul>
+<li>删除</li>
+</ul>
 <blockquote>
-<p>点击任务节点,点击“查看日志”,查看任务执行日志。</p>
+<p>文件列表-&gt;点击&quot;删除&quot;按钮,删除指定文件</p>
 </blockquote>
-  <p align="center">
-   <img src="/img/task-log.png" width="60%" />
- </p>
+<h4>3.3 UDF管理</h4>
+<h4>3.3.1 资源管理</h4>
 <blockquote>
-<p>点击任务实例节点,点击<strong>查看历史</strong>,可以查看该工作流实例运行的该任务实例列表</p>
+<p>资源管理和文件管理功能类似,不同之处是资源管理是上传的UDF函数,文件管理上传的是用户程序,脚本及配置文件
+操作功能:重命名、下载、删除。</p>
 </blockquote>
- <p align="center">
-    <img src="/img/task_history.png" width="60%" />
-  </p>
+<ul>
+<li>上传udf资源</li>
+</ul>
 <blockquote>
-<p>对工作流实例的操作:</p>
+<p>和上传文件相同。</p>
 </blockquote>
-<p align="center">
-   <img src="/img/instance-list.png" width="60%" />
-</p>
+<h4>3.3.2 函数管理</h4>
 <ul>
-<li>编辑:可以对已经终止的流程进行编辑,编辑后保存的时候,可以选择是否更新到工作流定义。</li>
-<li>重跑:可以对已经终止的流程进行重新执行。</li>
-<li>恢复失败:针对失败的流程,可以执行恢复失败操作,从失败的节点开始执行。</li>
-<li>停止:对正在运行的流程进行<strong>停止</strong>操作,后台会先对worker进程<code>kill</code>,再执行<code>kill -9</code>操作</li>
-<li>暂停:可以对正在运行的流程进行<strong>暂停</strong>操作,系统状态变为<strong>等待执行</strong>,会等待正在执行的任务结束,暂停下一个要执行的任务。</li>
-<li>恢复暂停:可以对暂停的流程恢复,直接从<strong>暂停的节点</strong>开始运行</li>
-<li>删除:删除工作流实例及工作流实例下的任务实例</li>
-<li>甘特图:Gantt图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:</li>
+<li>创建udf函数</li>
 </ul>
-<p align="center">
-   <img src="/img/gant-pic.png" width="60%" />
-</p>
-<h3>查看任务实例</h3>
 <blockquote>
-<p>点击“任务实例”,进入任务列表页,查询任务执行情况</p>
+<p>点击“创建UDF函数”,输入udf函数参数,选择udf资源,点击“提交”,创建udf函数。</p>
 </blockquote>
-<p align="center">
-   <img src="/img/task-list.png" width="60%" />
-</p>
 <blockquote>
-<p>点击操作列中的“查看日志”,可以查看任务执行的日志情况。</p>
+<p>目前只支持HIVE的临时UDF函数</p>
 </blockquote>
+<ul>
+<li>UDF函数名称:输入UDF函数时的名称</li>
+<li>包名类名:输入UDF函数的全路径</li>
+<li>UDF资源:设置创建的UDF对应的资源文件</li>
+</ul>
 <p align="center">
-   <img src="/img/task-log2.png" width="60%" />
-</p>
-<h3>创建数据源</h3>
+   <img src="/img/udf_edit.png" width="80%" />
+ </p>
+<h3>4. 创建数据源</h3>
 <blockquote>
-<p>数据源中心支持MySQL、POSTGRESQL、HIVE及Spark等数据源</p>
+<p>数据源中心支持MySQL、POSTGRESQL、HIVE/IMPALA、SPARK、CLICKHOUSE、ORACLE、SQLSERVER等数据源</p>
 </blockquote>
-<h4>创建、编辑MySQL数据源</h4>
+<h4>4.1 创建/编辑MySQL数据源</h4>
 <ul>
 <li>
 <p>点击“数据源中心-&gt;创建数据源”,根据需求创建不同类型的数据源。</p>
@@ -193,7 +386,7 @@
 <p>描述:输入数据源的描述</p>
 </li>
 <li>
-<p>IP/主机名:输入连接MySQL的IP</p>
+<p>IP主机名:输入连接MySQL的IP</p>
 </li>
 <li>
 <p>端口:输入连接MySQL的端口</p>
@@ -212,12 +405,12 @@
 </li>
 </ul>
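Jdbc连接参数需以 JSON 形式填写。下面用一小段 Python 示意其写法(键值仅为常见示例,具体可用的参数以所用 MySQL JDBC 驱动文档为准):

```python
import json

# 以 JSON 形式填写的 Jdbc 连接参数示例(键值为假设示例)
jdbc_params = {"useUnicode": "true", "characterEncoding": "utf8"}
jdbc_params_text = json.dumps(jdbc_params)
```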
 <p align="center">
-   <img src="/img/mysql_edit.png" width="60%" />
+   <img src="/img/mysql_edit.png" width="80%" />
  </p>
 <blockquote>
 <p>点击“测试连接”,测试数据源是否可以连接成功。</p>
 </blockquote>
-<h4>创建、编辑POSTGRESQL数据源</h4>
+<h4>4.2 创建/编辑POSTGRESQL数据源</h4>
 <ul>
 <li>数据源:选择POSTGRESQL</li>
 <li>数据源名称:输入数据源的名称</li>
@@ -230,12 +423,12 @@
 <li>Jdbc连接参数:用于POSTGRESQL连接的参数设置,以JSON形式填写</li>
 </ul>
 <p align="center">
-   <img src="/img/postgresql_edit.png" width="60%" />
+   <img src="/img/postgresql_edit.png" width="80%" />
  </p>
-<h4>创建、编辑HIVE数据源</h4>
+<h4>4.3 创建/编辑HIVE数据源</h4>
 <p>1.使用HiveServer2方式连接</p>
  <p align="center">
-    <img src="/img/hive_edit.png" width="60%" />
+    <img src="/img/hive_edit.png" width="80%" />
   </p>
 <ul>
 <li>数据源:选择HIVE</li>
@@ -250,15 +443,15 @@
 </ul>
 <p>2.使用HiveServer2 HA Zookeeper方式连接</p>
  <p align="center">
-    <img src="/img/hive_edit2.png" width="60%" />
+    <img src="/img/hive_edit2.png" width="80%" />
   </p>
 <p>注意:如果开启了<strong>kerberos</strong>,则需要填写 <strong>Principal</strong></p>
 <p align="center">
-    <img src="/img/hive_kerberos.png" width="60%" />
+    <img src="/img/hive_kerberos.png" width="80%" />
   </p>
-<h4>创建、编辑Spark数据源</h4>
+<h4>4.4 创建/编辑Spark数据源</h4>
 <p align="center">
-   <img src="/img/spark_datesource.png" width="60%" />
+   <img src="/img/spark_datesource.png" width="80%" />
  </p>
 <ul>
 <li>数据源:选择Spark</li>
@@ -273,153 +466,92 @@
 </ul>
 <p>注意:如果开启了<strong>kerberos</strong>,则需要填写 <strong>Principal</strong></p>
 <p align="center">
-    <img src="/img/sparksql_kerberos.png" width="60%" />
+    <img src="/img/sparksql_kerberos.png" width="80%" />
   </p>
-<h3>上传资源</h3>
-<ul>
-<li>上传资源文件和udf函数,所有上传的文件和资源都会被存储到hdfs上,所以需要以下配置项:</li>
-</ul>
-<pre><code>conf/common/common.properties
-    -- hdfs.startup.state=true
-conf/common/hadoop.properties  
-    -- fs.defaultFS=hdfs://xxxx:8020  
-    -- yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-    -- yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+<h3>5. 安全中心(权限系统)</h3>
+<pre><code> * 安全中心只有管理员账户才有权限操作,分别有队列管理、租户管理、用户管理、告警组管理、worker分组管理、令牌管理等功能,在用户管理模块可以对资源、数据源、项目等授权
+ * 管理员登录,默认用户名密码:admin/dolphinscheduler123
 </code></pre>
-<h4>文件管理</h4>
-<blockquote>
-<p>是对各种资源文件的管理,包括创建基本的txt/log/sh/conf等文件、上传jar包等各种类型文件,以及编辑、下载、删除等操作。</p>
-</blockquote>
-  <p align="center">
-   <img src="/img/file-manage.png" width="60%" />
- </p>
+<h4>5.1 创建队列</h4>
 <ul>
-<li>创建文件</li>
-</ul>
-<blockquote>
-<p>文件格式支持以下几种类型:txt、log、sh、conf、cfg、py、java、sql、xml、hql</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_create.png" width="60%" />
- </p>
-<ul>
-<li>上传文件</li>
+<li>队列是在执行spark、mapreduce等程序,需要用到“队列”参数时使用的。</li>
+<li>管理员进入安全中心-&gt;队列管理页面,点击“创建队列”按钮,创建队列。</li>
 </ul>
-<blockquote>
-<p>上传文件:点击上传按钮进行上传,将文件拖拽到上传区域,文件名会自动以上传的文件名称补全</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_upload.png" width="60%" />
- </p>
+ <p align="center">
+    <img src="/img/create-queue.png" width="80%" />
+  </p>
+<h4>5.2 添加租户</h4>
 <ul>
-<li>文件查看</li>
+<li>租户对应的是Linux的用户,用于worker提交作业所使用的用户。如果linux没有这个用户,worker会在执行脚本的时候创建这个用户。</li>
+<li>租户编码:<strong>租户编码是Linux上的用户,唯一,不能重复</strong></li>
+<li>管理员进入安全中心-&gt;租户管理页面,点击“创建租户”按钮,创建租户。</li>
 </ul>
-<blockquote>
-<p>对可查看的文件类型,点击 文件名称 可以查看文件详情</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_detail.png" width="60%" />
- </p>
+ <p align="center">
+    <img src="/img/addtenant.png" width="80%" />
+  </p>
+<h4>5.3 创建普通用户</h4>
 <ul>
-<li>下载文件</li>
+<li>用户分为<strong>管理员用户</strong>和<strong>普通用户</strong></li>
 </ul>
-<blockquote>
-<p>可以在 文件详情 中点击右上角下载按钮下载文件,或者在文件列表后的下载按钮下载文件</p>
-</blockquote>
+<pre><code>* 管理员有授权和用户管理等权限,没有创建项目和工作流定义等操作的权限。
+* 普通用户可以创建项目和对工作流定义的创建,编辑,执行等操作。
+* 注意:如果该用户切换了租户,则该用户所在租户下所有资源将复制到切换的新租户下。
+</code></pre>
 <ul>
-<li>文件重命名</li>
+<li>管理员进入安全中心-&gt;用户管理页面,点击“创建用户”按钮,创建用户。</li>
 </ul>
 <p align="center">
-   <img src="/img/file_rename.png" width="60%" />
+   <img src="/img/useredit2.png" width="80%" />
  </p>
-<h4>删除</h4>
 <blockquote>
-<p>文件列表-&gt;点击&quot;删除&quot;按钮,删除指定文件</p>
-</blockquote>
-<h4>资源管理</h4>
-<blockquote>
-<p>资源管理和文件管理功能类似,不同之处是资源管理是上传的UDF函数,文件管理上传的是用户程序,脚本及配置文件</p>
-</blockquote>
-<ul>
-<li>上传udf资源</li>
-</ul>
-<blockquote>
-<p>和上传文件相同。</p>
+<p><strong>编辑用户信息</strong></p>
 </blockquote>
-<h4>函数管理</h4>
 <ul>
-<li>创建udf函数</li>
+<li>管理员进入安全中心-&gt;用户管理页面,点击&quot;编辑&quot;按钮,编辑用户信息。</li>
+<li>普通用户登录后,点击用户名下拉框中的用户信息,进入用户信息页面,点击&quot;编辑&quot;按钮,编辑用户信息。</li>
 </ul>
 <blockquote>
-<p>点击“创建UDF函数”,输入udf函数参数,选择udf资源,点击“提交”,创建udf函数。</p>
+<p><strong>修改用户密码</strong></p>
 </blockquote>
-<blockquote>
-<p>目前只支持HIVE的临时UDF函数</p>
-</blockquote>
-<ul>
-<li>UDF函数名称:输入UDF函数时的名称</li>
-<li>包名类名:输入UDF函数的全路径</li>
-<li>参数:用来标注函数的输入参数</li>
-<li>数据库名:预留字段,用于创建永久UDF函数</li>
-<li>UDF资源:设置创建的UDF对应的资源文件</li>
-</ul>
-<p align="center">
-   <img src="/img/udf_edit.png" width="60%" />
- </p>
-<h2>安全中心(权限系统)</h2>
-<ul>
-<li>安全中心是只有管理员账户才有权限的功能,有队列管理、租户管理、用户管理、告警组管理、worker分组、令牌管理等功能,还可以对资源、数据源、项目等授权</li>
-<li>管理员登录,默认用户名密码:admin/escheduler123</li>
-</ul>
-<h3>创建队列</h3>
-<ul>
-<li>队列是在执行spark、mapreduce等程序,需要用到“队列”参数时使用的。</li>
-<li>“安全中心”-&gt;“队列管理”-&gt;“创建队列”</li>
-</ul>
- <p align="center">
-    <img src="/img/create-queue.png" width="60%" />
-  </p>
-<h3>添加租户</h3>
-<ul>
-<li>租户对应的是Linux的用户,用于worker提交作业所使用的用户。如果linux没有这个用户,worker会在执行脚本的时候创建这个用户。</li>
-<li>租户编码:<strong>租户编码是Linux上的用户,唯一,不能重复</strong></li>
-</ul>
- <p align="center">
-    <img src="/img/addtenant.png" width="60%" />
-  </p>
-<h3>创建普通用户</h3>
 <ul>
-<li>用户分为<strong>管理员用户</strong>和<strong>普通用户</strong></li>
+<li>管理员进入安全中心-&gt;用户管理页面,点击&quot;编辑&quot;按钮,编辑用户信息时,输入新密码修改用户密码。</li>
+<li>普通用户登录后,点击用户名下拉框中的用户信息,进入修改密码页面,输入密码并确认密码后点击&quot;编辑&quot;按钮,则修改密码成功。</li>
 </ul>
-<pre><code>* 管理员有**授权和用户管理**等权限,没有**创建项目和工作流定义**的操作的权限
-* 普通用户可以**创建项目和对工作流定义的创建,编辑,执行**等操作。
-* 注意:**如果该用户切换了租户,则该用户所在租户下所有资源将复制到切换的新租户下**
-</code></pre>
-<p align="center">
-   <img src="/img/useredit2.png" width="60%" />
- </p>
-<h3>创建告警组</h3>
+<h4>5.4 创建告警组</h4>
 <ul>
 <li>告警组是在启动时设置的参数,在流程结束以后会将流程的状态和其他信息以邮件形式发送给告警组。</li>
 </ul>
 <ul>
-<li>新建、编辑告警组</li>
+<li>管理员进入安全中心-&gt;告警组管理页面,点击“创建告警组”按钮,创建告警组。</li>
 </ul>
   <p align="center">
-    <img src="/img/mail_edit.png" width="60%" />
+    <img src="/img/mail_edit.png" width="80%" />
   </p>
-<h3>创建worker分组</h3>
+<h4>5.5 创建worker分组</h4>
 <ul>
 <li>worker分组,提供了一种让任务在指定的worker上运行的机制。管理员创建worker分组,在任务节点和运行参数中设置中可以指定该任务运行的worker分组,如果指定的分组被删除或者没有指定分组,则该任务会在任一worker上运行。</li>
-<li>worker分组内多个ip地址(<strong>不能写别名</strong>),以<strong>英文逗号</strong>分隔</li>
+<li>管理员进入安全中心-&gt;Worker分组管理页面,点击“创建Worker分组”按钮,创建Worker分组。worker分组内有多个ip地址(<strong>不能写别名</strong>),以<strong>英文逗号</strong>分隔。</li>
 </ul>
   <p align="center">
-    <img src="/img/worker1.png" width="60%" />
+    <img src="/img/worker1.png" width="80%" />
   </p>
-<h3>令牌管理</h3>
+<h4>5.6 令牌管理</h4>
+<blockquote>
+<p>由于后端接口有登录检查,令牌管理提供了一种通过令牌调用接口对系统进行各种操作的方式。</p>
+</blockquote>
 <ul>
-<li>由于后端接口有登录检查,令牌管理,提供了一种可以通过调用接口的方式对系统进行各种操作。</li>
-<li>调用示例:</li>
+<li>管理员进入安全中心-&gt;令牌管理页面,点击“创建令牌”按钮,选择失效时间与用户,点击&quot;生成令牌&quot;按钮,点击&quot;提交&quot;按钮,即可为所选用户创建token。</li>
+</ul>
+  <p align="center">
+      <img src="/img/creat_token.png" width="80%" />
+   </p>
+<ul>
+<li>
+<p>普通用户登录后,点击用户名下拉框中的用户信息,进入令牌管理页面,选择失效时间,点击&quot;生成令牌&quot;按钮,点击&quot;提交&quot;按钮,则该用户创建token成功。</p>
+</li>
+<li>
+<p>调用示例:</p>
+</li>
 </ul>
 <pre><code class="language-令牌调用示例">    /**
      * test token
@@ -454,107 +586,144 @@ conf/common/hadoop.properties
         }
     }
 </code></pre>
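在上述 Java 示例之外,也可以用任意 HTTP 客户端携带令牌调用接口。下面的 Python 草图仅构造请求对象演示调用方式,不真正发起网络请求(请求头名 token 与 URL 均为假设示例,以实际部署的接口文档为准):

```python
import urllib.request

# 令牌管理页面生成的 token(示例值,非真实令牌)
token = "example-token-value"

# 假设 token 通过名为 "token" 的请求头传递,URL 仅为示意
req = urllib.request.Request(
    "http://localhost:12345/dolphinscheduler/projects/query-project-list",
    headers={"token": token},
)
```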
-<h3>授予权限</h3>
-<ul>
-<li>授予权限包括项目权限,资源权限,数据源权限,UDF函数权限。</li>
-</ul>
-<blockquote>
-<p>管理员可以对普通用户进行非其创建的项目、资源、数据源和UDF函数进行授权。因为项目、资源、数据源和UDF函数授权方式都是一样的,所以以项目授权为例介绍。</p>
-</blockquote>
-<blockquote>
-<p>注意:<strong>对于用户自己创建的项目,该用户拥有所有的权限。则项目列表和已选项目列表中不会体现</strong></p>
-</blockquote>
+<h4>5.7 授予权限</h4>
+<pre><code>* 授予权限包括项目权限,资源权限,数据源权限,UDF函数权限。
+* 管理员可以对普通用户进行非其创建的项目、资源、数据源和UDF函数进行授权。因为项目、资源、数据源和UDF函数授权方式都是一样的,所以以项目授权为例介绍。
+* 注意:对于用户自己创建的项目,该用户拥有所有的权限,因此不会在项目列表和已选项目列表中显示。
+</code></pre>
 <ul>
-<li>1.点击指定人的授权按钮,如下图:</li>
+<li>管理员进入安全中心-&gt;用户管理页面,点击需授权用户的“授权”按钮,如下图所示:</li>
 </ul>
   <p align="center">
-   <img src="/img/auth_user.png" width="60%" />
+   <img src="/img/auth_user.png" width="80%" />
  </p>
 <ul>
-<li>2.选中项目按钮,进行项目授权</li>
+<li>选择项目,进行项目授权。</li>
 </ul>
 <p align="center">
-   <img src="/img/auth_project.png" width="60%" />
+   <img src="/img/auth_project.png" width="80%" />
  </p>
-<h2>监控中心</h2>
-<h3>服务管理</h3>
+<ul>
+<li>资源、数据源、UDF函数授权同项目授权。</li>
+</ul>
+<h3>6. 监控中心</h3>
+<h4>6.1 服务管理</h4>
 <ul>
 <li>服务管理主要是对系统中的各个服务的健康状况和基本信息的监控和显示</li>
 </ul>
-<h4>master监控</h4>
+<h4>6.1.1 master监控</h4>
 <ul>
 <li>主要是master的相关信息。</li>
 </ul>
 <p align="center">
-   <img src="/img/master-jk.png" width="60%" />
+   <img src="/img/master-jk.png" width="80%" />
  </p>
-<h4>worker监控</h4>
+<h4>6.1.2 worker监控</h4>
 <ul>
 <li>主要是worker的相关信息。</li>
 </ul>
 <p align="center">
-   <img src="/img/worker-jk.png" width="60%" />
+   <img src="/img/worker-jk.png" width="80%" />
  </p>
-<h4>Zookeeper监控</h4>
+<h4>6.1.3 Zookeeper监控</h4>
 <ul>
 <li>主要是zookpeeper中各个worker和master的相关配置信息。</li>
 </ul>
 <p align="center">
-   <img src="/img/zk-jk.png" width="60%" />
+   <img src="/img/zk-jk.png" width="80%" />
  </p>
-<h4>DB监控</h4>
+<h4>6.1.4 DB监控</h4>
 <ul>
 <li>主要是DB的健康状况</li>
 </ul>
 <p align="center">
-   <img src="/img/mysql-jk.png" width="60%" />
+   <img src="/img/mysql-jk.png" width="80%" />
+ </p>
+<h4>6.2 统计管理</h4>
+<p align="center">
+   <img src="/img/Statistics.png" width="80%" />
  </p>
-<h2>任务节点类型和参数设置</h2>
-<h3>Shell节点</h3>
 <ul>
-<li>shell节点,在worker执行的时候,会生成一个临时shell脚本,使用租户同名的linux用户执行这个脚本。</li>
+<li>待执行命令数:统计t_ds_command表的数据</li>
+<li>执行失败的命令数:统计t_ds_error_command表的数据</li>
+<li>待运行任务数:统计zookeeper中task_queue的数据</li>
+<li>待杀死任务数:统计zookeeper中task_kill的数据</li>
 </ul>
+<h3>7. <span id=TaskParamers>任务节点类型和参数设置</span></h3>
+<h4>7.1 Shell节点</h4>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SHELL.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>shell节点,在worker执行的时候,会生成一个临时shell脚本,使用租户同名的linux用户执行这个脚本。</p>
 </blockquote>
-<p align="center">
-   <img src="/img/shell_edit.png" width="60%" />
- </p>
 <ul>
-<li>节点名称:一个工作流定义中的节点名称是唯一的</li>
-<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
-<li>描述信息:描述该节点的功能</li>
-<li>失败重试次数:任务失败重新提交的次数,支持下拉和手填</li>
-<li>失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填</li>
-<li>脚本:用户开发的SHELL程序</li>
-<li>资源:是指脚本中需要调用的资源文件列表</li>
-<li>自定义参数:是SHELL局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
+<li>
+<p>点击项目管理-项目名称-工作流定义,点击&quot;创建工作流&quot;按钮,进入DAG编辑页面。</p>
+</li>
+<li>
+<p>工具栏中拖动<img src="/img/shell.png" width="35"/>到画板中,如下图所示:</p>
+<p align="center">
+    <img src="/img/shell_dag.png" width="80%" />
+</p> 
+</li>
+<li>
+<p>节点名称:一个工作流定义中的节点名称是唯一的。</p>
+</li>
+<li>
+<p>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</p>
+</li>
+<li>
+<p>描述信息:描述该节点的功能。</p>
+</li>
+<li>
+<p>任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</p>
+</li>
+<li>
+<p>Worker分组:任务分配给worker组的机器执行,选择Default,会随机选择一台worker机执行。</p>
+</li>
+<li>
+<p>失败重试次数:任务失败重新提交的次数,支持下拉和手填。</p>
+</li>
+<li>
+<p>失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。</p>
+</li>
+<li>
+<p>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败。</p>
+</li>
+<li>
+<p>脚本:用户开发的SHELL程序。</p>
+</li>
+<li>
+<p>资源:是指脚本中需要调用的资源文件列表,资源中心-文件管理上传或创建的文件。</p>
+</li>
+<li>
+<p>自定义参数:是SHELL局部的用户自定义参数,会替换脚本中以${变量}的内容。</p>
+</li>
 </ul>
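自定义参数替换脚本中 ${变量} 内容的行为,可以用下面的 Python 草图示意(仅为行为示意,并非 worker 的真实实现):

```python
import re

def render_script(script, params):
    """将脚本中 ${变量} 形式的占位符替换为自定义参数的值,
    未定义的变量保持原样。"""
    def repl(m):
        return params.get(m.group(1), m.group(0))
    return re.sub(r'\$\{(\w+)\}', repl, script)

rendered = render_script('echo "dt=${bizdate}"', {'bizdate': '20191205'})
```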
-<h3>子流程节点</h3>
+<h4>7.2 子流程节点</h4>
 <ul>
 <li>子流程节点,就是把外部的某个工作流定义当做一个任务节点去执行。</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/subprocess_edit.png" width="60%" />
+   <img src="/img/subprocess_edit.png" width="80%" />
  </p>
 <ul>
 <li>节点名称:一个工作流定义中的节点名称是唯一的</li>
 <li>运行标志:标识这个节点是否能正常调度</li>
 <li>描述信息:描述该节点的功能</li>
+<li>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败。</li>
 <li>子节点:是选择子流程的工作流定义,右上角进入该子节点可以跳转到所选子流程的工作流定义</li>
 </ul>
-<h3>依赖(DEPENDENT)节点</h3>
+<h4>7.3 依赖(DEPENDENT)节点</h4>
 <ul>
 <li>依赖节点,就是<strong>依赖检查节点</strong>。比如A流程依赖昨天的B流程执行成功,依赖节点会去检查B流程在昨天是否有执行成功的实例。</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/dependent_edit.png" width="60%" />
+   <img src="/img/dependent_edit.png" width="80%" />
  </p>
 <blockquote>
 <p>依赖节点提供了逻辑判断功能,比如检查昨天的B流程是否成功,或者C流程是否执行成功。</p>
@@ -574,60 +743,60 @@ conf/common/hadoop.properties
  <p align="center">
    <img src="/img/depend-node3.png" width="80%" />
  </p>
-<h3>存储过程节点</h3>
+<h4>7.4 存储过程节点</h4>
 <ul>
 <li>根据选择的数据源,执行存储过程。</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/procedure_edit.png" width="60%" />
+   <img src="/img/procedure_edit.png" width="80%" />
  </p>
 <ul>
 <li>数据源:存储过程的数据源类型支持MySQL和POSTGRESQL两种,选择对应的数据源</li>
 <li>方法:是存储过程的方法名称</li>
 <li>自定义参数:存储过程的自定义参数类型支持IN、OUT两种,数据类型支持VARCHAR、INTEGER、LONG、FLOAT、DOUBLE、DATE、TIME、TIMESTAMP、BOOLEAN九种数据类型</li>
 </ul>
-<h3>SQL节点</h3>
+<h4>7.5 SQL节点</h4>
 <ul>
-<li>执行非查询SQL功能</li>
+<li>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG">任务节点到画板中</li>
+<li>非查询SQL功能:编辑非查询SQL任务信息,sql类型选择非查询,如下图所示:</li>
 </ul>
   <p align="center">
-   <img src="/img/sql-node.png" width="60%" />
+   <img src="/img/sql-node.png" width="80%" />
  </p>
 <ul>
-<li>执行查询SQL功能,可以选择通过表格和附件形式发送邮件到指定的收件人。</li>
+<li>查询SQL功能:编辑查询SQL任务信息,sql类型选择查询,选择表格或附件形式发送邮件到指定的收件人,如下图所示。</li>
 </ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
-</blockquote>
 <p align="center">
-   <img src="/img/sql-node2.png" width="60%" />
+   <img src="/img/sql-node2.png" width="80%" />
  </p>
 <ul>
 <li>数据源:选择对应的数据源</li>
-<li>sql类型:支持查询和非查询两种,查询是select类型的查询,是有结果集返回的,可以指定邮件通知为表格、附件或表格附件三种模板。非查询是没有结果集返回的,是针对update、delete、insert三种类型的操作</li>
+<li>sql类型:支持查询和非查询两种,查询是select类型的查询,是有结果集返回的,可以指定邮件通知为表格、附件或表格附件三种模板。非查询是没有结果集返回的,是针对update、delete、insert三种类型的操作。</li>
 <li>sql参数:输入参数格式为key1=value1;key2=value2…</li>
 <li>sql语句:SQL语句</li>
-<li>UDF函数:对于HIVE类型的数据源,可以引用资源中心中创建的UDF函数,其他类型的数据源暂不支持UDF函数</li>
-<li>自定义参数:SQL任务类型,而存储过程是自定义参数顺序的给方法设置值自定义参数类型和数据类型同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}</li>
+<li>UDF函数:对于HIVE类型的数据源,可以引用资源中心中创建的UDF函数,其他类型的数据源暂不支持UDF函数。</li>
+<li>自定义参数:SQL任务类型,而存储过程是自定义参数顺序的给方法设置值自定义参数类型和数据类型同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。</li>
+<li>前置sql:前置sql在sql语句之前执行。</li>
+<li>后置sql:后置sql在sql语句之后执行。</li>
 </ul>
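sql参数的 key1=value1;key2=value2… 格式,可以用下面的 Python 草图解析示意(并非调度器源码):

```python
def parse_sql_params(text):
    """解析 "key1=value1;key2=value2…" 形式的 sql 参数。"""
    params = {}
    for pair in text.split(';'):
        pair = pair.strip()
        if not pair:
            continue
        key, _, value = pair.partition('=')
        params[key.strip()] = value.strip()
    return params

sql_params = parse_sql_params('id=1;name=ds')
```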
-<h3>SPARK节点</h3>
+<h4>7.6 SPARK节点</h4>
 <ul>
 <li>通过SPARK节点,可以直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/spark_edit.png" width="60%" />
+   <img src="/img/spark_edit.png" width="80%" />
  </p>
 <ul>
 <li>程序类型:支持JAVA、Scala和Python三种语言</li>
 <li>主函数的class:是Spark程序的入口Main Class的全路径</li>
 <li>主jar包:是Spark的jar包</li>
-<li>部署方式:支持yarn-cluster、yarn-client、和local三种模式</li>
+<li>部署方式:支持yarn-cluster、yarn-client和local三种模式</li>
 <li>Driver内核数:可以设置Driver内核数及内存数</li>
 <li>Executor数量:可以设置Executor数量、Executor内存数和Executor内核数</li>
 <li>命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。</li>
@@ -636,19 +805,19 @@ conf/common/hadoop.properties
 <li>自定义参数:是Spark局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
 </ul>
 <p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样</p>
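worker 以 spark-submit 方式提交任务。下面的 Python 草图按节点配置拼出命令行参数列表(选项名取自 spark-submit 官方选项,各默认值为假设,worker 的真实拼装逻辑以源码为准):

```python
def build_spark_submit(main_class, main_jar, deploy_mode="yarn-cluster",
                       driver_cores=1, driver_memory="512M",
                       num_executors=2, executor_cores=2, executor_memory="2G",
                       app_args=()):
    """按 Spark 节点配置拼出 spark-submit 命令行参数列表(示意)。"""
    return ["spark-submit",
            "--master", deploy_mode,          # yarn-cluster / yarn-client / local
            "--class", main_class,
            "--driver-cores", str(driver_cores),
            "--driver-memory", driver_memory,
            "--num-executors", str(num_executors),
            "--executor-cores", str(executor_cores),
            "--executor-memory", executor_memory,
            main_jar] + list(app_args)

cmd = build_spark_submit("com.example.Main", "app.jar")
```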
-<h3>MapReduce(MR)节点</h3>
+<h4>7.7 MapReduce(MR)节点</h4>
 <ul>
 <li>使用MR节点,可以直接执行MR程序。对于mr节点,worker会使用<code>hadoop jar</code>方式提交任务</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <ol>
 <li>JAVA程序</li>
 </ol>
  <p align="center">
-    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mr_java.png" width="60%" />
-  </p>
+   <img src="/img/mr_java.png" width="80%" />
+ </p>
 <ul>
 <li>主函数的class:是MR程序的入口Main Class的全路径</li>
 <li>程序类型:选择JAVA语言</li>
@@ -662,7 +831,7 @@ conf/common/hadoop.properties
 <li>Python程序</li>
 </ol>
 <p align="center">
-   <img src="/img/mr_edit.png" width="60%" />
+   <img src="/img/mr_edit.png" width="80%" />
  </p>
 <ul>
 <li>程序类型:选择Python语言</li>
@@ -673,22 +842,68 @@ conf/common/hadoop.properties
 <li>资源: 如果其他参数中引用了资源文件,需要在资源中选择指定</li>
 <li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
 </ul>
-<h3>Python节点</h3>
+<h4>7.8 Python节点</h4>
 <ul>
 <li>使用python节点,可以直接执行python脚本,对于python节点,worker会使用<code>python **</code>方式提交任务。</li>
 </ul>
 <blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG">任务节点到画板中,双击任务节点,如下图:</p>
+<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG">任务节点到画板中,如下图所示:</p>
 </blockquote>
 <p align="center">
-   <img src="/img/python_edit.png" width="60%" />
+   <img src="/img/python_edit.png" width="80%" />
  </p>
 <ul>
 <li>脚本:用户开发的Python程序</li>
 <li>资源:是指脚本中需要调用的资源文件列表</li>
 <li>自定义参数:是Python局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
 </ul>
-<h3>系统参数</h3>
+<h4>7.9 Flink节点</h4>
+<ul>
+<li>拖动工具栏中的<img src="/img/flink.png" width="35"/>任务节点到画板中,如下图所示:</li>
+</ul>
+<p align="center">
+  <img src="/img/flink_edit.png" width="80%" />
+</p>
+<ul>
+<li>程序类型:支持JAVA、Scala和Python三种语言</li>
+<li>主函数的class:是Flink程序的入口Main Class的全路径</li>
+<li>主jar包:是Flink的jar包</li>
+<li>部署方式:支持cluster、local两种模式</li>
+<li>slot数量:可以设置slot数</li>
+<li>taskManager数量:可以设置taskManager数</li>
+<li>jobManager内存数:可以设置jobManager内存数</li>
+<li>taskManager内存数:可以设置taskManager内存数</li>
+<li>命令行参数:是设置Flink程序的输入参数,支持自定义参数变量的替换。</li>
+<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
+<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
+<li>自定义参数:是Flink局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
+</ul>
+<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Flink则没有主函数的class,其他都是一样</p>
+<h4>7.10 http节点</h4>
+<ul>
+<li>拖动工具栏中的<img src="/img/http.png" width="35"/>任务节点到画板中,如下图所示:</li>
+</ul>
+<p align="center">
+   <img src="/img/http_edit.png" width="80%" />
+ </p>
+<ul>
+<li>节点名称:一个工作流定义中的节点名称是唯一的。</li>
+<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
+<li>描述信息:描述该节点的功能。</li>
+<li>任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</li>
+<li>Worker分组:任务分配给worker组的机器执行,选择Default,会随机选择一台worker机执行。</li>
+<li>失败重试次数:任务失败重新提交的次数,支持下拉和手填。</li>
+<li>失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。</li>
+<li>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败。</li>
+<li>请求地址:http请求URL。</li>
+<li>请求类型:支持GET、POST、HEAD、PUT、DELETE。</li>
+<li>请求参数:支持Parameter、Body、Headers。</li>
+<li>校验条件:支持默认响应码、自定义响应码、内容包含、内容不包含。</li>
+<li>校验内容:当校验条件选择自定义响应码、内容包含、内容不包含时,需填写校验内容。</li>
+<li>自定义参数:是http局部的用户自定义参数,会替换脚本中以${变量}的内容。</li>
+</ul>
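校验条件与校验内容的判断行为,可以用下面的 Python 草图示意(逻辑为按文档语义的假设性还原,并非调度器源码):

```python
def check_response(condition, status, body, expected=None):
    """按 http 节点的校验条件判断任务成败:
    默认响应码 / 自定义响应码 / 内容包含 / 内容不包含。"""
    if condition == "默认响应码":
        return status == 200
    if condition == "自定义响应码":
        return status == int(expected)
    if condition == "内容包含":
        return expected in body
    if condition == "内容不包含":
        return expected not in body
    raise ValueError("未知的校验条件: %s" % condition)

ok = check_response("内容包含", 200, '{"code":0,"msg":"success"}', '"code":0')
```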
+<h3>8. 参数</h3>
+<h4>8.1 系统参数</h4>
 <table>
     <tr><th>变量</th><th>含义</th></tr>
     <tr>
@@ -704,49 +919,48 @@ conf/common/hadoop.properties
         <td>日常调度实例定时的定时时间,格式为 yyyyMMddHHmmss,补数据时,该日期 +1</td>
     </tr>
 </table>
-<h3>时间自定义参数</h3>
-<blockquote>
+<h4>8.2 时间自定义参数</h4>
+<ul>
+<li>
 <p>支持代码中自定义变量名,声明方式:${变量名}。可以是引用 &quot;系统参数&quot; 或指定 &quot;常量&quot;。</p>
-</blockquote>
-<blockquote>
+</li>
+<li>
 <p>我们定义这种基准变量为 [...] 格式的,[yyyyMMddHHmmss] 是可以任意分解组合的,比如:$[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd] 等</p>
-</blockquote>
-<blockquote>
-<p>也可以这样:</p>
-</blockquote>
+</li>
+<li>
+<p>也可以使用以下格式:</p>
+<pre><code>* 后 N 年:$[add_months(yyyyMMdd,12*N)]
+* 前 N 年:$[add_months(yyyyMMdd,-12*N)]
+* 后 N 月:$[add_months(yyyyMMdd,N)]
+* 前 N 月:$[add_months(yyyyMMdd,-N)]
+* 后 N 周:$[yyyyMMdd+7*N]
+* 前 N 周:$[yyyyMMdd-7*N]
+* 后 N 天:$[yyyyMMdd+N]
+* 前 N 天:$[yyyyMMdd-N]
+* 后 N 小时:$[HHmmss+N/24]
+* 前 N 小时:$[HHmmss-N/24]
+* 后 N 分钟:$[HHmmss+N/24/60]
+* 前 N 分钟:$[HHmmss-N/24/60]
+</code></pre>
+</li>
+</ul>
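上述基准变量与 add_months 的加减规则,可以用下面的 Python 草图演示(add_months 为按文档语义的示意实现,并非调度器源码):

```python
from datetime import datetime, timedelta

def add_months(dt, n):
    """add_months(yyyyMMdd, N):月份加减,日期取不超过目标月最大天数。"""
    month_index = dt.month - 1 + n
    year = dt.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return dt.replace(year=year, month=month, day=min(dt.day, days_in_month))

# $[add_months(yyyyMMdd,-12*1)] 即前 1 年;$[yyyyMMdd+7*1] 即后 1 周
base = datetime(2019, 12, 5)
prev_year = add_months(base, -12)
next_week = base + timedelta(days=7)
```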
+<h4>8.3 <span id=UserDefinedParameters>用户自定义参数</span></h4>
 <ul>
-<li>后 N 年:$[add_months(yyyyMMdd,12*N)]</li>
-<li>前 N 年:$[add_months(yyyyMMdd,-12*N)]</li>
-<li>后 N 月:$[add_months(yyyyMMdd,N)]</li>
-<li>前 N 月:$[add_months(yyyyMMdd,-N)]</li>
-<li>后 N 周:$[yyyyMMdd+7*N]</li>
-<li>前 N 周:$[yyyyMMdd-7*N]</li>
-<li>后 N 天:$[yyyyMMdd+N]</li>
-<li>前 N 天:$[yyyyMMdd-N]</li>
-<li>后 N 小时:$[HHmmss+N/24]</li>
-<li>前 N 小时:$[HHmmss-N/24]</li>
-<li>后 N 分钟:$[HHmmss+N/24/60]</li>
-<li>前 N 分钟:$[HHmmss-N/24/60]</li>
-</ul>
-<h3>用户自定义参数</h3>
-<blockquote>
-<p>用户自定义参数分为全局参数和局部参数。全局参数是保存工作流定义和工作流实例的时候传递的全局参数,全局参数可以在整个流程中的任何一个任务节点的局部参数引用。</p>
-</blockquote>
-<blockquote>
-<p>例如:</p>
-</blockquote>
+<li>用户自定义参数分为全局参数和局部参数。全局参数是保存工作流定义和工作流实例的时候传递的全局参数,全局参数可以在整个流程中的任何一个任务节点的局部参数引用。
+例如:</li>
+</ul>
 <p align="center">
-   <img src="/img/local_parameter.png" width="60%" />
+   <img src="/img/local_parameter.png" width="80%" />
  </p>
-<blockquote>
-<p>global_bizdate为全局参数,引用的是系统参数。</p>
-</blockquote>
+<ul>
+<li>global_bizdate为全局参数,引用的是系统参数。</li>
+</ul>
 <p align="center">
-   <img src="/img/global_parameter.png" width="60%" />
+   <img src="/img/global_parameter.png" width="80%" />
  </p>
-<blockquote>
-<p>任务中local_param_bizdate通过{global_bizdate}来引用全局参数,对于脚本可以通过{local_param_bizdate}来引用变量local_param_bizdate的值,或通过JDBC直接将local_param_bizdate的值set进去</p>
-</blockquote>
+<ul>
+<li>任务中local_param_bizdate通过${global_bizdate}来引用全局参数,对于脚本可以通过${local_param_bizdate}来引用全局变量global_bizdate的值,或通过JDBC直接将local_param_bizdate的值set进去</li>
+</ul>
 </div></section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
 Incubation is required of all newly accepted projects until a further review indicates 
 that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
diff --git a/zh-cn/docs/user_doc/system-manual.json b/zh-cn/docs/user_doc/system-manual.json
index 0f60f61..7920317 100644
--- a/zh-cn/docs/user_doc/system-manual.json
+++ b/zh-cn/docs/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>创建项目</h3>\n<ul>\n<li>点击“项目管理-&gt;创建项目”,输入项目名称,项目描述,点击“提交”,创建新的项目。</li>\n<li>点击项目名称,进入项目首页。</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/project.png\" width=\"60%\" />\n </p>\n<blockquote>\n<p>项目首页其中包含任务状态统计,流程状态统计、工作流定义统计</p>\n</blockquote>\n<ul>\n<li>任务状态统计:是指在指定时间范围内,统计任务实例中的待运行、失败、运行中、完成、成功的个数</li>\n<li>流程状态统计:是指在指定时间范围内,统计工作流实例中的待运行、失败 [...]
+  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img src=\"/img/project.png\" width=\"80%\" />\n</p>\n</li>\n</ul>\n<h4>2.2 [...]
   "link": "/zh-cn/docs/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file