Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2022/01/10 10:21:06 UTC

[dolphinscheduler-website] branch asf-site updated: Automated deployment: 6aafa4d1ff73e02771192df15f441f31f6620fae

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new c86213a  Automated deployment: 6aafa4d1ff73e02771192df15f441f31f6620fae
c86213a is described below

commit c86213af73b073efe2f90925c02dc3312d094cb2
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Mon Jan 10 10:21:02 2022 +0000

    Automated deployment: 6aafa4d1ff73e02771192df15f441f31f6620fae
---
 en-us/docs/2.0.0/user_doc/expansion-reduction.html |    2 +-
 en-us/docs/2.0.0/user_doc/expansion-reduction.json |    2 +-
 en-us/docs/2.0.0/user_doc/guide/system-manual.html | 1053 --------------------
 en-us/docs/2.0.0/user_doc/guide/system-manual.json |    6 -
 en-us/docs/2.0.1/user_doc/expansion-reduction.html |    2 +-
 en-us/docs/2.0.1/user_doc/expansion-reduction.json |    2 +-
 en-us/docs/2.0.2/user_doc/expansion-reduction.html |    2 +-
 en-us/docs/2.0.2/user_doc/expansion-reduction.json |    2 +-
 en-us/docs/dev/user_doc/expansion-reduction.html   |    2 +-
 en-us/docs/dev/user_doc/expansion-reduction.json   |    2 +-
 .../docs/latest/user_doc/expansion-reduction.html  |    2 +-
 .../docs/latest/user_doc/expansion-reduction.json  |    2 +-
 zh-cn/docs/2.0.0/user_doc/expansion-reduction.html |    2 +-
 zh-cn/docs/2.0.0/user_doc/expansion-reduction.json |    2 +-
 zh-cn/docs/2.0.0/user_doc/guide/system-manual.html | 1008 -------------------
 zh-cn/docs/2.0.0/user_doc/guide/system-manual.json |    6 -
 zh-cn/docs/2.0.1/user_doc/expansion-reduction.html |    2 +-
 zh-cn/docs/2.0.1/user_doc/expansion-reduction.json |    2 +-
 zh-cn/docs/2.0.2/user_doc/expansion-reduction.html |    2 +-
 zh-cn/docs/2.0.2/user_doc/expansion-reduction.json |    2 +-
 zh-cn/docs/dev/user_doc/expansion-reduction.html   |    2 +-
 zh-cn/docs/dev/user_doc/expansion-reduction.json   |    2 +-
 .../docs/latest/user_doc/expansion-reduction.html  |    2 +-
 .../docs/latest/user_doc/expansion-reduction.json  |    2 +-
 24 files changed, 20 insertions(+), 2093 deletions(-)

diff --git a/en-us/docs/2.0.0/user_doc/expansion-reduction.html b/en-us/docs/2.0.0/user_doc/expansion-reduction.html
index 11d3c9e..d10f1b0 100644
--- a/en-us/docs/2.0.0/user_doc/expansion-reduction.html
+++ b/en-us/docs/2.0.0/user_doc/expansion-reduction.html
@@ -116,7 +116,7 @@ workers=&quot;existing worker01:default,existing worker02:default,ds3:default,ds
 </code></pre>
 <ul>
 <li>
-<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual <a href="/en-us/docs/2.0.0/user_doc/guide/system-manual.html">5.7 Worker grouping</a></p>
+<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to <a href="./guide/security.md">Worker grouping</a> in the security guide</p>
 </li>
 <li>
 <p>On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory</p>
diff --git a/en-us/docs/2.0.0/user_doc/expansion-reduction.json b/en-us/docs/2.0.0/user_doc/expansion-reduction.json
index 56afa90..02e00f2 100644
--- a/en-us/docs/2.0.0/user_doc/expansion-reduction.json
+++ b/en-us/docs/2.0.0/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
+  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
   "link": "/dist/en-us/docs/2.0.0/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/2.0.0/user_doc/guide/system-manual.html b/en-us/docs/2.0.0/user_doc/guide/system-manual.html
deleted file mode 100644
index 3ee45c1..0000000
--- a/en-us/docs/2.0.0/user_doc/guide/system-manual.html
+++ /dev/null
@@ -1,1053 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-<head>
-  <meta charset="UTF-8">
-  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
-  <meta name="keywords" content="system-manual">
-  <meta name="description" content="system-manual">
-  <title>system-manual</title>
-  <link rel="shortcut icon" href="/img/favicon.ico">
-  <link rel="stylesheet" href="/build/vendor.23870e5.css">
-</head>
-<body>
-  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
-<h2>Get started quickly</h2>
-<blockquote>
-<p>Please refer to <a href="https://dolphinscheduler.apache.org/en-us/docs/2.0.0/user_doc/guide/quick-start.html">Quick Start</a></p>
-</blockquote>
-<h2>Operation guide</h2>
-<h3>1. Home</h3>
-<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>
-<p align="center">
-<img src="/img/home_en.png" width="80%" />
-</p>
-<h3>2. Project management</h3>
-<h4>2.1 Create project</h4>
-<ul>
-<li>
-<p>Click &quot;Project Management&quot; to enter the project management page, click the &quot;Create Project&quot; button, enter the project name, project description, and click &quot;Submit&quot; to create a new project.</p>
-<p align="center">
-    <img src="/img/create_project_en1.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.2 Project home</h4>
-<ul>
-<li>
-<p>Click the project name link on the project management page to enter the project home page, as shown in the figure below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of the project.</p>
-<p align="center">
-   <img src="/img/project_home_en.png" width="80%" />
-</p>
-</li>
-<li>
-<p>Task status statistics: within the specified time range, count the number of task instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault-tolerant, killed, and waiting for threads</p>
-</li>
-<li>
-<p>Process status statistics: within the specified time range, count the number of workflow instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault-tolerant, killed, and waiting for threads</p>
-</li>
-<li>
-<p>Workflow definition statistics: Count the workflow definitions created by this user and the workflow definitions granted to this user by the administrator</p>
-</li>
-</ul>
-<h4>2.3 Workflow definition</h4>
-<h4><span id=creatDag>2.3.1 Create workflow definition</span></h4>
-<ul>
-<li>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, and click the &quot;Create Workflow&quot; button to enter the <strong>workflow DAG edit</strong> page, as shown in the following figure:<p align="center">
-    <img src="/img/dag5.png" width="80%" />
-</p>
-</li>
-<li>Drag the <img src="/img/shell.png" width="35"/> icon from the toolbar onto the drawing board to add a Shell task, as shown in the figure below:<p align="center">
-    <img src="/img/shell-en.png" width="80%" />
-</p>
-</li>
-<li><strong>Add parameter settings for this shell task:</strong></li>
-</ul>
-<ol>
-<li>Fill in the &quot;Node Name&quot;, &quot;Description&quot;, and &quot;Script&quot; fields;</li>
-<li>Check “Normal” for “Run Flag”. If “Prohibit Execution” is checked, the task will not be executed when the workflow runs;</li>
-<li>Select &quot;Task Priority&quot;: When the number of worker threads is insufficient, high-level tasks will be executed first in the execution queue, and tasks with the same priority will be executed in the order of first in, first out;</li>
-<li>Timeout alarm (optional): Check timeout alarm and timeout failure, and fill in the &quot;timeout period&quot;. When the task execution time exceeds the <strong>timeout period</strong>, an alarm email is sent and the task fails with a timeout;</li>
-<li>Resources (optional). Resource files are files created or uploaded on the Resource Center -&gt; File Management page. For example, the file name is <code>test.sh</code>, and the command to call the resource in the script is <code>sh test.sh</code>;</li>
-<li>Custom parameters (optional), refer to <a href="#UserDefinedParameters">Custom Parameters</a>;</li>
-<li>Click the &quot;Confirm Add&quot; button to save the task settings.</li>
-</ol>
-<ul>
-<li>
-<p><strong>Set the task execution order:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect tasks. As shown in the figure below, task 2 and task 3 run in parallel: when task 1 finishes, tasks 2 and 3 start executing simultaneously.</p>
-<p align="center">
-   <img src="/img/dag6.png" width="80%" />
-</p>
-</li>
-<li>
-<p><strong>Delete dependencies:</strong> Click the &quot;arrow&quot; icon in the upper right corner <img src="/img/arrow.png" width="35"/>, select the connection line, and click the &quot;Delete&quot; icon in the upper right corner <img src="/img/delete.png" width="35"/> to delete the dependency between tasks.</p>
-<p align="center">
-   <img src="/img/dag7.png" width="80%" />
-</p>
-</li>
-<li>
-<p><strong>Save workflow definition:</strong> Click the &quot;Save&quot; button, and the &quot;Set DAG chart name&quot; pop-up box will pop up, as shown in the figure below. Enter the workflow definition name, workflow definition description, and set global parameters (optional, refer to <a href="#UserDefinedParameters"> Custom parameters</a>), click the &quot;Add&quot; button, and the workflow definition is created successfully.</p>
-<p align="center">
-   <img src="/img/dag8.png" width="80%" />
- </p>
-</li>
-</ul>
-<blockquote>
-<p>For other types of tasks, please refer to <a href="#TaskParamers">Task Node Type and Parameter Settings</a>.</p>
-</blockquote>
-<h4>2.3.2 Workflow definition operation function</h4>
-<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown below:</p>
-<p align="center">
-<img src="/img/work_list_en.png" width="80%" />
-</p>
-The operation functions of the workflow definition list are as follows:
-<ul>
-<li><strong>Edit:</strong> Only &quot;offline&quot; workflow definitions can be edited. Workflow DAG editing is the same as <a href="#creatDag">Create Workflow Definition</a>.</li>
-<li><strong>Online:</strong> When the workflow status is &quot;Offline&quot;, this puts the workflow online. Only a workflow in the &quot;Online&quot; state can run, and it cannot be edited.</li>
-<li><strong>Offline:</strong> When the workflow status is &quot;Online&quot;, this takes the workflow offline. Only a workflow in the &quot;Offline&quot; state can be edited, and it cannot run.</li>
-<li><strong>Run:</strong> Only workflow in the online state can run. See <a href="#runWorkflow">2.3.3 Run Workflow</a> for the operation steps</li>
-<li><strong>Timing:</strong> Timing can only be set in online workflows, and the system automatically schedules the workflow to run on a regular basis. The status after creating a timing is &quot;offline&quot;, and the timing must be online on the timing management page to take effect. See <a href="#creatTiming">2.3.4 Workflow Timing</a> for timing operation steps.</li>
-<li><strong>Timing Management:</strong> On the timing management page, timings can be edited, put online/offline, and deleted.</li>
-<li><strong>Delete:</strong> Delete the workflow definition.</li>
-<li><strong>Download:</strong> Download workflow definition to local.</li>
-<li><strong>Tree Diagram:</strong> Display the task node type and task status in a tree structure, as shown in the figure below:<p align="center">
-    <img src="/img/tree_en.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4><span id=runWorkflow>2.3.3 Run the workflow</span></h4>
-<ul>
-<li>
-<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the &quot;Go Online&quot; button <img src="/img/online.png" width="35"/> to put the workflow online.</p>
-<p align="center">
-    <img src="/img/work_list_en.png" width="80%" />
-</p>
-</li>
-<li>
-<p>Click the &quot;Run&quot; button to open the startup parameter setting dialog, as shown in the figure below. Set the startup parameters and click the &quot;Run&quot; button in the dialog; the workflow starts running and a workflow instance is generated on the workflow instance page.</p>
-   <p align="center">
-     <img src="/img/run_work_en.png" width="80%" />
-   </p>  
-<span id=runParamers>Description of workflow operating parameters:</span> 
-<pre><code>* Failure strategy: the strategy applied to other parallel task nodes when a task node fails. &quot;Continue&quot; means that after a task fails, the other task nodes keep executing normally; &quot;End&quot; means that all running tasks are terminated and the entire process ends.
-* Notification strategy: when the process ends, a notification email with the process execution information is sent according to the process status; the options are: never send, send on success, send on failure, send on success or failure.
-* Process priority: the priority of the process run, with five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in, first-out order.
-* Worker group: the process can only be executed on the specified worker machine group. The default is Default, which can be executed on any worker.
-* Notification group: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or emails are sent to all members of the notification group.
-* Recipient: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or alarm emails are sent to the recipient list.
-* Cc: when the notification strategy, timeout alarm, or fault tolerance is triggered, process information or alarm emails are copied to the CC list.
-* Startup parameter: set or override global parameter values when starting a new process instance.
-* Complement: two modes, serial complement and parallel complement. Serial complement: within the specified time range, the complement runs from the start date to the end date, generating N process instances in turn. Parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances.
-</code></pre>
-<ul>
-<li>For example, you need to backfill the data from May 1 to May 10.</li>
-</ul>
-  <p align="center">
-      <img src="/img/complement_en1.png" width="80%" />
-  </p>
-<blockquote>
-<p>Serial mode: The complement is executed sequentially from May 1 to May 10, and ten process instances are generated on the process instance page;</p>
-</blockquote>
-<blockquote>
-<p>Parallel mode: The tasks from May 1 to May 10 are executed simultaneously, and 10 process instances are generated on the process instance page.</p>
-</blockquote>
-</li>
-</ul>
-<h4><span id=creatTiming>2.3.4 Workflow timing</span></h4>
-<ul>
-<li>Create timing: Click Project Management-&gt;Workflow-&gt;Workflow Definition to enter the workflow definition page, put the workflow online, and click the &quot;timing&quot; button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog pops up, as shown in the figure below:<p align="center">
-    <img src="/img/time_schedule_en.png" width="80%" />
-</p>
-</li>
-<li>Choose the start and end time. Within the start and end time range, the workflow runs on the schedule; outside that range, no more scheduled workflow instances are generated.</li>
-<li>Add a timing that is executed once every day at 5 AM, as shown in the following figure:<p align="center">
-    <img src="/img/timer-en.png" width="80%" />
-</p>
-</li>
-<li>Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as <a href="#runParamers">workflow running parameters</a>.</li>
-<li>Click the &quot;Create&quot; button to create the timing successfully. At this time, the timing status is &quot;<strong>Offline</strong>&quot; and the timing needs to be <strong>Online</strong> to take effect.</li>
-<li>Timing online: Click the &quot;timing management&quot; button <img src="/img/timeManagement.png" width="35"/> to enter the timing management page, then click the &quot;online&quot; button; the timing status changes to &quot;online&quot;, as shown in the figure below, and the workflow is scheduled regularly.<p align="center">
-    <img src="/img/time-manage-list-en.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.3.5 Import workflow</h4>
-<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, click the &quot;Import Workflow&quot; button to import the local workflow file, the workflow definition list displays the imported workflow, and the status is offline.</p>
-<h4>2.4 Workflow instance</h4>
-<h4>2.4.1 View workflow instance</h4>
-<ul>
-<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the Workflow Instance page, as shown in the figure below:   <p align="center">
-      <img src="/img/instance-list-en.png" width="80%" />
-   </p>
-</li>
-<li>Click the workflow name to enter the DAG view page to view the task execution status, as shown in the figure below.<p align="center">
-  <img src="/img/instance-runs-en.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.4.2 View task log</h4>
-<ul>
-<li>Enter the workflow instance page, click the workflow name, enter the DAG view page, double-click the task node, as shown in the following figure: <p align="center">
-   <img src="/img/instanceViewLog-en.png" width="80%" />
- </p>
-</li>
-<li>Click &quot;View Log&quot;, a log pop-up box will pop up, as shown in the figure below, the task log can also be viewed on the task instance page, refer to <a href="#taskLog">Task View Log</a>. <p align="center">
-   <img src="/img/task-log-en.png" width="80%" />
- </p>
-</li>
-</ul>
-<h4>2.4.3 View task history</h4>
-<ul>
-<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;</li>
-<li>Double-click the task node, as shown in the figure below, click &quot;View History&quot; to jump to the task instance page, and display a list of task instances run by the workflow instance <p align="center">
-   <img src="/img/task_history_en.png" width="80%" />
- </p>
-</li>
-</ul>
-<h4>2.4.4 View operating parameters</h4>
-<ul>
-<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;</li>
-<li>Click the icon in the upper left corner <img src="/img/run_params_button.png" width="35"/> to view the startup parameters of the workflow instance; click the icon <img src="/img/global_param.png" width="35"/> to view the global and local parameters of the workflow instance, as shown in the following figure: <p align="center">
-   <img src="/img/run_params_en.png" width="80%" />
- </p>
-</li>
-</ul>
-<h4>2.4.5 Workflow instance operation function</h4>
-<p>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the Workflow Instance page, as shown in the figure below:</p>
-  <p align="center">
-    <img src="/img/instance-list-en.png" width="80%" />
-  </p>
-<ul>
-<li><strong>Edit:</strong> Only terminated processes can be edited. Click the &quot;Edit&quot; button or the name of the workflow instance to enter the DAG edit page. After edit, click the &quot;Save&quot; button to pop up the Save DAG pop-up box, as shown in the figure below. In the pop-up box, check &quot;Whether to update to workflow definition&quot; and save After that, the workflow definition will be updated; if it is not checked, the workflow definition will not be updated.   <p al [...]
-     <img src="/img/editDag-en.png" width="80%" />
-   </p>
-</li>
-<li><strong>Rerun:</strong> Re-execute the terminated process.</li>
-<li><strong>Recovery failed:</strong> For failed processes, you can perform recovery operations, starting from the failed node.</li>
-<li><strong>Stop:</strong> To <strong>stop</strong> the running process, the backend first sends <code>kill</code> to the worker process, and then executes a <code>kill -9</code> operation</li>
-<li><strong>Pause:</strong> Perform a <strong>pause</strong> operation on the running process; the system status changes to <strong>waiting for execution</strong>, the tasks currently being executed are allowed to finish, and the next tasks to be executed are paused.</li>
-<li><strong>Resume pause:</strong> To resume the paused process, start running directly from the <strong>paused node</strong></li>
-<li><strong>Delete:</strong> Delete the workflow instance and the task instance under the workflow instance</li>
-<li><strong>Gantt chart:</strong> The vertical axis of the Gantt chart is the topological sorting of task instances under a certain workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:   <p align="center">
-       <img src="/img/gantt-en.png" width="80%" />
-   </p>
-</li>
-</ul>
-<h4>2.5 Task instance</h4>
-<ul>
-<li>
-<p>Click Project Management -&gt; Workflow -&gt; Task Instance to enter the task instance page, as shown in the figure below, click the name of the workflow instance, you can jump to the workflow instance DAG chart to view the task status.</p>
-   <p align="center">
-      <img src="/img/task-list-en.png" width="80%" />
-   </p>
-</li>
-<li>
-<p><span id=taskLog>View log:</span> Click the &quot;view log&quot; button in the operation column to view the log of task execution.</p>
-   <p align="center">
-      <img src="/img/task-log2-en.png" width="80%" />
-   </p>
-</li>
-</ul>
-<h3>3. Resource Center</h3>
-<h4>3.1 hdfs resource configuration</h4>
-<ul>
-<li>Upload resource files and UDF functions: all uploaded files and resources are stored on HDFS, so the following configuration items are required:</li>
-</ul>
-<pre><code>
-conf/common/common.properties
-    # Users who have permission to create directories under the HDFS root path
-    hdfs.root.user=hdfs
-    # data base dir, resource files will be stored under this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and has read/write permissions. &quot;/dolphinscheduler&quot; is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
-    # whether kerberos starts
-    hadoop.security.authentication.startup.state=false
-    # java.security.krb5.conf path
-    java.security.krb5.conf.path=/opt/krb5.conf
-    # loginUserFromKeytab user
-    login.user.keytab.username=hdfs-mycluster@ESZ.COM
-    # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-
-conf/common/hadoop.properties
-    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
-    # to the conf directory,support s3,for example : s3a://dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020
-    # resourcemanager HA: configure the IPs of both resourcemanagers; leave this empty for a single resourcemanager
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
-    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-
-</code></pre>
-<ul>
-<li>Configure only one of yarn.resourcemanager.ha.rm.ids and yarn.application.status.address and leave the other empty: fill in yarn.resourcemanager.ha.rm.ids for resourcemanager HA, and yarn.application.status.address for a single resourcemanager.</li>
-<li>You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project, and restart the api-server service (see the sketch below).</li>
-</ul>
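-<p>A minimal sketch of that copy-and-restart step, assuming the Hadoop client configuration lives under /etc/hadoop/conf and DolphinScheduler is installed under /opt/dolphinscheduler (both paths are only examples; adjust them to your environment):</p>
-<pre><code># copy the Hadoop client configuration into the DolphinScheduler conf directory
-cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /opt/dolphinscheduler/conf/
-
-# restart the api-server so the new configuration is picked up
-sh /opt/dolphinscheduler/bin/dolphinscheduler-daemon.sh stop api-server
-sh /opt/dolphinscheduler/bin/dolphinscheduler-daemon.sh start api-server
-</code></pre>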
-<h4>3.2 File management</h4>
-<blockquote>
-<p>File management covers all kinds of resource files, including creating basic txt/log/sh/conf/py/java and other files, uploading jar packages and other types of files, and supports operations such as edit, rename, download, and delete.</p>
-</blockquote>
-  <p align="center">
-   <img src="/img/file-manage-en.png" width="80%" />
- </p>
-<ul>
-<li>Create a file
-<blockquote>
-<p>The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties</p>
-</blockquote>
-</li>
-</ul>
-<p align="center">
-   <img src="/img/file_create_en.png" width="80%" />
- </p>
-<ul>
-<li>upload files</li>
-</ul>
-<blockquote>
-<p>Upload file: Click the &quot;Upload File&quot; button or drag the file to the upload area; the file name field is automatically filled with the name of the uploaded file</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file-upload-en.png" width="80%" />
- </p>
-<ul>
-<li>File View</li>
-</ul>
-<blockquote>
-<p>For the file types that can be viewed, click the file name to view the file details</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_detail_en.png" width="80%" />
- </p>
-<ul>
-<li>download file</li>
-</ul>
-<blockquote>
-<p>Click the &quot;Download&quot; button in the file list to download the file or click the &quot;Download&quot; button in the upper right corner of the file details to download the file</p>
-</blockquote>
-<ul>
-<li>File rename</li>
-</ul>
-<p align="center">
-   <img src="/img/file_rename_en.png" width="80%" />
- </p>
-<ul>
-<li>delete
-<blockquote>
-<p>File list -&gt; Click the &quot;Delete&quot; button to delete the specified file</p>
-</blockquote>
-</li>
-</ul>
-<h4>3.3 UDF management</h4>
-<h4>3.3.1 Resource management</h4>
-<blockquote>
-<p>Resource management is similar to file management. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts and configuration files.
-Operation functions: rename, download, delete.</p>
-</blockquote>
-<ul>
-<li>Upload udf resources
-<blockquote>
-<p>Same as uploading files.</p>
-</blockquote>
-</li>
-</ul>
-<h4>3.3.2 Function management</h4>
-<ul>
-<li>Create UDF function
-<blockquote>
-<p>Click &quot;Create UDF Function&quot;, enter the udf function parameters, select the udf resource, and click &quot;Submit&quot; to create the udf function.</p>
-</blockquote>
-</li>
-</ul>
-<blockquote>
-<p>Currently only supports temporary UDF functions of HIVE</p>
-</blockquote>
-<ul>
-<li>UDF function name: the name used when the UDF function is invoked</li>
-<li>Package name Class name: Enter the full path of the UDF function</li>
-<li>UDF resource: Set the resource file corresponding to the created UDF</li>
-</ul>
-<p align="center">
-   <img src="/img/udf_edit_en.png" width="80%" />
- </p>
-<h3>4. Create data source</h3>
-<blockquote>
-<p>Data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, ORACLE, SQLSERVER and other data sources</p>
-</blockquote>
-<h4>4.1 Create/Edit MySQL data source</h4>
-<ul>
-<li>
-<p>Click &quot;Data Source Center -&gt; Create Data Source&quot; to create different types of data sources according to requirements.</p>
-</li>
-<li>
-<p>Data source: select MYSQL</p>
-</li>
-<li>
-<p>Data source name: enter the name of the data source</p>
-</li>
-<li>
-<p>Description: Enter a description of the data source</p>
-</li>
-<li>
-<p>IP hostname: enter the IP to connect to MySQL</p>
-</li>
-<li>
-<p>Port: Enter the port to connect to MySQL</p>
-</li>
-<li>
-<p>Username: Set the username for connecting to MySQL</p>
-</li>
-<li>
-<p>Password: Set the password for connecting to MySQL</p>
-</li>
-<li>
-<p>Database name: Enter the name of the database connected to MySQL</p>
-</li>
-<li>
-<p>Jdbc connection parameters: parameter settings for the MySQL connection, entered in JSON format (see the example below)</p>
-</li>
-</ul>
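-<p>As a small illustration (not a required setting), the Jdbc connection parameters could be a JSON object of standard MySQL driver properties, for example:</p>
-<pre><code>{&quot;useUnicode&quot;:&quot;true&quot;,&quot;characterEncoding&quot;:&quot;UTF-8&quot;,&quot;useSSL&quot;:&quot;false&quot;}
-</code></pre>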
-<p align="center">
-   <img src="/img/mysql-en.png" width="80%" />
- </p>
-<blockquote>
-<p>Click &quot;Test Connection&quot; to test whether the data source can be successfully connected.</p>
-</blockquote>
-<h4>4.2 Create/Edit POSTGRESQL data source</h4>
-<ul>
-<li>Data source: select POSTGRESQL</li>
-<li>Data source name: enter the name of the data source</li>
-<li>Description: Enter a description of the data source</li>
-<li>IP/Host Name: Enter the IP to connect to POSTGRESQL</li>
-<li>Port: Enter the port to connect to POSTGRESQL</li>
-<li>Username: Set the username for connecting to POSTGRESQL</li>
-<li>Password: Set the password for connecting to POSTGRESQL</li>
-<li>Database name: Enter the name of the database connected to POSTGRESQL</li>
-<li>Jdbc connection parameters: parameter settings for the POSTGRESQL connection, entered in JSON format</li>
-</ul>
-<p align="center">
-   <img src="/img/postgresql-en.png" width="80%" />
- </p>
-<h4>4.3 Create/Edit HIVE data source</h4>
-<p>1. Use HiveServer2 to connect</p>
- <p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-<ul>
-<li>
-<p>Data source: select HIVE</p>
-</li>
-<li>
-<p>Data source name: enter the name of the data source</p>
-</li>
-<li>
-<p>Description: Enter a description of the data source</p>
-</li>
-<li>
-<p>IP/Host Name: Enter the IP connected to HIVE</p>
-</li>
-<li>
-<p>Port: Enter the port connected to HIVE</p>
-</li>
-<li>
-<p>Username: Set the username for connecting to HIVE</p>
-</li>
-<li>
-<p>Password: Set the password for connecting to HIVE</p>
-</li>
-<li>
-<p>Database name: Enter the name of the database connected to HIVE</p>
-</li>
-<li>
-<p>Jdbc connection parameters: parameter settings for the HIVE connection, entered in JSON format</p>
-<p>2. Use HiveServer2 HA ZooKeeper to connect</p>
-</li>
-</ul>
- <p align="center">
-    <img src="/img/hive1-en.png" width="80%" />
-  </p>
-<p>Note: If you enable <strong>kerberos</strong>, you need to fill in <strong>Principal</strong></p>
-<p align="center">
-    <img src="/img/hive-en.png" width="80%" />
-  </p>
-<h4>4.4 Create/Edit Spark data source</h4>
-<p align="center">
-   <img src="/img/spark-en.png" width="80%" />
- </p>
-<ul>
-<li>Data source: select Spark</li>
-<li>Data source name: enter the name of the data source</li>
-<li>Description: Enter a description of the data source</li>
-<li>IP/Hostname: Enter the IP connected to Spark</li>
-<li>Port: Enter the port connected to Spark</li>
-<li>Username: Set the username for connecting to Spark</li>
-<li>Password: Set the password for connecting to Spark</li>
-<li>Database name: Enter the name of the database connected to Spark</li>
-<li>Jdbc connection parameters: parameter settings for the Spark connection, entered in JSON format</li>
-</ul>
-<h3>5. Security Center (Permission System)</h3>
-<pre><code> * Only the administrator account in the security center has the authority to operate. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management, token management, etc., and grants authorization on resources, data sources, projects, etc. in the user management module.
- * Administrator login, default user name and password: admin/dolphinscheduler123
-</code></pre>
-<h4>5.1 Create queue</h4>
-<ul>
-<li>Queue is used when the &quot;queue&quot; parameter is needed to execute programs such as spark and mapreduce.</li>
-<li>The administrator enters the Security Center-&gt;Queue Management page and clicks the &quot;Create Queue&quot; button to create a queue.</li>
-</ul>
-<p align="center">
-   <img src="/img/create-queue-en.png" width="80%" />
- </p>
-<h4>5.2 Add tenant</h4>
-<ul>
-<li>The tenant corresponds to the Linux user, which is used by the worker to submit the job. If Linux does not have this user, the worker will create this user when executing the script.</li>
-<li>Tenant Code: <strong>The tenant code is the user on Linux and must be unique; it cannot be repeated</strong></li>
-<li>The administrator enters the Security Center-&gt;Tenant Management page and clicks the &quot;Create Tenant&quot; button to create a tenant.</li>
-</ul>
- <p align="center">
-    <img src="/img/addtenant-en.png" width="80%" />
-  </p>
-<h4>5.3 Create normal user</h4>
-<ul>
-<li>
-<p>Users are divided into <strong>administrator users</strong> and <strong>normal users</strong></p>
-<ul>
-<li>The administrator has authorization and user management authority, but cannot create projects or define workflows.</li>
-<li>Ordinary users can create projects and create, edit, and execute workflow definitions.</li>
-<li>Note: If the user switches tenants, all resources under the user's current tenant will be copied to the new tenant.</li>
-</ul>
-</li>
-<li>
-<p>The administrator enters the Security Center -&gt; User Management page and clicks the &quot;Create User&quot; button to create a user.</p>
-</li>
-</ul>
-<p align="center">
-   <img src="/img/user-en.png" width="80%" />
- </p>
-<blockquote>
-<p><strong>Edit user information</strong></p>
-</blockquote>
-<ul>
-<li>The administrator enters the Security Center-&gt;User Management page and clicks the &quot;Edit&quot; button to edit user information.</li>
-<li>After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the &quot;Edit&quot; button to edit the user information.</li>
-</ul>
-<blockquote>
-<p><strong>Modify user password</strong></p>
-</blockquote>
-<ul>
-<li>The administrator enters the Security Center-&gt;User Management page and clicks the &quot;Edit&quot; button. When editing user information, enter the new password to modify the user password.</li>
-<li>After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the &quot;Edit&quot; button, then the password modification is successful.</li>
-</ul>
-<h4>5.4 Create alarm group</h4>
-<ul>
-<li>The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group in the form of email.</li>
-</ul>
-<ul>
-<li>
-<p>The administrator enters the Security Center -&gt; Alarm Group Management page and clicks the &quot;Create Alarm Group&quot; button to create an alarm group.</p>
-<p align="center">
-  <img src="/img/mail-en.png" width="80%" />
-</li>
-</ul>
-<h4>5.5 Token management</h4>
-<blockquote>
-<p>Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.</p>
-</blockquote>
-<ul>
-<li>
-<p>The administrator enters the Security Center -&gt; Token Management page, clicks the &quot;Create Token&quot; button, selects the expiration time and user, clicks the &quot;Generate Token&quot; button, and clicks the &quot;Submit&quot; button, then the selected user's token is created successfully.</p>
-<p align="center">
-    <img src="/img/create-token-en.png" width="80%" />
- </p>
-<ul>
-<li>After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the &quot;generate token&quot; button, and click the &quot;submit&quot; button, then the user creates a token successfully.</li>
-<li>Call example:</li>
-</ul>
-</li>
-</ul>
-<pre><code>Token call example
-    // requires Apache HttpClient on the classpath, e.g.:
-    // import org.apache.http.NameValuePair;
-    // import org.apache.http.client.entity.UrlEncodedFormEntity;
-    // import org.apache.http.client.methods.CloseableHttpResponse;
-    // import org.apache.http.client.methods.HttpPost;
-    // import org.apache.http.impl.client.CloseableHttpClient;
-    // import org.apache.http.impl.client.HttpClients;
-    // import org.apache.http.message.BasicNameValuePair;
-    // import org.apache.http.util.EntityUtils;
-    // import java.util.ArrayList;
-    // import java.util.List;
-    /**
-     * test token
-     */
-    public  void doPOSTParam()throws Exception{
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost(&quot;http://127.0.0.1:12345/escheduler/projects/create&quot;);
-        httpPost.setHeader(&quot;token&quot;, &quot;123&quot;);
-        // set parameters
-        List&lt;NameValuePair&gt; parameters = new ArrayList&lt;NameValuePair&gt;();
-        parameters.add(new BasicNameValuePair(&quot;projectName&quot;, &quot;qzw&quot;));
-        parameters.add(new BasicNameValuePair(&quot;desc&quot;, &quot;qzw&quot;));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), &quot;UTF-8&quot;);
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-</code></pre>
-<h4>5.6 Granted permission</h4>
-<pre><code>* Granted permissions include project permissions, resource permissions, data source permissions, UDF function permissions.
-* The administrator can authorize the projects, resources, data sources and UDF functions not created by ordinary users. Because the authorization methods for projects, resources, data sources and UDF functions are the same, we take project authorization as an example.
-* Note: For projects created by users themselves, the user already has all permissions, so these projects are not shown in the project list or the selected project list.
-</code></pre>
-<ul>
-<li>The administrator enters the Security Center -&gt; User Management page and clicks the &quot;Authorize&quot; button of the user who needs to be authorized, as shown in the figure below:</li>
-</ul>
- <p align="center">
-  <img src="/img/auth-en.png" width="80%" />
-</p>
-<ul>
-<li>Select the project to grant the project authorization.</li>
-</ul>
-<p align="center">
-   <img src="/img/authproject-en.png" width="80%" />
- </p>
-<ul>
-<li>Resources, data sources, and UDF function authorization are the same as project authorization.</li>
-</ul>
-<h4>5.7 Worker grouping</h4>
-<p>Each worker node will belong to its own worker group, and the default group is &quot;default&quot;.</p>
-<p>When the task is executed, the task can be assigned to the specified worker group, and the task will be executed by the worker node in the group.</p>
-<blockquote>
-<p>Add/Update worker group</p>
-</blockquote>
-<ul>
-<li>Open the &quot;conf/worker.properties&quot; configuration file on the worker node where you want to set the group, and modify the &quot;worker.groups&quot; parameter</li>
-<li>The &quot;worker.groups&quot; parameter is set to the names of the groups this worker node belongs to; the default value is &quot;default&quot;.</li>
-<li>If the worker node corresponds to more than one group, they are separated by commas</li>
-</ul>
-<pre><code>Example: 
-worker.groups=default,test
-</code></pre>
-<h3>6. Monitoring Center</h3>
-<h4>6.1 Service management</h4>
-<ul>
-<li>Service management is mainly to monitor and display the health status and basic information of each service in the system</li>
-</ul>
-<h4>6.1.1 master monitoring</h4>
-<ul>
-<li>Mainly related to master information.</li>
-</ul>
-<p align="center">
-   <img src="/img/master-jk-en.png" width="80%" />
- </p>
-<h4>6.1.2 worker monitoring</h4>
-<ul>
-<li>Mainly related to worker information.</li>
-</ul>
-<p align="center">
-   <img src="/img/worker-jk-en.png" width="80%" />
- </p>
-<h4>6.1.3 Zookeeper monitoring</h4>
-<ul>
-<li>Mainly related configuration information of each worker and master in ZooKeeper.</li>
-</ul>
-<p align="center">
-   <img src="/img/zookeeper-monitor-en.png" width="80%" />
- </p>
-<h4>6.1.4 DB monitoring</h4>
-<ul>
-<li>Mainly the health of the DB</li>
-</ul>
-<p align="center">
-   <img src="/img/mysql-jk-en.png" width="80%" />
- </p>
-<h4>6.2 Statistics management</h4>
-<p align="center">
-   <img src="/img/statistics-en.png" width="80%" />
- </p>
-<ul>
-<li>Number of commands to be executed: statistics on the t_ds_command table</li>
-<li>The number of failed commands: statistics on the t_ds_error_command table</li>
-<li>Number of tasks to run: Count the data of task_queue in Zookeeper</li>
-<li>Number of tasks to be killed: Count the data of task_kill in Zookeeper</li>
-</ul>
-<h3>7. <span id=TaskParamers>Task node type and parameter settings</span></h3>
-<h4>7.1 Shell node</h4>
-<blockquote>
-<p>Shell node: when the worker executes it, a temporary shell script is generated and run by the Linux user with the same name as the tenant.</p>
-</blockquote>
-<ul>
-<li>
-<p>Click Project Management-Project Name-Workflow Definition, and click the &quot;Create Workflow&quot; button to enter the DAG editing page.</p>
-</li>
-<li>
-<p>Drag <img src="/img/shell.png" width="35"/> from the toolbar to the drawing board, as shown in the figure below:</p>
-<p align="center">
-    <img src="/img/shell-en.png" width="80%" />
-</p>
-</li>
-<li>
-<p>Node name: The node name in a workflow definition is unique.</p>
-</li>
-<li>
-<p>Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.</p>
-</li>
-<li>
-<p>Descriptive information: describe the function of the node.</p>
-</li>
-<li>
-<p>Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</p>
-</li>
-<li>
-<p>Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</p>
-</li>
-<li>
-<p>Number of failed retry attempts: The number of times a failed task is resubmitted. It can be selected from the drop-down or filled in manually.</p>
-</li>
-<li>
-<p>Failed retry interval: The time interval between resubmissions of a failed task. It can be selected from the drop-down or filled in manually.</p>
-</li>
-<li>
-<p>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</p>
-</li>
-<li>
-<p>Script: SHELL program developed by users.</p>
-</li>
-<li>
-<p>Resource: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.</p>
-</li>
-<li>
-<p>User-defined parameters: user-defined parameters local to the SHELL task; occurrences of ${variable} in the script are replaced with the parameter values (see the sketch below).</p>
-</li>
-</ul>
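-<p>A minimal sketch of such a shell task script, assuming a custom parameter named dt has been defined on the task and test.sh has been selected as a resource (both names are only examples):</p>
-<pre><code>#!/bin/bash
-# ${dt} is replaced with the value of the custom parameter &quot;dt&quot; before the script runs
-echo &quot;business date is ${dt}&quot;
-
-# call a resource file selected on the task (uploaded via Resource Center -&gt; File Management)
-sh test.sh
-</code></pre>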
-<h4>7.2 Sub-process node</h4>
-<ul>
-<li>The sub-process node is to execute a certain external workflow definition as a task node.
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-</li>
-</ul>
-<p align="center">
-   <img src="/img/sub-process-en.png" width="80%" />
- </p>
-<ul>
-<li>Node name: The node name in a workflow definition is unique</li>
-<li>Run flag: identify whether this node can be scheduled normally</li>
-<li>Descriptive information: describe the function of the node</li>
-<li>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
-<li>Sub-node: It is the workflow definition of the selected sub-process. Enter the sub-node in the upper right corner to jump to the workflow definition of the selected sub-process</li>
-</ul>
-<h4>7.3 DEPENDENT node</h4>
-<ul>
-<li>Dependent nodes are <strong>dependency check nodes</strong>. For example, process A depends on the successful execution of process B yesterday, and the dependent node will check whether process B has a successful execution yesterday.</li>
-</ul>
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/dependent-nodes-en.png" width="80%" />
- </p>
-<blockquote>
-<p>The dependent node provides a logical judgment function, such as checking whether the B process was successful yesterday, or whether the C process was executed successfully.</p>
-</blockquote>
-  <p align="center">
-   <img src="/img/depend-node-en.png" width="80%" />
- </p>
-<blockquote>
-<p>For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:</p>
-</blockquote>
- <p align="center">
-   <img src="/img/depend-node1-en.png" width="80%" />
- </p>
-<blockquote>
-<p>If the weekly report A also needs to be executed successfully last Tuesday:</p>
-</blockquote>
- <p align="center">
-   <img src="/img/depend-node3-en.png" width="80%" />
- </p>
-<h4>7.4 Stored procedure node</h4>
-<ul>
-<li>According to the selected data source, execute the stored procedure.
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG"> task node from the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-</li>
-</ul>
-<p align="center">
-   <img src="/img/procedure-en.png" width="80%" />
- </p>
-<ul>
-<li>Data source: The data source type of the stored procedure supports MySQL and POSTGRESQL, select the corresponding data source</li>
-<li>Method: is the method name of the stored procedure</li>
-<li>Custom parameters: The custom parameter types of the stored procedure support IN and OUT, and the data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN</li>
-</ul>
-<h4>7.5 SQL node</h4>
-<ul>
-<li>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG"> task node from the toolbar into the drawing board</li>
-<li>Non-query SQL function: edit non-query SQL task information, select non-query for sql type, as shown in the figure below:</li>
-</ul>
- <p align="center">
-  <img src="/img/sql-en.png" width="80%" />
-</p>
-<ul>
-<li>Query SQL function: Edit and query SQL task information, sql type selection query, select form or attachment to send mail to the specified recipient, as shown in the figure below.</li>
-</ul>
-<p align="center">
-   <img src="/img/sql-node-en.png" width="80%" />
- </p>
-<ul>
-<li>Data source: select the corresponding data source</li>
-<li>sql type: supports query and non-query. A query is a select-type statement and returns a result set; you can choose one of three email notification templates: form, attachment, or form attachment. A non-query returns no result set and is used for update, delete, and insert operations.</li>
-<li>sql parameter: the input parameter format is key1=value1;key2=value2...</li>
-<li>sql statement: SQL statement</li>
-<li>UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.</li>
-<li>Custom parameters: Unlike the stored procedure task type, where custom parameters are used in order to set values for the method's arguments, SQL task type custom parameters replace ${variable} in the SQL statement; the custom parameter types and data types are the same as for the stored procedure task type (see the example below).</li>
-<li>Pre-sql: Pre-sql is executed before the sql statement.</li>
-<li>Post-sql: Post-sql is executed after the sql statement.</li>
-</ul>
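-<p>For illustration only, assuming a custom parameter dt with value 20220110 and a hypothetical table t_demo, the sql statement could use the parameter like this:</p>
-<pre><code>-- ${dt} is replaced with the custom parameter value before execution
-SELECT id, name FROM t_demo WHERE create_date = '${dt}';
-</code></pre>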
-<h4>7.6 SPARK node</h4>
-<ul>
-<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>
-</ul>
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG"> task node from the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
-<ul>
-<li>Program type: supports JAVA, Scala and Python three languages</li>
-<li>The class of the main function: is the full path of the Spark program’s entry Main Class</li>
-<li>Main jar package: Spark jar package</li>
-<li>Deployment mode: support three modes of yarn-cluster, yarn-client and local</li>
-<li>Driver cores and memory: You can set the number of Driver cores and the amount of Driver memory</li>
-<li>Executors: You can set the number of Executors, the amount of Executor memory, and the number of Executor cores (see the sketch below)</li>
-<li>Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.</li>
-<li>Other parameters: support --jars, --files, --archives, --conf format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: a user-defined parameter of the Spark task, which replaces ${variable} in the script with its value</li>
-</ul>
-<p>Note: JAVA and Scala are only used for identification and there is no difference between them. If the Spark program is developed in Python, there is no main function class and the other settings are the same.</p>
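-<p>As a rough sketch of how these fields map onto the <code>spark-submit</code> call issued by the worker (the class name, jar name and resource sizes below are placeholders, not defaults):</p>
-<pre><code>spark-submit --master yarn --deploy-mode cluster \
-  --class com.example.demo.Main \
-  --driver-cores 1 --driver-memory 512M \
-  --num-executors 2 --executor-cores 2 --executor-memory 2G \
-  demo-spark.jar arg1 arg2
-</code></pre>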
-<h4>7.7 MapReduce(MR) node</h4>
-<ul>
-<li>Using the MR node, you can directly execute the MR program. For the mr node, the worker will use the <code>hadoop jar</code> method to submit tasks</li>
-</ul>
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<ol>
-<li>JAVA program</li>
-</ol>
- <p align="center">
-   <img src="/img/mr_java_en.png" width="80%" />
- </p>
-<ul>
-<li>The class of the main function: is the full path of the Main Class, the entry point of the MR program</li>
-<li>Program type: select JAVA language</li>
-<li>Main jar package: is the MR jar package</li>
-<li>Command line parameters: set the input parameters of the MR program and support the substitution of custom parameter variables</li>
-<li>Other parameters: support -D, -files, -libjars, -archives format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script</li>
-</ul>
-<ol start="2">
-<li>Python program</li>
-</ol>
-<p align="center">
-   <img src="/img/mr_edit_en.png" width="80%" />
- </p>
-<ul>
-<li>Program type: select Python language</li>
-<li>Main jar package: is the Python jar package for running MR</li>
-<li>Other parameters: support -D, -mapper, -reducer, -input, -output format; here you can set user-defined parameter input, such as:</li>
-<li>-mapper &quot;mapper.py 1&quot; -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}</li>
-<li>The &quot;mapper.py 1&quot; after -mapper is two parameters: the first parameter is mapper.py, and the second parameter is 1</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script</li>
-</ul>
-<h4>7.8 Python Node</h4>
-<ul>
-<li>Using python nodes, you can directly execute python scripts. For python nodes, workers use the <code>python</code> command to submit tasks.</li>
-</ul>
-<blockquote>
-<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG"> task node from the toolbar to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/python-en.png" width="80%" />
- </p>
-<ul>
-<li>Script: Python program developed by the user</li>
-<li>Resources: refers to the list of resource files that need to be called in the script</li>
-<li>User-defined parameter: a local user-defined parameter of the Python task, which replaces ${variable} in the script with its value</li>
-<li>Note: If you import a python file under the resource directory tree, you need to add an <code>__init__.py</code> file</li>
-</ul>
-<h4>7.9 Flink Node</h4>
-<ul>
-<li>Drag the <img src="/img/flink.png" width="35"/> task node from the toolbar to the drawing board, as shown in the following figure:</li>
-</ul>
-<p align="center">
-  <img src="/img/flink-en.png" width="80%" />
-</p>
-<ul>
-<li>Program type: supports JAVA, Scala and Python three languages</li>
-<li>The class of the main function: is the full path of the Main Class, the entry point of the Flink program</li>
-<li>Main jar package: is the Flink jar package</li>
-<li>Deployment mode: supports two modes: cluster and local</li>
-<li>Number of slots: You can set the number of slots</li>
-<li>Number of TaskManagers: You can set the number of TaskManagers</li>
-<li>JobManager memory: You can set the amount of JobManager memory</li>
-<li>TaskManager memory: You can set the amount of TaskManager memory</li>
-<li>Command line parameters: Set the input parameters of the Flink program and support the substitution of custom parameter variables.</li>
-<li>Other parameters: support --jars, --files, --archives, --conf format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>Custom parameter: a local user-defined parameter of the Flink task, which replaces ${variable} in the script with its value</li>
-</ul>
-<p>Note: JAVA and Scala are only used for identification and there is no difference between them. If the Flink program is developed in Python, there is no main function class and the other settings are the same.</p>
-<h4>7.10 http Node</h4>
-<ul>
-<li>Drag the <img src="/img/http.png" width="35"/> task node from the toolbar to the drawing board, as shown in the following figure:</li>
-</ul>
-<p align="center">
-   <img src="/img/http-en.png" width="80%" />
- </p>
-<ul>
-<li>Node name: The node name in a workflow definition is unique.</li>
-<li>Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.</li>
-<li>Descriptive information: describe the function of the node.</li>
-<li>Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</li>
-<li>Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</li>
-<li>Number of failed retry attempts: The number of times a failed task is resubmitted. It can be selected from the drop-down or filled in manually.</li>
-<li>Failed retry interval: The time interval between resubmissions of a failed task. It can be selected from the drop-down or filled in manually.</li>
-<li>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
-<li>Request address: http request URL.</li>
-<li>Request type: supports GET, POST, HEAD, PUT, and DELETE.</li>
-<li>Request parameters: Support Parameter, Body, Headers.</li>
-<li>Verification conditions: support default response code, custom response code, content included, content not included.</li>
-<li>Verification content: When the verification condition is custom response code, content included, or content not included, the verification content must be filled in.</li>
-<li>Custom parameter: It is a user-defined parameter of http part, which will replace the content with ${variable} in the script.</li>
-</ul>
-<h4>7.11 DATAX Node</h4>
-<ul>
-<li>
-<p>Drag in the toolbar<img src="/img/datax.png" width="35"/>Task node into the drawing board</p>
-<p align="center">
- <img src="/img/datax-en.png" width="80%" />
-</p>
-</li>
-<li>
-<p>Custom template: When you turn on the custom template switch, you can customize the content of the json configuration file of the datax node (applicable when the control configuration does not meet the requirements)</p>
-</li>
-<li>
-<p>Data source: select the data source to extract the data</p>
-</li>
-<li>
-<p>sql statement: the sql statement used to extract data from the data source; when the node is executed, the column names of the sql query are automatically parsed and mapped to the synchronization column names of the target table. When the source table and target table column names are inconsistent, they can be converted by column alias (as)</p>
-</li>
-<li>
-<p>Target library: select the target library for data synchronization</p>
-</li>
-<li>
-<p>Target table: the name of the target table for data synchronization</p>
-</li>
-<li>
-<p>Pre-sql: Pre-sql is executed before the sql statement (executed by the target library).</p>
-</li>
-<li>
-<p>Post-sql: Post-sql is executed after the sql statement (executed by the target library).</p>
-</li>
-<li>
-<p>json: json configuration file for datax synchronization</p>
-</li>
-<li>
-<p>Custom parameters: For the SQL task type, custom parameters replace the ${variable} placeholders in the SQL statement, whereas the stored procedure type sets values for the method by custom parameter order. The custom parameter types and data types are the same as for the stored procedure task type.</p>
-</li>
-</ul>
-<h4>8. Parameters</h4>
-<h4>8.1 System parameters</h4>
-<table>
-    <tr><th>variable</th><th>meaning</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>The day before the scheduled time of the daily scheduling instance, the format is yyyyMMdd, when the data is supplemented, the date is +1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>The timing time of the daily scheduling instance, the format is yyyyMMdd, when the data is supplemented, the date is +1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>The timing time of the daily scheduling instance, the format is yyyyMMddHHmmss, when the data is supplemented, the date is +1</td>
-    </tr>
-</table>
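-<p>For illustration only: assuming a daily instance scheduled at 2022-01-10 10:00:00 (a hypothetical value, not from the original manual), the system parameters would resolve roughly as follows:</p>
-<pre><code>${system.biz.date}    -&gt; 20220109
-${system.biz.curdate} -&gt; 20220110
-${system.datetime}    -&gt; 20220110100000
-</code></pre>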
-<h4>8.2 Time custom parameters</h4>
-<ul>
-<li>
-<p>Custom variable names are supported in the code and are declared as ${variable name}. They can reference &quot;system parameters&quot; or specify &quot;constants&quot;.</p>
-</li>
-<li>
-<p>We define this benchmark variable in the $[...] format; $[yyyyMMddHHmmss] can be decomposed and recombined arbitrarily, for example: $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.</p>
-</li>
-<li>
-<p>The following format can also be used:</p>
-<pre><code>* Next N years:$[add_months(yyyyMMdd,12*N)]
-* N years before:$[add_months(yyyyMMdd,-12*N)]
-* Next N months:$[add_months(yyyyMMdd,N)]
-* N months before:$[add_months(yyyyMMdd,-N)]
-* Next N weeks:$[yyyyMMdd+7*N]
-* First N weeks:$[yyyyMMdd-7*N]
-* Next N days:$[yyyyMMdd+N]
-* N days before:$[yyyyMMdd-N]
-* Next N hours:$[HHmmss+N/24]
-* First N hours:$[HHmmss-N/24]
-* Next N minutes:$[HHmmss+N/24/60]
-* First N minutes:$[HHmmss-N/24/60]
-</code></pre>
-</li>
-</ul>
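-<p>As a rough worked example (assuming the same hypothetical schedule date of 2022-01-10 as above), the benchmark variable expands as follows:</p>
-<pre><code>$[yyyyMMdd]                -&gt; 20220110
-$[yyyyMMdd-1]              -&gt; 20220109   (1 day before)
-$[yyyyMMdd+7]              -&gt; 20220117   (next week)
-$[add_months(yyyyMMdd,-1)] -&gt; 20211210   (1 month before)
-</code></pre>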
-<h4>8.3 <span id=UserDefinedParameters>User-defined parameters</span></h4>
-<ul>
-<li>User-defined parameters are divided into global parameters and local parameters. Global parameters are the parameters passed when saving workflow definitions and workflow instances; they can be referenced in the local parameters of any task node in the entire process.
-For example:</li>
-</ul>
-<p align="center">
-   <img src="/img/local_parameter_en.png" width="80%" />
- </p>
-<ul>
-<li>global_bizdate is a global parameter, which refers to a system parameter.</li>
-</ul>
-<p align="center">
-   <img src="/img/global_parameter_en.png" width="80%" />
- </p>
-<ul>
-<li>In the task, local_param_bizdate uses ${global_bizdate} to refer to global parameters. For scripts, you can use ${local_param_bizdate} to refer to the value of global variable global_bizdate, or directly set the value of local_param_bizdate through JDBC.</li>
-</ul>
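-<p>A minimal sketch based on the example above (not an excerpt from it): because local_param_bizdate is defined with the value ${global_bizdate}, the scheduler resolves it before execution, so a Python script in the task can read it as plain text:</p>
-<pre><code># local_param_bizdate is a local parameter whose value is ${global_bizdate}
-bizdate = &quot;${local_param_bizdate}&quot;  # substituted by the scheduler before the script runs
-print(&quot;global_bizdate resolved to &quot; + bizdate)
-</code></pre>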
-</div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
-  <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
-  <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
-  <script>window.rootPath = '';</script>
-  <script src="/build/vendor.2ace653.js"></script>
-  <script src="/build/docs.md.0cdf107.js"></script>
-  <script>
-    var _hmt = _hmt || [];
-    (function() {
-      var hm = document.createElement("script");
-      hm.src = "https://hm.baidu.com/hm.js?4e7b4b400dd31fa015018a435c64d06f";
-      var s = document.getElementsByTagName("script")[0];
-      s.parentNode.insertBefore(hm, s);
-    })();
-  </script>
-  <!-- Global site tag (gtag.js) - Google Analytics -->
-  <script async src="https://www.googletagmanager.com/gtag/js?id=G-899J8PYKJZ"></script>
-  <script>
-    window.dataLayer = window.dataLayer || [];
-    function gtag(){dataLayer.push(arguments);}
-    gtag('js', new Date());
-
-    gtag('config', 'G-899J8PYKJZ');
-  </script>
-</body>
-</html>
\ No newline at end of file
diff --git a/en-us/docs/2.0.0/user_doc/guide/system-manual.json b/en-us/docs/2.0.0/user_doc/guide/system-manual.json
deleted file mode 100644
index 5fbaa7a..0000000
--- a/en-us/docs/2.0.0/user_doc/guide/system-manual.json
+++ /dev/null
@@ -1,6 +0,0 @@
-{
-  "filename": "system-manual.md",
-  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"https://dolphinscheduler.apache.org/en-us/docs/2.0.0/user_doc/guide/quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80% [...]
-  "link": "/dist/en-us/docs/2.0.0/user_doc/guide/system-manual.html",
-  "meta": {}
-}
\ No newline at end of file
diff --git a/en-us/docs/2.0.1/user_doc/expansion-reduction.html b/en-us/docs/2.0.1/user_doc/expansion-reduction.html
index d7aa4b8..0dbf020 100644
--- a/en-us/docs/2.0.1/user_doc/expansion-reduction.html
+++ b/en-us/docs/2.0.1/user_doc/expansion-reduction.html
@@ -116,7 +116,7 @@ workers=&quot;existing worker01:default,existing worker02:default,ds3:default,ds
 </code></pre>
 <ul>
 <li>
-<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual <a href="/en-us/docs/2.0.1/user_doc/system-manual.html">5.7 Worker grouping</a></p>
+<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to <a href="./guide/security.md">Worker grouping</a> in the Security Center guide.</p>
 </li>
 <li>
 <p>On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory</p>
diff --git a/en-us/docs/2.0.1/user_doc/expansion-reduction.json b/en-us/docs/2.0.1/user_doc/expansion-reduction.json
index 37f4ed6..1484c9e 100644
--- a/en-us/docs/2.0.1/user_doc/expansion-reduction.json
+++ b/en-us/docs/2.0.1/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
+  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
   "link": "/dist/en-us/docs/2.0.1/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/2.0.2/user_doc/expansion-reduction.html b/en-us/docs/2.0.2/user_doc/expansion-reduction.html
index fe862ef..53e39c4 100644
--- a/en-us/docs/2.0.2/user_doc/expansion-reduction.html
+++ b/en-us/docs/2.0.2/user_doc/expansion-reduction.html
@@ -116,7 +116,7 @@ workers=&quot;existing worker01:default,existing worker02:default,ds3:default,ds
 </code></pre>
 <ul>
 <li>
-<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual <a href="/en-us/docs/2.0.2/user_doc/system-manual.html">5.7 Worker grouping</a></p>
+<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to <a href="./guide/security.md">Worker grouping</a> in the Security Center guide.</p>
 </li>
 <li>
 <p>On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory</p>
diff --git a/en-us/docs/2.0.2/user_doc/expansion-reduction.json b/en-us/docs/2.0.2/user_doc/expansion-reduction.json
index b956adf..129edce 100644
--- a/en-us/docs/2.0.2/user_doc/expansion-reduction.json
+++ b/en-us/docs/2.0.2/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
+  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
   "link": "/dist/en-us/docs/2.0.2/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/expansion-reduction.html b/en-us/docs/dev/user_doc/expansion-reduction.html
index 5c74003..12aff5d 100644
--- a/en-us/docs/dev/user_doc/expansion-reduction.html
+++ b/en-us/docs/dev/user_doc/expansion-reduction.html
@@ -115,7 +115,7 @@ workers=&quot;existing worker01:default,existing worker02:default,ds3:default,ds
 </code></pre>
 <ul>
 <li>
-<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual <a href="/en-us/docs/1.3.8/user_doc/system-manual.html">5.7 Worker grouping</a></p>
+<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to <a href="./guide/security.md">Worker grouping</a> in the Security Center guide.</p>
 </li>
 <li>
 <p>On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory</p>
diff --git a/en-us/docs/dev/user_doc/expansion-reduction.json b/en-us/docs/dev/user_doc/expansion-reduction.json
index a6463b6..34ebee8 100644
--- a/en-us/docs/dev/user_doc/expansion-reduction.json
+++ b/en-us/docs/dev/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
+  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
   "link": "/dist/en-us/docs/dev/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/expansion-reduction.html b/en-us/docs/latest/user_doc/expansion-reduction.html
index fe862ef..53e39c4 100644
--- a/en-us/docs/latest/user_doc/expansion-reduction.html
+++ b/en-us/docs/latest/user_doc/expansion-reduction.html
@@ -116,7 +116,7 @@ workers=&quot;existing worker01:default,existing worker02:default,ds3:default,ds
 </code></pre>
 <ul>
 <li>
-<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to the user manual <a href="/en-us/docs/2.0.2/user_doc/system-manual.html">5.7 Worker grouping</a></p>
+<p>If the expansion is for worker nodes, you need to set the worker group. Please refer to <a href="./guide/security.md">Worker grouping</a> in the Security Center guide.</p>
 </li>
 <li>
 <p>On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory</p>
diff --git a/en-us/docs/latest/user_doc/expansion-reduction.json b/en-us/docs/latest/user_doc/expansion-reduction.json
index b956adf..129edce 100644
--- a/en-us/docs/latest/user_doc/expansion-reduction.json
+++ b/en-us/docs/latest/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
+  "__html": "<!-- markdown-link-check-disable -->\n<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled [...]
   "link": "/dist/en-us/docs/2.0.2/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/2.0.0/user_doc/expansion-reduction.html b/zh-cn/docs/2.0.0/user_doc/expansion-reduction.html
index 70924b4..ab854da 100644
--- a/zh-cn/docs/2.0.0/user_doc/expansion-reduction.html
+++ b/zh-cn/docs/2.0.0/user_doc/expansion-reduction.html
@@ -119,7 +119,7 @@ workers=&quot;现有worker01:default,现有worker02:default,ds3:default,ds4:defa
 </code></pre>
 <ul>
 <li>
-<p>如果扩容的是worker节点,需要设置worker分组.请参考用户手册<a href="https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html">5.7 创建worker分组 </a></p>
+<p>如果扩容的是worker节点,需要设置worker分组.请参考安全中心<a href="./guide/security.md">创建worker分组</a></p>
 </li>
 <li>
 <p>在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限</p>
diff --git a/zh-cn/docs/2.0.0/user_doc/expansion-reduction.json b/zh-cn/docs/2.0.0/user_doc/expansion-reduction.json
index b4dda27..1f2e26e 100644
--- a/zh-cn/docs/2.0.0/user_doc/expansion-reduction.json
+++ b/zh-cn/docs/2.0.0/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
+  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
   "link": "/dist/zh-cn/docs/2.0.0/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/2.0.0/user_doc/guide/system-manual.html b/zh-cn/docs/2.0.0/user_doc/guide/system-manual.html
deleted file mode 100644
index ddc535b..0000000
--- a/zh-cn/docs/2.0.0/user_doc/guide/system-manual.html
+++ /dev/null
@@ -1,1008 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-<head>
-  <meta charset="UTF-8">
-  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
-  <meta name="keywords" content="system-manual">
-  <meta name="description" content="system-manual">
-  <title>system-manual</title>
-  <link rel="shortcut icon" href="/img/favicon.ico">
-  <link rel="stylesheet" href="/build/vendor.23870e5.css">
-</head>
-<body>
-  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/zh-cn/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">En</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant [...]
-<h2>快速上手</h2>
-<blockquote>
-<p>请参照<a href="https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html">快速上手</a></p>
-</blockquote>
-<h2>操作指南</h2>
-<h3>1. 首页</h3>
-<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。
-<p align="center">
-<img src="/img/home.png" width="80%" />
-</p></p>
-<h3>2. 项目管理</h3>
-<h4>2.1 创建项目</h4>
-<ul>
-<li>
-<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>
-<p align="center">
-    <img src="/img/project.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.2 项目首页</h4>
-<ul>
-<li>
-<p>在项目管理页面点击项目名称链接,进入项目首页,如下图所示,项目首页包含该项目的任务状态统计、流程状态统计、工作流定义统计。</p>
-<p align="center">
-   <img src="/img/project-home.png" width="80%" />
-</p>
-</li>
-<li>
-<p>任务状态统计:在指定时间范围内,统计任务实例中状态为提交成功、正在运行、准备暂停、暂停、准备停止、停止、失败、成功、需要容错、kill、等待线程的个数</p>
-</li>
-<li>
-<p>流程状态统计:在指定时间范围内,统计工作流实例中状态为提交成功、正在运行、准备暂停、暂停、准备停止、停止、失败、成功、需要容错、kill、等待线程的个数</p>
-</li>
-<li>
-<p>工作流定义统计:统计用户创建的工作流定义及管理员授予该用户的工作流定义</p>
-</li>
-</ul>
-<h4>2.3 工作流定义</h4>
-<h4><span id=creatDag>2.3.1 创建工作流定义</span></h4>
-<ul>
-<li>点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,点击“创建工作流”按钮,进入<strong>工作流DAG编辑</strong>页面,如下图所示:<p align="center">
-    <img src="/img/dag0.png" width="80%" />
-</p>  
-</li>
-<li>工具栏中拖拽<img src="/img/shell.png" width="35"/>到画板中,新增一个Shell任务,如下图所示:<p align="center">
-    <img src="/img/shell_dag.png" width="80%" />
-</p>  
-</li>
-<li><strong>添加shell任务的参数设置:</strong></li>
-</ul>
-<ol>
-<li>填写“节点名称”,“描述”,“脚本”字段;</li>
-<li>“运行标志”勾选“正常”,若勾选“禁止执行”,运行工作流不会执行该任务;</li>
-<li>选择“任务优先级”:当worker线程数不足时,级别高的任务在执行队列中会优先执行,相同优先级的任务按照先进先出的顺序执行;</li>
-<li>超时告警(非必选):勾选超时告警、超时失败,填写“超时时长”,当任务执行时间超过<strong>超时时长</strong>,会发送告警邮件并且任务超时失败;</li>
-<li>资源(非必选)。资源文件是资源中心-&gt;文件管理页面创建或上传的文件,如文件名为<code>test.sh</code>,脚本中调用资源命令为<code>sh test.sh</code>;</li>
-<li>自定义参数(非必填),参考<a href="#UserDefinedParameters">自定义参数</a>;</li>
-<li>点击&quot;确认添加&quot;按钮,保存任务设置。</li>
-</ol>
-<ul>
-<li>
-<p><strong>增加任务执行的先后顺序:</strong> 点击右上角图标<img src="/img/line.png" width="35"/>连接任务;如下图所示,任务2和任务3并行执行,当任务1执行完,任务2、3会同时执行。</p>
-<p align="center">
-   <img src="/img/dag2.png" width="80%" />
-</p>
-</li>
-<li>
-<p><strong>删除依赖关系:</strong> 点击右上角&quot;箭头&quot;图标<img src="/img/arrow.png" width="35"/>,选中连接线,点击右上角&quot;删除&quot;图标<img src="/img/delete.png" width="35"/>,删除任务间的依赖关系。</p>
-<p align="center">
-   <img src="/img/dag3.png" width="80%" />
-</p>
-</li>
-<li>
-<p><strong>保存工作流定义:</strong> 点击”保存“按钮,弹出&quot;设置DAG图名称&quot;弹框,如下图所示,输入工作流定义名称,工作流定义描述,设置全局参数(选填,参考<a href="#UserDefinedParameters">自定义参数</a>),点击&quot;添加&quot;按钮,工作流定义创建成功。</p>
-<p align="center">
-   <img src="/img/dag4.png" width="80%" />
- </p>
-</li>
-</ul>
-<blockquote>
-<p>其他类型任务,请参考 <a href="#TaskParamers">任务节点类型和参数设置</a>。</p>
-</blockquote>
-<h4>2.3.2  工作流定义操作功能</h4>
-<p>点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,如下图所示:
-<p align="center">
-<img src="/img/work_list.png" width="80%" />
-</p>
-工作流定义列表的操作功能如下:</p>
-<ul>
-<li><strong>编辑:</strong> 只能编辑&quot;下线&quot;的工作流定义。工作流DAG编辑同<a href="#creatDag">创建工作流定义</a>。</li>
-<li><strong>上线:</strong> 工作流状态为&quot;下线&quot;时,上线工作流,只有&quot;上线&quot;状态的工作流能运行,但不能编辑。</li>
-<li><strong>下线:</strong> 工作流状态为&quot;上线&quot;时,下线工作流,下线状态的工作流可以编辑,但不能运行。</li>
-<li><strong>运行:</strong> 只有上线的工作流能运行。运行操作步骤见<a href="#runWorkflow">2.3.3 运行工作流</a></li>
-<li><strong>定时:</strong> 只有上线的工作流能设置定时,系统自动定时调度工作流运行。创建定时后的状态为&quot;下线&quot;,需在定时管理页面上线定时才生效。定时操作步骤见<a href="#creatTiming">2.3.4 工作流定时</a>。</li>
-<li><strong>定时管理:</strong> 定时管理页面可编辑、上线/下线、删除定时。</li>
-<li><strong>删除:</strong> 删除工作流定义。</li>
-<li><strong>下载:</strong> 下载工作流定义到本地。</li>
-<li><strong>树形图:</strong> 以树形结构展示任务节点的类型及任务状态,如下图所示:<p align="center">
-    <img src="/img/tree.png" width="80%" />
-</p>  
-</li>
-</ul>
-<h4><span id=runWorkflow>2.3.3 运行工作流</span></h4>
-<ul>
-<li>
-<p>点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,如下图所示,点击&quot;上线&quot;按钮<img src="/img/online.png" width="35"/>,上线工作流。</p>
-<p align="center">
-    <img src="/img/work_list.png" width="80%" />
-</p>
-</li>
-<li>
-<p>点击”运行“按钮,弹出启动参数设置弹框,如下图所示,设置启动参数,点击弹框中的&quot;运行&quot;按钮,工作流开始运行,工作流实例页面生成一条工作流实例。</p>
- <p align="center">
-   <img src="/img/run-work.png" width="80%" />
- </p>  
-</li>
-</ul>
-<p><span id=runParamers>工作流运行参数说明:</span></p>
-<pre><code>* 失败策略:当某一个任务节点执行失败时,其他并行的任务节点需要执行的策略。”继续“表示:某一任务失败后,其他任务节点正常执行;”结束“表示:终止所有正在执行的任务,并终止整个流程。
-* 通知策略:当流程结束,根据流程状态发送流程执行信息通知邮件,包含任何状态都不发,成功发,失败发,成功或失败都发。
-* 流程优先级:流程运行的优先级,分五个等级:最高(HIGHEST),高(HIGH),中(MEDIUM),低(LOW),最低(LOWEST)。当master线程数不足时,级别高的流程在执行队列中会优先执行,相同优先级的流程按照先进先出的顺序执行。
-* worker分组:该流程只能在指定的worker机器组里执行。默认是Default,可以在任一worker上执行。
-* 通知组:选择通知策略||超时报警||发生容错时,会发送流程信息或邮件到通知组里的所有成员。
-* 收件人:选择通知策略||超时报警||发生容错时,会发送流程信息或告警邮件到收件人列表。
-* 抄送人:选择通知策略||超时报警||发生容错时,会抄送流程信息或告警邮件到抄送人列表。
-* 启动参数: 在启动新的流程实例时,设置或覆盖全局参数的值。
-* 补数:包括串行补数、并行补数2种模式。串行补数:指定时间范围内,从开始日期至结束日期依次执行补数,依次生成N条流程实例;并行补数:指定时间范围内,多天同时进行补数,同时生成N条流程实例。 
-</code></pre>
-<ul>
-<li>
-<p>补数: 执行指定日期的工作流定义,可以选择补数时间范围(目前只支持针对连续的天进行补数),比如需要补5月1号到5月10号的数据,如下图所示:</p>
-<p align="center">
-    <img src="/img/complement.png" width="80%" />
-</p>
-<blockquote>
-<p>串行模式:补数从5月1号到5月10号依次执行,依次在流程实例页面生成十条流程实例;</p>
-</blockquote>
-<blockquote>
-<p>并行模式:同时执行5月1号到5月10号的任务,同时在流程实例页面生成十条流程实例。</p>
-</blockquote>
-</li>
-</ul>
-<h4><span id=creatTiming>2.3.4 工作流定时</span></h4>
-<ul>
-<li>创建定时:点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,上线工作流,点击&quot;定时&quot;按钮<img src="/img/timing.png" width="35"/>,弹出定时参数设置弹框,如下图所示:<p align="center">
-    <img src="/img/time-schedule.png" width="80%" />
-</p>
-</li>
-<li>选择起止时间。在起止时间范围内,定时运行工作流;不在起止时间范围内,不再产生定时工作流实例。</li>
-<li>添加一个每天凌晨5点执行一次的定时,如下图所示:<p align="center">
-    <img src="/img/time-schedule2.png" width="80%" />
-</p>
-</li>
-<li>失败策略、通知策略、流程优先级、Worker分组、通知组、收件人、抄送人同<a href="#runParamers">工作流运行参数</a>。</li>
-<li>点击&quot;创建&quot;按钮,创建定时成功,此时定时状态为&quot;<strong>下线</strong>&quot;,定时需<strong>上线</strong>才生效。</li>
-<li>定时上线:点击&quot;定时管理&quot;按钮<img src="/img/timeManagement.png" width="35"/>,进入定时管理页面,点击&quot;上线&quot;按钮,定时状态变为&quot;上线&quot;,如下图所示,工作流定时生效。<p align="center">
-    <img src="/img/time-schedule3.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.3.5 导入工作流</h4>
-<p>点击项目管理-&gt;工作流-&gt;工作流定义,进入工作流定义页面,点击&quot;导入工作流&quot;按钮,导入本地工作流文件,工作流定义列表显示导入的工作流,状态为下线。</p>
-<h4>2.4 工作流实例</h4>
-<h4>2.4.1 查看工作流实例</h4>
-<ul>
-<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,如下图所示:   <p align="center">
-      <img src="/img/instance-list.png" width="80%" />
-   </p>           
-</li>
-<li>点击工作流名称,进入DAG查看页面,查看任务执行状态,如下图所示。<p align="center">
-  <img src="/img/instance-detail.png" width="80%" />
-</p>
-</li>
-</ul>
-<h4>2.4.2 查看任务日志</h4>
-<ul>
-<li>进入工作流实例页面,点击工作流名称,进入DAG查看页面,双击任务节点,如下图所示: <p align="center">
-   <img src="/img/instanceViewLog.png" width="80%" />
- </p>
-</li>
-<li>点击&quot;查看日志&quot;,弹出日志弹框,如下图所示,任务实例页面也可查看任务日志,参考<a href="#taskLog">任务查看日志</a>。 <p align="center">
-   <img src="/img/task-log.png" width="80%" />
- </p>
-</li>
-</ul>
-<h4>2.4.3 查看任务历史记录</h4>
-<ul>
-<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面;</li>
-<li>双击任务节点,如下图所示,点击&quot;查看历史&quot;,跳转到任务实例页面,并展示该工作流实例运行的任务实例列表 <p align="center">
-   <img src="/img/task_history.png" width="80%" />
- </p>
-</li>
-</ul>
-<h4>2.4.4 查看运行参数</h4>
-<ul>
-<li>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,点击工作流名称,进入工作流DAG页面;</li>
-<li>点击左上角图标<img src="/img/run_params_button.png" width="35"/>,查看工作流实例的启动参数;点击图标<img src="/img/global_param.png" width="35"/>,查看工作流实例的全局参数和局部参数,如下图所示: <p align="center">
-   <img src="/img/run_params.png" width="80%" />
- </p>      
-</li>
-</ul>
-<h4>2.4.4 工作流实例操作功能</h4>
-<p>点击项目管理-&gt;工作流-&gt;工作流实例,进入工作流实例页面,如下图所示:<br>
-<p align="center">
-<img src="/img/instance-list.png" width="80%" />
-</p></p>
-<ul>
-<li><strong>编辑:</strong> 只能编辑已终止的流程。点击&quot;编辑&quot;按钮或工作流实例名称进入DAG编辑页面,编辑后点击&quot;保存&quot;按钮,弹出保存DAG弹框,如下图所示,在弹框中勾选&quot;是否更新到工作流定义&quot;,保存后则更新工作流定义;若不勾选,则不更新工作流定义。   <p align="center">
-     <img src="/img/editDag.png" width="80%" />
-   </p>
-</li>
-<li><strong>重跑:</strong> 重新执行已经终止的流程。</li>
-<li><strong>恢复失败:</strong> 针对失败的流程,可以执行恢复失败操作,从失败的节点开始执行。</li>
-<li><strong>停止:</strong> 对正在运行的流程进行<strong>停止</strong>操作,后台会先<code>kill</code>worker进程,再执行<code>kill -9</code>操作</li>
-<li><strong>暂停:</strong> 对正在运行的流程进行<strong>暂停</strong>操作,系统状态变为<strong>等待执行</strong>,会等待正在执行的任务结束,暂停下一个要执行的任务。</li>
-<li><strong>恢复暂停:</strong> 对暂停的流程恢复,直接从<strong>暂停的节点</strong>开始运行</li>
-<li><strong>删除:</strong> 删除工作流实例及工作流实例下的任务实例</li>
-<li><strong>甘特图:</strong> Gantt图纵轴是某个工作流实例下的任务实例的拓扑排序,横轴是任务实例的运行时间,如图示:   <p align="center">
-       <img src="/img/gant-pic.png" width="80%" />
-   </p>
-</li>
-</ul>
-<h4>2.5 任务实例</h4>
-<ul>
-<li>
-<p>点击项目管理-&gt;工作流-&gt;任务实例,进入任务实例页面,如下图所示,点击工作流实例名称,可跳转到工作流实例DAG图查看任务状态。</p>
-   <p align="center">
-      <img src="/img/task-list.png" width="80%" />
-   </p>
-</li>
-<li>
-<p><span id=taskLog>查看日志:</span>点击操作列中的“查看日志”按钮,可以查看任务执行的日志情况。</p>
-   <p align="center">
-      <img src="/img/task-log2.png" width="80%" />
-   </p>
-</li>
-</ul>
-<h3>3. 资源中心</h3>
-<h4>3.1 hdfs资源配置</h4>
-<ul>
-<li>上传资源文件和udf函数,所有上传的文件和资源都会被存储到hdfs上,所以需要以下配置项:</li>
-</ul>
-<pre><code>conf/common.properties  
-    # Users who have permission to create directories under the HDFS root path
-    hdfs.root.user=hdfs
-    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。&quot;/dolphinscheduler&quot; is recommended
-    resource.upload.path=/dolphinscheduler
-    # resource storage type : HDFS,S3,NONE
-    resource.storage.type=HDFS
-    # whether kerberos starts
-    hadoop.security.authentication.startup.state=false
-    # java.security.krb5.conf path
-    java.security.krb5.conf.path=/opt/krb5.conf
-    # loginUserFromKeytab user
-    login.user.keytab.username=hdfs-mycluster@ESZ.COM
-    # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab    
-    # if resource.storage.type is HDFS,and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
-    # if resource.storage.type is S3,write S3 address,HA,for example :s3a://dolphinscheduler,
-    # Note,s3 be sure to create the root directory /dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020    
-    #resourcemanager ha note this need ips , this empty if single
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
-    # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
-    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-
-</code></pre>
-<h4>3.2 文件管理</h4>
-<blockquote>
-<p>是对各种资源文件的管理,包括创建基本的txt/log/sh/conf/py/java等文件、上传jar包等各种类型文件,可进行编辑、重命名、下载、删除等操作。</p>
-</blockquote>
-  <p align="center">
-   <img src="/img/file-manage.png" width="80%" />
- </p>
-<ul>
-<li>创建文件</li>
-</ul>
-<blockquote>
-<p>文件格式支持以下几种类型:txt、log、sh、conf、cfg、py、java、sql、xml、hql、properties</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_create.png" width="80%" />
- </p>
-<ul>
-<li>上传文件</li>
-</ul>
-<blockquote>
-<p>上传文件:点击&quot;上传文件&quot;按钮进行上传,将文件拖拽到上传区域,文件名会自动以上传的文件名称补全</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_upload.png" width="80%" />
- </p>
-<ul>
-<li>文件查看</li>
-</ul>
-<blockquote>
-<p>对可查看的文件类型,点击文件名称,可查看文件详情</p>
-</blockquote>
-<p align="center">
-   <img src="/img/file_detail.png" width="80%" />
- </p>
-<ul>
-<li>下载文件</li>
-</ul>
-<blockquote>
-<p>点击文件列表的&quot;下载&quot;按钮下载文件或者在文件详情中点击右上角&quot;下载&quot;按钮下载文件</p>
-</blockquote>
-<ul>
-<li>文件重命名</li>
-</ul>
-<p align="center">
-   <img src="/img/file_rename.png" width="80%" />
- </p>
-<ul>
-<li>删除</li>
-</ul>
-<blockquote>
-<p>文件列表-&gt;点击&quot;删除&quot;按钮,删除指定文件</p>
-</blockquote>
-<h4>3.3 UDF管理</h4>
-<h4>3.3.1 资源管理</h4>
-<blockquote>
-<p>资源管理和文件管理功能类似,不同之处是资源管理是上传的UDF函数,文件管理上传的是用户程序,脚本及配置文件
-操作功能:重命名、下载、删除。</p>
-</blockquote>
-<ul>
-<li>上传udf资源</li>
-</ul>
-<blockquote>
-<p>和上传文件相同。</p>
-</blockquote>
-<h4>3.3.2 函数管理</h4>
-<ul>
-<li>创建udf函数</li>
-</ul>
-<blockquote>
-<p>点击“创建UDF函数”,输入udf函数参数,选择udf资源,点击“提交”,创建udf函数。</p>
-</blockquote>
-<blockquote>
-<p>目前只支持HIVE的临时UDF函数</p>
-</blockquote>
-<ul>
-<li>UDF函数名称:输入UDF函数时的名称</li>
-<li>包名类名:输入UDF函数的全路径</li>
-<li>UDF资源:设置创建的UDF对应的资源文件</li>
-</ul>
-<p align="center">
-   <img src="/img/udf_edit.png" width="80%" />
- </p>
-<h3>4. 创建数据源</h3>
-<blockquote>
-<p>数据源中心支持MySQL、POSTGRESQL、HIVE/IMPALA、SPARK、CLICKHOUSE、ORACLE、SQLSERVER等数据源</p>
-</blockquote>
-<h4>4.1 创建/编辑MySQL数据源</h4>
-<ul>
-<li>
-<p>点击“数据源中心-&gt;创建数据源”,根据需求创建不同类型的数据源。</p>
-</li>
-<li>
-<p>数据源:选择MYSQL</p>
-</li>
-<li>
-<p>数据源名称:输入数据源的名称</p>
-</li>
-<li>
-<p>描述:输入数据源的描述</p>
-</li>
-<li>
-<p>IP主机名:输入连接MySQL的IP</p>
-</li>
-<li>
-<p>端口:输入连接MySQL的端口</p>
-</li>
-<li>
-<p>用户名:设置连接MySQL的用户名</p>
-</li>
-<li>
-<p>密码:设置连接MySQL的密码</p>
-</li>
-<li>
-<p>数据库名:输入连接MySQL的数据库名称</p>
-</li>
-<li>
-<p>Jdbc连接参数:用于MySQL连接的参数设置,以JSON形式填写</p>
-</li>
-</ul>
-<p align="center">
-   <img src="/img/mysql_edit.png" width="80%" />
- </p>
-<blockquote>
-<p>点击“测试连接”,测试数据源是否可以连接成功。</p>
-</blockquote>
-<h4>4.2 创建/编辑POSTGRESQL数据源</h4>
-<ul>
-<li>数据源:选择POSTGRESQL</li>
-<li>数据源名称:输入数据源的名称</li>
-<li>描述:输入数据源的描述</li>
-<li>IP/主机名:输入连接POSTGRESQL的IP</li>
-<li>端口:输入连接POSTGRESQL的端口</li>
-<li>用户名:设置连接POSTGRESQL的用户名</li>
-<li>密码:设置连接POSTGRESQL的密码</li>
-<li>数据库名:输入连接POSTGRESQL的数据库名称</li>
-<li>Jdbc连接参数:用于POSTGRESQL连接的参数设置,以JSON形式填写</li>
-</ul>
-<p align="center">
-   <img src="/img/postgresql_edit.png" width="80%" />
- </p>
-<h4>4.3 创建/编辑HIVE数据源</h4>
-<p>1.使用HiveServer2方式连接</p>
- <p align="center">
-    <img src="/img/hive_edit.png" width="80%" />
-  </p>
-<ul>
-<li>数据源:选择HIVE</li>
-<li>数据源名称:输入数据源的名称</li>
-<li>描述:输入数据源的描述</li>
-<li>IP/主机名:输入连接HIVE的IP</li>
-<li>端口:输入连接HIVE的端口</li>
-<li>用户名:设置连接HIVE的用户名</li>
-<li>密码:设置连接HIVE的密码</li>
-<li>数据库名:输入连接HIVE的数据库名称</li>
-<li>Jdbc连接参数:用于HIVE连接的参数设置,以JSON形式填写</li>
-</ul>
-<p>2.使用HiveServer2 HA Zookeeper方式连接</p>
- <p align="center">
-    <img src="/img/hive_edit2.png" width="80%" />
-  </p>
-<p>注意:如果开启了<strong>kerberos</strong>,则需要填写 <strong>Principal</strong></p>
-<p align="center">
-    <img src="/img/hive_kerberos.png" width="80%" />
-  </p>
-<h4>4.4 创建/编辑Spark数据源</h4>
-<p align="center">
-   <img src="/img/spark_datesource.png" width="80%" />
- </p>
-<ul>
-<li>数据源:选择Spark</li>
-<li>数据源名称:输入数据源的名称</li>
-<li>描述:输入数据源的描述</li>
-<li>IP/主机名:输入连接Spark的IP</li>
-<li>端口:输入连接Spark的端口</li>
-<li>用户名:设置连接Spark的用户名</li>
-<li>密码:设置连接Spark的密码</li>
-<li>数据库名:输入连接Spark的数据库名称</li>
-<li>Jdbc连接参数:用于Spark连接的参数设置,以JSON形式填写</li>
-</ul>
-<p>注意:如果开启了<strong>kerberos</strong>,则需要填写 <strong>Principal</strong></p>
-<p align="center">
-    <img src="/img/sparksql_kerberos.png" width="80%" />
-  </p>
-<h3>5. 安全中心(权限系统)</h3>
-<pre><code> * 安全中心只有管理员账户才有权限操作,分别有队列管理、租户管理、用户管理、告警组管理、worker分组管理、令牌管理等功能,在用户管理模块可以对资源、数据源、项目等授权
- * 管理员登录,默认用户名密码:admin/dolphinscheduler123
-</code></pre>
-<h4>5.1 创建队列</h4>
-<ul>
-<li>队列是在执行spark、mapreduce等程序,需要用到“队列”参数时使用的。</li>
-<li>管理员进入安全中心-&gt;队列管理页面,点击“创建队列”按钮,创建队列。</li>
-</ul>
- <p align="center">
-    <img src="/img/create-queue.png" width="80%" />
-  </p>
-<h4>5.2 添加租户</h4>
-<ul>
-<li>租户对应的是Linux的用户,用于worker提交作业所使用的用户。如果linux没有这个用户,worker会在执行脚本的时候创建这个用户。</li>
-<li>租户编码:<strong>租户编码是Linux上的用户,唯一,不能重复</strong></li>
-<li>管理员进入安全中心-&gt;租户管理页面,点击“创建租户”按钮,创建租户。</li>
-</ul>
- <p align="center">
-    <img src="/img/addtenant.png" width="80%" />
-  </p>
-<h4>5.3 创建普通用户</h4>
-<ul>
-<li>用户分为<strong>管理员用户</strong>和<strong>普通用户</strong></li>
-</ul>
-<pre><code>* 管理员有授权和用户管理等权限,没有创建项目和工作流定义的操作的权限。
-* 普通用户可以创建项目和对工作流定义的创建,编辑,执行等操作。
-* 注意:如果该用户切换了租户,则该用户所在租户下所有资源将复制到切换的新租户下。
-</code></pre>
-<ul>
-<li>管理员进入安全中心-&gt;用户管理页面,点击“创建用户”按钮,创建用户。</li>
-</ul>
-<p align="center">
-   <img src="/img/useredit2.png" width="80%" />
- </p>
-<blockquote>
-<p><strong>编辑用户信息</strong></p>
-</blockquote>
-<ul>
-<li>管理员进入安全中心-&gt;用户管理页面,点击&quot;编辑&quot;按钮,编辑用户信息。</li>
-<li>普通用户登录后,点击用户名下拉框中的用户信息,进入用户信息页面,点击&quot;编辑&quot;按钮,编辑用户信息。</li>
-</ul>
-<blockquote>
-<p><strong>修改用户密码</strong></p>
-</blockquote>
-<ul>
-<li>管理员进入安全中心-&gt;用户管理页面,点击&quot;编辑&quot;按钮,编辑用户信息时,输入新密码修改用户密码。</li>
-<li>普通用户登录后,点击用户名下拉框中的用户信息,进入修改密码页面,输入密码并确认密码后点击&quot;编辑&quot;按钮,则修改密码成功。</li>
-</ul>
-<h4>5.4 创建告警组</h4>
-<ul>
-<li>告警组是在启动时设置的参数,在流程结束以后会将流程的状态和其他信息以邮件形式发送给告警组。</li>
-</ul>
-<ul>
-<li>管理员进入安全中心-&gt;告警组管理页面,点击“创建告警组”按钮,创建告警组。</li>
-</ul>
-  <p align="center">
-    <img src="/img/mail_edit.png" width="80%" />
-  </p>
-<h4>5.5 令牌管理</h4>
-<blockquote>
-<p>由于后端接口有登录检查,令牌管理提供了一种可以通过调用接口的方式对系统进行各种操作。</p>
-</blockquote>
-<ul>
-<li>管理员进入安全中心-&gt;令牌管理页面,点击“创建令牌”按钮,选择失效时间与用户,点击&quot;生成令牌&quot;按钮,点击&quot;提交&quot;按钮,则选择用户的token创建成功。</li>
-</ul>
-  <p align="center">
-      <img src="/img/creat_token.png" width="80%" />
-   </p>
-<ul>
-<li>
-<p>普通用户登录后,点击用户名下拉框中的用户信息,进入令牌管理页面,选择失效时间,点击&quot;生成令牌&quot;按钮,点击&quot;提交&quot;按钮,则该用户创建token成功。</p>
-</li>
-<li>
-<p>调用示例:</p>
-</li>
-</ul>
-<pre><code class="language-令牌调用示例">    /**
-     * test token
-     */
-    public  void doPOSTParam()throws Exception{
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost(&quot;http://127.0.0.1:12345/escheduler/projects/create&quot;);
-        httpPost.setHeader(&quot;token&quot;, &quot;123&quot;);
-        // set parameters
-        List&lt;NameValuePair&gt; parameters = new ArrayList&lt;NameValuePair&gt;();
-        parameters.add(new BasicNameValuePair(&quot;projectName&quot;, &quot;qzw&quot;));
-        parameters.add(new BasicNameValuePair(&quot;desc&quot;, &quot;qzw&quot;));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), &quot;UTF-8&quot;);
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-</code></pre>
-<h4>5.6 授予权限</h4>
-<pre><code>* 授予权限包括项目权限,资源权限,数据源权限,UDF函数权限。
-* 管理员可以对普通用户进行非其创建的项目、资源、数据源和UDF函数进行授权。因为项目、资源、数据源和UDF函数授权方式都是一样的,所以以项目授权为例介绍。
-* 注意:对于用户自己创建的项目,该用户拥有所有的权限。则项目列表和已选项目列表中不会显示。
-</code></pre>
-<ul>
-<li>管理员进入安全中心-&gt;用户管理页面,点击需授权用户的“授权”按钮,如下图所示:</li>
-</ul>
-  <p align="center">
-   <img src="/img/auth_user.png" width="80%" />
- </p>
-<ul>
-<li>选择项目,进行项目授权。</li>
-</ul>
-<p align="center">
-   <img src="/img/auth_project.png" width="80%" />
- </p>
-<ul>
-<li>资源、数据源、UDF函数授权同项目授权。</li>
-</ul>
-<h4>5.7 Worker分组</h4>
-<p>每个worker节点都会归属于自己的Worker分组,默认分组为default.</p>
-<p>在任务执行时,可以将任务分配给指定worker分组,最终由该组中的worker节点执行该任务.</p>
-<blockquote>
-<p>新增/更新 worker分组</p>
-</blockquote>
-<ul>
-<li>打开要设置分组的worker节点上的&quot;conf/worker.properties&quot;配置文件. 修改worker.groups参数.</li>
-<li>worker.groups参数后面对应的为该worker节点对应的分组名称,默认为default.</li>
-<li>如果该worker节点对应多个分组,则以逗号隔开.</li>
-</ul>
-<pre><code>示例: 
-worker.groups=default,test
-</code></pre>
-<h3>6. 监控中心</h3>
-<h4>6.1 服务管理</h4>
-<ul>
-<li>服务管理主要是对系统中的各个服务的健康状况和基本信息的监控和显示</li>
-</ul>
-<h4>6.1.1 master监控</h4>
-<ul>
-<li>主要是master的相关信息。</li>
-</ul>
-<p align="center">
-   <img src="/img/master-jk.png" width="80%" />
- </p>
-<h4>6.1.2 worker监控</h4>
-<ul>
-<li>主要是worker的相关信息。</li>
-</ul>
-<p align="center">
-   <img src="/img/worker-jk.png" width="80%" />
- </p>
-<h4>6.1.3 Zookeeper监控</h4>
-<ul>
-<li>主要是zookpeeper中各个worker和master的相关配置信息。</li>
-</ul>
-<p align="center">
-   <img src="/img/zk-jk.png" width="80%" />
- </p>
-<h4>6.1.4 DB监控</h4>
-<ul>
-<li>主要是DB的健康状况</li>
-</ul>
-<p align="center">
-   <img src="/img/mysql-jk.png" width="80%" />
- </p>
-<h4>6.2 统计管理</h4>
-<p align="center">
-   <img src="/img/Statistics.png" width="80%" />
- </p>
-<ul>
-<li>待执行命令数:统计t_ds_command表的数据</li>
-<li>执行失败的命令数:统计t_ds_error_command表的数据</li>
-<li>待运行任务数:统计Zookeeper中task_queue的数据</li>
-<li>待杀死任务数:统计Zookeeper中task_kill的数据</li>
-</ul>
-<h3>7. <span id=TaskParamers>任务节点类型和参数设置</span></h3>
-<h4>7.1 Shell节点</h4>
-<blockquote>
-<p>shell节点,在worker执行的时候,会生成一个临时shell脚本,使用租户同名的linux用户执行这个脚本。</p>
-</blockquote>
-<ul>
-<li>
-<p>点击项目管理-项目名称-工作流定义,点击&quot;创建工作流&quot;按钮,进入DAG编辑页面。</p>
-</li>
-<li>
-<p>工具栏中拖动<img src="/img/shell.png" width="35"/>到画板中,如下图所示:</p>
-<p align="center">
-    <img src="/img/shell_dag.png" width="80%" />
-</p> 
-</li>
-<li>
-<p>节点名称:一个工作流定义中的节点名称是唯一的。</p>
-</li>
-<li>
-<p>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</p>
-</li>
-<li>
-<p>描述信息:描述该节点的功能。</p>
-</li>
-<li>
-<p>任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</p>
-</li>
-<li>
-<p>Worker分组:任务分配给worker组的机器上执行,选择Default,会随机选择一台worker机执行。</p>
-</li>
-<li>
-<p>失败重试次数:任务失败重新提交的次数,支持下拉和手填。</p>
-</li>
-<li>
-<p>失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。</p>
-</li>
-<li>
-<p>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败.</p>
-</li>
-<li>
-<p>脚本:用户开发的SHELL程序。</p>
-</li>
-<li>
-<p>资源:是指脚本中需要调用的资源文件列表,资源中心-文件管理上传或创建的文件。</p>
-</li>
-<li>
-<p>自定义参数:是SHELL局部的用户自定义参数,会替换脚本中以${变量}的内容。</p>
-</li>
-</ul>
-<h4>7.2 子流程节点</h4>
-<ul>
-<li>子流程节点,就是把外部的某个工作流定义当做一个任务节点去执行。</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/subprocess_edit.png" width="80%" />
- </p>
-<ul>
-<li>节点名称:一个工作流定义中的节点名称是唯一的</li>
-<li>运行标志:标识这个节点是否能正常调度</li>
-<li>描述信息:描述该节点的功能</li>
-<li>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败.</li>
-<li>子节点:是选择子流程的工作流定义,右上角进入该子节点可以跳转到所选子流程的工作流定义</li>
-</ul>
-<h4>7.3 依赖(DEPENDENT)节点</h4>
-<ul>
-<li>依赖节点,就是<strong>依赖检查节点</strong>。比如A流程依赖昨天的B流程执行成功,依赖节点会去检查B流程在昨天是否有执行成功的实例。</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/dependent_edit.png" width="80%" />
- </p>
-<blockquote>
-<p>依赖节点提供了逻辑判断功能,比如检查昨天的B流程是否成功,或者C流程是否执行成功。</p>
-</blockquote>
-  <p align="center">
-   <img src="/img/depend-node.png" width="80%" />
- </p>
-<blockquote>
-<p>例如,A流程为周报任务,B、C流程为天任务,A任务需要B、C任务在上周的每一天都执行成功,如图示:</p>
-</blockquote>
- <p align="center">
-   <img src="/img/depend-node2.png" width="80%" />
- </p>
-<blockquote>
-<p>假如,周报A同时还需要自身在上周二执行成功:</p>
-</blockquote>
- <p align="center">
-   <img src="/img/depend-node3.png" width="80%" />
- </p>
-<h4>7.4 存储过程节点</h4>
-<ul>
-<li>根据选择的数据源,执行存储过程。</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/procedure_edit.png" width="80%" />
- </p>
-<ul>
-<li>数据源:存储过程的数据源类型支持MySQL和POSTGRESQL两种,选择对应的数据源</li>
-<li>方法:是存储过程的方法名称</li>
-<li>自定义参数:存储过程的自定义参数类型支持IN、OUT两种,数据类型支持VARCHAR、INTEGER、LONG、FLOAT、DOUBLE、DATE、TIME、TIMESTAMP、BOOLEAN九种数据类型</li>
-</ul>
-<h4>7.5 SQL节点</h4>
-<ul>
-<li>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG">任务节点到画板中</li>
-<li>非查询SQL功能:编辑非查询SQL任务信息,sql类型选择非查询,如下图所示:</li>
-</ul>
-  <p align="center">
-   <img src="/img/sql-node.png" width="80%" />
- </p>
-<ul>
-<li>查询SQL功能:编辑查询SQL任务信息,sql类型选择查询,选择表格或附件形式发送邮件到指定的收件人,如下图所示。</li>
-</ul>
-<p align="center">
-   <img src="/img/sql-node2.png" width="80%" />
- </p>
-<ul>
-<li>数据源:选择对应的数据源</li>
-<li>sql类型:支持查询和非查询两种,查询是select类型的查询,是有结果集返回的,可以指定邮件通知为表格、附件或表格附件三种模板。非查询是没有结果集返回的,是针对update、delete、insert三种类型的操作。</li>
-<li>sql参数:输入参数格式为key1=value1;key2=value2…</li>
-<li>sql语句:SQL语句</li>
-<li>UDF函数:对于HIVE类型的数据源,可以引用资源中心中创建的UDF函数,其他类型的数据源暂不支持UDF函数。</li>
-<li>自定义参数:SQL任务类型,而存储过程是自定义参数顺序的给方法设置值自定义参数类型和数据类型同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。</li>
-<li>前置sql:前置sql在sql语句之前执行。</li>
-<li>后置sql:后置sql在sql语句之后执行。</li>
-</ul>
-<h4>7.6 SPARK节点</h4>
-<ul>
-<li>通过SPARK节点,可以直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark_edit.png" width="80%" />
- </p>
-<ul>
-<li>程序类型:支持JAVA、Scala和Python三种语言</li>
-<li>主函数的class:是Spark程序的入口Main Class的全路径</li>
-<li>主jar包:是Spark的jar包</li>
-<li>部署方式:支持yarn-cluster、yarn-client和local三种模式</li>
-<li>Driver内核数:可以设置Driver内核数及内存数</li>
-<li>Executor数量:可以设置Executor数量、Executor内存数和Executor内核数</li>
-<li>命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。</li>
-<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
-<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是Spark局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
-</ul>
-<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样</p>
-<h4>7.7 MapReduce(MR)节点</h4>
-<ul>
-<li>使用MR节点,可以直接执行MR程序。对于mr节点,worker会使用<code>hadoop jar</code>方式提交任务</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<ol>
-<li>JAVA程序</li>
-</ol>
- <p align="center">
-   <img src="/img/mr_java.png" width="80%" />
- </p>
-<ul>
-<li>主函数的class:是MR程序的入口Main Class的全路径</li>
-<li>程序类型:选择JAVA语言</li>
-<li>主jar包:是MR的jar包</li>
-<li>命令行参数:是设置MR程序的输入参数,支持自定义参数变量的替换</li>
-<li>其他参数:支持 –D、-files、-libjars、-archives格式</li>
-<li>资源: 如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
-</ul>
-<ol start="2">
-<li>Python程序</li>
-</ol>
-<p align="center">
-   <img src="/img/mr_edit.png" width="80%" />
- </p>
-<ul>
-<li>程序类型:选择Python语言</li>
-<li>主jar包:是运行MR的Python jar包</li>
-<li>其他参数:支持 –D、-mapper、-reducer、-input  -output格式,这里可以设置用户自定义参数的输入,比如:</li>
-<li>-mapper &quot;mapper.py 1&quot; -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}</li>
-<li>其中 -mapper 后的 mapper.py 1 是两个参数,第一个参数是 mapper.py,第二个参数是 1</li>
-<li>资源: 如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
-</ul>
-<h4>7.8 Python节点</h4>
-<ul>
-<li>使用python节点,可以直接执行python脚本,对于python节点,worker会使用<code>python **</code>方式提交任务。</li>
-</ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/python_edit.png" width="80%" />
- </p>
-<ul>
-<li>脚本:用户开发的Python程序</li>
-<li>资源:是指脚本中需要调用的资源文件列表</li>
-<li>自定义参数:是Python局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
-<li>注意:若引入资源目录树下的python文件,需添加__init__.py文件</li>
-</ul>
-<h4>7.9 Flink节点</h4>
-<ul>
-<li>拖动工具栏中的<img src="/img/flink.png" width="35"/>任务节点到画板中,如下图所示:</li>
-</ul>
-<p align="center">
-  <img src="/img/flink_edit.png" width="80%" />
-</p>
-<ul>
-<li>程序类型:支持JAVA、Scala和Python三种语言</li>
-<li>主函数的class:是Flink程序的入口Main Class的全路径</li>
-<li>主jar包:是Flink的jar包</li>
-<li>部署方式:支持cluster、local两种模式</li>
-<li>slot数量:可以设置slot数</li>
-<li>taskManager数量:可以设置taskManager数</li>
-<li>jobManager内存数:可以设置jobManager内存数</li>
-<li>taskManager内存数:可以设置taskManager内存数</li>
-<li>命令行参数:是设置Flink程序的输入参数,支持自定义参数变量的替换。</li>
-<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
-<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是Flink局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
-</ul>
-<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Flink则没有主函数的class,其他都是一样</p>
-<h4>7.10 http节点</h4>
-<ul>
-<li>拖动工具栏中的<img src="/img/http.png" width="35"/>任务节点到画板中,如下图所示:</li>
-</ul>
-<p align="center">
-   <img src="/img/http_edit.png" width="80%" />
- </p>
-<ul>
-<li>节点名称:一个工作流定义中的节点名称是唯一的。</li>
-<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
-<li>描述信息:描述该节点的功能。</li>
-<li>任务优先级:worker线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</li>
-<li>Worker分组:任务分配给worker组的机器上执行,选择Default,会随机选择一台worker机执行。</li>
-<li>失败重试次数:任务失败重新提交的次数,支持下拉和手填。</li>
-<li>失败重试间隔:任务失败重新提交任务的时间间隔,支持下拉和手填。</li>
-<li>超时告警:勾选超时告警、超时失败,当任务超过&quot;超时时长&quot;后,会发送告警邮件并且任务执行失败.</li>
-<li>请求地址:http请求URL。</li>
-<li>请求类型:支持GET、POST、HEAD、PUT、DELETE。</li>
-<li>请求参数:支持Parameter、Body、Headers。</li>
-<li>校验条件:支持默认响应码、自定义响应码、内容包含、内容不包含。</li>
-<li>校验内容:当校验条件选择自定义响应码、内容包含、内容不包含时,需填写校验内容。</li>
-<li>自定义参数:是http局部的用户自定义参数,会替换脚本中以${变量}的内容。</li>
-</ul>
-<h4>7.11 DATAX节点</h4>
-<ul>
-<li>拖动工具栏中的<img src="/img/datax.png" width="35"/>任务节点到画板中</li>
-</ul>
-  <p align="center">
-   <img src="/img/datax_edit.png" width="80%" />
-  </p>
-<ul>
-<li>自定义模板:打开自定义模板开关时,可以自定义datax节点的json配置文件内容(适用于控件配置不满足需求时)</li>
-<li>数据源:选择抽取数据的数据源</li>
-<li>sql语句:目标库抽取数据的sql语句,节点执行时自动解析sql查询列名,映射为目标表同步列名,源表和目标表列名不一致时,可以通过列别名(as)转换</li>
-<li>目标库:选择数据同步的目标库</li>
-<li>目标表:数据同步的目标表名</li>
-<li>前置sql:前置sql在sql语句之前执行(目标库执行)。</li>
-<li>后置sql:后置sql在sql语句之后执行(目标库执行)。</li>
-<li>json:datax同步的json配置文件</li>
-<li>自定义参数:SQL任务类型,而存储过程是自定义参数顺序的给方法设置值自定义参数类型和数据类型同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。</li>
-</ul>
-<h4>8. 参数</h4>
-<h4>8.1 系统参数</h4>
-<table>
-    <tr><th>变量</th><th>含义</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>日常调度实例定时的定时时间前一天,格式为 yyyyMMdd,补数据时,该日期 +1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>日常调度实例定时的定时时间,格式为 yyyyMMdd,补数据时,该日期 +1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>日常调度实例定时的定时时间,格式为 yyyyMMddHHmmss,补数据时,该日期 +1</td>
-    </tr>
-</table>
-<h4>8.2 时间自定义参数</h4>
-<ul>
-<li>
-<p>支持代码中自定义变量名,声明方式:${变量名}。可以是引用 &quot;系统参数&quot; 或指定 &quot;常量&quot;。</p>
-</li>
-<li>
-<p>我们定义这种基准变量为 $[...] 格式的,$[yyyyMMddHHmmss] 是可以任意分解组合的,比如:$[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd] 等</p>
-</li>
-<li>
-<p>也可以使用以下格式:</p>
-<pre><code>* 后 N 年:$[add_months(yyyyMMdd,12*N)]
-* 前 N 年:$[add_months(yyyyMMdd,-12*N)]
-* 后 N 月:$[add_months(yyyyMMdd,N)]
-* 前 N 月:$[add_months(yyyyMMdd,-N)]
-* 后 N 周:$[yyyyMMdd+7*N]
-* 前 N 周:$[yyyyMMdd-7*N]
-* 后 N 天:$[yyyyMMdd+N]
-* 前 N 天:$[yyyyMMdd-N]
-* 后 N 小时:$[HHmmss+N/24]
-* 前 N 小时:$[HHmmss-N/24]
-* 后 N 分钟:$[HHmmss+N/24/60]
-* 前 N 分钟:$[HHmmss-N/24/60]
-</code></pre>
-</li>
-</ul>
-<h4>8.3 <span id=UserDefinedParameters>用户自定义参数</span></h4>
-<ul>
-<li>用户自定义参数分为全局参数和局部参数。全局参数是保存工作流定义和工作流实例的时候传递的全局参数,全局参数可以在整个流程中的任何一个任务节点的局部参数引用。
-例如:</li>
-</ul>
-<p align="center">
-   <img src="/img/local_parameter.png" width="80%" />
- </p>
-<ul>
-<li>global_bizdate为全局参数,引用的是系统参数。</li>
-</ul>
-<p align="center">
-   <img src="/img/global_parameter.png" width="80%" />
- </p>
-<ul>
-<li>任务中local_param_bizdate通过${global_bizdate}来引用全局参数,对于脚本可以通过${local_param_bizdate}来引用全局变量global_bizdate的值,或通过JDBC直接将local_param_bizdate的值set进去</li>
-</ul>
-</div></section><footer class="footer-container"><div class="footer-body"><div><h3>联系我们</h3><h4>有问题需要反馈?请通过以下方式联系我们。</h4></div><div class="contact-container"><ul><li><a href="/zh-cn/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>邮件列表</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterblue.png"/><p [...]
-  <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
-  <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
-  <script>window.rootPath = '';</script>
-  <script src="/build/vendor.2ace653.js"></script>
-  <script src="/build/docs.md.0cdf107.js"></script>
-  <script>
-    var _hmt = _hmt || [];
-    (function() {
-      var hm = document.createElement("script");
-      hm.src = "https://hm.baidu.com/hm.js?4e7b4b400dd31fa015018a435c64d06f";
-      var s = document.getElementsByTagName("script")[0];
-      s.parentNode.insertBefore(hm, s);
-    })();
-  </script>
-  <!-- Global site tag (gtag.js) - Google Analytics -->
-  <script async src="https://www.googletagmanager.com/gtag/js?id=G-899J8PYKJZ"></script>
-  <script>
-    window.dataLayer = window.dataLayer || [];
-    function gtag(){dataLayer.push(arguments);}
-    gtag('js', new Date());
-
-    gtag('config', 'G-899J8PYKJZ');
-  </script>
-</body>
-</html>
\ No newline at end of file
diff --git a/zh-cn/docs/2.0.0/user_doc/guide/system-manual.json b/zh-cn/docs/2.0.0/user_doc/guide/system-manual.json
deleted file mode 100644
index 799112f..0000000
--- a/zh-cn/docs/2.0.0/user_doc/guide/system-manual.json
+++ /dev/null
@@ -1,6 +0,0 @@
-{
-  "filename": "system-manual.md",
-  "__html": "<h1>系统使用手册</h1>\n<h2>快速上手</h2>\n<blockquote>\n<p>请参照<a href=\"https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html\">快速上手</a></p>\n</blockquote>\n<h2>操作指南</h2>\n<h3>1. 首页</h3>\n<p>首页包含用户所有项目的任务状态统计、流程状态统计、工作流定义统计。\n<p align=\"center\">\n<img src=\"/img/home.png\" width=\"80%\" />\n</p></p>\n<h3>2. 项目管理</h3>\n<h4>2.1 创建项目</h4>\n<ul>\n<li>\n<p>点击&quot;项目管理&quot;进入项目管理页面,点击“创建项目”按钮,输入项目名称,项目描述,点击“提交”,创建新的项目。</p>\n<p align=\"center\">\n    <img sr [...]
-  "link": "/dist/zh-cn/docs/2.0.0/user_doc/guide/system-manual.html",
-  "meta": {}
-}
\ No newline at end of file
diff --git a/zh-cn/docs/2.0.1/user_doc/expansion-reduction.html b/zh-cn/docs/2.0.1/user_doc/expansion-reduction.html
index ee2aad7..9e15c63 100644
--- a/zh-cn/docs/2.0.1/user_doc/expansion-reduction.html
+++ b/zh-cn/docs/2.0.1/user_doc/expansion-reduction.html
@@ -119,7 +119,7 @@ workers=&quot;现有worker01:default,现有worker02:default,ds3:default,ds4:defa
 </code></pre>
 <ul>
 <li>
-<p>如果扩容的是worker节点,需要设置worker分组.请参考用户手册<a href="https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html">5.7 创建worker分组 </a></p>
+<p>如果扩容的是worker节点,需要设置worker分组.请参考安全中心<a href="./guide/security.md">创建worker分组</a></p>
 </li>
 <li>
 <p>在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限</p>
diff --git a/zh-cn/docs/2.0.1/user_doc/expansion-reduction.json b/zh-cn/docs/2.0.1/user_doc/expansion-reduction.json
index 0cb5d86..80807f8 100644
--- a/zh-cn/docs/2.0.1/user_doc/expansion-reduction.json
+++ b/zh-cn/docs/2.0.1/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
+  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
   "link": "/dist/zh-cn/docs/2.0.1/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html b/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html
index 12e6581..df1ba74 100644
--- a/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html
+++ b/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html
@@ -119,7 +119,7 @@ workers=&quot;现有worker01:default,现有worker02:default,ds3:default,ds4:defa
 </code></pre>
 <ul>
 <li>
-<p>如果扩容的是worker节点,需要设置worker分组.请参考用户手册<a href="https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html">5.7 创建worker分组 </a></p>
+<p>如果扩容的是worker节点,需要设置worker分组.请参考安全中心<a href="./guide/security.md">创建worker分组</a></p>
 </li>
 <li>
 <p>在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限</p>
diff --git a/zh-cn/docs/2.0.2/user_doc/expansion-reduction.json b/zh-cn/docs/2.0.2/user_doc/expansion-reduction.json
index e6870d0..022c50a 100644
--- a/zh-cn/docs/2.0.2/user_doc/expansion-reduction.json
+++ b/zh-cn/docs/2.0.2/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
+  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
   "link": "/dist/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/dev/user_doc/expansion-reduction.html b/zh-cn/docs/dev/user_doc/expansion-reduction.html
index 8cdf3af..5d4bc13 100644
--- a/zh-cn/docs/dev/user_doc/expansion-reduction.html
+++ b/zh-cn/docs/dev/user_doc/expansion-reduction.html
@@ -119,7 +119,7 @@ workers=&quot;现有worker01:default,现有worker02:default,ds3:default,ds4:defa
 </code></pre>
 <ul>
 <li>
-<p>如果扩容的是worker节点,需要设置worker分组.请参考用户手册<a href="/zh-cn/docs/1.3.8/user_doc/system-manual.html">5.7 创建worker分组 </a></p>
+<p>如果扩容的是worker节点,需要设置worker分组.请参考安全中心<a href="./guide/security.md">创建worker分组</a></p>
 </li>
 <li>
 <p>在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限</p>
diff --git a/zh-cn/docs/dev/user_doc/expansion-reduction.json b/zh-cn/docs/dev/user_doc/expansion-reduction.json
index 1ecf3b0..fe9e3fe 100644
--- a/zh-cn/docs/dev/user_doc/expansion-reduction.json
+++ b/zh-cn/docs/dev/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
+  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
   "link": "/dist/zh-cn/docs/dev/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/latest/user_doc/expansion-reduction.html b/zh-cn/docs/latest/user_doc/expansion-reduction.html
index 12e6581..df1ba74 100644
--- a/zh-cn/docs/latest/user_doc/expansion-reduction.html
+++ b/zh-cn/docs/latest/user_doc/expansion-reduction.html
@@ -119,7 +119,7 @@ workers=&quot;现有worker01:default,现有worker02:default,ds3:default,ds4:defa
 </code></pre>
 <ul>
 <li>
-<p>如果扩容的是worker节点,需要设置worker分组.请参考用户手册<a href="https://dolphinscheduler.apache.org/zh-cn/docs/2.0.0/user_doc/guide/quick-start.html">5.7 创建worker分组 </a></p>
+<p>如果扩容的是worker节点,需要设置worker分组.请参考安全中心<a href="./guide/security.md">创建worker分组</a></p>
 </li>
 <li>
 <p>在所有的新增节点上,修改目录权限,使得部署用户对dolphinscheduler目录有操作权限</p>
diff --git a/zh-cn/docs/latest/user_doc/expansion-reduction.json b/zh-cn/docs/latest/user_doc/expansion-reduction.json
index e6870d0..022c50a 100644
--- a/zh-cn/docs/latest/user_doc/expansion-reduction.json
+++ b/zh-cn/docs/latest/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
+  "__html": "<h1>DolphinScheduler扩容/缩容 文档</h1>\n<h2>1. DolphinScheduler扩容文档</h2>\n<p>本文扩容是针对现有的DolphinScheduler集群添加新的master或者worker节点的操作说明.</p>\n<pre><code> 注意: 一台物理机上不能存在多个master服务进程或者worker服务进程.\n       如果扩容master或者worker节点所在的物理机已经安装了调度的服务,请直接跳到 [1.4.修改配置]. 编辑 ** 所有 ** 节点上的配置文件 `conf/config/install_config.conf`. 新增masters或者workers参数,重启调度集群即可.\n</code></pre>\n<h3>1.1. 基础软件安装(必装项请自行安装)</h3>\n<ul>\n<li>[必装] <a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JD [...]
   "link": "/dist/zh-cn/docs/2.0.2/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file