Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2020/10/14 14:19:27 UTC

[incubator-dolphinscheduler-website] branch asf-site updated: Automated deployment: Wed Oct 14 14:19:16 UTC 2020 0a888d6a44d34a3c8b9348b1a6536937b0767b23

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8dcef02  Automated deployment: Wed Oct 14 14:19:16 UTC 2020 0a888d6a44d34a3c8b9348b1a6536937b0767b23
8dcef02 is described below

commit 8dcef023f4d8020d27f52228a2d6b674fa6af1c3
Author: dailidong <da...@users.noreply.github.com>
AuthorDate: Wed Oct 14 14:19:17 2020 +0000

    Automated deployment: Wed Oct 14 14:19:16 UTC 2020 0a888d6a44d34a3c8b9348b1a6536937b0767b23
---
 en-us/docs/1.3.2/user_doc/cluster-deployment.html |    2 +-
 en-us/docs/1.3.2/user_doc/cluster-deployment.json |    2 +-
 en-us/docs/1.3.2/user_doc/system-manual.html      | 1026 +++++++++++++++++++++
 en-us/docs/1.3.2/user_doc/system-manual.json      |    6 +
 img/complement_en1.png                            |  Bin 0 -> 197427 bytes
 img/create_project_en1.png                        |  Bin 0 -> 231406 bytes
 6 files changed, 1034 insertions(+), 2 deletions(-)

diff --git a/en-us/docs/1.3.2/user_doc/cluster-deployment.html b/en-us/docs/1.3.2/user_doc/cluster-deployment.html
index 17d4a17..a461c36 100644
--- a/en-us/docs/1.3.2/user_doc/cluster-deployment.html
+++ b/en-us/docs/1.3.2/user_doc/cluster-deployment.html
@@ -252,7 +252,7 @@ sslTrust="smtp.qq.com"
 #</span><span class="bash"> resource storage <span class="hljs-built_in">type</span>:HDFS,S3,NONE</span>
 resourceStorageType="HDFS"
 <span class="hljs-meta">
-#</span><span class="bash"> <span class="hljs-keyword">if</span> resourceStorageType is HDFS,defaultFS write namenode address,HA you need to put core-site.xml and hdfs-site.xml <span class="hljs-keyword">in</span> the conf directory.</span>
+#</span><span class="bash"> If resourceStorageType = HDFS, and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or host name.</span>
 <span class="hljs-meta">#</span><span class="bash"> <span class="hljs-keyword">if</span> S3,write S3 address,HA,<span class="hljs-keyword">for</span> example :s3a://dolphinscheduler,</span>
 <span class="hljs-meta">#</span><span class="bash"> Note,s3 be sure to create the root directory /dolphinscheduler</span>
 defaultFS="hdfs://mycluster:8020"
diff --git a/en-us/docs/1.3.2/user_doc/cluster-deployment.json b/en-us/docs/1.3.2/user_doc/cluster-deployment.json
index 5f9951d..ef1fc42 100644
--- a/en-us/docs/1.3.2/user_doc/cluster-deployment.json
+++ b/en-us/docs/1.3.2/user_doc/cluster-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "cluster-deployment.md",
-  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7)  :  Choose One</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) :  Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) :Required</li>\n<li>Hadoop (2.6+) or MinIO :Optional. If you need to upload a [...]
+  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7)  :  Choose One</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) :  Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) :Required</li>\n<li>Hadoop (2.6+) or MinIO :Optional. If you need to upload a [...]
   "link": "/en-us/docs/1.3.2/user_doc/cluster-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.2/user_doc/system-manual.html b/en-us/docs/1.3.2/user_doc/system-manual.html
new file mode 100644
index 0000000..8274929
--- /dev/null
+++ b/en-us/docs/1.3.2/user_doc/system-manual.html
@@ -0,0 +1,1026 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+	<meta charset="UTF-8">
+	<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
+	<meta name="keywords" content="system-manual" />
+	<meta name="description" content="system-manual" />
+	<!-- page tab title -->
+	<title>system-manual</title>
+	<link rel="shortcut icon" href="/img/docsite.ico"/>
+	<link rel="stylesheet" href="/build/documentation.css" />
+</head>
+<body>
+	<div id="root"><div class="documentation-page" data-reactroot=""><header class="header-container header-container-normal"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_colorful.svg"/></a><div class="search search-normal"><span class="icon-search"></span></div><span class="language-switch language-switch-normal">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_gray.png"/><div><ul class="ant-menu blackClass ant [...]
+<h2>Get started quickly</h2>
+<blockquote>
+<p>Please refer to <a href="quick-start.html">Quick Start</a></p>
+</blockquote>
+<h2>Operation guide</h2>
+<h3>1. Home</h3>
+<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>
+<p align="center">
+<img src="/img/home_en.png" width="80%" />
+</p>
+<h3>2. Project management</h3>
+<h4>2.1 Create project</h4>
+<ul>
+<li>
+<p>Click &quot;Project Management&quot; to enter the project management page, click the &quot;Create Project&quot; button, enter the project name, project description, and click &quot;Submit&quot; to create a new project.</p>
+<p align="center">
+    <img src="/img/create_project_en1.png" width="80%" />
+</p>
+</li>
+</ul>
+<h4>2.2 Project home</h4>
+<ul>
+<li>
+<p>Click the project name link on the project management page to enter the project home page, as shown in the figure below, the project home page contains the task status statistics, process status statistics, and workflow definition statistics of the project.</p>
+<p align="center">
+   <img src="/img/project_home_en.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Task status statistics: within the specified time range, counts the number of task instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault-tolerant, killed, and waiting for thread</p>
+</li>
+<li>
+<p>Process status statistics: within the specified time range, counts the number of workflow instances in each state: submitted successfully, running, ready to pause, paused, ready to stop, stopped, failed, succeeded, fault-tolerant, killed, and waiting for thread</p>
+</li>
+<li>
+<p>Workflow definition statistics: counts the workflow definitions created by this user and those granted to this user by the administrator</p>
+</li>
+</ul>
+<h4>2.3 Workflow definition</h4>
+<h4><span id=creatDag>2.3.1 Create workflow definition</span></h4>
+<ul>
+<li>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, and click the &quot;Create Workflow&quot; button to enter the <strong>workflow DAG edit</strong> page, as shown in the following figure:<p align="center">
+    <img src="/img/dag5.png" width="80%" />
+</p>
+</li>
+<li>Drag the <img src="/img/shell.png" width="35"/> icon from the toolbar onto the drawing board to add a Shell task, as shown in the figure below:<p align="center">
+    <img src="/img/shell-en.png" width="80%" />
+</p>
+</li>
+<li><strong>Add parameter settings for this shell task:</strong></li>
+</ul>
+<ol>
+<li>Fill in the &quot;Node Name&quot;, &quot;Description&quot;, and &quot;Script&quot; fields;</li>
+<li>Check &quot;Normal&quot; for &quot;Run Flag&quot;. If &quot;Prohibit Execution&quot; is checked, the task will not be executed when the workflow runs;</li>
+<li>Select &quot;Task Priority&quot;: When the number of worker threads is insufficient, high-level tasks will be executed first in the execution queue, and tasks with the same priority will be executed in the order of first in, first out;</li>
+<li>Timeout alarm (optional): check timeout alarm and timeout failure, and fill in the &quot;timeout period&quot;. When the task execution time exceeds the <strong>timeout period</strong>, an alert email is sent and the task fails with a timeout;</li>
+<li>Resources (optional). Resource files are files created or uploaded on the Resource Center -&gt; File Management page. For example, if the file name is <code>test.sh</code>, the command to call the resource in the script is <code>sh test.sh</code> (see the sketch after this list);</li>
+<li>Custom parameters (optional), refer to <a href="#UserDefinedParameters">Custom Parameters</a>;</li>
+<li>Click the &quot;Confirm Add&quot; button to save the task settings.</li>
+</ol>
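+<p>A minimal sketch of what such a shell task script might look like, assuming a resource file named <code>test.sh</code> has been uploaded and a custom parameter named <code>dt</code> has been defined (both names are only examples):</p>
+<pre><code># the resource file uploaded in Resource Center -&gt; File Management is available in the task's working directory
+sh test.sh
+# ${dt} is replaced with the value of the custom parameter named dt before execution
+echo "business date is ${dt}"
+</code></pre>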
+<ul>
+<li>
+<p><strong>Set the task execution order:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect tasks; as shown in the figure below, task 2 and task 3 are executed in parallel: when task 1 finishes, tasks 2 and 3 start at the same time.</p>
+<p align="center">
+   <img src="/img/dag6.png" width="80%" />
+</p>
+</li>
+<li>
+<p><strong>Delete dependencies:</strong> Click the &quot;arrow&quot; icon in the upper right corner <img src="/img/arrow.png" width="35"/>, select the connection line, and click the &quot;Delete&quot; icon in the upper right corner <img src="/img/delete.png" width="35"/> to delete the dependency between tasks.</p>
+<p align="center">
+   <img src="/img/dag7.png" width="80%" />
+</p>
+</li>
+<li>
+<p><strong>Save workflow definition:</strong> Click the &quot;Save&quot; button, and the &quot;Set DAG chart name&quot; dialog appears, as shown in the figure below. Enter the workflow definition name and description, set global parameters (optional, refer to <a href="#UserDefinedParameters">Custom parameters</a>), and click the &quot;Add&quot; button to create the workflow definition.</p>
+<p align="center">
+   <img src="/img/dag8.png" width="80%" />
+ </p>
+</li>
+</ul>
+<blockquote>
+<p>For other types of tasks, please refer to <a href="#TaskParamers">Task Node Type and Parameter Settings</a>.</p>
+</blockquote>
+<h4>2.3.2 Workflow definition operation function</h4>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown below:</p>
+<p align="center">
+<img src="/img/work_list_en.png" width="80%" />
+</p>
+The operation functions of the workflow definition list are as follows:
+<ul>
+<li><strong>Edit:</strong> Only &quot;offline&quot; workflow definitions can be edited. Workflow DAG editing is the same as <a href="#creatDag">Create Workflow Definition</a>.</li>
+<li><strong>Online:</strong> When the workflow status is &quot;Offline&quot;, this puts the workflow online. Only workflows in the &quot;Online&quot; state can run; they cannot be edited.</li>
+<li><strong>Offline:</strong> When the workflow status is &quot;Online&quot;, this takes the workflow offline. Only workflows in the &quot;Offline&quot; state can be edited; they cannot run.</li>
+<li><strong>Run:</strong> Only workflows in the online state can run. See <a href="#runWorkflow">2.3.3 Run Workflow</a> for the operation steps.</li>
+<li><strong>Timing:</strong> A schedule can only be set on an online workflow; the system then automatically schedules the workflow to run regularly. The status right after creating a schedule is &quot;offline&quot;, and the schedule must be put online on the timing management page to take effect. See <a href="#creatTiming">2.3.4 Workflow Timing</a> for the operation steps.</li>
+<li><strong>Timing Management:</strong> On the timing management page, schedules can be edited, put online/offline, and deleted.</li>
+<li><strong>Delete:</strong> Delete the workflow definition.</li>
+<li><strong>Download:</strong> Download the workflow definition to the local machine.</li>
+<li><strong>Tree Diagram:</strong> Display the task node type and task status in a tree structure, as shown in the figure below:<p align="center">
+    <img src="/img/tree_en.png" width="80%" />
+</p>
+</li>
+</ul>
+<h4><span id=runWorkflow>2.3.3 Run the workflow</span></h4>
+<ul>
+<li>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, as shown in the figure below, and click the &quot;Go Online&quot; button <img src="/img/online.png" width="35"/> to put the workflow online.</p>
+<p align="center">
+    <img src="/img/work_list_en.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Click the &quot;Run&quot; button to open the startup parameter dialog, as shown in the figure below. Set the startup parameters and click the &quot;Run&quot; button in the dialog; the workflow starts running and a workflow instance appears on the workflow instance page.</p>
+   <p align="center">
+     <img src="/img/run_work_en.png" width="80%" />
+   </p>  
+<span id=runParamers>Description of workflow operating parameters:</span> 
+<pre><code>* Failure strategy: determines how other parallel task nodes are handled when a task node fails. &quot;Continue&quot; means that after a task fails, the other task nodes keep executing normally; &quot;End&quot; means that all running tasks are terminated and the entire process is ended.
+* Notification strategy: when the process ends, an execution-information email is sent according to the process status; the options are: do not send for any status, send on success, send on failure, send on success or failure.
+* Process priority: the priority of the process run, divided into five levels: highest (HIGHEST), high (HIGH), medium (MEDIUM), low (LOW), and lowest (LOWEST). When the number of master threads is insufficient, higher-priority processes are executed first in the execution queue, and processes with the same priority are executed in first-in first-out order.
+* Worker group: the process can only be executed on the machines in the specified worker group. The default is Default, which means it can run on any worker.
+* Notification group: when the selected notification strategy applies, a timeout alarm fires, or fault tolerance occurs, process information or an email is sent to all members of the notification group.
+* Recipient: when the selected notification strategy applies, a timeout alarm fires, or fault tolerance occurs, process information or an alarm email is sent to the recipient list.
+* Cc: when the selected notification strategy applies, a timeout alarm fires, or fault tolerance occurs, the process information or warning email is copied to the CC list.
+* Complement: two modes, serial complement and parallel complement. Serial complement: within the specified time range, the complement runs sequentially from the start date to the end date, generating a single process instance; parallel complement: within the specified time range, multiple days are complemented at the same time, generating N process instances.
+</code></pre>
+<ul>
+<li>For example, you need to backfill the data from May 1 to May 10.</li>
+</ul>
+  <p align="center">
+      <img src="/img/complement_en1.png" width="80%" />
+  </p>
+<blockquote>
+<p>Serial mode: The complement is executed sequentially from May 1 to May 10, and a process instance is generated on the process instance page;</p>
+</blockquote>
+<blockquote>
+<p>Parallel mode: The tasks from May 1 to May 10 are executed simultaneously, and 10 process instances are generated on the process instance page.</p>
+</blockquote>
+</li>
+</ul>
+<h4><span id=creatTiming>2.3.4 Workflow timing</span></h4>
+<ul>
+<li>Create timing: Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, put the workflow online, and click the &quot;timing&quot; button <img src="/img/timing.png" width="35"/>; the timing parameter setting dialog pops up, as shown in the figure below:<p align="center">
+    <img src="/img/time_schedule_en.png" width="80%" />
+</p>
+</li>
+<li>Choose the start and end time. Within the start and end time range, the workflow runs on its regular schedule; outside that range, no more scheduled workflow instances are generated.</li>
+<li>Add a schedule that runs once every day at 5 AM (a sketch of the corresponding crontab expression follows this list), as shown in the following figure:<p align="center">
+    <img src="/img/timer-en.png" width="80%" />
+</p>
+</li>
+<li>Failure strategy, notification strategy, process priority, worker group, notification group, recipient, and CC are the same as <a href="#runParamers">workflow running parameters</a>.</li>
+<li>Click the &quot;Create&quot; button to create the schedule. At this point the schedule status is &quot;<strong>Offline</strong>&quot;, and it must be put <strong>Online</strong> to take effect.</li>
+<li>Timing online: Click the &quot;timing management&quot; button <img src="/img/timeManagement.png" width="35"/> to enter the timing management page, then click the &quot;online&quot; button; the timing status changes to &quot;online&quot;, as shown in the figure below, and the workflow now runs on its schedule.<p align="center">
+    <img src="/img/time-manage-list-en.png" width="80%" />
+</p>
+</li>
+</ul>
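+<p>The schedule itself is expressed as a crontab value in the timing dialog. A sketch of the &quot;every day at 5 AM&quot; schedule above, assuming a Quartz-style expression (seconds, minutes, hours, day of month, month, day of week, year):</p>
+<pre><code>0 0 5 * * ? *
+</code></pre>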
+<h4>2.3.5 Import workflow</h4>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Definition to enter the workflow definition page, click the &quot;Import Workflow&quot; button to import the local workflow file, the workflow definition list displays the imported workflow, and the status is offline.</p>
+<h4>2.4 Workflow instance</h4>
+<h4>2.4.1 View workflow instance</h4>
+<ul>
+<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the Workflow Instance page, as shown in the figure below:   <p align="center">
+      <img src="/img/instance-list-en.png" width="80%" />
+   </p>
+</li>
+<li>Click the workflow name to enter the DAG view page to view the task execution status, as shown in the figure below.<p align="center">
+  <img src="/img/instance-runs-en.png" width="80%" />
+</p>
+</li>
+</ul>
+<h4>2.4.2 View task log</h4>
+<ul>
+<li>Enter the workflow instance page, click the workflow name, enter the DAG view page, double-click the task node, as shown in the following figure: <p align="center">
+   <img src="/img/instanceViewLog-en.png" width="80%" />
+ </p>
+</li>
+<li>Click &quot;View Log&quot;, and a log pop-up box appears, as shown in the figure below. The task log can also be viewed on the task instance page; refer to <a href="#taskLog">Task View Log</a>. <p align="center">
+   <img src="/img/task-log-en.png" width="80%" />
+ </p>
+</li>
+</ul>
+<h4>2.4.3 View task history</h4>
+<ul>
+<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;</li>
+<li>Double-click the task node, as shown in the figure below, click &quot;View History&quot; to jump to the task instance page, and display a list of task instances running by the workflow instance <p align="center">
+   <img src="/img/task_history_en.png" width="80%" />
+ </p>
+</li>
+</ul>
+<h4>2.4.4 View operating parameters</h4>
+<ul>
+<li>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the workflow instance page, and click the workflow name to enter the workflow DAG page;</li>
+<li>Click the icon in the upper left corner <img src="/img/run_params_button.png" width="35"/> to view the startup parameters of the workflow instance; click the icon <img src="/img/global_param.png" width="35"/> to view the global and local parameters of the workflow instance, as shown in the following figure: <p align="center">
+   <img src="/img/run_params_en.png" width="80%" />
+ </p>
+</li>
+</ul>
+<h4>2.4.4 Workflow instance operation function</h4>
+<p>Click Project Management -&gt; Workflow -&gt; Workflow Instance to enter the Workflow Instance page, as shown in the figure below:</p>
+  <p align="center">
+    <img src="/img/instance-list-en.png" width="80%" />
+  </p>
+<ul>
+<li><strong>Edit:</strong> Only terminated processes can be edited. Click the &quot;Edit&quot; button or the name of the workflow instance to enter the DAG edit page. After edit, click the &quot;Save&quot; button to pop up the Save DAG pop-up box, as shown in the figure below. In the pop-up box, check &quot;Whether to update to workflow definition&quot; and save After that, the workflow definition will be updated; if it is not checked, the workflow definition will not be updated.   <p al [...]
+     <img src="/img/editDag-en.png" width="80%" />
+   </p>
+</li>
+<li><strong>Rerun:</strong> Re-execute the terminated process.</li>
+<li><strong>Recovery failed:</strong> For a failed process, you can perform a recovery operation, starting from the failed node.</li>
+<li><strong>Stop:</strong> To <strong>stop</strong> a running process, the background first sends <code>kill</code> to the worker process, and then executes <code>kill -9</code>.</li>
+<li><strong>Pause:</strong> Perform a <strong>pause</strong> operation on a running process; the status changes to <strong>waiting for execution</strong>, the tasks already being executed are allowed to finish, and the next task to be executed is paused.</li>
+<li><strong>Resume pause:</strong> Resume the paused process, starting directly from the <strong>paused node</strong>.</li>
+<li><strong>Delete:</strong> Delete the workflow instance and the task instances under it.</li>
+<li><strong>Gantt chart:</strong> The vertical axis of the Gantt chart is the topological sorting of task instances under a certain workflow instance, and the horizontal axis is the running time of the task instances, as shown in the figure:   <p align="center">
+       <img src="/img/gantt-en.png" width="80%" />
+   </p>
+</li>
+</ul>
+<h4>2.5 Task instance</h4>
+<ul>
+<li>
+<p>Click Project Management -&gt; Workflow -&gt; Task Instance to enter the task instance page, as shown in the figure below. Click the name of a workflow instance to jump to its DAG chart and view the task status.</p>
+   <p align="center">
+      <img src="/img/task-list-en.png" width="80%" />
+   </p>
+</li>
+<li>
+<p><span id=taskLog>View log:</span> Click the &quot;view log&quot; button in the operation column to view the log of the task execution.</p>
+   <p align="center">
+      <img src="/img/task-log2-en.png" width="80%" />
+   </p>
+</li>
+</ul>
+<h3>3. Resource Center</h3>
+<h4>3.1 HDFS resource configuration</h4>
+<ul>
+<li>To upload resource files and UDF functions, all uploaded files and resources are stored on HDFS, so the following configuration items are required:</li>
+</ul>
+<pre><code>
+conf/common/common.properties
+    # Users who have permission to create directories under the HDFS root path
+    hdfs.root.user=hdfs
+    # data base dir: resource files are stored under this HDFS path. Configure it yourself, and make sure the directory exists on HDFS and has read/write permissions. &quot;/dolphinscheduler&quot; is recommended
+    data.store2hdfs.basepath=/dolphinscheduler
+    # resource upload startup type : HDFS,S3,NONE
+    res.upload.startup.type=HDFS
+    # whether kerberos starts
+    hadoop.security.authentication.startup.state=false
+    # java.security.krb5.conf path
+    java.security.krb5.conf.path=/opt/krb5.conf
+    # loginUserFromKeytab user
+    login.user.keytab.username=hdfs-mycluster@ESZ.COM
+    # loginUserFromKeytab path
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+
+conf/common/hadoop.properties
+    # HA or single NameNode. If the NameNode is HA, copy core-site.xml and hdfs-site.xml
+    # to the conf directory. S3 is also supported, for example: s3a://dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020
+    # resourcemanager HA: this needs the resourcemanager IPs; leave it empty for a single resourcemanager
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+    # If it is a single resourcemanager, only configure one host name here. If it is resourcemanager HA, the default configuration is fine
+    yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
+
+</code></pre>
+<ul>
+<li>Only one of <code>yarn.resourcemanager.ha.rm.ids</code> and <code>yarn.application.status.address</code> needs to be configured; leave the other one empty.</li>
+<li>You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project and restart the api-server service, as sketched below.</li>
+</ul>
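+<p>A minimal sketch of that step, assuming the Hadoop client configuration lives under /etc/hadoop/conf and DolphinScheduler is installed under /opt/soft/dolphinscheduler (adjust both paths to your environment):</p>
+<pre><code># copy the Hadoop client configuration into the dolphinscheduler conf directory
+cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /opt/soft/dolphinscheduler/conf/
+# restart the api-server so the new configuration is picked up
+sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh stop api-server
+sh /opt/soft/dolphinscheduler/bin/dolphinscheduler-daemon.sh start api-server
+</code></pre>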
+<h4>3.2 File management</h4>
+<blockquote>
+<p>This is where all kinds of resource files are managed: you can create basic files such as txt/log/sh/conf/py/java, upload jar packages and other file types, and edit, rename, download, or delete them.</p>
+</blockquote>
+  <p align="center">
+   <img src="/img/file-manage-en.png" width="80%" />
+ </p>
+<ul>
+<li>Create a file
+<blockquote>
+<p>The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties</p>
+</blockquote>
+</li>
+</ul>
+<p align="center">
+   <img src="/img/file_create_en.png" width="80%" />
+ </p>
+<ul>
+<li>upload files</li>
+</ul>
+<blockquote>
+<p>Upload file: Click the &quot;Upload File&quot; button to upload, or drag the file to the upload area; the file name is automatically filled in with the name of the uploaded file</p>
+</blockquote>
+<p align="center">
+   <img src="/img/file-upload-en.png" width="80%" />
+ </p>
+<ul>
+<li>File View</li>
+</ul>
+<blockquote>
+<p>For the file types that can be viewed, click the file name to view the file details</p>
+</blockquote>
+<p align="center">
+   <img src="/img/file_detail_en.png" width="80%" />
+ </p>
+<ul>
+<li>download file</li>
+</ul>
+<blockquote>
+<p>Click the &quot;Download&quot; button in the file list to download the file or click the &quot;Download&quot; button in the upper right corner of the file details to download the file</p>
+</blockquote>
+<ul>
+<li>File rename</li>
+</ul>
+<p align="center">
+   <img src="/img/file_rename_en.png" width="80%" />
+ </p>
+<ul>
+<li>delete
+<blockquote>
+<p>File list -&gt; Click the &quot;Delete&quot; button to delete the specified file</p>
+</blockquote>
+</li>
+</ul>
+<h4>3.3 UDF management</h4>
+<h4>3.3.1 Resource management</h4>
+<blockquote>
+<p>Resource management is similar to file management. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts, and configuration files.
+Operation functions: rename, download, delete.</p>
+</blockquote>
+<ul>
+<li>Upload udf resources
+<blockquote>
+<p>Same as uploading files.</p>
+</blockquote>
+</li>
+</ul>
+<h4>3.3.2 Function management</h4>
+<ul>
+<li>Create UDF function
+<blockquote>
+<p>Click &quot;Create UDF Function&quot;, enter the udf function parameters, select the udf resource, and click &quot;Submit&quot; to create the udf function.</p>
+</blockquote>
+</li>
+</ul>
+<blockquote>
+<p>Currently only supports temporary UDF functions of HIVE</p>
+</blockquote>
+<ul>
+<li>UDF function name: the name used when the UDF function is called</li>
+<li>Package name Class name: enter the full class path of the UDF function</li>
+<li>UDF resource: Set the resource file corresponding to the created UDF</li>
+</ul>
+<p align="center">
+   <img src="/img/udf_edit_en.png" width="80%" />
+ </p>
+<h3>4. Create data source</h3>
+<blockquote>
+<p>Data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, ORACLE, SQLSERVER and other data sources</p>
+</blockquote>
+<h4>4.1 Create/Edit MySQL data source</h4>
+<ul>
+<li>
+<p>Click &quot;Data Source Center -&gt; Create Data Source&quot; to create different types of data sources according to requirements.</p>
+</li>
+<li>
+<p>Data source: select MYSQL</p>
+</li>
+<li>
+<p>Data source name: enter the name of the data source</p>
+</li>
+<li>
+<p>Description: Enter a description of the data source</p>
+</li>
+<li>
+<p>IP hostname: enter the IP to connect to MySQL</p>
+</li>
+<li>
+<p>Port: Enter the port to connect to MySQL</p>
+</li>
+<li>
+<p>Username: Set the username for connecting to MySQL</p>
+</li>
+<li>
+<p>Password: Set the password for connecting to MySQL</p>
+</li>
+<li>
+<p>Database name: Enter the name of the database connected to MySQL</p>
+</li>
+<li>
+<p>Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON</p>
+</li>
+</ul>
+<p align="center">
+   <img src="/img/mysql-en.png" width="80%" />
+ </p>
+<blockquote>
+<p>Click &quot;Test Connection&quot; to test whether the data source can be successfully connected.</p>
+</blockquote>
+<h4>4.2 Create/Edit POSTGRESQL data source</h4>
+<ul>
+<li>Data source: select POSTGRESQL</li>
+<li>Data source name: enter the name of the data source</li>
+<li>Description: Enter a description of the data source</li>
+<li>IP/Host Name: Enter the IP to connect to POSTGRESQL</li>
+<li>Port: Enter the port to connect to POSTGRESQL</li>
+<li>Username: Set the username for connecting to POSTGRESQL</li>
+<li>Password: Set the password for connecting to POSTGRESQL</li>
+<li>Database name: Enter the name of the database connected to POSTGRESQL</li>
+<li>Jdbc connection parameters: parameter settings for the POSTGRESQL connection, filled in as JSON</li>
+</ul>
+<p align="center">
+   <img src="/img/postgresql-en.png" width="80%" />
+ </p>
+<h4>4.3 Create/Edit HIVE data source</h4>
+<p>1. Use HiveServer2 to connect</p>
+ <p align="center">
+    <img src="/img/hive-en.png" width="80%" />
+  </p>
+<ul>
+<li>
+<p>Data source: select HIVE</p>
+</li>
+<li>
+<p>Data source name: enter the name of the data source</p>
+</li>
+<li>
+<p>Description: Enter a description of the data source</p>
+</li>
+<li>
+<p>IP/Host Name: Enter the IP connected to HIVE</p>
+</li>
+<li>
+<p>Port: Enter the port connected to HIVE</p>
+</li>
+<li>
+<p>Username: Set the username for connecting to HIVE</p>
+</li>
+<li>
+<p>Password: Set the password for connecting to HIVE</p>
+</li>
+<li>
+<p>Database name: Enter the name of the database connected to HIVE</p>
+</li>
+<li>
+<p>Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON</p>
+<p>2. Use HiveServer2 HA (ZooKeeper) to connect</p>
+</li>
+</ul>
+ <p align="center">
+    <img src="/img/hive1-en.png" width="80%" />
+  </p>
+<p>Note: If you enable <strong>kerberos</strong>, you need to fill in <strong>Principal</strong></p>
+<p align="center">
+    <img src="/img/hive-en.png" width="80%" />
+  </p>
+<h4>4.4 Create/Edit Spark data source</h4>
+<p align="center">
+   <img src="/img/spark-en.png" width="80%" />
+ </p>
+<ul>
+<li>Data source: select Spark</li>
+<li>Data source name: enter the name of the data source</li>
+<li>Description: Enter a description of the data source</li>
+<li>IP/Hostname: Enter the IP connected to Spark</li>
+<li>Port: Enter the port connected to Spark</li>
+<li>Username: Set the username for connecting to Spark</li>
+<li>Password: Set the password for connecting to Spark</li>
+<li>Database name: Enter the name of the database connected to Spark</li>
+<li>Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON</li>
+</ul>
+<h3>5. Security Center (Permission System)</h3>
+<pre><code> * Only the administrator account in the security center has the authority to operate it. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management, and token management, and in the user management module it can also authorize resources, data sources, projects, etc.
+ * Administrator login, default user name and password: admin/dolphinscheduler123
+</code></pre>
+<h4>5.1 Create queue</h4>
+<ul>
+<li>Queue is used when the &quot;queue&quot; parameter is needed to execute programs such as spark and mapreduce.</li>
+<li>The administrator enters the Security Center-&gt;Queue Management page and clicks the &quot;Create Queue&quot; button to create a queue.</li>
+</ul>
+<p align="center">
+   <img src="/img/create-queue-en.png" width="80%" />
+ </p>
+<h4>5.2 Add tenant</h4>
+<ul>
+<li>The tenant corresponds to the Linux user, which is used by the worker to submit the job. If Linux does not have this user, the worker will create this user when executing the script.</li>
+<li>Tenant Code: <strong>The tenant code is the user on Linux; it must be unique and cannot be repeated</strong></li>
+<li>The administrator enters the Security Center-&gt;Tenant Management page and clicks the &quot;Create Tenant&quot; button to create a tenant.</li>
+</ul>
+ <p align="center">
+    <img src="/img/addtenant-en.png" width="80%" />
+  </p>
+<h4>5.3 Create normal user</h4>
+<ul>
+<li>
+<p>Users are divided into <strong>administrator users</strong> and <strong>normal users</strong></p>
+<ul>
+<li>The administrator has authorization and user management authority, but does not have the authority to create projects or workflow definitions.</li>
+<li>Ordinary users can create projects and create, edit, and execute workflow definitions.</li>
+<li>Note: If a user switches tenants, all resources under the user's current tenant will be copied to the new tenant.</li>
+</ul>
+</li>
+<li>
+<p>The administrator enters the Security Center -&gt; User Management page and clicks the &quot;Create User&quot; button to create a user.</p>
+</li>
+</ul>
+<p align="center">
+   <img src="/img/user-en.png" width="80%" />
+ </p>
+<blockquote>
+<p><strong>Edit user information</strong></p>
+</blockquote>
+<ul>
+<li>The administrator enters the Security Center-&gt;User Management page and clicks the &quot;Edit&quot; button to edit user information.</li>
+<li>After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the &quot;Edit&quot; button to edit the user information.</li>
+</ul>
+<blockquote>
+<p><strong>Modify user password</strong></p>
+</blockquote>
+<ul>
+<li>The administrator enters the Security Center-&gt;User Management page and clicks the &quot;Edit&quot; button. When editing user information, enter the new password to modify the user password.</li>
+<li>After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the &quot;Edit&quot; button, then the password modification is successful.</li>
+</ul>
+<h4>5.4 Create alarm group</h4>
+<ul>
+<li>The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group in the form of email.</li>
+</ul>
+<ul>
+<li>
+<p>The administrator enters the Security Center -&gt; Alarm Group Management page and clicks the &quot;Create Alarm Group&quot; button to create an alarm group.</p>
+<p align="center">
+  <img src="/img/mail-en.png" width="80%" />
+</li>
+</ul>
+<h4>5.5 Token management</h4>
+<blockquote>
+<p>Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.</p>
+</blockquote>
+<ul>
+<li>
+<p>The administrator enters the Security Center -&gt; Token Management page, clicks the &quot;Create Token&quot; button, selects the expiration time and user, clicks the &quot;Generate Token&quot; button, and clicks the &quot;Submit&quot; button, then the selected user's token is created successfully.</p>
+<p align="center">
+    <img src="/img/create-token-en.png" width="80%" />
+ </p>
+<ul>
+<li>After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the &quot;generate token&quot; button, and click the &quot;submit&quot; button, then the user creates a token successfully.</li>
+<li>Call example:</li>
+</ul>
+</li>
+</ul>
+<pre><code>Token call example
+    import java.util.ArrayList;
+    import java.util.List;
+
+    import org.apache.http.NameValuePair;
+    import org.apache.http.client.entity.UrlEncodedFormEntity;
+    import org.apache.http.client.methods.CloseableHttpResponse;
+    import org.apache.http.client.methods.HttpPost;
+    import org.apache.http.impl.client.CloseableHttpClient;
+    import org.apache.http.impl.client.HttpClients;
+    import org.apache.http.message.BasicNameValuePair;
+    import org.apache.http.util.EntityUtils;
+
+    /**
+     * test token
+     */
+    public void doPOSTParam() throws Exception {
+        // create HttpClient
+        CloseableHttpClient httpclient = HttpClients.createDefault();
+
+        // create http post request
+        HttpPost httpPost = new HttpPost(&quot;http://127.0.0.1:12345/escheduler/projects/create&quot;);
+        httpPost.setHeader(&quot;token&quot;, &quot;123&quot;);
+        // set parameters
+        List&lt;NameValuePair&gt; parameters = new ArrayList&lt;NameValuePair&gt;();
+        parameters.add(new BasicNameValuePair(&quot;projectName&quot;, &quot;qzw&quot;));
+        parameters.add(new BasicNameValuePair(&quot;desc&quot;, &quot;qzw&quot;));
+        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
+        httpPost.setEntity(formEntity);
+        CloseableHttpResponse response = null;
+        try {
+            // execute
+            response = httpclient.execute(httpPost);
+            // response status code 200
+            if (response.getStatusLine().getStatusCode() == 200) {
+                String content = EntityUtils.toString(response.getEntity(), &quot;UTF-8&quot;);
+                System.out.println(content);
+            }
+        } finally {
+            if (response != null) {
+                response.close();
+            }
+            httpclient.close();
+        }
+    }
+</code></pre>
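+<p>The same call can also be made from the command line; a minimal curl sketch against the endpoint used above, assuming a token value of 123:</p>
+<pre><code>curl -X POST -H "token: 123" \
+     --data-urlencode "projectName=qzw" \
+     --data-urlencode "desc=qzw" \
+     http://127.0.0.1:12345/escheduler/projects/create
+</code></pre>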
+<h4>5.6 Granted permission</h4>
+<pre><code>* Granted permissions include project permissions, resource permissions, data source permissions, UDF function permissions.
+* The administrator can authorize the projects, resources, data sources and UDF functions not created by ordinary users. Because the authorization methods for projects, resources, data sources and UDF functions are the same, we take project authorization as an example.
+* Note: For projects created by a user, that user has all permissions on them, so they are not shown in the project list or the selected-projects list.
+</code></pre>
+<ul>
+<li>The administrator enters the Security Center -&gt; User Management page and clicks the &quot;Authorize&quot; button of the user who needs to be authorized, as shown in the figure below:</li>
+</ul>
+ <p align="center">
+  <img src="/img/auth-en.png" width="80%" />
+</p>
+<ul>
+<li>Select the project to grant the user access to that project.</li>
+</ul>
+<p align="center">
+   <img src="/img/authproject-en.png" width="80%" />
+ </p>
+<ul>
+<li>Resources, data sources, and UDF function authorization are the same as project authorization.</li>
+</ul>
+<h3>6. Monitoring Center</h3>
+<h4>6.1 Service management</h4>
+<ul>
+<li>Service management mainly monitors and displays the health status and basic information of each service in the system</li>
+</ul>
+<h4>6.1.1 Master monitoring</h4>
+<ul>
+<li>Mainly shows information related to the master.</li>
+</ul>
+<p align="center">
+   <img src="/img/master-jk-en.png" width="80%" />
+ </p>
+<h4>6.1.2 Worker monitoring</h4>
+<ul>
+<li>Mainly shows information related to the workers.</li>
+</ul>
+<p align="center">
+   <img src="/img/worker-jk-en.png" width="80%" />
+ </p>
+<h4>6.1.3 ZooKeeper monitoring</h4>
+<ul>
+<li>Mainly shows the configuration information of each worker and master registered in ZooKeeper.</li>
+</ul>
+<p align="center">
+   <img src="/img/zookeeper-monitor-en.png" width="80%" />
+ </p>
+<h4>6.1.4 DB monitoring</h4>
+<ul>
+<li>Mainly shows the health status of the DB</li>
+</ul>
+<p align="center">
+   <img src="/img/mysql-jk-en.png" width="80%" />
+ </p>
+<h4>6.2 Statistics management</h4>
+<p align="center">
+   <img src="/img/statistics-en.png" width="80%" />
+ </p>
+<ul>
+<li>Number of commands to be executed: statistics on the t_ds_command table</li>
+<li>The number of failed commands: statistics on the t_ds_error_command table</li>
+<li>Number of tasks to run: Count the data of task_queue in Zookeeper</li>
+<li>Number of tasks to be killed: Count the data of task_kill in Zookeeper</li>
+</ul>
+<h3>7. <span id=TaskParamers>Task node type and parameter settings</span></h3>
+<h4>7.1 Shell node</h4>
+<blockquote>
+<p>Shell node: when the worker executes it, a temporary shell script is generated and executed by the Linux user with the same name as the tenant.</p>
+</blockquote>
+<ul>
+<li>
+<p>Click Project Management-Project Name-Workflow Definition, and click the &quot;Create Workflow&quot; button to enter the DAG editing page.</p>
+</li>
+<li>
+<p>Drag <img src="/img/shell.png" width="35"/> from the toolbar to the drawing board, as shown in the figure below:</p>
+<p align="center">
+    <img src="/img/shell-en.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Node name: The node name in a workflow definition is unique.</p>
+</li>
+<li>
+<p>Run flag: Identifies whether this node can be scheduled normally; if it does not need to be executed, turn on the prohibition switch.</p>
+</li>
+<li>
+<p>Descriptive information: describe the function of the node.</p>
+</li>
+<li>
+<p>Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</p>
+</li>
+<li>
+<p>Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</p>
+</li>
+<li>
+<p>Number of failed retry attempts: The number of times a failed task is resubmitted. It can be chosen from a drop-down or filled in by hand.</p>
+</li>
+<li>
+<p>Failed retry interval: The time interval between resubmissions of a failed task. It can be chosen from a drop-down or filled in by hand.</p>
+</li>
+<li>
+<p>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</p>
+</li>
+<li>
+<p>Script: SHELL program developed by users.</p>
+</li>
+<li>
+<p>Resource: The list of resource files that need to be called in the script, i.e. files uploaded or created in Resource Center -&gt; File Management.</p>
+</li>
+<li>
+<p>User-defined parameters: User-defined parameters local to the SHELL task; they replace the ${variable} placeholders in the script with their values.</p>
+</li>
+</ul>
+<h4>7.2 Sub-process node</h4>
+<ul>
+<li>The sub-process node executes an external workflow definition as a task node.
+<blockquote>
+<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
+</blockquote>
+</li>
+</ul>
+<p align="center">
+   <img src="/img/sub-process-en.png" width="80%" />
+ </p>
+<ul>
+<li>Node name: The node name in a workflow definition is unique</li>
+<li>Run flag: identify whether this node can be scheduled normally</li>
+<li>Descriptive information: describe the function of the node</li>
+<li>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
+<li>Sub-node: the workflow definition of the selected sub-process. Via the entry in the upper right corner, you can jump to the workflow definition of the selected sub-process</li>
+</ul>
+<h4>7.3 DEPENDENT node</h4>
+<ul>
+<li>Dependent nodes are <strong>dependency check nodes</strong>. For example, process A depends on the successful execution of process B yesterday, so the dependent node checks whether process B had a successful execution yesterday.</li>
+</ul>
+<blockquote>
+<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
+</blockquote>
+<p align="center">
+   <img src="/img/dependent-nodes-en.png" width="80%" />
+ </p>
+<blockquote>
+<p>The dependent node provides a logical judgment function, for example checking whether process B was successful yesterday, or whether process C was executed successfully.</p>
+</blockquote>
+  <p align="center">
+   <img src="/img/depend-node-en.png" width="80%" />
+ </p>
+<blockquote>
+<p>For example, process A is a weekly report task, processes B and C are daily tasks, and task A requires tasks B and C to be successfully executed every day of the last week, as shown in the figure:</p>
+</blockquote>
+ <p align="center">
+   <img src="/img/depend-node1-en.png" width="80%" />
+ </p>
+<blockquote>
+<p>If the weekly report A also needs to be executed successfully last Tuesday:</p>
+</blockquote>
+ <p align="center">
+   <img src="/img/depend-node3-en.png" width="80%" />
+ </p>
+<h4>7.4 Stored procedure node</h4>
+<ul>
+<li>According to the selected data source, execute the stored procedure.
+<blockquote>
+<p>Drag in the toolbar<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png" alt="PNG">The task node to the drawing board, as shown in the following figure:</p>
+</blockquote>
+</li>
+</ul>
+<p align="center">
+   <img src="/img/procedure-en.png" width="80%" />
+ </p>
+<ul>
+<li>Data source: The data source type of the stored procedure supports MySQL and POSTGRESQL, select the corresponding data source</li>
+<li>Method: the method name of the stored procedure</li>
+<li>Custom parameters: The custom parameter types of the stored procedure support IN and OUT, and nine data types are supported: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN</li>
+</ul>
+<h4>7.5 SQL node</h4>
+<ul>
+<li>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png" alt="PNG"> task node from the toolbar to the drawing board</li>
+<li>Non-query SQL function: edit non-query SQL task information, and select &quot;non-query&quot; as the sql type, as shown in the figure below:</li>
+</ul>
+ <p align="center">
+  <img src="/img/sql-en.png" width="80%" />
+</p>
+<ul>
+<li>Query SQL function: edit query SQL task information, select &quot;query&quot; as the sql type, and choose form or attachment to send the result by mail to the specified recipients, as shown in the figure below.</li>
+</ul>
+<p align="center">
+   <img src="/img/sql-node-en.png" width="80%" />
+ </p>
+<ul>
+<li>Data source: select the corresponding data source</li>
+<li>sql type: supports query and non-query. The query is a select type query, which is returned with a result set. You can specify three templates for email notification as form, attachment or form attachment. Non-queries are returned without a result set, and are for three types of operations: update, delete, and insert.</li>
+<li>sql parameter: the input parameter format is key1=value1;key2=value2...</li>
+<li>sql statement: the SQL statement to execute (see the sketch after this list)</li>
+<li>UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.</li>
+<li>Custom parameters: custom parameters of the SQL task type replace the ${variable} placeholders in the SQL statement. The custom parameter types and data types are the same as for the stored procedure task type; the difference is that for stored procedures, custom parameters are ordered to set values for the method's arguments.</li>
+<li>Pre-sql: Pre-sql is executed before the sql statement.</li>
+<li>Post-sql: Post-sql is executed after the sql statement.</li>
+</ul>
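+<p>A minimal sketch of how these fields fit together, assuming a custom parameter named start_date of type VARCHAR with the value $[yyyy-MM-dd] and a table named user_login_log (both names are examples only):</p>
+<pre><code>-- ${start_date} is replaced with the value of the custom parameter named start_date before execution
+select count(1) as login_cnt from user_login_log where login_date = '${start_date}'
+</code></pre>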
+<h4>7.6 SPARK node</h4>
+<ul>
+<li>Through the SPARK node, you can directly execute a SPARK program. For the spark node, the worker submits the task using <code>spark-submit</code> (a rough sketch of the assembled command follows the parameter list below)</li>
+</ul>
+<blockquote>
+<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG"> task node from the toolbar to the drawing board, as shown in the following figure:</p>
+</blockquote>
+<p align="center">
+   <img src="/img/spark-submit-en.png" width="80%" />
+ </p>
+<ul>
+<li>Program type: supports three languages, JAVA, Scala and Python</li>
+<li>The class of the main function: the full path of the Spark program's entry Main Class</li>
+<li>Main jar package: the Spark jar package</li>
+<li>Deployment mode: supports three modes, yarn-cluster, yarn-client and local</li>
+<li>Driver cores and memory: you can set the number of Driver cores and the amount of Driver memory</li>
+<li>Number of Executors: you can set the number of Executors, the Executor memory size, and the number of Executor cores</li>
+<li>Command line parameters: set the input parameters of the Spark program; substitution of custom parameter variables is supported.</li>
+<li>Other parameters: supports --jars, --files, --archives, --conf format</li>
+<li>Resource: if a resource file is referenced in other parameters, you need to select and specify it in the resource list</li>
+<li>User-defined parameters: user-defined parameters local to this task; they replace the ${variable} placeholders in the script with their values</li>
+</ul>
+<p>Note: JAVA and Scala are only used for identification and there is no difference between them. If the Spark program is developed in Python, there is no main function class; the other settings are the same</p>
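+<p>As a rough illustration of what the worker assembles from these fields, a spark-submit command along these lines (the class name, jar name and flag values are placeholders, and the exact flags may differ by version):</p>
+<pre><code>spark-submit \
+  --master yarn --deploy-mode cluster \
+  --class com.example.MySparkApp \
+  --driver-cores 1 --driver-memory 512M \
+  --num-executors 2 --executor-cores 2 --executor-memory 2G \
+  my-spark-app.jar arg1 arg2
+</code></pre>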
+<h4>7.7 MapReduce(MR) node</h4>
+<ul>
+<li>Using the MR node, you can directly execute an MR program. For the mr node, the worker submits the task using <code>hadoop jar</code> (a rough sketch of the assembled command follows the parameter list below)</li>
+</ul>
+<blockquote>
+<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png" alt="PNG"> task node in the toolbar to the drawing board, as shown in the following figure:</p>
+</blockquote>
+<ol>
+<li>JAVA program</li>
+</ol>
+ <p align="center">
+   <img src="/img/mr_java_en.png" width="80%" />
+ </p>
+<ul>
+<li>The class of the main function: the full path of the Main Class, the entry point of the MR program</li>
+<li>Program type: select the JAVA language</li>
+<li>Main jar package: the MR jar package</li>
+<li>Command line parameters: set the input parameters of the MR program; substitution of custom parameter variables is supported</li>
+<li>Other parameters: supports -D, -files, -libjars, -archives format</li>
+<li>Resource: if a resource file is referenced in other parameters, you need to select and specify it in the resource list</li>
+<li>User-defined parameters: user-defined parameters local to this task; they replace the ${variable} placeholders in the script with their values</li>
+</ul>
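+<p>As a rough illustration of what the worker assembles for a JAVA program, a hadoop jar command along these lines (jar name, class name and paths are placeholders):</p>
+<pre><code>hadoop jar my-mr-app.jar com.example.WordCount \
+  -D mapreduce.job.queuename=default \
+  /input/words.txt /output/wordcount
+</code></pre>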
+<ol start="2">
+<li>Python program</li>
+</ol>
+<p align="center">
+   <img src="/img/mr_edit_en.png" width="80%" />
+ </p>
+<ul>
+<li>Program type: select the Python language</li>
+<li>Main jar package: the Python jar package for running MR</li>
+<li>Other parameters: supports -D, -mapper, -reducer, -input, -output format; user-defined parameters can be set here, for example:</li>
+<li>-mapper &quot;<a href="http://mapper.py">mapper.py</a> 1&quot; -file <a href="http://mapper.py">mapper.py</a> -reducer <a href="http://reducer.py">reducer.py</a> -file <a href="http://reducer.py">reducer.py</a> -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}</li>
+<li>The &quot;<a href="http://mapper.py">mapper.py</a> 1&quot; after -mapper is two parameters: the first parameter is <a href="http://mapper.py">mapper.py</a>, and the second parameter is 1</li>
+<li>Resource: if a resource file is referenced in other parameters, you need to select and specify it in the resource list</li>
+<li>User-defined parameters: user-defined parameters local to this task; they replace the ${variable} placeholders in the script with their values</li>
+</ul>
+<h4>7.8 Python Node</h4>
+<ul>
+<li>Using the python node, you can directly execute python scripts. For the python node, the worker uses <code>python **</code> to submit the task.</li>
+</ul>
+<blockquote>
+<p>Drag the <img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png" alt="PNG"> task node from the toolbar to the drawing board, as shown in the following figure:</p>
+</blockquote>
+<p align="center">
+   <img src="/img/python-en.png" width="80%" />
+ </p>
+<ul>
+<li>Script: Python program developed by the user</li>
+<li>Resources: refers to the list of resource files that need to be called in the script</li>
+<li>User-defined parameters: user-defined parameters local to the Python task; they replace the ${variable} placeholders in the script with their values</li>
+</ul>
+<h4>7.9 Flink Node</h4>
+<ul>
+<li>Drag in the toolbar<img src="/img/flink.png" width="35"/>The task node to the drawing board, as shown in the following figure:</li>
+</ul>
+<p align="center">
+  <img src="/img/flink-en.png" width="80%" />
+</p>
+<ul>
+<li>Program type: supports three languages, JAVA, Scala and Python</li>
+<li>The class of the main function: the full path of the Main Class, the entry point of the Flink program</li>
+<li>Main jar package: the Flink jar package</li>
+<li>Deployment mode: supports two modes, cluster and local</li>
+<li>Number of slots: you can set the number of slots</li>
+<li>Number of TaskManagers: you can set the number of TaskManagers</li>
+<li>JobManager memory size: you can set the JobManager memory size</li>
+<li>TaskManager memory size: you can set the TaskManager memory size</li>
+<li>Command line parameters: set the input parameters of the Flink program; substitution of custom parameter variables is supported.</li>
+<li>Other parameters: supports --jars, --files, --archives, --conf format</li>
+<li>Resource: if a resource file is referenced in other parameters, you need to select and specify it in the resource list</li>
+<li>Custom parameters: user-defined parameters local to the Flink task; they replace the ${variable} placeholders in the script with their values</li>
+</ul>
+<p>Note: JAVA and Scala are only used for identification and there is no difference between them. If the Flink program is developed in Python, there is no main function class; the other settings are the same</p>
+<h4>7.10 http Node</h4>
+<ul>
+<li>Drag in the toolbar<img src="/img/http.png" width="35"/>The task node to the drawing board, as shown in the following figure:</li>
+</ul>
+<p align="center">
+   <img src="/img/http-en.png" width="80%" />
+ </p>
+<ul>
+<li>Node name: The node name in a workflow definition is unique.</li>
+<li>Run flag: Identifies whether this node can be scheduled normally; if it does not need to be executed, turn on the prohibition switch.</li>
+<li>Descriptive information: describe the function of the node.</li>
+<li>Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</li>
+<li>Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</li>
+<li>Number of failed retry attempts: The number of times a failed task is resubmitted. It can be chosen from a drop-down or filled in by hand.</li>
+<li>Failed retry interval: The time interval between resubmissions of a failed task. It can be chosen from a drop-down or filled in by hand.</li>
+<li>Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
+<li>Request address: http request URL.</li>
+<li>Request type: supports GET, POST, HEAD, PUT, DELETE.</li>
+<li>Request parameters: supports Parameter, Body, Headers.</li>
+<li>Verification conditions: supports default response code, custom response code, content included, content not included.</li>
+<li>Verification content: when the verification condition is custom response code, content included, or content not included, the verification content is required.</li>
+<li>Custom parameters: user-defined parameters local to the http task; they replace the ${variable} placeholders with their values.</li>
+</ul>
+<h4>7.11 DATAX Node</h4>
+<ul>
+<li>
+<p>Drag in the toolbar<img src="/img/datax.png" width="35"/>Task node into the drawing board</p>
+<p align="center">
+ <img src="/img/datax-en.png" width="80%" />
+</p>
+</li>
+<li>
+<p>Custom template: when the custom template switch is turned on, you can customize the content of the json configuration file of the datax node yourself (useful when the built-in form does not meet your requirements); see the sketch after this list</p>
+</li>
+<li>
+<p>Data source: select the data source to extract the data</p>
+</li>
+<li>
+<p>sql statement: the sql statement used to extract data from the source database; the column names in the sql query are parsed automatically when the node runs and mapped to the column names of the target table for synchronization. When the source and target column names differ, they can be converted with a column alias (as)</p>
+</li>
+<li>
+<p>Target library: select the target library for data synchronization</p>
+</li>
+<li>
+<p>Target table: the name of the target table for data synchronization</p>
+</li>
+<li>
+<p>Pre-sql: Pre-sql is executed before the sql statement (executed by the target library).</p>
+</li>
+<li>
+<p>Post-sql: Post-sql is executed after the sql statement (executed by the target library).</p>
+</li>
+<li>
+<p>json: json configuration file for datax synchronization</p>
+</li>
+<li>
+<p>Custom parameters: custom parameters of the SQL task type replace the ${variable} placeholders in the SQL statement. The custom parameter types and data types are the same as for the stored procedure task type; the difference is that for stored procedures, custom parameters are ordered to set values for the method's arguments.</p>
+</li>
+</ul>
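+<p>When the custom template switch is turned on, the json field holds a standard DataX job configuration. A heavily abbreviated sketch, assuming a MySQL-to-MySQL synchronization (reader/writer names, tables and connection details are examples only):</p>
+<pre><code>{
+  "job": {
+    "setting": { "speed": { "channel": 1 } },
+    "content": [{
+      "reader": {
+        "name": "mysqlreader",
+        "parameter": {
+          "username": "xxx", "password": "xxx",
+          "connection": [{ "querySql": ["select id, name from src_table"],
+                           "jdbcUrl": ["jdbc:mysql://127.0.0.1:3306/src_db"] }]
+        }
+      },
+      "writer": {
+        "name": "mysqlwriter",
+        "parameter": {
+          "username": "xxx", "password": "xxx",
+          "column": ["id", "name"],
+          "connection": [{ "table": ["dst_table"],
+                           "jdbcUrl": "jdbc:mysql://127.0.0.1:3306/dst_db" }]
+        }
+      }
+    }]
+  }
+}
+</code></pre>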
+<h3>8. Parameters</h3>
+<h4>8.1 System parameters</h4>
+<table>
+    <tr><th>variable</th><th>meaning</th></tr>
+    <tr>
+        <td>${system.biz.date}</td>
+        <td>The day before the schedule time of the daily scheduling instance, in yyyyMMdd format; during data complement, the date is +1</td>
+    </tr>
+    <tr>
+        <td>${system.biz.curdate}</td>
+        <td>The schedule time of the daily scheduling instance, in yyyyMMdd format; during data complement, the date is +1</td>
+    </tr>
+    <tr>
+        <td>${system.datetime}</td>
+        <td>The schedule time of the daily scheduling instance, in yyyyMMddHHmmss format; during data complement, the date is +1</td>
+    </tr>
+</table>
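+<p>For example, a local parameter can take its value from a system parameter and then be used in the task script; a minimal sketch, assuming a parameter named dt (the name is arbitrary):</p>
+<pre><code># local parameter: dt = ${system.biz.date}
+echo "processing data for ${dt}"
+</code></pre>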
+<h4>8.2 Time custom parameters</h4>
+<ul>
+<li>
+<p>Custom variable names are supported in the code; the declaration syntax is ${variable name}. A variable can refer to a &quot;system parameter&quot; or specify a &quot;constant&quot;.</p>
+</li>
+<li>
+<p>We define this benchmark variable as $[...] format, $[yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as: $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.</p>
+</li>
+<li>
+<p>The following format can also be used:</p>
+<pre><code>* Next N years: $[add_months(yyyyMMdd,12*N)]
+* N years before: $[add_months(yyyyMMdd,-12*N)]
+* Next N months: $[add_months(yyyyMMdd,N)]
+* N months before: $[add_months(yyyyMMdd,-N)]
+* Next N weeks: $[yyyyMMdd+7*N]
+* N weeks before: $[yyyyMMdd-7*N]
+* Next N days: $[yyyyMMdd+N]
+* N days before: $[yyyyMMdd-N]
+* Next N hours: $[HHmmss+N/24]
+* N hours before: $[HHmmss-N/24]
+* Next N minutes: $[HHmmss+N/24/60]
+* N minutes before: $[HHmmss-N/24/60]
+</code></pre>
+</li>
+</ul>
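+<p>For example, a parameter capturing &quot;two days ago&quot; could be declared like this (the parameter name is arbitrary):</p>
+<pre><code>two_days_ago = $[yyyyMMdd-2]
+# in the task script, ${two_days_ago} is then replaced with a concrete date such as 20201012
+</code></pre>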
+<h4>8.3 <span id=UserDefinedParameters>User-defined parameters</span></h4>
+<ul>
+<li>User-defined parameters are divided into global parameters and local parameters. Global parameters are parameters passed to the whole workflow when saving a workflow definition or starting a workflow instance; they can be referenced in the local parameters of any task node in the entire process.
+For example:</li>
+</ul>
+<p align="center">
+   <img src="/img/local_parameter_en.png" width="80%" />
+ </p>
+<ul>
+<li>global_bizdate is a global parameter, which refers to a system parameter.</li>
+</ul>
+<p align="center">
+   <img src="/img/global_parameter_en.png" width="80%" />
+ </p>
+<ul>
+<li>In the task, local_param_bizdate uses ${global_bizdate} to refer to the global parameter. In a script you can use ${local_param_bizdate} to obtain the value of the global variable global_bizdate, or set the value of local_param_bizdate directly through JDBC (a minimal sketch follows).</li>
+</ul>
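+<p>A minimal sketch of the referencing chain described above, as it would appear for a shell task script:</p>
+<pre><code># global parameter:        global_bizdate = ${system.biz.date}
+# local parameter of task: local_param_bizdate = ${global_bizdate}
+echo "local_param_bizdate is ${local_param_bizdate}"
+</code></pre>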
+</div></section><footer class="footer-container"><div class="footer-body"><img src="/img/ds_gray.svg"/><div class="cols-container"><div class="col col-12"><h3>Disclaimer</h3><p>Apache DolphinScheduler (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by Incubator. 
+Incubation is required of all newly accepted projects until a further review indicates 
+that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. 
+While incubation status is not necessarily a reflection of the completeness or stability of the code, 
+it does indicate that the project has yet to be fully endorsed by the ASF.</p></div><div class="col col-6"><dl><dt>Documentation</dt><dd><a href="/en-us/docs/development/architecture-design.html" target="_self">Overview</a></dd><dd><a href="/en-us/docs/1.2.0/user_doc/quick-start.html" target="_self">Quick start</a></dd><dd><a href="/en-us/docs/development/backend-development.html" target="_self">Developer guide</a></dd></dl></div><div class="col col-6"><dl><dt>ASF</dt><dd><a href="http:/ [...]
+	<script src="https://f.alicdn.com/react/15.4.1/react-with-addons.min.js"></script>
+	<script src="https://f.alicdn.com/react/15.4.1/react-dom.min.js"></script>
+	<script>
+		window.rootPath = '';
+  </script>
+	<script src="/build/documentation.js"></script>
+</body>
+</html>
\ No newline at end of file
diff --git a/en-us/docs/1.3.2/user_doc/system-manual.json b/en-us/docs/1.3.2/user_doc/system-manual.json
new file mode 100644
index 0000000..1687db9
--- /dev/null
+++ b/en-us/docs/1.3.2/user_doc/system-manual.json
@@ -0,0 +1,6 @@
+{
+  "filename": "system-manual.md",
+  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p>\n<h3>2. Project management</h3>\n<h4>2.1 Create project< [...]
+  "link": "/en-us/docs/1.3.2/user_doc/system-manual.html",
+  "meta": {}
+}
\ No newline at end of file
diff --git a/img/complement_en1.png b/img/complement_en1.png
new file mode 100644
index 0000000..8d706aa
Binary files /dev/null and b/img/complement_en1.png differ
diff --git a/img/create_project_en1.png b/img/create_project_en1.png
new file mode 100644
index 0000000..29df496
Binary files /dev/null and b/img/create_project_en1.png differ