Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2021/10/02 04:01:12 UTC

[dolphinscheduler-website] branch asf-site updated: Automated deployment: 308e2c95ee7af5b3e9e5699656455867d7374b74

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 94ea874  Automated deployment: 308e2c95ee7af5b3e9e5699656455867d7374b74
94ea874 is described below

commit 94ea8743c10db77525888df2258fde0b6a47a06c
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Sat Oct 2 04:01:06 2021 +0000

    Automated deployment: 308e2c95ee7af5b3e9e5699656455867d7374b74
---
 en-us/docs/1.3.8/user_doc/ambari-integration.html  |   6 +-
 en-us/docs/1.3.8/user_doc/ambari-integration.json  |   2 +-
 en-us/docs/1.3.8/user_doc/architecture-design.html |  30 +++---
 en-us/docs/1.3.8/user_doc/architecture-design.json |   2 +-
 en-us/docs/1.3.8/user_doc/cluster-deployment.html  |  16 ++--
 en-us/docs/1.3.8/user_doc/cluster-deployment.json  |   2 +-
 en-us/docs/1.3.8/user_doc/configuration-file.html  |  12 +--
 en-us/docs/1.3.8/user_doc/configuration-file.json  |   2 +-
 en-us/docs/1.3.8/user_doc/docker-deployment.html   | 104 ++++++++++-----------
 en-us/docs/1.3.8/user_doc/docker-deployment.json   |   2 +-
 en-us/docs/1.3.8/user_doc/expansion-reduction.html |   4 +-
 en-us/docs/1.3.8/user_doc/expansion-reduction.json |   2 +-
 en-us/docs/1.3.8/user_doc/flink-call.html          |  10 +-
 en-us/docs/1.3.8/user_doc/flink-call.json          |   2 +-
 .../docs/1.3.8/user_doc/kubernetes-deployment.html |  14 +--
 .../docs/1.3.8/user_doc/kubernetes-deployment.json |   2 +-
 en-us/docs/1.3.8/user_doc/metadata-1.3.html        |   6 +-
 en-us/docs/1.3.8/user_doc/metadata-1.3.json        |   2 +-
 en-us/docs/1.3.8/user_doc/open-api.html            |   2 +-
 en-us/docs/1.3.8/user_doc/open-api.json            |   2 +-
 en-us/docs/1.3.8/user_doc/quick-start.html         |   8 +-
 en-us/docs/1.3.8/user_doc/quick-start.json         |   2 +-
 .../user_doc/skywalking-agent-deployment.html      |  28 +++---
 .../user_doc/skywalking-agent-deployment.json      |   2 +-
 .../docs/1.3.8/user_doc/standalone-deployment.html |  50 +++++-----
 .../docs/1.3.8/user_doc/standalone-deployment.json |   2 +-
 en-us/docs/1.3.8/user_doc/system-manual.html       |   8 +-
 en-us/docs/1.3.8/user_doc/system-manual.json       |   2 +-
 en-us/docs/1.3.8/user_doc/task-structure.html      |   2 +-
 en-us/docs/1.3.8/user_doc/task-structure.json      |   2 +-
 en-us/docs/1.3.8/user_doc/upgrade.html             |  30 +++---
 en-us/docs/1.3.8/user_doc/upgrade.json             |   2 +-
 en-us/docs/latest/user_doc/ambari-integration.html |   6 +-
 en-us/docs/latest/user_doc/ambari-integration.json |   2 +-
 .../docs/latest/user_doc/architecture-design.html  |  30 +++---
 .../docs/latest/user_doc/architecture-design.json  |   2 +-
 en-us/docs/latest/user_doc/cluster-deployment.html |  16 ++--
 en-us/docs/latest/user_doc/cluster-deployment.json |   2 +-
 en-us/docs/latest/user_doc/configuration-file.html |  12 +--
 en-us/docs/latest/user_doc/configuration-file.json |   2 +-
 en-us/docs/latest/user_doc/docker-deployment.html  | 104 ++++++++++-----------
 en-us/docs/latest/user_doc/docker-deployment.json  |   2 +-
 .../docs/latest/user_doc/expansion-reduction.html  |   4 +-
 .../docs/latest/user_doc/expansion-reduction.json  |   2 +-
 en-us/docs/latest/user_doc/flink-call.html         |  10 +-
 en-us/docs/latest/user_doc/flink-call.json         |   2 +-
 .../latest/user_doc/kubernetes-deployment.html     |  14 +--
 .../latest/user_doc/kubernetes-deployment.json     |   2 +-
 en-us/docs/latest/user_doc/metadata-1.3.html       |   6 +-
 en-us/docs/latest/user_doc/metadata-1.3.json       |   2 +-
 en-us/docs/latest/user_doc/open-api.html           |   2 +-
 en-us/docs/latest/user_doc/open-api.json           |   2 +-
 en-us/docs/latest/user_doc/quick-start.html        |   8 +-
 en-us/docs/latest/user_doc/quick-start.json        |   2 +-
 .../user_doc/skywalking-agent-deployment.html      |  28 +++---
 .../user_doc/skywalking-agent-deployment.json      |   2 +-
 .../latest/user_doc/standalone-deployment.html     |  50 +++++-----
 .../latest/user_doc/standalone-deployment.json     |   2 +-
 en-us/docs/latest/user_doc/system-manual.html      |   8 +-
 en-us/docs/latest/user_doc/system-manual.json      |   2 +-
 en-us/docs/latest/user_doc/task-structure.html     |   2 +-
 en-us/docs/latest/user_doc/task-structure.json     |   2 +-
 en-us/docs/latest/user_doc/upgrade.html            |  30 +++---
 en-us/docs/latest/user_doc/upgrade.json            |   2 +-
 64 files changed, 362 insertions(+), 362 deletions(-)

diff --git a/en-us/docs/1.3.8/user_doc/ambari-integration.html b/en-us/docs/1.3.8/user_doc/ambari-integration.html
index 7cb0c8c..0731345 100644
--- a/en-us/docs/1.3.8/user_doc/ambari-integration.html
+++ b/en-us/docs/1.3.8/user_doc/ambari-integration.html
@@ -26,7 +26,7 @@
 </ul>
 </li>
 <li>
-<p>Create an installation for DolphinScheduler with the user have read and write access to the installation directory (/opt/soft)</p>
+<p>Create an installation for DolphinScheduler with the user has read and write access to the installation directory (/opt/soft)</p>
 </li>
 <li>
 <p>Install with rpm package</p>
@@ -38,7 +38,7 @@
 <li>Execute with DolphinScheduler installation user: <code>rpm -ivh apache-dolphinscheduler-xxx.noarch.rpm</code></li>
 <li>Mysql-connector-java packaged using the default POM file will not be included.</li>
 <li>The RPM package was packaged in the project with the installation path of /opt/soft.
-If you use mysql as the database, you need add it manually.</li>
+If you use MySQL as the database, you need to add it manually.</li>
 </ul>
 </li>
 <li>
@@ -92,7 +92,7 @@ flush privileges;
 <p><img src="https://dolphinscheduler.apache.org/img/ambari-plugin/DS2_AMBARI_004.png" alt=""></p>
 </li>
 <li>
-<p>System Env Optimization will export some system environment config. Modify according to actual situation</p>
+<p>System Env Optimization will export some system environment config. Modify according to the actual situation</p>
 <p><img src="https://dolphinscheduler.apache.org/img/ambari-plugin/DS2_AMBARI_020.png" alt=""></p>
 </li>
 <li>
diff --git a/en-us/docs/1.3.8/user_doc/ambari-integration.json b/en-us/docs/1.3.8/user_doc/ambari-integration.json
index 471386d..0dc3bdb 100644
--- a/en-us/docs/1.3.8/user_doc/ambari-integration.json
+++ b/en-us/docs/1.3.8/user_doc/ambari-integration.json
@@ -1,6 +1,6 @@
 {
   "filename": "ambari-integration.md",
-  "__html": "<h3>Instructions for using the DolphinScheduler's Ambari plug-in</h3>\n<h4>Note</h4>\n<ol>\n<li>This document is intended for users with a basic understanding of Ambari</li>\n<li>This document is a description of adding the DolphinScheduler service to the installed Ambari service</li>\n<li>This document is based on version 2.5.2 of Ambari</li>\n</ol>\n<h4>Installation preparation</h4>\n<ol>\n<li>\n<p>Prepare the RPM packages</p>\n<ul>\n<li>It is generated by executing the co [...]
+  "__html": "<h3>Instructions for using the DolphinScheduler's Ambari plug-in</h3>\n<h4>Note</h4>\n<ol>\n<li>This document is intended for users with a basic understanding of Ambari</li>\n<li>This document is a description of adding the DolphinScheduler service to the installed Ambari service</li>\n<li>This document is based on version 2.5.2 of Ambari</li>\n</ol>\n<h4>Installation preparation</h4>\n<ol>\n<li>\n<p>Prepare the RPM packages</p>\n<ul>\n<li>It is generated by executing the co [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/ambari-integration.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/architecture-design.html b/en-us/docs/1.3.8/user_doc/architecture-design.html
index 3035cf4..6dd8ddb 100644
--- a/en-us/docs/1.3.8/user_doc/architecture-design.html
+++ b/en-us/docs/1.3.8/user_doc/architecture-design.html
@@ -20,17 +20,17 @@
         <em>dag example</em>
   </p>
 </p>
-<p><strong>Process definition</strong>:Visualization formed by dragging task nodes and establishing task node associations<strong>DAG</strong></p>
-<p><strong>Process instance</strong>:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated</p>
-<p><strong>Task instance</strong>:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status</p>
-<p><strong>Task type</strong>: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion, note: <strong>SUB_PROCESS</strong>  It is also a separate process definition that can be started and executed separately</p>
-<p><strong>Scheduling method:</strong> The system supports scheduled scheduling and manual scheduling based on cron expressions. Command type support: start workflow, start execution from current node, resume fault-tolerant workflow, resume pause process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread。Among them <strong>Resume fault-tolerant workflow</strong> 和 <strong>Resume waiting thread</strong> The two command types are used by the in [...]
-<p><strong>Scheduled</strong>:System adopts <strong>quartz</strong> distributed scheduler, and supports the visual generation of cron expressions</p>
-<p><strong>Rely</strong>:The system not only supports <strong>DAG</strong> simple dependencies between the predecessor and successor nodes, but also provides <strong>task dependent</strong> nodes, supporting <strong>between processes</strong></p>
-<p><strong>Priority</strong> :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out</p>
-<p><strong>Email alert</strong>:Support <strong>SQL task</strong> Query result email sending, process instance running result email alert and fault tolerance alert notification</p>
-<p><strong>Failure strategy</strong>:For tasks running in parallel, if a task fails, two failure strategy processing methods are provided. <strong>Continue</strong> refers to regardless of the status of the task running in parallel until the end of the process failure. <strong>End</strong> means that once a failed task is found, Kill will also run the parallel task at the same time, and the process fails and ends</p>
-<p><strong>Complement</strong>:Supplement historical data,Supports <strong>interval parallel and serial</strong> two complement methods</p>
+<p><strong>Process definition</strong>: Visualization formed by dragging task nodes and establishing task node associations<strong>DAG</strong></p>
+<p><strong>Process instance</strong>: The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated</p>
+<p><strong>Task instance</strong>: The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status</p>
+<p><strong>Task type</strong>: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion, note: <strong>SUB_PROCESS</strong>  It is also a separate process definition that can be started and executed separately</p>
+<p><strong>Scheduling method</strong>: The system supports scheduled scheduling and manual scheduling based on cron expressions. Command type support: start workflow, start execution from current node, resume fault-tolerant workflow, resume pause process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them <strong>Resume fault-tolerant workflow</strong> and <strong>Resume waiting thread</strong> The two command types are used by the [...]
+<p><strong>Scheduled</strong>: System adopts <strong>quartz</strong> distributed scheduler, and supports the visual generation of cron expressions</p>
+<p><strong>Rely</strong>: The system not only supports <strong>DAG</strong> simple dependencies between the predecessor and successor nodes, but also provides <strong>task dependent</strong> nodes, supporting <strong>between processes</strong></p>
+<p><strong>Priority</strong>: Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in-first-out</p>
+<p><strong>Email alert</strong>: Support <strong>SQL task</strong> Query result email sending, process instance running result email alert and fault tolerance alert notification</p>
+<p><strong>Failure strategy</strong>: For tasks running in parallel, if a task fails, two failure strategy processing methods are provided. <strong>Continue</strong> refers to regardless of the status of the task running in parallel until the end of the process failure. <strong>End</strong> means that once a failed task is found, Kill will also run the parallel task at the same time, and the process fails and ends</p>
+<p><strong>Complement</strong>: Supplement historical data,Supports <strong>interval parallel and serial</strong> two complement methods</p>
 <h3>2.System Structure</h3>
 <h4>2.1 System architecture diagram</h4>
 <p align="center">
@@ -169,7 +169,7 @@ In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlow
 <li>Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.</li>
 <li>Add a Command type with insufficient resources. If the thread pool is insufficient, suspend the main process. In this way, there are new threads in the thread pool, which can make the process suspended by insufficient resources wake up to execute again.</li>
 </ol>
-<p>note:The Master Scheduler thread is executed by FIFO when acquiring the Command.</p>
+<p>note: The Master Scheduler thread is executed by FIFO when acquiring the Command.</p>
 <p>So we chose the third way to solve the problem of insufficient threads.</p>
 <h5>Four、Fault-tolerant design</h5>
 <p>Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.</p>
@@ -192,7 +192,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
  </p>
-<p>Once the Master Scheduler thread finds that the task instance is in the &quot;fault tolerant&quot; state, it takes over the task and resubmits it.</p>
+<p>Once the Master Scheduler thread finds that the task instance is in the &quot;fault-tolerant&quot; state, it takes over the task and resubmits it.</p>
 <p>Note: Due to &quot;network jitter&quot;, the node may lose its heartbeat with ZooKeeper in a short period of time, and the node's remove event may occur. For this situation, we use the simplest way, that is, once the node and ZooKeeper timeout connection occurs, then directly stop the Master or Worker service.</p>
 <h6>2.Task failed and try again</h6>
 <p>Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:</p>
@@ -218,7 +218,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
 <li>According to <strong>priority of different process instances</strong> priority over <strong>priority of the same process instance</strong> priority over <strong>priority of tasks within the same process</strong>priority over <strong>tasks within the same process</strong>submission order from high to Low task processing.
 <ul>
 <li>
-<p>The specific implementation is to parse the priority according to the json of the task instance, and then save the <strong>process instance priority_process instance id_task priority_task id</strong> information in the ZooKeeper task queue, when obtained from the task queue, pass String comparison can get the tasks that need to be executed first</p>
+<p>The specific implementation is to parse the priority according to the JSON of the task instance, and then save the <strong>process instance priority_process instance id_task priority_task id</strong> information in the ZooKeeper task queue, when obtained from the task queue, pass String comparison can get the tasks that need to be executed first</p>
 <ul>
 <li>
 <p>The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below</p>
@@ -243,7 +243,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
 <p>Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:</p>
 </li>
 <li>
-<p>Put logs on ES search engine</p>
+<p>Put logs on the ES search engine</p>
 </li>
 <li>
 <p>Obtain remote log information through netty communication</p>
diff --git a/en-us/docs/1.3.8/user_doc/architecture-design.json b/en-us/docs/1.3.8/user_doc/architecture-design.json
index 80707a5..aca1fb9 100644
--- a/en-us/docs/1.3.8/user_doc/architecture-design.json
+++ b/en-us/docs/1.3.8/user_doc/architecture-design.json
@@ -1,6 +1,6 @@
 {
   "filename": "architecture-design.md",
-  "__html": "<h2>System Architecture Design</h2>\n<p>Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system</p>\n<h3>1.Glossary</h3>\n<p><strong>DAG:</strong> The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes [...]
+  "__html": "<h2>System Architecture Design</h2>\n<p>Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system</p>\n<h3>1.Glossary</h3>\n<p><strong>DAG:</strong> The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/architecture-design.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/cluster-deployment.html b/en-us/docs/1.3.8/user_doc/cluster-deployment.html
index a65d659..b453483 100644
--- a/en-us/docs/1.3.8/user_doc/cluster-deployment.html
+++ b/en-us/docs/1.3.8/user_doc/cluster-deployment.html
@@ -49,7 +49,7 @@ sed -i &#x27;s/Defaults    requirett/#Defaults    requirett/g&#x27; /etc/sudoers
 
 </code></pre>
 <pre><code> Notes:
- - Because the task execution service is based on 'sudo -u {linux-user}' to switch between different Linux users to implement multi-tenant running jobs, the deployment user needs to have sudo permissions and is passwordless. The first-time learners who can ignore it if they don't understand.
+ - Because the task execution service is based on 'sudo -u {linux-user}' to switch between different Linux users to implement multi-tenant running jobs, the deployment user needs to have sudo permissions and is passwordless. The first-time learners can ignore it if they don't understand.
  - If find the &quot;Default requiretty&quot; in the &quot;/etc/sudoers&quot; file, also comment out.
  - If you need to use resource upload, you need to assign the user of permission to operate the local file system, HDFS or MinIO.
 </code></pre>
@@ -213,7 +213,7 @@ zkQuorum=&quot;192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181&quot;
 installPath=&quot;/opt/soft/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> deployment user</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself</span>
+<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate HDFS. If HDFS is enabled, the root directory needs to be created by itself</span>
 deployUser=&quot;dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> alert config,take QQ email <span class="hljs-keyword">for</span> example</span>
@@ -237,10 +237,10 @@ mailUser=&quot;xxx@qq.com&quot;
 <span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is email service authorization code, not the email login password.</span>
 mailPassword=&quot;xxx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
 starttlsEnable=&quot;true&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported.</span>
 <span class="hljs-meta">#</span><span class="bash"> note: only one of TLS and SSL can be <span class="hljs-keyword">in</span> the <span class="hljs-literal">true</span> state.</span>
 sslEnable=&quot;false&quot;
 <span class="hljs-meta">
@@ -252,18 +252,18 @@ sslTrust=&quot;smtp.qq.com&quot;
 resourceStorageType=&quot;HDFS&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> If resourceStorageType = HDFS, and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or host name.</span>
-<span class="hljs-meta">#</span><span class="bash"> <span class="hljs-keyword">if</span> S3,write S3 address,HA,<span class="hljs-keyword">for</span> example :s3a://dolphinscheduler,</span>
+<span class="hljs-meta">#</span><span class="bash"> <span class="hljs-keyword">if</span> S3,write S3 address,HA,<span class="hljs-keyword">for</span> example: s3a://dolphinscheduler,</span>
 <span class="hljs-meta">#</span><span class="bash"> Note,s3 be sure to create the root directory /dolphinscheduler</span>
 defaultFS=&quot;hdfs://mycluster:8020&quot;
 <span class="hljs-meta">
 
-#</span><span class="bash"> <span class="hljs-keyword">if</span> not use hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> not use Hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
 yarnHaIps=&quot;192.168.xx.xx,192.168.xx.xx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 to actual resourcemanager hostname.</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 with actual resourcemanager hostname.</span>
 singleYarnIp=&quot;yarnIp1&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
+#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this Hadoop HDFS path, self configuration, please make sure the directory exists on HDFS and have read-write permissions. /dolphinscheduler is recommended</span>
 resourceUploadPath=&quot;/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> who have permissions to create directory under HDFS/S3 root path</span>
diff --git a/en-us/docs/1.3.8/user_doc/cluster-deployment.json b/en-us/docs/1.3.8/user_doc/cluster-deployment.json
index 362952a..5251f98 100644
--- a/en-us/docs/1.3.8/user_doc/cluster-deployment.json
+++ b/en-us/docs/1.3.8/user_doc/cluster-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "cluster-deployment.md",
-  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
+  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/cluster-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/configuration-file.html b/en-us/docs/1.3.8/user_doc/configuration-file.html
index 778b241..b9468bf 100644
--- a/en-us/docs/1.3.8/user_doc/configuration-file.html
+++ b/en-us/docs/1.3.8/user_doc/configuration-file.html
@@ -18,7 +18,7 @@
 <pre><code>
 ├─bin                               DS application commands directory
 │  ├─dolphinscheduler-daemon.sh         startup/shutdown DS application 
-│  ├─start-all.sh                       startup all DS services with configurations
+│  ├─start-all.sh                       startup all DS services with configurations
 │  ├─stop-all.sh                        shutdown all DS services with configurations
 ├─conf                              configurations directory
 │  ├─application-api.properties         API-service config properties
@@ -527,7 +527,7 @@ Currently, DS just makes a basic config, please config further JVM options bas
 <tr>
 <td>master.max.cpuload.avg</td>
 <td>-1</td>
-<td>master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2</td>
+<td>master max CPU load avg, only higher than the system CPU load average, master server can schedule. default value -1: the number of CPU cores * 2</td>
 </tr>
 <tr>
 <td>master.reserved.memory</td>
@@ -564,7 +564,7 @@ Currently, DS just makes a basic config, please config further JVM options bas
 <tr>
 <td>worker.max.cpuload.avg</td>
 <td>-1</td>
-<td>worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2</td>
+<td>worker max CPU load avg, only higher than the system CPU load average, worker server can be dispatched tasks. default value -1: the number of CPU cores * 2</td>
 </tr>
 <tr>
 <td>worker.reserved.memory</td>
@@ -823,7 +823,7 @@ Files such as <a href="http://dolphinscheduler-daemon.sh">dolphinscheduler-daemo
 <span class="hljs-comment"># Note:  please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&amp;`.</span>
 <span class="hljs-comment">#   eg: `[` escape to `\[`</span>
 
-<span class="hljs-comment"># Database type (DS currently only supports postgresql and mysql)</span>
+<span class="hljs-comment"># Database type (DS currently only supports PostgreSQL and MySQL)</span>
 dbtype=<span class="hljs-string">&quot;mysql&quot;</span>
 
 <span class="hljs-comment"># Database url &amp; port</span>
@@ -902,9 +902,9 @@ resourceUploadPath=<span class="hljs-string">&quot;/dolphinscheduler&quot;</span
 <span class="hljs-comment"># HDFS/S3 root user</span>
 hdfsRootUser=<span class="hljs-string">&quot;hdfs&quot;</span>
 
-<span class="hljs-comment"># Followings are kerberos configs</span>
+<span class="hljs-comment"># Followings are Kerberos configs</span>
 
-<span class="hljs-comment"># Spicify kerberos enable or not</span>
+<span class="hljs-comment"># Spicify Kerberos enable or not</span>
 kerberosStartUp=<span class="hljs-string">&quot;false&quot;</span>
 
 <span class="hljs-comment"># Kdc krb5 config file path</span>
diff --git a/en-us/docs/1.3.8/user_doc/configuration-file.json b/en-us/docs/1.3.8/user_doc/configuration-file.json
index 225ac16..b693975 100644
--- a/en-us/docs/1.3.8/user_doc/configuration-file.json
+++ b/en-us/docs/1.3.8/user_doc/configuration-file.json
@@ -1,6 +1,6 @@
 {
   "filename": "configuration-file.md",
-  "__html": "<h1>Preface</h1>\n<p>This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.</p>\n<h1>Directory Structure</h1>\n<p>Currently, all the configuration files are under [conf ] directory. Please check the following simplified DolphinScheduler installation directories to have a direct view about the position [conf] directory in and configuration files inside. This document only describes DolphinScheduler configurations a [...]
+  "__html": "<h1>Preface</h1>\n<p>This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.</p>\n<h1>Directory Structure</h1>\n<p>Currently, all the configuration files are under [conf ] directory. Please check the following simplified DolphinScheduler installation directories to have a direct view about the position [conf] directory in and configuration files inside. This document only describes DolphinScheduler configurations a [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/configuration-file.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/docker-deployment.html b/en-us/docs/1.3.8/user_doc/docker-deployment.html
index 5a87b01..4c9a7ad 100644
--- a/en-us/docs/1.3.8/user_doc/docker-deployment.html
+++ b/en-us/docs/1.3.8/user_doc/docker-deployment.html
@@ -416,14 +416,14 @@ COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
 <p>Comment the <code>dolphinscheduler-postgresql</code> block in <code>docker-compose.yml</code></p>
 </li>
 <li>
-<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use a external MySQL database)</p>
+<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use an external MySQL database)</p>
 </li>
 <li>
 <p>Modify DATABASE environment variables in <code>config.env.sh</code></p>
@@ -469,7 +469,7 @@ COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
@@ -504,14 +504,14 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:oracle-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
 <p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
 </li>
 <li>
-<p>Add a Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code></p>
 </li>
 </ol>
 <h3>How to support Python 2 pip and custom requirements.txt?</h3>
@@ -537,7 +537,7 @@ RUN apt-get update &amp;&amp; \
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:pip</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="4">
 <li>
@@ -568,7 +568,7 @@ RUN apt-get update &amp;&amp; \
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:python3</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="4">
 <li>
@@ -607,7 +607,7 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
 ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="5">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -656,7 +656,7 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
 ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="5">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -669,10 +669,10 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </blockquote>
 <p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
 <ol>
-<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support nfs in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support NFS in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
   <span class="hljs-attr">dolphinscheduler-shared-local:</span>
@@ -683,7 +683,7 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <ol start="2">
 <li>
-<p>Put the Hadoop into the nfs</p>
+<p>Put the Hadoop into the NFS</p>
 </li>
 <li>
 <p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct</p>
@@ -700,10 +700,10 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 FS_DEFAULT_FS=file:///
 </code></pre>
 <ol start="2">
-<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support nfs in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support NFS in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
   <span class="hljs-attr">dolphinscheduler-resource-local:</span>
@@ -723,10 +723,10 @@ FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
 </code></pre>
 <p><code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> need to be modified to actual values</p>
 <blockquote>
-<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of domain name, because DolphinScheduler currently doesn't support S3 path style access</p>
+<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access</p>
 </blockquote>
 <h3>How to configure SkyWalking?</h3>
-<p>Modify SKYWALKING environment variables in <code>config.env.sh</code>:</p>
+<p>Modify SkyWalking environment variables in <code>config.env.sh</code>:</p>
 <pre><code>SKYWALKING_ENABLE=true
 SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
 SW_GRPC_LOG_SERVER_HOST=127.0.0.1
@@ -735,42 +735,42 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <h2>Appendix-Environment Variables</h2>
 <h3>Database</h3>
 <p><strong><code>DATABASE_TYPE</code></strong></p>
-<p>This environment variable sets the type for database. The default value is <code>postgresql</code>.</p>
+<p>This environment variable sets the type for the database. The default value is <code>postgresql</code>.</p>
 <p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_DRIVER</code></strong></p>
-<p>This environment variable sets the type for database. The default value is <code>org.postgresql.Driver</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the type for the database. The default value is <code>org.postgresql.Driver</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_HOST</code></strong></p>
-<p>This environment variable sets the host for database. The default value is <code>127.0.0.1</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the host for the database. The default value is <code>127.0.0.1</code>.</p>
+<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PORT</code></strong></p>
-<p>This environment variable sets the port for database. The default value is <code>5432</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the port for the database. The default value is <code>5432</code>.</p>
+<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_USERNAME</code></strong></p>
-<p>This environment variable sets the username for database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the username for the database. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PASSWORD</code></strong></p>
-<p>This environment variable sets the password for database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the password for the database. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_DATABASE</code></strong></p>
-<p>This environment variable sets the database for database. The default value is <code>dolphinscheduler</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the database for the database. The default value is <code>dolphinscheduler</code>.</p>
+<p><strong>Note</strong>: You must specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PARAMS</code></strong></p>
-<p>This environment variable sets the database for database. The default value is <code>characterEncoding=utf8</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the database for the database. The default value is <code>characterEncoding=utf8</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <h3>ZooKeeper</h3>
 <p><strong><code>ZOOKEEPER_QUORUM</code></strong></p>
 <p>This environment variable sets zookeeper quorum. The default value is <code>127.0.0.1:2181</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>.</p>
 <p><strong><code>ZOOKEEPER_ROOT</code></strong></p>
 <p>This environment variable sets zookeeper root directory for dolphinscheduler. The default value is <code>/dolphinscheduler</code>.</p>
 <h3>Common</h3>
 <p><strong><code>DOLPHINSCHEDULER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for dolphinscheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>, <code>logger-server</code>. The default value is empty.</p>
+<p>This environment variable sets JVM options for dolphinscheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>, <code>logger-server</code>. The default value is empty.</p>
 <p><strong><code>DATA_BASEDIR_PATH</code></strong></p>
-<p>User data directory path, self configuration, please make sure the directory exists and have read write permissions. The default value is <code>/tmp/dolphinscheduler</code></p>
+<p>User data directory path, self configuration, please make sure the directory exists and have read-write permissions. The default value is <code>/tmp/dolphinscheduler</code></p>
 <p><strong><code>RESOURCE_STORAGE_TYPE</code></strong></p>
-<p>This environment variable sets resource storage type for dolphinscheduler like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
+<p>This environment variable sets resource storage types for dolphinscheduler like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
 <p><strong><code>RESOURCE_UPLOAD_PATH</code></strong></p>
 <p>This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is <code>/dolphinscheduler</code>.</p>
 <p><strong><code>FS_DEFAULT_FS</code></strong></p>
@@ -782,31 +782,31 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p><strong><code>FS_S3A_SECRET_KEY</code></strong></p>
 <p>This environment variable sets s3 secret key for resource storage. The default value is <code>xxxxxxx</code>.</p>
 <p><strong><code>HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE</code></strong></p>
-<p>This environment variable sets whether to startup kerberos. The default value is <code>false</code>.</p>
+<p>This environment variable sets whether to startup Kerberos. The default value is <code>false</code>.</p>
 <p><strong><code>JAVA_SECURITY_KRB5_CONF_PATH</code></strong></p>
 <p>This environment variable sets java.security.krb5.conf path. The default value is <code>/opt/krb5.conf</code>.</p>
 <p><strong><code>LOGIN_USER_KEYTAB_USERNAME</code></strong></p>
-<p>This environment variable sets login user from keytab username. The default value is <code>hdfs@HADOOP.COM</code>.</p>
+<p>This environment variable sets login user from the keytab username. The default value is <code>hdfs@HADOOP.COM</code>.</p>
 <p><strong><code>LOGIN_USER_KEYTAB_PATH</code></strong></p>
-<p>This environment variable sets login user from keytab path. The default value is <code>/opt/hdfs.keytab</code>.</p>
+<p>This environment variable sets login user from the keytab path. The default value is <code>/opt/hdfs.keytab</code>.</p>
 <p><strong><code>KERBEROS_EXPIRE_TIME</code></strong></p>
-<p>This environment variable sets kerberos expire time, the unit is hour. The default value is <code>2</code>.</p>
+<p>This environment variable sets Kerberos expire time, the unit is hour. The default value is <code>2</code>.</p>
 <p><strong><code>HDFS_ROOT_USER</code></strong></p>
-<p>This environment variable sets hdfs root user when resource.storage.type=HDFS. The default value is <code>hdfs</code>.</p>
+<p>This environment variable sets HDFS root user when resource.storage.type=HDFS. The default value is <code>hdfs</code>.</p>
 <p><strong><code>RESOURCE_MANAGER_HTTPADDRESS_PORT</code></strong></p>
-<p>This environment variable sets resource manager httpaddress port. The default value is <code>8088</code>.</p>
+<p>This environment variable sets resource manager HTTP address port. The default value is <code>8088</code>.</p>
 <p><strong><code>YARN_RESOURCEMANAGER_HA_RM_IDS</code></strong></p>
 <p>This environment variable sets yarn resourcemanager ha rm ids. The default value is empty.</p>
 <p><strong><code>YARN_APPLICATION_STATUS_ADDRESS</code></strong></p>
 <p>This environment variable sets yarn application status address. The default value is <code>http://ds1:%s/ws/v1/cluster/apps/%s</code>.</p>
 <p><strong><code>SKYWALKING_ENABLE</code></strong></p>
-<p>This environment variable sets whether to enable skywalking. The default value is <code>false</code>.</p>
+<p>This environment variable sets whether to enable SkyWalking. The default value is <code>false</code>.</p>
 <p><strong><code>SW_AGENT_COLLECTOR_BACKEND_SERVICES</code></strong></p>
-<p>This environment variable sets agent collector backend services for skywalking. The default value is <code>127.0.0.1:11800</code>.</p>
+<p>This environment variable sets agent collector backend services for SkyWalking. The default value is <code>127.0.0.1:11800</code>.</p>
 <p><strong><code>SW_GRPC_LOG_SERVER_HOST</code></strong></p>
-<p>This environment variable sets grpc log server host for skywalking. The default value is <code>127.0.0.1</code>.</p>
+<p>This environment variable sets gRPC log server host for SkyWalking. The default value is <code>127.0.0.1</code>.</p>
 <p><strong><code>SW_GRPC_LOG_SERVER_PORT</code></strong></p>
-<p>This environment variable sets grpc log server port for skywalking. The default value is <code>11800</code>.</p>
+<p>This environment variable sets gRPC log server port for SkyWalking. The default value is <code>11800</code>.</p>
 <p><strong><code>HADOOP_HOME</code></strong></p>
 <p>This environment variable sets <code>HADOOP_HOME</code>. The default value is <code>/opt/soft/hadoop</code>.</p>
 <p><strong><code>HADOOP_CONF_DIR</code></strong></p>
@@ -827,7 +827,7 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p>This environment variable sets <code>DATAX_HOME</code>. The default value is <code>/opt/soft/datax</code>.</p>
 <h3>Master Server</h3>
 <p><strong><code>MASTER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>master-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
+<p>This environment variable sets JVM options for <code>master-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
 <p><strong><code>MASTER_EXEC_THREADS</code></strong></p>
 <p>This environment variable sets exec thread number for <code>master-server</code>. The default value is <code>100</code>.</p>
 <p><strong><code>MASTER_EXEC_TASK_NUM</code></strong></p>
@@ -843,25 +843,25 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p><strong><code>MASTER_TASK_COMMIT_INTERVAL</code></strong></p>
 <p>This environment variable sets task commit interval for <code>master-server</code>. The default value is <code>1</code>.</p>
 <p><strong><code>MASTER_MAX_CPULOAD_AVG</code></strong></p>
-<p>This environment variable sets max cpu load avg for <code>master-server</code>. The default value is <code>-1</code>.</p>
+<p>This environment variable sets max CPU load avg for <code>master-server</code>. The default value is <code>-1</code>.</p>
 <p><strong><code>MASTER_RESERVED_MEMORY</code></strong></p>
 <p>This environment variable sets reserved memory for <code>master-server</code>, the unit is G. The default value is <code>0.3</code>.</p>
 <h3>Worker Server</h3>
 <p><strong><code>WORKER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>worker-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
+<p>This environment variable sets JVM options for <code>worker-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
 <p><strong><code>WORKER_EXEC_THREADS</code></strong></p>
 <p>This environment variable sets exec thread number for <code>worker-server</code>. The default value is <code>100</code>.</p>
 <p><strong><code>WORKER_HEARTBEAT_INTERVAL</code></strong></p>
 <p>This environment variable sets heartbeat interval for <code>worker-server</code>. The default value is <code>10</code>.</p>
 <p><strong><code>WORKER_MAX_CPULOAD_AVG</code></strong></p>
-<p>This environment variable sets max cpu load avg for <code>worker-server</code>. The default value is <code>-1</code>.</p>
+<p>This environment variable sets max CPU load avg for <code>worker-server</code>. The default value is <code>-1</code>.</p>
 <p><strong><code>WORKER_RESERVED_MEMORY</code></strong></p>
 <p>This environment variable sets reserved memory for <code>worker-server</code>, the unit is G. The default value is <code>0.3</code>.</p>
 <p><strong><code>WORKER_GROUPS</code></strong></p>
 <p>This environment variable sets groups for <code>worker-server</code>. The default value is <code>default</code>.</p>
 <h3>Alert Server</h3>
 <p><strong><code>ALERT_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>alert-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>alert-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 <p><strong><code>XLS_FILE_PATH</code></strong></p>
 <p>This environment variable sets xls file path for <code>alert-server</code>. The default value is <code>/tmp/xls</code>.</p>
 <p><strong><code>MAIL_SERVER_HOST</code></strong></p>
@@ -892,10 +892,10 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p>This environment variable sets enterprise wechat users for <code>alert-server</code>. The default value is empty.</p>
 <h3>Api Server</h3>
 <p><strong><code>API_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>api-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>api-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 <h3>Logger Server</h3>
 <p><strong><code>LOGGER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>logger-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>logger-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/en-us/docs/1.3.8/user_doc/docker-deployment.json b/en-us/docs/1.3.8/user_doc/docker-deployment.json
index 33dae92..32d2dd9 100644
--- a/en-us/docs/1.3.8/user_doc/docker-deployment.json
+++ b/en-us/docs/1.3.8/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
+  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/expansion-reduction.html b/en-us/docs/1.3.8/user_doc/expansion-reduction.html
index 4427028..d0869fe 100644
--- a/en-us/docs/1.3.8/user_doc/expansion-reduction.html
+++ b/en-us/docs/1.3.8/user_doc/expansion-reduction.html
@@ -28,7 +28,7 @@
 <li>Check which version of DolphinScheduler is used in your existing environment, and get the installation package of the corresponding version; if the versions differ, there may be compatibility problems.</li>
 <li>Confirm the unified installation directory of the other nodes; this article assumes that DolphinScheduler is installed in the /opt/ directory, with the full path /opt/dolphinscheduler.</li>
 <li>Please download the corresponding version of the installation package to the server installation directory, uncompress it, rename it to dolphinscheduler, and store it in the /opt directory.</li>
-<li>Add database dependency package, this article use Mysql database, add mysql-connector-java driver package to /opt/dolphinscheduler/lib directory.</li>
+<li>Add the database dependency package; this article uses a MySQL database, so add the mysql-connector-java driver package to the /opt/dolphinscheduler/lib directory.</li>
 </ul>
 <pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> create the installation directory, please <span class="hljs-keyword">do</span> not create the installation directory <span class="hljs-keyword">in</span> /root, /home and other high privilege directories</span> 
 mkdir -p /opt
@@ -56,7 +56,7 @@ sed -i &#x27;s/Defaults    requirett/#Defaults    requirett/g&#x27; /etc/sudoers
 
 </code></pre>
 <pre><code class="language-markdown"> Attention:
-<span class="hljs-bullet"> -</span> Since it is sudo -u {linux-user} to switch between different linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
+<span class="hljs-bullet"> -</span> Since it is sudo -u {linux-user} to switch between different Linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
 <span class="hljs-bullet"> -</span> If you find the line &quot;Default requiretty&quot; in the /etc/sudoers file, please also comment it out.
 <span class="hljs-bullet"> -</span> If resource uploads are used, you also need to assign read and write permissions to the deployment user on <span class="hljs-code">`HDFS or MinIO`</span>.
 </code></pre>
diff --git a/en-us/docs/1.3.8/user_doc/expansion-reduction.json b/en-us/docs/1.3.8/user_doc/expansion-reduction.json
index 76c5066..e85f0aa 100644
--- a/en-us/docs/1.3.8/user_doc/expansion-reduction.json
+++ b/en-us/docs/1.3.8/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
+  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/flink-call.html b/en-us/docs/1.3.8/user_doc/flink-call.html
index daa8586..60e7bb4 100644
--- a/en-us/docs/1.3.8/user_doc/flink-call.html
+++ b/en-us/docs/1.3.8/user_doc/flink-call.html
@@ -14,15 +14,15 @@
 <h3>Create a queue</h3>
 <ol>
 <li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>
-<li>Fill in the name and value of queue, and click &quot;Submit&quot;</li>
+<li>Fill in the name and value of the queue, and click &quot;Submit&quot;</li>
 </ol>
 <p align="center">
    <img src="/img/api/create_queue.png" width="80%" />
  </p>
 <h3>Create a tenant</h3>
-<pre><code>1.The tenant corresponds to a Linux user, which the user worker uses to submit jobs. If Linux OS environment does not have this user, the worker will create this user when executing the script.
-2.Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and id number.  
-3.After creating a tenant, there will be a folder in the HDFS relevant directory.  
+<pre><code>1. The tenant corresponds to a Linux user, which the worker uses to submit jobs. If the Linux OS environment does not have this user, the worker will create this user when executing the script.
+2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and an ID number.  
+3. After creating a tenant, a folder will be created in the relevant HDFS directory.  
 </code></pre>
 <p align="center">
    <img src="/img/api/create_tenant.png" width="80%" />
@@ -66,7 +66,7 @@
 </li>
 <li>
 <p>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result</p>
-<pre><code>token:The Token just generated
+<pre><code>token: The Token just generated
 </code></pre>
 </li>
 </ol>
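 <p>Equivalently, the same call can be made from the command line; a minimal curl sketch (host, port and token value are placeholders for your own deployment):</p>
 <pre><code class="language-bash">curl -H &quot;token: your-generated-token&quot; \
   http://192.168.xx.xx:12345/dolphinscheduler/projects/query-project-list
 </code></pre>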
diff --git a/en-us/docs/1.3.8/user_doc/flink-call.json b/en-us/docs/1.3.8/user_doc/flink-call.json
index 42cffe1..a177fde 100644
--- a/en-us/docs/1.3.8/user_doc/flink-call.json
+++ b/en-us/docs/1.3.8/user_doc/flink-call.json
@@ -1,6 +1,6 @@
 {
   "filename": "flink-call.md",
-  "__html": "<h1>Flink Calls Operating steps</h1>\n<h3>Create a queue</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>\n<li>Fill in the name and value of queue, and click &quot;Submit&quot;</li>\n</ol>\n<p align=\"center\">\n   <img src=\"/img/api/create_queue.png\" width=\"80%\" />\n </p>\n<h3>Create a tenant</h3>\n<pre><code>1.The tenant corresponds to [...]
+  "__html": "<h1>Flink Calls Operating steps</h1>\n<h3>Create a queue</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>\n<li>Fill in the name and value of the queue, and click &quot;Submit&quot;</li>\n</ol>\n<p align=\"center\">\n   <img src=\"/img/api/create_queue.png\" width=\"80%\" />\n </p>\n<h3>Create a tenant</h3>\n<pre><code>1. The tenant correspon [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/flink-call.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html b/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html
index 90633e2..c1898a6 100644
--- a/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html
+++ b/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html
@@ -253,7 +253,7 @@ kubectl get deploy -n test # with test namespace
 <pre><code>kubectl scale --replicas=3 deploy dolphinscheduler-api
 kubectl scale --replicas=3 deploy dolphinscheduler-api -n test # with test namespace
 </code></pre>
-<p>List all statefulsets (aka <code>sts</code>):</p>
+<p>List all StatefulSets (aka <code>sts</code>):</p>
 <pre><code>kubectl get sts
 kubectl get sts -n test # with test namespace
 </code></pre>
@@ -380,7 +380,7 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Add a Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code></p>
 </li>
 </ol>
 <h3>How to support Python 2 pip and custom requirements.txt?</h3>
@@ -463,7 +463,7 @@ RUN apt-get update &amp;&amp; \
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Copy the Spark 2.4.7 release binary into Docker container</p>
+<p>Copy the Spark 2.4.7 release binary into the Docker container</p>
 </li>
 </ol>
 <pre><code class="language-bash">kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
@@ -481,7 +481,7 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
 ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="6">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -518,7 +518,7 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Copy the Spark 3.1.1 release binary into Docker container</p>
+<p>Copy the Spark 3.1.1 release binary into the Docker container</p>
 </li>
 </ol>
 <pre><code class="language-bash">kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
@@ -535,7 +535,7 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
 ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="6">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -543,7 +543,7 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
 <h3>How to support shared storage between Master, Worker and API server?</h3>
-<p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
+<p>For example, Master, Worker and API server may use Hadoop at the same time</p>
 <ol>
 <li>Modify the following configurations in <code>values.yaml</code></li>
 </ol>
diff --git a/en-us/docs/1.3.8/user_doc/kubernetes-deployment.json b/en-us/docs/1.3.8/user_doc/kubernetes-deployment.json
index 80fa6e5..6ab8a0d 100644
--- a/en-us/docs/1.3.8/user_doc/kubernetes-deployment.json
+++ b/en-us/docs/1.3.8/user_doc/kubernetes-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "kubernetes-deployment.md",
-  "__html": "<h1>QuickStart in Kubernetes</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://helm.sh/\">Helm</a> 3.1.0+</li>\n<li><a href=\"https://kubernetes.io/\">Kubernetes</a> 1.12+</li>\n<li>PV provisioner support in the underlying infrastructure</li>\n</ul>\n<h2>Installing the Chart</h2>\n<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href=\"/en-us/download/download.html\">download</a></p>\n<p>To install the chart wi [...]
+  "__html": "<h1>QuickStart in Kubernetes</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://helm.sh/\">Helm</a> 3.1.0+</li>\n<li><a href=\"https://kubernetes.io/\">Kubernetes</a> 1.12+</li>\n<li>PV provisioner support in the underlying infrastructure</li>\n</ul>\n<h2>Installing the Chart</h2>\n<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href=\"/en-us/download/download.html\">download</a></p>\n<p>To install the chart wi [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/metadata-1.3.html b/en-us/docs/1.3.8/user_doc/metadata-1.3.html
index 035d2c4..0c9b2e1 100644
--- a/en-us/docs/1.3.8/user_doc/metadata-1.3.html
+++ b/en-us/docs/1.3.8/user_doc/metadata-1.3.html
@@ -127,7 +127,7 @@
 <p><img src="/img/metadata-erd/user-queue-datasource.png" alt="image.png"></p>
 <ul>
 <li>Multiple users can belong to one tenant</li>
-<li>The queue field in t_ds_user table stores the queue_name information in t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.</li>
+<li>The queue field in the t_ds_user table stores the queue_name information in the t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.</li>
 <li>The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
 <a name="7euSN"></a></li>
 </ul>
@@ -363,7 +363,7 @@
 <tr>
 <td>process_instance_json</td>
 <td>longtext</td>
-<td>process instance json(copy的process definition 的json)</td>
+<td>process instance json</td>
 </tr>
 <tr>
 <td>flag</td>
@@ -378,7 +378,7 @@
 <tr>
 <td>is_sub_process</td>
 <td>int</td>
-<td>whether the process is sub process:  1 sub-process,0 not sub-process</td>
+<td>whether the process is sub process: 1 sub-process, 0 not sub-process</td>
 </tr>
 <tr>
 <td>executor_id</td>
diff --git a/en-us/docs/1.3.8/user_doc/metadata-1.3.json b/en-us/docs/1.3.8/user_doc/metadata-1.3.json
index b79044f..3a95421 100644
--- a/en-us/docs/1.3.8/user_doc/metadata-1.3.json
+++ b/en-us/docs/1.3.8/user_doc/metadata-1.3.json
@@ -1,6 +1,6 @@
 {
   "filename": "metadata-1.3.md",
-  "__html": "<h1>Dolphin Scheduler 1.3 MetaData</h1>\n<p><a name=\"V5KOl\"></a></p>\n<h3>Dolphin Scheduler 1.2 DB Table Overview</h3>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">Table Name</th>\n<th style=\"text-align:center\">Comment</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">t_ds_access_token</td>\n<td style=\"text-align:center\">token for access ds backend</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">t_ds_alert</td>\n<td style=\"text-align [...]
+  "__html": "<h1>Dolphin Scheduler 1.3 MetaData</h1>\n<p><a name=\"V5KOl\"></a></p>\n<h3>Dolphin Scheduler 1.2 DB Table Overview</h3>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">Table Name</th>\n<th style=\"text-align:center\">Comment</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">t_ds_access_token</td>\n<td style=\"text-align:center\">token for access ds backend</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">t_ds_alert</td>\n<td style=\"text-align [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/metadata-1.3.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/open-api.html b/en-us/docs/1.3.8/user_doc/open-api.html
index 4dab319..f3ed36f 100644
--- a/en-us/docs/1.3.8/user_doc/open-api.html
+++ b/en-us/docs/1.3.8/user_doc/open-api.html
@@ -44,7 +44,7 @@
 <p>projects/query-project-list</p>
 </blockquote>
 </li>
-<li>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result<pre><code>token:The Token just generated
+<li>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result<pre><code>token: The Token just generated
 </code></pre>
 </li>
 </ol>
diff --git a/en-us/docs/1.3.8/user_doc/open-api.json b/en-us/docs/1.3.8/user_doc/open-api.json
index f0e7121..5d5c449 100644
--- a/en-us/docs/1.3.8/user_doc/open-api.json
+++ b/en-us/docs/1.3.8/user_doc/open-api.json
@@ -1,6 +1,6 @@
 {
   "filename": "open-api.md",
-  "__html": "<h1>Open API</h1>\n<h2>Background</h2>\n<p>Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.</p>\n<h2>The Operation Steps of DS API Calls</h2>\n<h3>Create a token</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Token manage&quot; on the left, and click &quot;Create token&quot; to create a token.</li>\n</ol>\n<p align=\"center\ [...]
+  "__html": "<h1>Open API</h1>\n<h2>Background</h2>\n<p>Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.</p>\n<h2>The Operation Steps of DS API Calls</h2>\n<h3>Create a token</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Token manage&quot; on the left, and click &quot;Create token&quot; to create a token.</li>\n</ol>\n<p align=\"center\ [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/open-api.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/quick-start.html b/en-us/docs/1.3.8/user_doc/quick-start.html
index fdab1ab..7c8a90c 100644
--- a/en-us/docs/1.3.8/user_doc/quick-start.html
+++ b/en-us/docs/1.3.8/user_doc/quick-start.html
@@ -15,7 +15,7 @@
 <li>
 <p>Administrator user login</p>
 <blockquote>
-<p>Address:<a href="http://192.168.xx.xx:12345/dolphinscheduler">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password:admin/dolphinscheduler123</p>
+<p>Address: <a href="http://192.168.xx.xx:12345/dolphinscheduler">http://192.168.xx.xx:12345/dolphinscheduler</a>. Username and password: admin/dolphinscheduler123</p>
 </blockquote>
 </li>
 </ul>
@@ -47,20 +47,20 @@
     <img src="/img/alarm-group-en.png" width="60%" />
   </p>
 <ul>
-<li>Create an worker group</li>
+<li>Create a worker group</li>
 </ul>
    <p align="center">
       <img src="/img/worker-group-en.png" width="60%" />
     </p>
 <ul>
 <li>
-<p>Create an token</p>
+<p>Create a token</p>
 <p align="center">
    <img src="/img/token-en.png" width="60%" />
  </p>
 </li>
 <li>
-<p>Log in with regular users</p>
+<p>Log in as a regular user</p>
 </li>
 </ul>
 <blockquote>
diff --git a/en-us/docs/1.3.8/user_doc/quick-start.json b/en-us/docs/1.3.8/user_doc/quick-start.json
index f610920..5271fdb 100644
--- a/en-us/docs/1.3.8/user_doc/quick-start.json
+++ b/en-us/docs/1.3.8/user_doc/quick-start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick-start.md",
-  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:<a href=\"http://192.168.xx.xx:12345/dolphinscheduler\">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue-en.png\" width=\"60%\" />\n < [...]
+  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:<a href=\"http://192.168.xx.xx:12345/dolphinscheduler\">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password: admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue-en.png\" width=\"60%\" />\n  [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/quick-start.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html b/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html
index c3ab1c1..f467271 100644
--- a/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html
+++ b/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html
@@ -11,12 +11,12 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<p>The dolphinscheduler-skywalking module provides <a href="https://skywalking.apache.org/">Skywalking</a> monitor agent for the Dolphinscheduler project.</p>
-<p>This document describes how to enable Skywalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>
+<p>The dolphinscheduler-skywalking module provides a <a href="https://skywalking.apache.org/">SkyWalking</a> monitoring agent for the DolphinScheduler project.</p>
+<p>This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>
 <h1>Installation</h1>
-<p>The following configuration is used to enable Skywalking agent.</p>
+<p>The following configuration is used to enable the SkyWalking agent.</p>
 <h3>Through environment variable configuration (for Docker Compose)</h3>
-<p>Modify SKYWALKING environment variables in <code>docker/docker-swarm/config.env.sh</code>:</p>
+<p>Modify SkyWalking environment variables in <code>docker/docker-swarm/config.env.sh</code>:</p>
 <pre><code>SKYWALKING_ENABLE=true
 SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
 SW_GRPC_LOG_SERVER_HOST=127.0.0.1
@@ -40,23 +40,23 @@ apache/dolphinscheduler:1.3.8 all</span>
 <h3>Through install_config.conf configuration (for DolphinScheduler <a href="http://install.sh">install.sh</a>)</h3>
 <p>Add the following configurations to <code>${workDir}/conf/config/install_config.conf</code>.</p>
 <pre><code class="language-properties"><span class="hljs-comment">
-# skywalking config</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> enable skywalking tracking plugin</span>
+# SkyWalking config</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> enable SkyWalking tracking plugin</span>
 <span class="hljs-attr">enableSkywalking</span>=<span class="hljs-string">&quot;true&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking backend service address</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking backend service address</span>
 <span class="hljs-attr">skywalkingServers</span>=<span class="hljs-string">&quot;your.skywalking-oap-server.com:11800&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking log reporter host</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking log reporter host</span>
 <span class="hljs-attr">skywalkingLogReporterHost</span>=<span class="hljs-string">&quot;your.skywalking-log-reporter.com&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking log reporter port</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking log reporter port</span>
 <span class="hljs-attr">skywalkingLogReporterPort</span>=<span class="hljs-string">&quot;11800&quot;</span>
 
 </code></pre>
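 <p>After saving these values, re-run the installer from the installation directory so they take effect (a minimal sketch of the redeploy step):</p>
 <pre><code class="language-bash"># redeploy so the SkyWalking settings are applied
 sh install.sh
 </code></pre>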
 <h1>Usage</h1>
-<h3>Import dashboard</h3>
-<h4>Import dolphinscheduler dashboard to skywalking sever</h4>
-<p>Copy the <code>${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml</code> file into <code>${skywalking-oap-server.home}/config/ui-initialized-templates/</code> directory, and restart Skywalking oap-server.</p>
-<h4>View dolphinscheduler dashboard</h4>
-<p>If you have opened Skywalking dashboard with a browser before, you need to clear browser cache.</p>
+<h3>Import Dashboard</h3>
+<h4>Import DolphinScheduler Dashboard to SkyWalking Server</h4>
+<p>Copy the <code>${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml</code> file into the <code>${skywalking-oap-server.home}/config/ui-initialized-templates/</code> directory, and restart the SkyWalking oap-server.</p>
+<h4>View DolphinScheduler Dashboard</h4>
+<p>If you have opened SkyWalking dashboard with a browser before, you need to clear the browser cache.</p>
 <p><img src="/img/skywalking/import-dashboard-1.jpg" alt="img1"></p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
diff --git a/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.json b/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.json
index 2d52f4b..b146e12 100644
--- a/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.json
+++ b/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "skywalking-agent-deployment.md",
-  "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The dolphinscheduler-skywalking module provides <a href=\"https://skywalking.apache.org/\">Skywalking</a> monitor agent for the Dolphinscheduler project.</p>\n<p>This document describes how to enable Skywalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>\n<h1>Installation</h1>\n<p>The following configuration is used to enable Skywalking agent.</p>\n<h3>Through environment variable configuration (for Docker [...]
+  "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The dolphinscheduler-skywalking module provides <a href=\"https://skywalking.apache.org/\">SkyWalking</a> monitor agent for the Dolphinscheduler project.</p>\n<p>This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>\n<h1>Installation</h1>\n<p>The following configuration is used to enable SkyWalking agent.</p>\n<h3>Through environment variable configuration (for Docker [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/standalone-deployment.html b/en-us/docs/1.3.8/user_doc/standalone-deployment.html
index 39a80d1..df878a9 100644
--- a/en-us/docs/1.3.8/user_doc/standalone-deployment.html
+++ b/en-us/docs/1.3.8/user_doc/standalone-deployment.html
@@ -11,17 +11,17 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<h1>1、Install basic softwares (please install required softwares by yourself)</h1>
+<h1>1、Install Basic Software (please install required software by yourself)</h1>
 <ul>
 <li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>
 <li><a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>
 <li>ZooKeeper (3.4.6+) : Required</li>
 <li>pstree or psmisc : &quot;pstree&quot; is required for Mac OS and &quot;psmisc&quot; is required for Fedora/Red Hat/CentOS/Ubuntu/Debian</li>
-<li>Hadoop (2.6+) or MinIO : Optional. If you need resource function, for Standalone Deployment you can choose a local directory as the upload destination (this does not need Hadoop deployed). Of course, you can also choose to upload to Hadoop or MinIO.</li>
+<li>Hadoop (2.6+) or MinIO : Optional. If you need the resource function, for Standalone Deployment you can choose a local directory as the upload destination (this does not need Hadoop deployed). Of course, you can also choose to upload to Hadoop or MinIO.</li>
 </ul>
-<pre><code class="language-markdown"> Tips: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, only use their clients to run corresponding task.
+<pre><code class="language-markdown"> Tips: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, only use their clients to run the corresponding task.
 </code></pre>
-<h1>2、Download the binary tar.gz package.</h1>
+<h1>2、Download the Binary tar.gz Package</h1>
 <ul>
 <li>Please download the latest version of the installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: <a href="/en-us/download/download.html">Download</a>. Download the package, move it to the deployment directory, and uncompress it.</li>
 </ul>
@@ -35,9 +35,9 @@ tar -zxvf apache-dolphinscheduler-1.3.8-bin.tar.gz -C /opt/dolphinscheduler;
 #</span><span class="bash"> rename</span>
 mv apache-dolphinscheduler-1.3.8-bin  dolphinscheduler-bin
 </code></pre>
-<h1>3、Create deployment user and assign directory operation permissions</h1>
+<h1>3、Create Deployment User and Assign Directory Operation Permissions</h1>
 <ul>
-<li>Create a deployment user, and be sure to configure sudo secret-free. Here take the creation of a dolphinscheduler user as example.</li>
+<li>Create a deployment user, and be sure to configure sudo secret-free. Here we take the creation of a dolphinscheduler user as an example.</li>
 </ul>
 <pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> To create a user, you need to <span class="hljs-built_in">log</span> <span class="hljs-keyword">in</span> as root and <span class="hljs-built_in">set</span> the deployment user name.</span>
 useradd dolphinscheduler;
@@ -54,10 +54,10 @@ chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
 </code></pre>
 <pre><code> Notes:
 - Because task execution is based on 'sudo -u {linux-user}' to switch among different Linux users and implement multi-tenant job running, the deployment user must have sudo permissions and be secret-free. If beginners don't understand this, they can ignore this point for now.
- - Please comment out line &quot;Defaults requirett&quot;, if it present in &quot;/etc/sudoers&quot; file. 
- - If you need to use resource upload, you need to assign user the permission to operate the local file system, HDFS or MinIO.
+ - Please comment out the line &quot;Defaults requiretty&quot; if it is present in the &quot;/etc/sudoers&quot; file. 
+ - If you need to use resource upload, you need to assign the user the permission to operate the local file system, HDFS or MinIO.
 </code></pre>
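 <p>A minimal sketch of granting those permissions as root (illustrative only; adjust to your own security policy, and note it assumes the dolphinscheduler user created above):</p>
 <pre><code class="language-bash"># grant passwordless sudo to the deployment user
 echo &#x27;dolphinscheduler  ALL=(ALL)  NOPASSWD: ALL&#x27; &gt;&gt; /etc/sudoers
 # comment out &quot;Defaults requiretty&quot; if present
 sed -i &#x27;s/Defaults    requiretty/#Defaults    requiretty/g&#x27; /etc/sudoers
 </code></pre>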
-<h1>4、SSH secret-free configuration</h1>
+<h1>4、SSH Secret-Free Configuration</h1>
 <ul>
 <li>
 <p>Switch to the deployment user and configure SSH local secret-free login</p>
@@ -69,8 +69,8 @@ chmod 600 ~/.ssh/authorized_keys
 </code></pre>
 </li>
 </ul>
-<p>​  Note: <em>If configure successed, the dolphinscheduler user does not need to enter a password when executing the command <code>ssh localhost</code>.</em></p>
-<h1>5、Database initialization</h1>
+<p>​  Note: <em>If the configuration is successful, the dolphinscheduler user does not need to enter a password when executing the command <code>ssh localhost</code>.</em></p>
+<h1>5、Database Initialization</h1>
 <ul>
 <li>Log in to the database; the default database type is PostgreSQL. If you choose MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.</li>
 </ul>
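 <p>For example, a minimal sketch of the MySQL variant (database name, user and password are placeholders; the schema script ships with the release, so verify its name in your package):</p>
 <pre><code class="language-bash"># create the metadata database (run as a MySQL administrator)
 mysql -uroot -p -e &quot;CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8;&quot;
 # then initialize the tables from the DolphinScheduler directory
 sh ./script/create-dolphinscheduler.sh
 </code></pre>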
@@ -113,7 +113,7 @@ chmod 600 ~/.ssh/authorized_keys
 </li>
 </ul>
 <p>​       <em>Note: If you execute the above script and report &quot;/bin/java: No such file or directory&quot; error, please configure JAVA_HOME and PATH variables in /etc/profile.</em></p>
-<h1>6、Modify runtime parameters.</h1>
+<h1>6、Modify Runtime Parameters.</h1>
 <ul>
 <li>
 <p>Modify the environment variables in the <code>dolphinscheduler_env.sh</code> file under the 'conf/env' directory (taking the relevant software installed under '/opt/soft' as an example)</p>
@@ -164,7 +164,7 @@ zkQuorum=&quot;localhost:2181&quot;
 installPath=&quot;/opt/soft/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> deployment user</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself</span>
+<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate HDFS. If HDFS is enabled, the root directory needs to be created by itself</span>
 deployUser=&quot;dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> alert config,take QQ email <span class="hljs-keyword">for</span> example</span>
@@ -175,7 +175,7 @@ mailProtocol=&quot;SMTP&quot;
 mailServerHost=&quot;smtp.qq.com&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail server port</span>
-<span class="hljs-meta">#</span><span class="bash"> note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, port may be different, make sure the port is correct.</span>
+<span class="hljs-meta">#</span><span class="bash"> note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, the port may be different, make sure the port is correct.</span>
 mailServerPort=&quot;25&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail sender</span>
@@ -185,13 +185,13 @@ mailSender=&quot;xxx@qq.com&quot;
 mailUser=&quot;xxx@qq.com&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail sender password</span>
-<span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is email service authorization code, not the email login password.</span>
+<span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is the email service authorization code, not the email login password.</span>
 mailPassword=&quot;xxx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
 starttlsEnable=&quot;true&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
 <span class="hljs-meta">#</span><span class="bash"> note: only one of TLS and SSL can be <span class="hljs-keyword">in</span> the <span class="hljs-literal">true</span> state.</span>
 sslEnable=&quot;false&quot;
 <span class="hljs-meta">
@@ -202,17 +202,17 @@ sslTrust=&quot;smtp.qq.com&quot;
 resourceStorageType=&quot;HDFS&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> here is an example of saving to a <span class="hljs-built_in">local</span> file system</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: If you want to upload resource file(jar file and so on)to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml of hadoop cluster <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or ho [...]
+<span class="hljs-meta">#</span><span class="bash"> Note: If you want to upload resource file(jar file and so on)to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml of Hadoop cluster <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or ho [...]
 defaultFS=&quot;file:///data/dolphinscheduler&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> <span class="hljs-keyword">if</span> not use hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> not use Hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
 <span class="hljs-meta">#</span><span class="bash"> Note: For tasks that depend on YARN to execute, you need to ensure that YARN information is configured correctly <span class="hljs-keyword">in</span> order to ensure successful execution results.</span>
 yarnHaIps=&quot;192.168.xx.xx,192.168.xx.xx&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 to actual resourcemanager hostname.</span>
 singleYarnIp=&quot;yarnIp1&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
+#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this Hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
 resourceUploadPath=&quot;/data/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> specify the user who have permissions to create directory under HDFS/S3 root path</span>
@@ -241,7 +241,7 @@ alertServer=&quot;localhost&quot;
 apiServers=&quot;localhost&quot;
 
 </code></pre>
-<p><em>Attention:</em> if you need upload resource function, please execute below command:</p>
+<p><em>Attention:</em> if you need the resource upload function, please execute the command below:</p>
 <pre><code>
 sudo mkdir /data/dolphinscheduler
 sudo chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler 
@@ -260,7 +260,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
 </code></pre>
 </li>
 <li>
-<p>After script completed, the following 5 services will be started. Use <code>jps</code> command to check whether the services started (<code>jps</code> comes with <code>java JDK</code>)</p>
+<p>After the script completes, the following 5 services will be started. Use the <code>jps</code> command to check whether the services started (<code>jps</code> comes with the Java JDK)</p>
 </li>
 </ul>
 <pre><code class="language-aidl">    MasterServer         ----- master service
@@ -270,7 +270,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
     AlertServer          ----- alert service
 </code></pre>
 <p>If the above services started normally, the automatic deployment is successful.</p>
-<p>After the deployment is success, you can view logs. Logs stored in the logs folder.</p>
+<p>After the deployment succeeds, you can view the logs, which are stored in the logs folder.</p>
 <pre><code class="language-log"> logs/
     ├── dolphinscheduler-alert-server.log
     ├── dolphinscheduler-master-server.log
@@ -278,7 +278,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
    ├── dolphinscheduler-api-server.log
    ├── dolphinscheduler-logger-server.log
 </code></pre>
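 <p>For example, to follow one of these logs while verifying the deployment (a generic sketch; pick the service you are inspecting):</p>
 <pre><code class="language-bash">tail -f logs/dolphinscheduler-master-server.log
 </code></pre>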
-<h1>8、login</h1>
+<h1>8、Login</h1>
 <ul>
 <li>
 <p>Access the front page address, interface IP (self-modified)
@@ -288,7 +288,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
  </p>
 </li>
 </ul>
-<h1>9、Start and stop service</h1>
+<h1>9、Start and Stop Service</h1>
 <ul>
 <li>
 <p>Stop all services</p>
diff --git a/en-us/docs/1.3.8/user_doc/standalone-deployment.json b/en-us/docs/1.3.8/user_doc/standalone-deployment.json
index f9e2aaf..cc61174 100644
--- a/en-us/docs/1.3.8/user_doc/standalone-deployment.json
+++ b/en-us/docs/1.3.8/user_doc/standalone-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "standalone-deployment.md",
-  "__html": "<h1>Standalone Deployment</h1>\n<h1>1、Install basic softwares (please install required softwares by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree  [...]
+  "__html": "<h1>Standalone Deployment</h1>\n<h1>1、Install Basic Software (please install required software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/standalone-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/system-manual.html b/en-us/docs/1.3.8/user_doc/system-manual.html
index cff31d6..dbf7caf 100644
--- a/en-us/docs/1.3.8/user_doc/system-manual.html
+++ b/en-us/docs/1.3.8/user_doc/system-manual.html
@@ -73,7 +73,7 @@
 </ol>
 <ul>
 <li>
-<p><strong>Increase the order of task execution:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect the task; as shown in the figure below, task 2 and task 3 are executed in parallel, When task 1 finished execute, tasks 2 and 3 will be executed simultaneously.</p>
+<p><strong>Increase the order of task execution:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect the tasks; as shown in the figure below, task 2 and task 3 are executed in parallel: when task 1 finishes executing, tasks 2 and 3 will be executed simultaneously.</p>
 <p align="center">
    <img src="/img/dag6.png" width="80%" />
 </p>
@@ -652,9 +652,9 @@ worker.groups=default,test
  </p>
 <h4>6.1.3 Zookeeper monitoring</h4>
 <ul>
-<li>Mainly related configuration information of each worker and master in zookpeeper.</li>
+<li>Mainly shows the related configuration information of each worker and master in ZooKeeper.</li>
 </ul>
-<p align="center">
+<p alignlinux ="center">
    <img src="/img/zookeeper-monitor-en.png" width="80%" />
  </p>
 <h4>6.1.4 DB monitoring</h4>
@@ -677,7 +677,7 @@ worker.groups=default,test
 <h3>7. <span id=TaskParamers>Task node type and parameter settings</span></h3>
 <h4>7.1 Shell node</h4>
 <blockquote>
-<p>Shell node, when the worker is executed, a temporary shell script is generated, and the linux user with the same name as the tenant executes the script.</p>
+<p>Shell node, when the worker is executed, a temporary shell script is generated, and the Linux user with the same name as the tenant executes the script.</p>
 </blockquote>
 <ul>
 <li>
diff --git a/en-us/docs/1.3.8/user_doc/system-manual.json b/en-us/docs/1.3.8/user_doc/system-manual.json
index d01effd..5a38ef9 100644
--- a/en-us/docs/1.3.8/user_doc/system-manual.json
+++ b/en-us/docs/1.3.8/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p>\n<h3>2. Project management</h3>\n<h4>2.1 Create project< [...]
+  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p>\n<h3>2. Project management</h3>\n<h4>2.1 Create project< [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/task-structure.html b/en-us/docs/1.3.8/user_doc/task-structure.html
index e8d8eb8..d90c26c 100644
--- a/en-us/docs/1.3.8/user_doc/task-structure.html
+++ b/en-us/docs/1.3.8/user_doc/task-structure.html
@@ -11,7 +11,7 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<p>All tasks created in Dolphinscheduler are saved in the t_ds_process_definition table.</p>
+<p>All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.</p>
 <p>The following shows the 't_ds_process_definition' table structure:</p>
 <table>
 <thead>
diff --git a/en-us/docs/1.3.8/user_doc/task-structure.json b/en-us/docs/1.3.8/user_doc/task-structure.json
index e3885b8..d005b69 100644
--- a/en-us/docs/1.3.8/user_doc/task-structure.json
+++ b/en-us/docs/1.3.8/user_doc/task-structure.json
@@ -1,6 +1,6 @@
 {
   "filename": "task-structure.md",
-  "__html": "<h1>Overall Tasks Storage Structure</h1>\n<p>All tasks created in Dolphinscheduler are saved in the t_ds_process_definition table.</p>\n<p>The following shows the 't_ds_process_definition' table structure:</p>\n<table>\n<thead>\n<tr>\n<th>No.</th>\n<th>field</th>\n<th>type</th>\n<th>description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>1</td>\n<td>id</td>\n<td>int(11)</td>\n<td>primary key</td>\n</tr>\n<tr>\n<td>2</td>\n<td>name</td>\n<td>varchar(255)</td>\n<td>process defin [...]
+  "__html": "<h1>Overall Tasks Storage Structure</h1>\n<p>All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.</p>\n<p>The following shows the 't_ds_process_definition' table structure:</p>\n<table>\n<thead>\n<tr>\n<th>No.</th>\n<th>field</th>\n<th>type</th>\n<th>description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>1</td>\n<td>id</td>\n<td>int(11)</td>\n<td>primary key</td>\n</tr>\n<tr>\n<td>2</td>\n<td>name</td>\n<td>varchar(255)</td>\n<td>process defin [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/task-structure.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/1.3.8/user_doc/upgrade.html b/en-us/docs/1.3.8/user_doc/upgrade.html
index 96935e1..0fe41b3 100644
--- a/en-us/docs/1.3.8/user_doc/upgrade.html
+++ b/en-us/docs/1.3.8/user_doc/upgrade.html
@@ -11,21 +11,21 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<h2>1. Back up previous version's files and database.</h2>
-<h2>2. Stop all services of DolphinScheduler.</h2>
+<h2>1. Back Up Previous Version's Files and Database.</h2>
+<h2>2. Stop All Services of DolphinScheduler.</h2>
 <p><code>sh ./script/stop-all.sh</code></p>
-<h2>3. Download the new version's installation package.</h2>
+<h2>3. Download the New Version's Installation Package.</h2>
 <ul>
 <li><a href="/en-us/download/download.html">Download</a> the latest version of the installation packages.</li>
 <li>The following upgrade operations need to be performed in the new version's directory.</li>
 </ul>
-<h2>4. Database upgrade</h2>
+<h2>4. Database Upgrade</h2>
 <ul>
 <li>
 <p>Modify the following properties in conf/datasource.properties.</p>
 </li>
 <li>
-<p>If you use MySQL as database to run DolphinScheduler, please comment out PostgreSQL releated configurations, and add mysql connector jar into lib dir, here we download mysql-connector-java-5.1.47.jar, and then correctly config database connect infoformation. You can download mysql connector jar <a href="https://downloads.MySQL.com/archives/c-j/">here</a>. Alternatively if you use Postgres as database, you just need to comment out Mysql related configurations, and correctly config data [...]
+<p>If you use MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL related configurations and add the mysql connector jar into the lib dir; here we download mysql-connector-java-5.1.47.jar and then correctly configure the database connection information. You can download the mysql connector jar <a href="https://downloads.MySQL.com/archives/c-j/">here</a>. Alternatively, if you use Postgres as the database, you just need to comment out the MySQL related configurations, and correctly config da [...]
 <pre><code class="language-properties"><span class="hljs-comment">  # postgre</span>
 <span class="hljs-comment">  #spring.datasource.driver-class-name=org.postgresql.Driver</span>
 <span class="hljs-comment">  #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler</span>
@@ -41,20 +41,20 @@
 <p><code>sh ./script/upgrade-dolphinscheduler.sh</code></p>
 </li>
 </ul>
-<h2>5. Backend service upgrade.</h2>
-<h3>5.1 Modify the content in <code>conf/config/install_config.conf</code> file.</h3>
+<h2>5. Backend Service Upgrade.</h2>
+<h3>5.1 Modify the Content in <code>conf/config/install_config.conf</code> File.</h3>
 <ul>
 <li>For Standalone Deployment, please refer to [6, Modify running arguments] in <a href="/en-us/docs/1.3.8/user_doc/standalone-deployment.html">Standalone-Deployment</a>.</li>
 <li>For Cluster Deployment, please refer to [6, Modify running arguments] in <a href="/en-us/docs/1.3.8/user_doc/cluster-deployment.html">Cluster-Deployment</a>.</li>
 </ul>
-<h4>Masters need attentions</h4>
+<h4>Masters Need Attention</h4>
 <p>Creating a worker group has a different design in version 1.3.1:</p>
 <ul>
-<li>Brfore version 1.3.1 worker group can be created through UI interface.</li>
+<li>Before version 1.3.1, a worker group could be created through the UI interface.</li>
 <li>Since version 1.3.1, a worker group can be created by modifying the worker configuration.</li>
 </ul>
-<h4>When upgrade from version before 1.3.1 to 1.3.2, below operations are what we need to do to keep worker group config consist with previous.</h4>
-<p>1, Go to the backup database, search records in t_ds_worker_group table, mainly focus id, name and ip these three columns.</p>
+<h4>When Upgrading from a Version Before 1.3.1 to 1.3.2, the Following Operations Keep the Worker Group Config Consistent with the Previous Version.</h4>
+<p>1. Go to the backup database and search the records in the t_ds_worker_group table, focusing mainly on the id, name and IP columns.</p>
 <table>
 <thead>
 <tr>
@@ -100,13 +100,13 @@
 </tr>
 </tbody>
 </table>
-<p>To keep worker group config consistent with previous version, we need to modify workers config item as below:</p>
-<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash">worker service is deployed on <span class="hljs-built_in">which</span> machine, and also specify <span class="hljs-built_in">which</span> worker group this worker belong to.</span> 
+<p>To keep worker group config consistent with the previous version, we need to modify workers config item as below:</p>
+<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash">worker service is deployed on <span class="hljs-built_in">which</span> machine, and also specify <span class="hljs-built_in">which</span> worker group this worker belongs to.</span> 
 workers=&quot;ds1:service1,ds2:service2,ds3:service2&quot;
 </code></pre>
-<h4>The worker group has been enhanced in version 1.3.2.</h4>
+<h4>The Worker Group Has Been Enhanced in Version 1.3.2.</h4>
 <p>In 1.3.1, a worker can't belong to more than one worker group; in 1.3.2 this is supported. So workers=&quot;ds1:service1,ds1:service2&quot; is not supported in 1.3.1 but is supported in 1.3.2.</p>
-<h3>5.2 Execute deploy script.</h3>
+<h3>5.2 Execute Deploy Script.</h3>
 <pre><code class="language-shell">`sh install.sh`
 </code></pre>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
diff --git a/en-us/docs/1.3.8/user_doc/upgrade.json b/en-us/docs/1.3.8/user_doc/upgrade.json
index fb3c65d..bfbf29a 100644
--- a/en-us/docs/1.3.8/user_doc/upgrade.json
+++ b/en-us/docs/1.3.8/user_doc/upgrade.json
@@ -1,6 +1,6 @@
 {
   "filename": "upgrade.md",
-  "__html": "<h1>DolphinScheduler upgrade documentation</h1>\n<h2>1. Back up previous version's files and database.</h2>\n<h2>2. Stop all services of DolphinScheduler.</h2>\n<p><code>sh ./script/stop-all.sh</code></p>\n<h2>3. Download the new version's installation package.</h2>\n<ul>\n<li><a href=\"/en-us/download/download.html\">Download</a> the latest version of the installation packages.</li>\n<li>The following upgrade operations need to be performed in the new version's directory.</ [...]
+  "__html": "<h1>DolphinScheduler upgrade documentation</h1>\n<h2>1. Back Up Previous Version's Files and Database.</h2>\n<h2>2. Stop All Services of DolphinScheduler.</h2>\n<p><code>sh ./script/stop-all.sh</code></p>\n<h2>3. Download the New Version's Installation Package.</h2>\n<ul>\n<li><a href=\"/en-us/download/download.html\">Download</a> the latest version of the installation packages.</li>\n<li>The following upgrade operations need to be performed in the new version's directory.</ [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/upgrade.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/ambari-integration.html b/en-us/docs/latest/user_doc/ambari-integration.html
index 7cb0c8c..0731345 100644
--- a/en-us/docs/latest/user_doc/ambari-integration.html
+++ b/en-us/docs/latest/user_doc/ambari-integration.html
@@ -26,7 +26,7 @@
 </ul>
 </li>
 <li>
-<p>Create an installation for DolphinScheduler with the user have read and write access to the installation directory (/opt/soft)</p>
+<p>Create an installation directory for DolphinScheduler with a user that has read and write access to the installation directory (/opt/soft)</p>
 </li>
 <li>
 <p>Install with rpm package</p>
@@ -38,7 +38,7 @@
 <li>Execute with DolphinScheduler installation user: <code>rpm -ivh apache-dolphinscheduler-xxx.noarch.rpm</code></li>
 <li>mysql-connector-java will not be included when packaging with the default POM file.</li>
 <li>The RPM package was packaged in the project with the installation path of /opt/soft.
-If you use mysql as the database, you need add it manually.</li>
+If you use MySQL as the database, you need to add it manually.</li>
 </ul>
 </li>
 <li>
@@ -92,7 +92,7 @@ flush privileges;
 <p><img src="https://dolphinscheduler.apache.org/img/ambari-plugin/DS2_AMBARI_004.png" alt=""></p>
 </li>
 <li>
-<p>System Env Optimization will export some system environment config. Modify according to actual situation</p>
+<p>System Env Optimization will export some system environment config. Modify according to the actual situation</p>
 <p><img src="https://dolphinscheduler.apache.org/img/ambari-plugin/DS2_AMBARI_020.png" alt=""></p>
 </li>
 <li>
diff --git a/en-us/docs/latest/user_doc/ambari-integration.json b/en-us/docs/latest/user_doc/ambari-integration.json
index 471386d..0dc3bdb 100644
--- a/en-us/docs/latest/user_doc/ambari-integration.json
+++ b/en-us/docs/latest/user_doc/ambari-integration.json
@@ -1,6 +1,6 @@
 {
   "filename": "ambari-integration.md",
-  "__html": "<h3>Instructions for using the DolphinScheduler's Ambari plug-in</h3>\n<h4>Note</h4>\n<ol>\n<li>This document is intended for users with a basic understanding of Ambari</li>\n<li>This document is a description of adding the DolphinScheduler service to the installed Ambari service</li>\n<li>This document is based on version 2.5.2 of Ambari</li>\n</ol>\n<h4>Installation preparation</h4>\n<ol>\n<li>\n<p>Prepare the RPM packages</p>\n<ul>\n<li>It is generated by executing the co [...]
+  "__html": "<h3>Instructions for using the DolphinScheduler's Ambari plug-in</h3>\n<h4>Note</h4>\n<ol>\n<li>This document is intended for users with a basic understanding of Ambari</li>\n<li>This document is a description of adding the DolphinScheduler service to the installed Ambari service</li>\n<li>This document is based on version 2.5.2 of Ambari</li>\n</ol>\n<h4>Installation preparation</h4>\n<ol>\n<li>\n<p>Prepare the RPM packages</p>\n<ul>\n<li>It is generated by executing the co [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/ambari-integration.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/architecture-design.html b/en-us/docs/latest/user_doc/architecture-design.html
index 3035cf4..6dd8ddb 100644
--- a/en-us/docs/latest/user_doc/architecture-design.html
+++ b/en-us/docs/latest/user_doc/architecture-design.html
@@ -20,17 +20,17 @@
         <em>dag example</em>
   </p>
 </p>
-<p><strong>Process definition</strong>:Visualization formed by dragging task nodes and establishing task node associations<strong>DAG</strong></p>
-<p><strong>Process instance</strong>:The process instance is the instantiation of the process definition, which can be generated by manual start or scheduled scheduling. Each time the process definition runs, a process instance is generated</p>
-<p><strong>Task instance</strong>:The task instance is the instantiation of the task node in the process definition, which identifies the specific task execution status</p>
-<p><strong>Task type</strong>: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), and plans to support dynamic plug-in expansion, note: <strong>SUB_PROCESS</strong>  It is also a separate process definition that can be started and executed separately</p>
-<p><strong>Scheduling method:</strong> The system supports scheduled scheduling and manual scheduling based on cron expressions. Command type support: start workflow, start execution from current node, resume fault-tolerant workflow, resume pause process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread。Among them <strong>Resume fault-tolerant workflow</strong> 和 <strong>Resume waiting thread</strong> The two command types are used by the in [...]
-<p><strong>Scheduled</strong>:System adopts <strong>quartz</strong> distributed scheduler, and supports the visual generation of cron expressions</p>
-<p><strong>Rely</strong>:The system not only supports <strong>DAG</strong> simple dependencies between the predecessor and successor nodes, but also provides <strong>task dependent</strong> nodes, supporting <strong>between processes</strong></p>
-<p><strong>Priority</strong> :Support the priority of process instances and task instances, if the priority of process instances and task instances is not set, the default is first-in first-out</p>
-<p><strong>Email alert</strong>:Support <strong>SQL task</strong> Query result email sending, process instance running result email alert and fault tolerance alert notification</p>
-<p><strong>Failure strategy</strong>:For tasks running in parallel, if a task fails, two failure strategy processing methods are provided. <strong>Continue</strong> refers to regardless of the status of the task running in parallel until the end of the process failure. <strong>End</strong> means that once a failed task is found, Kill will also run the parallel task at the same time, and the process fails and ends</p>
-<p><strong>Complement</strong>:Supplement historical data,Supports <strong>interval parallel and serial</strong> two complement methods</p>
+<p><strong>Process definition</strong>: A visual <strong>DAG</strong> formed by dragging task nodes and establishing associations between them</p>
+<p><strong>Process instance</strong>: The instantiation of a process definition, generated either by a manual start or by scheduled triggering. Each run of a process definition produces a process instance</p>
+<p><strong>Task instance</strong>: The instantiation of a task node in a process definition, identifying the execution status of a specific task</p>
+<p><strong>Task type</strong>: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, DEPENDENT (depends), with plans to support dynamic plug-in expansion. Note: a <strong>SUB_PROCESS</strong> is itself a separate process definition that can be started and executed on its own</p>
+<p><strong>Scheduling method</strong>: The system supports scheduled scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement, timing, rerun, pause, stop, resume waiting thread. Among them, the <strong>Resume fault-tolerant workflow</strong> and <strong>Resume waiting thread</strong> command types are used by the in [...]
+<p><strong>Scheduled</strong>: The system adopts the <strong>Quartz</strong> distributed scheduler and supports the visual generation of cron expressions</p>
+<p><strong>Rely</strong>: The system not only supports simple <strong>DAG</strong> dependencies between predecessor and successor nodes, but also provides <strong>task dependent</strong> nodes, supporting custom task dependencies <strong>between processes</strong></p>
+<p><strong>Priority</strong>: Supports priorities for process instances and task instances; if no priority is set, the default is first-in-first-out</p>
+<p><strong>Email alert</strong>: Supports emailing <strong>SQL task</strong> query results, email alerts for process instance results, and fault tolerance alert notifications</p>
+<p><strong>Failure strategy</strong>: For tasks running in parallel, two processing methods are provided when a task fails. <strong>Continue</strong> means the parallel tasks keep running regardless of the failed task's status until the process ends in failure. <strong>End</strong> means that once a failed task is found, the tasks running in parallel are killed and the process fails and ends</p>
+<p><strong>Complement</strong>: Supplements historical data, supporting two complement methods: <strong>interval parallel and serial</strong></p>
 <h3>2.System Structure</h3>
 <h4>2.1 System architecture diagram</h4>
 <p align="center">
@@ -169,7 +169,7 @@ In the above figure, MainFlowThread waits for the end of SubFlowThread1, SubFlow
 <li>Judge the single-master thread pool. If the thread pool is full, let the thread fail directly.</li>
 <li>Add a Command type for insufficient resources. If the thread pool is insufficient, suspend the main process. Later, when threads become available in the thread pool, the process suspended due to insufficient resources can be woken up to execute again.</li>
 </ol>
-<p>note:The Master Scheduler thread is executed by FIFO when acquiring the Command.</p>
+<p>note: The Master Scheduler thread acquires Commands in FIFO order.</p>
 <p>So we chose the third way to solve the problem of insufficient threads.</p>
 <h5>Four、Fault-tolerant design</h5>
 <p>Fault tolerance is divided into service downtime fault tolerance and task retry, and service downtime fault tolerance is divided into master fault tolerance and worker fault tolerance.</p>
@@ -192,7 +192,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
  <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/fault-tolerant_worker.png" alt="Worker fault tolerance flow chart"  width="40%" />
  </p>
-<p>Once the Master Scheduler thread finds that the task instance is in the &quot;fault tolerant&quot; state, it takes over the task and resubmits it.</p>
+<p>Once the Master Scheduler thread finds that the task instance is in the &quot;fault-tolerant&quot; state, it takes over the task and resubmits it.</p>
 <p>Note: Due to &quot;network jitter&quot;, the node may lose its heartbeat with ZooKeeper in a short period of time, and a remove event for the node may occur. For this situation, we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.</p>
 <h6>2.Task failed and try again</h6>
 <p>Here we must first distinguish the concepts of task failure retry, process failure recovery, and process failure rerun:</p>
@@ -218,7 +218,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
 <li>Tasks are processed from high to low priority: <strong>priority of different process instances</strong> takes precedence over <strong>priority of the same process instance</strong>, which takes precedence over <strong>priority of tasks within the same process</strong>, which takes precedence over the <strong>submission order of tasks within the same process</strong>.
 <ul>
 <li>
-<p>The specific implementation is to parse the priority according to the json of the task instance, and then save the <strong>process instance priority_process instance id_task priority_task id</strong> information in the ZooKeeper task queue, when obtained from the task queue, pass String comparison can get the tasks that need to be executed first</p>
+<p>The specific implementation is to parse the priority from the JSON of the task instance, and then save the <strong>process instance priority_process instance id_task priority_task id</strong> information in the ZooKeeper task queue; when fetching from the task queue, a string comparison yields the tasks that need to be executed first</p>
 <ul>
 <li>
 <p>The priority of the process definition is to consider that some processes need to be processed before other processes. This can be configured when the process is started or scheduled to start. There are 5 levels in total, which are HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below</p>
@@ -243,7 +243,7 @@ After the fault tolerance of ZooKeeper Master is completed, it is re-scheduled b
 <p>Since Web (UI) and Worker are not necessarily on the same machine, viewing the log cannot be like querying a local file. There are two options:</p>
 </li>
 <li>
-<p>Put logs on ES search engine</p>
+<p>Put logs on the ES search engine</p>
 </li>
 <li>
 <p>Obtain remote log information through Netty communication</p>
diff --git a/en-us/docs/latest/user_doc/architecture-design.json b/en-us/docs/latest/user_doc/architecture-design.json
index 80707a5..aca1fb9 100644
--- a/en-us/docs/latest/user_doc/architecture-design.json
+++ b/en-us/docs/latest/user_doc/architecture-design.json
@@ -1,6 +1,6 @@
 {
   "filename": "architecture-design.md",
-  "__html": "<h2>System Architecture Design</h2>\n<p>Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system</p>\n<h3>1.Glossary</h3>\n<p><strong>DAG:</strong> The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes [...]
+  "__html": "<h2>System Architecture Design</h2>\n<p>Before explaining the architecture of the scheduling system, let's first understand the commonly used terms of the scheduling system</p>\n<h3>1.Glossary</h3>\n<p><strong>DAG:</strong> The full name is Directed Acyclic Graph, referred to as DAG. Task tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero degrees of entry until there are no subsequent nodes [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/architecture-design.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/cluster-deployment.html b/en-us/docs/latest/user_doc/cluster-deployment.html
index a65d659..b453483 100644
--- a/en-us/docs/latest/user_doc/cluster-deployment.html
+++ b/en-us/docs/latest/user_doc/cluster-deployment.html
@@ -49,7 +49,7 @@ sed -i &#x27;s/Defaults    requirett/#Defaults    requirett/g&#x27; /etc/sudoers
 
 </code></pre>
 <pre><code> Notes:
- - Because the task execution service is based on 'sudo -u {linux-user}' to switch between different Linux users to implement multi-tenant running jobs, the deployment user needs to have sudo permissions and is passwordless. The first-time learners who can ignore it if they don't understand.
+ - Because the task execution service uses 'sudo -u {linux-user}' to switch between different Linux users to implement multi-tenant job running, the deployment user needs passwordless sudo permissions. First-time learners can ignore this if they don't understand it.
 - If you find &quot;Default requiretty&quot; in the &quot;/etc/sudoers&quot; file, comment it out as well.
 - If you need to use resource upload, you need to grant the user permission to operate the local file system, HDFS or MinIO.
 </code></pre>
@@ -213,7 +213,7 @@ zkQuorum=&quot;192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181&quot;
 installPath=&quot;/opt/soft/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> deployment user</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself</span>
+<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate HDFS. If HDFS is enabled, the root directory needs to be created by itself</span>
 deployUser=&quot;dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> alert config,take QQ email <span class="hljs-keyword">for</span> example</span>
@@ -237,10 +237,10 @@ mailUser=&quot;xxx@qq.com&quot;
 <span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is email service authorization code, not the email login password.</span>
 mailPassword=&quot;xxx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
 starttlsEnable=&quot;true&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported.</span>
 <span class="hljs-meta">#</span><span class="bash"> note: only one of TLS and SSL can be <span class="hljs-keyword">in</span> the <span class="hljs-literal">true</span> state.</span>
 sslEnable=&quot;false&quot;
 <span class="hljs-meta">
@@ -252,18 +252,18 @@ sslTrust=&quot;smtp.qq.com&quot;
 resourceStorageType=&quot;HDFS&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> If resourceStorageType = HDFS, and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or host name.</span>
-<span class="hljs-meta">#</span><span class="bash"> <span class="hljs-keyword">if</span> S3,write S3 address,HA,<span class="hljs-keyword">for</span> example :s3a://dolphinscheduler,</span>
+<span class="hljs-meta">#</span><span class="bash"> <span class="hljs-keyword">if</span> S3,write S3 address,HA,<span class="hljs-keyword">for</span> example: s3a://dolphinscheduler,</span>
 <span class="hljs-meta">#</span><span class="bash"> Note,s3 be sure to create the root directory /dolphinscheduler</span>
 defaultFS=&quot;hdfs://mycluster:8020&quot;
 <span class="hljs-meta">
 
-#</span><span class="bash"> <span class="hljs-keyword">if</span> not use hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> not use Hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
 yarnHaIps=&quot;192.168.xx.xx,192.168.xx.xx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 to actual resourcemanager hostname.</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 with actual resourcemanager hostname.</span>
 singleYarnIp=&quot;yarnIp1&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
+#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this Hadoop HDFS path, self configuration, please make sure the directory exists on HDFS and have read-write permissions. /dolphinscheduler is recommended</span>
 resourceUploadPath=&quot;/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> who have permissions to create directory under HDFS/S3 root path</span>
diff --git a/en-us/docs/latest/user_doc/cluster-deployment.json b/en-us/docs/latest/user_doc/cluster-deployment.json
index 362952a..5251f98 100644
--- a/en-us/docs/latest/user_doc/cluster-deployment.json
+++ b/en-us/docs/latest/user_doc/cluster-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "cluster-deployment.md",
-  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
+  "__html": "<h1>Cluster Deployment</h1>\n<h1>1、Before you begin (please install requirement basic software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/cluster-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/configuration-file.html b/en-us/docs/latest/user_doc/configuration-file.html
index 778b241..b9468bf 100644
--- a/en-us/docs/latest/user_doc/configuration-file.html
+++ b/en-us/docs/latest/user_doc/configuration-file.html
@@ -18,7 +18,7 @@
 <pre><code>
 ├─bin                               DS application commands directory
 │  ├─dolphinscheduler-daemon.sh         startup/shutdown DS application 
-│  ├─start-all.sh                       startup all DS services with configurations
+│  ├─start-all.sh                       startup all DS services with configurations
 │  ├─stop-all.sh                        shutdown all DS services with configurations
 ├─conf                              configurations directory
 │  ├─application-api.properties         API-service config properties
@@ -527,7 +527,7 @@ Currently, DS just makes a basic config, please config further JVM options bas
 <tr>
 <td>master.max.cpuload.avg</td>
 <td>-1</td>
-<td>master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2</td>
+<td>master max CPU load avg; the master server can schedule only when the system CPU load average is lower than this value. default value -1: the number of CPU cores * 2</td>
 </tr>
 <tr>
 <td>master.reserved.memory</td>
@@ -564,7 +564,7 @@ Currently, DS just makes a basic config, please config further JVM options bas
 <tr>
 <td>worker.max.cpuload.avg</td>
 <td>-1</td>
-<td>worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2</td>
+<td>worker max CPU load avg; the worker server can be dispatched tasks only when the system CPU load average is lower than this value. default value -1: the number of CPU cores * 2</td>
 </tr>
 <tr>
 <td>worker.reserved.memory</td>
@@ -823,7 +823,7 @@ Files such as <a href="http://dolphinscheduler-daemon.sh">dolphinscheduler-daemo
 <span class="hljs-comment"># Note:  please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&amp;`.</span>
 <span class="hljs-comment">#   eg: `[` escape to `\[`</span>
 
-<span class="hljs-comment"># Database type (DS currently only supports postgresql and mysql)</span>
+<span class="hljs-comment"># Database type (DS currently only supports PostgreSQL and MySQL)</span>
 dbtype=<span class="hljs-string">&quot;mysql&quot;</span>
 
 <span class="hljs-comment"># Database url &amp; port</span>
@@ -902,9 +902,9 @@ resourceUploadPath=<span class="hljs-string">&quot;/dolphinscheduler&quot;</span
 <span class="hljs-comment"># HDFS/S3 root user</span>
 hdfsRootUser=<span class="hljs-string">&quot;hdfs&quot;</span>
 
-<span class="hljs-comment"># Followings are kerberos configs</span>
+<span class="hljs-comment"># Followings are Kerberos configs</span>
 
-<span class="hljs-comment"># Spicify kerberos enable or not</span>
+<span class="hljs-comment"># Spicify Kerberos enable or not</span>
 kerberosStartUp=<span class="hljs-string">&quot;false&quot;</span>
 
 <span class="hljs-comment"># Kdc krb5 config file path</span>
diff --git a/en-us/docs/latest/user_doc/configuration-file.json b/en-us/docs/latest/user_doc/configuration-file.json
index 225ac16..b693975 100644
--- a/en-us/docs/latest/user_doc/configuration-file.json
+++ b/en-us/docs/latest/user_doc/configuration-file.json
@@ -1,6 +1,6 @@
 {
   "filename": "configuration-file.md",
-  "__html": "<h1>Preface</h1>\n<p>This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.</p>\n<h1>Directory Structure</h1>\n<p>Currently, all the configuration files are under [conf ] directory. Please check the following simplified DolphinScheduler installation directories to have a direct view about the position [conf] directory in and configuration files inside. This document only describes DolphinScheduler configurations a [...]
+  "__html": "<h1>Preface</h1>\n<p>This document explains the DolphinScheduler application configurations according to DolphinScheduler-1.3.x versions.</p>\n<h1>Directory Structure</h1>\n<p>Currently, all the configuration files are under [conf ] directory. Please check the following simplified DolphinScheduler installation directories to have a direct view about the position [conf] directory in and configuration files inside. This document only describes DolphinScheduler configurations a [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/configuration-file.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/docker-deployment.html b/en-us/docs/latest/user_doc/docker-deployment.html
index 5a87b01..4c9a7ad 100644
--- a/en-us/docs/latest/user_doc/docker-deployment.html
+++ b/en-us/docs/latest/user_doc/docker-deployment.html
@@ -416,14 +416,14 @@ COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
 <p>Comment the <code>dolphinscheduler-postgresql</code> block in <code>docker-compose.yml</code></p>
 </li>
 <li>
-<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use a external MySQL database)</p>
+<p>Add <code>dolphinscheduler-mysql</code> service in <code>docker-compose.yml</code> (<strong>Optional</strong>, you can directly use an external MySQL database)</p>
 </li>
 <li>
 <p>Modify DATABASE environment variables in <code>config.env.sh</code></p>
@@ -469,7 +469,7 @@ COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:mysql-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
@@ -504,14 +504,14 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:oracle-driver</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="5">
 <li>
 <p>Run a dolphinscheduler (See <strong>How to use this docker image</strong>)</p>
 </li>
 <li>
-<p>Add a Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code></p>
 </li>
 </ol>
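+<p>When adding the datasource, the JDBC URL takes the usual Oracle thin-driver form; for example (host, port and SID below are placeholders):</p>
+<pre><code>jdbc:oracle:thin:@192.168.x.x:1521:ORCL
+</code></pre>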
 <h3>How to support Python 2 pip and custom requirements.txt?</h3>
@@ -537,7 +537,7 @@ RUN apt-get update &amp;&amp; \
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:pip</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="4">
 <li>
@@ -568,7 +568,7 @@ RUN apt-get update &amp;&amp; \
 <li>Modify all <code>image</code> fields to <code>apache/dolphinscheduler:python3</code> in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <ol start="4">
 <li>
@@ -607,7 +607,7 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
 ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="5">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -656,7 +656,7 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
 ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="5">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -669,10 +669,10 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </blockquote>
 <p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
 <ol>
-<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support nfs in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-shared-local</code> to support NFS in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
   <span class="hljs-attr">dolphinscheduler-shared-local:</span>
@@ -683,7 +683,7 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <ol start="2">
 <li>
-<p>Put the Hadoop into the nfs</p>
+<p>Put Hadoop into the NFS</p>
 </li>
 <li>
 <p>Ensure that <code>$HADOOP_HOME</code> and <code>$HADOOP_CONF_DIR</code> are correct</p>
@@ -700,10 +700,10 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 FS_DEFAULT_FS=file:///
 </code></pre>
 <ol start="2">
-<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support nfs in <code>docker-compose.yml</code></li>
+<li>Modify the volume <code>dolphinscheduler-resource-local</code> to support NFS in <code>docker-compose.yml</code></li>
 </ol>
 <blockquote>
-<p>If you want to deploy dolphinscheduler on Docker Swarm, you need modify <code>docker-stack.yml</code></p>
+<p>If you want to deploy dolphinscheduler on Docker Swarm, you need to modify <code>docker-stack.yml</code></p>
 </blockquote>
 <pre><code class="language-yaml"><span class="hljs-attr">volumes:</span>
   <span class="hljs-attr">dolphinscheduler-resource-local:</span>
@@ -723,10 +723,10 @@ FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
 </code></pre>
 <p><code>BUCKET_NAME</code>, <code>MINIO_IP</code>, <code>MINIO_ACCESS_KEY</code> and <code>MINIO_SECRET_KEY</code> need to be modified to actual values</p>
 <blockquote>
-<p><strong>Note</strong>: <code>MINIO_IP</code> can only use IP instead of domain name, because DolphinScheduler currently doesn't support S3 path style access</p>
+<p><strong>Note</strong>: <code>MINIO_IP</code> can only be an IP, not a domain name, because DolphinScheduler currently doesn't support S3 path-style access</p>
 </blockquote>
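+<p>Also make sure the bucket exists in advance; for example, with the MinIO client (a sketch assuming an alias named minio has already been configured):</p>
+<pre><code class="language-shell">mc mb minio/dolphinscheduler
+</code></pre>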
 <h3>How to configure SkyWalking?</h3>
-<p>Modify SKYWALKING environment variables in <code>config.env.sh</code>:</p>
+<p>Modify SkyWalking environment variables in <code>config.env.sh</code>:</p>
 <pre><code>SKYWALKING_ENABLE=true
 SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
 SW_GRPC_LOG_SERVER_HOST=127.0.0.1
@@ -735,42 +735,42 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <h2>Appendix-Environment Variables</h2>
 <h3>Database</h3>
 <p><strong><code>DATABASE_TYPE</code></strong></p>
-<p>This environment variable sets the type for database. The default value is <code>postgresql</code>.</p>
+<p>This environment variable sets the type for the database. The default value is <code>postgresql</code>.</p>
 <p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_DRIVER</code></strong></p>
-<p>This environment variable sets the type for database. The default value is <code>org.postgresql.Driver</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the driver for the database. The default value is <code>org.postgresql.Driver</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_HOST</code></strong></p>
-<p>This environment variable sets the host for database. The default value is <code>127.0.0.1</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the host for the database. The default value is <code>127.0.0.1</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PORT</code></strong></p>
-<p>This environment variable sets the port for database. The default value is <code>5432</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the port for the database. The default value is <code>5432</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_USERNAME</code></strong></p>
-<p>This environment variable sets the username for database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the username for the database. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PASSWORD</code></strong></p>
-<p>This environment variable sets the password for database. The default value is <code>root</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the password for the database. The default value is <code>root</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_DATABASE</code></strong></p>
-<p>This environment variable sets the database for database. The default value is <code>dolphinscheduler</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the database name. The default value is <code>dolphinscheduler</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
 <p><strong><code>DATABASE_PARAMS</code></strong></p>
-<p>This environment variable sets the database for database. The default value is <code>characterEncoding=utf8</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
+<p>This environment variable sets the database connection parameters. The default value is <code>characterEncoding=utf8</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>.</p>
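+<p>Taken together, a minimal sketch (all hosts and credentials below are placeholders) of passing these variables when starting a standalone server:</p>
+<pre><code class="language-shell">docker run -d --name dolphinscheduler-master \
+-e DATABASE_TYPE=&quot;postgresql&quot; -e DATABASE_DRIVER=&quot;org.postgresql.Driver&quot; \
+-e DATABASE_HOST=&quot;192.168.x.x&quot; -e DATABASE_PORT=&quot;5432&quot; \
+-e DATABASE_USERNAME=&quot;test&quot; -e DATABASE_PASSWORD=&quot;test&quot; \
+-e DATABASE_DATABASE=&quot;dolphinscheduler&quot; -e DATABASE_PARAMS=&quot;characterEncoding=utf8&quot; \
+-e ZOOKEEPER_QUORUM=&quot;192.168.x.x:2181&quot; \
+apache/dolphinscheduler:latest master-server
+</code></pre>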
 <h3>ZooKeeper</h3>
 <p><strong><code>ZOOKEEPER_QUORUM</code></strong></p>
 <p>This environment variable sets zookeeper quorum. The default value is <code>127.0.0.1:2181</code>.</p>
-<p><strong>Note</strong>: You must be specify it when start a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>.</p>
+<p><strong>Note</strong>: You must specify it when starting a standalone dolphinscheduler server. Like <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>.</p>
 <p><strong><code>ZOOKEEPER_ROOT</code></strong></p>
 <p>This environment variable sets zookeeper root directory for dolphinscheduler. The default value is <code>/dolphinscheduler</code>.</p>
 <h3>Common</h3>
 <p><strong><code>DOLPHINSCHEDULER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for dolphinscheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>, <code>logger-server</code>. The default value is empty.</p>
+<p>This environment variable sets JVM options for dolphinscheduler, suitable for <code>master-server</code>, <code>worker-server</code>, <code>api-server</code>, <code>alert-server</code>, <code>logger-server</code>. The default value is empty.</p>
 <p><strong><code>DATA_BASEDIR_PATH</code></strong></p>
-<p>User data directory path, self configuration, please make sure the directory exists and have read write permissions. The default value is <code>/tmp/dolphinscheduler</code></p>
+<p>User data directory path; configure it yourself, and make sure the directory exists and has read-write permissions. The default value is <code>/tmp/dolphinscheduler</code></p>
 <p><strong><code>RESOURCE_STORAGE_TYPE</code></strong></p>
-<p>This environment variable sets resource storage type for dolphinscheduler like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
+<p>This environment variable sets the resource storage type for dolphinscheduler, like <code>HDFS</code>, <code>S3</code>, <code>NONE</code>. The default value is <code>HDFS</code>.</p>
 <p><strong><code>RESOURCE_UPLOAD_PATH</code></strong></p>
 <p>This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is <code>/dolphinscheduler</code>.</p>
 <p><strong><code>FS_DEFAULT_FS</code></strong></p>
@@ -782,31 +782,31 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p><strong><code>FS_S3A_SECRET_KEY</code></strong></p>
 <p>This environment variable sets s3 secret key for resource storage. The default value is <code>xxxxxxx</code>.</p>
 <p><strong><code>HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE</code></strong></p>
-<p>This environment variable sets whether to startup kerberos. The default value is <code>false</code>.</p>
+<p>This environment variable sets whether to start up Kerberos. The default value is <code>false</code>.</p>
 <p><strong><code>JAVA_SECURITY_KRB5_CONF_PATH</code></strong></p>
 <p>This environment variable sets java.security.krb5.conf path. The default value is <code>/opt/krb5.conf</code>.</p>
 <p><strong><code>LOGIN_USER_KEYTAB_USERNAME</code></strong></p>
-<p>This environment variable sets login user from keytab username. The default value is <code>hdfs@HADOOP.COM</code>.</p>
+<p>This environment variable sets the login user's keytab username. The default value is <code>hdfs@HADOOP.COM</code>.</p>
 <p><strong><code>LOGIN_USER_KEYTAB_PATH</code></strong></p>
-<p>This environment variable sets login user from keytab path. The default value is <code>/opt/hdfs.keytab</code>.</p>
+<p>This environment variable sets the login user's keytab path. The default value is <code>/opt/hdfs.keytab</code>.</p>
 <p><strong><code>KERBEROS_EXPIRE_TIME</code></strong></p>
-<p>This environment variable sets kerberos expire time, the unit is hour. The default value is <code>2</code>.</p>
+<p>This environment variable sets the Kerberos expiration time; the unit is hours. The default value is <code>2</code>.</p>
 <p><strong><code>HDFS_ROOT_USER</code></strong></p>
-<p>This environment variable sets hdfs root user when resource.storage.type=HDFS. The default value is <code>hdfs</code>.</p>
+<p>This environment variable sets HDFS root user when resource.storage.type=HDFS. The default value is <code>hdfs</code>.</p>
 <p><strong><code>RESOURCE_MANAGER_HTTPADDRESS_PORT</code></strong></p>
-<p>This environment variable sets resource manager httpaddress port. The default value is <code>8088</code>.</p>
+<p>This environment variable sets resource manager HTTP address port. The default value is <code>8088</code>.</p>
 <p><strong><code>YARN_RESOURCEMANAGER_HA_RM_IDS</code></strong></p>
 <p>This environment variable sets yarn resourcemanager ha rm ids. The default value is empty.</p>
 <p><strong><code>YARN_APPLICATION_STATUS_ADDRESS</code></strong></p>
 <p>This environment variable sets yarn application status address. The default value is <code>http://ds1:%s/ws/v1/cluster/apps/%s</code>.</p>
 <p><strong><code>SKYWALKING_ENABLE</code></strong></p>
-<p>This environment variable sets whether to enable skywalking. The default value is <code>false</code>.</p>
+<p>This environment variable sets whether to enable SkyWalking. The default value is <code>false</code>.</p>
 <p><strong><code>SW_AGENT_COLLECTOR_BACKEND_SERVICES</code></strong></p>
-<p>This environment variable sets agent collector backend services for skywalking. The default value is <code>127.0.0.1:11800</code>.</p>
+<p>This environment variable sets agent collector backend services for SkyWalking. The default value is <code>127.0.0.1:11800</code>.</p>
 <p><strong><code>SW_GRPC_LOG_SERVER_HOST</code></strong></p>
-<p>This environment variable sets grpc log server host for skywalking. The default value is <code>127.0.0.1</code>.</p>
+<p>This environment variable sets gRPC log server host for SkyWalking. The default value is <code>127.0.0.1</code>.</p>
 <p><strong><code>SW_GRPC_LOG_SERVER_PORT</code></strong></p>
-<p>This environment variable sets grpc log server port for skywalking. The default value is <code>11800</code>.</p>
+<p>This environment variable sets gRPC log server port for SkyWalking. The default value is <code>11800</code>.</p>
 <p><strong><code>HADOOP_HOME</code></strong></p>
 <p>This environment variable sets <code>HADOOP_HOME</code>. The default value is <code>/opt/soft/hadoop</code>.</p>
 <p><strong><code>HADOOP_CONF_DIR</code></strong></p>
@@ -827,7 +827,7 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p>This environment variable sets <code>DATAX_HOME</code>. The default value is <code>/opt/soft/datax</code>.</p>
 <h3>Master Server</h3>
 <p><strong><code>MASTER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>master-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
+<p>This environment variable sets JVM options for <code>master-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
 <p><strong><code>MASTER_EXEC_THREADS</code></strong></p>
 <p>This environment variable sets exec thread number for <code>master-server</code>. The default value is <code>100</code>.</p>
 <p><strong><code>MASTER_EXEC_TASK_NUM</code></strong></p>
@@ -843,25 +843,25 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p><strong><code>MASTER_TASK_COMMIT_INTERVAL</code></strong></p>
 <p>This environment variable sets task commit interval for <code>master-server</code>. The default value is <code>1</code>.</p>
 <p><strong><code>MASTER_MAX_CPULOAD_AVG</code></strong></p>
-<p>This environment variable sets max cpu load avg for <code>master-server</code>. The default value is <code>-1</code>.</p>
+<p>This environment variable sets max CPU load avg for <code>master-server</code>. The default value is <code>-1</code>.</p>
 <p><strong><code>MASTER_RESERVED_MEMORY</code></strong></p>
 <p>This environment variable sets reserved memory for <code>master-server</code>, the unit is G. The default value is <code>0.3</code>.</p>
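+<p>For example, a sketch of tuning these two thresholds when starting the master (the database and zookeeper variables described above are still required and are omitted here):</p>
+<pre><code class="language-shell">docker run -d --name dolphinscheduler-master \
+-e MASTER_MAX_CPULOAD_AVG=&quot;16&quot; -e MASTER_RESERVED_MEMORY=&quot;0.3&quot; \
+apache/dolphinscheduler:latest master-server
+</code></pre>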
 <h3>Worker Server</h3>
 <p><strong><code>WORKER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>worker-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
+<p>This environment variable sets JVM options for <code>worker-server</code>. The default value is <code>-Xms1g -Xmx1g -Xmn512m</code>.</p>
 <p><strong><code>WORKER_EXEC_THREADS</code></strong></p>
 <p>This environment variable sets exec thread number for <code>worker-server</code>. The default value is <code>100</code>.</p>
 <p><strong><code>WORKER_HEARTBEAT_INTERVAL</code></strong></p>
 <p>This environment variable sets heartbeat interval for <code>worker-server</code>. The default value is <code>10</code>.</p>
 <p><strong><code>WORKER_MAX_CPULOAD_AVG</code></strong></p>
-<p>This environment variable sets max cpu load avg for <code>worker-server</code>. The default value is <code>-1</code>.</p>
+<p>This environment variable sets max CPU load avg for <code>worker-server</code>. The default value is <code>-1</code>.</p>
 <p><strong><code>WORKER_RESERVED_MEMORY</code></strong></p>
 <p>This environment variable sets reserved memory for <code>worker-server</code>, the unit is G. The default value is <code>0.3</code>.</p>
 <p><strong><code>WORKER_GROUPS</code></strong></p>
 <p>This environment variable sets groups for <code>worker-server</code>. The default value is <code>default</code>.</p>
 <h3>Alert Server</h3>
 <p><strong><code>ALERT_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>alert-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>alert-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 <p><strong><code>XLS_FILE_PATH</code></strong></p>
 <p>This environment variable sets xls file path for <code>alert-server</code>. The default value is <code>/tmp/xls</code>.</p>
 <p><strong><code>MAIL_SERVER_HOST</code></strong></p>
@@ -892,10 +892,10 @@ SW_GRPC_LOG_SERVER_PORT=11800
 <p>This environment variable sets enterprise wechat users for <code>alert-server</code>. The default value is empty.</p>
 <h3>Api Server</h3>
 <p><strong><code>API_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>api-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>api-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 <h3>Logger Server</h3>
 <p><strong><code>LOGGER_SERVER_OPTS</code></strong></p>
-<p>This environment variable sets jvm options for <code>logger-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
+<p>This environment variable sets JVM options for <code>logger-server</code>. The default value is <code>-Xms512m -Xmx512m -Xmn256m</code>.</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/en-us/docs/latest/user_doc/docker-deployment.json b/en-us/docs/latest/user_doc/docker-deployment.json
index 33dae92..32d2dd9 100644
--- a/en-us/docs/latest/user_doc/docker-deployment.json
+++ b/en-us/docs/latest/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
+  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/expansion-reduction.html b/en-us/docs/latest/user_doc/expansion-reduction.html
index 4427028..d0869fe 100644
--- a/en-us/docs/latest/user_doc/expansion-reduction.html
+++ b/en-us/docs/latest/user_doc/expansion-reduction.html
@@ -28,7 +28,7 @@
 <li>Check which version of DolphinScheduler is used in your existing environment, and get the installation package of the corresponding version; if the versions are different, there may be compatibility problems.</li>
 <li>Confirm the unified installation directory of the other nodes; this article assumes that DolphinScheduler is installed in the /opt/ directory, with the full path /opt/dolphinscheduler.</li>
 <li>Please download the corresponding version of the installation package to the server installation directory, uncompress it and rename it to dolphinscheduler and store it in the /opt directory.</li>
-<li>Add database dependency package, this article use Mysql database, add mysql-connector-java driver package to /opt/dolphinscheduler/lib directory.</li>
+<li>Add the database dependency package. This article uses the MySQL database, so add the mysql-connector-java driver package to the /opt/dolphinscheduler/lib directory (see the sketch after this list).</li>
 </ul>
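+<p>For example (a sketch; the driver jar version is a placeholder):</p>
+<pre><code class="language-shell">cp mysql-connector-java-5.1.47.jar /opt/dolphinscheduler/lib
+</code></pre>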
 <pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> create the installation directory, please <span class="hljs-keyword">do</span> not create the installation directory <span class="hljs-keyword">in</span> /root, /home and other high privilege directories</span> 
 mkdir -p /opt
@@ -56,7 +56,7 @@ sed -i &#x27;s/Defaults    requirett/#Defaults    requirett/g&#x27; /etc/sudoers
 
 </code></pre>
 <pre><code class="language-markdown"> Attention:
-<span class="hljs-bullet"> -</span> Since it is sudo -u {linux-user} to switch between different linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
+<span class="hljs-bullet"> -</span> Since it is sudo -u {linux-user} to switch between different Linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
 <span class="hljs-bullet"> -</span> If you find the line &quot;Default requiretty&quot; in the /etc/sudoers file, please also comment it out.
 <span class="hljs-bullet"> -</span> If resource uploads are used, you also need to assign read and write permissions to the deployment user on <span class="hljs-code">`HDFS or MinIO`</span>.
 </code></pre>
diff --git a/en-us/docs/latest/user_doc/expansion-reduction.json b/en-us/docs/latest/user_doc/expansion-reduction.json
index 76c5066..e85f0aa 100644
--- a/en-us/docs/latest/user_doc/expansion-reduction.json
+++ b/en-us/docs/latest/user_doc/expansion-reduction.json
@@ -1,6 +1,6 @@
 {
   "filename": "expansion-reduction.md",
-  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
+  "__html": "<h1>DolphinScheduler Expansion and Reduction</h1>\n<h2>1. Expansion</h2>\n<p>This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.</p>\n<pre><code> Attention: There cannot be more than one master service process or worker service process on a physical machine.\n       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configur [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/expansion-reduction.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/flink-call.html b/en-us/docs/latest/user_doc/flink-call.html
index daa8586..60e7bb4 100644
--- a/en-us/docs/latest/user_doc/flink-call.html
+++ b/en-us/docs/latest/user_doc/flink-call.html
@@ -14,15 +14,15 @@
 <h3>Create a queue</h3>
 <ol>
 <li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>
-<li>Fill in the name and value of queue, and click &quot;Submit&quot;</li>
+<li>Fill in the name and value of the queue, and click &quot;Submit&quot;</li>
 </ol>
 <p align="center">
    <img src="/img/api/create_queue.png" width="80%" />
  </p>
 <h3>Create a tenant</h3>
-<pre><code>1.The tenant corresponds to a Linux user, which the user worker uses to submit jobs. If Linux OS environment does not have this user, the worker will create this user when executing the script.
-2.Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and id number.  
-3.After creating a tenant, there will be a folder in the HDFS relevant directory.  
+<pre><code>1. The tenant corresponds to a Linux user, which the worker uses to submit jobs. If the Linux OS environment does not have this user, the worker will create it when executing the script.
+2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and an ID number.
+3. After creating a tenant, there will be a folder in the relevant HDFS directory.
 </code></pre>
 <p align="center">
    <img src="/img/api/create_tenant.png" width="80%" />
@@ -66,7 +66,7 @@
 </li>
 <li>
 <p>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result</p>
-<pre><code>token:The Token just generated
+<pre><code>token: The Token just generated
 </code></pre>
 </li>
 </ol>
diff --git a/en-us/docs/latest/user_doc/flink-call.json b/en-us/docs/latest/user_doc/flink-call.json
index 42cffe1..a177fde 100644
--- a/en-us/docs/latest/user_doc/flink-call.json
+++ b/en-us/docs/latest/user_doc/flink-call.json
@@ -1,6 +1,6 @@
 {
   "filename": "flink-call.md",
-  "__html": "<h1>Flink Calls Operating steps</h1>\n<h3>Create a queue</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>\n<li>Fill in the name and value of queue, and click &quot;Submit&quot;</li>\n</ol>\n<p align=\"center\">\n   <img src=\"/img/api/create_queue.png\" width=\"80%\" />\n </p>\n<h3>Create a tenant</h3>\n<pre><code>1.The tenant corresponds to [...]
+  "__html": "<h1>Flink Calls Operating steps</h1>\n<h3>Create a queue</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Queue manage&quot; on the left, and click &quot;Create queue&quot; to create a queue.</li>\n<li>Fill in the name and value of the queue, and click &quot;Submit&quot;</li>\n</ol>\n<p align=\"center\">\n   <img src=\"/img/api/create_queue.png\" width=\"80%\" />\n </p>\n<h3>Create a tenant</h3>\n<pre><code>1. The tenant correspon [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/flink-call.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/kubernetes-deployment.html b/en-us/docs/latest/user_doc/kubernetes-deployment.html
index 90633e2..c1898a6 100644
--- a/en-us/docs/latest/user_doc/kubernetes-deployment.html
+++ b/en-us/docs/latest/user_doc/kubernetes-deployment.html
@@ -253,7 +253,7 @@ kubectl get deploy -n test # with test namespace
 <pre><code>kubectl scale --replicas=3 deploy dolphinscheduler-api
 kubectl scale --replicas=3 deploy dolphinscheduler-api -n test # with test namespace
 </code></pre>
-<p>List all statefulsets (aka <code>sts</code>):</p>
+<p>List all StatefulSets (aka <code>sts</code>):</p>
 <pre><code>kubectl get sts
 kubectl get sts -n test # with test namespace
 </code></pre>
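 <p>Scaling a StatefulSet works the same way as scaling a deployment; a sketch assuming the worker runs as a StatefulSet named dolphinscheduler-worker:</p>
 <pre><code>kubectl scale --replicas=3 sts dolphinscheduler-worker
 kubectl scale --replicas=3 sts dolphinscheduler-worker -n test # with test namespace
 </code></pre>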
@@ -380,7 +380,7 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Add a Oracle datasource in <code>Datasource manage</code></p>
+<p>Add an Oracle datasource in <code>Datasource manage</code></p>
 </li>
 </ol>
 <h3>How to support Python 2 pip and custom requirements.txt?</h3>
@@ -463,7 +463,7 @@ RUN apt-get update &amp;&amp; \
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Copy the Spark 2.4.7 release binary into Docker container</p>
+<p>Copy the Spark 2.4.7 release binary into the Docker container</p>
 </li>
 </ol>
 <pre><code class="language-bash">kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
@@ -481,7 +481,7 @@ rm -f spark-2.4.7-bin-hadoop2.7.tgz
 ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="6">
 <li>Verify Spark under a Shell task</li>
 </ol>
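 <p>As a sketch, the Shell task body can simply submit the bundled SparkPi example (the jar name assumes the standard Spark 2.4.7 distribution built for Scala 2.11):</p>
 <pre><code class="language-bash">$SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
 </code></pre>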
@@ -518,7 +518,7 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>Run a DolphinScheduler release in Kubernetes (See <strong>Installing the Chart</strong>)</p>
 </li>
 <li>
-<p>Copy the Spark 3.1.1 release binary into Docker container</p>
+<p>Copy the Spark 3.1.1 release binary into the Docker container</p>
 </li>
 </ol>
 <pre><code class="language-bash">kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
@@ -535,7 +535,7 @@ rm -f spark-3.1.1-bin-hadoop2.7.tgz
 ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
-<p>The last command will print Spark version if everything goes well</p>
+<p>The last command will print the Spark version if everything goes well</p>
 <ol start="6">
 <li>Verify Spark under a Shell task</li>
 </ol>
@@ -543,7 +543,7 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
 <h3>How to support shared storage between Master, Worker and API server?</h3>
-<p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
+<p>For example, Master, Worker and API server may use Hadoop at the same time</p>
 <ol>
 <li>Modify the following configurations in <code>values.yaml</code></li>
 </ol>
diff --git a/en-us/docs/latest/user_doc/kubernetes-deployment.json b/en-us/docs/latest/user_doc/kubernetes-deployment.json
index 80fa6e5..6ab8a0d 100644
--- a/en-us/docs/latest/user_doc/kubernetes-deployment.json
+++ b/en-us/docs/latest/user_doc/kubernetes-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "kubernetes-deployment.md",
-  "__html": "<h1>QuickStart in Kubernetes</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://helm.sh/\">Helm</a> 3.1.0+</li>\n<li><a href=\"https://kubernetes.io/\">Kubernetes</a> 1.12+</li>\n<li>PV provisioner support in the underlying infrastructure</li>\n</ul>\n<h2>Installing the Chart</h2>\n<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href=\"/en-us/download/download.html\">download</a></p>\n<p>To install the chart wi [...]
+  "__html": "<h1>QuickStart in Kubernetes</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://helm.sh/\">Helm</a> 3.1.0+</li>\n<li><a href=\"https://kubernetes.io/\">Kubernetes</a> 1.12+</li>\n<li>PV provisioner support in the underlying infrastructure</li>\n</ul>\n<h2>Installing the Chart</h2>\n<p>Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: <a href=\"/en-us/download/download.html\">download</a></p>\n<p>To install the chart wi [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/kubernetes-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/metadata-1.3.html b/en-us/docs/latest/user_doc/metadata-1.3.html
index 035d2c4..0c9b2e1 100644
--- a/en-us/docs/latest/user_doc/metadata-1.3.html
+++ b/en-us/docs/latest/user_doc/metadata-1.3.html
@@ -127,7 +127,7 @@
 <p><img src="/img/metadata-erd/user-queue-datasource.png" alt="image.png"></p>
 <ul>
 <li>Multiple users can belong to one tenant</li>
-<li>The queue field in t_ds_user table stores the queue_name information in t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.</li>
+<li>The queue field in the t_ds_user table stores the queue_name information in the t_ds_queue table, but t_ds_tenant stores queue information using queue_id. During the execution of the process definition, the user queue has the highest priority. If the user queue is empty, the tenant queue is used.</li>
 <li>The user_id field in the t_ds_datasource table indicates the user who created the data source. The user_id in t_ds_relation_datasource_user indicates the user who has permission to the data source.
 <a name="7euSN"></a></li>
 </ul>
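 <p>A hedged query sketch of this relation, using the table and column names described above (user_name and tenant_id are assumed from the standard 1.3 schema; run it against a backup rather than production):</p>
 <pre><code class="language-shell">mysql -u root -p dolphinscheduler -e &quot;SELECT u.user_name, u.queue AS user_queue, q.queue_name AS tenant_queue FROM t_ds_user u JOIN t_ds_tenant t ON u.tenant_id = t.id JOIN t_ds_queue q ON t.queue_id = q.id;&quot;
 </code></pre>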
@@ -363,7 +363,7 @@
 <tr>
 <td>process_instance_json</td>
 <td>longtext</td>
-<td>process instance json(copy的process definition 的json)</td>
+<td>process instance json</td>
 </tr>
 <tr>
 <td>flag</td>
@@ -378,7 +378,7 @@
 <tr>
 <td>is_sub_process</td>
 <td>int</td>
-<td>whether the process is sub process:  1 sub-process,0 not sub-process</td>
+<td>whether the process is a sub-process: 1 sub-process, 0 not sub-process</td>
 </tr>
 <tr>
 <td>executor_id</td>
diff --git a/en-us/docs/latest/user_doc/metadata-1.3.json b/en-us/docs/latest/user_doc/metadata-1.3.json
index b79044f..3a95421 100644
--- a/en-us/docs/latest/user_doc/metadata-1.3.json
+++ b/en-us/docs/latest/user_doc/metadata-1.3.json
@@ -1,6 +1,6 @@
 {
   "filename": "metadata-1.3.md",
-  "__html": "<h1>Dolphin Scheduler 1.3 MetaData</h1>\n<p><a name=\"V5KOl\"></a></p>\n<h3>Dolphin Scheduler 1.2 DB Table Overview</h3>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">Table Name</th>\n<th style=\"text-align:center\">Comment</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">t_ds_access_token</td>\n<td style=\"text-align:center\">token for access ds backend</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">t_ds_alert</td>\n<td style=\"text-align [...]
+  "__html": "<h1>Dolphin Scheduler 1.3 MetaData</h1>\n<p><a name=\"V5KOl\"></a></p>\n<h3>Dolphin Scheduler 1.2 DB Table Overview</h3>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">Table Name</th>\n<th style=\"text-align:center\">Comment</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">t_ds_access_token</td>\n<td style=\"text-align:center\">token for access ds backend</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">t_ds_alert</td>\n<td style=\"text-align [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/metadata-1.3.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/open-api.html b/en-us/docs/latest/user_doc/open-api.html
index 4dab319..f3ed36f 100644
--- a/en-us/docs/latest/user_doc/open-api.html
+++ b/en-us/docs/latest/user_doc/open-api.html
@@ -44,7 +44,7 @@
 <p>projects/query-project-list</p>
 </blockquote>
 </li>
-<li>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result<pre><code>token:The Token just generated
+<li>Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result<pre><code>token: The Token just generated
 </code></pre>
 </li>
 </ol>
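 <p>The same request can also be issued from the command line; a curl sketch with a placeholder host and token (the token goes into the request headers exactly as in Postman):</p>
 <pre><code>curl -H &quot;token: {the-token-just-generated}&quot; http://192.168.xx.xx:12345/dolphinscheduler/projects/query-project-list
 </code></pre>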
diff --git a/en-us/docs/latest/user_doc/open-api.json b/en-us/docs/latest/user_doc/open-api.json
index f0e7121..5d5c449 100644
--- a/en-us/docs/latest/user_doc/open-api.json
+++ b/en-us/docs/latest/user_doc/open-api.json
@@ -1,6 +1,6 @@
 {
   "filename": "open-api.md",
-  "__html": "<h1>Open API</h1>\n<h2>Background</h2>\n<p>Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.</p>\n<h2>The Operation Steps of DS API Calls</h2>\n<h3>Create a token</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Token manage&quot; on the left, and click &quot;Create token&quot; to create a token.</li>\n</ol>\n<p align=\"center\ [...]
+  "__html": "<h1>Open API</h1>\n<h2>Background</h2>\n<p>Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.</p>\n<h2>The Operation Steps of DS API Calls</h2>\n<h3>Create a token</h3>\n<ol>\n<li>Log in to the scheduling system, click &quot;Security&quot;, then click &quot;Token manage&quot; on the left, and click &quot;Create token&quot; to create a token.</li>\n</ol>\n<p align=\"center\ [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/open-api.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/quick-start.html b/en-us/docs/latest/user_doc/quick-start.html
index fdab1ab..7c8a90c 100644
--- a/en-us/docs/latest/user_doc/quick-start.html
+++ b/en-us/docs/latest/user_doc/quick-start.html
@@ -15,7 +15,7 @@
 <li>
 <p>Administrator user login</p>
 <blockquote>
-<p>Address:<a href="http://192.168.xx.xx:12345/dolphinscheduler">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password:admin/dolphinscheduler123</p>
+<p>Address: <a href="http://192.168.xx.xx:12345/dolphinscheduler">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password: admin/dolphinscheduler123</p>
 </blockquote>
 </li>
 </ul>
@@ -47,20 +47,20 @@
     <img src="/img/alarm-group-en.png" width="60%" />
   </p>
 <ul>
-<li>Create an worker group</li>
+<li>Create a worker group</li>
 </ul>
    <p align="center">
       <img src="/img/worker-group-en.png" width="60%" />
     </p>
 <ul>
 <li>
-<p>Create an token</p>
+<p>Create a token</p>
 <p align="center">
    <img src="/img/token-en.png" width="60%" />
  </p>
 </li>
 <li>
-<p>Log in with regular users</p>
+<p>Log in as a regular user</p>
 </li>
 </ul>
 <blockquote>
diff --git a/en-us/docs/latest/user_doc/quick-start.json b/en-us/docs/latest/user_doc/quick-start.json
index f610920..5271fdb 100644
--- a/en-us/docs/latest/user_doc/quick-start.json
+++ b/en-us/docs/latest/user_doc/quick-start.json
@@ -1,6 +1,6 @@
 {
   "filename": "quick-start.md",
-  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:<a href=\"http://192.168.xx.xx:12345/dolphinscheduler\">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password:admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue-en.png\" width=\"60%\" />\n < [...]
+  "__html": "<h1>Quick Start</h1>\n<ul>\n<li>\n<p>Administrator user login</p>\n<blockquote>\n<p>Address:<a href=\"http://192.168.xx.xx:12345/dolphinscheduler\">http://192.168.xx.xx:12345/dolphinscheduler</a>  Username and password: admin/dolphinscheduler123</p>\n</blockquote>\n</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/login_en.png\" width=\"60%\" />\n </p>\n<ul>\n<li>Create queue</li>\n</ul>\n<p align=\"center\">\n   <img src=\"/img/create-queue-en.png\" width=\"60%\" />\n  [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/quick-start.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/skywalking-agent-deployment.html b/en-us/docs/latest/user_doc/skywalking-agent-deployment.html
index c3ab1c1..f467271 100644
--- a/en-us/docs/latest/user_doc/skywalking-agent-deployment.html
+++ b/en-us/docs/latest/user_doc/skywalking-agent-deployment.html
@@ -11,12 +11,12 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<p>The dolphinscheduler-skywalking module provides <a href="https://skywalking.apache.org/">Skywalking</a> monitor agent for the Dolphinscheduler project.</p>
-<p>This document describes how to enable Skywalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>
+<p>The dolphinscheduler-skywalking module provides a <a href="https://skywalking.apache.org/">SkyWalking</a> monitor agent for the DolphinScheduler project.</p>
+<p>This document describes how to enable SkyWalking 8.4+ support with this module (SkyWalking 8.5.0 is recommended).</p>
 <h1>Installation</h1>
-<p>The following configuration is used to enable Skywalking agent.</p>
+<p>The following configuration is used to enable the SkyWalking agent.</p>
 <h3>Through environment variable configuration (for Docker Compose)</h3>
-<p>Modify SKYWALKING environment variables in <code>docker/docker-swarm/config.env.sh</code>:</p>
+<p>Modify SkyWalking environment variables in <code>docker/docker-swarm/config.env.sh</code>:</p>
 <pre><code>SKYWALKING_ENABLE=true
 SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
 SW_GRPC_LOG_SERVER_HOST=127.0.0.1
@@ -40,23 +40,23 @@ apache/dolphinscheduler:1.3.8 all</span>
 <h3>Through install_config.conf configuration (for DolphinScheduler <a href="http://install.sh">install.sh</a>)</h3>
 <p>Add the following configurations to <code>${workDir}/conf/config/install_config.conf</code>.</p>
 <pre><code class="language-properties"><span class="hljs-comment">
-# skywalking config</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> enable skywalking tracking plugin</span>
+# SkyWalking config</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> enable SkyWalking tracking plugin</span>
 <span class="hljs-attr">enableSkywalking</span>=<span class="hljs-string">&quot;true&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking backend service address</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking backend service address</span>
 <span class="hljs-attr">skywalkingServers</span>=<span class="hljs-string">&quot;your.skywalking-oap-server.com:11800&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking log reporter host</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking log reporter host</span>
 <span class="hljs-attr">skywalkingLogReporterHost</span>=<span class="hljs-string">&quot;your.skywalking-log-reporter.com&quot;</span>
-<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure skywalking log reporter port</span>
+<span class="hljs-comment"># <span class="hljs-doctag">note:</span> configure SkyWalking log reporter port</span>
 <span class="hljs-attr">skywalkingLogReporterPort</span>=<span class="hljs-string">&quot;11800&quot;</span>
 
 </code></pre>
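 <p>For a plain docker run outside Compose and install.sh, the same settings can be passed as environment variables; a sketch with placeholder addresses (the log reporter port variable name is assumed to mirror the Compose config):</p>
 <pre><code class="language-shell">docker run -d --name dolphinscheduler \
   -e SKYWALKING_ENABLE=&quot;true&quot; \
   -e SW_AGENT_COLLECTOR_BACKEND_SERVICES=&quot;your.skywalking-oap-server.com:11800&quot; \
   -e SW_GRPC_LOG_SERVER_HOST=&quot;your.skywalking-log-reporter.com&quot; \
   -e SW_GRPC_LOG_SERVER_PORT=&quot;11800&quot; \
   apache/dolphinscheduler:1.3.8 all
 </code></pre>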
 <h1>Usage</h1>
-<h3>Import dashboard</h3>
-<h4>Import dolphinscheduler dashboard to skywalking sever</h4>
-<p>Copy the <code>${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml</code> file into <code>${skywalking-oap-server.home}/config/ui-initialized-templates/</code> directory, and restart Skywalking oap-server.</p>
-<h4>View dolphinscheduler dashboard</h4>
-<p>If you have opened Skywalking dashboard with a browser before, you need to clear browser cache.</p>
+<h3>Import Dashboard</h3>
+<h4>Import DolphinScheduler Dashboard to SkyWalking Server</h4>
+<p>Copy the <code>${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml</code> file into <code>${skywalking-oap-server.home}/config/ui-initialized-templates/</code> directory, and restart SkyWalking oap-server.</p>
+<h4>View DolphinScheduler Dashboard</h4>
+<p>If you have opened the SkyWalking dashboard in a browser before, you need to clear the browser cache.</p>
 <p><img src="/img/skywalking/import-dashboard-1.jpg" alt="img1"></p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
diff --git a/en-us/docs/latest/user_doc/skywalking-agent-deployment.json b/en-us/docs/latest/user_doc/skywalking-agent-deployment.json
index 2d52f4b..b146e12 100644
--- a/en-us/docs/latest/user_doc/skywalking-agent-deployment.json
+++ b/en-us/docs/latest/user_doc/skywalking-agent-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "skywalking-agent-deployment.md",
-  "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The dolphinscheduler-skywalking module provides <a href=\"https://skywalking.apache.org/\">Skywalking</a> monitor agent for the Dolphinscheduler project.</p>\n<p>This document describes how to enable Skywalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>\n<h1>Installation</h1>\n<p>The following configuration is used to enable Skywalking agent.</p>\n<h3>Through environment variable configuration (for Docker [...]
+  "__html": "<h1>SkyWalking Agent Deployment</h1>\n<p>The dolphinscheduler-skywalking module provides <a href=\"https://skywalking.apache.org/\">SkyWalking</a> monitor agent for the Dolphinscheduler project.</p>\n<p>This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).</p>\n<h1>Installation</h1>\n<p>The following configuration is used to enable SkyWalking agent.</p>\n<h3>Through environment variable configuration (for Docker [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/skywalking-agent-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/standalone-deployment.html b/en-us/docs/latest/user_doc/standalone-deployment.html
index 39a80d1..df878a9 100644
--- a/en-us/docs/latest/user_doc/standalone-deployment.html
+++ b/en-us/docs/latest/user_doc/standalone-deployment.html
@@ -11,17 +11,17 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<h1>1、Install basic softwares (please install required softwares by yourself)</h1>
+<h1>1、Install Basic Software (please install required software by yourself)</h1>
 <ul>
 <li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>
 <li><a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JDK</a> (1.8+) : Required. Double-check that the JAVA_HOME and PATH environment variables are configured in /etc/profile</li>
 <li>ZooKeeper (3.4.6+) : Required</li>
 <li>pstree or psmisc : &quot;pstree&quot; is required for Mac OS and &quot;psmisc&quot; is required for Fedora/Red Hat/CentOS/Ubuntu/Debian</li>
-<li>Hadoop (2.6+) or MinIO : Optional. If you need resource function, for Standalone Deployment you can choose a local directory as the upload destination (this does not need Hadoop deployed). Of course, you can also choose to upload to Hadoop or MinIO.</li>
+<li>Hadoop (2.6+) or MinIO : Optional. If you need the resource function, for Standalone Deployment you can choose a local directory as the upload destination (this does not need Hadoop deployed). Of course, you can also choose to upload to Hadoop or MinIO.</li>
 </ul>
-<pre><code class="language-markdown"> Tips: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, only use their clients to run corresponding task.
+<pre><code class="language-markdown"> Tips: DolphinScheduler itself does not rely on Hadoop, Hive, Spark, only use their clients to run the corresponding task.
 </code></pre>
-<h1>2、Download the binary tar.gz package.</h1>
+<h1>2、Download the Binary tar.gz Package</h1>
 <ul>
 <li>Please download the latest version of the installation package to the server deployment directory. For example, use /opt/dolphinscheduler as the installation and deployment directory. Download address: <a href="/en-us/download/download.html">Download</a>. Download the package, move it to the deployment directory and uncompress it.</li>
 </ul>
@@ -35,9 +35,9 @@ tar -zxvf apache-dolphinscheduler-1.3.8-bin.tar.gz -C /opt/dolphinscheduler;
 #</span><span class="bash"> rename</span>
 mv apache-dolphinscheduler-1.3.8-bin  dolphinscheduler-bin
 </code></pre>
-<h1>3、Create deployment user and assign directory operation permissions</h1>
+<h1>3、Create Deployment User and Assign Directory Operation Permissions</h1>
 <ul>
-<li>Create a deployment user, and be sure to configure sudo secret-free. Here take the creation of a dolphinscheduler user as example.</li>
+<li>Create a deployment user, and be sure to configure sudo to be secret-free. Here we take the creation of a dolphinscheduler user as an example.</li>
 </ul>
 <pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash"> To create a user, you need to <span class="hljs-built_in">log</span> <span class="hljs-keyword">in</span> as root and <span class="hljs-built_in">set</span> the deployment user name.</span>
 useradd dolphinscheduler;
@@ -54,10 +54,10 @@ chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-bin
 </code></pre>
 <pre><code> Notes:
 - Because task execution is based on 'sudo -u {linux-user}' to switch among different Linux users to implement multi-tenant job running, the deployment user must have sudo permissions and be secret-free. If beginners don't understand this, they can ignore it for now.
- - Please comment out line &quot;Defaults requirett&quot;, if it present in &quot;/etc/sudoers&quot; file. 
- - If you need to use resource upload, you need to assign user the permission to operate the local file system, HDFS or MinIO.
+ - Please comment out the line &quot;Defaults requiretty&quot; if it is present in the &quot;/etc/sudoers&quot; file.
+ - If you need to use resource upload, you need to assign the user the permission to operate the local file system, HDFS or MinIO.
 </code></pre>
-<h1>4、SSH secret-free configuration</h1>
+<h1>4、SSH Secret-Free Configuration</h1>
 <ul>
 <li>
 <p>Switch to the deployment user and configure SSH local secret-free login</p>
@@ -69,8 +69,8 @@ chmod 600 ~/.ssh/authorized_keys
 </code></pre>
 </li>
 </ul>
-<p>​  Note: <em>If configure successed, the dolphinscheduler user does not need to enter a password when executing the command <code>ssh localhost</code>.</em></p>
-<h1>5、Database initialization</h1>
+<p>​  Note: <em>If the configuration is successful, the dolphinscheduler user does not need to enter a password when executing the command <code>ssh localhost</code>.</em></p>
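 <p>A quick verification sketch, assuming the deployment user created earlier:</p>
 <pre><code class="language-shell"># switch to the deployment user; ssh should open a shell without prompting for a password
 su - dolphinscheduler
 ssh localhost
 </code></pre>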
+<h1>5、Database Initialization</h1>
 <ul>
 <li>Log in to the database, the default database type is PostgreSQL. If you choose MySQL, you need to add the mysql-connector-java driver package to the lib directory of DolphinScheduler.</li>
 </ul>
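 <p>A minimal bootstrap sketch for the MySQL case; the database name, user and password below are placeholders (MySQL 5.7 syntax):</p>
 <pre><code class="language-shell">mysql -u root -p &lt;&lt;&#x27;SQL&#x27;
 CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
 GRANT ALL PRIVILEGES ON dolphinscheduler.* TO &#x27;dolphinscheduler&#x27;@&#x27;%&#x27; IDENTIFIED BY &#x27;{password}&#x27;;
 FLUSH PRIVILEGES;
 SQL
 </code></pre>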
@@ -113,7 +113,7 @@ chmod 600 ~/.ssh/authorized_keys
 </li>
 </ul>
 <p>​       <em>Note: If you execute the above script and report &quot;/bin/java: No such file or directory&quot; error, please configure JAVA_HOME and PATH variables in /etc/profile.</em></p>
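 <p>A sketch of the /etc/profile entries the note refers to; /opt/soft/java is a placeholder for your actual JDK path:</p>
 <pre><code class="language-shell">export JAVA_HOME=/opt/soft/java
 export PATH=$JAVA_HOME/bin:$PATH
 </code></pre>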
-<h1>6、Modify runtime parameters.</h1>
+<h1>6、Modify Runtime Parameters</h1>
 <ul>
 <li>
 <p>Modify the environment variables in the <code>dolphinscheduler_env.sh</code> file under the 'conf/env' directory (taking the relevant software installed under '/opt/soft' as an example)</p>
@@ -164,7 +164,7 @@ zkQuorum=&quot;localhost:2181&quot;
 installPath=&quot;/opt/soft/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> deployment user</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself</span>
+<span class="hljs-meta">#</span><span class="bash"> Note: the deployment user needs to have sudo privileges and permissions to operate HDFS. If HDFS is enabled, the root directory needs to be created by itself</span>
 deployUser=&quot;dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> alert config,take QQ email <span class="hljs-keyword">for</span> example</span>
@@ -175,7 +175,7 @@ mailProtocol=&quot;SMTP&quot;
 mailServerHost=&quot;smtp.qq.com&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail server port</span>
-<span class="hljs-meta">#</span><span class="bash"> note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, port may be different, make sure the port is correct.</span>
+<span class="hljs-meta">#</span><span class="bash"> note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, the port may be different, make sure the port is correct.</span>
 mailServerPort=&quot;25&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail sender</span>
@@ -185,13 +185,13 @@ mailSender=&quot;xxx@qq.com&quot;
 mailUser=&quot;xxx@qq.com&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> mail sender password</span>
-<span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is email service authorization code, not the email login password.</span>
+<span class="hljs-meta">#</span><span class="bash"> note: The mail.passwd is the email service authorization code, not the email login password.</span>
 mailPassword=&quot;xxx&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported</span>
 starttlsEnable=&quot;true&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> Whether TLS mail protocol is supported,<span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
+#</span><span class="bash"> Whether TLS mail protocol is supported, <span class="hljs-literal">true</span> is supported and <span class="hljs-literal">false</span> is not supported。</span>
 <span class="hljs-meta">#</span><span class="bash"> note: only one of TLS and SSL can be <span class="hljs-keyword">in</span> the <span class="hljs-literal">true</span> state.</span>
 sslEnable=&quot;false&quot;
 <span class="hljs-meta">
@@ -202,17 +202,17 @@ sslTrust=&quot;smtp.qq.com&quot;
 resourceStorageType=&quot;HDFS&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> here is an example of saving to a <span class="hljs-built_in">local</span> file system</span>
-<span class="hljs-meta">#</span><span class="bash"> Note: If you want to upload resource file(jar file and so on)to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml of hadoop cluster <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or ho [...]
+<span class="hljs-meta">#</span><span class="bash"> Note: If you want to upload resource file(jar file and so on)to HDFS and the NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml of Hadoop cluster <span class="hljs-keyword">in</span> the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and Configure the namenode cluster name; <span class="hljs-keyword">if</span> the NameNode is not HA, modify it to a specific IP or ho [...]
 defaultFS=&quot;file:///data/dolphinscheduler&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> <span class="hljs-keyword">if</span> not use hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
+#</span><span class="bash"> <span class="hljs-keyword">if</span> not use Hadoop resourcemanager, please keep default value; <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span>, please <span class="hljs-built_in">type</span> the HA ips ; <span class="hljs-keyword">if</span> resourcemanager is single, make this value empty</span>
 <span class="hljs-meta">#</span><span class="bash"> Note: For tasks that depend on YARN to execute, you need to ensure that YARN information is configured correctly <span class="hljs-keyword">in</span> order to ensure successful execution results.</span>
 yarnHaIps=&quot;192.168.xx.xx,192.168.xx.xx&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> <span class="hljs-keyword">if</span> resourcemanager HA <span class="hljs-built_in">enable</span> or not use resourcemanager, please skip this value setting; If resourcemanager is single, you only need to replace yarnIp1 to actual resourcemanager hostname.</span>
 singleYarnIp=&quot;yarnIp1&quot;
 <span class="hljs-meta">
-#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
+#</span><span class="bash"> resource store on HDFS/S3 path, resource file will store to this Hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have <span class="hljs-built_in">read</span> write permissions。/dolphinscheduler is recommended</span>
 resourceUploadPath=&quot;/data/dolphinscheduler&quot;
 <span class="hljs-meta">
 #</span><span class="bash"> specify the user who have permissions to create directory under HDFS/S3 root path</span>
@@ -241,7 +241,7 @@ alertServer=&quot;localhost&quot;
 apiServers=&quot;localhost&quot;
 
 </code></pre>
-<p><em>Attention:</em> if you need upload resource function, please execute below command:</p>
+<p><em>Attention:</em> if you need the resource upload function, please execute the command below:</p>
 <pre><code>
 sudo mkdir /data/dolphinscheduler
 sudo chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler 
@@ -260,7 +260,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
 </code></pre>
 </li>
 <li>
-<p>After script completed, the following 5 services will be started. Use <code>jps</code> command to check whether the services started (<code>jps</code> comes with <code>java JDK</code>)</p>
+<p>After the script is completed, the following 5 services will be started. Use the <code>jps</code> command to check whether the services have started (<code>jps</code> comes with the JDK)</p>
 </li>
 </ul>
 <pre><code class="language-aidl">    MasterServer         ----- master service
@@ -270,7 +270,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
     AlertServer          ----- alert service
 </code></pre>
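 <p>A quick check sketch; the five service names listed above should appear in the output (PIDs will differ):</p>
 <pre><code class="language-shell">jps | grep -i server
 </code></pre>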
 <p>If the above services started normally, the automatic deployment is successful.</p>
-<p>After the deployment is success, you can view logs. Logs stored in the logs folder.</p>
+<p>After the deployment is done, you can view the logs, which are stored in the logs folder.</p>
 <pre><code class="language-log"> logs/
     ├── dolphinscheduler-alert-server.log
     ├── dolphinscheduler-master-server.log
@@ -278,7 +278,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
     |—— dolphinscheduler-api-server.log
     |—— dolphinscheduler-logger-server.log
 </code></pre>
-<h1>8、login</h1>
+<h1>8、Login</h1>
 <ul>
 <li>
 <p>Access the front page address, interface IP (self-modified)
@@ -288,7 +288,7 @@ sh: bin/dolphinscheduler-daemon.sh: No such file or directory
  </p>
 </li>
 </ul>
-<h1>9、Start and stop service</h1>
+<h1>9、Start and Stop Service</h1>
 <ul>
 <li>
 <p>Stop all services</p>
diff --git a/en-us/docs/latest/user_doc/standalone-deployment.json b/en-us/docs/latest/user_doc/standalone-deployment.json
index f9e2aaf..cc61174 100644
--- a/en-us/docs/latest/user_doc/standalone-deployment.json
+++ b/en-us/docs/latest/user_doc/standalone-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "standalone-deployment.md",
-  "__html": "<h1>Standalone Deployment</h1>\n<h1>1、Install basic softwares (please install required softwares by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree  [...]
+  "__html": "<h1>Standalone Deployment</h1>\n<h1>1、Install Basic Software (please install required software by yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7) : Choose One, JDBC Driver 5.1.47+ is required if MySQL is used</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+) : Required. Double-check configure JAVA_HOME and PATH environment variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+) : Required</li>\n<li>pstree or [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/standalone-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/system-manual.html b/en-us/docs/latest/user_doc/system-manual.html
index cff31d6..dbf7caf 100644
--- a/en-us/docs/latest/user_doc/system-manual.html
+++ b/en-us/docs/latest/user_doc/system-manual.html
@@ -73,7 +73,7 @@
 </ol>
 <ul>
 <li>
-<p><strong>Increase the order of task execution:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect the task; as shown in the figure below, task 2 and task 3 are executed in parallel, When task 1 finished execute, tasks 2 and 3 will be executed simultaneously.</p>
+<p><strong>Increase the order of task execution:</strong> Click the icon in the upper right corner <img src="/img/line.png" width="35"/> to connect the tasks; as shown in the figure below, task 2 and task 3 are executed in parallel; when task 1 finishes executing, tasks 2 and 3 will be executed simultaneously.</p>
 <p align="center">
    <img src="/img/dag6.png" width="80%" />
 </p>
@@ -652,9 +652,9 @@ worker.groups=default,test
  </p>
 <h4>6.1.3 Zookeeper monitoring</h4>
 <ul>
-<li>Mainly related configuration information of each worker and master in zookpeeper.</li>
+<li>Mainly the configuration information related to each worker and master in ZooKeeper.</li>
 </ul>
-<p align="center">
+<p align="center">
    <img src="/img/zookeeper-monitor-en.png" width="80%" />
  </p>
 <h4>6.1.4 DB monitoring</h4>
@@ -677,7 +677,7 @@ worker.groups=default,test
 <h3>7. <span id=TaskParamers>Task node type and parameter settings</span></h3>
 <h4>7.1 Shell node</h4>
 <blockquote>
-<p>Shell node, when the worker is executed, a temporary shell script is generated, and the linux user with the same name as the tenant executes the script.</p>
+<p>Shell node: when the worker executes it, a temporary shell script is generated and executed by the Linux user with the same name as the tenant.</p>
 </blockquote>
 <ul>
 <li>
diff --git a/en-us/docs/latest/user_doc/system-manual.json b/en-us/docs/latest/user_doc/system-manual.json
index d01effd..5a38ef9 100644
--- a/en-us/docs/latest/user_doc/system-manual.json
+++ b/en-us/docs/latest/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p>\n<h3>2. Project management</h3>\n<h4>2.1 Create project< [...]
+  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to <a href=\"quick-start.html\">Quick Start</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics for all projects of the user.</p>\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p>\n<h3>2. Project management</h3>\n<h4>2.1 Create project< [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/task-structure.html b/en-us/docs/latest/user_doc/task-structure.html
index e8d8eb8..d90c26c 100644
--- a/en-us/docs/latest/user_doc/task-structure.html
+++ b/en-us/docs/latest/user_doc/task-structure.html
@@ -11,7 +11,7 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<p>All tasks created in Dolphinscheduler are saved in the t_ds_process_definition table.</p>
+<p>All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.</p>
 <p>The following shows the 't_ds_process_definition' table structure:</p>
 <table>
 <thead>
diff --git a/en-us/docs/latest/user_doc/task-structure.json b/en-us/docs/latest/user_doc/task-structure.json
index e3885b8..d005b69 100644
--- a/en-us/docs/latest/user_doc/task-structure.json
+++ b/en-us/docs/latest/user_doc/task-structure.json
@@ -1,6 +1,6 @@
 {
   "filename": "task-structure.md",
-  "__html": "<h1>Overall Tasks Storage Structure</h1>\n<p>All tasks created in Dolphinscheduler are saved in the t_ds_process_definition table.</p>\n<p>The following shows the 't_ds_process_definition' table structure:</p>\n<table>\n<thead>\n<tr>\n<th>No.</th>\n<th>field</th>\n<th>type</th>\n<th>description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>1</td>\n<td>id</td>\n<td>int(11)</td>\n<td>primary key</td>\n</tr>\n<tr>\n<td>2</td>\n<td>name</td>\n<td>varchar(255)</td>\n<td>process defin [...]
+  "__html": "<h1>Overall Tasks Storage Structure</h1>\n<p>All tasks created in DolphinScheduler are saved in the t_ds_process_definition table.</p>\n<p>The following shows the 't_ds_process_definition' table structure:</p>\n<table>\n<thead>\n<tr>\n<th>No.</th>\n<th>field</th>\n<th>type</th>\n<th>description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>1</td>\n<td>id</td>\n<td>int(11)</td>\n<td>primary key</td>\n</tr>\n<tr>\n<td>2</td>\n<td>name</td>\n<td>varchar(255)</td>\n<td>process defin [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/task-structure.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/upgrade.html b/en-us/docs/latest/user_doc/upgrade.html
index 96935e1..0fe41b3 100644
--- a/en-us/docs/latest/user_doc/upgrade.html
+++ b/en-us/docs/latest/user_doc/upgrade.html
@@ -11,21 +11,21 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><img class="header-menu-toggle" src="/img/system/menu_white.png"/><div><ul class="ant-menu whiteClass ant-menu-li [...]
-<h2>1. Back up previous version's files and database.</h2>
-<h2>2. Stop all services of DolphinScheduler.</h2>
+<h2>1. Back Up Previous Version's Files and Database.</h2>
+<h2>2. Stop All Services of DolphinScheduler.</h2>
 <p><code>sh ./script/stop-all.sh</code></p>
-<h2>3. Download the new version's installation package.</h2>
+<h2>3. Download the New Version's Installation Package.</h2>
 <ul>
 <li><a href="/en-us/download/download.html">Download</a> the latest version of the installation packages.</li>
 <li>The following upgrade operations need to be performed in the new version's directory.</li>
 </ul>
-<h2>4. Database upgrade</h2>
+<h2>4. Database Upgrade</h2>
 <ul>
 <li>
 <p>Modify the following properties in conf/datasource.properties.</p>
 </li>
 <li>
-<p>If you use MySQL as database to run DolphinScheduler, please comment out PostgreSQL releated configurations, and add mysql connector jar into lib dir, here we download mysql-connector-java-5.1.47.jar, and then correctly config database connect infoformation. You can download mysql connector jar <a href="https://downloads.MySQL.com/archives/c-j/">here</a>. Alternatively if you use Postgres as database, you just need to comment out Mysql related configurations, and correctly config data [...]
+<p>If you use MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL-related configurations, add the mysql connector jar into the lib dir (here we download mysql-connector-java-5.1.47.jar), and then correctly configure the database connection information. You can download the mysql connector jar <a href="https://downloads.MySQL.com/archives/c-j/">here</a>. Alternatively, if you use Postgres as database, you just need to comment out Mysql related configurations, and correctly config da [...]
 <pre><code class="language-properties"><span class="hljs-comment">  # postgre</span>
 <span class="hljs-comment">  #spring.datasource.driver-class-name=org.postgresql.Driver</span>
 <span class="hljs-comment">  #spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler</span>
@@ -41,20 +41,20 @@
 <p><code>sh ./script/upgrade-dolphinscheduler.sh</code></p>
 </li>
 </ul>
-<h2>5. Backend service upgrade.</h2>
-<h3>5.1 Modify the content in <code>conf/config/install_config.conf</code> file.</h3>
+<h2>5. Backend Service Upgrade.</h2>
+<h3>5.1 Modify the Content in the <code>conf/config/install_config.conf</code> File.</h3>
 <ul>
 <li>For Standalone Deployment, please refer to [6, Modify running arguments] in <a href="/en-us/docs/1.3.8/user_doc/standalone-deployment.html">Standalone-Deployment</a>.</li>
 <li>For Cluster Deployment, please refer to [6, Modify running arguments] in <a href="/en-us/docs/1.3.8/user_doc/cluster-deployment.html">Cluster-Deployment</a>.</li>
 </ul>
-<h4>Masters need attentions</h4>
+<h4>Masters Need Attention</h4>
 <p>Creating a worker group in version 1.3.1 has a different design:</p>
 <ul>
-<li>Brfore version 1.3.1 worker group can be created through UI interface.</li>
+<li>Before version 1.3.1, a worker group could be created through the UI interface.</li>
 <li>Since version 1.3.1, a worker group can be created by modifying the worker configuration.</li>
 </ul>
-<h4>When upgrade from version before 1.3.1 to 1.3.2, below operations are what we need to do to keep worker group config consist with previous.</h4>
-<p>1, Go to the backup database, search records in t_ds_worker_group table, mainly focus id, name and ip these three columns.</p>
+<h4>When Upgrading from a Version Before 1.3.1 to 1.3.2, the Following Operations Are Needed to Keep the Worker Group Config Consistent with the Previous Version</h4>
+<p>1. Go to the backup database, and search records in the t_ds_worker_group table, mainly focusing on the id, name and ip columns.</p>
 <table>
 <thead>
 <tr>
@@ -100,13 +100,13 @@
 </tr>
 </tbody>
 </table>
-<p>To keep worker group config consistent with previous version, we need to modify workers config item as below:</p>
-<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash">worker service is deployed on <span class="hljs-built_in">which</span> machine, and also specify <span class="hljs-built_in">which</span> worker group this worker belong to.</span> 
+<p>To keep the worker group config consistent with the previous version, we need to modify the workers config item as below:</p>
+<pre><code class="language-shell"><span class="hljs-meta">#</span><span class="bash">specifies <span class="hljs-built_in">which</span> machine the worker service is deployed on, and also <span class="hljs-built_in">which</span> worker group this worker belongs to.</span>
 workers=&quot;ds1:service1,ds2:service2,ds3:service2&quot;
 </code></pre>
-<h4>The worker group has been enhanced in version 1.3.2.</h4>
+<h4>The Worker Group Has Been Enhanced in Version 1.3.2</h4>
 <p>A worker in 1.3.1 can't belong to more than one worker group; in 1.3.2 this is supported. So workers=&quot;ds1:service1,ds1:service2&quot; is not supported in 1.3.1, but is supported in 1.3.2.</p>
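 <p>For instance, a sketch where host ds1 joins two worker groups at once, which is valid from 1.3.2 onward:</p>
 <pre><code class="language-shell">workers=&quot;ds1:service1,ds1:service2,ds2:service2&quot;
 </code></pre>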
-<h3>5.2 Execute deploy script.</h3>
+<h3>5.2 Execute the Deploy Script.</h3>
 <pre><code class="language-shell">`sh install.sh`
 </code></pre>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><a href="/en-us/community/development/subscribe.html"><p>Email List</p></a></li><li><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterbl [...]
diff --git a/en-us/docs/latest/user_doc/upgrade.json b/en-us/docs/latest/user_doc/upgrade.json
index fb3c65d..bfbf29a 100644
--- a/en-us/docs/latest/user_doc/upgrade.json
+++ b/en-us/docs/latest/user_doc/upgrade.json
@@ -1,6 +1,6 @@
 {
   "filename": "upgrade.md",
-  "__html": "<h1>DolphinScheduler upgrade documentation</h1>\n<h2>1. Back up previous version's files and database.</h2>\n<h2>2. Stop all services of DolphinScheduler.</h2>\n<p><code>sh ./script/stop-all.sh</code></p>\n<h2>3. Download the new version's installation package.</h2>\n<ul>\n<li><a href=\"/en-us/download/download.html\">Download</a> the latest version of the installation packages.</li>\n<li>The following upgrade operations need to be performed in the new version's directory.</ [...]
+  "__html": "<h1>DolphinScheduler upgrade documentation</h1>\n<h2>1. Back Up Previous Version's Files and Database.</h2>\n<h2>2. Stop All Services of DolphinScheduler.</h2>\n<p><code>sh ./script/stop-all.sh</code></p>\n<h2>3. Download the New Version's Installation Package.</h2>\n<ul>\n<li><a href=\"/en-us/download/download.html\">Download</a> the latest version of the installation packages.</li>\n<li>The following upgrade operations need to be performed in the new version's directory.</ [...]
   "link": "/dist/en-us/docs/1.3.8/user_doc/upgrade.html",
   "meta": {}
 }
\ No newline at end of file