Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/03/16 10:10:08 UTC

[dolphinscheduler-website] branch master updated: Proofreading dev documents under /user_doc/guide (#738)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 1a7cac9  Proofreading dev documents under /user_doc/guide (#738)
1a7cac9 is described below

commit 1a7cac94f5a521a83abfd16949c2206faee8812d
Author: Tq <ti...@gmail.com>
AuthorDate: Wed Mar 16 18:10:02 2022 +0800

    Proofreading dev documents under /user_doc/guide (#738)
---
 .../dev/user_doc/guide/expansion-reduction.md      |  70 ++++++++--------
 docs/en-us/dev/user_doc/guide/flink-call.md        |  17 ++--
 docs/en-us/dev/user_doc/guide/introduction.md      |   2 +-
 docs/en-us/dev/user_doc/guide/monitor.md           |  12 +--
 docs/en-us/dev/user_doc/guide/open-api.md          |  25 +++---
 docs/en-us/dev/user_doc/guide/quick-start.md       |  26 +++---
 docs/en-us/dev/user_doc/guide/resource.md          |  91 ++++++++++-----------
 docs/en-us/dev/user_doc/guide/security.md          |  76 ++++++++---------
 docs/en-us/dev/user_doc/guide/upgrade.md           |  31 +++----
 docs/zh-cn/dev/user_doc/guide/introduction.md      |   2 +-
 img/video_cover/quick-use.png                      | Bin 0 -> 546414 bytes
 11 files changed, 179 insertions(+), 173 deletions(-)

diff --git a/docs/en-us/dev/user_doc/guide/expansion-reduction.md b/docs/en-us/dev/user_doc/guide/expansion-reduction.md
index 62fbd20..f34acb5 100644
--- a/docs/en-us/dev/user_doc/guide/expansion-reduction.md
+++ b/docs/en-us/dev/user_doc/guide/expansion-reduction.md
@@ -1,15 +1,17 @@
 # DolphinScheduler Expansion and Reduction
 
 ## Expansion 
+
 This article describes how to add a new master service or worker service to an existing DolphinScheduler cluster.
+
 ```
  Attention: There cannot be more than one master service process or worker service process on a physical machine.
-       If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [1.4 Modify configuration] Edit the configuration file `conf/config/install_config.conf` on **all ** nodes, add masters or workers parameter, and restart the scheduling cluster.
+       If the physical machine hosting the expansion master or worker node already has the scheduler service installed, check [1.4 Modify configuration], edit the configuration file `conf/config/install_config.conf` on **all** nodes, add the masters or workers parameter, and restart the scheduling cluster.
 ```
 
 ### Basic software installation
 
-* [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+):Must be installed, please install and configure JAVA_HOME and PATH variables under /etc/profile
+* [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (version 1.8+): must be installed; please install and configure the `JAVA_HOME` and `PATH` variables under `/etc/profile`
 * [optional] If the expansion is a worker node, you need to consider whether to install an external client, such as Hadoop, Hive, Spark Client.
 
 
@@ -18,10 +20,12 @@ This article describes how to add a new master service or worker service to an e
 ```
 
 ### Get Installation Package
-- Check which version of DolphinScheduler is used in your existing environment, and get the installation package of the corresponding version, if the versions are different, there may be compatibility problems.
-- Confirm the unified installation directory of other nodes, this article assumes that DolphinScheduler is installed in /opt/ directory, and the full path is /opt/dolphinscheduler.
-- Please download the corresponding version of the installation package to the server installation directory, uncompress it and rename it to dolphinscheduler and store it in the /opt directory. 
-- Add database dependency package, this article uses Mysql database, add mysql-connector-java driver package to /opt/dolphinscheduler/lib directory.
+
+- Check the version of DolphinScheduler used in your existing environment and get the installation package of the corresponding version; if the versions are different, there may be compatibility problems.
+- Confirm the unified installation directory of the other nodes. This article assumes that DolphinScheduler is installed in the `/opt/` directory, and the full path is `/opt/dolphinscheduler`.
+- Please download the corresponding version of the installation package to the server installation directory, uncompress it, rename it to `dolphinscheduler` and store it in the `/opt` directory.
+- Add the database dependency package. This document uses the MySQL database, so add the `mysql-connector-java` driver package to the `/opt/dolphinscheduler/lib` directory.
+
 ```shell
 # create the installation directory, please do not create the installation directory in /root, /home and other high privilege directories 
 mkdir -p /opt
@@ -33,18 +37,18 @@ mv apache-dolphinscheduler-1.3.8-bin  dolphinscheduler
 ```
 
 ```markdown
- Attention: The installation package can be copied directly from an existing environment to an expanded physical machine for use.
+ Attention: You can copy the installation package directly from an existing environment to an expanded physical machine.
 ```
 
 ### Create Deployment Users
 
-- Create deployment users on **all** expansion machines, and be sure to configure sudo-free. If we plan to deploy scheduling on four expansion machines, ds1, ds2, ds3, and ds4, we first need to create deployment users on each machine
+- Create a deployment user on **all** expansion machines, and make sure to configure passwordless sudo. If we plan to deploy scheduling on four expansion machines, ds1, ds2, ds3, and ds4, creating deployment users on each machine is a prerequisite.
 
 ```shell
-# to create a user, you need to log in with root and set the deployment user name, please modify it yourself, later take dolphinscheduler as an example
+# to create a user, you need to log in as root and set the deployment user name; modify it yourself, the following takes `dolphinscheduler` as an example:
 useradd dolphinscheduler;
 
-# set the user password, please change it by yourself, later take dolphinscheduler123 as an example
+# set the user password, please change it yourself; the following takes `dolphinscheduler123` as an example
 echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
 
 # configure sudo password-free
@@ -55,14 +59,14 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
 
 ```markdown
  Attention:
- - Since it is sudo -u {linux-user} to switch between different Linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
- - If you find the line "Default requiretty" in the /etc/sudoers file, please also comment it out.
- - If resource uploads are used, you also need to assign read and write permissions to the deployment user on `HDFS or MinIO`.
+ - Since it is `sudo -u {linux-user}` to switch between different Linux users to run multi-tenant jobs, the deploying user needs to have sudo privileges and be password free.
+ - If you find the line `Default requiretty` in the `/etc/sudoers` file, please also comment it out.
+ - If you need to use resource uploads, you also need to assign read and write permissions to the deployment user on `HDFS or MinIO`.
 ```
 
 ### Modify Configuration
 
-- From an existing node such as Master/Worker, copy the conf directory directly to replace the conf directory in the new node. After copying, check if the configuration items are correct.
+- From an existing node such as `Master/Worker`, copy the configuration directory directly to replace the configuration directory in the new node. After finishing the file copy, check whether the configuration items are correct.
     
     ```markdown
     Highlights:
@@ -72,7 +76,7 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
     env/dolphinscheduler_env.sh: environment Variables
     ````
 
-- Modify the `dolphinscheduler_env.sh` environment variable in the conf/env directory according to the machine configuration (take the example that the software used is installed in /opt/soft)
+- Modify the `dolphinscheduler_env.sh` environment variable in the `conf/env` directory according to the machine configuration (the following example assumes that all the software used is installed under `/opt/soft`)
 
     ```shell
         export HADOOP_HOME=/opt/soft/hadoop
@@ -88,10 +92,10 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
     
     ```
 
-    `Attention: This step is very important, such as JAVA_HOME and PATH is necessary to configure, not used can be ignored or commented out`
+    `Attention: This step is very important. For example, JAVA_HOME and PATH must be configured; variables not in use can be ignored or commented out`
 
 
-- Softlink the JDK to /usr/bin/java (still using JAVA_HOME=/opt/soft/java as an example)
+- Soft link the `JDK` to `/usr/bin/java` (still using `JAVA_HOME=/opt/soft/java` as an example)
 
     ```shell
     sudo ln -s /opt/soft/java/bin/java /usr/bin/java
@@ -99,8 +103,8 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' /etc/sudoers
 
  - Modify the configuration file `conf/config/install_config.conf` on the **all** nodes, synchronizing the following configuration.
     
-    * To add a new master node, you need to modify the ips and masters parameters.
-    * To add a new worker node, modify the ips and workers parameters.
+    * To add a new master node, you need to modify the `ips` and `masters` parameters.
+    * To add a new worker node, modify the `ips` and `workers` parameters.
 
 ```shell
 # which machines to deploy DS services on, separated by commas between multiple physical machines
@@ -116,9 +120,9 @@ masters="existing master01,existing master02,ds1,ds2"
 workers="existing worker01:default,existing worker02:default,ds3:default,ds4:default"
 
 ```
-- If the expansion is for worker nodes, you need to set the worker group. Please refer to the security [Worker grouping](./security.md)
+- If the expansion is for worker nodes, you need to set the worker group; refer to [Worker grouping](./security.md) in the security section
 
-- On all new nodes, change the directory permissions so that the deployment user has access to the dolphinscheduler directory
+- On all new nodes, change the directory permissions so that the deployment user has access to the DolphinScheduler directory
 
 ```shell
 sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
@@ -126,7 +130,7 @@ sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
 
 ### Restart the Cluster and Verify
 
-- restart the cluster
+- Restart the cluster
 
 ```shell
 # stop command:
@@ -150,11 +154,10 @@ sh bin/dolphinscheduler-daemon.sh start alert-server   # start alert  service
 ```
 
 ```
- Attention: When using stop-all.sh or stop-all.sh, if the physical machine executing the command is not configured to be ssh-free on all machines, it will prompt for the password
+ Attention: When using `stop-all.sh` or `start-all.sh`, if the physical machine executing the command is not configured for passwordless SSH on all machines, it will prompt for the password
 ```
 
-
-- After the script is completed, use the `jps` command to see if each node service is started (`jps` comes with the `Java JDK`)
+- After the script completes, use the `jps` command to check whether every node service has started (`jps` comes with the `Java JDK`)
 
 ```
     MasterServer         ----- master service
@@ -163,7 +166,7 @@ sh bin/dolphinscheduler-daemon.sh start alert-server   # start alert  service
     AlertServer          ----- alert  service
 ```
 
-After successful startup, you can view the logs, which are stored in the logs folder.
+After successful startup, you can view the logs, which are stored in the `logs` folder.
 
 ```Log Path
  logs/
@@ -172,7 +175,8 @@ After successful startup, you can view the logs, which are stored in the logs fo
     ├── dolphinscheduler-worker-server.log
     ├── dolphinscheduler-api-server.log
 ```
-If the above services are started normally and the scheduling system page is normal, check whether there is an expanded Master or Worker service in the [Monitor] of the web system. If it exists, the expansion is complete.
+
+If the above services start normally and the scheduling system page is normal, check whether there is an expanded Master or Worker service in the [Monitor] of the web system. If it exists, the expansion is complete.
 
 -----------------------------------------------------------------------------
 
@@ -184,7 +188,7 @@ There are two steps for shrinking. After performing the following two steps, the
 ### Stop the Service on the Scaled-Down Node
 
  * If you are scaling down the master node, identify the physical machine where the master service is located, and stop the master service on the physical machine.
- * If the worker node is scaled down, determine the physical machine where the worker service is to be scaled down and stop the worker services on the physical machine.
+ * If you are scaling down the worker node, determine the physical machine where the worker service to be scaled down is located, and stop the worker service on that physical machine.
  
 ```shell
 # stop command:
@@ -207,10 +211,10 @@ sh bin/dolphinscheduler-daemon.sh start alert-server  # start alert  service
 ```
 
 ```
- Attention: When using stop-all.sh or stop-all.sh, if the machine without the command is not configured to be ssh-free for all machines, it will prompt for the password.
+ Attention: When using `stop-all.sh` or `start-all.sh`, if the machine executing the command is not configured for passwordless SSH on all machines, it will prompt for the password
 ```
 
-- After the script is completed, use the `jps` command to see if each node service was successfully shut down (`jps` comes with the `Java JDK`)
+- After the script completes, use the `jps` command to check whether every node service has been successfully shut down (`jps` comes with the `Java JDK`)
 
 ```
     MasterServer         ----- master service
@@ -218,15 +222,15 @@ sh bin/dolphinscheduler-daemon.sh start alert-server  # start alert  service
     ApiApplicationServer ----- api    service
     AlertServer          ----- alert  service
 ```
-If the corresponding master service or worker service does not exist, then the master/worker service is successfully shut down.
+If the corresponding master service or worker service does not exist, then the master or worker service is successfully shut down.
 
 
 ### Modify the Configuration File
 
  - modify the configuration file `conf/config/install_config.conf` on the **all** nodes, synchronizing the following configuration.
     
-    * to scale down the master node, modify the ips and masters parameters.
-    * to scale down worker nodes, modify the ips and workers parameters.
+    * to scale down the master node, modify the `ips` and `masters` parameters.
+    * to scale down worker nodes, modify the `ips` and `workers` parameters.
 
 ```shell
 # which machines to deploy DS services on, "localhost" for this machine
diff --git a/docs/en-us/dev/user_doc/guide/flink-call.md b/docs/en-us/dev/user_doc/guide/flink-call.md
index d6f22d1..5f71b69 100644
--- a/docs/en-us/dev/user_doc/guide/flink-call.md
+++ b/docs/en-us/dev/user_doc/guide/flink-call.md
@@ -2,7 +2,7 @@
 
 ## Create a Queue
 
-1. Log in to the scheduling system, click "Security", then click "Queue manage" on the left, and click "Create queue" to create a queue.
+1. Log in to the scheduling system, click `Security`, then click `Queue manage` on the left, and click `Create queue` to create a queue.
 2. Fill in the name and value of the queue, and click "Submit" 
 
 <p align="center">
@@ -12,8 +12,8 @@
 ## Create a Tenant 
 
 ```
-1. The tenant corresponds to a Linux user, which the user worker uses to submit jobs. If Linux OS environment does not have this user, the worker will create this user when executing the script.
-2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person has a name and id number.  
+1. The tenant corresponds to a Linux user, which the user worker uses to submit jobs. If the Linux OS environment does not have this user, the worker will create this user when executing the script.
+2. Both the tenant and the tenant code are unique and cannot be repeated, just like a person only has one name and one ID number.  
 3. After creating a tenant, there will be a folder in the HDFS relevant directory.  
 ```
 
@@ -29,20 +29,20 @@
 
 ## Create a Token
 
-1. Log in to the scheduling system, click "Security", then click "Token manage" on the left, and click "Create token" to create a token.
+1. Log in to the scheduling system, click `Security`, then click `Token manage` on the left, and click `Create token` to create a token.
 
 <p align="center">
    <img src="/img/token-management-en.png" width="80%" />
  </p>
 
 
-2. Select the "Expiration time" (Token validity), select "User" (to perform the API operation with the specified user), click "Generate token", copy the Token string, and click "Submit"
+2. Select the `Expiration time` (token validity time), select `User` (choose the specified user to perform the API operation), click `Generate token`, copy the `Token` string, and click `Submit`.
 
 <p align="center">
    <img src="/img/create-token-en1.png" width="80%" />
  </p>
 
-## Use Token
+## Token Usage
 
 1. Open the API documentation page
 
@@ -53,12 +53,11 @@
  </p>
 
 
-2. Select a test API, the API selected for this test: queryAllProjectList
+2. Select a test API; the API selected for this test is `queryAllProjectList`
 
    > projects/query-project-list
-   >                                                                  >
 
-3. Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result
+3. Open `Postman`, fill in the API address, enter the `Token` in `Headers`, and then send the request to view the result:
 
    ```
    token: The Token just generated
diff --git a/docs/en-us/dev/user_doc/guide/introduction.md b/docs/en-us/dev/user_doc/guide/introduction.md
index 052267d..412a326 100644
--- a/docs/en-us/dev/user_doc/guide/introduction.md
+++ b/docs/en-us/dev/user_doc/guide/introduction.md
@@ -1,3 +1,3 @@
 # User Manual
 
-User Manual show you how to play with DolphinScheduler, if you do not installed, please see [Quick Start](./quick-start.md) to install DolphinScheduler before going forward.
\ No newline at end of file
+The user manual shows the common operations of DolphinScheduler. If you haven't installed DolphinScheduler yet, refer to [Quick Start](./quick-start.md) to install it before going forward.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/monitor.md b/docs/en-us/dev/user_doc/guide/monitor.md
index 4606abd..f113cfa 100644
--- a/docs/en-us/dev/user_doc/guide/monitor.md
+++ b/docs/en-us/dev/user_doc/guide/monitor.md
@@ -2,7 +2,7 @@
 
 ## Service Management
 
-- Service management is mainly to monitor and display the health status and basic information of each service in the system
+- Service management mainly monitors and displays the health status and basic information of each service in the system.
 
 ## Monitor Master Server
 
@@ -29,7 +29,7 @@
 
 ## Monitor DB
 
-- Mainly the health of the DB
+- Mainly the health status of the DB.
 
 <p align="center">
    <img src="/img/mysql-jk-en.png" width="80%" />
@@ -41,7 +41,7 @@
    <img src="/img/statistics-en.png" width="80%" />
  </p>
 
-- Number of commands to be executed: statistics on the t_ds_command table
-- The number of failed commands: statistics on the t_ds_error_command table
-- Number of tasks to run: Count the data of task_queue in ZooKeeper
-- Number of tasks to be killed: Count the data of task_kill in ZooKeeper
\ No newline at end of file
+- Number of commands waiting to be executed: statistics of the `t_ds_command` table data.
+- Number of failed commands: statistics of the `t_ds_error_command` table data.
+- Number of tasks waiting to run: count of the `task_queue` data in ZooKeeper.
+- Number of tasks waiting to be killed: count of the `task_kill` data in ZooKeeper.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/open-api.md b/docs/en-us/dev/user_doc/guide/open-api.md
index 2210724..1bac59b 100644
--- a/docs/en-us/dev/user_doc/guide/open-api.md
+++ b/docs/en-us/dev/user_doc/guide/open-api.md
@@ -1,34 +1,36 @@
 # Open API
 
 ## Background
-Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.
 
-## The Operation Steps of DS API Calls
+Generally, projects and processes are created through pages, but integration with third-party systems requires API calls to manage projects and workflows.
+
+## The Operation Steps of DolphinScheduler API Calls
 
 ### Create a Token
+
 1. Log in to the scheduling system, click "Security", then click "Token manage" on the left, and click "Create token" to create a token.
 
 <p align="center">
    <img src="/img/token-management-en.png" width="80%" />
  </p>
 
-2. Select the "Expiration time" (Token validity), select "User" (to perform the API operation with the specified user), click "Generate token", copy the Token string, and click "Submit"
+2. Select the "Expiration time" (Token validity time), select "User" (choose the specified user to perform the API operation), click "Generate token", copy the `Token` string, and click "Submit".
 
 <p align="center">
    <img src="/img/create-token-en1.png" width="80%" />
  </p>
 
-### Use Token
+### Token Usage
+
 1. Open the API documentation page
-    > Address:http://{api server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
+    > Address:http://{API server ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
 <p align="center">
    <img src="/img/api-documentation-en.png" width="80%" />
  </p>
  
-2. select a test API, the API selected for this test: queryAllProjectList
+2. Select a test API; the API selected for this test is `queryAllProjectList`
     > projects/query-project-list
-                                                                             >
-3. Open Postman, fill in the API address, and enter the Token in Headers, and then send the request to view the result
+3. Open `Postman`, fill in the API address, enter the `Token` in `Headers`, and then send the request to view the result:
     ```
     token: The Token just generated
     ```
@@ -37,6 +39,7 @@ Generally, projects and processes are created through pages, but integration wit
  </p>  
 
 ### Create a Project
+
 Here is an example of creating a project named "wudl-flink-test":
 <p align="center">
    <img src="/img/api/create_project1.png" width="80%" />
@@ -49,11 +52,11 @@ Here is an example of creating a project named "wudl-flink-test":
 <p align="center">
    <img src="/img/api/create_project3.png" width="80%" />
  </p>
-The returned msg information is "success", indicating that we have successfully created the project through API.
+The returned `msg` information is "success", indicating that we have successfully created the project through API.
 
-If you are interested in the source code of the project, please continue to read the following:
+If you are interested in the source code of creating a project, please continue to read the following:
 
-### Appendix:The Source Code of Creating a Project
+### Appendix: The Source Code of Creating a Project
 
 <p align="center">
    <img src="/img/api/create_source1.png" width="80%" />
diff --git a/docs/en-us/dev/user_doc/guide/quick-start.md b/docs/en-us/dev/user_doc/guide/quick-start.md
index 771e6ac..8ce9bd7 100644
--- a/docs/en-us/dev/user_doc/guide/quick-start.md
+++ b/docs/en-us/dev/user_doc/guide/quick-start.md
@@ -1,44 +1,48 @@
 # Quick Start
 
+* Watch the Apache DolphinScheduler Quick Start tutorial here:
+  [![image](/img/video_cover/quick-use.png)](https://www.youtube.com/watch?v=nrF20hpCkug)
+
+
 * Administrator user login
 
-  > Address:http://localhost:12345/dolphinscheduler  Username and password: admin/dolphinscheduler123
+  > Address:http://localhost:12345/dolphinscheduler  Username and password: `admin/dolphinscheduler123`
 
 <p align="center">
    <img src="/img/login_en.png" width="60%" />
  </p>
 
-* Create queue
+* Create a queue
 
 <p align="center">
    <img src="/img/create-queue-en.png" width="60%" />
  </p>
 
-* Create tenant
+* Create a tenant
 
 <p align="center">
   <img src="/img/create-tenant-en.png" width="60%" />
 </p>
 
-  * Creating Ordinary Users
+* Create ordinary users
 <p align="center">
       <img src="/img/create-user-en.png" width="60%" />
  </p>
 
-  * Create an alarm group
+* Create an alarm group
 
  <p align="center">
     <img src="/img/alarm-group-en.png" width="60%" />
   </p>
 
   
-  * Create a worker group
+* Create a worker group
   
    <p align="center">
       <img src="/img/worker-group-en.png" width="60%" />
     </p>
 
-   * Create environment
+* Create an environment
 
    <p align="center">
     <img src="/img/create-environment.png" width="60%" />
@@ -51,21 +55,21 @@
 </p>
      
   
-  * Login with regular users
+* Log in with regular users
   > Click on the user name in the upper right corner to "exit" and re-use the normal user login.
 
-  * Project Management - > Create Project - > Click on Project Name
+* `Project Management - > Create Project - > Click on Project Name`
 <p align="center">
       <img src="/img/create_project_en.png" width="60%" />
  </p>
 
-  * Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
+* `Click Workflow Definition - > Create Workflow Definition - > Online Process Definition`
 
 <p align="center">
    <img src="/img/process_definition_en.png" width="60%" />
  </p>
 
-  * Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
+* `Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log`
 
  <p align="center">
    <img src="/img/log_en.png" width="60%" />
diff --git a/docs/en-us/dev/user_doc/guide/resource.md b/docs/en-us/dev/user_doc/guide/resource.md
index 26e7bf8..f43dabf 100644
--- a/docs/en-us/dev/user_doc/guide/resource.md
+++ b/docs/en-us/dev/user_doc/guide/resource.md
@@ -1,24 +1,24 @@
 # Resource Center
 
-If you want to use the resource upload function, you can select the local file directory for a single machine(this operation does not need to deploy Hadoop). Or you can also upload to a Hadoop or MinIO cluster, at this time, you need to have Hadoop (2.6+) or MinIO and other related environments
+If you want to use the resource upload function, you can designate the local file directory as the upload directory for a single machine (this operation does not need to deploy Hadoop). Alternatively, you can upload to a Hadoop or MinIO cluster; in this case, you need to have a Hadoop (2.6+), MinIO or other related environment.
 
 > **_Note:_**
 >
-> * If the resource upload function is used, the deployment user in [installation and deployment](installation/standalone.md) must to have operation authority
-> * If you using Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise Skip step
+> * If you want to use the resource upload function, the deployment user in [installation and deployment](installation/standalone.md) must have relevant operation authority.
+> * If you are using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`; otherwise, skip this copy step.
 
 ## HDFS Resource Configuration
 
-- Upload resource files and udf functions, all uploaded files and resources will be stored on hdfs, so the following configuration items are required:
+- Upload resource files and UDF functions; all uploaded files and resources will be stored on HDFS, so the following configuration items are required:
 
-```
-conf/common/common.properties
+```
+conf/common.properties
     # Users who have permission to create directories under the HDFS root path
     hdfs.root.user=hdfs
-    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。"/escheduler" is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
+    # data base dir, resource files will be stored in this hadoop hdfs path; self-configured, please make sure the directory exists on hdfs and has read and write permissions. "/dolphinscheduler" is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type : HDFS,S3,NONE
+    resource.storage.type=HDFS
     # whether kerberos starts
     hadoop.security.authentication.startup.state=false
     # java.security.krb5.conf path
@@ -26,32 +26,28 @@ conf/common/common.properties
     # loginUserFromKeytab user
     login.user.keytab.username=hdfs-mycluster@ESZ.COM
     # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-
-conf/common/hadoop.properties
-    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
-    # to the conf directory,support s3,for example : s3a://dolphinscheduler
-    fs.defaultFS=hdfs://mycluster:8020
+    login.user.keytab.path=/opt/hdfs.headless.keytab
+    # if resource.storage.type is HDFS and your Hadoop cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, they are placed under /opt/soft/dolphinscheduler/conf; also configure the namenode cluster name. If the NameNode is not HA, modify it to a specific IP or host name.
+    # if resource.storage.type is S3, write the S3 address, for example: s3a://dolphinscheduler
+    # Note: for S3, be sure to create the root directory /dolphinscheduler
+    fs.defaultFS=hdfs://mycluster:8020
     #resourcemanager ha note this need ips , this empty if single
-    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
+    yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
     # If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
     yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
 
 ```
 
-- Only one address needs to be configured for yarn.resourcemanager.ha.rm.ids and yarn.application.status.address, and the other address is empty.
-- You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project, and restart the api-server service.
-
 ## File Management
 
-> It is the management of various resource files, including creating basic txt/log/sh/conf/py/java and other files, uploading jar packages and other types of files, and can do edit, rename, download, delete and other operations.
+> It is the management of various resource files, including creating basic `txt/log/sh/conf/py/java` files, uploading jar packages and other types of files, and performing edit, rename, download, delete and other operations on the files.
 
   <p align="center">
    <img src="/img/file-manage-en.png" width="80%" />
  </p>
 
 - Create a file
-  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties
+  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties.
 
 <p align="center">
    <img src="/img/file_create_en.png" width="80%" />
@@ -59,7 +55,7 @@ conf/common/hadoop.properties
 
 - upload files
 
+> Upload file: Click the "Upload File" button to upload, or drag the file to the upload area; the file name field is automatically filled with the name of the uploaded file.
+> Upload file: Click the "Upload File" button to upload, drag the file to the upload area, the file name will be automatically completed with the uploaded file name.
 
 <p align="center">
    <img src="/img/file-upload-en.png" width="80%" />
@@ -67,15 +63,15 @@ conf/common/hadoop.properties
 
 - File View
 
-> For the file types that can be viewed, click the file name to view the file details
+> For the files that can be viewed, click the file name to view the file details.
 
 <p align="center">
    <img src="/img/file_detail_en.png" width="80%" />
  </p>
 
-- download file
+- Download file
 
-> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file
+> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file.
 
 - File rename
 
@@ -84,11 +80,11 @@ conf/common/hadoop.properties
  </p>
 
 - delete
-  > File list -> Click the "Delete" button to delete the specified file
+  > File list -> Click the "Delete" button to delete the specified file.
 
 - Re-upload file
 
-  > Re-upload file: Click the "Re-upload File" button to upload a new file to replace the old file, drag the file to the re-upload area, the file name will be automatically completed with the new file name
+  > Re-upload file: Click the "Re-upload File" button to upload a new file to replace the old file, or drag the file to the re-upload area; the file name field is automatically filled with the name of the newly uploaded file.
 
     <p align="center">
       <img src="/img/reupload_file_en.png" width="80%" />
@@ -99,22 +95,22 @@ conf/common/hadoop.properties
 
 ### Resource Management
 
-> The resource management and file management functions are similar. The difference is that the resource management is the uploaded UDF function, and the file management uploads the user program, script and configuration file.
+> Resource management and file management are similar functions. The difference is that resource management is for uploading UDF resources, while file management is for uploading user programs, scripts and configuration files.
 > Operation function: rename, download, delete.
 
-- Upload udf resources
+- Upload UDF resources
   > Same as uploading files.
 
 ### Function Management
 
 - Create UDF function
-  > Click "Create UDF Function", enter the udf function parameters, select the udf resource, and click "Submit" to create the udf function.
+  > Click "Create UDF Function", enter the UDF function parameters, select the UDF resource, and click "Submit" to create the UDF function.
 
-> Currently only supports temporary UDF functions of HIVE
+> Currently, only temporary Hive UDF functions are supported.
 
-- UDF function name: the name when the UDF function is entered
-- Package name Class name: Enter the full path of the UDF function
-- UDF resource: Set the resource file corresponding to the created UDF
+- UDF function name: enter the name of the UDF function.
+- Package name Class name: enter the fully qualified class name of the UDF function.
+- UDF resource: set the resource file corresponding to the created UDF function.
 
 <p align="center">
    <img src="/img/udf_edit_en.png" width="80%" />
@@ -122,7 +118,7 @@ conf/common/hadoop.properties
  
 ## Task Group Settings
 
-The task group is mainly used to control the concurrency of task instances, and is designed to control the pressure of other resources (it can also control the pressure of the Hadoop cluster, the cluster will have queue control it). When creating a new task definition, you can configure the corresponding task group and configure the priority of the task running in the task group. 
+The task group is mainly used to control the concurrency of task instances and is designed to limit the pressure on other resources (it can also limit the pressure on the Hadoop cluster, although the cluster has its own queue control). When creating a new task definition, you can assign the task to a task group and configure the priority of the task within that group.
 
 ### Task Group Configuration 
 
@@ -132,19 +128,19 @@ The task group is mainly used to control the concurrency of task instances, and
     <img src="/img/task_group_manage_eng.png" width="80%" />
 </p>
 
-The user clicks [Resources] - [Task Group Management] - [Task Group option] - create task Group 
+The user clicks [Resources] - [Task Group Management] - [Task Group option] - [Create Task Group] 
 
 <p align="center">
 <img src="/img/task_group_create_eng.png" width="80%" />
 </p> 
 
-You need to enter the information in the picture:
+You need to enter the information shown in the picture:
 
-[Task group name]: The name displayed when the task group is used
+- Task group name: the displayed name of the task group
 
-[Project name]: The project that the task group functions, this item is optional, if not selected, all the projects in the whole system can use this task group.
+- Project name: the project scope in which the task group applies; this item is optional, and if no project is selected, all projects in the system can use this task group.
 
-[Resource pool size]: The maximum number of concurrent task instances allowed 
+- Resource pool size: The maximum number of concurrent task instances allowed.
 
 #### View Task Group Queue 
 
@@ -152,7 +148,7 @@ You need to enter the information in the picture:
     <img src="/img/task_group_conf_eng.png" width="80%" />
 </p>
 
-Click the button to view task group usage information 
+Click the button to view task group usage information:
 
 <p align="center">
     <img src="/img/task_group_queue_list_eng.png" width="80%" />
@@ -160,18 +156,17 @@ Click the button to view task group usage information
 
 #### Use of Task Groups 
 
-Note: The use of task groups is applicable to tasks executed by workers, such as [switch] nodes, [condition] nodes, [sub_process] and other node types executed by the master are not controlled by the task group. Let's take the shell node as an example: 
+**Note**: Task groups apply only to tasks executed by workers; node types executed by the master, such as [switch], [condition] and [sub_process] nodes, are not controlled by the task group. Let's take the shell node as an example:
 
 <p align="center">
     <img src="/img/task_group_use_eng.png" width="80%" />
 </p>        
 
+Regarding the configuration of the task group, all you need to do is to configure these parts in the red box:
 
-Regarding the configuration of the task group, all you need to do is to configure the part in the red box:
-
-[Task group name] : The task group name displayed on the task group configuration page. Here you can only see the task group that the project has permission to (the project is selected when creating a task group), or the task group that acts globally (the new task group is created). when no item is selected) 
+- Task group name: the task group name displayed on the task group configuration page. Here you can only see the task groups that the project has permission to access (a project was selected when creating the task group) or the task groups with global scope (no project was selected when creating the task group).
 
-[Priority] : When there is a waiting resource, the task with high priority will be distributed to the worker by the master first. The larger the value of this part, the higher the priority. 
+- Priority: When there is a waiting resource, the task with high priority will be distributed to the worker by the master first. The larger the value of this part, the higher the priority. 
 
 ### Implementation Logic of Task Group 
 
@@ -181,7 +176,7 @@ The master judges whether the task is configured with a task group when distribu
 
 #### Release and Wake Up: 
 
-When the task that has obtained the task group resource ends, the task group resource will be released. After the release, it will check whether there is a task waiting in the current task group. If there is, mark the task with the best priority to run, and create a new executable event. . The event stores the task id that is marked to obtain the resource, and then obtains the task group resource and then runs it. 
+When a task that has occupied a task group resource finishes, the task group resource is released. After the release, DolphinScheduler checks whether any task is waiting in the current task group; if so, it marks the waiting task with the best priority to run and creates a new executable event. The event stores the ID of the task marked to acquire the resource; that task then obtains the task group resource and runs.
 
 #### Task Group Flowchart 
 
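As an illustrative sketch of the HDFS resource-storage setup described at the top of this file's changes (copy the Hadoop client configs into the DolphinScheduler conf directory, then point `fs.defaultFS` at the NameNode cluster name), using temporary stand-in paths rather than a real install:

```shell
# Sketch only: HADOOP_CONF_DIR and DS_CONF_DIR are temporary stand-ins here;
# on a real deployment they would be your Hadoop conf dir and installPath/conf.
HADOOP_CONF_DIR=$(mktemp -d)
DS_CONF_DIR=$(mktemp -d)
printf '<configuration/>\n' > "$HADOOP_CONF_DIR/core-site.xml"
printf '<configuration/>\n' > "$HADOOP_CONF_DIR/hdfs-site.xml"
printf 'resource.storage.type=HDFS\nfs.defaultFS=hdfs://localhost:8020\n' > "$DS_CONF_DIR/common.properties"

# Step 1: for an HA NameNode, copy the Hadoop client configs into the DS conf dir.
cp "$HADOOP_CONF_DIR/core-site.xml" "$HADOOP_CONF_DIR/hdfs-site.xml" "$DS_CONF_DIR/"

# Step 2: point fs.defaultFS at the NameNode cluster name (use host:port if not HA).
sed -i 's|^fs.defaultFS=.*|fs.defaultFS=hdfs://mycluster:8020|' "$DS_CONF_DIR/common.properties"

grep '^fs.defaultFS=' "$DS_CONF_DIR/common.properties"
# prints: fs.defaultFS=hdfs://mycluster:8020
```

After applying the same change on a real install, restart the api-server service so the new storage settings take effect.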
diff --git a/docs/en-us/dev/user_doc/guide/security.md b/docs/en-us/dev/user_doc/guide/security.md
index 9e20dcb..52004a5 100644
--- a/docs/en-us/dev/user_doc/guide/security.md
+++ b/docs/en-us/dev/user_doc/guide/security.md
@@ -1,21 +1,21 @@
-# Security
+# Security (Authorization System)
 
-* Only the administrator account in the security center has the authority to operate. It has functions such as queue management, tenant management, user management, alarm group management, worker group management, token management, etc. In the user management module, resources, data sources, projects, etc. Authorization
-* Administrator login, default user name and password: admin/dolphinscheduler123
+* Only the administrator account in the security center has the authority to operate. It provides functions such as queue management, tenant management, user management, alarm group management, worker group management, token management, etc. In the user management module, the administrator can authorize resources, data sources, projects, etc. to users.
+* Administrator login, the default username and password is `admin/dolphinscheduler123`
 
 ## Create Queue
 
-- Queue is used when the "queue" parameter is needed to execute programs such as spark and mapreduce.
-- The administrator enters the Security Center->Queue Management page and clicks the "Create Queue" button to create a queue.
+- Queues are used when the `queue` parameter is needed to execute programs such as Spark and MapReduce.
+- The administrator enters the `Security Center->Queue Management` page and clicks the "Create Queue" button to create a new queue.
 <p align="center">
    <img src="/img/create-queue-en.png" width="80%" />
  </p>
 
 ## Add Tenant
 
-- The tenant corresponds to the Linux user, which is used by the worker to submit the job. Task will fail if Linux does not exists this user. You can set the parameter `worker.tenant.auto.create` as `true` in configuration file `worker.properties`. After that DolphinScheduler would create user if not exists, The property `worker.tenant.auto.create=true` requests worker run `sudo` command without password.
+- The tenant corresponds to the Linux user that the worker uses to submit the job. The task will fail if this user does not exist on Linux. You can set the parameter `worker.tenant.auto.create` to `true` in the configuration file `worker.properties`; DolphinScheduler will then create the user if it does not exist. Note that `worker.tenant.auto.create=true` requires the worker to be able to run `sudo` commands without a password.
 - Tenant Code: **The tenant code must be a unique Linux user and cannot be repeated**
-- The administrator enters the Security Center->Tenant Management page and clicks the "Create Tenant" button to create a tenant.
+- The administrator enters the `Security Center->Tenant Management` page and clicks the `Create Tenant` button to create a tenant.
 
  <p align="center">
     <img src="/img/addtenant-en.png" width="80%" />
@@ -25,45 +25,45 @@
 
 - Users are divided into **administrator users** and **normal users**
 
-  - The administrator has authorization and user management authority, but does not have the authority to create project and workflow definition operations.
-  - Ordinary users can create projects and create, edit, and execute workflow definitions.
-  - Note: If the user switches tenants, all resources under the tenant where the user belongs will be copied to the new tenant that is switched.
+  - The administrator has authorization and user management authorities, but does not have the authority to create projects or workflow definitions.
+  - Normal users can create projects and create, edit and execute workflow definitions.
+  - **Note**: If the user switches tenants, all resources under the tenant to which the user belongs will be copied to the new tenant that is switched.
 
-- The administrator enters the Security Center -> User Management page and clicks the "Create User" button to create a user.
+- The administrator enters the `Security Center -> User Management` page and clicks the `Create User` button to create a user.
 <p align="center">
    <img src="/img/user-en.png" width="80%" />
  </p>
 
 > **Edit user information**
 
-- The administrator enters the Security Center->User Management page and clicks the "Edit" button to edit user information.
-- After an ordinary user logs in, click the user information in the user name drop-down box to enter the user information page, and click the "Edit" button to edit the user information.
+- The administrator enters the `Security Center->User Management` page and clicks the `Edit` button to edit user information.
+- After a normal user logs in, click the user information in the username drop-down box to enter the user information page, and click the `Edit` button to edit the user information.
 
 > **Modify user password**
 
-- The administrator enters the Security Center->User Management page and clicks the "Edit" button. When editing user information, enter the new password to modify the user password.
-- After a normal user logs in, click the user information in the user name drop-down box to enter the password modification page, enter the password and confirm the password and click the "Edit" button, then the password modification is successful.
+- The administrator enters the `Security Center->User Management` page and clicks the `Edit` button. When editing user information, enter the new password to modify the user password.
+- After a normal user logs in, click the user information in the username drop-down box to enter the password modification page, enter the password, confirm the password and click the `Edit` button; the password modification then succeeds.
 
 ## Create Alarm Group
 
-- The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group in the form of email.
+- The alarm group is a parameter set at startup. After the process ends, the status of the process and other information will be sent to the alarm group by email.
 
-* The administrator enters the Security Center -> Alarm Group Management page and clicks the "Create Alarm Group" button to create an alarm group.
+* The administrator enters the `Security Center -> Alarm Group Management` page and clicks the `Create Alarm Group` button to create an alarm group.
 
   <p align="center">
     <img src="/img/mail-en.png" width="80%" />
 
 ## Token Management
 
-> Since the back-end interface has login check, token management provides a way to perform various operations on the system by calling the interface.
+> Since the back-end interface has login check, token management provides a way to execute various operations on the system by calling interfaces.
 
-- The administrator enters the Security Center -> Token Management page, clicks the "Create Token" button, selects the expiration time and user, clicks the "Generate Token" button, and clicks the "Submit" button, then the selected user's token is created successfully.
+- The administrator enters the `Security Center -> Token Management` page, clicks the `Create Token` button, selects the expiration time and user, clicks the `Generate Token` button, and clicks the `Submit` button; the token for the selected user is then created successfully.
 
   <p align="center">
       <img src="/img/create-token-en.png" width="80%" />
    </p>
 
-- After an ordinary user logs in, click the user information in the user name drop-down box, enter the token management page, select the expiration time, click the "generate token" button, and click the "submit" button, then the user creates a token successfully.
+- After a normal user logs in, click the user information in the username drop-down box, enter the token management page, select the expiration time, click the `Generate Token` button, and click the `Submit` button; the token is then created successfully.
 - Call example:
 
 ```java
@@ -101,18 +101,18 @@
     }
 ```
 
-## Granted Permission
+## Granted Permissions
 
     * Granted permissions include project permissions, resource permissions, data source permissions, and UDF function permissions.
-    * The administrator can authorize the projects, resources, data sources and UDF functions not created by ordinary users. Because the authorization methods for projects, resources, data sources and UDF functions are the same, we take project authorization as an example.
-    * Note: For projects created by users themselves, the user has all permissions. The project list and the selected project list will not be displayed.
+    * The administrator can authorize projects, resources, data sources and UDF functions that were not created by a normal user to that user. Because projects, resources, data sources and UDF functions are authorized in the same way, we take project authorization as an example.
+    * Note: Users have all permissions for the projects they created themselves; such projects are not displayed in the project list or the selected project list.
 
-- The administrator enters the Security Center -> User Management page and clicks the "Authorize" button of the user who needs to be authorized, as shown in the figure below:
+- The administrator enters the `Security Center -> User Management` page and clicks the `Authorize` button of the user who needs to be authorized, as shown in the figure below:
  <p align="center">
   <img src="/img/auth-en.png" width="80%" />
 </p>
 
-- Select the project to authorize the project.
+- Select the project to authorize it.
 
 <p align="center">
    <img src="/img/authproject-en.png" width="80%" />
@@ -124,38 +124,38 @@
 
 Each worker node will belong to its own worker group, and the default group is "default".
 
-When the task is executed, the task can be assigned to the specified worker group, and the task will be executed by the worker node in the group.
+When executing a task, the task can be assigned to the specified worker group, and the task will be executed by the worker node in the group.
 
-> Add/Update worker group
+> Add or update worker group
 
-- Open the "conf/worker.properties" configuration file on the worker node where you want to set the groups, and modify the "worker.groups" parameter
-- The "worker.groups" parameter is followed by the name of the group corresponding to the worker node, which is “default”.
-- If the worker node corresponds to more than one group, they are separated by commas
+- Open the `conf/worker.properties` configuration file on the worker node where you want to configure the groups and modify the `worker.groups` parameter.
+- The `worker.groups` parameter is followed by the names of the groups the worker node belongs to; the default is `default`.
+- If the worker node corresponds to more than one group, they are separated by commas.
 
 ```conf
 worker.groups=default,test
 ```
-- You can also modify the worker group for worker which be assigned to specific worker group, and if the modification is successful, the worker will use the new group and ignore the configuration in `worker.properties`. The step to modify it as below: "security center -> worker group management -> click 'new worker group' -> click 'new worker group' ->  enter 'group name' -> select exists worker -> click submit". 
+- You can also change the worker group of a worker at runtime; if the modification succeeds, the worker uses the new group and ignores the configuration in `worker.properties`. The steps to modify the worker group are: `Security Center -> Worker Group Management -> click 'New Worker Group' -> enter 'Group Name' -> select existing workers -> click 'Submit'`.
 
 ## Environmental Management
 
-* Configure the Worker operating environment online. A Worker can specify multiple environments, and each environment is equivalent to the dolphinscheduler_env.sh file.
+* Configure the Worker operating environment online. A Worker can specify multiple environments, and each environment is equivalent to the `dolphinscheduler_env.sh` file.
 
-* The default environment is the dolphinscheduler_env.sh file.
+* The default environment is the `dolphinscheduler_env.sh` file.
 
-* When the task is executed, the task can be assigned to the designated worker group, and the corresponding environment can be selected according to the worker group. Finally, the worker node executes the environment first and then executes the task.
+* When executing a task, the task can be assigned to the specified worker group, and select the corresponding environment according to the worker group. Finally, the worker node executes the environment first and then executes the task.
 
-> Add/Update environment
+> Add or update environment
 
-- The environment configuration is equivalent to the configuration in the dolphinscheduler_env.sh file.
+- The environment configuration is equivalent to the configuration in the `dolphinscheduler_env.sh` file.
 
   <p align="center">
       <img src="/img/create-environment.png" width="80%" />
   </p>
 
-> Use environment
+> Use environment
 
-- Create a task node in the workflow definition and select the environment corresponding to the Worker group and the Worker group. When the task is executed, the Worker will execute the environment first before executing the task.
+- Create a task node in the workflow definition, and select the worker group and the environment corresponding to that worker group. When executing the task, the worker will set up the environment first and then execute the task.
 
     <p align="center">
         <img src="/img/use-environment.png" width="80%" />
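The worker-group membership described above is just a comma-separated list in `conf/worker.properties`; the sketch below reads it back out of a temporary file, with `default` and `test` as the example group names:

```shell
# Illustrative only: stand-in for a worker's conf/worker.properties file.
WORKER_PROPS=$(mktemp)
printf 'worker.groups=default,test\n' > "$WORKER_PROPS"

# A worker may belong to several groups; split the comma-separated value.
groups=$(grep '^worker.groups=' "$WORKER_PROPS" | cut -d= -f2)
echo "$groups" | tr ',' '\n'
# prints:
# default
# test
```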
diff --git a/docs/en-us/dev/user_doc/guide/upgrade.md b/docs/en-us/dev/user_doc/guide/upgrade.md
index 1b49b86..b401ce1 100644
--- a/docs/en-us/dev/user_doc/guide/upgrade.md
+++ b/docs/en-us/dev/user_doc/guide/upgrade.md
@@ -1,21 +1,21 @@
 # DolphinScheduler Upgrade Documentation
 
-## Back Up Previous Version's Files and Database
+## Back Up Previous Version's Files and Database
 
 ## Stop All Services of DolphinScheduler
 
  `sh ./script/stop-all.sh`
 
-## Download the New Version's Installation Package
+## Download the Latest Version's Installation Package
 
 - [download](/en-us/download/download.html) the latest version of the installation packages.
 - The following upgrade operations need to be performed in the new version's directory.
 
 ## Database Upgrade
 
-- Modify the following properties in conf/datasource.properties.
+- Modify the following properties in `conf/datasource.properties`.
 
-- If you use MySQL as the database to run DolphinScheduler, please comment out PostgreSQL related configurations, and add mysql connector jar into lib dir, here we download mysql-connector-java-8.0.16.jar, and then correctly config database connect information. You can download mysql connector jar [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use Postgres as database, you just need to comment out Mysql related configurations, and correctly config database conne [...]
+- If using MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL related configurations and add the MySQL connector jar into the lib dir; here we download `mysql-connector-java-8.0.16.jar`, and then correctly configure the database connection information. You can download the MySQL connector jar from [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use PostgreSQL as the database, you just need to comment out the MySQL related configurations and correctly confi [...]
 
     ```properties
       # postgre
@@ -28,7 +28,7 @@
       spring.datasource.password=xxx
     ```
 
-- Execute database upgrade script
+- Execute database upgrade script:
 
     `sh ./script/upgrade-dolphinscheduler.sh`
 
@@ -36,35 +36,35 @@
 
 ### Modify the Content in `conf/config/install_config.conf` File
 
-- Standalone Deployment please refer the [6, Modify running arguments] in [Standalone-Deployment](./installation/standalone.md).
-- Cluster Deployment please refer the [6, Modify running arguments] in [Cluster-Deployment](./installation/cluster.md).
+- Standalone Deployment please refer to the [Standalone-Deployment](./installation/standalone.md).
+- Cluster Deployment please refer to the [Cluster-Deployment](./installation/cluster.md).
 
 #### Masters Need Attentions
 
-Create worker group in 1.3.1 version has different design: 
+Creating worker groups has a different design in version 1.3.1:
 
 - Before version 1.3.1 worker group can be created through UI interface.
-- Since version 1.3.1 worker group can be created by modify the worker configuration. 
+- Since version 1.3.1 worker group can be created by modifying the worker configuration. 
 
-#### When Upgrade from Version Before 1.3.1 to 1.3.2, Below Operations are What We Need to Do to Keep Worker Group Config Consist with Previous
+#### When Upgrading from a Version Before 1.3.1 to 1.3.2, the Below Operations Are Needed to Keep the Worker Group Configuration Consistent with the Previous One
 
-1. Go to the backup database, search records in t_ds_worker_group table, mainly focus id, name and IP three columns.
+1. Go to the backup database, and search the records in the `t_ds_worker_group` table, mainly focusing on the `id`, `name` and `ip_list` columns.
 
 | id | name | ip_list    |
 | :---         |     :---:      |          ---: |
 | 1   | service1     | 192.168.xx.10    |
 | 2   | service2     | 192.168.xx.11,192.168.xx.12      |
 
-2. Modify the workers config item in conf/config/install_config.conf file.
+2. Modify the workers configuration in the `conf/config/install_config.conf` file.
 
-Imaging bellow are the machine worker service to be deployed:
+Assume below are the machines where the worker service is to be deployed:
 | hostname | ip |
 | :---  | :---:  |
 | ds1   | 192.168.xx.10     |
 | ds2   | 192.168.xx.11     |
 | ds3   | 192.168.xx.12     |
 
-To keep worker group config consistent with the previous version, we need to modify workers config item as below:
+To keep the worker group configuration consistent with the previous version, we need to modify the workers configuration as below:
 
 ```shell
 #worker service is deployed on which machine, and also specify which worker group this worker belongs to. 
@@ -72,7 +72,8 @@ workers="ds1:service1,ds2:service2,ds3:service2"
 ```
 
 #### The Worker Group has Been Enhanced in Version 1.3.2
-Worker in 1.3.1 can't belong to more than one worker group, in 1.3.2 it's supported. So in 1.3.1 it's not supported when workers="ds1:service1,ds1:service2", and in 1.3.2 it's supported. 
+
+Workers in 1.3.1 can't belong to more than one worker group, but in 1.3.2 it's supported. So in 1.3.1 it's not supported when `workers="ds1:service1,ds1:service2"`, and in 1.3.2 it's supported. 
 
 ### Execute Deploy Script
 
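The `workers` entry shown in the upgrade example above is a comma-separated list of `host:group` pairs; this sketch only makes that mapping explicit (the host and group names mirror the example table):

```shell
# Same value as in the conf/config/install_config.conf example above.
workers="ds1:service1,ds2:service2,ds3:service2"

# Print one "host -> group" line per host:group pair.
echo "$workers" | tr ',' '\n' | while IFS=: read -r host group; do
  echo "$host -> $group"
done
# prints:
# ds1 -> service1
# ds2 -> service2
# ds3 -> service2
```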
diff --git a/docs/zh-cn/dev/user_doc/guide/introduction.md b/docs/zh-cn/dev/user_doc/guide/introduction.md
index a138aba..0ba6786 100644
--- a/docs/zh-cn/dev/user_doc/guide/introduction.md
+++ b/docs/zh-cn/dev/user_doc/guide/introduction.md
@@ -1,4 +1,4 @@
 # 系统使用手册
 
 
-用户使用手册向你介绍 DolphinScheduler 所有常见的操作告诉,如果你还没有安装 DolphinScheduler 请参照[快速上手](./quick-start.md) 完成安装
\ No newline at end of file
+用户使用手册向你介绍 DolphinScheduler 所有常见的操作,如果你还没有安装 DolphinScheduler 请参照[快速上手](./quick-start.md) 完成安装
\ No newline at end of file
diff --git a/img/video_cover/quick-use.png b/img/video_cover/quick-use.png
new file mode 100644
index 0000000..df65898
Binary files /dev/null and b/img/video_cover/quick-use.png differ