Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/03/16 10:11:05 UTC

[dolphinscheduler-website] branch master updated: Proofreading dev documents under /user_doc/guide/installation (#735)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new a3ea63e  Proofreading dev documents under /user_doc/guide/installation (#735)
a3ea63e is described below

commit a3ea63e47b628deac012074d9af18e1ddbd14dca
Author: Tq <ti...@gmail.com>
AuthorDate: Wed Mar 16 18:11:00 2022 +0800

    Proofreading dev documents under /user_doc/guide/installation (#735)
---
 .../dev/user_doc/guide/installation/cluster.md     |  20 +-
 .../dev/user_doc/guide/installation/docker.md      | 325 ++++++++++-----------
 .../dev/user_doc/guide/installation/hardware.md    |  17 +-
 .../dev/user_doc/guide/installation/kubernetes.md  | 170 +++++------
 .../user_doc/guide/installation/pseudo-cluster.md  |  50 ++--
 .../guide/installation/skywalking-agent.md         |  10 +-
 .../dev/user_doc/guide/installation/standalone.md  |  18 +-
 7 files changed, 304 insertions(+), 306 deletions(-)

diff --git a/docs/en-us/dev/user_doc/guide/installation/cluster.md b/docs/en-us/dev/user_doc/guide/installation/cluster.md
index dc97ba1..4f1ead9 100644
--- a/docs/en-us/dev/user_doc/guide/installation/cluster.md
+++ b/docs/en-us/dev/user_doc/guide/installation/cluster.md
@@ -1,28 +1,28 @@
 # Cluster Deployment
 
-Cluster deployment is to deploy the DolphinScheduler on multiple machines for running a large number of tasks in production.
+Cluster deployment is to deploy the DolphinScheduler on multiple machines for running massive tasks in production.
 
-If you are a green hand and want to experience DolphinScheduler, we recommended you install follow [Standalone](standalone.md). If you want to experience more complete functions or schedule large tasks number, we recommended you install follow [pseudo-cluster deployment](pseudo-cluster.md). If you want to using DolphinScheduler in production, we recommended you follow [cluster deployment](cluster.md) or [kubernetes](kubernetes.md)
+If you are new to DolphinScheduler and want to experience its functions, we recommend you follow the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend you follow the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
 
-## Deployment Step
+## Deployment Steps
 
-Cluster deployment uses the same scripts and configuration files as we deploy in [pseudo-cluster deployment](pseudo-cluster.md), so the prepare and required are the same as pseudo-cluster deployment. The difference is that [pseudo-cluster deployment](pseudo-cluster.md) is for one machine, while cluster deployment (Cluster) for multiple. and the steps of "Modify configuration" are quite different between pseudo-cluster deployment and cluster deployment.
+Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and deployment steps are the same. The difference is that [pseudo-cluster deployment](pseudo-cluster.md) is for a single machine, while cluster deployment is for multiple machines. Also, the "Modify Configuration" steps differ considerably between the two.
 
-### Prepare and DolphinScheduler Startup Environment
+### Prerequisites and DolphinScheduler Startup Environment Preparations
 
-Because of cluster deployment for multiple machine, so you have to run you "Prepare" and "startup" in every machine in [pseudo-cluster.md](pseudo-cluster.md), except section "Configure machine SSH password-free login", "Start ZooKeeper", "Initialize the database", which is only for deployment or just need an single server
+Configure every machine by referring to [pseudo-cluster deployment](pseudo-cluster.md), except for the sections `Prerequisites`, `Start ZooKeeper` and `Initialize the Database` of the `DolphinScheduler Startup Environment`, which only need to be done once.
 
 ### Modify Configuration
 
-This is a step that is quite different from [pseudo-cluster.md](pseudo-cluster.md), because the deployment script will transfer the resources required for installation machine to each deployment machine using `scp`. And we have to declare all machine we want to install DolphinScheduler and then run script `install.sh`. The configuration file is under the path `conf/config/install_config.conf`, here we only need to modify section **INSTALL MACHINE**, **DolphinScheduler ENV, Database, Regi [...]
+This step differs quite a lot from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the resources required for installation to each deployment machine using `scp`. So we only need to modify the configuration on the machine that runs the `install.sh` script, and the configuration is then dispatched to the cluster by `scp`. The configuration file is under the path `conf/config/install_config.conf`, here we only need to modify section **INSTALL MACHINE**, **DolphinS [...]
 
 ```shell
 # ---------------------------------------------------------
 # INSTALL MACHINE
 # ---------------------------------------------------------
-# Using IP or machine hostname for server going to deploy master, worker, API server, the IP of the server
-# If you using hostname, make sure machine could connect each others by hostname
-# As below, the hostname of the machine deploying DolphinScheduler is ds1, ds2, ds3, ds4, ds5, where ds1, ds2 install master server, ds3, ds4, and ds5 installs worker server, the alert server is installed in ds4, and the api server is installed in ds5
+# Use the IP or hostname of the servers going to deploy the master, worker and API server
+# If you use hostnames, make sure the machines can reach each other by hostname
+# In the example below, the hostnames of the machines deploying DolphinScheduler are ds1 to ds5: ds1 and ds2 install the master server, ds3, ds4 and ds5 install the worker server, the alert server is installed on ds4, and the API server is installed on ds5
 ips="ds1,ds2,ds3,ds4,ds5"
 masters="ds1,ds2"
 workers="ds3:default,ds4:default,ds5:default"
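For the example topology above (alert server on ds4, API server on ds5), the remaining **INSTALL MACHINE** entries would look like the sketch below; the variable names `alertServer` and `apiServers` are assumed to match those used in `conf/config/install_config.conf`:

```shell
# Sketch of the remaining INSTALL MACHINE entries for the topology
# described above; variable names are assumed from install_config.conf
alertServer="ds4"
apiServers="ds5"
```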
diff --git a/docs/en-us/dev/user_doc/guide/installation/docker.md b/docs/en-us/dev/user_doc/guide/installation/docker.md
index 7bb8c0c..092533c 100644
--- a/docs/en-us/dev/user_doc/guide/installation/docker.md
+++ b/docs/en-us/dev/user_doc/guide/installation/docker.md
@@ -2,39 +2,39 @@
 
 ## Prerequisites
 
- - [Docker](https://docs.docker.com/engine/install/) 1.13.1+
- - [Docker Compose](https://docs.docker.com/compose/) 1.11.0+
+ - [Docker](https://docs.docker.com/engine/install/) version: 1.13.1+
+ - [Docker Compose](https://docs.docker.com/compose/) version: 1.11.0+
 
 ## How to Use this Docker Image
 
-Here're 3 ways to quickly install DolphinScheduler
+Here are 3 ways to quickly install DolphinScheduler:
 
-### The First Way: Start a DolphinScheduler by Docker Compose (Recommended)
+### Start DolphinScheduler by Docker Compose (Recommended)
 
-In this way, you need to install [docker-compose](https://docs.docker.com/compose/) as a prerequisite, please install it yourself according to the rich docker-compose installation guidance on the Internet
+In this way, you need to install [docker-compose](https://docs.docker.com/compose/) as a prerequisite; please install it by following one of the many docker-compose installation guides on the Internet.
 
-For Windows 7-10, you can install [Docker Toolbox](https://github.com/docker/toolbox/releases). For Windows 10 64-bit, you can install [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), and pay attention to the [system requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements)
+For Windows 7-10, you can install [Docker Toolbox](https://github.com/docker/toolbox/releases). For Windows 10 64-bit, you can install [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), and meet the [system requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements).
 
 #### Configure Memory not Less Than 4GB
 
-For Mac user, click `Docker Desktop -> Preferences -> Resources -> Memory`
+For Mac users, click `Docker Desktop -> Preferences -> Resources -> Memory`.
 
-For Windows Docker Toolbox user, two items need to be configured:
+For Windows Docker Toolbox users, configure the following two settings:
 
- - **Memory**: Open Oracle VirtualBox Manager, if you double-click Docker Quickstart Terminal and successfully run Docker Toolbox, you will see a Virtual Machine named `default`. And click `Settings -> System -> Motherboard -> Base Memory`
- - **Port Forwarding**: Click `Settings -> Network -> Advanced -> Port forwarding -> Add`. `Name`, `Host Port` and `Guest Port` all fill in `12345`, regardless of `Host IP` and `Guest IP`
+ - **Memory**: Open Oracle VirtualBox Manager. If you double-click `Docker Quickstart Terminal` and successfully run `Docker Toolbox`, you will see a virtual machine named `default`. Click `Settings -> System -> Motherboard -> Base Memory`
+ - **Port Forwarding**: Click `Settings -> Network -> Advanced -> Port Forwarding -> Add`. Fill the `Name`, `Host Port` and `Guest Port` fields with `12345`, regardless of `Host IP` and `Guest IP`
 
 For Windows Docker Desktop user
- - **Hyper-V mode**: Click `Docker Desktop -> Settings -> Resources -> Memory`
- - **WSL 2 mode**: Refer to [WSL 2 utility VM](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
+ - **Hyper-V Mode**: Click `Docker Desktop -> Settings -> Resources -> Memory`
+ - **WSL 2 Mode**: Refer to [WSL 2 utility VM](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
 
 #### Download the Source Code Package
 
-Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: [download](/en-us/download/download.html)
+Please download the source code package `apache-dolphinscheduler-1.3.8-src.tar.gz` from the [download page](/en-us/download/download.html).
 
 #### Pull Image and Start the Service
 
-> For Mac and Linux user, open **Terminal**
+> For Mac and Linux users, open **Terminal**
 > For Windows Docker Toolbox user, open **Docker Quickstart Terminal**
 > For Windows Docker Desktop user, open **Windows PowerShell**
 
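For Mac and Linux users, the whole pull-and-start flow can be sketched as one sequence assembling the commands shown in this document; the intermediate retag of the mirror image is an assumption made so that the `apache/dolphinscheduler:1.3.8` name exists locally:

```shell
# Unpack the source package and enter the docker-swarm directory
tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
cd apache-dolphinscheduler-1.3.8-src/docker/docker-swarm

# Pull from the mirror, retag to the expected local name, then tag as latest
docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
docker tag dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:1.3.8
docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest

# Start all services (PostgreSQL and ZooKeeper start by default)
docker-compose up -d
```

This requires a running Docker daemon and the downloaded source package, so run it on the deployment machine itself.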
@@ -46,39 +46,39 @@ $ docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest
 $ docker-compose up -d
 ```
 
-> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm`
+> PowerShell should run `cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm`
 
-The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default.
 
 #### Login
 
-Visit the Web UI: http://localhost:12345/dolphinscheduler (The local address is http://localhost:12345/dolphinscheduler)
+Visit the Web UI: http://localhost:12345/dolphinscheduler (Modify the IP address if needed).
 
-The default username is `admin` and the default password is `dolphinscheduler123`
+The default username is `admin` and the default password is `dolphinscheduler123`.
 
 <p align="center">
   <img src="/img/login_en.png" width="60%" />
 </p>
 
-Please refer to the `Quick Start` in the chapter [Quick Start](../quick-start.md) to explore how to use DolphinScheduler
+Please refer to the [Quick Start](../quick-start.md) to explore how to use DolphinScheduler.
 
-### The Second Way: Start via Specifying the Existing PostgreSQL and ZooKeeper Service
+### Start via Existing PostgreSQL and ZooKeeper Service
 
-In this way, you need to install [docker](https://docs.docker.com/engine/install/) as a prerequisite, please install it yourself according to the rich docker installation guidance on the Internet
+In this way, you need to install [docker](https://docs.docker.com/engine/install/) as a prerequisite; please install it by following one of the many docker installation guides on the Internet.
 
 #### Basic Required Software
 
- - [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+)
- - [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+)
- - [Docker](https://docs.docker.com/engine/install/) (1.13.1+)
+ - [PostgreSQL](https://www.postgresql.org/download/) (version 8.2.15+)
+ - [ZooKeeper](https://zookeeper.apache.org/releases.html) (version 3.4.6+)
+ - [Docker](https://docs.docker.com/engine/install/) (version 1.13.1+)
 
-#### Please Login to the PostgreSQL Database and Create a Database Named `dolphinscheduler`
+#### Login to the PostgreSQL Database and Create a Database Named `dolphinscheduler`
 
 #### Initialize the Database, Import `sql/dolphinscheduler_postgre.sql` to Create Tables and Initial Data
 
 #### Download the DolphinScheduler Image
 
-We have already uploaded user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:
+We have already uploaded the user-oriented DolphinScheduler image to the Docker repository so that you can pull the image from the docker repository:
 
 ```
 docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
@@ -95,15 +95,15 @@ $ docker run -d --name dolphinscheduler \
 apache/dolphinscheduler:1.3.8 all
 ```
 
-Note: database username test and password test need to be replaced with your actual PostgreSQL username and password, 192.168.x.x need to be replaced with your relate PostgreSQL and ZooKeeper host IP
+Note: replace the database username `test` and password `test` with your actual PostgreSQL username and password, and replace `192.168.x.x` with the host IPs of your PostgreSQL and ZooKeeper services.
 
 #### Login
 
 Same as above
 
-### The Third Way: Start a Standalone DolphinScheduler Server
+### Start a Standalone DolphinScheduler Server
 
-The following services are automatically started when the container starts:
+The following services automatically start when the container starts:
 
 ```
      MasterServer         ----- master service
@@ -112,9 +112,7 @@ The following services are automatically started when the container starts:
      AlertServer          ----- alert service
 ```
 
-If you just want to run part of the services in the DolphinScheduler
-
-You can start some services in DolphinScheduler by running the following commands.
+If you just want to run part of the services in DolphinScheduler, you can start a single service by running the following commands.
 
 * Start a **master server**, For example:
 
@@ -136,7 +134,7 @@ $ docker run -d --name dolphinscheduler-worker \
 apache/dolphinscheduler:1.3.8 worker-server
 ```
 
-* Start a **api server**, For example:
+* Start an **api server**, For example:
 
 ```
 $ docker run -d --name dolphinscheduler-api \
@@ -147,7 +145,7 @@ $ docker run -d --name dolphinscheduler-api \
 apache/dolphinscheduler:1.3.8 api-server
 ```
 
-* Start a **alert server**, For example:
+* Start an **alert server**, For example:
 
 ```
 $ docker run -d --name dolphinscheduler-alert \
@@ -156,13 +154,13 @@ $ docker run -d --name dolphinscheduler-alert \
 apache/dolphinscheduler:1.3.8 alert-server
 ```
 
-**Note**: You must be specify `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD`, `ZOOKEEPER_QUORUM` when start a standalone dolphinscheduler server.
+**Note**: You must specify the environment variables `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD` and `ZOOKEEPER_QUORUM` when starting a standalone DolphinScheduler server.
 
 ## Environment Variables
 
-The Docker container is configured through environment variables, and the [Appendix-Environment Variables](#appendix-environment-variables) lists the configurable environment variables of the DolphinScheduler and their default values
+The Docker container is configured through environment variables, and the [Appendix-Environment Variables](#appendix-environment-variables) lists the configurable environment variables of the DolphinScheduler and their default values.
 
-Especially, it can be configured through the environment variable configuration file `config.env.sh` in Docker Compose and Docker Swarm
+In particular, in Docker Compose and Docker Swarm, it can be configured through the environment variable configuration file `config.env.sh`.
 
 ## Support Matrix
 
@@ -226,7 +224,7 @@ docker-compose down -v
 
 ### How to View the Logs of a Container?
 
-List all running containers:
+List all running containers:
 
 ```
 docker ps
@@ -257,27 +255,27 @@ docker-compose up -d --scale dolphinscheduler-worker=3 dolphinscheduler-worker
 
 ### How to Deploy DolphinScheduler on Docker Swarm?
 
-Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to [create-swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/))
+Assuming that the Docker Swarm cluster has been created (If there is no Docker Swarm cluster, please refer to [create-swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/)).
 
-Start a stack named dolphinscheduler:
+Start a stack named `dolphinscheduler`:
 
 ```
 docker stack deploy -c docker-stack.yml dolphinscheduler
 ```
 
-List the services in the stack named dolphinscheduler:
+List the services in the stack named `dolphinscheduler`:
 
 ```
 docker stack services dolphinscheduler
 ```
 
-Stop and remove the stack named dolphinscheduler:
+Stop and remove the stack named `dolphinscheduler`:
 
 ```
 docker stack rm dolphinscheduler
 ```
 
-Remove the volumes of the stack named dolphinscheduler:
+Remove the volumes of the stack named `dolphinscheduler`:
 
 ```
 docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e "^dolphinscheduler")
@@ -285,13 +283,13 @@ docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e "^dolphins
 
 ### How to Scale Master and Worker on Docker Swarm?
 
-Scale master of the stack named dolphinscheduler to 2 instances:
+Scale master of the stack named `dolphinscheduler` to 2 instances:
 
 ```
 docker service scale dolphinscheduler_dolphinscheduler-master=2
 ```
 
-Scale worker of the stack named dolphinscheduler to 3 instances:
+Scale worker of the stack named `dolphinscheduler` to 3 instances:
 
 ```
 docker service scale dolphinscheduler_dolphinscheduler-worker=3
@@ -313,11 +311,11 @@ In Windows, execute in cmd or PowerShell:
 C:\dolphinscheduler-src>.\docker\build\hooks\build.bat
 ```
 
-Please read `./docker/build/hooks/build` `./docker/build/hooks/build.bat` script files if you don't understand
+Please read the `./docker/build/hooks/build` and `./docker/build/hooks/build.bat` script files if anything is unclear.
 
 #### Build From the Binary Distribution (Not require Maven 3.3+ and JDK 1.8+)
 
-Please download the binary distribution package apache-dolphinscheduler-1.3.8-bin.tar.gz, download address: [download](/en-us/download/download.html). And put apache-dolphinscheduler-1.3.8-bin.tar.gz into the `apache-dolphinscheduler-1.3.8-src/docker/build` directory, execute in Terminal or PowerShell:
+Please download the binary distribution package `apache-dolphinscheduler-1.3.8-bin.tar.gz` from the [download page](/en-us/download/download.html). Then put `apache-dolphinscheduler-1.3.8-bin.tar.gz` into the `apache-dolphinscheduler-1.3.8-src/docker/build` directory and execute in Terminal or PowerShell:
 
 ```
 $ cd apache-dolphinscheduler-1.3.8-src/docker/build
@@ -328,29 +326,30 @@ $ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
 
 #### Build Multi-Platform Images
 
-Currently support to build images including `linux/amd64` and `linux/arm64` platform architecture, requirements:
+Currently, building images for the `linux/amd64` and `linux/arm64` platform architectures is supported, with the following requirements:
 
 1. Support [docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/)
-2. Own the push permission of https://hub.docker.com/r/apache/dolphinscheduler (**Be cautious**: The build command will automatically push the multi-platform architecture images to the docker hub of apache/dolphinscheduler by default)
+2. Own the push permission of `https://hub.docker.com/r/apache/dolphinscheduler` (**Be cautious**: The build command will automatically push the multi-platform images to `apache/dolphinscheduler` on Docker Hub by default)
 
 Execute:
 
 ```bash
 $ docker login # login to push apache/dolphinscheduler
-$ bash ./docker/build/hooks/build
+$ bash ./docker/build/hooks/build
 ```
 
 ### How to Add an Environment Variable for Docker?
 
-If you would like to do additional initialization in an image derived from this one, add one or more environment variables under `/root/start-init-conf.sh`, and modify template files in `/opt/dolphinscheduler/conf/*.tpl`.
+If you would like to do additional initialization, add one or more environment variables in the script `/root/start-init-conf.sh`. If the change involves configuration, also modify the corresponding template files under `/opt/dolphinscheduler/conf/*.tpl`.
 
-For example, to add an environment variable `SECURITY_AUTHENTICATION_TYPE` in `/root/start-init-conf.sh`:
+For example, add an environment variable `SECURITY_AUTHENTICATION_TYPE` in `/root/start-init-conf.sh`:
 
 ```
 export SECURITY_AUTHENTICATION_TYPE=PASSWORD
 ```
 
-and to modify `application-api.properties.tpl` template file, add the `SECURITY_AUTHENTICATION_TYPE`:
+Then add `SECURITY_AUTHENTICATION_TYPE` to the template file `application-api.properties.tpl`:
+
 ```
 security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}
 ```
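Taken together, the template mechanism works by letting the shell substitute exported environment variables into the `*.tpl` files. A minimal, self-contained sketch of this rendering (the file name is taken from the example above; the `eval` one-liner is illustrative, not the exact contents of `start-init-conf.sh`):

```shell
# Write a template containing a ${...} placeholder (kept literal here)
printf 'security.authentication.type=${SECURITY_AUTHENTICATION_TYPE}\n' > application-api.properties.tpl

# Export the variable, then let the shell expand the placeholder
export SECURITY_AUTHENTICATION_TYPE=PASSWORD
eval "echo \"$(cat application-api.properties.tpl)\"" > application-api.properties

cat application-api.properties  # prints: security.authentication.type=PASSWORD
```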
@@ -373,7 +372,7 @@ done
 >
 > If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image as follows.
 
-1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
 
 2. Create a new `Dockerfile` to add MySQL driver:
 
@@ -388,15 +387,15 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:mysql-driver .
 ```
 
-4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`
+4. Modify all the `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
-5. Comment the `dolphinscheduler-postgresql` block in `docker-compose.yml`
+5. Comment the `dolphinscheduler-postgresql` block in `docker-compose.yml`.
 
-6. Add `dolphinscheduler-mysql` service in `docker-compose.yml` (**Optional**, you can directly use an external MySQL database)
+6. Add `dolphinscheduler-mysql` service in `docker-compose.yml` (**Optional**, you can directly use an external MySQL database).
 
-7. Modify DATABASE environment variables in `config.env.sh`
+7. Modify DATABASE environment variables in `config.env.sh`:
 
 ```
 DATABASE_TYPE=mysql
@@ -411,7 +410,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
 
 > If you have added `dolphinscheduler-mysql` service in `docker-compose.yml`, just set `DATABASE_HOST` to `dolphinscheduler-mysql`
 
-8. Run a dolphinscheduler (See **How to use this docker image**)
+8. Run DolphinScheduler (see **How to use this docker image**).
 
 ### How to Support MySQL Datasource in `Datasource manage`?
 
@@ -419,7 +418,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
 >
 > If you want to add MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
 
-1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
 
 2. Create a new `Dockerfile` to add MySQL driver:
 
@@ -434,13 +433,13 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:mysql-driver .
 ```
 
-4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`
+4. Modify all `image` fields to `apache/dolphinscheduler:mysql-driver` in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
-5. Run a dolphinscheduler (See **How to use this docker image**)
+5. Run DolphinScheduler (see **How to use this docker image**).
 
-6. Add a MySQL datasource in `Datasource manage`
+6. Add a MySQL datasource in `Datasource manage`.
 
 ### How to Support Oracle Datasource in `Datasource manage`?
 
@@ -448,7 +447,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .
 >
 > If you want to add Oracle datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
 
-1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`)
+1. Download the Oracle driver [ojdbc8.jar](https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/) (such as `ojdbc8-19.9.0.0.jar`).
 
 2. Create a new `Dockerfile` to add Oracle driver:
 
@@ -463,13 +462,13 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:oracle-driver .
 ```
 
-4. Modify all `image` fields to `apache/dolphinscheduler:oracle-driver` in `docker-compose.yml`
+4. Modify all `image` fields to `apache/dolphinscheduler:oracle-driver` in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
-5. Run a dolphinscheduler (See **How to use this docker image**)
+5. Run DolphinScheduler (see **How to use this docker image**).
 
-6. Add an Oracle datasource in `Datasource manage`
+6. Add an Oracle datasource in `Datasource manage`.
 
 ### How to Support Python 2 pip and Custom requirements.txt?
 
@@ -484,7 +483,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 ```
 
-The command will install the default **pip 18.1**. If you upgrade the pip, just add one line
+The command will install the default **pip 18.1**. If you need to upgrade pip, just add one more line:
 
 ```
     pip install --no-cache-dir -U pip && \
@@ -496,13 +495,13 @@ The command will install the default **pip 18.1**. If you upgrade the pip, just
 docker build -t apache/dolphinscheduler:pip .
 ```
 
-3. Modify all `image` fields to `apache/dolphinscheduler:pip` in `docker-compose.yml`
+3. Modify all `image` fields to `apache/dolphinscheduler:pip` in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
-4. Run a dolphinscheduler (See **How to use this docker image**)
+4. Run DolphinScheduler (see **How to use this docker image**).
 
-5. Verify pip under a new Python task
+5. Verify pip under a new Python task.
 
 ### How to Support Python 3?
 
@@ -515,7 +514,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 ```
 
-The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip`.
 
 ```
     apt-get install -y --no-install-recommends python3-pip && \
@@ -527,33 +526,33 @@ The command will install the default **Python 3.7.3**. If you also want to insta
 docker build -t apache/dolphinscheduler:python3 .
 ```
 
-3. Modify all `image` fields to `apache/dolphinscheduler:python3` in `docker-compose.yml`
+3. Modify all `image` fields to `apache/dolphinscheduler:python3` in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
-4. Modify `PYTHON_HOME` to `/usr/bin/python3` in `config.env.sh`
+4. Modify `PYTHON_HOME` to `/usr/bin/python3` in `config.env.sh`.
 
-5. Run a dolphinscheduler (See **How to use this docker image**)
+5. Run DolphinScheduler (see **How to use this docker image**).
 
-6. Verify Python 3 under a new Python task
+6. Verify Python 3 under a new Python task.
 
 ### How to Support Hadoop, Spark, Flink, Hive or DataX?
 
 Take Spark 2.4.7 as an example:
 
-1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`.
 
-2. Run a dolphinscheduler (See **How to use this docker image**)
+2. Run DolphinScheduler (see **How to use this docker image**).
 
-3. Copy the Spark 2.4.7 release binary into Docker container
+3. Copy the Spark 2.4.7 release binary into the Docker container.
 
 ```bash
 docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
-Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
+Because the volume `dolphinscheduler-shared-local` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost.
 
-4. Attach the container and ensure that `SPARK_HOME2` exists
+4. Attach the container and ensure that `SPARK_HOME2` exists.
 
 ```bash
 docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
@@ -564,17 +563,17 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
-The last command will print the Spark version if everything goes well
+The last command will print the Spark version if everything goes well.
 
-5. Verify Spark under a Shell task
+5. Verify Spark under a Shell task.
 
 ```
 $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
 ```
 
-Check whether the task log contains the output like `Pi is roughly 3.146015`
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
 
-6. Verify Spark under a Spark task
+6. Verify Spark under a Spark task.
 
 The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first, and then create a Spark task with:
 
@@ -583,31 +582,31 @@ The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources f
 - Main Package: `spark-examples_2.11-2.4.7.jar`
 - Deploy Mode: `local`
 
-Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`.
 
-7. Verify Spark on YARN
+7. Verify Spark on YARN.
 
-Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps.
 
-Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exists
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exist.
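Before submitting to YARN, it helps to confirm that both variables resolve to real directories inside the worker container; a minimal sketch (the helper function is illustrative):

```shell
# Hypothetical helper: report whether a path exists as a directory
check_dir() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Inside the container, run these against the real values:
#   check_dir "$HADOOP_HOME"
#   check_dir "$HADOOP_CONF_DIR"
check_dir /tmp   # prints "ok: /tmp"
```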
 
 ### How to Support Spark 3?
 
-In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`
+In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`.
 
 Take Spark 3.1.1 as an example:
 
-1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`.
 
-2. Run a dolphinscheduler (See **How to use this docker image**)
+2. Run DolphinScheduler (see **How to use this docker image**).
 
-3. Copy the Spark 3.1.1 release binary into Docker container
+3. Copy the Spark 3.1.1 release binary into the Docker container.
 
 ```bash
 docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 ```
 
-4. Attach the container and ensure that `SPARK_HOME2` exists
+4. Attach the container and ensure that `SPARK_HOME2` exists.
 
 ```bash
 docker exec -it docker-swarm_dolphinscheduler-worker_1 bash
@@ -618,25 +617,25 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
-The last command will print the Spark version if everything goes well
+The last command will print the Spark version if everything goes well.
 
-5. Verify Spark under a Shell task
+5. Verify Spark under a Shell task.
 
 ```
 $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
 ```
 
-Check whether the task log contains the output like `Pi is roughly 3.146015`
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
 
-### How to Support Shared Storage between Master, Worker and Api server?
+### How to Support Shared Storage between Master, Worker and API server?
 
-> **Note**: If it is deployed on a single machine by `docker-compose`, step 1 and 2 can be skipped directly, and execute the command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container
+> **Note**: If it is deployed on a single machine by `docker-compose`, steps 1 and 2 can be skipped directly; execute a command like `docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put Hadoop into the shared directory `/opt/soft` in the container.
 
-For example, Master, Worker and Api server may use Hadoop at the same time
+For example, Master, Worker and API servers may use Hadoop at the same time.
 
-1. Modify the volume `dolphinscheduler-shared-local` to support NFS in `docker-compose.yml`
+1. Modify the volume `dolphinscheduler-shared-local` to support NFS in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
 ```yaml
 volumes:
@@ -647,13 +646,13 @@ volumes:
       device: ":/path/to/shared/dir"
 ```
 
-2. Put the Hadoop into the NFS
+2. Put Hadoop into the NFS.
 
-3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct.
 
 ### How to Support Local File Resource Storage Instead of HDFS and S3?
 
-> **Note**: If it is deployed on a single machine by `docker-compose`, step 2 can be skipped directly
+> **Note**: If it is deployed on a single machine by `docker-compose`, step 2 can be skipped directly.
 
 1. Modify the following environment variables in `config.env.sh`:
 
@@ -662,9 +661,9 @@ RESOURCE_STORAGE_TYPE=HDFS
 FS_DEFAULT_FS=file:///
 ```
 
-2. Modify the volume `dolphinscheduler-resource-local` to support NFS in `docker-compose.yml`
+2. Modify the volume `dolphinscheduler-resource-local` to support NFS in `docker-compose.yml`.
 
-> If you want to deploy dolphinscheduler on Docker Swarm, you need to modify `docker-stack.yml`
+> If you want to deploy DolphinScheduler on Docker Swarm, you need to modify `docker-stack.yml`.
 
 ```yaml
 volumes:
@@ -677,7 +676,7 @@ volumes:
 
 ### How to Support S3 Resource Storage Like MinIO?
 
-Take MinIO as an example: Modify the following environment variables in `config.env.sh`
+Take MinIO as an example: modify the following environment variables in `config.env.sh`:
 
 ```
 RESOURCE_STORAGE_TYPE=S3
@@ -688,9 +687,9 @@ FS_S3A_ACCESS_KEY=MINIO_ACCESS_KEY
 FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
 ```
 
-`BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` need to be modified to actual values
+Modify `BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` to their actual values.
 
-> **Note**: `MINIO_IP` can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access
+> **Note**: `MINIO_IP` can only use IP instead of the domain name, because DolphinScheduler currently doesn't support S3 path style access.
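As a sketch of verifying the values before restarting DolphinScheduler, the same endpoint and keys can be tested with the AWS CLI (assuming it is installed; `MINIO_IP`, the keys and the port below are placeholders for your actual values):

```shell
# Hypothetical connectivity check against the MinIO endpoint:
# list the bucket using the same credentials DolphinScheduler will use
AWS_ACCESS_KEY_ID=MINIO_ACCESS_KEY \
AWS_SECRET_ACCESS_KEY=MINIO_SECRET_KEY \
aws s3 ls s3://dolphinscheduler --endpoint-url http://MINIO_IP:9000
```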
 
 ### How to Configure SkyWalking?
 
@@ -709,51 +708,51 @@ SW_GRPC_LOG_SERVER_PORT=11800
 
 **`DATABASE_TYPE`**
 
-This environment variable sets the type for the database. The default value is `postgresql`.
+This environment variable sets the database type. The default value is `postgresql`.
 
-**Note**: You must be specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_DRIVER`**
 
-This environment variable sets the type for the database. The default value is `org.postgresql.Driver`.
+This environment variable sets the database driver. The default value is `org.postgresql.Driver`.
 
-**Note**: You must specify it when starting a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_HOST`**
 
-This environment variable sets the host for the database. The default value is `127.0.0.1`.
+This environment variable sets the database host. The default value is `127.0.0.1`.
 
-**Note**: You must specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_PORT`**
 
-This environment variable sets the port for the database. The default value is `5432`.
+This environment variable sets the database port. The default value is `5432`.
 
-**Note**: You must specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_USERNAME`**
 
-This environment variable sets the username for the database. The default value is `root`.
+This environment variable sets the database username. The default value is `root`.
 
-**Note**: You must specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_PASSWORD`**
 
-This environment variable sets the password for the database. The default value is `root`.
+This environment variable sets the database password. The default value is `root`.
 
-**Note**: You must specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_DATABASE`**
 
-This environment variable sets the database for the database. The default value is `dolphinscheduler`.
+This environment variable sets the database name. The default value is `dolphinscheduler`.
 
-**Note**: You must specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
 
 **`DATABASE_PARAMS`**
 
-This environment variable sets the database for the database. The default value is `characterEncoding=utf8`.
+This environment variable sets the database connection parameters. The default value is `characterEncoding=utf8`.
 
-**Note**: You must specify it when starting a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`, `alert-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, `api-server`, or `alert-server`.
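As a sketch of how the variables above fit together (the image tag, network settings and host below are assumptions; adjust them to your environment), a standalone server container could be started like this:

```shell
# Hypothetical example: start a standalone api-server against an external
# PostgreSQL database; replace host and credentials with real values
docker run -d --name dolphinscheduler-api \
  -e DATABASE_TYPE="postgresql" \
  -e DATABASE_DRIVER="org.postgresql.Driver" \
  -e DATABASE_HOST="192.168.x.x" \
  -e DATABASE_PORT="5432" \
  -e DATABASE_USERNAME="root" \
  -e DATABASE_PASSWORD="root" \
  -e DATABASE_DATABASE="dolphinscheduler" \
  -e DATABASE_PARAMS="characterEncoding=utf8" \
  -p 12345:12345 \
  apache/dolphinscheduler:latest api-server
```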
 
 ### ZooKeeper
 
@@ -761,45 +760,45 @@ This environment variable sets the database for the database. The default value
 
 This environment variable sets ZooKeeper quorum. The default value is `127.0.0.1:2181`.
 
-**Note**: You must specify it when starting a standalone dolphinscheduler server. Like `master-server`, `worker-server`, `api-server`.
+**Note**: You must specify it when starting a standalone DolphinScheduler server, such as `master-server`, `worker-server`, or `api-server`.
 
 **`ZOOKEEPER_ROOT`**
 
-This environment variable sets ZooKeeper root directory for dolphinscheduler. The default value is `/dolphinscheduler`.
+This environment variable sets the ZooKeeper root directory for DolphinScheduler. The default value is `/dolphinscheduler`.
 
 ### Common
 
 **`DOLPHINSCHEDULER_OPTS`**
 
-This environment variable sets JVM options for dolphinscheduler, suitable for `master-server`, `worker-server`, `api-server`, `alert-server`. The default value is empty.
+This environment variable sets JVM options for DolphinScheduler, suitable for `master-server`, `worker-server`, `api-server`, `alert-server`. The default value is empty.
 
 **`DATA_BASEDIR_PATH`**
 
-User data directory path, self configuration, please make sure the directory exists and have read-write permissions. The default value is `/tmp/dolphinscheduler`
+This environment variable sets the user data directory. Configure it yourself and make sure the directory exists and has read-write permissions. The default value is `/tmp/dolphinscheduler`.
 
 **`RESOURCE_STORAGE_TYPE`**
 
-This environment variable sets resource storage types for dolphinscheduler like `HDFS`, `S3`, `NONE`. The default value is `HDFS`.
+This environment variable sets resource storage types for DolphinScheduler like `HDFS`, `S3`, `NONE`. The default value is `HDFS`.
 
 **`RESOURCE_UPLOAD_PATH`**
 
-This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is `/dolphinscheduler`.
+This environment variable sets the resource storage path on HDFS or S3. The default value is `/dolphinscheduler`.
 
 **`FS_DEFAULT_FS`**
 
-This environment variable sets fs.defaultFS for resource storage like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`. The default value is `file:///`.
+This environment variable sets `fs.defaultFS` for resource storage like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`. The default value is `file:///`.
 
 **`FS_S3A_ENDPOINT`**
 
-This environment variable sets s3 endpoint for resource storage. The default value is `s3.xxx.amazonaws.com`.
+This environment variable sets the S3 endpoint for resource storage. The default value is `s3.xxx.amazonaws.com`.
 
 **`FS_S3A_ACCESS_KEY`**
 
-This environment variable sets s3 access key for resource storage. The default value is `xxxxxxx`.
+This environment variable sets the S3 access key for resource storage. The default value is `xxxxxxx`.
 
 **`FS_S3A_SECRET_KEY`**
 
-This environment variable sets s3 secret key for resource storage. The default value is `xxxxxxx`.
+This environment variable sets the S3 secret key for resource storage. The default value is `xxxxxxx`.
 
 **`HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE`**
 
@@ -807,23 +806,23 @@ This environment variable sets whether to startup Kerberos. The default value is
 
 **`JAVA_SECURITY_KRB5_CONF_PATH`**
 
-This environment variable sets java.security.krb5.conf path. The default value is `/opt/krb5.conf`.
+This environment variable sets `java.security.krb5.conf` path. The default value is `/opt/krb5.conf`.
 
 **`LOGIN_USER_KEYTAB_USERNAME`**
 
-This environment variable sets login user from the keytab username. The default value is `hdfs@HADOOP.COM`.
+This environment variable sets the `keytab` username for the login user. The default value is `hdfs@HADOOP.COM`.
 
 **`LOGIN_USER_KEYTAB_PATH`**
 
-This environment variable sets login user from the keytab path. The default value is `/opt/hdfs.keytab`.
+This environment variable sets the `keytab` path for the login user. The default value is `/opt/hdfs.keytab`.
 
 **`KERBEROS_EXPIRE_TIME`**
 
-This environment variable sets Kerberos expire time, the unit is hour. The default value is `2`.
+This environment variable sets the Kerberos expiration time in hours. The default value is `2`.
 
 **`HDFS_ROOT_USER`**
 
-This environment variable sets HDFS root user when resource.storage.type=HDFS. The default value is `hdfs`.
+This environment variable sets HDFS root user when `resource.storage.type=HDFS`. The default value is `hdfs`.
 
 **`RESOURCE_MANAGER_HTTPADDRESS_PORT`**
 
@@ -831,7 +830,7 @@ This environment variable sets resource manager HTTP address port. The default v
 
 **`YARN_RESOURCEMANAGER_HA_RM_IDS`**
 
-This environment variable sets yarn resourcemanager ha rm ids. The default value is empty.
+This environment variable sets the YARN ResourceManager HA RM IDs. The default value is empty.
 
 **`YARN_APPLICATION_STATUS_ADDRESS`**
 
@@ -847,11 +846,11 @@ This environment variable sets agent collector backend services for SkyWalking.
 
 **`SW_GRPC_LOG_SERVER_HOST`**
 
-This environment variable sets gRPC log server host for SkyWalking. The default value is `127.0.0.1`.
+This environment variable sets the gRPC log server host for SkyWalking. The default value is `127.0.0.1`.
 
 **`SW_GRPC_LOG_SERVER_PORT`**
 
-This environment variable sets gRPC log server port for SkyWalking. The default value is `11800`.
+This environment variable sets the gRPC log server port for SkyWalking. The default value is `11800`.
 
 **`HADOOP_HOME`**
 
@@ -897,11 +896,11 @@ This environment variable sets JVM options for `master-server`. The default valu
 
 **`MASTER_EXEC_THREADS`**
 
-This environment variable sets exec thread number for `master-server`. The default value is `100`.
+This environment variable sets the number of execution threads for `master-server`. The default value is `100`.
 
 **`MASTER_EXEC_TASK_NUM`**
 
-This environment variable sets exec task number for `master-server`. The default value is `20`.
+This environment variable sets execute task number for `master-server`. The default value is `20`.
 
 **`MASTER_DISPATCH_TASK_NUM`**
 
@@ -913,7 +912,7 @@ This environment variable sets host selector for `master-server`. Optional value
 
 **`MASTER_HEARTBEAT_INTERVAL`**
 
-This environment variable sets heartbeat interval for `master-server`. The default value is `10`.
+This environment variable sets the heartbeat interval for `master-server`. The default value is `10`.
 
 **`MASTER_TASK_COMMIT_RETRYTIMES`**
 
@@ -939,7 +938,7 @@ This environment variable sets JVM options for `worker-server`. The default valu
 
 **`WORKER_EXEC_THREADS`**
 
-This environment variable sets exec thread number for `worker-server`. The default value is `100`.
+This environment variable sets the number of execution threads for `worker-server`. The default value is `100`.
 
 **`WORKER_HEARTBEAT_INTERVAL`**
 
@@ -965,7 +964,7 @@ This environment variable sets JVM options for `alert-server`. The default value
 
 **`XLS_FILE_PATH`**
 
-This environment variable sets xls file path for `alert-server`. The default value is `/tmp/xls`.
+This environment variable sets the XLS file path for `alert-server`. The default value is `/tmp/xls`.
 
 **`MAIL_SERVER_HOST`**
 
@@ -989,37 +988,37 @@ This environment variable sets mail password for `alert-server`. The default val
 
 **`MAIL_SMTP_STARTTLS_ENABLE`**
 
-This environment variable sets SMTP tls for `alert-server`. The default value is `true`.
+This environment variable sets SMTP TLS for `alert-server`. The default value is `true`.
 
 **`MAIL_SMTP_SSL_ENABLE`**
 
-This environment variable sets SMTP ssl for `alert-server`. The default value is `false`.
+This environment variable sets SMTP SSL for `alert-server`. The default value is `false`.
 
 **`MAIL_SMTP_SSL_TRUST`**
 
-This environment variable sets SMTP ssl truest for `alert-server`. The default value is empty.
+This environment variable sets the SMTP SSL trust for `alert-server`. The default value is empty.
 
 **`ENTERPRISE_WECHAT_ENABLE`**
 
-This environment variable sets enterprise wechat enable for `alert-server`. The default value is `false`.
+This environment variable enables Enterprise WeChat for `alert-server`. The default value is `false`.
 
 **`ENTERPRISE_WECHAT_CORP_ID`**
 
-This environment variable sets enterprise wechat corp id for `alert-server`. The default value is empty.
+This environment variable sets the Enterprise WeChat corp ID for `alert-server`. The default value is empty.
 
 **`ENTERPRISE_WECHAT_SECRET`**
 
-This environment variable sets enterprise wechat secret for `alert-server`. The default value is empty.
+This environment variable sets the Enterprise WeChat secret for `alert-server`. The default value is empty.
 
 **`ENTERPRISE_WECHAT_AGENT_ID`**
 
-This environment variable sets enterprise wechat agent id for `alert-server`. The default value is empty.
+This environment variable sets the Enterprise WeChat agent ID for `alert-server`. The default value is empty.
 
 **`ENTERPRISE_WECHAT_USERS`**
 
-This environment variable sets enterprise wechat users for `alert-server`. The default value is empty.
+This environment variable sets the Enterprise WeChat users for `alert-server`. The default value is empty.
 
-### Api Server
+### API Server
 
 **`API_SERVER_OPTS`**
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/hardware.md b/docs/en-us/dev/user_doc/guide/installation/hardware.md
index 1303276..e083ac8 100644
--- a/docs/en-us/dev/user_doc/guide/installation/hardware.md
+++ b/docs/en-us/dev/user_doc/guide/installation/hardware.md
@@ -1,6 +1,6 @@
 # Hardware Environment
 
-DolphinScheduler, as an open-source distributed workflow task scheduling system, can be well deployed and run in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
+DolphinScheduler, as an open-source distributed workflow task scheduling system, can be deployed and run smoothly in Intel architecture server environments and mainstream virtualization environments, and supports mainstream Linux operating system environments.
 
 ## Linux Operating System Version Requirements
 
@@ -16,7 +16,7 @@ DolphinScheduler, as an open-source distributed workflow task scheduling system,
 
 ## Recommended Server Configuration
 
-DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The following recommendation is made for server hardware configuration in a production environment:
+DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architecture. The recommended server requirements in a production environment are as follows:
 
 ### Production Environment
 
@@ -25,8 +25,8 @@ DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 architectu
 | 4 core+ | 8 GB+ | SAS | GbE | 1+ |
 
 > **Attention:**
-> - The above-recommended configuration is the minimum configuration for deploying DolphinScheduler. The higher configuration is strongly recommended for production environments.
-> - The hard disk size configuration is recommended by more than 50GB. The system disk and data disk are separated.
+> - The above recommended configuration is the minimum configuration for deploying DolphinScheduler. Higher configuration is strongly recommended for production environments.
+> - The recommended hard disk size is more than 50GB, with the system disk and data disk separated.
 
 
 ## Network Requirements
@@ -35,9 +35,9 @@ DolphinScheduler provides the following network port configurations for normal o
 
 | Server | Port | Desc |
 |  --- | --- | --- |
-| MasterServer |  5678  | Not the communication port. Require the native ports do not conflict |
-| WorkerServer | 1234  | Not the communication port. Require the native ports do not conflict |
-| ApiApplicationServer |  12345 | Backend communication port |
+| MasterServer |  5678  | not the communication port; requires that the local ports do not conflict |
+| WorkerServer | 1234  | not the communication port; requires that the local ports do not conflict |
+| ApiApplicationServer |  12345 | backend communication port |
 
 > **Attention:**
 > - MasterServer and WorkerServer do not need to enable communication between the networks. As long as the local ports do not conflict.
@@ -45,5 +45,4 @@ DolphinScheduler provides the following network port configurations for normal o
 
 ## Browser Requirements
 
-DolphinScheduler recommends Chrome and the latest browsers which using Chrome Kernel to access the front-end visual operator page.
-
+DolphinScheduler recommends Chrome and the latest browsers that use the Chrome kernel to access the front-end UI page.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/installation/kubernetes.md b/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
index 225e40d..f517d9a 100644
--- a/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
+++ b/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
@@ -1,20 +1,20 @@
 # QuickStart in Kubernetes
 
-Kubernetes deployment is deploy DolphinScheduler in a Kubernetes cluster, which can schedule a large number of tasks and can be used in production.
+Kubernetes deployment is to deploy DolphinScheduler in a Kubernetes cluster, which can schedule massive tasks and can be used in production.
 
-If you are a green hand and want to experience DolphinScheduler, we recommended you install follow [Standalone](standalone.md). If you want to experience more complete functions or schedule large tasks number, we recommended you install follow [pseudo-cluster deployment](pseudo-cluster.md). If you want to using DolphinScheduler in production, we recommended you follow [cluster deployment](cluster.md) or [kubernetes](kubernetes.md)
+If you are a new hand and want to experience DolphinScheduler functions, we recommend you follow [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend you follow [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
 
 ## Prerequisites
 
- - [Helm](https://helm.sh/) 3.1.0+
- - [Kubernetes](https://kubernetes.io/) 1.12+
+ - [Helm](https://helm.sh/) version 3.1.0+
+ - [Kubernetes](https://kubernetes.io/) version 1.12+
  - PV provisioner support in the underlying infrastructure
 
-## Install the Chart
+## Install DolphinScheduler
 
-Please download the source code package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: [download](/en-us/download/download.html)
+Please download the source code package `apache-dolphinscheduler-1.3.8-src.tar.gz` from the [download page](/en-us/download/download.html).
 
-To install the chart with the release name `dolphinscheduler`, please execute the following commands:
+To install the chart with the release name `dolphinscheduler`, please execute the following commands:
 
 ```
 $ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
@@ -24,36 +24,36 @@ $ helm dependency update .
 $ helm install dolphinscheduler . --set image.tag=1.3.8
 ```
 
-To install the chart with a namespace named `test`:
+To install the chart into a namespace named `test`:
 
 ```bash
 $ helm install dolphinscheduler . -n test
 ```
 
-> **Tip**: If a namespace named `test` is used, the option `-n test` needs to be added to the `helm` and `kubectl` command
+> **Tip**: If a namespace named `test` is used, the parameter `-n test` needs to be added to the `helm` and `kubectl` commands.
 
-These commands deploy DolphinScheduler on the Kubernetes cluster in the default configuration. The [Appendix-Configuration](#appendix-configuration) section lists the parameters that can be configured during installation.
+These commands deploy DolphinScheduler on the Kubernetes cluster with the default configuration. The [Appendix-Configuration](#appendix-configuration) section lists the parameters that can be configured during installation.
 
 > **Tip**: List all releases using `helm list`
 
-The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default
+The **PostgreSQL** (with username `root`, password `root` and database `dolphinscheduler`) and **ZooKeeper** services will start by default.
 
 ## Access DolphinScheduler UI
 
-If `ingress.enabled` in `values.yaml` is set to `true`, you just access `http://${ingress.host}/dolphinscheduler` in browser.
+If `ingress.enabled` in `values.yaml` is set to `true`, you can access `http://${ingress.host}/dolphinscheduler` in the browser.
 
-> **Tip**: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+> **Tip**: If there is a problem with ingress access, please contact the Kubernetes administrator and refer to the [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
 
-Otherwise, when `api.service.type=ClusterIP` you need to execute port-forward command like:
+Otherwise, when `api.service.type=ClusterIP` you need to execute `port-forward` commands:
 
 ```bash
 $ kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
 $ kubectl port-forward --address 0.0.0.0 -n test svc/dolphinscheduler-api 12345:12345 # with test namespace
 ```
 
-> **Tip**: If the error of `unable to do port forwarding: socat not found` appears, you need to install `socat` at first
+> **Tip**: If the error of `unable to do port forwarding: socat not found` appears, you need to install `socat` first.
 
-And then access the web: http://localhost:12345/dolphinscheduler (The local address is http://localhost:12345/dolphinscheduler)
+Then access the web UI: `http://localhost:12345/dolphinscheduler` (modify the IP address if needed).
 
 Or when `api.service.type=NodePort` you need to execute the command:
 
@@ -63,23 +63,23 @@ NODE_PORT=$(kubectl get svc {{ template "dolphinscheduler.fullname" . }}-api -n
 echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
 ```
 
-And then access the web: http://$NODE_IP:$NODE_PORT/dolphinscheduler
+Access the web: `http://$NODE_IP:$NODE_PORT/dolphinscheduler`.
 
-The default username is `admin` and the default password is `dolphinscheduler123`
+The default username is `admin` and the default password is `dolphinscheduler123`.
 
-Please refer to the `Quick Start` in the chapter [Quick Start](../quick-start.md) to explore how to use DolphinScheduler
+Please refer to the `Quick Start` in the chapter [Quick Start](../quick-start.md) to explore how to use DolphinScheduler.
 
 ## Uninstall the Chart
 
-To uninstall/delete the `dolphinscheduler` deployment:
+To uninstall or delete the `dolphinscheduler` deployment:
 
 ```bash
 $ helm uninstall dolphinscheduler
 ```
 
-The command removes all the Kubernetes components but PVC's associated with the chart and deletes the release.
+The command removes all the Kubernetes components associated with the chart (except PVCs) and deletes the release.
 
-To delete the PVC's associated with `dolphinscheduler`:
+Run the command below to delete the PVCs associated with `dolphinscheduler`:
 
 ```bash
 $ kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler
@@ -137,7 +137,7 @@ kubectl get po
 kubectl get po -n test # with test namespace
 ```
 
-View the logs of a pod container named dolphinscheduler-master-0:
+View the logs of a pod container named `dolphinscheduler-master-0`:
 
 ```
 kubectl logs dolphinscheduler-master-0
@@ -145,7 +145,7 @@ kubectl logs -f dolphinscheduler-master-0 # follow log output
 kubectl logs --tail 10 dolphinscheduler-master-0 -n test # show last 10 lines from the end of the logs
 ```
 
-### How to Scale api, master and worker on Kubernetes?
+### How to Scale API, Master and Worker on Kubernetes?
 
 List all deployments (aka `deploy`):
 
@@ -188,7 +188,7 @@ kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test names
 >
 > If you want to use MySQL, you can build a new image based on the `apache/dolphinscheduler` image as follows.
 
-1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
 
 2. Create a new `Dockerfile` to add MySQL driver:
 
@@ -203,11 +203,11 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:mysql-driver .
 ```
 
-4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry.
 
-5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`.
 
-6. Modify postgresql `enabled` to `false` in `values.yaml`
+6. Modify postgresql `enabled` to `false` in `values.yaml`.
 
 7. Modify externalDatabase (especially modify `host`, `username` and `password`) in `values.yaml`:
 
@@ -223,7 +223,7 @@ externalDatabase:
   params: "useUnicode=true&characterEncoding=UTF-8"
 ```
 
-8. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+8. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
 ### How to Support MySQL Datasource in `Datasource manage`?
 
@@ -231,7 +231,7 @@ externalDatabase:
 >
 > If you want to add MySQL datasource, you can build a new image based on the `apache/dolphinscheduler` image as follows.
 
-1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar)
+1. Download the MySQL driver [mysql-connector-java-8.0.16.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar).
 
 2. Create a new `Dockerfile` to add MySQL driver:
 
@@ -246,13 +246,13 @@ COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:mysql-driver .
 ```
 
-4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry
+4. Push the docker image `apache/dolphinscheduler:mysql-driver` to a docker registry.
 
-5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`
+5. Modify image `repository` and update `tag` to `mysql-driver` in `values.yaml`.
 
-6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-7. Add a MySQL datasource in `Datasource manage`
+7. Add a MySQL datasource in `Datasource manage`.
 
 ### How to Support Oracle Datasource in `Datasource manage`?
 
@@ -275,13 +275,13 @@ COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
 docker build -t apache/dolphinscheduler:oracle-driver .
 ```
 
-4. Push the docker image `apache/dolphinscheduler:oracle-driver` to a docker registry
+4. Push the docker image `apache/dolphinscheduler:oracle-driver` to a docker registry.
 
-5. Modify image `repository` and update `tag` to `oracle-driver` in `values.yaml`
+5. Modify image `repository` and update `tag` to `oracle-driver` in `values.yaml`.
 
-6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-7. Add an Oracle datasource in `Datasource manage`
+7. Add an Oracle datasource in `Datasource manage`.
 
 ### How to Support Python 2 pip and Custom requirements.txt?
 
@@ -296,7 +296,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 ```
 
-The command will install the default **pip 18.1**. If you upgrade the pip, just add one line
+The command will install the default **pip 18.1**. If you want to upgrade pip, just add the following line:
 
 ```
     pip install --no-cache-dir -U pip && \
@@ -308,13 +308,13 @@ The command will install the default **pip 18.1**. If you upgrade the pip, just
 docker build -t apache/dolphinscheduler:pip .
 ```
 
-3. Push the docker image `apache/dolphinscheduler:pip` to a docker registry
+3. Push the docker image `apache/dolphinscheduler:pip` to a docker registry.
 
-4. Modify image `repository` and update `tag` to `pip` in `values.yaml`
+4. Modify image `repository` and update `tag` to `pip` in `values.yaml`.
 
-5. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+5. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-6. Verify pip under a new Python task
+6. Verify pip under a new Python task.
 
 ### How to Support Python 3?
 
@@ -327,7 +327,7 @@ RUN apt-get update && \
     rm -rf /var/lib/apt/lists/*
 ```
 
-The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like
+The command will install the default **Python 3.7.3**. If you also want to install **pip3**, just replace `python3` with `python3-pip` like:
 
 ```
     apt-get install -y --no-install-recommends python3-pip && \
@@ -339,36 +339,36 @@ The command will install the default **Python 3.7.3**. If you also want to insta
 docker build -t apache/dolphinscheduler:python3 .
 ```
 
-3. Push the docker image `apache/dolphinscheduler:python3` to a docker registry
+3. Push the docker image `apache/dolphinscheduler:python3` to a docker registry.
 
-4. Modify image `repository` and update `tag` to `python3` in `values.yaml`
+4. Modify image `repository` and update `tag` to `python3` in `values.yaml`.
 
-5. Modify `PYTHON_HOME` to `/usr/bin/python3` in `values.yaml`
+5. Modify `PYTHON_HOME` to `/usr/bin/python3` in `values.yaml`.
 
-6. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+6. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-7. Verify Python 3 under a new Python task
+7. Verify Python 3 under a new Python task.
 
 ### How to Support Hadoop, Spark, Flink, Hive or DataX?
 
 Take Spark 2.4.7 as an example:
 
-1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`
+1. Download the Spark 2.4.7 release binary `spark-2.4.7-bin-hadoop2.7.tgz`.
 
-2. Ensure that `common.sharedStoragePersistence.enabled` is turned on
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on.
 
-3. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+3. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-4. Copy the Spark 2.4.7 release binary into the Docker container
+4. Copy the Spark 2.4.7 release binary into the Docker container.
 
 ```bash
 kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
 kubectl cp -n test spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
 ```
 
-Because the volume `sharedStoragePersistence` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost
+Because the volume `sharedStoragePersistence` is mounted on `/opt/soft`, all files in `/opt/soft` will not be lost.
 
-5. Attach the container and ensure that `SPARK_HOME2` exists
+5. Attach the container and ensure that `SPARK_HOME2` exists.
 
 ```bash
 kubectl exec -it dolphinscheduler-worker-0 bash
@@ -380,17 +380,17 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 # or just mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
-The last command will print the Spark version if everything goes well
+The last command will print the Spark version if everything goes well.
 
-6. Verify Spark under a Shell task
+6. Verify Spark under a Shell task.
 
 ```
 $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar
 ```
 
-Check whether the task log contains the output like `Pi is roughly 3.146015`
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
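
One hypothetical way to scan for that output from outside the container, assuming the pod name and namespace used in the earlier steps (the task log may also be written to a file rather than stdout, depending on your setup):

```shell
# Grep the worker pod's stdout for the expected Spark output;
# pod name and namespace are assumptions for illustration.
kubectl logs -n test dolphinscheduler-worker-0 | grep "Pi is roughly"
```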
 
-7. Verify Spark under a Spark task
+7. Verify Spark under a Spark task.
 
 The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources first, and then create a Spark task with:
 
@@ -399,34 +399,34 @@ The file `spark-examples_2.11-2.4.7.jar` needs to be uploaded to the resources f
 - Main Package: `spark-examples_2.11-2.4.7.jar`
 - Deploy Mode: `local`
 
-Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`
+Similarly, check whether the task log contains the output like `Pi is roughly 3.146015`.
 
-8. Verify Spark on YARN
+8. Verify Spark on YARN.
 
-Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps
+Spark on YARN (Deploy Mode is `cluster` or `client`) requires Hadoop support. Similar to Spark support, the operation of supporting Hadoop is almost the same as the previous steps.
 
-Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exists
+Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exist.
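
A quick sanity check inside a worker container; the pod name is an assumption carried over from the steps above:

```shell
# Verify the Hadoop environment variables inside the worker container.
kubectl exec -it dolphinscheduler-worker-0 -- bash -c 'echo "$HADOOP_HOME" && ls "$HADOOP_CONF_DIR"'
```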
 
 ### How to Support Spark 3?
 
-In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`
+In fact, the way to submit applications with `spark-submit` is the same, regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set `SPARK_HOME2=/path/to/spark3`.
 
 Take Spark 3.1.1 as an example:
 
-1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`
+1. Download the Spark 3.1.1 release binary `spark-3.1.1-bin-hadoop2.7.tgz`.
 
-2. Ensure that `common.sharedStoragePersistence.enabled` is turned on
+2. Ensure that `common.sharedStoragePersistence.enabled` is turned on.
 
-3. Run a DolphinScheduler release in Kubernetes (See **Installing the Chart**)
+3. Run a DolphinScheduler release in Kubernetes (See **Install DolphinScheduler**).
 
-4. Copy the Spark 3.1.1 release binary into the Docker container
+4. Copy the Spark 3.1.1 release binary into the Docker container.
 
 ```bash
 kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
 kubectl cp -n test spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft # with test namespace
 ```
 
-5. Attach the container and ensure that `SPARK_HOME2` exists
+5. Attach the container and ensure that `SPARK_HOME2` exists.
 
 ```bash
 kubectl exec -it dolphinscheduler-worker-0 bash
@@ -438,19 +438,19 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 # or just mv
 $SPARK_HOME2/bin/spark-submit --version
 ```
 
-The last command will print the Spark version if everything goes well
+The last command will print the Spark version if everything goes well.
 
-6. Verify Spark under a Shell task
+6. Verify Spark under a Shell task.
 
 ```
 $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar
 ```
 
-Check whether the task log contains the output like `Pi is roughly 3.146015`
+Check whether the task log contains the output like `Pi is roughly 3.146015`.
 
 ### How to Support Shared Storage Between Master, Worker and Api Server?
 
-For example, Master, Worker and API server may use Hadoop at the same time
+For example, Master, Worker and API server may use Hadoop at the same time.
 
 1. Modify the following configurations in `values.yaml`
 
@@ -465,17 +465,17 @@ common:
     storage: "20Gi"
 ```
 
-`storageClassName` and `storage` need to be modified to actual values
+Modify `storageClassName` and `storage` to actual environment values.
 
-> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`.
 
-2. Copy the Hadoop into the directory `/opt/soft`
+2. Copy the Hadoop into the directory `/opt/soft`.
 
-3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
+3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct.
 
 ### How to Support Local File Resource Storage Instead of HDFS and S3?
 
-Modify the following configurations in `values.yaml`
+Modify the following configurations in `values.yaml`:
 
 ```yaml
 common:
@@ -491,13 +491,13 @@ common:
     storage: "20Gi"
 ```
 
-`storageClassName` and `storage` need to be modified to actual values
+Modify `storageClassName` and `storage` to actual environment values.
 
-> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`
+> **Note**: `storageClassName` must support the access mode: `ReadWriteMany`.
 
 ### How to Support S3 Resource Storage Like MinIO?
 
-Take MinIO as an example: Modify the following configurations in `values.yaml`
+Take MinIO as an example: Modify the following configurations in `values.yaml`:
 
 ```yaml
 common:
@@ -510,13 +510,13 @@ common:
     FS_S3A_SECRET_KEY: "MINIO_SECRET_KEY"
 ```
 
-`BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` need to be modified to actual values
+Modify `BUCKET_NAME`, `MINIO_IP`, `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` to actual environment values.
 
-> **Note**: `MINIO_IP` can only use IP instead of domain name, because DolphinScheduler currently doesn't support S3 path style access
+> **Note**: `MINIO_IP` must be an IP address instead of a domain name, because DolphinScheduler currently doesn't support S3 path style access.
 
 ### How to Configure SkyWalking?
 
-Modify SKYWALKING configurations in `values.yaml`:
+Modify SkyWalking configurations in `values.yaml`:
 
 ```yaml
 common:
@@ -535,7 +535,7 @@ common:
 |                                                                                   |                                                                                                                                |                                                       |
 | `image.repository`                                                                | Docker image repository for the DolphinScheduler                                                                               | `apache/dolphinscheduler`                             |
 | `image.tag`                                                                       | Docker image version for the DolphinScheduler                                                                                  | `latest`                                              |
-| `image.pullPolicy`                                                                | Image pull policy. One of Always, Never, IfNotPresent                                                                          | `IfNotPresent`                                        |
+| `image.pullPolicy`                                                                | Image pull policy. Options: Always, Never, IfNotPresent                                                                          | `IfNotPresent`                                        |
 | `image.pullSecret`                                                                | Image pull secret. An optional reference to secret in the same namespace to use for pulling any of the images                  | `nil`                                                 |
 |                                                                                   |                                                                                                                                |                                                       |
 | `postgresql.enabled`                                                              | If not exists external PostgreSQL, by default, the DolphinScheduler will use a internal PostgreSQL                             | `true`                                                |
diff --git a/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md b/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
index f1d03ea..8808184 100644
--- a/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
+++ b/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
@@ -1,12 +1,12 @@
 # Pseudo-Cluster Deployment
 
-The purpose of pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, api server, are all on the same machine.
+The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, and API server are all on the same machine.
 
-If you are a green hand and want to experience DolphinScheduler, we recommended you install follow [Standalone](standalone.md). If you want to experience more complete functions or schedule large tasks number, we recommended you install follow [pseudo-cluster deployment](pseudo-cluster.md). If you want to using DolphinScheduler in production, we recommended you follow [cluster deployment](cluster.md) or [kubernetes](kubernetes.md)
+If you are a newcomer and want to experience DolphinScheduler, we recommend the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
 
-## Prepare
+## Preparation
 
-Pseudo-cluster deployment of DolphinScheduler requires external software support
+Pseudo-cluster deployment of DolphinScheduler requires external software support:
 
 * JDK: Download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` variables. You can skip this step if it already exists in your environment.
 * Binary package: Download the DolphinScheduler binary package at [download page](https://dolphinscheduler.apache.org/en-us/download/download.html)
@@ -16,13 +16,13 @@ Pseudo-cluster deployment of DolphinScheduler requires external software support
   * `pstree` for macOS
   * `psmisc` for Fedora/Red/Hat/CentOS/Ubuntu/Debian
 
-> **_Note:_** DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but if you need to run tasks that depend on them, you need to have the corresponding environment support
+> **_Note:_** DolphinScheduler itself does not depend on Hadoop, Hive, Spark, but if you need to run tasks that depend on them, you need to have the corresponding environment support.
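
A few optional sanity checks for the prerequisites above; the commands are illustrative and distribution-dependent:

```shell
java -version       # expect 1.8 or later
command -v pstree   # provided by psmisc on most Linux distributions
```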
 
 ## DolphinScheduler Startup Environment
 
 ### Configure User Exemption and Permissions
 
-Create a deployment user, and be sure to configure `sudo` without password. We here make a example for user dolphinscheduler.
+Create a deployment user, and make sure to configure `sudo` without password. Here is an example of creating the user `dolphinscheduler`:
 
 ```shell
 # To create a user, login as root
@@ -41,12 +41,12 @@ chown -R dolphinscheduler:dolphinscheduler apache-dolphinscheduler-*-bin
 
 > **_NOTICE:_**
 >
-> * Because DolphinScheduler's multi-tenant task switch user by command `sudo -u {linux-user}`, the deployment user needs to have sudo privileges and is password-free. If novice learners don’t understand, you can ignore this point for the time being.
-> * If you find the line "Defaults requirest" in the `/etc/sudoers` file, please comment it
+> * Due to DolphinScheduler's multi-tenant task switch user using command `sudo -u {linux-user}`, the deployment user needs to have `sudo` privileges and be password-free. If novice learners don’t understand, you can ignore this point for now.
+> * If you find the line "Defaults requiretty" in the `/etc/sudoers` file, please comment it out.
 
 ### Configure Machine SSH Password-Free Login
 
-Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The steps to configure password-free login are as follows
+Since resources need to be sent to different machines during installation, SSH password-free login is required between each machine. The steps to configure password-free login are as follows:
 
 ```shell
 su dolphinscheduler
@@ -56,11 +56,11 @@ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 chmod 600 ~/.ssh/authorized_keys
 ```
 
-> **_Notice:_** After the configuration is complete, you can run the command `ssh localhost` to test if it work or not, if you can login with ssh without password.
+> **_Notice:_** After the configuration is complete, you can run the command `ssh localhost` to test whether it works. If you can log in via ssh without a password, the configuration is successful.
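
A non-interactive way to verify this, assuming OpenSSH; `BatchMode=yes` makes ssh fail instead of prompting for a password:

```shell
ssh -o BatchMode=yes localhost true && echo "password-free SSH OK"
```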
 
 ### Start ZooKeeper
 
-Go to the ZooKeeper installation directory, copy configure file `zoo_sample.cfg` to `conf/zoo.cfg`, and change value of dataDir in `conf/zoo.cfg` to `dataDir=./tmp/zookeeper`
+Go to the ZooKeeper installation directory, copy configure file `zoo_sample.cfg` to `conf/zoo.cfg`, and change value of dataDir in `conf/zoo.cfg` to `dataDir=./tmp/zookeeper`.
 
 ```shell
 # Start ZooKeeper
@@ -78,7 +78,7 @@ spring.datasource.username=dolphinscheduler
 spring.datasource.password=dolphinscheduler
 ```
 
-After modifying and saving, execute the following command to create database table and inti basic data.
+After modifying and saving, execute the following command to create database tables and init basic data.
 
 ```shell
 sh script/create-dolphinscheduler.sh
@@ -87,38 +87,38 @@ sh script/create-dolphinscheduler.sh
 
 ## Modify Configuration
 
-After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is in the path of `conf/config/install_config.conf`. Generally, you just needs to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server** part to complete the deployment, the following describes the parameters that must be modified
+After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is in the path of `conf/config/install_config.conf`. Generally, you just need to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server** parts to complete the deployment. The following describes the parameters that must be modified:
 
 ```shell
 # ---------------------------------------------------------
 # INSTALL MACHINE
 # ---------------------------------------------------------
-# Because the master, worker, and API server are deployed on a single node, the IP of the server is the machine IP or localhost
+# Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost
 ips="localhost"
 masters="localhost"
 workers="localhost:default"
 alertServer="localhost"
 apiServers="localhost"
 
-# DolphinScheduler installation path, it will auto create if not exists
+# DolphinScheduler installation path, it will auto-create if not exists
 installPath="~/dolphinscheduler"
 
-# Deploy user, use what you create in section **Configure machine SSH password-free login**
+# Deploy user, use the user you created in section **Configure machine SSH password-free login**
 deployUser="dolphinscheduler"
 
 # ---------------------------------------------------------
 # DolphinScheduler ENV
 # ---------------------------------------------------------
-# The path of JAVA_HOME, which JDK install path in section **Prepare**
+# The path of JAVA_HOME, i.e. the JDK install path from section **Preparation**
 javaHome="/your/java/home/here"
 
 # ---------------------------------------------------------
 # Database
 # ---------------------------------------------------------
-# Database type, username, password, IP, port, metadata. For now dbtype supports `mysql` and `postgresql`
+# Database type, username, password, IP, port, metadata. For now `dbtype` supports `mysql` and `postgresql`
 dbtype="mysql"
 dbhost="localhost:3306"
-# Have to modify if you are not using dolphinscheduler/dolphinscheduler as your username and password
+# Need to modify if you are not using `dolphinscheduler/dolphinscheduler` as your username and password
 username="dolphinscheduler"
 password="dolphinscheduler"
 dbname="dolphinscheduler"
@@ -132,7 +132,7 @@ registryServers="localhost:2181"
 
 ## Initialize the Database
 
-DolphinScheduler metadata is stored in relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download [mysql-connector-java driver][mysql] (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example for how to initialize the database
+DolphinScheduler metadata is stored in a relational database; currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example of how to initialize the database:
 
 ```shell
 mysql -uroot -p
@@ -146,7 +146,7 @@ mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTI
 mysql> flush privileges;
 ```
 
-After above steps done you would create a new database for DolphinScheduler, then run shortcut Shell scripts to init database
+After the above steps are done, you will have created a new database for DolphinScheduler; then run the Shell script to init the database:
 
 ```shell
 sh script/create-dolphinscheduler.sh
@@ -154,18 +154,18 @@ sh script/create-dolphinscheduler.sh
 
 ## Start DolphinScheduler
 
-Use **deployment user** you created above, running the following command to complete the deployment, and the server log will be stored in the logs folder
+Use the **deployment user** you created above to run the following command to complete the deployment; the server logs will be stored in the logs folder.
 
 ```shell
 sh install.sh
 ```
 
-> **_Note:_** For the first time deployment, there maybe occur five times of `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` in terminal
-, this is non-important information and you can ignore it.
+> **_Note:_** For the first-time deployment, `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear in the terminal up to five times;
+ this is unimportant information that you can ignore.
 
 ## Login DolphinScheduler
 
-The browser access address http://localhost:12345/dolphinscheduler can login DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
+Access `http://localhost:12345/dolphinscheduler` in the browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
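
A rough reachability check from the command line; this only verifies that the API server responds at the address above, not the login itself, and the exact response may vary by version:

```shell
# -sf: silent, and fail on HTTP errors (status >= 400).
curl -sf http://localhost:12345/dolphinscheduler > /dev/null && echo "UI reachable"
```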
 
 ## Start or Stop Server
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md b/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
index 14b5187..fedec67 100644
--- a/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
+++ b/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
@@ -1,13 +1,13 @@
 SkyWalking Agent Deployment
 =============================
 
-The dolphinscheduler-skywalking module provides [SkyWalking](https://skywalking.apache.org/) monitor agent for the Dolphinscheduler project.
+The `dolphinscheduler-skywalking` module provides [SkyWalking](https://skywalking.apache.org/) monitor agent for the DolphinScheduler project.
 
-This document describes how to enable SkyWalking 8.4+ support with this module (recommended to use SkyWalking 8.5.0).
+This document describes how to enable SkyWalking version 8.4+ support with this module (recommend using SkyWalking 8.5.0).
 
 ## Installation
 
-The following configuration is used to enable SkyWalking agent.
+The following configuration is used to enable the SkyWalking agent.
 
 ### Through Environment Variable Configuration (for Docker Compose)
 
@@ -20,7 +20,7 @@ SW_GRPC_LOG_SERVER_HOST=127.0.0.1
 SW_GRPC_LOG_SERVER_PORT=11800
 ```
 
-And run
+And run:
 
 ```shell
 $ docker-compose up -d
@@ -69,6 +69,6 @@ Copy the `${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinschedul
 
 #### View DolphinScheduler Dashboard
 
-If you have opened SkyWalking dashboard with a browser before, you need to clear the browser cache.
+If you have opened the SkyWalking dashboard with a browser before, you need to clear the browser cache.
 
 ![img1](/img/skywalking/import-dashboard-1.jpg)
diff --git a/docs/en-us/dev/user_doc/guide/installation/standalone.md b/docs/en-us/dev/user_doc/guide/installation/standalone.md
index 143ca65..fb9d43a 100644
--- a/docs/en-us/dev/user_doc/guide/installation/standalone.md
+++ b/docs/en-us/dev/user_doc/guide/installation/standalone.md
@@ -1,21 +1,21 @@
 # Standalone
 
-Standalone only for quick look for DolphinScheduler.
+Standalone is only for a quick experience of DolphinScheduler.
 
-If you are a green hand and want to experience DolphinScheduler, we recommended you install follow [Standalone](standalone.md). If you want to experience more complete functions or schedule large tasks number, we recommended you install follow [pseudo-cluster deployment](pseudo-cluster.md). If you want to using DolphinScheduler in production, we recommended you follow [cluster deployment](cluster.md) or [kubernetes](kubernetes.md)
+If you are a newcomer and want to experience DolphinScheduler, we recommend the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
 
-> **_Note:_** Standalone only recommends the use of less than 20 workflows, because it uses H2 Database, ZooKeeper Testing Server, too many tasks may cause instability
+> **_Note:_** Standalone is recommended only for fewer than 20 workflows, because it uses an H2 database and a ZooKeeper testing server; too many tasks may cause instability.
 
-## Prepare
+## Preparation
 
-* JDK:Download [JDK][jdk] (1.8+), and configure `JAVA_HOME` to and `PATH` variable. You can skip this step, if it already exists in your environment.
-* Binary package: Download the DolphinScheduler binary package at [download page](https://dolphinscheduler.apache.org/en-us/download/download.html)
+* JDK: download [JDK][jdk] (1.8+), and configure the `JAVA_HOME` and `PATH` variables. You can skip this step if it already exists in your environment.
+* Binary package: download the DolphinScheduler binary package at [download page](https://dolphinscheduler.apache.org/en-us/download/download.html).
 
 ## Start DolphinScheduler Standalone Server
 
 ### Extract and Start DolphinScheduler
 
-There is a standalone startup script in the binary compressed package, which can be quickly started after extract. Switch to a user with sudo permission and run the script
+There is a standalone startup script in the binary compressed package, which can be quickly started after extraction. Switch to a user with sudo permission and run the script:
 
 ```shell
 # Extract and start Standalone Server
@@ -26,11 +26,11 @@ sh ./bin/dolphinscheduler-daemon.sh start standalone-server
 
 ### Login DolphinScheduler
 
-The browser access address http://localhost:12345/dolphinscheduler can login DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
+Access `http://localhost:12345/dolphinscheduler` in the browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
 
 ### Start or Stop Server
 
-The script `./bin/dolphinscheduler-daemon.sh` can not only quickly start standalone, but also stop the service operation. All the commands are as follows
+The script `./bin/dolphinscheduler-daemon.sh` can be used not only to quickly start the standalone server, but also to stop it. All the commands are as follows:
 
 ```shell
 # Start Standalone Server