Posted to commits@dolphinscheduler.apache.org by zh...@apache.org on 2022/03/15 11:20:14 UTC

[dolphinscheduler-website] branch master updated: Proofreading dev documents under /user_doc/guide/task (#732)

This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 2434770  Proofreading dev documents under /user_doc/guide/task (#732)
2434770 is described below

commit 24347701fa337f6027f17a0856db0c251b3e4a8c
Author: Tq <ti...@gmail.com>
AuthorDate: Tue Mar 15 19:18:42 2022 +0800

    Proofreading dev documents under /user_doc/guide/task (#732)
    
    * proofreading dev documents under /user_doc/guide/task
    * resolve conflicts in datax.md map-reduce.md spark.md
    * fix according to the reviews
---
 docs/en-us/dev/user_doc/guide/task/conditions.md   | 34 +++++-----
 docs/en-us/dev/user_doc/guide/task/datax.md        | 56 ++++++++--------
 docs/en-us/dev/user_doc/guide/task/dependent.md    |  6 +-
 docs/en-us/dev/user_doc/guide/task/emr.md          | 20 +++---
 docs/en-us/dev/user_doc/guide/task/flink.md        | 66 ++++++++++---------
 docs/en-us/dev/user_doc/guide/task/http.md         | 30 ++++-----
 docs/en-us/dev/user_doc/guide/task/map-reduce.md   | 74 +++++++++++-----------
 docs/en-us/dev/user_doc/guide/task/pigeon.md       | 20 +++---
 docs/en-us/dev/user_doc/guide/task/python.md       | 16 ++---
 docs/en-us/dev/user_doc/guide/task/shell.md        | 44 ++++++-------
 docs/en-us/dev/user_doc/guide/task/spark.md        | 66 +++++++++----------
 docs/en-us/dev/user_doc/guide/task/sql.md          | 30 ++++-----
 .../dev/user_doc/guide/task/stored-procedure.md    | 10 +--
 docs/en-us/dev/user_doc/guide/task/sub-process.md  | 36 +++++------
 docs/en-us/dev/user_doc/guide/task/switch.md       | 34 +++++-----
 docs/zh-cn/dev/user_doc/guide/task/datax.md        | 30 ++++-----
 docs/zh-cn/dev/user_doc/guide/task/flink.md        |  6 +-
 docs/zh-cn/dev/user_doc/guide/task/pigeon.md       |  2 +-
 docs/zh-cn/dev/user_doc/guide/task/spark.md        |  2 +-
 docs/zh-cn/dev/user_doc/guide/task/sql.md          |  2 +-
 docs/zh-cn/dev/user_doc/guide/task/switch.md       |  2 +-
 21 files changed, 289 insertions(+), 297 deletions(-)

diff --git a/docs/en-us/dev/user_doc/guide/task/conditions.md b/docs/en-us/dev/user_doc/guide/task/conditions.md
index d2e262e..6e404a9 100644
--- a/docs/en-us/dev/user_doc/guide/task/conditions.md
+++ b/docs/en-us/dev/user_doc/guide/task/conditions.md
@@ -1,10 +1,10 @@
 # Conditions
 
-Conditions is a condition node, determining which downstream task should be run based on the condition set to it. For now, the Conditions support multiple upstream tasks, but only two downstream tasks. When the number of upstream tasks exceeds one, complex upstream dependencies can be achieved through `and` and `or` operators.
+Condition is a conditional node that determines which downstream task should run based on the status of the upstream task. Currently, the Conditions task supports multiple upstream tasks but only two downstream tasks. When there is more than one upstream task, complex upstream dependencies can be built through the `and` and `or` operators.
 
-## Create
+## Create Task
 
-Drag in the toolbar<img src="/img/conditions.png" width="20"/>The task node to the drawing board to create a new Conditions task, as shown in the figure below:
+Drag from the toolbar <img src="/img/conditions.png" width="20"/> task node to canvas to create a new Conditions task, as shown in the figure below:
 
   <p align="center">
    <img src="/img/condition_dag_en.png" width="80%" />
@@ -17,20 +17,20 @@ Drag in the toolbar<img src="/img/conditions.png" width="20"/>The task node to t
 ## Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Downstream tasks: Supports two branches for now, success and failure
-  - Success: When the Conditions task runs successfully, run this downstream task
-  - Failure: When the Conditions task runs fails, run this downstream task
-- Upstream condition selection: one or more upstream tasks can be selected for conditions
-  - Add the upstream dependency: Use the first parameter to choose task name, and the second parameter for status of the upsteam task.
-  - Upstream task relationship: we use `and` and `or` operators to handle complex relationship of upstream when multiple upstream tasks for Conditions task
+- Run flag: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times to resubmit the task after a failure. You can select from the drop-down or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- Downstream tasks selection: supports two branches success and failure.
+  - Success: When the upstream task runs successfully, run the success branch.
+  - Failure: When the upstream task runs failed, run the failure branch.
+- Upstream condition selection: can select one or more upstream tasks for conditions.
+  - Add an upstream dependency: the first parameter is to choose a specified task name, and the second parameter is to choose the upstream task status to trigger conditions.
+  - Select upstream task relationship: use `and` and `or` operators to handle the complex relationship of upstream when there are multiple upstream tasks for conditions.
 
 ## Related Task
 
-[switch](switch.md): [Condition](conditions.md)task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream node. The [Switch](switch.md) task mainly executes the corresponding branch based on the value of the [global parameter](../parameter/global.md) and the judgment expression result written by the user.
+[switch](switch.md): The Conditions task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream nodes. The [Switch](switch.md) task node mainly executes the corresponding branch based on the value of the [global parameter](../parameter/global.md) and the result of a user-written expression.
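To make the `and`/`or` combination above concrete, here is a minimal, purely illustrative sketch of the boolean logic in shell form — DolphinScheduler evaluates these statuses internally, and the task names and statuses below are made up:

```shell
# Hypothetical upstream statuses of two tasks feeding a Conditions node.
task_a_status="success"
task_b_status="failure"

# "task_a succeeds AND task_b succeeds" -> run the success branch, otherwise the failure branch.
if [ "$task_a_status" = "success" ] && [ "$task_b_status" = "success" ]; then
  echo "run the success branch"
else
  echo "run the failure branch"
fi
```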
diff --git a/docs/en-us/dev/user_doc/guide/task/datax.md b/docs/en-us/dev/user_doc/guide/task/datax.md
index 4dd5248..5817512 100644
--- a/docs/en-us/dev/user_doc/guide/task/datax.md
+++ b/docs/en-us/dev/user_doc/guide/task/datax.md
@@ -7,57 +7,57 @@ DataX task type for executing DataX programs. For DataX nodes, the worker will e
 ## Create Task
 
 - Click Project Management -> Project Name -> Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag the <img src="/img/tasks/icons/datax.png" width="15"/> from the toolbar to the drawing board.
+- Drag from the toolbar <img src="/img/tasks/icons/datax.png" width="15"/> task node to canvas.
 
 ## Task Parameter
 
 - **Node name**: The node name in a workflow definition is unique.
-- **Run flag**: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- **Descriptive information**: describe the function of the node.
-- **Task priority**: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- **Worker grouping**: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- **Environment Name**: Configure the environment name in which to run the script.
-- **Number of failed retry attempts**: The number of times the task failed to be resubmitted.
-- **Failed retry interval**: The time, in cents, interval for resubmitting the task after a failed task.
-- **Delayed execution time**: The time, in cents, that a task is delayed in execution.
-- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- **Custom template**: Custom the content of the DataX node's json profile when the default data source provided does not meet the required requirements.
-- **json**: json configuration file for DataX synchronization.
-- **Custom parameters**: SQL task type, and stored procedure is a custom parameter order to set values for the method. The custom parameter type and data type are the same as the stored procedure task type. The difference is that the SQL task type custom parameter will replace the \${variable} in the SQL statement.
-- **Data source**: Select the data source from which the data will be extracted.
-- **sql statement**: the sql statement used to extract data from the target database, the sql query column name is automatically parsed when the node is executed, and mapped to the target table synchronization column name. When the source table and target table column names are inconsistent, they can be converted by column alias.
+- **Run flag**: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which to run the script.
+- **Times of failed retry attempts**: The number of times to resubmit the task after a failure.
+- **Failed retry interval**: The time interval (unit: minute) for resubmitting the task after a failure.
+- **Delayed execution time**: The time (unit: minute) by which the task execution is delayed.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- **Custom template**: Customize the content of the DataX node's JSON profile when the default DataSource provided does not meet the requirements.
+- **JSON**: JSON configuration file for DataX synchronization.
+- **Custom parameters**: The custom parameter types and data types are the same as in the stored procedure task type. The difference is that the custom parameters of the SQL task type replace the `${variable}` in the SQL statement.
+- **Data source**: Select the data source to extract data.
+- **SQL statement**: The SQL statement used to extract data from the target database; the column names in the SQL query are automatically parsed when the node executes and mapped to the column names of the target table for synchronization. When the column names of the source table and the target table are inconsistent, they can be converted by column alias (AS).
 - **Target library**: Select the target library for data synchronization.
-- **Pre-sql**: Pre-sql is executed before the sql statement (executed by the target library).
-- **Post-sql**: Post-sql is executed after the sql statement (executed by the target library).
-- **Stream limit (number of bytes)**: Limits the number of bytes in the query.
+- **Pre-SQL**: Pre-SQL executes before the SQL statement (executed by the target database).
+- **Post-SQL**: Post-SQL executes after the SQL statement (executed by the target database).
+- **Stream limit (number of bytes)**: Limit the number of bytes for a query.
 - **Limit flow (number of records)**: Limit the number of records for a query.
-- **Running memory**: the minimum and maximum memory required can be configured to suit the actual production environment.
-- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+- **Running memory**: Set the minimum and maximum memory required, which can be set according to the actual production environment.
+- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
 
 ## Task Example
 
-This example demonstrates importing data from Hive into MySQL.
+This example demonstrates how to import data from Hive into MySQL.
 
-### Configuring the DataX environment in DolphinScheduler
+### Configure the DataX environment in DolphinScheduler
 
 If you are using the DataX task type in a production environment, it is necessary to configure the required environment first. The configuration file is as follows: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
 
 ![datax_task01](/img/tasks/demo/datax_task01.png)
 
-After the environment has been configured, DolphinScheduler needs to be restarted.
+After finishing the environment configuration, restart DolphinScheduler.
 
-### Configuring DataX Task Node
+### Configure DataX Task Node
 
-As the default data source does not contain data to be read from Hive, a custom json is required, refer to: [HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md). Note: Partition directories exist on the HDFS path, when importing data in real world situations, partitioning is recommended to be passed as a parameter, using custom parameters.
+As the default DataSource does not cover reading data from Hive, a custom JSON file is required; refer to: [HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md). Note: partition directories exist on the HDFS path; when importing data in real-world situations, it is recommended to pass the partition as a custom parameter.
 
-After writing the required json file, you can configure the node content by following the steps in the diagram below.
+After finishing the required JSON file, you can configure the node by following the steps in the diagram below:
 
 ![datax_task02](/img/tasks/demo/datax_task02.png)
 
-### View run results
+### View Execution Result
 
 ![datax_task03](/img/tasks/demo/datax_task03.png)
 
 ### Notice
 
-If the default data source provided does not meet your needs, you can configure the writer and reader of DataX according to the actual usage environment in the custom template option, available at https://github.com/alibaba/DataX.
+If the default DataSource provided does not meet your needs, you can configure the writer and reader of DataX according to the actual usage environment in the custom template option, available at [DataX](https://github.com/alibaba/DataX).
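For reference, a minimal sketch of how such a custom JSON job could be launched from the command line — the job file path is an assumption for illustration; inside DolphinScheduler the worker issues the equivalent call using the configured `DATAX_HOME`:

```shell
# Assumes DATAX_HOME is exported in dolphinscheduler_env.sh and that
# /tmp/hive2mysql.json is the custom DataX job file described above.
python "${DATAX_HOME}/bin/datax.py" /tmp/hive2mysql.json
```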
diff --git a/docs/en-us/dev/user_doc/guide/task/dependent.md b/docs/en-us/dev/user_doc/guide/task/dependent.md
index 88868c9..ff86207 100644
--- a/docs/en-us/dev/user_doc/guide/task/dependent.md
+++ b/docs/en-us/dev/user_doc/guide/task/dependent.md
@@ -1,8 +1,8 @@
-# Dependent
+# Dependent Node
 
-- Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node will check whether process B has a successful execution yesterday.
+- Dependent nodes are **dependency check nodes**. For example, process A depends on the successful execution of process B from yesterday, and the dependent node will check whether process B ran successfully yesterday.
 
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node in the toolbar to the drawing board, as shown in the following figure:
+> Drag from the toolbar ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node to the canvas, as shown in the figure below:
 
 <p align="center">
    <img src="/img/dependent-nodes-en.png" width="80%" />
diff --git a/docs/en-us/dev/user_doc/guide/task/emr.md b/docs/en-us/dev/user_doc/guide/task/emr.md
index e44a599..354cc7b 100644
--- a/docs/en-us/dev/user_doc/guide/task/emr.md
+++ b/docs/en-us/dev/user_doc/guide/task/emr.md
@@ -2,21 +2,21 @@
 
 ## Overview
 
-Amazon EMR task type, for creating EMR clusters on AWS and performing computing tasks. using [aws-java-sdk](https://aws.amazon.com/cn/sdk-for-java/) in the background, create [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object from json,then submit it to AWS.
+Amazon EMR task type, for creating EMR clusters on AWS and running computing tasks. It uses [aws-java-sdk](https://aws.amazon.com/cn/sdk-for-java/) in the background to convert the JSON parameters into a [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object and submit it to AWS.
 
 ## Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- json: The json corresponding to the [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object,can also refer to [API_RunJobFlow_Examples](https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html#API_RunJobFlow_Examples)
+- Run flag: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times to resubmit the task after a failure. You can select from the drop-down or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- JSON: JSON corresponding to the [RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html) object, for details refer to [API_RunJobFlow_Examples](https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html#API_RunJobFlow_Examples).
 
-## json example
+## JSON Example
 
 ```json
 {
diff --git a/docs/en-us/dev/user_doc/guide/task/flink.md b/docs/en-us/dev/user_doc/guide/task/flink.md
index a783654..634b811 100644
--- a/docs/en-us/dev/user_doc/guide/task/flink.md
+++ b/docs/en-us/dev/user_doc/guide/task/flink.md
@@ -1,50 +1,48 @@
-# Flink
+# Flink Node
 
 ## Overview
 
-Flink task type for executing Flink programs. For Flink nodes, the worker submits the task by using the flink command `flink run`. See [flink cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/) for more details.
+Flink task type for executing Flink programs. For Flink nodes, the worker submits the task by using the Flink command `flink run`. See [flink cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/) for more details.
 
 ## Create Task
 
 - Click Project Management -> Project Name -> Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag the <img src="/img/tasks/icons/flink.png" width="15"/> from the toolbar to the drawing board.
+- Drag from the toolbar <img src="/img/tasks/icons/flink.png" width="15"/> task node to the canvas.
 
 ## Task Parameter
 
 - **Node name**: The node name in a workflow definition is unique.
-- **Run flag**: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- **Descriptive information**: describe the function of the node.
-- **Task priority**: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- **Worker grouping**: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- **Environment Name**: Configure the environment name in which to run the script.
-- **Number of failed retry attempts**: The number of times the task failed to be resubmitted.
-- **Failed retry interval**: The time, in cents, interval for resubmitting the task after a failed task.
-- **Delayed execution time**: the time, in cents, that a task is delayed in execution.
-- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- **Program type**: supports Java、Scala and Python.
-- **The class of main function**: is the full path of Main Class, the entry point of the Flink program.
-- **Resource**: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
-- **Main jar package**: is the Flink jar package.
-- **Deployment mode**: support three modes of cluster and local 
-- **Task name** (option): Flink task name.
-- **jobManager memory number**: This is used to set the number of jobManager memories, which can be set according to the actual production environment.
-- **Number of slots**: This is used to set the number of Slots, which can be set according to the actual production environment.
-- **taskManager memory number**: This is used to set the number of taskManager memories, which can be set according to the actual production environment.
-- **Number of taskManage**: This is used to set the number of taskManagers, which can be set according to the actual production environment.
-- **Custom parameters**: It is a user-defined parameter that is part of MapReduce, which will replace the content with ${variable} in the script.
-- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+- **Run flag**: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which to run the script.
+- **Times of failed retry attempts**: The number of times to resubmit the task after a failure.
+- **Failed retry interval**: The time interval (unit: minute) for resubmitting the task after a failure.
+- **Delayed execution time**: The time (unit: minute) by which the task execution is delayed.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- **Program type**: Supports Java, Scala and Python.
+- **The class of main function**: The **full path** of Main Class, the entry point of the Flink program.
+- **Main jar package**: The jar package of the Flink program (uploaded via the Resource Center).
+- **Deployment mode**: Supports two deployment modes: cluster and local.
+- **Flink version**: Select the version according to the execution environment.
+- **Task name** (optional): Flink task name.
+- **JobManager memory size**: Used to set the JobManager memory size, which can be set according to the actual production environment.
+- **Number of slots**: Used to set the number of slots, which can be set according to the actual production environment.
+- **TaskManager memory size**: Used to set the TaskManager memory size, which can be set according to the actual production environment.
+- **Number of TaskManager**: Used to set the number of TaskManagers, which can be set according to the actual production environment.
 - **Parallelism**: Used to set the degree of parallelism for executing Flink tasks.
-- **Main program parameters**: et the input parameters of the Flink program and support the substitution of custom parameter variables.
-- **Other parameters**: support `--jars`, `--files`,` --archives`, `--conf` format.
-- **Resource**: If the resource file is referenced in other parameters, you need to select and specify in the resource.
-- **Custom parameter**: It is a local user-defined parameter of Flink, which will replace the content with ${variable} in the script.
-- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+- **Main program parameters**: Set the input parameters for the Flink program and support the substitution of custom parameter variables.
+- **Optional parameters**: Support `--jar`, `--files`, `--archives`, `--conf` format.
+- **Resource**: Specify resource files in `Resource` if they are referenced in other parameters.
+- **Custom parameter**: It is a local user-defined parameter for Flink, and will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
 
 ## Task Example
 
 ### Execute the WordCount Program
 
-This is a common introductory case in the Big Data ecosystem, which often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text. (Flink's releases come with this example job)
+This is a common introductory case in the big data ecosystem, which often applies to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text. (Flink releases ship with this example job.)
 
 #### Configure the flink environment in DolphinScheduler
 
@@ -54,18 +52,18 @@ If you are using the flink task type in a production environment, it is necessar
 
 #### Upload the Main Package
 
-When using the Flink task node, you will need to use the Resource Centre to upload the jar package for the executable. Refer to the [resource center](../resource.md).
+When using the Flink task node, you need to upload the jar package to the Resource Centre for execution; refer to the [resource center](../resource.md).
 
-After configuring the Resource Centre, you can upload the required target files directly using drag and drop.
+After finishing the Resource Centre configuration, upload the required target files directly by dragging and dropping.
 
 ![resource_upload](/img/tasks/demo/upload_jar.png)
 
 #### Configure Flink Nodes
 
-Simply configure the required content according to the parameter descriptions above.
+Configure the required content according to the parameter descriptions above.
 
 ![demo-flink-simple](/img/tasks/demo/flink_task02.png)
 
 ## Notice
 
-JAVA and Scala are only used for identification, there is no difference, if it is Flink developed by Python, there is no class of the main function, the others are the same.
+JAVA and Scala are only used for identification, and there is no difference between them. If you develop Flink with Python, there is no class of the main function and the rest is the same.
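To make the submission concrete, here is a minimal sketch of the `flink run` call the worker issues for the WordCount example above — the main class, jar name and I/O paths are assumptions for illustration:

```shell
# Assumes FLINK_HOME is configured in dolphinscheduler_env.sh and that
# WordCount.jar was uploaded through the Resource Center.
"${FLINK_HOME}/bin/flink" run \
  -c org.apache.flink.examples.java.wordcount.WordCount \
  WordCount.jar \
  --input /tmp/words.txt --output /tmp/wordcount-result
```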
diff --git a/docs/en-us/dev/user_doc/guide/task/http.md b/docs/en-us/dev/user_doc/guide/task/http.md
index d578180..0303714 100644
--- a/docs/en-us/dev/user_doc/guide/task/http.md
+++ b/docs/en-us/dev/user_doc/guide/task/http.md
@@ -1,22 +1,22 @@
-# HTTP
+# HTTP Node
 
-- Drag in the toolbar<img src="/img/http.png" width="35"/>The task node to the drawing board, as shown in the following figure:
+- Drag from the toolbar <img src="/img/http.png" width="35"/> task node to the canvas, as shown in the following figure:
 
 <p align="center">
    <img src="/img/http-en.png" width="80%" />
  </p>
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Request address: http request URL.
-- Request type: support GET, POSt, HEAD, PUT, DELETE.
-- Request parameters: Support Parameter, Body, Headers.
-- Verification conditions: support default response code, custom response code, content included, content not included.
-- Verification content: When the verification condition selects a custom response code, the content contains, and the content does not contain, the verification content is required.
-- Custom parameter: It is a user-defined parameter of http part, which will replace the content with \${variable} in the script.
+- Run flag: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times to resubmit the task after a failure. You can select from the drop-down or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- Request address: HTTP request URL.
+- Request type: Support GET, POST, HEAD, PUT and DELETE.
+- Request parameters: Support Parameter, Body and Headers.
+- Verification conditions: Support default response code, custom response code, content included and content not included.
+- Verification content: When the verification condition selects custom response code, content included or content not included, the verification content is required.
+- Custom parameter: It is a user-defined local parameter of HTTP, and will replace the content with `${variable}` in the script.
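Purely as an illustration of what the node checks, the equivalent request issued from the command line might look like the sketch below — the URL, query parameter and header are made up, and the HTTP task sends the request itself rather than via curl:

```shell
# Hypothetical GET request; the HTTP node's verification condition would then
# check the response code or whether the body contains the expected content.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X GET "https://example.com/api/health?env=prod" \
  -H "Authorization: Bearer <token>"
```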
diff --git a/docs/en-us/dev/user_doc/guide/task/map-reduce.md b/docs/en-us/dev/user_doc/guide/task/map-reduce.md
index cb4d45c..6b973ea 100644
--- a/docs/en-us/dev/user_doc/guide/task/map-reduce.md
+++ b/docs/en-us/dev/user_doc/guide/task/map-reduce.md
@@ -1,58 +1,58 @@
-# MapReduce
+# MapReduce Node
 
 ## Overview
 
-- MapReduce(MR) task type for executing MapReduce programs. For MapReduce nodes, the worker submits the task by using the Hadoop command `hadoop jar`. See [Hadoop Command Manual](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#jar) for more details.
+MapReduce(MR) task type used for executing MapReduce programs. For MapReduce nodes, the worker submits the task by using the Hadoop command `hadoop jar`. See [Hadoop Command Manual](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#jar) for more details.
 
 ## Create Task
 
 - Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag the <img src="/img/tasks/icons/mr.png" width="15"/> from the toolbar to the drawing board.
+- Drag from the toolbar <img src="/img/tasks/icons/mr.png" width="15"/> to the canvas.
 
 ## Task Parameter
 
 - **Node name**: The node name in a workflow definition is unique.
-- **Run flag**: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- **Descriptive information**: describe the function of the node.
-- **Task priority**: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- **Worker grouping**: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- **Environment Name**: Configure the environment name in which to run the script.
-- **Number of failed retry attempts**: The number of times the task failed to be resubmitted.
-- **Failed retry interval**: The time, in cents, interval for resubmitting the task after a failed task.
-- **Delayed execution time**: the time, in cents, that a task is delayed in execution.
-- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- **Resource**: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
-- **Custom parameters**: It is a user-defined parameter that is part of MapReduce, which will replace the content with ${variable} in the script.
-- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
-
-### JAVA/SCALA Program
-
-- **Program type**: select JAVA/SCALA program.
-- **The class of the main function**: is the full path of the Main Class, the entry point of the MapReduce program.
-- **Main jar package**: is the MapReduce jar package.
+- **Run flag**: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- **Worker grouping**: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- **Environment Name**: Configure the environment name in which to run the script.
+- **Times of failed retry attempts**: The number of times to resubmit the task after a failure.
+- **Failed retry interval**: The time interval (unit: minute) for resubmitting the task after a failure.
+- **Delayed execution time**: The time (unit: minute) by which the task execution is delayed.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- **Resource**: Refers to the list of resource files that are called in the script; upload or create these files in the Resource Center file management.
+- **Custom parameters**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+
+### JAVA or SCALA Program
+
+- **Program type**: Select JAVA or SCALA program.
+- **The class of the main function**: The **full path** of Main Class, the entry point of the MapReduce program.
+- **Main jar package**: The jar package of the MapReduce program.
 - **Task name** (optional): MapReduce task name.
-- **Command line parameters**: set the input parameters of the MapReduce program and support the substitution of custom parameter variables.
-- **Other parameters**: support -D, -files, -libjars, -archives format.
-- **Resource**: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- **User-defined parameter**: It is a user-defined parameter of the MapReduce part, which will replace the content with \${variable} in the script
+- **Command line parameters**: Set the input parameters of the MapReduce program and support the substitution of custom parameter variables.
+- **Other parameters**: Support `-D`, `-files`, `-libjars`, `-archives` format.
+- **Resource**: Specify resource files in `Resource` if they are referenced in other parameters.
+- **User-defined parameter**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
 
 ## Python Program
 
-- **Program type**: select Python language
-- **Main jar package**: is the Python jar package for running MR
-- **Other parameters**: support -D, -mapper, -reducer, -input -output format, here you can set the input of user-defined parameters, such as:
-- -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py –input /journey/words.txt -output /journey/out/mr/\${currentTimeMillis}
-- The mapper.py 1 after -mapper is two parameters, the first parameter is mapper.py, and the second parameter is 1
-- **Resource**: If the resource file is referenced in other parameters, you need to select and specify in the resource
-- **User-defined parameter**: It is a user-defined parameter of the MapReduce part, which will replace the content with \${variable} in the script
+- **Program type**: Select Python language.
+- **Main jar package**: The Python jar package for running MapReduce.
+- **Other parameters**: Support `-D`, `-mapper`, `-reducer`, `-input`, `-output` format, and you can set the input of user-defined parameters, such as:
+- `-mapper "mapper.py 1"` `-file mapper.py` `-reducer reducer.py` `-file reducer.py` `-input /journey/words.txt` `-output /journey/out/mr/${currentTimeMillis}`
+- The `mapper.py 1` after `-mapper` is two parameters: the first parameter is `mapper.py`, and the second parameter is `1`.
+- **Resource**: Specify resource files in `Resource` if they are referenced in other parameters.
+- **User-defined parameter**: It is a local user-defined parameter for MapReduce, and will replace the content with `${variable}` in the script.
 
 ## Task Example
 
 ### Execute the WordCount Program
 
-This example is a common introductory type of MapReduce application, which is designed to count the number of identical words in the input text.
+This example is a common introductory type of MapReduce application, which is used to count the number of identical words in the input text.
 
-#### Configure the MapReduce environment in DolphinScheduler
+#### Configure the MapReduce Environment in DolphinScheduler
 
 If you are using the MapReduce task type in a production environment, it is necessary to configure the required environment first. The configuration file is as follows: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
 
@@ -60,14 +60,14 @@ If you are using the MapReduce task type in a production environment, it is nece
 
 #### Upload the Main Package
 
-When using the MapReduce task node, you will need to use the Resource Centre to upload the jar package for the executable. Refer to the [resource centre](../resource.md).
+When using the MapReduce task node, you need to use the Resource Centre to upload the jar package for execution. Refer to the [resource centre](../resource.md).
 
-After configuring the Resource Centre, you can upload the required target files directly using drag and drop.
+After finishing the Resource Centre configuration, upload the required target files directly by dragging and dropping.
 
 ![resource_upload](/img/tasks/demo/upload_jar.png)
 
 #### Configure MapReduce Nodes
 
-Simply configure the required content according to the parameter descriptions above.
+Configure the required content according to the parameter descriptions above.
 
 ![demo-mr-simple](/img/tasks/demo/mr_task02.png)
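For the Python program parameters listed earlier, a minimal sketch of the underlying `hadoop jar` streaming call could look like this — the streaming jar path is an assumption, and `${currentTimeMillis}` is the built-in parameter DolphinScheduler substitutes before execution:

```shell
# Assumes HADOOP_HOME is configured in dolphinscheduler_env.sh and that
# mapper.py / reducer.py were uploaded through the Resource Center.
hadoop jar "${HADOOP_HOME}"/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -D mapreduce.job.name=wordcount \
  -mapper "mapper.py 1" -file mapper.py \
  -reducer reducer.py -file reducer.py \
  -input /journey/words.txt \
  -output /journey/out/mr/${currentTimeMillis}
```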
diff --git a/docs/en-us/dev/user_doc/guide/task/pigeon.md b/docs/en-us/dev/user_doc/guide/task/pigeon.md
index 9ec2430..a726f1a 100644
--- a/docs/en-us/dev/user_doc/guide/task/pigeon.md
+++ b/docs/en-us/dev/user_doc/guide/task/pigeon.md
@@ -1,19 +1,19 @@
 # Pigeon
 
-Pigeon is general websocket service tracking task for DolphinScheduler. It can trigger, check status, get log from remote websocket service.
+Pigeon is a task type used to trigger remote tasks and acquire their logs or status by calling a remote WebSocket service.
 
 ## Create
 
-Drag in the toolbar<img src="/img/pigeon.png" width="20"/>The task node to the drawing board to create a new Conditions task
+Drag from the toolbar <img src="/img/pigeon.png" width="20"/> to the canvas to create a new Pigeon task.
 
 ## Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Target task name: Pigeon websocket service name.
\ No newline at end of file
+- Run flag: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Times of failed retry attempts: The number of times to resubmit the task after a failure. You can select from the drop-down or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- Target task name: Target task name of this Pigeon node.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/task/python.md b/docs/en-us/dev/user_doc/guide/task/python.md
index 53c6f0b..56df104 100644
--- a/docs/en-us/dev/user_doc/guide/task/python.md
+++ b/docs/en-us/dev/user_doc/guide/task/python.md
@@ -1,15 +1,15 @@
-# Python
+# Python Node
 
-- Using python nodes, you can directly execute python scripts. For python nodes, workers will use `python **` to submit tasks.
+- Using python nodes, you can directly execute python scripts. For python nodes, workers use `python **` to submit tasks.
 
-> Drag in the toolbar![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png)The task node to the drawing board, as shown in the following figure:
+> Drag from the toolbar ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node to the canvas, as shown in the following figure:
 
 <p align="center">
    <img src="/img/python-en.png" width="80%" />
  </p>
 
-- Script: Python program developed by the user
-- Environment Name: Specific which Python interpreter would be use and run `Script`. If you want to use Python virtualenv, you should create multiply environments for each virtualenv.  
-- Resources: refers to the list of resource files that need to be called in the script
-- User-defined parameter: It is a local user-defined parameter of Python, which will replace the content with \${variable} in the script
-- Note: If you import the python file under the resource directory tree, you need to add the `__init__.py` file
+- Script: Python program developed by the user.
+- Environment Name: Specifies the Python interpreter path for running the script. If you need to use a Python **virtualenv**, you should create multiple environments, one for each **virtualenv**.
+- Resources: Refers to the list of resource files that need to be called in the script.
+- User-defined parameter: It is a user-defined local parameter of Python, and will replace the content with `${variable}` in the script.
+- Note: If you import the python file under the resource directory tree, you need to add the `__init__.py` file.
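As a rough illustration of how the `Environment Name` setting is used (the interpreter and script paths below are made up), the worker effectively runs the user's script with the configured interpreter:

```shell
# Hypothetical virtualenv interpreter selected via "Environment Name"; the
# Script content is written to a temporary .py file and executed with it.
/opt/venvs/report-env/bin/python /tmp/dolphinscheduler/exec/python_node.py
```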
diff --git a/docs/en-us/dev/user_doc/guide/task/shell.md b/docs/en-us/dev/user_doc/guide/task/shell.md
index 9e8a9fe..ef110f1 100644
--- a/docs/en-us/dev/user_doc/guide/task/shell.md
+++ b/docs/en-us/dev/user_doc/guide/task/shell.md
@@ -2,46 +2,42 @@
 
 ## Overview
 
-Shell task, used to create a shell-type task and execute a series of shell scripts. When the worker executed,
-a temporary shell script is generated, and the Linux user with the same name as the tenant executes the script.
+The shell task type is used to create a shell task and execute a series of shell scripts. When the worker runs a shell task, it generates a temporary shell script and executes it as the Linux user with the same name as the tenant.
 
 ## Create Task
 
-- Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag <img src="/img/tasks/icons/shell.png" width="15"/> from the toolbar to the drawing board.
+- Click Project Management -> Project Name -> Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
+- Drag from the toolbar <img src="/img/tasks/icons/shell.png" width="15"/> to the canvas.
 
 ## Task Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Environment Name: Configure the environment name in which to run the script.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- Script: SHELL program developed by users.
-- Resource: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
-- Custom parameters: It is a user-defined parameter that is part of SHELL, which will replace the content with \${variable} in the script.
+- Run flag: Identifies whether this node schedules normally; if it does not need to execute, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks execute in order of priority from high to low, and tasks with the same priority execute in first-in first-out order.
+- Worker grouping: Assign tasks to the machines of the worker group to execute. If `Default` is selected, randomly select a worker machine for execution.
+- Environment Name: Configure the environment name in which to run the script.
+- Times of failed retry attempts: The number of times to resubmit the task after a failure. You can select from the drop-down or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task run time exceeds the "timeout", an alarm email will be sent and the task execution will fail.
+- Script: Shell program developed by users.
+- Resource: Refers to the list of resource files that are called in the script; upload or create these files in the Resource Center file management.
+- Custom parameters: It is a user-defined local parameter of Shell, and will replace the content with `${variable}` in the script.
+- Predecessor task: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
 
 ## Task Example
 
 ### Simply Print
 
-This example is a sample echo task which only print one line in the log file, including the content
-"This is a demo of shell task". If your task only run one or two shell command, you could add task base on this example.
+This example simulates a common task that runs with a single command. It prints one line in the log file, as shown in the following figure:
+"This is a demo of shell task".
 
 ![demo-shell-simple](/img/tasks/demo/shell.jpg)
 
 ### Custom Parameters
 
-This example is a sample custom parameter task which could reuse existing as template, or for dynamic task. First of all,
-we should declare a custom parameter named "param_key", with the value as "param_val". Then we using keyword "${param_key}"
-to using the parameter we just declared. After this example is being run, we would see "param_val" print in the log
+This example simulates a custom parameter task. Parameters let us reuse an existing task as a template or cope with dynamic tasks. In this case,
+we declare a custom parameter named "param_key", with the value "param_val". Then we use `echo` to print the parameter "${param_key}" we just declared.
+After running this example, we would see "param_val" printed in the log.
 
 ![demo-shell-custom-param](/img/tasks/demo/shell_custom_param.jpg)
-
-## Notice
-
-None
\ No newline at end of file
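For reference, the Script field of the custom-parameter example above boils down to something like the sketch below — DolphinScheduler substitutes the declared custom parameter before the temporary script is executed by the tenant user, so the log shows "param_val":

```shell
#!/bin/bash
# ${param_key} is the custom parameter declared on the task; it is replaced
# with "param_val" before execution.
echo "${param_key}"
```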
diff --git a/docs/en-us/dev/user_doc/guide/task/spark.md b/docs/en-us/dev/user_doc/guide/task/spark.md
index bfd751f..efa599c 100644
--- a/docs/en-us/dev/user_doc/guide/task/spark.md
+++ b/docs/en-us/dev/user_doc/guide/task/spark.md
@@ -1,49 +1,49 @@
-# Spark
+# Spark Node
 
 ## Overview
 
-Spark task type for executing Spark programs. For Spark nodes, the worker submits the task by using the spark command `spark submit`. See [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit) for more details.
+The Spark task type is used to execute Spark programs. For Spark nodes, the worker submits the task by using the spark command `spark-submit`. See [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit) for more details.
 
 ## Create Task
 
 - Click Project Management -> Project Name -> Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag the <img src="/img/tasks/icons/spark.png" width="15"/> from the toolbar to the drawing board.
+- Drag from the toolbar <img src="/img/tasks/icons/spark.png" width="15"/> to the canvas.
 
 ## Task Parameter
 
 - **Node name**: The node name in a workflow definition is unique.
-- **Run flag**: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- **Descriptive information**: describe the function of the node.
-- **Task priority**: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- **Worker grouping**: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- **Environment Name**: Configure the environment name in which to run the script.
-- **Number of failed retry attempts**: The number of times the task failed to be resubmitted.
-- **Failed retry interval**: The time, in cents, interval for resubmitting the task after a failed task.
-- **Delayed execution time**: the time, in cents, that a task is delayed in execution.
-- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- **Program type**: supports Java, Scala and Python.
-- **Spark version**: support Spark1 and Spark2.
-- **The class of main function**: is the full path of Main Class, the entry point of the Spark program.
-- **Main jar package**: is the Spark jar package.
-- **Deployment mode**: support three modes of yarn-cluster, yarn-client and local. 
-- **Task name** (option): Spark task name.
-- **Driver core number**: This is used to set the number of Driver core, which can be set according to the actual production environment.
-- **Driver memory number**: This is used to set the number of Driver memories, which can be set according to the actual production environment.
-- **Number of Executor**: This is used to set the number of Executor, which can be set according to the actual production environment.
-- **Executor memory number**: This is used to set the number of Executor memories, which can be set according to the actual production environment.
-- **Main program parameters**: set the input parameters of the Spark program and support the substitution of custom parameter variables.
-- **Other parameters**: support `--jars`, `--files`,` --archives`, `--conf` format.
-- **Resource**: Refers to the list of resource files that need to be called in the script, and the files uploaded or created by the resource center-file management.
-- **Custom parameter**: It is a local user-defined parameter of Spark, which will replace the content with ${variable} in the script.
-- **Predecessor task**: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+- **Run flag**: Identifies whether this node can be scheduled normally. If it does not need to be executed, select `prohibition execution`.
+- **Descriptive information**: Describe the function of the node.
+- **Task priority**: When the number of worker threads is insufficient, tasks are executed in order of priority from high to low; tasks with the same priority are executed on a first-in first-out basis.
+- **Worker grouping**: Assign the task to the machines of the selected worker group. If `Default` is selected, a worker machine is chosen at random.
+- **Environment Name**: Configure the environment name in which to run the script.
+- **Times of failed retry attempts**: The number of times the task is resubmitted after a failure.
+- **Failed retry interval**: The time interval (in minutes) for resubmitting the task after a failure.
+- **Delayed execution time**: The time (in minutes) that the task execution is delayed.
+- **Timeout alarm**: Check the timeout alarm and timeout failure. When the task runtime exceeds the "timeout period", an alarm email is sent and the task execution fails.
+- **Program type**: Supports Java, Scala and Python.
+- **Spark version**: Supports Spark1 and Spark2.
+- **The class of main function**: The **full path** of the Main Class, the entry point of the Spark program.
+- **Main jar package**: The Spark jar package (uploaded via the Resource Center).
+- **Deployment mode**: Supports three deployment modes: yarn-cluster, yarn-client and local.
+- **Task name** (optional): Spark task name.
+- **Driver core number**: Set the number of Driver cores according to the actual production environment.
+- **Driver memory size**: Set the Driver memory size according to the actual production environment.
+- **Number of Executor**: Set the number of Executors according to the actual production environment.
+- **Executor memory size**: Set the Executor memory size according to the actual production environment.
+- **Main program parameters**: Set the input parameters of the Spark program; supports the substitution of custom parameter variables.
+- **Optional parameters**: Supports options in `--jars`, `--files`, `--archives` and `--conf` format (see the command sketch after this list).
+- **Resource**: Specify the resource files in `Resource` if the parameters refer to them.
+- **Custom parameter**: A local user-defined parameter for Spark that replaces `${variable}` in the script.
+- **Predecessor task**: Selecting a predecessor task for the current task sets it as upstream of the current task.
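+
+To make the mapping between these fields and Spark concrete, here is a rough sketch of the kind of `spark-submit` command the worker could end up generating; every class name, path and value below is a placeholder, not part of the official documentation:
+
+```shell
+# Hypothetical command assembled from the node parameters above (yarn-cluster mode).
+spark-submit \
+  --master yarn \
+  --deploy-mode cluster \
+  --class org.example.WordCount \
+  --driver-cores 1 \
+  --driver-memory 512M \
+  --num-executors 2 \
+  --executor-memory 2G \
+  --conf spark.executor.memoryOverhead=512m \
+  hdfs:///tmp/wordcount.jar /tmp/input.txt
+```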
 
 ## Task Example
 
 ### Execute the WordCount Program
 
-This is a common introductory case in the Big Data ecosystem, which often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text.
+This is a common introductory case in the big data ecosystem, which is often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of identical words in the input text. (Spark's official releases also ship this example job.)
 
-#### Configure the Spark environment in DolphinScheduler
+#### Configure the Spark Environment in DolphinScheduler
 
 If you are using the Spark task type in a production environment, it is necessary to configure the required environment first. The configuration file is as follows: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
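+
+A minimal sketch of the relevant entries in `dolphinscheduler_env.sh`, assuming Spark is installed under `/opt/soft` (adjust the paths to your own environment):
+
+```shell
+# Hypothetical Spark entries for dolphinscheduler_env.sh; Spark1/Spark2 in the
+# node's "Spark version" option resolve to these two homes.
+export SPARK_HOME1=/opt/soft/spark1
+export SPARK_HOME2=/opt/soft/spark2
+export PATH=$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PATH
+```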
 
@@ -51,18 +51,18 @@ If you are using the Spark task type in a production environment, it is necessar
 
 #### Upload the Main Package
 
-When using the Spark task node, you will need to use the Resource Center to upload the jar package for the executable. Refer to the [resource center](../resource.md).
+When using the Spark task node, you need to upload the jar package to the Resource Center for execution; refer to the [resource center](../resource.md).
 
-After configuring the Resource Center, you can upload the required target files directly using drag and drop.
+After finishing the Resource Center configuration, upload the required target files directly by dragging and dropping.
 
 ![resource_upload](/img/tasks/demo/upload_jar.png)
 
 #### Configure Spark Nodes
 
-Simply configure the required content according to the parameter descriptions above.
+Configure the required content according to the parameter descriptions above.
 
 ![demo-spark-simple](/img/tasks/demo/spark_task02.png)
 
 ## Notice
 
-JAVA and Scala are only used for identification, there is no difference, if it is Spark developed by Python, there is no class of the main function, the others are the same.
+JAVA and Scala are only used for identification, and there is no difference between them. If you use Python to develop the Spark application, there is no class of the main function and the rest is the same.
diff --git a/docs/en-us/dev/user_doc/guide/task/sql.md b/docs/en-us/dev/user_doc/guide/task/sql.md
index 0b6f51c..f52395e 100644
--- a/docs/en-us/dev/user_doc/guide/task/sql.md
+++ b/docs/en-us/dev/user_doc/guide/task/sql.md
@@ -2,42 +2,42 @@
 
 ## Overview
 
-SQL task, used to connect to database and execute SQL.
+The SQL task type is used to connect to databases and execute SQL.
 
-## Create Data Source
+## Create DataSource
 
-Refer to [Data Source](../datasource/introduction.md)
+Refer to [DataSource](../datasource/introduction.md)
 
 ## Create Task
 
 - Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag <img src="/img/tasks/icons/sql.png" width="25"/> from the toolbar to the drawing board.
+- Drag the <img src="/img/tasks/icons/sql.png" width="25"/> from the toolbar to the canvas.
 
 ## Task Parameter
 
-- Data source: select the corresponding data source
-- sql type: supports query and non-query. The query is a select type query, which is returned with a result set. You can specify three templates for email notification as form, attachment or form attachment. Non-queries are returned without a result set, and are for three types of operations: update, delete, and insert.
-- sql parameter: the input parameter format is key1=value1;key2=value2...
-- sql statement: SQL statement
-- UDF function: For data sources of type HIVE, you can refer to UDF functions created in the resource center. UDF functions are not supported for other types of data sources.
-- Custom parameters: SQL task type, and stored procedure is a custom parameter order to set values for the method. The custom parameter type and data type are the same as the stored procedure task type. The difference is that the SQL task type custom parameter will replace the ${variable} in the SQL statement.
-- Pre-sql: Pre-sql is executed before the sql statement.
-- Post-sql: Post-sql is executed after the sql statement.
+- Data source: Select the corresponding DataSource.
+- SQL type: Supports query and non-query. A query is a `select` statement that returns a result set; you can specify one of three templates for the email notification: form, attachment or form attachment. A non-query returns no result set and covers three types of operations: update, delete and insert.
+- SQL parameter: The input parameter format is `key1=value1;key2=value2...`.
+- SQL statement: SQL statement.
+- UDF function: For Hive DataSources, you can refer to UDF functions created in the resource center; other DataSources do not support UDF functions.
+- Custom parameters: The custom parameter types and data types are the same as those of the stored procedure task type. The difference is that the custom parameters of the SQL task type replace `${variable}` in the SQL statement.
+- Pre-SQL: Pre-SQL executes before the SQL statement.
+- Post-SQL: Post-SQL executes after the SQL statement.
 
 ## Task Example
 
 ### Create a Temporary Table in Hive and Write Data
 
-This example creates a temporary table `tmp_hello_world` in hive and write a row of data. Before creating a temporary table, we need to ensure that the table does not exist, so we will use custom parameters to obtain the time of the day as the suffix of the table name every time we run, so that this task can run every day. The format of the created table name is: `tmp_hello_world_{yyyyMMdd}`.
+This example creates a temporary table `tmp_hello_world` in Hive and writes a row of data into it. Before creating the temporary table, we need to ensure that the table does not exist, so we use a custom parameter to obtain the current date as the table name suffix on every run; this way the task can run on any day. The format of the created table name is: `tmp_hello_world_{yyyyMMdd}`.
 
 ![hive-sql](/img/tasks/demo/hive-sql.png)
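+
+For reference, a rough shell sketch of equivalent SQL run directly with the `hive` CLI; in the actual task the date suffix comes from the custom parameter instead of being hard-coded, and the table columns are only illustrative:
+
+```shell
+# Hypothetical equivalent of the task's SQL, with the run date hard-coded.
+hive -e "CREATE TABLE IF NOT EXISTS tmp_hello_world_20220315 (id INT, name STRING);
+INSERT INTO tmp_hello_world_20220315 VALUES (1, 'hello world');"
+```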
 
 ### After Running the Task Successfully, Query the Results in Hive
 
-Log in to the bigdata cluster and use 'hive' command or 'beeline' or 'JDBC' and other methods to connect to the 'Apache Hive' for the query. The query SQL is `select * from tmp_hello_world_{yyyyMMdd}`, please replace '{yyyyMMdd}' with the date of the running day. The query screenshot is as follows:
+Log in to the big data cluster and connect to Apache Hive with the `hive` command, `beeline`, JDBC or other methods to run the query. The query SQL is `select * from tmp_hello_world_{yyyyMMdd}`; please replace `{yyyyMMdd}` with the date of the running day. The query screenshot is as follows:
 
 ![hive-sql](/img/tasks/demo/hive-result.png)
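+
+A hedged sketch of the same check from the command line with `beeline`, assuming a HiveServer2 endpoint at `localhost:10000`; replace the JDBC URL and the date suffix with your own:
+
+```shell
+# Hypothetical beeline invocation to verify the result of the task above.
+beeline -u jdbc:hive2://localhost:10000/default \
+        -e "SELECT * FROM tmp_hello_world_20220315;"
+```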
 
 ## Notice
 
-Pay attention to the selection of SQL type. If it is an insert operation, you need to select "Non Query" type.
\ No newline at end of file
+Pay attention to the selection of the SQL type. For an insert operation, you need to select the "Non-Query" type.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/task/stored-procedure.md b/docs/en-us/dev/user_doc/guide/task/stored-procedure.md
index 92bcc80..ea2ed48 100644
--- a/docs/en-us/dev/user_doc/guide/task/stored-procedure.md
+++ b/docs/en-us/dev/user_doc/guide/task/stored-procedure.md
@@ -1,13 +1,13 @@
 # Stored Procedure
 
-- According to the selected data source, execute the stored procedure.
+- Execute the stored procedure according to the selected DataSource.
 
-> Drag in the toolbar![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png)The task node to the drawing board, as shown in the following figure:
+> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node from the toolbar into the canvas, as shown in the figure below:
 
 <p align="center">
    <img src="/img/procedure-en.png" width="80%" />
  </p>
 
-- Data source: The data source type of the stored procedure supports MySQL and POSTGRESQL, select the corresponding data source
-- Method: is the method name of the stored procedure
-- Custom parameters: The custom parameter types of the stored procedure support IN and OUT, and the data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and BOOLEAN
\ No newline at end of file
+- DataSource: The DataSource type of the stored procedure supports MySQL and POSTGRESQL; select the corresponding DataSource.
+- Method: The method name of the stored procedure.
+- Custom parameters: The custom parameter types of the stored procedure support `IN` and `OUT`, and the data types support: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN.
\ No newline at end of file
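+
+For reference, a minimal sketch of a MySQL procedure that such a task could call; the database `demo_db`, the table `users` and the procedure name are hypothetical:
+
+```shell
+# Hypothetical procedure: its name "get_user_count" would go into the "Method"
+# field, and "total" would be declared as an OUT custom parameter of type INTEGER.
+mysql -u root -p demo_db -e "
+CREATE PROCEDURE get_user_count(OUT total INTEGER)
+  SELECT COUNT(*) INTO total FROM users;"
+```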
diff --git a/docs/en-us/dev/user_doc/guide/task/sub-process.md b/docs/en-us/dev/user_doc/guide/task/sub-process.md
index 86c0f7d..c1d3b2f 100644
--- a/docs/en-us/dev/user_doc/guide/task/sub-process.md
+++ b/docs/en-us/dev/user_doc/guide/task/sub-process.md
@@ -1,46 +1,46 @@
-# SubProcess
+# SubProcess Node
 
 ## Overview
 
-The sub-process node is to execute a certain external workflow definition as a task node.
+The sub-process node executes an external workflow definition as a task node.
 
 ## Create Task
 
-- Click Project Management-Project Name-Workflow Definition, and click the "Create Workflow" button to enter the DAG editing page.
-- Drag <img src="/img/tasks/icons/sub_process.png" width="15"/> from the toolbar to the drawing board.
+- Click `Project Management -> Project Name -> Workflow Definition`, and click the `Create Workflow` button to enter the DAG editing page.
+- Drag the <img src="/img/tasks/icons/sub_process.png" width="15"/> task node from the toolbar to the canvas to create a new SubProcess task.
 
 ## Task Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Environment Name: Configure the environment name in which to run the script.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
+- Run flag: Identifies whether this node can be scheduled normally.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks are executed in order of priority from high to low; tasks with the same priority are executed on a first-in first-out basis.
+- Worker grouping: Assign the task to the machines of the selected worker group. If `Default` is selected, a worker machine is chosen at random.
+- Environment Name: Configure the environment name in which to run the script.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runtime exceeds the "timeout period", an alarm email is sent and the task execution fails.
 - Sub-node: It is the workflow definition of the selected sub-process. Enter the sub-node in the upper right corner to jump to the workflow definition of the selected sub-process.
-- Predecessor task: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.
+- Predecessor task: Selecting a predecessor task for the current task sets it as upstream of the current task.
 
 ## Task Example
 
-This example simulates a common task type, here we use a child node task to recall the [Shell](shell.md) to print out "hello world". This means that a shell task is executed as a child node.
+This example simulates a common task type. Here we use a sub-node task to call the [Shell](shell.md) task and print out "hello", which means executing a shell task as a child node.
 
-### Create Shell task
+### Create a Shell task
 
-Create a shell task to print "hello". And define the workflow for this as test_dag01.
+Create a shell task to print "hello" and define the workflow as `test_dag01`.
 
 ![subprocess_task01](/img/tasks/demo/subprocess_task01.png)
 
-## Create Sub_process task
+### Create the Sub_process task
 
-To use the sub_process, you need to create the  sub-node task, which is the shell task we created in the first step. Then, as shown in the diagram below, select the corresponding sub-node in the ⑤ position.
+To use the sub_process, you need to create the sub-node task, which is the shell task we created in the first step. After that, as shown in the diagram below, select the corresponding sub-node in position ⑤.
 
 ![subprocess_task02](/img/tasks/demo/subprocess_task02.png)
 
-After creating the sub_process is complete, create a corresponding shell task for printing "world" and link the two together.Save the current workflow and run it to get the desired result.
+After creating the sub_process, create a corresponding shell task for printing "world" and link both together. Save the current workflow and run it to get the expected result.
 
 ![subprocess_task03](/img/tasks/demo/subprocess_task03.png)
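+
+The shell commands behind the two tasks in this example are as simple as the following sketch:
+
+```shell
+# Shell task inside the sub-workflow test_dag01, referenced by the sub_process node
+echo "hello"
+
+# Shell task in the parent workflow, linked downstream of the sub_process node
+echo "world"
+```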
 
 ## Notice
 
-When using sub_process to recall a sub-node task, you need to ensure that the defined sub-node is online, otherwise the sub_process workflow will not work properly.
+When using `sub_process` to invoke a sub-node task, you need to ensure that the defined sub-node is in online status, otherwise the sub_process workflow will not work properly.
diff --git a/docs/en-us/dev/user_doc/guide/task/switch.md b/docs/en-us/dev/user_doc/guide/task/switch.md
index 7dc71d5..3b7487e 100644
--- a/docs/en-us/dev/user_doc/guide/task/switch.md
+++ b/docs/en-us/dev/user_doc/guide/task/switch.md
@@ -1,37 +1,39 @@
 # Switch
 
-Switch is a conditional judgment node, which branch should be executes according to the value of [global variable](../parameter/global.md) and the expression result written by the user.
+The switch is a conditional judgment node that decides which branch to execute according to the value of the [global variable](../parameter/global.md) and the result of the expression written by the user.
 
 ## Create
 
-Drag the <img src="/img/switch.png" width="20"/> in the tool bar to create task. **Note** After the switch task is created, you must configure it downstream to make parameter `Branch flow` work.
+Drag the <img src="/img/switch.png" width="20"/> task node from the toolbar to the canvas to create a task.
+**Note**: After creating a switch task, you must first configure its upstream and downstream tasks, and then configure the parameters of the task branches.
 
 ## Parameter
 
 - Node name: The node name in a workflow definition is unique.
-- Run flag: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.
-- Descriptive information: describe the function of the node.
-- Task priority: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.
-- Worker grouping: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.
-- Number of failed retry attempts: The number of times the task failed to be resubmitted. It supports drop-down and hand-filling.
-- Failed retry interval: The time interval for resubmitting the task after a failed task. It supports drop-down and hand-filling.
-- Timeout alarm: Check the timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email will be sent and the task execution will fail.
-- condition: You can configure multiple conditions for the switch task. When the conditions are true, the configured branch will be executed. You can configure multiple different conditions to satisfy different businesses.
-- Branch flow: The default branch flow, when all the conditions are false, it will execute this branch flow.
+- Run flag: Identifies whether this node can be scheduled normally. If it does not need to be executed, select `prohibition execution`.
+- Descriptive information: Describe the function of the node.
+- Task priority: When the number of worker threads is insufficient, tasks are executed in order of priority from high to low; tasks with the same priority are executed on a first-in first-out basis.
+- Worker grouping: Assign the task to the machines of the selected worker group. If `Default` is selected, a worker machine is chosen at random.
+- Times of failed retry attempts: The number of times the task is resubmitted after a failure. You can select from the drop-down menu or fill in a number.
+- Failed retry interval: The time interval for resubmitting the task after a failure. You can select from the drop-down menu or fill in a number.
+- Timeout alarm: Check the timeout alarm and timeout failure. When the task runtime exceeds the "timeout period", an alarm email is sent and the task execution fails.
+- Condition: You can configure multiple conditions for the switch task. When a condition is satisfied, the configured branch is executed. Multiple different conditions can be configured to satisfy different business cases.
+- Branch flow: The default branch flow; when none of the conditions is satisfied, this branch flow is executed.
 
 ## Detail
 
-Here we have three tasks, the dependencies are `A -> B -> [C, D]`, and task_a is a shell task and task_b is a switch task
+Here the task dependencies are `A -> B -> [C, D]`, where `task_a` is a shell task and `task_b` is a switch task
 
 - In task A, a global variable named `id` is defined through [global variable](../parameter/global.md), and the declaration method is `${setValue(id=1)}`
-- Task B adds conditions and uses global variables declared upstream to achieve conditional judgment (note that global variables must exist when the switch is running, which means that switch task can use global variables that are not directly upstream). We want workflow execute task C when id = 1 else run task D
-  - Configure task C to run when the global variable `id=1`. Then edit `${id} == 1` in the condition of task B, select `C` as branch flow
+- Task B adds conditions and uses the global variable declared upstream to achieve conditional judgment (Note: the switch task can read a global variable as long as a direct or indirect upstream task has assigned it before the switch task runs). We want to execute task C when `id = 1`, and otherwise run task D (see the sketch after this list)
+  - Configure task C to run when the global variable `id=1`: edit `${id} == 1` in the condition of task B and select `C` as the branch flow
   - For other tasks, select `D` as branch flow
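+
+A minimal shell sketch of what `task_a` could run to declare the global variable used above (the variable name `id` and its value follow this example):
+
+```shell
+# Assigns the global variable `id` through the setValue syntax, so the
+# downstream switch task can evaluate the condition `${id} == 1`.
+echo '${setValue(id=1)}'
+```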
 
-Switch task configuration is as follows
+Switch task configuration is as follows:
 
 ![task-switch-configure](../../../../../../img/switch_configure.jpg)
 
 ## Related Task
 
-[condition](conditions.md):[Condition](conditions.md)task mainly executes the corresponding branch based on the execution status (success, failure) of the upstream node. The [Switch](switch.md) task mainly executes the corresponding branch based on the value of the [global parameter](../parameter/global.md) and the judgment expression result written by the user.
\ No newline at end of file
+[Condition](conditions.md): The [Condition](conditions.md) task mainly executes the corresponding branch based on the execution result status (success or failure) of the upstream node.
+The [Switch](switch.md) task mainly executes the corresponding branch based on the value of the [global parameter](../parameter/global.md) and the judgment expression result written by the user.
\ No newline at end of file
diff --git a/docs/zh-cn/dev/user_doc/guide/task/datax.md b/docs/zh-cn/dev/user_doc/guide/task/datax.md
index f045517..5c0501a 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/datax.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/datax.md
@@ -44,20 +44,16 @@ DataX 任务类型,用于执行 DataX 程序。对于 DataX 节点,worker 
 
 ![datax_task01](/img/tasks/demo/datax_task01.png)
 
-当环境配置完成之后,需要重启 DolphinScheduler。
-
-### 配置 DataX 任务节点
-
-由于默认的的数据源中并不包含从 Hive 中读取数据,所以需要自定义 json,可参考:[HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md)。其中需要注意的是 HDFS 路径上存在分区目录,在实际情况导入数据时,分区建议进行传参,即使用自定义参数。
-
-在编写好所需的 json 之后,可按照下图步骤进行配置节点内容。 
-
-![datax_task02](/img/tasks/demo/datax_task02.png)
-
-### 查看运行结果
-
-![datax_task03](/img/tasks/demo/datax_task03.png)
-
-## 注意事项:
-
-若默认提供的数据源不满足需求,可在自定义模板选项中,根据实际使用环境来配置 DataX 的 writer 和 reader,可参考:https://github.com/alibaba/DataX
+  <p align="center">
+   <img src="/img/datax_edit.png" width="80%" />
+  </p>
+
+- 自定义模板:打开自定义模板开关时,可以自定义datax节点的json配置文件内容(适用于控件配置不满足需求时)
+- 数据源:选择抽取数据的数据源
+- sql语句:目标库抽取数据的sql语句,节点执行时自动解析sql查询列名,映射为目标表同步列名,源表和目标表列名不一致时,可以通过列别名(as)转换
+- 目标库:选择数据同步的目标库
+- 目标表:数据同步的目标表名
+- 前置sql:前置sql在sql语句之前执行(目标库执行)。
+- 后置sql:后置sql在sql语句之后执行(目标库执行)。
+- json:datax同步的json配置文件
+- 自定义参数:SQL任务类型,而存储过程是自定义参数顺序,给方法设置值自定义参数类型和数据类型,同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。
\ No newline at end of file
diff --git a/docs/zh-cn/dev/user_doc/guide/task/flink.md b/docs/zh-cn/dev/user_doc/guide/task/flink.md
index f441a30..4e41516 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/flink.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/flink.md
@@ -2,7 +2,7 @@
 
 ## 综述
 
-Flink 任务类型,用于执行 Flink 程序。对于 Flink 节点,worker 会通过使用 flink 命令 `flink run` 的方式提交任务。更多详情查看 [flink cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/)。
+Flink 任务类型,用于执行 Flink 程序。对于 Flink 节点,worker 会通过使用 flink 命令 `flink run` 的方式提交任务。更多详情查看 [flink cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/)。
 
 ## 创建任务
 
@@ -18,8 +18,8 @@ Flink 任务类型,用于执行 Flink 程序。对于 Flink 节点,worker 
 - Worker 分组:任务分配给 worker 组的机器执行,选择 Default,会随机选择一台 worker 机执行。
 - 环境名称:配置运行脚本的环境。
 - 失败重试次数:任务失败重新提交的次数。
-- 失败重试间隔:任务失败重新提交任务的时间间隔,以分为单位。
-- 延迟执行时间:任务延迟执行的时间,以分为单位。
+- 失败重试间隔:任务失败重新提交任务的时间间隔,以分钟为单位。
+- 延迟执行时间:任务延迟执行的时间,以分钟为单位。
 - 超时告警:勾选超时告警、超时失败,当任务超过"超时时长"后,会发送告警邮件并且任务执行失败。
 - 程序类型:支持 Java、Scala 和 Python 三种语言。
 - 主函数的 Class:Flink 程序的入口 Main Class 的**全路径**。
diff --git a/docs/zh-cn/dev/user_doc/guide/task/pigeon.md b/docs/zh-cn/dev/user_doc/guide/task/pigeon.md
index 0caf336..ec7d32f 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/pigeon.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/pigeon.md
@@ -1,6 +1,6 @@
 # Pigeon
 
-Pigeon任务类型与通过调用远程websocket服务,实现远程任务的触发,状态、日志的获取,是 DolphinScheduler 通用远程 websocket 服务调用任务
+Pigeon任务类型是通过调用远程websocket服务,实现远程任务的触发,状态、日志的获取,是 DolphinScheduler 通用远程 websocket 服务调用任务
 
 ## 创建任务
 
diff --git a/docs/zh-cn/dev/user_doc/guide/task/spark.md b/docs/zh-cn/dev/user_doc/guide/task/spark.md
index ad56857..442eb8b 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/spark.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/spark.md
@@ -2,7 +2,7 @@
 
 ## 综述
 
-Spark  任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通关使用 spark 命令 `spark submit` 方式提交任务。更多详情查看 [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit)。
+Spark  任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通过使用 spark 命令 `spark submit` 方式提交任务。更多详情查看 [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit)。
 
 ## 创建任务
 
diff --git a/docs/zh-cn/dev/user_doc/guide/task/sql.md b/docs/zh-cn/dev/user_doc/guide/task/sql.md
index 48e8cb4..0c1c673 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/sql.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/sql.md
@@ -20,7 +20,7 @@ SQL任务类型,用于连接数据库并执行相应SQL。
 - sql参数:输入参数格式为key1=value1;key2=value2…
 - sql语句:SQL语句
 - UDF函数:对于HIVE类型的数据源,可以引用资源中心中创建的UDF函数,其他类型的数据源暂不支持UDF函数。
-- 自定义参数:SQL任务类型,而存储过程是自定义参数顺序的给方法设置值自定义参数类型和数据类型同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。
+- 自定义参数:SQL任务类型,而存储过程是自定义参数顺序,给方法设置值自定义参数类型和数据类型,同存储过程任务类型一样。区别在于SQL任务类型自定义参数会替换sql语句中${变量}。
 - 前置sql:前置sql在sql语句之前执行。
 - 后置sql:后置sql在sql语句之后执行。
 
diff --git a/docs/zh-cn/dev/user_doc/guide/task/switch.md b/docs/zh-cn/dev/user_doc/guide/task/switch.md
index e4e315b..52dfe2a 100644
--- a/docs/zh-cn/dev/user_doc/guide/task/switch.md
+++ b/docs/zh-cn/dev/user_doc/guide/task/switch.md
@@ -24,7 +24,7 @@ Switch是一个条件判断节点,依据[全局变量](../parameter/global.md)
 假设我们三个任务,其依赖关系是 `A -> B -> [C, D]` 其中task_a是shell任务,task_b是switch任务
 
 - 任务A中通过[全局变量](../parameter/global.md)定义了名为`id`的全局变量,声明方式为`${setValue(id=1)}`
-- 任务B增加条件,使用上游声明的全局变量实现条件判断(注意switch运行时存在的全局变量就行,意味着可以是非直接上游产生的全局变量)。下面我们想要实现当id为1时,运行任务C,其他运行任务D
+- 任务B增加条件,使用上游声明的全局变量实现条件判断(注意:只要直接、非直接上游在switch运行前对全局变量赋值,switch运行时就可以获取该全局变量)。下面我们想要实现当id为1时,运行任务C,其他运行任务D
   - 配置当全局变量`id=1`时,运行任务C。则在任务B的条件中编辑`${id} == 1`,分支流转选择`C`
   - 对于其他任务,在分支流转中选择`D`