Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2022/05/30 03:00:23 UTC

[GitHub] [dolphinscheduler] zhongjiajie commented on a diff in pull request #10227: [Docment][Feature] Refactor context parameter docment

zhongjiajie commented on code in PR #10227:
URL: https://github.com/apache/dolphinscheduler/pull/10227#discussion_r884383216


##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.

Review Comment:
   ```suggestion
    When the SHELL node is defined and the log detects content in the format `${setValue(output=1)}`, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
   ```
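
A minimal sketch of what the Node_A script described in this hunk could look like; the actual script appears only in the screenshot (context_parameter01.png), so the exact content is an assumption:

```shell
#!/bin/bash

# Print the IN parameter named "value" (66 in the custom parameters);
# DolphinScheduler substitutes ${value} in the script text before the
# task runs, so this is assumed to print "value is 66".
echo "value is ${value}"

# Emit the setValue marker. DolphinScheduler scans the task log for this
# pattern and assigns 1 to the OUT parameter named "output". The single
# quotes keep the shell itself from expanding the expression.
echo '${setValue(output=1)}'
```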



##########
docs/docs/zh/guide/parameter/context.md:
##########
@@ -20,50 +16,63 @@ DolphinScheduler allows parameter transfer between tasks; currently the transfer direction only supports
 
 When defining an upstream node, if the result of that node needs to be passed to a downstream node that depends on it, set a variable whose direction is OUT in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support passing parameters downstream for the SQL and SHELL nodes.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Task Example
+
+This example shows how to use the parameter passing function: a SHELL task creates a local parameter, assigns it a value, and passes it downstream, and a SQL task completes a query by obtaining the parameter from the upstream task.
+
+#### 1. Create a SHELL task and set parameters

Review Comment:
   ```suggestion
    #### Create a SHELL task and set parameters
   ```



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters

Review Comment:
   ```suggestion
   #### Create a SHELL task and set parameters
   ```



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+Create the Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-Let's take the SQL node workflow in the picture above as an example:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-The following defines the [createParam1] node in the above figure:
+#### 2. Create a SQL task and use the parameters
 
-![png05](/img/globalParam/image-20210723104957031.png)
+After the SHELL task is completed, we can use the output passed from upstream as the query condition for the SQL task. The queried id is renamed to ID and is output as a parameter.
 
-The following defines the [createParam2] node:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-![png06](/img/globalParam/image-20210723105026924.png)
+> Note: If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+>
+> If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
 
-Find the value of the variable on the [Workflow Instance] page for the corresponding node instance.
+#### 3. Save the workflow and set the global parameters
 
-The following shows the node instance [createparam1]:
+Click the save workflow icon and set the global parameters output and value.
 
-![png07](/img/globalParam/image-20210723105131381.png)
+![context-parameter04](/img/new_ui/dev/parameter/context_parameter04.png)
 
-Here, the value of "id" is 12.
+#### 4. View results

Review Comment:
   ```suggestion
   #### View results
   ```
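
Likewise, a plausible sketch of the Node_B test script mentioned in this hunk; the real script is only visible in context_parameter02.png, so treat the exact content as an assumption:

```shell
#!/bin/bash

# Node_A exported output=1 through the parameter context, so
# DolphinScheduler substitutes ${output} here before the script runs;
# this is assumed to print "output from Node_A is 1".
echo "output from Node_A is ${output}"
```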



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+Create the Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-Let's take the SQL node workflow in the picture above as an example:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-The following defines the [createParam1] node in the above figure:
+#### 2. Create a SQL task and use the parameters
 
-![png05](/img/globalParam/image-20210723104957031.png)
+After the SHELL task is completed, we can use the output passed from upstream as the query condition for the SQL task. The queried id is renamed to ID and is output as a parameter.
 
-The following defines the [createParam2] node:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-![png06](/img/globalParam/image-20210723105026924.png)
+> Note: If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+>
+> If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
 
-Find the value of the variable on the [Workflow Instance] page for the corresponding node instance.
+#### 3. Save the workflow and set the global parameters
 
-The following shows the node instance [createparam1]:
+Click the save workflow icon and set the global parameters output and value.
 
-![png07](/img/globalParam/image-20210723105131381.png)
+![context-parameter04](/img/new_ui/dev/parameter/context_parameter04.png)
 
-Here, the value of "id" is 12.
+#### 4. View results
 
-Let's see the case of the node instance [createparam2].
+After the workflow is created, run it online and view its results.
 
-![png08](/img/globalParam/image-20210723105255850.png)
+The result of Node_A is as follows:
 
-There is only the "id" value. Although the user-defined SQL queries both the "id" and "database_name" fields, only the `OUT` parameter `id` is set, because only the parameter "id" is defined for output. The length of the result list is 10 for display reasons.
+![context-log01](/img/new_ui/dev/parameter/context_log01.png)
 
-### SHELL
+The result of Node_B is as follows:
 
-`prop` is user-specified and the direction is `OUT`. The output is defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
-To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+![context-log02](/img/new_ui/dev/parameter/context_log02.png)
 
-For example, in the figure below:
+The result of Node_mysql is as follows:
 
-![png09](/img/globalParam/image-20210723101242216.png)
+![context-log03](/img/new_ui/dev/parameter/context_log03.png)
 
-When the log detects the `${setValue(key=value1)}` format in the shell node definition, it will assign value1 to the key, and downstream nodes can use the variable key directly. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to see the value of the variable.
+Even though output is assigned a value of 1 in Node_A's script, the log still shows a value of 100. But according to the principle of [parameter priority](priority.md): `Local Parameter > Parameter Context > Global Parameter`, the value of output in Node_B is 1. This proves that the output parameter is passed through the workflow as expected, and the query in Node_mysql is completed using this value.

Review Comment:
   ```suggestion
    Even though output is assigned a value of 1 in Node_A's script, the log still shows a value of 100. But according to the principle from [parameter priority](priority.md): `Local Parameter > Parameter Context > Global Parameter`, the value of output in Node_B is 1. This proves that the output parameter is passed through the workflow as expected, and the query in Node_mysql is completed using this value.
   ```
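
The SQL of the Node_mysql task is not reproduced in this thread, but based on the description ("the id of the query is renamed to ID"), it would be something along the lines of the statement below, shown here as a standalone check with the mysql client after ${output} has been substituted with 1. The table name tb_test, database name test, and connection details are hypothetical:

```shell
#!/bin/bash

# Hypothetical standalone equivalent of the Node_mysql task: the SQL node
# would run "SELECT id AS ID FROM tb_test WHERE id = ${output}", and
# DolphinScheduler substitutes ${output} (here: 1) before executing it.
mysql -h 127.0.0.1 -u root -p -D test \
  -e "SELECT id AS ID FROM tb_test WHERE id = 1;"
```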



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+Create the Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-Let's take the SQL node workflow in the picture above as an example:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-The following defines the [createParam1] node in the above figure:
+#### 2. Create a SQL task and use the parameters
 
-![png05](/img/globalParam/image-20210723104957031.png)
+After the SHELL task is completed, we can use the output passed from upstream as the query condition for the SQL task. The queried id is renamed to ID and is output as a parameter.
 
-The following defines the [createParam2] node:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-![png06](/img/globalParam/image-20210723105026924.png)
+> Note: If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+>
+> If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
 
-Find the value of the variable on the [Workflow Instance] page for the corresponding node instance.
+#### 3. Save the workflow and set the global parameters

Review Comment:
   ```suggestion
   #### Save the workflow and set the global parameters
   ```
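
To make the note about multi-row results concrete, here is a small illustration of the conversion it describes; the query and the returned values are made up:

```shell
# Hypothetical multi-row case: suppose the SQL node runs
#   SELECT id AS ID FROM tb_test;
# and the ID column comes back as 1, 2, 3. With an OUT parameter named
# "ID" of type LIST, DolphinScheduler converts the column to LIST<VARCHAR>
# and serializes it to JSON, so ${ID} in a downstream task would expand to:
echo '["1","2","3"]'
```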



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.

Review Comment:
   ```suggestion
    To pass a parameter, the shell script must output a statement in the format `'${setValue(key=value)}'`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
   ```



##########
docs/docs/zh/guide/parameter/context.md:
##########
@@ -20,50 +16,63 @@ DolphinScheduler allows parameter transfer between tasks; currently the transfer direction only supports
 
 When defining an upstream node, if the result of that node needs to be passed to a downstream node that depends on it, set a variable whose direction is OUT in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support passing parameters downstream for the SQL and SHELL nodes.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Task Example
+
+This example shows how to use the parameter passing function: a SHELL task creates a local parameter, assigns it a value, and passes it downstream, and a SQL task completes a query by obtaining the parameter from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the SHELL script must output a statement in the format ${setValue(key=value)}, where key is the prop of the corresponding parameter and value is the value of that parameter.
 
-prop is user-specified; the direction is set to OUT, and only when the direction is OUT will it be defined as an output variable; the data type can be chosen from different data structures as needed; the value part does not need to be filled in.
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
 
-If the result of the SQL node is a single row with one or more fields, the name of the prop needs to match the field name. The data type can be any type except LIST. The variable takes the value of the column in the SQL query result whose name matches the variable name.
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
 
-If the result of the SQL node is multiple rows with one or more fields, the name of the prop needs to match the field name. Choose LIST as the data type. After the SQL query result is obtained, the corresponding column is converted to LIST<VARCHAR> and then to JSON as the value of the variable.
+Parameter description:
 
-Let's take the workflow containing the SQL node in the figure above as an example:
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-The node [createParam1] in the figure above is defined as follows:
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, 1 is assigned to output, and downstream nodes can directly use the value of the variable output. Likewise, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-<img src="/img/globalParam/image-20210723104957031.png" alt="image-20210723104957031" style="zoom:50%;" />
+Create a Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-The node [createParam2] is defined as follows:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-<img src="/img/globalParam/image-20210723105026924.png" alt="image-20210723105026924" style="zoom:50%;" />
+#### 2. Create a SQL task and use the parameters
 
-You can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
+After the SHELL task above is completed, we can use the output passed from upstream as the query condition of the SQL task. The queried id is renamed to ID and output as a parameter.
 
-The node instance [createParam1] is as follows:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-<img src="/img/globalParam/image-20210723105131381.png" alt="image-20210723105131381" style="zoom:50%;" />
+> Note: If the result of the SQL node is a single row with one or more fields, the name of the parameter needs to match the field name. The data type can be any type except LIST. The variable takes the value of the column in the SQL query result whose name matches the variable name.
+>
+> If the result of the SQL node is multiple rows with one or more fields, the name of the parameter needs to match the field name. Choose LIST as the data type. After the SQL query result is obtained, the corresponding column is converted to a LIST and then to JSON as the value of the variable.
 
-Here, of course, the value of "id" will be 12.
+#### 3. Save the workflow and set global parameters
 
-Let's look at the node instance [createParam2].
+Click the save workflow icon and set the global parameters output and value.
 
-<img src="/img/globalParam/image-20210723105255850.png" alt="image-20210723105255850" style="zoom:50%;" />
+![context-parameter04](/img/new_ui/dev/parameter/context_parameter04.png)
 
-Here there is only the value of "id". Although the user-defined SQL queries both the "id" and "database_name" fields, only one OUT variable "id" is defined, so only one variable is set. For display reasons, the length of this list has already been checked for you: it is 10.
+#### 4. View the results

Review Comment:
   ```suggestion
    #### View the results
   ```



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+Create the Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-Let's take the SQL node workflow in the picture above as an example:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-The following defines the [createParam1] node in the above figure:
+#### 2. Create a SQL task and use the parameters
 
-![png05](/img/globalParam/image-20210723104957031.png)
+After the SHELL task is completed, we can use the output passed from upstream as the query condition for the SQL task. The queried id is renamed to ID and is output as a parameter.
 
-The following defines the [createParam2] node:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-![png06](/img/globalParam/image-20210723105026924.png)
+> Note: If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+>
+> If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
 
-Find the value of the variable on the [Workflow Instance] page for the corresponding node instance.
+#### 3. Save the workflow and set the global parameters
 
-The following shows the node instance [createparam1]:
+Click the save workflow icon and set the global parameters output and value.
 
-![png07](/img/globalParam/image-20210723105131381.png)
+![context-parameter04](/img/new_ui/dev/parameter/context_parameter04.png)
 
-Here, the value of "id" is 12.
+#### 4. View results
 
-Let's see the case of the node instance [createparam2].
+After the workflow is created, run it online and view its results.
 
-![png08](/img/globalParam/image-20210723105255850.png)
+The result of Node_A is as follows:
 
-There is only the "id" value. Although the user-defined SQL queries both the "id" and "database_name" fields, only the `OUT` parameter `id` is set, because only the parameter "id" is defined for output. The length of the result list is 10 for display reasons.
+![context-log01](/img/new_ui/dev/parameter/context_log01.png)
 
-### SHELL
+The result of Node_B is as follows:
 
-`prop` is user-specified and the direction is `OUT`. The output is defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
-To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+![context-log02](/img/new_ui/dev/parameter/context_log02.png)
 
-For example, in the figure below:
+The result of Node_mysql is as follows:
 
-![png09](/img/globalParam/image-20210723101242216.png)
+![context-log03](/img/new_ui/dev/parameter/context_log03.png)
 
-When the log detects the `${setValue(key=value1)}` format in the shell node definition, it will assign value1 to the key, and downstream nodes can use the variable key directly. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to see the value of the variable.
+Even though output is assigned a value of 1 in Node_A's script, the log still shows a value of 100. But according to the principle of [parameter priority](priority.md): `Local Parameter > Parameter Context > Global Parameter`, the value of output in Node_B is 1. This proves that the output parameter is passed through the workflow as expected, and the query in Node_mysql is completed using this value.
 
-![png10](/img/globalParam/image-20210723102522383.png)
+But the value of value is output as 66 only in Node_A; the reason is that the direction of value is set to IN, and only when the direction is OUT will a parameter be defined as a variable output.

Review Comment:
   ```suggestion
    But the value 66 only shows in Node_A; the reason is that the direction of value is set to IN, and only when the direction is OUT will a parameter be defined as a variable output.
   ```
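
A short sketch of the behavior this comment describes, using the parameter names from the example (the exact output is an assumption based on the description above):

```shell
#!/bin/bash

# Run in a downstream task of Node_A:
echo "output is ${output}"  # assumed to print 1: output was OUT in Node_A,
                            # so it was exported to the parameter context
echo "value is ${value}"    # does not receive Node_A's 66: value was IN,
                            # so it is resolved from this task's own local
                            # parameters or the global parameters instead
```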



##########
docs/docs/zh/guide/parameter/context.md:
##########
@@ -20,50 +16,63 @@ DolphinScheduler allows parameter transfer between tasks; currently the transfer direction only supports
 
 When defining an upstream node, if the result of that node needs to be passed to a downstream node that depends on it, set a variable whose direction is OUT in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support passing parameters downstream for the SQL and SHELL nodes.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Task Example
+
+This example shows how to use the parameter passing function: a SHELL task creates a local parameter, assigns it a value, and passes it downstream, and a SQL task completes a query by obtaining the parameter from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the SHELL script must output a statement in the format ${setValue(key=value)}, where key is the prop of the corresponding parameter and value is the value of that parameter.
 
-prop is user-specified; the direction is set to OUT, and only when the direction is OUT will it be defined as an output variable; the data type can be chosen from different data structures as needed; the value part does not need to be filled in.
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
 
-If the result of the SQL node is a single row with one or more fields, the name of the prop needs to match the field name. The data type can be any type except LIST. The variable takes the value of the column in the SQL query result whose name matches the variable name.
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
 
-If the result of the SQL node is multiple rows with one or more fields, the name of the prop needs to match the field name. Choose LIST as the data type. After the SQL query result is obtained, the corresponding column is converted to LIST<VARCHAR> and then to JSON as the value of the variable.
+Parameter description:
 
-Let's take the workflow containing the SQL node in the figure above as an example:
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-The node [createParam1] in the figure above is defined as follows:
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, 1 is assigned to output, and downstream nodes can directly use the value of the variable output. Likewise, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-<img src="/img/globalParam/image-20210723104957031.png" alt="image-20210723104957031" style="zoom:50%;" />
+Create a Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-The node [createParam2] is defined as follows:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-<img src="/img/globalParam/image-20210723105026924.png" alt="image-20210723105026924" style="zoom:50%;" />
+#### 2. Create a SQL task and use the parameters
 
-You can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
+After the SHELL task above is completed, we can use the output passed from upstream as the query condition of the SQL task. The queried id is renamed to ID and output as a parameter.
 
-The node instance [createParam1] is as follows:
+![context-parameter03](/img/new_ui/dev/parameter/context_parameter03.png)
 
-<img src="/img/globalParam/image-20210723105131381.png" alt="image-20210723105131381" style="zoom:50%;" />
+> Note: If the result of the SQL node is a single row with one or more fields, the name of the parameter needs to match the field name. The data type can be any type except LIST. The variable takes the value of the column in the SQL query result whose name matches the variable name.
+>
+> If the result of the SQL node is multiple rows with one or more fields, the name of the parameter needs to match the field name. Choose LIST as the data type. After the SQL query result is obtained, the corresponding column is converted to a LIST and then to JSON as the value of the variable.
 
-Here, of course, the value of "id" will be 12.
+#### 3. Save the workflow and set global parameters

Review Comment:
   ```suggestion
    #### Save the workflow and set global parameters
   ```



##########
docs/docs/en/guide/parameter/context.md:
##########
@@ -20,47 +16,63 @@ DolphinScheduler allows parameter transfer between tasks. Currently, transfer di
 
 When defining an upstream node, if the result of that node needs to be transmitted to a downstream node that depends on it, you need to set an `OUT` direction parameter in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support the SQL and SHELL nodes to pass parameters downstream.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Example
+
+This example shows how to use the parameter passing function: a SHELL task creates local parameters, assigns them values, and passes them downstream, and a SQL task completes a query by obtaining the parameters from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the shell script must output a statement in the format `${setValue(key=value)}`, where the key is the `prop` of the corresponding parameter and value is the value of the parameter.
+
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
+
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
+
+Parameter Description:
 
-`prop` is user-specified; the direction selects `OUT`, and it will be defined as an export parameter only when the direction is `OUT`. Choose a data structure for the data type according to the scenario, and leave the value part blank.
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-If the result of the SQL node has only one row with one or multiple fields, the name of the `prop` needs to be the same as the field name. The data type can be any structure except `LIST`. The parameter is assigned the value of the column in the SQL query result with the same name.
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, it assigns 1 to output, and the downstream node can directly use the value of the variable output. Similarly, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-If the result of the SQL node has multiple rows with one or more fields, the name of the `prop` needs to be the same as the field name. Choose `LIST` as the data type, and the SQL query result will be converted to `LIST<VARCHAR>` and then to JSON as the parameter value.
+Create the Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-Let's take the SQL node workflow in the picture above as an example:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-The following defines the [createParam1] node in the above figure:
+#### 2. Create a SQL task and use the parameters

Review Comment:
   ```suggestion
    #### Create a SQL task and use the parameters
   ```



##########
docs/docs/zh/guide/parameter/context.md:
##########
@@ -20,50 +16,63 @@ DolphinScheduler allows parameter transfer between tasks; currently the transfer direction only supports
 
 When defining an upstream node, if the result of that node needs to be passed to a downstream node that depends on it, set a variable whose direction is OUT in the [Custom Parameters] of the [Current Node Settings]. At present, we mainly support passing parameters downstream for the SQL and SHELL nodes.
 
-### SQL
+> Note: If there are no dependencies between nodes, local parameters cannot be passed from the upstream node.
+
+### Task Example
+
+This example shows how to use the parameter passing function: a SHELL task creates a local parameter, assigns it a value, and passes it downstream, and a SQL task completes a query by obtaining the parameter from the upstream task.
+
+#### 1. Create a SHELL task and set parameters
+
+> To pass a parameter, the SHELL script must output a statement in the format ${setValue(key=value)}, where key is the prop of the corresponding parameter and value is the value of that parameter.
 
-prop is user-specified; the direction is set to OUT, and only when the direction is OUT will it be defined as an output variable; the data type can be chosen from different data structures as needed; the value part does not need to be filled in.
+Create a Node_A task, add output and value parameters to the custom parameters, and write the following script:
 
-If the result of the SQL node is a single row with one or more fields, the name of the prop needs to match the field name. The data type can be any type except LIST. The variable takes the value of the column in the SQL query result whose name matches the variable name.
+![context-parameter01](/img/new_ui/dev/parameter/context_parameter01.png)
 
-If the result of the SQL node is multiple rows with one or more fields, the name of the prop needs to match the field name. Choose LIST as the data type. After the SQL query result is obtained, the corresponding column is converted to LIST<VARCHAR> and then to JSON as the value of the variable.
+Parameter description:
 
-Let's take the workflow containing the SQL node in the figure above as an example:
+- value: the direction is set to IN, and the value is 66
+- output: the direction is set to OUT; it is assigned through the script `'${setValue(output=1)}'` and passed to downstream parameters
 
-The node [createParam1] in the figure above is defined as follows:
+When the SHELL node is defined and the log detects content in the format ${setValue(output=1)}, 1 is assigned to output, and downstream nodes can directly use the value of the variable output. Likewise, you can find the corresponding node instance on the [Workflow Instance] page to view the value of this variable.
 
-<img src="/img/globalParam/image-20210723104957031.png" alt="image-20210723104957031" style="zoom:50%;" />
+Create a Node_B task, which is mainly used to test and output the parameters passed by the upstream task Node_A.
 
-The node [createParam2] is defined as follows:
+![context-parameter02](/img/new_ui/dev/parameter/context_parameter02.png)
 
-<img src="/img/globalParam/image-20210723105026924.png" alt="image-20210723105026924" style="zoom:50%;" />
+#### 2. Create a SQL task and use the parameters

Review Comment:
   ```suggestion
    #### Create a SQL task and use the parameters
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org