Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2022/07/01 08:33:18 UTC

[GitHub] [dolphinscheduler] zhongjiajie commented on a diff in pull request #10715: Modified dq, monitor, security, resources docs.

zhongjiajie commented on code in PR #10715:
URL: https://github.com/apache/dolphinscheduler/pull/10715#discussion_r911553180


##########
docs/docs/en/guide/security.md:
##########
@@ -139,37 +135,21 @@ worker.groups=default,test
 * When executing a task, the task can be assigned to the specified worker group, and select the corresponding environment according to the worker group. Finally, the worker node executes the environment first and then executes the task.
 
 > Add or update environment
-
 - The environment configuration is equivalent to the configuration in the `dolphinscheduler_env.sh` file.
 
-![create-environment](../../../img/new_ui/dev/security/create-environment.png)
+![create-environment](/img/new_ui/dev/security/create-environment.png)

Review Comment:
   We have to use the relative path of the image (e.g. `../../../img/new_ui/dev/security/create-environment.png`) to make it work in our document.



##########
docs/docs/en/guide/resource/task-group.md:
##########
@@ -20,37 +20,37 @@ You need to enter the information inside the picture:
 
 - Resource pool size: The maximum number of concurrent task instances allowed.
 
-#### View Task Group Queue 
+### View Task Group Queue 
 
-![view-queue](../../../../img/new_ui/dev/resource/view-queue.png) 
+![view-queue](/img/new_ui/dev/resource/view-queue.png) 
 
 Click the button to view task group usage information:
 
-![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png) 
+![view-queue](/img/new_ui/dev/resource/view-groupQueue.png) 
 
-#### Use of Task Groups 
+### Use of Task Groups 
 
 **Note**: The usage of task groups is applicable to tasks executed by workers, such as [switch] nodes, [condition] nodes, [sub_process] and other node types executed by the master are not controlled by the task group. Let's take the shell node as an example: 
 
-![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)                 
+![use-queue](/img/new_ui/dev/resource/use-queue.png)                 
 
 Regarding the configuration of the task group, all you need to do is to configure these parts in the red box:
 
 - Task group name: The task group name is displayed on the task group configuration page. Here you can only see the task group that the project has permission to access (the project is selected when creating a task group) or the task group that scope globally (no project is selected when creating a task group).
 
 - Priority: When there is a waiting resource, the task with high priority will be distributed to the worker by the master first. The larger the value of this part, the higher the priority. 
 
-### Implementation Logic of Task Group 
+## Implementation Logic of Task Group 
 
-#### Get Task Group Resources: 
+### Get Task Group Resources: 

Review Comment:
   ```suggestion
   ### Get Task Group Resources
   ```



##########
docs/configs/docsdev.js:
##########
@@ -257,6 +257,10 @@ export default {
                     {
                         title: 'Resource',
                         children: [
+                            {
+                                title: 'Resources',
+                                link: '/en-us/docs/dev/user_doc/guide/resource/resources_introduction.html'

Review Comment:
   I think we should rename `resources_introduction.md` to `intro.md`. It is simpler, and we can make the title `Introduction`.
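   If we go with the rename, something like the following would do it (a sketch, assuming the commands are run from the repository root; the sidebar `link` value is a guess at what the renamed page would resolve to):

   ```shell
   # Rename the new page to the shorter name suggested above.
   git mv docs/docs/en/guide/resource/resources_introduction.md docs/docs/en/guide/resource/intro.md
   # Then update the sidebar entry in docs/configs/docsdev.js: set the title to
   # 'Introduction' and point the link at the renamed page (likely .../guide/resource/intro.html).
   ```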



##########
docs/docs/en/guide/resource/configuration.md:
##########
@@ -92,45 +60,33 @@ yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
 yarn.application.status.address=http://localhost:%s/ds/v1/cluster/apps/%s
 # job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
 yarn.job.history.status.address=http://localhost:19888/ds/v1/history/mapreduce/jobs/%s
-
 # datasource encryption enable
 datasource.encryption.enable=false
-
 # datasource encryption salt
 datasource.encryption.salt=!@#$%^&*
-
 # data quality option
 data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
-
 #data-quality.error.output.path=/tmp/data-quality-error-data
-
 # Network IP gets priority, default inner outer
-
 # Whether hive SQL is executed in the same session
 support.hive.oneSession=false
-
 # use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions;
 # if set false, executing user is the deploy user and doesn't need sudo permissions
 sudo.enable=true
-
 # network interface preferred like eth0, default: empty
 #dolphin.scheduler.network.interface.preferred=
-
 # network IP gets priority, default: inner outer
 #dolphin.scheduler.network.priority.strategy=default
-
 # system env path
 #dolphinscheduler.env.path=env/dolphinscheduler_env.sh
-
 # development state
 development.state=false
-
 # rpc port
 alert.rpc.port=50052
 ```
 
-> **_Note:_**
->
+> **Note:**
+> 
 > *  If only the `api-server/conf/common.properties` file is configured, then resource uploading is enabled, but you can not use resources in task. If you want to use or execute the files in the workflow you need to configure `worker-server/conf/common.properties` too.
 > * If you want to use the resource upload function, the deployment user in [installation and deployment](../installation/standalone.md) must have relevant operation authority.
-> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `worker-server/conf` and `api-server/conf`, otherwise skip this copy step.
\ No newline at end of file
+> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise skip this copy step.

Review Comment:
   ```suggestion
   > * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `worker-server/conf` and `api-server/conf`, otherwise skip this copy step.
   ```
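   For reference, a minimal sketch of the copy step this note describes (illustrative only; it assumes the Hadoop client configs live under `$HADOOP_CONF_DIR` and that it is run from the DolphinScheduler install directory):

   ```shell
   # Copy the Hadoop client configuration into both services that need HDFS access.
   cp "$HADOOP_CONF_DIR"/core-site.xml "$HADOOP_CONF_DIR"/hdfs-site.xml worker-server/conf/
   cp "$HADOOP_CONF_DIR"/core-site.xml "$HADOOP_CONF_DIR"/hdfs-site.xml api-server/conf/
   ```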



##########
docs/docs/en/guide/resource/file-manage.md:
##########
@@ -2,52 +2,39 @@
 
 When third party jars are used in the scheduling process or user defined scripts are required, these can be created from this page. The types of files that can be created include: txt, log, sh, conf, py, java and so on. Files can be edited, renamed, downloaded and deleted.
 
-![file-manage](../../../../img/new_ui/dev/resource/file-manage.png)
+![file-manage](/img/new_ui/dev/resource/file-manage.png)
 
-> **_Note:_**
+> **Note:**
 >
-> * When you manage files as `admin`, remember to set up `tenant` for `admin` first. 
+> * When you manage files as `admin`, remember to set up `tenant` for `admin` first.
 
-## Basic Operations
+- Create a file
+  > The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql properties.
+![create-file](/img/new_ui/dev/resource/create-file.png)
 
-### Create a File
+- Upload files
 
-The file format supports the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql, properties.
+> Upload file: Click the "`Upload File`" button to upload, drag the file to the upload area, the file name will be automatically completed with the uploaded file name.
+![upload-file](/img/new_ui/dev/resource/upload-file.png)
 
-![create-file](../../../../img/new_ui/dev/resource/create-file.png)
+- File view
 
-### Upload Files
+> For the files that can be viewed, click the file name to view the file details.
+![file_detail](/img/tasks/demo/file_detail.png)
 
-Click the "Upload File" button to upload, drag the file to the upload area, the file name will be automatically completed with the uploaded file name.
+- Download file
 
-![upload-file](../../../../img/new_ui/dev/resource/upload-file.png)
+> Click the "`Download`" button in the file list to download the file or click the "`Download`" button in the upper right corner of the file details to download the file.
+- File rename
 
-### View File Content
+![rename-file](/img/new_ui/dev/resource/rename-file.png)
 
- For the files that can be viewed, click the file name to view the file details.
+- Delete
+  > File list -> Click the "`Delete`" button to delete the specified file.
+- Re-upload file
 
-![file_detail](../../../../img/tasks/demo/file_detail.png)
-
-### Download file
-
-> Click the "Download" button in the file list to download the file or click the "Download" button in the upper right corner of the file details to download the file.
-
-### Rename File
-
-![rename-file](../../../../img/new_ui/dev/resource/rename-file.png)
-
-### Delete File
-
-File list -> Click the "Delete" button to delete the specified file.
-
-### Re-upload file
-
-Click the "Re-upload File" button to upload a new file to replace the old file, drag the file to the re-upload area, the file name will be automatically completed with the new uploaded file name.
-
-![reuplod_file](../../../../img/reupload_file_en.png)

Review Comment:
   It seems we should not remove those sections of content.



##########
docs/docs/en/guide/resource/resources_introduction.md:
##########
@@ -0,0 +1,2 @@
+# Resources
+The Resource Center is typically used for uploading files, UDF functions, and task group management. For a stand-alone environment, you can select the local file directory as the upload folder (this operation does not require Hadoop deployment). Of course, you can also choose to upload to Hadoop or MinIO cluster. In this case, you need to have Hadoop (2.6+) or MinIOn and other related environments.

Review Comment:
   Good addition, BTW.



##########
docs/docs/en/guide/resource/file-manage.md:
##########
@@ -66,10 +53,14 @@ In the workflow definition module of project Manage, create a new workflow using
 - Script: 'sh hello.sh'
 - Resource: Select 'hello.sh'
 
-![use-shell](../../../../img/new_ui/dev/resource/demo/file-demo02.png)
+![use-shell](/img/new_ui/dev/resource/demo/file-demo02.png)
 
 ### View the results
 
 You can view the log results of running the node in the workflow example. The diagram below:
 
-![log-shell](../../../../img/new_ui/dev/resource/demo/file-demo03.png)
+![log-shell](/img/new_ui/dev/resource/demo/file-demo03.png)
+
+
+
+

Review Comment:
   ```suggestion
   ```



##########
docs/docs/en/guide/security.md:
##########
@@ -139,37 +135,21 @@ worker.groups=default,test
 * When executing a task, the task can be assigned to the specified worker group, and select the corresponding environment according to the worker group. Finally, the worker node executes the environment first and then executes the task.
 
 > Add or update environment
-
 - The environment configuration is equivalent to the configuration in the `dolphinscheduler_env.sh` file.
 
-![create-environment](../../../img/new_ui/dev/security/create-environment.png)
+![create-environment](/img/new_ui/dev/security/create-environment.png)
 
 > Usage environment
-
 - Create a task node in the workflow definition, select the worker group and the environment corresponding to the worker group. When executing the task, the Worker will execute the environment first before executing the task.
 
-![use-environment](../../../img/new_ui/dev/security/use-environment.png)
-
-## Cluster Management
-
-> Add or update cluster
-
-- Each process can be related to zero or several clusters to support multiple environment, now just support k8s.
-
-> Usage cluster
-
-- After creation and authorization, k8s namespaces and processes will associate clusters. Each cluster will have separate workflows and task instances running independently.
-
-![create-cluster](../../../img/new_ui/dev/security/create-cluster.png)

Review Comment:
   Could you add an explanation of why this section of the docs was removed?



##########
docs/docs/en/guide/resource/configuration.md:
##########
@@ -92,45 +60,33 @@ yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
 yarn.application.status.address=http://localhost:%s/ds/v1/cluster/apps/%s
 # job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
 yarn.job.history.status.address=http://localhost:19888/ds/v1/history/mapreduce/jobs/%s
-
 # datasource encryption enable
 datasource.encryption.enable=false
-
 # datasource encryption salt
 datasource.encryption.salt=!@#$%^&*
-
 # data quality option
 data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
-
 #data-quality.error.output.path=/tmp/data-quality-error-data
-
 # Network IP gets priority, default inner outer
-
 # Whether hive SQL is executed in the same session
 support.hive.oneSession=false
-
 # use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions;
 # if set false, executing user is the deploy user and doesn't need sudo permissions
 sudo.enable=true
-
 # network interface preferred like eth0, default: empty
 #dolphin.scheduler.network.interface.preferred=
-
 # network IP gets priority, default: inner outer
 #dolphin.scheduler.network.priority.strategy=default
-
 # system env path
 #dolphinscheduler.env.path=env/dolphinscheduler_env.sh
-
 # development state
 development.state=false
-
 # rpc port
 alert.rpc.port=50052
 ```
 
-> **_Note:_**
->
+> **Note:**
+> 
 > *  If only the `api-server/conf/common.properties` file is configured, then resource uploading is enabled, but you can not use resources in task. If you want to use or execute the files in the workflow you need to configure `worker-server/conf/common.properties` too.
 > * If you want to use the resource upload function, the deployment user in [installation and deployment](../installation/standalone.md) must have relevant operation authority.
-> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `worker-server/conf` and `api-server/conf`, otherwise skip this copy step.
\ No newline at end of file
+> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise skip this copy step.

Review Comment:
   We should not change this content.



##########
docs/docs/en/guide/resource/task-group.md:
##########
@@ -20,37 +20,37 @@ You need to enter the information inside the picture:
 
 - Resource pool size: The maximum number of concurrent task instances allowed.
 
-#### View Task Group Queue 
+### View Task Group Queue 
 
-![view-queue](../../../../img/new_ui/dev/resource/view-queue.png) 
+![view-queue](/img/new_ui/dev/resource/view-queue.png) 
 
 Click the button to view task group usage information:
 
-![view-queue](../../../../img/new_ui/dev/resource/view-groupQueue.png) 
+![view-queue](/img/new_ui/dev/resource/view-groupQueue.png) 
 
-#### Use of Task Groups 
+### Use of Task Groups 
 
 **Note**: The usage of task groups is applicable to tasks executed by workers, such as [switch] nodes, [condition] nodes, [sub_process] and other node types executed by the master are not controlled by the task group. Let's take the shell node as an example: 
 
-![use-queue](../../../../img/new_ui/dev/resource/use-queue.png)                 
+![use-queue](/img/new_ui/dev/resource/use-queue.png)                 
 
 Regarding the configuration of the task group, all you need to do is to configure these parts in the red box:
 
 - Task group name: The task group name is displayed on the task group configuration page. Here you can only see the task group that the project has permission to access (the project is selected when creating a task group) or the task group that scope globally (no project is selected when creating a task group).
 
 - Priority: When there is a waiting resource, the task with high priority will be distributed to the worker by the master first. The larger the value of this part, the higher the priority. 
 
-### Implementation Logic of Task Group 
+## Implementation Logic of Task Group 
 
-#### Get Task Group Resources: 
+### Get Task Group Resources: 
 
 The master judges whether the task is configured with a task group when distributing the task. If the task is not configured, it is normally thrown to the worker to run; if a task group is configured, it checks whether the remaining size of the task group resource pool meets the current task operation before throwing it to the worker for execution. , if the resource pool -1 is satisfied, continue to run; if not, exit the task distribution and wait for other tasks to wake up. 
 
-#### Release and Wake Up: 
+### Release and Wake Up: 

Review Comment:
   ```suggestion
   ### Release and Wake Up
   ```



##########
docs/docs/en/guide/resource/resources_introduction.md:
##########
@@ -0,0 +1,2 @@
+# Resources
+The Resource Center is typically used for uploading files, UDF functions, and task group management. For a stand-alone environment, you can select the local file directory as the upload folder (this operation does not require Hadoop deployment). Of course, you can also choose to upload to Hadoop or MinIO cluster. In this case, you need to have Hadoop (2.6+) or MinIOn and other related environments.

Review Comment:
   ```suggestion
   
   The Resource Center is typically used for uploading files, UDF functions, and task group management. For a stand-alone environment, you can select the local file directory as the upload folder (this operation does not require Hadoop deployment). Of course, you can also choose to upload to Hadoop or MinIO cluster. In this case, you need to have Hadoop (2.6+) or MinIOn and other related environments.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org