Posted to reviews@yunikorn.apache.org by GitBox <gi...@apache.org> on 2022/03/09 06:16:20 UTC

[GitHub] [incubator-yunikorn-site] yangwwei commented on a change in pull request #134: [YUNIKORN-951] Add perf-tool description into benchmarking tutorial page

yangwwei commented on a change in pull request #134:
URL: https://github.com/apache/incubator-yunikorn-site/pull/134#discussion_r822296060



##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|
+
+### 2. Build tool
+Performance tool is in [yunikorn release](https://github.com/apache/incubator-yunikorn-release.git), so clone it to your host. 

Review comment:
       The performance tool is available in the [yunikorn release repo](https://github.com/apache/incubator-yunikorn-release.git); clone the repo to your local workspace.

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|

Review comment:
       I would rather not call this an e2e test; we usually say e2e test when doing functional testing on integrated envs. I think we can remove this row and just focus on the rest. BTW, I think the tool supports Queue fairness, if I remember correctly. Could you please double-check?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|
+
+### 2. Build tool
+Performance tool is in [yunikorn release](https://github.com/apache/incubator-yunikorn-release.git), so clone it to your host. 
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+```
+Go to performance tool directory and build it
+```
+cd incubator-yunikorn-release/perf-tools/
+go mod tidy
+go build
+```
+It will look like this.
+![Build-perf-tools](./../assets/perf-tutorial-build.png)
+
+### 3. Set test configuration
+Before starting the tests, check whether the configuration meets your expectations.
+Default output path is `\tmp`; you can modify `common.outputrootpath` to change it.
+Each scenario contains the following fields, which you can set:
+
+|	field			|			description					|
+| ----------------------------- | --------------------------------------------------------------------- |
+|	schedulerNames		|	List of schedulers that will run these cases				|

Review comment:
       You can actually set two scheduler names? I wasn't aware of that.
   I think you need to add some more description to explain this a bit more.
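   For illustration, a scenario that runs the same cases against two schedulers might be configured like this (a sketch only; the field layout is an assumption based on the table above, not the tool's actual schema, so check the default config shipped with perf-tools):
   ```
   # Hypothetical perf-tools config snippet. The field layout is an assumption
   # based on the documentation table above, not the tool's actual schema.
   common:
     outputrootpath: /tmp         # where logs and diagrams are written
   nodeFairness:
     schedulerNames:              # every scheduler listed here runs the same cases
       - yunikorn
       - default-scheduler        # the vanilla kube-scheduler, as a baseline
   ```
   Listing two schedulers would make the tool repeat the identical workload under each one, so the results are directly comparable.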

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|

Review comment:
       Measure schedulers' throughput by calculating how many pods are allocated per second based on the pod start time
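   In other words, presumably something like: throughput = (number of allocated pods) / (latest pod start time - earliest pod start time), reported in pods/sec.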

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|
+
+### 2. Build tool
+Performance tool is in [yunikorn release](https://github.com/apache/incubator-yunikorn-release.git), so clone it to your host. 
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+```
+Go to performance tool directory and build it

Review comment:
       NITS: this can be simply put as:
   Build the tool:
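   It might also be worth showing how to run the tool once it is built, e.g. (assuming `go build` drops a `perf-tools` binary in the working directory; the actual binary name and any flags should be checked against the perf-tools README):
   ```
   cd incubator-yunikorn-release/perf-tools/
   go mod tidy
   go build
   # Hypothetical invocation; the binary name and flags are assumptions,
   # check the perf-tools README for actual usage.
   ./perf-tools
   ```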
   

##########
File path: package.json
##########
@@ -12,7 +12,7 @@
   "dependencies": {
     "@docusaurus/core": "^2.0.0-beta.15",
     "@docusaurus/preset-classic": "^2.0.0-beta.15",
-    "@docusaurus/theme-search-algolia": "^2.0.0-beta.15",
+    "@docusaurus/theme-search-algolia": "^2.0.0-beta.17",

Review comment:
       This change seems unrelated to this PR; can we skip it?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|
+
+### 2. Build tool
+Performance tool is in [yunikorn release](https://github.com/apache/incubator-yunikorn-release.git), so clone it to your host. 
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+```
+Go to performance tool directory and build it
+```
+cd incubator-yunikorn-release/perf-tools/
+go mod tidy
+go build
+```
+It will look like this.
+![Build-perf-tools](./../assets/perf-tutorial-build.png)
+
+### 3. Set test configuration
+Before starting the tests, check whether the configuration meets your expectations.
+Default output path is `\tmp`; you can modify `common.outputrootpath` to change it.

Review comment:
       \tmp or /tmp?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|
+| ---------------------	| -----------------------------------------------------------------------------------------------------	| ------------- | ----------------------------- |
+|	e2e test	|	Simulate and record the time spent in each step							|	none	|	exist (QPS, time cost)	|
+|	node fairness	|	Monitor node resource usage (allocated/capacity) with many pod requests			| 	exist	|	exist			|
+|	throughput	|	Allocate `pod.spec.starttime` to calculate throughput (pods/sec) with many pod requests	|	exist	|	none			|
+
+### 2. Build tool
+Performance tool is in [yunikorn release](https://github.com/apache/incubator-yunikorn-release.git), so clone it to your host. 
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+```
+Go to performance tool directory and build it
+```
+cd incubator-yunikorn-release/perf-tools/
+go mod tidy
+go build
+```
+It will look like this.
+![Build-perf-tools](./../assets/perf-tutorial-build.png)
+
+### 3. Set test configuration
+Before starting the tests, check whether the configuration meets your expectations.
+Default output path is `\tmp`; you can modify `common.outputrootpath` to change it.
+Each scenario contains the following fields, which you can set:
+
+|	field			|			description					|
+| ----------------------------- | --------------------------------------------------------------------- |
+|	schedulerNames		|	List of schedulers that will run these cases				|
+|	showNumOfLastTasks	|	Show the last tasks in scheduling				|

Review comment:
       I feel this description isn't accurate. Can you elaborate more?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -355,6 +355,52 @@ scrape_configs:
 
 Once the environment is set up, you are good to run workloads and collect results. The YuniKorn community has some useful tools to run workloads and collect metrics; more details will be published here.
 
+### 1. Scenarios 
+The performance tool supports three types of tests, each producing different feedback.
+
+|	test type	|						description						|	diagram	|  		log		|

Review comment:
       NITS: The first letter in each column header should be capitalized, just for formatting consistency.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
