Posted to reviews@yunikorn.apache.org by GitBox <gi...@apache.org> on 2021/11/14 13:58:15 UTC

[GitHub] [incubator-yunikorn-site] HuangTing-Yao opened a new pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

HuangTing-Yao opened a new pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90


   This tutorial covers building and setting up the performance benchmarking environment.
   Please take a look @yangwwei . Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@yunikorn.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-yunikorn-site] yangwwei commented on a change in pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

Posted by GitBox <gi...@apache.org>.
yangwwei commented on a change in pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90#discussion_r749480069



##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial

Review comment:
       title: Benchmarking Tutorial

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
+
+## Hardware
+
+Be aware that performance results are highly variable depending on the underlying hardware. All results published in this doc can only be used as references. We encourage each individual to run similar tests on their own environment in order to get results based on their own hardware. This doc is for demonstration purposes only.
+
+The servers used in this test are listed below (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed on each server, otherwise large-scale testing may fail due to the limits on the number of users, processes, and open files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:

Review comment:
       it is good to have general steps as an overview here.
   for each step, if we expand in the later doc sections, could you pls add a link for easier access?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
+
+## Hardware
+
+Be aware that performance results are highly variable depending on the underlying hardware. All results published in this doc can only be used as references. We encourage each individual to run similar tests on their own environment in order to get results based on their own hardware. This doc is for demonstration purposes only.
+
+The servers used in this test are listed below (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed on each server, otherwise large-scale testing may fail due to the limits on the number of users, processes, and open files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
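+
+After editing both files, the settings can be applied and verified as follows (a minimal sketch: `sysctl -p` reloads `/etc/sysctl.conf`, while the `limits.conf` changes only take effect for new login sessions):
+
+```
+# reload kernel parameters from /etc/sysctl.conf
+sudo sysctl -p
+# verify the kernel parameters
+sysctl kernel.pid_max fs.inotify.max_user_instances fs.inotify.max_user_watches
+# after logging in again, verify the per-user limits
+ulimit -u   # max user processes
+ulimit -n   # max open files
+```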
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure Kubernetes API server and controller manager, then add worker nodes.
+2. Deploy hollow pods, which simulate worker nodes and are called hollow nodes. Once all hollow nodes are in the Ready state, we need to cordon all native nodes (the nodes physically present in the cluster, as opposed to the simulated ones) to avoid allocating test workload pods to them.
+3. Deploy YuniKorn using the Helm chart on the master node, scale the Deployment down to 0 replicas, and modify the port in `prometheus.yml` to match the port of the service.
+4. Deploy 50k Nginx pods for testing; the API server will create them. But since the YuniKorn scheduler Deployment has been scaled down to 0 replicas, all Nginx pods will be stuck in the Pending state.
+5. Scale the YuniKorn Deployment back up to 1 replica and cordon the master node so that YuniKorn does not allocate Nginx pods there (example kubectl commands follow this list). In this step, YuniKorn will start collecting the metrics.
+6. Observe the metrics exposed in the Prometheus UI.
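+
+For reference, the cordon and scale operations above map to standard kubectl commands, for example (assuming the scheduler Deployment created by the Helm chart is named `yunikorn-scheduler` and runs in the `yunikorn` namespace):
+
+```
+# cordon a node so that no test pods are allocated to it
+kubectl cordon {node name}
+# pause scheduling by scaling the YuniKorn scheduler down to 0 replicas
+kubectl scale deployment yunikorn-scheduler -n yunikorn --replicas=0
+# resume scheduling once the test pods have been created
+kubectl scale deployment yunikorn-scheduler -n yunikorn --replicas=1
+```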
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large-scale clusters. In our tests, we leverage Kubemark to simulate up to a 4K-node cluster on fewer than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you will have a kubemark image that can simulate cluster nodes. You can upload it to Docker Hub or just deploy it locally.
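+
+For example, to publish the image to your own registry (the local image name and tag produced by `make build` may differ depending on the Kubernetes version you built, so check with `docker images` first):
+
+```
+# find the freshly built kubemark image
+docker images | grep kubemark
+# retag it for your own registry and push it
+docker tag kubemark:v1.XX.X {your-registry}/kubemark:v1.XX.X
+docker push {your-registry}/kubemark:v1.XX.X
+```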
+
+### 2. Install Kubemark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to label all native nodes, otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage the node selector in the YAML to allocate hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
+
+### 4. Deploy Kubemark
+
+The hollow-node.yaml is shown below; there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  # the number of hollow nodes you want to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  # use the node label to place hollow pods on native nodes
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200']  # set to the same value as max_user_instances on the actual node
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    # the resources of the hollow pod; adjust as needed
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  # the resources of the hollow pod; adjust as needed
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+Once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
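+
+After applying it, you can check that the hollow pods are running and that the simulated nodes have registered with the API server, for example:
+
+```
+# count the running hollow pods
+kubectl get pods -n kubemark | grep -c Running
+# list the simulated (hollow) nodes; they should reach the Ready state
+kubectl get nodes | grep hollow-node
+```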
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm; please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration. We recommend cloning the [release repo](https://github.com/apache/incubator-yunikorn-release) and modifying the parameters in `values.yaml`.
+
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+cd incubator-yunikorn-release/helm-charts/yunikorn
+```
+
+#### Configuration
+
+The modifications in `values.yaml` are:
+
+- increased memory/cpu resources for the scheduler pod
+- disabled the admission controller
+- set the app sorting policy to FAIR
+
+Please see the changes below:
+
+```
+resources:
+  requests:
+    cpu: 14
+    memory: 16Gi
+  limits:
+    cpu: 14
+    memory: 16Gi
+```
+```
+embedAdmissionController: false
+```
+```
+configuration: |
+  partitions:
+    -
+      name: default
+      queues:
+        - name: root
+          submitacl: '*'
+          queues:
+            -
+              name: sandbox
+              properties:
+                application.sort.policy: fair
+```
+
+#### Install YuniKorn with local release repo
+
+```
+helm install yunikorn . --namespace yunikorn
+```
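+
+A quick sanity check after the install; the second command also shows which port the YuniKorn service exposes, which is needed for the Prometheus configuration later:
+
+```
+# the scheduler pod should reach the Running state
+kubectl get pods -n yunikorn
+# note the exposed port of the YuniKorn service
+kubectl get svc -n yunikorn
+```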
+
+---
+
+## Setup Prometheus
+
+YuniKorn exposes its scheduling metrics via Prometheus. Thus, we need to set up a Prometheus server to collect these metrics.
+
+### 1. Download Prometheus release
+
+```
+wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz
+```
+```
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+### 2. Configure prometheus.yml
+
+```
+global:
+  scrape_interval:     3s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'yunikorn'
+    scrape_interval: 1s
+    metrics_path: '/ws/v1/metrics'
+    static_configs:
+    - targets: ['docker.for.mac.host.internal:9080'] 
+    # 9080 is the scheduler's internal port; either port-forward it or replace 9080 with the service's port
+```
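+
+If Prometheus runs outside the cluster, one way to reach the scheduler's metrics endpoint is a port-forward, for example (the service name here is an assumption; check the actual name with `kubectl get svc -n yunikorn`):
+
+```
+# forward local port 9080 to the YuniKorn service inside the cluster
+kubectl port-forward svc/yunikorn-service 9080:9080 -n yunikorn
+```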
+
+### 3. Launch Prometheus
+```
+./prometheus --config.file=prometheus.yml
+```
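+
+The Prometheus UI is then available on port 9090 (http://localhost:9090 by default). For example, the scheduling latency histogram can be queried through the HTTP API, or the same expression can be pasted into the UI; the metric name below is an assumption, so check the exact names on the `/ws/v1/metrics` endpoint:
+
+```
+# 95th-percentile scheduling latency over the last minute (metric name is illustrative)
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=histogram_quantile(0.95, sum(rate(yunikorn_scheduler_scheduling_latency_seconds_bucket[1m])) by (le))'
+```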
+
+---
+
+## Collect and Observe YuniKorn metrics
+
+After Prometheus is launched, YuniKorn metrics can be easily collected. Here are the [docs](https://yunikorn.apache.org/docs/performance/metrics) on YuniKorn metrics. YuniKorn tracks some key scheduling metrics which measure the latency of critical scheduling paths. These metrics include:
+
+ - scheduling_latency_seconds
+ - app_sorting_latency_seconds
+ - node_sorting_latency_seconds
+ - queue_sorting_latency_seconds
+ - container_allocation_attempt_total 

Review comment:
       We need short explanations for these metrics
   we also need a JIRA to create a doc to explain all these metrics, in a table
   then we can link to that doc

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
+
+## Hardware
+
+Be aware that performance results are highly variable depending on the underlying hardware. All results published in this doc can only be used as references. We encourage each individual to run similar tests on their own environment in order to get results based on their own hardware. This doc is for demonstration purposes only.
+
+The servers used in this test are listed below (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):

Review comment:
       The National Taichung University of Education -> adds a link
   Kuan-Chou Lai -> Professor Kuan-Chou Lai (add a link if possible)
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@yunikorn.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-yunikorn-site] HuangTing-Yao commented on pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

Posted by GitBox <gi...@apache.org>.
HuangTing-Yao commented on pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90#issuecomment-968926597


   Thanks for your careful review, @yuchaoran2011 .
   All typos and grammar issues have been fixed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@yunikorn.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-yunikorn-site] HuangTing-Yao commented on a change in pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

Posted by GitBox <gi...@apache.org>.
HuangTing-Yao commented on a change in pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90#discussion_r759468099



##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
+
+## Hardware
+
+Be aware that performance results are highly variable depending on the underlying hardware. All results published in this doc can only be used as references. We encourage each individual to run similar tests on their own environment in order to get results based on their own hardware. This doc is for demonstration purposes only.
+
+The servers used in this test are listed below (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed on each server, otherwise large-scale testing may fail due to the limits on the number of users, processes, and open files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:

Review comment:
       Done.

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
+
+## Hardware
+
+Be aware that performance results are highly variable depending on the underlying hardware. All results published in this doc can only be used as references. We encourage each individual to run similar tests on their own environment in order to get results based on their own hardware. This doc is for demonstration purposes only.
+
+The servers used in this test are listed below (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed on each server, otherwise large-scale testing may fail due to the limits on the number of users, processes, and open files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure Kubernetes API server and controller manager, then add worker nodes.
+2. Deploy hollow pods, which simulate worker nodes and are called hollow nodes. Once all hollow nodes are in the Ready state, we need to cordon all native nodes (the nodes physically present in the cluster, as opposed to the simulated ones) to avoid allocating test workload pods to them.
+3. Deploy YuniKorn using the Helm chart on the master node, scale the Deployment down to 0 replicas, and modify the port in `prometheus.yml` to match the port of the service.
+4. Deploy 50k Nginx pods for testing; the API server will create them. But since the YuniKorn scheduler Deployment has been scaled down to 0 replicas, all Nginx pods will be stuck in the Pending state.
+5. Scale the YuniKorn Deployment back up to 1 replica and cordon the master node so that YuniKorn does not allocate Nginx pods there. In this step, YuniKorn will start collecting the metrics.
+6. Observe the metrics exposed in the Prometheus UI.
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large-scale clusters. In our tests, we leverage Kubemark to simulate up to a 4K-node cluster on fewer than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you will have a kubemark image that can simulate cluster nodes. You can upload it to Docker Hub or just deploy it locally.
+
+### 2. Install Kubemark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to label all native nodes, otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage the node selector in the YAML to allocate hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
+
+### 4. Deploy Kubemark
+
+The hollow-node.yaml is shown below; there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  # the number of hollow nodes you want to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  # use the node label to place hollow pods on native nodes
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200']  # set to the same value as max_user_instances on the actual node
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    # the resources of the hollow pod; adjust as needed
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  # the resources of the hollow pod; adjust as needed
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+Once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm; please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration. We recommend cloning the [release repo](https://github.com/apache/incubator-yunikorn-release) and modifying the parameters in `values.yaml`.
+
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+cd incubator-yunikorn-release/helm-charts/yunikorn
+```
+
+#### Configuration
+
+The modifications in `values.yaml` are:
+
+- increased memory/cpu resources for the scheduler pod
+- disabled the admission controller
+- set the app sorting policy to FAIR
+
+Please see the changes below:
+
+```
+resources:
+  requests:
+    cpu: 14
+    memory: 16Gi
+  limits:
+    cpu: 14
+    memory: 16Gi
+```
+```
+embedAdmissionController: false
+```
+```
+configuration: |
+  partitions:
+    -
+      name: default
+      queues:
+        - name: root
+          submitacl: '*'
+          queues:
+            -
+              name: sandbox
+              properties:
+                application.sort.policy: fair
+```
+
+#### Install YuniKorn with local release repo
+
+```
+helm install yunikorn . --namespace yunikorn
+```
+
+---
+
+## Setup Prometheus
+
+YuniKorn exposes its scheduling metrics via Prometheus. Thus, we need to set up a Prometheus server to collect these metrics.
+
+### 1. Download Prometheus release
+
+```
+wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz
+```
+```
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+### 2. Configure prometheus.yml
+
+```
+global:
+  scrape_interval:     3s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'yunikorn'
+    scrape_interval: 1s
+    metrics_path: '/ws/v1/metrics'
+    static_configs:
+    - targets: ['docker.for.mac.host.internal:9080'] 
+    # 9080 is the scheduler's internal port; either port-forward it or replace 9080 with the service's port
+```
+
+### 3. Launch Prometheus
+```
+./prometheus --config.file=prometheus.yml
+```
+
+---
+
+## Collect and Observe YuniKorn metrics
+
+After Prometheus is launched, YuniKorn metrics can be easily collected. Here are the [docs](https://yunikorn.apache.org/docs/performance/metrics) on YuniKorn metrics. YuniKorn tracks some key scheduling metrics which measure the latency of critical scheduling paths. These metrics include:
+
+ - scheduling_latency_seconds
+ - app_sorting_latency_seconds
+ - node_sorting_latency_seconds
+ - queue_sorting_latency_seconds
+ - container_allocation_attempt_total 

Review comment:
       Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@yunikorn.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-yunikorn-site] yuchaoran2011 commented on a change in pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

Posted by GitBox <gi...@apache.org>.
yuchaoran2011 commented on a change in pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90#discussion_r748899095



##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+A list of servers being used in this test are (Huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Manchine Type         | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.

Review comment:
       Could you clarify what this step means? Specifically the meaning of hollow node and native node?

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+A list of servers being used in this test are (Huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Manchine Type         | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leveager kubemark to simulate up to 4K nodes cluster on less than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you can get the kubemark image which can simulate cluster node. You can upload it to Docker-Hub or just deploy it locally.
+
+### 2. Install Kubermark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to lebel all native nodes, otherwise the scheduler might allocate hollow pod to other simulated hollows nodes. We can leverage Node selector in yaml to allocate hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
+
+### 4. Deploy Kubemark
+
+The hollow-node.yaml is down below, there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  // the node number you want to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  // leverage label to allocate to native node
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200'] // set as same as max_user_instance in actual node 
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10 // the kubemark image you build 
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    // the resource of hollow pod, can modify it.
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10 // the kubemark image you build 
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  // the resource of hollow pod, can modify it.
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm, please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration, recommended to clone the release repo and modify the parameters in `value.yaml`.
+
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+cd helm-charts/yunikorn
+```
+
+#### Configuration
+
+The modifications in the `value.yaml` are:
+
+- increased memory/cpu resources for the scheduler pod
+- disabled the admission controller
+- set the app sorting policy to FAIR
+
+please see the changes below:
+
+```
+resources:
+  requests:
+    cpu: 14
+    memory: 16Gi
+  limits:
+    cpu: 14
+    memory: 16Gi
+```
+```
+embedAdmissionController: false
+```
+```
+configuration: |
+  partitions:
+    -
+      name: default
+      queues:
+        - name: root
+          submitacl: '*'
+          queues:
+            -
+              name: sandbox
+              properties:
+                application.sort.policy: fair
+```
+
+#### Install YuniKorn with local release repo
+
+```
+Helm install yunikorn . --namespace yunikorn
+```
+
+---
+
+## Setup Prometheus
+
+YuniKorn exposes its scheduling metrics via Promethus. Thus, we need to setup Promethus server to collect these metrics.
+
+### 1. Download Prometheus release
+
+```
+wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz
+```
+```
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+### 2. Configure prometheus.yml
+
+```
+global:
+  scrape_interval:     3s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'yunikorn'
+    scrape_interval: 1s
+    metrics_path: '/ws/v1/metrics'
+    static_configs:
+    - targets: ['docker.for.mac.host.internal:9080'] 
+    // 9080 is internal port, need port forward or modify 9080 to service's port
+```
+
+### 3. Launch Prometheus
+```
+./prometheus --config.file=prometheus.yml
+```
+
+---
+
+## Collect and Observe YuniKorn metrics
+
+After Promethus is launched, YuniKorn metrics can be easily collected.Here is the [docs](https://yunikorn.apache.org/docs/performance/metrics) of YuniKorn metrics. YuniKorn tracks some key scheduling metircs which measure the latency of some critical scheduling paths. These metrics include:

Review comment:
       ```suggestion
   After Prometheus is launched, YuniKorn metrics can be easily collected. Here is the [docs](https://yunikorn.apache.org/docs/performance/metrics) of YuniKorn metrics. YuniKorn tracks some key scheduling metrics which measure the latency of some critical scheduling paths. These metrics include:
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+A list of servers being used in this test are (Huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Manchine Type         | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
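+
+After editing the file, the new values can be loaded without a reboot, for example:
+
+```
+# reload kernel parameters from /etc/sysctl.conf and verify one of them
+sudo sysctl -p
+sysctl kernel.pid_max
+```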
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
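+
+The new limits only apply to new login sessions; a quick way to verify them after logging in again is:
+
+```
+# show the current open-file and max-process limits for this shell
+ulimit -n
+ulimit -u
+```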
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.

Review comment:
       ```suggestion
   1. Properly configure Kubernetes API server and controller manager, then add worker nodes.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.

Review comment:
       ```suggestion
   Be aware that performance result is highly variable depending on the underlying hardware. All results published in the doc can only be used as references. We encourage each individual to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 

Review comment:
       ```suggestion
   3. Deploy YuniKorn using the Helm chart on the master node, scale the Deployment down to 0 replicas, and modify the port in `prometheus.yml` to match the port of the service.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.

Review comment:
       ```suggestion
   5. Scale the YuniKorn Deployment back up to 1 replica, and cordon the master node to avoid YuniKorn allocating Nginx pods there. In this step, YuniKorn will start collecting the metrics.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.

Review comment:
       ```suggestion
   6. Observe the metrics exposed in Prometheus UI.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.

Review comment:
       ```suggestion
   4. Deploy 50k Nginx pods for testing, and the API server will create them. But since the YuniKorn scheduler Deployment has been scaled down to 0 replicas, all Nginx pods will be stuck in pending.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.
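+
+For step 4, the test workload is plain Nginx pods submitted to the sandbox queue. A minimal sketch of such a workload, assuming the queue layout shown later in `value.yaml`; the Deployment name, replica count and resource requests are only placeholders:
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-benchmark
+spec:
+  replicas: 50000
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+        applicationId: nginx-benchmark-01
+        queue: root.sandbox
+    spec:
+      schedulerName: yunikorn   # hand the pods to YuniKorn instead of the default scheduler
+      containers:
+      - name: nginx
+        image: nginx:1.21
+        resources:
+          requests:
+            cpu: 10m
+            memory: 10M
+```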
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leveager kubemark to simulate up to 4K nodes cluster on less than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you will have a kubemark image that can simulate cluster nodes. You can upload it to Docker Hub or just deploy it locally.
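+
+The exact name of the built image can be checked with `docker images`. If the worker machines cannot share a local image cache, one option is to push it to a registry first (the repository below is only a placeholder):
+
+```
+docker tag kubemark:v1.XX.X <your-registry>/kubemark:v1.XX.X
+docker push <your-registry>/kubemark:v1.XX.X
+```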
+
+### 2. Install Kubemark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to label all native nodes, otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage the node selector in the YAML to pin hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
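+
+For example, to mark one physical worker (`worker-01` is a placeholder node name) and list the nodes carrying the label:
+
+```
+kubectl label node worker-01 tag=tagName
+kubectl get nodes -l tag=tagName
+```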
+
+### 4. Deploy Kubemark
+
+The `hollow-node.yaml` is shown below; there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  # the number of hollow nodes to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  # use the label to pin hollow pods to native nodes
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200']  # set to the same value as fs.inotify.max_user_instances on the actual node
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    # resource requests of the hollow pod; adjust as needed
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  # resource requests of the hollow pod; adjust as needed
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+Once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
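+
+Once the ReplicationController is running, each hollow pod registers itself as a node named after the pod, so the simulated nodes can be counted with:
+
+```
+kubectl get nodes | grep hollow-node | wc -l
+```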
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm; please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration, recommended to clone the release repo and modify the parameters in `value.yaml`.
+
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+cd helm-charts/yunikorn
+```
+
+#### Configuration
+
+The modifications in the `value.yaml` are:
+
+- increased memory/cpu resources for the scheduler pod
+- disabled the admission controller
+- set the app sorting policy to FAIR
+
+Please see the changes below:
+
+```
+resources:
+  requests:
+    cpu: 14
+    memory: 16Gi
+  limits:
+    cpu: 14
+    memory: 16Gi
+```
+```
+embedAdmissionController: false
+```
+```
+configuration: |
+  partitions:
+    -
+      name: default
+      queues:
+        - name: root
+          submitacl: '*'
+          queues:
+            -
+              name: sandbox
+              properties:
+                application.sort.policy: fair
+```
+
+#### Install YuniKorn with local release repo
+
+```
+helm install yunikorn . --namespace yunikorn
+```
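+
+Steps 3 and 5 of the deploy workflow pause and resume scheduling by scaling this Deployment. A sketch of those two commands, assuming the chart created a Deployment named `yunikorn-scheduler` (check `kubectl get deploy -n yunikorn` for the exact name):
+
+```
+# step 3: hold scheduling while the 50k test pods are being created
+kubectl scale deployment yunikorn-scheduler -n yunikorn --replicas=0
+# step 5: resume scheduling; YuniKorn starts emitting scheduling metrics
+kubectl scale deployment yunikorn-scheduler -n yunikorn --replicas=1
+```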
+
+---
+
+## Setup Prometheus
+
+YuniKorn exposes its scheduling metrics via Promethus. Thus, we need to setup Promethus server to collect these metrics.

Review comment:
       ```suggestion
   YuniKorn exposes its scheduling metrics via Prometheus. Thus, we need to set up a Prometheus server to collect these metrics.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.

Review comment:
       ```suggestion
   The YuniKorn community continues to optimize the performance of the scheduler, ensuring that YuniKorn satisfies the performance requirements of large-scale batch workloads. Thus, the community has built some useful tools for performance benchmarking that can be reused across releases. This document introduces all these tools and steps to run them.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leveager kubemark to simulate up to 4K nodes cluster on less than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you will have a kubemark image that can simulate cluster nodes. You can upload it to Docker Hub or just deploy it locally.
+
+### 2. Install Kubemark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to label all native nodes, otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage the node selector in the YAML to pin hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
+
+### 4. Deploy Kubemark
+
+The `hollow-node.yaml` is shown below; there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  # the number of hollow nodes to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  # use the label to pin hollow pods to native nodes
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200']  # set to the same value as fs.inotify.max_user_instances on the actual node
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    # resource requests of the hollow pod; adjust as needed
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  # resource requests of the hollow pod; adjust as needed
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+Once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm; please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration, recommended to clone the release repo and modify the parameters in `value.yaml`.

Review comment:
       ```suggestion
   We need to tune some parameters based on the default configuration. We recommend cloning the [release repo](https://github.com/apache/incubator-yunikorn-release) and modifying the parameters in `value.yaml`.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leveager kubemark to simulate up to 4K nodes cluster on less than 20 physical nodes.

Review comment:
       ```suggestion
   [Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leverage Kubemark to simulate up to a 4K-node cluster on less than 20 physical nodes.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+---
+id: performance_tutorial
+title: Setup tutorial
+keywords:
+ - performance
+ - tutorial
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Overview
+
+The YuniKorn community continues to optimizes the peformance of the scheduler, ensures YuniKorn satisfies the performance requirements for large scale batch workloads. Thus, the community has built some useful tools to do performance benchmarking repetitively over releases. This document introduces all these tools and steps to achieve this.
+
+## Hardware
+
+Be aware that performance result is highly variant depends on the underneath hardware, all the results published in the doc can only be used as references. We encourage each individaul to run similar tests on their own environments in order to get a result based on your own hardware. This doc is just for demonstration purpose.
+
+The list of servers used in this test is as follows (huge thanks to [National Taichung University of Education, Kuan-Chou Lai] for providing these servers for running tests):
+
+| Machine Type          | CPU | Memory | Download/upload(Mbps) |
+| --------------------- | --- | ------ | --------------------- |
+| HP                    | 16  | 36G    | 525.74/509.86         |
+| HP                    | 16  | 30G    | 564.84/461.82         |
+| HP                    | 16  | 30G    | 431.06/511.69         |
+| HP                    | 24  | 32G    | 577.31/576.21         |
+| IBM blade H22         | 16  | 38G    | 432.11/4.15           |
+| IBM blade H22         | 16  | 36G    | 714.84/4.14           |
+| IBM blade H22         | 16  | 42G    | 458.38/4.13           |
+| IBM blade H22         | 16  | 42G    | 445.42/4.13           |
+| IBM blade H22         | 16  | 32G    | 400.59/4.13           |
+| IBM blade H22         | 16  | 12G    | 499.87/4.13           |
+| IBM blade H23         | 8   | 32G    | 468.51/4.14           |
+| WS660T                | 8   | 16G    | 87.73/86.30           |
+| ASUSPRO D640MB_M640SA | 4   | 8G     | 92.43/93.77           |
+| PRO E500 G6_WS720T    | 16  | 8G     | 90/87.18              |
+| WS E500 G6_WS720T     | 8   | 40G    | 92.61/89.78           |
+| E500 G5               | 8   | 8G     | 91.34/85.84           |
+| WS E500 G5_WS690T     | 12  | 16G    | 92.2/93.76            |
+| WS E500 G5_WS690T     | 8   | 32G    | 91/89.41              |
+| WS E900 G4_SW980T     | 80  | 512G   | 89.24/87.97           |
+
+The following steps are needed for each server, otherwise the large scale testing may fail due to the limited number of users/processes/open-files.
+
+### 1. Set /etc/sysctl.conf
+```
+kernel.pid_max=400000
+fs.inotify.max_user_instances=50000
+fs.inotify.max_user_watches=52094
+```
+### 2. Set /etc/security/limits.conf
+
+```
+* soft nproc 4000000
+* hard nproc 4000000
+root soft nproc 4000000
+root hard nproc 4000000
+* soft nofile 50000
+* hard nofile 50000
+root soft nofile 50000
+root hard nofile 50000
+```
+---
+
+## Deploy workflow
+
+Before going into the details, here are the general steps used in our tests:
+
+1. Properly configure K8s API-Server and controller-manager, then join worker nodes.
+2. Deploy hollow node pod, to simulate worker node, then cordon all native node, avoid we allocated test pod to native node.
+3. Helm install yunikorn on the master node, and scale down deploy to 0, and modify port in prometheus.yml to service's port. 
+4. Deploy 50k nginx pods for testing, and api-server will create them, but yunikorn pod just scale down to 0, so they will stuck in pending.
+5. Scale up yunikorn deploy to 1, and cordon master, to avoid yunikorn allocate nginx pod to master. In this step, YuniKorn will start to collect the metrics.
+6. Observe the metrics which expose in Prometheus UI.
+---
+
+## Setup Kubemark
+
+[Kubemark](https://github.com/kubernetes/kubernetes/tree/master/test/kubemark) is a performance testing tool which allows users to run experiments on simulated clusters. The primary use case is the scalability testing. The basic idea is to run tens or hundreds of fake kubelet nodes on one physical node in order to simulate large scale clusters. In our tests, we leveager kubemark to simulate up to 4K nodes cluster on less than 20 physical nodes.
+
+### 1. Build image
+
+##### Clone kubernetes repo, and build kubemark binary file
+
+```
+git clone https://github.com/kubernetes/kubernetes.git
+```
+```
+cd kubernetes
+```
+```
+KUBE_BUILD_PLATFORMS=linux/amd64 make kubemark GOFLAGS=-v GOGCFLAGS="-N -l"
+```
+
+##### Copy kubemark binary file to the image folder and build kubemark docker image
+
+```
+cp _output/bin/kubemark cluster/images/kubemark
+```
+```
+IMAGE_TAG=v1.XX.X make build
+```
+After this step, you will have a kubemark image that can simulate cluster nodes. You can upload it to Docker Hub or just deploy it locally.
+
+### 2. Install Kubemark
+
+##### Create kubemark namespace
+
+```
+kubectl create ns kubemark
+```
+
+##### Create configmap
+
+```
+kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
+```
+
+##### Create secret
+
+```
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubeconfig_file_path}
+```
+### 3. Label node
+
+We need to label all native nodes (the real machines that have joined the cluster), otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage a node selector in the YAML to allocate the hollow pods to native nodes.
+
+```
+kubectl label node {node name} tag=tagName
+```
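+
+A quick check that the label is in place before deploying the hollow pods (using the same example label as above):
+
+```
+kubectl get nodes -l tag=tagName
+```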
+
+### 4. Deploy Kubemark
+
+The hollow-node.yaml is shown below; there are some parameters we can configure.
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: hollow-node
+  namespace: kubemark
+spec:
+  replicas: 2000  # the number of nodes you want to simulate
+  selector:
+      name: hollow-node
+  template:
+    metadata:
+      labels:
+        name: hollow-node
+    spec:
+      nodeSelector:  # use the label to place hollow pods on native nodes
+        tag: tagName  
+      initContainers:
+      - name: init-inotify-limit
+        image: docker.io/busybox:latest
+        imagePullPolicy: IfNotPresent
+        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=200'] # set to the same value as max_user_instances on the actual node
+        securityContext:
+          privileged: true
+      volumes:
+      - name: kubeconfig-volume
+        secret:
+          secretName: kubeconfig
+      - name: logs-volume
+        hostPath:
+          path: /var/log
+      containers:
+      - name: hollow-kubelet
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 4194
+        - containerPort: 10250
+        - containerPort: 10255
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=kubelet
+        - --name=$(NODE_NAME)
+        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:
+          requests:    # resources of the hollow pod; adjust as needed
+            cpu: 20m
+            memory: 50M
+        securityContext:
+          privileged: true
+      - name: hollow-proxy
+        image: 0yukali0/kubemark:1.20.10  # the kubemark image you built
+        imagePullPolicy: IfNotPresent
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        command:
+        - /kubemark
+        args:
+        - --morph=proxy
+        - --name=$(NODE_NAME)
+        - --use-real-proxier=false
+        - --kubeconfig=/kubeconfig/kubeproxy.kubeconfig
+        - --alsologtostderr
+        - --v=2
+        volumeMounts:
+        - name: kubeconfig-volume
+          mountPath: /kubeconfig
+          readOnly: true
+        - name: logs-volume
+          mountPath: /var/log
+        resources:  # resources of the hollow pod; adjust as needed
+          requests:
+            cpu: 20m
+            memory: 50M
+      tolerations:
+      - effect: NoExecute
+        key: node.kubernetes.io/unreachable
+        operator: Exists
+      - effect: NoExecute
+        key: node.kubernetes.io/not-ready
+        operator: Exists
+```
+
+Once done editing, apply it to the cluster:
+
+```
+kubectl apply -f hollow-node.yaml
+```
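+
+Each hollow pod registers itself as a node once it is running; a quick sanity check:
+
+```
+# the hollow pods should be Running
+kubectl get pods -n kubemark -o wide
+
+# the simulated nodes should show up as Ready
+kubectl get nodes | grep hollow-node
+```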
+
+---
+
+## Deploy YuniKorn
+
+#### Install YuniKorn with helm
+
+We can install YuniKorn with Helm; please refer to this [doc](https://yunikorn.apache.org/docs/#install).
+We need to tune some parameters based on the default configuration. It is recommended to clone the release repo and modify the parameters in `values.yaml`.
+
+```
+git clone https://github.com/apache/incubator-yunikorn-release.git
+cd incubator-yunikorn-release/helm-charts/yunikorn
+```
+
+#### Configuration
+
+The modifications in `values.yaml` are:
+
+- increased memory/cpu resources for the scheduler pod
+- disabled the admission controller
+- set the app sorting policy to FAIR
+
+Please see the changes below:
+
+```
+resources:
+  requests:
+    cpu: 14
+    memory: 16Gi
+  limits:
+    cpu: 14
+    memory: 16Gi
+```
+```
+embedAdmissionController: false
+```
+```
+configuration: |
+  partitions:
+    -
+      name: default
+      queues:
+        - name: root
+          submitacl: '*'
+          queues:
+            -
+              name: sandbox
+              properties:
+                application.sort.policy: fair
+```
+
+#### Install YuniKorn with local release repo
+
+```
+helm install yunikorn . --namespace yunikorn
+```
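+
+After the install completes, verify that the scheduler pod is running and has picked up the larger resource requests. The deployment name `yunikorn-scheduler` is an assumption; check `kubectl get deployment -n yunikorn` for the real name:
+
+```
+kubectl get pods -n yunikorn
+kubectl get deployment yunikorn-scheduler -n yunikorn -o jsonpath='{.spec.template.spec.containers[0].resources}'
+```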
+
+---
+
+## Setup Prometheus
+
+YuniKorn exposes its scheduling metrics via Prometheus. Thus, we need to set up a Prometheus server to collect these metrics.
+
+### 1. Download Prometheus release
+
+```
+wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz
+```
+```
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+### 2. Configure prometheus.yml
+
+```
+global:
+  scrape_interval:     3s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'yunikorn'
+    scrape_interval: 1s
+    metrics_path: '/ws/v1/metrics'
+    static_configs:
+    - targets: ['docker.for.mac.host.internal:9080'] 
+    # 9080 is the scheduler's internal port; port-forward it or replace 9080 with the service's port
+```
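+
+If Prometheus runs outside the cluster, the easiest way to reach port 9080 is a port-forward to the scheduler service. The service name `yunikorn-service` is an assumption; check `kubectl get svc -n yunikorn` for the real name:
+
+```
+kubectl port-forward svc/yunikorn-service 9080:9080 -n yunikorn
+```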
+
+### 3. Launch Prometheus
+```
+./prometheus --config.file=prometheus.yml
+```
+
+---
+
+## Collect and Observe YuniKorn metrics
+
+After Prometheus is launched, YuniKorn metrics can be easily collected. Here are the [docs](https://yunikorn.apache.org/docs/performance/metrics) of YuniKorn metrics. YuniKorn tracks some key scheduling metrics which measure the latency of critical scheduling paths. These metrics include:
+
+ - scheduling_latency_seconds
+ - app_sorting_latency_seconds
+ - node_sorting_latency_seconds
+ - queue_sorting_latency_seconds
+ - container_allocation_attempt_total 
+
+You can select these metrics and generate graphs in the Prometheus UI easily, such as:
+
+![Prometheus Metrics List](./../assets/prometheus.png)
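+
+You can also query the metrics endpoint directly to confirm that the scheduler is exporting data (assuming port 9080 is reachable as configured above):
+
+```
+curl -s http://localhost:9080/ws/v1/metrics | grep latency
+```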
+
+
+---
+
+## Performance Tuning
+
+### Kubernetes
+
+The default K8s setup limits the number of concurrent requests, which caps the overall throughput of the cluster. In this section, we introduce a few parameters that need to be tuned in order to increase the overall throughput of the cluster.
+
+#### kubeadm
+
+Set pod-network mask
+
+```
+kubeadm init --pod-network-cidr=10.244.0.0/8
+```
+
+#### CNI
+
+Modify CNI mask and resources.
+
+```
+  net-conf.json: |
+    {
+      "Network": "10.244.0.0/8",
+      "Backend": {
+        "Type": "vxlan"
+      }
+    }
+```
+```
+  resources:
+    requests:
+      cpu: "100m"
+      memory: "200Mi"
+    limits:
+      cpu: "100m"
+      memory: "200Mi"
+```
+
+
+#### Api-Server
+
+In the Kubernetes API server, we need to modify two parameters: `max-mutating-requests-inflight` and `max-requests-inflight`. These two parameters control the API request bandwidth. Because we will generate a large number of pod requests, we need to increase both of them. Modify `/etc/kubernetes/manifests/kube-apiserver.yaml`:
+
+```
+--max-mutating-requests-inflight=3000
+--max-requests-inflight=3000
+```
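+
+The API server runs as a static pod, so the kubelet restarts it automatically once the manifest is saved; the new flags can be confirmed with:
+
+```
+ps aux | grep kube-apiserver | grep requests-inflight
+```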
+
+#### Controller-Manager
+
+In the Kubernetes controller manager, we need to increase the value of three parameters: `node-cidr-mask-size`, `kube-api-burst` and `kube-api-qps`. `kube-api-burst` and `kube-api-qps` control the server-side request bandwidth. `node-cidr-mask-size` represents the node CIDR; it needs to be increased as well in order to scale up to thousands of nodes.

Review comment:
       ```suggestion
   In the Kubernetes controller manager, we need to increase the value of three parameters: `node-cidr-mask-size`, `kube-api-burst` and `kube-api-qps`. `kube-api-burst` and `kube-api-qps` control the server side request bandwidth. `node-cidr-mask-size` represents the node CIDR. it needs to be increased as well in order to scale up to thousands of nodes. 
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+#### Controller-Manager
+
+In Controller-Manager, we need to modify three parameters: `node-cidr-mask-size`, `kube-api-burst` and `kube-api-qps`. `kube-api-burst` and `kube-api-qps` controls the server side requests' bandwidth, they need to be increased. `node-cidr-mask-size` represents the node cidr, it needs to be increased as well in order to scale up to thousands of nodes. 
+
+
+Modify `/etc/kubernetes/manifest/kube-controller-manager.yaml`:
+
+```
+--node-cidr-mask-size=21 //log2(max number of pods in cluster)
+--kube-api-burst=3000
+--kube-api-qps=3000
+```
+
+#### kubelet
+
+In single worker node, we can run 110 pods as default, to get higher node resource utilization, we need to add some parameters in kubelet launch command, and restart it.

Review comment:
       ```suggestion
   In single worker node, we can run 110 pods as default. But to get higher node resource utilization, we need to add some parameters in Kubelet launch command, and restart it.
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+you can select and generate graph on Promethus UI easily, such as:

Review comment:
       ```suggestion
   you can select and generate graph on Prometheus UI easily, such as:
   ```

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+### 3. Label node
+
+We need to lebel all native nodes, otherwise the scheduler might allocate hollow pod to other simulated hollows nodes. We can leverage Node selector in yaml to allocate hollow pods to native nodes.

Review comment:
       ```suggestion
   We need to label all native nodes, otherwise the scheduler might allocate hollow pods to other simulated hollow nodes. We can leverage Node selector in yaml to allocate hollow pods to native nodes.
   ```
   Like above, it will be helpful to explain what hollow and native mean

##########
File path: docs/performance/performance_tutorial.md
##########
@@ -0,0 +1,451 @@
+#### Api-Server
+
+In Api-Server, we need to modify two parameters: `max-mutating-requests-inflight` and `max-requests-inflight`. Those two parameters represent the api request bandwidth, cause we will apply large amount of pod request, so we need to increase those two parameters. Modify `/etc/kubernetes/manifest/kube-apiserver.yaml`:

Review comment:
       ```suggestion
   In the Kubernetes API server, we need to modify two parameters: `max-mutating-requests-inflight` and `max-requests-inflight`. Those two parameters represent the API request bandwidth. Because we will generate a large amount of pod request, we need to increase those two parameters. Modify `/etc/kubernetes/manifest/kube-apiserver.yaml`:
   ```







[GitHub] [incubator-yunikorn-site] yangwwei merged pull request #90: [YUNIKORN-851]Documents of build kubemark and prometheus

Posted by GitBox <gi...@apache.org>.
yangwwei merged pull request #90:
URL: https://github.com/apache/incubator-yunikorn-site/pull/90


   

