Posted to reviews@yunikorn.apache.org by GitBox <gi...@apache.org> on 2022/01/26 04:45:21 UTC

[GitHub] [incubator-yunikorn-site] yangwwei commented on a change in pull request #113: [YUNIKORN-1044] Publish design doc for scheduler plugin

yangwwei commented on a change in pull request #113:
URL: https://github.com/apache/incubator-yunikorn-site/pull/113#discussion_r792307436



##########
File path: docs/design/scheduler_plugin.md
##########
@@ -0,0 +1,112 @@
+---
+id: scheduler_plugin
+title: K8s Scheduler Plugin
+---
+
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*      http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+## Background
+
+YuniKorn (on Kubernetes) has traditionally been implemented from the ground up as a complete Kubernetes scheduler.
+This has allowed us to innovate rapidly, but it is not without problems: we call into non-public Kubernetes
+source-code APIs in numerous places, with varying levels of code stability, which sometimes requires very
+disruptive code changes when we switch to a new Kubernetes release.
+
+Ideally, we should be able to take advantage of enhancements to new Kubernetes releases automatically.
+Using the plugin model enables us to enhance the Kubernetes scheduling logic with YuniKorn features.
+This also helps keep YuniKorn compatible with new Kubernetes releases with minimal effort.
+
+Additionally, it is desirable in many cases to allow non-batch workloads to bypass the YuniKorn scheduling
+functionality and use default scheduling logic. However, we have no way to do that today as the default
+scheduling functionality is not present in the YuniKorn scheduler binary.
+
+Since Kubernetes 1.19, the Kubernetes project has provided a stable API for the
+[Scheduling Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/),
+which exposes a set of well-defined extension points. Plugins implement one or more of these extension
+points and are then compiled into a scheduler binary that contains both the default scheduler and the
+plugin code, configured to call into the plugins during the normal scheduling flow.
+
+## Design

Review comment:
       Some things that might be good to include:
   1. We need a high-level design to explain how this works together. A workflow would be helpful. For example, when a pod gets submitted, what happens in scheduler-plugin mode until it gets allocated?
   2. Can you help list the features that cannot be supported in the plugin mode?
   3. I think we need to mention that the plugin mode will have worse performance than the standalone mode.

##########
File path: docs/design/scheduler_plugin.md
##########
@@ -0,0 +1,112 @@
+---
+id: scheduler_plugin
+title: K8s Scheduler Plugin
+---
+
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*      http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+## Background
+
+YuniKorn (on Kubernetes) has traditionally been implemented from the ground up as a complete Kubernetes scheduler.
+This has allowed us to innovate rapidly, but it is not without problems: we call into non-public Kubernetes
+source-code APIs in numerous places, with varying levels of code stability, which sometimes requires very
+disruptive code changes when we switch to a new Kubernetes release.
+
+Ideally, we should be able to take advantage of enhancements to new Kubernetes releases automatically.
+Using the plugin model enables us to enhance the Kubernetes scheduling logic with YuniKorn features.
+This also helps keep YuniKorn compatible with new Kubernetes releases with minimal effort.
+
+Additionally, it is desirable in many cases to allow non-batch workloads to bypass the YuniKorn scheduling
+functionality and use default scheduling logic. However, we have no way to do that today as the default
+scheduling functionality is not present in the YuniKorn scheduler binary.
+
+Since Kubernetes 1.19, the Kubernetes project has provided a stable API for the
+[Scheduling Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/),
+which exposes a set of well-defined extension points. Plugins implement one or more of these extension
+points and are then compiled into a scheduler binary that contains both the default scheduler and the
+plugin code, configured to call into the plugins during the normal scheduling flow.
+
+## Design
+
+We have added a scheduler plugin to the k8s-shim codebase which can be used to build a Kubernetes
+scheduler binary that includes YuniKorn functionality as well as the default scheduler functionality,
+significantly improving the compatibility of YuniKorn with upstream Kubernetes and allowing deployment of
+YuniKorn as the sole scheduler in a cluster with much greater confidence.
+
+Separate docker images are created for the scheduler. The traditional YuniKorn scheduler is built as
+`scheduler-{version}` while the new plugin version is built as `scheduler-plugin-{version}`. Either can be
+deployed interchangeably into a Kubernetes cluster with the same helm charts by customizing the scheduler
+image to deploy.
+
+## Entrypoints
+
+The existing shim `main()` method has been relocated to `pkg/cmd/shim/main.go`, and a new `main()` method
+under `pkg/cmd/schedulerplugin/main.go` has been created. This method instantiates the default Kubernetes
+scheduler and adds YuniKorn to it as a set of plugins. It also modifies the default scheduler CLI argument
+parsing to add YuniKorn-specific options. When the YuniKorn plugin is created, it will launch an instance
+of the existing shim / core schedulers in the background, sync all informers, and start the normal YuniKorn
+scheduling loop.
+
+## Shim Scheduler Changes
+
+In order to cooperate with the default scheduler, the shim needs to operate slightly differently when in
+plugin mode. These differences include:
+
+ - In `postTaskAllocated()`, we don’t actually bind the Pod or Volumes, as this is the responsibility of
+   the default scheduler framework. Instead, we track the Node that YuniKorn allocated for the Pod in an
+   internal map, dispatch a new `BindTaskEvent`, and record a `QuotaApproved` event on the Pod.
+ - In `postTaskBound()`, we update the Pod’s state to `QuotaApproved` as this will cause the default scheduler
+   to re-evaluate the pod for scheduling (more on this below).
+ - In the scheduler cache, we track pending and in-progress pod allocations, and remove them if a pod is
+   removed from the cache.
+
+## Plugin Implementation
+
+To expose the entirety of YuniKorn functionality, we implement three of the Scheduling Framework Plugins:
+
+### PreFilter
+
+PreFilter plugins are passed a reference to a Pod and return either `Success` or `Unschedulable`, depending
+on whether that pod should be considered for scheduling.
+
+For the YuniKorn implementation, we first check the Pod to see if we have an associated `applicationID`

Review comment:
       applicationID -> applicationId

##########
File path: docs/design/scheduler_plugin.md
##########
@@ -0,0 +1,112 @@
+---
+id: scheduler_plugin
+title: K8s Scheduler Plugin
+---
+
+<!--
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*      http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+-->
+
+## Background
+
+YuniKorn (on Kubernetes) has traditionally been implemented from the ground up as a complete Kubernetes scheduler.
+This has allowed us to innovate rapidly, but it is not without problems: we call into non-public Kubernetes
+source-code APIs in numerous places, with varying levels of code stability, which sometimes requires very
+disruptive code changes when we switch to a new Kubernetes release.
+
+Ideally, we should be able to take advantage of enhancements to new Kubernetes releases automatically.
+Using the plugin model enables us to enhance the Kubernetes scheduling logic with YuniKorn features.
+This also helps keep YuniKorn compatible with new Kubernetes releases with minimal effort.
+
+Additionally, it is desirable in many cases to allow non-batch workloads to bypass the YuniKorn scheduling
+functionality and use default scheduling logic. However, we have no way to do that today as the default
+scheduling functionality is not present in the YuniKorn scheduler binary.
+
+Since Kubernetes 1.19, the Kubernetes project has provided a stable API for the
+[Scheduling Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/),
+which exposes a set of well-defined extension points. Plugins implement one or more of these extension
+points and are then compiled into a scheduler binary that contains both the default scheduler and the
+plugin code, configured to call into the plugins during the normal scheduling flow.
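+
+To make the extension-point model concrete, the `PreFilter` extension point (which the YuniKorn plugin
+implements, as described later in this document) has roughly the following shape. This is paraphrased from
+the 1.22-era framework and may differ in other releases; the authoritative definitions live in the upstream
+`k8s.io/kubernetes/pkg/scheduler/framework` package:
+
+```go
+// Paraphrased from k8s.io/kubernetes/pkg/scheduler/framework (circa Kubernetes
+// 1.22); consult the upstream package for the exact, version-specific types.
+type Plugin interface {
+    Name() string
+}
+
+type PreFilterPlugin interface {
+    Plugin
+    // PreFilter is called at the beginning of the scheduling cycle for a Pod;
+    // returning an Unschedulable status stops the pod from being scheduled.
+    PreFilter(ctx context.Context, state *CycleState, p *v1.Pod) *Status
+    PreFilterExtensions() PreFilterExtensions
+}
+```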
+
+## Design
+
+We have added a scheduler plugin to the k8s-shim codebase which can be used to build a Kubernetes
+scheduler binary that includes YuniKorn functionality as well as the default scheduler functionality,
+significantly improving the compatibility of YuniKorn with upstream Kubernetes and allowing deployment of
+YuniKorn as the sole scheduler in a cluster with much greater confidence.
+
+Separate docker images are created for the scheduler. The traditional YuniKorn scheduler is built as
+`scheduler-{version}` while the new plugin version is built as `scheduler-plugin-{version}`. Either can be
+deployed interchangeably into a Kubernetes cluster with the same helm charts by customizing the scheduler
+image to deploy.
+
+## Entrypoints
+
+The existing shim `main()` method has been relocated to `pkg/cmd/shim/main.go`, and a new `main()` method
+under `pkg/cmd/schedulerplugin/main.go` has been created. This method instantiates the default Kubernetes
+scheduler and adds YuniKorn to it as a set of plugins. It also modifies the default scheduler CLI argument
+parsing to add YuniKorn-specific options. When the YuniKorn plugin is created, it will launch an instance
+of the existing shim / core schedulers in the background, sync all informers, and start the normal YuniKorn
+scheduling loop.
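+
+The sketch below illustrates what such an entrypoint can look like. It is not the actual
+`pkg/cmd/schedulerplugin/main.go`; the plugin name and the `newYuniKornPlugin` factory are placeholders,
+and the factory signature shown matches the 1.22-era framework:
+
+```go
+package main
+
+import (
+    "os"
+
+    "k8s.io/apimachinery/pkg/runtime"
+    "k8s.io/kubernetes/cmd/kube-scheduler/app"
+    "k8s.io/kubernetes/pkg/scheduler/framework"
+)
+
+// yuniKornPlugin is a stand-in for the real plugin type, which would also
+// implement the PreFilter and other extension points described below.
+type yuniKornPlugin struct{}
+
+func (p *yuniKornPlugin) Name() string { return "YuniKornPlugin" }
+
+// newYuniKornPlugin matches the 1.22-era PluginFactory signature. The real
+// factory would also launch the shim and core schedulers in the background,
+// sync all informers, and start the normal YuniKorn scheduling loop.
+func newYuniKornPlugin(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
+    return &yuniKornPlugin{}, nil
+}
+
+func main() {
+    // Build the stock kube-scheduler command with one extra out-of-tree plugin
+    // registered; YuniKorn-specific CLI flags would also be wired in here.
+    command := app.NewSchedulerCommand(
+        app.WithPlugin("YuniKornPlugin", newYuniKornPlugin),
+    )
+    if err := command.Execute(); err != nil {
+        os.Exit(1)
+    }
+}
+```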
+
+## Shim Scheduler Changes
+
+In order to cooperate with the default scheduler, the shim needs to operate slightly differently when in
+plugin mode. These differences include:
+
+ - In `postTaskAllocated()`, we don’t actually bind the Pod or Volumes, as this is the responsibility of
+   the default scheduler framework. Instead, we track the Node that YuniKorn allocated for the Pod in an
+   internal map, dispatch a new `BindTaskEvent`, and record a `QuotaApproved` event on the Pod.
+ - In `postTaskBound()`, we update the Pod’s state to `QuotaApproved` as this will cause the default scheduler
+   to re-evaluate the pod for scheduling (more on this below).
+ - In the scheduler cache, we track pending and in-progress pod allocations and remove them if a pod is
+   removed from the cache (a rough sketch of this tracking follows below).
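+
+The allocation tracking mentioned above could be sketched roughly as follows. This is illustrative only,
+not the actual shim scheduler cache; the `allocationTracker` type and its method names are hypothetical:
+
+```go
+package cache
+
+import "sync"
+
+// allocationTracker records which Node YuniKorn chose for each Pod, both while
+// the allocation is still pending and once binding is in progress.
+type allocationTracker struct {
+    sync.RWMutex
+    pending    map[string]string // pod UID -> node name, decided by the core but not yet picked up
+    inProgress map[string]string // pod UID -> node name, currently being bound by the default scheduler
+}
+
+func newAllocationTracker() *allocationTracker {
+    return &allocationTracker{
+        pending:    make(map[string]string),
+        inProgress: make(map[string]string),
+    }
+}
+
+// addPending is called from postTaskAllocated() instead of binding directly.
+func (t *allocationTracker) addPending(podUID, nodeName string) {
+    t.Lock()
+    defer t.Unlock()
+    t.pending[podUID] = nodeName
+}
+
+// startBinding moves a pod from pending to in-progress once the default
+// scheduler picks it up again.
+func (t *allocationTracker) startBinding(podUID string) (string, bool) {
+    t.Lock()
+    defer t.Unlock()
+    node, ok := t.pending[podUID]
+    if ok {
+        delete(t.pending, podUID)
+        t.inProgress[podUID] = node
+    }
+    return node, ok
+}
+
+// remove drops all tracking for a pod that has been removed from the cache.
+func (t *allocationTracker) remove(podUID string) {
+    t.Lock()
+    defer t.Unlock()
+    delete(t.pending, podUID)
+    delete(t.inProgress, podUID)
+}
+```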
+
+## Plugin Implementation
+
+To expose the entirety of YuniKorn functionality, we implement three of the Scheduling Framework Plugins:
+
+### PreFilter
+
+PreFilter plugins are passed a reference to a Pod and return either `Success` or `Unschedulable`, depending
+on whether that pod should be considered for scheduling.
+
+For the YuniKorn implementation, we first check the Pod to see if we have an associated `applicationID`
+defined. If not, we immediately return `Success`, which allows us to delegate to the default scheduler for
+non-batch workloads.
+
+If an `applicationId` is present, then we determine if there is a pending pod allocation (meaning the
+YuniKorn core has already decided to allocate the pod). If so, we return `Success`, otherwise `Unschedulable`.
+Additionally, if an in-progress allocation is detected (indicating that we have previously attempted to
+schedule this pod), we trigger a `TaskFailed` event for the YuniKorn core so that the pod will be sent back
+for scheduling later.
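+
+Putting the above together, the PreFilter logic can be sketched roughly as follows, continuing the
+illustrative `yuniKornPlugin` stub from the Entrypoints section. This is not the actual plugin source; the
+helpers (`applicationIDFromPod`, `hasPendingAllocation`, `hasInProgressAllocation`, `notifyTaskFailed`) are
+hypothetical, and the method signature matches the 1.22-era framework:
+
+```go
+func (p *yuniKornPlugin) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) *framework.Status {
+    if applicationIDFromPod(pod) == "" {
+        // Not a YuniKorn-managed pod: delegate to the default scheduling logic.
+        return framework.NewStatus(framework.Success, "")
+    }
+
+    podUID := string(pod.UID)
+    if p.hasInProgressAllocation(podUID) {
+        // We have already handed this pod to the default scheduler once, so
+        // send it back to the YuniKorn core to be scheduled again later.
+        p.notifyTaskFailed(pod)
+        return framework.NewStatus(framework.Unschedulable, "pod returned to YuniKorn core")
+    }
+
+    if p.hasPendingAllocation(podUID) {
+        // The YuniKorn core has decided to allocate this pod; let it proceed.
+        return framework.NewStatus(framework.Success, "")
+    }
+
+    // The YuniKorn core has not allocated this pod yet.
+    return framework.NewStatus(framework.Unschedulable, "pod not yet allocated by YuniKorn")
+}
+```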

Review comment:
       I find this quite confusing. If we have previously attempted to schedule a pod, and in PreFilter we see it again, we directly fail the allocation. Why do we consider this a failure? What if the actual binding just took some time and is still ongoing?
   
   In other words, under what circumstances can we see a pod again in the PreFilter() plugin when it was already scheduled by the default scheduler?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@yunikorn.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org