Posted to commits@zeppelin.apache.org by zj...@apache.org on 2021/07/12 09:22:34 UTC
[zeppelin] branch branch-0.9 updated: [ZEPPELIN-5443] Allow the interpreter pod to request the gpu resources under k8s mode
This is an automated email from the ASF dual-hosted git repository.
zjffdu pushed a commit to branch branch-0.9
in repository https://gitbox.apache.org/repos/asf/zeppelin.git
The following commit(s) were added to refs/heads/branch-0.9 by this push:
new c39216c [ZEPPELIN-5443] Allow the interpreter pod to request the gpu resources under k8s mode
c39216c is described below
commit c39216ce1275272c822da8ad62e7d08a9238f5a1
Author: rick <ri...@rickdeMacBook-Pro.local>
AuthorDate: Wed Jul 7 17:47:23 2021 +0800
[ZEPPELIN-5443] Allow the interpreter pod to request the gpu resources under k8s mode
### What is this PR for?
Currently, the interpreter pod created from `k8s/interpreter/100-interpreter-spec.yaml` cannot request GPU resources. This PR therefore adds two properties:
* `zeppelin.k8s.interpreter.gpu.type`, to specify the type of GPU resource, e.g., `nvidia.com/gpu`.
* `zeppelin.k8s.interpreter.gpu.nums`, to set the number of GPUs requested.
Users can set these two properties directly in the interpreter settings, such as:
```
%spark.conf
zeppelin.k8s.interpreter.gpu.type nvidia.com/gpu
zeppelin.k8s.interpreter.gpu.nums 1
```
This makes the interpreter pod get scheduled onto a node with the requested GPU resources.
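With the two properties above set, the rendered pod spec should contain a GPU entry under the container's resource limits, roughly as follows (illustrative values; assumes the `nvidia.com/gpu` device plugin is installed on the node):

```yaml
# Illustrative rendered output of 100-interpreter-spec.yaml (values are examples)
resources:
  limits:
    cpu: "1"              # from zeppelin.k8s.interpreter.cores
    nvidia.com/gpu: "1"   # from zeppelin.k8s.interpreter.gpu.type / .nums
```

Note that Kubernetes treats extended resources such as GPUs as limits only; the scheduler places the pod on a node advertising that resource.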
### What type of PR is it?
[Improvement]
### Todos
* [ ] - Task
### What is the Jira issue?
* <https://issues.apache.org/jira/browse/ZEPPELIN-5443>
### How should this be tested?
* CI passes, and manually tested
### Screenshots (if appropriate)
### Questions:
* Do the license files need updating? No
* Are there breaking changes for older versions? No
* Does this need documentation? No
Author: rick <ri...@rickdeMacBook-Pro.local>
Closes #4168 from rickchengx/ZEPPELIN-5443 and squashes the following commits:
2d29d849c1 [rick] [ZEPPELIN-5443] Allow the interpreter pod to request the gpu resources under k8s mode
(cherry picked from commit d1474ae0f59234ddc0a95168936b6679c1b36986)
Signed-off-by: Jeff Zhang <zj...@apache.org>
---
k8s/interpreter/100-interpreter-spec.yaml | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/k8s/interpreter/100-interpreter-spec.yaml b/k8s/interpreter/100-interpreter-spec.yaml
index dc72fab..b35486f 100644
--- a/k8s/interpreter/100-interpreter-spec.yaml
+++ b/k8s/interpreter/100-interpreter-spec.yaml
@@ -72,6 +72,15 @@ spec:
{# limits.memory is not set because of a potential OOM-Killer. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits #}
limits:
cpu: "{{zeppelin.k8s.interpreter.cores}}"
+ {% if zeppelin.k8s.interpreter.gpu.type is defined and zeppelin.k8s.interpreter.gpu.nums is defined %}
+ {{zeppelin.k8s.interpreter.gpu.type}}: "{{zeppelin.k8s.interpreter.gpu.nums}}"
+ {% endif %}
+ {% else %}
+ {% if zeppelin.k8s.interpreter.gpu.type is defined and zeppelin.k8s.interpreter.gpu.nums is defined %}
+ resources:
+ limits:
+ {{zeppelin.k8s.interpreter.gpu.type}}: "{{zeppelin.k8s.interpreter.gpu.nums}}"
+ {% endif %}
{% endif %}
{% if zeppelin.k8s.interpreter.group.name == "spark" %}
volumeMounts:
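For readers puzzled by the hunk above (the plain-text archive stripped the diff's leading whitespace), the patched Jinja logic in the template works out approximately as sketched below. The assumption here is that the pre-existing outer `{% if %}` guards the resources block emitted for CPU (e.g. on `zeppelin.k8s.interpreter.cores`); when that branch is taken, the GPU limit is appended to the existing `limits:` map, and otherwise a standalone `resources:`/`limits:` block is emitted only when both GPU properties are set. Indentation is illustrative, not copied from the actual file:

```yaml
{# Sketch of the patched template logic; outer condition and indentation are assumptions #}
{% if zeppelin.k8s.interpreter.cores is defined %}
resources:
  limits:
    cpu: "{{zeppelin.k8s.interpreter.cores}}"
    {% if zeppelin.k8s.interpreter.gpu.type is defined and zeppelin.k8s.interpreter.gpu.nums is defined %}
    {{zeppelin.k8s.interpreter.gpu.type}}: "{{zeppelin.k8s.interpreter.gpu.nums}}"
    {% endif %}
{% else %}
{% if zeppelin.k8s.interpreter.gpu.type is defined and zeppelin.k8s.interpreter.gpu.nums is defined %}
resources:
  limits:
    {{zeppelin.k8s.interpreter.gpu.type}}: "{{zeppelin.k8s.interpreter.gpu.nums}}"
{% endif %}
{% endif %}
```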