Posted to dev@zeppelin.apache.org by "rickcheng (Jira)" <ji...@apache.org> on 2021/07/02 09:59:00 UTC
[jira] [Created] (ZEPPELIN-5443) Allow the interpreter pod to request the gpu resources under k8s mode
rickcheng created ZEPPELIN-5443:
-----------------------------------
Summary: Allow the interpreter pod to request the gpu resources under k8s mode
Key: ZEPPELIN-5443
URL: https://issues.apache.org/jira/browse/ZEPPELIN-5443
Project: Zeppelin
Issue Type: Improvement
Components: Kubernetes
Affects Versions: 0.9.0
Reporter: rickcheng
When Zeppelin runs under k8s mode, it creates the interpreter pod from "k8s/interpreter/100-interpreter-spec.yaml". Unfortunately, that spec currently only allows the interpreter pod to request CPU and memory resources. Users who rely on deep learning libraries (e.g., TensorFlow) would like the interpreter pod to be scheduled onto a node *with gpu resources*.
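
A sketch of what the requested change might look like in the interpreter pod spec, using the standard Kubernetes extended-resource syntax (GPUs are requested via a vendor resource name such as nvidia.com/gpu and must be declared under limits). The {{...}} placeholders and any gpu-related property names below are illustrative assumptions, not settings that exist in Zeppelin today:

```yaml
# Hypothetical fragment of k8s/interpreter/100-interpreter-spec.yaml
spec:
  containers:
  - name: "{{zeppelin.k8s.interpreter.container.name}}"
    image: "{{zeppelin.k8s.interpreter.container.image}}"
    resources:
      requests:
        cpu: "{{zeppelin.k8s.interpreter.cpu}}"
        memory: "{{zeppelin.k8s.interpreter.memory}}"
      limits:
        # Extended resources like GPUs must be set under limits;
        # the scheduler then places the pod only on nodes that
        # advertise nvidia.com/gpu capacity (device plugin required).
        nvidia.com/gpu: 1
```

With such a limit in place, the kube-scheduler would only bind the interpreter pod to nodes exposing GPU capacity through the NVIDIA device plugin, which is the behavior this issue asks for.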
--
This message was sent by Atlassian Jira
(v8.3.4#803005)