Posted to dev@zeppelin.apache.org by "Lee moon soo (JIRA)" <ji...@apache.org> on 2018/10/31 20:19:00 UTC

[jira] [Created] (ZEPPELIN-3840) Zeppelin on Kubernetes

Lee moon soo created ZEPPELIN-3840:
--------------------------------------

             Summary: Zeppelin on Kubernetes
                 Key: ZEPPELIN-3840
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3840
             Project: Zeppelin
          Issue Type: New Feature
            Reporter: Lee moon soo
            Assignee: Lee moon soo
             Fix For: 0.9.0


h2. Goal

Make Zeppelin run on Kubernetes environment.

 - Run the Zeppelin daemon as a Deployment, with RBAC permission to create/delete Pods for interpreters

 - Run standard interpreters as Pods

 - Run the Spark interpreter with a Spark cluster deployed in the Kubernetes cluster
h2. How it works
 # Zeppelin-daemon is deployed in Kubernetes with the necessary Role (RBAC).
e.g. kubectl apply -f ${ZEPPELIN_HOME}/k8s/zeppelin.yaml
 # Zeppelin-daemon automatically configures itself to use K8sStandardInterpreterLauncher and K8sSparkInterpreterLauncher instead of [StandardInterpreterLauncher|https://github.com/apache/zeppelin/blob/master/zeppelin-plugins/launcher/standard/src/main/java/org/apache/zeppelin/interpreter/launcher/StandardInterpreterLauncher.java] and [SparkInterpreterLauncher|https://github.com/apache/zeppelin/blob/master/zeppelin-plugins/launcher/spark/src/main/java/org/apache/zeppelin/interpreter/launcher/SparkInterpreterLauncher.java].
 ## K8sStandardInterpreterLauncher runs an interpreter as a Pod.
 ## K8sSparkInterpreterLauncher runs the Spark interpreter with a Spark cluster in the Kubernetes cluster.

So users can start using Zeppelin on Kubernetes with zero configuration.
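The two steps above can be sketched as shell commands. This is a hypothetical sketch: the manifest path follows the description above, and the label selector `app=zeppelin` is an assumption, not something defined in this issue.

```shell
# Deploy the Zeppelin daemon, its Role/RoleBinding (RBAC), and Service
# from the bundled manifest (path taken from the description above):
kubectl apply -f ${ZEPPELIN_HOME}/k8s/zeppelin.yaml

# Verify the Deployment and any interpreter Pods it launches
# (the "app=zeppelin" label is an assumed convention for this sketch):
kubectl get deployments,pods -l app=zeppelin
```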

 

h2. Customize the interpreter pod

Users can easily modify or extend zeppelin.yaml to fit their needs (e.g. mount a volume to persist configuration and notebooks). To provide the same customization capability for interpreter pods, Zeppelin stores interpreter pod spec (yaml) files in the directory "${ZEPPELIN_HOME}/k8s/interpreter/" and loads all yaml files found there. So users can modify the existing pod spec files or add more.
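The customization workflow described above might look like this. The spec file names are hypothetical; only the directory path comes from the description.

```shell
# List the interpreter pod spec templates Zeppelin loads
# (directory path taken from the description above):
ls ${ZEPPELIN_HOME}/k8s/interpreter/

# Start a customized spec by copying an existing one
# (both file names here are hypothetical examples):
cp ${ZEPPELIN_HOME}/k8s/interpreter/interpreter-spec.yaml \
   ${ZEPPELIN_HOME}/k8s/interpreter/interpreter-spec-custom.yaml

# Edit the copy, e.g. to add a volumeMount that persists configuration,
# then restart the interpreter so the new spec is picked up.
```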
h2. Spark interpreter in Kubernetes

The Spark interpreter not only runs itself in Kubernetes as a Pod, but also creates a Spark cluster. spark-submit can deploy a Spark cluster in Kubernetes as well; see [https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html]. There is also a PR we can check: [https://github.com/apache/zeppelin/pull/2637].
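For reference, the cluster-mode invocation from the linked Spark 2.3 documentation looks like the sketch below. The API server host/port and the container image name are placeholders that must be filled in per cluster.

```shell
# spark-submit against a Kubernetes master (pattern from the Spark 2.3 docs).
# <k8s-apiserver-host>, <port>, and <spark-image> are placeholders.
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

The K8sSparkInterpreterLauncher would generate an equivalent configuration on the user's behalf, which is what makes the zero-configuration experience possible.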



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)