Posted to commits@zeppelin.apache.org by pd...@apache.org on 2021/09/02 14:24:14 UTC
[zeppelin] branch branch-0.10 updated: [ZEPPELIN-5466] Can not specify the spark image from the interpreter settings
This is an automated email from the ASF dual-hosted git repository.
pdallig pushed a commit to branch branch-0.10
in repository https://gitbox.apache.org/repos/asf/zeppelin.git
The following commit(s) were added to refs/heads/branch-0.10 by this push:
new d50739f [ZEPPELIN-5466] Can not specify the spark image from the interpreter settings
d50739f is described below
commit d50739f8854adbd868bac314a2c2bd76646c2023
Author: rick <ri...@rickdeMacBook-Pro.local>
AuthorDate: Fri Jul 23 15:17:48 2021 +0800
[ZEPPELIN-5466] Can not specify the spark image from the interpreter settings
### What is this PR for?
When Zeppelin launches a Spark interpreter pod in k8s mode, it builds the **spark submit options** through [buildSparkSubmitOptions()](https://github.com/apache/zeppelin/blob/master/zeppelin-plugins/launcher/k8s-standard/src/main/java/org/apache/zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java#L372-L393).
However, the option `--conf spark.kubernetes.container.image=` always takes its value from `zConf.getK8sSparkContainerImage()`. Note that although the interpreter properties later override the values through
```
// interpreter properties overrides the values
k8sProperties.putAll(Maps.fromProperties(properties));
```
the spark submit options have already been built by that point. So if the user sets `spark.kubernetes.container.image` in the interpreter settings, it is not reflected in the `spark submit options`.
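The precedence logic of the fix can be sketched as follows (a minimal, self-contained sketch; the class and method names are illustrative, only the property key `zeppelin.k8s.spark.container.image` comes from the patch below):

```java
import java.util.Properties;

public class SparkImagePrecedence {
    // Property key introduced by the patch below.
    private static final String SPARK_CONTAINER_IMAGE = "zeppelin.k8s.spark.container.image";

    // Prefer the image from the interpreter properties; fall back to the
    // cluster-wide default resolved from the Zeppelin configuration (zConf).
    static String resolveSparkImage(Properties interpreterProps, String defaultSparkImage) {
        return interpreterProps.containsKey(SPARK_CONTAINER_IMAGE)
            ? interpreterProps.getProperty(SPARK_CONTAINER_IMAGE)
            : defaultSparkImage;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // No interpreter-level override: the configured default wins.
        System.out.println(resolveSparkImage(props, "spark:3.1.2"));

        // Interpreter setting present: it takes precedence over the default.
        props.setProperty(SPARK_CONTAINER_IMAGE, "myrepo/spark:custom");
        System.out.println(resolveSparkImage(props, "spark:3.1.2"));
    }
}
```

Because the lookup happens inside `buildSparkSubmitOptions()` itself, the resolved image lands in the `--conf spark.kubernetes.container.image=` option before the submit string is assembled.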
### What type of PR is it?
[Bug Fix]
### Todos
* [ ] - Task
### What is the Jira issue?
* <https://issues.apache.org/jira/browse/ZEPPELIN-5466>
### How should this be tested?
* CI pass and manually tested
### Screenshots (if appropriate)
### Questions:
* Do the license files need an update? No
* Are there breaking changes for older versions? No
* Does this need documentation? No
Author: rick <ri...@rickdeMacBook-Pro.local>
Closes #4185 from rickchengx/ZEPPELIN-5466 and squashes the following commits:
9f270de35 [rick] [ZEPPELIN-5466] Can not specify the spark image from the interpreter settings
(cherry picked from commit 979820ec522fdc654f7ce857df60fb4795b459e5)
Signed-off-by: Philipp Dallig <ph...@gmail.com>
---
.../zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/zeppelin-plugins/launcher/k8s-standard/src/main/java/org/apache/zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java b/zeppelin-plugins/launcher/k8s-standard/src/main/java/org/apache/zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java
index 912c6b1..bfd016f 100644
--- a/zeppelin-plugins/launcher/k8s-standard/src/main/java/org/apache/zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java
+++ b/zeppelin-plugins/launcher/k8s-standard/src/main/java/org/apache/zeppelin/interpreter/launcher/K8sRemoteInterpreterProcess.java
@@ -75,6 +75,7 @@ public class K8sRemoteInterpreterProcess extends RemoteInterpreterManagedProcess
private static final String SPARK_DRIVER_MEMORY = "spark.driver.memory";
private static final String SPARK_DRIVER_MEMORY_OVERHEAD = "spark.driver.memoryOverhead";
private static final String SPARK_DRIVER_CORES = "spark.driver.cores";
+ private static final String SPARK_CONTAINER_IMAGE = "zeppelin.k8s.spark.container.image";
private static final String ENV_SERVICE_DOMAIN = "SERVICE_DOMAIN";
private static final String ENV_ZEPPELIN_HOME = "ZEPPELIN_HOME";
@@ -383,7 +384,8 @@ public class K8sRemoteInterpreterProcess extends RemoteInterpreterManagedProcess
options.append(" --conf spark.kubernetes.namespace=").append(getNamespace());
options.append(" --conf spark.executor.instances=1");
options.append(" --conf spark.kubernetes.driver.pod.name=").append(getPodName());
- options.append(" --conf spark.kubernetes.container.image=").append(sparkImage);
+ String sparkContainerImage = properties.containsKey(SPARK_CONTAINER_IMAGE) ? properties.getProperty(SPARK_CONTAINER_IMAGE) : sparkImage;
+ options.append(" --conf spark.kubernetes.container.image=").append(sparkContainerImage);
options.append(" --conf spark.driver.bindAddress=0.0.0.0");
options.append(" --conf spark.driver.host=").append(getInterpreterPodDnsName());
options.append(" --conf spark.driver.port=").append(getSparkDriverPort());