Posted to commits@zeppelin.apache.org by mo...@apache.org on 2016/03/15 19:32:44 UTC

incubator-zeppelin git commit: [ZEPPELIN-735] Remove spark.executor.memory default to 512m

Repository: incubator-zeppelin
Updated Branches:
  refs/heads/master 848f4fba6 -> e2122bc9c


[ZEPPELIN-735] Remove spark.executor.memory default to 512m

### What is this PR for?
On startup, the Spark interpreter honors whatever is set for spark.executor.memory in spark-defaults.conf, but the Interpreter page shows a default of 512m for this property. If you restart a running Spark interpreter from that page, the new SparkContext will use this 512m default instead of the value it had previously pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code allows spark.executor.memory to default to whatever value is set in spark-defaults.conf, falling back to Spark's built-in default (which has been 1g, not 512m, for the last several Spark releases).
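For context, the getSystemDefault helper referenced in the diff below presumably resolves each interpreter property by checking an environment variable first, then a Java system property, then the supplied fallback. The following is a minimal sketch of that assumed lookup order, not the verbatim Zeppelin implementation:

```java
// Minimal sketch of the assumed lookup order behind getSystemDefault:
// environment variable first, then Java system property, then the
// supplied fallback. The actual method in SparkInterpreter.java may
// differ in detail.
public static String getSystemDefault(
    String envName, String propertyName, String defaultValue) {
  if (envName != null && !envName.isEmpty()) {
    String envValue = System.getenv(envName);
    if (envValue != null) {
      return envValue;
    }
  }
  if (propertyName != null && !propertyName.isEmpty()) {
    String propValue = System.getProperty(propertyName);
    if (propValue != null) {
      return propValue;
    }
  }
  return defaultValue;
}
```

With the fallback argument changed from "512m" to "", an unset spark.executor.memory stays blank in the interpreter settings, so Spark's own configuration chain (spark-defaults.conf, then its built-in default) takes effect.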

### What type of PR is it?
Improvement

### Todos
N/A

### What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-735

### How should this be tested?
* Set spark.executor.memory to some value in spark-defaults.conf (say, 5120m); a sample entry is shown after this list
* Run a Spark paragraph in Zeppelin
  * The Spark application will correctly use the spark.executor.memory value from spark-defaults.conf (both before and after this change).
* View the Interpreter page in the Zeppelin UI
  * Before this change: spark.executor.memory will be displayed as 512m instead of the value in spark-defaults.conf
  * After this change: spark.executor.memory will be blank
* Restart the Spark interpreter from this page and run a Spark paragraph
  * Before this change: the new Spark application will incorrectly use the spark.executor.memory=512m value shown on the Interpreter page
  * After this change: the new Spark application will correctly use the spark.executor.memory value from spark-defaults.conf
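For reference, the first step amounts to adding a line like the following to conf/spark-defaults.conf (5120m is just the example value from the steps above):

```
spark.executor.memory 5120m
```

To check which value actually won, a Spark paragraph such as sc.getConf.get("spark.executor.memory") should print the setting the running SparkContext is using.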

### Screenshots (if appropriate)
N/A

### Questions:
* Do the license files need an update? NO
* Are there breaking changes for older versions? NO
* Does this need documentation? NO

Author: Jonathan Kelly <jo...@amazon.com>

Closes #774 from ejono/ZEPPELIN-735 and squashes the following commits:

07d83a0 [Jonathan Kelly] Remove spark.executor.memory default to 512m


Project: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/commit/e2122bc9
Tree: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/tree/e2122bc9
Diff: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/diff/e2122bc9

Branch: refs/heads/master
Commit: e2122bc9c1691e5b49cc612b592ff23108c0a431
Parents: 848f4fb
Author: Jonathan Kelly <jo...@amazon.com>
Authored: Fri Mar 11 11:29:39 2016 -0800
Committer: Lee moon soo <mo...@apache.org>
Committed: Tue Mar 15 11:36:55 2016 -0700

----------------------------------------------------------------------
 .../src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/e2122bc9/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java
----------------------------------------------------------------------
diff --git a/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java b/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java
index c39ef31..5bd50ce 100644
--- a/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java
+++ b/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java
@@ -93,7 +93,7 @@ public class SparkInterpreter extends Interpreter {
           getSystemDefault("MASTER", "spark.master", "local[*]"),
           "Spark master uri. ex) spark://masterhost:7077")
         .add("spark.executor.memory",
-          getSystemDefault(null, "spark.executor.memory", "512m"),
+          getSystemDefault(null, "spark.executor.memory", ""),
           "Executor memory per worker instance. ex) 512m, 32g")
         .add("spark.cores.max",
           getSystemDefault(null, "spark.cores.max", ""),