Posted to dev@zeppelin.apache.org by "Jonathan Kelly (JIRA)" <ji...@apache.org> on 2016/03/11 23:21:18 UTC

[jira] [Created] (ZEPPELIN-736) Remove spark.executor.memory default to 512m

Jonathan Kelly created ZEPPELIN-736:
---------------------------------------

             Summary: Remove spark.executor.memory default to 512m
                 Key: ZEPPELIN-736
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-736
             Project: Zeppelin
          Issue Type: Improvement
    Affects Versions: 0.5.6
            Reporter: Jonathan Kelly
            Assignee: Jonathan Kelly
            Priority: Trivial
             Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for spark.executor.memory in spark-defaults.conf upon startup, but if you look at the Interpreter page, you'll see that the property has a default of 512m there. If you restart a running Spark interpreter from that page, the new SparkContext will use this 512m default for spark.executor.memory instead of the value it had previously pulled from spark-defaults.conf.
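
To make the override concrete, here is a minimal, self-contained sketch (not Zeppelin's actual SparkInterpreter code; the class name and property-copying loop are illustrative) of how copying interpreter settings onto a SparkConf shadows what spark-defaults.conf provided:

{code:java}
import org.apache.spark.SparkConf;

import java.util.Properties;

public class OverrideSketch {
  public static void main(String[] args) {
    // Under spark-submit, entries from spark-defaults.conf arrive as
    // "spark.*" system properties; new SparkConf() (loadDefaults=true)
    // picks them up. Simulate a 4g setting from spark-defaults.conf:
    System.setProperty("spark.executor.memory", "4g");
    SparkConf conf = new SparkConf();

    // Interpreter settings as shown on the Interpreter page, carrying
    // the baked-in 512m default:
    Properties interpreterProps = new Properties();
    interpreterProps.setProperty("spark.executor.memory", "512m");

    // Copying every interpreter property onto the conf clobbers the
    // value that came from spark-defaults.conf:
    for (String key : interpreterProps.stringPropertyNames()) {
      conf.set(key, interpreterProps.getProperty(key));
    }

    System.out.println(conf.get("spark.executor.memory")); // 512m, not 4g
  }
}
{code}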

Removing this 512m default from the SparkInterpreter code will allow spark.executor.memory to default to whatever value may be set in spark-defaults.conf, falling back to Spark's built-in default (which, incidentally, has been 1g rather than 512m for the last several Spark versions).
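
A hedged sketch of the fix described above (the helper name is hypothetical, not the actual patch): forward a property onto the SparkConf only when the user supplied a value, so Spark's normal resolution order applies.

{code:java}
import org.apache.spark.SparkConf;

class PropertyForwarding {
  // Illustrative helper: set the key only for a non-empty user value, so
  // resolution falls through to spark-defaults.conf and then to Spark's
  // built-in 1g default instead of a baked-in 512m.
  static void applyIfSet(SparkConf conf, String key, String value) {
    if (value != null && !value.trim().isEmpty()) {
      conf.set(key, value);
    }
  }
}
{code}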



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)