Posted to dev@zeppelin.apache.org by "Andrzej Bialecki (JIRA)" <ji...@apache.org> on 2015/09/09 11:47:46 UTC

[jira] [Created] (ZEPPELIN-295) Property "spark.executor.memory" doesn't have effect

Andrzej Bialecki  created ZEPPELIN-295:
------------------------------------------

             Summary: Property "spark.executor.memory" doesn't have effect
                 Key: ZEPPELIN-295
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-295
             Project: Zeppelin
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.6.0
         Environment: external Spark 1.4.1 standalone cluster, OSX 10.10.5, Java 7. Zeppelin built from sources b4b4f5521a57fd3b0902b5e3ab0e228c10b8bac5
            Reporter: Andrzej Bialecki 


It appears that the "spark.executor.memory" property is not passed to the SparkContext when it is created in SparkInterpreter.

Steps to repeat:
* edit zeppelin-env.sh to add
{{export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=1G -Dspark.cores.max=2"}}
* start Zeppelin and execute some paragraphs.
* the Spark Master UI shows that the app's "Memory per node" is still the 512M default instead of the configured 1G

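For reference, the reproduction setup in {{conf/zeppelin-env.sh}} would look like the sketch below (the values are the ones from the steps above; the restart command is the stock Zeppelin daemon script and is shown only as a comment):

```shell
# conf/zeppelin-env.sh -- pass Spark properties to the interpreter JVM
# as -D system properties via ZEPPELIN_JAVA_OPTS
export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=1G -Dspark.cores.max=2"

# After editing, restart Zeppelin so the options take effect, e.g.:
#   bin/zeppelin-daemon.sh restart
echo "$ZEPPELIN_JAVA_OPTS"
```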
After a little digging I found the code that reads this option from the environment or system properties, at SparkInterpreter:97. Hard-coding a value there (and rebuilding / restarting) still didn't work, and setting the property around line 269 didn't work either. Only setting it just before the return from createSparkContext() (around line 311) actually worked, i.e. the application got the right amount of memory.

So it seems that this property is overwritten somewhere between these lines.
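Based on that observation, a workaround consistent with the report would be to re-apply the property on the fully built SparkConf immediately before constructing the context. A hedged sketch in Scala (the method shape and variable names are assumptions for illustration, not the actual Zeppelin code; only the final set() mirrors the "around line 311" fix described above):

```scala
import org.apache.spark.{SparkConf, SparkContext}

def createSparkContext(conf: SparkConf): SparkContext = {
  // ... interpreter-specific configuration happens here and may
  // overwrite values read earlier from ZEPPELIN_JAVA_OPTS ...

  // Re-apply the executor memory last, so nothing between the
  // initial read and context creation can clobber it.
  sys.props.get("spark.executor.memory").foreach { mem =>
    conf.set("spark.executor.memory", mem)
  }
  new SparkContext(conf)
}
```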



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)