Posted to dev@zeppelin.apache.org by "Mori[A]rty (JIRA)" <ji...@apache.org> on 2017/04/17 12:04:41 UTC

[jira] [Created] (ZEPPELIN-2414) Memory leak under scoped mode of SparkInterpreter caused by inappropriately setting Thread.contextClassLoader

Mori[A]rty created ZEPPELIN-2414:
------------------------------------

             Summary: Memory leak under scoped mode of SparkInterpreter caused by inappropriately setting Thread.contextClassLoader
                 Key: ZEPPELIN-2414
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-2414
             Project: Zeppelin
          Issue Type: Bug
          Components: Interpreters
    Affects Versions: 0.6.2
         Environment: {quote}
jdk version: jdk1.7.0_67

spark interpreter env:
export MASTER=local\[4\]
export SPARK_SUBMIT_OPTIONS="--driver-memory 2G"
{quote}
            Reporter: Mori[A]rty
             Fix For: 0.6.3


When using the scoped mode of SparkInterpreter, after a few repetitions of these three steps (create a notebook -> run a paragraph -> remove the notebook), the heap of the RemoteInterpreterServer process rapidly fills up to 100%.
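
The title points at Thread.contextClassLoader. Below is a minimal sketch of the kind of pattern that could produce this symptom, assuming each scoped session installs its own class loader on the long-lived interpreter thread and never restores the previous one; the class and method names here are illustrative, not Zeppelin's actual code.

{code:java}
// Illustrative sketch only -- not Zeppelin source code.
import java.net.URL;
import java.net.URLClassLoader;

public class ScopedSessionSketch {

  // Called once per scoped interpreter session (i.e. per notebook).
  void openSession(URL[] sessionClassOutput) {
    ClassLoader sessionLoader =
        new URLClassLoader(sessionClassOutput, getClass().getClassLoader());

    // Suspected leak: the long-lived interpreter thread now holds a strong
    // reference to this session's class loader. When the notebook is removed,
    // the loader (and every class and REPL object it loaded) stays reachable
    // through the thread, so repeated create -> run -> remove cycles keep
    // filling the old generation until the heap is exhausted.
    Thread.currentThread().setContextClassLoader(sessionLoader);
  }
}
{code}

Scoped mode creates one interpreter instance per note inside the same interpreter process, which would explain why every create/remove cycle leaves one more unreleasable session behind.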

For example, in my local environment, RemoteInterpreterServer's max heap size is 2 GB. After I repeatedly ran the simple paragraph
{quote}%spark 
sc{quote}
15 times (each time in a new notebook, deleting the notebook after running), RemoteInterpreterServer's heap had no free space left and the 15th execution of the paragraph never finished.

Heap occupation at that point (apparently jstat -gcutil output: utilization percentages of the survivor, eden, old, and permanent generations, followed by GC counts and accumulated GC times in seconds):
{quote}
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT   
  0.00   0.00 100.00  99.98  49.38     19    2.304  1093  798.293  800.597
{quote}
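
This issue does not include a patch, but the usual way to avoid pinning a temporary class loader on a pooled thread is to restore the previous loader in a finally block. A generic sketch of that pattern (not Zeppelin's actual fix):

{code:java}
import java.util.concurrent.Callable;

public final class ContextClassLoaderUtil {

  // Runs a task with a temporary context class loader and always restores
  // the previous one, so long-lived threads do not retain the temporary loader.
  public static <T> T callWith(ClassLoader temporary, Callable<T> task) throws Exception {
    Thread current = Thread.currentThread();
    ClassLoader previous = current.getContextClassLoader();
    current.setContextClassLoader(temporary);
    try {
      return task.call();
    } finally {
      current.setContextClassLoader(previous);
    }
  }
}
{code}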



