Posted to issues@spark.apache.org by "Ryne Yang (JIRA)" <ji...@apache.org> on 2019/04/12 16:33:00 UTC

[jira] [Comment Edited] (SPARK-27434) memory leak in spark driver

    [ https://issues.apache.org/jira/browse/SPARK-27434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816432#comment-16816432 ] 

Ryne Yang edited comment on SPARK-27434 at 4/12/19 4:32 PM:
------------------------------------------------------------

[~shahid] yup:
 # start a Spark context with `spark.eventLog.enabled` set to true and the event log path on HDFS
 # do some work under that context
 # close the Spark context
 # repeat from step 1 (a minimal sketch of this loop is below)

 

after a few loops, the driver's memory allocation climbs steadily and it will produce a heap dump similar to the one I have attached.

the key here is to NOT exit the JVM, but to keep opening and closing Spark contexts within the same process.
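
For anyone trying to reproduce this, here is a minimal sketch of that loop in Scala. The `local[*]` master and the event-log path `hdfs://namenode:8020/spark-logs` are illustrative assumptions, not from the original report; substitute your own cluster settings.

{code:scala}
import org.apache.spark.sql.SparkSession

object EventLogLeakRepro {
  def main(args: Array[String]): Unit = {
    for (i <- 1 to 50) {
      // step 1: start a context with event logging enabled, writing to HDFS
      val spark = SparkSession.builder()
        .master("local[*]")                    // illustration only; any master works
        .appName(s"leak-repro-$i")
        .config("spark.eventLog.enabled", "true")
        .config("spark.eventLog.dir", "hdfs://namenode:8020/spark-logs") // hypothetical path
        .getOrCreate()

      // step 2: do some work under that context
      spark.sparkContext.parallelize(1 to 1000000).map(_ * 2L).count()

      // step 3: close the context -- the JVM itself stays up
      spark.stop()
    }
    // After the loop, take a heap dump of this still-running JVM, e.g.:
    //   jmap -dump:live,format=b,file=driver.hprof <pid>
    // and look for retained AsyncEventQueue allocations.
  }
}
{code}

Note that `spark.stop()` is called on every iteration but the process never exits, so anything the listener bus retains across contexts accumulates in the one JVM.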



> memory leak in spark driver
> ---------------------------
>
>                 Key: SPARK-27434
>                 URL: https://issues.apache.org/jira/browse/SPARK-27434
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>         Environment: OS: Centos 7
> JVM: 
> openjdk version "1.8.0_201"
> OpenJDK Runtime Environment (IcedTea 3.11.0) (Alpine 8.201.08-r0)
> OpenJDK 64-Bit Server VM (build 25.201-b08, mixed mode)
> Spark version: 2.4.0
>            Reporter: Ryne Yang
>            Priority: Major
>         Attachments: Screen Shot 2019-04-10 at 12.11.35 PM.png
>
>
> we got an OOM exception on the driver after it had completed multiple jobs (we are reusing the Spark context). 
> so we took a heap dump and ran leak analysis on it, which found 3.5GB of heap allocated under AsyncEventQueue. Possibly a leak. 
>  
> can someone take a look at this? 
> here is the heap analysis: 
> !Screen Shot 2019-04-10 at 12.11.35 PM.png!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org