Posted to issues@spark.apache.org by "Max Schmidt (JIRA)" <ji...@apache.org> on 2016/04/01 14:56:25 UTC

[jira] [Commented] (SPARK-14328) Leaking JobProgressListener on master

    [ https://issues.apache.org/jira/browse/SPARK-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221655#comment-15221655 ] 

Max Schmidt commented on SPARK-14328:
-------------------------------------

Okay, I think I have found my problem: the default setting for "spark.ui.retainedJobs" is far too high for our scenario. This issue may be closed.
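
For reference, a minimal sketch of lowering that limit from the application side, assuming SparkConf-based configuration (the app name and values are illustrative choices; the same keys can also be set in spark-defaults.conf):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RetainedJobsExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                .setAppName("retained-jobs-example")   // hypothetical app name
                // Both properties default to 1000; retaining far less UI
                // history keeps per-application listener state small.
                .set("spark.ui.retainedJobs", "50")
                .set("spark.ui.retainedStages", "100");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... application logic ...
            sc.stop();
        }
    }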

> Leaking JobProgressListener on master
> --------------------------------------
>
>                 Key: SPARK-14328
>                 URL: https://issues.apache.org/jira/browse/SPARK-14328
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Max Schmidt
>            Priority: Critical
>
> There is obviously a leak: JobProgressListener instances are held in a map on the master even though the jobs that made the master add the listeners have all finished.
> A heap dump shows that after submitting 37 applications:
> 37 instances of "org.apache.spark.ui.jobs.JobProgressListener", loaded by "sun.misc.Launcher$AppClassLoader @ 0x6c0018b38" occupy 159.574.728 (50,37%) bytes.
> This leads to an OutOfMemoryError on the master after a while.
> Workaround: restart the master.
> Scenario to reproduce: submit a Spark application with the JavaSparkContext.
> Setting spark.ui.enabled to false didn't change anything.
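
For anyone trying to reproduce the scenario above, a minimal sketch of the kind of submission loop involved (the master URL, class name, and the trivial job are illustrative assumptions, not taken from the report):

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class LeakRepro {
        public static void main(String[] args) {
            for (int i = 0; i < 37; i++) {
                SparkConf conf = new SparkConf()
                    .setAppName("leak-repro-" + i)
                    .setMaster("spark://master-host:7077"); // hypothetical standalone master
                JavaSparkContext sc = new JavaSparkContext(conf);
                sc.parallelize(Arrays.asList(1, 2, 3)).count(); // run one trivial job
                // The application finishes here, yet the report observed one
                // retained JobProgressListener per application on the master.
                sc.stop();
            }
        }
    }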


