Posted to issues@spark.apache.org by "Rekha Joshi (JIRA)" <ji...@apache.org> on 2015/10/06 06:05:26 UTC
[jira] [Comment Edited] (SPARK-10942) Not all cached RDDs are unpersisted
[ https://issues.apache.org/jira/browse/SPARK-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944465#comment-14944465 ]
Rekha Joshi edited comment on SPARK-10942 at 10/6/15 4:04 AM:
--------------------------------------------------------------
[~pnpritchard] Hi. Did you try rdd2.unpersist()?
was (Author: rekhajoshm):
[~pnpritchard] Hi. Did you try rdd.unpersist()?
> Not all cached RDDs are unpersisted
> -----------------------------------
>
> Key: SPARK-10942
> URL: https://issues.apache.org/jira/browse/SPARK-10942
> Project: Spark
> Issue Type: Bug
> Components: Streaming
> Reporter: Nick Pritchard
>
> I have a Spark Streaming application that caches RDDs inside of a {{transform}} closure. Looking at the Spark UI, it seems that most of these RDDs are unpersisted after the batch completes, but not all.
> I have copied a minimal reproducible example below to highlight the problem. I run this and monitor the Spark UI "Storage" tab. The example generates and caches 30 RDDs, and I see most get cleaned up. However, in the end, some still remain cached. There is some randomness involved, because I see different RDDs remain cached on each run.
> I have marked this as Major because I haven't been able to work around it and it is a memory leak for my application. I tried setting {{spark.cleaner.ttl}} but that did not change anything.
> {code}
> val inputRDDs = mutable.Queue.tabulate(30) { i =>
>   sc.parallelize(Seq(i))
> }
> val input: DStream[Int] = ssc.queueStream(inputRDDs)
> val output = input.transform { rdd =>
>   if (rdd.isEmpty()) {
>     rdd
>   } else {
>     val rdd2 = rdd.map(identity)
>     rdd2.setName(rdd.first().toString)
>     rdd2.cache()
>     val rdd3 = rdd2.map(identity)
>     rdd3
>   }
> }
> output.print()
> ssc.start()
> ssc.awaitTermination()
> {code}
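One way to avoid depending on Spark's non-deterministic, GC-driven cleanup is to keep references to the RDDs cached inside the transform closure and unpersist them explicitly once enough newer batches have arrived. A minimal sketch of that bookkeeping, with the Spark-specific part factored out so the eviction logic stands alone (the `CacheTracker` helper and its names are hypothetical, not part of Spark's API):

```scala
import scala.collection.mutable

// Hypothetical helper: remembers the last `maxCached` tracked items and
// invokes `unpersist` on anything older. In a streaming job you would
// call `track(rdd2)` right after `rdd2.cache()` inside the transform
// closure, e.g.
//   val tracker = new CacheTracker[RDD[Int]](2, _.unpersist(blocking = false))
// so stale cached RDDs are freed deterministically, batch by batch.
class CacheTracker[T](maxCached: Int, unpersist: T => Unit) {
  private val cached = mutable.Queue.empty[T]

  // Register a newly cached item; evict and unpersist the oldest
  // entries once the bound is exceeded.
  def track(item: T): Unit = {
    cached.enqueue(item)
    while (cached.size > maxCached) {
      unpersist(cached.dequeue())
    }
  }

  // Unpersist everything still tracked, e.g. on shutdown.
  def clear(): Unit = {
    while (cached.nonEmpty) unpersist(cached.dequeue())
  }
}
```

This keeps the cached set bounded regardless of when the ContextCleaner runs; the trade-off is that the closure must know how many batches may still need each cached RDD.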
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org