Posted to issues@spark.apache.org by "Yan (JIRA)" <ji...@apache.org> on 2017/07/30 10:09:00 UTC
[jira] [Created] (SPARK-21576) Spark caching difference between 2.0.2 and 2.1.1
Yan created SPARK-21576:
---------------------------
Summary: Spark caching difference between 2.0.2 and 2.1.1
Key: SPARK-21576
URL: https://issues.apache.org/jira/browse/SPARK-21576
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 2.1.1
Reporter: Yan
Priority: Minor
Hi,
I asked a question on Stack Overflow and was recommended to open a JIRA issue.
I'm a bit confused by Spark's caching behavior. I want to compute a dependent dataset (b), cache it, and then unpersist the source dataset (a). Here is my code:
{code:java}
val spark = SparkSession.builder().appName("test").master("local[4]").getOrCreate()
import spark.implicits._
val a = spark.createDataset(Seq(("a", 1), ("b", 2), ("c", 3)))
a.createTempView("a")
a.cache
println(s"Is a cached: ${spark.catalog.isCached("a")}")
val b = a.filter(x => x._2 < 3)
b.createTempView("b")
// calling action
b.cache.first
println(s"Is b cached: ${spark.catalog.isCached("b")}")
spark.catalog.uncacheTable("a")
println(s"Is b cached after a was unpersisted: ${spark.catalog.isCached("b")}")
{code}
When using Spark 2.0.2 it works as expected:
{code:java}
Is a cached: true
Is b cached: true
Is b cached after a was unpersisted: true
{code}
But on 2.1.1, b is uncached along with a:
{code:java}
Is a cached: true
Is b cached: true
Is b cached after a was unpersisted: false
{code}
In reality, dataset a is a complex database query, and computing b from a requires heavy processing; but once b is computed and cached, a is no longer needed and I want to free its memory.
How can I achieve this in 2.1.1?
Thank you.
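One possible workaround (an unverified sketch, not confirmed by the Spark developers): rebuild b on top of its own materialized RDD before caching it, so that b's logical plan no longer references a and uncaching a cannot cascade to it. The name {{bDetached}} and the temp view {{b_detached}} below are illustrative:
{code:java}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("test").master("local[4]").getOrCreate()
import spark.implicits._

val a = spark.createDataset(Seq(("a", 1), ("b", 2), ("c", 3)))
a.createTempView("a")
a.cache

val b = a.filter(x => x._2 < 3)
// Wrap b's rows in a fresh DataFrame: createDataFrame with an explicit
// schema takes an RDD[Row], which cuts the logical lineage back to a.
val bDetached = spark.createDataFrame(b.toDF.rdd, b.toDF.schema)
bDetached.createTempView("b_detached")
// Calling an action to materialize the cache
bDetached.cache.first

spark.catalog.uncacheTable("a")
println(s"Is b_detached cached: ${spark.catalog.isCached("b_detached")}")
{code}
The trade-off is that going through the RDD bypasses any further Catalyst optimization across the a-to-b boundary, and the rows are recomputed once when the detached copy is first cached.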