Posted to commits@spark.apache.org by da...@apache.org on 2016/04/20 02:29:33 UTC
spark git commit: [SPARK-14717] [PYTHON] Scala, Python APIs for Dataset.unpersist differ in default blocking value
Repository: spark
Updated Branches:
refs/heads/master a685e65a4 -> 366414235
[SPARK-14717] [PYTHON] Scala, Python APIs for Dataset.unpersist differ in default blocking value
## What changes were proposed in this pull request?
Change the default value of the `blocking` parameter of `DataFrame.unpersist` in PySpark from True to False, matching the Scala API.
## How was this patch tested?
unit tests, manual tests
jkbradley davies
Author: felixcheung <fe...@hotmail.com>
Closes #12507 from felixcheung/pyunpersist.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/36641423
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/36641423
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/36641423
Branch: refs/heads/master
Commit: 3664142350afb6bf40a8bcb3508b56670603dae4
Parents: a685e65
Author: felixcheung <fe...@hotmail.com>
Authored: Tue Apr 19 17:29:28 2016 -0700
Committer: Davies Liu <da...@gmail.com>
Committed: Tue Apr 19 17:29:28 2016 -0700
----------------------------------------------------------------------
python/pyspark/sql/dataframe.py | 4 +++-
python/pyspark/sql/tests.py | 2 +-
2 files changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/36641423/python/pyspark/sql/dataframe.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index b4fa836..328bda6 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -326,9 +326,11 @@ class DataFrame(object):
return self
@since(1.3)
- def unpersist(self, blocking=True):
+ def unpersist(self, blocking=False):
"""Marks the :class:`DataFrame` as non-persistent, and remove all blocks for it from
memory and disk.
+
+ .. note:: `blocking` default has changed to False to match Scala in 2.0.
"""
self.is_cached = False
self._jdf.unpersist(blocking)
http://git-wip-us.apache.org/repos/asf/spark/blob/36641423/python/pyspark/sql/tests.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/tests.py b/python/pyspark/sql/tests.py
index e4f79c9..d4c221d 100644
--- a/python/pyspark/sql/tests.py
+++ b/python/pyspark/sql/tests.py
@@ -362,7 +362,7 @@ class SQLTests(ReusedPySparkTestCase):
# cache and checkpoint
self.assertFalse(df.is_cached)
df.persist()
- df.unpersist()
+ df.unpersist(True)
df.cache()
self.assertTrue(df.is_cached)
self.assertEqual(2, df.count())
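The behavior change in the patch above can be sketched with a stand-in class (a hypothetical mock, not the real PySpark `DataFrame`): after this commit, calling `unpersist()` with no argument forwards `blocking=False` to the JVM side, and callers who want the old blocking behavior must pass `True` explicitly, as the updated test does.

```python
class MockJavaDataFrame:
    """Stand-in for the JVM-side DataFrame; records the blocking flag it receives."""

    def __init__(self):
        self.last_blocking = None

    def unpersist(self, blocking):
        self.last_blocking = blocking


class DataFrame:
    """Minimal sketch of the patched Python wrapper, not the real PySpark class."""

    def __init__(self, jdf):
        self.is_cached = False
        self._jdf = jdf

    def persist(self):
        self.is_cached = True
        return self

    def unpersist(self, blocking=False):  # default was True before SPARK-14717
        self.is_cached = False
        self._jdf.unpersist(blocking)
        return self


jdf = MockJavaDataFrame()
df = DataFrame(jdf).persist()

df.unpersist()       # new default: non-blocking, matching Scala
print(jdf.last_blocking)  # -> False

df.unpersist(True)   # explicit True restores the old blocking behavior
print(jdf.last_blocking)  # -> True
```

Because the change only flips a keyword default, existing callers that already pass `blocking` explicitly are unaffected; only bare `unpersist()` calls pick up the new non-blocking semantics.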