Posted to commits@spark.apache.org by li...@apache.org on 2015/04/01 15:34:45 UTC

spark git commit: [SPARK-6608] [SQL] Makes DataFrame.rdd a lazy val

Repository: spark
Updated Branches:
  refs/heads/master 0358b08db -> d36c5fca7


[SPARK-6608] [SQL] Makes DataFrame.rdd a lazy val

Before 1.3.0, `SchemaRDD.id` worked as a unique identifier for each `SchemaRDD`. In 1.3.0, unlike `SchemaRDD`, `DataFrame` is no longer an RDD, and `DataFrame.rdd` is a method that returns a new RDD instance on every call. Making `DataFrame.rdd` a lazy val should bring the unique identifier back.
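
A minimal REPL-style sketch of the behavior change (hypothetical session; `sqlContext` and the table name "src" are placeholders, not part of this patch):

    val df = sqlContext.table("src")  // any DataFrame; "src" is a placeholder
    // Before this patch (`rdd` was a def): every access built a fresh RDD,
    // so `df.rdd eq df.rdd` was false and `df.rdd.id` changed between calls.
    // After this patch (`rdd` is a lazy val): the RDD is memoized on first
    // access, so the expressions below hold.
    df.rdd eq df.rdd        // true: the same RDD instance is returned
    df.rdd.id == df.rdd.id  // true: a stable identifier, as with SchemaRDD.id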


Author: Cheng Lian <li...@databricks.com>

Closes #5265 from liancheng/spark-6608 and squashes the following commits:

7500968 [Cheng Lian] Updates javadoc
7f37d21 [Cheng Lian] Makes DataFrame.rdd a lazy val


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d36c5fca
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d36c5fca
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d36c5fca

Branch: refs/heads/master
Commit: d36c5fca7b9227c4c6e1b0c1455269b5fd8d4852
Parents: 0358b08
Author: Cheng Lian <li...@databricks.com>
Authored: Wed Apr 1 21:34:45 2015 +0800
Committer: Cheng Lian <li...@databricks.com>
Committed: Wed Apr 1 21:34:45 2015 +0800

----------------------------------------------------------------------
 sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/d36c5fca/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
index 5cd0a18..19cfa15 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -952,10 +952,12 @@ class DataFrame private[sql](
   /////////////////////////////////////////////////////////////////////////////
 
   /**
-   * Returns the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s.
+   * Represents the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s. Note that the RDD is
+   * memoized. Once called, it won't change even if you change any query planning related Spark SQL
+   * configurations (e.g. `spark.sql.shuffle.partitions`).
    * @group rdd
    */
-  def rdd: RDD[Row] = {
+  lazy val rdd: RDD[Row] = {
     // use a local variable to make sure the map closure doesn't capture the whole DataFrame
     val schema = this.schema
     queryExecution.executedPlan.execute().map(ScalaReflection.convertRowToScala(_, schema))
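
A short sketch of the memoization caveat documented in the new javadoc above (hypothetical session; `sqlContext`, the table "src", and the column "key" are placeholders):

    val grouped = sqlContext.table("src").groupBy("key").count()
    sqlContext.setConf("spark.sql.shuffle.partitions", "200")
    val r1 = grouped.rdd  // first access: plan executed with 200 shuffle partitions
    sqlContext.setConf("spark.sql.shuffle.partitions", "10")
    grouped.rdd eq r1     // true: still the memoized RDD; the new setting is ignored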

