Posted to commits@spark.apache.org by an...@apache.org on 2015/07/02 08:05:53 UTC

spark git commit: [SPARK-8769] [TRIVIAL] [DOCS] toLocalIterator should mention it results in many jobs

Repository: spark
Updated Branches:
  refs/heads/master d14338eaf -> 15d41cc50


[SPARK-8769] [TRIVIAL] [DOCS] toLocalIterator should mention it results in many jobs

Author: Holden Karau <ho...@pigscanfly.ca>

Closes #7171 from holdenk/SPARK-8769-toLocalIterator-documentation-improvement and squashes the following commits:

97ddd99 [Holden Karau] Add note


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/15d41cc5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/15d41cc5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/15d41cc5

Branch: refs/heads/master
Commit: 15d41cc501f5fa7ac82c4a6741e416bb557f610a
Parents: d14338e
Author: Holden Karau <ho...@pigscanfly.ca>
Authored: Wed Jul 1 23:05:45 2015 -0700
Committer: Andrew Or <an...@databricks.com>
Committed: Wed Jul 1 23:05:45 2015 -0700

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/rdd/RDD.scala | 4 ++++
 1 file changed, 4 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/15d41cc5/core/src/main/scala/org/apache/spark/rdd/RDD.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/rdd/RDD.scala b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
index 10610f4..cac6e3b 100644
--- a/core/src/main/scala/org/apache/spark/rdd/RDD.scala
+++ b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
@@ -890,6 +890,10 @@ abstract class RDD[T: ClassTag](
    * Return an iterator that contains all of the elements in this RDD.
    *
    * The iterator will consume as much memory as the largest partition in this RDD.
+   *
+   * Note: this results in multiple Spark jobs, and if the input RDD is the result
+   * of a wide transformation (e.g. a join with different partitioners), the input
+   * RDD should be cached first to avoid recomputing it.
    */
   def toLocalIterator: Iterator[T] = withScope {
     def collectPartition(p: Int): Array[T] = {
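
For context, here is a minimal sketch (not part of the commit) illustrating the
behavior the added note describes. The object name, the local master setting,
and the toy data are illustrative assumptions, not anything from the Spark
codebase:

import org.apache.spark.{SparkConf, SparkContext}

object ToLocalIteratorExample {
  def main(args: Array[String]): Unit = {
    // Illustrative local context; in a real application the master and app
    // name would come from your deployment.
    val sc = new SparkContext(
      new SparkConf().setAppName("toLocalIterator-demo").setMaster("local[2]"))

    // A wide transformation: joining RDDs shuffles data across partitions.
    val left  = sc.parallelize(1 to 1000).map(i => (i % 10, i))
    val right = sc.parallelize(1 to 1000).map(i => (i % 10, i * 2))
    val joined = left.join(right)

    // toLocalIterator runs one Spark job per partition. Without caching, each
    // of those jobs would recompute the join from scratch, so cache first.
    joined.cache()

    // Only one partition's worth of data is held in driver memory at a time,
    // unlike collect(), which materializes the whole RDD on the driver.
    joined.toLocalIterator.take(5).foreach(println)

    sc.stop()
  }
}

The trade-off versus collect() is driver memory for extra scheduling: each
partition is fetched by its own job, which is exactly why the note recommends
caching wide-transformation results before iterating.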


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org