Posted to commits@spark.apache.org by rx...@apache.org on 2016/02/11 02:00:57 UTC

spark git commit: [SPARK-13274] Fix Aggregator Links on GroupedDataset Scala API

Repository: spark
Updated Branches:
  refs/heads/master 0902e2028 -> 719973b05


[SPARK-13274] Fix Aggregator Links on GroupedDataset Scala API

Update the Aggregator scaladoc links to point to [[org.apache.spark.sql.expressions#Aggregator]].

Author: raela <ra...@databricks.com>

Closes #11158 from raelawang/master.
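
For context on why the bare links needed fixing: scaladoc broadly resolves a
bare [[Name]] against what is visible from the file being documented, so
[[Aggregator]] failed to link in the generated GroupedDataset API docs, while
a fully qualified reference always resolves ("#" selects a member of the
enclosing package) and can carry a display label. A minimal illustration of
the two forms (the Example class below is hypothetical, not from the commit):

----------------------------------------------------------------------
/**
 * Bare reference: [[Aggregator]] only links if that name is in scope here.
 *
 * Qualified reference with a display label, as used in this commit; it
 * renders as "Aggregator" and resolves regardless of local imports:
 * [[org.apache.spark.sql.expressions#Aggregator Aggregator]]
 */
class Example
----------------------------------------------------------------------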


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/719973b0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/719973b0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/719973b0

Branch: refs/heads/master
Commit: 719973b05ef6d6b9fbb83d76aebac6454ae84fad
Parents: 0902e20
Author: raela <ra...@databricks.com>
Authored: Wed Feb 10 17:00:54 2016 -0800
Committer: Reynold Xin <rx...@databricks.com>
Committed: Wed Feb 10 17:00:54 2016 -0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/sql/GroupedDataset.scala     | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/719973b0/sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala b/sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala
index c0e28f2..53cb8eb 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala
@@ -101,7 +101,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory.  However, users must take care to avoid materializing the whole iterator for a group
@@ -128,7 +129,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory.  However, users must take care to avoid materializing the whole iterator for a group
@@ -148,7 +150,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory.  However, users must take care to avoid materializing the whole iterator for a group
@@ -169,7 +172,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory.  However, users must take care to avoid materializing the whole iterator for a group

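The scaladoc hunks above steer per-key aggregation toward
org.apache.spark.sql.expressions.Aggregator, which supports partial
aggregation, unlike the mapGroups/flatMapGroups path that shuffles all the
data. For context, a minimal sketch of that pattern against the later
Spark 2.x form of this API (where GroupedDataset was renamed
KeyValueGroupedDataset and Dataset.groupBy(func) became groupByKey); the
Purchase record, the SumAmount object, and the session setup are illustrative
assumptions, not part of this commit.

----------------------------------------------------------------------
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical record type, used only for this sketch.
case class Purchase(user: String, amount: Double)

// A minimal Aggregator summing amounts per key. zero/reduce/merge/finish
// and the two encoders are the contract documented at
// org.apache.spark.sql.expressions.Aggregator (Spark 2.x signature).
object SumAmount extends Aggregator[Purchase, Double, Double] {
  def zero: Double = 0.0                                     // empty buffer
  def reduce(b: Double, p: Purchase): Double = b + p.amount  // fold in one row
  def merge(b1: Double, b2: Double): Double = b1 + b2        // combine partials
  def finish(b: Double): Double = b                          // final value
  def bufferEncoder: Encoder[Double] = Encoders.scalaDouble
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

object AggregatorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]").appName("agg-sketch").getOrCreate()
    import spark.implicits._

    val ds = Seq(Purchase("a", 1.0), Purchase("a", 2.5), Purchase("b", 4.0)).toDS()

    // Typed per-key aggregation; partial sums are merged before the final
    // shuffle, avoiding the full-shuffle path the scaladoc warns about.
    val totals = ds.groupByKey(_.user).agg(SumAmount.toColumn)
    totals.show()
    spark.stop()
  }
}
----------------------------------------------------------------------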
