Posted to reviews@spark.apache.org by cloud-fan <gi...@git.apache.org> on 2017/10/16 04:38:32 UTC

[GitHub] spark pull request #19317: [SPARK-22098][CORE] Add new method aggregateByKey...

Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19317#discussion_r144753719
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala ---
    @@ -180,6 +180,56 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
        * as in scala.TraversableOnce. The former operation is used for merging values within a
        * partition, and the latter is used for merging values between partitions. To avoid memory
        * allocation, both of these functions are allowed to modify and return their first argument
    +   * instead of creating a new U. This method differs from the ordinary "aggregateByKey"
    +   * method in that it directly returns a map to the driver, rather than an RDD. This will also perform
    +   * the merging locally on each mapper before sending results to a reducer, similarly to a
    --- End diff --
    
    doesn't `aggregateByKey` perform map-side combine?
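    
    For context, a minimal sketch (object name and sample data are illustrative, not from the PR) showing that the existing `aggregateByKey` already combines on the map side, and that a driver-side map is otherwise obtainable via `collectAsMap`:
    
        import org.apache.spark.sql.SparkSession
        import scala.collection.mutable.ArrayBuffer
    
        object AggregateByKeyDemo {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .master("local[*]")
              .appName("aggregateByKey-demo")
              .getOrCreate()
            val sc = spark.sparkContext
    
            val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4)), numSlices = 2)
    
            // aggregateByKey already combines on the map side: seqOp merges values
            // within each partition, combOp merges the per-partition results.
            val sums = pairs.aggregateByKey(0)(_ + _, _ + _)
    
            // As the quoted doc notes, seqOp/combOp may mutate and return their first
            // argument; accumulating into a mutable buffer avoids per-record allocation.
            val grouped = pairs.aggregateByKey(ArrayBuffer.empty[Int])(
              (buf, v) => { buf += v; buf },
              (b1, b2) => { b1 ++= b2; b1 })
    
            // Returning a Map to the driver, as the proposed method does, is otherwise
            // expressible today as aggregateByKey followed by collectAsMap.
            println(sums.collectAsMap())     // e.g. Map(a -> 4, b -> 6)
            println(grouped.collectAsMap())
    
            spark.stop()
          }
        }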


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org