Posted to reviews@spark.apache.org by ceedubs <gi...@git.apache.org> on 2018/08/02 14:26:31 UTC

[GitHub] spark pull request #21971: [SPARK-24947] [Core] aggregateAsync and foldAsync

Github user ceedubs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21971#discussion_r207246593
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/AsyncRDDActions.scala ---
    @@ -61,6 +62,36 @@ class AsyncRDDActions[T: ClassTag](self: RDD[T]) extends Serializable with Logging {
           (index, data) => results(index) = data, results.flatten.toSeq)
       }
     
    +
    +  /**
    +   * Returns a future of an aggregation across the RDD.
    +   *
    +   * @see [[RDD.aggregate]] which is the synchronous version of this method.
    +   */
    +  def aggregateAsync[U](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): FutureAction[U] =
    +    self.withScope {
    +      val cleanSeqOp = self.context.clean(seqOp)
    +      val cleanCombOp = self.context.clean(combOp)
    +      val combBinOp = new BinaryOperator[U] {
    --- End diff --
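
    For context, a rough usage sketch of what the proposed method would let callers write, going only by the signature shown in the diff (the local app setup below is illustrative, not part of this PR):

        import scala.util.{Failure, Success}
        import scala.concurrent.ExecutionContext.Implicits.global
        import org.apache.spark.{SparkConf, SparkContext}

        val sc = new SparkContext(new SparkConf().setAppName("agg-async-sketch").setMaster("local[*]"))
        val rdd = sc.parallelize(1 to 1000)

        // Kick off the aggregation without blocking the caller; the result arrives in the callback.
        val futureSum = rdd.aggregateAsync(0)(_ + _, _ + _)
        futureSum.onComplete {
          case Success(total) => println(s"sum = $total")
          case Failure(e)     => e.printStackTrace()
        }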
    
    Is there a cleaner way to integrate with `BinaryOperator` before Scala 2.12?

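    For comparison, a minimal sketch of the two options (the `asBinaryOperator` helper name is made up for illustration; nothing here beyond `java.util.function.BinaryOperator` from the JDK):

        import java.util.function.BinaryOperator

        // Before Scala 2.12: the Java functional interface has to be implemented
        // with an explicit anonymous class, as in the diff above.
        def asBinaryOperator[U](op: (U, U) => U): BinaryOperator[U] =
          new BinaryOperator[U] {
            override def apply(a: U, b: U): U = op(a, b)
          }

        // Scala 2.12+: SAM conversion lets a plain function literal satisfy the interface.
        val plus: BinaryOperator[Int] = (a: Int, b: Int) => a + b
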

---
