Posted to reviews@spark.apache.org by marmbrus <gi...@git.apache.org> on 2014/05/07 22:45:55 UTC

[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

GitHub user marmbrus opened a pull request:

    https://github.com/apache/spark/pull/683

    [SQL] Improve SparkSQL Aggregates

    * Add native min/max (previously delegated to Hive).
    * Handle nulls correctly in Avg and Sum.
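
    The null semantics the second bullet targets can be sketched outside
    Spark. The following is an illustrative model of standard SQL aggregate
    behavior in plain Python, not the PR's Scala code: SUM and AVG skip NULL
    inputs, and an all-NULL (or empty) input yields NULL rather than 0.

    ```python
    # Hypothetical sketch of SQL null semantics for SUM/AVG (not Spark code).
    def sql_sum(values):
        # NULLs are eliminated before summing; no non-null input means NULL out.
        non_null = [v for v in values if v is not None]
        return sum(non_null) if non_null else None

    def sql_avg(values):
        # The denominator counts only non-null inputs.
        non_null = [v for v in values if v is not None]
        return sum(non_null) / len(non_null) if non_null else None

    print(sql_avg([1, None, 3]))  # 2.0  -- NULL excluded from the denominator
    print(sql_sum([None, None]))  # None -- all-NULL input yields NULL, not 0
    ```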

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/marmbrus/spark aggFixes

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/683.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #683
    
----
commit 64fe30b0f58e12a139fe53f1b33eb8b45ef6e9a8
Author: Michael Armbrust <mi...@databricks.com>
Date:   2014-05-07T20:45:13Z

    Improve SparkSQL Aggregates
    * Add native min/max (was using hive before).
    * Handle nulls correctly in Avg and Sum.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42489550
  
    All automated tests passed.
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14784/



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42481057
  
     Merged build triggered. 



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42513745
  
    Merged.



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by marmbrus <gi...@git.apache.org>.
Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/683#discussion_r12404117
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregates.scala ---
    @@ -86,6 +86,67 @@ abstract class AggregateFunction
       override def newInstance() = makeCopy(productIterator.map { case a: AnyRef => a }.toArray)
     }
     
    +case class Min(child: Expression) extends PartialAggregate with trees.UnaryNode[Expression] {
    +  override def references = child.references
    +  override def nullable = child.nullable
    +  override def dataType = child.dataType
    +  override def toString = s"MIN($child)"
    +
    +  override def asPartial: SplitEvaluation = {
    +    val partialMin = Alias(Min(child), "PartialMin")()
    +    SplitEvaluation(Min(partialMin.toAttribute), partialMin :: Nil)
    +  }
    +
    +  override def newInstance() = new MinFunction(child, this)
    +}
    +
    +case class MinFunction(expr: Expression, base: AggregateExpression) extends AggregateFunction {
    --- End diff --
    
    Good point, though this is not an issue in the code gen version.



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by marmbrus <gi...@git.apache.org>.
Github user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42498775
  
    @pwendell, this should probably go in 1.0.



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/683



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/683#discussion_r12404003
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregates.scala ---
    @@ -86,6 +86,67 @@ abstract class AggregateFunction
       override def newInstance() = makeCopy(productIterator.map { case a: AnyRef => a }.toArray)
     }
     
    +case class Min(child: Expression) extends PartialAggregate with trees.UnaryNode[Expression] {
    +  override def references = child.references
    +  override def nullable = child.nullable
    +  override def dataType = child.dataType
    +  override def toString = s"MIN($child)"
    +
    +  override def asPartial: SplitEvaluation = {
    +    val partialMin = Alias(Min(child), "PartialMin")()
    +    SplitEvaluation(Min(partialMin.toAttribute), partialMin :: Nil)
    +  }
    +
    +  override def newInstance() = new MinFunction(child, this)
    +}
    +
    +case class MinFunction(expr: Expression, base: AggregateExpression) extends AggregateFunction {
    --- End diff --
    
    this is unrelated to this pr - but I just realized the way we are storing the aggregation buffer in Spark SQL uses much more memory than needed, because there are two extra pointers to expr/base, which is identical for every tuple. 
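
    The buffer-layout observation above can be illustrated with a small
    sketch. This is plain Python mimicking the shape of MinFunction, not
    code from the PR: every per-group aggregation buffer stores two fields,
    expr and base, that are identical across all groups, so N groups carry
    2*N references where one shared pair would do.

    ```python
    # Hypothetical sketch of the per-buffer overhead described above.
    class MinFunction:
        def __init__(self, expr, base):
            self.expr = expr          # same object in every buffer
            self.base = base          # same object in every buffer
            self.current_min = None   # the only per-group state that varies

        def update(self, value):
            # Skip nulls; keep the smallest non-null value seen so far.
            if value is not None and (self.current_min is None
                                      or value < self.current_min):
                self.current_min = value

    expr, base = object(), object()
    buffers = [MinFunction(expr, base) for _ in range(3)]  # one per group

    buffers[0].update(5)
    buffers[0].update(None)
    buffers[0].update(2)
    print(buffers[0].current_min)                # 2
    print(all(b.expr is expr for b in buffers))  # True: field duplicated per buffer
    ```

    Hoisting the shared expr/base pair out of the per-group object, so it is
    stored once per operator rather than once per buffer, would remove that
    duplication.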



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42481073
  
    Merged build started. 



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42485556
  
    LGTM



[GitHub] spark pull request: [SQL] Improve SparkSQL Aggregates

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/683#issuecomment-42489548
  
    Merged build finished. All automated tests passed.

