Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/04/23 07:54:50 UTC

[GitHub] [spark] cloud-fan edited a comment on pull request #31913: [SPARK-34581][SQL] Don't optimize out grouping expressions from aggregate expressions without aggregate function

cloud-fan edited a comment on pull request #31913:
URL: https://github.com/apache/spark/pull/31913#issuecomment-825470374


   I think the new idea should also be simpler:
   1. `CollapseProject` only collapses a Project on top of an Aggregate, not an Aggregate on top of a Project, so no change is needed there. For example:
   ```
   scala> spark.range(10).select('id + 1 as 'a).groupBy('a).count().explain(true)
   == Parsed Logical Plan ==
   'Aggregate ['a], ['a, count(1) AS count#17L]
   +- Project [(id#11L + cast(1 as bigint)) AS a#13L]
      +- Range (0, 10, step=1, splits=Some(1))
   
   == Analyzed Logical Plan ==
   a: bigint, count: bigint
   Aggregate [a#13L], [a#13L, count(1) AS count#17L]
   +- Project [(id#11L + cast(1 as bigint)) AS a#13L]
      +- Range (0, 10, step=1, splits=Some(1))
   
   == Optimized Logical Plan ==
   Aggregate [a#13L], [a#13L, count(1) AS count#17L]
   +- Project [(id#11L + 1) AS a#13L]
      +- Range (0, 10, step=1, splits=Some(1))
   
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[a#13L], functions=[count(1)], output=[a#13L, count#17L])
      +- HashAggregate(keys=[a#13L], functions=[partial_count(1)], output=[a#13L, count#21L])
         +- Project [(id#11L + 1) AS a#13L]
            +- Range (0, 10, step=1, splits=1)
   ```
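   For contrast, when the Project sits on top of the Aggregate, `CollapseProject` does merge the projected expressions into the aggregate list. A quick way to see this (an illustrative query, not from this PR): the optimized logical plan should inline `count(1) + 1` into the Aggregate, with no separate Project node on top.
   ```
   scala> spark.range(10).groupBy('id).count().select('count + 1 as 'c).explain(true)
   ```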
   2. `PhysicalAggregation` needs an update to collapse the Project under an Aggregate during physical planning; a rough sketch of the idea follows.
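
   A minimal sketch of that collapse (illustrative only, not Spark's actual `PhysicalAggregation` code; the object and the `inline`/`inlineNamed` helpers are hypothetical names):
   ```
   import org.apache.spark.sql.catalyst.expressions._
   import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan, Project}

   object CollapseProjectUnderAggregate {
     def apply(plan: LogicalPlan): LogicalPlan = plan match {
       case Aggregate(groupingExprs, aggExprs, Project(projectList, child)) =>
         // Map each attribute produced by the Project to the expression it aliases.
         val aliasMap = AttributeMap(projectList.collect {
           case a: Alias => (a.toAttribute, a.child)
         })

         // Replace references to the Project's aliases with the aliased
         // expressions themselves, so the Project node can be dropped.
         def inline(e: Expression): Expression = e.transformUp {
           case attr: Attribute => aliasMap.getOrElse(attr, attr)
         }

         // The aggregate list must stay named: if inlining turned a bare
         // attribute into an unnamed expression, re-alias it under the
         // original name and expression id.
         def inlineNamed(ne: NamedExpression): NamedExpression = inline(ne) match {
           case named: NamedExpression => named
           case other => Alias(other, ne.name)(exprId = ne.exprId)
         }

         Aggregate(groupingExprs.map(inline), aggExprs.map(inlineNamed), child)

       case other => other
     }
   }
   ```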
   
   Anyway, I'm reverting this PR. @peter-toth, looking forward to your new PR!

