Posted to commits@spark.apache.org by we...@apache.org on 2021/04/23 07:56:16 UTC
[spark] branch master updated (20d68dc -> fdccd88)
This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.
from 20d68dc [SPARK-35159][SQL][DOCS] Extract hive format doc
add fdccd88 Revert "[SPARK-34581][SQL] Don't optimize out grouping expressions from aggregate expressions without aggregate function"
No new revisions were added by this update.
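For context on the reverted patch: SPARK-34581 dealt with queries where a grouping expression is reused inside a select-list expression that contains no aggregate function (e.g. `GROUP BY a IS NULL` with `NOT (a IS NULL)` in the select list), and the optimizer must not simplify that reuse away. The sketch below is a hedged illustration of that *class of query* only — it uses stdlib sqlite3, not Spark, the table `t` and its rows are invented for the demo, and it shows the expected query semantics rather than anything about Spark's optimizer internals.

```python
# Hypothetical illustration (not from the commit): a grouping expression
# (a IS NULL) reappears, negated, in the select list without being wrapped
# in an aggregate function. Correct engines must keep it tied to the groups.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (None, 20), (2, 30)])

rows = conn.execute(
    "SELECT NOT (a IS NULL), COUNT(b) FROM t GROUP BY a IS NULL ORDER BY 1"
).fetchall()
print(rows)  # one row per group: the NULL-a group and the non-NULL-a group
```

Two groups come back: the row with `a` NULL (negated grouping expression is 0) and the two rows with non-NULL `a` (negated grouping expression is 1).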
Summary of changes:
...lity.scala => UpdateAttributeNullability.scala} | 23 +-----
.../sql/catalyst/expressions/AliasHelper.scala | 2 +-
.../expressions/aggregate/interfaces.scala | 8 --
.../spark/sql/catalyst/expressions/grouping.scala | 19 -----
.../sql/catalyst/optimizer/ComplexTypes.scala | 11 ++-
.../EnforceGroupingReferencesInAggregates.scala | 34 ---------
.../spark/sql/catalyst/optimizer/Optimizer.scala | 53 +++-----------
.../spark/sql/catalyst/optimizer/subquery.scala | 5 +-
.../spark/sql/catalyst/planning/patterns.scala | 12 +--
.../plans/logical/basicLogicalOperators.scala | 85 ++--------------------
.../optimizer/RemoveRedundantAggregatesSuite.scala | 2 +-
.../sql/catalyst/optimizer/complexTypesSuite.scala | 12 ++-
.../sql/execution/python/ExtractPythonUDFs.scala | 16 +++-
.../test/resources/sql-tests/inputs/group-by.sql | 9 ---
.../resources/sql-tests/results/group-by.sql.out | 24 +-----
15 files changed, 68 insertions(+), 247 deletions(-)
rename sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/{UpdateNullability.scala => UpdateAttributeNullability.scala} (74%)
delete mode 100644 sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/EnforceGroupingReferencesInAggregates.scala
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org