Posted to reviews@spark.apache.org by "cloud-fan (via GitHub)" <gi...@apache.org> on 2023/07/07 20:43:39 UTC

[GitHub] [spark] cloud-fan commented on a diff in pull request #41347: [SPARK-43838][SQL] Fix subquery on single table with having clause can't be optimized

cloud-fan commented on code in PR #41347:
URL: https://github.com/apache/spark/pull/41347#discussion_r1256451094


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DeduplicateRelations.scala:
##########
@@ -105,6 +105,21 @@ object DeduplicateRelations extends Rule[LogicalPlan] {
         (m, false)
       }
 
+    case p @ Project(_, child) if p.resolved && p.projectList.forall(_.isInstanceOf[Alias]) =>

Review Comment:
   I'm reading the doc of the `collectConflictPlans` function in this class. I think the problem is that leaf nodes are not the only plan nodes that can produce new attributes, and we need to handle all of them. `Project` is not the only one; let's follow the set of plan nodes that `collectConflictPlans` already handles.
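
   For illustration only, here is a minimal self-contained Scala sketch (toy classes, not Spark's actual `Alias`/`Project`/`ExprId` implementations) of why a `Project` whose `projectList` is all `Alias`es can produce new attributes: each `Alias` carries its own expression ID, so two copies of the same subplan conflict unless the aliases are re-instantiated with fresh IDs, which is the kind of handling `DeduplicateRelations` performs:

   ```scala
   // Toy model illustrating attribute-ID conflicts between duplicated plans.
   object DedupSketch {
     private var counter = 0L
     def freshId(): Long = { counter += 1; counter }

     case class Attribute(name: String, exprId: Long)
     // An Alias defines a NEW attribute: it has its own exprId,
     // independent of the child attribute it wraps.
     case class Alias(child: Attribute, name: String, exprId: Long)
     case class Project(projectList: Seq[Alias])

     // Re-create each Alias with a fresh ExprId, roughly the way
     // DeduplicateRelations re-instantiates conflicting outputs (simplified).
     def deduplicate(p: Project): Project =
       Project(p.projectList.map(a => a.copy(exprId = freshId())))
   }
   ```

   Duplicating a plan without this step would make both copies expose the same expression IDs, which is exactly the conflict a self-join over the same (sub)query runs into.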



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org