Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/12/07 02:28:53 UTC

[GitHub] [spark] wankunde opened a new pull request, #38951: [SPARK-41416][SQL] Rewrite self join in in predicate to aggregate

wankunde opened a new pull request, #38951:
URL: https://github.com/apache/spark/pull/38951

   
   ### What changes were proposed in this pull request?
   
   Transforms a self join that is used only to feed an IN predicate, and that produces duplicate rows, into an aggregation.
   For an IN predicate, duplicate rows add no value; they are pure overhead.
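   
   To see why dropping duplicates is safe, note that `x IN (subquery)` only tests membership, so deduplicating the subquery side can never change the result. A minimal, hypothetical illustration (the tables below are not from the PR):
   
   ```sql
   -- Hypothetical tables `orders` and `returns`, used only to illustrate IN semantics.
   SELECT o.order_id
   FROM   orders o
   WHERE  o.order_id IN (SELECT order_id FROM returns);
   -- The predicate above is equivalent to the deduplicated form below, because IN
   -- only checks membership; duplicates on the subquery side never change the result.
   -- WHERE o.order_id IN (SELECT DISTINCT order_id FROM returns)
   ```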
   
   For example, TPC-DS Q95: the following CTE is used only in IN predicates, and only for a single-column comparison (`ws_order_number`).
   The self join inflates the row count quadratically per order number, and for the IN check all of those extra rows are duplicates.
   
   ```sql
   WITH ws_wh AS
   (
          SELECT ws1.ws_order_number,
                 ws1.ws_warehouse_sk wh1,
                 ws2.ws_warehouse_sk wh2
          FROM   web_sales ws1,
                 web_sales ws2
          WHERE  ws1.ws_order_number = ws2.ws_order_number
          AND    ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)
   ```
   
   It can be rewritten as the aggregation below:
   
   ```sql
   WITH ws_wh AS
       (SELECT ws_order_number
         FROM  web_sales
         GROUP BY ws_order_number
         HAVING COUNT(DISTINCT ws_warehouse_sk) > 1)
   ```
   
   The optimized CTE scans the table only once and produces unique rows.
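   
   For context, a paraphrased sketch (not the verbatim TPC-DS text) of how Q95 consumes the CTE: `ws_wh` appears only behind IN predicates on `ws_order_number`, so neither the duplicate rows nor the `wh1`/`wh2` columns of the original CTE are ever observed.
   
   ```sql
   -- Paraphrased sketch of TPC-DS Q95 (filters abbreviated), showing that ws_wh
   -- is consumed only through IN predicates on ws_order_number.
   SELECT count(DISTINCT ws1.ws_order_number) AS "order count",
          sum(ws1.ws_ext_ship_cost)           AS "total shipping cost",
          sum(ws1.ws_net_profit)              AS "total net profit"
   FROM   web_sales ws1, date_dim, customer_address, web_site
   WHERE  ws1.ws_ship_date_sk = d_date_sk     -- plus the date, state and web-site filters
   AND    ws1.ws_order_number IN (SELECT ws_order_number FROM ws_wh)
   AND    ws1.ws_order_number IN (SELECT wr_order_number
                                  FROM   web_returns, ws_wh
                                  WHERE  wr_order_number = ws_wh.ws_order_number)
   ```
   
   Because only membership of `ws_order_number` is ever tested, replacing the self join with the `GROUP BY ... HAVING COUNT(DISTINCT ws_warehouse_sk) > 1` form preserves the query result while scanning `web_sales` once.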
   
   
   ### Why are the changes needed?
   
   Optimizes TPC-DS Q95. Reference implementation:
   https://github.com/rohankumardubey/hetu/blob/master/presto-main/src/main/java/io/prestosql/sql/planner/iterative/rule/TransformUnCorrelatedInPredicateSubQuerySelfJoinToAggregate.java
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   No
   
   
   ### How was this patch tested?
   
   Added unit tests.
   




[GitHub] [spark] github-actions[bot] commented on pull request #38951: [SPARK-41416][SQL] Rewrite self join in in predicate to aggregate

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #38951:
URL: https://github.com/apache/spark/pull/38951#issuecomment-1475045049

   We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
   If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!




[GitHub] [spark] AmplabJenkins commented on pull request #38951: [SPARK-41416][SQL] Rewrite self join in in predicate to aggregate

Posted by GitBox <gi...@apache.org>.
AmplabJenkins commented on PR #38951:
URL: https://github.com/apache/spark/pull/38951#issuecomment-1343457169

   Can one of the admins verify this patch?




[GitHub] [spark] github-actions[bot] closed pull request #38951: [SPARK-41416][SQL] Rewrite self join in in predicate to aggregate

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] closed pull request #38951: [SPARK-41416][SQL] Rewrite self join in in predicate to aggregate
URL: https://github.com/apache/spark/pull/38951

