Posted to issues@spark.apache.org by "Wenchen Fan (Jira)" <ji...@apache.org> on 2022/04/08 02:45:00 UTC

[jira] [Resolved] (SPARK-38531) "Prune unrequired child index" branch of ColumnPruning has wrong condition

     [ https://issues.apache.org/jira/browse/SPARK-38531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-38531.
---------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

Issue resolved by pull request 35864
[https://github.com/apache/spark/pull/35864]

> "Prune unrequired child index" branch of ColumnPruning has wrong condition
> --------------------------------------------------------------------------
>
>                 Key: SPARK-38531
>                 URL: https://issues.apache.org/jira/browse/SPARK-38531
>             Project: Spark
>          Issue Type: Bug
>          Components: Optimizer
>    Affects Versions: 3.2.1
>            Reporter: Min Yang
>            Priority: Minor
>             Fix For: 3.3.0
>
>
> The "prune unrequired child index" branch has the condition:
> {code:java}
> case p @ Project(_, g: Generate) if p.references != g.outputSet => {code}
> This condition is wrong: for multi-column generators such as Inline, the branch is entered whenever the Project does not use all of the generator's output columns, even when there is nothing to prune from the child.
>
> Example, with input schema <col1: array<struct<a: struct<a: int, b: int>, b: int>>>:
> {code:java}
> Project(a.a AS x)
> +- Generate(Inline(col1), ..., [a, b])
> {code}
> Here p.references is [a] while g.outputSet is [a, b], so the condition holds even though the Project merely skips the generator output column b.
>
> Because of this bug we never reach the GeneratorNestedColumnAliasing branch below, so we miss some optimization opportunities. The condition should be:
> {code:java}
> g.requiredChildOutput.exists(!p.references.contains(_)) {code}
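
The difference between the two conditions can be sketched with plain Scala sets standing in for attribute sets (a minimal model of this report's example; `PruneConditionDemo` and the value names are hypothetical, not Spark's actual `Project`/`Generate`/`AttributeSet` classes):

```scala
// Minimal model of the two branch conditions discussed in this issue,
// using Sets of attribute names instead of Spark's AttributeSet.
object PruneConditionDemo {
  // Scenario from the report: Project(a.a AS x) over
  // Generate(Inline(col1), ..., [a, b]).
  val pReferences          = Set("a")          // attributes the Project uses
  val gOutputSet           = Set("a", "b")     // generator output columns
  val gRequiredChildOutput = Set.empty[String] // child attrs passed through

  // Old condition: true whenever the Project skips any generator output
  // column, even when nothing below is prunable.
  val oldCondition: Boolean = pReferences != gOutputSet

  // Fixed condition: true only if some required child attribute is not
  // referenced by the Project, i.e. there is actually something to prune.
  val newCondition: Boolean =
    gRequiredChildOutput.exists(!pReferences.contains(_))

  def main(args: Array[String]): Unit = {
    println(s"old fires: $oldCondition") // true  -- blocks the later rule
    println(s"new fires: $newCondition") // false -- falls through to
                                         // GeneratorNestedColumnAliasing
  }
}
```

With the old condition the branch fires spuriously on this plan; with the fixed condition it stays quiet, letting GeneratorNestedColumnAliasing apply.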



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
