Posted to reviews@spark.apache.org by "johanl-db (via GitHub)" <gi...@apache.org> on 2023/12/04 08:48:33 UTC

[PR] [SPARK-46092][SQL][3.5] Don't push down Parquet row group filters that overflow [spark]

johanl-db opened a new pull request, #44154:
URL: https://github.com/apache/spark/pull/44154

   This is a cherry-pick from https://github.com/apache/spark/pull/44006 to spark 3.5
   
   ### What changes were proposed in this pull request?
   This change adds a check for overflows when creating Parquet row group filters on an INT32 (byte/short/int) parquet type to avoid incorrectly skipping row groups if the predicate value doesn't fit in an INT. This can happen if the read schema is specified as LONG, e.g via `.schema("col LONG")`
   While the Parquet readers don't support reading INT32 into a LONG, the overflow can lead to row groups being incorrectly skipped, bypassing the reader altogether and producing incorrect results instead of failing.
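   The guard can be sketched as follows. This is a hypothetical simplification in Java (the class and method names are illustrative, not Spark's actual API); the idea is simply to refuse to build a row group filter whenever the LONG predicate value cannot be represented in INT32:

   ```java
   public class RowGroupFilterGuard {
       // Returns true only when a Long predicate value can be pushed down
       // against an INT32 Parquet column without overflowing. When this
       // returns false, no row group filter is created and the predicate
       // is evaluated by the reader instead.
       static boolean fitsInInt32(long predicateValue) {
           return predicateValue >= Integer.MIN_VALUE
               && predicateValue <= Integer.MAX_VALUE;
       }

       public static void main(String[] args) {
           System.out.println(fitsInInt32(42L));            // true: safe to push down
           System.out.println(fitsInInt32(Long.MAX_VALUE)); // false: skip pushdown
       }
   }
   ```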
   
   ### Why are the changes needed?
   Reading a Parquet file containing INT32 values with a read schema specified as LONG can produce incorrect results today:
   ```
   Seq(0).toDF("a").write.parquet(path)
   spark.read.schema("a LONG").parquet(path).where(s"a < ${Long.MaxValue}").collect()
   ```
   will return an empty result. The correct behavior is either:
   - Failing the query if the Parquet reader doesn't support upcasting integers to longs (true of all Parquet readers in Spark today), or
   - Returning the result `[0]` if the reader supports that upcast (no readers in Spark do as of now, but I'm looking into adding this capability).
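   The empty result comes from plain two's-complement truncation: when the LONG comparison value is narrowed to INT32 to build the row group filter, `Long.MaxValue` wraps to `-1`, so the pushed-down predicate effectively becomes `a < -1`, and a row group whose statistics show min = 0 is skipped. A small JVM illustration of the mechanics (Java here; Scala's `.toInt` has the same narrowing semantics):

   ```java
   public class OverflowDemo {
       public static void main(String[] args) {
           long predicate = Long.MAX_VALUE;
           int truncated = (int) predicate;   // two's-complement narrowing
           System.out.println(truncated);     // prints -1

           // Row group stats for the single-row file above: min = 0, max = 0.
           int rowGroupMin = 0;
           // The pushed-down filter "a < truncated" checked against the stats:
           // 0 < -1 is false, so the entire row group is (incorrectly) skipped.
           System.out.println(rowGroupMin < truncated); // prints false
       }
   }
   ```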
   
   ### Does this PR introduce _any_ user-facing change?
   Yes. The following:
   ```
   Seq(0).toDF("a").write.parquet(path)
   spark.read.schema("a LONG").parquet(path).where(s"a < ${Long.MaxValue}").collect()
   ```
   produces an (incorrect) empty result before this change. After this change, the read fails with an error about the unsupported conversion from INT to LONG in the Parquet reader.
   
   ### How was this patch tested?
   - Added tests to `ParquetFilterSuite` to ensure that no row group filter is created when the predicate value overflows or when the value type isn't compatible with the Parquet type.
   - Added a test to `ParquetQuerySuite` covering the correctness issue described above.
   
   ### Was this patch authored or co-authored using generative AI tooling?
   No
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


Re: [PR] [SPARK-46092][SQL][3.5] Don't push down Parquet row group filters that overflow [spark]

Posted by "dongjoon-hyun (via GitHub)" <gi...@apache.org>.
dongjoon-hyun commented on PR #44154:
URL: https://github.com/apache/spark/pull/44154#issuecomment-1839076127

   Merged to branch-3.5 for Apache Spark 3.5.1.



Re: [PR] [SPARK-46092][SQL][3.5] Don't push down Parquet row group filters that overflow [spark]

Posted by "dongjoon-hyun (via GitHub)" <gi...@apache.org>.
dongjoon-hyun closed pull request #44154: [SPARK-46092][SQL][3.5] Don't push down Parquet row group filters that overflow
URL: https://github.com/apache/spark/pull/44154

