Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/08 00:30:53 UTC

[GitHub] [spark] sadikovi commented on a diff in pull request #37419: [SPARK-39833][SQL] Fix Parquet incorrect count issue when requiredSchema is empty and column index is enabled in DSv1

sadikovi commented on code in PR #37419:
URL: https://github.com/apache/spark/pull/37419#discussion_r938492838


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:
##########
@@ -228,6 +228,13 @@ class ParquetFileFormat
       SQLConf.PARQUET_TIMESTAMP_NTZ_ENABLED.key,
       sparkSession.sessionState.conf.parquetTimestampNTZEnabled)
 
+    // See PARQUET-2170.
+    // Disable column index optimisation when required schema is empty so we get the correct
+    // row count from parquet-mr.
+    if (requiredSchema.isEmpty) {

Review Comment:
   No, this is not required for DSv2.
   
   The test works in DSv2 due to another inconsistency: Parquet DSv2 filters out the column in the `readDataSchema()` method because the partition column and the data column have the same name in case-insensitive mode. The final schema becomes empty, resulting in an empty list of pushed filters and thus the correct number of records. It is rather a performance inefficiency in DSv2, as the entire file will be scanned; however, the result will be correct.
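   
   For context, a minimal sketch of what the DSv1 guard in the diff above amounts to, assuming the parquet-mr switch involved is `ParquetInputFormat.COLUMN_INDEX_FILTERING_ENABLED` (`parquet.filter.columnindex.enabled`); the standalone helper shape and its name are illustrative, not the PR's actual code:
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.parquet.hadoop.ParquetInputFormat
   import org.apache.spark.sql.types.StructType
   
   // Hypothetical helper mirroring the guard in the diff above: when no columns
   // need to be read back (requiredSchema is empty), disable parquet-mr's column
   // index filtering so the reported row count stays correct (see PARQUET-2170).
   def disableColumnIndexForEmptySchema(
       hadoopConf: Configuration,
       requiredSchema: StructType): Unit = {
     if (requiredSchema.isEmpty) {
       hadoopConf.setBoolean(ParquetInputFormat.COLUMN_INDEX_FILTERING_ENABLED, false)
     }
   }
   ```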
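   
   And to illustrate the DSv2 pruning described above, a self-contained sketch (the function name and shape are hypothetical, not the actual `readDataSchema()` implementation):
   
   ```scala
   import org.apache.spark.sql.types.{StructField, StructType}
   
   // Hypothetical sketch of the pruning described above: in case-insensitive
   // mode, a data column whose name collides with a partition column is
   // dropped from the read schema.
   def readDataSchemaSketch(
       dataSchema: StructType,
       partitionSchema: StructType,
       caseSensitive: Boolean): StructType = {
     def norm(n: String): String = if (caseSensitive) n else n.toLowerCase
     val partNames = partitionSchema.fieldNames.map(norm)
     StructType(dataSchema.fields.filterNot(f => partNames.contains(norm(f.name))))
   }
   
   // With dataSchema = [ID: long] and partitionSchema = [id: int], the result
   // is empty in case-insensitive mode, so no filters are pushed down and the
   // whole file is scanned: correct result, just less efficient.
   ```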



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

