Posted to reviews@spark.apache.org by "yabola (via GitHub)" <gi...@apache.org> on 2023/03/21 14:57:59 UTC

[GitHub] [spark] yabola commented on a diff in pull request #40495: test reading footer within file range

yabola commented on code in PR #40495:
URL: https://github.com/apache/spark/pull/40495#discussion_r1143525196


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:
##########
@@ -92,8 +93,13 @@ case class ParquetPartitionReaderFactory(
     if (aggregation.isEmpty) {
       ParquetFooterReader.readFooter(conf, filePath, SKIP_ROW_GROUPS)
     } else {
+      val split = new FileSplit(file.toPath, file.start, file.length, Array.empty[String])
       // For aggregate push down, we will get max/min/count from footer statistics.
-      ParquetFooterReader.readFooter(conf, filePath, NO_FILTER)

Review Comment:
   @LuciferYang @huaxingao oh~ thank you! Yes, using `NO_FILTER` is not a problem here:
   https://github.com/apache/spark/blob/df2e2516188b46537349aa7a5f279de6141c6450/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetScan.scala#L50-L57
   
   But do you think it would always be safer to use the file range? I have made some changes along those lines and would like to unify them: https://github.com/apache/spark/pull/39950
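   
   To illustrate what "use the file range" means, here is a minimal Scala sketch using Parquet's public `ParquetMetadataConverter.range` filter together with `ParquetFileReader.readFooter`. This is only an illustration of the idea, not the exact change in this PR:
   
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.format.converter.ParquetMetadataConverter
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.metadata.ParquetMetadata

// Read only the row-group metadata that overlaps this task's split.
// range(start, end) keeps the row groups whose midpoint falls inside
// [start, end), so tasks reading different splits of the same file do not
// pick up each other's row groups.
def readFooterInRange(
    conf: Configuration,
    filePath: Path,
    splitStart: Long,
    splitLength: Long): ParquetMetadata = {
  val filter = ParquetMetadataConverter.range(splitStart, splitStart + splitLength)
  ParquetFileReader.readFooter(conf, filePath, filter)
}
```
   
   With a range filter each task only deserializes the row groups it will actually read, which is what I mean by it being "safer" than `NO_FILTER` when a file is split across tasks.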



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

