Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/12/13 06:07:27 UTC

[GitHub] [iceberg] rbalamohan opened a new pull request, #6417: Reuse existing parquet reader in ReadConf (6416)

rbalamohan opened a new pull request, #6417:
URL: https://github.com/apache/iceberg/pull/6417

   https://github.com/apache/iceberg/issues/6416
   
   ReadConf creates a new file reader for "generateOffsetToStartPos". Although each call takes only around 30-50ms, this quickly adds up in workloads with a lot of positional delete files. Creating this as a placeholder ticket to fix it.
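   For context, the map that `generateOffsetToStartPos` produces can be sketched as below. This is a simplified, self-contained illustration, not the actual Iceberg code: the `RowGroup` record is a hypothetical stand-in for Parquet's `BlockMetaData`, and the method walks the file's row groups to map each group's starting byte offset to the number of rows that precede it.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OffsetToStartPos {
  // Hypothetical stand-in for Parquet's BlockMetaData: each row group
  // knows its starting byte offset in the file and its row count.
  record RowGroup(long startingOffset, long rowCount) {}

  // Sketch of the offset -> starting row position map that ReadConf
  // builds from the file footer: iterate row groups in file order,
  // keyed by starting offset, valued by the cumulative row count
  // of all preceding groups.
  static Map<Long, Long> offsetToStartPos(List<RowGroup> rowGroups) {
    Map<Long, Long> map = new LinkedHashMap<>();
    long curRowCount = 0;
    for (RowGroup rg : rowGroups) {
      map.put(rg.startingOffset(), curRowCount);
      curRowCount += rg.rowCount();
    }
    return map;
  }

  public static void main(String[] args) {
    List<RowGroup> groups = List.of(
        new RowGroup(4, 100),      // first row group starts at byte 4
        new RowGroup(1024, 250),   // second starts at byte 1024
        new RowGroup(4096, 50));
    Map<Long, Long> map = offsetToStartPos(groups);
    System.out.println(map.get(1024L)); // rows before the second group: 100
    System.out.println(map.get(4096L)); // 100 + 250 = 350
  }
}
```

   Because this only needs the footer metadata, opening a second full reader just to compute it is the redundant cost the PR targets.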


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] rdblue commented on a diff in pull request #6417: Reuse existing parquet reader in ReadConf (6416)

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #6417:
URL: https://github.com/apache/iceberg/pull/6417#discussion_r1051684313


##########
parquet/src/main/java/org/apache/iceberg/parquet/ReadConf.java:
##########
@@ -185,21 +184,16 @@ private Map<Long, Long> generateOffsetToStartPos(Schema schema) {
       return null;
     }
 
-    try (ParquetFileReader fileReader = newReader(file, ParquetReadOptions.builder().build())) {

Review Comment:
   Unfortunately, this isn't correct. The options that are passed into ReadConf will include the split's range of bytes in the file, and Parquet will use that to filter metadata before returning it, so `fileReader.getRowGroups()` will be different. We need to avoid passing down that filter.
   
   I think it's debatable whether we need to actually filter metadata. It seems like a silly optimization to me, when we will generally have very few row groups in a file. I think it was an optimization for Parquet doing some pretty dumb things in the past.
   
   If you want, we can [remove the range pushdown](https://github.com/apache/iceberg/blob/master/parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java#L1027) and continue with this PR.
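   The concern above can be illustrated with a simplified sketch. This is not Iceberg or Parquet code: the `RowGroup` record and midpoint rule are assumptions modeling how parquet-mr assigns row groups to a byte range, to show why a reader opened with a split's range returns only a subset of the file's row groups.

```java
import java.util.List;
import java.util.stream.Collectors;

public class SplitRangeFilter {
  // Hypothetical stand-in for Parquet row-group metadata.
  record RowGroup(long startingOffset, long compressedSize) {}

  // Keep only row groups whose midpoint falls inside [start, end).
  // This models the split-range metadata filtering described above:
  // a reader opened with a split's byte range sees a subset of groups,
  // so an offset map built from it would be incomplete.
  static List<RowGroup> filterByRange(List<RowGroup> groups, long start, long end) {
    return groups.stream()
        .filter(rg -> {
          long mid = rg.startingOffset() + rg.compressedSize() / 2;
          return mid >= start && mid < end;
        })
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<RowGroup> groups = List.of(
        new RowGroup(4, 1000),
        new RowGroup(1004, 1000),
        new RowGroup(2004, 1000));
    // Whole file: all three row groups are visible.
    System.out.println(filterByRange(groups, 0, 3004).size()); // 3
    // A split covering only the first ~1KB sees a single row group.
    System.out.println(filterByRange(groups, 0, 1004).size()); // 1
  }
}
```

   Dropping the range pushdown, as suggested, would make the reused reader see every row group and keep the offset map correct.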


