Posted to github@arrow.apache.org by "wgtmac (via GitHub)" <gi...@apache.org> on 2023/03/02 05:06:34 UTC

[GitHub] [arrow] wgtmac commented on pull request #34281: GH-34280: [C++][Python] Clarify meaning of row_group_size and change default to 1Mi

wgtmac commented on PR #34281:
URL: https://github.com/apache/arrow/pull/34281#issuecomment-1451307061

   > The writer could guarantee that pages are aligned so that each page has the same number of rows (e.g. 120K rows each). But from the readers' perspective you are dependent on how the pages are laid out in the file.
   
   I agree with this. Aligning page boundaries across column chunks benefits reading by avoiding unnecessary I/O and decoding; no special care is required on the reader side.
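   To illustrate the benefit (a minimal sketch, not Arrow's API; the function name and row-range representation are hypothetical): when pages in every column share the same row boundaries, a reader can compute one page selection for a wanted row range and apply it to all columns, skipping both the I/O and the decoding of the other pages.

   ```python
   def pages_to_decode(page_row_ranges, row_begin, row_end):
       """Select the pages whose row ranges overlap [row_begin, row_end).

       page_row_ranges: list of (first_row, one_past_last_row) tuples, one
       per page, in file order. With pages aligned on the same row
       boundaries across columns, the same selection is valid for every
       column, so unselected pages need neither I/O nor decoding.
       """
       return [
           i for i, (first, last) in enumerate(page_row_ranges)
           if first < row_end and last > row_begin
       ]
   ```

   For example, with three aligned pages of 120K rows each, a request for rows 100,000-130,000 touches only the first two pages in every column.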
   
   > I have no idea how close Arrow's parquet-cpp reader is to this implementation or whether it is even feasible.
   
   Arrow's parquet-cpp reader currently reads the pages of a single column chunk sequentially. The column reader does not know the offsets and lengths of all pages in advance. Without knowledge of the page index, it is difficult to schedule I/O and decoding efficiently. This is on my plan to contribute, but I cannot promise a time frame yet.
   
   
   In my experience, 1 Mi rows seems small if the schema contains only a few columns. It would be better to decide the row group boundary by `num_of_rows` and `estimated_compressed_size` together.
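   Such a combined heuristic could look roughly like this (a hypothetical sketch, not Arrow's API; the function name and default thresholds are illustrative only): cap the row group by a maximum row count and by an estimated compressed size, whichever is hit first.

   ```python
   def choose_row_group_length(avg_row_bytes,
                               max_rows=1 << 20,          # 1 Mi rows (illustrative)
                               target_group_bytes=128 << 20):  # 128 MiB (illustrative)
       """Pick rows per row group from both a row cap and a size cap.

       avg_row_bytes: estimated compressed bytes per row, e.g. measured
       from data already buffered by the writer. Narrow schemas hit the
       row cap first; wide schemas hit the size cap first.
       """
       rows_by_size = max(1, target_group_bytes // max(1, avg_row_bytes))
       return min(max_rows, rows_by_size)
   ```

   With ~8 compressed bytes per row the row cap wins (1 Mi rows per group), while with ~4 KiB per row the size cap cuts the group down to 32 Ki rows.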


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org