Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/03/17 14:49:35 UTC

[GitHub] [arrow] wjones127 commented on issue #12653: Conversion from one dataset to another that will not fit in memory?

wjones127 commented on issue #12653:
URL: https://github.com/apache/arrow/issues/12653#issuecomment-1070973156


   Hi @eitsupi,
   
   Depending on your memory constraints, you may need to control the batch size (how many rows are loaded at once) on the scanner:
   
   ```python
   import pyarrow.dataset as ds
   
   input_dataset = ds.dataset("input")
   scanner = input_dataset.scanner(batch_size=100_000)  # default is 1_000_000 rows per batch
   ds.write_dataset(scanner.to_reader(), "output", format="parquet")
   ```
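   
   If you want to transform the data along the way, you can also iterate the batches yourself and hand `write_dataset` a generator. A minimal sketch (the `"input"`/`"output"` paths are the same placeholders as above, and this assumes a pyarrow version whose `write_dataset` accepts an iterable of record batches together with an explicit `schema`):
   
   ```python
   import pyarrow.dataset as ds
   
   input_dataset = ds.dataset("input")
   scanner = input_dataset.scanner(batch_size=100_000)
   
   def transformed_batches():
       # Yield one batch at a time so only ~batch_size rows are held in memory.
       for batch in scanner.to_batches():
           yield batch  # apply any per-batch transformation here
   
   ds.write_dataset(
       transformed_batches(),
       "output",
       schema=input_dataset.schema,
       format="parquet",
   )
   ```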
   
   Does that help in your use case?

