Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/03/17 05:25:34 UTC

[GitHub] [arrow] eitsupi opened a new issue #12653: Conversion from one dataset to another that will not fit in memory?

eitsupi opened a new issue #12653:
URL: https://github.com/apache/arrow/issues/12653


   Having found the following description in the documentation, I tried scanning a dataset larger than memory and writing it out to another dataset.
   
   https://arrow.apache.org/docs/python/dataset.html#writing-large-amounts-of-data
   
   > The above examples wrote data from a table. If you are writing a large amount of data you may not be able to load everything into a single in-memory table. Fortunately, the write_dataset() method also accepts an iterable of record batches. This makes it really simple, for example, to repartition a large dataset without loading the entire dataset into memory:
   
   ```python
   import pyarrow.dataset as ds
   
   input_dataset = ds.dataset("input")
   ds.write_dataset(input_dataset.scanner(), "output", format="parquet")
   ```
   
   ```r
   arrow::open_dataset("input") |>
     arrow::write_dataset("output")
   ```
   
   But both the Python and R runs crashed on Windows after running out of memory. Am I missing something?
   Is there a recommended way to convert one dataset to another without exhausting the machine's memory?





[GitHub] [arrow] westonpace commented on issue #12653: Conversion from one dataset to another that will not fit in memory?

Posted by GitBox <gi...@apache.org>.
westonpace commented on issue #12653:
URL: https://github.com/apache/arrow/issues/12653#issuecomment-1071923418


   At the moment we generally use too much memory when scanning parquet.  This is because the scanner's readahead is unfortunately based on the row group size and not the batch size.  Using smaller row groups in your source files will help.  #12228 changes the readahead to be based on the batch size but it's been on my back burner for a bit.  I'm still optimistic I will get to it for the 8.0.0 release.
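
   A minimal sketch of that "smaller row groups" suggestion (paths here are placeholders, and it assumes each individual source file fits in memory): rewrite the source files with an explicit row_group_size so each row group the scanner reads ahead holds fewer rows.

   ```python
   import pyarrow.parquet as pq

   # Illustrative only: rewrite one source file with 100k-row row groups so the
   # scanner's row-group-based readahead buffers less data at a time.
   table = pq.read_table("input/part-0.parquet")
   pq.write_table(table, "input_small_rowgroups/part-0.parquet", row_group_size=100_000)
   ```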





[GitHub] [arrow] wjones127 commented on issue #12653: Conversion from one dataset to another that will not fit in memory?

Posted by GitBox <gi...@apache.org>.
wjones127 commented on issue #12653:
URL: https://github.com/apache/arrow/issues/12653#issuecomment-1070973156


   Hi @eitsupi ,
   
   Depending on your memory restrictions, you may need to control the batch size (how many rows are loaded at once) on the scanner:
   
   ```python
   import pyarrow.dataset as ds
   
   input_dataset = ds.dataset("input")
   scanner = input_dataset.scanner(batch_size=100_000)  # default is 1_000_000
   ds.write_dataset(scanner.to_reader(), "output", format="parquet")
   ```
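
   (For context: Scanner.to_reader() returns a RecordBatchReader, so batches are streamed to write_dataset one at a time rather than being collected into a single in-memory table first.)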
   
   Does that help in your use case?





[GitHub] [arrow] eitsupi commented on issue #12653: Conversion from one dataset to another that will not fit in memory?

Posted by GitBox <gi...@apache.org>.
eitsupi commented on issue #12653:
URL: https://github.com/apache/arrow/issues/12653#issuecomment-1072271039


   Thank you both.
   I tried lowering the batch size to 1000 in Python, but it still consumed over 3GB of memory and crashed.
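   
   For reference, roughly what I ran (paths are placeholders):
   
   ```python
   import pyarrow.dataset as ds
   
   # Same conversion as above, with the scanner batch size lowered to 1000 rows.
   input_dataset = ds.dataset("input")
   scanner = input_dataset.scanner(batch_size=1_000)
   ds.write_dataset(scanner.to_reader(), "output", format="parquet")
   ```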
   
   I will wait for the 8.0.0 release to try this again.




