Posted to issues@arrow.apache.org by "Wes McKinney (Jira)" <ji...@apache.org> on 2019/08/22 22:17:00 UTC

[jira] [Updated] (ARROW-5086) [Python] Space leak in ParquetFile.read_row_group()

     [ https://issues.apache.org/jira/browse/ARROW-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wes McKinney updated ARROW-5086:
--------------------------------
    Fix Version/s: 0.15.0

> [Python] Space leak in ParquetFile.read_row_group()
> ----------------------------------------------------
>
>                 Key: ARROW-5086
>                 URL: https://issues.apache.org/jira/browse/ARROW-5086
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.12.1
>            Reporter: Jakub Okoński
>            Priority: Major
>              Labels: parquet
>             Fix For: 0.15.0
>
>         Attachments: all.png
>
>
> I have a code pattern like this:
>  
> reader = pq.ParquetFile(path)
> for ix in range(0, reader.num_row_groups):
>     table = reader.read_row_group(ix, columns=self._columns)
>     # operate on table
>  
> But it leaks memory over time, only releasing it when the reader object is collected. Here's a workaround:
>  
> num_row_groups = pq.ParquetFile(path).num_row_groups
> for ix in range(0, num_row_groups):
>     table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>     # operate on table
>  
> This puts an upper bound on memory usage and matches what I'd expect from the code. I also added gc.collect() at the end of every loop iteration.
>  
> I charted out memory usage for a small benchmark that just copies a file, one row group at a time, converting to pandas and back to Arrow on the writer path. The black line is the first approach, using a single reader object; the blue line instantiates a fresh reader in every iteration.
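> The workaround above can be sketched as a self-contained script. The file path, row-group size, and column contents here are made up for illustration; the key point is that a fresh ParquetFile per iteration bounds peak memory at roughly one row group:
>
> import gc
> import pyarrow as pa
> import pyarrow.parquet as pq
>
> path = "example.parquet"  # hypothetical path
> # Write a small file split into several row groups for demonstration.
> pq.write_table(pa.table({"x": list(range(100))}), path, row_group_size=10)
>
> num_row_groups = pq.ParquetFile(path).num_row_groups
> for ix in range(num_row_groups):
>     # A fresh reader per iteration is discarded (and its memory freed)
>     # as soon as the next one is created.
>     table = pq.ParquetFile(path).read_row_group(ix)
>     # operate on table
>     gc.collect()  # release the discarded reader promptly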



--
This message was sent by Atlassian Jira
(v8.3.2#803003)