Posted to issues@arrow.apache.org by "Jakub Okoński (JIRA)" <ji...@apache.org> on 2019/04/01 17:11:00 UTC

[jira] [Created] (ARROW-5086) Space leak in ParquetFile.read_row_group()

Jakub Okoński created ARROW-5086:
------------------------------------

             Summary: Space leak in ParquetFile.read_row_group()
                 Key: ARROW-5086
                 URL: https://issues.apache.org/jira/browse/ARROW-5086
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 0.12.1
            Reporter: Jakub Okoński
         Attachments: all.png

I have a code pattern like this:

import pyarrow.parquet as pq

reader = pq.ParquetFile(path)

for ix in range(reader.num_row_groups):
    table = reader.read_row_group(ix, columns=self._columns)
    # operate on table

But it leaks memory over time, only releasing it when the reader object is garbage collected. Here's a workaround:

num_row_groups = pq.ParquetFile(path).num_row_groups

for ix in range(num_row_groups):
    # open a fresh ParquetFile on each iteration so the previous
    # reader can be garbage collected after its row group is read
    table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
    # operate on table

This puts an upper bound on memory usage and matches what I'd expect from the code. I also added a gc.collect() call at the end of every loop iteration.
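
For reference, a minimal sketch of the workaround loop with the explicit gc.collect() call added; the path and columns values are placeholders standing in for the ones in the snippets above:

import gc
import pyarrow.parquet as pq

path = "data.parquet"  # placeholder path
columns = None         # or a list of column names, as in self._columns above

num_row_groups = pq.ParquetFile(path).num_row_groups

for ix in range(num_row_groups):
    table = pq.ParquetFile(path).read_row_group(ix, columns=columns)
    # operate on table, then release it before collecting
    del table
    gc.collect()  # reclaim the reader opened on the previous iteration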

I charted out memory usage for a small benchmark that just copies a file one row group at a time, converting each table to pandas and back to Arrow on the write path (chart attached as all.png). The black line is the first version, using a single reader object; blue is the version that instantiates a fresh reader in every iteration.
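
For context, a minimal sketch of what such a benchmark loop could look like; the function name and the exact pandas round-trip details are assumptions, not the benchmark's actual code:

import pyarrow as pa
import pyarrow.parquet as pq

# hypothetical helper: copies src_path to dst_path one row group at a
# time, round-tripping each row group through pandas as described above
def copy_by_row_group(src_path, dst_path, columns=None):
    reader = pq.ParquetFile(src_path)
    writer = None
    for ix in range(reader.num_row_groups):
        table = reader.read_row_group(ix, columns=columns)
        table = pa.Table.from_pandas(table.to_pandas())  # pandas round-trip
        if writer is None:
            writer = pq.ParquetWriter(dst_path, table.schema)
        writer.write_table(table)
    if writer is not None:
        writer.close()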



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)