Posted to dev@arrow.apache.org by "V Luong (Jira)" <ji...@apache.org> on 2019/10/16 23:50:00 UTC

[jira] [Created] (ARROW-6910) pyarrow.parquet.read_table(...) takes up lots of memory which is not released until program exits

V Luong created ARROW-6910:
------------------------------

             Summary: pyarrow.parquet.read_table(...) takes up lots of memory which is not released until program exits
                 Key: ARROW-6910
                 URL: https://issues.apache.org/jira/browse/ARROW-6910
             Project: Apache Arrow
          Issue Type: Bug
    Affects Versions: 0.15.0
            Reporter: V Luong


I've noticed that when I read many Parquet files using pyarrow.parquet.read_table(...), my program's memory usage becomes very bloated, even though I don't keep the Table objects after converting them to pandas DataFrames.

You can try this in an interactive Python shell to reproduce this problem:

```{python}
from pyarrow.parquet import read_table

# paths_of_a_bunch_of_big_parquet_files is a placeholder for a list of large Parquet files
for path in paths_of_a_bunch_of_big_parquet_files:
    # the result of read_table(...) is deliberately not assigned to anything,
    # so no Table object is kept alive after each iteration
    read_table(path, use_threads=True, memory_map=False)
```

After the for loop above finishes, if you check memory usage (e.g. with htop), you'll see that the Python process is still holding a lot of memory. That memory is only released when you exit() from Python.
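
To help narrow down whether that memory is held by live Arrow buffers or retained at a lower level, here is a small diagnostic sketch (using the same placeholder list of files as above) that compares the process-level view with what Arrow's own memory pool reports:

```{python}
import pyarrow as pa
from pyarrow.parquet import read_table

pool = pa.default_memory_pool()

print("allocated before reads:", pa.total_allocated_bytes())

for path in paths_of_a_bunch_of_big_parquet_files:
    # the Table is not kept, so Arrow should free its buffers after each iteration
    read_table(path, use_threads=True, memory_map=False)

print("allocated after reads: ", pa.total_allocated_bytes())
print("peak pool usage:       ", pool.max_memory())
```

If pa.total_allocated_bytes() drops back close to zero after the loop while the process RSS in htop stays high, that would point to memory being retained below the Arrow memory pool (e.g. by the allocator) rather than by live Table objects.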

This problem means that my compute jobs using PyArrow currently need bigger server instances than should be necessary, which translates to significant extra cost.
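
One mitigation I plan to try (unverified on my side, and only relevant if the default memory pool is backed by jemalloc and the pyarrow build exposes this call) is asking jemalloc to return freed pages to the OS immediately instead of caching them:

```{python}
import pyarrow as pa
from pyarrow.parquet import read_table

# Ask the jemalloc-backed memory pool to release freed ("dirty") pages to the
# OS right away instead of keeping them cached for reuse. A decay of 0 ms
# means "decay immediately". This has no effect if jemalloc is not the backend.
pa.jemalloc_set_decay_ms(0)

for path in paths_of_a_bunch_of_big_parquet_files:
    read_table(path, use_threads=True, memory_map=False)
```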




