Posted to issues@arrow.apache.org by "V Luong (Jira)" <ji...@apache.org> on 2019/11/06 19:55:00 UTC

[jira] [Updated] (ARROW-6910) [Python] pyarrow.parquet.read_table(...) takes up lots of memory which is not released until program exits

     [ https://issues.apache.org/jira/browse/ARROW-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

V Luong updated ARROW-6910:
---------------------------
    Description: 
I realize that when I read many Parquet files using pyarrow.parquet.read_table(...), my program's memory usage becomes very bloated, even though I don't keep the table objects after converting them to Pandas DataFrames.

You can try this in an interactive Python shell to reproduce this problem:

```{python}
from tqdm import tqdm
from pyarrow.parquet import read_table

PATH = '/tmp/big.snappy.parquet'

for _ in tqdm(range(10)):
    read_table(PATH, use_threads=False, memory_map=False)
    # Note: the read_table(...) result is deliberately not assigned to anything,
    # so no new objects are kept alive across iterations.

```

During the for loop above, if you watch memory usage (e.g. with htop), you'll see it keep creeping up. Either the program crashes during the 10 iterations, or, if they complete, the process still occupies a huge amount of memory even though no objects are kept. That memory is released only when you exit() from Python.
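
To make the creep concrete, you can print both Arrow's own allocation counter and the process's resident-set high-water mark after each iteration (a minimal sketch, assuming Linux, where ru_maxrss is reported in kilobytes; the path is the same illustrative one as above):

```{python}
import resource

import pyarrow as pa
from pyarrow.parquet import read_table

PATH = '/tmp/big.snappy.parquet'  # illustrative path, as in the repro above

def max_rss_mb():
    # Peak resident set size of this process; ru_maxrss is in KB on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

for i in range(10):
    read_table(PATH, use_threads=False, memory_map=False)
    print(i,
          'arrow-allocated MB:', pa.total_allocated_bytes() / 2**20,
          'max RSS MB:', max_rss_mb())
```

If pa.total_allocated_bytes() drops back near zero after each iteration while the RSS keeps climbing, the memory is being held at the allocator level rather than by live Arrow objects.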

This problem means that my compute jobs using PyArrow currently need bigger server instances than should be necessary, which translates to significant extra cost.



  was:
I realize that when I read many Parquet files using pyarrow.parquet.read_table(...), my program's memory usage becomes very bloated, even though I don't keep the table objects after converting them to Pandas DataFrames.

You can try this in an interactive Python shell to reproduce this problem:

```{python}
from tqdm import tqdm
from pyarrow.parquet import read_table

PATH = '/tmp/big.snappy.parquet'

for _ in tqdm(range(100)):
    read_table(PATH, use_threads=False, memory_map=False)
    # Note: the read_table(...) result is deliberately not assigned to anything,
    # so no new objects are kept alive across iterations.

```

During the for loop above, if you watch memory usage (e.g. with htop), you'll see it keep creeping up. Either the program crashes during the 100 iterations, or, if they complete, the process still occupies a huge amount of memory even though no objects are kept. That memory is released only when you exit() from Python.

This problem means that my compute jobs using PyArrow currently need bigger server instances than should be necessary, which translates to significant extra cost.




> [Python] pyarrow.parquet.read_table(...) takes up lots of memory which is not released until program exits
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-6910
>                 URL: https://issues.apache.org/jira/browse/ARROW-6910
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++, Python
>    Affects Versions: 0.15.0
>            Reporter: V Luong
>            Assignee: Wes McKinney
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.0.0, 0.15.1
>
>         Attachments: arrow6910.png
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I realize that when I read many Parquet files using pyarrow.parquet.read_table(...), my program's memory usage becomes very bloated, even though I don't keep the table objects after converting them to Pandas DataFrames.
> You can try this in an interactive Python shell to reproduce this problem:
> ```{python}
> from tqdm import tqdm
> from pyarrow.parquet import read_table
> PATH = '/tmp/big.snappy.parquet'
> for _ in tqdm(range(10)):
>     read_table(PATH, use_threads=False, memory_map=False)
>     # Note: the read_table(...) result is deliberately not assigned to anything,
>     # so no new objects are kept alive across iterations.
> ```
> During the for loop above, if you watch memory usage (e.g. with htop), you'll see it keep creeping up. Either the program crashes during the 10 iterations, or, if they complete, the process still occupies a huge amount of memory even though no objects are kept. That memory is released only when you exit() from Python.
> This problem means that my compute jobs using PyArrow currently need bigger server instances than should be necessary, which translates to significant extra cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)