Posted to issues@arrow.apache.org by "Bogdan Klichuk (JIRA)" <ji...@apache.org> on 2019/06/29 23:30:00 UTC

[jira] [Updated] (ARROW-5791) pyarrow.csv.read_csv hangs + eats all RAM

     [ https://issues.apache.org/jira/browse/ARROW-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bogdan Klichuk updated ARROW-5791:
----------------------------------
    Description: 
I have a rather sparse dataset in CSV format: a wide table with only a few rows but many (32k) columns, ~540 KB in total.

When I read the dataset with `pyarrow.csv.read_csv`, the process hangs, gradually consumes all available memory, and is eventually killed.

More details on the conditions follow. The reproduction script and all files mentioned are in the attachments.

1) `sample_32769_cols.csv` is the dataset that triggers the problem.

2) `sample_32768_cols.csv` DOES NOT trigger the problem and is read in under 400 ms on my machine. It is the same dataset minus the last column; that column is no different from the others and contains only empty values.

Why exactly this one column makes the difference between a clean run and a hanging failure that looks like a memory leak, I have no idea. (Notably, 32768 = 2^15, so the hang starts exactly one column past a power-of-two boundary.)

I have attached a flame graph for case (1) to help with resolving this issue (`graph.svg`).
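
Since the attached `csvtest.py` is not shown inline, here is a minimal sketch of the kind of reproduction involved. This is an assumed reconstruction (the helper name and generated file layout are hypothetical), not the attached script:

```python
# Hypothetical reconstruction of a reproduction script; the attached
# csvtest.py may differ. Assumes pyarrow >= 0.13.0 with the csv module.
import pyarrow.csv as pacsv


def make_wide_csv(path, n_cols, n_rows=3):
    """Write a header of n_cols columns plus a few all-empty (sparse) rows."""
    with open(path, "w") as f:
        f.write(",".join("c%d" % i for i in range(n_cols)) + "\n")
        for _ in range(n_rows):
            # n_cols - 1 commas produce n_cols empty fields per row
            f.write("," * (n_cols - 1) + "\n")


make_wide_csv("sample_32768_cols.csv", 32768)  # reads fine, < 400 ms reported
make_wide_csv("sample_32769_cols.csv", 32769)  # reported to hang and eat RAM

table = pacsv.read_csv("sample_32768_cols.csv")
print(table.num_columns, table.num_rows)

# The failing case; run with care, as it reportedly exhausts memory:
# table = pacsv.read_csv("sample_32769_cols.csv")
```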

 


> pyarrow.csv.read_csv hangs + eats all RAM
> -----------------------------------------
>
>                 Key: ARROW-5791
>                 URL: https://issues.apache.org/jira/browse/ARROW-5791
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.13.0
>         Environment: Ubuntu Xenial, python 2.7
>            Reporter: Bogdan Klichuk
>            Priority: Major
>         Attachments: csvtest.py, graph.svg, sample_32768_cols.csv, sample_32769_cols.csv
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)