Posted to issues@arrow.apache.org by "Olivier Giboin (JIRA)" <ji...@apache.org> on 2019/07/30 13:42:00 UTC

[jira] [Comment Edited] (ARROW-6059) [Python] Regression memory issue when calling pandas.read_parquet

    [ https://issues.apache.org/jira/browse/ARROW-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896138#comment-16896138 ] 

Olivier Giboin edited comment on ARROW-6059 at 7/30/19 1:41 PM:
----------------------------------------------------------------

I confirm a very similar issue:
 * Context: reading an Arrow table with pyarrow.parquet.read_table. With 0.13 there is no memory error, the table takes ~6 GB of Arrow RAM once loaded, and the read is fast.
 * Context: script run in a VM with ~12 GB of free RAM, Windows, Python 3.7.x (conda).
 * Reading the same Parquet file with pyarrow 0.14.1 --> memory error (malloc failed, no more free RAM available); see the sketch after this list.
 * The Parquet file was generated by pyarrow.parquet.ParquetWriter.

I can provide the full Parquet file over a secure channel if needed.
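A minimal sketch of the read and write paths described above (the file names, and reusing an existing table's schema for the writer, are illustrative, not the exact code used):

{code}
import pyarrow.parquet as pq

# One-shot read of the whole file: on 0.13 this loads with ~6 GB of
# Arrow RAM; on 0.14.1 the same call aborts with a malloc failure.
table = pq.read_table("data.parquet")

# The file itself was produced with ParquetWriter, roughly like this:
with pq.ParquetWriter("out.parquet", table.schema) as writer:
    writer.write_table(table)
{code}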



> [Python] Regression memory issue when calling pandas.read_parquet
> -----------------------------------------------------------------
>
>                 Key: ARROW-6059
>                 URL: https://issues.apache.org/jira/browse/ARROW-6059
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.0, 0.14.1
>            Reporter: Francisco Sanchez
>            Priority: Major
>
> I have a ~3 MB Parquet file with the following schema:
> {code}
> bag_stamp: timestamp[ns]
> transforms_[]_.header.seq: list<item: int64>
>   child 0, item: int64
> transforms_[]_.header.stamp: list<item: timestamp[ns]>
>   child 0, item: timestamp[ns]
> transforms_[]_.header.frame_id: list<item: string>
>   child 0, item: string
> transforms_[]_.child_frame_id: list<item: string>
>   child 0, item: string
> transforms_[]_.transform.translation.x: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.translation.y: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.translation.z: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.rotation.x: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.rotation.y: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.rotation.z: list<item: double>
>   child 0, item: double
> transforms_[]_.transform.rotation.w: list<item: double>
>   child 0, item: double
> {code}
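> For reference, the shape of this schema can be reproduced with plain pyarrow; the sketch below is illustrative (only a few columns, shortened names, made-up values), not the actual data:
> {code}
> import pyarrow as pa
> import pyarrow.parquet as pq
>
> # list<...> columns matching the shape above; the plain ints are
> # interpreted as nanosecond epoch values for the timestamp[ns] fields.
> schema = pa.schema([
>     ("bag_stamp", pa.timestamp("ns")),
>     ("header.seq", pa.list_(pa.int64())),
>     ("header.stamp", pa.list_(pa.timestamp("ns"))),
>     ("header.frame_id", pa.list_(pa.string())),
>     ("transform.translation.x", pa.list_(pa.float64())),
> ])
> table = pa.Table.from_pydict({
>     "bag_stamp": [0, 1],
>     "header.seq": [[1, 2], [3]],
>     "header.stamp": [[0, 1], [2]],
>     "header.frame_id": [["map"], ["odom"]],
>     "transform.translation.x": [[0.0, 1.0], [2.0]],
> }, schema=schema)
> pq.write_table(table, "repro.parquet")
> {code}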
>  If I read it with *pandas.read_parquet()* using pyarrow 0.13.0 everything seems fine and it loads almost instantly. If I try the same with 0.14.0 or 0.14.1 it takes a long time and uses ~10 GB of RAM; often, when there is not enough available memory, the process is simply killed (OOM). However, if I use the following snippet, which reads the file one row group at a time, it works perfectly with all versions:
> {code}
> import pyarrow as pa
> import pyarrow.parquet as pq
>
> # input_file and columns are defined elsewhere by the caller.
> # Read the file one row group at a time instead of all at once.
> parquet_file = pq.ParquetFile(input_file)
> tables = []
> for row_group in range(parquet_file.num_row_groups):
>     tables.append(parquet_file.read_row_group(row_group, columns=columns,
>                                               use_pandas_metadata=True))
> # Stitch the per-row-group tables together and convert to pandas.
> df = pa.concat_tables(tables).to_pandas()
> {code}
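> One way to see the regression directly is to watch Arrow's own allocator around the load (a sketch; pa.total_allocated_bytes() reports the default Arrow memory pool, and the file path is illustrative):
> {code}
> import pandas as pd
> import pyarrow as pa
>
> before = pa.total_allocated_bytes()
> df = pd.read_parquet("input.parquet")  # the slow/OOM path on 0.14.x
> print("arrow bytes held:", pa.total_allocated_bytes() - before)
> {code}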



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)