Posted to issues@arrow.apache.org by "Kun Liu (JIRA)" <ji...@apache.org> on 2019/07/29 15:10:00 UTC

[jira] [Commented] (ARROW-6060) [Python] too large memory cost using pyarrow.parquet.read_table with use_threads=True

    [ https://issues.apache.org/jira/browse/ARROW-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895331#comment-16895331 ] 

Kun Liu commented on ARROW-6060:
--------------------------------

Thanks for the response, [~wesmckinn].

I am trying to generate a sample file that reproduces the error, since the original file cannot be disclosed. The pandas column types in the Parquet file are just unicode, bytes, and int64.

> [Python] too large memory cost using pyarrow.parquet.read_table with use_threads=True
> -------------------------------------------------------------------------------------
>
>                 Key: ARROW-6060
>                 URL: https://issues.apache.org/jira/browse/ARROW-6060
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.1
>            Reporter: Kun Liu
>            Priority: Major
>
>  I tried to load a Parquet file of about 1.8 GB using the following code. It crashed with an out-of-memory error:
> {code:python}
> import pyarrow.parquet as pq
> pq.read_table('/tmp/test.parquet')
> {code}
> However, it worked well with use_threads=False, as follows:
> {code:python}
> pq.read_table('/tmp/test.parquet', use_threads=False)
> {code}
> If pyarrow is downgraded to 0.12.1, there is no such problem.
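
For anyone trying to reproduce this, a rough way to compare the two code paths is sketched below, assuming the hypothetical sample file from above; pyarrow.total_allocated_bytes() reports the bytes currently held by Arrow's default memory pool:

{code:python}
import pyarrow as pa
import pyarrow.parquet as pq

# Read the same file with and without threads and print how much
# memory Arrow's default pool holds while each table is alive.
for use_threads in (True, False):
    table = pq.read_table('/tmp/test.parquet', use_threads=use_threads)
    print('use_threads=%s: %d bytes allocated'
          % (use_threads, pa.total_allocated_bytes()))
    del table
{code}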


