Posted to issues@arrow.apache.org by "Wes McKinney (Jira)" <ji...@apache.org> on 2019/12/20 20:46:00 UTC

[jira] [Comment Edited] (ARROW-7305) [Python] High memory usage writing pyarrow.Table with large strings to parquet

    [ https://issues.apache.org/jira/browse/ARROW-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17001208#comment-17001208 ] 

Wes McKinney edited comment on ARROW-7305 at 12/20/19 8:45 PM:
---------------------------------------------------------------

Thanks for the additional information. Someone (which could be you) will need to investigate. These issues are very time-consuming to diagnose.


was (Author: wesmckinn):
Thanks for the additional information. Someone will need to investigate. These issues are very time-consuming to diagnose.

> [Python] High memory usage writing pyarrow.Table with large strings to parquet
> ------------------------------------------------------------------------------
>
>                 Key: ARROW-7305
>                 URL: https://issues.apache.org/jira/browse/ARROW-7305
>             Project: Apache Arrow
>          Issue Type: Task
>          Components: Python
>    Affects Versions: 0.15.1
>         Environment: Mac OSX
>            Reporter: Bogdan Klichuk
>            Priority: Major
>              Labels: parquet
>         Attachments: 50mb.csv.gz
>
>
> My use case involves datasets with large strings (1-100 MB each).
> Let's take a single row as an example.
> 43mb.csv is a 1-row CSV with 10 columns; one of the columns is a 43 MB string.
> When I read this CSV with pandas and then dump it to Parquet, my script consumes roughly 10x the 43 MB.
> As the number of such rows grows, the relative memory overhead diminishes, but I want to focus on this specific case.
> Here's the footprint after running under memory_profiler (a sketch of the profiled script appears below):
> {code:java}
> Line #    Mem usage    Increment   Line Contents
> ================================================
>      4     48.9 MiB     48.9 MiB   @profile
>      5                             def test():
>      6    143.7 MiB     94.7 MiB       data = pd.read_csv('43mb.csv')
>      7    498.6 MiB    354.9 MiB       data.to_parquet('out.parquet')
>  {code}
> Is this typical for Parquet in the case of big strings?
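> For reference, here is a minimal sketch of the kind of script that would produce the profiler output above; it assumes pandas, pyarrow, and memory_profiler are installed, and the file names are only illustrative.
> {code:python}
> # Hypothetical reproduction script (repro.py); run it with:
> #   python -m memory_profiler repro.py
> import pandas as pd
> from memory_profiler import profile
>
> @profile
> def test():
>     # Read the 1-row CSV whose large column holds a ~43 MB string.
>     data = pd.read_csv('43mb.csv')
>     # Write it back out as Parquet; pandas delegates to pyarrow by default.
>     data.to_parquet('out.parquet')
>
> if __name__ == '__main__':
>     test()
> {code}
> Passing engine='pyarrow' explicitly to to_parquet would rule out any other Parquet engine being involved.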



--
This message was sent by Atlassian Jira
(v8.3.4#803005)