Posted to issues@arrow.apache.org by "Kun Liu (JIRA)" <ji...@apache.org> on 2019/07/29 15:54:00 UTC

[jira] [Comment Edited] (ARROW-6060) [Python] too large memory cost using pyarrow.parquet.read_table with use_threads=True

    [ https://issues.apache.org/jira/browse/ARROW-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895387#comment-16895387 ] 

Kun Liu edited comment on ARROW-6060 at 7/29/19 3:53 PM:
---------------------------------------------------------

[~wesmckinn] I used the following code to generate a sample Parquet file.
{code:java}
import pandas as pd
from pandas.util.testing import rands

def generate_strings(length, nunique, string_length=10):
    # build nunique random strings and tile them to the requested length
    unique_values = [rands(string_length) for _ in range(nunique)]
    values = unique_values * (length // nunique)
    return values

df = pd.DataFrame()
df['a'] = generate_strings(100000000, 10000)  # 100M rows, 10k unique strings
df['b'] = generate_strings(100000000, 10000)
df.to_parquet('/tmp/test.parquet')
{code}
And then ran the following:
{code:java}
import pyarrow.parquet as pq

pq.read_table('/tmp/test.parquet')                      # crash
pq.read_table('/tmp/test.parquet', use_threads=False)   # works
{code}

Btw, my machine has 16 GB of RAM.
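As a possible mitigation until this is fixed: if the blowup comes from decoding the whole file at once across threads, reading it one row group at a time should keep the transient footprint smaller. This is only a sketch against the same /tmp/test.parquet, and it assumes the file was written with more than one row group (pandas may emit just one):
{code:java}
import pyarrow as pa
import pyarrow.parquet as pq

# Sketch: decode one row group at a time instead of the whole file at once
pf = pq.ParquetFile('/tmp/test.parquet')
pieces = []
for i in range(pf.num_row_groups):
    # read_row_group materializes a single row group as a Table;
    # use_threads=False also sidesteps the threaded decode path
    pieces.append(pf.read_row_group(i, use_threads=False))
table = pa.concat_tables(pieces)
{code}
The concatenated table still has to fit in RAM, so this only flattens the peak during decoding rather than shrinking the final footprint.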



> [Python] too large memory cost using pyarrow.parquet.read_table with use_threads=True
> -------------------------------------------------------------------------------------
>
>                 Key: ARROW-6060
>                 URL: https://issues.apache.org/jira/browse/ARROW-6060
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.1
>            Reporter: Kun Liu
>            Priority: Major
>
> I tried to load a Parquet file of about 1.8 GB using the following code. It crashed due to an out-of-memory issue.
> {code:java}
> import pyarrow.parquet as pq
> pq.read_table('/tmp/test.parquet')
> {code}
> However, it worked well with use_threads=False, as follows:
> {code:java}
> pq.read_table('/tmp/test.parquet', use_threads=False)
> {code}
> If pyarrow is downgraded to 0.12.1, there is no such problem.
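For narrowing this down, the bytes held by Arrow's default memory pool can be compared around the read. A minimal sketch; note it reports bytes still held after the call, not the transient peak during decoding, so a tool like /usr/bin/time -v is still needed to observe the spike itself:
{code:java}
import pyarrow as pa
import pyarrow.parquet as pq

# total_allocated_bytes() returns the bytes currently held by Arrow's
# default memory pool; the delta approximates the table's footprint
before = pa.total_allocated_bytes()
table = pq.read_table('/tmp/test.parquet', use_threads=False)
print('Arrow pool delta: %d bytes' % (pa.total_allocated_bytes() - before))
{code}
Running the same snippet with use_threads=True (on a machine with enough RAM) would show whether the extra cost is transient or retained in the pool.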


