Posted to issues@arrow.apache.org by "Mike Macpherson (Jira)" <ji...@apache.org> on 2020/05/09 16:49:00 UTC

[jira] [Commented] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files

    [ https://issues.apache.org/jira/browse/ARROW-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17103363#comment-17103363 ] 

Mike Macpherson commented on ARROW-8654:
----------------------------------------

Thank you for this context; it's very helpful.

What would you think of adding documentation on parquet file column-number limits to the pandas and/or pyarrow docs? I'd be interested in contributing the PR(s) if we can clarify what those limits are. That may also be an appropriate place to note that performance may degrade as the column count grows.
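
To make the performance point concrete, here's a rough benchmark sketch I put together (my own illustration, not from either codebase; file names and sizes are arbitrary) that times a parquet round trip as the column count grows, holding the total cell count fixed:

{code:python}
import time

import numpy as np
import pandas as pd

# Stay below the ~45,000-column failure threshold reported below.
for num_cols in (1000, 10000, 40000):
    num_rows = 4_000_000 // num_cols  # hold total cell count constant
    df = pd.DataFrame(
        np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8)
    )
    df.columns = df.columns.astype(str)  # to_parquet requires string column names
    outfile = f"bench_{num_cols}.parquet"

    start = time.perf_counter()
    df.to_parquet(outfile)
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    pd.read_parquet(outfile)
    read_s = time.perf_counter() - start

    print(f"{num_cols:>6} cols: write {write_s:.2f}s, read {read_s:.2f}s")
{code}

If timings like these show a clear trend, the docs could cite them as motivation for preferring longer, narrower tables.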

> [Python] pyarrow 0.17.0 fails reading "wide" parquet files
> ----------------------------------------------------------
>
>                 Key: ARROW-8654
>                 URL: https://issues.apache.org/jira/browse/ARROW-8654
>             Project: Apache Arrow
>          Issue Type: Bug
>            Reporter: Mike Macpherson
>            Priority: Major
>
> {code:python}
> import pandas as pd
> import numpy as np
> num_rows, num_cols = 1000, 45000
> df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))
> df.columns = df.columns.astype(str)  # to_parquet requires string column names
> outfile = "test.parquet"
> df.to_parquet(outfile)
> del df
> df = pd.read_parquet(outfile)
> {code}
> Yields:
> {noformat}
> df = pd.read_parquet(outfile)
>   File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
>     return impl.read(path, columns=columns, **kwargs)
>   File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
>     path, columns=columns, **kwargs
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
>     partitioning=partitioning)
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
>     self.validate_schemas()
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
>     self.schema = self.pieces[0].get_metadata().schema
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
>     f = self.open()
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
>     reader = self.open_file_func(self.path)
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
>     buffer_size=dataset.buffer_size
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
>     read_dictionary=read_dictionary, metadata=metadata)
>   File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
>   File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
> OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
> {noformat}
> This is with pandas 1.0.3 and pyarrow 0.17.0.
>  
> I tried this with pyarrow 0.16.0, and it works; 0.15.1 does as well.
>  
> I also tried with 40,000 columns instead of the 45,000 above, and that does work with 0.17.0.
>  
> Thanks for all your work on this project!
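
As a stopgap for anyone blocked on this: the failure appears to happen while deserializing the file's Thrift-encoded footer metadata, which grows with the column count, so splitting a wide frame column-wise into several narrower files should keep each footer under the limit. A rough, untested sketch (the helper names and the 10,000-column chunk size are my own arbitrary choices):

{code:python}
import pandas as pd

def to_parquet_chunked(df, prefix, cols_per_file=10000):
    # Write df as prefix.part0.parquet, prefix.part1.parquet, ...,
    # each holding at most cols_per_file columns.
    paths = []
    for i, start in enumerate(range(0, df.shape[1], cols_per_file)):
        path = f"{prefix}.part{i}.parquet"
        df.iloc[:, start:start + cols_per_file].to_parquet(path)
        paths.append(path)
    return paths

def read_parquet_chunked(paths):
    # Parts were written left to right, so concatenating along axis=1
    # restores the original column order.
    return pd.concat([pd.read_parquet(p) for p in paths], axis=1)
{code}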



--
This message was sent by Atlassian Jira
(v8.3.4#803005)