Posted to issues@arrow.apache.org by "Mike Macpherson (Jira)" <ji...@apache.org> on 2020/04/30 16:12:00 UTC
[jira] [Updated] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files
[ https://issues.apache.org/jira/browse/ARROW-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mike Macpherson updated ARROW-8654:
-----------------------------------
Description:
{code:java}
import numpy as np
import pandas as pd
num_rows, num_cols = 1000, 45000
df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))
outfile = "test.parquet"
df.to_parquet(outfile)
del df
df = pd.read_parquet(outfile)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
return impl.read(path, columns=columns, **kwargs)
File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
path, columns=columns, **kwargs
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
partitioning=partitioning)
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
self.validate_schemas()
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
self.schema = self.pieces[0].get_metadata().schema
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
f = self.open()
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
reader = self.open_file_func(self.path)
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
buffer_size=dataset.buffer_size
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
read_dictionary=read_dictionary, metadata=metadata)
File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3, and pyarrow 0.17.0.
I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.
I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.
Thanks for all your work on this project!
was:
{code:java}
import numpy as np
import pandas as pd
num_rows, num_cols = 1000, 45000
df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))
outfile = "test.parquet"
df.to_parquet(outfile)
del df
df = pd.read_parquet(fout)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
return impl.read(path, columns=columns, **kwargs)
File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
path, columns=columns, **kwargs
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
partitioning=partitioning)
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
self.validate_schemas()
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
self.schema = self.pieces[0].get_metadata().schema
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
f = self.open()
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
reader = self.open_file_func(self.path)
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
buffer_size=dataset.buffer_size
File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
read_dictionary=read_dictionary, metadata=metadata)
File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3, and pyarrow 0.17.0.
I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.
I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.
Thanks for all your work on this project!
> [Python] pyarrow 0.17.0 fails reading "wide" parquet files
> ----------------------------------------------------------
>
> Key: ARROW-8654
> URL: https://issues.apache.org/jira/browse/ARROW-8654
> Project: Apache Arrow
> Issue Type: Bug
> Reporter: Mike Macpherson
> Priority: Major
>
> {code:java}
> import numpy as np
> import pandas as pd
> num_rows, num_cols = 1000, 45000
> df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))
> outfile = "test.parquet"
> df.to_parquet(outfile)
> del df
> df = pd.read_parquet(outfile)
> {code}
> Yields:
> {noformat}
> df = pd.read_parquet(outfile)
> File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
> return impl.read(path, columns=columns, **kwargs)
> File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
> path, columns=columns, **kwargs
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
> partitioning=partitioning)
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
> self.validate_schemas()
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
> self.schema = self.pieces[0].get_metadata().schema
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
> f = self.open()
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
> reader = self.open_file_func(self.path)
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
> buffer_size=dataset.buffer_size
> File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
> read_dictionary=read_dictionary, metadata=metadata)
> File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
> File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
> OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
> {noformat}
> This is pandas 1.0.3, and pyarrow 0.17.0.
>
> I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.
>
> I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.
>
> Thanks for all your work on this project!
--
This message was sent by Atlassian Jira
(v8.3.4#803005)