Posted to issues@arrow.apache.org by "Benjamin Kietzman (Jira)" <ji...@apache.org> on 2019/09/03 17:55:00 UTC

[jira] [Commented] (ARROW-3762) [C++] Parquet arrow::Table reads error when overflowing capacity of BinaryArray

    [ https://issues.apache.org/jira/browse/ARROW-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921610#comment-16921610 ] 

Benjamin Kietzman commented on ARROW-3762:
------------------------------------------

This is caused by the Parquet column reader using a [chunk size of {{INT_MAX}}|https://github.com/apache/arrow/blob/5d907dd/cpp/src/parquet/column_reader.cc#L1214], while BinaryBuilder's [limit is {{INT_MAX - 1}}|https://github.com/apache/arrow/blob/master/cpp/src/arrow/array/builder_binary.h#L38]. ChunkedBinaryBuilder should not be constructible with a chunk size larger than BinaryBuilder's limit; the Parquet column reader should probably just use {{chunk size = BinaryBuilder::memory_limit()}}, and/or that limit should be the default chunk size for ChunkedBinaryBuilder. A unit test in Parquet with string data of total length {{INT_MAX}} should be added (and perhaps also for total lengths {{INT_MAX + 1}} and {{INT_MAX - 1}} for good measure).
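
For illustration, here is a minimal standalone sketch of the proposed clamp. The class and constant below are stand-ins modelled on the linked code, not the actual Arrow/Parquet declarations; the point is only that a chunk size requested above BinaryBuilder's byte limit should be capped at construction time.

{code:cpp}
// Illustrative sketch only; names mirror the proposal above but are not the
// real Arrow/Parquet declarations.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <limits>

// Stand-in for BinaryBuilder's byte limit (INT_MAX - 1 in builder_binary.h).
constexpr int64_t kBinaryMemoryLimit =
    std::numeric_limits<int32_t>::max() - 1;

// Hypothetical ChunkedBinaryBuilder: clamp any requested chunk size to the
// limit, so a caller passing INT_MAX (as the Parquet column reader currently
// does) can never overflow a single BinaryArray.
class ChunkedBinaryBuilder {
 public:
  explicit ChunkedBinaryBuilder(
      int64_t requested_chunk_size = kBinaryMemoryLimit)
      : chunk_size_(std::min(requested_chunk_size, kBinaryMemoryLimit)) {}

  int64_t chunk_size() const { return chunk_size_; }

 private:
  int64_t chunk_size_;
};

int main() {
  // A reader asking for INT_MAX gets INT_MAX - 1, the largest safe chunk.
  ChunkedBinaryBuilder builder(std::numeric_limits<int32_t>::max());
  std::cout << builder.chunk_size() << "\n";  // prints 2147483646
  return 0;
}
{code}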

> [C++] Parquet arrow::Table reads error when overflowing capacity of BinaryArray
> -------------------------------------------------------------------------------
>
>                 Key: ARROW-3762
>                 URL: https://issues.apache.org/jira/browse/ARROW-3762
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++, Python
>            Reporter: Chris Ellison
>            Assignee: Benjamin Kietzman
>            Priority: Major
>              Labels: parquet, pull-request-available
>             Fix For: 0.14.0, 0.15.0
>
>          Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> When reading a Parquet file with binary data > 2 GiB, we get an ArrowIOError because the reader does not produce a chunked array. Reading each row group individually and then concatenating the tables works, however.
>  
> {code:python}
> import pandas as pd
> import pyarrow as pa
> import pyarrow.parquet as pq
> x = pa.array(list('1' * 2**30))
> demo = 'demo.parquet'
> def scenario():
>     t = pa.Table.from_arrays([x], ['x'])
>     writer = pq.ParquetWriter(demo, t.schema)
>     for i in range(2):
>         writer.write_table(t)
>     writer.close()
>     pf = pq.ParquetFile(demo)
>     # pyarrow.lib.ArrowIOError: Arrow error: Invalid: BinaryArray cannot contain more than 2147483646 bytes, have 2147483647
>     t2 = pf.read()
>     # Works, but note, there are 32 row groups, not 2 as suggested by:
>     # https://arrow.apache.org/docs/python/parquet.html#finer-grained-reading-and-writing
>     tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
>     t3 = pa.concat_tables(tables)
> scenario()
> {code}


