Posted to issues@arrow.apache.org by "Joost Hoozemans (Jira)" <ji...@apache.org> on 2022/08/31 15:44:00 UTC

[jira] [Created] (ARROW-17583) [Python] File write visitor throws exception on large parquet file

Joost Hoozemans created ARROW-17583:
---------------------------------------

             Summary: [Python] File write visitor throws exception on large parquet file
                 Key: ARROW-17583
                 URL: https://issues.apache.org/jira/browse/ARROW-17583
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 9.0.0
            Reporter: Joost Hoozemans


When writing a large parquet file (e.g. 5GB) using pyarrow.dataset, it throws an exception:

Traceback (most recent call last):
  File "pyarrow/_dataset_parquet.pyx", line 165, in pyarrow._dataset_parquet.ParquetFileFormat._finish_write
  File "pyarrow/_dataset.pyx", line 2695, in pyarrow._dataset.WrittenFile.__init__
OverflowError: value too large to convert to int
Exception ignored in: 'pyarrow._dataset._filesystemdataset_write_visitor'
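
For reference, a minimal sketch of the kind of write that triggers it (not verified end to end; the ~5 GB sizing, the "big_dataset" output path, and disabling compression/dictionary encoding so the file stays large are all illustrative choices):

    import pyarrow as pa
    import pyarrow.dataset as ds

    # ~5 GiB of 1 MiB blobs; large_binary keeps the array itself within
    # 64-bit offsets. Compression and dictionary encoding are disabled so
    # the repeated bytes are not deduplicated and the file stays over 2 GiB.
    payload = pa.array([b"x" * (1 << 20)] * (5 * 1024), type=pa.large_binary())
    table = pa.table({"payload": payload})

    fmt = ds.ParquetFileFormat()
    opts = fmt.make_write_options(compression="none", use_dictionary=False)

    def visitor(written_file):
        # The WrittenFile handed to this callback is where the error comes
        # from: its size arrives from C++ as an int64_t but is stored in a
        # Python-side int field.
        print(written_file.path, written_file.size)

    ds.write_dataset(table, "big_dataset", format=fmt,
                     file_options=opts, file_visitor=visitor)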

The file is written successfully, though. This seems related to https://issues.apache.org/jira/browse/ARROW-16761.

I would guess the problem is that the Python field is an int while the C++ code returns an int64_t: [https://github.com/apache/arrow/pull/13338/files#diff-4f2eb12337651b45bab2b03abe2552dd7fc9958b1fbbeb09a2a488804b097109R164]
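
If so, the numbers line up (a quick illustration, assuming the Python-side field is coerced to a signed 32-bit C int):

    INT32_MAX = 2**31 - 1          # 2147483647
    file_size = 5 * 1024 ** 3      # ~5.4e9 bytes, roughly the reported file size
    assert file_size > INT32_MAX   # hence "value too large to convert to int"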


