Posted to issues@arrow.apache.org by "Furkan Tektas (Jira)" <ji...@apache.org> on 2019/09/10 22:44:00 UTC
[jira] [Created] (ARROW-6520) Segmentation fault on writing tables with fixed size binary fields
Furkan Tektas created ARROW-6520:
------------------------------------
Summary: Segmentation fault on writing tables with fixed size binary fields
Key: ARROW-6520
URL: https://issues.apache.org/jira/browse/ARROW-6520
Project: Apache Arrow
Issue Type: Bug
Components: Python
Affects Versions: 0.14.1
Environment: Arch Linux x86_64
arrow-cpp 0.14.1 py37h6b969ab_1 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pyarrow 0.14.1 py37h8b68381_0 conda-forge
python 3.7.3 h33d41f4_1 conda-forge
Reporter: Furkan Tektas
I'm not sure if this should be reported to Parquet or here.
When I try to serialize a pyarrow table with a fixed size binary field (holding 16-byte UUID4 values) to a Parquet file, a segmentation fault occurs.
Here is the minimal example to reproduce:
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary(4))]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
segmentation fault (core dumped)  ipython

Yet, it works if I don't specify the size of the binary field.
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary())]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
Thanks,
--
This message was sent by Atlassian Jira
(v8.3.2#803003)