Posted to issues@arrow.apache.org by "Furkan Tektas (Jira)" <ji...@apache.org> on 2019/09/10 22:46:00 UTC

[jira] [Updated] (ARROW-6520) Segmentation fault on writing tables with fixed size binary fields

     [ https://issues.apache.org/jira/browse/ARROW-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Furkan Tektas updated ARROW-6520:
---------------------------------
    Description: 
I'm not sure if this should be reported to Parquet or here.

When I try to serialize a pyarrow table with a fixed-size binary field (holding 16-byte UUID4 values) to a Parquet file, a segmentation fault occurs.

Here is the minimal example to reproduce:

{code:python}
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary(4))]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
{code}
segmentation fault (core dumped)  ipython

 

Yet, it works if I don't specify the size of the binary field:


{code:python}
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary())]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
{code}

Thanks,

  was:
I'm not sure if this should be reported to Parquet or here.

When I try to serialize a pyarrow table with a fixed-size binary field (holding 16-byte UUID4 values) to a Parquet file, a segmentation fault occurs.

Here is the minimal example to reproduce:

 
{code:python}
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary(4))]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
{code}
segmentation fault (core dumped)  ipython

Yet, it works if I don't specify the size of the binary field:

{code:python}
import pyarrow as pa
from pyarrow import parquet as pq

data = {"col": pa.array([b"1234" for _ in range(10)])}
fields = [("col", pa.binary())]
schema = pa.schema(fields)
table = pa.table(data, schema)
pq.write_table(table, "test.parquet")
{code}

Thanks,


> Segmentation fault on writing tables with fixed size binary fields 
> -------------------------------------------------------------------
>
>                 Key: ARROW-6520
>                 URL: https://issues.apache.org/jira/browse/ARROW-6520
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.1
>         Environment: Arch Linux x86_64
> arrow-cpp                 0.14.1           py37h6b969ab_1    conda-forge
> parquet-cpp               1.5.1                         2    conda-forge
> pyarrow                   0.14.1           py37h8b68381_0    conda-forge
> python                    3.7.3                h33d41f4_1    conda-forge
>            Reporter: Furkan Tektas
>            Priority: Critical
>              Labels: newbie
>
> I'm not sure if this should be reported to Parquet or here.
> When I try to serialize a pyarrow table with a fixed-size binary field (holding 16-byte UUID4 values) to a Parquet file, a segmentation fault occurs.
> Here is the minimal example to reproduce:
> {code:python}
> import pyarrow as pa
> from pyarrow import parquet as pq
> 
> data = {"col": pa.array([b"1234" for _ in range(10)])}
> fields = [("col", pa.binary(4))]
> schema = pa.schema(fields)
> table = pa.table(data, schema)
> pq.write_table(table, "test.parquet")
> {code}
> segmentation fault (core dumped)  ipython
> Yet, it works if I don't specify the size of the binary field:
> {code:python}
> import pyarrow as pa
> from pyarrow import parquet as pq
> 
> data = {"col": pa.array([b"1234" for _ in range(10)])}
> fields = [("col", pa.binary())]
> schema = pa.schema(fields)
> table = pa.table(data, schema)
> pq.write_table(table, "test.parquet")
> {code}
> Thanks,



--
This message was sent by Atlassian Jira
(v8.3.2#803003)