Posted to dev@parquet.apache.org by "Uwe L. Korn (JIRA)" <ji...@apache.org> on 2018/04/18 08:10:00 UTC

[jira] [Assigned] (PARQUET-1273) [Python] Error writing to partitioned Parquet dataset

     [ https://issues.apache.org/jira/browse/PARQUET-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe L. Korn reassigned PARQUET-1273:
------------------------------------

    Assignee: Joshua Storck

> [Python] Error writing to partitioned Parquet dataset
> -----------------------------------------------------
>
>                 Key: PARQUET-1273
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1273
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-cpp
>         Environment: Linux (Ubuntu 16.04)
>            Reporter: Robert Dailey
>            Assignee: Joshua Storck
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: cpp-1.5.0
>
>         Attachments: ARROW-1938-test-data.csv.gz, ARROW-1938.py, pyarrow_dataset_error.png
>
>
> I receive the following error after upgrading to pyarrow 0.8.0 when writing to a dataset:
> * ArrowIOError: Column 3 had 187374 while previous column had 10000
> The command was:
>     write_table_values = {'row_group_size': 10000}
>     pq.write_to_dataset(pa.Table.from_pandas(df, preserve_index=True),
>                         '/logs/parsed/test',
>                         partition_cols=['Product', 'year', 'month', 'day', 'hour'],
>                         **write_table_values)
> I've also tried write_table_values = {'chunk_size': 10000} and received the same error.
> This same command works in version 0.7.1. I am still troubleshooting the problem, but wanted to submit a ticket in the meantime.
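
For readers trying to reproduce this, a minimal self-contained sketch of the failing call follows. The DataFrame contents below are hypothetical stand-ins (the reporter's actual data is in the attached ARROW-1938-test-data.csv.gz); only the write_to_dataset() call and its arguments are taken from the report, and the mismatch presumably needs more rows than a single 10000-row row group.

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical stand-in for the reporter's data: enough rows that a
    # partition file spans several 10000-row row groups.
    n = 25000
    df = pd.DataFrame({
        'Product': ['widget'] * n,
        'year': [2018] * n,
        'month': [4] * n,
        'day': [18] * n,
        'hour': [8] * n,
        'value': range(n),
    })

    # The call from the report; extra keyword arguments such as
    # row_group_size are forwarded to the per-partition write_table() calls.
    write_table_values = {'row_group_size': 10000}
    pq.write_to_dataset(pa.Table.from_pandas(df, preserve_index=True),
                        '/logs/parsed/test',
                        partition_cols=['Product', 'year', 'month', 'day', 'hour'],
                        **write_table_values)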



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)