Posted to dev@arrow.apache.org by "Tsvika Shapira (Jira)" <ji...@apache.org> on 2020/01/28 17:10:00 UTC

[jira] [Created] (ARROW-7706) saving a dataframe to the same partitioned location silently doubles the data

Tsvika Shapira created ARROW-7706:
-------------------------------------

             Summary: saving a dataframe to the same partitioned location silently doubles the data
                 Key: ARROW-7706
                 URL: https://issues.apache.org/jira/browse/ARROW-7706
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 0.15.1
            Reporter: Tsvika Shapira


When a user saves a dataframe:
{code:python}
df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')
{code}
it will create sub-directories named {{col_a=val1}}, {{col_a=val2}} in {{/tmp/table}}, one per distinct value of {{col_a}}. Each of them will contain one or more parquet files with random filenames.
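For reference, a minimal first write looks like this ({{df1}} here is a hypothetical example frame; the behavior applies to any dataframe with a {{col_a}} column):
{code:python}
import pandas as pd

# Hypothetical example frame, for illustration only.
df1 = pd.DataFrame({'col_a': ['val1', 'val1', 'val2'], 'x': [1, 2, 3]})
df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')

# Resulting layout (filenames are randomly generated):
# /tmp/table/col_a=val1/<random>.parquet
# /tmp/table/col_a=val2/<random>.parquet
{code}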

If a user runs the same command again, the code will reuse the existing sub-directories but write new files with different random filenames alongside the old ones. As a result, any data loaded from this folder will be wrong: each row will be present twice.

For example, when using
{code:python}
df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')  # second write to the same path

df2 = pd.read_parquet('/tmp/table', engine='pyarrow')
assert len(df1) == len(df2)  # fails: df2 has twice as many rows
{code}
This is a subtle corruption of the data that can easily pass unnoticed.
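As a user-side workaround, the destination can be cleared before each full rewrite; a minimal sketch (assuming the whole dataset is being rewritten, not appended to):
{code:python}
import shutil

# Remove any previous output so stale part files cannot double the data.
shutil.rmtree('/tmp/table', ignore_errors=True)
df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')
{code}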


I would expect the code to prevent the user from using a non-empty destination as a partitioned target. An overwrite flag could also be useful.
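To sketch the suggested behavior, a hypothetical wrapper ({{to_parquet_partitioned}} is not an existing API) could look like:
{code:python}
import os
import shutil

def to_parquet_partitioned(df, path, partition_cols, overwrite=False):
    """Hypothetical helper: refuse a non-empty destination unless
    the caller explicitly asks to overwrite it."""
    if os.path.isdir(path) and os.listdir(path):
        if not overwrite:
            raise FileExistsError(
                f"{path} is not empty; pass overwrite=True to replace it")
        shutil.rmtree(path)
    df.to_parquet(path, partition_cols=partition_cols, engine='pyarrow')
{code}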



--
This message was sent by Atlassian Jira
(v8.3.4#803005)