Posted to jira@arrow.apache.org by "Lance Dacey (Jira)" <ji...@apache.org> on 2021/11/30 01:46:00 UTC

[jira] [Commented] (ARROW-12358) [C++][Python][R][Dataset] Control overwriting vs appending when writing to existing dataset

    [ https://issues.apache.org/jira/browse/ARROW-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17450796#comment-17450796 ] 

Lance Dacey commented on ARROW-12358:
-------------------------------------

I was not able to install 6.0.1 until the latest version of turbodbc supported it. I finally have it up and running, and I see that the `existing_data_behavior` argument has been added.

Is this the proper way to use the "delete_matching" feature? When I tried to make it my default behavior, I got a FileNotFoundError (because the base_dir did not exist at all).

{code:python}
import pyarrow as pa
import pyarrow.dataset as ds

try:
    # First attempt: "error" raises if the target directory already contains data
    ds.write_dataset(
        data=table,
        base_dir=base_dir,
        # ... format, partitioning, etc. ...
        existing_data_behavior="error",
    )
except pa.lib.ArrowInvalid:
    # Data already exists: delete and rewrite only the partitions present in table
    ds.write_dataset(
        data=table,
        base_dir=base_dir,
        # ... format, partitioning, etc. ...
        existing_data_behavior="delete_matching",
    )
{code}
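
Alternatively, if the FileNotFoundError only happens because the base_dir is missing, I could probably create the directory up front and always use "delete_matching". A sketch, assuming a local filesystem; base_dir and partitioning here are placeholders:

{code:python}
import pyarrow.dataset as ds
import pyarrow.fs as fs

# Ensure the target directory exists so "delete_matching" has something to scan
fs.LocalFileSystem().create_dir(base_dir, recursive=True)

ds.write_dataset(
    data=table,
    base_dir=base_dir,
    format="parquet",
    partitioning=partitioning,
    existing_data_behavior="delete_matching",
)
{code}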


I created a dataset using my old method (`use_legacy_dataset=True` with a `partition_filename_cb` to overwrite partitions) and the output matched the new "delete_matching" dataset, so I believe I can completely retire the use_legacy_dataset code now. Really amazing, thank you.
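
Roughly, the old method looked like this (the root path, partition column, and filename scheme below are just placeholders, not the exact code from my pipeline):

{code:python}
import pyarrow.parquet as pq

# Legacy writer: a fixed filename per partition means re-running the job
# overwrites each partition's file instead of appending a new one
pq.write_to_dataset(
    table,
    root_path="dataset_root",
    partition_cols=["date"],
    partition_filename_cb=lambda keys: "-".join(map(str, keys)) + ".parquet",
)
{code}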


> [C++][Python][R][Dataset] Control overwriting vs appending when writing to existing dataset
> -------------------------------------------------------------------------------------------
>
>                 Key: ARROW-12358
>                 URL: https://issues.apache.org/jira/browse/ARROW-12358
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++
>            Reporter: Joris Van den Bossche
>            Priority: Major
>              Labels: dataset
>             Fix For: 7.0.0
>
>
> Currently, the dataset writing (eg with {{pyarrow.dataset.write_dataset}}) uses a fixed filename template ({{"part\{i\}.ext"}}). This means that when you are writing to an existing dataset, you de facto overwrite previous data when using this default template.
> There is some discussion in ARROW-10695 about how the user can avoid this by ensuring the file names are unique (the user can specify the {{basename_template}} to be something unique). There is also ARROW-7706 about silently doubling data (so _not_ overwriting existing data) with the legacy {{parquet.write_to_dataset}} implementation. 
> It could be good to have a "mode" when writing datasets that controls the different possible behaviours. Erroring when there is pre-existing data in the target directory is maybe the safest default, because both silently appending and silently overwriting can be surprising behaviour depending on your expectations.
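
As a side note on the `basename_template` approach mentioned above: making the template unique per write is one way to get append behaviour. The UUID scheme below is just an illustration, not what ARROW-10695 settled on:

{code:python}
import uuid
import pyarrow.dataset as ds

# A unique template per write makes repeated writes append new files
# rather than collide on "part-0.parquet"; "{i}" is filled in by Arrow.
ds.write_dataset(
    data=table,
    base_dir="dataset_root",
    format="parquet",
    basename_template=f"part-{uuid.uuid4().hex}-{{i}}.parquet",
)
{code}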



--
This message was sent by Atlassian Jira
(v8.20.1#820001)