Posted to jira@arrow.apache.org by "Lance Dacey (Jira)" <ji...@apache.org> on 2021/07/06 17:42:00 UTC

[jira] [Issue Comment Deleted] (ARROW-13074) [Python] Start with deprecating ParquetDataset custom attributes

     [ https://issues.apache.org/jira/browse/ARROW-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lance Dacey updated ARROW-13074:
--------------------------------
    Comment: was deleted

(was: I have run into a few issues with basename_template:

 

1) If I run tasks in parallel (for example, Airflow downloads data from various SQL servers and writes to the same partitions), then there is a chance of overwriting existing data (two tasks both writing part-0.parquet).

2) If I make the basename_template unique, then I can end up with duplicate data inside my partitions because I am not overwriting what is already there.

 

The way I have been organizing this so far is to use two datasets:

 

*Dataset A*:
 * UUID filenames, so everything is unique. This most likely has duplicate values, and most certainly will have old versions of rows (based on an updated_at timestamp)
 * This normally has a lot of files per partition since I download data every 30-60 minutes in many cases (a sketch of this write follows below)
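
Roughly, the Dataset A write looks like the sketch below (the paths, schema, and column names are placeholders, not my real ones). A unique basename_template per task avoids the part-0.parquet collision from point 1, but it is also exactly what produces the duplicates from point 2:

{code:python}
import uuid

import pyarrow as pa
import pyarrow.dataset as ds

# rows downloaded from one SQL server (placeholder schema)
table = pa.table({
    "id": [1, 2],
    "updated_at": ["2021-07-06T10:00", "2021-07-06T10:30"],
    "date_id": [20210706, 20210706],
})

# a unique template per task means parallel writers never collide,
# but older files in the partition are left behind (duplicates)
ds.write_dataset(
    table,
    base_dir="path/to/dataset_a",
    format="parquet",
    partitioning=ds.partitioning(pa.schema([("date_id", pa.int64())]), flavor="hive"),
    basename_template=f"{uuid.uuid4().hex}-{{i}}.parquet",
){code}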

*Dataset B:*
 * Reads from Dataset A, sorts, drops duplicates, and then resaves using a partition_filename_cb (sketched after this list)

{code:python}
use_legacy_dataset=True,
partition_filename_cb=lambda x: str(x[-1]) + ".parquet",{code}
 * I normally partition by date_id, so each partition is something like
{code}
path/date_id=20210706/20210706.parquet{code}

 * This allows me to have a single file per partition which has the final version of each row with no duplicates. Our visualization tool (Power BI in this case) connects to these fragments directly
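
The compaction from Dataset A to Dataset B is roughly the sketch below (the id/updated_at sort and dedupe keys are just examples; the real columns depend on the source table):

{code:python}
import pyarrow as pa
import pyarrow.parquet as pq

# read everything written so far and keep only the latest version of each row
df = pq.read_table("path/to/dataset_a").to_pandas()
df = df.sort_values("updated_at").drop_duplicates(subset=["id"], keep="last")

# rewrite as a single file per partition, named after the partition value
pq.write_to_dataset(
    pa.Table.from_pandas(df, preserve_index=False),
    root_path="path/to/dataset_b",
    partition_cols=["date_id"],
    use_legacy_dataset=True,
    partition_filename_cb=lambda x: str(x[-1]) + ".parquet",
){code}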

 

I think I might be able to use basename_template if I were careful and made sure that I did not write data in parallel, so the part-0.parquet file would be overwritten each time. Or perhaps I could list the files in that partition and delete them before saving new data (risky if another process might be using those files at that time).
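
The delete-first variant would be something like this sketch (local filesystem and a placeholder partition path), with the caveat above about concurrent readers:

{code:python}
from pyarrow import fs

local = fs.LocalFileSystem()

# remove the old files before writing the refreshed data
# (risky if another process is reading this partition at the same time)
local.delete_dir_contents("path/date_id=20210706"){code}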

 

 
 )

> [Python] Start with deprecating ParquetDataset custom attributes
> ----------------------------------------------------------------
>
>                 Key: ARROW-13074
>                 URL: https://issues.apache.org/jira/browse/ARROW-13074
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>            Reporter: Joris Van den Bossche
>            Assignee: Joris Van den Bossche
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 5.0.0
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> As a first step for ARROW-9720, we should start with deprecating attributes/methods of {{pq.ParquetDataset}} that we will definitely not keep or that conflict with the "dataset API". 
> I am thinking of the {{pieces}} attribute (and the {{ParquetDatasetPiece}} class) and the {{partitions}} attribute (and the {{ParquetPartitions}} class). 
> In addition, some of the keywords are also exposed as properties (memory_map, read_dictionary, buffer_size, fs), and could be deprecated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)