Posted to issues@arrow.apache.org by "Ji Xu (JIRA)" <ji...@apache.org> on 2018/10/17 06:27:00 UTC
[jira] [Updated] (ARROW-3538) ability to override the automated assignment of uuid for filenames when writing datasets
[ https://issues.apache.org/jira/browse/ARROW-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ji Xu updated ARROW-3538:
-------------------------
Description:
Say I have a pandas DataFrame {{df}} that I would like to store on disk as a dataset using pyarrow parquet. I would do this:
{code:java}
table = pyarrow.Table.from_pandas(df)
pyarrow.parquet.write_to_dataset(table, root_path=some_path, partition_cols=['a',]){code}
On disk the dataset would look something like this:
{color:#14892c}some_path{color}
{color:#14892c}├── a=1{color}
{color:#14892c}│   └── 4498704937d84fe5abebb3f06515ab2d.parquet{color}
{color:#14892c}└── a=2{color}
{color:#14892c}    └── 8bcfaed8986c4bdba587aaaee532370c.parquet{color}
*Wished Feature:* It would be great if I could somehow override the auto-assignment of the long UUID as the filename during *dataset* writing. My purpose is to be able to overwrite the dataset on disk whenever I have a new version of {{df}}. Currently, if I write the dataset again, another uniquely named \[UUID].parquet file is placed next to the old one, holding the same, redundant data.
was:
Say I have a pandas DataFrame {{df}} that I would like to store on disk as dataset using pyarrow parquet, I would do this:
{{table = pyarrow.Table.from_pandas(df) pyarrow.parquet.write_to_dataset(table, root_path=some_path, partition_cols=['a',]) }}On disk the dataset would look like something like this:
{color:#14892c}some_path{color}
{color:#14892c}├── a=1{color}
{color:#14892c}____├── 4498704937d84fe5abebb3f06515ab2d.parquet{color}
{color:#14892c}├── a=2{color}
{color:#14892c}____├── 8bcfaed8986c4bdba587aaaee532370c.parquet{color}
*Wished Feature:* It'd be great if I can override the auto-assignment of the long UUID as filename somehow during the *dataset* writing. My purpose is to be able to overwrite the dataset on disk when I have a new version of {{df}}. Currently if I try to write the dataset again, another new uniquely named [UUID].parquet file will be placed next to the old one, with the same, redundant data.
> ability to override the automated assignment of uuid for filenames when writing datasets
> ----------------------------------------------------------------------------------------
>
> Key: ARROW-3538
> URL: https://issues.apache.org/jira/browse/ARROW-3538
> Project: Apache Arrow
> Issue Type: Wish
> Reporter: Ji Xu
> Priority: Major
> Labels: features
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)