Posted to issues@arrow.apache.org by "Joris Van den Bossche (Jira)" <ji...@apache.org> on 2020/02/06 15:28:00 UTC
[jira] [Commented] (ARROW-7782) Losing index information when using write_to_dataset with partition_cols
[ https://issues.apache.org/jira/browse/ARROW-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17031673#comment-17031673 ]
Joris Van den Bossche commented on ARROW-7782:
----------------------------------------------
This might be solved on master (to be released as 0.16 this week):
{code}
In [1]: from pathlib import Path
   ...: import pandas as pd
   ...: from pyarrow import Table
   ...: from pyarrow.parquet import write_to_dataset
   ...: path = Path('.')
   ...: file_name = 'trial_pq.parquet'
   ...: df = pd.DataFrame({"A": [1, 2, 3],
   ...:                    "B": ['a', 'a', 'b']},
   ...:                   index=pd.Index(['a', 'b', 'c'], name='idx'))
   ...:
   ...: table = Table.from_pandas(df)
   ...: write_to_dataset(table, str(path / file_name), partition_cols=['B'],
   ...:                  partition_filename_cb=None, filesystem=None)
   ...:
In [2]: table
Out[2]:
pyarrow.Table
A: int64
B: string
idx: string
metadata
--------
{b'pandas': b'{"index_columns": ["idx"], "column_indexes": [{"name": null, "fi'
b'eld_name": null, "pandas_type": "unicode", "numpy_type": "object'
b'", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "A"'
b', "field_name": "A", "pandas_type": "int64", "numpy_type": "int6'
b'4", "metadata": null}, {"name": "B", "field_name": "B", "pandas_'
b'type": "unicode", "numpy_type": "object", "metadata": null}, {"n'
b'ame": "idx", "field_name": "idx", "pandas_type": "unicode", "num'
b'py_type": "object", "metadata": null}], "creator": {"library": "'
b'pyarrow", "version": "0.15.1.dev736+g46d0b7f47"}, "pandas_versio'
b'n": "1.1.0.dev0+369.ga62dbda20"}'}
In [3]: pd.read_parquet(file_name)
Out[3]:
   A idx  B
0  1   a  a
1  2   b  a
2  3   c  b
{code}
which seems to preserve the "idx" index as a column?
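For anyone still on 0.15.1, a commonly suggested workaround (a sketch, not something from this ticket) is to materialize the index as an ordinary column before converting to an Arrow table, so that write_to_dataset cannot drop it:

```python
import pandas as pd

# Same frame as in the example above: "idx" lives in the index,
# which pyarrow 0.15.1 loses when partitioning.
df = pd.DataFrame({"A": [1, 2, 3], "B": ['a', 'a', 'b']},
                  index=pd.Index(['a', 'b', 'c'], name='idx'))

# reset_index() turns "idx" into a regular data column,
# so it survives the round trip as plain data.
df_flat = df.reset_index()
print(list(df_flat.columns))  # ['idx', 'A', 'B']
```

From here, `Table.from_pandas(df_flat, preserve_index=False)` followed by `write_to_dataset(...)` writes "idx" as data rather than as index metadata; after reading back, `df.set_index('idx')` restores the original shape.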
> Losing index information when using write_to_dataset with partition_cols
> ------------------------------------------------------------------------
>
> Key: ARROW-7782
> URL: https://issues.apache.org/jira/browse/ARROW-7782
> Project: Apache Arrow
> Issue Type: Bug
> Environment: pyarrow==0.15.1
> Reporter: Ludwik Bielczynski
> Priority: Major
>
> One cannot save the index when using {{pyarrow.parquet.write_to_dataset()}} with the {{partition_cols}} argument given. Here is a minimal example which shows the issue:
> {code:python}
> from pathlib import Path
> import pandas as pd
> from pyarrow import Table
> from pyarrow.parquet import write_to_dataset
>
> path = Path('/home/user/trials')
> file_name = 'local_database.parquet'
> df = pd.DataFrame({"A": [1, 2, 3], "B": ['a', 'a', 'b']},
>                   index=pd.Index(['a', 'b', 'c'], name='idx'))
> table = Table.from_pandas(df)
> write_to_dataset(table,
>                  str(path / file_name),
>                  partition_cols=['B'])
> {code}
>
> The issue is rather important for pandas and dask users.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)