Posted to jira@arrow.apache.org by "Andy Douglas (Jira)" <ji...@apache.org> on 2021/01/27 09:37:00 UTC

[jira] [Comment Edited] (ARROW-11388) [Python] Dataset Timezone Handling

    [ https://issues.apache.org/jira/browse/ARROW-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272711#comment-17272711 ] 

Andy Douglas edited comment on ARROW-11388 at 1/27/21, 9:36 AM:
----------------------------------------------------------------

Thanks for your response [~jorisvandenbossche], that makes sense and fits with what I'm seeing.

Basically I have a small Python library that wraps pyarrow datasets to provide a convenient way of accessing multiple datasets and exposing them via pandas. One of the things I want to be able to do is define schemas for all datasets upfront in something like a YAML file. The schema can then be applied/checked consistently on write/read, avoiding issues like numerical columns being typed based on their contents and therefore sometimes ending up as integers and other times as floats. I initially tried to do this using pyarrow schemas; however, as you mention above, the schema alone is not enough to restore a pandas DataFrame that contains both index and timezone info.
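
To make that concrete, here's roughly what I had in mind for the YAML-driven schemas (just a sketch; the YAML layout and the field_from_spec helper are placeholders, not actual library code):

{code:python}
import pyarrow as pa
import yaml

# Hypothetical YAML describing one dataset's columns (names/types are placeholders)
schema_yaml = """
timestamp: {type: timestamp, unit: us, tz: US/Eastern}
x: {type: int64}
"""

def field_from_spec(name, spec):
    # build a pyarrow field from the YAML column description
    if spec["type"] == "timestamp":
        return pa.field(name, pa.timestamp(spec["unit"], tz=spec.get("tz")))
    return pa.field(name, pa.type_for_alias(spec["type"]))

spec = yaml.safe_load(schema_yaml)
schema = pa.schema([field_from_spec(name, s) for name, s in spec.items()])
print(schema)
# timestamp: timestamp[us, tz=US/Eastern]
# x: int64
#
# The schema carries the timezone, but not the pandas metadata
# (e.g. which column is the index), so restoring the original
# DataFrame still needs that information from somewhere else.
{code}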

Do you have any suggestions for how I would handle the above? Would you suggest doing the schema checking within the library and not passing the schema parameter on pyarrow dataset read/write calls?
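
For the second option, I imagine something like the following (just a sketch of the idea, not tested; the expected schema would in practice come from the YAML config):

{code:python}
import pyarrow as pa
import pyarrow.dataset as ds

# The schema the library expects (placeholder values)
expected = pa.schema([
    pa.field("timestamp", pa.timestamp("us", tz="US/Eastern")),
    pa.field("x", pa.int64()),
])

# Read without passing schema= to ds.dataset, then check/cast inside the library
dataset = ds.dataset("test_dir", format="parquet")
table = dataset.to_table()

if not table.schema.equals(expected, check_metadata=False):
    # reorder columns to the expected order, then cast (raises if unsafe)
    table = table.select(expected.names).cast(expected)

df = table.to_pandas()
{code}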

Separately, when writing indexed pandas DataFrames I also see an issue where the index column is duplicated in the pandas metadata without the timezone information being added. I'll raise a separate issue for this.


> [Python] Dataset Timezone Handling
> ----------------------------------
>
>                 Key: ARROW-11388
>                 URL: https://issues.apache.org/jira/browse/ARROW-11388
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 2.0.0, 3.0.0
>            Reporter: Andy Douglas
>            Priority: Minor
>
> I'm trying to write a pandas DataFrame with a timezone-aware DatetimeIndex to a pyarrow dataset, but the timezone information doesn't seem to be written (apart from in the pandas metadata).
>  
> For example:
>  
> {code:python}
> import os
> from pathlib import Path
>
> import numpy as np
> import pandas as pd
> import pyarrow as pa
> import pyarrow.dataset as ds
> import pyarrow.parquet as pq
>
> # I've tried with both v2.0 and v3.0 today
> print(pa.__version__)
>
> # create a dummy dataframe with a datetime index containing tz info
> df = pd.DataFrame(
>     dict(
>         timestamp=pd.date_range("2021-01-01", freq="1T", periods=100, tz="US/Eastern"),
>         x=np.arange(100),
>     )
> ).set_index("timestamp")
>
> test_dir = Path("test_dir")
> table = pa.Table.from_pandas(df)
> schema = table.schema
> print(schema)
> print(schema.pandas_metadata)
>
> # warning - creates dir in cwd
> pq.write_to_dataset(table, test_dir)
>
> # timestamp column is us and UTC
> print(pq.ParquetFile(test_dir / os.listdir(test_dir)[0]).read())
>
> # create dataset using schema from earlier
> dataset = ds.dataset(test_dir, format="parquet", schema=schema)
>
> # doesn't work
> dataset.to_table()
> {code}
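>
> For reference, the schema actually stored in the written file can be inspected directly (a small side check, assuming a single file was written):
>
> {code:python}
> import os
> from pathlib import Path
> import pyarrow.parquet as pq
>
> test_dir = Path("test_dir")
>
> # schema stored in the file: timestamp comes back as timestamp[us, tz=UTC],
> # while the original US/Eastern tz only survives in the pandas metadata
> stored = pq.read_schema(test_dir / os.listdir(test_dir)[0])
> print(stored)
> print(stored.pandas_metadata)
> {code}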
>  
>  
> Is this a bug or am I missing something?
> Thanks
> Andy
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)