Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/04/13 09:56:10 UTC

[GitHub] [arrow-rs] tustvold commented on issue #1459: Timestamps with time unit of MICROS or MILLIS are read incorrectly

tustvold commented on issue #1459:
URL: https://github.com/apache/arrow-rs/issues/1459#issuecomment-1097840462

   So digging into this, the issue is that pandas attaches an arrow schema specifying nanosecond precision, whilst writing the following as the parquet column description:
   
   ```
   converted_type: TIMESTAMP_MILLIS,
   logical_type: Some(
       TIMESTAMP(
           TimestampType {
               is_adjusted_to_u_t_c: false,
               unit: MILLIS(
                   MilliSeconds,
               ),
           },
       ),
   ),
   ```
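   
   For anyone who wants to check their own files, here's a rough sketch of how to see both sides of the mismatch with the `parquet` crate (the file name is hypothetical; any pandas/pyarrow-written file with a millisecond timestamp column should do):
   
   ```rust
   use std::fs::File;
   
   use parquet::file::reader::{FileReader, SerializedFileReader};
   use parquet::schema::printer::print_schema;
   
   fn main() -> Result<(), Box<dyn std::error::Error>> {
       // Hypothetical file written by pandas/pyarrow with a millisecond
       // timestamp column.
       let file = File::open("pandas_millis.parquet")?;
       let reader = SerializedFileReader::new(file)?;
       let meta = reader.metadata().file_metadata();
   
       // Parquet's own column description - this is where TIMESTAMP_MILLIS
       // shows up.
       print_schema(&mut std::io::stdout(), meta.schema());
   
       // The writer-embedded arrow schema lives in the key/value metadata
       // under "ARROW:SCHEMA" (base64-encoded IPC) - this is where the
       // conflicting nanosecond precision comes from.
       if let Some(kv) = meta.key_value_metadata() {
           for entry in kv {
               println!("metadata key: {}", entry.key);
           }
       }
   
       Ok(())
   }
   ```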
   
   This is pretty wild because not only do the two schemas disagree, but a `TIMESTAMP_MILLIS` value could overflow if converted to an arrow `TimestampNanosecondArray`, which uses an `i64` to store its values as nanoseconds. I'm not really sure why it does this.
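   
   To make the overflow concern concrete, a quick back-of-the-envelope check (the value is just an illustrative millisecond timestamp, roughly the year 2400):
   
   ```rust
   fn main() {
       // Roughly the year 2400, expressed as milliseconds since the epoch -
       // perfectly representable as TIMESTAMP_MILLIS.
       let millis: i64 = 13_569_465_600_000;
   
       // Converting to nanoseconds for a TimestampNanosecondArray multiplies
       // by 1_000_000; anything past ~2262-04-11 no longer fits in an i64.
       match millis.checked_mul(1_000_000) {
           Some(nanos) => println!("{} ms -> {} ns", millis, nanos),
           None => println!("{} ms overflows i64 nanoseconds", millis),
       }
   }
   ```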
   
   The issue doesn't occur with `fastparquet`, which uses the LogicalType support for nanosecond-precision `i64` timestamps, but it also doesn't write an arrow schema, so...
   
   I think we can work around this, although I need to work out exactly how, but imo this is a bug in pyarrow.
   

