Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/10/05 00:35:20 UTC
[jira] [Commented] (SPARK-10364) Support Parquet logical type TIMESTAMP_MILLIS
[ https://issues.apache.org/jira/browse/SPARK-10364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15547189#comment-15547189 ]
Apache Spark commented on SPARK-10364:
--------------------------------------
User 'dilipbiswal' has created a pull request for this issue:
https://github.com/apache/spark/pull/15332
> Support Parquet logical type TIMESTAMP_MILLIS
> ---------------------------------------------
>
> Key: SPARK-10364
> URL: https://issues.apache.org/jira/browse/SPARK-10364
> Project: Spark
> Issue Type: Sub-task
> Components: SQL
> Affects Versions: 1.5.0
> Reporter: Cheng Lian
>
> The {{TimestampType}} in Spark SQL has microsecond precision. Ideally, we should convert Spark SQL timestamp values into Parquet {{TIMESTAMP_MICROS}}, but unfortunately parquet-mr doesn't support it yet.
> For the read path, we should be able to read {{TIMESTAMP_MILLIS}} Parquet values and pad the read values with a zero microsecond part.
> For the write path, we currently write timestamps as {{INT96}}, similar to Impala and Hive. One alternative is to add a separate SQL option that lets users write Spark SQL timestamp values as {{TIMESTAMP_MILLIS}}. Of course, in this way the microsecond part will be truncated.
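The read and write paths described above both reduce to a unit conversion between milliseconds and microseconds since the epoch. A minimal Scala sketch of the two directions (the object and method names are illustrative, not actual Spark APIs):

```scala
// Illustrative sketch of the conversions described in the issue; not Spark code.
object TimestampMillisConversion {
  // Read path: a Parquet TIMESTAMP_MILLIS value is padded with a zero
  // microsecond part to match Spark SQL's microsecond-precision TimestampType.
  def millisToMicros(millis: Long): Long = millis * 1000L

  // Write path: the microsecond part is truncated when writing TIMESTAMP_MILLIS.
  // floorDiv (rather than /) keeps truncation consistent for pre-epoch
  // (negative) timestamps.
  def microsToMillis(micros: Long): Long = Math.floorDiv(micros, 1000L)
}
```

Note that the write-path conversion is lossy by design: any sub-millisecond component of the timestamp is dropped, which is why the issue proposes gating it behind an explicit SQL option.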
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org