Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/11/22 16:04:54 UTC

[GitHub] [spark] sarutak commented on a change in pull request #34683: [SPARK-37283][SQL][FOLLOWUP] Avoid trying to store a table which contains timestamp_ntz types in Hive compatible format

sarutak commented on a change in pull request #34683:
URL: https://github.com/apache/spark/pull/34683#discussion_r754422263



##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala
##########
@@ -1411,6 +1411,7 @@ object HiveExternalCatalog {
 
   private[spark] def isHiveCompatibleDataType(dt: DataType): Boolean = dt match {
     case _: AnsiIntervalType => false
+    case _: TimestampNTZType => false
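For context, the helper being patched has roughly the following shape. This is a sketch reconstructed from the hunk header and the two visible cases; the recursive handling of nested types and the default case are assumptions, not a verbatim copy of the file.

    // Sketch of HiveExternalCatalog.isHiveCompatibleDataType (assumed shape).
    // Requires the data type classes from org.apache.spark.sql.types.
    // A type the Hive metastore cannot represent makes the whole schema
    // non-Hive-compatible, so Spark falls back to its table-property format.
    private[spark] def isHiveCompatibleDataType(dt: DataType): Boolean = dt match {
      case _: AnsiIntervalType => false
      case _: TimestampNTZType => false   // the case added by this PR
      case st: StructType => st.forall(f => isHiveCompatibleDataType(f.dataType))
      case at: ArrayType => isHiveCompatibleDataType(at.elementType)
      case mt: MapType =>
        isHiveCompatibleDataType(mt.keyType) && isHiveCompatibleDataType(mt.valueType)
      case _ => true
    }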

Review comment:
       > We need to make a decision here. Storing both ltz and ntz as hive timestamp is also an option. In fact, the hive timestamp is ntz, so storing ntz as hive timestamp is actually correct.
   
    Ah, OK, I misunderstood. So if we decide to store both ltz and ntz in the Hive table, this change is not necessary, right?
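
    To make the trade-off concrete, here is a minimal, hypothetical spark-shell sketch (not part of the PR; the table and column names are made up):

        // Hypothetical illustration: a data source table whose schema
        // contains a TIMESTAMP_NTZ column.
        spark.sql("CREATE TABLE ts_demo (id INT, ts TIMESTAMP_NTZ) USING parquet")

        // With the change above, TIMESTAMP_NTZ is treated as not Hive-compatible,
        // so Spark persists the real schema in its own table properties and the
        // Hive side no longer carries the full column types.
        //
        // Under the alternative raised in the quoted comment, both TIMESTAMP (ltz)
        // and TIMESTAMP_NTZ would be written as Hive's timestamp type (which is
        // itself timezone-naive), at the cost of the two Spark types becoming
        // indistinguishable on the Hive side without extra metadata.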




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


