Posted to reviews@spark.apache.org by "MaxGekk (via GitHub)" <gi...@apache.org> on 2024/03/20 06:25:16 UTC

Re: [PR] [SPARK-47473][SQL] Fix correctness issue of converting postgres INFINITY timestamps [spark]

MaxGekk commented on code in PR #45599:
URL: https://github.com/apache/spark/pull/45599#discussion_r1531555079


##########
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala:
##########
@@ -454,8 +454,9 @@ class PostgresIntegrationSuite extends DockerJDBCIntegrationSuite {
     val negativeInfinity = row(1).getAs[Timestamp]("timestamp_column")
     val infinitySeq = row(0).getAs[scala.collection.Seq[Timestamp]]("timestamp_array")
     val negativeInfinitySeq = row(1).getAs[scala.collection.Seq[Timestamp]]("timestamp_array")
-    val minTimeStamp = LocalDateTime.of(1, 1, 1, 0, 0, 0).toEpochSecond(ZoneOffset.UTC)
-    val maxTimestamp = LocalDateTime.of(9999, 12, 31, 23, 59, 59).toEpochSecond(ZoneOffset.UTC)
+    val minTimeStamp = LocalDateTime.of(1, 1, 1, 0, 0, 0).toInstant(ZoneOffset.UTC).toEpochMilli
+    val maxTimestamp =
+      LocalDateTime.of(9999, 12, 31, 23, 59, 59, 999999999).toInstant(ZoneOffset.UTC).toEpochMilli
     assert(infinity.getTime == maxTimestamp)

Review Comment:
   It is bad that the test doesn't catch the error. We should use concrete (constant) expected values instead of duplicating the same code in the test and in the implementation.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

