Posted to issues@spark.apache.org by "Sumit (Jira)" <ji...@apache.org> on 2020/01/22 20:26:00 UTC

[jira] [Created] (SPARK-30608) Postgres interval column converts to string and can't be written back to Postgres

Sumit created SPARK-30608:
-----------------------------

             Summary: Postgres interval column converts to string and can't be written back to Postgres
                 Key: SPARK-30608
                 URL: https://issues.apache.org/jira/browse/SPARK-30608
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.4.4
            Reporter: Sumit


If we read an "interval"-typed column from Postgres and try to save it back to Postgres, the write fails: during the read the interval column is converted to a String, and writing that String back raises the following exception:

java.sql.BatchUpdateException: Batch entry 0 INSERT INTO test_table ("dob","dob_time","dob_time_zone","duration") VALUES ('2019-05-29 -04','2016-08-12 10:22:31.100000-04','2016-08-12 13:22:31.100000-04','3 days 10:00:00') was aborted: ERROR: column "duration" is of type interval but expression is of type character varying
 Hint: You will need to rewrite or cast the expression.
 Position: 86 Call getNextException to see other errors in the batch.
 at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:151)
 at org.postgresql.core.ResultHandlerDelegate.handleError(ResultHandlerDelegate.java:45)
 at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2159)
 at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:463)
 at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:794)
 at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1662)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:672)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:834)
 at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:834)
 at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
 at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
 at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
 at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
 at org.apache.spark.scheduler.Task.run(Task.scala:123)
 at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
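For reference, a minimal reproduction sketch (not part of the original report). It assumes a local Postgres database "testdb" with the "test_table" from the error above, whose "duration" column is of type interval; the connection URL and credentials are illustrative.

// Minimal reproduction sketch. Assumes a local Postgres database "testdb"
// with a table "test_table" whose "duration" column is of type interval
// (table name taken from the error above; URL/credentials are illustrative).
import org.apache.spark.sql.SparkSession

object IntervalRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("interval-roundtrip")
      .master("local[*]")
      .getOrCreate()

    val url = "jdbc:postgresql://localhost:5432/testdb"

    // Read: Spark maps the Postgres INTERVAL column to StringType.
    val df = spark.read.format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table")
      .option("user", "postgres")
      .option("password", "postgres")
      .load()

    df.printSchema() // "duration" is reported as string, not interval

    // Write back: the INSERT sends "duration" as character varying, which
    // Postgres rejects with the BatchUpdateException shown above.
    df.write.format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table")
      .option("user", "postgres")
      .option("password", "postgres")
      .mode("append")
      .save()
  }
}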


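A possible workaround (not from the original report; the parameter comes from the PostgreSQL JDBC driver, not Spark) is to append stringtype=unspecified to the JDBC URL, so the driver sends string parameters with an unspecified type and the server casts them to interval itself:

jdbc:postgresql://localhost:5432/testdb?stringtype=unspecified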
