Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2021/02/08 13:54:09 UTC

[jira] [Updated] (SPARK-33813) JDBC datasource fails when reading spatial datatypes with the MS SQL driver

     [ https://issues.apache.org/jira/browse/SPARK-33813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-33813:
---------------------------------
    Fix Version/s: 3.1.1

> JDBC datasource fails when reading spatial datatypes with the MS SQL driver
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-33813
>                 URL: https://issues.apache.org/jira/browse/SPARK-33813
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Michał Świtakowski
>            Assignee: Kousuke Saruta
>            Priority: Major
>             Fix For: 3.0.2, 3.2.0, 3.1.1, 3.1.2
>
>
> The MS SQL JDBC driver has supported spatial types since version 7.0. The JDBC data source lacks mappings for these types, which results in the exception below. A mapping in MsSqlServerDialect.getCatalystType that maps the -157 and -158 type codes to VARBINARY should address the issue.
>  
> {noformat}
> java.sql.SQLException: Unrecognized SQL type -157
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getCatalystType(JdbcUtils.scala:251)
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$getSchema$1(JdbcUtils.scala:321)
>  at scala.Option.getOrElse(Option.scala:189)
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:321)
>  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:63)
>  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
>  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:364)
>  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:366)
>  at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:355)
>  at scala.Option.getOrElse(Option.scala:189)
>  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:355)
>  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:240)
>  at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:381){noformat}
>  
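> The mapping described above can be sketched as a small standalone function (a hypothetical illustration, not the committed patch; in Spark the actual change would live in MsSqlServerDialect.getCatalystType and return a Catalyst BinaryType, since VARBINARY maps to BinaryType):
> {code:scala}
> object MsSqlSpatialTypeMapping {
>   // Vendor-specific JDBC type codes reported by the MS SQL driver for
>   // spatial columns, per the description above: geometry and geography.
>   val GeometryTypeCode  = -157
>   val GeographyTypeCode = -158
>
>   // Map the spatial type codes to a binary type; return None for any
>   // other code so the generic JDBC mapping can handle it.
>   def catalystTypeFor(sqlType: Int): Option[String] = sqlType match {
>     case GeometryTypeCode | GeographyTypeCode => Some("BinaryType")
>     case _                                    => None
>   }
> }
> {code}
> With such a mapping in place, getCatalystType would no longer fall through to the "Unrecognized SQL type" error for these columns, and spatial values would surface to Spark as raw binary.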



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org