Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/07/17 07:53:20 UTC

[jira] [Updated] (SPARK-12437) Reserved words (like table) throw an error when writing a DataFrame to JDBC

     [ https://issues.apache.org/jira/browse/SPARK-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-12437:
------------------------------
    Target Version/s:   (was: 2.0.0)

> Reserved words (like table) throw an error when writing a DataFrame to JDBC
> --------------------------------------------------------------------------
>
>                 Key: SPARK-12437
>                 URL: https://issues.apache.org/jira/browse/SPARK-12437
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Reynold Xin
>            Assignee: Apache Spark
>              Labels: starter
>
> From: A Spark user
> If a DataFrame column name is a SQL reserved word, the DataFrame cannot be written to a JDBC source. This is somewhat similar to an error found in the Redshift adapter:
> https://github.com/databricks/spark-redshift/issues/80
> I have reproduced this against a MySQL (AWS Aurora) database.
> Steps to reproduce:
> {code}
> val jdbcUrl = "jdbc:mysql://..." // placeholder: MySQL (AWS Aurora) JDBC URL
> val connectionProperties = new java.util.Properties()
> // "diamonds" contains a column named "table" (a SQL reserved word)
> sqlContext.table("diamonds").write.jdbc(jdbcUrl, "diamonds", connectionProperties)
> {code}
> Exception:
> {code}
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table DOUBLE PRECISION , price INTEGER , x DOUBLE PRECISION , y DOUBLE PRECISION' at line 1
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> 	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
> 	at com.mysql.jdbc.Util.getInstance(Util.java:386)
> 	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
> 	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
> 	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
> 	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
> 	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
> 	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
> 	at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
> 	at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2459)
> 	at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
> 	at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
> 	at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:275)
> {code}
> You can work around this by renaming the column on the DataFrame before writing, but ideally Spark should quote the identifier (e.g. with MySQL backticks), which the database allows. Example:
> {code}
> CREATE TABLE `test_table_column` (
>   `id` int(11) DEFAULT NULL,
>   `table` varchar(100) DEFAULT NULL
> ) 
> {code}
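The quoting approach suggested above can be sketched as follows. This is a minimal illustration only: `quoteIdentifier` and `schemaString` are hypothetical helpers written for this sketch, not Spark's actual API, though Spark's JDBC dialects later exposed identifier quoting along these lines.

```scala
// Sketch: quote every column identifier before building the
// CREATE TABLE statement, so reserved words like `table` are safe.
// MySQL quotes identifiers with backticks; an embedded backtick is
// escaped by doubling it.
def quoteIdentifier(colName: String): String =
  "`" + colName.replace("`", "``") + "`"

// Build the column list for a CREATE TABLE statement from
// (name, JDBC type) pairs, quoting each name.
def schemaString(fields: Seq[(String, String)]): String =
  fields
    .map { case (name, dataType) => s"${quoteIdentifier(name)} $dataType" }
    .mkString(", ")

// A column named "table" no longer produces invalid DDL:
// schemaString(Seq(("id", "INTEGER"), ("table", "DOUBLE PRECISION")))
// yields "`id` INTEGER, `table` DOUBLE PRECISION"
```

With quoting applied at DDL-generation time, the rename workaround becomes unnecessary and the generated `CREATE TABLE` matches what the database itself accepts, as in the example above.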



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org