Posted to user@spark.apache.org by Andrew Holway <an...@otternetworks.de> on 2016/02/06 17:19:58 UTC

Writing to jdbc database from SparkR (1.5.2)

I'm managing to read data via JDBC using the following, but I can't work out
how to write something back to the database.


df <- read.df(sqlContext, source="jdbc",
url="jdbc:mysql://hostname:3306?user=user&password=pass",
dbtable="database.table")


Does this functionality exist in 1.5.2?


Thanks,


Andrew

RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by Felix Cheung <fe...@hotmail.com>.
Correct :)



    _____________________________
From: Sun, Rui <ru...@intel.com>
Sent: Sunday, February 7, 2016 5:19 AM
Subject: RE: Fwd: Writing to jdbc database from SparkR (1.5.2)
To:  <de...@spark.apache.org>, Felix Cheung <fe...@hotmail.com>, Andrew Holway <an...@otternetworks.de>


                     

This should be solved by your pending PR https://github.com/apache/spark/pull/10480, right?

RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by "Sun, Rui" <ru...@intel.com>.
This should be solved by your pending PR https://github.com/apache/spark/pull/10480, right?
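For context, the pending PR above proposes exposing JDBC writes directly in the SparkR API. Assuming it lands as proposed, the call would look roughly like the sketch below; the function name and signature are taken from the proposal and may differ in a released version.

```r
# Sketch of the write.jdbc() API proposed in the pending PR.
# Not available in SparkR 1.5.2; signature is an assumption, not a released API.
write.jdbc(df, "jdbc:mysql://hostname:3306?user=user&password=pass",
           tableName = "db.table", mode = "append")
```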

From: Felix Cheung [mailto:felixcheung_m@hotmail.com]
Sent: Sunday, February 7, 2016 8:50 PM
To: Sun, Rui <ru...@intel.com>; Andrew Holway <an...@otternetworks.de>; dev@spark.apache.org
Subject: RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

I mean not exposed from the SparkR API.
Calling it from R without a SparkR API would require either a serializer change or a JVM wrapper function.


RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by Felix Cheung <fe...@hotmail.com>.
I mean not exposed from the SparkR API.
Calling it from R without a SparkR API would require either a serializer change or a JVM wrapper function.



On Sun, Feb 7, 2016 at 4:47 AM -0800, "Felix Cheung" <fe...@hotmail.com> wrote:

That does but it's a bit hard to call from R since it is not exposed.

RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by Felix Cheung <fe...@hotmail.com>.
That does but it's a bit hard to call from R since it is not exposed.

On Sat, Feb 6, 2016 at 11:57 PM -0800, "Sun, Rui" <ru...@intel.com> wrote:

DataFrameWriter.jdbc() does not work?

RE: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by "Sun, Rui" <ru...@intel.com>.
DataFrameWriter.jdbc() does not work?

From: Felix Cheung [mailto:felixcheung_m@hotmail.com]
Sent: Sunday, February 7, 2016 9:54 AM
To: Andrew Holway <an...@otternetworks.de>; dev@spark.apache.org
Subject: Re: Fwd: Writing to jdbc database from SparkR (1.5.2)

Unfortunately I couldn't find a simple workaround. It seems to be an issue with DataFrameWriter.save(), which does not work with the jdbc source/format.

For instance, this does not work in Scala either:

df1.write.format("jdbc").mode("overwrite").option("url", "jdbc:mysql://something.rds.amazonaws.com:3306?user=user&password=password").option("dbtable", "table").save()

For Spark 1.5.x, it seems the best option would be to write a JVM wrapper and call it from R.



Re: Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by Felix Cheung <fe...@hotmail.com>.
Unfortunately I couldn't find a simple workaround. It seems to be an issue with DataFrameWriter.save(), which does not work with the jdbc source/format.

For instance, this does not work in Scala either:

df1.write.format("jdbc").mode("overwrite").option("url", "jdbc:mysql://something.rds.amazonaws.com:3306?user=user&password=password").option("dbtable", "table").save()

For Spark 1.5.x, it seems the best option would be to write a JVM wrapper and call it from R.
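Such a JVM wrapper can also be approximated from the R side using SparkR's internal JVM bridge. The following is a minimal sketch, not a supported API: `SparkR:::callJMethod`, `SparkR:::newJObject`, and the `df@sdf` slot are private SparkR 1.5.x internals and may change between releases, and the function name `writeJdbc` is made up for illustration.

```r
# Hedged sketch: call DataFrameWriter.jdbc() on the JVM DataFrame behind a
# SparkR DataFrame. Uses unsupported SparkR internals; assumes Spark 1.5.x.
writeJdbc <- function(df, url, table, mode = "append") {
  writer <- SparkR:::callJMethod(df@sdf, "write")       # DataFrame.write()
  writer <- SparkR:::callJMethod(writer, "mode", mode)  # DataFrameWriter.mode(String)
  props  <- SparkR:::newJObject("java.util.Properties") # empty connection properties
  invisible(SparkR:::callJMethod(writer, "jdbc", url, table, props))
}

# Hypothetical usage (connection details are placeholders):
# writeJdbc(fooframe, "jdbc:mysql://hostname:3306?user=user&password=pass", "db.table")
```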

    _____________________________
From: Andrew Holway <an...@otternetworks.de>
Sent: Saturday, February 6, 2016 11:22 AM
Subject: Fwd: Writing to jdbc database from SparkR (1.5.2)
To:  <de...@spark.apache.org>



Fwd: Writing to jdbc database from SparkR (1.5.2)

Posted by Andrew Holway <an...@otternetworks.de>.
Hi,

I have a thread on user@spark.apache.org but I think this might require
developer attention.

I'm reading data from a database: This is working well.

> df <- read.df(sqlContext, source="jdbc", url="jdbc:mysql://database.foo.eu-west-1.rds.amazonaws.com:3306?user=user&password=pass")

When I try and write something back to the DB I see this following error:

> write.df(fooframe, path="NULL", source="jdbc", url="jdbc:mysql://database.foo.eu-west-1.rds.amazonaws.com:3306?user=user&password=pass", dbtable="db.table", mode="append")


16/02/06 19:05:43 ERROR RBackendHandler: save on 2 failed

Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :

  java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select.
  at scala.sys.package$.error(package.scala:27)
  at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:200)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
  at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1855)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:132)
  at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:79)
  at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
  at io.netty.channel.SimpleChannelIn


Any ideas on a workaround?


Thanks,


Andrew

Re: Writing to jdbc database from SparkR (1.5.2)

Posted by Andrew Holway <an...@otternetworks.de>.
>
> df <- read.df(sqlContext, source="jdbc",
> url="jdbc:mysql://hostname:3306?user=user&password=pass",
> dbtable="database.table")
>

I got a bit further but am now getting the following error. This error is
being thrown without the database being touched. I tested this by making
the database unavailable.

> write.df(fooframe, path="NULL", source="jdbc", url="jdbc:mysql://database.foo.eu-west-1.rds.amazonaws.com:3306?user=user&password=pass", dbtable="db.table", mode="append")

16/02/06 19:05:43 ERROR RBackendHandler: save on 2 failed

Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :

  java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select.
  at scala.sys.package$.error(package.scala:27)
  at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:200)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
  at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1855)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:132)
  at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:79)
  at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
  at io.netty.channel.SimpleChannelIn