Posted to issues@spark.apache.org by "Shivaram Venkataraman (JIRA)" <ji...@apache.org> on 2015/08/20 19:19:47 UTC

[jira] [Updated] (SPARK-9982) SparkR DataFrame fails to return data of Decimal type

     [ https://issues.apache.org/jira/browse/SPARK-9982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shivaram Venkataraman updated SPARK-9982:
-----------------------------------------
    Assignee: Alex Shkurenko

> SparkR DataFrame fails to return data of Decimal type
> -----------------------------------------------------
>
>                 Key: SPARK-9982
>                 URL: https://issues.apache.org/jira/browse/SPARK-9982
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.4.1
>            Reporter: Alex Shkurenko
>            Assignee: Alex Shkurenko
>
> Got an issue similar to https://issues.apache.org/jira/browse/SPARK-8897, but with the Decimal datatype coming from a Postgres DB:
> //Set up SparkR
> >Sys.setenv(SPARK_HOME="/Users/ashkurenko/work/git_repos/spark")
> >Sys.setenv(SPARKR_SUBMIT_ARGS="--driver-class-path ~/Downloads/postgresql-9.4-1201.jdbc4.jar sparkr-shell")
> >.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
> >library(SparkR)
> >sc <- sparkR.init(master="local")
> // Connect to a Postgres DB via JDBC
> >sqlContext <- sparkRSQL.init(sc)
> >sql(sqlContext, "
>     CREATE TEMPORARY TABLE mytable 
>     USING org.apache.spark.sql.jdbc 
>     OPTIONS (url 'jdbc:postgresql://servername:5432/dbname'
>     ,dbtable 'mydbtable'
> )
> ")
> // Try pulling a Decimal column from a table
> >myDataFrame <- sql(sqlContext, "select a_decimal_column from mytable")
> // The schema shows up fine
> >show(myDataFrame)
> DataFrame[a_decimal_column:decimal(10,0)]
> >schema(myDataFrame)
> StructType
> |-name = "a_decimal_column", type = "DecimalType(10,0)", nullable = TRUE
> // ... but pulling data fails:
> >localDF <- collect(myDataFrame)
> Error in as.data.frame.default(x[[i]], optional = TRUE) : 
>   cannot coerce class ""jobj"" to a data.frame



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
