Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/09/26 08:07:21 UTC

[jira] [Assigned] (SPARK-17665) SparkR does not support options in other types consistently with other APIs

     [ https://issues.apache.org/jira/browse/SPARK-17665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17665:
------------------------------------

    Assignee: Apache Spark

> SparkR does not support options in other types consistently with other APIs
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-17665
>                 URL: https://issues.apache.org/jira/browse/SPARK-17665
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Apache Spark
>            Priority: Minor
>
> Currently, SparkR only supports character strings as option values in APIs such as `read.df`/`write.df`.
> It'd be great if they supported other types consistently with the Python/Scala/Java/SQL APIs:
> - Python accepts all types but converts the values to strings (see the sketch after this list)
> - Scala/Java/SQL accept Long/Boolean/String/Double
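> One possible direction, mirroring the Python behaviour, would be to coerce non-character option values to strings on the R side before they are passed to the JVM. The helper below is only an illustrative sketch, not actual SparkR internals:
> {code}
> # Illustrative sketch only (not actual SparkR code): coerce an option value
> # to the character form the underlying data source expects, similar to what
> # the Python API does.
> coerceOptionToString <- function(value) {
>   if (is.logical(value)) {
>     tolower(as.character(value))  # "true"/"false" rather than "TRUE"/"FALSE"
>   } else {
>     as.character(value)
>   }
> }
>
> coerceOptionToString(FALSE)  # "false"
> coerceOptionToString(10L)    # "10"
> coerceOptionToString(0.1)    # "0.1"
> {code}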
> Currently, 
> {code}
> > read.df("text.json", "csv", inferSchema=FALSE)
> {code}
> throws the following exception:
> {code}
> Error in value[[3L]](cond) :
>   Error in invokeJava(isStatic = TRUE, className, methodName, ...): java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String
> 	at org.apache.spark.sql.internal.SessionState$$anonfun$newHadoopConfWithOptions$1.apply(SessionState.scala:59)
> 	at org.apache.spark.sql.internal.SessionState$$anonfun$newHadoopConfWithOptions$1.apply(SessionState.scala:59)
> 	at scala.collection.immutable.Map$Map3.foreach(Map.scala:161)
> 	at org.apache.spark.sql.internal.SessionState.newHadoopConfWithOptions(SessionState.scala:59)
> 	at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.<init>(PartitioningAwareFileCatalog.scala:45)
> 	at org.apache.spark.sql.execution.datasources.ListingFileCatalog.<init>(ListingFileCatalog.scala:45)
> 	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:401)
> 	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
> 	at org.apache.spark.sql.DataFrameReader.lo
> {code}
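> Until non-character options are supported, passing the value as a character string should avoid the cast error above (assuming the logical value is the only problem), for example:
> {code}
> # Workaround sketch (assumes an active SparkR session): pass the option as a
> # character string instead of a logical value
> df <- read.df("text.json", "csv", inferSchema = "false")
> {code}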



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org