Posted to issues@spark.apache.org by "Deenar Toraskar (JIRA)" <ji...@apache.org> on 2016/01/30 09:58:39 UTC

[jira] [Created] (SPARK-13101) Dataset complex types mapping to DataFrame (element nullability) mismatch

Deenar Toraskar created SPARK-13101:
---------------------------------------

             Summary: Dataset complex types mapping to DataFrame  (element nullability) mismatch
                 Key: SPARK-13101
                 URL: https://issues.apache.org/jira/browse/SPARK-13101
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.6.1
            Reporter: Deenar Toraskar
             Fix For: 1.6.1


There seems to be a regression between 1.6.0 and the 1.6.1 snapshot build. By default, a Scala Seq[Double] is mapped by Spark to an ArrayType with nullable elements:

 |-- valuations: array (nullable = true)
 |    |-- element: double (containsNull = true)
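
The element nullability above is what printSchema reports for the persisted table; a minimal way to check it (a sketch, assuming the "valuations" table created by the code further below):

    sqlContext.table("valuations").printSchema()  // element: double (containsNull = true)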

This could be read back as a Dataset in Spark 1.6.0:

    val df = sqlContext.table("valuations").as[Valuation]

But with Spark 1.6.1 the same call fails with:
    val df = sqlContext.table("valuations").as[Valuation]

org.apache.spark.sql.AnalysisException: cannot resolve 'cast(valuations as array<double>)' due to data type mismatch: cannot cast ArrayType(DoubleType,true) to ArrayType(DoubleType,false);
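
A possible workaround (not from the report, just a sketch, and assuming the stored arrays contain no null elements) is to re-apply a schema with containsNull = false before converting to a Dataset:

    import sqlContext.implicits._
    import org.apache.spark.sql.types.{ArrayType, DoubleType, StructField, StructType}

    val raw = sqlContext.table("valuations")

    // Declare the array elements non-nullable, matching what the
    // Dataset encoder for Valuation expects.
    val adjusted = StructType(raw.schema.map {
      case StructField(name, ArrayType(DoubleType, _), nullable, meta) =>
        StructField(name, ArrayType(DoubleType, containsNull = false), nullable, meta)
      case other => other
    })

    // Rebuild the DataFrame with the adjusted schema, then convert.
    val ds = sqlContext.createDataFrame(raw.rdd, adjusted).as[Valuation]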

Here are the classes I am using:

case class Valuation(tradeId : String,
                     counterparty: String,
                     nettingAgreement: String,
                     wrongWay: Boolean,
                     valuations : Seq[Double], /* one per scenario */
                     timeInterval: Int,
                     jobId: String)  /* used for hdfs partitioning */

import sqlContext.implicits._
import org.apache.spark.sql.SaveMode

val vals : Seq[Valuation] = Seq()
val valsDF = sqlContext.sparkContext.parallelize(vals).toDF
valsDF.write.partitionBy("jobId").mode(SaveMode.Overwrite).saveAsTable("valuations")

Even the following gives the same result:
val valsDF = vals.toDS.toDF
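
For completeness, comparing the schema the encoder derives from the case class with the schema of the saved table makes the nullability mismatch visible (a sketch, assuming the vals and table defined above, with sqlContext.implicits._ in scope):

    // What as[Valuation] expects the table to upcast to.
    vals.toDS().schema.printTreeString()

    // What the persisted table actually reports.
    sqlContext.table("valuations").schema.printTreeString()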



