Posted to user@spark.apache.org by Andrés Ivaldi <ia...@gmail.com> on 2016/01/28 21:59:57 UTC

Problems when applying a schema to an RDD

Hello, I'm getting an exception when trying to apply a new schema to an RDD.

I'm reading a JSON file with Databricks spark-csv v1.3.0.


After applying some transformations I have an RDD whose columns are all of String type.

Then, when I try to apply a schema in which one of the fields is an Integer, this
exception is raised:

16/01/28 17:38:14 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 4, localhost): java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
    at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
    at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:41)
    at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getInt(rows.scala:221)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
    at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:354)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
    at scala.collection.AbstractIterator.to(Iterator.scala:1194)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

The code I'm running looks like this:

      import scala.collection.JavaConversions._ // for .toSeq on the java.util.List returned by getList
      import scala.collection.mutable.{ArrayBuffer, WrappedArray}
      import org.apache.spark.sql.Row

      // explode the nested "rows" column so each inner row becomes its own record
      val res = (df.explode("rows", "r") {
          l: WrappedArray[ArrayBuffer[String]] => l.toList
        })
        .select("r")
        .map { m => m.getList[Row](0) }

      val u = res.map { m => Row.fromSeq(m.toSeq) }

      val df1 = df.sqlContext.createDataFrame(u, getScheme(df))
      // if df1.show is called -> the exception is raised
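
To make the problem more concrete, here is a minimal standalone version of what
I think is happening (the "page" column is made up; "ga:pageviews" is the real
one). As far as I can tell, createDataFrame does not cast anything, so a String
value sitting under an IntegerType field only blows up when the rows are
actually read:

      import org.apache.spark.sql.Row
      import org.apache.spark.sql.types._

      // "42" is still a String, but the schema below declares an Integer
      val rdd = sc.parallelize(Seq(Row("home", "42")))
      val schema = StructType(Seq(
        StructField("page", StringType),
        StructField("ga:pageviews", IntegerType)))
      // raises the same ClassCastException once the rows are materialized
      sqlContext.createDataFrame(rdd, schema).show()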


getScheme returns the schema; the last column is IntegerType. If I change it
to StringType and then apply the cast like this, it works:

      df1.select(df1("ga:pageviews").cast(IntegerType)).show
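
I guess the alternative would be to convert the value before building the Row,
so the data already matches the schema. A sketch on the same standalone example
above, assuming the strings are all parseable integers:

      // hypothetical fix: parse the string before createDataFrame,
      // since the schema is not applied as a cast
      val fixed = rdd.map(r => Row(r.getString(0), r.getString(1).toInt))
      sqlContext.createDataFrame(fixed, schema).show() // works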

The order of the fields in the StructType seems to be OK.
I read that earlier versions of spark-csv had a similar issue.
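
For the record, a quick way to eyeball the field order and types (just a sketch
using my getScheme helper from above):

      // print each field's name and type in the generated schema
      getScheme(df).fields.foreach(f => println(s"${f.name}: ${f.dataType}"))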

Any ideas?

Regards!!!

Ing. Ivaldi Andres