Posted to dev@toree.apache.org by "Josiah Samuel Sathiadass (JIRA)" <ji...@apache.org> on 2017/07/14 17:32:00 UTC

[jira] [Commented] (TOREE-424) ClassCastException on Dataset with case class

    [ https://issues.apache.org/jira/browse/TOREE-424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087655#comment-16087655 ] 

Josiah Samuel Sathiadass commented on TOREE-424:
------------------------------------------------

The ClassCastException got resolved after I did a fresh clone and built the binaries.

The link from which I installed Toree has an outdated dev version (https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz).

I just need to confirm whether support for implicits inclusion is available, as the following code still fails:

import spark.implicits._

I'm currently managing with the workaround.
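
For reference, a minimal sketch of one possible workaround, assuming the kernel already provides a spark SparkSession and that the goal is simply to avoid relying on import spark.implicits._ for encoder resolution; whether this also sidesteps the ClassCastException under Toree is not confirmed here:

import org.apache.spark.sql.{Encoder, Encoders}

// Case class from the original reproduction.
case class DataPoint(element: Long)

// Build the encoder explicitly instead of importing spark.implicits._.
val dataPointEncoder: Encoder[DataPoint] = Encoders.product[DataPoint]

// Pass the encoder to map explicitly so no implicit resolution is needed.
val ds = spark.range(0, 10, 1, 1).map(x => DataPoint(x))(dataPointEncoder)
ds.collect().foreach(println)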

> ClassCastException on Dataset with case class 
> ----------------------------------------------
>
>                 Key: TOREE-424
>                 URL: https://issues.apache.org/jira/browse/TOREE-424
>             Project: TOREE
>          Issue Type: Bug
>          Components: Kernel
>    Affects Versions: 0.2.0
>         Environment: ppcle64
>            Reporter: Josiah Samuel Sathiadass
>         Attachments: Screen Shot 2017-07-14 at 11.39.22 AM.png
>
>
> When we tried to use Jupyter Notebook with the Apache Toree kernel, we couldn't get this working for Dataset, especially with a "case class", as it throws a *ClassCastException* as follows:
> {{Name: org.apache.spark.SparkException
> Message: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.ClassCastException: $line45.$read$$iw$$iw$DataPoint cannot be cast to $line45.$read$$iw$$iw$DataPoint
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)}}
> The commands we issued are as follows:
> {{import org.apache.spark.sql.SparkSession
> val sc = SparkSession.builder.getOrCreate()
> import sc.implicits._
> import sc.sqlContext.implicits._
> case class DataPoint(element: Long)
> val ds=spark.range(0,10,1,1).map(x => DataPoint(x))
> ds.collect().foreach(println)}}
> We were using the latest version of Toree, which has support for Spark 2.0.
> pip install https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)