Posted to issues@spark.apache.org by "Aniket Bhatnagar (JIRA)" <ji...@apache.org> on 2016/11/03 10:49:58 UTC

[jira] [Created] (SPARK-18251) RuntimeException: Null value appeared in non-nullable field when holding Option Case Class

Aniket Bhatnagar created SPARK-18251:
----------------------------------------

             Summary: RuntimeException: Null value appeared in non-nullable field when holding Option Case Class
                 Key: SPARK-18251
                 URL: https://issues.apache.org/jira/browse/SPARK-18251
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.0.1
         Environment: OS X
            Reporter: Aniket Bhatnagar


I am running into a runtime exception when a Dataset holds a None value for an Option type whose underlying case class contains a non-nullable field. For instance, if we have the following case class:

case class DataRow(id: Int, value: String)

Then, a Dataset[Option[DataRow]] can only hold Some(DataRow) values and cannot hold None. If it does, the following exception is thrown:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 0.0 failed 1 times, most recent failure: Lost task 6.0 in stage 0.0 (TID 6, localhost): java.lang.RuntimeException: Null value appeared in non-nullable field:
- field (class: "scala.Int", name: "id")
- option value class: "DataSetOptBug.DataRow"
- root class: "scala.Option"
If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (e.g. java.lang.Integer instead of int/scala.Int).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
	at org.apache.spark.scheduler.Task.run(Task.scala:86)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
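
As the exception message itself suggests, the failure can be sidestepped by declaring the primitive field as a nullable type so the encoder can represent the None case. A minimal sketch of that workaround (not verified against the attached program):

// Workaround sketch: declare the Int field as a nullable type, per the
// hint in the exception message above.
case class DataRow(id: Option[Int], value: String)

// Alternatively, box the primitive with a Java wrapper type:
// case class DataRow(id: java.lang.Integer, value: String)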


I am attaching a sample program that can be used to reproduce this bug.
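
The attachment is not inlined in this plain-text view, but a minimal program along the same lines should trigger the failure. This is a sketch only, assuming Spark 2.0.1 with a local master; the object name mirrors the one shown in the error output above:

// Minimal reproduction sketch (illustrative; not the attached program).
// Requires spark-sql 2.0.1 on the classpath.
import org.apache.spark.sql.SparkSession

object DataSetOptBug {

  case class DataRow(id: Int, value: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("DataSetOptBug")
      .getOrCreate()
    import spark.implicits._

    // Encoding Some(...) works; the None element triggers the
    // "Null value appeared in non-nullable field" RuntimeException
    // as soon as an action forces evaluation.
    val ds = spark.createDataset(Seq(Some(DataRow(1, "a")), None))
    println(ds.count())

    spark.stop()
  }
}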



