Posted to issues@spark.apache.org by "Minh Thai (JIRA)" <ji...@apache.org> on 2018/08/25 09:10:00 UTC
[jira] [Issue Comment Deleted] (SPARK-17368) Scala value classes create encoder problems and break at runtime
[ https://issues.apache.org/jira/browse/SPARK-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Minh Thai updated SPARK-17368:
------------------------------
Comment: was deleted
(was: [~jodersky] I know that this is an old ticket, but I still want to give some comments on making an encoder for value classes. Even today, there is no way to write a type constraint that targets value classes specifically. However, I think we can introduce a [universal trait|https://docs.scala-lang.org/overviews/core/value-classes.html] called {{OpaqueValue}}^1^ to be used as an upper type bound in the encoder. This means:
- Any user-defined value class has to mix in {{OpaqueValue}}
- An encoder can be created to target those value classes.
{code:java}
trait OpaqueValue extends Any // universal trait, so value classes can mix it in
// hypothetical encoder that targets only value classes mixing in OpaqueValue
implicit def newValueClassEncoder[T <: Product with OpaqueValue : TypeTag]: Encoder[T] = ???
case class Id(value: Int) extends AnyVal with OpaqueValue // example user-defined value class
{code}
Tested on my machine with Spark 2.1.0 and Scala 2.11.12, this does not clash with the existing encoder for case classes:
{code:java}
implicit def newProductEncoder[T <: Product : TypeTag]: Encoder[T] = Encoders.product[T]
{code}
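For illustration, here is a minimal, self-contained sketch of why the two implicits can coexist. It uses a stand-in {{Enc}} type class and made-up names ({{productEnc}}, {{valueClassEnc}}) rather than Spark's real {{Encoder}} machinery, and layers them with the usual low-priority-trait pattern so that the value-class variant is preferred whenever both apply:
{code:java}
import scala.reflect.runtime.universe.TypeTag

trait OpaqueValue extends Any

// stand-in for Spark's Encoder, only used to show which implicit gets picked
trait Enc[T] { def describe: String }

trait LowPriorityEncs {
  // analogue of newProductEncoder, kept at lower priority
  implicit def productEnc[T <: Product : TypeTag]: Enc[T] =
    new Enc[T] { def describe = "product encoder" }
}

object Encs extends LowPriorityEncs {
  // analogue of the proposed newValueClassEncoder, wins for value classes
  implicit def valueClassEnc[T <: Product with OpaqueValue : TypeTag]: Enc[T] =
    new Enc[T] { def describe = "value-class encoder" }
}

case class Id(value: Int) extends AnyVal with OpaqueValue

object Demo extends App {
  import Encs._
  println(implicitly[Enc[Id]].describe)            // prints: value-class encoder
  println(implicitly[Enc[(Int, String)]].describe) // prints: product encoder
}
{code}
Making the priority explicit this way also means the resolution does not have to rely on implicit specificity rules alone.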
If this is possible to implement, I think it could solve SPARK-20384 as well.
_(1) the name is inspired by the [Opaque Types|https://docs.scala-lang.org/sips/opaque-types.html] feature of Scala 3_)
> Scala value classes create encoder problems and break at runtime
> ----------------------------------------------------------------
>
> Key: SPARK-17368
> URL: https://issues.apache.org/jira/browse/SPARK-17368
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 1.6.2, 2.0.0
> Environment: JDK 8 on MacOS
> Scala 2.11.8
> Spark 2.0.0
> Reporter: Aris Vlasakakis
> Assignee: Jakob Odersky
> Priority: Major
> Fix For: 2.1.0
>
>
> Using Scala value classes as the inner type for Datasets breaks in Spark 2.0 and 1.6.x.
> This simple Spark 2 application demonstrates that the code compiles but breaks at runtime with the error below. The value class is of course *FeatureId*, as it extends AnyVal.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: Couldn't find v on int
> assertnotnull(input[0, int, true], top level non-flat input object).v AS v#0
> +- assertnotnull(input[0, int, true], top level non-flat input object).v
> +- assertnotnull(input[0, int, true], top level non-flat input object)
> +- input[0, int, true]".
> at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:279)
> at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:421)
> at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:421)
> {noformat}
> Test code for Spark 2.0.0:
> {noformat}
> import org.apache.spark.sql.{Dataset, SparkSession}
>
> object BreakSpark {
>   case class FeatureId(v: Int) extends AnyVal
>
>   def main(args: Array[String]): Unit = {
>     val seq = Seq(FeatureId(1), FeatureId(2), FeatureId(3))
>     val spark = SparkSession.builder.getOrCreate()
>     import spark.implicits._
>     spark.sparkContext.setLogLevel("warn")
>     val ds: Dataset[FeatureId] = spark.createDataset(seq)
>     println(s"BREAK HERE: ${ds.count}")
>   }
> }
> {noformat}
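> Until value classes are supported, a possible interim workaround (a sketch only, reusing the {{seq}} and {{spark}} values from the snippet above; the {{idValues}} name is just illustrative) is to build the Dataset over the wrapped primitive instead of the value class:
> {noformat}
> // workaround sketch: encode the underlying Int, for which a built-in encoder exists
> val idValues: Dataset[Int] = spark.createDataset(seq.map(_.v))
> println(s"count via the Int encoder: ${idValues.count}")
> {noformat}
> Dropping {{extends AnyVal}} from {{FeatureId}} also avoids the failure, since ordinary case classes encode fine.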
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)