Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2019/04/25 16:57:03 UTC
[jira] [Assigned] (SPARK-27457) modify bean encoder to support avro objects
[ https://issues.apache.org/jira/browse/SPARK-27457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-27457:
------------------------------------
Assignee: Apache Spark
> modify bean encoder to support avro objects
> -------------------------------------------
>
> Key: SPARK-27457
> URL: https://issues.apache.org/jira/browse/SPARK-27457
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Affects Versions: 2.4.1
> Reporter: Taoufik DACHRAOUI
> Assignee: Apache Spark
> Priority: Major
>
> We modified JavaTypeInference so that it can create encoders for Avro objects.
>
> There are now three proposed solutions for encoding Avro objects: [https://github.com/apache/spark/pull/22878], [https://github.com/apache/spark/pull/24299], and [https://github.com/apache/spark/pull/24367].
> The test we used for Encoders.bean is as follows:
>
> {code:java}
> // Assumes an active SparkSession (`spark`) with its implicits imported,
> // scala.collection.JavaConverters._, and the generated Avro classes
> // AvroExample1, Magic, Money and Currency on the classpath.
> implicit val avroExampleEncoder =
>   Encoders.bean[AvroExample1](classOf[AvroExample1])
>     .asInstanceOf[ExpressionEncoder[AvroExample1]]
>
> val input: AvroExample1 = AvroExample1.newBuilder()
>   .setMyarray(List("Foo", "Bar").asJava)
>   .setMyboolean(true)
>   .setMybytes(java.nio.ByteBuffer.wrap("MyBytes".getBytes()))
>   .setMydouble(2.5)
>   .setMyfixed(new Magic("magic".getBytes))
>   .setMyfloat(25.0F)
>   .setMyint(100)
>   .setMylong(10L)
>   .setMystring("hello")
>   .setMymap(Map(
>     "foo" -> java.lang.Integer.valueOf(1),
>     "bar" -> java.lang.Integer.valueOf(2)).asJava)
>   .setMymoney(Money.newBuilder().setAmount(100.0F).setCurrency(Currency.EUR).build())
>   .build()
>
> // Round-trip through the encoder's internal row representation.
> val row: InternalRow = avroExampleEncoder.toRow(input)
> val output: AvroExample1 = avroExampleEncoder.resolveAndBind().fromRow(row)
>
> // Round-trip through a Dataset and the built-in avro data source.
> val ds: Dataset[AvroExample1] = List(input).toDS()
> println(ds.schema)
> println(ds.collect().toList)
> ds.write.format("avro").save("example1")
> val fooDF = spark.read.format("avro").load("example1")
> val fooDS = fooDF.as[AvroExample1]
> println(fooDS.collect().toList)
>
> {code}
> The change is available in the branch [https://github.com/mazeboard/spark/tree/bean-encoder],
> *with 105 additions and 39 deletions (excluding test code).*
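For context on why `Encoders.bean` needs modification here: Spark's `JavaTypeInference` discovers fields via standard JavaBean introspection, and Avro-generated classes expose extra accessors (such as `getSchema()`) beyond their data fields. The sketch below is not Spark code; it is a self-contained illustration using `java.beans.Introspector` on a hypothetical Avro-like class (`AvroLikeRecord` is an invented stand-in, not a class from this issue) to show that plain bean introspection picks up the schema accessor alongside the data properties.

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class BeanIntrospection {
    // Hypothetical stand-in for an Avro-generated class: real Avro
    // SpecificRecord subclasses expose getSchema() in addition to the
    // per-field getters and setters.
    public static class AvroLikeRecord {
        private int myint;
        private String mystring;
        public int getMyint() { return myint; }
        public void setMyint(int v) { this.myint = v; }
        public String getMystring() { return mystring; }
        public void setMystring(String v) { this.mystring = v; }
        // Avro-style accessor with no matching setter.
        public Object getSchema() { return "record-schema"; }
    }

    // Returns the bean property names that standard introspection
    // reports for the given class (excluding Object's properties).
    public static List<String> beanProperties(Class<?> cls) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(cls, Object.class);
        List<String> names = new ArrayList<>();
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            names.add(pd.getName());
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // "schema" is reported alongside the data fields, which is why
        // a bean-based encoder needs Avro-specific handling.
        System.out.println(beanProperties(AvroLikeRecord.class));
    }
}
```

Any encoder built on plain introspection must therefore either skip such read-only, non-data properties or fail on them, which is the kind of special-casing the linked pull requests address in different ways.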
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org