Posted to issues@spark.apache.org by "Aleksander Eskilson (JIRA)" <ji...@apache.org> on 2016/10/12 21:37:20 UTC

[jira] [Commented] (SPARK-12787) Dataset to support custom encoder

    [ https://issues.apache.org/jira/browse/SPARK-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15569928#comment-15569928 ] 

Aleksander Eskilson commented on SPARK-12787:
---------------------------------------------

[~Zariel], I've put together an implementation of an Avro encoder that I'm currently submitting to the folks at spark-avro [1]. You can read the broader story in a thread [2] on their project's GitHub. Additionally, writing encoders for further kinds of Java objects might become easier after SPARK-17770 is resolved.

[1] - https://github.com/databricks/spark-avro
[2] - https://github.com/databricks/spark-avro/issues/169
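
Until a pluggable encoder API lands, the common workaround for arbitrary objects is a binary encoder such as Spark's built-in Encoders.kryo[T], which serializes the whole object into a single binary column instead of matching fields by name. The round-trip idea can be sketched with plain Java serialization standing in for Kryo (the SerDe helper and Record class below are illustrative, not Spark APIs):

```scala
import java.io._

// Sketch of a binary "encoder" round trip, with Java serialization
// standing in for what Encoders.kryo does inside Spark. The whole
// object becomes opaque bytes, so no field-name matching is needed.
case class Record(id: Long, payload: String)

object SerDe {
  def serialize(r: Record): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(r)
    oos.close()
    bos.toByteArray
  }

  def deserialize(bytes: Array[Byte]): Record = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
    ois.readObject().asInstanceOf[Record]
  }
}
```

The trade-off, as with Encoders.kryo, is that the stored form is opaque bytes, so you lose columnar pruning and cannot query individual fields in SQL.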

> Dataset to support custom encoder
> ---------------------------------
>
>                 Key: SPARK-12787
>                 URL: https://issues.apache.org/jira/browse/SPARK-12787
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 1.6.0
>            Reporter: Muthu Jayakumar
>
> The current Dataset API allows a Dataset to be loaded from a case class whose attribute names and types must match the schema precisely.
> It would be nicer if a partial function could be provided as a parameter to transform the DataFrame-like schema into a Dataset.
> Something like...
> test_dataframe.as[TestCaseClass](partial_function)
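
The intent of the proposal above can be sketched in plain Scala: a partial function adapts loosely named fields to the target case class, so the source schema need not match the attribute names exactly. Here a Map stands in for a Row, and all field names are hypothetical:

```scala
// Illustrative only: a partial function maps records whose field names
// differ from the case class attributes into the typed representation.
case class TestCaseClass(id: Long, name: String)

val adapt: PartialFunction[Map[String, Any], TestCaseClass] = {
  case row if row.contains("user_id") && row.contains("user_name") =>
    TestCaseClass(
      row("user_id").asInstanceOf[Long],
      row("user_name").asInstanceOf[String])
}

// collect applies the partial function only where it is defined,
// silently skipping records it does not cover
val rows = Seq(
  Map[String, Any]("user_id" -> 1L, "user_name" -> "muthu"),
  Map[String, Any]("unrelated" -> true))
val typed = rows.collect(adapt)
```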



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org