Posted to issues@spark.apache.org by "Wenchen Fan (JIRA)" <ji...@apache.org> on 2019/08/13 12:57:00 UTC

[jira] [Assigned] (SPARK-28698) Allow user-specified output schema in function `to_avro`

     [ https://issues.apache.org/jira/browse/SPARK-28698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan reassigned SPARK-28698:
-----------------------------------

    Assignee: Gengliang Wang

> Allow user-specified output schema in function `to_avro`
> --------------------------------------------------------
>
>                 Key: SPARK-28698
>                 URL: https://issues.apache.org/jira/browse/SPARK-28698
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Gengliang Wang
>            Assignee: Gengliang Wang
>            Priority: Major
>
> The mapping between Spark SQL schemas and Avro schemas is many-to-many. (See https://spark.apache.org/docs/latest/sql-data-sources-avro.html#supported-types-for-spark-sql---avro-conversion)
> The default schema mapping might not be exactly what users want. For example, by default a "string" column is always written as the Avro "string" type, but users might want to output the column as the Avro "enum" type.
> With PR https://github.com/apache/spark/pull/21847, Spark supports a user-specified schema in the batch writer.
> For the function `to_avro`, we should support a user-specified output schema as well.
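
A minimal sketch of how the requested feature might be used, assuming `to_avro` gains a second parameter that takes a JSON-format Avro schema string and that the function is imported from `org.apache.spark.sql.avro.functions` (both the overload and the import path are assumptions for illustration, not something this ticket fixes):

    // Sketch only: the two-argument to_avro call below is hypothetical,
    // illustrating the user-specified output schema this ticket asks for.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.avro.functions.to_avro

    val spark = SparkSession.builder().appName("to_avro-schema-sketch").getOrCreate()
    import spark.implicits._

    // An Avro schema that encodes the column as "enum" instead of the
    // default "string" mapping.
    val enumSchema =
      """{"type": "enum", "name": "Suit",
        |  "symbols": ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"]}""".stripMargin

    val df = Seq("SPADES", "HEARTS").toDF("suit")

    // Hypothetical call shape: to_avro(column, jsonFormatAvroSchema).
    val avroDF = df.select(to_avro($"suit", enumSchema).as("suit_avro"))
    avroDF.printSchema()  // suit_avro is a binary column of Avro-encoded bytes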



