Posted to issues@spark.apache.org by "Wenchen Fan (JIRA)" <ji...@apache.org> on 2018/07/12 16:30:00 UTC

[jira] [Assigned] (SPARK-24691) Add new API `supportDataType` in FileFormat

     [ https://issues.apache.org/jira/browse/SPARK-24691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan reassigned SPARK-24691:
-----------------------------------

    Assignee: Gengliang Wang

> Add new API `supportDataType` in FileFormat
> -------------------------------------------
>
>                 Key: SPARK-24691
>                 URL: https://issues.apache.org/jira/browse/SPARK-24691
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: Gengliang Wang
>            Assignee: Gengliang Wang
>            Priority: Major
>
> In https://github.com/apache/spark/pull/21389, data source schema validation was added. However,
>  1. Putting all the validation logic together in `DataSourceUtils` is tricky and hard to maintain. On a second look after review, the `OrcFileFormat` in the hive package is not matched there, so its schema is not validated correctly.
>  2. `DataSourceUtils.verifyWriteSchema` and `DataSourceUtils.verifyReadSchema` should not have to be called in every file format. We can move those calls to a single upper-level entry point.
> So I propose adding a new API `supportDataType` to FileFormat. Each file format can override the method to declare which data types it does or does not support.
>  
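> For illustration, here is a minimal Scala sketch of what such an API could look like. The trait below is a simplified stand-in for Spark's FileFormat, and the method name and signature are assumptions based on this proposal, not the final implementation:
>
>     import org.apache.spark.sql.types._
>
>     // Simplified stand-in for Spark's FileFormat trait, for illustration only.
>     trait FileFormat {
>       // Each format overrides this to declare which data types it can read/write.
>       // By default, every type is accepted.
>       def supportDataType(dataType: DataType, isReadPath: Boolean): Boolean = true
>     }
>
>     // Hypothetical CSV-like text format that only handles flat, atomic columns.
>     class CsvLikeFileFormat extends FileFormat {
>       override def supportDataType(dataType: DataType, isReadPath: Boolean): Boolean =
>         dataType match {
>           case _: ArrayType | _: MapType | _: StructType => false // no nested types
>           case _ => true
>         }
>     }
>
> With something like this in place, the read/write planning code can call supportDataType once per field of the schema and fail fast with a clear error, instead of each format re-implementing its own checks.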



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org