Posted to issues@spark.apache.org by "Gengliang Wang (JIRA)" <ji...@apache.org> on 2019/02/18 10:57:00 UTC

[jira] [Created] (SPARK-26913) New data source V2 API: SupportsDirectWrite

Gengliang Wang created SPARK-26913:
--------------------------------------

             Summary: New data source V2 API: SupportsDirectWrite
                 Key: SPARK-26913
                 URL: https://issues.apache.org/jira/browse/SPARK-26913
             Project: Spark
          Issue Type: Task
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Gengliang Wang


Spark supports writing to file data sources without fetching the existing table schema or validating the output against it.
For example, 
```
import spark.implicits._  // for the tuple encoder used by .toDF below

// Write a table with a single LongType column "id".
spark.range(10).write.orc(path)

// Overwrite it with a completely different schema: (double, string).
val newDF = spark.range(20).map(id => (id.toDouble, id.toString)).toDF("double", "string")
newDF.write.mode("overwrite").orc(path)
```
1. There is no need to get/infer the schema from the existing table/path.
2. The schema of `newDF` can differ from the original table schema.


However, as https://github.com/apache/spark/pull/23606/files#r255319992 shows, this behavior is missing in data source V2: currently, data source V2 always validates the output query against the table schema. Even after catalog support for DS V2 is implemented, I think it will be hard to support both behaviors with the current API/framework.
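
To make the contrast concrete, here is a hedged illustration of the V2 behavior described above; the format name `com.example.v2` is a placeholder, not a real source:
```
// Placeholder V2 source name. With the current DS V2 framework, the
// analyzer validates newDF's schema (double, string) against the
// existing table schema (id: long), so this overwrite fails with a
// schema mismatch instead of replacing the data as the ORC path does.
newDF.write.mode("overwrite").format("com.example.v2").save(path)
```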

This ticket proposes a new mix-in interface, `SupportsDirectWrite`. For sources that implement it, Spark will write to the table location directly, without schema inference or validation, on `DataFrameWriter.save`.
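
The ticket does not spell out the interface's shape; as a marker mix-in it could be as small as the following sketch (the trait body and placement are assumptions, not the actual Spark API):
```
// Hypothetical sketch only: a pure marker trait. A data source that
// mixes this in signals that it accepts the query's output schema
// as-is, so Spark skips schema inference and validation in
// DataFrameWriter.save and writes to the table location directly.
trait SupportsDirectWrite {
  // intentionally empty: the trait's presence alone changes the write path
}
```
A marker interface keeps the change opt-in: existing V2 sources retain strict schema validation, while file-like sources can restore the V1 direct-write behavior shown above.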




