Posted to issues@spark.apache.org by "Wenchen Fan (Jira)" <ji...@apache.org> on 2023/01/06 02:43:00 UTC

[jira] [Resolved] (SPARK-41806) Use AppendData.byName for SQL INSERT INTO by name for DSV2 and block ambiguous queries with static partition columns

     [ https://issues.apache.org/jira/browse/SPARK-41806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-41806.
---------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

Issue resolved by pull request 39334
[https://github.com/apache/spark/pull/39334]

> Use AppendData.byName for SQL INSERT INTO by name for DSV2 and block ambiguous queries with static partition columns
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-41806
>                 URL: https://issues.apache.org/jira/browse/SPARK-41806
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: Allison Portis
>            Priority: Major
>             Fix For: 3.4.0
>
>
> Currently, for INSERT INTO by name, we reorder the value list and convert the statement to INSERT INTO by ordinal. Since the DSv2 logical nodes have the isByName flag, we don't need to do this. The current approach is limiting in that:
>  # Users must provide the full list of table columns (this limits functionality for features like generated columns; see SPARK-41290)
>  # It allows ambiguous queries such as INSERT OVERWRITE t PARTITION (c='1') (c) VALUES ('2'), where the user provides both the static partition column 'c' and the column 'c' in the column list. We should check that the static partition column is not in the column list (see the example sketched below).
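
For illustration, a minimal sketch of the ambiguity described above; the table name 't', its schema, and the inserted values are assumed for the example and are not taken from the report:

    -- assumed table: t(a STRING, c STRING), partitioned by c
    -- ambiguous: 'c' appears both as a static partition value and in the column list
    INSERT OVERWRITE t PARTITION (c='1') (c) VALUES ('2');

    -- unambiguous: the static partition value supplies 'c', the column list covers the rest
    INSERT OVERWRITE t PARTITION (c='1') (a) VALUES ('x');

Per the description, the change plans by-name inserts for DSv2 tables directly as AppendData.byName rather than reordering to by-ordinal, and queries like the first statement above are blocked because the static partition column also appears in the column list.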



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org