Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2018/07/25 20:07:00 UTC

[jira] [Resolved] (SPARK-24860) Expose dynamic partition overwrite per write operation

     [ https://issues.apache.org/jira/browse/SPARK-24860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Li resolved SPARK-24860.
-----------------------------
       Resolution: Fixed
         Assignee: Koert Kuipers
    Fix Version/s: 2.4.0

> Expose dynamic partition overwrite per write operation
> ------------------------------------------------------
>
>                 Key: SPARK-24860
>                 URL: https://issues.apache.org/jira/browse/SPARK-24860
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: koert kuipers
>            Assignee: Koert Kuipers
>            Priority: Minor
>             Fix For: 2.4.0
>
>
> This is a follow-up to issue SPARK-20236.
> See also the discussion in pull request https://github.com/apache/spark/pull/18714
> SPARK-20236 added a global setting, spark.sql.sources.partitionOverwriteMode, to switch between static and dynamic overwrite of partitioned tables. It would be nice if we could choose, per write operation, whether the overwrite behavior is static or dynamic. The suggested syntax is:
> {noformat}
> df.write.option("partitionOverwriteMode", "dynamic").parquet...{noformat}
>  
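For illustration, a minimal sketch of what the per-write option looks like in practice (available as of the 2.4.0 fix version; the DataFrame contents, partition column, and output path below are hypothetical, not from the issue):

```scala
// Sketch: per-write dynamic partition overwrite, overriding the global
// spark.sql.sources.partitionOverwriteMode setting for this one write.
// The data, "day" partition column, and /tmp path are illustrative.
import org.apache.spark.sql.{SaveMode, SparkSession}

object DynamicOverwriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamic-overwrite-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "2018-07-01"), (2, "2018-07-02")).toDF("id", "day")

    // With "dynamic", only the partitions present in `df` are replaced;
    // other existing partitions under the target path are left untouched.
    // With the default "static", the whole table location is truncated first.
    df.write
      .mode(SaveMode.Overwrite)
      .option("partitionOverwriteMode", "dynamic") // per-operation override
      .partitionBy("day")
      .parquet("/tmp/example_table")

    spark.stop()
  }
}
```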



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org