Posted to issues@spark.apache.org by "Mikalai Surta (JIRA)" <ji...@apache.org> on 2018/11/19 09:38:00 UTC

[jira] [Issue Comment Deleted] (SPARK-20236) Overwrite a partitioned data source table should only overwrite related partitions

     [ https://issues.apache.org/jira/browse/SPARK-20236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikalai Surta updated SPARK-20236:
----------------------------------
    Comment: was deleted

(was: Could you please explain why this doesn't work as I expect in my case?

I create a table with dynamic partitions and insert data into two partitions. Then I insert overwrite data for one partition. I expect to see two partitions as a result, but get only one: the overwrite truncated the whole table. This doesn't seem to be the same case as [~deepanker] had, since I create the table with an explicit PARTITIONED BY clause.

spark.sparkContext.getConf().getAll()

...

('spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")', ''),

...

sqlContext.sql("CREATE TABLE `debug2` (....) PARTITIONED BY (inputDate)")

sqlContext.sql("insert into debug2 select * from debug1")

sqlContext.sql("select * from debug2").select('ean', 'inputDate').show(10, False)

+-------------+---------+
|ean          |inputDate|
+-------------+---------+
|4019238159363|20181025 |
|3188642344151|20181026 |
+-------------+---------+

sqlContext.sql("insert overwrite table debug2 select * from debug1 where inputDate=='*20181025*'")

+-------------+---------+
|ean          |inputDate|
+-------------+---------+
|4019238159363|20181025 |
+-------------+---------+

The 20181026 record is lost.)
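A likely explanation, given the getAll() output above: the entire spark.conf.set(...) call appears to have been stored as a single configuration key, so partitionOverwriteMode was never actually set to dynamic. A minimal sketch of how the setting is normally applied in PySpark (table and column names taken from the example above; the setting only exists in Spark 2.3.0 and later, per the Fix Version below):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Pass the key and the value as two separate arguments; storing the whole
# call as a key (as in the getAll() output above) has no effect.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

# With dynamic mode active, this overwrite should replace only the
# inputDate=20181025 partition and leave the 20181026 partition intact.
spark.sql(
    "insert overwrite table debug2 "
    "select * from debug1 where inputDate == '20181025'"
)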

> Overwrite a partitioned data source table should only overwrite related partitions
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-20236
>                 URL: https://issues.apache.org/jira/browse/SPARK-20236
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Wenchen Fan
>            Assignee: Wenchen Fan
>            Priority: Major
>              Labels: releasenotes
>             Fix For: 2.3.0
>
>
> When we overwrite a partitioned data source table, Spark currently truncates either the entire table or a set of partitions determined by the given static partition values, and then writes the new data.
> For example, {{INSERT OVERWRITE tbl ...}} will truncate the entire table, and {{INSERT OVERWRITE tbl PARTITION (a=1, b)}} will truncate all the partitions that start with {{a=1}}.
> This behavior is somewhat reasonable, as we can know which partitions will be overwritten before runtime. However, Hive behaves differently: it only overwrites the related partitions, e.g. {{INSERT OVERWRITE tbl SELECT 1,2,3}} will only overwrite the partition {{a=2, b=3}}, assuming {{tbl}} has only one data column and is partitioned by {{a}} and {{b}}.
> It seems better if we can follow Hive's behavior.
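To make the contrast concrete, here is a sketch of the two behaviors using an illustrative table (the name tbl and the data column c are assumptions, not from the issue):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One data column c, partitioned by a and b, as in the description above.
spark.sql("CREATE TABLE tbl (c INT, a INT, b INT) USING parquet PARTITIONED BY (a, b)")
spark.sql("INSERT INTO tbl VALUES (0, 1, 1), (0, 1, 2)")

# Spark's current behavior: truncates the entire table before writing,
# so the (a=1, b=1) and (a=1, b=2) partitions are lost.
spark.sql("INSERT OVERWRITE TABLE tbl SELECT 1, 2, 3")

# Hive-style behavior (dynamic mode, Spark 2.3.0+): the same statement
# replaces only the (a=2, b=3) partition that the new row falls into.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
spark.sql("INSERT OVERWRITE TABLE tbl SELECT 1, 2, 3")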



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org