Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:21:22 UTC

[jira] [Updated] (SPARK-18096) Spark on Hive - 'Update' save mode

     [ https://issues.apache.org/jira/browse/SPARK-18096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-18096:
---------------------------------
    Labels: bulk-closed  (was: )

> Spark on Hive - 'Update' save mode
> ----------------------------------
>
>                 Key: SPARK-18096
>                 URL: https://issues.apache.org/jira/browse/SPARK-18096
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.0.1
>            Reporter: David Hodeffi
>            Priority: Major
>              Labels: bulk-closed
>
> When creating an ETL pipeline with Spark on Hive, the destination table needs to be updated incrementally.
> For a partitioned table, this means we don't need to rewrite all partitions, only the ones that mutated.
> Right now the only way to replace the contents of a destination table from a DataFrame is SaveMode.Overwrite; the problem is that an incremental load should touch only the partitions that changed, not the whole table.
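>
> As a workaround in Spark 2.3 and later, dynamic partition overwrite rewrites only the partitions present in the incoming DataFrame, which approximates the 'Update' save mode requested here. A minimal Scala sketch, assuming a Hive table "sales" partitioned by "dt" and a staging table "staging_sales" holding rows for the mutated partitions only (both table names are hypothetical):
>
>     // Dynamic partition overwrite (Spark 2.3+): only the partitions
>     // present in the DataFrame are replaced; all others are kept.
>     import org.apache.spark.sql.SparkSession
>
>     val spark = SparkSession.builder()
>       .appName("incremental-partition-update")
>       .enableHiveSupport()
>       .getOrCreate()
>
>     spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
>
>     // "staging_sales" (hypothetical) holds rows for changed partitions only.
>     val changed = spark.table("staging_sales")
>
>     changed.write
>       .mode("overwrite")    // with dynamic mode, a partition-level update
>       .insertInto("sales")  // "sales" (hypothetical) is partitioned by dt
>
> With the default "static" overwrite mode, the same write would first drop every partition of "sales", which is exactly the behavior this issue asks to avoid.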



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org