Posted to commits@iceberg.apache.org by fo...@apache.org on 2022/08/25 07:59:13 UTC

[iceberg] branch master updated: Docs: Added missing doc for REPLACE PARTITION FIELD (#5624)

This is an automated email from the ASF dual-hosted git repository.

fokko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new 65b8ac99f8 Docs: Added missing doc for REPLACE PARTITION FIELD (#5624)
65b8ac99f8 is described below

commit 65b8ac99f82ce1ccaec117b048a3113d610d6e56
Author: dotjdk <gi...@dotj.dk>
AuthorDate: Thu Aug 25 11:59:07 2022 +0400

    Docs: Added missing doc for REPLACE PARTITION FIELD (#5624)
    
    * Added missing doc for REPLACE PARTITION FIELD
    
    * Update docs/spark-ddl.md
    
    * Update docs/spark-ddl.md
    
    Co-authored-by: Jan Justesen <ja...@tlt.local>
    Co-authored-by: Fokko Driesprong <fo...@apache.org>
---
 docs/spark-ddl.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/docs/spark-ddl.md b/docs/spark-ddl.md
index 0c0fed80e3..3261aa0d2b 100644
--- a/docs/spark-ddl.md
+++ b/docs/spark-ddl.md
@@ -365,6 +365,16 @@ For example, if you partition by days and move to partitioning by hours, overwri
 Be careful when dropping a partition field because it will change the schema of metadata tables, like `files`, and may cause metadata queries to fail or produce different results.
 {{< /hint >}}
 
+### `ALTER TABLE ... REPLACE PARTITION FIELD`
+
+A partition field can be replaced by a new partition field in a single metadata update by using `REPLACE PARTITION FIELD`:
+
+```sql
+ALTER TABLE prod.db.sample REPLACE PARTITION FIELD ts_day WITH days(ts)
+-- Use the optional AS keyword to specify a custom name for the new partition field
+ALTER TABLE prod.db.sample REPLACE PARTITION FIELD ts_day WITH days(ts) AS day_of_ts
+```
+
 ### `ALTER TABLE ... WRITE ORDERED BY`
 
 Iceberg tables can be configured with a sort order that is used to automatically sort data that is written to the table in some engines. For example, `MERGE INTO` in Spark will use the table ordering.
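The `WRITE ORDERED BY` clause mentioned above can be sketched like this, reusing the same `prod.db.sample` table from the earlier examples (the column names `category` and `id` are assumptions for illustration):

```sql
-- Configure a table-level sort order; engines that honor it (e.g. Spark)
-- will sort rows by category ascending, then id descending, before writing
ALTER TABLE prod.db.sample WRITE ORDERED BY category ASC, id DESC
```

This sets a persistent sort order in table metadata; it does not rewrite existing data files.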