Posted to commits@iceberg.apache.org by bl...@apache.org on 2022/05/29 22:25:54 UTC
[iceberg] branch master updated: Docs: Fix a typo, add using for writeTo when creating a table (#4885)
This is an automated email from the ASF dual-hosted git repository.
blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git
The following commit(s) were added to refs/heads/master by this push:
new fb50da866 Docs: Fix a typo, add using for writeTo when creating a table (#4885)
fb50da866 is described below
commit fb50da866c7d108b5cd2d43a40e141c780099c3c
Author: Wing Yew Poon <wy...@cloudera.com>
AuthorDate: Sun May 29 15:25:49 2022 -0700
Docs: Fix a typo, add using for writeTo when creating a table (#4885)
---
docs/spark/spark-ddl.md | 2 +-
docs/spark/spark-writes.md | 7 +++++++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/docs/spark/spark-ddl.md b/docs/spark/spark-ddl.md
index c186f1d1a..e97c1e2c8 100644
--- a/docs/spark/spark-ddl.md
+++ b/docs/spark/spark-ddl.md
@@ -46,7 +46,7 @@ Iceberg will convert the column type in Spark to corresponding Iceberg type. Ple
Table create commands, including CTAS and RTAS, support the full range of Spark create clauses, including:
-* `PARTITION BY (partition-expressions)` to configure partitioning
+* `PARTITIONED BY (partition-expressions)` to configure partitioning
* `LOCATION '(fully-qualified-uri)'` to set the table location
* `COMMENT 'table documentation'` to set a table description
* `TBLPROPERTIES ('key'='value', ...)` to set [table configuration](../configuration)
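For illustration, the clauses listed above can be combined in a single CREATE TABLE statement. This is a sketch only; the table name, location URI, and property values below are hypothetical, not part of the patch:

```scala
// Hypothetical example combining the create clauses from the list above.
// Note the corrected keyword: PARTITIONED BY, not PARTITION BY.
spark.sql("""
  CREATE TABLE prod.db.sample (id bigint, ts timestamp, data string)
  USING iceberg
  PARTITIONED BY (days(ts))
  LOCATION 's3://my-bucket/warehouse/sample'
  COMMENT 'sample table documentation'
  TBLPROPERTIES ('write.format.default'='parquet')
""")
```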
diff --git a/docs/spark/spark-writes.md b/docs/spark/spark-writes.md
index 1cf749f36..165ba674e 100644
--- a/docs/spark/spark-writes.md
+++ b/docs/spark/spark-writes.md
@@ -290,6 +290,13 @@ val data: DataFrame = ...
data.writeTo("prod.db.table").create()
```
+If you have replaced the default Spark catalog (`spark_catalog`) with Iceberg's `SparkSessionCatalog`, do:
+
+```scala
+val data: DataFrame = ...
+data.writeTo("db.table").using("iceberg").create()
+```
+
Create and replace operations support table configuration methods, like `partitionedBy` and `tableProperty`:
```scala
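// The diff is truncated here; as a sketch of the sentence above, the
// configuration methods can be chained on writeTo before create().
// The partition column and property value below are hypothetical.
val data: DataFrame = ...
data.writeTo("prod.db.table")
    .tableProperty("write.format.default", "parquet")
    .partitionedBy($"level")
    .create()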