Posted to commits@iceberg.apache.org by bl...@apache.org on 2020/10/10 20:51:39 UTC
[iceberg] branch master updated: Docs: Move URI clarification earlier (#1571)
This is an automated email from the ASF dual-hosted git repository.
blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git
The following commit(s) were added to refs/heads/master by this push:
new dfa5f87 Docs: Move URI clarification earlier (#1571)
dfa5f87 is described below
commit dfa5f87bbf5ff81893aac9996b9dec67030e8781
Author: jbirtman <60...@users.noreply.github.com>
AuthorDate: Sat Oct 10 13:51:31 2020 -0700
Docs: Move URI clarification earlier (#1571)
---
site/docs/spark.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/site/docs/spark.md b/site/docs/spark.md
index ac93538..2bbdad9 100644
--- a/site/docs/spark.md
+++ b/site/docs/spark.md
@@ -45,6 +45,7 @@ This creates an Iceberg catalog named `hive_prod` that loads tables from a Hive
spark.sql.catalog.hive_prod = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hive_prod.type = hive
spark.sql.catalog.hive_prod.uri = thrift://metastore-host:port
+# omit uri to use the same URI as Spark: hive.metastore.uris in hive-site.xml
```
Iceberg also supports a directory-based catalog in HDFS that can be configured using `type=hadoop`:
@@ -82,7 +83,6 @@ To add Iceberg table support to Spark's built-in catalog, configure `spark_catal
```plain
spark.sql.catalog.spark_catalog = org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type = hive
-# omit uri to use the same URI as Spark: hive.metastore.uris in hive-site.xml
```
Spark's built-in catalog supports existing v1 and v2 tables tracked in a Hive Metastore. This configures Spark to use Iceberg's `SparkSessionCatalog` as a wrapper around that session catalog. When a table is not an Iceberg table, the built-in catalog will be used to load it instead.
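Taken together, the patch moves the URI hint next to the `hive_prod` catalog (where `uri` is actually set) rather than the session catalog. A minimal sketch of the resulting `spark-defaults.conf` after this change, combining both snippets from `site/docs/spark.md` (the `metastore-host:port` value is a placeholder, as in the docs):

```plain
# Separate Hive catalog; omit uri to fall back to Spark's
# hive.metastore.uris from hive-site.xml
spark.sql.catalog.hive_prod = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hive_prod.type = hive
spark.sql.catalog.hive_prod.uri = thrift://metastore-host:port

# Session-catalog wrapper; uses Spark's own metastore configuration
spark.sql.catalog.spark_catalog = org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type = hive
```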