Posted to commits@iceberg.apache.org by bl...@apache.org on 2021/10/11 15:38:44 UTC

[iceberg] branch master updated: Docs: Update Java API quickstart to use no-arg Catalog constructor (#3253)

This is an automated email from the ASF dual-hosted git repository.

blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new 8b84f66  Docs: Update Java API quickstart to use no-arg Catalog constructor (#3253)
8b84f66 is described below

commit 8b84f66391b4e6ff331be538aa090fd2d40c3a41
Author: Samuel Redai <43...@users.noreply.github.com>
AuthorDate: Mon Oct 11 08:38:34 2021 -0700

    Docs: Update Java API quickstart to use no-arg Catalog constructor (#3253)
---
 site/docs/java-api-quickstart.md | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/site/docs/java-api-quickstart.md b/site/docs/java-api-quickstart.md
index de8bd31..9206926 100644
--- a/site/docs/java-api-quickstart.md
+++ b/site/docs/java-api-quickstart.md
@@ -23,12 +23,27 @@ Tables are created using either a [`Catalog`](./javadoc/master/index.html?org/ap
 
 ### Using a Hive catalog
 
-The Hive catalog connects to a Hive MetaStore to keep track of Iceberg tables. This example uses Spark's Hadoop configuration to get a Hive catalog:
+The Hive catalog connects to a Hive metastore to keep track of Iceberg tables.
+You can initialize a Hive catalog with a name and some properties
+(see [Catalog properties](https://iceberg.apache.org/configuration/#catalog-properties)).
+
+**Note:** Currently, `setConf` is always required for Hive catalogs, but this will change in the future.
 
 ```java
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.iceberg.catalog.Catalog;
 import org.apache.iceberg.hive.HiveCatalog;
 
-Catalog catalog = new HiveCatalog(spark.sparkContext().hadoopConfiguration());
+Catalog catalog = new HiveCatalog();
+catalog.setConf(spark.sparkContext().hadoopConfiguration());  // Configure using Spark's Hadoop configuration
+
+Map<String, String> properties = new HashMap<>();
+properties.put("warehouse", "...");
+properties.put("uri", "...");
+
+catalog.initialize("hive", properties);
 ```
 
 The `Catalog` interface defines methods for working with tables, like `createTable`, `loadTable`, `renameTable`, and `dropTable`.
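
For context, here is a minimal sketch of how those `Catalog` methods fit together once the catalog above is initialized. The `logging.logs` identifier, the schema, and the partition spec are illustrative assumptions, not part of this commit:

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.types.Types;

// Hypothetical identifier: database "logging", table "logs".
TableIdentifier name = TableIdentifier.of("logging", "logs");

// Define a schema; each field gets an ID, a name, and a type.
Schema schema = new Schema(
    Types.NestedField.required(1, "level", Types.StringType.get()),
    Types.NestedField.required(2, "event_time", Types.TimestampType.withZone()),
    Types.NestedField.optional(3, "message", Types.StringType.get()));

// Partition the table by hour of the event timestamp.
PartitionSpec spec = PartitionSpec.builderFor(schema)
    .hour("event_time")
    .build();

// Create the table through the catalog, then load it back by identifier.
Table created = catalog.createTable(name, schema, spec);
Table loaded = catalog.loadTable(name);
```

Note that Iceberg assigns fresh field IDs when the table is created, so the IDs above only need to be unique within the schema definition.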