Posted to commits@flink.apache.org by bl...@apache.org on 2019/08/16 21:17:48 UTC

[flink] branch release-1.9 updated: [FLINK-13277][hive][doc] add documentation of Hive source/sink

This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
     new 2d4d8a2  [FLINK-13277][hive][doc] add documentation of Hive source/sink
2d4d8a2 is described below

commit 2d4d8a2af5bc602dd181b9d644bee6b8d2f05c12
Author: Rui Li <li...@apache.org>
AuthorDate: Fri Aug 16 14:09:09 2019 -0700

    [FLINK-13277][hive][doc] add documentation of Hive source/sink
    
    Add documentation for the Hive source and sink.
    
    This closes #9217.
---
 docs/dev/batch/connectors.md              |  6 ++++++
 docs/dev/batch/connectors.zh.md           |  6 ++++++
 docs/dev/table/hive/read_write_hive.md    | 10 +++++++++-
 docs/dev/table/hive/read_write_hive.zh.md | 10 +++++++++-
 4 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/docs/dev/batch/connectors.md b/docs/dev/batch/connectors.md
index 2d341d7..53f1e09 100644
--- a/docs/dev/batch/connectors.md
+++ b/docs/dev/batch/connectors.md
@@ -217,4 +217,10 @@ The example shows how to access an Azure table and turn data into Flink's `DataS
 
 This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
 
+## Hive Connector
+
+Starting from 1.9.0, Apache Flink provides a Hive connector to access Apache Hive tables. [HiveCatalog]({{ site.baseurl }}/dev/table/catalogs.html#hivecatalog) is required in order to use the Hive connector.
+After HiveCatalog is set up, please refer to [Reading & Writing Hive Tables]({{ site.baseurl }}/dev/table/hive/read_write_hive.html) for the usage and limitations of the Hive connector.
+As with HiveCatalog, the officially supported Apache Hive versions for the Hive connector are 2.3.4 and 1.2.1.
+
 {% top %}
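
(For context, a minimal Table API sketch of the setup the added docs describe, assuming Flink 1.9 with the blink planner and flink-connector-hive on the classpath. The catalog name "myhive", the "/opt/hive-conf" directory, and the table "mytable" are placeholders, not values from this patch:)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveConnectorSketch {
    public static void main(String[] args) {
        // Hive tables are read through a batch TableEnvironment using the blink planner.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tableEnv = TableEnvironment.create(settings);

        // Register a HiveCatalog; "/opt/hive-conf" is a placeholder for the directory
        // containing hive-site.xml, and "2.3.4" is one of the two officially
        // supported Hive versions mentioned above.
        HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive-conf", "2.3.4");
        tableEnv.registerCatalog("myhive", hive);
        tableEnv.useCatalog("myhive");

        // Query an existing Hive table ("mytable" is assumed to exist in Hive).
        Table result = tableEnv.sqlQuery("SELECT name, value FROM mytable");
    }
}
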
diff --git a/docs/dev/batch/connectors.zh.md b/docs/dev/batch/connectors.zh.md
index e577c37..82c7831 100644
--- a/docs/dev/batch/connectors.zh.md
+++ b/docs/dev/batch/connectors.zh.md
@@ -217,4 +217,10 @@ The example shows how to access an Azure table and turn data into Flink's `DataS
 
 This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
 
+## Hive Connector
+
+Starting from 1.9.0, Apache Flink provides a Hive connector to access Apache Hive tables. [HiveCatalog]({{ site.baseurl }}/zh/dev/table/catalogs.html#hivecatalog) is required in order to use the Hive connector.
+After HiveCatalog is set up, please refer to [Reading & Writing Hive Tables]({{ site.baseurl }}/zh/dev/table/hive/read_write_hive.html) for the usage and limitations of the Hive connector.
+As with HiveCatalog, the officially supported Apache Hive versions for the Hive connector are 2.3.4 and 1.2.1.
+
 {% top %}
diff --git a/docs/dev/table/hive/read_write_hive.md b/docs/dev/table/hive/read_write_hive.md
index b2e072f..4a48060 100644
--- a/docs/dev/table/hive/read_write_hive.md
+++ b/docs/dev/table/hive/read_write_hive.md
@@ -120,4 +120,12 @@ Flink SQL> INSERT INTO mytable (name, value) VALUES ('Tom', 4.72);
 
 ### Limitations
 
-Currently Flink's Hive data connector does not support writing into partitions. This feature is under active development.
+The following is a list of major limitations of the Hive connector. We are actively working to close these gaps.
+
+1. INSERT OVERWRITE is not supported.
+2. Inserting into partitioned tables is not supported.
+3. ACID tables are not supported.
+4. Bucketed tables are not supported.
+5. Some data types are not supported. See the [limitations]({{ site.baseurl }}/dev/table/hive/#limitations) for details.
+6. Only a limited number of table storage formats have been tested, namely text, SequenceFile, ORC, and Parquet.
+7. Views are not supported.
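
(To make limitations 1 and 2 concrete, a short sketch of the write path, reusing the `tableEnv` and placeholder table "mytable" from the sketch above; "parttable" is a hypothetical partitioned table:)

// Appending to a non-partitioned Hive table works in 1.9; this mirrors the
// Flink SQL example in the docs above.
tableEnv.sqlUpdate("INSERT INTO mytable (name, value) VALUES ('Tom', 4.72)");
tableEnv.execute("hive sink example"); // sqlUpdate is lazy; execute() submits the job

// Per limitations 1 and 2, the 1.9 connector rejects statements like these:
// tableEnv.sqlUpdate("INSERT OVERWRITE mytable VALUES ('Tom', 4.72)");
// tableEnv.sqlUpdate("INSERT INTO parttable PARTITION (dt='2019-08-16') VALUES ('Tom', 4.72)");
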
diff --git a/docs/dev/table/hive/read_write_hive.zh.md b/docs/dev/table/hive/read_write_hive.zh.md
index b2e072f..9b9bf77 100644
--- a/docs/dev/table/hive/read_write_hive.zh.md
+++ b/docs/dev/table/hive/read_write_hive.zh.md
@@ -120,4 +120,12 @@ Flink SQL> INSERT INTO mytable (name, value) VALUES ('Tom', 4.72);
 
 ### Limitations
 
-Currently Flink's Hive data connector does not support writing into partitions. This feature is under active development.
+The following is a list of major limitations of the Hive connector. We are actively working to close these gaps.
+
+1. INSERT OVERWRITE is not supported.
+2. Inserting into partitioned tables is not supported.
+3. ACID tables are not supported.
+4. Bucketed tables are not supported.
+5. Some data types are not supported. See the [limitations]({{ site.baseurl }}/zh/dev/table/hive/#limitations) for details.
+6. Only a limited number of table storage formats have been tested, namely text, SequenceFile, ORC, and Parquet.
+7. Views are not supported.