Posted to commits@flink.apache.org by lz...@apache.org on 2020/07/21 06:50:07 UTC

[flink] branch release-1.10 updated: [FLINK-18644][doc][hive] Remove obsolete hive connector docs

This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
     new c1f8763  [FLINK-18644][doc][hive] Remove obsolete hive connector docs
c1f8763 is described below

commit c1f8763e7479132b0efea4cd1cf168a52869877e
Author: Rui Li <li...@apache.org>
AuthorDate: Tue Jul 21 14:49:28 2020 +0800

    [FLINK-18644][doc][hive] Remove obsolete hive connector docs
    
    This closes #12939
---
 docs/dev/batch/connectors.md               |  6 ----
 docs/dev/batch/connectors.zh.md            |  6 ----
 docs/dev/table/hive/scala_shell_hive.md    | 45 ------------------------------
 docs/dev/table/hive/scala_shell_hive.zh.md | 45 ------------------------------
 4 files changed, 102 deletions(-)

diff --git a/docs/dev/batch/connectors.md b/docs/dev/batch/connectors.md
index 6221afe..8cf7c41 100644
--- a/docs/dev/batch/connectors.md
+++ b/docs/dev/batch/connectors.md
@@ -183,10 +183,4 @@ The example shows how to access an Azure table and turn data into Flink's `DataS
 
 This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
 
-## Hive Connector
-
-Starting from 1.9.0, Apache Flink provides a Hive connector to access Apache Hive tables. [HiveCatalog]({{ site.baseurl }}/dev/table/catalogs.html#hivecatalog) is required in order to use the Hive connector.
-After the HiveCatalog is set up, please refer to [Reading & Writing Hive Tables]({{ site.baseurl }}/dev/table/hive/read_write_hive.html) for the usage of the Hive connector and its limitations.
-As with HiveCatalog, the officially supported Apache Hive versions for the Hive connector are 2.3.4 and 1.2.1.
-
 {% top %}
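
The section removed above pointed readers at the catalog docs without showing the setup itself. For orientation, a minimal sketch of what "HiveCatalog is required" looks like in code, assuming the Flink 1.10 Table API with the blink planner in batch mode; the catalog name, conf directory, Hive version, and table name are illustrative placeholders, not values taken from this commit:

{% highlight scala %}
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
import org.apache.flink.table.catalog.hive.HiveCatalog

// Batch-mode table environment on the blink planner.
val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
val tableEnv = TableEnvironment.create(settings)

// "myhive", "/opt/hive-conf" and "2.3.4" are placeholders: use your own
// catalog name, HIVE_CONF_DIR and Hive version.
val hiveCatalog = new HiveCatalog("myhive", "default", "/opt/hive-conf", "2.3.4")
tableEnv.registerCatalog("myhive", hiveCatalog)
tableEnv.useCatalog("myhive")

// Tables in the Hive metastore are now visible to SQL queries, per the
// Reading & Writing Hive Tables page linked in the removed text.
val result = tableEnv.sqlQuery("SELECT * FROM mytable")
{% endhighlight %}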
diff --git a/docs/dev/batch/connectors.zh.md b/docs/dev/batch/connectors.zh.md
index 41df05f..9daf517 100644
--- a/docs/dev/batch/connectors.zh.md
+++ b/docs/dev/batch/connectors.zh.md
@@ -183,10 +183,4 @@ The example shows how to access an Azure table and turn data into Flink's `DataS
 
 This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
 
-## Hive Connector
-
-Starting from 1.9.0, Apache Flink provides a Hive connector to access Apache Hive tables. [HiveCatalog]({{ site.baseurl }}/zh/dev/table/catalogs.html#hivecatalog) is required in order to use the Hive connector.
-After the HiveCatalog is set up, please refer to [Reading & Writing Hive Tables]({{ site.baseurl }}/zh/dev/table/hive/read_write_hive.html) for the usage of the Hive connector and its limitations.
-As with HiveCatalog, the officially supported Apache Hive versions for the Hive connector are 2.3.4 and 1.2.1.
-
 {% top %}
diff --git a/docs/dev/table/hive/scala_shell_hive.md b/docs/dev/table/hive/scala_shell_hive.md
deleted file mode 100644
index 3710272..0000000
--- a/docs/dev/table/hive/scala_shell_hive.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: "Use Hive connector in scala shell"
-nav-parent_id: hive_tableapi
-nav-pos: 3
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-NOTE: since the blink planner is not well supported in the Scala Shell at the moment, it is **NOT** recommended to use the Hive connector in the Scala Shell.
-
-[Flink Scala Shell]({{ site.baseurl }}/ops/scala_shell.html) is a convenient way to try Flink quickly.
-You can use Hive in the Scala shell as well, instead of specifying Hive dependencies in a pom file, packaging your program, and submitting it via the `flink run` command.
-In order to use the Hive connector in the Scala shell, you need to put the following [Hive connector dependencies]({{ site.baseurl }}/dev/table/hive/#dependencies) under the lib folder of the Flink distribution.
-
-* flink-connector-hive_{scala_version}-{flink.version}.jar
-* flink-hadoop-compatibility_{scala_version}-{flink.version}.jar
-* flink-shaded-hadoop-2-uber-{hadoop.version}-{flink-shaded.version}.jar
-* hive-exec-2.x.jar (for Hive 1.x, you need to copy hive-exec-1.x.jar, hive-metastore-1.x.jar, libfb303-0.9.2.jar and libthrift-0.9.2.jar)
-
-Then you can use the Hive connector in the Scala shell as follows:
-
-{% highlight scala %}
-Scala-Flink> import org.apache.flink.table.catalog.hive.HiveCatalog
-Scala-Flink> val hiveCatalog = new HiveCatalog("hive", "default", "<Replace it with HIVE_CONF_DIR>", "2.3.4")
-Scala-Flink> btenv.registerCatalog("hive", hiveCatalog)
-Scala-Flink> btenv.useCatalog("hive")
-Scala-Flink> btenv.listTables
-Scala-Flink> btenv.sqlQuery("<sql query>").toDataSet[Row].print()
-{% endhighlight %}
diff --git a/docs/dev/table/hive/scala_shell_hive.zh.md b/docs/dev/table/hive/scala_shell_hive.zh.md
deleted file mode 100644
index 3710272..0000000
--- a/docs/dev/table/hive/scala_shell_hive.zh.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: "Use Hive connector in scala shell"
-nav-parent_id: hive_tableapi
-nav-pos: 3
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-NOTE: since the blink planner is not well supported in the Scala Shell at the moment, it is **NOT** recommended to use the Hive connector in the Scala Shell.
-
-[Flink Scala Shell]({{ site.baseurl }}/ops/scala_shell.html) is a convenient way to try Flink quickly.
-You can use Hive in the Scala shell as well, instead of specifying Hive dependencies in a pom file, packaging your program, and submitting it via the `flink run` command.
-In order to use the Hive connector in the Scala shell, you need to put the following [Hive connector dependencies]({{ site.baseurl }}/dev/table/hive/#dependencies) under the lib folder of the Flink distribution.
-
-* flink-connector-hive_{scala_version}-{flink.version}.jar
-* flink-hadoop-compatibility_{scala_version}-{flink.version}.jar
-* flink-shaded-hadoop-2-uber-{hadoop.version}-{flink-shaded.version}.jar
-* hive-exec-2.x.jar (for Hive 1.x, you need to copy hive-exec-1.x.jar, hive-metastore-1.x.jar, libfb303-0.9.2.jar and libthrift-0.9.2.jar)
-
-Then you can use the Hive connector in the Scala shell as follows:
-
-{% highlight scala %}
-Scala-Flink> import org.apache.flink.table.catalog.hive.HiveCatalog
-Scala-Flink> val hiveCatalog = new HiveCatalog("hive", "default", "<Replace it with HIVE_CONF_DIR>", "2.3.4")
-Scala-Flink> btenv.registerCatalog("hive", hiveCatalog)
-Scala-Flink> btenv.useCatalog("hive")
-Scala-Flink> btenv.listTables
-Scala-Flink> btenv.sqlQuery("<sql query>").toDataSet[Row].print()
-{% endhighlight %}
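
The deleted page itself discouraged the shell route because of the blink-planner note, but the sequence it demonstrated also runs as a compiled program. A minimal sketch of that equivalent, assuming the Flink 1.10 old-planner Scala batch Table API (the environment the shell bound to `btenv`); the conf directory, Hive version, and table name are placeholders:

{% highlight scala %}
import org.apache.flink.api.scala._
import org.apache.flink.table.api.scala._
import org.apache.flink.table.catalog.hive.HiveCatalog
import org.apache.flink.types.Row

object HiveShellEquivalent {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // The same kind of batch table environment the shell exposed as btenv.
    val btenv = BatchTableEnvironment.create(env)

    // Placeholders: replace with your HIVE_CONF_DIR and Hive version.
    val hiveCatalog = new HiveCatalog("hive", "default", "/path/to/hive-conf", "2.3.4")
    btenv.registerCatalog("hive", hiveCatalog)
    btenv.useCatalog("hive")

    btenv.listTables.foreach(println)
    btenv.sqlQuery("SELECT * FROM mytable").toDataSet[Row].print()
  }
}
{% endhighlight %}

The DataSet conversion at the end is only available on the old planner, which is why this sketch does not use the blink planner the NOTE above refers to.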