Posted to commits@flink.apache.org by ja...@apache.org on 2022/09/20 01:45:12 UTC

[flink] branch release-1.16 updated (d18fc95cfd3 -> 100979d8961)

This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git


    from d18fc95cfd3 [FLINK-29023][docs][table] Improve docs of limitation of ADD JAR
     new 53194166ee7 [FLINK-29025][docs] add overview page for Hive dialect
     new 282a3f19eb3 [FLINK-29025][docs] add overview page of queries for Hive dialect
     new 292095cca5e [FLINK-29025][docs] add sort/cluster/distribute by page for Hive dialect
     new 9e22faef4a3 [FLINK-29025][docs] add group by page for Hive dialect
     new 84e89636dcd [FLINK-29025][docs] add join page for Hive dialect
     new d6763dea582 [FLINK-29025][docs] add set operation page for Hive dialect
     new a94d85d5801 [FLINK-29025][docs] add lateral view page for Hive dialect
     new 756f3a7a7db [FLINK-29025][docs] add window functions page for Hive dialect
     new 192fe60f1ac [FLINK-29025][docs] add sub query page for Hive dialect
     new 9c040e2f514 [FLINK-29025][docs] add cte page for Hive dialect
     new a94d8e4fea4 [FLINK-29025][docs] add transform page for Hive dialect
     new f76084f6d80 [FLINK-29025][docs] add table sample page for Hive dialect
     new a432e0af01e [FLINK-29025][docs] add `add jar` page for Hive dialect
     new 58ca9a620ed [FLINK-29025][docs] add alter page for Hive dialect
     new d1c88ed7775 [FLINK-29025][docs] add create page for Hive dialect
     new dffad01bf35 [FLINK-29025][docs] add drop page for Hive dialect
     new d54fc570fc3 [FLINK-29025][docs] add insert page for Hive dialect
     new 37667f9f32c [FLINK-29025][docs] add load data page for Hive dialect
     new 22a56c3fa93 [FLINK-29025][docs] add set page for Hive dialect
     new 87a4f7e174f [FLINK-29025][docs] add show page for Hive dialect
     new b07e433d07f [FLINK-29025][docs] Improve documentation of Hive compatibility pages
     new a0ee95ddcd4 [FLINK-29025][docs] Update page weight of Hive compatibility pages
     new 4b4409feba1 [FLINK-29025][docs][hive] Use dash-case instead of camelCase in URL of Hive compatibility pages
     new 9d4a769f0a4 [FLINK-29025][docs][hive] Fix links of Hive compatibility pages
     new 100979d8961 [FLINK-29025][docs][hive] Remove "alias" front matter of new added Hive compatibility pages

The 25 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../docs/connectors/table/hive/hive_catalog.md     |   2 +-
 .../docs/connectors/table/hive/hive_dialect.md     | 421 --------------------
 .../docs/connectors/table/hive/hive_functions.md   |  10 +-
 .../docs/connectors/table/hive/hive_read_write.md  |   2 +-
 .../docs/connectors/table/hive/overview.md         |   2 +-
 .../docs/dev/table/hive-compatibility}/_index.md   |   2 +-
 .../hive-dialect}/_index.md                        |   6 +-
 .../hive-dialect/add.md}                           |  43 +-
 .../table/hive-compatibility/hive-dialect/alter.md | 324 +++++++++++++++
 .../hive-compatibility/hive-dialect/create.md      | 246 ++++++++++++
 .../table/hive-compatibility/hive-dialect/drop.md  | 144 +++++++
 .../hive-compatibility/hive-dialect/insert.md      | 212 ++++++++++
 .../hive-compatibility/hive-dialect/load-data.md   |  84 ++++
 .../hive-compatibility/hive-dialect/overview.md    | 104 +++++
 .../hive-dialect/queries}/_index.md                |   6 +-
 .../hive-compatibility/hive-dialect/queries/cte.md |  67 ++++
 .../hive-dialect/queries/group-by.md               | 135 +++++++
 .../hive-dialect/queries/join.md                   | 105 +++++
 .../hive-dialect/queries/lateral-view.md           |  90 +++++
 .../hive-dialect/queries/overview.md               | 162 ++++++++
 .../hive-dialect/queries/set-op.md                 |  95 +++++
 .../queries/sort-cluster-distribute-by.md          |  98 +++++
 .../hive-dialect/queries/sub-queries.md            |  69 ++++
 .../hive-dialect/queries/table-sample.md}          |  36 +-
 .../hive-dialect/queries/transform.md              | 135 +++++++
 .../hive-dialect/queries/window-functions.md       | 105 +++++
 .../table/hive-compatibility/hive-dialect/set.md   |  65 +++
 .../table/hive-compatibility/hive-dialect/show.md  | 116 ++++++
 .../hiveserver2.md                                 |   4 +-
 .../docs/dev/table/sql-gateway/hiveserver2.md      |   2 +-
 .../docs/dev/table/sql-gateway/overview.md         |   2 +-
 .../docs/connectors/table/hive/hive_catalog.md     |   2 +-
 .../docs/connectors/table/hive/hive_dialect.md     | 434 ---------------------
 .../docs/connectors/table/hive/hive_functions.md   |  10 +-
 .../docs/connectors/table/hive/hive_read_write.md  |   2 +-
 .../content/docs/connectors/table/hive/overview.md |   2 +-
 .../docs/dev/table/hive-compatibility}/_index.md   |   0
 .../hive-compatibility/hive-dialect}/_index.md     |   6 +-
 .../table/hive-compatibility/hive-dialect/add.md}  |  43 +-
 .../table/hive-compatibility/hive-dialect/alter.md | 324 +++++++++++++++
 .../hive-compatibility/hive-dialect/create.md      | 246 ++++++++++++
 .../table/hive-compatibility/hive-dialect/drop.md  | 144 +++++++
 .../hive-compatibility/hive-dialect/insert.md      | 212 ++++++++++
 .../hive-compatibility/hive-dialect/load-data.md   |  84 ++++
 .../hive-compatibility/hive-dialect/overview.md    | 112 ++++++
 .../hive-dialect/queries}/_index.md                |   6 +-
 .../hive-compatibility/hive-dialect/queries/cte.md |  67 ++++
 .../hive-dialect/queries/group-by.md               | 135 +++++++
 .../hive-dialect/queries/join.md                   | 105 +++++
 .../hive-dialect/queries/lateral-view.md           |  90 +++++
 .../hive-dialect/queries/overview.md               | 162 ++++++++
 .../hive-dialect/queries/set-op.md                 |  95 +++++
 .../queries/sort-cluster-distribute-by.md          |  98 +++++
 .../hive-dialect/queries/sub-queries.md            |  69 ++++
 .../hive-dialect/queries/table-sample.md}          |  36 +-
 .../hive-dialect/queries/transform.md              | 135 +++++++
 .../hive-dialect/queries/window-functions.md       | 105 +++++
 .../table/hive-compatibility/hive-dialect/set.md   |  65 +++
 .../table/hive-compatibility/hive-dialect/show.md  | 116 ++++++
 .../hiveserver2.md                                 |   2 -
 .../docs/dev/table/sql-gateway/hiveserver2.md      |   2 +-
 .../content/docs/dev/table/sql-gateway/overview.md |   2 +-
 62 files changed, 4884 insertions(+), 921 deletions(-)
 delete mode 100644 docs/content.zh/docs/connectors/table/hive/hive_dialect.md
 rename docs/{content/docs/dev/table/hiveCompatibility => content.zh/docs/dev/table/hive-compatibility}/_index.md (96%)
 copy docs/content.zh/docs/dev/table/{hiveCompatibility => hive-compatibility/hive-dialect}/_index.md (95%)
 copy docs/content.zh/docs/dev/table/{hiveCompatibility/_index.md => hive-compatibility/hive-dialect/add.md} (55%)
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
 copy docs/content.zh/docs/dev/table/{hiveCompatibility => hive-compatibility/hive-dialect/queries}/_index.md (95%)
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
 copy docs/content.zh/docs/dev/table/{hiveCompatibility/_index.md => hive-compatibility/hive-dialect/queries/table-sample.md} (62%)
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
 create mode 100644 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
 rename docs/content.zh/docs/dev/table/{hiveCompatibility => hive-compatibility}/hiveserver2.md (99%)
 delete mode 100644 docs/content/docs/connectors/table/hive/hive_dialect.md
 copy docs/{content.zh/docs/dev/table/hiveCompatibility => content/docs/dev/table/hive-compatibility}/_index.md (100%)
 copy docs/{content.zh/docs/dev/table/hiveCompatibility => content/docs/dev/table/hive-compatibility/hive-dialect}/_index.md (95%)
 copy docs/{content.zh/docs/dev/table/hiveCompatibility/_index.md => content/docs/dev/table/hive-compatibility/hive-dialect/add.md} (55%)
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
 copy docs/{content.zh/docs/dev/table/hiveCompatibility => content/docs/dev/table/hive-compatibility/hive-dialect/queries}/_index.md (95%)
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
 rename docs/{content.zh/docs/dev/table/hiveCompatibility/_index.md => content/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample.md} (62%)
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
 create mode 100644 docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
 rename docs/content/docs/dev/table/{hiveCompatibility => hive-compatibility}/hiveserver2.md (99%)


[flink] 20/25: [FLINK-29025][docs] add show page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 87a4f7e174fd8dde5a09e0e6934b74c5d4c27079
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:43:32 2022 +0800

    [FLINK-29025][docs] add show page for Hive dialect
---
 .../table/hiveCompatibility/hiveDialect/show.md    | 119 +++++++++++++++++++++
 .../table/hiveCompatibility/hiveDialect/show.md    | 119 +++++++++++++++++++++
 2 files changed, 238 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
new file mode 100644
index 00000000000..a21a8cf288c
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -0,0 +1,119 @@
+---
+title: "Show Statements"
+weight: 5
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/show.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# SHOW Statements
+
+With the Hive dialect, the following SHOW statements are currently supported:
+
+- SHOW DATABASES
+- SHOW TABLES
+- SHOW VIEWS
+- SHOW PARTITIONS
+- SHOW FUNCTIONS
+
+## SHOW DATABASES
+
+### Description
+
+The `SHOW DATABASES` statement lists all the databases defined in the metastore.
+
+### Syntax
+
+```sql
+SHOW (DATABASES|SCHEMAS);
+```
+The keywords `DATABASES` and `SCHEMAS` are interchangeable; they mean the same thing.
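+For example, either keyword form can be used (a sketch; the output depends on the databases actually registered in the metastore):
+```sql
+-- both statements return the same list of databases
+SHOW DATABASES;
+SHOW SCHEMAS;
+```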
+
+
+## SHOW TABLES
+
+### Description
+
+The `SHOW TABLES` statement lists all the base tables and views in the current database.
+
+### Syntax
+
+```sql
+SHOW TABLES;
+```
+
+## SHOW VIEWS
+
+### Description
+
+The `SHOW VIEWS` statement lists all the views in the current database.
+
+### Syntax
+
+```sql
+SHOW VIEWS;
+```
+
+## SHOW PARTITIONS
+
+### Description
+
+The `SHOW PARTITIONS` statement lists all existing partitions of a given base table, or only the partitions matching the specified partition spec.
+
+### Syntax
+
+```sql
+SHOW PARTITIONS table_name [ partition_spec ];
+partition_spec:
+  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
+```
+
+### Parameter
+
+- partition_spec
+
+  The optional `partition_spec` specifies which partitions should be returned.
+  When specified, only the partitions that match the `partition_spec` are returned.
+  The `partition_spec` can be partial.
+
+
+### Examples
+
+```sql
+-- list all partitions
+SHOW PARTITIONS t1;
+
+-- specify a full partition spec to list a specific partition
+SHOW PARTITIONS t1 PARTITION (year = 2022, month = 12);
+
+-- specify a partial partition spec to list all matching partitions
+SHOW PARTITIONS t1 PARTITION (year = 2022);
+```
+
+## SHOW FUNCTIONS
+
+### Description
+
+The `SHOW FUNCTIONS` statement lists all the user-defined and built-in functions.
+
+### Syntax
+
+```sql
+SHOW FUNCTIONS;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
new file mode 100644
index 00000000000..a21a8cf288c
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -0,0 +1,119 @@
+---
+title: "Show Statements"
+weight: 5
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/show.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# SHOW Statements
+
+With the Hive dialect, the following SHOW statements are currently supported:
+
+- SHOW DATABASES
+- SHOW TABLES
+- SHOW VIEWS
+- SHOW PARTITIONS
+- SHOW FUNCTIONS
+
+## SHOW DATABASES
+
+### Description
+
+The `SHOW DATABASES` statement lists all the databases defined in the metastore.
+
+### Syntax
+
+```sql
+SHOW (DATABASES|SCHEMAS);
+```
+The keywords `DATABASES` and `SCHEMAS` are interchangeable; they mean the same thing.
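+For example, either keyword form can be used (a sketch; the output depends on the databases actually registered in the metastore):
+```sql
+-- both statements return the same list of databases
+SHOW DATABASES;
+SHOW SCHEMAS;
+```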
+
+
+## SHOW TABLES
+
+### Description
+
+The `SHOW TABLES` statement lists all the base tables and views in the current database.
+
+### Syntax
+
+```sql
+SHOW TABLES;
+```
+
+## SHOW VIEWS
+
+### Description
+
+The `SHOW VIEWS` statement lists all the views in the current database.
+
+### Syntax
+
+```sql
+SHOW VIEWS;
+```
+
+## SHOW PARTITIONS
+
+### Description
+
+The `SHOW PARTITIONS` statement lists all existing partitions of a given base table, or only the partitions matching the specified partition spec.
+
+### Syntax
+
+```sql
+SHOW PARTITIONS table_name [ partition_spec ];
+partition_spec:
+  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
+```
+
+### Parameter
+
+- partition_spec
+
+  The optional `partition_spec` specifies which partitions should be returned.
+  When specified, only the partitions that match the `partition_spec` are returned.
+  The `partition_spec` can be partial.
+
+
+### Examples
+
+```sql
+-- list all partitions
+SHOW PARTITIONS t1;
+
+-- specify a full partition spec to list a specific partition
+SHOW PARTITIONS t1 PARTITION (year = 2022, month = 12);
+
+-- specify a partial partition spec to list all matching partitions
+SHOW PARTITIONS t1 PARTITION (year = 2022);
+```
+
+## SHOW FUNCTIONS
+
+### Description
+
+The `SHOW FUNCTIONS` statement lists all the user-defined and built-in functions.
+
+### Syntax
+
+```sql
+SHOW FUNCTIONS;
+```


[flink] 07/25: [FLINK-29025][docs] add lateral view page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a94d85d5801e03615a6833f3216f3a7d22b207e7
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:14:20 2022 +0800

    [FLINK-29025][docs] add lateral view page for Hive dialect
---
 .../hiveDialect/Queries/lateral-view.md            | 90 ++++++++++++++++++++++
 .../hiveDialect/Queries/lateral-view.md            | 90 ++++++++++++++++++++++
 2 files changed, 180 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
new file mode 100644
index 00000000000..1bcd48123b5
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
@@ -0,0 +1,90 @@
+---
+title: "Lateral View Clause"
+weight: 6
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Lateral View Clause
+
+## Description
+
+The `LATERAL VIEW` clause is used in conjunction with user-defined table generating functions ([UDTFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inTable-GeneratingFunctions(UDTF))), such as `explode()`.
+A UDTF generates zero or more output rows for each input row.
+
+A lateral view first applies the UDTF to each row of the base table and then joins the resulting output rows to the input rows, forming a virtual table with the supplied table alias.
+
+## Syntax
+
+```sql
+lateralView: LATERAL VIEW [ OUTER ] udtf( expression ) tableAlias AS columnAlias [, ... ]
+fromClause: FROM baseTable lateralView [, ... ]
+```
+The column alias can be omitted. In this case, the aliases are inherited from the field names of the StructObjectInspector returned by the UDTF.
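+For example, `explode` reports its output field as `col`, so when the alias is omitted the generated column can be referenced by that name (a sketch; the table `t` and its column `arr` are hypothetical):
+```sql
+-- hypothetical table: CREATE TABLE t(arr array<int>);
+SELECT myTable.col FROM t LATERAL VIEW explode(arr) myTable;
+```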
+
+## Parameters
+
+- Lateral View Outer
+
+  You can specify the optional `OUTER` keyword to generate rows even when a `LATERAL VIEW` usually would not generate any.
+  This happens when the UDTF does not generate any rows, which easily occurs when the column to explode is empty.
+  In this case, the source row would never appear in the results. `OUTER` can be used to prevent that: rows are still generated, with `NULL`
+  values in the columns coming from the UDTF.
+- Multiple Lateral Views
+
+  A FROM clause can have multiple LATERAL VIEW clauses.
+  Subsequent LATERAL VIEW clauses can reference columns from any of the tables appearing to their left.
+
+
+## Examples
+
+Assume you have the following table:
+```sql
+CREATE TABLE pageAds(pageid string, adid_list array<int>);
+```
+And the table contains two rows:
+```sql
+front_page, [1, 2, 3];
+contact_page, [3, 4, 5];
+```
+Now, you can use `LATERAL VIEW` to convert the column `adid_list` into separate rows:
+```sql
+SELECT pageid, adid FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid;
+-- result
+front_page, 1
+front_page, 2
+front_page, 3
+contact_page, 3
+contact_page, 4
+contact_page, 5
+```
+Also, if you have the following table:
+```sql
+CREATE TABLE t1(c1 array<int>, c2 array<int>);
+```
+You can use multiple lateral view clauses to convert the columns `c1` and `c2` into separate rows:
+```sql
+SELECT myc1, myc2 FROM t1
+LATERAL VIEW explode(c1) myTable1 AS myc1
+LATERAL VIEW explode(c2) myTable2 AS myc2;
+```
+When the UDTF doesn't produce any rows, `LATERAL VIEW` won't produce rows either.
+You can use `LATERAL VIEW OUTER` to still produce rows, with `NULL` filling the corresponding columns.
+```sql
+SELECT * FROM t1 LATERAL VIEW OUTER explode(array()) C AS a;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
new file mode 100644
index 00000000000..b1463529bd0
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
@@ -0,0 +1,90 @@
+---
+title: "Lateral View Clause"
+weight: 6
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Lateral View Clause
+
+## Description
+
+The `LATERAL VIEW` clause is used in conjunction with user-defined table generating functions ([UDTFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inTable-GeneratingFunctions(UDTF))), such as `explode()`.
+A UDTF generates zero or more output rows for each input row.
+
+A lateral view first applies the UDTF to each row of the base table and then joins the resulting output rows to the input rows, forming a virtual table with the supplied table alias.
+
+## Syntax
+
+```sql
+lateralView: LATERAL VIEW [ OUTER ] udtf( expression ) tableAlias AS columnAlias [, ... ]
+fromClause: FROM baseTable lateralView [, ... ]
+```
+The column alias can be omitted. In this case, the aliases are inherited from the field names of the StructObjectInspector returned by the UDTF.
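+For example, `explode` reports its output field as `col`, so when the alias is omitted the generated column can be referenced by that name (a sketch; the table `t` and its column `arr` are hypothetical):
+```sql
+-- hypothetical table: CREATE TABLE t(arr array<int>);
+SELECT myTable.col FROM t LATERAL VIEW explode(arr) myTable;
+```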
+
+## Parameters
+
+- Lateral View Outer
+
+  You can specify the optional `OUTER` keyword to generate rows even when a `LATERAL VIEW` usually would not generate any.
+  This happens when the UDTF does not generate any rows, which easily occurs when the column to explode is empty.
+  In this case, the source row would never appear in the results. `OUTER` can be used to prevent that: rows are still generated, with `NULL`
+  values in the columns coming from the UDTF.
+- Multiple Lateral Views
+
+  A FROM clause can have multiple LATERAL VIEW clauses.
+  Subsequent LATERAL VIEW clauses can reference columns from any of the tables appearing to their left.
+
+
+## Examples
+
+Assume you have the following table:
+```sql
+CREATE TABLE pageAds(pageid string, adid_list array<int>);
+```
+And the table contains two rows:
+```sql
+front_page, [1, 2, 3];
+contact_page, [3, 4, 5];
+```
+Now, you can use `LATERAL VIEW` to convert the column `adid_list` into separate rows:
+```sql
+SELECT pageid, adid FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid;
+-- result
+front_page, 1
+front_page, 2
+front_page, 3
+contact_page, 3
+contact_page, 4
+contact_page, 5
+```
+Also, if you have the following table:
+```sql
+CREATE TABLE t1(c1 array<int>, c2 array<int>);
+```
+You can use multiple lateral view clauses to convert the columns `c1` and `c2` into separate rows:
+```sql
+SELECT myc1, myc2 FROM t1
+LATERAL VIEW explode(c1) myTable1 AS myc1
+LATERAL VIEW explode(c2) myTable2 AS myc2;
+```
+When the UDTF doesn't produce any rows, `LATERAL VIEW` won't produce rows either.
+You can use `LATERAL VIEW OUTER` to still produce rows, with `NULL` filling the corresponding columns.
+```sql
+SELECT * FROM t1 LATERAL VIEW OUTER explode(array()) C AS a;
+```


[flink] 13/25: [FLINK-29025][docs] add `add jar` page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a432e0af01e06d05647c6057b9582634904aa620
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:34:17 2022 +0800

    [FLINK-29025][docs] add `add jar` page for Hive dialect
---
 .../dev/table/hiveCompatibility/hiveDialect/add.md | 54 ++++++++++++++++++++++
 .../dev/table/hiveCompatibility/hiveDialect/add.md | 54 ++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
new file mode 100644
index 00000000000..bae85d4464e
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -0,0 +1,54 @@
+---
+title: "ADD Statements"
+weight: 7
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/add.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# ADD Statements
+
+With the Hive dialect, the following `ADD` statements are currently supported:
+- ADD JAR
+
+## ADD JAR
+
+### Description
+
+The `ADD JAR` statement adds user JARs to the classpath.
+Adding multiple JAR files in a single `ADD JAR` statement is not supported.
+
+
+### Syntax
+
+```sql
+ADD JAR filename;
+```
+
+### Parameters
+
+- filename
+
+  The name of the JAR file to be added. It can be a file on the local file system or on a distributed file system.
+
+### Examples
+
+```sql
+ADD JAR t.jar;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
new file mode 100644
index 00000000000..bae85d4464e
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -0,0 +1,54 @@
+---
+title: "ADD Statements"
+weight: 7
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/add.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# ADD Statements
+
+With the Hive dialect, the following `ADD` statements are currently supported:
+- ADD JAR
+
+## ADD JAR
+
+### Description
+
+The `ADD JAR` statement adds user JARs to the classpath.
+Adding multiple JAR files in a single `ADD JAR` statement is not supported.
+
+
+### Syntax
+
+```sql
+ADD JAR filename;
+```
+
+### Parameters
+
+- filename
+
+  The name of the JAR file to be added. It can be a file on the local file system or on a distributed file system.
+
+### Examples
+
+```sql
+ADD JAR t.jar;
+```


[flink] 01/25: [FLINK-29025][docs] add overview page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 53194166ee70c71aae55bedf1030f44aaf50ca3d
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 14:51:11 2022 +0800

    [FLINK-29025][docs] add overview page for Hive dialect
---
 .../docs/connectors/table/hive/hive_catalog.md     |   2 +-
 .../docs/connectors/table/hive/hive_dialect.md     | 421 --------------------
 .../docs/connectors/table/hive/hive_functions.md   |  10 +-
 .../docs/connectors/table/hive/hive_read_write.md  |   2 +-
 .../docs/connectors/table/hive/overview.md         |   2 +-
 .../docs/dev/table/hiveCompatibility/_index.md     |   4 +-
 .../hiveCompatibility/{ => hiveDialect}/_index.md  |   6 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  93 +++++
 .../docs/connectors/table/hive/hive_catalog.md     |   2 +-
 .../docs/connectors/table/hive/hive_dialect.md     | 434 ---------------------
 .../docs/connectors/table/hive/hive_functions.md   |  10 +-
 .../docs/connectors/table/hive/hive_read_write.md  |   2 +-
 .../content/docs/connectors/table/hive/overview.md |   2 +-
 .../docs/dev/table/hiveCompatibility/_index.md     |   2 +-
 .../table/hiveCompatibility/hiveDialect}/_index.md |   6 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  97 +++++
 16 files changed, 209 insertions(+), 886 deletions(-)

diff --git a/docs/content.zh/docs/connectors/table/hive/hive_catalog.md b/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
index dc2e461fd7f..44a752573e7 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
@@ -64,7 +64,7 @@ Generic tables, on the other hand, are specific to Flink. When creating generic
 HMS to persist the metadata. While these tables are visible to Hive, it's unlikely Hive is able to understand
 the metadata. And therefore using such tables in Hive leads to undefined behavior.
 
-It's recommended to switch to [Hive dialect]({{< ref "docs/connectors/table/hive/hive_dialect" >}}) to create Hive-compatible tables.
+It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to create Hive-compatible tables.
 If you want to create Hive-compatible tables with default dialect, make sure to set `'connector'='hive'` in your table properties, otherwise
 a table is considered generic by default in `HiveCatalog`. Note that the `connector` property is not required if you use Hive dialect.
 
diff --git a/docs/content.zh/docs/connectors/table/hive/hive_dialect.md b/docs/content.zh/docs/connectors/table/hive/hive_dialect.md
deleted file mode 100644
index d591457ce70..00000000000
--- a/docs/content.zh/docs/connectors/table/hive/hive_dialect.md
+++ /dev/null
@@ -1,421 +0,0 @@
----
-title: "Hive 方言"
-weight: 3
-type: docs
-aliases:
-  - /zh/dev/table/connectors/hive/hive_dialect.html
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Hive 方言
-
-从 1.11.0 开始,在使用 Hive 方言时,Flink 允许用户用 Hive 语法来编写 SQL 语句。通过提供与 Hive 语法的兼容性,我们旨在改善与 Hive 的互操作性,并减少用户需要在 Flink 和 Hive 之间切换来执行不同语句的情况。
-
-## 使用 Hive 方言
-
-Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何使用 SQL 客户端和 Table API 设置方言。
-还要注意,你可以为执行的每个语句动态切换方言。无需重新启动会话即可使用其他方言。
-
-### SQL 客户端
-
-SQL 方言可以通过 `table.sql-dialect` 属性指定。因此你可以通过 SQL 客户端 yaml 文件中的 `configuration` 部分来设置初始方言。
-
-```yaml
-
-execution:
-  type: batch
-  result-mode: table
-
-configuration:
-  table.sql-dialect: hive
-
-```
-
-你同样可以在 SQL 客户端启动后设置方言。
-
-```bash
-
-Flink SQL> set table.sql-dialect=hive; -- to use hive dialect
-[INFO] Session property has been set.
-
-Flink SQL> set table.sql-dialect=default; -- to use default dialect
-[INFO] Session property has been set.
-
-```
-
-### Table API
-
-你可以使用 Table API 为 TableEnvironment 设置方言。
-
-{{< tabs "82a7968d-df12-4db2-83ab-16f09b263935" >}}
-{{< tab "Java" >}}
-```java
-
-EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
-TableEnvironment tableEnv = TableEnvironment.create(settings);
-// to use hive dialect
-tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
-// to use default dialect
-tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
-
-```
-{{< /tab >}}
-{{< tab "Python" >}}
-```python
-from pyflink.table import *
-
-settings = EnvironmentSettings.in_batch_mode()
-t_env = TableEnvironment.create(settings)
-
-# to use hive dialect
-t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
-# to use default dialect
-t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
-
-```
-{{< /tab >}}
-{{< /tabs >}}
-
-## DDL
-
-本章节列出了 Hive 方言支持的 DDL 语句。我们主要关注语法。你可以参考 [Hive 文档](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
-了解每个 DDL 语句的语义。
-
-### CATALOG
-
-#### Show
-
-```sql
-SHOW CURRENT CATALOG;
-```
-
-### DATABASE
-
-#### Show
-
-```sql
-SHOW DATABASES;
-```
-
-#### Create
-
-```sql
-CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
-  [COMMENT database_comment]
-  [LOCATION fs_path]
-  [WITH DBPROPERTIES (property_name=property_value, ...)];
-```
-
-#### Alter
-
-##### Update Properties
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
-```
-
-##### Update Owner
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;
-```
-
-##### Update Location
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET LOCATION fs_path;
-```
-
-#### Drop
-
-```sql
-DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
-```
-
-#### Use
-
-```sql
-USE database_name;
-```
-
-### TABLE
-
-#### Show
-
-```sql
-SHOW TABLES;
-```
-
-#### Create
-
-```sql
-CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
-  [(col_name data_type [column_constraint] [COMMENT col_comment], ... [table_constraint])]
-  [COMMENT table_comment]
-  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
-  [
-    [ROW FORMAT row_format]
-    [STORED AS file_format]
-  ]
-  [LOCATION fs_path]
-  [TBLPROPERTIES (property_name=property_value, ...)]
-
-row_format:
-  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
-      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
-      [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
-
-file_format:
-  : SEQUENCEFILE
-  | TEXTFILE
-  | RCFILE
-  | ORC
-  | PARQUET
-  | AVRO
-  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
-
-column_constraint:
-  : NOT NULL [[ENABLE|DISABLE] [VALIDATE|NOVALIDATE] [RELY|NORELY]]
-
-table_constraint:
-  : [CONSTRAINT constraint_name] PRIMARY KEY (col_name, ...) [[ENABLE|DISABLE] [VALIDATE|NOVALIDATE] [RELY|NORELY]]
-```
-
-#### Alter
-
-##### Rename
-
-```sql
-ALTER TABLE table_name RENAME TO new_table_name;
-```
-
-##### Update Properties
-
-```sql
-ALTER TABLE table_name SET TBLPROPERTIES (property_name = property_value, property_name = property_value, ... );
-```
-
-##### Update Location
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION fs_path;
-```
-
-如果指定了 `partition_spec`,那么必须完整,即具有所有分区列的值。如果指定了,该操作将作用在对应分区上而不是表上。
-
-##### Update File Format
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
-```
-
-如果指定了 `partition_spec`,那么必须完整,即具有所有分区列的值。如果指定了,该操作将作用在对应分区上而不是表上。
-
-##### Update SerDe Properties
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
-
-ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;
-
-serde_properties:
-  : (property_name = property_value, property_name = property_value, ... )
-```
-
-如果指定了 `partition_spec`,那么必须完整,即具有所有分区列的值。如果指定了,该操作将作用在对应分区上而不是表上。
-
-##### Add Partitions
-
-```sql
-ALTER TABLE table_name ADD [IF NOT EXISTS] (PARTITION partition_spec [LOCATION fs_path])+;
-```
-
-##### Drop Partitions
-
-```sql
-ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...];
-```
-
-##### Add/Replace Columns
-
-```sql
-ALTER TABLE table_name
-  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
-  [CASCADE|RESTRICT]
-```
-
-##### Change Column
-
-```sql
-ALTER TABLE table_name CHANGE [COLUMN] col_old_name col_new_name column_type
-  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
-```
-
-#### Drop
-
-```sql
-DROP TABLE [IF EXISTS] table_name;
-```
-
-### VIEW
-
-#### Create
-
-```sql
-CREATE VIEW [IF NOT EXISTS] view_name [(column_name, ...) ]
-  [COMMENT view_comment]
-  [TBLPROPERTIES (property_name = property_value, ...)]
-  AS SELECT ...;
-```
-
-#### Alter
-
-**注意**: 变更视图只在 Table API 中有效,SQL 客户端不支持。
-
-##### Rename
-
-```sql
-ALTER VIEW view_name RENAME TO new_view_name;
-```
-
-##### Update Properties
-
-```sql
-ALTER VIEW view_name SET TBLPROPERTIES (property_name = property_value, ... );
-```
-
-##### Update As Select
-
-```sql
-ALTER VIEW view_name AS select_statement;
-```
-
-#### Drop
-
-```sql
-DROP VIEW [IF EXISTS] view_name;
-```
-
-### FUNCTION
-
-#### Show
-
-```sql
-SHOW FUNCTIONS;
-```
-
-#### Create
-
-```sql
-CREATE FUNCTION function_name AS class_name;
-```
-
-#### Drop
-
-```sql
-DROP FUNCTION [IF EXISTS] function_name;
-```
-
-## DML & DQL _`Beta`_
-
-Hive 方言支持常用的 Hive [DML](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML)
-和 [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select) 。 下表列出了一些 Hive 方言支持的语法。
-
-- [SORT/CLUSTER/DISTRIBUTE BY](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy)
-- [Group By](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+GroupBy)
-- [Join](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins)
-- [Union](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Union)
-- [LATERAL VIEW](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView)
-- [Window Functions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics)
-- [SubQueries](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SubQueries)
-- [CTE](https://cwiki.apache.org/confluence/display/Hive/Common+Table+Expression)
-- [INSERT INTO dest schema](https://issues.apache.org/jira/browse/HIVE-9481)
-- [Implicit type conversions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-AllowedImplicitConversions)
-
-为了实现更好的语法和语义的兼容,强烈建议使用 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
-并将其放在 Module 列表的首位,以便在函数解析时优先使用 Hive 内置函数。
-
-Hive 方言不再支持 [Flink SQL 语法]({{< ref "docs/dev/table/sql/queries/overview" >}}) 。 若需使用 Flink 语法,请切换到 `default` 方言。
-
-以下是一个使用 Hive 方言的示例。
-
-```bash
-Flink SQL> create catalog myhive with ('type' = 'hive', 'hive-conf-dir' = '/opt/hive-conf');
-[INFO] Execute statement succeed.
-
-Flink SQL> use catalog myhive;
-[INFO] Execute statement succeed.
-
-Flink SQL> load module hive;
-[INFO] Execute statement succeed.
-
-Flink SQL> use modules hive,core;
-[INFO] Execute statement succeed.
-
-Flink SQL> set table.sql-dialect=hive;
-[INFO] Session property has been set.
-
-Flink SQL> select explode(array(1,2,3)); -- call hive udtf
-+-----+
-| col |
-+-----+
-|   1 |
-|   2 |
-|   3 |
-+-----+
-3 rows in set
-
-Flink SQL> create table tbl (key int,value string);
-[INFO] Execute statement succeed.
-
-Flink SQL> insert overwrite table tbl values (5,'e'),(1,'a'),(1,'a'),(3,'c'),(2,'b'),(3,'c'),(3,'c'),(4,'d');
-[INFO] Submitting SQL update statement to the cluster...
-[INFO] SQL update statement has been successfully submitted to the cluster:
-
-Flink SQL> select * from tbl cluster by key; -- run cluster by
-2021-04-22 16:13:57,005 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input paths to process : 1
-+-----+-------+
-| key | value |
-+-----+-------+
-|   1 |     a |
-|   1 |     a |
-|   5 |     e |
-|   2 |     b |
-|   3 |     c |
-|   3 |     c |
-|   3 |     c |
-|   4 |     d |
-+-----+-------+
-8 rows in set
-```
-
-## 注意
-
-以下是使用 Hive 方言的一些注意事项。
-
-- Hive 方言只能用于操作 Hive 对象,并要求当前 Catalog 是一个 [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}) 。
-- Hive 方言只支持 `db.table` 这种两级的标识符,不支持带有 Catalog 名字的标识符。
-- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的[Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
- 只在 Hive-2.4.0 或更高版本支持。
-- 执行 DML 和 DQL 时应该使用 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 。
-- 从 Flink 1.15版本开始,在使用 Hive 方言抛出以下异常时,请尝试用 opt 目录下的 flink-table-planner_2.12 jar 包来替换 lib 目录下的 flink-table-planner-loader jar 包。具体原因请参考 [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128)。
-  {{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
-  
diff --git a/docs/content.zh/docs/connectors/table/hive/hive_functions.md b/docs/content.zh/docs/connectors/table/hive/hive_functions.md
index c505c7a3050..da540f9399a 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_functions.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_functions.md
@@ -61,13 +61,9 @@ version = "2.3.4"
 t_env.load_module(name, HiveModule(version))
 ```
 {{< /tab >}}
-{{< tab "YAML" >}}
-```yaml
-modules:
-   - name: core
-     type: core
-   - name: myhive
-     type: hive
+{{< tab "SQL Client" >}}
+```sql
+LOAD MODULE hive WITH ('hive-version' = '2.3.4');
 ```
 {{< /tab >}}
 {{< /tabs >}}
diff --git a/docs/content.zh/docs/connectors/table/hive/hive_read_write.md b/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
index fb29cf9366f..99d05cbba05 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
@@ -507,7 +507,7 @@ INSERT INTO TABLE fact_tz PARTITION (day, hour) select 1, '2022-8-8', '14';
 
 **注意:**
 - 该配置项 `table.exec.hive.sink.sort-by-dynamic-partition.enable` 只在批模式下生效。
-- 目前,只有在 Flink 批模式下使用了 [Hive 方言]({{< ref "docs/connectors/table/hive/hive_dialect" >}}),才可以使用 `DISTRIBUTED BY` 和 `SORTED BY`。
+- 目前,只有在 Flink 批模式下使用了 [Hive 方言]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}),才可以使用 `DISTRIBUTED BY` 和 `SORTED BY`。
 
 ### 自动收集统计信息
 在使用 Flink 写入 Hive 表的时候,Flink 将默认自动收集写入数据的统计信息然后将其提交至 Hive metastore 中。
diff --git a/docs/content.zh/docs/connectors/table/hive/overview.md b/docs/content.zh/docs/connectors/table/hive/overview.md
index b40967e8945..957cd790fb7 100644
--- a/docs/content.zh/docs/connectors/table/hive/overview.md
+++ b/docs/content.zh/docs/connectors/table/hive/overview.md
@@ -449,7 +449,7 @@ USE CATALOG myhive;
 
 ## DDL
 
-在 Flink 中执行 DDL 操作 Hive 的表、视图、分区、函数等元数据时,建议使用 [Hive 方言]({{< ref "docs/connectors/table/hive/hive_dialect" >}})
+在 Flink 中执行 DDL 操作 Hive 的表、视图、分区、函数等元数据时,建议使用 [Hive 方言]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}})
 
 ## DML
 
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
index 3dee17410da..fe5bb79705b 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
@@ -1,7 +1,7 @@
 ---
-title: Hive Compatibility
+title: Hive 兼容性
 bookCollapseSection: true
-weight: 94
+weight: 34
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
similarity index 95%
copy from docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
copy to docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
index 3dee17410da..3daf759196f 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
@@ -1,7 +1,7 @@
 ---
-title: Hive Compatibility
+title: Hive 方言
 bookCollapseSection: true
-weight: 94
+weight: 35
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -11,9 +11,7 @@ regarding copyright ownership.  The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License.  You may obtain a copy of the License at
-
   http://www.apache.org/licenses/LICENSE-2.0
-
 Unless required by applicable law or agreed to in writing,
 software distributed under the License is distributed on an
 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
new file mode 100644
index 00000000000..127b74940d1
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -0,0 +1,93 @@
+---
+title: "概览"
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/overview
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Hive 方言
+
+从 1.11.0 开始,在使用 Hive 方言时,Flink 允许用户用 Hive 语法来编写 SQL 语句。
+通过提供与 Hive 语法的兼容性,我们旨在改善与 Hive 的互操作性,并减少用户需要在 Flink 和 Hive 之间切换来执行不同语句的情况。
+
+## 使用 Hive 方言
+
+Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何使用 SQL 客户端和 Table API 设置方言。
+还要注意,你可以为执行的每个语句动态切换方言。无需重新启动会话即可使用其他方言。
+
+{{< hint warning >}}
+**Note:**
+
+- 为了使用 Hive 方言, 你必须首先添加和 Hive 相关的依赖. 请参考 [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) 如何添加这些依赖。
+- 请确保当前的 Catalog 是 [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). 否则, 将使用 Flink 的默认方言。
+- 为了实现更好的语法和语义的兼容，强烈建议首先加载 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
+  并将其放在 Module 列表的首位，以便在函数解析时优先使用 Hive 内置函数。 
+  请参考文档 [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) 来将 HiveModule 放在 Module 列表的首位。
+- Hive 方言只支持 `db.table` 这种两级的标识符,不支持带有 Catalog 名字的标识符。
+- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的[Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
+  只在 Hive-2.4.0 或更高版本支持。
+  {{< /hint >}}
+
+### SQL Client
+
+SQL 方言可以通过 `table.sql-dialect` 属性指定。你可以在 SQL 客户端启动后设置方言。
+
+```bash
+Flink SQL> SET 'table.sql-dialect' = 'hive'; -- 使用 Hive 方言
+[INFO] Session property has been set.
+
+Flink SQL> SET 'table.sql-dialect' = 'default'; -- 使用 Flink 默认方言
+[INFO] Session property has been set.
+```
+
+{{< hint warning >}}
+**Note:**
+Since Flink 1.15, when you want to use Hive dialect in Flink SQL client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
+with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it'll throw the following exception:
+{{< /hint >}}
+{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+
+### Table API
+
+你可以使用 Table API 为 TableEnvironment 设置方言。
+
+{{< tabs "f19e5e09-c58d-424d-999d-275106d1d5b3" >}}
+{{< tab "Java" >}}
+```java
+EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+// to use hive dialect
+tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+// to use default dialect
+tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
+```
+{{< /tab >}}
+{{< tab "Python" >}}
+```python
+from pyflink.table import *
+settings = EnvironmentSettings.in_batch_mode()
+t_env = TableEnvironment.create(settings)
+# to use hive dialect
+t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
+# to use default dialect
+t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
+```
+{{< /tab >}}
+{{< /tabs >}}
diff --git a/docs/content/docs/connectors/table/hive/hive_catalog.md b/docs/content/docs/connectors/table/hive/hive_catalog.md
index 932e18fcc0d..323928055c7 100644
--- a/docs/content/docs/connectors/table/hive/hive_catalog.md
+++ b/docs/content/docs/connectors/table/hive/hive_catalog.md
@@ -64,7 +64,7 @@ Generic tables, on the other hand, are specific to Flink. When creating generic
 HMS to persist the metadata. While these tables are visible to Hive, it's unlikely Hive is able to understand
 the metadata. And therefore using such tables in Hive leads to undefined behavior.
 
-It's recommended to switch to [Hive dialect]({{< ref "docs/connectors/table/hive/hive_dialect" >}}) to create Hive-compatible tables.
+It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to create Hive-compatible tables.
 If you want to create Hive-compatible tables with default dialect, make sure to set `'connector'='hive'` in your table properties, otherwise
 a table is considered generic by default in `HiveCatalog`. Note that the `connector` property is not required if you use Hive dialect.
 
diff --git a/docs/content/docs/connectors/table/hive/hive_dialect.md b/docs/content/docs/connectors/table/hive/hive_dialect.md
deleted file mode 100644
index f8d2e675cbb..00000000000
--- a/docs/content/docs/connectors/table/hive/hive_dialect.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-title: "Hive Dialect"
-weight: 3
-type: docs
-aliases:
-  - /dev/table/connectors/hive/hive_dialect.html
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Hive Dialect
-
-Flink allows users to write SQL statements in Hive syntax when Hive dialect
-is used. By providing compatibility with Hive syntax, we aim to improve the interoperability with
-Hive and reduce the scenarios when users need to switch between Flink and Hive in order to execute
-different statements.
-
-## Use Hive Dialect
-
-Flink currently supports two SQL dialects: `default` and `hive`. You need to switch to Hive dialect
-before you can write in Hive syntax. The following describes how to set dialect with
-SQL Client and Table API. Also notice that you can dynamically switch dialect for each
-statement you execute. There's no need to restart a session to use a different dialect.
-
-### SQL Client
-
-SQL dialect can be specified via the `table.sql-dialect` property. Therefore you can set the initial dialect to use in
-the `configuration` section of the yaml file for your SQL Client.
-
-```yaml
-
-execution:
-  type: batch
-  result-mode: table
-
-configuration:
-  table.sql-dialect: hive
-
-```
-
-You can also set the dialect after the SQL Client has launched.
-
-```bash
-
-Flink SQL> SET 'table.sql-dialect' = 'hive'; -- to use hive dialect
-[INFO] Session property has been set.
-
-Flink SQL> SET 'table.sql-dialect' = 'default'; -- to use default dialect
-[INFO] Session property has been set.
-
-```
-
-### Table API
-
-You can set dialect for your TableEnvironment with Table API.
-
-{{< tabs "f19e5e09-c58d-424d-999d-275106d1d5b3" >}}
-{{< tab "Java" >}}
-```java
-
-EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
-TableEnvironment tableEnv = TableEnvironment.create(settings);
-// to use hive dialect
-tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
-// to use default dialect
-tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
-
-```
-{{< /tab >}}
-{{< tab "Python" >}}
-```python
-from pyflink.table import *
-
-settings = EnvironmentSettings.in_batch_mode()
-t_env = TableEnvironment.create(settings)
-
-# to use hive dialect
-t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
-# to use default dialect
-t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
-
-```
-{{< /tab >}}
-{{< /tabs >}}
-
-## DDL
-
-This section lists the supported DDLs with the Hive dialect. We'll mainly focus on the syntax
-here. You can refer to [Hive doc](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
-for the semantics of each DDL statement.
-
-### CATALOG
-
-#### Show
-
-```sql
-SHOW CURRENT CATALOG;
-```
-
-### DATABASE
-
-#### Show
-
-```sql
-SHOW DATABASES;
-SHOW CURRENT DATABASE;
-```
-
-#### Create
-
-```sql
-CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
-  [COMMENT database_comment]
-  [LOCATION fs_path]
-  [WITH DBPROPERTIES (property_name=property_value, ...)];
-```
-
-#### Alter
-
-##### Update Properties
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
-```
-
-##### Update Owner
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;
-```
-
-##### Update Location
-
-```sql
-ALTER (DATABASE|SCHEMA) database_name SET LOCATION fs_path;
-```
-
-#### Drop
-
-```sql
-DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
-```
-
-#### Use
-
-```sql
-USE database_name;
-```
-
-### TABLE
-
-#### Show
-
-```sql
-SHOW TABLES;
-```
-
-#### Create
-
-```sql
-CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
-  [(col_name data_type [column_constraint] [COMMENT col_comment], ... [table_constraint])]
-  [COMMENT table_comment]
-  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
-  [
-    [ROW FORMAT row_format]
-    [STORED AS file_format]
-  ]
-  [LOCATION fs_path]
-  [TBLPROPERTIES (property_name=property_value, ...)]
-
-row_format:
-  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
-      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
-      [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
-
-file_format:
-  : SEQUENCEFILE
-  | TEXTFILE
-  | RCFILE
-  | ORC
-  | PARQUET
-  | AVRO
-  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
-
-column_constraint:
-  : NOT NULL [[ENABLE|DISABLE] [VALIDATE|NOVALIDATE] [RELY|NORELY]]
-
-table_constraint:
-  : [CONSTRAINT constraint_name] PRIMARY KEY (col_name, ...) [[ENABLE|DISABLE] [VALIDATE|NOVALIDATE] [RELY|NORELY]]
-```
-
-#### Alter
-
-##### Rename
-
-```sql
-ALTER TABLE table_name RENAME TO new_table_name;
-```
-
-##### Update Properties
-
-```sql
-ALTER TABLE table_name SET TBLPROPERTIES (property_name = property_value, property_name = property_value, ... );
-```
-
-##### Update Location
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION fs_path;
-```
-
-The `partition_spec`, if present, needs to be a full spec, i.e. has values for all partition columns. And when it's
-present, the operation will be applied to the corresponding partition instead of the table.
-
-##### Update File Format
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
-```
-
-The `partition_spec`, if present, needs to be a full spec, i.e. has values for all partition columns. And when it's
-present, the operation will be applied to the corresponding partition instead of the table.
-
-##### Update SerDe Properties
-
-```sql
-ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
-
-ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;
-
-serde_properties:
-  : (property_name = property_value, property_name = property_value, ... )
-```
-
-The `partition_spec`, if present, needs to be a full spec, i.e. has values for all partition columns. And when it's
-present, the operation will be applied to the corresponding partition instead of the table.
-
-##### Add Partitions
-
-```sql
-ALTER TABLE table_name ADD [IF NOT EXISTS] (PARTITION partition_spec [LOCATION fs_path])+;
-```
-
-##### Drop Partitions
-
-```sql
-ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...];
-```
-
-##### Add/Replace Columns
-
-```sql
-ALTER TABLE table_name
-  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
-  [CASCADE|RESTRICT]
-```
-
-##### Change Column
-
-```sql
-ALTER TABLE table_name CHANGE [COLUMN] col_old_name col_new_name column_type
-  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
-```
-
-#### Drop
-
-```sql
-DROP TABLE [IF EXISTS] table_name;
-```
-
-### VIEW
-
-#### Create
-
-```sql
-CREATE VIEW [IF NOT EXISTS] view_name [(column_name, ...) ]
-  [COMMENT view_comment]
-  [TBLPROPERTIES (property_name = property_value, ...)]
-  AS SELECT ...;
-```
-
-#### Alter
-
-##### Rename
-
-```sql
-ALTER VIEW view_name RENAME TO new_view_name;
-```
-
-##### Update Properties
-
-```sql
-ALTER VIEW view_name SET TBLPROPERTIES (property_name = property_value, ... );
-```
-
-##### Update As Select
-
-```sql
-ALTER VIEW view_name AS select_statement;
-```
-
-#### Drop
-
-```sql
-DROP VIEW [IF EXISTS] view_name;
-```
-
-### FUNCTION
-
-#### Show
-
-```sql
-SHOW FUNCTIONS;
-```
-
-#### Create
-
-```sql
-CREATE FUNCTION function_name AS class_name;
-```
-
-#### Drop
-
-```sql
-DROP FUNCTION [IF EXISTS] function_name;
-```
-
-## DML & DQL _`Beta`_
-
-Hive dialect supports a commonly-used subset of Hive's [DML](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML)
-and [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select). The following lists some examples of
-HiveQL supported by the Hive dialect.
-
-- [SORT/CLUSTER/DISTRIBUTE BY](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy)
-- [Group By](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+GroupBy)
-- [Join](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins)
-- [Union](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Union)
-- [LATERAL VIEW](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView)
-- [Window Functions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics)
-- [SubQueries](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SubQueries)
-- [CTE](https://cwiki.apache.org/confluence/display/Hive/Common+Table+Expression)
-- [INSERT INTO dest schema](https://issues.apache.org/jira/browse/HIVE-9481)
-- [Implicit type conversions](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-AllowedImplicitConversions)
-
-In order to have better syntax and semantic compatibility, it's highly recommended to use [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule)
-and place it first in the module list, so that Hive built-in functions can be picked up during function resolution.
-
-Hive dialect no longer supports [Flink SQL queries]({{< ref "docs/dev/table/sql/queries/overview" >}}). Please switch to `default`
-dialect if you'd like to write in Flink syntax.
-
-Following is an example of using hive dialect to run some queries.
-
-```bash
-Flink SQL> create catalog myhive with ('type' = 'hive', 'hive-conf-dir' = '/opt/hive-conf');
-[INFO] Execute statement succeed.
-
-Flink SQL> use catalog myhive;
-[INFO] Execute statement succeed.
-
-Flink SQL> load module hive;
-[INFO] Execute statement succeed.
-
-Flink SQL> use modules hive,core;
-[INFO] Execute statement succeed.
-
-Flink SQL> set table.sql-dialect=hive;
-[INFO] Session property has been set.
-
-Flink SQL> select explode(array(1,2,3)); -- call hive udtf
-+-----+
-| col |
-+-----+
-|   1 |
-|   2 |
-|   3 |
-+-----+
-3 rows in set
-
-Flink SQL> create table tbl (key int,value string);
-[INFO] Execute statement succeed.
-
-Flink SQL> insert overwrite table tbl values (5,'e'),(1,'a'),(1,'a'),(3,'c'),(2,'b'),(3,'c'),(3,'c'),(4,'d');
-[INFO] Submitting SQL update statement to the cluster...
-[INFO] SQL update statement has been successfully submitted to the cluster:
-
-Flink SQL> select * from tbl cluster by key; -- run cluster by
-2021-04-22 16:13:57,005 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input paths to process : 1
-+-----+-------+
-| key | value |
-+-----+-------+
-|   1 |     a |
-|   1 |     a |
-|   5 |     e |
-|   2 |     b |
-|   3 |     c |
-|   3 |     c |
-|   3 |     c |
-|   4 |     d |
-+-----+-------+
-8 rows in set
-```
-
-## Notice
-
-The following are some precautions for using the Hive dialect.
-
-- Hive dialect should only be used to process Hive meta objects, and requires the current catalog to be a
-[HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}).
-- Hive dialect only supports 2-part identifiers, so you can't specify catalog for an identifier.
-- While all Hive versions support the same syntax, whether a specific feature is available still depends on the
-[Hive version]({{< ref "docs/connectors/table/hive/overview" >}}#supported-hive-versions) you use. For example, updating database
-location is only supported in Hive-2.4.0 or later.
-- Use [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule)
-to run DML and DQL.
-- Since Flink 1.15 you need to swap flink-table-planner-loader located in /lib with flink-table-planner_2.12 located in /opt to avoid the following exception. Please see [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128) for more details.
-  {{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
diff --git a/docs/content/docs/connectors/table/hive/hive_functions.md b/docs/content/docs/connectors/table/hive/hive_functions.md
index a9dcfe61557..e57d27f1804 100644
--- a/docs/content/docs/connectors/table/hive/hive_functions.md
+++ b/docs/content/docs/connectors/table/hive/hive_functions.md
@@ -61,13 +61,9 @@ version = "2.3.4"
 t_env.load_module(name, HiveModule(version))
 ```
 {{< /tab >}}
-{{< tab "YAML" >}}
-```yaml
-modules:
-   - name: core
-     type: core
-   - name: myhive
-     type: hive
+{{< tab "SQL Client" >}}
+```sql
+LOAD MODULE hive WITH ('hive-version' = '2.3.4');
 ```
 {{< /tab >}}
 {{< /tabs >}}
diff --git a/docs/content/docs/connectors/table/hive/hive_read_write.md b/docs/content/docs/connectors/table/hive/hive_read_write.md
index 7490e2bcbac..8577c8b8f0a 100644
--- a/docs/content/docs/connectors/table/hive/hive_read_write.md
+++ b/docs/content/docs/connectors/table/hive/hive_read_write.md
@@ -534,7 +534,7 @@ Also, you can manually add `SORTED BY <partition_field>` in your SQL statement t
 
 **NOTE:** 
 - The configuration `table.exec.hive.sink.sort-by-dynamic-partition.enable` only works in Flink `BATCH` mode.
-- Currently, `DISTRIBUTED BY` and `SORTED BY` is only supported when using [Hive dialect]({{< ref "docs/connectors/table/hive/hive_dialect" >}})  in Flink `BATCH` mode.
+- Currently, `DISTRIBUTED BY` and `SORTED BY` are only supported when using [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) in Flink `BATCH` mode.
 
 ### Auto Gather Statistic
 By default, Flink will gather the statistics automatically and then commit them to the Hive metastore when writing a Hive table.
diff --git a/docs/content/docs/connectors/table/hive/overview.md b/docs/content/docs/connectors/table/hive/overview.md
index 687d3afb2eb..6dc9f5117b0 100644
--- a/docs/content/docs/connectors/table/hive/overview.md
+++ b/docs/content/docs/connectors/table/hive/overview.md
@@ -454,7 +454,7 @@ Below are the options supported when creating a `HiveCatalog` instance with YAML
 
 ## DDL
 
-It's recommended to use [Hive dialect]({{< ref "docs/connectors/table/hive/hive_dialect" >}}) to execute DDLs to create
+It's recommended to use [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to execute DDLs to create
 Hive tables, views, partitions, functions within Flink.
 
 ## DML
diff --git a/docs/content/docs/dev/table/hiveCompatibility/_index.md b/docs/content/docs/dev/table/hiveCompatibility/_index.md
index 3dee17410da..75ce8032c3d 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/_index.md
@@ -1,7 +1,7 @@
 ---
 title: Hive Compatibility
 bookCollapseSection: true
-weight: 94
+weight: 34
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
similarity index 95%
copy from docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
copy to docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
index 3dee17410da..5eaefdb93b0 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
@@ -1,7 +1,7 @@
 ---
-title: Hive Compatibility
+title: Hive Dialect
 bookCollapseSection: true
-weight: 94
+weight: 35
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -11,9 +11,7 @@ regarding copyright ownership.  The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License.  You may obtain a copy of the License at
-
   http://www.apache.org/licenses/LICENSE-2.0
-
 Unless required by applicable law or agreed to in writing,
 software distributed under the License is distributed on an
 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
new file mode 100644
index 00000000000..c652bef92db
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -0,0 +1,97 @@
+---
+title: "Overview"
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/overview
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Hive Dialect
+
+Flink allows users to write SQL statements in Hive syntax when the Hive dialect is used.
+By providing compatibility with Hive syntax, we aim to improve interoperability with Hive and reduce the scenarios where users need to switch between Flink and Hive in order to execute different statements.
+
+## Use Hive Dialect
+
+Flink currently supports two SQL dialects: `default` and `hive`. You need to switch to the Hive dialect
+before you can write in Hive syntax. The following describes how to set the dialect with the
+SQL Client and the Table API. Note that you can switch the dialect dynamically for each
+statement you execute; there's no need to restart a session to use a different dialect.
+
+{{< hint warning >}}
+**Note:**
+
+- To use Hive dialect, you have to add dependencies related to Hive. Please refer to [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) for how to add the dependencies.
+- Please make sure the current catalog is [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). Otherwise, it will fall back to Flink's `default` dialect.
+- In order to have better syntax and semantic compatibility, it’s highly recommended to load [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) and
+  place it first in the module list, so that Hive built-in functions can be picked up during function resolution.
+  Please refer to [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) for how to change the resolution order.
+- Hive dialect only supports 2-part identifiers, so you can't specify catalog for an identifier.
+- While all Hive versions support the same syntax, whether a specific feature is available still depends on the
+  [Hive version]({{< ref "docs/connectors/table/hive/overview" >}}#supported-hive-versions) you use. For example, updating database
+  location is only supported in Hive-2.4.0 or later.
+{{< /hint >}}
+
+### SQL Client
+
+The SQL dialect can be specified via the `table.sql-dialect` property,
+so you can set the dialect after the SQL Client has launched.
+
+```bash
+Flink SQL> SET 'table.sql-dialect' = 'hive'; -- to use hive dialect
+[INFO] Session property has been set.
+
+Flink SQL> SET 'table.sql-dialect' = 'default'; -- to use default dialect
+[INFO] Session property has been set.
+```
+
+{{< hint warning >}}
+**Note:**
+Since Flink 1.15, if you want to use the Hive dialect in the Flink SQL Client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
+with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it will throw the following exception:
+{{< /hint >}}
+{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+
+### Table API
+
+You can set the dialect for your `TableEnvironment` with the Table API.
+
+{{< tabs "f19e5e09-c58d-424d-999d-275106d1d5b3" >}}
+{{< tab "Java" >}}
+```java
+EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+// to use hive dialect
+tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+// to use default dialect
+tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
+```
+{{< /tab >}}
+{{< tab "Python" >}}
+```python
+from pyflink.table import *
+settings = EnvironmentSettings.in_batch_mode()
+t_env = TableEnvironment.create(settings)
+# to use hive dialect
+t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
+# to use default dialect
+t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
+```
+{{< /tab >}}
+{{< /tabs >}}


[flink] 02/25: [FLINK-29025][docs] add overview page of queries for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 282a3f19eb35c48e149035ddf0460ef6e1d52d0d
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:01:37 2022 +0800

    [FLINK-29025][docs] add overview page of queries for Hive dialect
---
 .../hiveDialect/Queries/_index.md                  |  21 +++
 .../hiveDialect/Queries/overview.md                | 163 +++++++++++++++++++++
 .../hiveDialect/Queries/_index.md                  |  21 +++
 .../hiveDialect/Queries/overview.md                | 163 +++++++++++++++++++++
 4 files changed, 368 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
new file mode 100644
index 00000000000..d3ec8ca31d9
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
@@ -0,0 +1,21 @@
+---
+title: Queries
+bookCollapseSection: true
+weight: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
new file mode 100644
index 00000000000..90e6a062223
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -0,0 +1,163 @@
+---
+title: "Overview"
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Queries
+
+## Description
+
+Hive dialect supports a commonly-used subset of Hive’s [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select).
+The following lists some of the HiveQL features supported by the Hive dialect.
+
+- [Sort/Cluster/Distribute By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}})
+- [Group By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}})
+- [Join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}})
+- [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
+- [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
+- [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
+- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
+- [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
+- [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
+
+## Syntax
+
+The following section describes the overall query syntax.
+The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+
+```sql
+[WITH CommonTableExpression (, CommonTableExpression)*]
+SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+  FROM table_reference
+  [WHERE where_condition]
+  [GROUP BY col_list]
+  [ORDER BY col_list]
+  [CLUSTER BY col_list
+    | [DISTRIBUTE BY col_list] [SORT BY col_list]
+  ]
+ [LIMIT [offset,] rows]
+```
+- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query.
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}), or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- Table names and column names are case-insensitive.
+
+### WHERE Clause
+
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+in the `WHERE` clause. Some types of [subqueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in the `WHERE` clause.
+
+### GROUP BY Clause
+
+Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}}) for more details.
+
+### ORDER BY Clause
+
+The `ORDER BY` clause returns the result rows sorted in the user-specified order.
+Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), the `ORDER BY` clause guarantees
+a total order in the output.
+
+{{< hint warning >}}
+**Note:**
+To guarantee a global order, there has to be a single task sorting the final output.
+So if the number of rows in the output is large, it can take a very long time to finish.
+{{< /hint >}}
+
+### CLUSTER/DISTRIBUTE/SORT BY Clause
+
+Please refer to [Sort/Cluster/Distribute By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}) for more details.
+
+### ALL and DISTINCT Clauses
+
+The `ALL` and `DISTINCT` options specify whether duplicate rows should be returned.
+If neither option is given, the default is `ALL` (all matching rows are returned).
+`DISTINCT` specifies removal of duplicate rows from the result set.
+
+### LIMIT Clause
+
+The `LIMIT` clause can be used to constrain the number of rows returned by the `SELECT` statement.
+
+`LIMIT` takes one or two numeric arguments, which must both be non-negative integer constants.
+The first argument specifies the offset of the first row to return and the second specifies the maximum number of rows to return.
+When a single argument is given, it stands for the maximum number of rows and the offset defaults to 0.
+
+## Examples
+
+The following is an example of using the Hive dialect to run some queries.
+
+{{< hint warning >}}
+**Note:** Hive dialect no longer supports [Flink SQL queries]({{< ref "docs/dev/table/sql/queries/overview" >}}). Please switch to the `default` dialect if you’d like to write in Flink syntax.
+{{< /hint >}}
+
+```bash
+Flink SQL> create catalog myhive with ('type' = 'hive', 'hive-conf-dir' = '/opt/hive-conf');
+[INFO] Execute statement succeed.
+
+Flink SQL> use catalog myhive;
+[INFO] Execute statement succeed.
+
+Flink SQL> load module hive;
+[INFO] Execute statement succeed.
+
+Flink SQL> use modules hive,core;
+[INFO] Execute statement succeed.
+
+Flink SQL> set table.sql-dialect=hive;
+[INFO] Session property has been set.
+
+Flink SQL> set sql-client.execution.result-mode=tableau;
+
+Flink SQL> select explode(array(1,2,3)); -- call hive udtf
++----+-------------+
+| op |         col |
++----+-------------+
+| +I |           1 |
+| +I |           2 |
+| +I |           3 |
++----+-------------+
+Received a total of 3 rows
+
+Flink SQL> create table tbl (key int,value string);
+[INFO] Execute statement succeed.
+
+Flink SQL> insert into table tbl values (5,'e'),(1,'a'),(1,'a'),(3,'c'),(2,'b'),(3,'c'),(3,'c'),(4,'d');
+[INFO] Submitting SQL update statement to the cluster...
+[INFO] SQL update statement has been successfully submitted to the cluster:
+
+Flink SQL> set execution.runtime-mode=batch; -- change to batch mode
+
+Flink SQL> select * from tbl cluster by key; -- run cluster by
+2021-04-22 16:13:57,005 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input paths to process : 1
++-----+-------+
+| key | value |
++-----+-------+
+|   1 |     a |
+|   1 |     a |
+|   5 |     e |
+|   2 |     b |
+|   3 |     c |
+|   3 |     c |
+|   3 |     c |
+|   4 |     d |
++-----+-------+
+Received a total of 8 rows
+```
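The one-argument and two-argument `LIMIT` forms documented in the overview above can be sketched outside of Flink. The snippet below uses an in-memory SQLite table as a hypothetical stand-in for a Hive table, since SQLite happens to accept the same comma-separated `LIMIT offset, rows` form; the table name and data are illustrative only, not taken from the docs.

```python
import sqlite3

# Hypothetical stand-in table; SQLite accepts the same "LIMIT offset, rows"
# comma form that the Hive dialect documents, so it can illustrate the
# one-argument vs. two-argument semantics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (key INTEGER, value TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")])

# One argument: the maximum number of rows; the offset defaults to 0.
first_two = conn.execute(
    "SELECT key FROM tbl ORDER BY key LIMIT 2").fetchall()

# Two arguments: the first is the offset of the first row to return,
# the second the maximum number of rows.
skip_one_take_two = conn.execute(
    "SELECT key FROM tbl ORDER BY key LIMIT 1, 2").fetchall()

print(first_two)          # [(1,), (2,)]
print(skip_one_take_two)  # [(2,), (3,)]
```

In the Hive dialect itself the same two queries would read `SELECT key FROM tbl ORDER BY key LIMIT 2` and `SELECT key FROM tbl ORDER BY key LIMIT 1, 2`.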
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
new file mode 100644
index 00000000000..d3ec8ca31d9
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
@@ -0,0 +1,21 @@
+---
+title: Queries
+bookCollapseSection: true
+weight: 1
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
new file mode 100644
index 00000000000..90e6a062223
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -0,0 +1,163 @@
+---
+title: "Overview"
+weight: 1
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Queries
+
+## Description
+
+Hive dialect supports a commonly-used subset of Hive’s [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select).
+The following lists some of the HiveQL features supported by the Hive dialect.
+
+- [Sort/Cluster/Distribute By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}})
+- [Group By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}})
+- [Join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}})
+- [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
+- [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
+- [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
+- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
+- [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
+- [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
+
+## Syntax
+
+The following section describes the overall query syntax.
+The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+
+```sql
+[WITH CommonTableExpression (, CommonTableExpression)*]
+SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+  FROM table_reference
+  [WHERE where_condition]
+  [GROUP BY col_list]
+  [ORDER BY col_list]
+  [CLUSTER BY col_list
+    | [DISTRIBUTE BY col_list] [SORT BY col_list]
+  ]
+ [LIMIT [offset,] rows]
+```
+- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query.
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}), or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- Table names and column names are case-insensitive.
+
+### WHERE Clause
+
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+in the `WHERE` clause. Some types of [subqueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in the `WHERE` clause.
+
+### GROUP BY Clause
+
+Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}}) for more details.
+
+### ORDER BY Clause
+
+The `ORDER BY` clause returns the result rows sorted in the user-specified order.
+Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), the `ORDER BY` clause guarantees
+a total order in the output.
+
+{{< hint warning >}}
+**Note:**
+To guarantee a global order, there has to be a single task sorting the final output.
+So if the number of rows in the output is large, it can take a very long time to finish.
+{{< /hint >}}
+
+### CLUSTER/DISTRIBUTE/SORT BY Clause
+
+Please refer to [Sort/Cluster/Distribute By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}) for more details.
+
+### ALL and DISTINCT Clauses
+
+The `ALL` and `DISTINCT` options specify whether duplicate rows should be returned.
+If neither option is given, the default is `ALL` (all matching rows are returned).
+`DISTINCT` specifies removal of duplicate rows from the result set.
+
+### LIMIT Clause
+
+The `LIMIT` clause can be used to constrain the number of rows returned by the `SELECT` statement.
+
+`LIMIT` takes one or two numeric arguments, which must both be non-negative integer constants.
+The first argument specifies the offset of the first row to return and the second specifies the maximum number of rows to return.
+When a single argument is given, it stands for the maximum number of rows and the offset defaults to 0.
+
+## Examples
+
+The following is an example of using the Hive dialect to run some queries.
+
+{{< hint warning >}}
+**Note:** Hive dialect no longer supports [Flink SQL queries]({{< ref "docs/dev/table/sql/queries/overview" >}}). Please switch to the `default` dialect if you’d like to write in Flink syntax.
+{{< /hint >}}
+
+```bash
+Flink SQL> create catalog myhive with ('type' = 'hive', 'hive-conf-dir' = '/opt/hive-conf');
+[INFO] Execute statement succeed.
+
+Flink SQL> use catalog myhive;
+[INFO] Execute statement succeed.
+
+Flink SQL> load module hive;
+[INFO] Execute statement succeed.
+
+Flink SQL> use modules hive,core;
+[INFO] Execute statement succeed.
+
+Flink SQL> set table.sql-dialect=hive;
+[INFO] Session property has been set.
+
+Flink SQL> set sql-client.execution.result-mode=tableau;
+
+Flink SQL> select explode(array(1,2,3)); -- call hive udtf
++----+-------------+
+| op |         col |
++----+-------------+
+| +I |           1 |
+| +I |           2 |
+| +I |           3 |
++----+-------------+
+Received a total of 3 rows
+
+Flink SQL> create table tbl (key int,value string);
+[INFO] Execute statement succeed.
+
+Flink SQL> insert into table tbl values (5,'e'),(1,'a'),(1,'a'),(3,'c'),(2,'b'),(3,'c'),(3,'c'),(4,'d');
+[INFO] Submitting SQL update statement to the cluster...
+[INFO] SQL update statement has been successfully submitted to the cluster:
+
+Flink SQL> set execution.runtime-mode=batch; -- change to batch mode
+
+Flink SQL> select * from tbl cluster by key; -- run cluster by
+2021-04-22 16:13:57,005 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input paths to process : 1
++-----+-------+
+| key | value |
++-----+-------+
+|   1 |     a |
+|   1 |     a |
+|   5 |     e |
+|   2 |     b |
+|   3 |     c |
+|   3 |     c |
+|   3 |     c |
+|   4 |     d |
++-----+-------+
+Received a total of 8 rows
+```
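The `ALL` vs. `DISTINCT` behaviour described in the overview can likewise be sketched with an in-memory SQLite table standing in for a Hive table; the keyword semantics are the same in both dialects, and the table and data below are purely illustrative.

```python
import sqlite3

# Illustrative table containing duplicate rows, to contrast ALL and DISTINCT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (key INTEGER, value TEXT)")
conn.executemany(
    "INSERT INTO tbl VALUES (?, ?)",
    [(1, "a"), (1, "a"), (3, "c"), (3, "c"), (3, "c"), (2, "b")])

# ALL is the default: every matching row is returned, duplicates included.
all_values = conn.execute(
    "SELECT ALL value FROM tbl ORDER BY value").fetchall()

# DISTINCT removes duplicate rows from the result set.
distinct_values = conn.execute(
    "SELECT DISTINCT value FROM tbl ORDER BY value").fetchall()

print(len(all_values))   # 6
print(distinct_values)   # [('a',), ('b',), ('c',)]
```

The equivalent Hive-dialect statements are `SELECT ALL value FROM tbl` and `SELECT DISTINCT value FROM tbl`.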


[flink] 22/25: [FLINK-29025][docs] Update page weight of Hive compatibility pages


commit a0ee95ddcd4f3662b6ffc895c273efeb4258f87f
Author: Jark Wu <ja...@apache.org>
AuthorDate: Mon Sep 19 22:29:54 2022 +0800

    [FLINK-29025][docs] Update page weight of Hive compatibility pages
---
 docs/content.zh/docs/dev/table/hiveCompatibility/_index.md             | 2 +-
 docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md | 2 +-
 docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md        | 2 +-
 docs/content/docs/dev/table/hiveCompatibility/_index.md                | 2 +-
 docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md    | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
index fe5bb79705b..02905d10e09 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
@@ -1,7 +1,7 @@
 ---
 title: Hive 兼容性
 bookCollapseSection: true
-weight: 34
+weight: 94
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
index 3daf759196f..8922b7719a9 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
@@ -1,7 +1,7 @@
 ---
 title: Hive 方言
 bookCollapseSection: true
-weight: 35
+weight: 1
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
index 99c881d41b0..b44c813f76d 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
@@ -1,6 +1,6 @@
 ---
 title: HiveServer2 Endpoint
-weight: 1
+weight: 11
 type: docs
 aliases:
 - /dev/table/hiveCompatibility/hiveserver2.html
diff --git a/docs/content/docs/dev/table/hiveCompatibility/_index.md b/docs/content/docs/dev/table/hiveCompatibility/_index.md
index 75ce8032c3d..3dee17410da 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/_index.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/_index.md
@@ -1,7 +1,7 @@
 ---
 title: Hive Compatibility
 bookCollapseSection: true
-weight: 34
+weight: 94
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
index 5eaefdb93b0..9700cf2dbd0 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
@@ -1,7 +1,7 @@
 ---
 title: Hive Dialect
 bookCollapseSection: true
-weight: 35
+weight: 1
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one


[flink] 11/25: [FLINK-29025][docs] add transform page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a94d8e4fea4524bf26d62db3d99826bfb5cfdd57
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:21:01 2022 +0800

    [FLINK-29025][docs] add transform page for Hive dialect
---
 .../hiveDialect/Queries/transform.md               | 129 +++++++++++++++++++++
 .../hiveDialect/Queries/transform.md               | 129 +++++++++++++++++++++
 2 files changed, 258 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
new file mode 100644
index 00000000000..1ba9a9c0cfe
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -0,0 +1,129 @@
+---
+title: "Transform Clause"
+weight: 10
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Transform Clause
+
+## Description
+
+The `TRANSFORM` clause allows users to transform inputs using a user-specified command or script.
+
+## Syntax
+
+```sql
+rowFormat
+  : ROW FORMAT
+    (DELIMITED [FIELDS TERMINATED BY char]
+               [COLLECTION ITEMS TERMINATED BY char]
+               [MAP KEYS TERMINATED BY char]
+               [ESCAPED BY char]
+               [LINES SEPARATED BY char]
+     |
+     SERDE serde_name [WITH SERDEPROPERTIES
+                            property_name=property_value,
+                            property_name=property_value, ...])
+ 
+outRowFormat : rowFormat
+inRowFormat : rowFormat
+outRecordReader : RECORDREADER className
+inRecordWriter: RECORDWRITER record_write_class
+ 
+query:
+   SELECT TRANSFORM '(' expression [ , ... ] ')'
+    ( inRowFormat )?
+    ( inRecordWriter )?
+    USING command_or_script
+    ( AS colName ( colType )? [, ... ] )?
+    ( outRowFormat )? ( outRecordReader )?
+```
+
+{{< hint warning >}}
+**Note:**
+
+- In Hive dialect, `MAP ...` and `REDUCE ...` are syntactic sugar for `SELECT TRANSFORM ( ... )`,
+  so you can use `MAP` / `REDUCE` in place of `SELECT TRANSFORM`.
+  {{< /hint >}}
+
+## Parameters
+
+- inRowFormat
+
+  Specifies which row format to use when feeding the input data to the running script.
+  By default, columns will be transformed to `STRING` and delimited by `TAB` before being fed to the user script.
+  Similarly, all `NULL` values will be converted to the literal string `\N` in order to differentiate `NULL` values from empty strings.
+
+- outRowFormat
+
+  Specifies which row format to use when reading the output of the running script.
+  By default, the standard output of the user script will be treated as TAB-separated `STRING` columns,
+  any cell containing only `\N` will be re-interpreted as a `NULL`,
+  and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
+
+- inRecordWriter
+
+  Specifies which writer (fully qualified class name) to use to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`.
+
+- outRecordReader
+
+  Specifies which reader (fully qualified class name) to use to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`.
+
+- command_or_script
+
+  Specifies a command or a path to a script that processes data.
+
+  {{< hint warning >}}
+  **Note:**
+
+  Adding a script file and then transforming the input with that script is not supported yet.
+  {{< /hint >}}
+
+- colType
+
+  Specifies the data type that the output of the command/script should be cast to. By default, it is the `STRING` data type.
+
+
+For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware of the following behavior:
+- If the actual number of output columns is less than the number of user-specified output columns, the additional user-specified output columns will be filled with NULL.
+- If the actual number of output columns is more than the number of user-specified output columns, the actual output will be truncated, keeping only the corresponding columns.
+- If the user doesn't specify the clause `( AS colName ( colType )? [, ... ] )?`, the default output schema is `(key: STRING, value: STRING)`.
+  The `key` column contains all the characters before the first tab and the `value` column contains the remaining characters after the first tab.
+  If there is no tab, the second column `value` will be NULL.
+  Note that this is different from specifying `AS key, value`, because in that case `value` will only contain the portion between the first tab and the second tab if there are multiple tabs.
+
+
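The default conventions described above (TAB-separated columns, `\N` for NULL, padding/truncation, and the `(key, value)` fallback schema) can be sketched outside Hive. The following Python snippet is only an illustration of that splitting behavior, not Flink or Hive code, and the function name is made up:

```python
def parse_output_line(line, num_cols=None):
    """Mimic Hive's default handling of one line of a TRANSFORM script's stdout.

    num_cols=None models the missing AS clause: the default (key, value)
    schema keeps everything after the FIRST tab in `value`.
    """
    if num_cols is None:
        key, sep, value = line.partition("\t")
        # No tab at all: the second column `value` becomes NULL.
        return (key, value if sep else None)
    cells = line.split("\t")
    # Fewer cells than declared columns: pad with NULL; more: truncate.
    cells = (cells + [None] * num_cols)[:num_cols]
    # A cell containing only \N is re-interpreted as NULL.
    return tuple(None if c == "\\N" else c for c in cells)

print(parse_output_line("a\tb\tc"))              # -> ('a', 'b\tc')
print(parse_output_line("a\tb\tc", num_cols=2))  # -> ('a', 'b')
```

Note how the default schema and an explicit `AS c1, c2` differ on a line with two tabs: the former keeps `b\tc` in `value`, while the latter truncates to `b`.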
+## Examples
+
+```sql
+CREATE TABLE src(key string, value string);
+-- transform using a script
+SELECT TRANSFORM(key, value) USING 'script' FROM src;
+
+-- transform with specified record writer and record reader
+SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
+ WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
+ RECORDWRITER 'MyRecordWriter'
+ USING 'script'
+ ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+ RECORDREADER 'MyRecordReader' FROM src;
+ 
+-- use keyword MAP instead of TRANSFORM
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) USING 'script' AS (c1, c2);
+
+-- specify the output schema of the transform
+SELECT TRANSFORM(column) USING 'script' AS c1, c2;
+SELECT TRANSFORM(column) USING 'script' AS (c1 INT, c2 INT);
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
new file mode 100644
index 00000000000..1ba9a9c0cfe
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -0,0 +1,129 @@
+---
+title: "Transform Clause"
+weight: 10
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Transform Clause
+
+## Description
+
+The `TRANSFORM` clause allows users to transform inputs using a user-specified command or script.
+
+## Syntax
+
+```sql
+rowFormat
+  : ROW FORMAT
+    (DELIMITED [FIELDS TERMINATED BY char]
+               [COLLECTION ITEMS TERMINATED BY char]
+               [MAP KEYS TERMINATED BY char]
+               [ESCAPED BY char]
+               [LINES SEPARATED BY char]
+     |
+     SERDE serde_name [WITH SERDEPROPERTIES
+                            property_name=property_value,
+                            property_name=property_value, ...])
+ 
+outRowFormat : rowFormat
+inRowFormat : rowFormat
+outRecordReader : RECORDREADER className
+inRecordWriter: RECORDWRITER record_write_class
+ 
+query:
+   SELECT TRANSFORM '(' expression [ , ... ] ')'
+    ( inRowFormat )?
+    ( inRecordWriter )?
+    USING command_or_script
+    ( AS colName ( colType )? [, ... ] )?
+    ( outRowFormat )? ( outRecordReader )?
+```
+
+{{< hint warning >}}
+**Note:**
+
+- In Hive dialect, `MAP ...` and `REDUCE ...` are syntactic sugar for `SELECT TRANSFORM ( ... )`,
+  so you can use `MAP` / `REDUCE` in place of `SELECT TRANSFORM`.
+  {{< /hint >}}
+
+## Parameters
+
+- inRowFormat
+
+  Specifies which row format to use when feeding the input data to the running script.
+  By default, columns will be transformed to `STRING` and delimited by `TAB` before being fed to the user script.
+  Similarly, all `NULL` values will be converted to the literal string `\N` in order to differentiate `NULL` values from empty strings.
+
+- outRowFormat
+
+  Specifies which row format to use when reading the output of the running script.
+  By default, the standard output of the user script will be treated as TAB-separated `STRING` columns,
+  any cell containing only `\N` will be re-interpreted as a `NULL`,
+  and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
+
+- inRecordWriter
+
+  Specifies which writer (fully qualified class name) to use to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`.
+
+- outRecordReader
+
+  Specifies which reader (fully qualified class name) to use to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`.
+
+- command_or_script
+
+  Specifies a command or a path to a script that processes data.
+
+  {{< hint warning >}}
+  **Note:**
+
+  Adding a script file and then transforming the input with that script is not supported yet.
+  {{< /hint >}}
+
+- colType
+
+  Specifies the data type that the output of the command/script should be cast to. By default, it is the `STRING` data type.
+
+
+For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware of the following behavior:
+- If the actual number of output columns is less than the number of user-specified output columns, the additional user-specified output columns will be filled with NULL.
+- If the actual number of output columns is more than the number of user-specified output columns, the actual output will be truncated, keeping only the corresponding columns.
+- If the user doesn't specify the clause `( AS colName ( colType )? [, ... ] )?`, the default output schema is `(key: STRING, value: STRING)`.
+  The `key` column contains all the characters before the first tab and the `value` column contains the remaining characters after the first tab.
+  If there is no tab, the second column `value` will be NULL.
+  Note that this is different from specifying `AS key, value`, because in that case `value` will only contain the portion between the first tab and the second tab if there are multiple tabs.
+
+
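The default conventions described above (TAB-separated columns, `\N` for NULL, padding/truncation, and the `(key, value)` fallback schema) can be sketched outside Hive. The following Python snippet is only an illustration of that splitting behavior, not Flink or Hive code, and the function name is made up:

```python
def parse_output_line(line, num_cols=None):
    """Mimic Hive's default handling of one line of a TRANSFORM script's stdout.

    num_cols=None models the missing AS clause: the default (key, value)
    schema keeps everything after the FIRST tab in `value`.
    """
    if num_cols is None:
        key, sep, value = line.partition("\t")
        # No tab at all: the second column `value` becomes NULL.
        return (key, value if sep else None)
    cells = line.split("\t")
    # Fewer cells than declared columns: pad with NULL; more: truncate.
    cells = (cells + [None] * num_cols)[:num_cols]
    # A cell containing only \N is re-interpreted as NULL.
    return tuple(None if c == "\\N" else c for c in cells)

print(parse_output_line("a\tb\tc"))              # -> ('a', 'b\tc')
print(parse_output_line("a\tb\tc", num_cols=2))  # -> ('a', 'b')
```

Note how the default schema and an explicit `AS c1, c2` differ on a line with two tabs: the former keeps `b\tc` in `value`, while the latter truncates to `b`.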
+## Examples
+
+```sql
+CREATE TABLE src(key string, value string);
+-- transform using a script
+SELECT TRANSFORM(key, value) USING 'script' FROM src;
+
+-- transform with specified record writer and record reader
+SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
+ WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
+ RECORDWRITER 'MyRecordWriter'
+ USING 'script'
+ ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+ RECORDREADER 'MyRecordReader' FROM src;
+ 
+-- use keyword MAP instead of TRANSFORM
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) USING 'script' AS (c1, c2);
+
+-- specify the output schema of the transform
+SELECT TRANSFORM(column) USING 'script' AS c1, c2;
+SELECT TRANSFORM(column) USING 'script' AS (c1 INT, c2 INT);
+```


[flink] 04/25: [FLINK-29025][docs] add group by page for Hive dialect


jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9e22faef4a3146be970b89d66ba8c732ed3d3e12
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:10:14 2022 +0800

    [FLINK-29025][docs] add group by page for Hive dialect
---
 .../hiveDialect/Queries/group-by.md                | 129 +++++++++++++++++++++
 .../hiveDialect/Queries/group-by.md                | 129 +++++++++++++++++++++
 2 files changed, 258 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
new file mode 100644
index 00000000000..719ce532dc1
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -0,0 +1,129 @@
+---
+title: "Group By"
+weight: 3
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Group By Clause
+
+## Description
+
+The `GROUP BY` clause is used to compute a single result from multiple input rows with the given aggregation functions.
+Hive dialect also supports enhanced aggregation features to do multiple aggregations based on the same record by using
+`ROLLUP`/`CUBE`/`GROUPING SETS`.
+
+## Syntax
+
+```sql
+groupByClause: groupByClause-1 | groupByClause-2
+groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+ 
+groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
+grouping_set: { expression | ( [ expression [, ...] ] ) }
+ 
+groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+```
+In `group_expression`, columns can also be specified by position number. But please remember:
+- For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
+- For Hive 2.2.0 and later, set `hive.groupby.position.alias` to true (the default is false)
+
+## Parameters
+
+### GROUPING SETS
+
+The `GROUPING SETS` clause allows for more complex grouping operations than those describable by a standard `GROUP BY`.
+Rows are grouped separately by each specified grouping set and aggregates are computed for each group just as for simple `GROUP BY` clauses.
+
+All `GROUPING SET` clauses can be logically expressed in terms of several `GROUP BY` queries connected by `UNION`.
+
+For example:
+```sql
+SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b GROUPING SETS ( (a, b), a, b, ( ) )
+```
+is equivalent to
+```sql
+SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b
+UNION
+SELECT a, null, SUM( c ) FROM tab1 GROUP BY a, null
+UNION
+SELECT null, b, SUM( c ) FROM tab1 GROUP BY null, b
+UNION
+SELECT null, null, SUM( c ) FROM tab1
+```
+When aggregates are displayed for a column, its value is NULL. This may conflict with the case where the column itself contains NULL values.
+There needs to be a way to distinguish a NULL produced by aggregation from a NULL in the column data, and the `GROUPING__ID` function is the solution to that.
+
+This function returns a bitvector corresponding to whether each column is present or not.
+For each column, a value of "1" is produced for a row in the result set if that column has been aggregated in that row, otherwise the value is "0".
+This can be used to differentiate when there are nulls in the data.
+For more details, please refer to Hive's docs [Grouping__ID function](https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup#EnhancedAggregation,Cube,GroupingandRollup-Grouping__IDfunction).
+
+Also, there's a [Grouping function](https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup#EnhancedAggregation,Cube,GroupingandRollup-Groupingfunction) that indicates whether an expression in a `GROUP BY` clause is aggregated or not for a given row.
+The value 0 represents a column that is part of the grouping set, while the value 1 represents a column that is not part of the grouping set.
+
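The bitvector idea behind `GROUPING__ID` can be sketched as follows. This Python snippet is only an illustration, under the assumption that the leftmost `GROUP BY` column maps to the most significant bit; it is not Hive's actual implementation:

```python
def grouping_id(group_by_cols, grouping_set):
    """Illustrative GROUPING__ID bitvector: a bit is 1 when the column is
    aggregated (absent from the current grouping set), 0 when it is grouped on.
    Assumes the leftmost GROUP BY column is the most significant bit."""
    gid = 0
    for col in group_by_cols:
        gid = (gid << 1) | (0 if col in grouping_set else 1)
    return gid

# GROUP BY a, b GROUPING SETS ( (a, b), a, b, ( ) )
for gs in [("a", "b"), ("a",), ("b",), ()]:
    print(gs, "->", grouping_id(["a", "b"], gs))
# -> 0, 1, 2 and 3 respectively
```

This makes the role of the function concrete: a row produced by the empty grouping set `( )` carries a different ID (all bits set) than a row whose columns happen to contain genuine NULLs.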
+### ROLLUP
+
+`ROLLUP` is a shorthand notation for specifying a common type of grouping set.
+It represents the given list of expressions and all prefixes of the list, including the empty list.
+For example:
+```sql
+GROUP BY a, b, c WITH ROLLUP
+```
+is equivalent to
+```sql
+GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (a), ( ) )
+```
+
+### CUBE
+
+`CUBE` is a shorthand notation for specifying a common type of grouping set.
+It represents the given list and all of its possible subsets - the power set.
+
+For example:
+```sql
+GROUP BY a, b, c WITH CUBE
+```
+is equivalent to
+```sql
+GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (b, c), (a, c), (a), (b), (c), ( ))
+```
+
+## Examples
+
+```sql
+-- use group by expression
+SELECT abs(x), sum(y) FROM t GROUP BY abs(x);
+
+-- use group by column
+SELECT x, sum(y) FROM t GROUP BY x;
+
+-- use group by position
+SELECT x, sum(y) FROM t GROUP BY 1; -- group by the first column in the table
+
+-- use grouping sets
+SELECT x, SUM(y) FROM t GROUP BY x GROUPING SETS ( x, ( ) );
+
+-- use rollup
+SELECT x, SUM(y) FROM t GROUP BY x WITH ROLLUP;
+SELECT x, SUM(y) FROM t GROUP BY ROLLUP (x);
+
+-- use cube
+SELECT x, SUM(y) FROM t GROUP BY x WITH CUBE;
+SELECT x, SUM(y) FROM t GROUP BY CUBE (x);
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
new file mode 100644
index 00000000000..719ce532dc1
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -0,0 +1,129 @@
+---
+title: "Group By"
+weight: 3
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Group By Clause
+
+## Description
+
+The `GROUP BY` clause is used to compute a single result from multiple input rows with the given aggregation functions.
+Hive dialect also supports enhanced aggregation features to do multiple aggregations based on the same record by using
+`ROLLUP`/`CUBE`/`GROUPING SETS`.
+
+## Syntax
+
+```sql
+groupByClause: groupByClause-1 | groupByClause-2
+groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+ 
+groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
+grouping_set: { expression | ( [ expression [, ...] ] ) }
+ 
+groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+```
+In `group_expression`, columns can also be specified by position number. But please remember:
+- For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
+- For Hive 2.2.0 and later, set `hive.groupby.position.alias` to true (the default is false)
+
+## Parameters
+
+### GROUPING SETS
+
+The `GROUPING SETS` clause allows for more complex grouping operations than those describable by a standard `GROUP BY`.
+Rows are grouped separately by each specified grouping set and aggregates are computed for each group just as for simple `GROUP BY` clauses.
+
+All `GROUPING SET` clauses can be logically expressed in terms of several `GROUP BY` queries connected by `UNION`.
+
+For example:
+```sql
+SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b GROUPING SETS ( (a, b), a, b, ( ) )
+```
+is equivalent to
+```sql
+SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b
+UNION
+SELECT a, null, SUM( c ) FROM tab1 GROUP BY a, null
+UNION
+SELECT null, b, SUM( c ) FROM tab1 GROUP BY null, b
+UNION
+SELECT null, null, SUM( c ) FROM tab1
+```
+When aggregates are displayed for a column, its value is NULL. This may conflict with the case where the column itself contains NULL values.
+There needs to be a way to distinguish a NULL produced by aggregation from a NULL in the column data, and the `GROUPING__ID` function is the solution to that.
+
+This function returns a bitvector corresponding to whether each column is present or not.
+For each column, a value of "1" is produced for a row in the result set if that column has been aggregated in that row, otherwise the value is "0".
+This can be used to differentiate when there are nulls in the data.
+For more details, please refer to Hive's docs [Grouping__ID function](https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup#EnhancedAggregation,Cube,GroupingandRollup-Grouping__IDfunction).
+
+Also, there's a [Grouping function](https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup#EnhancedAggregation,Cube,GroupingandRollup-Groupingfunction) that indicates whether an expression in a `GROUP BY` clause is aggregated or not for a given row.
+The value 0 represents a column that is part of the grouping set, while the value 1 represents a column that is not part of the grouping set.
+
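The bitvector idea behind `GROUPING__ID` can be sketched as follows. This Python snippet is only an illustration, under the assumption that the leftmost `GROUP BY` column maps to the most significant bit; it is not Hive's actual implementation:

```python
def grouping_id(group_by_cols, grouping_set):
    """Illustrative GROUPING__ID bitvector: a bit is 1 when the column is
    aggregated (absent from the current grouping set), 0 when it is grouped on.
    Assumes the leftmost GROUP BY column is the most significant bit."""
    gid = 0
    for col in group_by_cols:
        gid = (gid << 1) | (0 if col in grouping_set else 1)
    return gid

# GROUP BY a, b GROUPING SETS ( (a, b), a, b, ( ) )
for gs in [("a", "b"), ("a",), ("b",), ()]:
    print(gs, "->", grouping_id(["a", "b"], gs))
# -> 0, 1, 2 and 3 respectively
```

This makes the role of the function concrete: a row produced by the empty grouping set `( )` carries a different ID (all bits set) than a row whose columns happen to contain genuine NULLs.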
+### ROLLUP
+
+`ROLLUP` is a shorthand notation for specifying a common type of grouping set.
+It represents the given list of expressions and all prefixes of the list, including the empty list.
+For example:
+```sql
+GROUP BY a, b, c WITH ROLLUP
+```
+is equivalent to
+```sql
+GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (a), ( ) )
+```
+
+### CUBE
+
+`CUBE` is a shorthand notation for specifying a common type of grouping set.
+It represents the given list and all of its possible subsets - the power set.
+
+For example:
+```sql
+GROUP BY a, b, c WITH CUBE
+```
+is equivalent to
+```sql
+GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (b, c), (a, c), (a), (b), (c), ( ))
+```
+
+## Examples
+
+```sql
+-- use group by expression
+SELECT abs(x), sum(y) FROM t GROUP BY abs(x);
+
+-- use group by column
+SELECT x, sum(y) FROM t GROUP BY x;
+
+-- use group by position
+SELECT x, sum(y) FROM t GROUP BY 1; -- group by the first column in the table
+
+-- use grouping sets
+SELECT x, SUM(y) FROM t GROUP BY x GROUPING SETS ( x, ( ) );
+
+-- use rollup
+SELECT x, SUM(y) FROM t GROUP BY x WITH ROLLUP;
+SELECT x, SUM(y) FROM t GROUP BY ROLLUP (x);
+
+-- use cube
+SELECT x, SUM(y) FROM t GROUP BY x WITH CUBE;
+SELECT x, SUM(y) FROM t GROUP BY CUBE (x);
+```


[flink] 14/25: [FLINK-29025][docs] add alter page for Hive dialect


jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 58ca9a620ed4f4306b535d3c72a9a165f45bea77
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:35:36 2022 +0800

    [FLINK-29025][docs] add alter page for Hive dialect
---
 .../table/hiveCompatibility/hiveDialect/alter.md   | 326 +++++++++++++++++++++
 .../table/hiveCompatibility/hiveDialect/alter.md   | 326 +++++++++++++++++++++
 2 files changed, 652 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
new file mode 100644
index 00000000000..b595d6b3196
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -0,0 +1,326 @@
+---
+title: "ALTER Statements"
+weight: 3
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/alter.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# ALTER Statements
+
+With Hive dialect, the following ALTER statements are supported for now:
+
+- ALTER DATABASE
+- ALTER TABLE
+- ALTER VIEW
+
+## ALTER DATABASE
+
+### Description
+
+The `ALTER DATABASE` statement is used to change the properties or location of a database.
+
+### Syntax
+
+```sql
+-- alter database's properties
+ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
+
+-- alter database's location
+ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path;
+```
+
+### Synopsis
+
+- The uses of `SCHEMA` and `DATABASE` are interchangeable - they mean the same thing.
+- The `ALTER DATABASE .. SET LOCATION` statement doesn't move the contents of the database's current directory to the newly specified location.
+  It does not change the locations associated with any tables/partitions under the specified database.
+  It only changes the default parent-directory where new tables will be added for this database.
+  This behaviour is analogous to how changing a table-directory does not move existing partitions to a different location.
+
+### Examples
+
+```sql
+-- alter database's properties
+ALTER DATABASE d1 SET DBPROPERTIES ('p1' = 'v1', 'p2' = 'v2');
+
+-- alter database's location
+ALTER DATABASE d1 SET LOCATION '/new/path';
+```
+
+## ALTER TABLE
+
+### Description
+
+The `ALTER TABLE` statement changes the schema or properties of a table.
+
+### Rename Table
+
+#### Description
+
+The `RENAME TABLE` statement allows users to change the name of a table to a different name.
+
+#### Syntax
+
+```sql
+ALTER TABLE table_name RENAME TO new_table_name;
+```
+
+#### Examples
+
+```sql
+ALTER TABLE t1 RENAME TO t2;
+```
+
+### Alter Table Properties
+
+#### Description
+
+The `ALTER TABLE ... SET TBLPROPERTIES` statement allows users to add their own metadata to tables. Currently, the `last_modified_user` and `last_modified_time` properties are automatically added and managed by Hive.
+
+#### Syntax
+
+```sql
+ALTER TABLE table_name SET TBLPROPERTIES table_properties;
+ 
+table_properties:
+  : (property_name = property_value, property_name = property_value, ... )
+```
+
+#### Examples
+
+```sql
+ALTER TABLE table_name SET TBLPROPERTIES ('p1' = 'v1', 'p2' = 'v2');
+```
+
+### Add / Remove SerDe Properties
+
+#### Description
+
+These statements enable users to change a table's SerDe or add user-defined metadata to the table's SerDe object.
+The SerDe properties are passed to the table's SerDe to serialize and deserialize data, so users can store any information required for their custom SerDe here.
+Refer to the Hive's [SerDe docs](https://cwiki.apache.org/confluence/display/Hive/SerDe) and [Hive SerDe](https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-HiveSerDe) for more details.
+
+#### Syntax
+
+Add SerDe Properties:
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
+ 
+ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;
+ 
+serde_properties:
+  : (property_name = property_value, property_name = property_value, ... )
+```
+
+Remove SerDe Properties:
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] UNSET SERDEPROPERTIES (property_name, ... );
+```
+
+#### Examples
+
+```sql
+-- add serde properties
+ALTER TABLE t1 SET SERDEPROPERTIES ('field.delim' = ',');
+
+-- remove serde properties
+ALTER TABLE t1 UNSET SERDEPROPERTIES ('field.delim');
+```
+
+### Alter Partition
+
+`ALTER TABLE ... PARTITION ..` statement is used to add/rename/drop partitions.
+
+#### Add Partitions
+
+The `ALTER TABLE .. ADD PARTITION` statement is used to add partitions.
+Partition values should be quoted only if they are strings.
+The location must be a directory inside which data files reside. (`ADD PARTITION` changes the table metadata, but does not load data. If the data does not exist in the partition's location, queries will not return any results.)
+An error is thrown if the partition_spec for the table already exists. You can use `IF NOT EXISTS` to skip the error.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name ADD [IF NOT EXISTS]
+ PARTITION partition_spec [LOCATION 'location']
+ [, PARTITION partition_spec [LOCATION 'location'], ...];
+partition_spec:
+  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 ADD PARTITION (dt='2022-08-08', country='china') LOCATION '/path/to/us/part080808'
+                   PARTITION (dt='2022-08-09', country='china') LOCATION '/path/to/us/part080809';
+```
+
+#### Rename Partitions
+
+The `ALTER TABLE .. PARTITION ... RENAME TO ...` statement is used to rename a partition.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china')
+     RENAME TO PARTITION (dt='2023-08-08', country='china');
+```
+
+#### Drop Partitions
+
+The `ALTER TABLE .. DROP PARTITION ...` statement is used to drop a partition.
+This removes both the data and the metadata for the partition. The data is actually moved to the `.Trash/Current` directory if Trash is configured, but the
+metadata is completely lost.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 DROP IF EXISTS PARTITION (dt='2022-08-08', country='china');
+```
+
+#### Alter Location / File Format
+
+The `ALTER TABLE ... SET` command can also be used for changing the file location and file format of existing tables.
+
+##### Syntax
+
+```sql 
+--- Alter File Location
+ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
+
+--- Alter File Format
+ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
+```
+
+##### Examples
+
+```sql
+-- alter file location
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china') SET LOCATION "/user/warehouse/t2/dt=2022-08-08/country=china";
+
+-- alter file format
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china') SET FILEFORMAT ORC;
+```
+
+### Alter Column
+
+#### Rules for Column Names
+
+Column names are case-insensitive. Backtick quotation enables the use of reserved keywords for column names, as well as table names.
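+
+For example (a minimal sketch; `t1` and the column name are hypothetical), backticks make a reserved keyword usable as a column name:
+
+```sql
+-- `timestamp` is a reserved keyword; backticks allow it as a column name
+ALTER TABLE t1 ADD COLUMNS (`timestamp` STRING COMMENT 'event time');
+```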
+
+#### Change Column's Definition
+
+The statement allows users to change a column's name, data type, comment, or position, or any combination of them.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] CHANGE [COLUMN] col_old_name col_new_name column_type
+  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 CHANGE COLUMN c1 new_c1 STRING FIRST;
+ALTER TABLE t1 CHANGE COLUMN c1 new_c1 STRING AFTER c2;
+```
+
+#### Add/Replace Columns
+
+The statement allows users to add new columns or replace the existing columns with a new set of columns.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name 
+  [PARTITION partition_spec]                
+  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
+  [CASCADE|RESTRICT]
+```
+
+`ADD COLUMNS` will add new columns to the end of the existing columns before the partition columns.
+
+`REPLACE COLUMNS` will remove all existing columns and add the new set of columns.
+
+##### Synopsis
+
+`ALTER TABLE ... COLUMNS` with the `CASCADE` clause changes the columns in the table's metadata
+and cascades the same change to the metadata of all partitions.
+`RESTRICT` is the default, limiting column changes to the table metadata only.
+
+##### Examples
+
+```sql
+-- add column
+ALTER TABLE t1 ADD COLUMNS (ch CHAR(5), name STRING) CASCADE;
+
+-- replace column
+ALTER TABLE t1 REPLACE COLUMNS (t1 TINYINT, d DECIMAL) CASCADE;
+```
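+
+As a sketch of the default behavior (the column name is hypothetical), omitting `CASCADE` is equivalent to `RESTRICT` and changes only the table's metadata, leaving existing partition metadata untouched:
+
+```sql
+-- RESTRICT (the default): partition metadata is left untouched
+ALTER TABLE t1 ADD COLUMNS (note STRING) RESTRICT;
+```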
+
+## ALTER VIEW
+
+### Alter View Properties
+
+`ALTER VIEW ... SET TBLPROPERTIES ...` allows users to add their own metadata to a view.
+
+#### Syntax
+
+```sql
+ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;
+ 
+table_properties:
+  : (property_name = property_value, property_name = property_value, ...)
+```
+
+#### Examples
+
+```sql
+ALTER VIEW v1 SET TBLPROPERTIES ('p1' = 'v1');
+```
+
+### Alter View As Select
+
+`ALTER VIEW ... AS ...` allows users to change the definition of a view, which must already exist.
+
+#### Syntax
+
+```sql
+ALTER VIEW [db_name.]view_name AS select_statement;
+```
+
+#### Examples
+
+```sql
+ALTER VIEW v1 AS SELECT * FROM t2;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
new file mode 100644
index 00000000000..b595d6b3196
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -0,0 +1,326 @@
+---
+title: "ALTER Statements"
+weight: 3
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/alter.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# ALTER Statements
+
+With the Hive dialect, the following `ALTER` statements are supported for now:
+
+- ALTER DATABASE
+- ALTER TABLE
+- ALTER VIEW
+
+## ALTER DATABASE
+
+### Description
+
+The `ALTER DATABASE` statement is used to change the properties or location of a database.
+
+### Syntax
+
+```sql
+-- alter database's properties
+ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
+
+-- alter database's location
+ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path;
+```
+
+### Synopsis
+
+- The uses of `SCHEMA` and `DATABASE` are interchangeable - they mean the same thing.
+- The `ALTER DATABASE .. SET LOCATION` statement doesn't move the contents of the database's current directory to the newly specified location.
+  It does not change the locations associated with any tables/partitions under the specified database.
+  It only changes the default parent-directory where new tables will be added for this database.
+  This behaviour is analogous to how changing a table-directory does not move existing partitions to a different location.
+
+### Examples
+
+```sql
+-- alter database's properties
+ALTER DATABASE d1 SET DBPROPERTIES ('p1' = 'v1', 'p2' = 'v2');
+
+-- alter database's location
+ALTER DATABASE d1 SET LOCATION '/new/path';
+```
+
+## ALTER TABLE
+
+### Description
+
+`ALTER TABLE` statement changes the schema or properties of a table.
+
+### Rename Table
+
+#### Description
+
+The `RENAME TABLE` statement allows users to change the name of a table to a different name.
+
+#### Syntax
+
+```sql
+ALTER TABLE table_name RENAME TO new_table_name;
+```
+
+#### Examples
+
+```sql
+ALTER TABLE t1 RENAME TO t2;
+```
+
+### Alter Table Properties
+
+#### Description
+
+The `ALTER TABLE ... SET TBLPROPERTIES` statement allows users to add their own metadata to tables. Currently, the `last_modified_user` and `last_modified_time` properties are automatically added and managed by Hive.
+
+#### Syntax
+
+```sql
+ALTER TABLE table_name SET TBLPROPERTIES table_properties;
+ 
+table_properties:
+  : (property_name = property_value, property_name = property_value, ... )
+```
+
+#### Examples
+
+```sql
+ALTER TABLE table_name SET TBLPROPERTIES ('p1' = 'v1', 'p2' = 'v2');
+```
+
+### Add / Remove SerDe Properties
+
+#### Description
+
+The statement enables users to change a table's SerDe or add user-defined metadata to the table's SerDe object.
+The SerDe properties are passed to the table's SerDe to serialize and deserialize data. So users can store any information required for their custom SerDe here.
+Refer to the Hive's [SerDe docs](https://cwiki.apache.org/confluence/display/Hive/SerDe) and [Hive SerDe](https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-HiveSerDe) for more details.
+
+#### Syntax
+
+Add SerDe Properties:
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
+ 
+ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;
+ 
+serde_properties:
+  : (property_name = property_value, property_name = property_value, ... )
+```
+
+Remove SerDe Properties:
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] UNSET SERDEPROPERTIES (property_name, ... );
+```
+
+#### Examples
+
+```sql
+-- add serde properties
+ALTER TABLE t1 SET SERDEPROPERTIES ('field.delim' = ',');
+
+-- remove serde properties
+ALTER TABLE t1 UNSET SERDEPROPERTIES ('field.delim');
+```
+
+### Alter Partition
+
+The `ALTER TABLE ... PARTITION ...` statements are used to add/rename/drop partitions.
+
+#### Add Partitions
+
+The `ALTER TABLE ... ADD PARTITION` statement is used to add partitions.
+Partition values should be quoted only if they are strings.
+The location must be a directory inside which data files reside. Note that `ADD PARTITION` changes the table metadata but does not load data, so if the data does not exist in the partition's location, queries will not return any results.
+An error is thrown if the `partition_spec` for the table already exists. You can use `IF NOT EXISTS` to skip the error.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name ADD [IF NOT EXISTS]
+ PARTITION partition_spec [LOCATION 'location']
+ [, PARTITION partition_spec [LOCATION 'location'], ...];
+partition_spec:
+  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 ADD PARTITION (dt='2022-08-08', country='china') LOCATION '/path/to/china/part080808'
+                   PARTITION (dt='2022-08-09', country='china') LOCATION '/path/to/china/part080809';
+```
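+
+As a minimal sketch (the table `logs` and its integer `hour` partition column are hypothetical), partition values of non-string types are written without quotes:
+
+```sql
+-- `hour` is INT, so its value is unquoted; `dt` is STRING, so its value is quoted
+ALTER TABLE logs ADD IF NOT EXISTS PARTITION (dt='2022-08-08', hour=13);
+```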
+
+#### Rename Partitions
+
+The `ALTER TABLE ... PARTITION ... RENAME TO ...` statement is used to rename a partition.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china')
+     RENAME TO PARTITION (dt='2023-08-08', country='china');
+```
+
+#### Drop Partitions
+
+The `ALTER TABLE ... DROP PARTITION ...` statement is used to drop a partition.
+This removes both the data and metadata for the partition. The data is actually moved to the `.Trash/Current` directory if Trash is configured, but the
+metadata is completely lost.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 DROP IF EXISTS PARTITION (dt='2022-08-08', country='china');
+```
+
+#### Alter Location / File Format
+
+The `ALTER TABLE ... SET` statement can also be used to change the file location and file format of existing tables.
+
+##### Syntax
+
+```sql 
+--- Alter File Location
+ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
+
+--- Alter File Format
+ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
+```
+
+##### Examples
+
+```sql
+-- alter file location
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china') SET LOCATION "/user/warehouse/t2/dt=2022-08-08/country=china";
+
+-- alter file format
+ALTER TABLE t1 PARTITION (dt='2022-08-08', country='china') SET FILEFORMAT ORC;
+```
+
+### Alter Column
+
+#### Rules for Column Names
+
+Column names are case-insensitive. Backtick quotation enables the use of reserved keywords for column names, as well as table names.
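+
+For example (a minimal sketch; `t1` and the column name are hypothetical), backticks make a reserved keyword usable as a column name:
+
+```sql
+-- `timestamp` is a reserved keyword; backticks allow it as a column name
+ALTER TABLE t1 ADD COLUMNS (`timestamp` STRING COMMENT 'event time');
+```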
+
+#### Change Column's Definition
+
+The statement allows users to change a column's name, data type, comment, or position, or any combination of them.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name [PARTITION partition_spec] CHANGE [COLUMN] col_old_name col_new_name column_type
+  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
+```
+
+##### Examples
+
+```sql
+ALTER TABLE t1 CHANGE COLUMN c1 new_c1 STRING FIRST;
+ALTER TABLE t1 CHANGE COLUMN c1 new_c1 STRING AFTER c2;
+```
+
+#### Add/Replace Columns
+
+The statement allows users to add new columns or replace the existing columns with a new set of columns.
+
+##### Syntax
+
+```sql
+ALTER TABLE table_name 
+  [PARTITION partition_spec]                
+  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
+  [CASCADE|RESTRICT]
+```
+
+`ADD COLUMNS` will add new columns to the end of the existing columns before the partition columns.
+
+`REPLACE COLUMNS` will remove all existing columns and add the new set of columns.
+
+##### Synopsis
+
+`ALTER TABLE ... COLUMNS` with the `CASCADE` clause changes the columns in the table's metadata
+and cascades the same change to the metadata of all partitions.
+`RESTRICT` is the default, limiting column changes to the table metadata only.
+
+##### Examples
+
+```sql
+-- add column
+ALTER TABLE t1 ADD COLUMNS (ch CHAR(5), name STRING) CASCADE;
+
+-- replace column
+ALTER TABLE t1 REPLACE COLUMNS (t1 TINYINT, d DECIMAL) CASCADE;
+```
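+
+As a sketch of the default behavior (the column name is hypothetical), omitting `CASCADE` is equivalent to `RESTRICT` and changes only the table's metadata, leaving existing partition metadata untouched:
+
+```sql
+-- RESTRICT (the default): partition metadata is left untouched
+ALTER TABLE t1 ADD COLUMNS (note STRING) RESTRICT;
+```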
+
+## ALTER VIEW
+
+### Alter View Properties
+
+`ALTER VIEW ... SET TBLPROPERTIES ...` allows users to add their own metadata to a view.
+
+#### Syntax
+
+```sql
+ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;
+ 
+table_properties:
+  : (property_name = property_value, property_name = property_value, ...)
+```
+
+#### Examples
+
+```sql
+ALTER VIEW v1 SET TBLPROPERTIES ('p1' = 'v1');
+```
+
+### Alter View As Select
+
+`ALTER VIEW ... AS ...` allows users to change the definition of a view, which must already exist.
+
+#### Syntax
+
+```sql
+ALTER VIEW [db_name.]view_name AS select_statement;
+```
+
+#### Examples
+
+```sql
+ALTER VIEW v1 AS SELECT * FROM t2;
+```


[flink] 23/25: [FLINK-29025][docs][hive] Use dash-case instead of camelCase in URL of Hive compatibility pages

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4b4409feba1f9583112fb1af0508857493886fda
Author: Jark Wu <ja...@apache.org>
AuthorDate: Mon Sep 19 22:35:45 2022 +0800

    [FLINK-29025][docs][hive] Use dash-case instead of camelCase in URL of Hive compatibility pages
---
 .../docs/dev/table/{hiveCompatibility => hive-compatibility}/_index.md    | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/_index.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/add.md                | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/alter.md              | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/create.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/drop.md               | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/insert.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/load-data.md          | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/overview.md           | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/_index.md         | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/cte.md            | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/group-by.md       | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/join.md           | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/lateral-view.md   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/overview.md       | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/set-op.md         | 0
 .../hive-dialect/queries}/sort-cluster-distribute-by.md                   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/sub-queries.md    | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/table-sample.md   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/transform.md      | 0
 .../hive-dialect/queries}/window-functions.md                             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/set.md                | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/show.md               | 0
 .../dev/table/{hiveCompatibility => hive-compatibility}/hiveserver2.md    | 0
 .../docs/dev/table/{hiveCompatibility => hive-compatibility}/_index.md    | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/_index.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/add.md                | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/alter.md              | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/create.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/drop.md               | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/insert.md             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/load-data.md          | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/overview.md           | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/_index.md         | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/cte.md            | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/group-by.md       | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/join.md           | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/lateral-view.md   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/overview.md       | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/set-op.md         | 0
 .../hive-dialect/queries}/sort-cluster-distribute-by.md                   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/sub-queries.md    | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/table-sample.md   | 0
 .../Queries => hive-compatibility/hive-dialect/queries}/transform.md      | 0
 .../hive-dialect/queries}/window-functions.md                             | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/set.md                | 0
 .../hiveDialect => hive-compatibility/hive-dialect}/show.md               | 0
 .../dev/table/{hiveCompatibility => hive-compatibility}/hiveserver2.md    | 0
 48 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/_index.md b/docs/content.zh/docs/dev/table/hive-compatibility/_index.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/_index.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/_index.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/_index.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/_index.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/_index.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/_index.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md b/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
similarity index 100%
rename from docs/content.zh/docs/dev/table/hiveCompatibility/hiveserver2.md
rename to docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/_index.md b/docs/content/docs/dev/table/hive-compatibility/_index.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/_index.md
rename to docs/content/docs/dev/table/hive-compatibility/_index.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/_index.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/_index.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/_index.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/_index.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/_index.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/_index.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/cte.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/join.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/transform.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
rename to docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveserver2.md b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
similarity index 100%
rename from docs/content/docs/dev/table/hiveCompatibility/hiveserver2.md
rename to docs/content/docs/dev/table/hive-compatibility/hiveserver2.md


[flink] 17/25: [FLINK-29025][docs] add insert page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d54fc570fc31c2b56daa1fe9ab3769f09016bf3d
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:40:25 2022 +0800

    [FLINK-29025][docs] add insert page for Hive dialect
---
 .../table/hiveCompatibility/hiveDialect/insert.md  | 213 +++++++++++++++++++++
 .../table/hiveCompatibility/hiveDialect/insert.md  | 213 +++++++++++++++++++++
 2 files changed, 426 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
new file mode 100644
index 00000000000..60e9c2c32f4
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -0,0 +1,213 @@
+---
+title: "INSERT Statements"
+weight: 3
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# INSERT Statements
+
+## INSERT TABLE
+
+### Description
+
+The `INSERT TABLE` statement is used to insert rows into a table or to overwrite the existing data in the table. The rows to be inserted
+can be specified by value expressions or by the result of a query.
+
+### Syntax
+
+```sql
+-- Standard syntax
+INSERT [OVERWRITE] TABLE tablename1
+ [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+   
+INSERT INTO TABLE tablename1
+ [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+   
+-- Hive extension (multiple inserts):
+FROM from_statement
+INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+[, ... ];
+
+FROM from_statement
+INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+[, ... ];
+
+-- Hive extension (dynamic partition inserts):
+INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+  
+INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+```
+
+### Parameters
+
+- `OVERWRITE`
+
+  If `OVERWRITE` is specified, any existing data in the table or partition will be overwritten.
+
+- `PARTITION ( ... )`
+
+  An option to specify the table's specific partition(s) into which the data is inserted.
+  If the `PARTITION` clause is specified, the table should be a partitioned table.
+
+- `VALUES ( value [, ..] ) [, ( ... ) ]`
+
+  Specifies the values to be inserted explicitly. A comma must be used to separate the values in the clause.
+  More than one set of values can be specified to insert multiple rows.
+
+- select_statement
+
+  A statement for query.
+  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+
+### Synopsis
+
+#### Multiple Inserts
+
+In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans required. Flink can insert data into multiple
+tables by scanning the input data just once.
+
+#### Dynamic Partition Inserts
+
+In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+If all the partition columns' values are given, we call this a static partition; otherwise it is a dynamic partition.
+
+Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
+
+The dynamic partition columns must be specified last among the columns in the `SELECT` statement and in the same order in which they appear in the `PARTITION()` clause.
+
+{{< hint warning >}}
+**Note:**
+
+In Hive, by default, the user must specify at least one static partition in case the user accidentally overwrites all partitions, and the user can
+set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
+
+But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
+{{< /hint >}}
+
+### Examples
+
+```sql
+-- insert into table using values
+INSERT INTO t1 VALUES ('k1', 'v1'), ('k2', 'v2');
+
+-- insert overwrite
+INSERT OVERWRITE t1 VALUES ('k1', 'v1'), ('k2', 'v2');
+
+-- insert into table using select statement
+INSERT INTO TABLE t1 SELECT * FROM t2;
+
+-- insert into partition
+--- static partition
+INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
+
+--- dynamic partition 
+INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
+INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
+
+-- multi-insert statements
+FROM (SELECT month, value from t1)
+    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+    INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+```
+
+## INSERT OVERWRITE DIRECTORY
+
+### Description
+
+Query results can be inserted into filesystem directories by using a slight variation of the syntax above:
+```sql
+-- Standard syntax:
+INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
+  [ROW FORMAT row_format] [STORED AS file_format] 
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+
+-- Hive extension (multiple inserts):
+FROM from_statement
+INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
+[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Parameters
+
+- directory_path
+
+  The path of the directory to be written to; it can be a full URI. If the scheme or authority is not specified,
+  Flink will use the scheme and authority from the Hadoop configuration variable `fs.default.name` that specifies the NameNode URI.
+
+- `LOCAL`
+
+  The `LOCAL` keyword is optional. If `LOCAL` keyword is used, Flink will write data to the directory on the local file system.
+
+- `VALUES ( value [, ..] ) [, ( ... ) ]`
+
+  Specifies the values to be inserted explicitly. A comma must be used to separate the values in the clause.
+  More than one set of values can be specified to insert multiple rows.
+
+- select_statement
+
+  A statement for query.
+  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+
+- `STORED AS file_format`
+
+  Specifies the file format to use for the insert. The data will be stored in the specified file format.
+  The valid values are `TEXTFILE`, `ORC`, `PARQUET`, `AVRO`, `RCFILE`, `SEQUENCEFILE`, `JSONFILE`.
+  For more details, please refer to Hive's doc [Storage Formats](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormat,StorageFormat,andSerDe).
+
+
+- row_format
+
+  Specifies the row format for this insert. The data will be serialized to files with the specified properties.
+  For more details, please refer to Hive's doc [RowFormat](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormat,StorageFormat,andSerDe).
+
+### Synopsis
+
+#### Multiple Inserts
+
+In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans required. Flink can insert data into multiple
+tables by scanning the input data just once.
+
+### Examples
+
+```sql
+--- insert directory with specific format
+INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+-- insert directory with specific row format
+INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
+ ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  COLLECTION ITEMS TERMINATED BY '#'
+  MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
+  
+-- multiple insert
+FROM (SELECT month, value from t1)
+    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
new file mode 100644
index 00000000000..60e9c2c32f4
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -0,0 +1,213 @@
+---
+title: "INSERT Statements"
+weight: 3
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# INSERT Statements
+
+## INSERT TABLE
+
+### Description
+
+The `INSERT TABLE` statement is used to insert rows into a table or to overwrite the existing data in the table. The rows to be inserted
+can be specified by value expressions or by the result of a query.
+
+### Syntax
+
+```sql
+-- Standard syntax
+INSERT [OVERWRITE] TABLE tablename1
+ [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+   
+INSERT INTO TABLE tablename1
+ [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+   
+-- Hive extension (multiple inserts):
+FROM from_statement
+INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+[, ... ];
+
+FROM from_statement
+INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+[, ... ];
+
+-- Hive extension (dynamic partition inserts):
+INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+  
+INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+```
+
+### Parameters
+
+- `OVERWRITE`
+
+  If `OVERWRITE` is specified, any existing data in the table or partition will be overwritten.
+
+- `PARTITION ( ... )`
+
+  An option to specify the table's specific partition(s) into which the data is inserted.
+  If the `PARTITION` clause is specified, the table should be a partitioned table.
+
+- `VALUES ( value [, ..] ) [, ( ... ) ]`
+
+  Specifies the values to be inserted explicitly. A comma must be used to separate the values in the clause.
+  More than one set of values can be specified to insert multiple rows.
+
+- select_statement
+
+  A statement for query.
+  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+
+### Synopsis
+
+#### Multiple Inserts
+
+In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans required. Flink can insert data into multiple
+tables by scanning the input data just once.
+
+#### Dynamic Partition Inserts
+
+In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+If all the partition columns' values are given, we call this a static partition; otherwise it is a dynamic partition.
+
+Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
+
+The dynamic partition columns must be specified last among the columns in the `SELECT` statement and in the same order in which they appear in the `PARTITION()` clause.
+
+{{< hint warning >}}
+**Note:**
+
+In Hive, by default, the user must specify at least one static partition in case the user accidentally overwrites all partitions, and the user can
+set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
+
+But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
+{{< /hint >}}
+
+### Examples
+
+```sql
+-- insert into table using values
+INSERT INTO t1 VALUES ('k1', 'v1'), ('k2', 'v2');
+
+-- insert overwrite
+INSERT OVERWRITE t1 VALUES ('k1', 'v1'), ('k2', 'v2');
+
+-- insert into table using select statement
+INSERT INTO TABLE t1 SELECT * FROM t2;
+
+-- insert into partition
+--- static partition
+INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
+
+--- dynamic partition 
+INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
+INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
+
+-- multi-insert statements
+FROM (SELECT month, value from t1)
+    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+    INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+```
+
+## INSERT OVERWRITE DIRECTORY
+
+### Description
+
+Query results can be inserted into filesystem directories by using a slight variation of the syntax above:
+```sql
+-- Standard syntax:
+INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
+  [ROW FORMAT row_format] [STORED AS file_format] 
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+
+-- Hive extension (multiple inserts):
+FROM from_statement
+INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
+[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Parameters
+
+- directory_path
+
+  The path of the directory to be written to; it can be a full URI. If the scheme or authority is not specified,
+  Flink will use the scheme and authority from the Hadoop configuration variable `fs.default.name` that specifies the NameNode URI.
+
+- `LOCAL`
+
+  The `LOCAL` keyword is optional. If `LOCAL` keyword is used, Flink will write data to the directory on the local file system.
+
+- `VALUES ( value [, ..] ) [, ( ... ) ]`
+
+  Specifies the values to be inserted explicitly. A comma must be used to separate the values in the clause.
+  More than one set of values can be specified to insert multiple rows.
+
+- select_statement
+
+  A statement for query.
+  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+
+- `STORED AS file_format`
+
+  Specifies the file format to use for the insert. The data will be stored in the specified file format.
+  The valid values are `TEXTFILE`, `ORC`, `PARQUET`, `AVRO`, `RCFILE`, `SEQUENCEFILE`, `JSONFILE`.
+  For more details, please refer to Hive's doc [Storage Formats](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormat,StorageFormat,andSerDe).
+
+
+- row_format
+
+  Specifies the row format for this insert. The data will be serialized to files with the specified properties.
+  For more details, please refer to Hive's doc [RowFormat](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormat,StorageFormat,andSerDe).
+
+### Synopsis
+
+#### Multiple Inserts
+
+In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans required. Flink can insert data into multiple
+tables by scanning the input data just once.
+
+### Examples
+
+```sql
+--- insert directory with specific format
+INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+-- insert directory with specific row format
+INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
+ ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  COLLECTION ITEMS TERMINATED BY '#'
+  MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
+  
+-- multiple insert
+FROM (SELECT month, value from t1)
+    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```


[flink] 06/25: [FLINK-29025][docs] add set operation page for Hive dialect

Posted by ja...@apache.org.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d6763dea582e5961ccc3dd21f1113b32b7534170
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:12:58 2022 +0800

    [FLINK-29025][docs] add set operation page for Hive dialect
---
 .../hiveDialect/Queries/set-op.md                  | 95 ++++++++++++++++++++++
 .../hiveDialect/Queries/set-op.md                  | 95 ++++++++++++++++++++++
 2 files changed, 190 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
new file mode 100644
index 00000000000..1178844d2c3
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -0,0 +1,95 @@
+---
+title: "Set Operations"
+weight: 5
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Set Operations
+
+Set operations are used to combine two `SELECT` statements into a single one.
+Hive dialect supports the following operations:
+- UNION
+- INTERSECT
+- EXCEPT/MINUS
+
+## UNION
+
+### Description
+
+`UNION`/`UNION DISTINCT`/`UNION ALL` returns the rows that are found in either side.
+
+`UNION` and `UNION DISTINCT` only return distinct rows, while `UNION ALL` does not remove duplicates.
+
+### Syntax
+
+```sql
+select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+```sql
+SELECT x, y FROM t1 UNION DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 UNION SELECT x, y FROM t2;
+SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
+```
+
+## INTERSECT
+
+### Description
+
+`INTERSECT`/`INTERSECT DISTINCT`/`INTERSECT ALL` returns the rows that are found in both sides.
+
+`INTERSECT`/`INTERSECT DISTINCT` only return distinct rows, while `INTERSECT ALL` does not remove duplicates.
+
+### Syntax
+
+```sql
+select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+```sql
+SELECT x, y FROM t1 INTERSECT DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 INTERSECT SELECT x, y FROM t2;
+SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
+```
+
+## EXCEPT/MINUS
+
+### Description
+
+`EXCEPT`/`EXCEPT DISTINCT`/`EXCEPT ALL` returns the rows that are found in the left side but not in the right side.
+
+`EXCEPT`/`EXCEPT DISTINCT` only return distinct rows, while `EXCEPT ALL` does not remove duplicates.
+
+`MINUS` is a synonym for `EXCEPT`.
+
+### Syntax
+
+```sql
+select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t1 EXCEPT DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 EXCEPT SELECT x, y FROM t2;
+SELECT x, y FROM t1 EXCEPT ALL SELECT x, y FROM t2;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
new file mode 100644
index 00000000000..1178844d2c3
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -0,0 +1,95 @@
+---
+title: "Set Operations"
+weight: 5
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Set Operations
+
+Set operations are used to combine two `SELECT` statements into a single one.
+Hive dialect supports the following operations:
+- UNION
+- INTERSECT
+- EXCEPT/MINUS
+
+## UNION
+
+### Description
+
+`UNION`/`UNION DISTINCT`/`UNION ALL` returns the rows that are found in either side.
+
+`UNION` and `UNION DISTINCT` only return distinct rows, while `UNION ALL` does not remove duplicates.
+
+### Syntax
+
+```sql
+select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+```sql
+SELECT x, y FROM t1 UNION DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 UNION SELECT x, y FROM t2;
+SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
+```
+
+## INTERSECT
+
+### Description
+
+`INTERSECT`/`INTERSECT DISTINCT`/`INTERSECT ALL` returns the rows that are found in both sides.
+
+`INTERSECT`/`INTERSECT DISTINCT` only return distinct rows, while `INTERSECT ALL` does not remove duplicates.
+
+### Syntax
+
+```sql
+select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+```sql
+SELECT x, y FROM t1 INTERSECT DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 INTERSECT SELECT x, y FROM t2;
+SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
+```
+
+## EXCEPT/MINUS
+
+### Description
+
+`EXCEPT`/`EXCEPT DISTINCT`/`EXCEPT ALL` returns the rows that are found in the left side but not in the right side.
+
+`EXCEPT`/`EXCEPT DISTINCT` only return distinct rows, while `EXCEPT ALL` does not remove duplicates.
+
+`MINUS` is a synonym for `EXCEPT`.
+
+### Syntax
+
+```sql
+select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t1 EXCEPT DISTINCT SELECT x, y FROM t2;
+SELECT x, y FROM t1 EXCEPT SELECT x, y FROM t2;
+SELECT x, y FROM t1 EXCEPT ALL SELECT x, y FROM t2;
+```


[flink] 19/25: [FLINK-29025][docs] add set page for Hive dialect

Posted by ja...@apache.org.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 22a56c3fa93e1ec7f15298d53fcbd5a0cf9810d0
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:42:36 2022 +0800

    [FLINK-29025][docs] add set page for Hive dialect
---
 .../dev/table/hiveCompatibility/hiveDialect/set.md | 72 ++++++++++++++++++++++
 .../dev/table/hiveCompatibility/hiveDialect/set.md | 72 ++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
new file mode 100644
index 00000000000..17008d7464b
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -0,0 +1,72 @@
+---
+title: "SET Statements"
+weight: 8
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# SET Statements
+
+## Description
+
+The `SET` statement sets a property, which provides a way to set variables for a session and
+configuration properties, including system variables and Hive configuration.
+However, environment variables can't be set via the `SET` statement. The behavior of `SET` with Hive dialect is compatible with Hive's.
+
+## Syntax
+
+```sql
+SET key=value;
+```
+
+## Examples
+
+```sql
+-- set Flink's configuration
+SET 'table.sql-dialect'='default';
+
+-- set Hive's configuration
+SET hiveconf:k1=v1;
+
+-- set system property
+SET system:k2=v2;
+
+-- set variable for current session
+SET hivevar:k3=v3;
+
+-- get value for configuration
+SET hiveconf:k1;
+SET system:k2;
+SET hivevar:k3;
+
+-- print options
+SET -v;
+SET; 
+```
+
+{{< hint warning >}}
+**Note:**
+
+In Hive, the `SET` command `SET xx=yy`, whose key has no prefix, is equivalent to `SET hiveconf:xx=yy`, which means the key will be set in Hive's conf.
+
+But in Flink, with Hive dialect, such a `SET` command `set xx=yy` will set the key `xx` with the value `yy` in Flink's configuration.
+
+So, if you want to set a configuration key in Hive's conf, please add the prefix `hiveconf:`, using the `SET` command like `SET hiveconf:xx=yy`.
+{{< /hint  >}}
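+
+For example, the contrast above can be sketched as follows (`xx` and `yy` stand for a hypothetical key and value, not a real configuration option):
+
+```sql
+-- With Hive dialect, an unprefixed key is set in Flink's configuration:
+SET xx=yy;
+
+-- With the hiveconf: prefix, the key is set in Hive's conf instead:
+SET hiveconf:xx=yy;
+```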
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
new file mode 100644
index 00000000000..17008d7464b
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -0,0 +1,72 @@
+---
+title: "SET Statements"
+weight: 8
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# SET Statements
+
+## Description
+
+The `SET` statement sets a property, which provides a way to set variables for a session and
+configuration properties, including system variables and Hive configuration.
+However, environment variables can't be set via the `SET` statement. The behavior of `SET` with Hive dialect is compatible with Hive's.
+
+## Syntax
+
+```sql
+SET key=value;
+```
+
+## Examples
+
+```sql
+-- set Flink's configuration
+SET 'table.sql-dialect'='default';
+
+-- set Hive's configuration
+SET hiveconf:k1=v1;
+
+-- set system property
+SET system:k2=v2;
+
+-- set variable for current session
+SET hivevar:k3=v3;
+
+-- get value for configuration
+SET hiveconf:k1;
+SET system:k2;
+SET hivevar:k3;
+
+-- print options
+SET -v;
+SET;
+```
+
+{{< hint warning >}}
+**Note:**
+
+In Hive, a `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means the key is set in Hive's Conf.
+
+But in Flink, with the Hive dialect, such a `SET` command `SET xx=yy` will set the key `xx` with value `yy` in Flink's configuration.
+
+So, if you want to set a configuration key in Hive's Conf, please add the prefix `hiveconf:` and use a `SET` command like `SET hiveconf:xx=yy`.
+{{< /hint  >}}
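+
+To make the contrast concrete, the following sketch shows both forms side by side (the key `k1` is purely illustrative):
+
+```sql
+-- with Hive dialect, an un-prefixed key is set in Flink's configuration
+SET k1=v1;
+
+-- with the 'hiveconf:' prefix, the key is set in Hive's Conf
+SET hiveconf:k1=v1;
+```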


[flink] 25/25: [FLINK-29025][docs][hive] Remove "alias" front matter of new added Hive compatibility pages

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 100979d89613ca30b07c905fc8b3c444aa43e3c0
Author: Jark Wu <ja...@apache.org>
AuthorDate: Mon Sep 19 22:46:50 2022 +0800

    [FLINK-29025][docs][hive] Remove "alias" front matter of new added Hive compatibility pages
    
    Alias front matter is used to set up a redirect from a removed page to this one. The Hive compatibility pages are new pages, so there is no need to add "alias" front matter.
---
 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md   | 2 --
 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md | 2 --
 .../content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md | 2 --
 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md  | 2 --
 .../content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md | 2 --
 .../docs/dev/table/hive-compatibility/hive-dialect/load-data.md         | 2 --
 .../docs/dev/table/hive-compatibility/hive-dialect/overview.md          | 2 --
 .../docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md  | 2 --
 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md   | 2 --
 docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md  | 2 --
 docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md        | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md      | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md    | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md   | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md     | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md   | 2 --
 .../content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md | 2 --
 .../docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md  | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md      | 2 --
 docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md     | 2 --
 docs/content/docs/dev/table/hive-compatibility/hiveserver2.md           | 2 --
 22 files changed, 44 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md
index fdbc0bb0dff..ce3f239152c 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/add.md
@@ -2,8 +2,6 @@
 title: "ADD Statements"
 weight: 7
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
index 5738036fbac..8eeaf099203 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/alter.md
@@ -2,8 +2,6 @@
 title: "ALTER Statements"
 weight: 3
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
index 8d4f9bd1c04..101604e39a4 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/create.md
@@ -2,8 +2,6 @@
 title: "CREATE Statements"
 weight: 2
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
index 36cdacecfc8..d710d0c8715 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
@@ -2,8 +2,6 @@
 title: "DROP Statements"
 weight: 2
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index 4ad602770d0..6aebad09979 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -2,8 +2,6 @@
 title: "INSERT Statements"
 weight: 3
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
index 534b0132408..3abb13549b1 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
@@ -2,8 +2,6 @@
 title: "Load Data Statements"
 weight: 4
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
index bb710333fb6..5a3176bbfa4 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
@@ -2,8 +2,6 @@
 title: "概览"
 weight: 1
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
index bae637cb81e..2f4e6b5a1ac 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
@@ -2,8 +2,6 @@
 title: "Overview"
 weight: 1
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
index 88264a3f41a..b3c8a296731 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/set.md
@@ -2,8 +2,6 @@
 title: "SET Statements"
 weight: 8
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
index fb3d5acffbc..3832f65821e 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/show.md
@@ -2,8 +2,6 @@
 title: "SHOW Statements"
 weight: 5
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md b/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
index b4c8e155135..0fdf6025495 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
@@ -2,8 +2,6 @@
 title: HiveServer2 Endpoint
 weight: 11
 type: docs
-aliases:
-- /dev/table/hive-compatibility/hiveserver2.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md
index 20ffa413d7a..35d0434e6e3 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/add.md
@@ -2,8 +2,6 @@
 title: "ADD Statements"
 weight: 7
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
index 5738036fbac..8eeaf099203 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/alter.md
@@ -2,8 +2,6 @@
 title: "ALTER Statements"
 weight: 3
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
index 70ee04df641..4a06257518d 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/create.md
@@ -2,8 +2,6 @@
 title: "CREATE Statements"
 weight: 2
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
index 36cdacecfc8..d710d0c8715 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
@@ -2,8 +2,6 @@
 title: "DROP Statements"
 weight: 2
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index 4ad602770d0..6aebad09979 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -2,8 +2,6 @@
 title: "INSERT Statements"
 weight: 3
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
index 534b0132408..3abb13549b1 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/load-data.md
@@ -2,8 +2,6 @@
 title: "Load Data Statements"
 weight: 4
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
index 2a0ef94442f..89e08443cb7 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
@@ -2,8 +2,6 @@
 title: "Overview"
 weight: 1
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
index da908a1ee4f..4d809fa3bf0 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
@@ -2,8 +2,6 @@
 title: "Overview"
 weight: 1
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
index c5eb59e17a1..7c4d8254cd8 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/set.md
@@ -2,8 +2,6 @@
 title: "SET Statements"
 weight: 8
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
index fb3d5acffbc..3832f65821e 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/show.md
@@ -2,8 +2,6 @@
 title: "SHOW Statements"
 weight: 5
 type: docs
-aliases:
-- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
index dd546b08936..4320cdc965e 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
@@ -2,8 +2,6 @@
 title: HiveServer2 Endpoint
 weight: 1
 type: docs
-aliases:
-- /dev/table/hive-compatibility/hiveserver2.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one


[flink] 12/25: [FLINK-29025][docs] add table sample page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f76084f6d800e15df5ada2bf703a1d66c34b69d1
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:22:09 2022 +0800

    [FLINK-29025][docs] add table sample page for Hive dialect
---
 .../hiveDialect/Queries/table-sample.md            | 49 ++++++++++++++++++++++
 .../hiveDialect/Queries/table-sample.md            | 49 ++++++++++++++++++++++
 2 files changed, 98 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
new file mode 100644
index 00000000000..35b0caa9b92
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
@@ -0,0 +1,49 @@
+---
+title: "Table Sample"
+weight: 11
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Table Sample
+
+## Description
+
+The `TABLESAMPLE` statement is used to sample rows from the table.
+
+### Syntax
+
+```sql
+TABLESAMPLE ( num_rows ROWS )
+```
+{{< hint warning >}}
+**Note:**
+Currently, only sampling a specific number of rows is supported.
+{{< /hint >}}
+
+### Parameters
+
+- num_rows `ROWS`
+
+  `num_rows` is a positive integer constant specifying how many rows to sample.
+
+### Examples
+
+```sql
+SELECT * FROM src TABLESAMPLE (5 ROWS)
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
new file mode 100644
index 00000000000..35b0caa9b92
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample.md
@@ -0,0 +1,49 @@
+---
+title: "Table Sample"
+weight: 11
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Table Sample
+
+## Description
+
+The `TABLESAMPLE` statement is used to sample rows from the table.
+
+### Syntax
+
+```sql
+TABLESAMPLE ( num_rows ROWS )
+```
+{{< hint warning >}}
+**Note:**
+Currently, only sampling a specific number of rows is supported.
+{{< /hint >}}
+
+### Parameters
+
+- num_rows `ROWS`
+
+  `num_rows` is a positive integer constant specifying how many rows to sample.
+
+### Examples
+
+```sql
+SELECT * FROM src TABLESAMPLE (5 ROWS)
+```


[flink] 15/25: [FLINK-29025][docs] add create page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d1c88ed77755b7257fcc809047f36068c0313bba
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:36:32 2022 +0800

    [FLINK-29025][docs] add create page for Hive dialect
---
 .../table/hiveCompatibility/hiveDialect/create.md  | 247 +++++++++++++++++++++
 .../table/hiveCompatibility/hiveDialect/create.md  | 247 +++++++++++++++++++++
 2 files changed, 494 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
new file mode 100644
index 00000000000..ba3dfb2ba86
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -0,0 +1,247 @@
+---
+title: "CREATE Statements"
+weight: 2
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# CREATE Statements
+
+With the Hive dialect, the following CREATE statements are supported for now:
+
+- CREATE DATABASE
+- CREATE TABLE
+- CREATE VIEW
+- CREATE MACRO
+- CREATE FUNCTION
+
+## CREATE DATABASE
+
+### Description
+
+The `CREATE DATABASE` statement is used to create a database with the specified name.
+
+### Syntax
+
+```sql
+CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
+  [COMMENT database_comment]
+  [LOCATION hdfs_path]
+  [WITH DBPROPERTIES (property_name=property_value, ...)];
+```
+
+### Examples
+
+```sql
+CREATE DATABASE db1;
+CREATE DATABASE IF NOT EXISTS db1 COMMENT 'db1' LOCATION '/user/hive/warehouse/db1'
+    WITH DBPROPERTIES ('name'='example-db');
+```
+
+
+## CREATE TABLE
+
+### Description
+
+The `CREATE TABLE` statement is used to define a table in an existing database.
+
+### Syntax
+
+```sql
+CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
+  [(col_name data_type [column_constraint] [COMMENT col_comment], ... [table_constraint])]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
+  [
+    [ROW FORMAT row_format]
+    [STORED AS file_format]
+  ]
+  [LOCATION fs_path]
+  [TBLPROPERTIES (property_name=property_value, ...)]
+  [AS select_statement];
+  
+data_type
+  : primitive_type
+  | array_type
+  | map_type
+  | struct_type
+primitive_type
+  : TINYINT
+  | SMALLINT
+  | INT
+  | BIGINT
+  | BOOLEAN
+  | FLOAT
+  | DOUBLE
+  | DOUBLE PRECISION
+  | STRING
+  | BINARY     
+  | TIMESTAMP
+  | DECIMAL
+  | DECIMAL(precision, scale)
+  | DATE
+  | VARCHAR
+  | CHAR 
+array_type
+  : ARRAY < data_type >
+map_type
+  : MAP < primitive_type, data_type >
+struct_type
+  : STRUCT < col_name : data_type [COMMENT col_comment], ...>
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+file_format:
+  : SEQUENCEFILE
+  | TEXTFILE
+  | RCFILE
+  | ORC
+  | PARQUET
+  | AVRO
+  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
+column_constraint:
+  : NOT NULL
+table_constraint:
+  : [CONSTRAINT constraint_name] PRIMARY KEY (col_name, ...)
+```
+
+{{< hint warning >}}
+**NOTE:**
+
+- Creating a table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
+- Creating a temporary table is not supported yet.
+{{< /hint >}}
+
+### Examples
+
+```sql
+-- create non-partitioned table
+CREATE TABLE t1(key string, value string);
+
+-- create partitioned table
+CREATE TABLE pt1(key string, value string) PARTITIONED BY (year int, month int);
+
+-- create table with specific format
+CREATE TABLE t1(key string, value string) STORED AS ORC;
+
+-- create table with specific row format
+CREATE TABLE t1(m MAP<BIGINT, STRING>)
+  ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY ';'
+  MAP KEYS TERMINATED BY ':';
+
+-- create table as select
+CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
+```
+
+## CREATE VIEW
+
+### Description
+
+`CREATE VIEW` creates a view with the given name.
+If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
+(If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
+When renaming columns, column comments can also optionally be supplied. (Comments are not automatically inherited from underlying columns.)
+
+Note that a view is a purely logical object with no associated storage. When a query references a view, the view's definition is evaluated in order to produce a set of rows for further processing by the query.
+
+### Syntax
+
+```sql
+CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name, ...) ]
+  [COMMENT view_comment]
+  [TBLPROPERTIES (property_name = property_value, ...)]
+  AS SELECT ...;
+```
+
+### Examples
+
+```sql
+CREATE VIEW IF NOT EXISTS v1
+    (key COMMENT 'key') 
+    COMMENT 'View for key=1'
+    AS SELECT key FROM src
+        WHERE key = '1';
+```
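+
+As a sketch of the column-naming behavior described above, an explicit column list can be supplied to avoid generated names for un-aliased expressions (the table `src` is assumed to exist):
+
+```sql
+-- without the column list, the column for key + 1 would get a generated name
+CREATE VIEW v2 (key, key_plus_one) AS SELECT key, key + 1 FROM src;
+```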
+
+## CREATE MACRO
+
+### Description
+
+The `CREATE TEMPORARY MACRO` statement creates a macro using the given optional list of columns as inputs to the expression.
+A macro exists for the duration of the current session.
+
+### Syntax
+
+```sql
+CREATE TEMPORARY MACRO macro_name([col_name col_type, ...]) expression;
+```
+
+### Examples
+
+```sql
+CREATE TEMPORARY MACRO fixed_number() 42;
+CREATE TEMPORARY MACRO string_len_plus_two(x string) length(x) + 2;
+CREATE TEMPORARY MACRO simple_add (x int, y int) x + y;
+```
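+
+Once defined, a macro can be invoked like an ordinary built-in function. A minimal sketch of usage, assuming the table `src` from the earlier examples exists:
+
+```sql
+-- expands to length(key) + 2 at query time
+SELECT string_len_plus_two(key) FROM src;
+```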
+
+## CREATE FUNCTION
+
+### Description
+
+The `CREATE FUNCTION` statement creates a function that is implemented by `class_name`.
+
+### Syntax
+
+#### Create Temporary Function
+
+```sql
+CREATE TEMPORARY FUNCTION function_name AS class_name [USING JAR 'file_uri'];
+```
+
+The function exists for the duration of the current session.
+
+#### Create Permanent Function
+
+```sql
+CREATE FUNCTION [db_name.]function_name AS class_name
+  [USING JAR 'file_uri'];
+```
+The function is registered in the metastore and will exist in all sessions unless it is dropped.
+
+### Parameter
+- `[USING JAR 'file_uri']`
+
+  Users can use this clause to add the JAR that contains the implementation of the function along with its dependencies while creating the function.
+  The `file_uri` can point to a local file or a file in a distributed file system.
+
+
+### Examples
+
+```sql
+-- create a function assuming the class `SimpleUdf` already exists in the class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf';
+
+-- create a function using a jar, assuming the class `SimpleUdf` doesn't exist in the class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
new file mode 100644
index 00000000000..ba3dfb2ba86
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -0,0 +1,247 @@
+---
+title: "CREATE Statements"
+weight: 2
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/create.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# CREATE Statements
+
+With the Hive dialect, the following CREATE statements are supported for now:
+
+- CREATE DATABASE
+- CREATE TABLE
+- CREATE VIEW
+- CREATE MACRO
+- CREATE FUNCTION
+
+## CREATE DATABASE
+
+### Description
+
+The `CREATE DATABASE` statement is used to create a database with the specified name.
+
+### Syntax
+
+```sql
+CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
+  [COMMENT database_comment]
+  [LOCATION hdfs_path]
+  [WITH DBPROPERTIES (property_name=property_value, ...)];
+```
+
+### Examples
+
+```sql
+CREATE DATABASE db1;
+CREATE DATABASE IF NOT EXISTS db1 COMMENT 'db1' LOCATION '/user/hive/warehouse/db1'
+    WITH DBPROPERTIES ('name'='example-db');
+```
+
+
+## CREATE TABLE
+
+### Description
+
+The `CREATE TABLE` statement is used to define a table in an existing database.
+
+### Syntax
+
+```sql
+CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
+  [(col_name data_type [column_constraint] [COMMENT col_comment], ... [table_constraint])]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
+  [
+    [ROW FORMAT row_format]
+    [STORED AS file_format]
+  ]
+  [LOCATION fs_path]
+  [TBLPROPERTIES (property_name=property_value, ...)]
+  [AS select_statement];
+  
+data_type
+  : primitive_type
+  | array_type
+  | map_type
+  | struct_type
+primitive_type
+  : TINYINT
+  | SMALLINT
+  | INT
+  | BIGINT
+  | BOOLEAN
+  | FLOAT
+  | DOUBLE
+  | DOUBLE PRECISION
+  | STRING
+  | BINARY     
+  | TIMESTAMP
+  | DECIMAL
+  | DECIMAL(precision, scale)
+  | DATE
+  | VARCHAR
+  | CHAR 
+array_type
+  : ARRAY < data_type >
+map_type
+  : MAP < primitive_type, data_type >
+struct_type
+  : STRUCT < col_name : data_type [COMMENT col_comment], ...>
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+file_format:
+  : SEQUENCEFILE
+  | TEXTFILE
+  | RCFILE
+  | ORC
+  | PARQUET
+  | AVRO
+  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
+column_constraint:
+  : NOT NULL
+table_constraint:
+  : [CONSTRAINT constraint_name] PRIMARY KEY (col_name, ...)
+```
+
+{{< hint warning >}}
+**NOTE:**
+
+- Creating a table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
+- Creating a temporary table is not supported yet.
+{{< /hint >}}
+
+### Examples
+
+```sql
+-- create non-partitioned table
+CREATE TABLE t1(key string, value string);
+
+-- create partitioned table
+CREATE TABLE pt1(key string, value string) PARTITIONED BY (year int, month int);
+
+-- create table with specific format
+CREATE TABLE t1(key string, value string) STORED AS ORC;
+
+-- create table with specific row format
+CREATE TABLE t1(m MAP<BIGINT, STRING>)
+  ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY ';'
+  MAP KEYS TERMINATED BY ':';
+
+-- create table as select
+CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
+```
+
+## CREATE VIEW
+
+### Description
+
+`CREATE VIEW` creates a view with the given name.
+If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
+(If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
+When renaming columns, column comments can also optionally be supplied. (Comments are not automatically inherited from underlying columns.)
+
+Note that a view is a purely logical object with no associated storage. When a query references a view, the view's definition is evaluated in order to produce a set of rows for further processing by the query.
+
+### Syntax
+
+```sql
+CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name, ...) ]
+  [COMMENT view_comment]
+  [TBLPROPERTIES (property_name = property_value, ...)]
+  AS SELECT ...;
+```
+
+### Examples
+
+```sql
+CREATE VIEW IF NOT EXISTS v1
+    (key COMMENT 'key') 
+    COMMENT 'View for key=1'
+    AS SELECT key FROM src
+        WHERE key = '1';
+```
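+
+As noted above, un-aliased expressions in the defining SELECT produce auto-generated column names. The following sketch (the table and column names are illustrative) shows both forms:
+
+```sql
+-- the view column gets a generated name such as _C0
+-- because key + 1 carries no alias
+CREATE VIEW v2 AS SELECT key + 1 FROM src;
+
+-- equivalent view with an explicit column name
+CREATE VIEW v3 (key_plus_one) AS SELECT key + 1 FROM src;
+```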
+
+## CREATE MACRO
+
+### Description
+
+The `CREATE TEMPORARY MACRO` statement creates a macro using the given optional list of columns as inputs to the expression.
+A macro exists for the duration of the current session.
+
+### Syntax
+
+```sql
+CREATE TEMPORARY MACRO macro_name([col_name col_type, ...]) expression;
+```
+
+### Examples
+
+```sql
+CREATE TEMPORARY MACRO fixed_number() 42;
+CREATE TEMPORARY MACRO string_len_plus_two(x string) length(x) + 2;
+CREATE TEMPORARY MACRO simple_add (x int, y int) x + y;
+```
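+
+Once defined, a macro can be invoked like a built-in function within the same session. Assuming the macros above have been created (the table and column names below are illustrative):
+
+```sql
+-- expands to length(name) + 2 at query time
+SELECT string_len_plus_two(name) FROM t1;
+
+-- macros can be nested inside expressions
+SELECT simple_add(x, fixed_number()) FROM t1;
+```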
+
+## CREATE FUNCTION
+
+### Description
+
+The `CREATE FUNCTION` statement creates a function that is implemented by `class_name`.
+
+### Syntax
+
+#### Create Temporary Function
+
+```sql
+CREATE TEMPORARY FUNCTION function_name AS class_name [USING JAR 'file_uri'];
+```
+
+The function exists for the duration of the current session.
+
+#### Create Permanent Function
+
+```sql
+CREATE FUNCTION [db_name.]function_name AS class_name
+  [USING JAR 'file_uri'];
+```
+The function is registered to the metastore and will exist in all sessions unless it is dropped.
+
+### Parameters
+- `[USING JAR 'file_uri']`
+
+  Users can use this clause to add the JAR that contains the implementation of the function along with its dependencies while creating the function.
+  The `file_uri` can refer to a file on the local file system or on a distributed file system.
+
+
+### Examples
+
+```sql
+-- create a function, assuming the class `SimpleUdf` already exists in the class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf';
+
+-- create a function using a jar, assuming the class `SimpleUdf` doesn't exist in the class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+```
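+
+After registration, the function can be invoked like any other UDF. A permanent function created in another database can be referenced by its qualified name (the table and database names below are illustrative):
+
+```sql
+SELECT simple_udf(col1) FROM t1;
+
+-- invoke a permanent function registered in database db1
+SELECT db1.simple_udf(col1) FROM t1;
+```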


[flink] 18/25: [FLINK-29025][docs] add load data page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 37667f9f32c393ef5f07e9e3436b6df44c56c611
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:41:20 2022 +0800

    [FLINK-29025][docs] add load data page for Hive dialect
---
 .../hiveCompatibility/hiveDialect/load-data.md     | 86 ++++++++++++++++++++++
 .../hiveCompatibility/hiveDialect/load-data.md     | 86 ++++++++++++++++++++++
 2 files changed, 172 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
new file mode 100644
index 00000000000..246759bbe1b
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -0,0 +1,86 @@
+---
+title: "Load Data Statements"
+weight: 4
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/load.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Load Data Statements
+
+## Description
+
+The `LOAD DATA` statement is used to load data into a Hive table from a user-specified directory or file.
+The load operation is currently a pure copy/move operation that moves data files into locations corresponding to Hive tables.
+
+## Syntax
+
+```sql
+LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)];
+```
+
+## Parameters
+
+- filepath
+
+  The `filepath` can be:
+    - a relative path, such as `warehouse/data1`
+    - an absolute path, such as `/user/hive/warehouse/data1`
+    - a full URI with a scheme and (optionally) an authority, such as `hdfs://namenode:9000/user/hive/warehouse/data1`
+
+  The `filepath` can refer to a file (in which case, only the single file is loaded) or it can be a directory (in which case, all the files from
+  the directory are loaded).
+
+- `LOCAL`
+
+  If the `LOCAL` keyword is specified, then:
+    - `LOAD DATA` will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the user's current working directory.
+      The user can specify a full URI for local files as well, for example: `file:///user/hive/warehouse/data1`
+    - it will try to **copy** all the files addressed by `filepath` to the target file system.
+      The target file system is inferred by looking at the location attribute of the table. The copied data files will then be moved to the table.
+
+  If not, then:
+    - if the scheme or authority is not specified, it will use the scheme and authority from the Hadoop configuration variable `fs.default.name` that
+      specifies the NameNode URI.
+    - if the path is not absolute, it will be interpreted relative to `/user/<username>`
+    - it will try to **move** the files addressed by `filepath` into the table (or partition).
+
+- `OVERWRITE`
+
+  By default, the files addressed by `filepath` are appended to the table (or partition).
+  If `OVERWRITE` is specified, the original data will be replaced by the files.
+
+- `PARTITION ( ... )`
+
+  An option to specify loading data into specific partitions of the table. If the `PARTITION` clause is specified, the table should be a partitioned table.
+
+**NOTE:**
+
+When loading data into a partition, the partition specification must be a full partition specification.
+Partial partition specifications are not supported yet.
+
+## Examples
+
+```sql
+-- load data into table
+LOAD DATA LOCAL INPATH '/user/warehouse/hive/t1' OVERWRITE INTO TABLE t1;
+
+-- load data into partition
+LOAD DATA LOCAL INPATH '/user/warehouse/hive/t1/p1=1' INTO TABLE t1 PARTITION (p1=1);
+```
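+
+Without `LOCAL`, the path is resolved against the default file system and the files are moved rather than copied. A sketch (the HDFS URI is illustrative):
+
+```sql
+-- moves the files under the given HDFS directory into table t1
+LOAD DATA INPATH 'hdfs://namenode:9000/user/warehouse/hive/t1' INTO TABLE t1;
+```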
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
new file mode 100644
index 00000000000..246759bbe1b
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -0,0 +1,86 @@
+---
+title: "Load Data Statements"
+weight: 4
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/load.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Load Data Statements
+
+## Description
+
+The `LOAD DATA` statement is used to load data into a Hive table from a user-specified directory or file.
+The load operation is currently a pure copy/move operation that moves data files into locations corresponding to Hive tables.
+
+## Syntax
+
+```sql
+LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)];
+```
+
+## Parameters
+
+- filepath
+
+  The `filepath` can be:
+    - a relative path, such as `warehouse/data1`
+    - an absolute path, such as `/user/hive/warehouse/data1`
+    - a full URI with a scheme and (optionally) an authority, such as `hdfs://namenode:9000/user/hive/warehouse/data1`
+
+  The `filepath` can refer to a file (in which case, only the single file is loaded) or it can be a directory (in which case, all the files from
+  the directory are loaded).
+
+- `LOCAL`
+
+  If the `LOCAL` keyword is specified, then:
+    - `LOAD DATA` will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the user's current working directory.
+      The user can specify a full URI for local files as well, for example: `file:///user/hive/warehouse/data1`
+    - it will try to **copy** all the files addressed by `filepath` to the target file system.
+      The target file system is inferred by looking at the location attribute of the table. The copied data files will then be moved to the table.
+
+  If not, then:
+    - if the scheme or authority is not specified, it will use the scheme and authority from the Hadoop configuration variable `fs.default.name` that
+      specifies the NameNode URI.
+    - if the path is not absolute, it will be interpreted relative to `/user/<username>`
+    - it will try to **move** the files addressed by `filepath` into the table (or partition).
+
+- `OVERWRITE`
+
+  By default, the files addressed by `filepath` are appended to the table (or partition).
+  If `OVERWRITE` is specified, the original data will be replaced by the files.
+
+- `PARTITION ( ... )`
+
+  An option to specify loading data into specific partitions of the table. If the `PARTITION` clause is specified, the table should be a partitioned table.
+
+**NOTE:**
+
+When loading data into a partition, the partition specification must be a full partition specification.
+Partial partition specifications are not supported yet.
+
+## Examples
+
+```sql
+-- load data into table
+LOAD DATA LOCAL INPATH '/user/warehouse/hive/t1' OVERWRITE INTO TABLE t1;
+
+-- load data into partition
+LOAD DATA LOCAL INPATH '/user/warehouse/hive/t1/p1=1' INTO TABLE t1 PARTITION (p1=1);
+```
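+
+Without `LOCAL`, the path is resolved against the default file system and the files are moved rather than copied. A sketch (the HDFS URI is illustrative):
+
+```sql
+-- moves the files under the given HDFS directory into table t1
+LOAD DATA INPATH 'hdfs://namenode:9000/user/warehouse/hive/t1' INTO TABLE t1;
+```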


[flink] 03/25: [FLINK-29025][docs] add sort/cluster/distribute by page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 292095cca5efecf013be8690dd254a97af1d52e1
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:04:49 2022 +0800

    [FLINK-29025][docs] add sort/cluster/distribute by page for Hive dialect
---
 .../Queries/sort-cluster-distribute-by.md          | 94 ++++++++++++++++++++++
 .../Queries/sort-cluster-distribute-by.md          | 94 ++++++++++++++++++++++
 2 files changed, 188 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
new file mode 100644
index 00000000000..5f6954bb880
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -0,0 +1,94 @@
+---
+title: "Sort/Cluster/Distribute By"
+weight: 2
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Sort/Cluster/Distribute By Clause
+
+## Sort By
+
+### Description
+
+Unlike [ORDER BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}#order-by-clause) which guarantees a total order of output,
+`SORT BY` only guarantees that the result rows within each partition are in the user-specified order.
+So when there's more than one partition, `SORT BY` may return a result that is only partially ordered.
+
+### Syntax
+
+```sql
+colOrder: ( ASC | DESC )
+sortBy: SORT BY expression [ colOrder ] [ , ... ]
+query: SELECT expression [ , ... ] FROM src sortBy
+```
+
+### Parameters
+- colOrder
+
+  It's used to specify the order of the returned rows. The default order is `ASC`.
+
+### Examples
+
+```sql
+SELECT x, y FROM t SORT BY x;
+SELECT x, y FROM t SORT BY abs(y) DESC;
+```
+
+## Distribute By
+
+### Description
+
+The `DISTRIBUTE BY` clause is used to repartition the data.
+Rows with the same value for the specified expressions will be placed in the same partition.
+
+### Syntax
+
+```sql
+distributeBy: DISTRIBUTE BY expression [ , ... ]
+query: SELECT expression [ , ... ] FROM src distributeBy
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t DISTRIBUTE BY x;
+SELECT x, y FROM t DISTRIBUTE BY abs(y);
+```
+
+## Cluster By
+
+### Description
+
+`CLUSTER BY` is a short-cut for both `DISTRIBUTE BY` and `SORT BY`.
+It first repartitions the data based on the input expressions and then sorts the data within each partition.
+Note that this clause only guarantees the data is sorted within each partition.
+
+### Syntax
+
+```sql
+clusterBy: CLUSTER BY expression [ , ... ]
+query: SELECT expression [ , ... ] FROM src clusterBy
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t CLUSTER BY x;
+SELECT x, y FROM t CLUSTER BY abs(y);
+```
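+
+Since `CLUSTER BY x` is shorthand for `DISTRIBUTE BY x SORT BY x`, the first query above can equivalently be written as:
+
+```sql
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY x;
+```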
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
new file mode 100644
index 00000000000..5f6954bb880
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -0,0 +1,94 @@
+---
+title: "Sort/Cluster/Distribute By"
+weight: 2
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Sort/Cluster/Distribute By Clause
+
+## Sort By
+
+### Description
+
+Unlike [ORDER BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}#order-by-clause) which guarantees a total order of output,
+`SORT BY` only guarantees that the result rows within each partition are in the user-specified order.
+So when there's more than one partition, `SORT BY` may return a result that is only partially ordered.
+
+### Syntax
+
+```sql
+colOrder: ( ASC | DESC )
+sortBy: SORT BY expression [ colOrder ] [ , ... ]
+query: SELECT expression [ , ... ] FROM src sortBy
+```
+
+### Parameters
+- colOrder
+
+  It's used to specify the order of the returned rows. The default order is `ASC`.
+
+### Examples
+
+```sql
+SELECT x, y FROM t SORT BY x;
+SELECT x, y FROM t SORT BY abs(y) DESC;
+```
+
+## Distribute By
+
+### Description
+
+The `DISTRIBUTE BY` clause is used to repartition the data.
+Rows with the same value for the specified expressions will be placed in the same partition.
+
+### Syntax
+
+```sql
+distributeBy: DISTRIBUTE BY expression [ , ... ]
+query: SELECT expression [ , ... ] FROM src distributeBy
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t DISTRIBUTE BY x;
+SELECT x, y FROM t DISTRIBUTE BY abs(y);
+```
+
+## Cluster By
+
+### Description
+
+`CLUSTER BY` is a short-cut for both `DISTRIBUTE BY` and `SORT BY`.
+It first repartitions the data based on the input expressions and then sorts the data within each partition.
+Note that this clause only guarantees the data is sorted within each partition.
+
+### Syntax
+
+```sql
+clusterBy: CLUSTER BY expression [ , ... ]
+query: SELECT expression [ , ... ] FROM src clusterBy
+```
+
+### Examples
+
+```sql
+SELECT x, y FROM t CLUSTER BY x;
+SELECT x, y FROM t CLUSTER BY abs(y);
+```
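+
+Since `CLUSTER BY x` is shorthand for `DISTRIBUTE BY x SORT BY x`, the first query above can equivalently be written as:
+
+```sql
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY x;
+```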


[flink] 09/25: [FLINK-29025][docs] add sub query page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 192fe60f1ac5391b26dd8e7ad2bde170b1c1a506
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:16:54 2022 +0800

    [FLINK-29025][docs] add sub query page for Hive dialect
---
 .../hiveDialect/Queries/sub-queries.md             | 69 ++++++++++++++++++++++
 .../hiveDialect/Queries/sub-queries.md             | 69 ++++++++++++++++++++++
 2 files changed, 138 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
new file mode 100644
index 00000000000..592d130bf46
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -0,0 +1,69 @@
+---
+title: "Sub-Queries"
+weight: 8
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Sub-Queries
+
+## Sub-Queries in the FROM Clause
+
+### Description
+
+Hive dialect supports sub-queries in the `FROM` clause. The sub-query has to be given a name because every table in a `FROM` clause must have a name.
+Columns in the sub-query select list must have unique names.
+The columns in the sub-query select list are available in the outer query just like columns of a table.
+The sub-query can also be a query expression with `UNION`. Hive dialect supports arbitrary levels of sub-queries.
+
+### Syntax
+
+```sql
+select_statement FROM ( subquery_select_statement ) [ AS ] name
+```
+
+### Example
+
+```sql
+SELECT col
+FROM (
+  SELECT a+b AS col
+  FROM t1
+) t2
+```
+
+## Sub-Queries in the WHERE Clause
+
+### Description
+
+Hive dialect also supports some types of sub-queries in the `WHERE` clause.
+
+### Syntax
+
+```sql
+select_statement FROM table WHERE { colName { IN | NOT IN }
+                                  | NOT EXISTS | EXISTS } ( subquery_select_statement )
+```
+
+### Examples
+
+```sql
+SELECT * FROM t1 WHERE t1.x IN (SELECT y FROM t2);
+ 
+SELECT * FROM t1 WHERE EXISTS (SELECT y FROM t2 WHERE t1.x = t2.x);
+```
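+
+The negated forms listed in the syntax follow the same pattern:
+
+```sql
+SELECT * FROM t1 WHERE t1.x NOT IN (SELECT y FROM t2);
+
+SELECT * FROM t1 WHERE NOT EXISTS (SELECT y FROM t2 WHERE t1.x = t2.x);
+```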
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
new file mode 100644
index 00000000000..592d130bf46
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -0,0 +1,69 @@
+---
+title: "Sub-Queries"
+weight: 8
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Sub-Queries
+
+## Sub-Queries in the FROM Clause
+
+### Description
+
+Hive dialect supports sub-queries in the `FROM` clause. The sub-query has to be given a name because every table in a `FROM` clause must have a name.
+Columns in the sub-query select list must have unique names.
+The columns in the sub-query select list are available in the outer query just like columns of a table.
+The sub-query can also be a query expression with `UNION`. Hive dialect supports arbitrary levels of sub-queries.
+
+### Syntax
+
+```sql
+select_statement FROM ( subquery_select_statement ) [ AS ] name
+```
+
+### Example
+
+```sql
+SELECT col
+FROM (
+  SELECT a+b AS col
+  FROM t1
+) t2
+```
+
+## Sub-Queries in the WHERE Clause
+
+### Description
+
+Hive dialect also supports some types of sub-queries in the `WHERE` clause.
+
+### Syntax
+
+```sql
+select_statement FROM table WHERE { colName { IN | NOT IN }
+                                  | NOT EXISTS | EXISTS } ( subquery_select_statement )
+```
+
+### Examples
+
+```sql
+SELECT * FROM t1 WHERE t1.x IN (SELECT y FROM t2);
+ 
+SELECT * FROM t1 WHERE EXISTS (SELECT y FROM t2 WHERE t1.x = t2.x);
+```
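+
+The negated forms listed in the syntax follow the same pattern:
+
+```sql
+SELECT * FROM t1 WHERE t1.x NOT IN (SELECT y FROM t2);
+
+SELECT * FROM t1 WHERE NOT EXISTS (SELECT y FROM t2 WHERE t1.x = t2.x);
+```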


[flink] 21/25: [FLINK-29025][docs] Improve documentation of Hive compatibility pages

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b07e433d07f93c0e2447c052cce01994f0ed512e
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Wed Sep 7 19:38:01 2022 +0800

    [FLINK-29025][docs] Improve documentation of Hive compatibility pages
---
 .../hiveCompatibility/hiveDialect/Queries/cte.md   |   6 +-
 .../hiveDialect/Queries/group-by.md                |  18 ++--
 .../hiveDialect/Queries/overview.md                |  21 ++--
 .../hiveDialect/Queries/set-op.md                  |   8 +-
 .../Queries/sort-cluster-distribute-by.md          |   8 +-
 .../hiveDialect/Queries/sub-queries.md             |   2 +-
 .../hiveDialect/Queries/transform.md               |  34 ++++---
 .../dev/table/hiveCompatibility/hiveDialect/add.md |   6 +-
 .../table/hiveCompatibility/hiveDialect/alter.md   |   2 +-
 .../table/hiveCompatibility/hiveDialect/create.md  |  17 ++--
 .../table/hiveCompatibility/hiveDialect/drop.md    |   2 +-
 .../table/hiveCompatibility/hiveDialect/insert.md  | 113 +++++++++++----------
 .../hiveCompatibility/hiveDialect/load-data.md     |   4 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  41 +++++---
 .../dev/table/hiveCompatibility/hiveDialect/set.md |  29 +++---
 .../table/hiveCompatibility/hiveDialect/show.md    |   7 +-
 .../hiveCompatibility/hiveDialect/Queries/cte.md   |   6 +-
 .../hiveDialect/Queries/group-by.md                |  18 ++--
 .../hiveDialect/Queries/overview.md                |  19 ++--
 .../hiveDialect/Queries/set-op.md                  |   8 +-
 .../Queries/sort-cluster-distribute-by.md          |   8 +-
 .../hiveDialect/Queries/sub-queries.md             |   2 +-
 .../hiveDialect/Queries/transform.md               |  34 ++++---
 .../dev/table/hiveCompatibility/hiveDialect/add.md |  12 ++-
 .../table/hiveCompatibility/hiveDialect/alter.md   |   2 +-
 .../table/hiveCompatibility/hiveDialect/create.md  |  21 ++--
 .../table/hiveCompatibility/hiveDialect/drop.md    |   2 +-
 .../table/hiveCompatibility/hiveDialect/insert.md  | 113 +++++++++++----------
 .../hiveCompatibility/hiveDialect/load-data.md     |   4 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  45 +++++---
 .../dev/table/hiveCompatibility/hiveDialect/set.md |  27 ++---
 .../table/hiveCompatibility/hiveDialect/show.md    |   7 +-
 32 files changed, 355 insertions(+), 291 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
index e609e912c9d..96c4a73709d 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -30,14 +30,14 @@ or `INSERT` keyword. The CTE is defined only with the execution scope of a singl
 ## Syntax
 
 ```sql
-withClause: cteClause [, ...]
-cteClause: cte_name AS (select statment)
+withClause: WITH cteClause [ , ... ]
+cteClause: cte_name AS (select statement)
 ```
 
 
 {{< hint warning >}}
 **Note:**
-- The `WITH` clause is not supported within SubQuery block
+- The `WITH` clause is not supported within Sub-Query block
 - CTEs are supported in Views, `CTAS` and `INSERT` statement
 - [Recursive Queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
   {{< /hint >}}
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
index 719ce532dc1..514fbe9bee6 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -31,13 +31,19 @@ Hive dialect also supports enhanced aggregation features to do multiple aggregat
 ## Syntax
 
 ```sql
-groupByClause: groupByClause-1 | groupByClause-2
-groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+group_by_clause: 
+  group_by_clause_1 | group_by_clause_2
+
+group_by_clause_1: 
+  GROUP BY group_expression [ , ... ] [ WITH ROLLUP | WITH CUBE ] 
  
-groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
-grouping_set: { expression | ( [ expression [, ...] ] ) }
+group_by_clause_2: 
+  GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [ , ... ] ) } [ , ... ]
+
+grouping_set: 
+  { expression | ( [ expression [ , ... ] ] ) }
  
-groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+groupByQuery: SELECT expression [ , ... ] FROM src groupByClause?
 ```
 In `group_expression`, columns can be also specified by position number. But please remember:
 - For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
@@ -97,7 +103,7 @@ It represents the given list and all of its possible subsets - the power set.
 
 For example:
 ```sql
-GROUP BY a, b, c, WITH CUBE
+GROUP BY a, b, c WITH CUBE
 ```
 is equivalent to
 ```sql
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
index 90e6a062223..61d375f74c5 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,7 +35,7 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 - [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
 - [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
 - [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
 - [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
 - [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
 - [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
@@ -43,11 +43,11 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
-[WITH CommonTableExpression (, CommonTableExpression)*]
-SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+[WITH CommonTableExpression [ , ... ]]
+SELECT [ALL | DISTINCT] select_expr [ , ... ]
   FROM table_reference
   [WHERE where_condition]
   [GROUP BY col_list]
@@ -57,14 +57,15 @@ SELECT [ALL | DISTINCT] select_expr, select_expr, ...
   ]
  [LIMIT [offset,] rows]
 ```
-- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
 
 ### WHERE Clause
 
-The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFS](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
-in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+in the `WHERE` clause. Some types of [sub-queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
 
 ### GROUP BY Clause
 
@@ -74,7 +75,7 @@ Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect
 
 The `ORDER BY` clause is used to return the result rows in a sorted manner in the user specified order.
 Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
-a total order in the output.
+a global order in the output.
 
 {{< hint warning >}}
 **Note:**
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
index 1178844d2c3..cb01213c6a7 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -22,7 +22,7 @@ under the License.
 
 # Set Operations
 
-Set Operation is used to combing two select into single one.
+Set Operations are used to combine multiple `SELECT` statements into a single result set.
 Hive dialect supports the following operations:
 - UNION
 - INTERSECT
@@ -39,7 +39,7 @@ Hive dialect supports the following operations:
 ### Syntax
 
 ```sql
-select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { UNION [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -60,7 +60,7 @@ SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { INTERSECT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -83,7 +83,7 @@ SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { EXCEPT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
index 5f6954bb880..19649d225d2 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -33,9 +33,9 @@ So when there's more than one partition, `SORT BY` may return result that's part
 ### Syntax
 
 ```sql
-colOrder: ( ASC | DESC )
-sortBy: SORT BY BY expression [ , ... ]
 query: SELECT expression [ , ... ] FROM src sortBy
+sortBy: SORT BY expression colOrder [ , ... ]
+colOrder: ( ASC | DESC )
 ```
 
 ### Parameters
@@ -67,8 +67,12 @@ query: SELECT expression [ , ... ] FROM src distributeBy
 ### Examples
 
 ```sql
+-- only use DISTRIBUTE BY clause
 SELECT x, y FROM t DISTRIBUTE BY x;
 SELECT x, y FROM t DISTRIBUTE BY abs(y);
+
+-- use both DISTRIBUTE BY and SORT BY clauses
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY y DESC;
 ```
 
 ## Cluster By
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
index 592d130bf46..5a63f1b6a3b 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -34,7 +34,7 @@ The sub-query can also be a query expression with `UNION`. Hive dialect supports
 ### Syntax
 
 ```sql
-select_statement from ( subquery_select_statement ) [ AS ] name
+select_statement from ( select_statement ) [ AS ] name
 ```
 
 ### Example
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
index 1ba9a9c0cfe..b147ae7d8d9 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -29,6 +29,15 @@ The `TRANSFORM` clause allows user to transform inputs using user-specified comm
 ## Syntax
 
 ```sql
+query:
+   SELECT TRANSFORM ( expression [ , ... ] )
+   [ inRowFormat ]
+   [ inRecordWriter ]
+   USING command_or_script
+   [ AS colName [ colType ] [ , ... ] ]
+   [ outRowFormat ]
+   [ outRecordReader ]
+
 rowFormat
   : ROW FORMAT
     (DELIMITED [FIELDS TERMINATED BY char]
@@ -45,14 +54,6 @@ outRowFormat : rowFormat
 inRowFormat : rowFormat
 outRecordReader : RECORDREADER className
 inRecordWriter: RECORDWRITER record_write_class
- 
-query:
-   SELECT TRANSFORM '(' expression [ , ... ] ')'
-    ( inRowFormat )?
-    ( inRecordWriter )?
-    USING command_or_script
-    ( AS colName ( colType )? [, ... ] )?
-    ( outRowFormat )? ( outRecordReader )?
 ```
 
 {{< hint warning >}}
@@ -78,21 +79,26 @@ query:
   and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
 
 - inRecordWriter
+
   Specifies which writer (fully-qualified class name) to use for writing the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`.
 
 - outRecordReader
+
   Specifies which reader (fully-qualified class name) to use for reading the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`.
 
 - command_or_script
+
   Specifies a command or a path to a script to process data.
 
   {{< hint warning >}}
   **Note:**
 
-  Add a script file and then transform input using the script is not supported yet.
+  - Adding a script file and then transforming input using that script is not supported yet.
+  - The script used must be a local script and should be accessible on all hosts in the cluster.
   {{< /hint >}}
 
 - colType
+
   Specifies the data type that the output of the command/script should be cast to. By default, it is the `STRING` data type.
 
 
@@ -110,20 +116,20 @@ For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware the foll
 ```sql
 CREATE TABLE src(key string, value string);
 -- transform using
-SELECT TRANSFORM(key, value) using 'script' from t1;
+SELECT TRANSFORM(key, value) using 'cat' from t1;
 
 -- transform using with specific record writer and record reader
 SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
  WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
  RECORDWRITER 'MyRecordWriter'
- using 'script'
+ using 'cat'
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  RECORDREADER 'MyRecordReader' from src;
  
 -- use keyword MAP instead of TRANSFORM
-FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'script' as (c1, c2);
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'cat' as (c1, c2);
 
 -- specific the output of transform
-SELECT TRANSFORM(column) USING 'script' AS c1, c2;
-SELECT TRANSFORM(column) USING 'script' AS(c1 INT, c2 INT);
+SELECT TRANSFORM(column) USING 'cat' AS c1, c2;
+SELECT TRANSFORM(column) USING 'cat' AS(c1 INT, c2 INT);
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
index bae85d4464e..fdbc0bb0dff 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -3,7 +3,7 @@ title: "ADD Statements"
 weight: 7
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -50,5 +50,9 @@ ADD JAR filename;
 ### Examples
 
 ```sql
+-- add a local jar
 ADD JAR t.jar;
+
+-- add a remote jar
+ADD JAR hdfs://namenode-host:port/path/t.jar;
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
index b595d6b3196..5738036fbac 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -3,7 +3,7 @@ title: "ALTER Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/alter.html
+- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
index ba3dfb2ba86..8d4f9bd1c04 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -3,7 +3,7 @@ title: "CREATE Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -128,9 +128,8 @@ table_constraint:
 {{< hint warning >}}
 **NOTE:**
 
-- Create table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
 - Creating temporary tables is not supported yet.
-  {{< /hint >}}
+{{< /hint >}}
 
 ### Examples
 
@@ -157,7 +156,6 @@ CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
 
 ### Description
 
-`View`
 `CREATE VIEW` creates a view with the given name.
 If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
 (If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
@@ -233,15 +231,18 @@ The function is registered to metastore and will exist in all session unless the
 - `[USING JAR 'file_uri']`
 
   User can use the clause to add Jar that contains the implementation of the function along with its dependencies while creating the function.
-  The `file_uri` can be a local file or distributed file system.
-
+  The `file_uri` can be on a local file system or a distributed file system.
+  Flink will automatically download remote jars when the function is used in queries. The downloaded jars will be removed when the session exits.
 
 ### Examples
 
 ```sql
--- create a function accuming the class `SimpleUdf` has existed in class path
+-- create a function assuming the class `SimpleUdf` already exists in the class path
 CREATE FUNCTION simple_udf AS 'SimpleUdf';
 
--- create function using jar accuming the class `SimpleUdf` hasn't existed in class path
+-- create a function using jar, assuming the class `SimpleUdf` doesn't exist in the class path
 CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+
+-- create function using remote jar
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR 'hdfs://namenode-host:port/path/SimpleUdf.jar';
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
index 67bca77e1d7..a507d53a05f 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -3,7 +3,7 @@ title: "DROP Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
index 60e9c2c32f4..94170b96dfa 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -3,7 +3,7 @@ title: "INSERT Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,31 +35,9 @@ can be specified by value expressions or result from query.
 
 ```sql
 -- Standard syntax
-INSERT [OVERWRITE] TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
-INSERT INTO TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
-FROM from_statement
-INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
--- Hive extension (dynamic partition inserts):
-INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
-  
-INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+INSERT { OVERWRITE | INTO } [TABLE] tablename
+ [PARTITION (partcol1[=val1], partcol2[=val2] ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 ```
 
 ### Parameters
@@ -85,14 +63,9 @@ INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 #### Dynamic Partition Inserts
 
-In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+When writing data into a Hive table's partition, users can specify the list of partition column names in the `PARTITION` clause with optional column values.
 If all the partition columns' values are given, we call this a static partition, otherwise it is a dynamic partition.
 
 Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
@@ -102,7 +75,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 {{< hint warning >}}
 **Note:**
 
-In Hive, by default, the user mush specify at least one static partition in case the user accidentally overwrites all partition, and user can
+In Hive, by default, users must specify at least one static partition in case of accidentally overwriting all partitions, and users can
 set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
@@ -127,11 +100,6 @@ INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
 -- dynamic partition
 INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
 INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
-
--- multi-insert statements
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month > 6;
 ```
 
 ## INSERT OVERWRITE DIRECTORY
@@ -143,17 +111,13 @@ Query results can be inserted into filesystem directories by using a slight vari
 -- Standard syntax:
 INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
   [ROW FORMAT row_format] [STORED AS file_format] 
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
-[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
 row_format:
   : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
       [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
       [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
 ```
 
 ### Parameters
@@ -161,7 +125,7 @@ row_format:
 - directory_path
 
   The path for the directory to be inserted can be a full URI. If scheme or authority are not specified,
-  it'll use the scheme and authority from the hadoop configuration variable `fs.default.name` that specifies the Namenode URI.
+  it'll use the scheme and authority from the Flink configuration variable `fs.default-scheme` that specifies the filesystem scheme.
 
 - `LOCAL`
 
@@ -190,24 +154,61 @@ row_format:
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 ### Examples
 
 ```sql
 -- insert directory with specific format
 INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+
 -- insert directory with specific row format
 INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
   COLLECTION ITEMS TERMINATED BY '#'
   MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
-  
--- multiple insert
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```
+
+## Multiple Inserts
+
+Hive dialect enables users to insert into multiple destinations in a single statement. Users can mix inserting into a table and inserting into a directory in a single statement.
+With this syntax, Flink will minimize the number of data scans required: it can insert data into multiple tables/directories by scanning the input data just once.
+
+### Syntax
+
+```sql
+-- multiple insert into table
+FROM from_statement
+  INSERT { OVERWRITE | INTO } [TABLE] tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+  INSERT { OVERWRITE | INTO } [TABLE] tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+  [, ... ]
+
+-- multiple insert into directory
+FROM from_statement
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path [ROW FORMAT row_format] [STORED AS file_format] select_statement1,
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path [ROW FORMAT row_format] [STORED AS file_format] select_statement2
+  [, ... ]
+
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Examples
+
+```sql
+-- multiple insert into table
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+
+-- multiple insert into directory
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+    
+-- mixed with insert into table/directory in one single statement
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
index 246759bbe1b..534b0132408 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -3,7 +3,7 @@ title: "Load Data Statements"
 weight: 4
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/load.html
+- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -53,7 +53,7 @@ LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION
     - it will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the users' current working directory.
       The user can specify a full URI for local files as well - for example: file:///user/hive/warehouse/data1
     - it will try to **copy** all the files addressed by `filepath` to the target file system.
-      The target file system is inferred by looking at the location attribution. The coped data files will then be moved to the table.
+      The target file system is inferred by looking at the table's location attribute. The copied data files will then be moved to the table's location.
 
   If not, then:
     - if scheme or authority are not specified, it'll use the scheme and authority from the hadoop configuration variable `fs.default.name` that
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
index 127b74940d1..ef56af19f29 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -3,7 +3,7 @@ title: "概览"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/overview
+- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -29,40 +29,49 @@ under the License.
 
 ## 使用 Hive 方言
 
-Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何使用 SQL 客户端和 Table API 设置方言。
+Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何在 SQL 客户端、启动了 HiveServer2 endpoint 的 SQL Gateway 以及 Table API 中设置方言。
 还要注意,你可以为执行的每个语句动态切换方言。无需重新启动会话即可使用其他方言。
 
 {{< hint warning >}}
 **Note:**
 
 - 为了使用 Hive 方言, 你必须首先添加和 Hive 相关的依赖. 请参考 [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) 如何添加这些依赖。
+- 从 Flink 1.15版本开始,如果需要使用 Hive 方言的话,请首先将 `FLINK_HOME/opt` 下面的 `flink-table-planner_2.12` jar 包放到 `FLINK_HOME/lib` 下,并将 `FLINK_HOME/lib`
+  下的 `flink-table-planner-loader` jar 包移出 `FLINK_HOME/lib` 目录。否则将抛出 `ValidationException`。具体原因请参考 [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128)。
 - 请确保当前的 Catalog 是 [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). 否则, 将使用 Flink 的默认方言。
-- 了实现更好的语法和语义的兼容,强烈建议首先加载 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
+  在启动了 HiveServer2 endpoint 的 SQL Gateway 中,默认当前的 Catalog 就是 HiveCatalog。
+- 为了实现更好的语法和语义的兼容,强烈建议首先加载 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
   并将其放在 Module 列表的首位,以便在函数解析时优先使用 Hive 内置函数。 
-  请参考文档 [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) 来将 HiveModule 放在 Module 列表的首.
+  请参考文档 [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) 来将 HiveModule 放在 Module 列表的首位。
+  在启动了 HiveServer2 endpoint 的 SQL Gateway 中,HiveModule 已经被加载进来了。
 - Hive 方言只支持 `db.table` 这种两级的标识符,不支持带有 Catalog 名字的标识符。
-- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的[Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
+- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的 [Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
   只在 Hive-2.4.0 或更高版本支持。
-  {{< /hint >}}
+- Hive 方言主要是在批模式下使用的,某些 Hive 的语法([Sort/Cluster/Distribute BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}) 等)还没有在流模式下支持。
+{{< /hint >}}
 
 ### SQL Client
 
 SQL 方言可以通过 `table.sql-dialect` 属性指定。你可以在 SQL 客户端启动后设置方言。
 
 ```bash
-Flink SQL> SET 'table.sql-dialect' = 'hive'; -- 使用 Hive 方言
+Flink SQL> SET table.sql-dialect = hive; -- 使用 Hive 方言
 [INFO] Session property has been set.
 
-Flink SQL> SET 'table.sql-dialect' = 'default'; -- 使用 Flink 默认 方言
+Flink SQL> SET table.sql-dialect = default; -- 使用 Flink 默认 方言
 [INFO] Session property has been set.
 ```
 
-{{< hint warning >}}
-**Note:**
-Since Flink 1.15, when you want to use Hive dialect in Flink SQL client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
-with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it'll throw the following exception:
-{{< /hint >}}
-{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+### SQL Gateway Configured With HiveServer2 Endpoint
+
+在启动了 HiveServer2 endpoint 的 SQL Gateway 中,会默认使用 Hive 方言,所以如果你想使用 Hive 方言的话,你不需要手动切换至 Hive 方言,直接就能使用。但是如果你想使用 Flink 的默认方言,你也需要手动进行切换。
+
+```bash
+# 假设已经通过 beeline 连接上了 SQL Gateway
+jdbc:hive2> SET table.sql-dialect = default; -- 使用 Flink 默认 方言
+
+jdbc:hive2> SET table.sql-dialect = hive; -- 使用 Hive 方言
+```
 
 ### Table API
 
@@ -73,8 +82,10 @@ with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise,
 ```java
 EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
+
 // to use hive dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+
 // to use default dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 ```
@@ -84,8 +95,10 @@ tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 from pyflink.table import *
 settings = EnvironmentSettings.in_batch_mode()
 t_env = TableEnvironment.create(settings)
+
 # to use hive dialect
 t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
+
 # to use default dialect
 t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
index 17008d7464b..88264a3f41a 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -3,7 +3,7 @@ title: "SET Statements"
 weight: 8
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,17 +30,11 @@ The `SET` statement sets a property which provide a ways to set variables for a
 configuration property including system variable and Hive configuration.
 But environment variables can't be set via the `SET` statement. The behavior of `SET` with Hive dialect is compatible with Hive's.
 
-## Syntax
-
-```sql
-SET key=value;
-```
-
 ## EXAMPLES
 
 ```sql
 -- set Flink's configuration
-SET 'table.sql-dialect'='default';
+SET table.sql-dialect=default;
 
 -- set Hive's configuration
 SET hiveconf:k1=v1;
@@ -52,21 +46,22 @@ SET system:k2=v2;
 SET hivevar:k3=v3;
 
 -- get value for configuration
+SET table.sql-dialect;
 SET hiveconf:k1;
 SET system:k2;
 SET hivevar:k3;
 
--- print options
+-- only print Flink's configuration
+SET;
+
+-- print all configurations
 SET -v;
-SET; 
 ```
 
 {{< hint warning >}}
 **Note:**
-
-In Hive, the `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means it'll set it to Hive Conf.
-
-But in Flink, with Hive dialect, such `SET` command `set xx=yy` will set `xx` with value `yy` to Flink's configuration.
-
-So, if you want to set configuration to Hive's Conf, please add the prefix `hiveconf:`, using the  `SET` command like `SET hiveconf:xx=yy`.
-{{< /hint  >}}
+- In Hive, the `SET` command `SET xx=yy`, whose key has no prefix, is equivalent to `SET hiveconf:xx=yy`, which means it'll be set to Hive's conf.
+  But in Flink, with Hive dialect, such a `SET` command `SET xx=yy` will set `xx` with value `yy` in Flink's configuration.
+  So, if you want to set a configuration to Hive's conf, please add the prefix `hiveconf:`, using a `SET` command like `SET hiveconf:xx=yy`.
+- In Hive dialect, the `key`/`value` to be set shouldn't be quoted.
+{{< /hint >}}
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
index a21a8cf288c..fb3d5acffbc 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -1,9 +1,9 @@
 ---
-title: "Show Statements"
+title: "SHOW Statements"
 weight: 5
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -90,8 +90,7 @@ partition_spec:
 
   The optional `partition_spec` is used to specify what kind of partitions should be returned.
   When specified, the partitions that match the `partition_spec` specification are returned.
-  The `partition_spec` can be partial.
-
+  The `partition_spec` can be partial, which means you can specify only part of the partition columns when listing the partitions.
 
 ### Examples
 
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
index e609e912c9d..96c4a73709d 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -30,14 +30,14 @@ or `INSERT` keyword. The CTE is defined only with the execution scope of a singl
 ## Syntax
 
 ```sql
-withClause: cteClause [, ...]
-cteClause: cte_name AS (select statment)
+withClause: WITH cteClause [ , ... ]
+cteClause: cte_name AS (select statement)
 ```
 
 
 {{< hint warning >}}
 **Note:**
-- The `WITH` clause is not supported within SubQuery block
+- The `WITH` clause is not supported within Sub-Query block
 - CTEs are supported in Views, `CTAS` and `INSERT` statement
 - [Recursive Queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
   {{< /hint >}}
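+
+A minimal sketch of the syntax above (the table `t1` and its column `key` are assumed to exist):
+
+```sql
+-- define the CTE q1 once and reference it in the outer SELECT
+WITH q1 AS (SELECT key FROM t1 WHERE key = '5')
+SELECT * FROM q1;
+```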
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
index 719ce532dc1..514fbe9bee6 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -31,13 +31,19 @@ Hive dialect also supports enhanced aggregation features to do multiple aggregat
 ## Syntax
 
 ```sql
-groupByClause: groupByClause-1 | groupByClause-2
-groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+group_by_clause: 
+  group_by_clause_1 | group_by_clause_2
+
+group_by_clause_1: 
+  GROUP BY group_expression [ , ... ] [ WITH ROLLUP | WITH CUBE ] 
  
-groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
-grouping_set: { expression | ( [ expression [, ...] ] ) }
+group_by_clause_2: 
+  GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [ , ... ] ) } [ , ... ]
+
+grouping_set: 
+  { expression | ( [ expression [ , ... ] ] ) }
  
-groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+group_by_query: SELECT expression [ , ... ] FROM src [ group_by_clause ]
 ```
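+
+As a quick sketch of `group_by_clause_2`, assuming a hypothetical table `t(a, b, c)`:
+
+```sql
+-- aggregate by (a, b), by a alone, and over the whole table in one query
+SELECT a, b, SUM(c) FROM t GROUP BY a, b GROUPING SETS ((a, b), a, ());
+```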
 In `group_expression`, columns can also be specified by position number. But please remember:
 - For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
@@ -97,7 +103,7 @@ It represents the given list and all of its possible subsets - the power set.
 
 For example:
 ```sql
-GROUP BY a, b, c, WITH CUBE
+GROUP BY a, b, c WITH CUBE
 ```
 is equivalent to
 ```sql
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
index 90e6a062223..d0daeb79cb0 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,7 +35,7 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 - [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
 - [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
 - [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
 - [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
 - [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
 - [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
@@ -43,11 +43,11 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
-[WITH CommonTableExpression (, CommonTableExpression)*]
-SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+[WITH CommonTableExpression [ , ... ]]
+SELECT [ALL | DISTINCT] select_expr [ , ... ]
   FROM table_reference
   [WHERE where_condition]
   [GROUP BY col_list]
@@ -57,13 +57,14 @@ SELECT [ALL | DISTINCT] select_expr, select_expr, ...
   ]
  [LIMIT [offset,] rows]
 ```
-- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
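+
+As a sketch, a query combining several of the clauses above (table and column names are hypothetical):
+
+```sql
+SELECT key, SUM(value) AS total
+FROM t
+WHERE value > 0
+GROUP BY key
+ORDER BY total DESC
+LIMIT 10;
+```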
 
 ### WHERE Clause
 
-The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFS](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
 in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
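+
+For example, assuming a hypothetical table `sales`:
+
+```sql
+SELECT * FROM sales WHERE amount > 10 AND region = 'US';
+```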
 
 ### GROUP BY Clause
@@ -74,7 +75,7 @@ Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect
 
 The `ORDER BY` clause is used to return the result rows in a sorted manner in the user specified order.
 Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
-a total order in the output.
+a global order in the output.
 
 {{< hint warning >}}
 **Note:**
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
index 1178844d2c3..83252bd2150 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -22,7 +22,7 @@ under the License.
 
 # Set Operations
 
-Set Operation is used to combing two select into single one.
+Set Operations are used to combine multiple `SELECT` statements into a single result set. 
 Hive dialect supports the following operations:
 - UNION
 - INTERSECT
@@ -39,7 +39,7 @@ Hive dialect supports the following operations:
 ### Syntax
 
 ```sql
-select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { UNION [ ALL | DISTINCT ] } <query> [ ... ]
 ```
 
 ### Examples
@@ -60,7 +60,7 @@ SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { INTERSECT [ ALL | DISTINCT ] } <query> [ ... ]
 ```
 
 ### Examples
@@ -83,7 +83,7 @@ SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { EXCEPT [ ALL | DISTINCT ] } <query> [ ... ]
 ```
 
 ### Examples
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
index 5f6954bb880..5548ac48d81 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -33,9 +33,9 @@ So when there's more than one partition, `SORT BY` may return result that's part
 ### Syntax
 
 ```sql
-colOrder: ( ASC | DESC )
-sortBy: SORT BY BY expression [ , ... ]
 query: SELECT expression [ , ... ] FROM src sortBy
+sortBy: SORT BY expression [ colOrder ] [ , ... ]
+colOrder: ( ASC | DESC )
 ```
 
 ### Parameters
@@ -67,8 +67,12 @@ query: SELECT expression [ , ... ] FROM src distributeBy
 ### Examples
 
 ```sql
+-- only use DISTRIBUTE BY clause
 SELECT x, y FROM t DISTRIBUTE BY x;
 SELECT x, y FROM t DISTRIBUTE BY abs(y);
+
+-- use both DISTRIBUTE BY and SORT BY clause
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY y DESC;
 ```
 
 ## Cluster By
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
index 592d130bf46..5a63f1b6a3b 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -34,7 +34,7 @@ The sub-query can also be a query expression with `UNION`. Hive dialect supports
 ### Syntax
 
 ```sql
-select_statement from ( subquery_select_statement ) [ AS ] name
+select_statement FROM ( select_statement ) [ AS ] name
 ```
 
 ### Example
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
index 1ba9a9c0cfe..fd6af4b271f 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -29,6 +29,15 @@ The `TRANSFORM` clause allows user to transform inputs using user-specified comm
 ## Syntax
 
 ```sql
+query:
+   SELECT TRANSFORM ( expression [ , ... ] )
+   [ inRowFormat ]
+   [ inRecordWriter ]
+   USING command_or_script
+   [ AS colName [ colType ] [ , ... ] ]
+   [ outRowFormat ]
+   [ outRecordReader ]
+
 rowFormat
   : ROW FORMAT
     (DELIMITED [FIELDS TERMINATED BY char]
@@ -45,14 +54,6 @@ outRowFormat : rowFormat
 inRowFormat : rowFormat
 outRecordReader : RECORDREADER className
 inRecordWriter: RECORDWRITER record_write_class
- 
-query:
-   SELECT TRANSFORM '(' expression [ , ... ] ')'
-    ( inRowFormat )?
-    ( inRecordWriter )?
-    USING command_or_script
-    ( AS colName ( colType )? [, ... ] )?
-    ( outRowFormat )? ( outRecordReader )?
 ```
 
 {{< hint warning >}}
@@ -78,21 +79,26 @@ query:
   and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
 
 - inRecordWriter
+
+  Specifies which writer (a fully-qualified class name) to use to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`
 
 - outRecordReader
+
+  Specifies which reader (a fully-qualified class name) to use to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`
 
 - command_or_script
+
   Specifies a command or a path to a script to process data.
 
   {{< hint warning >}}
   **Note:**
 
-  Add a script file and then transform input using the script is not supported yet.
+  - Adding a script file and then transforming the input using that script is not supported yet.
+  - The script used must be a local script and should be accessible on all hosts in the cluster.
   {{< /hint >}}
 
 - colType
+
+  Specifies the data type that the output of the command/script should be cast to. By default, it is the `STRING` data type.
 
 
@@ -110,20 +116,20 @@ For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware the foll
 ```sql
 CREATE TABLE src(key string, value string);
 -- transform using
-SELECT TRANSFORM(key, value) using 'script' from t1;
+SELECT TRANSFORM(key, value) USING 'cat' FROM src;
 
 -- transform using with specific record writer and record reader
 SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
  WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
  RECORDWRITER 'MyRecordWriter'
- using 'script'
+ using 'cat'
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  RECORDREADER 'MyRecordReader' from src;
  
 -- use keyword MAP instead of TRANSFORM
-FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'script' as (c1, c2);
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'cat' as (c1, c2);
 
 -- specific the output of transform
-SELECT TRANSFORM(column) USING 'script' AS c1, c2;
-SELECT TRANSFORM(column) USING 'script' AS(c1 INT, c2 INT);
+SELECT TRANSFORM(column) USING 'cat' AS c1, c2 FROM src;
+SELECT TRANSFORM(column) USING 'cat' AS (c1 INT, c2 INT) FROM src;
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
index bae85d4464e..20ffa413d7a 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -3,7 +3,7 @@ title: "ADD Statements"
 weight: 7
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -38,17 +38,21 @@ Add multiple jars file in single `ADD JAR` statement is not supported.
 ### Syntax
 
 ```sql
-ADD JAR filename;
+ADD JAR <jar_path>;
 ```
 
 ### Parameters
 
-- filename
+- jar_path
 
-  The name of the JAR file to be added. It could be either on a local file or distributed file system.
+  The path of the JAR file to be added. It can be on either a local or a distributed file system.
 
 ### Examples
 
 ```sql
+-- add a local jar
 ADD JAR t.jar;
+
+-- add a remote jar
+ADD JAR hdfs://namenode-host:port/path/t.jar;
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
index b595d6b3196..5738036fbac 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -3,7 +3,7 @@ title: "ALTER Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/alter.html
+- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
index ba3dfb2ba86..70ee04df641 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -3,7 +3,7 @@ title: "CREATE Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -128,9 +128,8 @@ table_constraint:
 {{< hint warning >}}
 **NOTE:**
 
-- Create table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
-- Create temporary table is not supported yet.
-  {{< /hint >}}
+- Creating temporary tables is not supported yet.
+{{< /hint >}}
 
 ### Examples
 
@@ -157,7 +156,6 @@ CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
 
 ### Description
 
-`View`
 `CREATE VIEW` creates a view with the given name.
 If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
 (If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
@@ -233,15 +231,18 @@ The function is registered to metastore and will exist in all session unless the
 - `[USING JAR 'file_uri']`
 
   User can use the clause to add Jar that contains the implementation of the function along with its dependencies while creating the function.
-  The `file_uri` can be a local file or distributed file system.
-
+  The `file_uri` can be on a local or a distributed file system.
+  For remote jars, Flink will automatically download them when the function is used in queries. The downloaded jars will be removed when the session exits.
 
 ### Examples
 
 ```sql
--- create a function accuming the class `SimpleUdf` has existed in class path
+-- create a function assuming the class `SimpleUdf` already exists in the class path
 CREATE FUNCTION simple_udf AS 'SimpleUdf';
 
--- create function using jar accuming the class `SimpleUdf` hasn't existed in class path
-CREATE  FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+-- create a function using a jar, assuming the class `SimpleUdf` doesn't exist in the class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+
+-- create function using remote jar
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR 'hdfs://namenode-host:port/path/SimpleUdf.jar';
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
index 67bca77e1d7..a507d53a05f 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -3,7 +3,7 @@ title: "DROP Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
index 60e9c2c32f4..94170b96dfa 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -3,7 +3,7 @@ title: "INSERT Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,31 +35,9 @@ can be specified by value expressions or result from query.
 
 ```sql
 -- Standard syntax
-INSERT [OVERWRITE] TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
-INSERT INTO TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
-FROM from_statement
-INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
--- Hive extension (dynamic partition inserts):
-INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
-  
-INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+INSERT { OVERWRITE | INTO } [TABLE] tablename
+ [PARTITION (partcol1[=val1], partcol2[=val2] ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 ```
 
 ### Parameters
@@ -85,14 +63,9 @@ INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 #### Dynamic Partition Inserts
 
-In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+When writing data into Hive table's partition, users can specify the list of partition column names in the `PARTITION` clause with optional column values.
 If all the partition columns' value are given, we call this a static partition, otherwise it is a dynamic partition.
 
 Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
@@ -102,7 +75,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 {{< hint warning >}}
 **Note:**
 
-In Hive, by default, the user mush specify at least one static partition in case the user accidentally overwrites all partition, and user can
+In Hive, by default, users must specify at least one static partition in case of accidentally overwriting all partitions, and users can
 set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
@@ -127,11 +100,6 @@ INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
 --- dynamic partition 
 INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
 INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
-
--- multi-insert statements
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month > 6;
 ```
 
 ## INSERT OVERWRITE DIRECTORY
@@ -143,17 +111,13 @@ Query results can be inserted into filesystem directories by using a slight vari
 -- Standard syntax:
 INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
   [ROW FORMAT row_format] [STORED AS file_format] 
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
-[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
 row_format:
   : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
       [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
       [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
 ```
 
 ### Parameters
@@ -161,7 +125,7 @@ row_format:
 - directory_path
 
   The path for the directory to be inserted can be a full URI. If scheme or authority are not specified,
-  it'll use the scheme and authority from the hadoop configuration variable `fs.default.name` that specifies the Namenode URI.
+  it'll use the scheme and authority from the Flink configuration variable `fs.default-scheme` that specifies the filesystem scheme.
 
 - `LOCAL`
 
@@ -190,24 +154,61 @@ row_format:
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 ### Examples
 
 ```sql
 --- insert directory with specific format
 INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+
 -- insert directory with specific row format
 INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
   COLLECTION ITEMS TERMINATED BY '#'
   MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
-  
--- multiple insert
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```
+
+## Multiple Inserts
+
+Hive dialect enables users to insert into multiple destinations in a single statement. Users can mix inserting into a table and inserting into a directory in one statement.
+With this syntax, Flink will minimize the number of data scans required. Flink can insert data into multiple tables/directories by scanning the input data just once.
+
+### Syntax
+
+```sql
+-- multiple insert into table
+FROM from_statement
+  INSERT { OVERWRITE | INTO } [TABLE] tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+  INSERT { OVERWRITE | INTO } [TABLE] tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+  [, ... ]
+
+-- multiple insert into directory
+FROM from_statement
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path [ROW FORMAT row_format] [STORED AS file_format] select_statement1,
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path [ROW FORMAT row_format] [STORED AS file_format] select_statement2
+  [, ... ]
+
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Examples
+
+```sql
+-- multiple insert into table
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+
+-- multiple insert into directory
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+    
+-- mixed with insert into table/directory in one single statement
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
index 246759bbe1b..534b0132408 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -3,7 +3,7 @@ title: "Load Data Statements"
 weight: 4
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/load.html
+- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -53,7 +53,7 @@ LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION
     - it will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the users' current working directory.
       The user can specify a full URI for local files as well - for example: file:///user/hive/warehouse/data1
     - it will try to **copy** all the files addressed by `filepath` to the target file system.
-      The target file system is inferred by looking at the location attribution. The coped data files will then be moved to the table.
+      The target file system is inferred by looking at the location attribute of the table. The copied data files will then be moved to the location of the table.
 
   If not, then:
     - if schema or authority are not specified, it'll use the schema and authority from the hadoop configuration variable `fs.default.name` that
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
index c652bef92db..44dfecddd02 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/overview
+- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,43 +30,54 @@ By providing compatibility with Hive syntax, we aim to improve the interoperabil
 ## Use Hive Dialect
 
 Flink currently supports two SQL dialects: `default` and `hive`. You need to switch to Hive dialect
-before you can write in Hive syntax. The following describes how to set dialect with
-SQL Client and Table API. Also notice that you can dynamically switch dialect for each
+before you can write in Hive syntax. The following describes how to set the dialect using the
+SQL Client, the SQL Gateway configured with HiveServer2 Endpoint, and the Table API. Also notice that you can dynamically switch the dialect for each
 statement you execute. There's no need to restart a session to use a different dialect.
 
 {{< hint warning >}}
 **Note:**
 
 - To use Hive dialect, you have to add dependencies related to Hive. Please refer to [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) for how to add the dependencies.
+- Since Flink 1.15, if you want to use Hive dialect in the Flink SQL Client or SQL Gateway, you have to move the jar `flink-table-planner_2.12` from `FLINK_HOME/opt`
+  to `FLINK_HOME/lib`, and move the jar `flink-table-planner-loader` out of `FLINK_HOME/lib`.
+  Otherwise, a `ValidationException` will be thrown. Please refer to [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128) for more details.
 - Please make sure the current catalog is [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). Otherwise, it will fall back to Flink's `default` dialect.
+  When using SQL Gateway configured with HiveServer2 Endpoint, the current catalog will be a HiveCatalog by default.
 - In order to have better syntax and semantic compatibility, it’s highly recommended to load [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) and
   place it first in the module list, so that Hive built-in functions can be picked up during function resolution.
   Please refer [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) for how to change resolution order.
+  But when using SQL Gateway configured with HiveServer2 Endpoint, the Hive module will be loaded automatically.
 - Hive dialect only supports 2-part identifiers, so you can't specify catalog for an identifier.
 - While all Hive versions support the same syntax, whether a specific feature is available still depends on the
   [Hive version]({{< ref "docs/connectors/table/hive/overview" >}}#supported-hive-versions) you use. For example, updating database
   location is only supported in Hive-2.4.0 or later.
+- The Hive dialect is mainly used in batch mode. Some Hive syntax ([Sort/Cluster/Distribute BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}), etc.) is not supported in streaming mode yet.
 {{< /hint >}}
 
 ### SQL Client
 
 SQL dialect can be specified via the `table.sql-dialect` property.
-Therefore,you can set the dialect after the SQL Client has launched.
+Therefore, you can set the dialect after the SQL Client has launched.
 
 ```bash
-Flink SQL> SET 'table.sql-dialect' = 'hive'; -- to use hive dialect
+Flink SQL> SET table.sql-dialect = hive; -- to use Hive dialect
 [INFO] Session property has been set.
 
-Flink SQL> SET 'table.sql-dialect' = 'default'; -- to use default dialect
+Flink SQL> SET table.sql-dialect = default; -- to use Flink default dialect
 [INFO] Session property has been set.
 ```
 
-{{< hint warning >}}
-**Note:**
-Since Flink 1.15, when you want to use Hive dialect in Flink SQL client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
-with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it'll throw the following exception:
-{{< /hint >}}
-{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+### SQL Gateway Configured With HiveServer2 Endpoint
+
+When using the SQL Gateway configured with HiveServer2 Endpoint, the dialect is Hive by default, so you don't need to do anything to use the Hive dialect. You can still
+switch to the Flink default dialect.
+
+```bash
+# assuming you have connected to the SQL Gateway with Beeline
+jdbc:hive2> SET table.sql-dialect = default; -- to use Flink default dialect
+
+jdbc:hive2> SET table.sql-dialect = hive; -- to use Hive dialect
+```
 
 ### Table API
 
@@ -77,8 +88,10 @@ You can set dialect for your TableEnvironment with Table API.
 ```java
 EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
+
 // to use hive dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+
 // to use default dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 ```
@@ -88,10 +101,14 @@ tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 from pyflink.table import *
 settings = EnvironmentSettings.in_batch_mode()
 t_env = TableEnvironment.create(settings)
-# to use hive dialect
+
+# to use Hive dialect
 t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
-# to use default dialect
+
+# to use Flink default dialect
 t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
 ```
 {{< /tab >}}
 {{< /tabs >}}
+
+
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
index 17008d7464b..c5eb59e17a1 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -3,7 +3,7 @@ title: "SET Statements"
 weight: 8
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,17 +30,11 @@ The `SET` statement sets a property, which provides a way to set variables and
 configuration properties, including system variables and Hive configuration.
 But environment variables can't be set via the `SET` statement. The behavior of `SET` with Hive dialect is compatible with Hive's.
 
-## Syntax
-
-```sql
-SET key=value;
-```
-
 ## EXAMPLES
 
 ```sql
 -- set Flink's configuration
-SET 'table.sql-dialect'='default';
+SET table.sql-dialect=default;
 
 -- set Hive's configuration
 SET hiveconf:k1=v1;
@@ -52,21 +46,22 @@ SET system:k2=v2;
 SET hivevar:k3=v3;
 
 -- get value for configuration
+SET table.sql-dialect;
 SET hiveconf:k1;
 SET system:k2;
 SET hivevar:k3;
 
--- print options
+-- print only Flink's configuration
+SET;
+
+-- print all configurations
 SET -v;
-SET; 
 ```
 
 {{< hint warning >}}
 **Note:**
-
-In Hive, the `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means it'll set it to Hive Conf.
-
-But in Flink, with Hive dialect, such `SET` command `set xx=yy` will set `xx` with value `yy` to Flink's configuration.
-
-So, if you want to set configuration to Hive's Conf, please add the prefix `hiveconf:`, using the  `SET` command like `SET hiveconf:xx=yy`.
+- In Hive, a `SET` command whose key has no prefix, such as `SET xx=yy`, is equivalent to `SET hiveconf:xx=yy`; that is, the property is set in the Hive conf.
+  In Flink with Hive dialect, however, `SET xx=yy` sets `xx` to the value `yy` in Flink's configuration.
+  So, to set a property in the Hive conf, add the prefix `hiveconf:`, e.g. `SET hiveconf:xx=yy`.
+- In Hive dialect, the `key`/`value` to be set shouldn't be quoted.
 {{< /hint  >}}
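
As a sketch of the note above (the keys `my.flink.key` and `my.hive.key` are made-up placeholders, not real configuration options):

```sql
SET my.flink.key=foo;          -- no prefix: goes to Flink's configuration
SET hiveconf:my.hive.key=bar;  -- hiveconf: prefix: goes to the Hive conf
SET hiveconf:my.hive.key;      -- read the Hive conf value back
```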
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
index a21a8cf288c..fb3d5acffbc 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -1,9 +1,9 @@
 ---
-title: "Show Statements"
+title: "SHOW Statements"
 weight: 5
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -90,8 +90,7 @@ partition_spec:
 
   The optional `partition_spec` is used to specify which partitions should be returned.
   When specified, the partitions that match the `partition_spec` specification are returned.
-  The `partition_spec` can be partial.
-
+  The `partition_spec` can be partial, which means you can specify only a subset of the partition columns when listing partitions.
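
  For instance, assuming a hypothetical table `orders` partitioned by `(year, month)`, a partial spec names only some of those columns:

  ```sql
  -- list all partitions
  SHOW PARTITIONS orders;
  -- partial spec: only the `year` partition column is given
  SHOW PARTITIONS orders PARTITION (year=2022);
  ```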
 
 ### Examples
 


[flink] 24/25: [FLINK-29025][docs][hive] Fix links of Hive compatibility pages

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9d4a769f0a48fa19d7b9ea3721ebfe6acb37bb52
Author: Jark Wu <ja...@apache.org>
AuthorDate: Mon Sep 19 22:39:51 2022 +0800

    [FLINK-29025][docs][hive] Fix links of Hive compatibility pages
---
 .../docs/connectors/table/hive/hive_catalog.md     |  2 +-
 .../docs/connectors/table/hive/hive_read_write.md  |  2 +-
 .../docs/connectors/table/hive/overview.md         |  2 +-
 .../table/hive-compatibility/hive-dialect/drop.md  |  2 +-
 .../hive-compatibility/hive-dialect/insert.md      |  4 +--
 .../hive-compatibility/hive-dialect/overview.md    |  2 +-
 .../hive-dialect/queries/overview.md               | 34 +++++++++++-----------
 .../queries/sort-cluster-distribute-by.md          |  2 +-
 .../dev/table/hive-compatibility/hiveserver2.md    |  2 +-
 .../docs/dev/table/sql-gateway/hiveserver2.md      |  2 +-
 .../docs/dev/table/sql-gateway/overview.md         |  2 +-
 .../docs/connectors/table/hive/hive_catalog.md     |  2 +-
 .../docs/connectors/table/hive/hive_read_write.md  |  2 +-
 .../content/docs/connectors/table/hive/overview.md |  2 +-
 .../table/hive-compatibility/hive-dialect/drop.md  |  2 +-
 .../hive-compatibility/hive-dialect/insert.md      |  4 +--
 .../hive-compatibility/hive-dialect/overview.md    |  2 +-
 .../hive-dialect/queries/overview.md               | 34 +++++++++++-----------
 .../queries/sort-cluster-distribute-by.md          |  2 +-
 .../dev/table/hive-compatibility/hiveserver2.md    |  2 +-
 .../docs/dev/table/sql-gateway/hiveserver2.md      |  2 +-
 .../content/docs/dev/table/sql-gateway/overview.md |  2 +-
 22 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/docs/content.zh/docs/connectors/table/hive/hive_catalog.md b/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
index 44a752573e7..c8ca73daf7f 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_catalog.md
@@ -64,7 +64,7 @@ Generic tables, on the other hand, are specific to Flink. When creating generic
 HMS to persist the metadata. While these tables are visible to Hive, it's unlikely Hive is able to understand
 the metadata. And therefore using such tables in Hive leads to undefined behavior.
 
-It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to create Hive-compatible tables.
+It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}}) to create Hive-compatible tables.
 If you want to create Hive-compatible tables with default dialect, make sure to set `'connector'='hive'` in your table properties, otherwise
 a table is considered generic by default in `HiveCatalog`. Note that the `connector` property is not required if you use Hive dialect.
 
diff --git a/docs/content.zh/docs/connectors/table/hive/hive_read_write.md b/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
index 99d05cbba05..1bcb8ec4b94 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_read_write.md
@@ -507,7 +507,7 @@ INSERT INTO TABLE fact_tz PARTITION (day, hour) select 1, '2022-8-8', '14';
 
 **注意:**
 - 该配置项 `table.exec.hive.sink.sort-by-dynamic-partition.enable` 只在批模式下生效。
-- 目前,只有在 Flink 批模式下使用了 [Hive 方言]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}),才可以使用 `DISTRIBUTED BY` 和 `SORTED BY`。
+- 目前,只有在 Flink 批模式下使用了 [Hive 方言]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}}),才可以使用 `DISTRIBUTED BY` 和 `SORTED BY`。
 
 ### 自动收集统计信息
 在使用 Flink 写入 Hive 表的时候,Flink 将默认自动收集写入数据的统计信息然后将其提交至 Hive metastore 中。
diff --git a/docs/content.zh/docs/connectors/table/hive/overview.md b/docs/content.zh/docs/connectors/table/hive/overview.md
index 957cd790fb7..96b0e604621 100644
--- a/docs/content.zh/docs/connectors/table/hive/overview.md
+++ b/docs/content.zh/docs/connectors/table/hive/overview.md
@@ -449,7 +449,7 @@ USE CATALOG myhive;
 
 ## DDL
 
-在 Flink 中执行 DDL 操作 Hive 的表、视图、分区、函数等元数据时,建议使用 [Hive 方言]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}})
+在 Flink 中执行 DDL 操作 Hive 的表、视图、分区、函数等元数据时,建议使用 [Hive 方言]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}})
 
 ## DML
 
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
index a507d53a05f..36cdacecfc8 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/drop.md
@@ -107,7 +107,7 @@ DROP VIEW IF EXISTS v1;
 ## DROP MARCO
 
 `DROP MARCO` statement is used to drop the existing `MARCO`.
-Please refer to [CREATE MARCO]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/create" >}}#create-marco) for how to create `MARCO`.
+Please refer to [CREATE MARCO]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/create" >}}#create-marco) for how to create `MARCO`.
 
 ### Syntax
 
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index 94170b96dfa..4ad602770d0 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -59,7 +59,7 @@ INSERT { OVERWRITE | INTO } [TABLE] tablename
 - select_statement
 
   A statement for query.
-  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+  See more details in [queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}).
 
 ### Synopsis
 
@@ -138,7 +138,7 @@ row_format:
 - select_statement
 
   A statement for query.
-  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+  See more details in [queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}).
 
 - `STORED AS file_format`
 
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
index ef56af19f29..bb710333fb6 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/overview.md
@@ -47,7 +47,7 @@ Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换
 - Hive 方言只支持 `db.table` 这种两级的标识符,不支持带有 Catalog 名字的标识符。
 - 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的 [Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
   只在 Hive-2.4.0 或更高版本支持。
-- Hive 方言主要是在批模式下使用的,某些 Hive 的语法([Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}), 等)还没有在流模式下支持。
+- Hive 方言主要是在批模式下使用的,某些 Hive 的语法([Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/transform" >}}), 等)还没有在流模式下支持。
 {{< /hint >}}
 
 ### SQL Client
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
index 61d375f74c5..bae637cb81e 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
@@ -29,21 +29,21 @@ under the License.
 Hive dialect supports a commonly-used subset of Hive’s [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select).
 The following lists some parts of HiveQL supported by the Hive dialect.
 
-- [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}})
-- [Group By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}})
-- [Join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}})
-- [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
-- [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
-- [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
-- [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
-- [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
-- [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
+- [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}})
+- [Group By]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/group-by" >}})
+- [Join]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/join" >}})
+- [Set Operation]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/set-op" >}})
+- [Lateral View]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view" >}})
+- [Window Functions]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}})
+- [CTE]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/cte" >}})
+- [Transform]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/transform" >}})
+- [Table Sample]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample" >}})
 
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
 [WITH CommonTableExpression [ , ... ]]
@@ -57,24 +57,24 @@ SELECT [ALL | DISTINCT] select_expr [ , ... ]
   ]
  [LIMIT [offset,] rows]
 ```
-- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}) of another query
 - `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
 
 ### WHERE Clause
 
 The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
-in the `WHERE` clause. Some types of [sub-queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
+in the `WHERE` clause. Some types of [sub-queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}) are supported in `WHERE` clause.
 
 ### GROUP BY Clause
 
-Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}}) for more details.
+Please refer to [GROUP BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/group-by" >}}) for more details.
 
 ### ORDER BY Clause
 
 The `ORDER BY` clause is used to return the result rows in the user-specified order.
-Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
+Different from [SORT BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
 a global order in the output.
 
 {{< hint warning >}}
@@ -85,7 +85,7 @@ So if the number of rows in the output is too large, it could take a very long t
 
 ## CLUSTER/DISTRIBUTE/SORT BY
 
-Please refer to [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}) for more details.
+Please refer to [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}) for more details.
 
 ### ALL and DISTINCT Clauses
 
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
index 19649d225d2..6d55dab1107 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
@@ -26,7 +26,7 @@ under the License.
 
 ### Description
 
-Unlike [ORDER BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}#order-by-clause) which guarantees a total order of output,
+Unlike [ORDER BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}#order-by-clause) which guarantees a total order of output,
 `SORT BY` only guarantees that the result rows within each partition are in the user-specified order.
 So when there's more than one partition, `SORT BY` may return a result that's only partially ordered.
 
diff --git a/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md b/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
index b44c813f76d..b4c8e155135 100644
--- a/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
+++ b/docs/content.zh/docs/dev/table/hive-compatibility/hiveserver2.md
@@ -3,7 +3,7 @@ title: HiveServer2 Endpoint
 weight: 11
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveserver2.html
+- /dev/table/hive-compatibility/hiveserver2.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md b/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md
index b2ac2fd6233..f879e6ebfff 100644
--- a/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md
+++ b/docs/content.zh/docs/dev/table/sql-gateway/hiveserver2.md
@@ -30,5 +30,5 @@ HiveServer2 Endpoint is compatible with [HiveServer2](https://cwiki.apache.org/c
 wire protocol and allows users to interact (e.g. submit Hive SQL) with the Flink SQL Gateway using existing Hive clients, such as Hive JDBC, Beeline, DBeaver, Apache Superset, and so on.
 
 It is suggested to use the HiveServer2 Endpoint with Hive Catalog and Hive dialect to get the same experience
-as HiveServer2. Please refer to the [Hive Compatibility]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}})
+as HiveServer2. Please refer to the [Hive Compatibility]({{< ref "docs/dev/table/hive-compatibility/hiveserver2" >}})
 for more details. 
diff --git a/docs/content.zh/docs/dev/table/sql-gateway/overview.md b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
index 7ee788027bb..b33af45b632 100644
--- a/docs/content.zh/docs/dev/table/sql-gateway/overview.md
+++ b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
@@ -214,7 +214,7 @@ $ ./sql-gateway -Dkey=value
 Supported Endpoints
 ----------------
 
-Flink natively support [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and [HiveServer2 Endpoint]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}}).
+Flink natively supports the [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and the [HiveServer2 Endpoint]({{< ref "docs/dev/table/hive-compatibility/hiveserver2" >}}).
 The SQL Gateway is bundled with the REST Endpoint by default. With the flexible architecture, users are able to start the SQL Gateway with the specified endpoints by calling
 
 ```bash
diff --git a/docs/content/docs/connectors/table/hive/hive_catalog.md b/docs/content/docs/connectors/table/hive/hive_catalog.md
index 323928055c7..3518b631d57 100644
--- a/docs/content/docs/connectors/table/hive/hive_catalog.md
+++ b/docs/content/docs/connectors/table/hive/hive_catalog.md
@@ -64,7 +64,7 @@ Generic tables, on the other hand, are specific to Flink. When creating generic
 HMS to persist the metadata. While these tables are visible to Hive, it's unlikely Hive is able to understand
 the metadata. And therefore using such tables in Hive leads to undefined behavior.
 
-It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to create Hive-compatible tables.
+It's recommended to switch to [Hive dialect]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}}) to create Hive-compatible tables.
 If you want to create Hive-compatible tables with default dialect, make sure to set `'connector'='hive'` in your table properties, otherwise
 a table is considered generic by default in `HiveCatalog`. Note that the `connector` property is not required if you use Hive dialect.
 
diff --git a/docs/content/docs/connectors/table/hive/hive_read_write.md b/docs/content/docs/connectors/table/hive/hive_read_write.md
index 8577c8b8f0a..95f377732d6 100644
--- a/docs/content/docs/connectors/table/hive/hive_read_write.md
+++ b/docs/content/docs/connectors/table/hive/hive_read_write.md
@@ -534,7 +534,7 @@ Also, you can manually add `SORTED BY <partition_field>` in your SQL statement t
 
 **NOTE:** 
 - The configuration `table.exec.hive.sink.sort-by-dynamic-partition.enable` only works in Flink `BATCH` mode.
-- Currently, `DISTRIBUTED BY` and `SORTED BY` is only supported when using [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}})  in Flink `BATCH` mode.
+- Currently, `DISTRIBUTED BY` and `SORTED BY` is only supported when using [Hive dialect]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}})  in Flink `BATCH` mode.
 
 ### Auto Gather Statistic
 By default, Flink will gather the statistic automatically and then committed to Hive metastore during writing Hive table.
diff --git a/docs/content/docs/connectors/table/hive/overview.md b/docs/content/docs/connectors/table/hive/overview.md
index 6dc9f5117b0..65c67759948 100644
--- a/docs/content/docs/connectors/table/hive/overview.md
+++ b/docs/content/docs/connectors/table/hive/overview.md
@@ -454,7 +454,7 @@ Below are the options supported when creating a `HiveCatalog` instance with YAML
 
 ## DDL
 
-It's recommended to use [Hive dialect]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/overview" >}}) to execute DDLs to create
+It's recommended to use [Hive dialect]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/overview" >}}) to execute DDLs to create
 Hive tables, views, partitions, functions within Flink.
 
 ## DML
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
index a507d53a05f..36cdacecfc8 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/drop.md
@@ -107,7 +107,7 @@ DROP VIEW IF EXISTS v1;
 ## DROP MARCO
 
 `DROP MARCO` statement is used to drop the existing `MARCO`.
-Please refer to [CREATE MARCO]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/create" >}}#create-marco) for how to create `MARCO`.
+Please refer to [CREATE MARCO]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/create" >}}#create-marco) for how to create `MARCO`.
 
 ### Syntax
 
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
index 94170b96dfa..4ad602770d0 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/insert.md
@@ -59,7 +59,7 @@ INSERT { OVERWRITE | INTO } [TABLE] tablename
 - select_statement
 
   A statement for query.
-  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+  See more details in [queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}).
 
 ### Synopsis
 
@@ -138,7 +138,7 @@ row_format:
 - select_statement
 
   A statement for query.
-  See more details in [queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}).
+  See more details in [queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}).
 
 - `STORED AS file_format`
 
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
index 44dfecddd02..2a0ef94442f 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/overview.md
@@ -51,7 +51,7 @@ statement you execute. There's no need to restart a session to use a different d
 - While all Hive versions support the same syntax, whether a specific feature is available still depends on the
   [Hive version]({{< ref "docs/connectors/table/hive/overview" >}}#supported-hive-versions) you use. For example, updating database
   location is only supported in Hive-2.4.0 or later.
-- The Hive dialect is mainly used in batch mode. Some Hive's syntax ([Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}), etc.)  haven't been supported in streaming mode yet.
+- The Hive dialect is mainly used in batch mode. Some Hive's syntax ([Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/transform" >}}), etc.)  haven't been supported in streaming mode yet.
 {{< /hint >}}
 
 ### SQL Client
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
index d0daeb79cb0..da908a1ee4f 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/overview.md
@@ -29,21 +29,21 @@ under the License.
 Hive dialect supports a commonly-used subset of Hive’s [DQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select).
 The following lists some parts of HiveQL supported by the Hive dialect.
 
-- [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}})
-- [Group By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}})
-- [Join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}})
-- [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
-- [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
-- [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
-- [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
-- [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
-- [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
+- [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}})
+- [Group By]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/group-by" >}})
+- [Join]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/join" >}})
+- [Set Operation]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/set-op" >}})
+- [Lateral View]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view" >}})
+- [Window Functions]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}})
+- [CTE]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/cte" >}})
+- [Transform]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/transform" >}})
+- [Table Sample]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample" >}})
 
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
 [WITH CommonTableExpression [ , ... ]]
@@ -57,24 +57,24 @@ SELECT [ALL | DISTINCT] select_expr [ , ... ]
   ]
  [LIMIT [offset,] rows]
 ```
-- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}) of another query
 - `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
 
 ### WHERE Clause
 
 The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
-in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
+in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries" >}}) are supported in the `WHERE` clause.
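
As an illustrative sketch (the table and column names below are hypothetical, not taken from the Flink docs), a `WHERE` clause can combine comparison operators with a sub-query:

```sql
-- filter on a comparison plus an IN sub-query; orders/shipped are hypothetical tables
SELECT id, price
FROM orders
WHERE price > 100
  AND id IN (SELECT order_id FROM shipped);
```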
 
 ### GROUP BY Clause
 
-Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by" >}}) for more details.
+Please refer to [GROUP BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/group-by" >}}) for more details.
 
 ### ORDER BY Clause
 
 The `ORDER BY` clause is used to return the result rows sorted in the user-specified order.
-Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
+Different from [SORT BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
 a global order in the output.
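
For example, a query over a hypothetical table `t` that produces a globally ordered result:

```sql
-- the whole result is sorted by x ascending, then y descending
SELECT x, y FROM t ORDER BY x ASC, y DESC;
```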
 
 {{< hint warning >}}
@@ -85,7 +85,7 @@ So if the number of rows in the output is too large, it could take a very long t
 
 ## CLUSTER/DISTRIBUTE/SORT BY
 
-Please refer to [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}) for more details.
+Please refer to [Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by" >}}) for more details.
 
 ### ALL and DISTINCT Clauses
 
diff --git a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
index 5548ac48d81..b6fea7a2efe 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by.md
@@ -26,7 +26,7 @@ under the License.
 
 ### Description
 
-Unlike [ORDER BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview" >}}#order-by-clause) which guarantees a total order of output,
+Unlike [ORDER BY]({{< ref "docs/dev/table/hive-compatibility/hive-dialect/queries/overview" >}}#order-by-clause) which guarantees a total order of output,
 `SORT BY` only guarantees that the result rows within each partition are in the user-specified order.
 So when there's more than one partition, `SORT BY` may return a result that's only partially ordered.
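
A minimal sketch over a hypothetical table `t`: rows are sorted by `x` within each partition only, so the overall output may not be globally ordered:

```sql
-- distribute rows by x across partitions, then sort within each partition
SELECT x FROM t DISTRIBUTE BY x SORT BY x;
```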
 
diff --git a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
index 918e096a6b4..dd546b08936 100644
--- a/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
+++ b/docs/content/docs/dev/table/hive-compatibility/hiveserver2.md
@@ -3,7 +3,7 @@ title: HiveServer2 Endpoint
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveserver2.html
+- /dev/table/hive-compatibility/hiveserver2.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/sql-gateway/hiveserver2.md b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
index d60141ef968..f879e6ebfff 100644
--- a/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
+++ b/docs/content/docs/dev/table/sql-gateway/hiveserver2.md
@@ -30,5 +30,5 @@ HiveServer2 Endpoint is compatible with [HiveServer2](https://cwiki.apache.org/c
 wire protocol and allows users to interact (e.g. submit Hive SQL) with Flink SQL Gateway with existing Hive clients, such as Hive JDBC, Beeline, DBeaver, Apache Superset and so on.
 
 It is recommended to use HiveServer2 Endpoint with Hive Catalog and Hive dialect to get the same experience
-as HiveServer2. Please refer to the [Hive Compatibility]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}}) 
+as HiveServer2. Please refer to the [Hive Compatibility]({{< ref "docs/dev/table/hive-compatibility/hiveserver2" >}})
 for more details. 
diff --git a/docs/content/docs/dev/table/sql-gateway/overview.md b/docs/content/docs/dev/table/sql-gateway/overview.md
index e13ad914fc7..625fef2d851 100644
--- a/docs/content/docs/dev/table/sql-gateway/overview.md
+++ b/docs/content/docs/dev/table/sql-gateway/overview.md
@@ -214,7 +214,7 @@ $ ./sql-gateway -Dkey=value
 Supported Endpoints
 ----------------
 
-Flink natively support [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and [HiveServer2 Endpoint]({{< ref "docs/dev/table/hiveCompatibility/hiveserver2" >}}). 
+Flink natively supports [REST Endpoint]({{< ref "docs/dev/table/sql-gateway/rest" >}}) and [HiveServer2 Endpoint]({{< ref "docs/dev/table/hive-compatibility/hiveserver2" >}}).
 The SQL Gateway is bundled with the REST Endpoint by default. With the flexible architecture, users are able to start the SQL Gateway with the specified endpoints by calling 
 
 ```bash


[flink] 16/25: [FLINK-29025][docs] add drop page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit dffad01bf356cc01411e31fb17dc3bb7726893f4
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:39:05 2022 +0800

    [FLINK-29025][docs] add drop page for Hive dialect
---
 .../table/hiveCompatibility/hiveDialect/drop.md    | 146 +++++++++++++++++++++
 .../table/hiveCompatibility/hiveDialect/drop.md    | 146 +++++++++++++++++++++
 2 files changed, 292 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
new file mode 100644
index 00000000000..67bca77e1d7
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -0,0 +1,146 @@
+---
+title: "DROP Statements"
+weight: 2
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/drop.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# DROP Statements
+
+With the Hive dialect, the following DROP statements are supported for now:
+
+- DROP DATABASE
+- DROP TABLE
+- DROP VIEW
+- DROP MACRO
+- DROP FUNCTION
+
+## DROP DATABASE
+
+### Description
+
+The `DROP DATABASE` statement drops a database as well as the tables/directories associated with the database.
+
+### Syntax
+
+```sql
+DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
+```
+`SCHEMA` and `DATABASE` are interchangeable - they mean the same thing.
+The default behavior is `RESTRICT`, where `DROP DATABASE` will fail if the database is not empty.
+To drop the tables in the database as well, use `DROP DATABASE ... CASCADE`.
+
+`DROP` returns an error if the database doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP DATABASE db1 CASCADE;
+```
+
+## DROP TABLE
+
+### Description
+
+The `DROP TABLE` statement removes the metadata and data for a table.
+The data is actually moved to the `.Trash/Current` directory if Trash is configured.
+The metadata is completely lost.
+
+When dropping an `EXTERNAL` table, the data in the table will not be deleted from the filesystem.
+
+### Syntax
+
+```sql
+DROP TABLE [IF EXISTS] table_name;
+```
+
+`DROP` returns an error if the table doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP TABLE IF EXISTS t1;
+```
+
+## DROP VIEW
+
+### Description
+
+The `DROP VIEW` statement removes the metadata of the specified view.
+
+### Syntax
+
+```sql
+DROP VIEW [IF EXISTS] [db_name.]view_name;
+```
+`DROP` returns an error if the view doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP VIEW IF EXISTS v1;
+```
+
+## DROP MACRO
+
+The `DROP MACRO` statement drops an existing `MACRO`.
+Please refer to [CREATE MACRO]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/create" >}}#create-macro) for how to create a `MACRO`.
+
+### Syntax
+
+```sql
+DROP TEMPORARY MACRO [IF EXISTS] macro_name;
+```
+`DROP` returns an error if the macro doesn't exist, unless `IF EXISTS` is specified.
+
+### Examples
+
+```sql
+DROP TEMPORARY MACRO IF EXISTS m1;
+```
+
+## DROP FUNCTION
+
+The `DROP FUNCTION` statement drops an existing function.
+
+### Syntax
+
+```sql
+--- Drop temporary function
+DROP TEMPORARY FUNCTION [IF EXISTS] function_name;
+
+--- Drop permanent function
+DROP FUNCTION [IF EXISTS] function_name;
+```
+`DROP` returns an error if the function doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP FUNCTION IF EXISTS f1;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
new file mode 100644
index 00000000000..67bca77e1d7
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -0,0 +1,146 @@
+---
+title: "DROP Statements"
+weight: 2
+type: docs
+aliases:
+- /dev/table/hiveCompatibility/hiveDialect/drop.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# DROP Statements
+
+With the Hive dialect, the following DROP statements are supported for now:
+
+- DROP DATABASE
+- DROP TABLE
+- DROP VIEW
+- DROP MACRO
+- DROP FUNCTION
+
+## DROP DATABASE
+
+### Description
+
+The `DROP DATABASE` statement drops a database as well as the tables/directories associated with the database.
+
+### Syntax
+
+```sql
+DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
+```
+`SCHEMA` and `DATABASE` are interchangeable - they mean the same thing.
+The default behavior is `RESTRICT`, where `DROP DATABASE` will fail if the database is not empty.
+To drop the tables in the database as well, use `DROP DATABASE ... CASCADE`.
+
+`DROP` returns an error if the database doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP DATABASE db1 CASCADE;
+```
+
+## DROP TABLE
+
+### Description
+
+The `DROP TABLE` statement removes the metadata and data for a table.
+The data is actually moved to the `.Trash/Current` directory if Trash is configured.
+The metadata is completely lost.
+
+When dropping an `EXTERNAL` table, the data in the table will not be deleted from the filesystem.
+
+### Syntax
+
+```sql
+DROP TABLE [IF EXISTS] table_name;
+```
+
+`DROP` returns an error if the table doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP TABLE IF EXISTS t1;
+```
+
+## DROP VIEW
+
+### Description
+
+The `DROP VIEW` statement removes the metadata of the specified view.
+
+### Syntax
+
+```sql
+DROP VIEW [IF EXISTS] [db_name.]view_name;
+```
+`DROP` returns an error if the view doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP VIEW IF EXISTS v1;
+```
+
+## DROP MACRO
+
+The `DROP MACRO` statement drops an existing `MACRO`.
+Please refer to [CREATE MACRO]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/create" >}}#create-macro) for how to create a `MACRO`.
+
+### Syntax
+
+```sql
+DROP TEMPORARY MACRO [IF EXISTS] macro_name;
+```
+`DROP` returns an error if the macro doesn't exist, unless `IF EXISTS` is specified.
+
+### Examples
+
+```sql
+DROP TEMPORARY MACRO IF EXISTS m1;
+```
+
+## DROP FUNCTION
+
+The `DROP FUNCTION` statement drops an existing function.
+
+### Syntax
+
+```sql
+--- Drop temporary function
+DROP TEMPORARY FUNCTION [IF EXISTS] function_name;
+
+--- Drop permanent function
+DROP FUNCTION [IF EXISTS] function_name;
+```
+`DROP` returns an error if the function doesn't exist, unless `IF EXISTS` is specified
+or the configuration variable [hive.exec.drop.ignorenonexistent](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.drop.ignorenonexistent)
+is set to true.
+
+### Examples
+
+```sql
+DROP FUNCTION IF EXISTS f1;
+```


[flink] 08/25: [FLINK-29025][docs] add window functions page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 756f3a7a7dbe9295e17ba26f28badfa14df488d6
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:15:46 2022 +0800

    [FLINK-29025][docs] add window functions page for Hive dialect
---
 .../hiveDialect/Queries/window-functions.md        | 105 +++++++++++++++++++++
 .../hiveDialect/Queries/window-functions.md        | 105 +++++++++++++++++++++
 2 files changed, 210 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
new file mode 100644
index 00000000000..2e41c5fb620
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
@@ -0,0 +1,105 @@
+---
+title: "Window Functions"
+weight: 7
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Window Functions
+
+## Description
+
+Window functions perform an aggregation over a group of rows, referred to as a window.
+They return an aggregated value for each row based on that group of rows.
+
+## Syntax
+
+```sql
+window_function OVER ( [ { PARTITION | DISTRIBUTE }  BY colName ( [, ... ] ) ] 
+{ ORDER | SORT } BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [ , ... ]
+[ window_frame ] )
+```
+
+## Parameters
+
+### window_function
+
+Hive dialect supports the following window functions:
+- Windowing functions
+    - LEAD
+    - LAG
+    - FIRST_VALUE
+    - LAST_VALUE
+
+  {{< hint warning >}}
+  **Note:** For FIRST_VALUE/LAST_VALUE, using a parameter to control whether to skip or respect null values isn't supported yet. They will always skip null values.
+  {{< /hint >}}
+- Analytic functions
+    - RANK
+    - ROW_NUMBER
+    - DENSE_RANK
+    - CUME_DIST
+    - PERCENT_RANK
+    - NTILE
+- Aggregate Functions
+    - COUNT
+    - SUM
+    - MIN
+    - MAX
+    - AVG
+
+### window_frame
+
+It's used to specify which row the window frame starts on and where it ends. Window frame supports the following formats:
+```sql
+(ROWS | RANGE) BETWEEN (UNBOUNDED | [num]) PRECEDING AND ([num] PRECEDING | CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
+(ROWS | RANGE) BETWEEN CURRENT ROW AND (CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
+(ROWS | RANGE) BETWEEN [num] FOLLOWING AND (UNBOUNDED | [num]) FOLLOWING
+```
+
+When `ORDER BY` is specified but `window_frame` is missing, the window frame defaults to `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`.
+
+When both `ORDER BY` and `window_frame` are missing, the window frame defaults to `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`.
+
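
The two defaults above can be sketched with a hypothetical table `t`; each query behaves as if the default frame were written out explicitly:

```sql
-- ORDER BY present, frame omitted: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY d) FROM t;

-- neither ORDER BY nor frame: ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
SELECT a, SUM(b) OVER (PARTITION BY c) FROM t;
```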
+{{< hint warning >}}
+**Note:**
+`DISTINCT` is not supported in window functions yet.
+{{< /hint >}}
+
+## Examples
+
+```sql
+-- PARTITION BY with one partitioning column, no ORDER BY or window specification
+SELECT a, COUNT(b) OVER (PARTITION BY c) FROM t;
+
+-- PARTITION BY with two partitioning columns, no ORDER BY or window specification
+SELECT a, COUNT(b) OVER (PARTITION BY c, d) FROM t;
+
+-- PARTITION BY with two partitioning columns and ORDER BY, no window specification
+SELECT a, SUM(b) OVER (PARTITION BY c, d ORDER BY e, f) FROM t;
+
+-- PARTITION BY with partitioning, ORDER BY, and window specification
+SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN 3 PRECEDING AND 3 FOLLOWING)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
+FROM t;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
new file mode 100644
index 00000000000..2e41c5fb620
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions.md
@@ -0,0 +1,105 @@
+---
+title: "Window Functions"
+weight: 7
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Window Functions
+
+## Description
+
+Window functions perform an aggregation over a group of rows, referred to as a window.
+They return an aggregated value for each row based on that group of rows.
+
+## Syntax
+
+```sql
+window_function OVER ( [ { PARTITION | DISTRIBUTE }  BY colName ( [, ... ] ) ] 
+{ ORDER | SORT } BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [ , ... ]
+[ window_frame ] )
+```
+
+## Parameters
+
+### window_function
+
+Hive dialect supports the following window functions:
+- Windowing functions
+    - LEAD
+    - LAG
+    - FIRST_VALUE
+    - LAST_VALUE
+
+  {{< hint warning >}}
+  **Note:** For FIRST_VALUE/LAST_VALUE, using a parameter to control whether to skip or respect null values isn't supported yet. They will always skip null values.
+  {{< /hint >}}
+- Analytic functions
+    - RANK
+    - ROW_NUMBER
+    - DENSE_RANK
+    - CUME_DIST
+    - PERCENT_RANK
+    - NTILE
+- Aggregate Functions
+    - COUNT
+    - SUM
+    - MIN
+    - MAX
+    - AVG
+
+### window_frame
+
+It's used to specify which row the window frame starts on and where it ends. Window frame supports the following formats:
+```sql
+(ROWS | RANGE) BETWEEN (UNBOUNDED | [num]) PRECEDING AND ([num] PRECEDING | CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
+(ROWS | RANGE) BETWEEN CURRENT ROW AND (CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
+(ROWS | RANGE) BETWEEN [num] FOLLOWING AND (UNBOUNDED | [num]) FOLLOWING
+```
+
+When `ORDER BY` is specified but `window_frame` is missing, the window frame defaults to `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`.
+
+When both `ORDER BY` and `window_frame` are missing, the window frame defaults to `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`.
+
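
The two defaults above can be sketched with a hypothetical table `t`; each query behaves as if the default frame were written out explicitly:

```sql
-- ORDER BY present, frame omitted: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY d) FROM t;

-- neither ORDER BY nor frame: ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
SELECT a, SUM(b) OVER (PARTITION BY c) FROM t;
```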
+{{< hint warning >}}
+**Note:**
+`DISTINCT` is not supported in window functions yet.
+{{< /hint >}}
+
+## Examples
+
+```sql
+-- PARTITION BY with one partitioning column, no ORDER BY or window specification
+SELECT a, COUNT(b) OVER (PARTITION BY c) FROM t;
+
+-- PARTITION BY with two partitioning columns, no ORDER BY or window specification
+SELECT a, COUNT(b) OVER (PARTITION BY c, d) FROM t;
+
+-- PARTITION BY with two partitioning columns and ORDER BY, no window specification
+SELECT a, SUM(b) OVER (PARTITION BY c, d ORDER BY e, f) FROM t;
+
+-- PARTITION BY with partitioning, ORDER BY, and window specification
+SELECT a, SUM(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN 3 PRECEDING AND 3 FOLLOWING)
+FROM t;
+SELECT a, AVG(b) OVER (PARTITION BY c ORDER BY d ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
+FROM t;
+```


[flink] 05/25: [FLINK-29025][docs] add join page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 84e89636dcd1fd04bb842b3a4583914b3031d6f7
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:11:34 2022 +0800

    [FLINK-29025][docs] add join page for Hive dialect
---
 .../hiveCompatibility/hiveDialect/Queries/join.md  | 105 +++++++++++++++++++++
 .../hiveCompatibility/hiveDialect/Queries/join.md  | 105 +++++++++++++++++++++
 2 files changed, 210 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
new file mode 100644
index 00000000000..6c7073309dc
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
@@ -0,0 +1,105 @@
+---
+title: "Join"
+weight: 4
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Join
+
+## Description
+
+`JOIN` is used to combine rows from two relations based on a join condition.
+
+## Syntax
+
+Hive Dialect supports the following syntax for joining tables:
+```sql
+join_table:
+    table_reference [ INNER ] JOIN table_factor [ join_condition ]
+  | table_reference { LEFT | RIGHT | FULL } [ OUTER ] JOIN table_reference join_condition
+  | table_reference LEFT SEMI JOIN table_reference [ ON expression ] 
+  | table_reference CROSS JOIN table_reference [ join_condition ]
+ 
+table_reference:
+    table_factor
+  | join_table
+ 
+table_factor:
+    tbl_name [ alias ]
+  | table_subquery alias
+  | ( table_references )
+ 
+join_condition:
+    { ON expression | USING ( colName [, ...] ) }
+```
+
+## JOIN Type
+
+### INNER JOIN
+
+`INNER JOIN` returns the rows matched in both join sides. `INNER JOIN` is the default join type.
+
+### LEFT JOIN
+
+`LEFT JOIN` returns all the rows from the left side and the matched values from the right side, concatenating the values from both sides.
+If there's no match on the right side, it appends `NULL` values. `LEFT JOIN` is equivalent to `LEFT OUTER JOIN`.
+
+### RIGHT JOIN
+
+`RIGHT JOIN` returns all the rows from the right side and the matched values from the left side, concatenating the values from both sides.
+If there's no match on the left side, it appends `NULL` values. `RIGHT JOIN` is equivalent to `RIGHT OUTER JOIN`.
+
+### FULL JOIN
+
+`FULL JOIN` returns all the rows from both sides, concatenating the values from both sides. If one side has no matching row, it appends `NULL` values.
+`FULL JOIN` is equivalent to `FULL OUTER JOIN`.
+
+### LEFT SEMI JOIN
+
+`LEFT SEMI JOIN` returns the rows from the left side that have a match on the right side. It won't include the values from the right side.
+
+### CROSS JOIN
+
+`CROSS JOIN` returns the Cartesian product of two join sides.
+
+## Examples
+
+```sql
+-- INNER JOIN
+SELECT t1.x FROM t1 INNER JOIN t2 USING (x);
+SELECT t1.x FROM t1 INNER JOIN t2 ON t1.x = t2.x;
+
+-- LEFT JOIN
+SELECT t1.x FROM t1 LEFT JOIN t2 USING (x);
+SELECT t1.x FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x;
+
+-- RIGHT JOIN
+SELECT t1.x FROM t1 RIGHT JOIN t2 USING (x);
+SELECT t1.x FROM t1 RIGHT OUTER JOIN t2 ON t1.x = t2.x;
+
+-- FULL JOIN
+SELECT t1.x FROM t1 FULL JOIN t2 USING (x);
+SELECT t1.x FROM t1 FULL OUTER JOIN t2 ON t1.x = t2.x;
+
+-- LEFT SEMI JOIN
+SELECT t1.x FROM t1 LEFT SEMI JOIN t2 ON t1.x = t2.x;
+
+-- CROSS JOIN
+SELECT t1.x FROM t1 CROSS JOIN t2 USING (x);
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
new file mode 100644
index 00000000000..6c7073309dc
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/join.md
@@ -0,0 +1,105 @@
+---
+title: "Join"
+weight: 4
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Join
+
+## Description
+
+`JOIN` is used to combine rows from two relations based on a join condition.
+
+## Syntax
+
+Hive Dialect supports the following syntax for joining tables:
+```sql
+join_table:
+    table_reference [ INNER ] JOIN table_factor [ join_condition ]
+  | table_reference { LEFT | RIGHT | FULL } [ OUTER ] JOIN table_reference join_condition
+  | table_reference LEFT SEMI JOIN table_reference [ ON expression ] 
+  | table_reference CROSS JOIN table_reference [ join_condition ]
+ 
+table_reference:
+    table_factor
+  | join_table
+ 
+table_factor:
+    tbl_name [ alias ]
+  | table_subquery alias
+  | ( table_references )
+ 
+join_condition:
+    { ON expression | USING ( colName [, ...] ) }
+```
+
+## JOIN Type
+
+### INNER JOIN
+
+`INNER JOIN` returns the rows matched in both join sides. `INNER JOIN` is the default join type.
+
+### LEFT JOIN
+
+`LEFT JOIN` returns all the rows from the left side and the matched values from the right side, concatenating the values from both sides.
+If there's no match on the right side, it appends `NULL` values. `LEFT JOIN` is equivalent to `LEFT OUTER JOIN`.
+
+### RIGHT JOIN
+
+`RIGHT JOIN` returns all the rows from the right side and the matched values from the left side, concatenating the values from both sides.
+If there's no match on the left side, it appends `NULL` values. `RIGHT JOIN` is equivalent to `RIGHT OUTER JOIN`.
+
+### FULL JOIN
+
+`FULL JOIN` returns all the rows from both sides, concatenating the values from both sides. If one side has no matching row, it appends `NULL` values.
+`FULL JOIN` is equivalent to `FULL OUTER JOIN`.
+
+### LEFT SEMI JOIN
+
+`LEFT SEMI JOIN` returns the rows from the left side that have a match on the right side. It won't include the values from the right side.
+
+### CROSS JOIN
+
+`CROSS JOIN` returns the Cartesian product of two join sides.
+
+## Examples
+
+```sql
+-- INNER JOIN
+SELECT t1.x FROM t1 INNER JOIN t2 USING (x);
+SELECT t1.x FROM t1 INNER JOIN t2 ON t1.x = t2.x;
+
+-- LEFT JOIN
+SELECT t1.x FROM t1 LEFT JOIN t2 USING (x);
+SELECT t1.x FROM t1 LEFT OUTER JOIN t2 ON t1.x = t2.x;
+
+-- RIGHT JOIN
+SELECT t1.x FROM t1 RIGHT JOIN t2 USING (x);
+SELECT t1.x FROM t1 RIGHT OUTER JOIN t2 ON t1.x = t2.x;
+
+-- FULL JOIN
+SELECT t1.x FROM t1 FULL JOIN t2 USING (x);
+SELECT t1.x FROM t1 FULL OUTER JOIN t2 ON t1.x = t2.x;
+
+-- LEFT SEMI JOIN
+SELECT t1.x FROM t1 LEFT SEMI JOIN t2 ON t1.x = t2.x;
+
+-- CROSS JOIN
+SELECT t1.x FROM t1 CROSS JOIN t2 USING (x);
+```


[flink] 10/25: [FLINK-29025][docs] add cte page for Hive dialect

Posted by ja...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9c040e2f51449fe3a3ec3021db0882c5304e190d
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Mon Aug 29 15:18:52 2022 +0800

    [FLINK-29025][docs] add cte page for Hive dialect
---
 .../hiveCompatibility/hiveDialect/Queries/cte.md   | 67 ++++++++++++++++++++++
 .../hiveCompatibility/hiveDialect/Queries/cte.md   | 67 ++++++++++++++++++++++
 2 files changed, 134 insertions(+)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
new file mode 100644
index 00000000000..e609e912c9d
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -0,0 +1,67 @@
+---
+title: "CTE"
+weight: 9
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Common Table Expression (CTE)
+
+## Description
+
+A Common Table Expression (CTE) is a temporary result set derived from a query specified in a `WITH` clause, which immediately precedes a `SELECT`
+or `INSERT` keyword. The CTE is defined only within the execution scope of a single statement, and can be referred to within that scope.
+
+## Syntax
+
+```sql
+withClause: cteClause [, ...]
+cteClause: cte_name AS (select statement)
+```
+
+
+{{< hint warning >}}
+**Note:**
+- The `WITH` clause is not supported within a subquery block
+- CTEs are supported in views, `CTAS` and `INSERT` statements
+- [Recursive queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
+{{< /hint >}}
+
+## Examples
+
+```sql
+WITH q1 AS ( SELECT key FROM src WHERE key = '5')
+SELECT *
+FROM q1;
+
+-- chaining CTEs
+WITH q1 AS ( SELECT key FROM q2 WHERE key = '5'),
+q2 AS ( SELECT key FROM src WHERE key = '5')
+SELECT * FROM (SELECT key FROM q1) a;
+
+-- insert example
+WITH q1 AS ( SELECT key, value FROM src WHERE key = '5')
+FROM q1
+INSERT OVERWRITE TABLE t1
+SELECT *;
+
+-- ctas example
+CREATE TABLE t2 AS
+WITH q1 AS ( SELECT key FROM src WHERE key = '4')
+SELECT * FROM q1;
+```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
new file mode 100644
index 00000000000..e609e912c9d
--- /dev/null
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -0,0 +1,67 @@
+---
+title: "CTE"
+weight: 9
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Common Table Expression (CTE)
+
+## Description
+
+A Common Table Expression (CTE) is a temporary result set derived from a query specified in a `WITH` clause, which immediately precedes a `SELECT`
+or `INSERT` keyword. The CTE is defined only within the execution scope of a single statement, and can be referred to within that scope.
+
+## Syntax
+
+```sql
+withClause: cteClause [, ...]
+cteClause: cte_name AS (select statement)
+```
+
+
+{{< hint warning >}}
+**Note:**
+- The `WITH` clause is not supported within a subquery block
+- CTEs are supported in views, `CTAS` and `INSERT` statements
+- [Recursive queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
+{{< /hint >}}
+
+## Examples
+
+```sql
+WITH q1 AS ( SELECT key FROM src WHERE key = '5')
+SELECT *
+FROM q1;
+
+-- chaining CTEs
+WITH q1 AS ( SELECT key FROM q2 WHERE key = '5'),
+q2 AS ( SELECT key FROM src WHERE key = '5')
+SELECT * FROM (SELECT key FROM q1) a;
+
+-- insert example
+WITH q1 AS ( SELECT key, value FROM src WHERE key = '5')
+FROM q1
+INSERT OVERWRITE TABLE t1
+SELECT *;
+
+-- ctas example
+CREATE TABLE t2 AS
+WITH q1 AS ( SELECT key FROM src WHERE key = '4')
+SELECT * FROM q1;
+```