Posted to commits@flink.apache.org by ja...@apache.org on 2022/09/20 01:45:33 UTC

[flink] 21/25: [FLINK-29025][docs] Improve documentation of Hive compatibility pages

This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b07e433d07f93c0e2447c052cce01994f0ed512e
Author: luoyuxia <lu...@alumni.sjtu.edu.cn>
AuthorDate: Wed Sep 7 19:38:01 2022 +0800

    [FLINK-29025][docs] Improve documentation of Hive compatibility pages
---
 .../hiveCompatibility/hiveDialect/Queries/cte.md   |   6 +-
 .../hiveDialect/Queries/group-by.md                |  18 ++--
 .../hiveDialect/Queries/overview.md                |  21 ++--
 .../hiveDialect/Queries/set-op.md                  |   8 +-
 .../Queries/sort-cluster-distribute-by.md          |   8 +-
 .../hiveDialect/Queries/sub-queries.md             |   2 +-
 .../hiveDialect/Queries/transform.md               |  34 ++++---
 .../dev/table/hiveCompatibility/hiveDialect/add.md |   6 +-
 .../table/hiveCompatibility/hiveDialect/alter.md   |   2 +-
 .../table/hiveCompatibility/hiveDialect/create.md  |  17 ++--
 .../table/hiveCompatibility/hiveDialect/drop.md    |   2 +-
 .../table/hiveCompatibility/hiveDialect/insert.md  | 113 +++++++++++----------
 .../hiveCompatibility/hiveDialect/load-data.md     |   4 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  41 +++++---
 .../dev/table/hiveCompatibility/hiveDialect/set.md |  29 +++---
 .../table/hiveCompatibility/hiveDialect/show.md    |   7 +-
 .../hiveCompatibility/hiveDialect/Queries/cte.md   |   6 +-
 .../hiveDialect/Queries/group-by.md                |  18 ++--
 .../hiveDialect/Queries/overview.md                |  19 ++--
 .../hiveDialect/Queries/set-op.md                  |   8 +-
 .../Queries/sort-cluster-distribute-by.md          |   8 +-
 .../hiveDialect/Queries/sub-queries.md             |   2 +-
 .../hiveDialect/Queries/transform.md               |  34 ++++---
 .../dev/table/hiveCompatibility/hiveDialect/add.md |  12 ++-
 .../table/hiveCompatibility/hiveDialect/alter.md   |   2 +-
 .../table/hiveCompatibility/hiveDialect/create.md  |  21 ++--
 .../table/hiveCompatibility/hiveDialect/drop.md    |   2 +-
 .../table/hiveCompatibility/hiveDialect/insert.md  | 113 +++++++++++----------
 .../hiveCompatibility/hiveDialect/load-data.md     |   4 +-
 .../hiveCompatibility/hiveDialect/overview.md      |  45 +++++---
 .../dev/table/hiveCompatibility/hiveDialect/set.md |  27 ++---
 .../table/hiveCompatibility/hiveDialect/show.md    |   7 +-
 32 files changed, 355 insertions(+), 291 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
index e609e912c9d..96c4a73709d 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -30,14 +30,14 @@ or `INSERT` keyword. The CTE is defined only with the execution scope of a singl
 ## Syntax
 
 ```sql
-withClause: cteClause [, ...]
-cteClause: cte_name AS (select statment)
+withClause: WITH cteClause [ , ... ]
+cteClause: cte_name AS (select statement)
 ```
 
 
 {{< hint warning >}}
 **Note:**
-- The `WITH` clause is not supported within SubQuery block
+- The `WITH` clause is not supported within a sub-query block
 - CTEs are supported in Views, `CTAS` and `INSERT` statements
 - [Recursive Queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
   {{< /hint >}}
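As a quick intuition aid for the single-statement CTE scoping described above, the same `WITH` shape can be tried in any engine with CTE support. A sketch using Python's `sqlite3` as a stand-in (SQLite, not the Hive dialect; the table and column names here are made up):

```python
import sqlite3

# SQLite stands in for the Hive dialect here; the WITH clause's behaviour
# (a named result set scoped to a single statement) is analogous.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key INTEGER, value TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

rows = conn.execute(
    """
    WITH big_keys AS (SELECT key, value FROM t WHERE key > 1)
    SELECT value FROM big_keys ORDER BY key
    """
).fetchall()
print(rows)  # [('b',), ('c',)]
```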
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
index 719ce532dc1..514fbe9bee6 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -31,13 +31,19 @@ Hive dialect also supports enhanced aggregation features to do multiple aggregat
 ## Syntax
 
 ```sql
-groupByClause: groupByClause-1 | groupByClause-2
-groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+group_by_clause: 
+  group_by_clause_1 | group_by_clause_2
+
+group_by_clause_1: 
+  GROUP BY group_expression [ , ... ] [ WITH ROLLUP | WITH CUBE ] 
  
-groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
-grouping_set: { expression | ( [ expression [, ...] ] ) }
+group_by_clause_2: 
+  GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [ , ... ] ) } [ , ... ]
+
+grouping_set: 
+  { expression | ( [ expression [ , ... ] ] ) }
  
-groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+group_by_query: SELECT expression [ , ... ] FROM src group_by_clause?
 ```
 In `group_expression`, columns can also be specified by position number. But please remember:
 - For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
@@ -97,7 +103,7 @@ It represents the given list and all of its possible subsets - the power set.
 
 For example:
 ```sql
-GROUP BY a, b, c, WITH CUBE
+GROUP BY a, b, c WITH CUBE
 ```
 is equivalent to
 ```sql
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
index 90e6a062223..61d375f74c5 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,7 +35,7 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 - [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
 - [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
 - [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
 - [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
 - [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
 - [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
@@ -43,11 +43,11 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
-[WITH CommonTableExpression (, CommonTableExpression)*]
-SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+[WITH CommonTableExpression [ , ... ]]
+SELECT [ALL | DISTINCT] select_expr [ , ... ]
   FROM table_reference
   [WHERE where_condition]
   [GROUP BY col_list]
@@ -57,14 +57,15 @@ SELECT [ALL | DISTINCT] select_expr, select_expr, ...
   ]
  [LIMIT [offset,] rows]
 ```
-- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
 
 ### WHERE Clause
 
-The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFS](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
-in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+in the `WHERE` clause. Some types of [sub-queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
 
 ### GROUP BY Clause
 
@@ -74,7 +75,7 @@ Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect
 
 The `ORDER BY` clause is used to return the result rows in a sorted manner in the user specified order.
 Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
-a total order in the output.
+a global order in the output.
 
 {{< hint warning >}}
 **Note:**
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
index 1178844d2c3..cb01213c6a7 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -22,7 +22,7 @@ under the License.
 
 # Set Operations
 
-Set Operation is used to combing two select into single one.
+Set Operations are used to combine multiple `SELECT` statements into a single result set.
 Hive dialect supports the following operations:
 - UNION
 - INTERSECT
@@ -39,7 +39,7 @@ Hive dialect supports the following operations:
 ### Syntax
 
 ```sql
-select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { UNION [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -60,7 +60,7 @@ SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { INTERSECT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -83,7 +83,7 @@ SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { EXCEPT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
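As a supplementary note on the `ALL` vs `DISTINCT` variants above: the `ALL` forms follow multiset semantics, so per-row duplicate counts matter. A rough Python sketch of those semantics (illustrative only, not how Flink executes set operations):

```python
from collections import Counter

# Multiset semantics of the ALL variants, modelled with Counter.
t1 = [(1, "a"), (1, "a"), (2, "b")]
t2 = [(1, "a"), (3, "c")]

c1, c2 = Counter(t1), Counter(t2)

union_all = list((c1 + c2).elements())          # keeps every duplicate
intersect_all = list((c1 & c2).elements())      # min of per-row counts
except_all = list((c1 - c2).elements())         # count difference, floored at 0

print(sorted(intersect_all))  # [(1, 'a')]
print(sorted(except_all))     # [(1, 'a'), (2, 'b')]
```

The `DISTINCT` variants would instead deduplicate each side first.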
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
index 5f6954bb880..19649d225d2 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -33,9 +33,9 @@ So when there's more than one partition, `SORT BY` may return result that's part
 ### Syntax
 
 ```sql
-colOrder: ( ASC | DESC )
-sortBy: SORT BY BY expression [ , ... ]
 query: SELECT expression [ , ... ] FROM src sortBy
+sortBy: SORT BY expression [ colOrder ] [ , ... ]
+colOrder: ( ASC | DESC )
 ```
 
 ### Parameters
@@ -67,8 +67,12 @@ query: SELECT expression [ , ... ] FROM src distributeBy
 ### Examples
 
 ```sql
+-- only use DISTRIBUTE BY clause
 SELECT x, y FROM t DISTRIBUTE BY x;
 SELECT x, y FROM t DISTRIBUTE BY abs(y);
+
+-- use both DISTRIBUTE BY and SORT BY clause
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY y DESC;
 ```
 
 ## Cluster By
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
index 592d130bf46..5a63f1b6a3b 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -34,7 +34,7 @@ The sub-query can also be a query expression with `UNION`. Hive dialect supports
 ### Syntax
 
 ```sql
-select_statement from ( subquery_select_statement ) [ AS ] name
+select_statement from ( select_statement ) [ AS ] name
 ```
 
 ### Example
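A `FROM`-clause sub-query per the syntax above can likewise be sketched with Python's `sqlite3` as a stand-in (SQLite, not the Hive dialect; names are made up). Note the sub-query must be given a name:

```python
import sqlite3

# SQLite stand-in: a sub-query in FROM must be aliased, as in the syntax above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

rows = conn.execute(
    "SELECT doubled FROM (SELECT x * 2 AS doubled FROM t) AS sub WHERE doubled > 2"
).fetchall()
print(rows)  # [(4,), (6,)]
```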
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
index 1ba9a9c0cfe..b147ae7d8d9 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -29,6 +29,15 @@ The `TRANSFORM` clause allows user to transform inputs using user-specified comm
 ## Syntax
 
 ```sql
+query:
+   SELECT TRANSFORM ( expression [ , ... ] )
+   [ inRowFormat ]
+   [ inRecordWriter ]
+   USING command_or_script
+   [ AS colName [ colType ] [ , ... ] ]
+   [ outRowFormat ]
+   [ outRecordReader ]
+
 rowFormat
   : ROW FORMAT
     (DELIMITED [FIELDS TERMINATED BY char]
@@ -45,14 +54,6 @@ outRowFormat : rowFormat
 inRowFormat : rowFormat
 outRecordReader : RECORDREADER className
 inRecordWriter: RECORDWRITER record_write_class
- 
-query:
-   SELECT TRANSFORM '(' expression [ , ... ] ')'
-    ( inRowFormat )?
-    ( inRecordWriter )?
-    USING command_or_script
-    ( AS colName ( colType )? [, ... ] )?
-    ( outRowFormat )? ( outRecordReader )?
 ```
 
 {{< hint warning >}}
@@ -78,21 +79,26 @@ query:
   and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
 
 - inRecordWriter
+
   Specifies which writer (fully-qualified class name) to use to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`
 
 - outRecordReader
+
   Specifies which reader (fully-qualified class name) to use to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`
 
 - command_or_script
+
   Specifies a command or a path to a script to process data.
 
   {{< hint warning >}}
   **Note:**
 
-  Add a script file and then transform input using the script is not supported yet.
+  - Adding a script file and then transforming the input using that script is not supported yet.
+  - The script used must be a local script and should be accessible on all hosts in the cluster. 
   {{< /hint >}}
 
 - colType
+
   Specifies what data type the output of the command/script should be cast to. By default, it will be the `STRING` data type.
 
 
@@ -110,20 +116,20 @@ For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware the foll
 ```sql
 CREATE TABLE src(key string, value string);
 -- transform using
-SELECT TRANSFORM(key, value) using 'script' from t1;
+SELECT TRANSFORM(key, value) using 'cat' from src;
 
 -- transform using with specific record writer and record reader
 SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
  WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
  RECORDWRITER 'MyRecordWriter'
- using 'script'
+ using 'cat'
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  RECORDREADER 'MyRecordReader' from src;
  
 -- use keyword MAP instead of TRANSFORM
-FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'script' as (c1, c2);
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'cat' as (c1, c2);
 
 -- specific the output of transform
-SELECT TRANSFORM(column) USING 'script' AS c1, c2;
-SELECT TRANSFORM(column) USING 'script' AS(c1 INT, c2 INT);
+SELECT TRANSFORM(column) USING 'cat' AS c1, c2;
+SELECT TRANSFORM(column) USING 'cat' AS(c1 INT, c2 INT);
 ```
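For context on `USING 'cat'` above: the command or script receives each input row as tab-separated fields on stdin, one row per line, and must emit rows in the same shape on stdout. A hypothetical transform-script sketch (not part of the Flink docs) that upper-cases the second column:

```python
# Hypothetical TRANSFORM script body. Hive-style TRANSFORM pipes rows to the
# process as tab-separated fields, one per line, and parses stdout the same
# way. As a real script this would loop over sys.stdin and print each result.
def transform(lines):
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        yield f"{key}\t{value.upper()}"

demo = list(transform(["1\tabc\n", "2\txy\n"]))
print(demo)  # ['1\tABC', '2\tXY']
```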
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
index bae85d4464e..fdbc0bb0dff 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -3,7 +3,7 @@ title: "ADD Statements"
 weight: 7
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -50,5 +50,9 @@ ADD JAR filename;
 ### Examples
 
 ```sql
+-- add a local jar
 ADD JAR t.jar;
+
+-- add a remote jar
+ADD JAR hdfs://namenode-host:port/path/t.jar;
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
index b595d6b3196..5738036fbac 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -3,7 +3,7 @@ title: "ALTER Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/alter.html
+- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
index ba3dfb2ba86..8d4f9bd1c04 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -3,7 +3,7 @@ title: "CREATE Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -128,9 +128,8 @@ table_constraint:
 {{< hint warning >}}
 **NOTE:**
 
-- Create table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
 - Create temporary table is not supported yet.
-  {{< /hint >}}
+{{< /hint >}}
 
 ### Examples
 
@@ -157,7 +156,6 @@ CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
 
 ### Description
 
-`View`
 `CREATE VIEW` creates a view with the given name.
 If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
 (If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
@@ -233,15 +231,18 @@ The function is registered to metastore and will exist in all session unless the
 - `[USING JAR 'file_uri']`
 
   User can use the clause to add Jar that contains the implementation of the function along with its dependencies while creating the function.
-  The `file_uri` can be a local file or distributed file system.
-
+  The `file_uri` can be on a local or distributed file system.
+  Flink will automatically download remote jars when the function is used in queries. The downloaded jars will be removed when the session exits.
 
 ### Examples
 
 ```sql
--- create a function accuming the class `SimpleUdf` has existed in class path
+-- create a function assuming the class `SimpleUdf` already exists in the class path
 CREATE FUNCTION simple_udf AS 'SimpleUdf';
 
--- create function using jar accuming the class `SimpleUdf` hasn't existed in class path
+-- create a function using jar, assuming the class `SimpleUdf` doesn't exist in the class path
 CREATE  FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+
+-- create function using remote jar
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR 'hdfs://namenode-host:port/path/SimpleUdf.jar';
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
index 67bca77e1d7..a507d53a05f 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -3,7 +3,7 @@ title: "DROP Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
index 60e9c2c32f4..94170b96dfa 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -3,7 +3,7 @@ title: "INSERT Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,31 +35,9 @@ can be specified by value expressions or result from query.
 
 ```sql
 -- Standard syntax
-INSERT [OVERWRITE] TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
-INSERT INTO TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
-FROM from_statement
-INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
--- Hive extension (dynamic partition inserts):
-INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
-  
-INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+INSERT { OVERWRITE | INTO } [TABLE] tablename
+ [PARTITION (partcol1[=val1], partcol2[=val2] ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 ```
 
 ### Parameters
@@ -85,14 +63,9 @@ INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 #### Dynamic Partition Inserts
 
-In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+When writing data into a Hive table's partition, users can specify the list of partition column names in the `PARTITION` clause with optional column values.
 If all the partition columns' values are given, we call this a static partition, otherwise it is a dynamic partition.
 
 Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
@@ -102,7 +75,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 {{< hint warning >}}
 **Note:**
 
-In Hive, by default, the user mush specify at least one static partition in case the user accidentally overwrites all partition, and user can
+In Hive, by default, users must specify at least one static partition, in case they accidentally overwrite all partitions, and users can
 set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode, which means all partitions are allowed to be dynamic.
@@ -127,11 +100,6 @@ INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
 --- dynamic partition 
 INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
 INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
-
--- multi-insert statements
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month > 6;
 ```
 
 ## INSERT OVERWRITE DIRECTORY
@@ -143,17 +111,13 @@ Query results can be inserted into filesystem directories by using a slight vari
 -- Standard syntax:
 INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
   [ROW FORMAT row_format] [STORED AS file_format] 
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
-[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
 row_format:
   : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
       [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
       [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
 ```
 
 ### Parameters
@@ -161,7 +125,7 @@ row_format:
 - directory_path
 
   The path for the directory to be inserted can be a full URI. If scheme or authority are not specified,
-  it'll use the scheme and authority from the hadoop configuration variable `fs.default.name` that specifies the Namenode URI.
+  it'll use the scheme and authority from the Flink configuration variable `fs.default-scheme` that specifies the filesystem scheme.
 
 - `LOCAL`
 
@@ -190,24 +154,61 @@ row_format:
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 ### Examples
 
 ```sql
 --- insert directory with specific format
 INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+
 -- insert directory with specific row format
 INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
   COLLECTION ITEMS TERMINATED BY '#'
   MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
-  
--- multiple insert
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```
+
+## Multiple Inserts
+
+Hive dialect enables users to insert into multiple destinations in a single statement, mixing inserts into tables and into directories as needed.
+With this syntax, Flink will minimize the number of data scans required: it can insert data into multiple tables/directories by scanning the input data just once.
+
+### Syntax
+
+```sql
+-- multiple insert into table
+FROM from_statement
+  INSERT { OVERWRITE | INTO } [TABLE] tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
+  INSERT { OVERWRITE | INTO } [TABLE] tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+  [, ... ]
+
+-- multiple insert into directory
+FROM from_statement
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path [ROW FORMAT row_format] [STORED AS file_format] select_statement1,
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path [ROW FORMAT row_format] [STORED AS file_format] select_statement2
+  [, ... ]
+
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Examples
+
+```sql
+-- multiple insert into table
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+
+-- multiple insert into directory
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+    
+-- mix inserting into a table and into a directory in a single statement
+FROM (SELECT month, value from t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
 ```
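The single-scan behaviour of multiple inserts can be pictured as one pass over the source with each row routed to every sink whose predicate matches. A hedged Python sketch (illustrative only, not Flink internals; the sink names mirror the example above):

```python
# One pass over the input; each row is routed to every sink whose predicate
# matches, so the source is scanned exactly once regardless of sink count.
source = [(1, "jan"), (6, "jun"), (7, "jul"), (12, "dec")]  # (month, value)

sinks = {
    "t1_1": (lambda month: month <= 6, []),
    "t1_2": (lambda month: month > 6, []),
}

scans = 0
for month, value in source:  # single scan of the source
    scans += 1
    for _name, (pred, rows) in sinks.items():
        if pred(month):
            rows.append(value)

print(sinks["t1_1"][1])  # ['jan', 'jun']
print(sinks["t1_2"][1])  # ['jul', 'dec']
```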
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
index 246759bbe1b..534b0132408 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -3,7 +3,7 @@ title: "Load Data Statements"
 weight: 4
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/load.html
+- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -53,7 +53,7 @@ LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION
     - it will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the users' current working directory.
       The user can specify a full URI for local files as well - for example: file:///user/hive/warehouse/data1
     - it will try to **copy** all the files addressed by `filepath` to the target file system.
-      The target file system is inferred by looking at the location attribution. The coped data files will then be moved to the table.
+      The target file system is inferred by looking at the location attribute. The copied data files will then be moved to the location of the table.
 
   If not, then:
     - if schema or authority are not specified, it'll use the schema and authority from the hadoop configuration variable `fs.default.name` that
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
index 127b74940d1..ef56af19f29 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -3,7 +3,7 @@ title: "概览"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/overview
+- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -29,40 +29,49 @@ under the License.
 
 ## 使用 Hive 方言
 
-Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何使用 SQL 客户端和 Table API 设置方言。
+Flink 目前支持两种 SQL 方言: `default` 和 `hive`。你需要先切换到 Hive 方言,然后才能使用 Hive 语法编写。下面介绍如何在 SQL 客户端、启动了 HiveServer2 endpoint 的 SQL Gateway 以及 Table API 中设置方言。
 还要注意,你可以为执行的每个语句动态切换方言。无需重新启动会话即可使用其他方言。
 
 {{< hint warning >}}
 **Note:**
 
 - 为了使用 Hive 方言, 你必须首先添加和 Hive 相关的依赖. 请参考 [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) 如何添加这些依赖。
+- 从 Flink 1.15版本开始,如果需要使用 Hive 方言的话,请首先将 `FLINK_HOME/opt` 下面的 `flink-table-planner_2.12` jar 包放到 `FLINK_HOME/lib` 下,并将 `FLINK_HOME/lib`
+  下的 `flink-table-planner-loader` jar 包移出 `FLINK_HOME/lib` 目录。否则将抛出 `ValidationException`。具体原因请参考 [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128)。
 - 请确保当前的 Catalog 是 [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). 否则, 将使用 Flink 的默认方言。
-- 了实现更好的语法和语义的兼容,强烈建议首先加载 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
+  在启动了 HiveServer2 endpoint 的 SQL Gateway 中,当前的 Catalog 默认就是 HiveCatalog。
+- 为了实现更好的语法和语义的兼容,强烈建议首先加载 [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) 
   并将其放在 Module 列表的首位,以便在函数解析时优先使用 Hive 内置函数。 
-  请参考文档 [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) 来将 HiveModule 放在 Module 列表的首.
+  请参考文档 [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) 来将 HiveModule 放在 Module 列表的首位。
+  在启动了 HiveServer2 endpoint 的 SQL Gateway 中,HiveModule 已经被默认加载进来了。
 - Hive 方言只支持 `db.table` 这种两级的标识符,不支持带有 Catalog 名字的标识符。
-- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的[Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
+- 虽然所有 Hive 版本支持相同的语法,但是一些特定的功能是否可用仍取决于你使用的 [Hive 版本]({{< ref "docs/connectors/table/hive/overview" >}}#支持的hive版本)。例如,更新数据库位置
   只在 Hive-2.4.0 或更高版本支持。
-  {{< /hint >}}
+- Hive 方言主要是在批模式下使用的,某些 Hive 的语法([Sort/Cluster/Distributed BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}), 等)还没有在流模式下支持。
+{{< /hint >}}
 
 ### SQL Client
 
 SQL 方言可以通过 `table.sql-dialect` 属性指定。你可以在 SQL 客户端启动后设置方言。
 
 ```bash
-Flink SQL> SET 'table.sql-dialect' = 'hive'; -- 使用 Hive 方言
+Flink SQL> SET table.sql-dialect = hive; -- 使用 Hive 方言
 [INFO] Session property has been set.
 
-Flink SQL> SET 'table.sql-dialect' = 'default'; -- 使用 Flink 默认 方言
+Flink SQL> SET table.sql-dialect = default; -- 使用 Flink 默认 方言
 [INFO] Session property has been set.
 ```
 
-{{< hint warning >}}
-**Note:**
-Since Flink 1.15, when you want to use Hive dialect in Flink SQL client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
-with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it'll throw the following exception:
-{{< /hint >}}
-{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+### SQL Gateway Configured With HiveServer2 Endpoint
+
+在启动了 HiveServer2 endpoint 的 SQL Gateway 中,默认使用 Hive 方言,所以如果你想使用 Hive 方言的话,不需要手动切换,直接就能使用。但是如果你想使用 Flink 的默认方言,则需要手动进行切换。
+
+```bash
+# 假设已经通过 beeline 连接上了 SQL Gateway
+jdbc:hive2> SET table.sql-dialect = default; -- 使用 Flink 默认 方言
+
+jdbc:hive2> SET table.sql-dialect = hive; -- 使用 Hive 方言
+```
 
 ### Table API
 
@@ -73,8 +82,10 @@ with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise,
 ```java
 EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
+
 // to use hive dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+
 // to use default dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 ```
@@ -84,8 +95,10 @@ tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 from pyflink.table import *
 settings = EnvironmentSettings.in_batch_mode()
 t_env = TableEnvironment.create(settings)
+
 # to use hive dialect
 t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
+
 # to use default dialect
 t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
 ```
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
index 17008d7464b..88264a3f41a 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -3,7 +3,7 @@ title: "SET Statements"
 weight: 8
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,17 +30,11 @@ The `SET` statement sets a property which provide a ways to set variables for a
 configuration property including system variable and Hive configuration.
 But environment variable can't be set via `SET` statement. The behavior of `SET` with Hive dialect is compatible to Hive's.
 
-## Syntax
-
-```sql
-SET key=value;
-```
-
 ## EXAMPLES
 
 ```sql
 -- set Flink's configuration
-SET 'table.sql-dialect'='default';
+SET table.sql-dialect=default;
 
 -- set Hive's configuration
 SET hiveconf:k1=v1;
@@ -52,21 +46,22 @@ SET system:k2=v2;
 SET hivevar:k3=v3;
 
 -- get value for configuration
+SET table.sql-dialect;
 SET hiveconf:k1;
 SET system:k2;
 SET hivevar:k3;
 
--- print options
+-- only print Flink's configuration
+SET;
+
+-- print all configurations
 SET -v;
-SET; 
 ```
 
 {{< hint warning >}}
 **Note:**
-
-In Hive, the `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means it'll set it to Hive Conf.
-
-But in Flink, with Hive dialect, such `SET` command `set xx=yy` will set `xx` with value `yy` to Flink's configuration.
-
-So, if you want to set configuration to Hive's Conf, please add the prefix `hiveconf:`, using the  `SET` command like `SET hiveconf:xx=yy`.
-{{< /hint  >}}
+- In Hive, a `SET` command like `SET xx=yy`, whose key has no prefix, is equivalent to `SET hiveconf:xx=yy`, which means the key is set in Hive's conf.
+  But in Flink, with Hive dialect, such a `SET` command will set `xx` to `yy` in Flink's configuration.
+  So, if you want to set a key in Hive's conf, please add the prefix `hiveconf:`, using a `SET` command like `SET hiveconf:xx=yy`.
+- In Hive dialect, the `key`/`value` to be set shouldn't be quoted.
+{{< /hint >}}
diff --git a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
index a21a8cf288c..fb3d5acffbc 100644
--- a/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
+++ b/docs/content.zh/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -1,9 +1,9 @@
 ---
-title: "Show Statements"
+title: "SHOW Statements"
 weight: 5
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -90,8 +90,7 @@ partition_spec:
 
   The optional `partition_spec` is used to what kind of partition should be returned.
   When specified, the partitions that match the `partition_spec` specification are returned.
-  The `partition_spec` can be partial.
-
+  The `partition_spec` can be partial, which means you can specify only some of the partition columns when listing the partitions.
 
 ### Examples
 
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
index e609e912c9d..96c4a73709d 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte.md
@@ -30,14 +30,14 @@ or `INSERT` keyword. The CTE is defined only with the execution scope of a singl
 ## Syntax
 
 ```sql
-withClause: cteClause [, ...]
-cteClause: cte_name AS (select statment)
+withClause: WITH cteClause [ , ... ]
+cteClause: cte_name AS (select statement)
 ```
 
 
 {{< hint warning >}}
 **Note:**
-- The `WITH` clause is not supported within SubQuery block
+- The `WITH` clause is not supported within Sub-Query block
 - CTEs are supported in Views, `CTAS` and `INSERT` statement
 - [Recursive Queries](https://wiki.postgresql.org/wiki/CTEReadme#Parsing_recursive_queries) are not supported
   {{< /hint >}}
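As an illustration of the syntax above (the table `src`, its `key` column, and the destination `t1` are hypothetical names):

```sql
-- define a CTE and reference it in a SELECT
WITH q1 AS (SELECT key FROM src WHERE key = '5')
SELECT * FROM q1;

-- a CTE can also feed an INSERT statement
WITH q1 AS (SELECT key FROM src WHERE key = '5')
FROM q1
INSERT OVERWRITE TABLE t1 SELECT *;
```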
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
index 719ce532dc1..514fbe9bee6 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/group-by.md
@@ -31,13 +31,19 @@ Hive dialect also supports enhanced aggregation features to do multiple aggregat
 ## Syntax
 
 ```sql
-groupByClause: groupByClause-1 | groupByClause-2
-groupByClause-1: GROUP BY group_expression [, ...] [ WITH ROLLUP | WITH CUBE ]
+group_by_clause: 
+  group_by_clause_1 | group_by_clause_2
+
+group_by_clause_1: 
+  GROUP BY group_expression [ , ... ] [ WITH ROLLUP | WITH CUBE ] 
  
-groupByClause-2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [, ...] ) } [, ...]
-grouping_set: { expression | ( [ expression [, ...] ] ) }
+group_by_clause_2: 
+  GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [ , ... ] ) } [ , ... ]
+
+grouping_set: 
+  { expression | ( [ expression [ , ... ] ] ) }
  
-groupByQuery: SELECT expression [, ...] FROM src groupByClause?
+groupByQuery: SELECT expression [ , ... ] FROM src groupByClause?
 ```
 In `group_expression`, columns can be also specified by position number. But please remember:
 - For Hive 0.11.0 through 2.1.x, set `hive.groupby.orderby.position.alias` to true (the default is false)
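A hedged sketch of the two syntax forms above (the table `t` and its columns are hypothetical):

```sql
-- group_by_clause_1: simple grouping, optionally extended with WITH CUBE
SELECT a, b, SUM(c) FROM t GROUP BY a, b WITH CUBE;

-- group_by_clause_2: explicit grouping sets
SELECT a, b, SUM(c) FROM t
GROUP BY a, b GROUPING SETS ((a, b), (a), (b), ());
```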
@@ -97,7 +103,7 @@ It represents the given list and all of its possible subsets - the power set.
 
 For example:
 ```sql
-GROUP BY a, b, c, WITH CUBE
+GROUP BY a, b, c WITH CUBE
 ```
 is equivalent to
 ```sql
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
index 90e6a062223..d0daeb79cb0 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/Queries/overview
+- /dev/table/hive_compatibility/hive_dialect/queries/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,7 +35,7 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 - [Set Operation]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}})
 - [Lateral View]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/lateral-view" >}})
 - [Window Functions]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/window-functions" >}})
-- [SubQueries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
+- [Sub-Queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}})
 - [CTE]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}})
 - [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}})
 - [Table Sample]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/table-sample" >}})
@@ -43,11 +43,11 @@ The following lists some parts of HiveQL supported by the Hive dialect.
 ## Syntax
 
 The following section describes the overall query syntax.
-The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.
+The SELECT clause can be part of a query which also includes [common table expressions (CTE)]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/cte" >}}), set operations, and various other clauses.
 
 ```sql
-[WITH CommonTableExpression (, CommonTableExpression)*]
-SELECT [ALL | DISTINCT] select_expr, select_expr, ...
+[WITH CommonTableExpression [ , ... ]]
+SELECT [ALL | DISTINCT] select_expr [ , ... ]
   FROM table_reference
   [WHERE where_condition]
   [GROUP BY col_list]
@@ -57,13 +57,14 @@ SELECT [ALL | DISTINCT] select_expr, select_expr, ...
   ]
  [LIMIT [offset,] rows]
 ```
-- A `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
-- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [subquery]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
+- The `SELECT` statement can be part of a [set]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op" >}}) query or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) of another query
+- `CommonTableExpression` is a temporary result set derived from a query specified in a `WITH` clause
+- `table_reference` indicates the input to the query. It can be a regular table, a view, a [join]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/join" >}}) or a [sub-query]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}).
 - Table names and column names are case-insensitive
 
 ### WHERE Clause
 
-The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFS](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
+The `WHERE` condition is a boolean expression. Hive dialect supports a number of [operators and UDFs](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
 in the `WHERE` clause. Some types of [sub queries]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries" >}}) are supported in `WHERE` clause.
 
 ### GROUP BY Clause
@@ -74,7 +75,7 @@ Please refer to [GROUP BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect
 
 The `ORDER BY` clause is used to return the result rows in a sorted manner in the user specified order.
 Different from [SORT BY]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}#sort-by), `ORDER BY` clause guarantees
-a total order in the output.
+a global order in the output.
 
 {{< hint warning >}}
 **Note:**
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
index 1178844d2c3..83252bd2150 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/set-op.md
@@ -22,7 +22,7 @@ under the License.
 
 # Set Operations
 
-Set Operation is used to combing two select into single one.
+Set Operations are used to combine multiple `SELECT` statements into a single result set. 
 Hive dialect supports the following operations:
 - UNION
 - INTERSECT
@@ -39,7 +39,7 @@ Hive dialect supports the following operations:
 ### Syntax
 
 ```sql
-select_statement { UNION [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { UNION [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -60,7 +60,7 @@ SELECT x, y FROM t1 UNION ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { INTERSECT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { INTERSECT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
@@ -83,7 +83,7 @@ SELECT x, y FROM t1 INTERSECT ALL SELECT x, y FROM t2;
 ### Syntax
 
 ```sql
-select_statement { EXCEPT [ ALL | DISTINCT ] } select_statement [ .. ]
+<query> { EXCEPT [ ALL | DISTINCT ] } <query> [ .. ]
 ```
 
 ### Examples
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
index 5f6954bb880..5548ac48d81 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by.md
@@ -33,9 +33,9 @@ So when there's more than one partition, `SORT BY` may return result that's part
 ### Syntax
 
 ```sql
-colOrder: ( ASC | DESC )
-sortBy: SORT BY BY expression [ , ... ]
 query: SELECT expression [ , ... ] FROM src sortBy
+sortBy: SORT BY expression [ colOrder ] [ , ... ]
+colOrder: ( ASC | DESC )
 ```
 
 ### Parameters
@@ -67,8 +67,12 @@ query: SELECT expression [ , ... ] FROM src distributeBy
 ### Examples
 
 ```sql
+-- only use DISTRIBUTE BY clause
 SELECT x, y FROM t DISTRIBUTE BY x;
 SELECT x, y FROM t DISTRIBUTE BY abs(y);
+
+-- use both DISTRIBUTE BY and SORT BY clause
+SELECT x, y FROM t DISTRIBUTE BY x SORT BY y DESC;
 ```
 
 ## Cluster By
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
index 592d130bf46..5a63f1b6a3b 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/sub-queries.md
@@ -34,7 +34,7 @@ The sub-query can also be a query expression with `UNION`. Hive dialect supports
 ### Syntax
 
 ```sql
-select_statement from ( subquery_select_statement ) [ AS ] name
+select_statement FROM ( select_statement ) [ AS ] name
 ```
 
 ### Example
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
index 1ba9a9c0cfe..fd6af4b271f 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform.md
@@ -29,6 +29,15 @@ The `TRANSFORM` clause allows user to transform inputs using user-specified comm
 ## Syntax
 
 ```sql
+query:
+   SELECT TRANSFORM ( expression [ , ... ] )
+   [ inRowFormat ]
+   [ inRecordWriter ]
+   USING command_or_script
+   [ AS colName [ colType ] [ , ... ] ]
+   [ outRowFormat ]
+   [ outRecordReader ]
+
 rowFormat
   : ROW FORMAT
     (DELIMITED [FIELDS TERMINATED BY char]
@@ -45,14 +54,6 @@ outRowFormat : rowFormat
 inRowFormat : rowFormat
 outRecordReader : RECORDREADER className
 inRecordWriter: RECORDWRITER record_write_class
- 
-query:
-   SELECT TRANSFORM '(' expression [ , ... ] ')'
-    ( inRowFormat )?
-    ( inRecordWriter )?
-    USING command_or_script
-    ( AS colName ( colType )? [, ... ] )?
-    ( outRowFormat )? ( outRecordReader )?
 ```
 
 {{< hint warning >}}
@@ -78,21 +79,26 @@ query:
   and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual way.
 
 - inRecordWriter
+
   Specific use what writer(fully-qualified class name) to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter`
 
 - outRecordReader
+
   Specific use what reader(fully-qualified class name) to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader`
 
 - command_or_script
+
   Specifies a command or a path to script to process data.
 
   {{< hint warning >}}
   **Note:**
 
-  Add a script file and then transform input using the script is not supported yet.
+  - Adding a script file and then transforming the input using that script is not supported yet.
+  - The script must be a local script and should be accessible on all hosts in the cluster.
   {{< /hint >}}
 
 - colType
+
+  Specifies what data type the output of the command/script should be cast to. By default, it will be the `STRING` data type.
 
 
@@ -110,20 +116,20 @@ For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware the foll
 ```sql
 CREATE TABLE src(key string, value string);
 -- transform using
-SELECT TRANSFORM(key, value) using 'script' from t1;
+SELECT TRANSFORM(key, value) using 'cat' from src;
 
 -- transform using with specific record writer and record reader
 SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe'
  WITH SERDEPROPERTIES ('p1'='v1','p2'='v2')
  RECORDWRITER 'MyRecordWriter'
- using 'script'
+ using 'cat'
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  RECORDREADER 'MyRecordReader' from src;
  
 -- use keyword MAP instead of TRANSFORM
-FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'script' as (c1, c2);
+FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) using 'cat' as (c1, c2);
 
 -- specific the output of transform
-SELECT TRANSFORM(column) USING 'script' AS c1, c2;
-SELECT TRANSFORM(column) USING 'script' AS(c1 INT, c2 INT);
+SELECT TRANSFORM(column) USING 'cat' AS c1, c2;
+SELECT TRANSFORM(column) USING 'cat' AS(c1 INT, c2 INT);
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
index bae85d4464e..20ffa413d7a 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/add.md
@@ -3,7 +3,7 @@ title: "ADD Statements"
 weight: 7
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/add.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -38,17 +38,21 @@ Add multiple jars file in single `ADD JAR` statement is not supported.
 ### Syntax
 
 ```sql
-ADD JAR filename;
+ADD JAR <jar_path>;
 ```
 
 ### Parameters
 
-- filename
+- jar_path
 
-  The name of the JAR file to be added. It could be either on a local file or distributed file system.
+  The path of the JAR file to be added. It can be in either the local file system or a distributed file system.
 
 ### Examples
 
 ```sql
+-- add a local jar
 ADD JAR t.jar;
+
+-- add a remote jar
+ADD JAR hdfs://namenode-host:port/path/t.jar;
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
index b595d6b3196..5738036fbac 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/alter.md
@@ -3,7 +3,7 @@ title: "ALTER Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/alter.html
+- /dev/table/hive_compatibility/hive_dialect/alter.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
index ba3dfb2ba86..70ee04df641 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/create.md
@@ -3,7 +3,7 @@ title: "CREATE Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/create.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -128,9 +128,8 @@ table_constraint:
 {{< hint warning >}}
 **NOTE:**
 
-- Create table with `STORED BY 'class_name'` / `CLUSTERED BY` / `SKEWED BY` is not supported yet.
-- Create temporary table is not supported yet.
-  {{< /hint >}}
+- Creating a temporary table is not supported yet.
+{{< /hint >}}
 
 ### Examples
 
@@ -157,7 +156,6 @@ CREATE TABLE t2 AS SELECT key, COUNT(1) FROM t1 GROUP BY key;
 
 ### Description
 
-`View`
 `CREATE VIEW` creates a view with the given name.
 If no column names are supplied, the names of the view's columns will be derived automatically from the defining SELECT expression.
 (If the SELECT contains un-aliased scalar expressions such as x+y, the resulting view column names will be generated in the form _C0, _C1, etc.)
@@ -233,15 +231,18 @@ The function is registered to metastore and will exist in all session unless the
 - `[USING JAR 'file_uri']`
 
   User can use the clause to add Jar that contains the implementation of the function along with its dependencies while creating the function.
-  The `file_uri` can be a local file or distributed file system.
-
+  The `file_uri` can be in the local file system or a distributed file system.
+  For remote jars, Flink will automatically download them when the function is used in queries. The downloaded jars will be removed when the session exits.
 
 ### Examples
 
 ```sql
--- create a function accuming the class `SimpleUdf` has existed in class path
+-- create a function assuming the class `SimpleUdf` has existed in class path
 CREATE FUNCTION simple_udf AS 'SimpleUdf';
 
--- create function using jar accuming the class `SimpleUdf` hasn't existed in class path
-CREATE  FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+-- create function using jar assuming the class `SimpleUdf` hasn't existed in class path
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR '/tmp/SimpleUdf.jar';
+
+-- create function using remote jar
+CREATE FUNCTION simple_udf AS 'SimpleUdf' USING JAR 'hdfs://namenode-host:port/path/SimpleUdf.jar';
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
index 67bca77e1d7..a507d53a05f 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/drop.md
@@ -3,7 +3,7 @@ title: "DROP Statements"
 weight: 2
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/drop.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
index 60e9c2c32f4..94170b96dfa 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/insert.md
@@ -3,7 +3,7 @@ title: "INSERT Statements"
 weight: 3
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/insert.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -35,31 +35,9 @@ can be specified by value expressions or result from query.
 
 ```sql
 -- Stardard syntax
-INSERT [OVERWRITE] TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
-INSERT INTO TABLE tablename1
- [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]]
-   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
-   
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT [OVERWRITE] TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT [OVERWRITE] TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
-FROM from_statement
-INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1,
-INSERT INTO TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
-[, ... ];
-
--- Hive extension (dynamic partition inserts):
-INSERT [OVERWRITE] TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
-  
-INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement };
+INSERT { OVERWRITE | INTO } [TABLE] tablename
+ [PARTITION (partcol1[=val1], partcol2[=val2] ...) [IF NOT EXISTS]]
+   { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 ```
 
 ### Parameters
@@ -85,14 +63,9 @@ INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...)
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 #### Dynamic Partition Inserts
 
-In the Hive extension syntax - dynamic partition inserts, users can give partial partition specifications, which means just specifying the list of partition column names in the `PARTITION` clause with optional column values.
+When writing data into Hive table's partition, users can specify the list of partition column names in the `PARTITION` clause with optional column values.
 If all the partition columns' value are given, we call this a static partition, otherwise it is a dynamic partition.
 
 Each dynamic partition column has a corresponding input column from the select statement. This means that the dynamic partition creation is determined by the value of the input column.
@@ -102,7 +75,7 @@ The dynamic partition columns must be specified last among the columns in the `S
 {{< hint warning >}}
 **Note:**
 
-In Hive, by default, the user mush specify at least one static partition in case the user accidentally overwrites all partition, and user can
+In Hive, by default, users must specify at least one static partition in case of accidentally overwriting all partitions, and users can
 set the configuration `hive.exec.dynamic.partition.mode` to `nonstrict` to to allow all partitions to be dynamic.
 
 But in Flink's Hive dialect, it'll always be `nonstrict` mode which means all partitions are allowed to be dynamic.
@@ -127,11 +100,6 @@ INSERT INTO t1 PARTITION (year = 2022, month = 12) SELECT value FROM t2;
 --- dynamic partition 
 INSERT INTO t1 PARTITION (year = 2022, month) SELECT month, value FROM t2;
 INSERT INTO t1 PARTITION (year, month) SELECT 2022, month, value FROM t2;
-
--- multi-insert statements
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
-    INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month > 6;
 ```
 
 ## INSERT OVERWRITE DIRECTORY
@@ -143,17 +111,13 @@ Query results can be inserted into filesystem directories by using a slight vari
 -- Standard syntax:
 INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
   [ROW FORMAT row_format] [STORED AS file_format] 
-  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement1 FROM from_statement };
+  { VALUES ( value [, ..] ) [, ( ... ) ] | select_statement FROM from_statement }
 
--- Hive extension (multiple inserts):
-FROM from_statement
-INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path select_statement1
-[INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path select_statement2] ...
 row_format:
   : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
       [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
       [NULL DEFINED AS char]
-  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
 ```
 
 ### Parameters
@@ -161,7 +125,7 @@ row_format:
 - directory_path
 
   The path for the directory to be inserted can be a full URI. If scheme or authority are not specified,
-  it'll use the scheme and authority from the hadoop configuration variable `fs.default.name` that specifies the Namenode URI.
+  it'll use the scheme and authority from the Flink configuration variable `fs.default-scheme` that specifies the filesystem scheme.
 
 - `LOCAL`
 
@@ -190,24 +154,61 @@ row_format:
 
 ### Synopsis
 
-#### Multiple Inserts
-
-In the Hive extension syntax - multiple inserts, Flink will minimize the number of data scans requires. Flink can insert data into multiple
-tables by scanning the input data just once.
-
 ### Examples
 
 ```sql
 --- insert directory with specific format
 INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1' STORED AS ORC SELECT * FROM t1;
+
 -- insert directory with specific row format
 INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t1'
- ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
+  ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
   COLLECTION ITEMS TERMINATED BY '#'
   MAP KEYS TERMINATED BY '=' SELECT * FROM t1;
-  
--- multiple insert
-FROM (SELECT month, value from t1)
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
-    INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+```
+
+## Multiple Inserts
+
+The Hive dialect enables users to insert into multiple destinations in a single statement, and users can mix inserting into tables and inserting into directories in the same statement.
+With this syntax, Flink will minimize the number of data scans required: it can insert data into multiple tables/directories by scanning the input data just once.
+
+### Syntax
+
+```sql
+-- multiple insert into table
+FROM from_statement
+  INSERT { OVERWRITE | INTO } [TABLE] tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
+  INSERT { OVERWRITE | INTO } [TABLE] tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2
+  [...]
+
+-- multiple insert into directory
+FROM from_statement
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory1_path [ROW FORMAT row_format] [STORED AS file_format] select_statement1
+  INSERT OVERWRITE [LOCAL] DIRECTORY directory2_path [ROW FORMAT row_format] [STORED AS file_format] select_statement2
+  [...]
+
+row_format:
+  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
+      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
+      [NULL DEFINED AS char]
+  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, ...)]
+```
+
+### Examples
+
+```sql
+-- multiple insert into table
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE TABLE t1_2 SELECT value WHERE month > 6;
+
+-- multiple insert into directory
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month1' SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
+
+-- mixed insert into table and directory in a single statement
+FROM (SELECT month, value FROM t1) t
+  INSERT OVERWRITE TABLE t1_1 SELECT value WHERE month <= 6
+  INSERT OVERWRITE DIRECTORY '/user/hive/warehouse/t1/month2' SELECT value WHERE month > 6;
 ```
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
index 246759bbe1b..534b0132408 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/load-data.md
@@ -3,7 +3,7 @@ title: "Load Data Statements"
 weight: 4
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/load.html
+- /dev/table/hive_compatibility/hive_dialect/load.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -53,7 +53,7 @@ LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION
     - it will look for `filepath` in the local file system. If a relative path is specified, it will be interpreted relative to the users' current working directory.
       The user can specify a full URI for local files as well - for example: file:///user/hive/warehouse/data1
     - it will try to **copy** all the files addressed by `filepath` to the target file system.
-      The target file system is inferred by looking at the location attribution. The coped data files will then be moved to the table.
+      The target file system is inferred by looking at the table's location attribute. The copied data files will then be moved to the location of the table.
 
   If not, then:
    - if scheme or authority are not specified, it'll use the scheme and authority from the Hadoop configuration variable `fs.default.name` that
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
index c652bef92db..44dfecddd02 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/overview.md
@@ -3,7 +3,7 @@ title: "Overview"
 weight: 1
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/overview
+- /dev/table/hive_compatibility/hive_dialect/overview
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,43 +30,54 @@ By providing compatibility with Hive syntax, we aim to improve the interoperabil
 ## Use Hive Dialect
 
 Flink currently supports two SQL dialects: `default` and `hive`. You need to switch to Hive dialect
-before you can write in Hive syntax. The following describes how to set dialect with
-SQL Client and Table API. Also notice that you can dynamically switch dialect for each
+before you can write in Hive syntax. The following describes how to set the dialect using the
+SQL Client, the SQL Gateway configured with the HiveServer2 Endpoint, and the Table API. Also note that you can dynamically switch the dialect for each
 statement you execute. There's no need to restart a session to use a different dialect.
 
 {{< hint warning >}}
 **Note:**
 
 - To use Hive dialect, you have to add dependencies related to Hive. Please refer to [Hive dependencies]({{< ref "docs/connectors/table/hive/overview" >}}#dependencies) for how to add the dependencies.
+- Since Flink 1.15, if you want to use Hive dialect in the Flink SQL Client or SQL Gateway, you have to move the jar `flink-table-planner_2.12`, located in `FLINK_HOME/opt`,
+  to `FLINK_HOME/lib`, and move the jar `flink-table-planner-loader` out of `FLINK_HOME/lib`.
+  Otherwise, a `ValidationException` will be thrown. Please refer to [FLINK-25128](https://issues.apache.org/jira/browse/FLINK-25128) for more details.
 - Please make sure the current catalog is [HiveCatalog]({{< ref "docs/connectors/table/hive/hive_catalog" >}}). Otherwise, it will fall back to Flink's `default` dialect.
+  When using the SQL Gateway configured with the HiveServer2 Endpoint, the current catalog will be a HiveCatalog by default.
 - In order to have better syntax and semantic compatibility, it’s highly recommended to load [HiveModule]({{< ref "docs/connectors/table/hive/hive_functions" >}}#use-hive-built-in-functions-via-hivemodule) and
   place it first in the module list, so that Hive built-in functions can be picked up during function resolution.
   Please refer [here]({{< ref "docs/dev/table/modules" >}}#how-to-load-unload-use-and-list-modules) for how to change resolution order.
+  But when using the SQL Gateway configured with the HiveServer2 Endpoint, the Hive module will be loaded automatically.
 - Hive dialect only supports 2-part identifiers, so you can't specify catalog for an identifier.
 - While all Hive versions support the same syntax, whether a specific feature is available still depends on the
   [Hive version]({{< ref "docs/connectors/table/hive/overview" >}}#supported-hive-versions) you use. For example, updating database
   location is only supported in Hive-2.4.0 or later.
+- The Hive dialect is mainly used in batch mode. Some of Hive's syntax ([Sort/Cluster/Distribute By]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/sort-cluster-distribute-by" >}}), [Transform]({{< ref "docs/dev/table/hiveCompatibility/hiveDialect/Queries/transform" >}}), etc.) is not supported in streaming mode yet.
 {{< /hint >}}
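+
+The jar swap described in the note above can be sketched as follows (a sketch only: the jar
+file names are assumptions that depend on your Flink version, so adjust `<version>` and
+`FLINK_HOME` to your installation):
+
+```bash
+# make the full planner available and move the planner loader out of the way
+mv $FLINK_HOME/opt/flink-table-planner_2.12-<version>.jar $FLINK_HOME/lib/
+mv $FLINK_HOME/lib/flink-table-planner-loader-<version>.jar $FLINK_HOME/opt/
+```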
 
 ### SQL Client
 
 SQL dialect can be specified via the `table.sql-dialect` property.
-Therefore,you can set the dialect after the SQL Client has launched.
+Therefore, you can set the dialect after the SQL Client has launched.
 
 ```bash
-Flink SQL> SET 'table.sql-dialect' = 'hive'; -- to use hive dialect
+Flink SQL> SET table.sql-dialect = hive; -- to use Hive dialect
 [INFO] Session property has been set.
 
-Flink SQL> SET 'table.sql-dialect' = 'default'; -- to use default dialect
+Flink SQL> SET table.sql-dialect = default; -- to use Flink default dialect
 [INFO] Session property has been set.
 ```
 
-{{< hint warning >}}
-**Note:**
-Since Flink 1.15, when you want to use Hive dialect in Flink SQL client, you have to swap the jar `flink-table-planner-loader` located in `FLINK_HOME/lib`
-with the jar `flink-table-planner_2.12` located in `FLINK_HOME/opt`. Otherwise, it'll throw the following exception:
-{{< /hint >}}
-{{<img alt="error" width="80%" src="/fig/hive_parser_load_exception.png">}}
+### SQL Gateway Configured With HiveServer2 Endpoint
+
+When using the SQL Gateway configured with the HiveServer2 Endpoint, the dialect will be Hive by default, so you don't need to do anything to use Hive dialect. You can still
+change the dialect to the Flink default dialect.
+
+```bash
+# assuming you have connected to the SQL Gateway with beeline
+jdbc:hive2> SET table.sql-dialect = default; -- to use Flink default dialect
+
+jdbc:hive2> SET table.sql-dialect = hive; -- to use Hive dialect
+```
 
 ### Table API
 
@@ -77,8 +88,10 @@ You can set dialect for your TableEnvironment with Table API.
 ```java
 EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
+
 // to use hive dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
+
 // to use default dialect
 tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 ```
@@ -88,10 +101,14 @@ tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
 from pyflink.table import *
 settings = EnvironmentSettings.in_batch_mode()
 t_env = TableEnvironment.create(settings)
-# to use hive dialect
+
+# to use Hive dialect
 t_env.get_config().set_sql_dialect(SqlDialect.HIVE)
-# to use default dialect
+
+# to use Flink default dialect
 t_env.get_config().set_sql_dialect(SqlDialect.DEFAULT)
 ```
 {{< /tab >}}
 {{< /tabs >}}
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
index 17008d7464b..c5eb59e17a1 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/set.md
@@ -3,7 +3,7 @@ title: "SET Statements"
 weight: 8
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/set.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -30,17 +30,11 @@ The `SET` statement sets a property which provides a way to set variables for a
 configuration property, including system variables and Hive configuration.
 But environment variables can't be set via the `SET` statement. The behavior of `SET` with Hive dialect is compatible with Hive's.
 
-## Syntax
-
-```sql
-SET key=value;
-```
-
 ## EXAMPLES
 
 ```sql
 -- set Flink's configuration
-SET 'table.sql-dialect'='default';
+SET table.sql-dialect=default;
 
 -- set Hive's configuration
 SET hiveconf:k1=v1;
@@ -52,21 +46,22 @@ SET system:k2=v2;
 SET hivevar:k3=v3;
 
 -- get value for configuration
+SET table.sql-dialect;
 SET hiveconf:k1;
 SET system:k2;
 SET hivevar:k3;
 
--- print options
+-- only print Flink's configuration
+SET;
+
+-- print all configurations
 SET -v;
-SET; 
 ```
 
 {{< hint warning >}}
 **Note:**
-
-In Hive, the `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means it'll set it to Hive Conf.
-
-But in Flink, with Hive dialect, such `SET` command `set xx=yy` will set `xx` with value `yy` to Flink's configuration.
-
-So, if you want to set configuration to Hive's Conf, please add the prefix `hiveconf:`, using the  `SET` command like `SET hiveconf:xx=yy`.
+- In Hive, a `SET` command `SET xx=yy` whose key has no prefix is equivalent to `SET hiveconf:xx=yy`, which means the value will be set to Hive Conf.
+  But in Flink, with Hive dialect, such a `SET` command will set the key `xx` with the value `yy` in Flink's configuration.
+  So, if you want to set a configuration to Hive's Conf, please add the prefix `hiveconf:`, using a `SET` command like `SET hiveconf:xx=yy`.
+- In Hive dialect, the `key`/`value` to be set shouldn't be quoted.
 {{< /hint  >}}
diff --git a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
index a21a8cf288c..fb3d5acffbc 100644
--- a/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
+++ b/docs/content/docs/dev/table/hiveCompatibility/hiveDialect/show.md
@@ -1,9 +1,9 @@
 ---
-title: "Show Statements"
+title: "SHOW Statements"
 weight: 5
 type: docs
 aliases:
-- /dev/table/hiveCompatibility/hiveDialect/create.html
+- /dev/table/hive_compatibility/hive_dialect/show.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -90,8 +90,7 @@ partition_spec:
 
  The optional `partition_spec` is used to specify what kind of partitions should be returned.
   When specified, the partitions that match the `partition_spec` specification are returned.
-  The `partition_spec` can be partial.
-
+  The `partition_spec` can be partial, which means you can specify only some of the partition columns when listing the partitions.
 
 ### Examples