Posted to commits@flink.apache.org by go...@apache.org on 2022/08/25 09:05:51 UTC

[flink] branch master updated: [FLINK-28493][docs] Add document to describe "ANALYZE TABLE" syntax

This is an automated email from the ASF dual-hosted git repository.

godfrey pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 7166625ff78 [FLINK-28493][docs] Add document to describe "ANALYZE TABLE" syntax
7166625ff78 is described below

commit 7166625ff787181a59a10f4d75218b4ac87254ed
Author: zhengyunhong.zyh <33...@qq.com>
AuthorDate: Tue Aug 9 00:07:00 2022 +0800

    [FLINK-28493][docs] Add document to describe "ANALYZE TABLE" syntax
    
    This closes #20506
---
 docs/content.zh/docs/dev/table/sql/analyze.md  | 385 +++++++++++++++++++++++++
 docs/content.zh/docs/dev/table/sql/overview.md |   3 +-
 docs/content/docs/dev/table/sql/analyze.md     | 382 ++++++++++++++++++++++++
 docs/content/docs/dev/table/sql/overview.md    |   3 +-
 4 files changed, 771 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/sql/analyze.md b/docs/content.zh/docs/dev/table/sql/analyze.md
new file mode 100644
index 00000000000..a43573dd96b
--- /dev/null
+++ b/docs/content.zh/docs/dev/table/sql/analyze.md
@@ -0,0 +1,385 @@
+---
+title: "ANALYZE 语句"
+weight: 8
+type: docs
+aliases:
+  - /zh/dev/table/sql/analyze.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<a name="analyze-statements"></a>
+
+# ANALYZE 语句
+
+`ANALYZE` 语句用于为已存在的表收集统计信息,并将统计信息写入该表所在的 catalog 中。当前版本中,`ANALYZE` 语句只支持 `ANALYZE TABLE`,
+且只能由用户手动触发。
+
+<span class="label label-danger">注意</span> 目前,`ANALYZE TABLE` 只支持批模式(Batch Mode),且只能用于已存在的表,
+如果表不存在或者是视图(View)则会报错。
+
+
+<a name="run-a-analyze-table-statement"></a>
+
+## 执行 ANALYZE TABLE 语句
+
+{{< tabs "analyze table" >}}
+{{< tab "Java" >}}
+可以使用 `TableEnvironment` 的 `executeSql()` 方法执行 `ANALYZE TABLE` 语句。
+
+以下示例展示了如何在 `TableEnvironment` 中执行一条 `ANALYZE TABLE` 语句。
+{{< /tab >}}
+{{< tab "Scala" >}}
+可以使用 `TableEnvironment` 的 `executeSql()` 方法执行 `ANALYZE TABLE` 语句。
+
+以下示例展示了如何在 `TableEnvironment` 中执行一条 `ANALYZE TABLE` 语句。
+{{< /tab >}}
+{{< tab "Python" >}}
+可以使用 `TableEnvironment` 的 `execute_sql()` 方法执行 `ANALYZE TABLE` 语句。
+
+以下示例展示了如何在 `TableEnvironment` 中执行一条 `ANALYZE TABLE` 语句。
+{{< /tab >}}
+{{< tab "SQL CLI" >}}
+
+`ANALYZE TABLE` 语句可以在 [SQL CLI]({{< ref "docs/dev/table/sqlClient" >}}) 中执行。
+
+以下示例展示了如何在 SQL CLI 中执行一条 `ANALYZE TABLE` 语句。
+
+{{< /tab >}}
+{{< /tabs >}}
+
+{{< tabs "a5de1760-e363-4b8d-9d6f-0bacb35b9dcf" >}}
+{{< tab "Java" >}}
+```java
+TableEnvironment tableEnv = TableEnvironment.create(...);
+
+// 注册名为 “Store” 的非分区表
+tableEnv.executeSql(
+        "CREATE TABLE Store (" +
+        " `id` BIGINT NOT NULL," +
+        " `location` VARCHAR(32)," +
+        " `owner` VARCHAR(32)" +
+        ") with (...)");
+
+// 注册名为 “Orders” 的分区表
+tableEnv.executeSql(
+        "CREATE TABLE Orders (" +
+        " `id` BIGINT NOT NULL," +
+        " `product` VARCHAR(32)," +
+        " `amount` INT," +
+        " `sold_year` BIGINT," +
+        " `sold_month` BIGINT," +
+        " `sold_day` BIGINT" +
+        ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+        "with (...)");
+
+// 非分区表,收集表级别的统计信息(表的统计信息主要为行数(row count))。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+// 非分区表,收集表级别的统计信息和所有列的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 非分区表,收集表级别的统计信息和指定列(列: location)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location");
+
+
+// 假设分区表 “Orders” 有 4 个分区,分区信息如下:
+// Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+// Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+// Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+// Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+// 分区表,收集分区 Partition1 的表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS");
+
+// 分区表,为所有分区收集表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS");
+
+// 分区表,收集分区 Partition1 的表级别统计信息和所有列的统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和所有列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,为所有分区收集表级别统计信息和所有列的统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,收集分区 Partition1 的表级别统计信息和分区中指定列(列: amount)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和分区中指定列(列: amount,列: product)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+
+// 分区表,收集所有分区的表级别统计信息和指定列(列: amount,列: product)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+```
+{{< /tab >}}
+{{< tab "Scala" >}}
+```scala
+val tableEnv = TableEnvironment.create(...)
+
+// 注册名为 “Store” 的非分区表
+tableEnv.executeSql(
+  "CREATE TABLE Store (" +
+          " `id` BIGINT NOT NULL," +
+          " `location` VARCHAR(32)," +
+          " `owner` VARCHAR(32)" +
+          ") with (...)");
+
+// 注册名为 “Orders” 的分区表
+tableEnv.executeSql(
+  "CREATE TABLE Orders (" +
+          " `id` BIGINT NOT NULL," +
+          " `product` VARCHAR(32)," +
+          " `amount` INT," +
+          " `sold_year` BIGINT," +
+          " `sold_month` BIGINT," +
+          " `sold_day` BIGINT" +
+          ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+          "with (...)");
+
+// 非分区表,收集表级别的统计信息(表的统计信息主要为行数(row count))。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+// 非分区表,收集表级别的统计信息和所有列的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 非分区表,收集表级别的统计信息和指定列(列: location)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location");
+
+
+// 假设分区表 “Orders” 有 4 个分区,分区信息如下:
+// Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+// Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+// Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+// Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+// 分区表,收集分区 Partition1 的表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS");
+
+// 分区表,为所有分区收集表级别统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS");
+
+// 分区表,收集分区 Partition1 的表级别统计信息和所有列的统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和所有列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,为所有分区收集表级别统计信息和所有列的统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// 分区表,收集分区 Partition1 的表级别统计信息和分区中指定列(列: amount)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount");
+
+// 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和分区中指定列(列: amount,列: product)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+
+// 分区表,收集所有分区的表级别统计信息和指定列(列: amount,列: product)的列统计信息。
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+```
+{{< /tab >}}
+{{< tab "Python" >}}
+```python
+table_env = TableEnvironment.create(...)
+
+# 注册名为 “Store” 的非分区表
+table_env.execute_sql(
+  "CREATE TABLE Store (" +
+          " `id` BIGINT NOT NULL," +
+          " `location` VARCHAR(32)," +
+          " `owner` VARCHAR(32)" +
+          ") with (...)");
+
+# 注册名为 “Orders” 的分区表
+table_env.execute_sql(
+  "CREATE TABLE Orders (" +
+          " `id` BIGINT NOT NULL," +
+          " `product` VARCHAR(32)," +
+          " `amount` INT," +
+          " `sold_year` BIGINT," +
+          " `sold_month` BIGINT," +
+          " `sold_day` BIGINT" +
+          ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+          "with (...)");
+
+# 非分区表,收集表级别的统计信息(表的统计信息主要为行数(row count))。
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+# 非分区表,收集表级别的统计信息和所有列的列统计信息。
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS");
+
+# 非分区表,收集表级别的统计信息和指定列(列: location)的列统计信息。
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location");
+
+
+# 假设分区表 “Orders” 有 4 个分区,分区信息如下:
+# Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+# Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+# Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+# Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+# 分区表,收集分区 Partition1 的表级别统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS");
+
+# 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS");
+
+# 分区表,为所有分区收集表级别统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS");
+
+# 分区表,收集分区 Partition1 的表级别统计信息和所有列的统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS");
+
+# 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和所有列统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+# 分区表,为所有分区收集表级别统计信息和所有列的统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+# 分区表,收集分区 Partition1 的表级别统计信息和分区中指定列(列: amount)的列统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount");
+
+# 分区表,收集分区 Partition1 和 Partition2 的表级别统计信息和分区中指定列(列: amount,列: product)的列统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+
+# 分区表,收集所有分区的表级别统计信息和指定列(列: amount,列: product)的列统计信息。
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+```
+{{< /tab >}}
+{{< tab "SQL CLI" >}}
+```sql
+Flink SQL> CREATE TABLE Store (
+> `id` BIGINT NOT NULL,
+> `location` VARCHAR(32),
+> `owner` VARCHAR(32)
+> ) with (
+> ...
+> );
+[INFO] Table has been created.
+
+Flink SQL> CREATE TABLE Orders (
+> `id` BIGINT NOT NULL,
+> `product` VARCHAR(32),
+> `amount` INT,
+> `sold_year` BIGINT,
+> `sold_month` BIGINT,
+> `sold_day` BIGINT
+> ) PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`)
+> with (
+> ...
+> );
+[INFO] Table has been created.
+
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION (sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product;
+[INFO] Execute statement succeed.
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+<a name="syntax"></a>
+
+## 语法
+
+```sql
+ANALYZE TABLE [catalog_name.][db_name.]table_name PARTITION(partcol1[=val1] [, partcol2[=val2], ...]) COMPUTE STATISTICS [FOR COLUMNS col1 [, col2, ...] | FOR ALL COLUMNS]
+```
+- 对于分区表,语法中 `PARTITION(partcol1[=val1] [, partcol2[=val2], ...])` 是必须指定的
+  - 如果没有指定某分区,则会收集所有分区的统计信息
+  - 如果指定了某分区,则只会收集该分区的统计信息
+  - 如果该表为非分区表,但语句中指定了分区,则会报异常
+  - 如果指定了某个分区,但是该分区不存在,则会报异常
+
+- 语法中,`FOR COLUMNS col1 [, col2, ...]` 或者 `FOR ALL COLUMNS` 是可选的
+  - 如果没有指定某一列,则只会收集表级别的统计信息
+  - 如果指定的列不存在,或者该列不是物理列,则会报异常
+  - 如果指定了某一列或者某几列,则会收集列的统计信息
+  - 列级别的统计信息包括:
+    - ndv: 该列中列值不同的数量
+    - nullCount: 该列中空值的数量
+    - avgLen: 列值的平均长度
+    - maxLen: 列值的最大长度
+    - minValue: 列值的最小值
+    - maxValue: 列值的最大值
+    - valueCount: 该值只应用于 boolean 类型
+  - 对于列统计信息,支持类型和对应的列统计信息值如下表所示("Y" 代表支持,"N" 代表不支持):
+
+| 类型                               | `ndv` | `nullCount` | `avgLen` | `maxLen` | `maxValue` | `minValue` | `valueCount` |
+|:---------------------------------|:-----:|:-----------:|:--------:|:--------:|:----------:|:----------:|:------------:|
+| `BOOLEAN`                        |   N   |      Y      |    N     |    N     |     N      |     N      |      Y       |
+| `TINYINT`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `SMALLINT`                       |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `INTEGER`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `FLOAT`                          |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DATE`                           |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIME_WITHOUT_TIME_ZONE`         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `BIGINT`                         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DOUBLE`                         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DECIMAL`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIMESTAMP_WITH_LOCAL_TIME_ZONE` |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIMESTAMP_WITHOUT_TIME_ZONE`    |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `CHAR`                           |   Y   |      Y      |    Y     |    Y     |     N      |     N      |      N       |
+| `VARCHAR`                        |   Y   |      Y      |    Y     |    Y     |     N      |     N      |      N       |
+| `other types`                    |   N   |      Y      |    N     |    N     |     N      |     N      |      N       |
+
+*注意:* 对于数据值定长的类型(例如:`BOOLEAN`, `INTEGER`, `DOUBLE` 等), Flink 不会去收集 `avgLen` 和 `maxLen` 值。
+
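下面用一段与 Flink 实现无关的 Python 示意代码说明上文中部分分区谱(partial partition spec)的匹配规则:谱中指定了值的分区列必须完全相等,只给出列名而未给出值的列则匹配该列的任意值。代码中的函数名与数据结构均为示意用的假设,并非 Flink API。

```python
# 示意代码:演示部分分区谱如何解析为被覆盖的分区(非 Flink 实际实现)。
# 以上文 "Orders" 表的 4 个分区为例:
partitions = [
    {"sold_year": "2022", "sold_month": "1", "sold_day": "10"},  # Partition1
    {"sold_year": "2022", "sold_month": "1", "sold_day": "11"},  # Partition2
    {"sold_year": "2022", "sold_month": "2", "sold_day": "10"},  # Partition3
    {"sold_year": "2022", "sold_month": "2", "sold_day": "11"},  # Partition4
]

def resolve_partitions(spec, parts):
    """返回被分区谱覆盖的分区;spec 中值为 None 的列表示只给出了列名、未指定值。"""
    pinned = {k: v for k, v in spec.items() if v is not None}
    return [p for p in parts if all(p[k] == v for k, v in pinned.items())]

# PARTITION(sold_year='2022', sold_month='1', sold_day) 覆盖 Partition1 和 Partition2
assert len(resolve_partitions(
    {"sold_year": "2022", "sold_month": "1", "sold_day": None}, partitions)) == 2

# PARTITION(sold_year, sold_month, sold_day) 覆盖所有 4 个分区
assert len(resolve_partitions(
    {"sold_year": None, "sold_month": None, "sold_day": None}, partitions)) == 4
```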
+{{< top >}}
diff --git a/docs/content.zh/docs/dev/table/sql/overview.md b/docs/content.zh/docs/dev/table/sql/overview.md
index 9c9f4e2a208..d4732f9be8d 100644
--- a/docs/content.zh/docs/dev/table/sql/overview.md
+++ b/docs/content.zh/docs/dev/table/sql/overview.md
@@ -34,6 +34,7 @@ under the License.
 - [CREATE TABLE, CATALOG, DATABASE, VIEW, FUNCTION]({{< ref "docs/dev/table/sql/create" >}})
 - [DROP TABLE, DATABASE, VIEW, FUNCTION]({{< ref "docs/dev/table/sql/drop" >}})
 - [ALTER TABLE, DATABASE, FUNCTION]({{< ref "docs/dev/table/sql/alter" >}})
+- [ANALYZE TABLE]({{< ref "docs/dev/table/sql/analyze" >}})
 - [INSERT]({{< ref "docs/dev/table/sql/insert" >}})
 - [SQL HINTS]({{< ref "docs/dev/table/sql/queries/hints" >}})
 - [DESCRIBE]({{< ref "docs/dev/table/sql/describe" >}})
@@ -63,7 +64,7 @@ under the License.
 
 虽然 SQL 的特性并未完全实现,但是一些字符串的组合却已经被预留为关键字以备未来使用。如果你希望使用以下字符串作为你的字段名,请在使用时使用反引号将该字段名包起来(如 `` `value` ``, `` `count` `` )。
 
-  A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LE [...]
+  A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANALYZE, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHA [...]
 
 
 {{< top >}}
diff --git a/docs/content/docs/dev/table/sql/analyze.md b/docs/content/docs/dev/table/sql/analyze.md
new file mode 100644
index 00000000000..20813bba2a0
--- /dev/null
+++ b/docs/content/docs/dev/table/sql/analyze.md
@@ -0,0 +1,382 @@
+---
+title: "ANALYZE Statements"
+weight: 8
+type: docs
+aliases:
+  - /dev/table/sql/analyze.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# ANALYZE Statements
+
+`ANALYZE` statements are used to collect statistics for existing tables and store the results in the table's catalog. Currently,
+only `ANALYZE TABLE` statements are supported, and they must be triggered manually rather than automatically.
+
+<span class="label label-danger">Attention</span> Currently, `ANALYZE TABLE` is only supported in batch mode, and it can only be
+applied to existing tables. An exception will be thrown if the table does not exist or is a view.
+
+
+## Run an ANALYZE TABLE statement
+
+{{< tabs "analyze table" >}}
+{{< tab "Java" >}}
+`ANALYZE TABLE` statements can be executed with the `executeSql()` method of the `TableEnvironment`. 
+
+The following examples show how to run an `ANALYZE TABLE` statement in `TableEnvironment`.
+{{< /tab >}}
+{{< tab "Scala" >}}
+`ANALYZE TABLE` statements can be executed with the `executeSql()` method of the `TableEnvironment`. 
+
+The following examples show how to run an `ANALYZE TABLE` statement in `TableEnvironment`.
+{{< /tab >}}
+{{< tab "Python" >}}
+
+`ANALYZE TABLE` statements can be executed with the `execute_sql()` method of the `TableEnvironment`.
+
+The following examples show how to run an `ANALYZE TABLE` statement in `TableEnvironment`.
+
+{{< /tab >}}
+{{< tab "SQL CLI" >}}
+
+`ANALYZE TABLE` statements can be executed in [SQL CLI]({{< ref "docs/dev/table/sqlClient" >}}).
+
+The following examples show how to run an `ANALYZE TABLE` statement in SQL CLI.
+
+{{< /tab >}}
+{{< /tabs >}}
+
+{{< tabs "a5de1760-e363-4b8d-9d6f-0bacb35b9dcf" >}}
+{{< tab "Java" >}}
+```java
+TableEnvironment tableEnv = TableEnvironment.create(...);
+
+// register a non-partition table named "Store"
+tableEnv.executeSql(
+        "CREATE TABLE Store (" +
+        " `id` BIGINT NOT NULL," +
+        " `location` VARCHAR(32)," +
+        " `owner` VARCHAR(32)" +
+        ") with (...)");
+
+// register a partition table named "Orders"
+tableEnv.executeSql(
+        "CREATE TABLE Orders (" +
+        " `id` BIGINT NOT NULL," +
+        " `product` VARCHAR(32)," +
+        " `amount` INT," +
+        " `sold_year` BIGINT," +
+        " `sold_month` BIGINT," +
+        " `sold_day` BIGINT" +
+        ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+        "with (...)");
+
+// Non-partition table, collect row count.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+// Non-partition table, collect row count and statistics for all columns.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Non-partition table, collect row count and statistics for column `location`.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location");
+
+
+// Suppose table "Orders" has 4 partitions with specs:
+// Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+// Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+// Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+// Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+// Partition table, collect row count for Partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS");
+
+// Partition table, collect row count for Partition1 and Partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS");
+
+// Partition table, collect row count for all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS");
+
+// Partition table, collect row count and statistics for all columns on partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for all columns on partition1 and partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for all columns on all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for column `amount` on partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount");
+
+// Partition table, collect row count and statistics for `amount` and `product` on partition1 and partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+
+// Partition table, collect row count and statistics for column `amount` and `product` on all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+```
+{{< /tab >}}
+{{< tab "Scala" >}}
+```scala
+val tableEnv = TableEnvironment.create(...)
+
+// register a non-partition table named "Store"
+tableEnv.executeSql(
+  "CREATE TABLE Store (" +
+          " `id` BIGINT NOT NULL," +
+          " `location` VARCHAR(32)," +
+          " `owner` VARCHAR(32)" +
+          ") with (...)");
+
+// register a partition table named "Orders"
+tableEnv.executeSql(
+  "CREATE TABLE Orders (" +
+          " `id` BIGINT NOT NULL," +
+          " `product` VARCHAR(32)," +
+          " `amount` INT," +
+          " `sold_year` BIGINT," +
+          " `sold_month` BIGINT," +
+          " `sold_day` BIGINT" +
+          ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+          "with (...)");
+
+// Non-partition table, collect row count.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+// Non-partition table, collect row count and statistics for all columns.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Non-partition table, collect row count and statistics for column `location`.
+tableEnv.executeSql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location");
+
+
+// Suppose table "Orders" has 4 partitions with specs:
+// Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+// Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+// Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+// Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+// Partition table, collect row count for Partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS");
+
+// Partition table, collect row count for Partition1 and Partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS");
+
+// Partition table, collect row count for all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS");
+
+// Partition table, collect row count and statistics for all columns on partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for all columns on partition1 and partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for all columns on all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS");
+
+// Partition table, collect row count and statistics for column `amount` on partition1.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount");
+
+// Partition table, collect row count and statistics for `amount` and `product` on partition1 and partition2.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+
+// Partition table, collect row count and statistics for column `amount` and `product` on all partitions.
+tableEnv.executeSql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product");
+```
+{{< /tab >}}
+{{< tab "Python" >}}
+```python
+table_env = TableEnvironment.create(...)
+
+# register a non-partition table named "Store"
+table_env.execute_sql(
+  "CREATE TABLE Store (" +
+          " `id` BIGINT NOT NULL," +
+          " `location` VARCHAR(32)," +
+          " `owner` VARCHAR(32)" +
+          ") with (...)");
+
+# register a partition table named "Orders"
+table_env.execute_sql(
+  "CREATE TABLE Orders (" +
+          " `id` BIGINT NOT NULL," +
+          " `product` VARCHAR(32)," +
+          " `amount` INT," +
+          " `sold_year` BIGINT," +
+          " `sold_month` BIGINT," +
+          " `sold_day` BIGINT" +
+          ") PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`) " +
+          "with (...)");
+
+# Non-partition table, collect row count.
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS");
+
+# Non-partition table, collect row count and statistics for all columns.
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS")
+
+# Non-partitioned table, collect row count and statistics for column `location`.
+table_env.execute_sql("ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location")
+
+
+# Suppose table "Orders" has 4 partitions with specs:
+# Partition1 : (sold_year='2022', sold_month='1', sold_day='10')
+# Partition2 : (sold_year='2022', sold_month='1', sold_day='11')
+# Partition3 : (sold_year='2022', sold_month='2', sold_day='10')
+# Partition4 : (sold_year='2022', sold_month='2', sold_day='11')
+
+
+# Partitioned table, collect row count for Partition1.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS")
+
+# Partitioned table, collect row count for Partition1 and Partition2.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS")
+
+# Partitioned table, collect row count for all partitions.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS")
+
+# Partitioned table, collect row count and statistics for all columns on Partition1.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS")
+
+# Partitioned table, collect row count and statistics for all columns on Partition1 and Partition2.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS")
+
+# Partitioned table, collect row count and statistics for all columns on all partitions.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS")
+
+# Partitioned table, collect row count and statistics for column `amount` on Partition1.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount")
+
+# Partitioned table, collect row count and statistics for columns `amount` and `product` on Partition1 and Partition2.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product")
+
+# Partitioned table, collect row count and statistics for columns `amount` and `product` on all partitions.
+table_env.execute_sql("ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product")
+```
+{{< /tab >}}
+{{< tab "SQL CLI" >}}
+```sql
+Flink SQL> CREATE TABLE Store (
+> `id` BIGINT NOT NULL,
+> `location` VARCHAR(32),
+> `owner` VARCHAR(32)
+> ) with (
+> ...
+> );
+[INFO] Table has been created.
+
+Flink SQL> CREATE TABLE Orders (
+> `id` BIGINT NOT NULL,
+> `product` VARCHAR(32),
+> `amount` INT,
+> `sold_year` BIGINT,
+> `sold_month` BIGINT,
+> `sold_day` BIGINT
+> ) PARTITIONED BY (`sold_year`, `sold_month`, `sold_day`)
+> with (
+> ...
+> );
+[INFO] Table has been created.
+[INFO] Table has been created.
+
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Store COMPUTE STATISTICS FOR COLUMNS location;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS;
+[INFO] Execute statement succeed.
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.    
+
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR ALL COLUMNS;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day='10') COMPUTE STATISTICS FOR COLUMNS amount;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year='2022', sold_month='1', sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product;
+[INFO] Execute statement succeed.
+    
+Flink SQL> ANALYZE TABLE Orders PARTITION(sold_year, sold_month, sold_day) COMPUTE STATISTICS FOR COLUMNS amount, product;
+[INFO] Execute statement succeed.
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+
+## Syntax
+
+```sql
+ANALYZE TABLE [catalog_name.][db_name.]table_name PARTITION(partcol1[=val1] [, partcol2[=val2], ...]) COMPUTE STATISTICS [FOR COLUMNS col1 [, col2, ...] | FOR ALL COLUMNS]
+```
+- PARTITION(partcol1[=val1] [, partcol2[=val2], ...]) is required for partitioned tables
+  - If no partition is specified, the statistics will be gathered for all partitions
+  - If a certain partition is specified, the statistics will be gathered only for that partition
+  - If the table is not a partitioned table but a partition is specified, an exception will be thrown
+  - If a specified partition does not exist, an exception will be thrown
+  
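As an illustration of the partition-spec rules above, here is a hedged Python sketch of the matching semantics only (a model for this document, not Flink internals): a key given with a value must match exactly, while a key listed without a value matches any partition value.

```python
# Hypothetical helper modeling how a partition spec selects partitions.
# A spec value of None stands for a key listed without a value,
# e.g. PARTITION(sold_year='2022', sold_month='1', sold_day).
def matching_partitions(partitions, spec):
    """Return the partitions whose values equal every value given in the spec."""
    def matches(partition):
        return all(v is None or partition.get(k) == v for k, v in spec.items())
    return [p for p in partitions if matches(p)]

# The four partitions of the "Orders" table used in the examples above.
partitions = [
    {"sold_year": "2022", "sold_month": "1", "sold_day": "10"},
    {"sold_year": "2022", "sold_month": "1", "sold_day": "11"},
    {"sold_year": "2022", "sold_month": "2", "sold_day": "10"},
    {"sold_year": "2022", "sold_month": "2", "sold_day": "11"},
]

# PARTITION(sold_year='2022', sold_month='1', sold_day) selects Partition1 and Partition2.
print(matching_partitions(partitions, {"sold_year": "2022", "sold_month": "1", "sold_day": None}))
```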
+- FOR COLUMNS col1 [, col2, ...] and FOR ALL COLUMNS are optional
+  - If no column is specified, only the table level statistics will be gathered
+  - If a specified column does not exist, or is not a physical column, an exception will be thrown
+  - If one or more columns are specified (or FOR ALL COLUMNS is used), the column level statistics will be gathered
+    - the column level statistics include:
+      - ndv: the number of distinct values
+      - nullCount: the number of nulls
+      - avgLen: the average length of column values 
+      - maxLen: the max length of column values
+      - minValue: the min value of column values
+      - maxValue: the max value of column values
+      - valueCount: the value count only for boolean type
+    - the supported types and their corresponding column level statistics are listed in the following table ("Y" means supported, "N" means unsupported):
+
+| Types                            | `ndv` | `nullCount` | `avgLen` | `maxLen` | `maxValue` | `minValue` | `valueCount` |
+|:---------------------------------|:-----:|:-----------:|:--------:|:--------:|:----------:|:----------:|:------------:|
+| `BOOLEAN`                        |   N   |      Y      |    N     |    N     |     N      |     N      |      Y       |
+| `TINYINT`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `SMALLINT`                       |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `INTEGER`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `FLOAT`                          |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DATE`                           |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIME_WITHOUT_TIME_ZONE`         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `BIGINT`                         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DOUBLE`                         |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `DECIMAL`                        |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIMESTAMP_WITH_LOCAL_TIME_ZONE` |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `TIMESTAMP_WITHOUT_TIME_ZONE`    |   Y   |      Y      |    N     |    N     |     Y      |     Y      |      N       |
+| `CHAR`                           |   Y   |      Y      |    Y     |    Y     |     N      |     N      |      N       |
+| `VARCHAR`                        |   Y   |      Y      |    Y     |    Y     |     N      |     N      |      N       |
+| `other types`                    |   N   |      Y      |    N     |    N     |     N      |     N      |      N       |
+
+*NOTE:* For fixed-length types (such as `BOOLEAN`, `INTEGER`, `DOUBLE`, etc.), `avgLen` and `maxLen` need not be collected from the original records, since the length is constant.
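To make the table above concrete, here is a hedged Python sketch of the column level statistics; `column_stats` is a hypothetical helper and a simplified in-memory model, not Flink's implementation.

```python
# Simplified model: compute the column level statistics listed above
# for a list of in-memory values (None represents SQL NULL).
def column_stats(values):
    non_null = [v for v in values if v is not None]
    # ndv and nullCount are gathered for (almost) all types.
    stats = {"ndv": len(set(non_null)), "nullCount": len(values) - len(non_null)}
    if non_null and isinstance(non_null[0], str):
        # avgLen/maxLen are only gathered for CHAR/VARCHAR-like values.
        lengths = [len(v) for v in non_null]
        stats["avgLen"] = sum(lengths) / len(lengths)
        stats["maxLen"] = max(lengths)
    elif non_null and isinstance(non_null[0], bool):
        # valueCount is only gathered for BOOLEAN values.
        stats["valueCount"] = {True: non_null.count(True), False: non_null.count(False)}
    elif non_null:
        # min/max are gathered for numeric and temporal values.
        stats["minValue"] = min(non_null)
        stats["maxValue"] = max(non_null)
    return stats

print(column_stats(["store-1", "store-2", "store-2", None]))
# → {'ndv': 2, 'nullCount': 1, 'avgLen': 7.0, 'maxLen': 7}
```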
+
+{{< top >}}
\ No newline at end of file
diff --git a/docs/content/docs/dev/table/sql/overview.md b/docs/content/docs/dev/table/sql/overview.md
index 028dc4137b9..97fe751c16e 100644
--- a/docs/content/docs/dev/table/sql/overview.md
+++ b/docs/content/docs/dev/table/sql/overview.md
@@ -34,6 +34,7 @@ This page lists all the supported statements supported in Flink SQL for now:
 - [CREATE TABLE, CATALOG, DATABASE, VIEW, FUNCTION]({{< ref "docs/dev/table/sql/create" >}})
 - [DROP TABLE, DATABASE, VIEW, FUNCTION]({{< ref "docs/dev/table/sql/drop" >}})
 - [ALTER TABLE, DATABASE, FUNCTION]({{< ref "docs/dev/table/sql/alter" >}})
+- [ANALYZE TABLE]({{< ref "docs/dev/table/sql/analyze" >}})
 - [INSERT]({{< ref "docs/dev/table/sql/insert" >}})
 - [DESCRIBE]({{< ref "docs/dev/table/sql/describe" >}})
 - [EXPLAIN]({{< ref "docs/dev/table/sql/explain" >}})
@@ -62,7 +63,7 @@ For DDLs, we support full data types defined in page [Data Types]({{< ref "docs/
 
 Although not every SQL feature is implemented yet, some string combinations are already reserved as keywords for future use. If you want to use one of the following strings as a field name, make sure to surround them with backticks (e.g. `` `value` ``, `` `count` ``).
 
-  A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LE [...]
+  A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANALYZE, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHA [...]
 
 
 {{< top >}}