Posted to commits@kylin.apache.org by xx...@apache.org on 2022/09/14 01:26:27 UTC

[kylin] 01/03: KYLIN-5262 add doc for deployment

This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch doc5.0
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 8382913ee525888819f80b406561226e3d02d47c
Author: Mukvin <bo...@163.com>
AuthorDate: Tue Sep 13 15:23:49 2022 +0800

    KYLIN-5262 add doc for deployment
---
 website/docs/configuration/hadoop_queue_config.md  |   7 +-
 website/docs/configuration/https.md                |   4 +-
 .../docs/deployment/installation/images/job.png    | Bin 273294 -> 0 bytes
 .../deployment/installation/images/model_list.png  | Bin 123821 -> 0 bytes
 .../docs/deployment/installation/images/query.png  | Bin 199326 -> 0 bytes
 .../on-premises/deploy_mode/cluster_deployment.md  |   6 +-
 .../deploy_mode/images/cluster_20191231.png        | Bin 199821 -> 0 bytes
 .../deploy_mode/images/cluster_20220913.png        | Bin 0 -> 162496 bytes
 .../on-premises/images/{jdk.en.png => jdk.png}     | Bin
 .../installation/images/download_krb5.en.png       | Bin
 .../images/installation_job_monitor.png            | Bin
 .../images/installation_query_result.png           | Bin
 .../images/installation_show_model.png             | Bin
 .../on-premises/installation/images/job.png        | Bin 0 -> 61330 bytes
 .../installation/images/minimal.png                | Bin
 .../on-premises/installation/images/model_list.png | Bin 0 -> 40368 bytes
 .../on-premises/installation/images/query.png      | Bin 0 -> 54876 bytes
 .../installation/images/query_result.png           | Bin
 .../installation/install_validation.md             |   4 +-
 .../{ => on-premises}/installation/intro.md        |   2 -
 .../platform/install_on_apache_hadoop.md           |   6 +-
 .../installation/platform/intro.md                 |   0
 .../installation/uninstallation.md                 |   0
 website/docs/deployment/on-premises/intro.md       |   2 +-
 .../on-premises/network_port_requirements.md       |   2 +-
 .../docs/deployment/on-premises/prerequisite.md    |  12 +--
 .../rdbms_metastore/mysql/install_mysql.md         |   4 +-
 .../postgresql/install_postgresql.md               |  24 ++++--
 website/docs/modeling/data_modeling.md             |   6 +-
 website/docs/modeling/load_data/build_index.md     |   2 +-
 website/docs/modeling/manual_modeling.md           |   4 +-
 website/docs/modeling/model_concepts_operations.md |   2 +-
 .../advance_guide/images/model_check.png           | Bin 49547 -> 42184 bytes
 .../advance_guide/images/model_export.png          | Bin 171441 -> 33367 bytes
 .../advance_guide/images/model_publish.png         | Bin 57324 -> 88130 bytes
 .../advance_guide/images/model_upload.png          | Bin 54934 -> 14324 bytes
 .../advance_guide/model_metadata_managment.md      |   4 +-
 .../advance_guide/multilevel_partitioning.md       |   2 +-
 .../modeling/model_design/aggregation_group.md     |  14 ++--
 .../modeling/model_design/images/agg/advanced.png  | Bin 301048 -> 110451 bytes
 .../modeling/model_design/images/agg/agg_1.png     | Bin 0 -> 75878 bytes
 .../modeling/model_design/images/agg/agg_2.png     | Bin 0 -> 78718 bytes
 .../model_design/images/agg/agg_index_2.png        | Bin 121116 -> 28858 bytes
 .../model_design/images/agg/agg_measure.png        | Bin 247240 -> 65171 bytes
 .../docs/modeling/model_design/images/agg_1.png    | Bin 303722 -> 0 bytes
 .../docs/modeling/model_design/images/agg_2.png    | Bin 307027 -> 0 bytes
 .../modeling/model_design/images/agg_measure.png   | Bin 247240 -> 0 bytes
 .../modeling/model_design/images/mdc/intro_mdc.png | Bin 236778 -> 90718 bytes
 .../model_design/images/mdc/single_mdc.png         | Bin 181062 -> 100541 bytes
 .../modeling/model_design/images/mdc/total_mdc.png | Bin 213552 -> 89560 bytes
 .../model_design/measure_design/collect_set.md     |   6 +-
 .../measure_design/count_distinct_bitmap.md        |   4 +-
 .../count_distinct_case_when_expr.md               |   2 +-
 .../measure_design/count_distinct_hllc.md          |   4 +-
 .../measure_design/percentile_approx.md            |   2 +-
 .../modeling/model_design/measure_design/topn.md   |   2 +-
 website/docs/monitor/images/job_diagnosis_web.png  | Bin 92222 -> 36486 bytes
 website/docs/monitor/images/job_id.png             | Bin 120640 -> 182134 bytes
 website/docs/monitor/images/job_log.png            | Bin 494702 -> 249456 bytes
 website/docs/monitor/images/job_status.png         | Bin 96180 -> 59773 bytes
 website/docs/monitor/images/job_type.png           | Bin 83019 -> 103265 bytes
 website/docs/monitor/job_concept_settings.md       |   2 +-
 website/docs/monitor/job_diagnosis.md              |   7 +-
 website/docs/monitor/job_exception_resolve.md      |   2 +-
 website/docs/monitor/job_operations.md             |   2 +-
 website/docs/operations/logs/audit_log.md          |   4 +-
 .../system-operation/cli_tool/diagnosis.md         |   4 +-
 website/docs/query/history.md                      |   2 +-
 website/docs/query/insight/async_query.md          |   2 +-
 website/docs/query/insight/insight.md              |   2 +-
 .../function/conditional_function.md               |   4 +-
 .../operator_function/function/string_function.md  |  20 ++---
 .../operator_function/function/window_function.md  |   2 +-
 .../query/pushdown/pushdown_to_embedded_spark.md   |   2 +-
 website/docs/quickstart/images/query_result.png    | Bin 169101 -> 55337 bytes
 website/docs/restapi/error_code.md                 |   2 +-
 .../docs/restapi/model_api/model_management_api.md |   2 +-
 website/sidebars.js                                |  82 +++++++++++++--------
 78 files changed, 142 insertions(+), 126 deletions(-)

diff --git a/website/docs/configuration/hadoop_queue_config.md b/website/docs/configuration/hadoop_queue_config.md
index f85925250b..13284780e3 100644
--- a/website/docs/configuration/hadoop_queue_config.md
+++ b/website/docs/configuration/hadoop_queue_config.md
@@ -14,13 +14,10 @@ last_update:
     date: 08/16/2022
 ---
 
-
-## Hadoop Queue Configuration
-
 In a multi-tenant environment, to securely share a large cluster, each tenant needs to receive its allocated resources in a timely manner, under the constraints of the allocated capacities. To achieve computing resource allocation and separation, each Kylin instance or project can be configured to use a different YARN queue.
 
 
-###<span id="instance">Instance-level YARN Queue Setting</span>
+### <span id="instance">Instance-level YARN Queue Setting</span>
 
 To achieve this, first create a new YARN capacity scheduler queue. By default, the job sent out by Kylin will go to the default YARN queue.
 
@@ -48,6 +45,6 @@ Similarly, you may set up YARN queue for other Kylin instances to achieve comput
 
 
 
-###<span id="project">Project-level YARN Queue Setting</span>
+### <span id="project">Project-level YARN Queue Setting</span>
 
 The system admin user can set the YARN Application Queue of the project in **Setting -> Advanced Settings -> YARN Application Queue**; please refer to [Project Settings](../operations/project-operation/project_settings.md) for more information.
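
The instance-level setting described above amounts to one line in that instance's configuration. Below is a minimal sketch, assuming the Spark-engine pass-through property `kylin.engine.spark-conf.spark.yarn.queue` and a hypothetical queue named `kylin_queue`; verify the property name against your version's configuration reference.

```properties
# $KYLIN_HOME/conf/kylin.properties on the target Kylin instance
# "kylin_queue" is a hypothetical YARN capacity scheduler queue; create it in YARN first
kylin.engine.spark-conf.spark.yarn.queue=kylin_queue
```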
diff --git a/website/docs/configuration/https.md b/website/docs/configuration/https.md
index dd41340f55..6ecbd9adde 100644
--- a/website/docs/configuration/https.md
+++ b/website/docs/configuration/https.md
@@ -14,7 +14,7 @@ last_update:
     date: 08/16/2022
 ---
 
-Kylin 5.x provides a HTTPS connection. It is disabled by default. If you need to enable it, please follow the steps below.
+Kylin 5.0 provides an HTTPS connection. It is disabled by default. If you need to enable it, please follow the steps below.
 
 ### Use Default Certificate
 
@@ -61,7 +61,7 @@ If you need to encrypt `kylin.server.https.keystore-password`, you can do it lik
 
 i. Run the following command in `${KYLIN_HOME}`; it will print the encrypted password:
 ```shell
-./bin/kylin.sh io.kyligence.kap.tool.general.CryptTool -e AES -s <password>
+./bin/kylin.sh org.apache.kylin.tool.general.CryptTool -e AES -s <password>
 ```
 
 ii. Configure `kylin.server.https.keystore-password` like this:
diff --git a/website/docs/deployment/installation/images/job.png b/website/docs/deployment/installation/images/job.png
deleted file mode 100644
index b139648ede..0000000000
Binary files a/website/docs/deployment/installation/images/job.png and /dev/null differ
diff --git a/website/docs/deployment/installation/images/model_list.png b/website/docs/deployment/installation/images/model_list.png
deleted file mode 100644
index 1feb8dba00..0000000000
Binary files a/website/docs/deployment/installation/images/model_list.png and /dev/null differ
diff --git a/website/docs/deployment/installation/images/query.png b/website/docs/deployment/installation/images/query.png
deleted file mode 100644
index 2a02096776..0000000000
Binary files a/website/docs/deployment/installation/images/query.png and /dev/null differ
diff --git a/website/docs/deployment/on-premises/deploy_mode/cluster_deployment.md b/website/docs/deployment/on-premises/deploy_mode/cluster_deployment.md
index 233edb3b6e..560aae9f97 100644
--- a/website/docs/deployment/on-premises/deploy_mode/cluster_deployment.md
+++ b/website/docs/deployment/on-premises/deploy_mode/cluster_deployment.md
@@ -17,7 +17,7 @@ last_update:
 
 All Kylin instance state information is stored in an RDBMS database, so running Kylin on multiple nodes in a cluster is good practice for better load balancing and higher availability. Currently, we only support deployment with one `all` node and multiple `query` nodes.
 
-[comment]: <#TODO> (![Deployment Architecture]&#40;images/cluster_20191231.png&#41;)
+![Deployment Architecture](images/cluster_20220913.png)
 
 In the above diagram, the components which require user deployment are below:
 
@@ -32,7 +32,7 @@ We will go through each one of them:
 
 Kylin uses RDBMS to store metadata, please refer to [Use RDBMS as Metastore](../rdbms_metastore/intro.md).
 
-Kylin uses Time-series database to store metrics (mostly for monitor purpose), please refer to [Use InfluxDB as Time-Series Database](../../operations/monitoring/influxdb/influxdb.md). 
+Kylin uses a time-series database to store metrics (mostly for monitoring purposes); please refer to [Use InfluxDB as Time-Series Database](../../../operations/monitoring/influxdb/influxdb.md).
 
 ### Kylin Nodes Introduction
 
@@ -83,4 +83,4 @@ kylin.job.ssh-password=password
 **Note:**
 
 - The "Multi-Active" mode is enabled by default. If there is only one `all` or `job` node, this mode should be turned off because of performance considerations. If you want to disable this feature, please add `kylin.server.leader-race.enabled=false` in `$KYLIN_HOME/conf/kylin.properties` for the `all` or `job` node.
-- If you want to enable it again, please update the relationship between projects and the job engines. After that, you need call Rest API to update the data. For details, please refer to [Project Settings API](../../restapi/project_api.md)
+- If you want to enable it again, please update the relationship between projects and the job engines. After that, you need to call the REST API to update the data. For details, please refer to [Project Settings API](../../../restapi/project_api.md).
diff --git a/website/docs/deployment/on-premises/deploy_mode/images/cluster_20191231.png b/website/docs/deployment/on-premises/deploy_mode/images/cluster_20191231.png
deleted file mode 100644
index 0d13c43390..0000000000
Binary files a/website/docs/deployment/on-premises/deploy_mode/images/cluster_20191231.png and /dev/null differ
diff --git a/website/docs/deployment/on-premises/deploy_mode/images/cluster_20220913.png b/website/docs/deployment/on-premises/deploy_mode/images/cluster_20220913.png
new file mode 100644
index 0000000000..ba7fdcaefd
Binary files /dev/null and b/website/docs/deployment/on-premises/deploy_mode/images/cluster_20220913.png differ
diff --git a/website/docs/deployment/on-premises/images/jdk.en.png b/website/docs/deployment/on-premises/images/jdk.png
similarity index 100%
rename from website/docs/deployment/on-premises/images/jdk.en.png
rename to website/docs/deployment/on-premises/images/jdk.png
diff --git a/website/docs/deployment/installation/images/download_krb5.en.png b/website/docs/deployment/on-premises/installation/images/download_krb5.en.png
similarity index 100%
rename from website/docs/deployment/installation/images/download_krb5.en.png
rename to website/docs/deployment/on-premises/installation/images/download_krb5.en.png
diff --git a/website/docs/deployment/installation/images/installation_job_monitor.png b/website/docs/deployment/on-premises/installation/images/installation_job_monitor.png
similarity index 100%
rename from website/docs/deployment/installation/images/installation_job_monitor.png
rename to website/docs/deployment/on-premises/installation/images/installation_job_monitor.png
diff --git a/website/docs/deployment/installation/images/installation_query_result.png b/website/docs/deployment/on-premises/installation/images/installation_query_result.png
similarity index 100%
rename from website/docs/deployment/installation/images/installation_query_result.png
rename to website/docs/deployment/on-premises/installation/images/installation_query_result.png
diff --git a/website/docs/deployment/installation/images/installation_show_model.png b/website/docs/deployment/on-premises/installation/images/installation_show_model.png
similarity index 100%
rename from website/docs/deployment/installation/images/installation_show_model.png
rename to website/docs/deployment/on-premises/installation/images/installation_show_model.png
diff --git a/website/docs/deployment/on-premises/installation/images/job.png b/website/docs/deployment/on-premises/installation/images/job.png
new file mode 100644
index 0000000000..93924fcc44
Binary files /dev/null and b/website/docs/deployment/on-premises/installation/images/job.png differ
diff --git a/website/docs/deployment/installation/images/minimal.png b/website/docs/deployment/on-premises/installation/images/minimal.png
similarity index 100%
rename from website/docs/deployment/installation/images/minimal.png
rename to website/docs/deployment/on-premises/installation/images/minimal.png
diff --git a/website/docs/deployment/on-premises/installation/images/model_list.png b/website/docs/deployment/on-premises/installation/images/model_list.png
new file mode 100644
index 0000000000..fd1cb4c262
Binary files /dev/null and b/website/docs/deployment/on-premises/installation/images/model_list.png differ
diff --git a/website/docs/deployment/on-premises/installation/images/query.png b/website/docs/deployment/on-premises/installation/images/query.png
new file mode 100644
index 0000000000..174f8ff5d8
Binary files /dev/null and b/website/docs/deployment/on-premises/installation/images/query.png differ
diff --git a/website/docs/deployment/installation/images/query_result.png b/website/docs/deployment/on-premises/installation/images/query_result.png
similarity index 100%
rename from website/docs/deployment/installation/images/query_result.png
rename to website/docs/deployment/on-premises/installation/images/query_result.png
diff --git a/website/docs/deployment/installation/install_validation.md b/website/docs/deployment/on-premises/installation/install_validation.md
similarity index 95%
rename from website/docs/deployment/installation/install_validation.md
rename to website/docs/deployment/on-premises/installation/install_validation.md
index 7b67662f49..c2ad50844a 100644
--- a/website/docs/deployment/installation/install_validation.md
+++ b/website/docs/deployment/on-premises/installation/install_validation.md
@@ -40,7 +40,7 @@ After running successfully, you should be able to see the following information
 Sample hive tables are created successfully
 ```
 
-We will be using SSB dataset as the data sample to introduce Kylin in several sections of this  product manual. The SSB dataset simulates transaction data for the online store, see more details in [Sample Dataset](#TODO). Below is a brief introduction.
+We will be using the SSB dataset as the data sample to introduce Kylin in several sections of this product manual. The SSB dataset simulates transaction data for an online store; see more details in [Sample Dataset](../../../quickstart/sample_dataset.md). Below is a brief introduction.
 
 
 | Table        | Description                 | Introduction                                                         |
@@ -69,7 +69,7 @@ When the metadata is loaded successfully, at the **Insight** page, 6 sample hive
 ```sql
 SELECT LO_PARTKEY, SUM(LO_REVENUE) AS TOTAL_REVENUE
 FROM SSB.P_LINEORDER
-WHERE LO_ORDERDATE between '19930601' AND '19940601' 
+WHERE LO_ORDERDATE between '1993-06-01' AND '1994-06-01' 
 group by LO_PARTKEY
 order by SUM(LO_REVENUE) DESC 
 ```
diff --git a/website/docs/deployment/installation/intro.md b/website/docs/deployment/on-premises/installation/intro.md
similarity index 94%
rename from website/docs/deployment/installation/intro.md
rename to website/docs/deployment/on-premises/installation/intro.md
index acc4d62152..a21ff80da5 100644
--- a/website/docs/deployment/installation/intro.md
+++ b/website/docs/deployment/on-premises/installation/intro.md
@@ -15,6 +15,4 @@ last_update:
     date: 08/12/2022
 ---
 
-## Install and Uninstall
-
 This chapter will introduce how to install Kylin on different platforms and how to uninstall Kylin.
diff --git a/website/docs/deployment/installation/platform/install_on_apache_hadoop.md b/website/docs/deployment/on-premises/installation/platform/install_on_apache_hadoop.md
similarity index 72%
rename from website/docs/deployment/installation/platform/install_on_apache_hadoop.md
rename to website/docs/deployment/on-premises/installation/platform/install_on_apache_hadoop.md
index 2a5bdcc28e..02132bcd80 100644
--- a/website/docs/deployment/installation/platform/install_on_apache_hadoop.md
+++ b/website/docs/deployment/on-premises/installation/platform/install_on_apache_hadoop.md
@@ -18,7 +18,7 @@ last_update:
 
 ### Prepare Environment
 
-First, **make sure you allocate sufficient resources for the environment**. Please refer to [Prerequisites](../../../deployment/on-premises/prerequisite.md) for detailed resource requirements for Kylin. Moreover, please ensure that `HDFS`, `YARN`, `Hive`, `ZooKeeper` and other components are in normal state without any warning information.
+First, **make sure you allocate sufficient resources for the environment**. Please refer to [Prerequisites](docs/deployment/on-premises/prerequisite.md) for detailed resource requirements for Kylin. Moreover, please ensure that `HDFS`, `YARN`, `Hive`, `ZooKeeper` and other components are in normal state without any warning information.
 
 
 
@@ -43,10 +43,10 @@ Add the following two configurations in `$KYLIN_HOME/conf/kylin.properties`:
 
 In Apache Hadoop 3.2.1, you also need to prepare the MySQL JDBC driver in the operating environment of Kylin.
 
-Here is a download link for the jar file package of the MySQL 5.1 JDBC driver:https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar. You need to prepare the other versions of the driver yourself.Please place the JDBC driver of the corresponding version of MySQL in the `$KYLIN_HOME/lib/ext` directory.
+Here is a download link for the jar package of the MySQL 8.0 JDBC driver: https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar. You need to prepare other versions of the driver yourself. Please place the JDBC driver of the corresponding MySQL version in the `$KYLIN_HOME/lib/ext` directory.
 
 
 
 ### Install Kylin
 
-After setting up the environment, please refer to [Quick Start](../../../quickstart/quick_start.md) to continue.
+After setting up the environment, please refer to [Quick Start](docs/quickstart/quick_start.md) to continue.
diff --git a/website/docs/deployment/installation/platform/intro.md b/website/docs/deployment/on-premises/installation/platform/intro.md
similarity index 100%
rename from website/docs/deployment/installation/platform/intro.md
rename to website/docs/deployment/on-premises/installation/platform/intro.md
diff --git a/website/docs/deployment/installation/uninstallation.md b/website/docs/deployment/on-premises/installation/uninstallation.md
similarity index 100%
rename from website/docs/deployment/installation/uninstallation.md
rename to website/docs/deployment/on-premises/installation/uninstallation.md
diff --git a/website/docs/deployment/on-premises/intro.md b/website/docs/deployment/on-premises/intro.md
index e5c88e8783..bbaf2e29bf 100644
--- a/website/docs/deployment/on-premises/intro.md
+++ b/website/docs/deployment/on-premises/intro.md
@@ -14,4 +14,4 @@ last_update:
     date: 08/17/2022
 ---
 
-Kylin 5.0 support deploy on Hadoop.
+Kylin 5.0 supports deployment on Hadoop.
diff --git a/website/docs/deployment/on-premises/network_port_requirements.md b/website/docs/deployment/on-premises/network_port_requirements.md
index 5d409ec903..16d3c1b4b2 100644
--- a/website/docs/deployment/on-premises/network_port_requirements.md
+++ b/website/docs/deployment/on-premises/network_port_requirements.md
@@ -18,7 +18,7 @@ Kylin needs to communicate with different components. The following are the port
 
 | Component            | Port          | Function                                                     | Required |
 | -------------------- | ------------- | ------------------------------------------------------------ | -------- |
-| SSH                  | 22            | SSH to connect to the port of the virtual machine where KE is located | Y        |
+| SSH                  | 22            | SSH to connect to the port of the virtual machine where Kylin is located | Y        |
 | Kylin                | 7070          | Kylin access port                                            | Y        |
 | Kylin                | 7443          | Kylin HTTPS access port                                      | N        |
 | HDFS                 | 8020          | HDFS receives client connection RPC port                     | Y        |
diff --git a/website/docs/deployment/on-premises/prerequisite.md b/website/docs/deployment/on-premises/prerequisite.md
index 56237cf317..4b3c7981a9 100644
--- a/website/docs/deployment/on-premises/prerequisite.md
+++ b/website/docs/deployment/on-premises/prerequisite.md
@@ -36,7 +36,7 @@ Prior to installing Kylin, please check the following prerequisites are met.
 
 The following Hadoop distributions are verified to run on Kylin.
 
-- Apache Hadoop 3.2.1 (#TODO)
+- [Apache Hadoop 3.2.1](installation/platform/install_on_apache_hadoop.md)
 
 
 Kylin requires some components, please make sure each server has the following components.
@@ -60,7 +60,7 @@ java -version
 
 You can use the following command to check the JDK version of your existing environment. For example, the following figure shows JDK 8:
 
-![JDK version](images/jdk.en.png)
+![JDK version](images/jdk.png)
 
 ### <span id="account">Account Authority</span>
 
@@ -114,13 +114,13 @@ Verify the user has access to the Hadoop cluster with account `KyAdmin`. Test us
 
 A configured metastore is required for this product.
 
-We recommend using PostgreSQL 10.7 as the metastore, which is provided in our package. Please refer to [Use PostgreSQL as Metastore (Default)](./rdbms_metastore/default_metastore.md) for installation steps and details.
+We recommend using PostgreSQL 10.7 as the metastore, which is provided in our package. Please refer to [Use PostgreSQL as Metastore (Default)](./rdbms_metastore/postgresql/default_metastore.md) for installation steps and details.
 
 If you want to use your own PostgreSQL database, the supported versions are below:
 
 - PostgreSQL 9.1 or above
 
-You can also choose to use MySQL but we currently don't provide a MySQL installation package or JDBC driver. Therefore, you need to finish all the prerequisites before setting up. Please refer to [Use MySQL as Metastore](./rdbms_metastore/mysql_metastore.md) for installation steps and details. The supported MySQL database versions are below:
+You can also choose to use MySQL but we currently don't provide a MySQL installation package or JDBC driver. Therefore, you need to finish all the prerequisites before setting up. Please refer to [Use MySQL as Metastore](./rdbms_metastore/mysql/mysql_metastore.md) for installation steps and details. The supported MySQL database versions are below:
 
 - MySQL 5.1-5.7
 - MySQL 5.7 (recommended)
@@ -176,7 +176,9 @@ We recommend the following hardware configuration to install Kylin:
 
 We recommend using the following version of the Linux operating system:
 
-- (#TODO)
+- Ubuntu 18.04+ (LTS version recommended)
+- Red Hat Enterprise Linux 6.4+ or 7.x
+- CentOS 6.4+ or 7.x
 
 ### <span id="client">Recommended Client Configuration</span>
 
diff --git a/website/docs/deployment/on-premises/rdbms_metastore/mysql/install_mysql.md b/website/docs/deployment/on-premises/rdbms_metastore/mysql/install_mysql.md
index c65ddc2269..e9206ffc48 100644
--- a/website/docs/deployment/on-premises/rdbms_metastore/mysql/install_mysql.md
+++ b/website/docs/deployment/on-premises/rdbms_metastore/mysql/install_mysql.md
@@ -33,9 +33,9 @@ last_update:
 
 4. Please put the corresponding MySQL's JDBC driver to directory `$KYLIN_HOME/lib/ext`. 
 
-### <span id ="not_root">Non `root` User Installation and Configuration</span>
+### <span id ="not_root">Non-`root` User Installation and Configuration</span>
 
-The followings are the steps for a non `root` user `abc` installing MySQL 5.7 on CentOS 7(#TODO)( apply to `root` users as well).
+The following are the steps for a non-`root` user `abc` to install MySQL 5.7 on CentOS 7 (they apply to `root` users as well).
 
 1. Create a new directory `/home/abc/mysql`, locate the MySQL installation package in the directory, and execute the following command to unzip the `rpm` package:
 
diff --git a/website/docs/deployment/on-premises/rdbms_metastore/postgresql/install_postgresql.md b/website/docs/deployment/on-premises/rdbms_metastore/postgresql/install_postgresql.md
index 6d0acf19e4..3030bf0370 100644
--- a/website/docs/deployment/on-premises/rdbms_metastore/postgresql/install_postgresql.md
+++ b/website/docs/deployment/on-premises/rdbms_metastore/postgresql/install_postgresql.md
@@ -20,7 +20,7 @@ last_update:
 
 2. If using other versions of PostgreSQL, please choose a version above PostgreSQL 9.1.
 
-3. The PostgreSQL installation package currently supports installation in (#TODO) system, the correspondence is as follows:
+3. The PostgreSQL installation package currently supports installation on CentOS systems; the correspondence is as follows:
 
    - `rhel6.x86_64.rpm` -> CentOS 6
    - `rhel7.x86_64.rpm` -> CentOS 7
@@ -28,13 +28,21 @@ last_update:
 
    Please check your Linux version before choosing the installation package. You should be able to see your Linux kernel version by running `uname -a` or `cat /etc/issue`.
 
+   > Note: for packages compatible with other systems, please refer to the [PostgreSQL Website](https://www.postgresql.org/download/).
+
 4. In this section, we will go through a PostgreSQL installation and configuration on CentOS 6.
 
 
 
 ### <span id="root">`root` User Installation and Configuration</span>
 
-1. After unzipping the Kylin package, enter the root directory `postgresql` and run following commands in order to install PostgreSQL.(#TODO)
+1. After unzipping the Kylin package, enter the `sbin` directory under the package root and run the following command to download PostgreSQL.
+
+   ```shell
+   ./download_postgresql.sh
+   ```
+
+2. After unzipping the Kylin package, enter the `postgresql` directory under the package root and run the following commands to install PostgreSQL.
 
    ```shell
    rpm -ivh postgresql10-libs-10.7-1PGDG.rhel6.x86_64.rpm
@@ -42,7 +50,7 @@ last_update:
    rpm -ivh postgresql10-server-10.7-1PGDG.rhel6.x86_64.rpm
    ```
 
-2. Initialize PostgreSQL
+3. Initialize PostgreSQL
 
    The OS has Initscripts services installed. Please run:
    ```sh
@@ -55,7 +63,7 @@ last_update:
    for example: /usr/pgsql-10/bin/postgresql-10-setup initdb
    ```
 
-3. Modify two PostgreSQL configuration files, the files are in `/var/lib/pgsql/10/data/`:
+4. Modify two PostgreSQL configuration files, the files are in `/var/lib/pgsql/10/data/`:
 
    - `pg_hba.conf`: mainly used to store the authentication information of the client.
    - `postgresql.conf`
@@ -103,9 +111,9 @@ last_update:
    - `listen_addresses`: Specifies the TCP/IP addresses the server listens on. It is represented by multiple hostnames separated by commas, for instance, `listen_addresses = host1,host2,host3` or `listen_addresses = 10.1.1.1,10.1.1.2,10.1.1.3`. The special symbol `*` matches all IP addresses. You can modify the property on demand.
    - `port`: The default value is `5432`. If `5432` is taken, please replace it with an available port.
 
-4. Run `service postgresql-10 start` to launch PostgreSQL
+5. Run `service postgresql-10 start` to launch PostgreSQL
 
-5. Log in to PostgreSQL and create the database
+6. Log in to PostgreSQL and create the database
 
    **i.** Run `su - postgres` to switch to `postgres` user.
 
@@ -133,7 +141,7 @@ last_update:
 
 The following example shows Linux user `abc` installing and configuring PostgreSQL.
 
-1. Create a new directory `/home/abc/postgresql`, then unzip the PostgreSQL installation package.(#TODO)
+1. Create a new directory `/home/abc/postgresql`, then unzip the PostgreSQL installation package.
 
    ```sh
    rpm2cpio postgresql10-libs-10.7-1PGDG.rhel6.x86_64.rpm | cpio -idmv
@@ -227,4 +235,4 @@ There are two solutions:
 
 Solution 1: Make sure that the node installing PostgreSQL can access the external network, and then enter the command `yum install libicu-devel` in the terminal to download libicui18n.
 
-Solution 2: Visit the website https://pkgs.org/download/libicu and download the required packages. Please choose the appropriate version according to the system kernel, such as `libicu-4.2.1-1.el6.x86_64.rpm` for CentOS 6. Then use the command `rpm2cpio libicu-4.2.1-14.el6.x86_64.rpm | cpio -idmv` to decompress the binary package and place the decompressed content in ` $LD_LIBRARY_PATH`. If you don't know `$LD_LIBRARY_PATH`, please refer to the second step of [Non `root` User Installatio [...]
+Solution 2: Visit the website https://pkgs.org/download/libicu and download the required packages. Please choose the appropriate version according to the system kernel, such as `libicu-4.2.1-1.el6.x86_64.rpm` for CentOS 6. Then use the command `rpm2cpio libicu-4.2.1-14.el6.x86_64.rpm | cpio -idmv` to decompress the binary package and place the decompressed content in `$LD_LIBRARY_PATH`. If you don't know `$LD_LIBRARY_PATH`, please refer to the second step of [`Non root` User Installatio [...]
diff --git a/website/docs/modeling/data_modeling.md b/website/docs/modeling/data_modeling.md
index 052b77c27a..e2afa8eeb3 100755
--- a/website/docs/modeling/data_modeling.md
+++ b/website/docs/modeling/data_modeling.md
@@ -37,15 +37,15 @@ With pre-computation, the number of indexes will be determined by the dimension
 
 #### Manual modeling 
 
-In addition to intelligent modeling, Kylin also supports users to design their own models and indexes based on their business needs. Kylin provides step-by-step guidance on how to complete basic model settings, including dimensions, measures, join relationships, and indexes. For details, see [Manual modeling](../../model/manual_modeling.en.md). 
+In addition to intelligent modeling, Kylin also allows users to design their own models and indexes based on their business needs. Kylin provides step-by-step guidance on completing basic model settings, including dimensions, measures, join relationships, and indexes. For details, see [Manual modeling](manual_modeling.md).
 
 #### Advanced model design 
 
 Kylin offers various advanced features around models and indexes to help users quickly dig out the most valuable data. These features include: 
 
-- Accelerated model design: Kylin offers built-in [advanced measures](measure_design/intro.md) like count distinct and Top N to speed up modeling.  
+- Accelerated model design: Kylin offers built-in [advanced measures](model_design/measure_design/intro.md) like count distinct and Top N to speed up modeling.  
 
-For more information, see [Advanced model design](intro.md). 
+For more information, see [Advanced model design](model_design/advance_guide/intro.md). 
 
 ### Basic concepts 
 
diff --git a/website/docs/modeling/load_data/build_index.md b/website/docs/modeling/load_data/build_index.md
index 5bc92388ab..9f48fd2d50 100755
--- a/website/docs/modeling/load_data/build_index.md
+++ b/website/docs/modeling/load_data/build_index.md
@@ -14,7 +14,7 @@ last_update:
     date: 08/19/2022
 ---
 
-As the business scenario changes, some of the indexes in the model need to be retained only in latest months for saving building and storage costs. Therefore, Kyligence provides a more flexible way to build indexes since the 4.2 released.
+As business scenarios change, some indexes in a model only need to be retained for the latest months to save building and storage costs. Therefore, Kylin has provided a more flexible way to build indexes since the 5.0 release.
 
 
 ### Build Index
diff --git a/website/docs/modeling/manual_modeling.md b/website/docs/modeling/manual_modeling.md
index fc503e047c..1a8e6eead5 100644
--- a/website/docs/modeling/manual_modeling.md
+++ b/website/docs/modeling/manual_modeling.md
@@ -84,7 +84,7 @@ Kylin model consists of multiple tables and their join relations. In this articl
 
    - **Table Relationship:** Select the mapping between the foreign and primary keys: **One-to-One or Many-to-One**, or **One-to-Many or Many-to-Many**.  
 
-   - **Precompute Join Relationship**: Select whether to expand the joined tables into a flat table based on the mappings. This option is selected by default. For more information about this function and its applicable scenarios, see [Precompute the join relations](precompute_join_relations.md). 
+   - **Precompute Join Relationship**: Select whether to expand the joined tables into a flat table based on the mappings. This option is selected by default. For more information about this function and its applicable scenarios, see [Precompute the join relations](model_design/precompute_join_relations.md). 
 
    - **Join Relationship for Columns**: It includes 3 drop-down lists. The first and the third one specify the columns to be joined, and the second one defines the join relation, which is equal-join (=) by default. Join relations should meet the following requirements:  
      - Do not define more than one join relation for the same column; two tables could only be joined by the same condition for one time
@@ -146,7 +146,7 @@ P_LINEORDER LEFT JOIN PART ON P_LINEORDER.LO_PARTKEY = PART.P_PARTKEY
 
       In our example, we added revenue (LO_REVENUE in P_LINEORDER) and supply cost (LO_SUPPLYCOST in P_LINEORDER) as measures, and wanted to calculate the sum for each.  
 
-3. (Optional) To achieve complex processing and computation based on the existing columns, you can add computed columns to the model. For more information, see [Computed columns](model_design/computed_column/intro.md).
+3. (Optional) To achieve complex processing and computation based on the existing columns, you can add computed columns to the model. For more information, see [Computed columns](model_design/computed_column.md).
 
 ### Step 4: Save the model and set the loading method
 
diff --git a/website/docs/modeling/model_concepts_operations.md b/website/docs/modeling/model_concepts_operations.md
index 334491eed4..e96a2ec584 100644
--- a/website/docs/modeling/model_concepts_operations.md
+++ b/website/docs/modeling/model_concepts_operations.md
@@ -111,7 +111,7 @@ Models contain Segments and indexes. You can click model name to unfold the deta
 
 - **Overview**: Check Overview details, please refer to [Model Overview](#overview) for more.
 - **Data Features**: Check data features.
-- **Segment**: Check Segment details, please refer to [Segment Operation and Settings](load_data/segment_operation_settings.md) for more.
+- **Segment**: Check Segment details, please refer to [Segment Operation and Settings](load_data/segment_operation_settings/intro.md) for more.
 - **Index**: Review the model indexes.
   - **Index Overview**: Check index overview.
   - **Aggregate Group**: Add or check defined aggregate indexes, please refer to [Aggregate Index](model_design/aggregation_group.md) for more details.
diff --git a/website/docs/modeling/model_design/advance_guide/images/model_check.png b/website/docs/modeling/model_design/advance_guide/images/model_check.png
index b711079f1c..10a2293325 100644
Binary files a/website/docs/modeling/model_design/advance_guide/images/model_check.png and b/website/docs/modeling/model_design/advance_guide/images/model_check.png differ
diff --git a/website/docs/modeling/model_design/advance_guide/images/model_export.png b/website/docs/modeling/model_design/advance_guide/images/model_export.png
index 98be4465ed..c76d67cd91 100644
Binary files a/website/docs/modeling/model_design/advance_guide/images/model_export.png and b/website/docs/modeling/model_design/advance_guide/images/model_export.png differ
diff --git a/website/docs/modeling/model_design/advance_guide/images/model_publish.png b/website/docs/modeling/model_design/advance_guide/images/model_publish.png
index b5b6783b5d..7a20dae235 100644
Binary files a/website/docs/modeling/model_design/advance_guide/images/model_publish.png and b/website/docs/modeling/model_design/advance_guide/images/model_publish.png differ
diff --git a/website/docs/modeling/model_design/advance_guide/images/model_upload.png b/website/docs/modeling/model_design/advance_guide/images/model_upload.png
index 1db29a6367..c9b58bf7f2 100644
Binary files a/website/docs/modeling/model_design/advance_guide/images/model_upload.png and b/website/docs/modeling/model_design/advance_guide/images/model_upload.png differ
diff --git a/website/docs/modeling/model_design/advance_guide/model_metadata_managment.md b/website/docs/modeling/model_design/advance_guide/model_metadata_managment.md
index a38d356925..21cf8d336c 100644
--- a/website/docs/modeling/model_design/advance_guide/model_metadata_managment.md
+++ b/website/docs/modeling/model_design/advance_guide/model_metadata_managment.md
@@ -15,9 +15,7 @@ last_update:
 ---
 
 
-## Model Metadata Management
-
-Kylin is stateless service. All state information is stored in metadata. The model is the core asset of the KE cluster. The model metadata describes the content of the model information in detail.
+Kylin is a stateless service. All state information is stored in metadata. The model is the core asset of the Kylin cluster. The model metadata describes the model information in detail.
 
The movement of models between environments is an important part of actual production. Therefore, importing and exporting metadata is a crucial link in operation and maintenance. Kylin provides functions to import and export model metadata.
 
diff --git a/website/docs/modeling/model_design/advance_guide/multilevel_partitioning.md b/website/docs/modeling/model_design/advance_guide/multilevel_partitioning.md
index c9fb9db8ac..efce545672 100644
--- a/website/docs/modeling/model_design/advance_guide/multilevel_partitioning.md
+++ b/website/docs/modeling/model_design/advance_guide/multilevel_partitioning.md
@@ -52,7 +52,7 @@ You can also add, delete or search for sub-partition values in **Model List-...(
 
When adding sub-partition values, the system does not check their correctness and allows adding sub-partition values that do not yet exist. When querying, the value must exactly match a defined sub-partition value (case sensitive, wildcard matching is not supported). Please ensure that the added sub-partition values meet your expectations.
 
-![](./images/multilevel_partitioning_subp_value.png)
+![](images/multilevel_partitioning_subp_value.png)
 
 
 
diff --git a/website/docs/modeling/model_design/aggregation_group.md b/website/docs/modeling/model_design/aggregation_group.md
index b1fcaa5940..e3a09b5f49 100644
--- a/website/docs/modeling/model_design/aggregation_group.md
+++ b/website/docs/modeling/model_design/aggregation_group.md
@@ -36,7 +36,7 @@ To alleviate the pressure on index building, Kylin has released a series of adva
 In the **Data Asset -> Model** page, click the model name to get more information, you can click **Index**, then click **Aggregate Group** button under **+ Index** in the **Index Overview** tab to enter the  aggregate index editing page. Or you can click **+** (Add Aggregate Group) button in the **Aggregate Group** tab to enter the page. Then you can edit the aggregate index in the pop-up window shown below, and define dimensions and measures in different aggregate groups according to yo [...]
 
 
-![Edit Aggregate Index](images/agg_1.png)
+![Edit Aggregate Index](images/agg/agg_1.png)
 
 **Step 1: Dimension Setting**
 
@@ -44,7 +44,7 @@ The initial interface is to edit *Aggregation Group 1*. First you need to set th
 
 Users can then set **Mandatory Dimension**, **Hierarchy Dimension**, and **Joint Dimension** in the *Aggregate Group 1*. Dimensions under these three settings have to be included in **Include** under this aggregate group first. You can add aggregate groups as needed. After editing all the aggregate groups, click the button beside **Index Amount** on the bottom left corner, estimated index number will be calculated and displayed beside the name of the aggregate group, the total estimated  [...]
 
-![Dimension Setting](images/agg_2.png)
+![Dimension Setting](images/agg/agg_2.png)
 
 We recommend selecting frequently paired grouping dimensions and filtering dimensions into the aggregate group according to the cardinality from high to low. For example, you often query for the supplier ID `SUPPKEY` and the product ID `PARTKEY`, by adding the dimension `SUPPKEY` and dimension `PARTKEY` into the aggregate group, you can view the cardinality of these two relevant columns in **Data Asset-Data Source**. If the cardinality of `SUPPKEY` is greater than the cardinality of `PAR [...]
 
@@ -138,7 +138,7 @@ Assume a transactional aggregate index that includes transaction date, transacti
 
 
 
-### <span id="hierarchy">Hierarchy Dimension</span>
+### <span id="hierarchy">Hierarchy Dimension</span>
 
End users will usually use dimensions with hierarchical relationships, for example, country, province, and city. In this case, the hierarchical relationship can be set as a **Hierarchy Dimension**. From top to bottom, country, province and city form one-to-many relationships. These three dimensions can be grouped into three different combinations:
 
@@ -147,7 +147,7 @@ End users will usually use dimensions with hierarchical relationship, for exampl
2. group by country, province (equivalent to group by province)

3. group by country, province, city (equivalent to group by country, city or group by city)
-    ​
+    
     In the aggregate index group shown below, assume dimension A = Country, dimension B = Province and dimension C = City, then dimension ABC can be set as a hierarchy dimension. And index [A, C, D] = index [A, B, C, D],index [B, D] = index [A, B, D], thus, index [A, C, D] and index [B, D] can be pruned.
 
     ![Hierarchy Dimension](images/agg/Hierarchy-2.png)
@@ -157,8 +157,6 @@ End users will usually use dimensions with hierarchical relationship, for exampl
    ![Reduce dimension combinations with Hierarchy Dimension](images/agg/Hierarchy-3.png)
 
 
-
-
 #### Use Case of Hierarchy Dimension
 Assume a transactional aggregate index that includes dimensions transaction city `city`, transaction province `province`, transaction country `country` and payment type `pay_type`. Analysts will group transaction country, transaction province, transaction city, and payment type together to understand customer payment type preference in different geographical locations. In the example above, we recommend creating hierarchy dimensions in existing aggregate group (Country / Province / City) [...]
 
@@ -263,7 +261,7 @@ The use of aggregate groups helps to avoid index number explosion. However, in o
 
 This chapter will introduce another simple index pruning tool named *Max Dimension Combination (MDC)*, which represents the maximum number of dimensions in every index. This tool limits the dimension number in a single index, which means indexes containing too many dimensions will not be built in index building process. This tool fits well in the situation where most queries only touch no more than N dimensions, where N is the MDC threshold that is configurable.
 
-> **Note**: MDC is only available from version 4.1.0.
+> **Note**: MDC is only available from version 5.0.
 
 #### Dimensions Count in Query
 
@@ -376,7 +374,7 @@ In the aggregate index, you can set the ShardBy column, and the data will be sto
 
 > Note: The ShardBy column is applied to all custom aggregate indexes.
 
-In the navigation bar **Data Asset -> Model ** page, click the icon to the left of the specified model to expand the model for more information. You can see the **Advanced Setting** button in the **Index- Aggregate Group**.
+In the navigation bar **Data Asset -> Model** page, click the icon to the left of the specified model to expand the model for more information. You can see the **Advanced Setting** button in **Index -> Aggregate Group**.
 
 In the **Advanced Setting**, you can select the dimensions that need to be set as the ShardBy column. 
 
diff --git a/website/docs/modeling/model_design/images/agg/advanced.png b/website/docs/modeling/model_design/images/agg/advanced.png
index db0d12b1b1..0110e22219 100644
Binary files a/website/docs/modeling/model_design/images/agg/advanced.png and b/website/docs/modeling/model_design/images/agg/advanced.png differ
diff --git a/website/docs/modeling/model_design/images/agg/agg_1.png b/website/docs/modeling/model_design/images/agg/agg_1.png
new file mode 100644
index 0000000000..5fb368a8b1
Binary files /dev/null and b/website/docs/modeling/model_design/images/agg/agg_1.png differ
diff --git a/website/docs/modeling/model_design/images/agg/agg_2.png b/website/docs/modeling/model_design/images/agg/agg_2.png
new file mode 100644
index 0000000000..2f0023259b
Binary files /dev/null and b/website/docs/modeling/model_design/images/agg/agg_2.png differ
diff --git a/website/docs/modeling/model_design/images/agg/agg_index_2.png b/website/docs/modeling/model_design/images/agg/agg_index_2.png
index c373629ade..ae32a34a2c 100644
Binary files a/website/docs/modeling/model_design/images/agg/agg_index_2.png and b/website/docs/modeling/model_design/images/agg/agg_index_2.png differ
diff --git a/website/docs/modeling/model_design/images/agg/agg_measure.png b/website/docs/modeling/model_design/images/agg/agg_measure.png
index 55d77592a2..d5600ba190 100644
Binary files a/website/docs/modeling/model_design/images/agg/agg_measure.png and b/website/docs/modeling/model_design/images/agg/agg_measure.png differ
diff --git a/website/docs/modeling/model_design/images/agg_1.png b/website/docs/modeling/model_design/images/agg_1.png
deleted file mode 100644
index 6aba059e6f..0000000000
Binary files a/website/docs/modeling/model_design/images/agg_1.png and /dev/null differ
diff --git a/website/docs/modeling/model_design/images/agg_2.png b/website/docs/modeling/model_design/images/agg_2.png
deleted file mode 100644
index 45a897675d..0000000000
Binary files a/website/docs/modeling/model_design/images/agg_2.png and /dev/null differ
diff --git a/website/docs/modeling/model_design/images/agg_measure.png b/website/docs/modeling/model_design/images/agg_measure.png
deleted file mode 100644
index 55d77592a2..0000000000
Binary files a/website/docs/modeling/model_design/images/agg_measure.png and /dev/null differ
diff --git a/website/docs/modeling/model_design/images/mdc/intro_mdc.png b/website/docs/modeling/model_design/images/mdc/intro_mdc.png
index d023b095a0..1ab214937e 100644
Binary files a/website/docs/modeling/model_design/images/mdc/intro_mdc.png and b/website/docs/modeling/model_design/images/mdc/intro_mdc.png differ
diff --git a/website/docs/modeling/model_design/images/mdc/single_mdc.png b/website/docs/modeling/model_design/images/mdc/single_mdc.png
index 63b5d024dc..0a575c2640 100644
Binary files a/website/docs/modeling/model_design/images/mdc/single_mdc.png and b/website/docs/modeling/model_design/images/mdc/single_mdc.png differ
diff --git a/website/docs/modeling/model_design/images/mdc/total_mdc.png b/website/docs/modeling/model_design/images/mdc/total_mdc.png
index f4502c7d1c..37ee898e6e 100644
Binary files a/website/docs/modeling/model_design/images/mdc/total_mdc.png and b/website/docs/modeling/model_design/images/mdc/total_mdc.png differ
diff --git a/website/docs/modeling/model_design/measure_design/collect_set.md b/website/docs/modeling/model_design/measure_design/collect_set.md
index 08923f301b..7065e31e23 100644
--- a/website/docs/modeling/model_design/measure_design/collect_set.md
+++ b/website/docs/modeling/model_design/measure_design/collect_set.md
@@ -15,13 +15,13 @@ last_update:
 ---
 
 
-From Kylin 5, Kylin supports the COLLECT_SET function, which returns a set of unique elements as an array. The syntax is `COLLECT_SET(column)`. The COLLECT_ SET measure is customizable.
+From Kylin 5, Kylin supports the COLLECT_SET function, which returns a set of unique elements as an array. The syntax is `COLLECT_SET(column)`. The COLLECT_SET measure is customizable.
 
 
 
 ### Use Case
 
-Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce COLLECT_ SET measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../data_modeling.md) to understand more about the methods used in model design. 
+Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce COLLECT_SET measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../../data_modeling.md) to understand more about the methods used in model design. 
 
 We will use the fact table `SSB.P_LINEORDER`. This sample table is a mockup of transactions that can happen in an online marketplace. It has a couple of dimension and measure columns. For easy understanding, we will only use two columns: `LO_CUSTKEY` and `LO_ORDERDATE`. The table below gives an introduction of these columns.
 
@@ -54,7 +54,7 @@ Resubmit the above SQL query in the **Query -> Insight** page, and you will find
 
 ![Query Result](images/collect_result.png)
 
-If you need to create a model from the very beginning and add a COLLECT_SET measure, please add some indexes and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../data_modeling.md) to understand the method of model design.
+If you need to create a model from the very beginning and add a COLLECT_SET measure, please add some indexes and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../../data_modeling.md) to understand the method of model design.
 
In actual application scenarios, you can use the COLLECT_SET function in combination with other functions to support more analysis scenarios. For example, the following query combines the CONCAT_WS function, which concatenates the values in the order date array into a single string separated by `;`:
 
diff --git a/website/docs/modeling/model_design/measure_design/count_distinct_bitmap.md b/website/docs/modeling/model_design/measure_design/count_distinct_bitmap.md
index adf88e1e4e..322901e2a4 100644
--- a/website/docs/modeling/model_design/measure_design/count_distinct_bitmap.md
+++ b/website/docs/modeling/model_design/measure_design/count_distinct_bitmap.md
@@ -29,7 +29,7 @@ Before using the Count Distinct query, you need to clarify if the target column
 
 ### Count Distinct Precision Setting 
 
-Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce count distinct precision measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../data_modeling.md) to understand more about the methods used in model design. 
+Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce count distinct precision measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../../data_modeling.md) to understand more about the methods used in model design. 
 
 Please add a measure in the model editing page as follows. Please fill in the measure **Name**, such as `DISTINCT_CUSTOMER`, select **Function** as **COUNT_DISTINCT**, select accuracy requirement from **Function Parameter**, and finally select the target column from the drop-down list.
 
@@ -41,6 +41,6 @@ Kylin offers both approximate Count Distinct function and precise Count Distinct
 
 Once the measure is added and the model is saved, you need to go to the **Edit Aggregate Index** page, add the corresponding dimensions and measures to the appropriate aggregate group according to your business scenario, and the new aggregate index will be generated after submission. You need to build index and load data to complete the precomputation of the target column. You can check the job of Build Index in the Job Monitor page. After the index is built, you can use the **Count Dist [...]
 
-If you need to create a model from the very beginning and add a Count Distinct (Precise) measure, please add some indices and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../data_modeling.md) to understand the methods used in model design.
+If you need to create a model from the very beginning and add a Count Distinct (Precise) measure, please add some indices and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../../data_modeling.md) to understand the methods used in model design.
 
 For more information about approximate Count Distinct function, please refer to [Count Distinct (Approximate)](count_distinct_hllc.md) Introduction.
diff --git a/website/docs/modeling/model_design/measure_design/count_distinct_case_when_expr.md b/website/docs/modeling/model_design/measure_design/count_distinct_case_when_expr.md
index ff8eaf122d..ca89cc176e 100644
--- a/website/docs/modeling/model_design/measure_design/count_distinct_case_when_expr.md
+++ b/website/docs/modeling/model_design/measure_design/count_distinct_case_when_expr.md
@@ -65,7 +65,7 @@ It can be answered by indexes including `cal_dt` dimension and `count(distinct p
 ### Known Limitation
 
1. `else` can only be followed by null; constants are not supported yet, such as `case when ... then column1 else 1 end`.
-Starting from KE 4.5.4 GA version, after else can be cast(null as `type`), such as `case when ... then column1 else cast(null as double) end`.
+Starting from Kylin 5.0, the expression after `else` can be `cast(null as type)`, such as `case when ... then column1 else cast(null as double) end`.
Note that `type` should be the same as, or in the same category as, the type of `column1`;
otherwise the SQL may be invalid and report an error, or this function cannot be applied.
A category here refers to numeric types, date types, Boolean types, etc.
diff --git a/website/docs/modeling/model_design/measure_design/count_distinct_hllc.md b/website/docs/modeling/model_design/measure_design/count_distinct_hllc.md
index 8e8e90d54f..9f833f090d 100644
--- a/website/docs/modeling/model_design/measure_design/count_distinct_hllc.md
+++ b/website/docs/modeling/model_design/measure_design/count_distinct_hllc.md
@@ -30,7 +30,7 @@ In the project of Kylin 5, you can customize Count Distinct (Approximate) measur
 
 ### Prerequisite
 
-Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce approximate count distinct measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../data_modeling.md) to understand more about the methods used in model design. 
+Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce approximate count distinct measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../../data_modeling.md) to understand more about the methods used in model design. 
 
Before using a Count Distinct query, you need to check that the target column is ready. You can get measure information on the model editing page. If the desired measure has been pre-calculated with approximate Count Distinct syntax (requires `Function` to be count_distinct and `Return Type` to be hllc), then this measure is ready for Count Distinct queries. Otherwise, you need to add a new Count Distinct (Approximate) measure first.
 
@@ -46,6 +46,6 @@ Once the measure is added and the model is saved, click **Add Index** in the pop
 SELECT COUNT(DISTINCT P_LINEORDER.LO_SHIPPRIOTITY)
 FROM SSB.P_LINEORDER
 ```
-If you need to create a model from the very beginning and add a Count Distinct (Approximate) measure, please add some indices and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../data_modeling.md) to understand the method of model design.
+If you need to create a model from the very beginning and add a Count Distinct (Approximate) measure, please add some indices and load data into the model. A model won't be able to serve any query if it has no index and data. You can read this chapter [Model Design Basics](../../data_modeling.md) to understand the method of model design.
 
 More information about precise Count Distinct function, please refer to [Count Distinct (Approximate)](count_distinct_bitmap.md) Introduction.
diff --git a/website/docs/modeling/model_design/measure_design/percentile_approx.md b/website/docs/modeling/model_design/measure_design/percentile_approx.md
index 666e411e9d..45c465eb6b 100644
--- a/website/docs/modeling/model_design/measure_design/percentile_approx.md
+++ b/website/docs/modeling/model_design/measure_design/percentile_approx.md
@@ -33,7 +33,7 @@ Percentile_approx returns the value of below which a given percentage of observa
 
 ### Use Case
 
-Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce percentile_approx measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../data_modeling.md) to understand more about the methods used in model design. 
+Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce percentile_approx measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../../data_modeling.md) to understand more about the methods used in model design. 
 
We will use the fact table `SSB.P_LINEORDER`. This sample table is a mockup of transactions that can happen in an online marketplace. It has a couple of dimension and measure columns. For easy understanding, we will only use two columns: `LO_SUPPKEY` and `LO_ORDTOTALPRICE`. The table below gives an introduction to these columns.
 
diff --git a/website/docs/modeling/model_design/measure_design/topn.md b/website/docs/modeling/model_design/measure_design/topn.md
index b6510625a1..cb60ddc290 100644
--- a/website/docs/modeling/model_design/measure_design/topn.md
+++ b/website/docs/modeling/model_design/measure_design/topn.md
@@ -29,7 +29,7 @@ In the project of Kylin 5 the Top-N measure is customizable.
 
 ### Top-N Query
 
-Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce Top-N measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../data_modeling.en.md) to understand more about the methods used in model design. 
+Let’s use the project created in the chapter [Tutorial](../../../quickstart/expert_mode_tutorial.md) as an example to introduce Top-N measure settings. This project uses the SSB Dataset and needs to complete the model design and index build (including data load). A model won't be able to serve any queries if it has no index and data. You can read [Model Design Basics](../../data_modeling.md) to understand more about the methods used in model design. 
 
We will use the fact table `SSB.P_LINEORDER`. This is a mockup of transactions that can happen in an online marketplace. It has a couple of dimension and measure columns. For easy understanding, we will only use four columns: `LO_ORDERDATE`, `LO_SUPPKEY`, `LO_PARTKEY` and `LO_ORDTOTALPRICE`. The table below gives an introduction to these columns.
 
diff --git a/website/docs/monitor/images/job_diagnosis_web.png b/website/docs/monitor/images/job_diagnosis_web.png
index 5908b70e38..0020eaae3d 100644
Binary files a/website/docs/monitor/images/job_diagnosis_web.png and b/website/docs/monitor/images/job_diagnosis_web.png differ
diff --git a/website/docs/monitor/images/job_id.png b/website/docs/monitor/images/job_id.png
index 9ba244edc2..b5be342d57 100644
Binary files a/website/docs/monitor/images/job_id.png and b/website/docs/monitor/images/job_id.png differ
diff --git a/website/docs/monitor/images/job_log.png b/website/docs/monitor/images/job_log.png
index 2042902134..64eacda809 100644
Binary files a/website/docs/monitor/images/job_log.png and b/website/docs/monitor/images/job_log.png differ
diff --git a/website/docs/monitor/images/job_status.png b/website/docs/monitor/images/job_status.png
index 22f291d21a..6dff58b1e1 100644
Binary files a/website/docs/monitor/images/job_status.png and b/website/docs/monitor/images/job_status.png differ
diff --git a/website/docs/monitor/images/job_type.png b/website/docs/monitor/images/job_type.png
index 4eece63204..498e60d216 100644
Binary files a/website/docs/monitor/images/job_type.png and b/website/docs/monitor/images/job_type.png differ
diff --git a/website/docs/monitor/job_concept_settings.md b/website/docs/monitor/job_concept_settings.md
index 05f379227d..6504080065 100644
--- a/website/docs/monitor/job_concept_settings.md
+++ b/website/docs/monitor/job_concept_settings.md
@@ -118,7 +118,7 @@ Some of the elements include job steps, waiting time and executing time, log out
 
 You can modify **Email Notification** settings under **Setting -> Advanced Settings** in the navigation bar, as shown below:
 
-![Job Notification](./images/job_settings.png)
+![Job Notification](images/job_settings.png)
 
 You can fill in your email and choose to open different types of job notification.
 
diff --git a/website/docs/monitor/job_diagnosis.md b/website/docs/monitor/job_diagnosis.md
index b0d04d4551..1925238a8c 100644
--- a/website/docs/monitor/job_diagnosis.md
+++ b/website/docs/monitor/job_diagnosis.md
@@ -14,10 +14,7 @@ last_update:
     date: 08/19/2022
 ---
 
-
-## Job Diagnosis
-
- Job diagnosis may encounter various problems during execution. To help solve these problems efficiently, Kylin provides a task diagnostic function, which can package related problems' log information into a compressed package for operations or Kyligence technical supports to analyze problems and ascertain the cause.
+ Jobs may encounter various problems during execution. To help solve these problems efficiently, Kylin provides a job diagnostic function that packages the related log information into a compressed archive for operations staff or the Apache community to analyze the problem and ascertain its cause.
 
 ### View Job Execution Log On Web UI
 
@@ -50,7 +47,7 @@ You can execute ` $KYLIN_HOME/bin/diag.sh -job <jobid> ` to generate the job dia
 - `/yarn_application_log`: specifies the logs of yarn application of job. 
 - `/client`: operating system resources occupied information, hadoop version and kerberos information.
- `/monitor_metrics`: The node monitoring log of the specified task.
-- `/write_ hadoop_ conf`:`$KYLIN_HOME/write_hadoop_conf`, Hadoop configuration of the build cluster. This directory will not be available when Read/Write separation deployment is not configured.
+- `/write_hadoop_conf`: `$KYLIN_HOME/write_hadoop_conf`, the Hadoop configuration of the build cluster. This directory is only available when Read/Write separation deployment is configured.
 - file `catalog_info`: directory structure of install package.
 - file `commit_SHA1`: git-commit version.
 - file `hadoop_env`: hadoop environment information.
diff --git a/website/docs/monitor/job_exception_resolve.md b/website/docs/monitor/job_exception_resolve.md
index 46faa9e45f..48df4e02f4 100644
--- a/website/docs/monitor/job_exception_resolve.md
+++ b/website/docs/monitor/job_exception_resolve.md
@@ -28,7 +28,7 @@ Various problems may occur during the execution of building jobs which cause the
 
     1. Modify the time format of the model time partition column to be consistent with the actual time format in the data source:
 
-       Please refer to [Design a Data Model](../modeling/model_design/data_modeling.md#design) *Step 7. Set Time Partition Column and Data Filter Condition* modify the time format of the model time partition column。
+       Please refer to [Design a Data Model](../modeling/manual_modeling.md#step-4-save-the-model-and-set-the-loading-method), *Step 4. Save the model and set the loading method*, to modify the time format of the model time partition column.
 
    2. If you insist on using this format, you can disable the time partition column check by setting `kylin.engine.check-partition-col-enabled=false` in `kylin.properties`.
       Notice: Although this bypasses the time format verification, it may cause other problems. Please use it with caution.
diff --git a/website/docs/monitor/job_operations.md b/website/docs/monitor/job_operations.md
index 1e45fe6c22..cbc57bcda9 100644
--- a/website/docs/monitor/job_operations.md
+++ b/website/docs/monitor/job_operations.md
@@ -37,7 +37,7 @@ The job has the following 6 states:
 
 You can view job status information on the **Monitor -> Job** page of the navigation bar, as shown below:
 
-![Job Status](./images/job_status.png)
+![Job Status](images/job_status.png)
 
 - Label 1: Execution status.
 - Label 2: Pause status.
diff --git a/website/docs/operations/logs/audit_log.md b/website/docs/operations/logs/audit_log.md
index 5c80fda56b..3bf18f59eb 100644
--- a/website/docs/operations/logs/audit_log.md
+++ b/website/docs/operations/logs/audit_log.md
@@ -180,7 +180,7 @@ In the Kylin configuration file `kylin.properties`, there are the following conf
 
 The Audit Log is stored in the database. You can use the tools provided by Kylin to export the data within a specified time range for local backup, or attach it to a Kylin ticket when encountering problems, which helps technical support personnel locate the issue.
 
-There are two ways to execute commands on the KE node:
+There are two ways to execute commands on the Kylin node:
 
1. Use the diagnostic package command: `${KYLIN_HOME}/bin/diag.sh`
 
@@ -188,7 +188,7 @@ There are two ways to execute commands on the KE node:
 
      
 
-2. Using the AuditLogTool tool: `${KYLIN_HOME}/bin/kylin.sh io.kyligence.kap.tool.AuditLogTool -startTime ${starttime} -endTime ${endtime} -dir ${target_dir}`
+2. Using the AuditLogTool tool: `${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.tool.AuditLogTool -startTime ${starttime} -endTime ${endtime} -dir ${target_dir}`
 
   - `${starttime}` and `${endtime}` specify the range of Audit Log entries to retrieve. The format is a timestamp in milliseconds, e.g. `1579868382749`;
    - `${target_dir}` specifies the directory where your Audit Log files are stored. The generated Audit Log is stored in the `${target_dir}/${starttime}_${endtime}` file;
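For illustration only: the epoch-millisecond values that `-startTime`/`-endTime` expect can be derived from ordinary dates. A minimal Python sketch (the dates below are arbitrary examples, and `to_epoch_millis` is a local helper written for this sketch, not a Kylin API):

```python
from datetime import datetime, timezone

def to_epoch_millis(dt: datetime) -> int:
    """Convert a timezone-aware datetime to the epoch-millisecond format AuditLogTool expects."""
    return int(dt.timestamp() * 1000)

# Arbitrary example: retrieve one day of Audit Log entries.
start = to_epoch_millis(datetime(2020, 1, 24, 12, 19, 42, tzinfo=timezone.utc))
end = to_epoch_millis(datetime(2020, 1, 25, 12, 19, 42, tzinfo=timezone.utc))
print(start, end)  # 1579868382000 1579954782000
```

The resulting numbers can then be passed to the tool, e.g. `-startTime 1579868382000 -endTime 1579954782000`.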
diff --git a/website/docs/operations/system-operation/cli_tool/diagnosis.md b/website/docs/operations/system-operation/cli_tool/diagnosis.md
index 18d180cf89..5d304935a6 100644
--- a/website/docs/operations/system-operation/cli_tool/diagnosis.md
+++ b/website/docs/operations/system-operation/cli_tool/diagnosis.md
@@ -82,7 +82,7 @@ Diagnostic packages generated by scripts are stored under `$KYLIN_HOME/diag_dump
 - file `commit_SHA1`: git-commit version.
 - file `hadoop_env`: hadoop environment information.
 - file `info`: license, package and hostname.
-- file `kylin_env`:kyligence Enterprise version, operating system information, Java related information, git-commit information.
+- file `kylin_env`: Kylin version, operating system information, Java-related information, git-commit information.
 - file `time_used_info`: Time statistics of each file generated in the diagnostic package.
 
 #### Query Diagnostic Package Content
@@ -98,7 +98,7 @@ Diagnostic packages generated by scripts are stored under `$KYLIN_HOME/diag_dump
 - file `commit_SHA1`: git-commit version.
 - file `hadoop_env`: hadoop environment information.
 - file `info`: license, package and hostname.
-- file `kylin_env`:kyligence Enterprise version, operating system information, Java related information, git-commit information.
+- file `kylin_env`: Kylin version, operating system information, Java-related information, git-commit information.
 - file `time_used_info`: Time statistics of each file generated in the diagnostic package.
 
 ### Multi-Node Diagnosis
diff --git a/website/docs/query/history.md b/website/docs/query/history.md
index 103567cd02..87f51729d3 100644
--- a/website/docs/query/history.md
+++ b/website/docs/query/history.md
@@ -89,7 +89,7 @@ Each line in the picture is a query history record. The meaning of the columns a
 
 When you click on the icon to the left of a query, the execution details of the query will be displayed, as shown below:
 
-![Query Execution Detail](./images/query_history2.png)
+![Query Execution Detail](images/query_history2.png)
 
 On the left is the SQL statement; you can copy and paste it to run the query again. The fields on the right have the following meanings:
 
diff --git a/website/docs/query/insight/async_query.md b/website/docs/query/insight/async_query.md
index ea5071b391..82e23e1002 100644
--- a/website/docs/query/insight/async_query.md
+++ b/website/docs/query/insight/async_query.md
@@ -55,7 +55,7 @@ In general, the same cluster queue can be used for asynchronous query and normal
      kylin.engine.spark-conf.spark.yarn.access.hadoopFileSystems=hdfs://readcluster,hdfs://asyncquerycluster,hdfs://writecluster
     ```
 
-6. In general, the above configuration can meet the requirements. In some more advanced scenarios, you can configure spark related configurations in `kylin.properties` to achieve more fine-grained control with guidance of Kyligence expert.
+6. In general, the above configuration can meet the requirements. In some more advanced scenarios, you can tune Spark-related settings in `kylin.properties` to achieve more fine-grained control with the guidance of a Kylin expert.
 
    The configuration starts with `kylin.query.async-query.spark-conf`, as shown below:
 
diff --git a/website/docs/query/insight/insight.md b/website/docs/query/insight/insight.md
index 935ee2b71f..7ead86fc08 100644
--- a/website/docs/query/insight/insight.md
+++ b/website/docs/query/insight/insight.md
@@ -92,7 +92,7 @@ Support chart type: Line Chart, Bar Chart, Pie Chart.
 
 ### Other ways to execute SQL queries
 
-- [Integration with BI tools](#TODO)
+- [Integration with BI tools](../../integration/intro.md)
 
 ### Notices
 
diff --git a/website/docs/query/insight/operator_function/function/conditional_function.md b/website/docs/query/insight/operator_function/function/conditional_function.md
index 9cf9d64b4a..c72d5b7b0c 100644
--- a/website/docs/query/insight/operator_function/function/conditional_function.md
+++ b/website/docs/query/insight/operator_function/function/conditional_function.md
@@ -20,6 +20,6 @@ last_update:
 | CASE WHEN condition1 THEN result1 WHEN conditionN THEN resultN ELSE resultZ END | Searched case                                                | `CASE WHEN OPS_REGION='Beijing' THEN 'BJ' WHEN OPS_REGION='Shanghai' THEN 'SH' WHEN OPS_REGION='Hongkong' THEN 'HK' END FROM KYLIN_SALES`<br /> = HK SH BJ | ✔️            | ✔️              | ✔️                          | ✔️                         |
 | NULLIF(value, value)                                         | Return NULL if the values are the same. Otherwise, return the first value. | `NULLIF(5,5)`<br /> = null                                   | ✔️            | ✔️              | ✔️                          |  ✔️                         |
 | COALESCE(value, value [, value ]*)                           | Return the first not null value.                 | `COALESCE(NULL,NULL,5)`<br /> = 5                            | ✔️            | ✔️              | ✔️                          |  ✔️                          |
-| IFNULL(value1, value2)                                       | Return value2 if value1 is NULL. Otherwise, return value1. | `IFNULL('kyligence','apache')`<br /> = 'kyligence'  | ✔️        | ✔️        | ✔️                | ✔️              |
-| ISNULL(value)                                                | Return true if value is NULL. Otherwise, return false. | `ISNULL('kyligence')` <br /> = false                   | ✔️        | ✔️        | ✔️                |  ✔️                |
+| IFNULL(value1, value2)                                       | Return value2 if value1 is NULL. Otherwise, return value1. | `IFNULL('kylin','apache')`<br /> = 'kylin'  | ✔️        | ✔️        | ✔️                | ✔️              |
+| ISNULL(value)                                                | Return true if value is NULL. Otherwise, return false. | `ISNULL('kylin')` <br /> = false                   | ✔️        | ✔️        | ✔️                |  ✔️                |
 | NVL(value1, value2)                                          | Return value2 if value1 is NULL. Otherwise, return value1. Value1, value2 must have same data type. | `NVL('kylin','apache')`<br /> = 'kylin'  | ✔️        | ✔️        | ✔️                | ✔️               |
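To make the NULL-handling semantics above concrete, here is a small Python sketch (using `None` for SQL NULL; the function names are local helpers written for this illustration, not Kylin APIs):

```python
def nullif(a, b):
    """NULLIF: NULL if the two values are equal, otherwise the first value."""
    return None if a == b else a

def coalesce(*values):
    """COALESCE: the first non-NULL value, or NULL if all values are NULL."""
    return next((v for v in values if v is not None), None)

def ifnull(a, b):
    """IFNULL / NVL: the second value if the first is NULL, otherwise the first."""
    return b if a is None else a

print(nullif(5, 5))               # None (SQL NULL)
print(coalesce(None, None, 5))    # 5
print(ifnull('kylin', 'apache'))  # kylin
```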
diff --git a/website/docs/query/insight/operator_function/function/string_function.md b/website/docs/query/insight/operator_function/function/string_function.md
index a7208eed5d..05329b9d66 100644
--- a/website/docs/query/insight/operator_function/function/string_function.md
+++ b/website/docs/query/insight/operator_function/function/string_function.md
@@ -18,15 +18,15 @@ last_update:
 | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------ | -------------- | -------------------------- | ---------------------------- |
 | CHAR_LENGTH(string)                                          | Returns the number of characters in a character string       | `CHAR_LENGTH('Kyligence')`<br /> = 9                         | ✔️            | ✔️              | ✔️                          | ✔️                            |
 | CHARACTER_LENGTH(string)                                     | As CHAR_LENGTH(*string*)                                     | ` CHARACTER_LENGTH('Kyligence')`<br /> = 9                   | ✔️            | ✔️              | ✔️                          | ✔️                          |
-| UPPER(string)                                                | Returns a character string converted to upper case           | ` UPPER('Kyligence')`<br /> = KYLIGENCE                      | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| LOWER(string)                                                | Returns a character string converted to lower case           | ` LOWER('Kyligence')`<br /> = kyligence                      | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| POSITION(string1 IN string2)                                 | Returns the position of the first occurrence of *string1* in *string2* | ` POSITION('Kyli' IN 'Kyligence')`<br /> = 1                 | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| TRIM( { BOTH \ LEADING\ TRAILING } string1 FROM string2)     | Removes the longest string containing only the characters in *string1* from the both ends/start/end of *string1* | Example1: <br />` TRIM(BOTH '6' FROM '666Kyligence66')`<br /> = Kyligence<br /><br />Example 2: <br />` TRIM(LEADING '6' FROM '666Kyligence66')`<br /> = Kyligence66<br /><br />Example 3: <br />` TRIM(TRAILING '6' FROM '666Kyligence66')`<br /> = 666Kyligence | ✔️            | ✔️              | ✔️                 [...]
-| OVERLAY(string1 PLACING string2 FROM integer [ FOR integer2 ]) | Replaces a substring of *string1* with *string2*  starting at the integer bit | `OVERLAY('666' placing 'KYLIGENCE' FROM 2 for 2)`<br/>= 6KYLIGENCE | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| SUBSTRING(string FROM integer)                               | Returns a substring of a character string starting at a given point | ` SUBSTRING('Kyligence' FROM 5)`<br /> = gence               | ✔️            |  ✔️              |  ✔️                           |  ✔️                            |
-| SUBSTRING(string FROM integer1 FOR integer2)                 | Returns a substring of a character string starting at a given point with a given length | ` SUBSTRING('Kyligence' from 5 for 2)`<br /> = ge            | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| INITCAP(string)                                              | Returns *string* with the first letter of each word converter to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters. | ` INITCAP('kyligence')`<br /> = Kyligence                    | ✔️            | ✔️              | ✔️                          | ✔️                            |
-| REPLACE(string, search, replacement)                         | Returns a string in which all the occurrences of *search* in *string* are replaced with *replacement*; if *replacement* is the empty string, the occurrences of *search* are removed | ` REPLACE('Kyligence','Kyli','Kyliiiiiii')`<br /> = Kyliiiiiiigence | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| UPPER(string)                                                | Returns a character string converted to upper case           | ` UPPER('Kylin')`<br /> = KYLIN                      | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| LOWER(string)                                                | Returns a character string converted to lower case           | ` LOWER('Kylin')`<br /> = kylin                      | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| POSITION(string1 IN string2)                                 | Returns the position of the first occurrence of *string1* in *string2* | ` POSITION('Kyli' IN 'Kylin')`<br /> = 1                 | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| TRIM( { BOTH \ LEADING\ TRAILING } string1 FROM string2)     | Removes the longest string containing only the characters in *string1* from both ends/the start/the end of *string2* | Example1: <br />` TRIM(BOTH '6' FROM '666Kylin66')`<br /> = Kylin<br /><br />Example 2: <br />` TRIM(LEADING '6' FROM '666Kylin66')`<br /> = Kylin66<br /><br />Example 3: <br />` TRIM(TRAILING '6' FROM '666Kylin66')`<br /> = 666Kylin | ✔️            | ✔️              | ✔️                          | ❌            [...]
+| OVERLAY(string1 PLACING string2 FROM integer [ FOR integer2 ]) | Replaces the substring of *string1* starting at position *integer* (spanning *integer2* characters) with *string2* | `OVERLAY('666' placing 'KYLIN' FROM 2 for 2)`<br/>= 6KYLIN | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| SUBSTRING(string FROM integer)                               | Returns a substring of a character string starting at a given point | ` SUBSTRING('Kylin' FROM 5)`<br /> = n               | ✔️            |  ✔️              |  ✔️                           |  ✔️                            |
+| SUBSTRING(string FROM integer1 FOR integer2)                 | Returns a substring of a character string starting at a given point with a given length | ` SUBSTRING('Kylin' from 5 for 2)`<br /> = n            | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| INITCAP(string)                                              | Returns *string* with the first letter of each word converted to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters. | ` INITCAP('kylin')`<br /> = Kylin                    | ✔️            | ✔️              | ✔️                          | ✔️                            |
+| REPLACE(string, search, replacement)                         | Returns a string in which all the occurrences of *search* in *string* are replaced with *replacement*; if *replacement* is the empty string, the occurrences of *search* are removed | ` REPLACE('Kylin','Kyli','Kyliiiiiii')`<br /> = Kyliiiiiiin | ✔️            | ✔️              | ✔️                          | ✔️                            |
 | BASE64(bin)                                                  | Converts the argument from a binary `bin` to a base 64 string | ` BASE64('Spark SQL')`<br /> = U3BhcmsgU1FM                  | ✔️            | ✔️              | ✔️                          | ✔️                            |
 | DECODE(bin, charset)                                         | Decodes the first argument using the second argument character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null | ` DECODE(ENCODE('abc', 'utf-8'), 'utf-8')`<br /> = abc       | ✔️            | ✔️              | ✔️                          | ✔️                            |
 | ENCODE(str, charset)                                         | Encodes the first argument using the second argument character set(one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null | ` ENCODE('abc', 'utf-8')`<br /> = `[B@2b442236`                        | ✔️            | ✔️              | ❌                          | ❌                            |
@@ -45,6 +45,6 @@ last_update:
 | CHR(str)                             | Converts an ASCII code to the corresponding character| ` CHR(97)` = a| ✔️ | ❌| ✔️              | ✔️              |
 | SPACE(len)                             | Generates a string of `len` consecutive spaces | space(2) =`  ` | ✔️ | ❌| ❌                | ❌                |
 | SPLIT_PART(str, separator, index)  | Splits `str` with `separator` and returns the `index`-th token. `index` counts from 1; when `index` is negative, tokens are counted from the right. | `split_part('a-b-c', '-', 1)` = a, <br /> `split_part('a-b-c', '-', -1)` = c,| ✔️ | ❌| ✔️              | ✔️              |
-| CONCAT(str1, str2) | Concatenate the strings `str1` and `str2` into one string    | `concat('Kyli', 'gence') `= Kyligence                        | ✔️ | ✔️ | ✔️              | ✔️              |
+| CONCAT(str1, str2) | Concatenate the strings `str1` and `str2` into one string    | `concat('Kyl', 'in') `= Kylin                        | ✔️ | ✔️ | ✔️              | ✔️              |
 | REPEAT(str,n)| Repeat `str` `n` times and return string. When querying the model, `str` supports passing in constants, columns and expressions, and `n` only supports passing in constants | `repeat('kylin',2)` ='kylinkylin' | ✔️ | ✔️ | ✔️ | ✔️ |
 | LEFT(str,n)| Return the `n` characters from the left of `str`. When querying the model, `str` supports passing in constants, columns, and `n` only supports passing in constants | `left('kylin',2)` ='ky' | ✔️ | ✔️ | ✔️ | ✔️ |
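The 1-based and negative `index` semantics of `SPLIT_PART` can be mimicked in plain Python; a sketch for illustration (`split_part` here is a local helper, not the Kylin function itself):

```python
def split_part(s: str, sep: str, index: int) -> str:
    """SPLIT_PART: 1-based token index; a negative index counts tokens from the right."""
    tokens = s.split(sep)
    # Map the 1-based (or negative) SQL index onto Python's 0-based list index.
    i = index - 1 if index > 0 else index
    return tokens[i]

print(split_part('a-b-c', '-', 1))   # a
print(split_part('a-b-c', '-', -1))  # c
```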
diff --git a/website/docs/query/insight/operator_function/function/window_function.md b/website/docs/query/insight/operator_function/function/window_function.md
index 737ab3cf2a..84f362e0ae 100644
--- a/website/docs/query/insight/operator_function/function/window_function.md
+++ b/website/docs/query/insight/operator_function/function/window_function.md
@@ -33,7 +33,7 @@ You can use window function to simplify the query process and analyze more compl
 
 ### Examples of window functions
 
-And then we'll illustrate the usage of every single window function with table *P_LINEORDER* in the [sample dataset](../../../../../Get-to-Know-Kyligence-Enterprise/quickstart/sample_dataset.en.md), where you may also find the fields and corresponding descriptions.
+Next, we'll illustrate the usage of each window function with the table *P_LINEORDER* in the [sample dataset](../../../../quickstart/sample_dataset.md), where you may also find the fields and corresponding descriptions.
 
 - **ROW_NUMBER**
 
diff --git a/website/docs/query/pushdown/pushdown_to_embedded_spark.md b/website/docs/query/pushdown/pushdown_to_embedded_spark.md
index 22453ab46f..f18ba7d91d 100644
--- a/website/docs/query/pushdown/pushdown_to_embedded_spark.md
+++ b/website/docs/query/pushdown/pushdown_to_embedded_spark.md
@@ -15,7 +15,7 @@ last_update:
 ---
 
 
-Kylin uses pre-calculation instead of online calculation to achieve sub-second query latency on big data. In general, the model with pre-calculated data is able to serve the most frequently-used queries. But if a query is beyond the model's definition, the system will route it to the Kyligence smart pushdown engine. The embedded pushdown engine is Spark SQL.
+Kylin uses pre-calculation instead of online calculation to achieve sub-second query latency on big data. In general, the model with pre-calculated data is able to serve the most frequently-used queries. But if a query is beyond the model's definition, the system will route it to the Kylin smart pushdown engine. The embedded pushdown engine is Spark SQL.
 
 > **Note**: In order to ensure data consistency, query cache is not available in pushdown.
 
diff --git a/website/docs/quickstart/images/query_result.png b/website/docs/quickstart/images/query_result.png
index e07f5d2454..8eb6d4cb1b 100644
Binary files a/website/docs/quickstart/images/query_result.png and b/website/docs/quickstart/images/query_result.png differ
diff --git a/website/docs/restapi/error_code.md b/website/docs/restapi/error_code.md
index a5fa4cdc08..6d16682ee4 100644
--- a/website/docs/restapi/error_code.md
+++ b/website/docs/restapi/error_code.md
@@ -111,7 +111,7 @@ If an error occurs when calling the Kylin API or using the built-in tools, you c
 | KE-010043208 | The entered parameter value is invalid. The parameter value must be a non-negative <br />integer. Please check and try again. |
 | KE-010043209 | The entered parameter value is invalid. Only support specific values at the moment. Please <br />check and try again. |
 | KE-010043210 | The parameter can't be empty. Please enter the time partition column format. |
-| KE-010043211 | The type of the time partition column is invalid. Please enter the supported format, refer <br />to the [user manual](#TODO). |
+| KE-010043211 | The type of the time partition column is invalid. Please enter the supported format, refer <br />to the [user manual](../modeling/data_modeling.md). |
 | KE-010043212 | The parameter can't be empty. Please enter layout(s) id.     |
 | KE-010043213 | Can't find layout. Please check and try again.               |
 | KE-010043214 | Can't refresh the value, the time units are only supported in d (days), h (hours), <br />or m (minutes). Please check and try again. |
diff --git a/website/docs/restapi/model_api/model_management_api.md b/website/docs/restapi/model_api/model_management_api.md
index 23b6ed4edb..9f2b4dcbcc 100644
--- a/website/docs/restapi/model_api/model_management_api.md
+++ b/website/docs/restapi/model_api/model_management_api.md
@@ -89,7 +89,7 @@ last_update:
       - `primary_key` - `required` `string[]`, primary key
       - `simplified_non_equi_join_conditions` -  `optional` `JSON Object`, non-equivalent join conditions
 
-        (note1: The support of this settings should have 'Support History Table' enabled in advance. Seeing [Slowly Changing Dimension](../../modeling/model_design/slowly_changing_dimension.md))
+        (note1: This setting requires 'Support History Table' to be enabled in advance. See [Slowly Changing Dimension](../../modeling/model_design/slowly_changing_dimension.md))
 
         (note2: Join relationships >= and < must be used in pairs, and the same column must be joined in both conditions)
 
diff --git a/website/sidebars.js b/website/sidebars.js
index 81f87a36ec..b6fc750bbb 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -140,6 +140,38 @@ const sidebars = {
                                 },
                             ]
                         },
+                        {
+                            type: 'category',
+                            label: 'Install and Uninstall',
+                            link: {
+                                type: 'doc',
+                                id: 'deployment/on-premises/installation/intro',
+                            },
+                            items: [
+                                {
+                                    type: 'category',
+                                    label: 'Install On Platforms',
+                                    link: {
+                                        type: 'doc',
+                                        id: 'deployment/on-premises/installation/platform/intro',
+                                    },
+                                    items: [
+                                        {
+                                            type: 'doc',
+                                            id: 'deployment/on-premises/installation/platform/install_on_apache_hadoop',
+                                        },
+                                    ],
+                                },
+                                {
+                                    type: 'doc',
+                                    id: 'deployment/on-premises/installation/uninstallation',
+                                },
+                                {
+                                    type: 'doc',
+                                    id: 'deployment/on-premises/installation/install_validation',
+                                },
+                            ],
+                        },
                     ],
                 },
                 {
@@ -194,38 +226,6 @@ const sidebars = {
                         },
                     ],
                 },
-                {
-                    type: 'category',
-                    label: 'Install and Uninstall',
-                    link: {
-                        type: 'doc',
-                        id: 'deployment/installation/intro',
-                    },
-                    items: [
-                        {
-                            type: 'category',
-                            label: 'Install On Platforms',
-                            link: {
-                                type: 'doc',
-                                id: 'deployment/installation/platform/intro',
-                            },
-                            items: [
-                                {
-                                    type: 'doc',
-                                    id: 'deployment/installation/platform/install_on_apache_hadoop',
-                                },
-                            ],
-                        },
-                        {
-                            type: 'doc',
-                            id: 'deployment/installation/uninstallation',
-                        },
-                        {
-                            type: 'doc',
-                            id: 'deployment/installation/install_validation',
-                        },
-                    ],
-                },
             ],
         },
         {
@@ -442,6 +442,24 @@ const sidebars = {
                 },
             ],
         },
+        {
+            type: 'category',
+            label: 'Datasource',
+            link: {
+                type: 'doc',
+                id: 'datasource/intro',
+            },
+            items: [
+                {
+                    type: 'doc',
+                    id: 'datasource/import_hive'
+                },
+                {
+                    type: 'doc',
+                    id: 'datasource/data_sampling'
+                },
+            ],
+        },
         {
             type: 'category',
             label: 'Modeling',