Posted to commits@doris.apache.org by yi...@apache.org on 2022/10/26 06:56:53 UTC

[doris] branch master updated: [typo](docs)fix docs 404 link (#13677)

This is an automated email from the ASF dual-hosted git repository.

yiguolei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new c5559877b4 [typo](docs)fix docs 404 link (#13677)
c5559877b4 is described below

commit c5559877b42a5d4fbc5733478fb78734c456641a
Author: jiafeng.zhang <zh...@gmail.com>
AuthorDate: Wed Oct 26 14:56:47 2022 +0800

    [typo](docs)fix docs 404 link (#13677)
---
 .../release-and-verify/release-doris-manager.md    |  2 +-
 .../community/release-and-verify/release-verify.md |  6 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 .../docs/admin-manual/data-admin/delete-recover.md |  2 +-
 .../http-actions/fe/table-schema-action.md         |  2 +-
 .../admin-manual/maint-monitor/disk-capacity.md    |  2 +-
 .../maint-monitor/metadata-operation.md            |  2 +-
 docs/en/docs/advanced/alter-table/replace-table.md |  4 +-
 docs/en/docs/advanced/alter-table/schema-change.md |  2 +-
 docs/en/docs/data-operate/export/outfile.md        |  2 +-
 .../import/import-scenes/external-storage-load.md  |  2 +-
 .../data-operate/import/import-scenes/jdbc-load.md |  2 +-
 docs/en/docs/data-table/basic-usage.md             | 16 ++---
 docs/en/docs/ecosystem/doris-manager/space-list.md |  6 +-
 .../ecosystem/external-table/hive-bitmap-udf.md    |  4 +-
 .../docs/ecosystem/external-table/multi-catalog.md |  2 +-
 docs/en/docs/ecosystem/logstash.md                 |  4 +-
 docs/en/docs/install/install-deploy.md             |  2 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |  2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |  4 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |  2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |  2 +-
 .../Load/BROKER-LOAD.md                            | 18 +++---
 .../Load/CREATE-SYNC-JOB.md                        |  2 +-
 .../Load/STREAM-LOAD.md                            |  8 +--
 .../SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md          |  2 +-
 .../sql-reference/Show-Statements/SHOW-STATUS.md   |  3 -
 docs/sidebars.json                                 | 58 ++++++++---------
 .../how-to-contribute/how-to-contribute.md         |  2 +-
 .../release-and-verify/release-prepare.md          |  2 +-
 .../community/release-and-verify/release-verify.md |  6 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 .../docs/admin-manual/data-admin/delete-recover.md |  2 +-
 .../http-actions/fe/table-schema-action.md         |  2 +-
 .../maint-monitor/tablet-repair-and-balance.md     |  2 +-
 .../docs/advanced/alter-table/replace-table.md     |  4 +-
 docs/zh-CN/docs/data-operate/export/outfile.md     |  2 +-
 .../import/import-scenes/external-storage-load.md  |  2 +-
 .../data-operate/import/import-scenes/jdbc-load.md |  2 +-
 docs/zh-CN/docs/data-table/basic-usage.md          | 18 +++---
 docs/zh-CN/docs/data-table/hit-the-rollup.md       |  2 +-
 .../docs/ecosystem/doris-manager/space-list.md     |  6 +-
 .../ecosystem/external-table/hive-bitmap-udf.md    |  4 +-
 .../docs/ecosystem/external-table/multi-catalog.md |  2 +-
 docs/zh-CN/docs/ecosystem/logstash.md              |  4 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |  2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |  2 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |  2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |  2 +-
 .../Load/BROKER-LOAD.md                            | 20 +++---
 .../SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md          |  2 +-
 .../sql-reference/Show-Statements/SHOW-STATUS.md   | 72 ++++++++++++++++++++++
 52 files changed, 200 insertions(+), 131 deletions(-)

diff --git a/docs/en/community/release-and-verify/release-doris-manager.md b/docs/en/community/release-and-verify/release-doris-manager.md
index 76a2f489cf..7898e945b7 100644
--- a/docs/en/community/release-and-verify/release-doris-manager.md
+++ b/docs/en/community/release-and-verify/release-doris-manager.md
@@ -299,4 +299,4 @@ xxx
 
 ## Finish publishing
 
-Please refer to the [Completing the Release](./release-complete) documentation to complete all release processes.
+Please refer to the [Completing the Release](../release-complete) documentation to complete all release processes.
diff --git a/docs/en/community/release-and-verify/release-verify.md b/docs/en/community/release-and-verify/release-verify.md
index 6b5f768d60..cf34406420 100644
--- a/docs/en/community/release-and-verify/release-verify.md
+++ b/docs/en/community/release-and-verify/release-verify.md
@@ -95,6 +95,6 @@ If invalid is 0, then the validation passes.
 
 Please see the compilation documentation of each component to verify the compilation.
 
-* For Doris Core, see [compilation documentation](../../docs/install/source-install/compilation)
-* Flink Doris Connector, see [compilation documentation](../../docs/ecosystem/flink-doris-connector)
-* Spark Doris Connector, see [compilation documentation](../../docs/ecosystem/spark-doris-connector)
+* For Doris Core, see [compilation documentation](/docs/install/source-install/compilation)
+* Flink Doris Connector, see [compilation documentation](/docs/ecosystem/flink-doris-connector)
+* Spark Doris Connector, see [compilation documentation](/docs/ecosystem/spark-doris-connector)
diff --git a/docs/en/docs/admin-manual/cluster-management/elastic-expansion.md b/docs/en/docs/admin-manual/cluster-management/elastic-expansion.md
index c87cde3e15..52355152d9 100644
--- a/docs/en/docs/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/en/docs/admin-manual/cluster-management/elastic-expansion.md
@@ -106,7 +106,7 @@ You can also view the BE node through the front-end page connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../maint-monitor/tablet-repair-and-balance).
+The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../../maint-monitor/tablet-repair-and-balance).
 
 ### Add BE nodes
 
diff --git a/docs/en/docs/admin-manual/data-admin/delete-recover.md b/docs/en/docs/admin-manual/data-admin/delete-recover.md
index e9944ae9cd..93ccf18f2b 100644
--- a/docs/en/docs/admin-manual/data-admin/delete-recover.md
+++ b/docs/en/docs/admin-manual/data-admin/delete-recover.md
@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
 
 ## More Help
 
-For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) command manual, You can also type `HELP RECOVER` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) command manual, You can also type `HELP RECOVER` on the MySql client command line for more help.
diff --git a/docs/en/docs/admin-manual/http-actions/fe/table-schema-action.md b/docs/en/docs/admin-manual/http-actions/fe/table-schema-action.md
index 4cdb7528dd..62b6a457b3 100644
--- a/docs/en/docs/admin-manual/http-actions/fe/table-schema-action.md
+++ b/docs/en/docs/admin-manual/http-actions/fe/table-schema-action.md
@@ -97,7 +97,7 @@ None
 	"count": 0
 }
 ```
-Note: The difference is that the `http` method returns more `aggregation_type` fields than the `http v2` method. The `http v2` is enabled by setting `enable_http_server_v2`. For detailed parameter descriptions, see [fe parameter settings](../../config/fe-config)
+Note: The difference is that the `http` method returns more `aggregation_type` fields than the `http v2` method. The `http v2` is enabled by setting `enable_http_server_v2`. For detailed parameter descriptions, see [fe parameter settings](../../../config/fe-config)
 
 ## Examples
 
diff --git a/docs/en/docs/admin-manual/maint-monitor/disk-capacity.md b/docs/en/docs/admin-manual/maint-monitor/disk-capacity.md
index 59412f846b..73da70e49d 100644
--- a/docs/en/docs/admin-manual/maint-monitor/disk-capacity.md
+++ b/docs/en/docs/admin-manual/maint-monitor/disk-capacity.md
@@ -127,7 +127,7 @@ When the disk capacity is higher than High Watermark or even Flood Stage, many o
     * snapshot/: Snapshot files in the snapshot directory. 
     * trash/ Trash files in the trash directory. 
 
-    **This operation will affect [Restore data from BE Recycle Bin](./tablet-restore-tool.md).**
+    **This operation will affect [Restore data from BE Recycle Bin](../../tablet-restore-tool).**
 
     If the BE can still be started, you can use `ADMIN CLEAN TRASH ON(BackendHost:BackendHeartBeatPort);` to actively clean up temporary files. **all trash files** and expired snapshot files will be cleaned up, **This will affect the operation of restoring data from the trash bin**.
 
diff --git a/docs/en/docs/admin-manual/maint-monitor/metadata-operation.md b/docs/en/docs/admin-manual/maint-monitor/metadata-operation.md
index bc2439ff58..1b7d536380 100644
--- a/docs/en/docs/admin-manual/maint-monitor/metadata-operation.md
+++ b/docs/en/docs/admin-manual/maint-monitor/metadata-operation.md
@@ -32,7 +32,7 @@ For the time being, read the [Doris metadata design document](/community/design/
 
 ## Important tips
 
-* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../admin-manual/cluster-management/upgrade).
+* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../../admin-manual/cluster-management/upgrade).
 
 ## Metadata catalog structure
 
diff --git a/docs/en/docs/advanced/alter-table/replace-table.md b/docs/en/docs/advanced/alter-table/replace-table.md
index ad1fc22b31..21fd3b95d0 100644
--- a/docs/en/docs/advanced/alter-table/replace-table.md
+++ b/docs/en/docs/advanced/alter-table/replace-table.md
@@ -29,7 +29,7 @@ under the License.
 In version 0.14, Doris supports atomic replacement of two tables.
 This operation only applies to OLAP tables.
 
-For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition)
+For partition level replacement operations, please refer to [Temporary Partition Document](../../partition/table-temp-partition)
 
 ## Syntax
 
@@ -69,4 +69,4 @@ If `swap` is `false`, the operation is as follows:
 
 1. Atomic Overwrite Operation
 
-    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
+    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
diff --git a/docs/en/docs/advanced/alter-table/schema-change.md b/docs/en/docs/advanced/alter-table/schema-change.md
index 03dd656b3f..0503dac489 100644
--- a/docs/en/docs/advanced/alter-table/schema-change.md
+++ b/docs/en/docs/advanced/alter-table/schema-change.md
@@ -283,5 +283,5 @@ SHOW ALTER TABLE COLUMN\G;
 
 ## More Help
 
-For more detailed syntax and best practices used by Schema Change, see [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) command manual, you can also enter `HELP ALTER TABLE COLUMN` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by Schema Change, see [ALTER TABLE COLUMN](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) command manual, you can also enter `HELP ALTER TABLE COLUMN` in the MySql client command line for more help information.
 
diff --git a/docs/en/docs/data-operate/export/outfile.md b/docs/en/docs/data-operate/export/outfile.md
index 3f2f8d6dfa..77447ba060 100644
--- a/docs/en/docs/data-operate/export/outfile.md
+++ b/docs/en/docs/data-operate/export/outfile.md
@@ -163,4 +163,4 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = Open broker writer failed ...
 
 ## More Help
 
-For more detailed syntax and best practices for using OUTFILE, please refer to the [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE) command manual, you can also More help information can be obtained by typing `HELP OUTFILE` at the command line of the MySql client.
+For more detailed syntax and best practices for using OUTFILE, please refer to the [OUTFILE](../../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE) command manual, you can also More help information can be obtained by typing `HELP OUTFILE` at the command line of the MySql client.
diff --git a/docs/en/docs/data-operate/import/import-scenes/external-storage-load.md b/docs/en/docs/data-operate/import/import-scenes/external-storage-load.md
index 9c9358d12f..0074c591f3 100644
--- a/docs/en/docs/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/en/docs/data-operate/import/import-scenes/external-storage-load.md
@@ -82,7 +82,7 @@ Hdfs load creates an import statement. The import method is basically the same a
 
 3. Check import status
 
-   Broker load is an asynchronous import method. The specific import results can be accessed through [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command to view
+   Broker load is an asynchronous import method. The specific import results can be accessed through [SHOW LOAD](../../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command to view
    
    ```
    mysql> show load order by createtime desc limit 1\G;
diff --git a/docs/en/docs/data-operate/import/import-scenes/jdbc-load.md b/docs/en/docs/data-operate/import/import-scenes/jdbc-load.md
index 45c743dc6d..4dd61e18a6 100644
--- a/docs/en/docs/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/en/docs/data-operate/import/import-scenes/jdbc-load.md
@@ -160,4 +160,4 @@ Please note the following:
 
    As mentioned earlier, we recommend that when using INSERT to import data, use the "batch" method to import, rather than a single insert.
 
-   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity), the idempotency and atomicity of operations can be guaranteed, and the data will not be lost or heavy in the end. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) document.
+   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](../load-atomicity), the idempotency and atomicity of operations can be guaranteed, and the data will not be lost or heavy in the end. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) document.
diff --git a/docs/en/docs/data-table/basic-usage.md b/docs/en/docs/data-table/basic-usage.md
index 430cf6dbaf..069fceaa12 100644
--- a/docs/en/docs/data-table/basic-usage.md
+++ b/docs/en/docs/data-table/basic-usage.md
@@ -43,7 +43,7 @@ Doris has built-in root and admin users, and the password is empty by default.
 >
 >admin user has ADMIN_PRIV and GRANT_PRIV privileges
 >
->For specific instructions on permissions, please refer to [Permission Management](/docs/admin-manual/privilege-ldap/user-privilege)
+>For specific instructions on permissions, please refer to [Permission Management](../../admin-manual/privilege-ldap/user-privilege)
 
 After starting the Doris program, you can connect to the Doris cluster through root or admin users.
 Use the following command to log in to Doris:
@@ -107,7 +107,7 @@ CREATE DATABASE example_db;
 >
 > If you don't know the full name of the command, you can use "help command a field" for fuzzy query. If you type `HELP CREATE`, you can match commands like `CREATE DATABASE', `CREATE TABLE', `CREATE USER', etc.
 
-After the database is created, you can view the database information through [SHOW DATABASES](../sql-manual/sql-reference/Show-Statements/SHOW-DATABASES).
+After the database is created, you can view the database information through [SHOW DATABASES](../../sql-manual/sql-reference/Show-Statements/SHOW-DATABASES).
 
 ```sql
 MySQL> SHOW DATABASES;
@@ -142,7 +142,7 @@ mysql> USE example_db;
 Database changed
 ```
 
-Doris supports [composite partition and single partition](./data-partition)  two table building methods. The following takes the aggregation model as an example to demonstrate how to create two partitioned data tables.
+Doris supports [composite partition and single partition](../data-partition)  two table building methods. The following takes the aggregation model as an example to demonstrate how to create two partitioned data tables.
 
 #### Single partition
 
@@ -406,7 +406,7 @@ MySQL> SELECT SUM(pv) FROM table2 WHERE siteid IN (SELECT siteid FROM table1 WHE
 
 ## Table Structure Change
 
-Use the [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) command to modify the table Schema, including the following changes.
+Use the [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) command to modify the table Schema, including the following changes.
 
 - Adding columns
 - Deleting columns
@@ -470,7 +470,7 @@ For more help, see ``HELP ALTER TABLE``.
 
 Rollup can be understood as a materialized index structure for a Table. **Materialized** because its data is physically stored independently, and **Indexed** in the sense that Rollup can reorder columns to increase the hit rate of prefix indexes, and can reduce key columns to increase the aggregation of data.
 
-Use [ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP) to perform various changes to Rollup.
+Use [ALTER TABLE ROLLUP](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP) to perform various changes to Rollup.
 
 The following examples are given
 
@@ -539,7 +539,7 @@ Materialized views are a space-for-time data analysis acceleration technique, an
 
 At the same time, Doris can automatically ensure data consistency between materialized views and base tables, and automatically match the appropriate materialized view at query time, greatly reducing the cost of data maintenance for users and providing a consistent and transparent query acceleration experience for users.
 
-For more information about materialized views, see [Materialized Views](../advanced/materialized-view)
+For more information about materialized views, see [Materialized Views](../../advanced/materialized-view)
 
 ## Data table queries
 
@@ -656,7 +656,7 @@ mysql> select sum(table1.pv) from table1 join [shuffle] table2 where table1.site
 
 When deploying multiple FE nodes, you can deploy a load balancing layer on top of multiple FEs to achieve high availability of Doris.
 
-Please refer to [Load Balancing](...) for details on installation, deployment, and usage. /admin-manual/cluster-management/load-balancing)
+Please refer to [Load Balancing](../../admin-manual/cluster-management/load-balancing) for details on installation, deployment, and usage.
 
 ## Data update and deletion
 
@@ -664,4 +664,4 @@ Doris supports deleting imported data in two ways. One way is to delete data by
 
 The other deletion method is for the Unique primary key unique model only, where the primary key rows to be deleted are imported by importing the data, and the final physical deletion of the data is performed internally by Doris using the delete tag bit. This deletion method is suitable for deleting data in a real-time manner.
 
-For specific instructions on delete and update operations, see [Data Update](...). /data-operate/update-delete/update) documentation.
+For specific instructions on delete and update operations, see [Data Update](../../data-operate/update-delete/update) documentation.
diff --git a/docs/en/docs/ecosystem/doris-manager/space-list.md b/docs/en/docs/ecosystem/doris-manager/space-list.md
index 68c3647a36..65dd65028a 100644
--- a/docs/en/docs/ecosystem/doris-manager/space-list.md
+++ b/docs/en/docs/ecosystem/doris-manager/space-list.md
@@ -104,7 +104,7 @@ Enter the host IP to add a new host, or add it in batches.
 
 1. Code package path
 
-   When deploying a cluster through Doris Manager, you need to provide the compiled Doris installation package. You can compile it yourself from the Doris source code, or use the officially provided [binary version](https://doris.apache.org/zh-CN/ downloads/downloads.html).
+   When deploying a cluster through Doris Manager, you need to provide the compiled Doris installation package. You can compile it yourself from the Doris source code.
 
 `Doris Manager will pull the Doris installation package through http. If you need to build your own http service, please refer to the bottom of the document - Self-built http service`.
 
@@ -223,7 +223,7 @@ Reference: https://www.runoob.com/linux/nginx-install-setup.html
 ### 3 Configuration
 
 1. Put the doris installation package in the nginx root directory
-mv PALO-0.15.1-rc03-binary.tar.gz /usr/share/nginx/html
+mv apache-doris-1.1.1-bin-x86.tar.gz  /usr/share/nginx/html
 
 2. Modify ngixn.conf
 
@@ -234,4 +234,4 @@ location /download {
 ````
 
 Restart ngxin access after modification:
-https://host:port/download/PALO-0.15.1-rc03-binary.tar.gz
+https://host:port/download/apache-doris-1.1.1-bin-x86.tar.gz
diff --git a/docs/en/docs/ecosystem/external-table/hive-bitmap-udf.md b/docs/en/docs/ecosystem/external-table/hive-bitmap-udf.md
index 27d6199fa8..c56fa7a3ae 100644
--- a/docs/en/docs/ecosystem/external-table/hive-bitmap-udf.md
+++ b/docs/en/docs/ecosystem/external-table/hive-bitmap-udf.md
@@ -57,7 +57,7 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 
    Hive Bitmap UDF used in Hive/Spark,First, you need to compile fe to get hive-udf-jar-with-dependencies.jar.
    Compilation preparation:If you have compiled the ldb source code, you can directly compile fe,If you have compiled the ldb source code, you can compile it directly. If you have not compiled the ldb source code, you need to manually install thrift,
-   Reference:[Setting Up dev env for FE](../../../community/developer-guide/fe-idea-dev) .
+   Reference:[Setting Up dev env for FE](/community/developer-guide/fe-idea-dev) .
 
 ```sql
 --clone doris code
@@ -106,4 +106,4 @@ select k1,bitmap_union(uuid) from hive_bitmap_table group by k1
 
 ## Hive Bitmap import into Doris
 
- see details: [Spark Load](../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load(Example 3: when the upstream data source is hive binary type table)
+ see details: [Spark Load](../../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load(Example 3: when the upstream data source is hive binary type table)
diff --git a/docs/en/docs/ecosystem/external-table/multi-catalog.md b/docs/en/docs/ecosystem/external-table/multi-catalog.md
index 0c7ddf75a9..7cb081a6d7 100644
--- a/docs/en/docs/ecosystem/external-table/multi-catalog.md
+++ b/docs/en/docs/ecosystem/external-table/multi-catalog.md
@@ -64,7 +64,7 @@ This function will be used as a supplement and enhancement to the previous exter
 	
 4. Drop Catalog
 
-	Both Database and Table in External Catalog are read-only. However, the catalog can be deleted (Internal Catalog cannot be deleted). An External Catalog can be dropped via the [DROP CATALOG](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DRIO-CATALOG.md) command.
+	Both Database and Table in External Catalog are read-only. However, the catalog can be deleted (Internal Catalog cannot be deleted). An External Catalog can be dropped via the [DROP CATALOG](../../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-CATALOG) command.
 
 	This operation will only delete the mapping information of the catalog in Doris, and will not modify or change the contents of any external data source.
 
diff --git a/docs/en/docs/ecosystem/logstash.md b/docs/en/docs/ecosystem/logstash.md
index 11ec6163db..38d67d412d 100644
--- a/docs/en/docs/ecosystem/logstash.md
+++ b/docs/en/docs/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 This plugin is used to output data to Doris for logstash, use the HTTP protocol to interact with the Doris FE Http interface, and import data through Doris's stream load.
 
-[Learn more about Doris Stream Load ](../data-operate/import/import-way/stream-load-manual)
+[Learn more about Doris Stream Load ](../../data-operate/import/import-way/stream-load-manual)
 
 [Learn more about Doris](/)
 
@@ -85,7 +85,7 @@ Configuration | Explanation
 `label_prefix` | Import the identification prefix, the final generated ID is *{label\_prefix}\_{db}\_{table}\_{time_stamp}*
 
 
-Load configuration:([Reference documents](../data-operate/import/import-way/stream-load-manual))
+Load configuration:([Reference documents](../../data-operate/import/import-way/stream-load-manual))
 
 Configuration | Explanation
 --- | ---
diff --git a/docs/en/docs/install/install-deploy.md b/docs/en/docs/install/install-deploy.md
index 9337b9821a..5ac4a0358a 100644
--- a/docs/en/docs/install/install-deploy.md
+++ b/docs/en/docs/install/install-deploy.md
@@ -158,7 +158,7 @@ BROKER does not currently have, nor does it need, priority\_networks. Broker's s
 
 By default, doris is case-sensitive. If there is a need for case-insensitive table names, you need to set it before cluster initialization. The table name case sensitivity cannot be changed after cluster initialization is completed.
 
-See the section on `lower_case_table_names` variables in [Variables](../advanced/variables) for details.
+See the section on `lower_case_table_names` variables in [Variables](../../advanced/variables) for details.
 
 ## Cluster deployment
 
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 69b81b5c18..d8beb732c4 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ Notice:
 - The partition is left closed and right open. If the user only specifies the right boundary, the system will automatically determine the left boundary
 - If the bucketing method is not specified, the bucketing method and bucket number used for creating the table would be automatically used
 - If the bucketing method is specified, only the number of buckets can be modified, not the bucketing method or the bucketing column. If the bucketing method is specified but the number of buckets not be specified, the default value `10` will be used for bucket number instead of the number specified when the table is created. If the number of buckets modified, the bucketing method needs to be specified simultaneously.
-- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](../Create/CREATE-TABLE)
+- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](../../Create/CREATE-TABLE)
 - If the user does not explicitly create a partition when creating a table, adding a partition by ALTER is not supported
 
 2. Delete the partition
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index a0c1616e0b..4165a80659 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "ALTER-TABLE-REPLACE-COLUMN",
+    "title": "ALTER-TABLE-REPLACE",
     "language": "en"
 }
 ---
@@ -83,4 +83,4 @@ ALTER, TABLE, REPLACE, ALTER TABLE
 ### Best Practice
 1. Atomic overlay write operations
 
-   In some cases, the user wants to be able to rewrite the data of a table, but if the deletion and then import method is used, the data cannot be viewed for a period of time. In this case, you can use the `CREATE TABLE LIKE` statement to CREATE a new TABLE with the same structure. After importing the new data into the new TABLE, you can replace the old TABLE atomic to achieve the purpose.
+  In some cases, the user wants to be able to rewrite the data of a certain table, but if the data is deleted first and then imported, the data cannot be viewed for a period of time in between. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and use the replacement operation to atomically replace the old table to achieve the goal. Atomic overwrite write operations at the partitio [...]
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index 596e7cfcc5..c19578901c 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -68,7 +68,7 @@ Notice:
 
 - If from_index_name is not specified, it will be created from base index by default
 - Columns in rollup table must be columns already in from_index
-- In properties, the storage format can be specified. For details, see [CREATE TABLE](../Create/CREATE-TABLE)
+- In properties, the storage format can be specified. For details, see [CREATE TABLE](../../Create/CREATE-TABLE)
 
 3. Delete rollup index
 
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index 260145ad03..58e579cf48 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 illustrate:
 
-- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) statement for details
+- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) statement for details
 - If you execute DROP TABLE FORCE, the system will not check whether there are unfinished transactions in the table, the table will be deleted directly and cannot be recovered, this operation is generally not recommended
 
 ### Example
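As a sketch of the drop-and-recover window described above (names are hypothetical; the statements are printed rather than executed):

```shell
# DROP, then RECOVER within the retention window, as described above.
# example_db/example_tbl are placeholders. Note that a table removed with
# DROP TABLE ... FORCE cannot be recovered.
DROP_SQL="DROP TABLE IF EXISTS example_db.example_tbl;"
RECOVER_SQL="RECOVER TABLE example_tbl;"
echo "$DROP_SQL"
echo "$RECOVER_SQL"
```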
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index f8901e13b5..57fb1003b1 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../../advanced/broker) documentation for specific information.
+  Specifies the information required by the broker, usually so that the broker can access remote storage systems such as BOS or HDFS. See the [Broker](../../../../../advanced/broker) documentation for specific information.
 
   ````text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../../../advanced/time-zone) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
+    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../../../../advanced/time-zone) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
 
   - `load_parallelism`
 
@@ -421,21 +421,21 @@ WITH BROKER broker_name
 
 3. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity) documentation.
+   All import tasks in Doris are atomic, and importing multiple tables within one task is atomic as well. Doris also uses the Label mechanism to guarantee that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity) documentation.
 
 4. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+   Doris supports a rich set of column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For details on how to use them correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 5. Error data filtering
 
    Doris' import tasks can tolerate a portion of malformed data. Tolerated via `max_filter_ratio` setting. The default is 0, which means that the entire import task will fail when there is an error data. If the user wants to ignore some problematic data rows, the secondary parameter can be set to a value between 0 and 1, and Doris will automatically skip the rows with incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+   For details on how the tolerance ratio is calculated, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 6. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../../data-operate/import/import-scenes/load-strict-mode) documentation.
+   The `strict_mode` attribute sets whether the import task runs in strict mode. Strict mode affects the results of column mapping, transformation, and filtering. For a detailed description, see the [strict mode](../../../../../data-operate/import/import-scenes/load-strict-mode) documentation.
 
 7. Timeout
 
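Putting several of the clauses above together, a minimal Broker Load statement might look like the following. The host, file path, broker name, credentials, and all table/column names are placeholders; the statement is only assembled and printed here:

```shell
# Minimal Broker Load sketch combining the column list, SET, WHERE and
# PROPERTIES clauses discussed above; every identifier below is hypothetical.
BROKER_LOAD_SQL=$(cat <<'SQL'
LOAD LABEL example_db.label_broker_demo
(
    DATA INFILE("hdfs://namenode:8020/path/file.csv")
    INTO TABLE example_tbl
    COLUMNS TERMINATED BY ","
    (k1, k2, tmpk1)
    SET (k3 = tmpk1 + 1)
    WHERE k1 > 0
)
WITH BROKER broker_name
("username" = "user", "password" = "pass")
PROPERTIES ("timeout" = "3600", "max_filter_ratio" = "0.1");
SQL
)
echo "$BROKER_LOAD_SQL"
```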
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index 09302c36e4..0f7f627232 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ The data synchronization (Sync Job) function supports users to submit a resident
 
 Currently, the data synchronization job only supports connecting to Canal, obtaining the parsed Binlog data from the Canal Server and importing it into Doris.
 
-Users can view the data synchronization job status through [SHOW SYNC JOB](../../Show-Statements/SHOW-SYNC-JOB).
+Users can view the data synchronization job status through [SHOW SYNC JOB](../../../Show-Statements/SHOW-SYNC-JOB).
 
 grammar:
 
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
index 07efc31ae9..78154231e1 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
@@ -429,21 +429,21 @@ curl --location-trusted -u root -H "columns: k1,k2,source_sequence,v1,v2" -H "fu
 
 4. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
+   All import tasks in Doris are atomic, and importing multiple tables within one task is atomic as well. Doris also uses the Label mechanism to guarantee that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
 
 5. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   Doris supports a rich set of column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For details on how to use them correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 6. Error data filtering
 
    Doris' import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, which means that the entire import task will fail when there is an error data. If the user wants to ignore some problematic data rows, the secondary parameter can be set to a value between 0 and 1, and Doris will automatically skip the rows with incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   For details on how the tolerance ratio is calculated, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 7. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
+   The `strict_mode` attribute sets whether the import task runs in strict mode. Strict mode affects the results of column mapping, transformation, and filtering. For a detailed description, see the [strict mode](../../../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
 
 8. Timeout
 
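The label, `max_filter_ratio` and `strict_mode` options above map directly onto Stream Load request headers. A hedged sketch, where the FE address (127.0.0.1:8030), credentials, file name and table names are all assumptions; the command is only built and printed, not sent:

```shell
# Build (but do not send) a Stream Load request exercising the options above.
# user:passwd, data.csv and example_db.example_tbl are placeholders.
LABEL="example_tbl_$(date +%Y%m%d_%H%M%S)"   # unique label for dedup/retry
STREAM_LOAD_CMD="curl --location-trusted -u user:passwd \
 -H 'label:${LABEL}' \
 -H 'max_filter_ratio:0.1' \
 -H 'strict_mode:true' \
 -T data.csv \
 http://127.0.0.1:8030/api/example_db/example_tbl/_stream_load"
echo "$STREAM_LOAD_CMD"
```

Re-submitting with the same label is rejected by the Label mechanism, which is what makes retries safe.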
diff --git a/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md b/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
index 7a75f9a94a..020935ae99 100644
--- a/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
+++ b/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
+This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md b/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
index 7b105a7034..26707d10aa 100644
--- a/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
+++ b/docs/en/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
@@ -32,7 +32,6 @@ SHOW STATUS
 
 ### Description
 
-<<<<<<< HEAD
 This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
@@ -105,8 +104,6 @@ RollupIndexName: r1
 
 - `Timeout`: Job timeout, in seconds.
 
-=======
->>>>>>> 1c59e4cf7 (fix docs 404 url)
 ### Example
 
 ### Keywords
diff --git a/docs/sidebars.json b/docs/sidebars.json
index e76caf1d33..ba8b200c58 100644
--- a/docs/sidebars.json
+++ b/docs/sidebars.json
@@ -786,45 +786,55 @@
                             "type": "category",
                             "label": "Show",
                             "items": [
+                                "sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-ALTER",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-BACKUP",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-BACKENDS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-BROKER",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CATALOGS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CHARSET",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-MATERIALIZED-VIEW",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-ROUTINE-LOAD",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-FUNCTION",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-COLUMNS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-COLLATION",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-DATABASES",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-DATA-SKEW",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-DATABASE-ID",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-DYNAMIC-PARTITION",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-DELETE",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-DATA",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-ENGINES",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-EVENTS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-EXPORT",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-ENCRYPT-KEY",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-FUNCTIONS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-FILE",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-GRANTS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-LAST-INSERT",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-BACKUP",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-LOAD-WARNINGS",
+                                "sql-manual/sql-reference/Show-Statements/SHOW-INDEX",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-MIGRATIONS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PARTITION-ID",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-SNAPSHOT",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-FUNCTIONS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-ROLLUP",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-ENGINES",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-DELETE",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-FUNCTION",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-WHITE-LIST",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-WARNING",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-DATA-SKEW",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-DATABASE-ID",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-DYNAMIC-PARTITION",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TABLET",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-VARIABLES",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-ROUTINE-LOAD",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PLUGINS",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-EVENTS",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-LOAD-WARNINGS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-ROLES",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-GRANTS",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-INDEX",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-EXPORT",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PROCEDURE",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD-TASK",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-BACKENDS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PROC",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-COLLATION",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TABLE-STATUS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-REPOSITORIES",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-MATERIALIZED-VIEW",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-QUERY-PROFILE",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-OPEN-TABLES",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TABLETS",
@@ -834,25 +844,16 @@
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PARTITIONS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-FRONTENDS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-RESTORE",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-DATA",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PROPERTY",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-BROKER",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TRIGGERS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-PROCESSLIST",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-ENCRYPT-KEY",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-COLUMNS",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TRASH",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-VIEW",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TRANSACTION",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-FILE",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-STREAM-LOAD",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-STATUS",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-TABLE-ID",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-ALTER",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-SMALL-FILES",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE",
-                                "sql-manual/sql-reference/Show-Statements/SHOW-CHARSET",
                                 "sql-manual/sql-reference/Show-Statements/SHOW-POLICY"
                             ]
                         },
@@ -940,7 +941,6 @@
                        "admin-manual/maint-monitor/doris-error-code",
                        "admin-manual/maint-monitor/tablet-meta-tool",
                        "admin-manual/maint-monitor/monitor-alert",
-                       "admin-manual/maint-monitor/multi-tenant",
                        "admin-manual/maint-monitor/tablet-local-debug",
                        "admin-manual/maint-monitor/tablet-restore-tool",
                        "admin-manual/maint-monitor/metadata-operation"
diff --git a/docs/zh-CN/community/how-to-contribute/how-to-contribute.md b/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
index e482cb4055..2f178cace6 100644
--- a/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
+++ b/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
@@ -78,7 +78,7 @@ under the License.
 
 ## 修改代码和提交PR(Pull Request)
 
-您可以下载代码,编译安装,部署运行试一试(可以参考[编译文档](/docs/install/source-install/compilation.md),看看是否与您预想的一样工作。如果有问题,您可以直接联系我们,提 Issue 或者通过阅读和分析源代码自己修复。
+您可以下载代码,编译安装,部署运行试一试(可以参考[编译文档](/docs/install/source-install/compilation),看看是否与您预想的一样工作。如果有问题,您可以直接联系我们,提 Issue 或者通过阅读和分析源代码自己修复。
 
 无论是修复 Bug 还是增加 Feature,我们都非常欢迎。如果您希望给 Doris 提交代码,您需要从 GitHub 上 fork 代码库至您的项目空间下,为您提交的代码创建一个新的分支,添加源项目为upstream,并提交PR。
 提交PR的方式可以参考文档 [Pull Request](./pull-request.md)。
diff --git a/docs/zh-CN/community/release-and-verify/release-prepare.md b/docs/zh-CN/community/release-and-verify/release-prepare.md
index d2c85b5875..70f053cb3e 100644
--- a/docs/zh-CN/community/release-and-verify/release-prepare.md
+++ b/docs/zh-CN/community/release-and-verify/release-prepare.md
@@ -83,7 +83,7 @@ Apache 项目的版本发布主要有以下三种形式:
 ### 准备gpg key
 
 Release manager 在发布前需要先生成自己的签名公钥,并上传到公钥服务器,之后就可以用这个公钥对准备发布的软件包进行签名。
-如果在[KEY](https://downloads.apache.org/incubator/doris/KEYS)里已经存在了你的KEY,那么你可以跳过这个步骤了。
+如果在[KEY](https://downloads.apache.org/doris/KEYS)里已经存在了你的KEY,那么你可以跳过这个步骤了。
 
 #### 签名软件 GnuPG 的安装配置
 
diff --git a/docs/zh-CN/community/release-and-verify/release-verify.md b/docs/zh-CN/community/release-and-verify/release-verify.md
index 53aba53dc4..e5173b29f2 100644
--- a/docs/zh-CN/community/release-and-verify/release-verify.md
+++ b/docs/zh-CN/community/release-and-verify/release-verify.md
@@ -97,6 +97,6 @@ INFO Totally checked 5611 files, valid: 3926, invalid: 0, ignored: 1685, fixed:
 
 请参阅各组件的编译文档验证编译。
 
-* Doris 主代码编译,请参阅 [编译文档](../../docs/install/source-install/compilation)
-* Flink Doris Connector 编译,请参阅 [编译文档](../../docs/ecosystem/flink-doris-connector)
-* Spark Doris Connector 编译,请参阅 [编译文档](../../docs/ecosystem/spark-doris-connector)
+* Doris 主代码编译,请参阅 [编译文档](/docs/install/source-install/compilation)
+* Flink Doris Connector 编译,请参阅 [编译文档](/docs/ecosystem/flink-doris-connector)
+* Spark Doris Connector 编译,请参阅 [编译文档](/docs/ecosystem/spark-doris-connector)
diff --git a/docs/zh-CN/docs/admin-manual/cluster-management/elastic-expansion.md b/docs/zh-CN/docs/admin-manual/cluster-management/elastic-expansion.md
index 12c20e7523..20e14bd105 100644
--- a/docs/zh-CN/docs/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/zh-CN/docs/admin-manual/cluster-management/elastic-expansion.md
@@ -102,7 +102,7 @@ FE 分为 Leader,Follower 和 Observer 三种角色。 默认一个集群,
 
 以上方式,都需要 Doris 的 root 用户权限。
 
-BE 节点的扩容和缩容过程,不影响当前系统运行以及正在执行的任务,并且不会影响当前系统的性能。数据均衡会自动进行。根据集群现有数据量的大小,集群会在几个小时到1天不等的时间内,恢复到负载均衡的状态。集群负载情况,可以参见 [Tablet 负载均衡文档](../maint-monitor/tablet-repair-and-balance)。
+BE 节点的扩容和缩容过程,不影响当前系统运行以及正在执行的任务,并且不会影响当前系统的性能。数据均衡会自动进行。根据集群现有数据量的大小,集群会在几个小时到1天不等的时间内,恢复到负载均衡的状态。集群负载情况,可以参见 [Tablet 负载均衡文档](../../maint-monitor/tablet-repair-and-balance)。
 
 ### 增加 BE 节点
 
diff --git a/docs/zh-CN/docs/admin-manual/data-admin/delete-recover.md b/docs/zh-CN/docs/admin-manual/data-admin/delete-recover.md
index d1bd285f3b..5b869431af 100644
--- a/docs/zh-CN/docs/admin-manual/data-admin/delete-recover.md
+++ b/docs/zh-CN/docs/admin-manual/data-admin/delete-recover.md
@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
 
 ## 更多帮助
 
-关于 RECOVER 使用的更多详细语法及最佳实践,请参阅 [RECOVER](../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RECOVER` 获取更多帮助信息。
+关于 RECOVER 使用的更多详细语法及最佳实践,请参阅 [RECOVER](../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RECOVER` 获取更多帮助信息。
diff --git a/docs/zh-CN/docs/admin-manual/http-actions/fe/table-schema-action.md b/docs/zh-CN/docs/admin-manual/http-actions/fe/table-schema-action.md
index a4ae8f6e18..8f5a456971 100644
--- a/docs/zh-CN/docs/admin-manual/http-actions/fe/table-schema-action.md
+++ b/docs/zh-CN/docs/admin-manual/http-actions/fe/table-schema-action.md
@@ -97,7 +97,7 @@ under the License.
 	"count": 0
 }
 ```
-注意:区别为`http`方式比`http v2`方式多返回`aggregation_type`字段,`http v2`开启是通过`enable_http_server_v2`进行设置,具体参数说明详见[fe参数设置](../../config/fe-config.md)
+注意:区别为`http`方式比`http v2`方式多返回`aggregation_type`字段,`http v2`开启是通过`enable_http_server_v2`进行设置,具体参数说明详见[fe参数设置](../../../config/fe-config.md)
 
 ## Examples
 
diff --git a/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md b/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
index b842fd6ada..d698c70cf4 100644
--- a/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -216,7 +216,7 @@ TabletScheduler 里等待被调度的分片会根据状态不同,赋予不同
 
 ## 副本均衡
 
-Doris 会自动进行集群内的副本均衡。目前支持两种均衡策略,负载/分区。负载均衡适合需要兼顾节点磁盘使用率和节点副本数量的场景;而分区均衡会使每个分区的副本都均匀分布在各个节点,避免热点,适合对分区读写要求比较高的场景。但是,分区均衡不考虑磁盘使用率,使用分区均衡时需要注意磁盘的使用情况。 策略只能在fe启动前配置[tablet_rebalancer_type](../config/fe-config )  ,不支持运行时切换。
+Doris 会自动进行集群内的副本均衡。目前支持两种均衡策略,负载/分区。负载均衡适合需要兼顾节点磁盘使用率和节点副本数量的场景;而分区均衡会使每个分区的副本都均匀分布在各个节点,避免热点,适合对分区读写要求比较高的场景。但是,分区均衡不考虑磁盘使用率,使用分区均衡时需要注意磁盘的使用情况。 策略只能在fe启动前配置[tablet_rebalancer_type](../../config/fe-config )  ,不支持运行时切换。
 
 ### 负载均衡
 
diff --git a/docs/zh-CN/docs/advanced/alter-table/replace-table.md b/docs/zh-CN/docs/advanced/alter-table/replace-table.md
index 63afc3471c..043ab9e6fc 100644
--- a/docs/zh-CN/docs/advanced/alter-table/replace-table.md
+++ b/docs/zh-CN/docs/advanced/alter-table/replace-table.md
@@ -28,7 +28,7 @@ under the License.
 
 在 0.14 版本中,Doris 支持对两个表进行原子的替换操作。 该操作仅适用于 OLAP 表。
 
-分区级别的替换操作,请参阅 [临时分区文档](../partition/table-temp-partition)
+分区级别的替换操作,请参阅 [临时分区文档](../../partition/table-temp-partition)
 
 ## 语法说明
 
@@ -68,4 +68,4 @@ ALTER TABLE [db.]tbl1 REPLACE WITH TABLE tbl2
 
 1. 原子的覆盖写操作
 
-   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../partition/table-tmp-partition.md)。
+   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../../partition/table-tmp-partition)。
diff --git a/docs/zh-CN/docs/data-operate/export/outfile.md b/docs/zh-CN/docs/data-operate/export/outfile.md
index 2ed75ddba9..ed20ac2d27 100644
--- a/docs/zh-CN/docs/data-operate/export/outfile.md
+++ b/docs/zh-CN/docs/data-operate/export/outfile.md
@@ -157,4 +157,4 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = Open broker writer failed ...
 
 ## 更多帮助
 
-关于 OUTFILE 使用的更多详细语法及最佳实践,请参阅 [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP OUTFILE` 获取更多帮助信息。
+关于 OUTFILE 使用的更多详细语法及最佳实践,请参阅 [OUTFILE](../../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP OUTFILE` 获取更多帮助信息。
diff --git a/docs/zh-CN/docs/data-operate/import/import-scenes/external-storage-load.md b/docs/zh-CN/docs/data-operate/import/import-scenes/external-storage-load.md
index 1dd619cde0..88c5a271e7 100644
--- a/docs/zh-CN/docs/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/zh-CN/docs/data-operate/import/import-scenes/external-storage-load.md
@@ -86,7 +86,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
   
 3. 查看导入状态
    
-   Broker load 是一个异步的导入方式,具体导入结果可以通过[SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD)命令查看
+   Broker load 是一个异步的导入方式,具体导入结果可以通过[SHOW LOAD](../../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD)命令查看
    
    ```
    mysql> show load order by createtime desc limit 1\G;
diff --git a/docs/zh-CN/docs/data-operate/import/import-scenes/jdbc-load.md b/docs/zh-CN/docs/data-operate/import/import-scenes/jdbc-load.md
index 36725343f0..ada0d26ddc 100644
--- a/docs/zh-CN/docs/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/zh-CN/docs/data-operate/import/import-scenes/jdbc-load.md
@@ -160,4 +160,4 @@ public class DorisJDBCDemo {
 
    前面提到,我们建议在使用 INSERT 导入数据时,采用 ”批“ 的方式进行导入,而不是单条插入。
 
-   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](./load-atomicity) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) 文档。
+   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](../load-atomicity) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) 文档。
diff --git a/docs/zh-CN/docs/data-table/basic-usage.md b/docs/zh-CN/docs/data-table/basic-usage.md
index a06de3c3a9..4a108e4e3e 100644
--- a/docs/zh-CN/docs/data-table/basic-usage.md
+++ b/docs/zh-CN/docs/data-table/basic-usage.md
@@ -42,7 +42,7 @@ Doris 内置 root,密码默认为空。
 >
 >root 用户默认拥有集群所有权限。同时拥有 Grant_priv 和 Node_priv 的用户,可以将该权限赋予其他用户,拥有节点变更权限,包括 FE、BE、BROKER 节点的添加、删除、下线等操作。
 >
->关于权限这块的具体说明可以参照[权限管理](/docs/admin-manual/privilege-ldap/user-privilege)
+>关于权限这块的具体说明可以参照[权限管理](../../admin-manual/privilege-ldap/user-privilege)
 
 启动完 Doris 程序之后,可以通过 root 或 admin 用户连接到 Doris 集群。 使用下面命令即可登录 Doris,登录后进入到Doris对应的Mysql命令行操作界面:
 
@@ -130,7 +130,7 @@ CREATE DATABASE example_db;
 >    SHOW CREATE ROUTINE LOAD
 > ```
 
-数据库创建完成之后,可以通过 [SHOW DATABASES](../sql-manual/sql-reference/Show-Statements/SHOW-DATABASES) 查看数据库信息。
+数据库创建完成之后,可以通过 [SHOW DATABASES](../../sql-manual/sql-reference/Show-Statements/SHOW-DATABASES) 查看数据库信息。
 
 ```sql
 mysql> SHOW DATABASES;
@@ -165,7 +165,7 @@ mysql> USE example_db;
 Database changed
 ```
 
-Doris支持[复合分区和单分区](./data-partition)两种建表方式。下面以聚合模型为例,分别演示如何创建两种分区的数据表。
+Doris支持[复合分区和单分区](../data-partition)两种建表方式。下面以聚合模型为例,分别演示如何创建两种分区的数据表。
 
 #### 单分区
 
@@ -328,7 +328,7 @@ curl --location-trusted -u test:test -H "label:table2_20170707" -H "column_separ
 > 注意事项:
 >
 > 1. 采用流式导入建议文件大小限制在 10GB 以内,过大的文件会导致失败重试代价变大。
-> 2. label:Label 的主要作用是唯一标识一个导入任务,并且能够保证相同的 Label 仅会被成功导入一次,具体可以查看 [数据导入事务及原子性 ](../data-operate/import/import-scenes/load-atomicity)。
+> 2. label:Label 的主要作用是唯一标识一个导入任务,并且能够保证相同的 Label 仅会被成功导入一次,具体可以查看 [数据导入事务及原子性 ](../../data-operate/import/import-scenes/load-atomicity)。
 > 3. 流式导入是同步命令。命令返回成功则表示数据已经导入,返回失败表示这批数据没有导入。
 
 #### Broker 导入
@@ -431,7 +431,7 @@ mysql> SELECT SUM(pv) FROM table2 WHERE siteid IN (SELECT siteid FROM table1 WHE
 
 ## 表结构变更
 
-使用 [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) 命令可以修改表的 Schema,包括如下修改:
+使用 [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) 命令可以修改表的 Schema,包括如下修改:
 
 - 增加列
 - 删除列
@@ -495,7 +495,7 @@ CANCEL ALTER TABLE COLUMN FROM table1;
 
 Rollup 可以理解为 Table 的一个物化索引结构。**物化** 是因为其数据在物理上独立存储,而 **索引** 的意思是,Rollup可以调整列顺序以增加前缀索引的命中率,也可以减少key列以增加数据的聚合度。
 
-使用[ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP)可以进行Rollup的各种变更操作。
+使用[ALTER TABLE ROLLUP](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP)可以进行Rollup的各种变更操作。
 
 以下举例说明
 
@@ -562,7 +562,7 @@ Rollup 建立之后,查询不需要指定 Rollup 进行查询。还是指定
 
 同时,Doris 能够自动保证物化视图和基础表的数据一致性,并且在查询时自动匹配合适的物化视图,极大降低用户的数据维护成本,为用户提供一个一致且透明的查询加速体验。
 
-关于物化视图的具体介绍,可参阅 [物化视图](../advanced/materialized-view)
+关于物化视图的具体介绍,可参阅 [物化视图](../../advanced/materialized-view)
 
 ## 数据表的查询
 
@@ -679,7 +679,7 @@ mysql> select sum(table1.pv) from table1 join [shuffle] table2 where table1.site
 
 当部署多个 FE 节点时,用户可以在多个 FE 之上部署负载均衡层来实现 Doris 的高可用。
 
-具体安装部署及使用方式请参照 [负载均衡](../admin-manual/cluster-management/load-balancing)
+具体安装部署及使用方式请参照 [负载均衡](../../admin-manual/cluster-management/load-balancing)
 
 ## 数据更新和删除
 
@@ -687,4 +687,4 @@ Doris 支持通过两种方式对已导入的数据进行删除。一种是通
 
 另一种删除方式仅针对 Unique 主键唯一模型,通过导入数据的方式将需要删除的主键行数据进行导入。Doris 内部会通过删除标记位对数据进行最终的物理删除。这种删除方式适合以实时的方式对数据进行删除。
 
-关于删除和更新操作的具体说明,可参阅 [数据更新](../data-operate/update-delete/update) 相关文档。
+关于删除和更新操作的具体说明,可参阅 [数据更新](../../data-operate/update-delete/update) 相关文档。
diff --git a/docs/zh-CN/docs/data-table/hit-the-rollup.md b/docs/zh-CN/docs/data-table/hit-the-rollup.md
index 6baf8ac2c7..42e21bc7dd 100644
--- a/docs/zh-CN/docs/data-table/hit-the-rollup.md
+++ b/docs/zh-CN/docs/data-table/hit-the-rollup.md
@@ -44,7 +44,7 @@ ROLLUP 表的基本作用,在于在 Base 表的基础上,获得更粗粒度
 
 1. 示例1:获得每个用户的总消费
 
-接 **[数据模型Aggregate 模型](./data-model)**小节的**示例2**,Base 表结构如下:
+接 **[数据模型Aggregate 模型](../data-model)**小节的**示例2**,Base 表结构如下:
 
 | ColumnName      | Type        | AggregationType | Comment                |
 | --------------- | ----------- | --------------- | ---------------------- |
diff --git a/docs/zh-CN/docs/ecosystem/doris-manager/space-list.md b/docs/zh-CN/docs/ecosystem/doris-manager/space-list.md
index 78abf4e980..8f1ec2ea54 100644
--- a/docs/zh-CN/docs/ecosystem/doris-manager/space-list.md
+++ b/docs/zh-CN/docs/ecosystem/doris-manager/space-list.md
@@ -104,7 +104,7 @@ ssh agent01@xx.xxx.xx.xx
 
 1. 代码包路径
 
-   通过Doris Manager 进行集群部署时,需要提供已编译好的 Doris 安装包,您可以通过 Doris 源码自行编译,或使用官方提供的[二进制版本](https://doris.apache.org/zh-CN/downloads/downloads.html)。
+   通过Doris Manager 进行集群部署时，需要提供已编译好的 Doris 安装包，您可以通过 Doris 源码自行编译。
 
 `Doris Manager 将通过 http 方式拉取Doris安装包,若您需要自建 http 服务,请参考文档底部-自建http服务`。
 
@@ -219,7 +219,7 @@ systemctl start nginx
 ### 3 配置
 
 1.将doris安装包放置nginx根目录
-mv PALO-0.15.1-rc03-binary.tar.gz /usr/share/nginx/html
+mv apache-doris-1.1.1-bin-x86.tar.gz /usr/share/nginx/html
 
 2.修改 nginx.conf
 ```
@@ -228,4 +228,4 @@ location /download {
         }
 ```
 修改后重启 nginx，访问：
-https://host:port/download/PALO-0.15.1-rc03-binary.tar.gz
+https://host:port/download/apache-doris-1.1.1-bin-x86.tar.gz
diff --git a/docs/zh-CN/docs/ecosystem/external-table/hive-bitmap-udf.md b/docs/zh-CN/docs/ecosystem/external-table/hive-bitmap-udf.md
index 8f5e0d7ca3..06cfd5a890 100644
--- a/docs/zh-CN/docs/ecosystem/external-table/hive-bitmap-udf.md
+++ b/docs/zh-CN/docs/ecosystem/external-table/hive-bitmap-udf.md
@@ -59,7 +59,7 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 ### Hive Bitmap UDF 使用:
 
 Hive Bitmap UDF 需要在 Hive/Spark 中使用,首先需要编译fe得到hive-udf-jar-with-dependencies.jar。
-编译准备工作:如果进行过ldb源码编译可直接编译fe,如果没有进行过ldb源码编译,则需要手动安装thrift,可参考:[FE开发环境搭建](../../../community/developer-guide/fe-idea-dev) 中的编译与安装
+编译准备工作:如果进行过ldb源码编译可直接编译fe,如果没有进行过ldb源码编译,则需要手动安装thrift,可参考:[FE开发环境搭建](/community/developer-guide/fe-idea-dev) 中的编译与安装
 
 ```sql
 --clone doris源码
@@ -115,4 +115,4 @@ select k1,bitmap_union(uuid) from hive_bitmap_table group by k1
 
 ## Hive bitmap 导入 doris
 
- 详见: [Spark Load](../../data-operate/import/import-way/spark-load-manual) -> 基本操作  -> 创建导入 (示例3:上游数据源是hive binary类型情况)
+ 详见: [Spark Load](../../../data-operate/import/import-way/spark-load-manual) -> 基本操作  -> 创建导入 (示例3:上游数据源是hive binary类型情况)
diff --git a/docs/zh-CN/docs/ecosystem/external-table/multi-catalog.md b/docs/zh-CN/docs/ecosystem/external-table/multi-catalog.md
index dff1f0a4dd..5b79d51821 100644
--- a/docs/zh-CN/docs/ecosystem/external-table/multi-catalog.md
+++ b/docs/zh-CN/docs/ecosystem/external-table/multi-catalog.md
@@ -64,7 +64,7 @@ under the License.
 	
 4. 删除 Catalog
 
-	External Catalog 中的 Database 和 Table 都是只读的。但是可以删除 Catalog(Internal Catalog无法删除)。可以通过 [DROP CATALOG](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DRIO-CATALOG.md) 命令删除一个 External Catalog。
+	External Catalog 中的 Database 和 Table 都是只读的。但是可以删除 Catalog(Internal Catalog无法删除)。可以通过 [DROP CATALOG](../../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-CATALOG) 命令删除一个 External Catalog。
 	
 	该操作仅会删除 Doris 中该 Catalog 的映射信息,并不会修改或变更任何外部数据目录的内容。
 
diff --git a/docs/zh-CN/docs/ecosystem/logstash.md b/docs/zh-CN/docs/ecosystem/logstash.md
index 9b70559ca2..a10bae8a7d 100644
--- a/docs/zh-CN/docs/ecosystem/logstash.md
+++ b/docs/zh-CN/docs/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 该插件用于logstash输出数据到Doris,使用 HTTP 协议与 Doris FE Http接口交互,并通过 Doris 的 stream load 的方式进行数据导入.
 
-[了解Doris Stream Load ](../data-operate/import/import-way/stream-load-manual)
+[了解Doris Stream Load ](../../data-operate/import/import-way/stream-load-manual)
 
 [了解更多关于Doris](/zh-CN)
 
@@ -85,7 +85,7 @@ copy logstash-output-doris-{version}.gem 到 logstash 安装目录下
 `label_prefix` | 导入标识前缀,最终生成的标识为 *{label\_prefix}\_{db}\_{table}\_{time_stamp}*
 
 
-导入相关配置:([参考文档](../data-operate/import/import-way/stream-load-manual.md))
+导入相关配置:([参考文档](../../data-operate/import/import-way/stream-load-manual.md))
 
 配置 | 说明
 --- | ---
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 67ce90fe86..1f1c4085e2 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ partition_desc ["key"="value"]
 - 分区为左闭右开区间,如果用户仅指定右边界,系统会自动确定左边界
 - 如果没有指定分桶方式,则自动使用建表使用的分桶方式和分桶数。
 - 如指定分桶方式,只能修改分桶数,不可修改分桶方式或分桶列。如果指定了分桶方式,但是没有指定分桶数,则分桶数会使用默认值10,不会使用建表时指定的分桶数。如果要指定分桶数,则必须指定分桶方式。
-- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](../Create/CREATE-TABLE)
+- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](../../Create/CREATE-TABLE)
 - 如果建表时用户未显式创建Partition,则不支持通过ALTER的方式增加分区
 
 2. 删除分区
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index 228ab208c7..205746889d 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "ALTER-TABLE-REPLACE-COLUMN",
+    "title": "ALTER-TABLE-REPLACE",
     "language": "zh-CN"
 }
 ---
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index cd72797b57..b755c39bc8 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -68,7 +68,7 @@ ADD ROLLUP [rollup_name (column_name1, column_name2, ...)
 
 - 如果没有指定 from_index_name,则默认从 base index 创建
 - rollup 表中的列必须是 from_index 中已有的列
-- 在 properties 中,可以指定存储格式。具体请参阅 [CREATE TABLE](../Create/CREATE-TABLE)
+- 在 properties 中,可以指定存储格式。具体请参阅 [CREATE TABLE](../../Create/CREATE-TABLE)
 
 3. 删除 rollup index
 
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index 6a46b0e3e9..5d8ccb399f 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 说明:
 
-- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) 语句
+- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) 语句
 - 如果执行 DROP TABLE FORCE,则系统不会检查该表是否存在未完成的事务,表将直接被删除并且不能被恢复,一般不建议执行此操作
 
 ### Example
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index 83459c322c..44b7d5fcee 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert) 文档。
+    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert) 文档。
+    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert) 文档。
+    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker](../../../../advanced/broker) 文档。
+  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker](../../../../../advanced/broker) 文档。
 
   ```text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
     - `timezone`
 
-      指定某些受时区影响的函数的时区,如 `strftime/alignment_timestamp/from_unixtime` 等等,具体请查阅 [时区](../../../../advanced/time-zone) 文档。如果不指定,则使用 "Asia/Shanghai" 时区
+      指定某些受时区影响的函数的时区,如 `strftime/alignment_timestamp/from_unixtime` 等等,具体请查阅 [时区](../../../../../advanced/time-zone) 文档。如果不指定,则使用 "Asia/Shanghai" 时区
 
     - `load_parallelism`
 
@@ -416,25 +416,25 @@ WITH BROKER broker_name
 
 2. 取消导入任务
 
-   已提交且尚未结束的导入任务可以通过 [CANCEL LOAD](./CANCEL-LOAD) 命令取消。取消后，已写入的数据也会回滚，不会生效。
+   已提交且尚未结束的导入任务可以通过 [CANCEL LOAD](../CANCEL-LOAD) 命令取消。取消后，已写入的数据也会回滚，不会生效。
 
 3. Label、导入事务、多表原子性
 
-   Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 的机制来保证数据导入的不丢不重。具体说明可以参阅 [导入事务和原子性](../../../../data-operate/import/import-scenes/load-atomicity) 文档。
+   Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 的机制来保证数据导入的不丢不重。具体说明可以参阅 [导入事务和原子性](../../../../../data-operate/import/import-scenes/load-atomicity) 文档。
 
 4. 列映射、衍生列和过滤
 
-   Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。支持绝大多数内置函数和 UDF。关于如何正确的使用这个功能,可参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert) 文档。
+   Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。支持绝大多数内置函数和 UDF。关于如何正确的使用这个功能,可参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
 5. 错误数据过滤
 
    Doris 的导入任务可以容忍一部分格式错误的数据。容忍度通过 `max_filter_ratio` 设置。默认为 0，即表示当有一条错误数据时，整个导入任务将会失败。如果用户希望忽略部分有问题的数据行，可以将此参数设置为 0~1 之间的数值，Doris 会自动跳过那些数据格式不正确的行。
 
-   关于容忍率的一些计算方式,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert) 文档。
+   关于容忍率的一些计算方式,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
 6. 严格模式
 
-   `strict_mode` 属性用于设置导入任务是否运行在严格模式下。该格式会对列映射、转换和过滤的结果产生影响。关于严格模式的具体说明,可参阅 [严格模式](../../../../data-operate/import/import-scenes/load-strict-mode) 文档。
+   `strict_mode` 属性用于设置导入任务是否运行在严格模式下。该格式会对列映射、转换和过滤的结果产生影响。关于严格模式的具体说明,可参阅 [严格模式](../../../../../data-operate/import/import-scenes/load-strict-mode) 文档。
 
 7. 超时时间
 
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md b/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
index ba1f659052..fb4171d69f 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-该命令用于查看通过 [CREATE-MATERIALIZED-VIEW](../../sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) 语句提交的创建物化视图作业的执行情况。
+该命令用于查看通过 [CREATE-MATERIALIZED-VIEW](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) 语句提交的创建物化视图作业的执行情况。
 
 > 该语句等同于 `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md b/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
index 9bf954ef8c..5ab40e24c1 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
@@ -32,6 +32,78 @@ SHOW STATUS
 
 ### Description
 
+该命令用于查看通过[创建物化视图](../../Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW)语句提交的创建物化视图作业的执行情况。
+
+> 该语句相当于`SHOW ALTER TABLE ROLLUP`;
+
+```sql
+SHOW ALTER TABLE MATERIALIZED VIEW
+[FROM database]
+[WHERE]
+[ORDER BY]
+[LIMIT OFFSET]
+```
+
+- database :查看指定数据库下的作业。 如果未指定,则使用当前数据库。
+- WHERE:您可以过滤结果列,目前仅支持以下列:
+   - TableName:仅支持等值过滤。
+   - State:仅支持等效过滤。
+   - Createtime/FinishTime:支持 =、>=、<=、>、<、!=
+- ORDER BY:结果集可以按任何列排序。
+- LIMIT:使用 ORDER BY 进行翻页查询。
+
+Return result description:
+
+```sql
+mysql> show alter table materialized view\G
+*************************** 1. row ***************************
+          JobId: 11001
+      TableName: tbl1
+     CreateTime: 2020-12-23 10:41:00
+     FinishTime: NULL
+  BaseIndexName: tbl1
+RollupIndexName: r1
+       RollupId: 11002
+  TransactionId: 5070
+          State: WAITING_TXN
+            Msg:
+       Progress: NULL
+        Timeout: 86400
+1 row in set (0.00 sec)
+```
+
+- `JobId`:作业唯一 ID。
+
+- `TableName`:基表名称
+
+- `CreateTime/FinishTime`:作业创建时间和结束时间。
+
+- `BaseIndexName/RollupIndexName`:基表名称和物化视图名称。
+
+- `RollupId`:物化视图的唯一 ID。
+
+- `TransactionId`:参见State字段的描述。
+
+- `State`：作业状态。
+
+  - PENDING：作业正在准备中。
+
+  - WAITING_TXN：
+
+    在正式开始生成物化视图数据之前，它会等待当前正在运行的该表上的导入事务完成。而 `TransactionId` 字段是当前等待的事务 ID。当此 ID 的所有先前导入完成后，作业将真正开始。
+
+  - RUNNING:作业正在运行。
+
+  - FINISHED：作业成功运行。
+
+  - CANCELLED:作业运行失败。
+
+- `Msg`:错误信息
+
+- `Progress`：作业进度。这里的进度是指 `completed tablets/total tablets`。物化视图以 tablet 粒度创建。
+
+- `Timeout`:作业超时,以秒为单位。
+
 ### Example
 
 ### Keywords
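
Every hunk in this commit applies the same fix: adjusting the `../` depth of a relative Markdown link so it resolves within the docs tree. Broken links of this kind can be caught mechanically before they ship. Below is a minimal sketch of such a checker, not part of the Doris build; the `docs` default root and the convention that links may omit the `.md` suffix are assumptions based on the paths seen in this diff:

```python
#!/usr/bin/env python3
"""Flag relative Markdown links whose targets do not exist on disk."""
import re
import sys
from pathlib import Path

# Capture the target of [text](target); stop at ')' , '#' (anchors), or whitespace.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)\)")

def broken_links(docs_root: Path):
    """Yield (file, target) pairs for relative links that resolve to nothing."""
    for md in sorted(docs_root.rglob("*.md")):
        for target in LINK_RE.findall(md.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "/")):
                continue  # absolute URLs / site-absolute paths need a different check
            resolved = (md.parent / target).resolve()
            # These docs link without the .md suffix, so accept either form.
            if not (resolved.exists() or resolved.with_suffix(".md").exists()):
                yield md, target

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else "docs")
    for source, target in broken_links(root):
        print(f"{source}: broken relative link -> {target}")
```

Run against `docs/zh-CN`, a checker like this would have reported each of the pre-fix paths above (e.g. `../partition/table-tmp-partition.md` resolving outside the tree) as broken.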


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org