Posted to commits@doris.apache.org by ji...@apache.org on 2022/05/09 01:16:13 UTC

[incubator-doris] branch master updated: [Doc] fix doc link suffix .html to .md (#9442)

This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 8932fcaf59 [Doc] fix doc link suffix .html to .md (#9442)
8932fcaf59 is described below

commit 8932fcaf59000480f8dccfcc981eeb51646e0048
Author: wudi <67...@qq.com>
AuthorDate: Mon May 9 09:16:06 2022 +0800

    [Doc] fix doc link suffix .html to .md (#9442)
    
    * fix doc link suffix html to md
---
 .../cluster-management/elastic-expansion.md        |  6 ++---
 docs/en/admin-manual/config/be-config.md           |  2 +-
 docs/en/admin-manual/data-admin/backup.md          |  6 ++---
 docs/en/admin-manual/data-admin/delete-recover.md  |  2 +-
 docs/en/admin-manual/data-admin/restore.md         |  6 ++---
 .../http-actions/fe/table-schema-action.md         |  2 +-
 .../maint-monitor/metadata-operation.md            |  4 +--
 .../maint-monitor/tablet-repair-and-balance.md     |  4 +--
 .../admin-manual/privilege-ldap/user-privilege.md  |  2 +-
 docs/en/admin-manual/sql-interception.md           |  8 +++---
 docs/en/advanced/alter-table/replace-table.md      |  4 +--
 docs/en/advanced/best-practice/import-analysis.md  |  6 ++---
 docs/en/advanced/broker.md                         |  4 +--
 docs/en/advanced/materialized-view.md              |  2 +-
 docs/en/advanced/partition/table-temp-partition.md |  4 +--
 docs/en/advanced/resource.md                       |  8 +++---
 docs/en/advanced/small-file-mgr.md                 |  2 +-
 docs/en/advanced/variables.md                      | 14 +++++-----
 docs/en/benchmark/ssb.md                           |  4 +--
 .../how-to-contribute/how-to-contribute.md         |  2 +-
 .../release-and-verify/release-prepare.md          |  2 +-
 .../community/release-and-verify/release-verify.md |  6 ++---
 docs/en/data-operate/export/export-manual.md       |  8 +++---
 docs/en/data-operate/export/outfile.md             |  6 ++---
 .../import/import-scenes/external-storage-load.md  |  6 ++---
 .../import/import-scenes/external-table-load.md    |  6 ++---
 .../data-operate/import/import-scenes/jdbc-load.md |  4 +--
 .../import/import-scenes/kafka-load.md             | 16 ++++++------
 .../import/import-scenes/load-atomicity.md         |  4 +--
 .../import/import-scenes/load-data-convert.md      |  6 ++---
 .../import/import-scenes/load-strict-mode.md       | 10 ++++----
 .../import/import-scenes/local-file-load.md        |  6 ++---
 .../import/import-way/binlog-load-manual.md        | 12 ++++-----
 .../import/import-way/broker-load-manual.md        | 22 ++++++++--------
 .../import/import-way/insert-into-manual.md        |  4 +--
 .../import/import-way/load-json-format.md          |  6 ++---
 .../import/import-way/routine-load-manual.md       |  2 +-
 docs/en/data-operate/import/load-manual.md         | 30 +++++++++++-----------
 .../update-delete/batch-delete-manual.md           |  4 +--
 .../en/data-operate/update-delete/delete-manual.md |  6 ++---
 docs/en/data-operate/update-delete/update.md       |  2 +-
 docs/en/data-table/advance-usage.md                |  4 +--
 docs/en/data-table/basic-usage.md                  |  8 +++---
 docs/en/data-table/best-practice.md                |  2 +-
 docs/en/data-table/data-partition.md               | 12 ++++-----
 docs/en/data-table/hit-the-rollup.md               |  4 +--
 docs/en/data-table/index/bitmap-index.md           |  4 +--
 docs/en/design/Flink doris connector Design.md     |  2 +-
 docs/en/developer-guide/benchmark-tool.md          |  2 +-
 docs/en/developer-guide/cpp-diagnostic-code.md     |  2 +-
 docs/en/developer-guide/docker-dev.md              |  8 +++---
 docs/en/developer-guide/fe-vscode-dev.md           |  2 +-
 docs/en/downloads/downloads.md                     |  2 +-
 docs/en/ecosystem/external-table/doris-on-es.md    |  2 +-
 .../ecosystem/external-table/iceberg-of-doris.md   |  6 ++---
 docs/en/ecosystem/external-table/odbc-of-doris.md  |  4 +--
 docs/en/ecosystem/seatunnel/flink-sink.md          |  4 +--
 docs/en/ecosystem/seatunnel/spark-sink.md          |  4 +--
 docs/en/faq/data-faq.md                            |  2 +-
 docs/en/faq/install-faq.md                         | 10 ++++----
 docs/en/faq/sql-faq.md                             |  4 +--
 docs/en/get-starting/get-starting.md               | 20 +++++++--------
 docs/en/install/install-deploy.md                  |  4 +--
 .../Alter/ALTER-TABLE-COLUMN.md                    |  2 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |  2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |  2 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |  2 +-
 .../Backup-and-Restore/BACKUP.md                   |  2 +-
 .../Backup-and-Restore/CREATE-REPOSITORY.md        |  2 +-
 .../Backup-and-Restore/RESTORE.md                  |  2 +-
 .../Create/CREATE-EXTERNAL-TABLE.md                |  2 +-
 .../Create/CREATE-MATERIALIZED-VIEW.md             |  2 +-
 .../Create/CREATE-TABLE.md                         | 10 ++++----
 .../Drop/DROP-DATABASE.md                          |  2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |  2 +-
 .../Load/BROKER-LOAD.md                            | 22 ++++++++--------
 .../Load/CREATE-SYNC-JOB.md                        |  2 +-
 .../Load/STREAM-LOAD.md                            |  8 +++---
 .../sql-reference/Show-Statements/SHOW-PROC.md     | 16 ++++++------
 .../sql-reference/Show-Statements/SHOW-STATUS.md   |  2 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 docs/zh-CN/admin-manual/config/be-config.md        |  4 +--
 docs/zh-CN/admin-manual/config/fe-config.md        |  2 +-
 docs/zh-CN/admin-manual/data-admin/backup.md       |  6 ++---
 .../admin-manual/data-admin/delete-recover.md      |  2 +-
 docs/zh-CN/admin-manual/data-admin/restore.md      |  6 ++---
 .../http-actions/fe/table-schema-action.md         |  2 +-
 .../admin-manual/maint-monitor/disk-capacity.md    |  6 ++---
 .../maint-monitor/metadata-operation.md            |  6 ++---
 .../maint-monitor/tablet-repair-and-balance.md     |  2 +-
 .../admin-manual/privilege-ldap/user-privilege.md  |  2 +-
 docs/zh-CN/admin-manual/sql-interception.md        |  8 +++---
 docs/zh-CN/advanced/alter-table/replace-table.md   |  4 +--
 .../advanced/best-practice/import-analysis.md      |  6 ++---
 docs/zh-CN/advanced/broker.md                      |  8 +++---
 docs/zh-CN/advanced/materialized-view.md           |  6 ++---
 .../advanced/partition/table-tmp-partition.md      |  4 +--
 docs/zh-CN/advanced/resource.md                    |  8 +++---
 docs/zh-CN/advanced/small-file-mgr.md              |  4 +--
 docs/zh-CN/advanced/variables.md                   | 16 ++++++------
 docs/zh-CN/benchmark/ssb.md                        |  4 +--
 .../how-to-contribute/how-to-contribute.md         |  2 +-
 .../release-and-verify/release-doris-manager.md    |  2 +-
 .../community/release-and-verify/release-verify.md |  2 +-
 docs/zh-CN/data-operate/export/export-manual.md    | 10 ++++----
 docs/zh-CN/data-operate/export/outfile.md          |  6 ++---
 .../import/import-scenes/external-storage-load.md  |  8 +++---
 .../import/import-scenes/external-table-load.md    |  8 +++---
 .../data-operate/import/import-scenes/jdbc-load.md |  4 +--
 .../import/import-scenes/kafka-load.md             | 18 ++++++-------
 .../import/import-scenes/load-atomicity.md         |  4 +--
 .../import/import-scenes/load-data-convert.md      |  6 ++---
 .../import/import-scenes/load-strict-mode.md       | 10 ++++----
 .../import/import-scenes/local-file-load.md        |  6 ++---
 .../import/import-way/binlog-load-manual.md        | 12 ++++-----
 .../import/import-way/broker-load-manual.md        | 22 ++++++++--------
 .../import/import-way/insert-into-manual.md        | 12 ++++-----
 .../import/import-way/load-json-format.md          |  6 ++---
 .../import/import-way/routine-load-manual.md       |  4 +--
 .../import/import-way/s3-load-manual.md            |  2 +-
 .../import/import-way/spark-load-manual.md         | 10 ++++----
 .../import/import-way/stream-load-manual.md        |  2 +-
 docs/zh-CN/data-operate/import/load-manual.md      | 28 ++++++++++----------
 .../update-delete/batch-delete-manual.md           |  4 +--
 .../data-operate/update-delete/delete-manual.md    |  6 ++---
 docs/zh-CN/data-operate/update-delete/update.md    |  2 +-
 docs/zh-CN/data-table/advance-usage.md             |  4 +--
 docs/zh-CN/data-table/basic-usage.md               | 10 ++++----
 docs/zh-CN/data-table/best-practice.md             |  4 +--
 docs/zh-CN/data-table/data-model.md                |  4 +--
 docs/zh-CN/data-table/data-partition.md            | 16 ++++++------
 docs/zh-CN/data-table/hit-the-rollup.md            |  6 ++---
 docs/zh-CN/data-table/index/bitmap-index.md        |  4 +--
 docs/zh-CN/developer-guide/benchmark-tool.md       |  2 +-
 docs/zh-CN/developer-guide/cpp-diagnostic-code.md  |  2 +-
 docs/zh-CN/developer-guide/docker-dev.md           |  8 +++---
 docs/zh-CN/developer-guide/fe-vscode-dev.md        |  2 +-
 docs/zh-CN/downloads/downloads.md                  |  2 +-
 docs/zh-CN/ecosystem/external-table/doris-on-es.md |  2 +-
 .../ecosystem/external-table/iceberg-of-doris.md   |  6 ++---
 .../ecosystem/external-table/odbc-of-doris.md      |  4 +--
 docs/zh-CN/ecosystem/logstash.md                   |  4 +--
 docs/zh-CN/ecosystem/seatunnel/flink-sink.md       |  4 +--
 docs/zh-CN/ecosystem/seatunnel/spark-sink.md       |  4 +--
 docs/zh-CN/faq/data-faq.md                         |  4 +--
 docs/zh-CN/faq/install-faq.md                      | 10 ++++----
 docs/zh-CN/faq/sql-faq.md                          |  4 +--
 docs/zh-CN/get-starting/get-starting.md            | 20 +++++++--------
 .../Alter/ALTER-TABLE-COLUMN.md                    |  2 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |  2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |  2 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |  2 +-
 .../Backup-and-Restore/BACKUP.md                   |  2 +-
 .../Backup-and-Restore/CREATE-REPOSITORY.md        |  2 +-
 .../Backup-and-Restore/RESTORE.md                  |  2 +-
 .../Create/CREATE-EXTERNAL-TABLE.md                |  2 +-
 .../Create/CREATE-MATERIALIZED-VIEW.md             |  2 +-
 .../Create/CREATE-TABLE.md                         | 10 ++++----
 .../Drop/DROP-DATABASE.md                          |  2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |  2 +-
 .../Load/BROKER-LOAD.md                            | 22 ++++++++--------
 .../Load/CREATE-SYNC-JOB.md                        |  2 +-
 .../Load/STREAM-LOAD.md                            |  8 +++---
 .../SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md          |  2 +-
 .../sql-reference/Show-Statements/SHOW-PROC.md     | 16 ++++++------
 165 files changed, 483 insertions(+), 483 deletions(-)
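[Editor's note] The change summarized above is a mechanical rewrite of relative markdown links from a `.html` suffix to `.md` across 165 doc files. A minimal sketch of that kind of bulk rewrite is below; this is a hypothetical reproduction for illustration, not the command the author actually ran, and the regex (relative links only, optional `#anchor`) is an assumption about the pattern being fixed:

```python
import re

# Match markdown link targets like "](../path/page.html)" or
# "](../path/page.html#anchor)". The negative lookahead skips
# absolute http(s) URLs, which this commit does not touch
# (assumption based on the hunks shown above).
LINK_RE = re.compile(r"(\]\((?!https?://)[^)#\s]+?)\.html([#)])")

def html_links_to_md(text: str) -> str:
    """Rewrite relative intra-doc links from .html to .md."""
    return LINK_RE.sub(r"\1.md\2", text)
```

Applied over each `docs/**/*.md` file, this yields exactly the one-token-per-link diffs shown in the hunks below, e.g. `[Broker Load](.../broker-load-manual.html)` becoming `[Broker Load](.../broker-load-manual.md)`.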

diff --git a/docs/en/admin-manual/cluster-management/elastic-expansion.md b/docs/en/admin-manual/cluster-management/elastic-expansion.md
index bbfd122df9..11b4e5741d 100644
--- a/docs/en/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/en/admin-manual/cluster-management/elastic-expansion.md
@@ -102,7 +102,7 @@ You can also view the BE node through the front-end page connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../maint-monitor/tablet-meta-tool.html).
+The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../maint-monitor/tablet-meta-tool.md).
 
 ### Add BE nodes
 
@@ -136,7 +136,7 @@ DECOMMISSION clause:
      > 		```CANCEL ALTER SYSTEM DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```
      > 	The order was cancelled. When cancelled, the data on the BE will maintain the current amount of data remaining. Follow-up Doris re-load balancing
 
-**For expansion and scaling of BE nodes in multi-tenant deployment environments, please refer to the [Multi-tenant Design Document](../multi-tenant.html).**
+**For expansion and scaling of BE nodes in multi-tenant deployment environments, please refer to the [Multi-tenant Design Document](../multi-tenant.md).**
 
 ## Broker Expansion and Shrinkage
 
@@ -146,4 +146,4 @@ There is no rigid requirement for the number of Broker instances. Usually one ph
 ```ALTER SYSTEM DROP BROKER broker_name "broker_host:broker_ipc_port";```
 ```ALTER SYSTEM DROP ALL BROKER broker_name;```
 
-Broker is a stateless process that can be started or stopped at will. Of course, when it stops, the job running on it will fail. Just try again.
\ No newline at end of file
+Broker is a stateless process that can be started or stopped at will. Of course, when it stops, the job running on it will fail. Just try again.
diff --git a/docs/en/admin-manual/config/be-config.md b/docs/en/admin-manual/config/be-config.md
index c8a2830d7b..65c2c6d453 100644
--- a/docs/en/admin-manual/config/be-config.md
+++ b/docs/en/admin-manual/config/be-config.md
@@ -787,7 +787,7 @@ The maximum external scan cache batch count, which means that the cache max_memo
 ### `max_pushdown_conditions_per_column`
 
 * Type: int
-* Description: Used to limit the maximum number of conditions that can be pushed down to the storage engine for a single column in a query request. During the execution of the query plan, the filter conditions on some columns can be pushed down to the storage engine, so that the index information in the storage engine can be used for data filtering, reducing the amount of data that needs to be scanned by the query. Such as equivalent conditions, conditions in IN predicates, etc. In most  [...]
+* Description: Used to limit the maximum number of conditions that can be pushed down to the storage engine for a single column in a query request. During the execution of the query plan, the filter conditions on some columns can be pushed down to the storage engine, so that the index information in the storage engine can be used for data filtering, reducing the amount of data that needs to be scanned by the query. Such as equivalent conditions, conditions in IN predicates, etc. In most  [...]
 * Default value: 1024
 
 * Example
diff --git a/docs/en/admin-manual/data-admin/backup.md b/docs/en/admin-manual/data-admin/backup.md
index 9139d03947..52e677c3c8 100644
--- a/docs/en/admin-manual/data-admin/backup.md
+++ b/docs/en/admin-manual/data-admin/backup.md
@@ -124,7 +124,7 @@ ALTER TABLE tbl1 SET ("dynamic_partition.enable"="true")
    1 row in set (0.15 sec)
    ```
 
-For the detailed usage of BACKUP, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.html).
+For the detailed usage of BACKUP, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md).
 
 ## Best Practices
 
@@ -153,7 +153,7 @@ It is recommended to import the new and old clusters in parallel for a period of
 
    1. CREATE REPOSITORY
 
-      Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.html), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, please refer to [Create Remote Warehouse Documentation](../../sql-manual/sql-reference/Data-Definition-Statements [...]
+      Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.md), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, please refer to [Create Remote Warehouse Documentation](../../sql-manual/sql-reference/Data-Definition-Statements/B [...]
 
    2. BACKUP
 
@@ -208,4 +208,4 @@ It is recommended to import the new and old clusters in parallel for a period of
 
 ## More Help
 
- For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) command manual, You can also type `HELP BACKUP` on the MySql client command line for more help.
+ For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md) command manual, You can also type `HELP BACKUP` on the MySql client command line for more help.
diff --git a/docs/en/admin-manual/data-admin/delete-recover.md b/docs/en/admin-manual/data-admin/delete-recover.md
index 722798e970..f02c4ada15 100644
--- a/docs/en/admin-manual/data-admin/delete-recover.md
+++ b/docs/en/admin-manual/data-admin/delete-recover.md
@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
 
 ## More Help
 
-For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) command manual, You can also type `HELP RECOVER` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RECOVER.md) command manual, You can also type `HELP RECOVER` on the MySql client command line for more help.
diff --git a/docs/en/admin-manual/data-admin/restore.md b/docs/en/admin-manual/data-admin/restore.md
index 464b9c82b6..363f62f009 100644
--- a/docs/en/admin-manual/data-admin/restore.md
+++ b/docs/en/admin-manual/data-admin/restore.md
@@ -126,7 +126,7 @@ The restore operation needs to specify an existing backup in the remote warehous
    1 row in set (0.01 sec)
    ```
 
-For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.html).
+For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md).
 
 ## Related Commands
 
@@ -134,7 +134,7 @@ The commands related to the backup and restore function are as follows. For the
 
 1. CREATE REPOSITORY
 
-   Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.html), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, please refer to [Create Remote Warehouse Documentation](../../sql-manual/sql-reference/Data-Definition-Statements/Ba [...]
+   Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.md), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, please refer to [Create Remote Warehouse Documentation](../../sql-manual/sql-reference/Data-Definition-Statements/Back [...]
 
 2. RESTORE
 
@@ -180,4 +180,4 @@ The commands related to the backup and restore function are as follows. For the
 
 ## More Help
 
-For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) command manual, You can also type `HELP RESTORE` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md) command manual, You can also type `HELP RESTORE` on the MySql client command line for more help.
diff --git a/docs/en/admin-manual/http-actions/fe/table-schema-action.md b/docs/en/admin-manual/http-actions/fe/table-schema-action.md
index d89de1b9d9..9737ac19aa 100644
--- a/docs/en/admin-manual/http-actions/fe/table-schema-action.md
+++ b/docs/en/admin-manual/http-actions/fe/table-schema-action.md
@@ -152,4 +152,4 @@ Note: The difference is that the `http` method returns more `aggregation_type` f
     	},
     	"count": 0
     }
-    ```
\ No newline at end of file
+    ```
diff --git a/docs/en/admin-manual/maint-monitor/metadata-operation.md b/docs/en/admin-manual/maint-monitor/metadata-operation.md
index 12977a7091..db047000b1 100644
--- a/docs/en/admin-manual/maint-monitor/metadata-operation.md
+++ b/docs/en/admin-manual/maint-monitor/metadata-operation.md
@@ -28,7 +28,7 @@ under the License.
 
 This document focuses on how to manage Doris metadata in a real production environment. It includes the proposed deployment of FE nodes, some commonly used operational methods, and common error resolution methods.
 
-For the time being, read the [Doris metadata design document](../../internal/metadata-design_EN.md) to understand how Doris metadata works.
+For the time being, read the [Doris metadata design document](../../design/metadata-design.md) to understand how Doris metadata works.
 
 ## Important tips
 
@@ -136,7 +136,7 @@ Single node FE is the most basic deployment mode. A complete Doris cluster requi
 
 ### Add FE
 
-Adding FE processes is described in detail in the [Elastic Expansion Documents](../../admin-manual/cluster-management/elastic-expansion.html) and will not be repeated. Here are some points for attention, as well as common problems.
+Adding FE processes is described in detail in the [Elastic Expansion Documents](../../admin-manual/cluster-management/elastic-expansion.md) and will not be repeated. Here are some points for attention, as well as common problems.
 
 1. Notes
 
diff --git a/docs/en/admin-manual/maint-monitor/tablet-repair-and-balance.md b/docs/en/admin-manual/maint-monitor/tablet-repair-and-balance.md
index ad68adee78..44898313db 100644
--- a/docs/en/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/docs/en/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -28,7 +28,7 @@ under the License.
 
 Beginning with version 0.9.0, Doris introduced an optimized replica management strategy and supported a richer replica status viewing tool. This document focuses on Doris data replica balancing, repair scheduling strategies, and replica management operations and maintenance methods. Help users to more easily master and manage the replica status in the cluster.
 
-> Repairing and balancing copies of tables with Collocation attributes can be referred to [HERE](../../advanced/join-optimization/colocation-join.html)
+> Repairing and balancing copies of tables with Collocation attributes can be referred to [HERE](../../advanced/join-optimization/colocation-join.md)
 
 ## Noun Interpretation
 
@@ -771,4 +771,4 @@ Overall, when we need to bring the cluster back to a normal state quickly, consi
 2. repair some tables with the `admin repair` statement.
 3. Stop the replica balancing logic to avoid taking up cluster resources, and then turn it on again after the cluster is restored.
 4. Use a more conservative strategy to trigger repair tasks to deal with the avalanche effect caused by frequent BE downtime.
-5. Turn off scheduling tasks for colocation tables on-demand and focus cluster resources on repairing other high-optimality data.
\ No newline at end of file
+5. Turn off scheduling tasks for colocation tables on-demand and focus cluster resources on repairing other high-optimality data.
diff --git a/docs/en/admin-manual/privilege-ldap/user-privilege.md b/docs/en/admin-manual/privilege-ldap/user-privilege.md
index ef15b9ae20..36d99b0a3c 100644
--- a/docs/en/admin-manual/privilege-ldap/user-privilege.md
+++ b/docs/en/admin-manual/privilege-ldap/user-privilege.md
@@ -223,4 +223,4 @@ Here are some usage scenarios of Doris privilege system.
 
 ## More help
 
-For more detailed syntax and best practices for permission management use, please refer to the [GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.html) command manual. Enter `HELP GRANTS` at the command line of the MySql client for more help information.
+For more detailed syntax and best practices for permission management use, please refer to the [GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md) command manual. Enter `HELP GRANTS` at the command line of the MySql client for more help information.
diff --git a/docs/en/admin-manual/sql-interception.md b/docs/en/admin-manual/sql-interception.md
index 599a8b467b..5da44cf0b6 100644
--- a/docs/en/admin-manual/sql-interception.md
+++ b/docs/en/admin-manual/sql-interception.md
@@ -37,7 +37,7 @@ Support SQL block rule by user level:
 ## Rule
 
 SQL block rule CRUD
-- create SQL block rule,For more creation syntax see[CREATE SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.html)
+- create SQL block rule,For more creation syntax see[CREATE SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
     - sql:Regex pattern,Special characters need to be translated, "NULL" by default
     - sqlHash: Sql hash value, Used to match exactly, We print it in fe.audit.log, This parameter is the only choice between sql and sql, "NULL" by default
     - partition_num: Max number of partitions will be scanned by a scan node, 0L by default
@@ -65,12 +65,12 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = sql match regex sql block rule:
 CREATE SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "30", "cardinality"="10000000000","global"="false","enable"="true")
 ```
 
-- show configured SQL block rules, or show all rules if you do not specify a rule name,Please see the specific grammar [SHOW SQL BLOCK RULE](../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.html)
+- show configured SQL block rules, or show all rules if you do not specify a rule name,Please see the specific grammar [SHOW SQL BLOCK RULE](../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
 
 ```sql
 SHOW SQL_BLOCK_RULE [FOR RULE_NAME]
 ```
-- alter SQL block rule,Allows changes sql/sqlHash/global/enable/partition_num/tablet_num/cardinality anyone,Please see the specific grammar[ALTER SQL BLOCK  RULE](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.html)
+- alter SQL block rule,Allows changes sql/sqlHash/global/enable/partition_num/tablet_num/cardinality anyone,Please see the specific grammar[ALTER SQL BLOCK  RULE](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
     - sql and sqlHash cannot be set both. It means if sql or sqlHash is set in a rule, another property will never be allowed to be altered
     - sql/sqlHash and partition_num/tablet_num/cardinality cannot be set together. For example, partition_num is set in a rule, then sql or sqlHash will never be allowed to be altered.
 ```sql
@@ -81,7 +81,7 @@ ALTER SQL_BLOCK_RULE test_rule PROPERTIES("sql"="select \\* from test_table","en
 ALTER SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "10","tablet_num"="300","enable"="true")
 ```
 
-- drop SQL block rule,Support multiple rules, separated by `,`,Please see the specific grammar[DROP SQL BLOCK RULR](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.html)
+- drop SQL block rule,Support multiple rules, separated by `,`,Please see the specific grammar[DROP SQL BLOCK RULR](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
 ```sql
 DROP SQL_BLOCK_RULE test_rule1,test_rule2
 ```
diff --git a/docs/en/advanced/alter-table/replace-table.md b/docs/en/advanced/alter-table/replace-table.md
index e79a0b49b2..13ad8d5cc4 100644
--- a/docs/en/advanced/alter-table/replace-table.md
+++ b/docs/en/advanced/alter-table/replace-table.md
@@ -29,7 +29,7 @@ under the License.
 In version 0.14, Doris supports atomic replacement of two tables.
 This operation only applies to OLAP tables.
 
-For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition.html)
+For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition.md)
 
 ## Syntax
 
@@ -69,4 +69,4 @@ If `swap` is `false`, the operation is as follows:
 
 1. Atomic Overwrite Operation
 
-    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
\ No newline at end of file
+    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
diff --git a/docs/en/advanced/best-practice/import-analysis.md b/docs/en/advanced/best-practice/import-analysis.md
index 68ae06fc6f..793c7beef0 100644
--- a/docs/en/advanced/best-practice/import-analysis.md
+++ b/docs/en/advanced/best-practice/import-analysis.md
@@ -33,9 +33,9 @@ Doris provides a graphical command to help users analyze a specific import more
 
 ## Import plan tree
 
-If you don't know much about Doris' query plan tree, please read the previous article [DORIS/best practices/query analysis](./query-analysis.html).
+If you don't know much about Doris' query plan tree, please read the previous article [DORIS/best practices/query analysis](./query-analysis.md).
 
-The execution process of a [Broker Load](../../data-operate/import/import-way/broker-load-manual.html) request is also based on Doris' query framework. A Broker Load job will be split into multiple subtasks based on the number of DATA INFILE clauses in the import request. Each subtask can be regarded as an independent import execution plan. An import plan consists of only one Fragment, which is composed as follows:
+The execution process of a [Broker Load](../../data-operate/import/import-way/broker-load-manual.md) request is also based on Doris' query framework. A Broker Load job will be split into multiple subtasks based on the number of DATA INFILE clauses in the import request. Each subtask can be regarded as an independent import execution plan. An import plan consists of only one Fragment, which is composed as follows:
 
 ```sql
 ┌────────────────┐
@@ -168,4 +168,4 @@ This shows the time-consuming of four instances of the subtask 980014623046410a-
 
 The figure above shows the specific profiles of each operator of Instance 980014623046410a-88e260f0c43031f5 in subtask 980014623046410a-88e260f0c43031f1.
 
-Through the above three steps, we can gradually check the execution bottleneck of an import task.
\ No newline at end of file
+Through the above three steps, we can gradually check the execution bottleneck of an import task.
diff --git a/docs/en/advanced/broker.md b/docs/en/advanced/broker.md
index 302750ddfc..2212839a3d 100644
--- a/docs/en/advanced/broker.md
+++ b/docs/en/advanced/broker.md
@@ -74,8 +74,8 @@ Different types of brokers support different storage systems.
 
 ## Function provided by Broker
 
-1. [Broker Load](../data-operate/import/import-way/broker-load-manual.html)
-2. [Export](../data-operate/export/export-manual.html)
+1. [Broker Load](../data-operate/import/import-way/broker-load-manual.md)
+2. [Export](../data-operate/export/export-manual.md)
 3. [Backup](../admin-manual/data-admin/backup.md)
 
 ## Broker Information
diff --git a/docs/en/advanced/materialized-view.md b/docs/en/advanced/materialized-view.md
index 6059b8fa9e..8a855d1285 100644
--- a/docs/en/advanced/materialized-view.md
+++ b/docs/en/advanced/materialized-view.md
@@ -487,4 +487,4 @@ Note: The bitmap type only supports positive integers. If there are negative Num
 
 ## More Help
 
-For more detailed syntax and best practices for using materialized views, see [CREATE MATERIALIZED VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED- VIEW.html) and [DROP MATERIALIZED VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.html) command manual, you can also Enter `HELP CREATE MATERIALIZED VIEW` and `HELP DROP MATERIALIZED VIEW` at the command line of the MySql client for more help information.
+For more detailed syntax and best practices for using materialized views, see the [CREATE MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) and [DROP MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.md) command manuals. You can also enter `HELP CREATE MATERIALIZED VIEW` and `HELP DROP MATERIALIZED VIEW` at the MySQL client command line for more help information.
diff --git a/docs/en/advanced/partition/table-temp-partition.md b/docs/en/advanced/partition/table-temp-partition.md
index 838b604736..8c707744e9 100644
--- a/docs/en/advanced/partition/table-temp-partition.md
+++ b/docs/en/advanced/partition/table-temp-partition.md
@@ -277,7 +277,7 @@ Users can load data into temporary partitions or specify temporary partitions fo
 
 1. Atomic overwrite
 
-    In some cases, the user wants to be able to rewrite the data of a certain partition, but if it is dropped first and then loaded, there will be a period of time when the data cannot be seen. At this moment, the user can first create a corresponding temporary partition, load new data into the temporary partition, and then replace the original partition atomically through the `REPLACE` operation to achieve the purpose. For atomic overwrite operations of non-partitioned tables, please re [...]
+    In some cases, the user wants to be able to rewrite the data of a certain partition, but if it is dropped first and then loaded, there will be a period of time when the data cannot be seen. At this moment, the user can first create a corresponding temporary partition, load new data into the temporary partition, and then replace the original partition atomically through the `REPLACE` operation to achieve the purpose. For atomic overwrite operations of non-partitioned tables, please re [...]
     
 2. Modify the number of buckets
 
@@ -285,4 +285,4 @@ Users can load data into temporary partitions or specify temporary partitions fo
     
 3. Merge or split partitions
 
-    In some cases, users want to modify the range of partitions, such as merging two partitions, or splitting a large partition into multiple smaller partitions. Then the user can first create temporary partitions corresponding to the merged or divided range, and then load the data of the formal partition into the temporary partition through the `INSERT INTO` command. Through the replacement operation, the original partition is replaced atomically to achieve the purpose.
\ No newline at end of file
+    In some cases, users want to modify the range of partitions, such as merging two partitions, or splitting a large partition into multiple smaller partitions. Then the user can first create temporary partitions corresponding to the merged or divided range, and then load the data of the formal partition into the temporary partition through the `INSERT INTO` command. Through the replacement operation, the original partition is replaced atomically to achieve the purpose.
diff --git a/docs/en/advanced/resource.md b/docs/en/advanced/resource.md
index e7dc102aec..494ed7b200 100644
--- a/docs/en/advanced/resource.md
+++ b/docs/en/advanced/resource.md
@@ -41,15 +41,15 @@ There are three main commands for resource management: `create resource`, `drop
 
 1. CREATE RESOURCE
 
-    This statement is used to create a resource. For details, please refer to [CREATE RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.html).
+    This statement is used to create a resource. For details, please refer to [CREATE RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md).
 
 2. DROP RESOURCE
 
-    This command can delete an existing resource. For details, see [DROP RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-RESOURCE.html).
+    This command can delete an existing resource. For details, see [DROP RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-RESOURCE.md).
 
 3. SHOW RESOURCES
 
-    This command can view the resources that the user has permission to use. For details, see [SHOW RESOURCES](../sql-manual/sql-reference/Show-Statements/SHOW-RESOURCES.html).
+    This command can view the resources that the user has permission to use. For details, see [SHOW RESOURCES](../sql-manual/sql-reference/Show-Statements/SHOW-RESOURCES.md).
 
 ## Resources Supported
 
@@ -132,7 +132,7 @@ PROPERTIES
 `driver`: Indicates the driver dynamic library used by the ODBC external table.
 The ODBC external table referring to the resource is required. The old MySQL external table referring to the resource is optional.
 
-For the usage of ODBC resource, please refer to [ODBC of Doris](../ecosystem/external-table/odbc-of-doris.html)
+For the usage of ODBC resource, please refer to [ODBC of Doris](../ecosystem/external-table/odbc-of-doris.md)
 
 
 #### Example
diff --git a/docs/en/advanced/small-file-mgr.md b/docs/en/advanced/small-file-mgr.md
index 0ebf5f8e66..1675b3cbc9 100644
--- a/docs/en/advanced/small-file-mgr.md
+++ b/docs/en/advanced/small-file-mgr.md
@@ -129,4 +129,4 @@ Because the file meta-information and content are stored in FE memory. So by def
 
 ## More Help
 
-For more detailed syntax and best practices used by the file manager, see [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.html), [DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.html) and [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) command manual, you can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySql client command line to get more help information.
+For more detailed syntax and best practices for the file manager, see the [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md), [DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md) and [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) command manuals. You can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` at the MySQL client command line for more help information.
diff --git a/docs/en/advanced/variables.md b/docs/en/advanced/variables.md
index 0d58e7ea65..d5772bdceb 100644
--- a/docs/en/advanced/variables.md
+++ b/docs/en/advanced/variables.md
@@ -160,11 +160,11 @@ Note that the comment must start with /*+ and can only follow the SELECT.
 
 * `disable_colocate_join`
 
-    Controls whether the [Colocation Join](../advanced/join-optimization/colocation-join.html) function is enabled. The default is false, which means that the feature is enabled. True means that the feature is disabled. When this feature is disabled, the query plan will not attempt to perform a Colocation Join.
+    Controls whether the [Colocation Join](../advanced/join-optimization/colocation-join.md) function is enabled. The default is false, which means that the feature is enabled. True means that the feature is disabled. When this feature is disabled, the query plan will not attempt to perform a Colocation Join.
     
 * `enable_bucket_shuffle_join`
 
-    Controls whether the [Bucket Shuffle Join](../advanced/join-optimization/bucket-shuffle-join.html) function is enabled. The default is true, which means that the feature is enabled. False means that the feature is disabled. When this feature is disabled, the query plan will not attempt to perform a Bucket Shuffle Join.
+    Controls whether the [Bucket Shuffle Join](../advanced/join-optimization/bucket-shuffle-join.md) function is enabled. The default is true, which means that the feature is enabled. False means that the feature is disabled. When this feature is disabled, the query plan will not attempt to perform a Bucket Shuffle Join.
 
 * `disable_streaming_preaggregations`
 
@@ -172,7 +172,7 @@ Note that the comment must start with /*+ and can only follow the SELECT.
     
 * `enable_insert_strict`
 
-    Used to set the `strict` mode when loading data via INSERT statement. The default is false, which means that the `strict` mode is not turned on. For an introduction to this mode, see [here](../data-operate/import/import-way/insert-into-manual.html).
+    Used to set the `strict` mode when loading data via INSERT statement. The default is false, which means that the `strict` mode is not turned on. For an introduction to this mode, see [here](../data-operate/import/import-way/insert-into-manual.md).
 
 * `enable_spilling`
 
@@ -294,11 +294,11 @@ Translated with www.DeepL.com/Translator (free version)
     
 * `max_pushdown_conditions_per_column`
 
-    For the specific meaning of this variable, please refer to the description of `max_pushdown_conditions_per_column` in [BE Configuration](../admin-manual/config/be-config.html). This variable is set to -1 by default, which means that the configuration value in `be.conf` is used. If the setting is greater than 0, the query in the current session will use the variable value, and ignore the configuration value in `be.conf`.
+    For the specific meaning of this variable, please refer to the description of `max_pushdown_conditions_per_column` in [BE Configuration](../admin-manual/config/be-config.md). This variable is set to -1 by default, which means that the configuration value in `be.conf` is used. If the setting is greater than 0, the query in the current session will use the variable value, and ignore the configuration value in `be.conf`.
 
 * `max_scan_key_num`
 
-    For the specific meaning of this variable, please refer to the description of `doris_max_scan_key_num` in [BE Configuration](../admin-manual/config/be-config.html). This variable is set to -1 by default, which means that the configuration value in `be.conf` is used. If the setting is greater than 0, the query in the current session will use the variable value, and ignore the configuration value in `be.conf`.
+    For the specific meaning of this variable, please refer to the description of `doris_max_scan_key_num` in [BE Configuration](../admin-manual/config/be-config.md). This variable is set to -1 by default, which means that the configuration value in `be.conf` is used. If the setting is greater than 0, the query in the current session will use the variable value, and ignore the configuration value in `be.conf`.
 
 * `net_buffer_length`
 
@@ -350,7 +350,7 @@ Translated with www.DeepL.com/Translator (free version)
 
 * `sql_mode`
 
-    Used to specify SQL mode to accommodate certain SQL dialects. For the SQL mode, see [here](https://doris.apache.org/zh-CN/administrator-guide/sql-mode.html).
+    Used to specify SQL mode to accommodate certain SQL dialects. For the SQL mode, see [here](https://doris.apache.org/zh-CN/administrator-guide/sql-mode.md).
     
 * `sql_safe_updates`
 
@@ -500,4 +500,4 @@ Translated with www.DeepL.com/Translator (free version)
   
 * `enable_infer_predicate`
   
-  Used to control whether predicate deduction is performed. There are two values: true and false. It is turned off by default, and the system does not perform predicate deduction, and uses the original predicate for related operations. When set to true, predicate expansion occurs.
\ No newline at end of file
+  Used to control whether predicate deduction is performed. There are two values: true and false. It is turned off by default, and the system does not perform predicate deduction, and uses the original predicate for related operations. When set to true, predicate expansion occurs.
diff --git a/docs/en/benchmark/ssb.md b/docs/en/benchmark/ssb.md
index cc2beaf61c..727c2cf485 100644
--- a/docs/en/benchmark/ssb.md
+++ b/docs/en/benchmark/ssb.md
@@ -36,7 +36,7 @@ This document mainly introduces how to pass the preliminary performance test of
 
 ## Environmental preparation
 
-Please refer to the [official document](../install/install-deploy.html) to install and deploy Doris to obtain a normal running Doris cluster ( Contain at least 1 FE, 1 BE).
+Please refer to the [official document](../install/install-deploy.md) to install and deploy Doris and obtain a normally running Doris cluster (containing at least 1 FE and 1 BE).
 
 The scripts involved in the following documents are all stored under `tools/ssb-tools/` in the Doris code base.
 
@@ -178,4 +178,4 @@ The following test report is based on Doris [branch-0.15](https://github.com/apa
     >
     > Note 4: Parallelism means query concurrency, which is set by `set parallel_fragment_exec_instance_num=8`.
     >
-    > Note 5: Runtime Filter Mode is the type of Runtime Filter, set by `set runtime_filter_type="BLOOM_FILTER"`. ([Runtime Filter](../../advanced/join-optimization/runtime-filter.html) function has a significant effect on the SSB test set. Because in this test set, The data from the right table of Join can filter the left table very well. You can try to turn off this function through `set runtime_filter_mode=off` to see the change in query latency.)
+    > Note 5: Runtime Filter Mode is the type of Runtime Filter, set by `set runtime_filter_type="BLOOM_FILTER"`. (The [Runtime Filter](../../advanced/join-optimization/runtime-filter.md) feature has a significant effect on the SSB test set, because in this test set the data from the right table of a Join can filter the left table very well. You can try turning this feature off via `set runtime_filter_mode=off` to see the change in query latency.)
diff --git a/docs/en/community/how-to-contribute/how-to-contribute.md b/docs/en/community/how-to-contribute/how-to-contribute.md
index b0559cac2e..11795417ff 100644
--- a/docs/en/community/how-to-contribute/how-to-contribute.md
+++ b/docs/en/community/how-to-contribute/how-to-contribute.md
@@ -78,7 +78,7 @@ You can also fix it yourself by reading the analysis code (of course, it's bette
 
 ## Modify the code and submit PR (Pull Request)
 
-You can download the code, compile and install it, deploy and run it for a try (refer to the [compilation document](../installing/compilation.md)) to see if it works as you expected. If you have problems, you can contact us directly, ask questions or fix them by reading and analyzing the source code.
+You can download the code, compile and install it, deploy and run it for a try (refer to the [compilation document](../../install/source-install/compilation.md)) to see if it works as you expected. If you have problems, you can contact us directly, ask questions or fix them by reading and analyzing the source code.
 
 Whether it's fixing Bugs or adding Features, we're all very welcome. If you want to submit code to Doris, you need to create a new branch for your submitted code from the fork code library on GitHub to your project space, add the source project upstream, and submit PR.
 
diff --git a/docs/en/community/release-and-verify/release-prepare.md b/docs/en/community/release-and-verify/release-prepare.md
index 64de07b1f2..35860f08a1 100644
--- a/docs/en/community/release-and-verify/release-prepare.md
+++ b/docs/en/community/release-and-verify/release-prepare.md
@@ -346,4 +346,4 @@ For components such as the Doris Connector, you need to use maven for the releas
 
 ## Initiating DISCUSS in the community
 
-DISCUSS is not a required process before a release, but it is highly recommended to start a discussion in the dev@doris mail group before a major release. Content includes, but is not limited to, descriptions of important features, bug fixes, etc.
\ No newline at end of file
+DISCUSS is not a required process before a release, but it is highly recommended to start a discussion in the dev@doris mail group before a major release. Content includes, but is not limited to, descriptions of important features, bug fixes, etc.
diff --git a/docs/en/community/release-and-verify/release-verify.md b/docs/en/community/release-and-verify/release-verify.md
index 221403fd05..93654bcf06 100644
--- a/docs/en/community/release-and-verify/release-verify.md
+++ b/docs/en/community/release-and-verify/release-verify.md
@@ -96,6 +96,6 @@ If invalid is 0, then the validation passes.
 
 Please see the compilation documentation of each component to verify the compilation.
 
-* For Doris Core, see [compilation documentation](../../installing/compilation.html)
-* Flink Doris Connector, see [compilation documentation](../../extending-doris/flink-doris-connector.md)
-* Spark Doris Connector, see [compilation documentation](../../extending-doris/spark-doris-connector.md)
+* For Doris Core, see [compilation documentation](../../install/source-install/compilation.md)
+* Flink Doris Connector, see [compilation documentation](../../ecosystem/flink-doris-connector.md)
+* Spark Doris Connector, see [compilation documentation](../../ecosystem/spark-doris-connector.md)
diff --git a/docs/en/data-operate/export/export-manual.md b/docs/en/data-operate/export/export-manual.md
index 331b417590..dbe05e7e9f 100644
--- a/docs/en/data-operate/export/export-manual.md
+++ b/docs/en/data-operate/export/export-manual.md
@@ -97,12 +97,12 @@ When all data is exported, Doris will rename these files to the user-specified p
 
 ### Broker parameter
 
-Export needs to use the Broker process to access remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.html)
+Export needs to use the Broker process to access remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.md)
 
 
 ## Start Export
 
-For detailed usage of Export, please refer to [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html).
+For detailed usage of Export, please refer to [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md).
 
 Export's detailed commands can be passed through `HELP EXPORT;` Examples are as follows:
 
@@ -134,7 +134,7 @@ WITH BROKER "hdfs"
 * `timeout`: homework timeout. Default 2 hours. Unit seconds.
 * `tablet_num_per_task`: The maximum number of fragments allocated per query plan. The default is 5.
 
+After submitting a job, the job status can be viewed by querying the [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) command. The results are as follows:
+After submitting a job, the job status can be imported by querying the   [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md)  command. The results are as follows:
 
 ```sql
 mysql> show EXPORT\G;
@@ -207,4 +207,4 @@ Usually, a query plan for an Export job has only two parts `scan`- `export`, and
 
 ## More Help
 
-For more detailed syntax and best practices used by Export, please refer to the [Export](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html) command manual, you can also You can enter `HELP EXPORT` at the command line of the MySql client for more help.
+For more detailed syntax and best practices for using Export, please refer to the [Export](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) command manual. You can also enter `HELP EXPORT` at the MySQL client command line for more help.
diff --git a/docs/en/data-operate/export/outfile.md b/docs/en/data-operate/export/outfile.md
index 83814d7781..b94a80934c 100644
--- a/docs/en/data-operate/export/outfile.md
+++ b/docs/en/data-operate/export/outfile.md
@@ -26,7 +26,7 @@ under the License.
 
 # Export Query Result
 
-This document describes how to use the  [SELECT INTO OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html)  command to export query results.
+This document describes how to use the  [SELECT INTO OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md)  command to export query results.
 
 ## Example
 
@@ -55,7 +55,7 @@ select * from tbl1 limit 10
 INTO OUTFILE "file:///home/work/path/result_";
 ```
 
-For more usage, see [OUTFILE documentation](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html).
+For more usage, see [OUTFILE documentation](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md).
 
 ## Concurrent export
 
@@ -163,4 +163,4 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = Open broker writer failed ...
 
 ## More Help
 
-For more detailed syntax and best practices for using OUTFILE, please refer to the [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html) command manual, you can also More help information can be obtained by typing `HELP OUTFILE` at the command line of the MySql client.
+For more detailed syntax and best practices for using OUTFILE, please refer to the [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md) command manual. More help information can also be obtained by typing `HELP OUTFILE` at the MySQL client command line.
diff --git a/docs/en/data-operate/import/import-scenes/external-storage-load.md b/docs/en/data-operate/import/import-scenes/external-storage-load.md
index cf255ecba8..27eb0fb3cb 100644
--- a/docs/en/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/en/data-operate/import/import-scenes/external-storage-load.md
@@ -35,7 +35,7 @@ Upload the files to be imported to HDFS. For specific commands, please refer to
 
 ### start import
 
-Hdfs load creates an import statement. The import method is basically the same as [Broker Load](../../../data-operate/import/import-way/broker-load-manual.html), only need to `WITH BROKER broker_name () ` statement with the following
+HDFS load creates an import statement. The import method is basically the same as [Broker Load](../../../data-operate/import/import-way/broker-load-manual.md); you only need to replace the `WITH BROKER broker_name ()` clause with the following
 
 ```
   LOAD LABEL db_name.label_name 
@@ -78,7 +78,7 @@ Hdfs load creates an import statement. The import method is basically the same a
        );
    ```
 
-   For parameter introduction, please refer to [Broker Load](../../../data-operate/import/import-way/broker-load-manual.html), HA cluster creation syntax, view through `HELP BROKER LOAD`
+   For an introduction to the parameters, please refer to [Broker Load](../../../data-operate/import/import-way/broker-load-manual.md). For the HA cluster creation syntax, see `HELP BROKER LOAD`
 
 3. Check import status
 
@@ -128,7 +128,7 @@ This document mainly introduces how to import data stored in AWS S3. It also sup
 Other cloud storage systems can find relevant information compatible with S3 in corresponding documents
 
 ### Start Loading
-Like [Broker Load](../../../data-operate/import/import-way/broker-load-manual.html)  just replace `WITH BROKER broker_name ()` with
+Like [Broker Load](../../../data-operate/import/import-way/broker-load-manual.md), just replace `WITH BROKER broker_name ()` with
 ```
     WITH S3
     (
diff --git a/docs/en/data-operate/import/import-scenes/external-table-load.md b/docs/en/data-operate/import/import-scenes/external-table-load.md
index ec8977c886..7fae1d101c 100644
--- a/docs/en/data-operate/import/import-scenes/external-table-load.md
+++ b/docs/en/data-operate/import/import-scenes/external-table-load.md
@@ -38,7 +38,7 @@ This document describes how to create external tables accessed through the ODBC
 
 ## create external table
 
-For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) syntax help manual.
+For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md) syntax help manual.
 
 Here is just an example of how to use it.
 
@@ -60,7 +60,7 @@ Here is just an example of how to use it.
    );
    ````
 
-Here we have created a Resource named `oracle_test_odbc`, whose type is `odbc_catalog`, indicating that this is a Resource used to store ODBC information. `odbc_type` is `oracle`, indicating that this OBDC Resource is used to connect to the Oracle database. For other types of resources, see the [resource management](../../../advanced/resource.html) documentation for details.
+Here we have created a Resource named `oracle_test_odbc`, whose type is `odbc_catalog`, indicating that this is a Resource used to store ODBC information. `odbc_type` is `oracle`, indicating that this ODBC Resource is used to connect to the Oracle database. For other types of resources, see the [resource management](../../../advanced/resource.md) documentation for details.
 
 2. Create an external table
 
@@ -103,7 +103,7 @@ Here we create an `ext_oracle_demo` external table and reference the `oracle_tes
    );
    ````
 
-   For detailed instructions on creating Doris tables, see [CREATE-TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) syntax help.
+   For detailed instructions on creating Doris tables, see [CREATE-TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) syntax help.
 
 2. Import data (from `ext_oracle_demo` table to `doris_oralce_tbl` table)
 
diff --git a/docs/en/data-operate/import/import-scenes/jdbc-load.md b/docs/en/data-operate/import/import-scenes/jdbc-load.md
index 555831526a..f2314077c0 100644
--- a/docs/en/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/en/data-operate/import/import-scenes/jdbc-load.md
@@ -35,7 +35,7 @@ The INSERT statement is used in a similar way to the INSERT statement used in da
 * INSERT INTO table VALUES(...)
 ````
 
-Here we only introduce the second way. For a detailed description of the INSERT command, see the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) command documentation.
+Here we only introduce the second way. For a detailed description of the INSERT command, see the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) command documentation.
 
 ## single write
 
@@ -160,4 +160,4 @@ Please note the following:
 
    As mentioned earlier, we recommend that when using INSERT to import data, use the "batch" method to import, rather than a single insert.
 
-   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity.html#label-mechanism), the idempotency and atomicity of operations can be guaranteed, and the data will not be lost or heavy in the end. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) document.
+   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity.md#label-mechanism), the idempotency and atomicity of operations can be guaranteed, and the data will neither be lost nor duplicated. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) document.
diff --git a/docs/en/data-operate/import/import-scenes/kafka-load.md b/docs/en/data-operate/import/import-scenes/kafka-load.md
index 763726f683..d2bf890188 100644
--- a/docs/en/data-operate/import/import-scenes/kafka-load.md
+++ b/docs/en/data-operate/import/import-scenes/kafka-load.md
@@ -41,14 +41,14 @@ Please note the following usage restrictions:
 1. Support unauthenticated Kafka access and SSL-authenticated Kafka clusters.
 2. The supported message formats are as follows:
    - csv text format. Each message is a line, and the end of the line **does not contain** a newline.
-   - Json format, see [Import Json Format Data](../import-way/load-json-format.html).
+   - Json format, see [Import Json Format Data](../import-way/load-json-format.md).
 3. Only supports Kafka 0.10.0.0 (inclusive) and above.
 
 ### Accessing SSL-authenticated Kafka clusters
 
 The routine import feature supports unauthenticated Kafka clusters, as well as SSL-authenticated Kafka clusters.
 
-Accessing an SSL-authenticated Kafka cluster requires the user to provide a certificate file (ca.pem) for authenticating the Kafka Broker public key. If client authentication is also enabled in the Kafka cluster, the client's public key (client.pem), key file (client.key), and key password must also be provided. The files required here need to be uploaded to Plao through the `CREAE FILE` command, and the catalog name is `kafka`. The specific help of the `CREATE FILE` command can be found [...]
+Accessing an SSL-authenticated Kafka cluster requires the user to provide a certificate file (ca.pem) used to authenticate the Kafka Broker public key. If client authentication is also enabled in the Kafka cluster, the client's public key (client.pem), key file (client.key), and key password must also be provided. The files required here need to be uploaded to Doris through the `CREATE FILE` command, and the catalog name is `kafka`. The specific help of the `CREATE FILE` command can be found [...]
 
 - upload files
 
@@ -58,7 +58,7 @@ Accessing an SSL-authenticated Kafka cluster requires the user to provide a cert
   CREATE FILE "client.pem" PROPERTIES("url" = "https://example_url/kafka-key/client.pem", "catalog" = "kafka");
   ````
 
-After the upload is complete, you can view the uploaded files through the [SHOW FILES](../../../sql-manual/sql-reference/Show-Statements/SHOW-FILE.html) command.
+After the upload is complete, you can view the uploaded files through the [SHOW FILES](../../../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) command.
 
 ### Create a routine import job
 
@@ -112,22 +112,22 @@ For specific commands to create routine import tasks, see [ROUTINE LOAD](../../.
 
 ### View import job status
 
-See [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD.html) for specific commands and examples for checking the status of a **job** ) command documentation.
+See the [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD.md) command documentation for specific commands and examples for checking the status of a **job**.
 
+For checking the status of a **task**, see the [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD-TASK.md) command documentation.
+See [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD-TASK.md) command documentation.
 
 Only the currently running tasks can be viewed, and the completed and unstarted tasks cannot be viewed.
 
 ### Modify job properties
 
-The user can modify some properties of the job that has been created. For details, please refer to the [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.html) command manual.
+The user can modify some properties of the job that has been created. For details, please refer to the [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.md) command manual.
 
 ### Job Control
 
 The user can stop, pause and resume the job through the three commands `STOP/PAUSE/RESUME`.
 
-For specific commands, please refer to [STOP ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html) , [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html), [RESUME ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) command documentation.
+For specific commands, please refer to the [STOP ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.md), [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.md), and [RESUME ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.md) command documentation.
 
 ## more help
 
-For more detailed syntax and best practices for ROUTINE LOAD, see [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) command manual.
+For more detailed syntax and best practices for ROUTINE LOAD, see the [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md) command manual.
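Put together, the job-control commands referenced in this file take the following shape (a sketch; the job name `example_db.test_job` is assumed):

```sql
-- Pause a running routine load job; consumption stops but the job is kept.
PAUSE ROUTINE LOAD FOR example_db.test_job;

-- Resume a paused job; consumption continues from the last committed offset.
RESUME ROUTINE LOAD FOR example_db.test_job;

-- Stop the job permanently; a stopped job cannot be resumed.
STOP ROUTINE LOAD FOR example_db.test_job;
```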
diff --git a/docs/en/data-operate/import/import-scenes/load-atomicity.md b/docs/en/data-operate/import/import-scenes/load-atomicity.md
index 9339744f48..f78b882e2c 100644
--- a/docs/en/data-operate/import/import-scenes/load-atomicity.md
+++ b/docs/en/data-operate/import/import-scenes/load-atomicity.md
@@ -28,9 +28,9 @@ under the License.
 
 All import operations in Doris have atomicity guarantees, that is, the data in an import job either all succeed or all fail. It will not happen that only part of the data is imported successfully.
 
-In [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) we can also implement atomic import of multiple tables .
+In [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) we can also implement atomic import of multiple tables.
 
-For the [materialized view](../../../advanced/materialized-view.html) attached to the table, atomicity and consistency with the base table are also guaranteed.
+For the [materialized view](../../../advanced/materialized-view.md) attached to the table, atomicity and consistency with the base table are also guaranteed.
 
 ## Label mechanism
 
diff --git a/docs/en/data-operate/import/import-scenes/load-data-convert.md b/docs/en/data-operate/import/import-scenes/load-data-convert.md
index 45de8a6492..4a889b5a22 100644
--- a/docs/en/data-operate/import/import-scenes/load-data-convert.md
+++ b/docs/en/data-operate/import/import-scenes/load-data-convert.md
@@ -28,7 +28,7 @@ under the License.
 
 ## Supported import methods
 
-- [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html)
+- [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md)
 
   ```sql
   LOAD LABEL example_db.label1
@@ -48,7 +48,7 @@ under the License.
   );
   ````
 
-- [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html)
+- [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md)
 
   ```bash
   curl
@@ -60,7 +60,7 @@ under the License.
   http://host:port/api/testDb/testTbl/_stream_load
   ````
 
-- [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html)
+- [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md)
 
   ```sql
   CREATE ROUTINE LOAD example_db.label1 ON my_table
diff --git a/docs/en/data-operate/import/import-scenes/load-strict-mode.md b/docs/en/data-operate/import/import-scenes/load-strict-mode.md
index 34a79e49d9..421a39e5f3 100644
--- a/docs/en/data-operate/import/import-scenes/load-strict-mode.md
+++ b/docs/en/data-operate/import/import-scenes/load-strict-mode.md
@@ -36,7 +36,7 @@ Strict mode is all False by default, i.e. off.
 
 Different import methods set strict mode in different ways.
 
-1. [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html)
+1. [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md)
 
    ```sql
    LOAD LABEL example_db.label1
@@ -57,7 +57,7 @@ Different import methods set strict mode in different ways.
    )
    ````
 
-2. [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html)
+2. [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md)
 
    ```bash
    curl --location-trusted -u user:passwd \
@@ -66,7 +66,7 @@ Different import methods set strict mode in different ways.
    http://host:port/api/example_db/my_table/_stream_load
    ````
 
-3. [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html)
+3. [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md)
 
    ```sql
    CREATE ROUTINE LOAD example_db.test_job ON my_table
@@ -81,9 +81,9 @@ Different import methods set strict mode in different ways.
    );
    ````
 
-4. [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html)
+4. [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md)
 
-   Set via [session variables](../../../advanced/variables.html):
+   Set via [session variables](../../../advanced/variables.md):
 
    ```sql
    SET enable_insert_strict = true;
diff --git a/docs/en/data-operate/import/import-scenes/local-file-load.md b/docs/en/data-operate/import/import-scenes/local-file-load.md
index 176a9f0926..2d51907d67 100644
--- a/docs/en/data-operate/import/import-scenes/local-file-load.md
+++ b/docs/en/data-operate/import/import-scenes/local-file-load.md
@@ -49,7 +49,7 @@ PUT /api/{db}/{table}/_stream_load
 
 1. Create a table
 
-   Use the `CREATE TABLE` command to create a table in the `demo` to store the data to be imported. For the specific import method, please refer to the [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) command manual. An example is as follows:
+   Use the `CREATE TABLE` command to create a table in the `demo` database to store the data to be imported. For the specific creation syntax, please refer to the [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) command manual. An example is as follows:
 
    ```sql
    CREATE TABLE IF NOT EXISTS load_local_file_test
@@ -74,7 +74,7 @@ PUT /api/{db}/{table}/_stream_load
    - host:port is the HTTP protocol port of BE, the default is 8040, which can be viewed on the Doris cluster WEB UI page.
    - label: Label can be specified in the Header to uniquely identify this import task.
 
-   For more advanced operations of the Stream Load command, see [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) Command documentation.
+   For more advanced operations of the Stream Load command, see [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) Command documentation.
 
 3. Wait for the import result
 
@@ -102,7 +102,7 @@ PUT /api/{db}/{table}/_stream_load
    ````
 
    - The status of the `Status` field is `Success`, which means the import is successful.
-   - For details of other fields, please refer to the [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) command documentation.
+   - For details of other fields, please refer to the [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) command documentation.
 
 ## Import suggestion
 
diff --git a/docs/en/data-operate/import/import-way/binlog-load-manual.md b/docs/en/data-operate/import/import-way/binlog-load-manual.md
index e274c75d30..83d174533a 100644
--- a/docs/en/data-operate/import/import-way/binlog-load-manual.md
+++ b/docs/en/data-operate/import/import-way/binlog-load-manual.md
@@ -342,7 +342,7 @@ User needs to first create the target table which is corresponding to the MySQL
 
 Currently, Binlog Load only supports Unique-model target tables, and the batch delete feature of the target table must be activated.
 
-For the method of enabling Batch Delete, please refer to the batch delete function in [ALTER TABLE PROPERTY](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.html).
+For the method of enabling Batch Delete, please refer to the batch delete function in [ALTER TABLE PROPERTY](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md).
 
 Example:
 
@@ -387,7 +387,7 @@ FROM BINLOG
 );
 ```
 
-The detailed syntax for creating a data synchronization job can be connected to Doris and [CREATE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.html) to view the syntax help. Here is a detailed introduction to the precautions when creating a job.
+For the detailed syntax of creating a data synchronization job, connect to Doris and view the [CREATE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md) syntax help. Here is a detailed introduction to the precautions when creating a job.
 
 grammar:
 ```
@@ -430,7 +430,7 @@ binlog_desc
 ### Show Job Status
 
 
-Specific commands and examples for viewing job status can be viewed through the [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.html) command.
+Specific commands and examples for viewing job status are available through the [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.md) command.
 
 The parameters in the result set have the following meanings:
 
@@ -480,11 +480,11 @@ The parameters in the result set have the following meanings:
 
 Users can control the status of jobs through `stop/pause/resume` commands.
 
-You can use [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.html) ; [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.html); And [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.html); commands to view help and examples.
+See the [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.md), [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.md), and [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.md) command documentation for help and examples.
 
 ## Case Combat
 
-[How to use Apache Doris Binlog Load and examples](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.html)
+[How to use Apache Doris Binlog Load and examples](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.html)
 
 ## Related Parameters
 
@@ -556,4 +556,4 @@ The following configuration belongs to the system level configuration of SyncJob
 
 ## More Help
 
-For more detailed syntax and best practices used by Binlog Load, see [Binlog Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.html) command manual, you can also enter `HELP BINLOG` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by Binlog Load, see the [Binlog Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md) command manual. You can also enter `HELP BINLOG` in the MySQL client command line for more help information.
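The `stop/pause/resume` job-control flow described in this file can be sketched as follows (the job name `job_name` is assumed):

```sql
PAUSE SYNC JOB job_name;   -- suspend binlog consumption; the job is kept
RESUME SYNC JOB job_name;  -- continue consuming from the saved binlog position
STOP SYNC JOB job_name;    -- terminate the job permanently
```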
diff --git a/docs/en/data-operate/import/import-way/broker-load-manual.md b/docs/en/data-operate/import/import-way/broker-load-manual.md
index 1c6056350b..cd2140bc3e 100644
--- a/docs/en/data-operate/import/import-way/broker-load-manual.md
+++ b/docs/en/data-operate/import/import-way/broker-load-manual.md
@@ -26,9 +26,9 @@ under the License.
 
 # Broker Load
 
-Broker load is an asynchronous import method, and the supported data sources depend on the data sources supported by the [Broker](../../../advanced/broker.html) process.
+Broker load is an asynchronous import method, and the supported data sources depend on the data sources supported by the [Broker](../../../advanced/broker.md) process.
 
-Users need to create [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) import through MySQL protocol and import by viewing command to check the import result.
+Users need to create a [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) import job through the MySQL protocol and check the import result by viewing the import job status.
 
 ## Applicable scene
 
@@ -76,7 +76,7 @@ BE pulls data from the broker during execution, and imports the data into the sy
 
 ## start import
 
-Let's look at [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) through several actual scenario examples. use
+Let's look at how to use [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) through several actual scenario examples.
 
 ### Data import of Hive partition table
 
@@ -109,7 +109,7 @@ Then use Hive's Load command to import your data into the Hive table
 load data local inpath '/opt/custorm' into table ods_demo_detail;
 ````
 
-2. Create a Doris table, refer to the specific table syntax: [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+2. Create a Doris table. For the specific table creation syntax, refer to: [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 ````
 CREATE TABLE `doris_ods_test_detail` (
@@ -147,7 +147,7 @@ PROPERTIES (
 
 3. Start importing data
 
-   Specific syntax reference: [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html)
+   Specific syntax reference: [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md)
 
 ```sql
 LOAD LABEL broker_load_2022_03_23
@@ -254,13 +254,13 @@ LOAD LABEL demo.label_20220402
         );
 ````
 
-The specific parameters here can refer to: [Broker](../../../advanced/broker.html) and [Broker Load](../../../sql-manual/sql-reference-v2 /Data-Manipulation-Statements/Load/BROKER-LOAD.html) documentation
+For the specific parameters here, refer to the [Broker](../../../advanced/broker.md) and [Broker Load](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/BROKER-LOAD.md) documentation.
 
 ## View import status
 
 We can view the status information of the above import task through the following command,
 
-The specific syntax reference for viewing the import status [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.html)
+The specific syntax reference for viewing the import status [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.md)
 
 ```sql
 mysql> show load order by createtime desc limit 1\G;
@@ -285,7 +285,7 @@ LoadFinishTime: 2022-04-01 18:59:11
 
 ## Cancel import
 
-When the broker load job status is not CANCELLED or FINISHED, it can be manually canceled by the user. When canceling, you need to specify the Label of the import task to be canceled. Cancel the import command syntax to execute [CANCEL LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CANCEL-LOAD.html) view.
+When the broker load job status is not CANCELLED or FINISHED, it can be manually canceled by the user. When canceling, you need to specify the Label of the import task to be canceled. For the syntax of the cancel command, see [CANCEL LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CANCEL-LOAD.md).
 
 For example: cancel the import job with the label broker_load_2022_03_23 on the database demo
 
@@ -296,7 +296,7 @@ CANCEL LOAD FROM demo WHERE LABEL = "broker_load_2022_03_23";
 
 ### Broker parameters
 
-Broker Load needs to use the Broker process to access remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../../advanced/broker.html).
+Broker Load needs to use the Broker process to access remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../../advanced/broker.md).
 
 ### FE configuration
 
@@ -395,7 +395,7 @@ The configuration parameter `async_loading_load_task_pool_size` of FE is used to
 
 The session variable can be enabled by executing `set enable_profile=true` before submitting the LOAD job. Then submit the import job. After the import job is completed, you can view the profile of the import job in the `Queries` tab of the FE web page.
 
-You can check the [SHOW LOAD PROFILE](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE.html) help document for more usage help information.
+You can check the [SHOW LOAD PROFILE](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE.md) help document for more usage help information.
 
 This Profile can help analyze the running status of import jobs.
 
@@ -434,4 +434,4 @@ Currently the Profile can only be viewed after the job has been successfully exe
 
 ## more help
 
-For more detailed syntax and best practices used by Broker Load, see [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) command manual, you can also enter `HELP BROKER LOAD` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by Broker Load, see the [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) command manual. You can also enter `HELP BROKER LOAD` in the MySQL client command line for more help information.
diff --git a/docs/en/data-operate/import/import-way/insert-into-manual.md b/docs/en/data-operate/import/import-way/insert-into-manual.md
index b6c0d6449c..dceafb10c2 100644
--- a/docs/en/data-operate/import/import-way/insert-into-manual.md
+++ b/docs/en/data-operate/import/import-way/insert-into-manual.md
@@ -59,7 +59,7 @@ INSERT INTO tbl1 VALUES ("qweasdzxcqweasdzxc"), ("a");
 > SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
 > ```
 
-For specific parameter description, you can refer to [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) command or execute `HELP INSERT ` to see its help documentation for better use of this import method.
+For specific parameter descriptions, you can refer to the [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) command or execute `HELP INSERT` to see its help documentation for better use of this import method.
 
 
 Insert Into itself is a SQL command, and the return result is divided into the following types according to the different execution results:
@@ -255,4 +255,4 @@ Cluster situation: The average import speed of current user cluster is about 5M/
 
 ## more help
 
-For more detailed syntax and best practices used by insert into, see [insert](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) command manual, you can also enter `HELP INSERT` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by insert into, see the [insert](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) command manual. You can also enter `HELP INSERT` in the MySQL client command line for more help information.
diff --git a/docs/en/data-operate/import/import-way/load-json-format.md b/docs/en/data-operate/import/import-way/load-json-format.md
index 1d98d08a40..d6097e2137 100644
--- a/docs/en/data-operate/import/import-way/load-json-format.md
+++ b/docs/en/data-operate/import/import-way/load-json-format.md
@@ -32,8 +32,8 @@ Doris supports importing data in JSON format. This document mainly describes the
 
 Currently, only the following import methods support data import in Json format:
 
-- Import the local JSON format file through [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html).
-- Subscribe and consume JSON format in Kafka via [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) information.
+- Import the local JSON format file through [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md).
+- Subscribe to and consume JSON-format messages in Kafka via [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md).
 
 Other ways of importing data in JSON format are not currently supported.
 
@@ -81,7 +81,7 @@ Currently only the following two Json formats are supported:
 
 ### fuzzy_parse parameters
 
-In [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) `fuzzy_parse` parameter can be added to speed up JSON Data import efficiency.
+In [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md), the `fuzzy_parse` parameter can be added to speed up JSON data import.
 
 This parameter is usually used to import the format of **multi-line data represented by Array**, so it is generally used with `strip_outer_array=true`.
 
diff --git a/docs/en/data-operate/import/import-way/routine-load-manual.md b/docs/en/data-operate/import/import-way/routine-load-manual.md
index 2388d0a57f..7715faf6f2 100644
--- a/docs/en/data-operate/import/import-way/routine-load-manual.md
+++ b/docs/en/data-operate/import/import-way/routine-load-manual.md
@@ -240,7 +240,7 @@ You can only view tasks that are currently running, and tasks that have ended an
 
 ### Alter job
 
-Users can modify jobs that have been created. Specific instructions can be viewed through the `HELP ALTER ROUTINE LOAD;` command. Or refer to [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.html).
+Users can modify jobs that have been created. Specific instructions can be viewed through the `HELP ALTER ROUTINE LOAD;` command. Or refer to [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.md).
 
 ### Job Control
 
diff --git a/docs/en/data-operate/import/load-manual.md b/docs/en/data-operate/import/load-manual.md
index b06ab4d67e..5bc11a82da 100644
--- a/docs/en/data-operate/import/load-manual.md
+++ b/docs/en/data-operate/import/load-manual.md
@@ -34,25 +34,25 @@ Doris provides a variety of data import solutions, and you can choose different
 
 | Data Source                          | Import Method                                                |
 | ------------------------------------ | ------------------------------------------------------------ |
-| Object Storage (s3), HDFS            | [Import data using Broker](./import-scenes/external-storage-load.html) |
-| Local file                           | [Import local data](./import-scenes/local-file-load.html)    |
-| Kafka                                | [Subscribe to Kafka data](./import-scenes/kafka-load.html)   |
-| Mysql, PostgreSQL, Oracle, SQLServer | [Sync data via external table](./import-scenes/external-table-load.html) |
-| Import via JDBC                      | [Sync data using JDBC](./import-scenes/jdbc-load.html)       |
-| Import JSON format data              | [JSON format data import](./import-way/load-json-format.html) |
-| MySQL Binlog                         | [Binlog Load](./import-way/binlog-load-manual.html)          |
+| Object Storage (S3), HDFS            | [Import data using Broker](./import-scenes/external-storage-load.md) |
+| Local file                           | [Import local data](./import-scenes/local-file-load.md)    |
+| Kafka                                | [Subscribe to Kafka data](./import-scenes/kafka-load.md)   |
+| MySQL, PostgreSQL, Oracle, SQLServer | [Sync data via external table](./import-scenes/external-table-load.md) |
+| Import via JDBC                      | [Sync data using JDBC](./import-scenes/jdbc-load.md)       |
+| Import JSON format data              | [JSON format data import](./import-way/load-json-format.md) |
+| MySQL Binlog                         | [Binlog Load](./import-way/binlog-load-manual.md)          |
 
 ### Divided by import method
 
 | Import method name | Use method                                                   |
 | ------------------ | ------------------------------------------------------------ |
-| Spark Load         | [Import external data via Spark](./import-way/spark-load-manual.html) |
-| Broker Load        | [Import external storage data via Broker](./import-way/broker-load-manual.html) |
-| Stream Load        | [Stream import data (local file and memory data)](./import-way/stream-load-manual.html) |
-| Routine Load       | [Import Kafka data](./import-way/routine-load-manual.html)   |
-| Binlog Load        | [collect Mysql Binlog import data](./import-way/binlog-load-manual.html) |
-| Insert Into        | [External table imports data through INSERT](./import-way/insert-into-manual.html) |
-| S3 Load            | [Object storage data import of S3 protocol](./import-way/s3-load-manual.html) |
+| Spark Load         | [Import external data via Spark](./import-way/spark-load-manual.md) |
+| Broker Load        | [Import external storage data via Broker](./import-way/broker-load-manual.md) |
+| Stream Load        | [Stream import data (local file and memory data)](./import-way/stream-load-manual.md) |
+| Routine Load       | [Import Kafka data](./import-way/routine-load-manual.md)   |
+| Binlog Load        | [Collect MySQL Binlog and import data](./import-way/binlog-load-manual.md) |
+| Insert Into        | [External table imports data through INSERT](./import-way/insert-into-manual.md) |
+| S3 Load            | [Object storage data import of S3 protocol](./import-way/s3-load-manual.md) |
 
 ## Supported data formats
 
@@ -80,4 +80,4 @@ For best practices on atomicity guarantees, see Importing Transactions and Atomi
 
 ## Synchronous and asynchronous imports
 
-Import methods are divided into synchronous and asynchronous. For the synchronous import method, the returned result indicates whether the import succeeds or fails. For the asynchronous import method, a successful return only means that the job was submitted successfully, not that the data was imported successfully. You need to use the corresponding command to check the running status of the import job.
\ No newline at end of file
+Import methods are divided into synchronous and asynchronous. For the synchronous import method, the returned result indicates whether the import succeeds or fails. For the asynchronous import method, a successful return only means that the job was submitted successfully, not that the data was imported successfully. You need to use the corresponding command to check the running status of the import job.
diff --git a/docs/en/data-operate/update-delete/batch-delete-manual.md b/docs/en/data-operate/update-delete/batch-delete-manual.md
index 3f8c7599fd..1ccd6e21f7 100644
--- a/docs/en/data-operate/update-delete/batch-delete-manual.md
+++ b/docs/en/data-operate/update-delete/batch-delete-manual.md
@@ -25,7 +25,7 @@ under the License.
 -->
 
 # Batch Delete
-Currently, Doris supports multiple import methods such as [broker load](../import/import-way/broker-load-manual.html), [routine load](../import/import-way/routine-load-manual.html), [stream load](../import/import-way/stream-load-manual.html), etc. The data can only be deleted through the delete statement at present. When the delete statement is used to delete, a new data version will be generated every time delete is executed. Frequent deletion will seriously affect the query performance [...]
+Currently, Doris supports multiple import methods such as [broker load](../import/import-way/broker-load-manual.md), [routine load](../import/import-way/routine-load-manual.md), [stream load](../import/import-way/stream-load-manual.md), etc. The data can only be deleted through the delete statement at present. When the delete statement is used to delete, a new data version will be generated every time delete is executed. Frequent deletion will seriously affect the query performance, and  [...]
 
 For scenarios similar to importing CDC data, insert and delete operations generally appear interspersed in the data. Our current import methods are not sufficient for this scenario: even if inserts and deletes are separated, that solves the import problem but still cannot solve the deletion problem. The batch delete feature addresses the needs of these scenarios.
 There are three ways to merge data import:
@@ -131,7 +131,7 @@ The writing method of `Routine Load` adds a mapping to the `columns` field. The
 ```
 
 ## Note
-1. Since import operations other than stream load may be executed out of order inside doris, if it is not stream load when importing using the `MERGE` method, it needs to be used with load sequence. For the specific syntax, please refer to the [sequence](sequence-column-manual.html) column related documents
+1. Since import operations other than stream load may be executed out of order inside Doris, imports that use the `MERGE` method and are not stream load need to be used together with load sequence. For the specific syntax, please refer to the [sequence](sequence-column-manual.md) column documentation.
 2. `DELETE ON` condition can only be used with MERGE.
 
 ## Usage example
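The `MERGE` + `DELETE ON` combination described above can be sketched with a broker load job (the label, file path, broker name, and the `del_flag` column are assumptions for illustration):

```sql
LOAD LABEL example_db.label_merge
(
    MERGE DATA INFILE("hdfs://host:port/path/file.txt")
    INTO TABLE `my_table`
    COLUMNS TERMINATED BY ","
    (k1, k2, v1, del_flag)
    -- rows matching the condition are deleted; all other rows are upserted
    DELETE ON del_flag = 1
)
WITH BROKER broker_name;
```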
diff --git a/docs/en/data-operate/update-delete/delete-manual.md b/docs/en/data-operate/update-delete/delete-manual.md
index f8aac7b35c..0bde97b7a2 100644
--- a/docs/en/data-operate/update-delete/delete-manual.md
+++ b/docs/en/data-operate/update-delete/delete-manual.md
@@ -32,7 +32,7 @@ Delete is different from other import methods. It is a synchronization process,
 
 ## Syntax
 
-Please refer to the official website for the [DELETE](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) syntax of the delete operation.
+Please refer to the official website for the [DELETE](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) syntax of the delete operation.
 
 ## Delete Result
 
@@ -154,8 +154,8 @@ mysql> show delete from test_db;
 
 ### Note
 
-Unlike the Insert into command, delete cannot specify `label` manually. For the concept of label, see the [Insert Into](../import/import-way/insert-into-manual.html) documentation.
+Unlike the Insert into command, delete cannot specify `label` manually. For the concept of label, see the [Insert Into](../import/import-way/insert-into-manual.md) documentation.
 
 ## More Help
 
-For more detailed syntax used by **delete**, see the [delete](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) command manual, You can also enter `HELP DELETE` in the Mysql client command line to get more help information
+For more detailed syntax used by **delete**, see the [delete](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) command manual, You can also enter `HELP DELETE` in the Mysql client command line to get more help information
diff --git a/docs/en/data-operate/update-delete/update.md b/docs/en/data-operate/update-delete/update.md
index bb3391ec3a..1b244b7653 100644
--- a/docs/en/data-operate/update-delete/update.md
+++ b/docs/en/data-operate/update-delete/update.md
@@ -114,4 +114,4 @@ After the user executes the UPDATE command, the system performs the following th
 
 ## More Help
 
-For more detailed syntax used by **data update**, please refer to the [update](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.html) command manual , you can also enter `HELP UPDATE` in the Mysql client command line to get more help information.
+For more detailed syntax used by **data update**, please refer to the [update](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.md) command manual , you can also enter `HELP UPDATE` in the Mysql client command line to get more help information.
diff --git a/docs/en/data-table/advance-usage.md b/docs/en/data-table/advance-usage.md
index dcca80d273..270883f78f 100644
--- a/docs/en/data-table/advance-usage.md
+++ b/docs/en/data-table/advance-usage.md
@@ -30,7 +30,7 @@ Here we introduce some of Doris's advanced features.
 
 ## Table Structural Change
 
-Schema of the table can be modified using the [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.html) command, including the following modifications:
+Schema of the table can be modified using the [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) command, including the following modifications:
 
 * Additional columns
 * Delete columns
@@ -94,7 +94,7 @@ For more help, see `HELP ALTER TABLE`.
 
 Rollup can be understood as a materialized index structure of a Table: **materialized** because the data is stored independently as a concrete ("materialized") table, and **index** because Rollup can adjust the column order to increase the hit rate of the prefix index, or reduce the key columns to increase data aggregation.
 
-Use [ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.html) to perform various rollup changes.
+Use [ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md) to perform various rollup changes.
 
 Examples are given below.
 
diff --git a/docs/en/data-table/basic-usage.md b/docs/en/data-table/basic-usage.md
index f6027a17cf..1212b6213a 100644
--- a/docs/en/data-table/basic-usage.md
+++ b/docs/en/data-table/basic-usage.md
@@ -91,7 +91,7 @@ Initially, a database can be created through root or admin users:
 CREATE DATABASE example_db;
 ```
 
-> All commands can use the `HELP` command to see detailed grammar help. For example: `HELP CREATE DATABASE;`. You can also refer to the official website [SHOW CREATE DATABASE](../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE.html) command manual.
+> All commands can use the `HELP` command to see detailed grammar help. For example: `HELP CREATE DATABASE;`. You can also refer to the official website [SHOW CREATE DATABASE](../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE.md) command manual.
 >
 > If you don't know the full name of a command, you can use `HELP` plus a keyword for fuzzy matching. If you type `HELP CREATE`, you can match commands like `CREATE DATABASE`, `CREATE TABLE`, `CREATE USER`, etc.
 
@@ -121,9 +121,9 @@ Query OK, 0 rows affected (0.01 sec)
 
 ### Create a Table
 
-Create a table using the [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) command. More detailed parameters can be seen:`HELP CREATE TABLE;`
+Create a table using the [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) command. More detailed parameters can be seen:`HELP CREATE TABLE;`
 
-First, we need to switch databases using the [USE](../sql-manual/sql-reference/Utility-Statements/USE.html) command:
+First, we need to switch databases using the [USE](../sql-manual/sql-reference/Utility-Statements/USE.md) command:
 
 ```sql
 mysql> USE example_db;
@@ -247,7 +247,7 @@ MySQL> DESC table2;
 
 ### Import data
 
-Doris supports a variety of data import methods. Specifically, you can refer to the [data import](../data-operate/import/load-manual.html) document. Here we use streaming import and Broker import as examples.
+Doris supports a variety of data import methods. Specifically, you can refer to the [data import](../data-operate/import/load-manual.md) document. Here we use streaming import and Broker import as examples.
 
 #### Streaming import
 
diff --git a/docs/en/data-table/best-practice.md b/docs/en/data-table/best-practice.md
index 95921735df..8e115f10c6 100644
--- a/docs/en/data-table/best-practice.md
+++ b/docs/en/data-table/best-practice.md
@@ -129,7 +129,7 @@ Doris stores the data in an orderly manner, and builds a sparse index for Doris
 Sparse index chooses fixed length prefix in schema as index content, and Doris currently chooses 36 bytes prefix as index.
 
 * When building tables, it is suggested that the common filter fields in queries be placed at the front of the schema: the more distinguishable and the more frequently queried a field is, the earlier it should be placed.
-* One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [data model](./data-model.html), [ROLLUP and query](./hit-the-rollup.html).
+* One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [data model](./data-model.md), [ROLLUP and query](./hit-the-rollup.md).
 * In addition to sparse index, Doris also provides bloomfilter index. Bloomfilter index has obvious filtering effect on columns with high discrimination. If you consider that varchar cannot be placed in a sparse index, you can create a bloomfilter index.
 
 ### 1.5 Materialized View (rollup)
diff --git a/docs/en/data-table/data-partition.md b/docs/en/data-table/data-partition.md
index a98dbe3ee0..441e43cee8 100644
--- a/docs/en/data-table/data-partition.md
+++ b/docs/en/data-table/data-partition.md
@@ -36,7 +36,7 @@ In Doris, data is logically described in the form of a table.
 
 A table includes rows (rows) and columns (columns). Row is a row of data for the user. Column is used to describe different fields in a row of data.
 
-Column can be divided into two broad categories: Key and Value. From a business perspective, Key and Value can correspond to dimension columns and metric columns, respectively. From the perspective of the aggregation model, rows with the same Key columns will be aggregated into one row. The way the Value columns are aggregated is specified by the user when the table is built. For an introduction to more aggregation models, see the [Doris Data Model](./data-model.html).
+Column can be divided into two broad categories: Key and Value. From a business perspective, Key and Value can correspond to dimension columns and metric columns, respectively. From the perspective of the aggregation model, rows with the same Key columns will be aggregated into one row. The way the Value columns are aggregated is specified by the user when the table is built. For an introduction to more aggregation models, see the [Doris Data Model](./data-model.md).
 
 ### Tablet & Partition
 
@@ -50,7 +50,7 @@ Several Partitions form a Table. Partition can be thought of as the smallest log
 
 We use a table-building operation to illustrate Doris' data partitioning.
 
-Doris's table creation is a synchronous command: the result is returned after the SQL execution completes, and a successful return means the table was created. For the specific table creation syntax, please refer to [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html), or see more help with `HELP CREATE TABLE;`.
+Doris's table creation is a synchronous command: the result is returned after the SQL execution completes, and a successful return means the table was created. For the specific table creation syntax, please refer to [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md), or see more help with `HELP CREATE TABLE;`.
 
 This section introduces Doris's approach to building tables with an example.
 
@@ -122,7 +122,7 @@ PROPERTIES
 
 ### Column Definition
 
-Here we only use the AGGREGATE KEY data model as an example. See the [Doris Data Model](./data-model.html) for more data models.
+Here we only use the AGGREGATE KEY data model as an example. See the [Doris Data Model](./data-model.md) for more data models.
 
 The basic type of column can be viewed by executing `HELP CREATE TABLE;` in mysql-client.
 
@@ -332,7 +332,7 @@ It is also possible to use only one layer of partitioning. When using a layer pa
     * Once the number of Buckets for a Partition is specified, it cannot be changed. Therefore, when determining the number of Buckets, you need to consider the expansion of the cluster in advance. For example, there are currently only 3 hosts, and each host has 1 disk. If the number of Buckets is only set to 3 or less, then even if you add more machines later, you can't increase the concurrency.
     * Give some examples: Suppose there are 10 BEs, each with one disk. If the total size of a table is 500MB, you can consider 4-8 shards; 5GB: 8-16 shards; 50GB: 32 shards; 500GB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition; 5TB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition.
     
-    > Note: The amount of data in the table can be viewed by the [show data](../sql-manual/sql-reference/Show-Statements/SHOW-DATA.html) command. The result is divided by the number of copies, which is the amount of data in the table.
+    > Note: The amount of data in the table can be viewed by the [show data](../sql-manual/sql-reference/Show-Statements/SHOW-DATA.md) command. The result is divided by the number of copies, which is the amount of data in the table.
     
 
 #### Compound Partitions vs Single Partitions
@@ -352,7 +352,7 @@ The user can also use a single partition without using composite partitions. The
 
 ### PROPERTIES
 
-In the last PROPERTIES of the table building statement, for the relevant parameters that can be set in PROPERTIES, we can check [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) for a detailed introduction.
+In the last PROPERTIES of the table building statement, for the relevant parameters that can be set in PROPERTIES, we can check [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) for a detailed introduction.
 
 ### ENGINE
 
@@ -395,4 +395,4 @@ In this example, the type of ENGINE is olap, the default ENGINE type. In Doris,
 
 ## More help
 
-For more detailed instructions on data partitioning, we can refer to the [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) command manual, and also You can enter `HELP CREATE TABLE;` under the Mysql client to get more help information.
+For more detailed instructions on data partitioning, we can refer to the [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) command manual, and also You can enter `HELP CREATE TABLE;` under the Mysql client to get more help information.
diff --git a/docs/en/data-table/hit-the-rollup.md b/docs/en/data-table/hit-the-rollup.md
index b8e078e39c..990518c39c 100644
--- a/docs/en/data-table/hit-the-rollup.md
+++ b/docs/en/data-table/hit-the-rollup.md
@@ -44,7 +44,7 @@ Because Uniq is only a special case of the Aggregate model, we do not distinguis
 
 Example 1: Get the total consumption per user
 
-Following [Data Model Aggregate Model](./data-model.html) in the **Aggregate Model** section, the Base table structure is as follows:
+Following [Data Model Aggregate Model](./data-model.md) in the **Aggregate Model** section, the Base table structure is as follows:
 
 | ColumnName        | Type         | AggregationType | Comment                                |
 |-------------------| ------------ | --------------- | -------------------------------------- |
@@ -128,7 +128,7 @@ Doris automatically hits the ROLLUP table.
 
 #### ROLLUP in Duplicate Model
 
-Because the Duplicate model has no aggregate semantics, ROLLUP in this model loses the meaning of "rolling up"; it is only used to adjust the column order to hit the prefix index. In the next section, we will introduce the prefix index in [data model prefix index](./data-model.html), and how to use ROLLUP to change the prefix index in order to achieve better query efficiency.
+Because the Duplicate model has no aggregate semantics, ROLLUP in this model loses the meaning of "rolling up"; it is only used to adjust the column order to hit the prefix index. In the next section, we will introduce the prefix index in [data model prefix index](./data-model.md), and how to use ROLLUP to change the prefix index in order to achieve better query efficiency.
 
 ## ROLLUP adjusts prefix index
 
diff --git a/docs/en/data-table/index/bitmap-index.md b/docs/en/data-table/index/bitmap-index.md
index 7f1fa9a139..07fe1a6245 100644
--- a/docs/en/data-table/index/bitmap-index.md
+++ b/docs/en/data-table/index/bitmap-index.md
@@ -33,7 +33,7 @@ This document focuses on how to create an index job, as well as some considerati
 
 ## Basic Principles
 Creating and dropping index is essentially a schema change job. For details, please refer to
-[Schema Change](../../advanced/alter-table/schema-change.html).
+[Schema Change](../../advanced/alter-table/schema-change.md).
 
 ## Syntax
 ### Create index
@@ -81,4 +81,4 @@ DROP INDEX [IF EXISTS] index_name ON [db_name.]table_name;
 
 ### More Help
 
-For more detailed syntax and best practices for using bitmap indexes, please refer to the  [CREARE INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-INDEX.md) / [SHOW INDEX](../../sql-manual/sql-reference/Show-Statements/SHOW-INDEX.html) / [DROP INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-INDEX.html)  command manual. You can also enter HELP CREATE INDEX / HELP SHOW INDEX / HELP DROP INDEX on the MySql client command line.
+For more detailed syntax and best practices for using bitmap indexes, please refer to the  [CREARE INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-INDEX.md) / [SHOW INDEX](../../sql-manual/sql-reference/Show-Statements/SHOW-INDEX.md) / [DROP INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-INDEX.md)  command manual. You can also enter HELP CREATE INDEX / HELP SHOW INDEX / HELP DROP INDEX on the MySql client command line.
diff --git a/docs/en/design/Flink doris connector Design.md b/docs/en/design/Flink doris connector Design.md
index 05481c67bf..1eb6b336e1 100644
--- a/docs/en/design/Flink doris connector Design.md	
+++ b/docs/en/design/Flink doris connector Design.md	
@@ -201,7 +201,7 @@ public void run(SourceContext sourceContext){
 
 ### 4.4 Implement Flink SQL on Doris
 
-Refer to [Flink Custom Source&Sink](https://ci.apache.org/projects/flink/flink-docs-stable/zh/dev/table/sourceSinks.html) and Flink-jdbc-connector to implement this. As a result, Flink SQL can be used to directly manipulate Doris tables, including reading and writing.
+Refer to [Flink Custom Source&Sink](https://ci.apache.org/projects/flink/flink-docs-stable/zh/dev/table/sourceSinks.md) and Flink-jdbc-connector to implement this. As a result, Flink SQL can be used to directly manipulate Doris tables, including reading and writing.
 
 #### 4.4.1 Implementation details
 
diff --git a/docs/en/developer-guide/benchmark-tool.md b/docs/en/developer-guide/benchmark-tool.md
index 536881d7d4..0c903051d1 100644
--- a/docs/en/developer-guide/benchmark-tool.md
+++ b/docs/en/developer-guide/benchmark-tool.md
@@ -33,7 +33,7 @@ It can be used to test the performance of some parts of the BE storage layer (fo
 
 ## Compilation
 
-1. To ensure that the environment can successfully compile Doris itself, you can refer to [Installation and deployment] (https://doris.apache.org/master/en/installing/compilation.html).
+1. To ensure that the environment can successfully compile Doris itself, you can refer to [Installation and deployment](../install/source-install/compilation.md).
 
 2. Execute`run-be-ut.sh`
 
diff --git a/docs/en/developer-guide/cpp-diagnostic-code.md b/docs/en/developer-guide/cpp-diagnostic-code.md
index dd172d8206..d50ff44bf8 100644
--- a/docs/en/developer-guide/cpp-diagnostic-code.md
+++ b/docs/en/developer-guide/cpp-diagnostic-code.md
@@ -26,7 +26,7 @@ under the License.
 
 # C++ Code Diagnostic
 
-Doris supports using [Clangd](https://clangd.llvm.org/) and [Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/) to diagnose code. Clangd and Clang-Tidy are already included in [LDB-toolchain](https://doris.apache.org/zh-CN/installing/compilation-with-ldb-toolchain), and can also be installed separately.
+Doris supports using [Clangd](https://clangd.llvm.org/) and [Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/) to diagnose code. Clangd and Clang-Tidy are already included in [LDB-toolchain](../install/source-install/compilation-with-ldb-toolchain.md), and can also be installed separately.
 
 ### Clang-Tidy
 Clang-Tidy supports some diagnostic configuration; the config file `.clang-tidy` is in the Doris root path. Compared with vscode-cpptools, clangd provides more powerful and accurate code navigation for VS Code, and integrates the analysis and quick-fix functions of clang-tidy.
diff --git a/docs/en/developer-guide/docker-dev.md b/docs/en/developer-guide/docker-dev.md
index 2ea5b28173..938550ccb1 100644
--- a/docs/en/developer-guide/docker-dev.md
+++ b/docs/en/developer-guide/docker-dev.md
@@ -29,9 +29,9 @@ under the License.
 
 ## Related detailed document navigation
 
-- [Developing mirror compilation using Docker](https://doris.incubator.apache.org/installing/compilation.html#developing-mirror-compilation-using-docker-recommended)
-- [Deploying Doris](https://doris.incubator.apache.org/installing/install-deploy.html#cluster-deployment)
-- [VSCode Be Development Debugging](https://doris.incubator.apache.org/developer-guide/be-vscode-dev.html)
+- [Developing mirror compilation using Docker](../install/source-install/compilation.md#developing-mirror-compilation-using-docker-recommended)
+- [Deploying Doris](../install/install-deploy.md#cluster-deployment)
+- [VSCode Be Development Debugging](./be-vscode-dev.md)
 
 ## Environment preparation
 
@@ -90,7 +90,7 @@ docker build -t doris .
 
 run image
 
-note! [problems with mounting](../installing/compilation.md)
+note! [problems with mounting](../install/source-install/compilation.md)
 
 > See the link above: It is recommended to run the image by mounting the local Doris source code directory as a volume .....
 
diff --git a/docs/en/developer-guide/fe-vscode-dev.md b/docs/en/developer-guide/fe-vscode-dev.md
index e90fc05269..605beb23e1 100644
--- a/docs/en/developer-guide/fe-vscode-dev.md
+++ b/docs/en/developer-guide/fe-vscode-dev.md
@@ -72,7 +72,7 @@ example:
 ## Build
 
 Other articles have already explained:
-* [Build with LDB toolchain ](https://doris.apache.org/zh-CN/installing/compilation-with-ldb-toolchain.html)
+* [Build with LDB toolchain ](../install/source-install/compilation-with-ldb-toolchain.md)
 * ......
 
 In order to debug, you need to add debugging parameters when fe starts, such as `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005` .
diff --git a/docs/en/downloads/downloads.md b/docs/en/downloads/downloads.md
index e7f3ecbec1..1eec461eda 100644
--- a/docs/en/downloads/downloads.md
+++ b/docs/en/downloads/downloads.md
@@ -85,4 +85,4 @@ You can download source code from following links, then compile and install Dori
 
 To verify the downloaded files, please read [Verify Apache Release](../community/release-and-verify/release-verify.md) and using these [KEYS](https://downloads.apache.org/incubator/doris/KEYS).
 
-After verification, please read [Compilation](../installing/compilation.md) and [Installation and deployment](../installing/install-deploy.md) to compile and install Doris.
+After verification, please read [Compilation](../install/source-install/compilation.md) and [Installation and deployment](../install/install-deploy.md) to compile and install Doris.
diff --git a/docs/en/ecosystem/external-table/doris-on-es.md b/docs/en/ecosystem/external-table/doris-on-es.md
index 3b0b987121..553ae2be8e 100644
--- a/docs/en/ecosystem/external-table/doris-on-es.md
+++ b/docs/en/ecosystem/external-table/doris-on-es.md
@@ -107,7 +107,7 @@ POST /_bulk
 
 ### Create external ES table
 
-Refer to the specific table syntax: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+Refer to the specific table syntax: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 ```
 CREATE EXTERNAL TABLE `test` (
diff --git a/docs/en/ecosystem/external-table/iceberg-of-doris.md b/docs/en/ecosystem/external-table/iceberg-of-doris.md
index ed371487f1..f185652131 100644
--- a/docs/en/ecosystem/external-table/iceberg-of-doris.md
+++ b/docs/en/ecosystem/external-table/iceberg-of-doris.md
@@ -47,7 +47,7 @@ This document introduces how to use this feature and the considerations.
 Iceberg tables can be created in Doris in two ways. You do not need to declare the column definitions of the table when creating an external table; Doris can automatically convert them based on the column definitions of the table in Iceberg.
 
 1. Create a separate external table to mount the Iceberg table.  
-   The syntax can be viewed in [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html).
+   The syntax can be viewed in [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md).
 
     ```sql
     -- Syntax
@@ -74,7 +74,7 @@ Iceberg tables can be created in Doris in two ways. You do not need to declare t
     ```
 
 2. Create an Iceberg database to mount the corresponding Iceberg database on the remote side, and mount all the tables under the database.  
-   You can check the syntax with [CREATE DATABASE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.html).
+   You can check the syntax with [CREATE DATABASE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.md).
 
     ```sql
     -- Syntax
@@ -142,7 +142,7 @@ You can also create an Iceberg table by explicitly specifying the column definit
 
 ### Show table structure
 
-Show table structure can be viewed by [SHOW CREATE TABLE](../../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE.html).
+Show table structure can be viewed by [SHOW CREATE TABLE](../../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE.md).
     
 ### Synchronized mounts
 
diff --git a/docs/en/ecosystem/external-table/odbc-of-doris.md b/docs/en/ecosystem/external-table/odbc-of-doris.md
index 2eedfe3abd..2d850f8073 100644
--- a/docs/en/ecosystem/external-table/odbc-of-doris.md
+++ b/docs/en/ecosystem/external-table/odbc-of-doris.md
@@ -47,7 +47,7 @@ This document mainly introduces the implementation principle and usage of this O
 
 ### Create ODBC External Table 
 
-Refer to the specific table syntax: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+Refer to the specific table syntax: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 #### 1. Creating ODBC external table without resource
 
@@ -332,7 +332,7 @@ There are different data types among different databases. Here, the types in eac
 
 Sync for small amounts of data
 
-For example, a table in MySQL has 1 million rows. If you want to synchronize it to Doris, you can use ODBC to map the data and synchronize it to Doris with [insert into](../../data-operate/import/import-way/insert-into-manual.html). If you want to synchronize large batches of data, you can use [insert into](../../data-operate/import/import-way/insert-into-manual.html) in batches (deprecated).
+For example, a table in MySQL has 1 million rows. If you want to synchronize it to Doris, you can use ODBC to map the data and synchronize it to Doris with [insert into](../../data-operate/import/import-way/insert-into-manual.md). If you want to synchronize large batches of data, you can use [insert into](../../data-operate/import/import-way/insert-into-manual.md) in batches (deprecated).
 
 ## Q&A
 
diff --git a/docs/en/ecosystem/seatunnel/flink-sink.md b/docs/en/ecosystem/seatunnel/flink-sink.md
index 4617cdd1d0..1cf6e288c7 100644
--- a/docs/en/ecosystem/seatunnel/flink-sink.md
+++ b/docs/en/ecosystem/seatunnel/flink-sink.md
@@ -78,7 +78,7 @@ Number of retries after writing to Doris fails
 
 Import parameters for Stream load. For example: 'doris.column_separator' = ', ' etc.
 
-[More Stream Load parameter configuration](../../data-operate/import/import-way/stream-load-manual.html)
+[More Stream Load parameter configuration](../../data-operate/import/import-way/stream-load-manual.md)
 
 ### Examples
 Socket To Doris
@@ -113,4 +113,4 @@ sink {
 ### Start command
 ```
 sh bin/start-seatunnel-flink.sh --config config/flink.streaming.conf
-```
\ No newline at end of file
+```
diff --git a/docs/en/ecosystem/seatunnel/spark-sink.md b/docs/en/ecosystem/seatunnel/spark-sink.md
index d3e516c7e4..2cb3b4c697 100644
--- a/docs/en/ecosystem/seatunnel/spark-sink.md
+++ b/docs/en/ecosystem/seatunnel/spark-sink.md
@@ -73,7 +73,7 @@ Doris number of submissions per batch
 `doris. [string]`
 Doris stream_load properties; you can use the 'doris.' prefix + stream_load properties
 
-[More Doris stream_load Configurations](../../data-operate/import/import-way/stream-load-manual.html)
+[More Doris stream_load Configurations](../../data-operate/import/import-way/stream-load-manual.md)
 
 ### Examples
 Hive to Doris
@@ -120,4 +120,4 @@ Doris {
 Start command
 ```
 sh bin/start-waterdrop-spark.sh --master local[4] --deploy-mode client --config ./config/spark.conf
-```
\ No newline at end of file
+```
diff --git a/docs/en/faq/data-faq.md b/docs/en/faq/data-faq.md
index e246a4871a..d9dab3abee 100644
--- a/docs/en/faq/data-faq.md
+++ b/docs/en/faq/data-faq.md
@@ -72,7 +72,7 @@ If no BE node is down, you need to pass the show tablet 110309738 statement, and
 
 Usually occurs in operations such as Import, Alter, etc. This error means that the disk usage of the corresponding BE exceeds the threshold (default 95%). In this case, you can first use the `show backends` command, where MaxDiskUsedPct shows the usage of the most-used disk on the corresponding BE. If it exceeds 95%, this error will be reported.
 
-At this point, you need to go to the corresponding BE node to check the usage in the data directory. The trash directory and snapshot directory can be manually cleaned to free up space. If the data directory occupies a large space, you need to consider deleting some data to free up space. For details, please refer to [Disk Space Management](../admin-manual/maint-monitor/disk-capacity.html).
+At this point, you need to go to the corresponding BE node to check the usage in the data directory. The trash directory and snapshot directory can be manually cleaned to free up space. If the data directory occupies a large space, you need to consider deleting some data to free up space. For details, please refer to [Disk Space Management](../admin-manual/maint-monitor/disk-capacity.md).
 
 ### Q7. Calling stream load to import data through a Java program may result in a Broken Pipe error when a batch of data is large.
 
diff --git a/docs/en/faq/install-faq.md b/docs/en/faq/install-faq.md
index 391bc1956e..1a942b0eb6 100644
--- a/docs/en/faq/install-faq.md
+++ b/docs/en/faq/install-faq.md
@@ -81,7 +81,7 @@ Here we provide 3 ways to solve this problem:
 
 3. Manually migrate data using the API
 
-   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action.html), which can manually specify the migration of data shards on one disk to another disk.
+   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action.md), which can manually specify the migration of data shards on one disk to another disk.
 
 ### Q5. How to read FE/BE logs correctly?
 
@@ -255,7 +255,7 @@ There are usually two reasons for this problem:
 1. The local IP obtained when FE is started this time is inconsistent with the last startup, usually because `priority_network` is not set correctly, which causes FE to match the wrong IP address when it starts. Restart FE after modifying `priority_network`.
 2. Most Follower FE nodes in the cluster are not started. For example, there are 3 Followers, and only one is started. At this time, at least one other FE needs to be started, so that the FE electable group can elect the Master to provide services.
 
-If the above situation cannot be solved, you can restore it according to the [metadata operation and maintenance document](../admin-manual/maint-monitor/metadata-operation.html) in the Doris official website documentation.
+If the above situation cannot be solved, you can restore it according to the [metadata operation and maintenance document](../admin-manual/maint-monitor/metadata-operation.md) in the Doris official website documentation.
 
 ### Q10. Lost connection to MySQL server at 'reading initial communication packet', system error: 0
 
@@ -265,11 +265,11 @@ If the following problems occur when using MySQL client to connect to Doris, thi
 
 Sometimes when FE is restarted, the above error occurs (usually only in the case of multiple Followers), and the two values in the error differ by 2, causing FE to fail to start.
 
-This is a bug in bdbje that has not yet been resolved. In this case, you can only restore the metadata by performing the operation of failure recovery in [Metadata Operation and Maintenance Documentation](../admin-manual/maint-monitor/metadata-operation.html).
+This is a bug in bdbje that has not yet been resolved. In this case, you can only restore the metadata by performing the operation of failure recovery in [Metadata Operation and Maintenance Documentation](../admin-manual/maint-monitor/metadata-operation.md).
 
 ### Q12. Doris compile and install JDK version incompatibility problem
 
-When compiling Doris using Docker, start FE after compiling and installing, and the exception message `java.lang.Suchmethoderror: java.nio.ByteBuffer.limit (I)Ljava/nio/ByteBuffer;` appears, this is because the default in Docker It is JDK 11. If your installation environment is using JDK8, you need to switch the JDK environment to JDK8 in Docker. For the specific switching method, please refer to [Compile Documentation](../install/source-install/compilation.html)
+When compiling Doris using Docker and starting FE after compiling and installing, the exception message `java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;` may appear. This is because the default JDK in the Docker image is JDK 11; if your installation environment uses JDK 8, you need to switch the JDK environment in Docker to JDK 8. For the specific switching method, please refer to the [Compile Documentation](../install/source-install/compilation.md)
 
 ### Q13. Error starting FE or unit test locally Cannot find external parser table action_table.dat
 Run the following command
@@ -287,7 +287,7 @@ In doris 1.0 onwards, openssl has been upgraded to 1.1 and is built into the dor
 ```
 ERROR 1105 (HY000): errCode = 2, detailMessage = driver connect Error: HY000 [MySQL][ODBC 8.0(w) Driver]SSL connection error: Failed to set ciphers to use (2026)
 ```
-The solution is to use the `Connector/ODBC 8.0.28` version of ODBC Connector and select `Linux - Generic` in the operating system, this version of ODBC Driver uses openssl version 1,1. For details, see the [ODBC exterior documentation](. /extending-doris/odbc-of-doris.md)
+The solution is to use the `Connector/ODBC 8.0.28` version of the ODBC Connector and select `Linux - Generic` as the operating system; this version of the ODBC Driver uses openssl version 1.1. For details, see the [ODBC external table documentation](../ecosystem/external-table/odbc-of-doris.md)
 You can verify the version of openssl used by MySQL ODBC Driver by
 ```
 ldd /path/to/libmyodbc8w.so |grep libssl.so
diff --git a/docs/en/faq/sql-faq.md b/docs/en/faq/sql-faq.md
index 2b5f7a28b5..6e30ae2a60 100644
--- a/docs/en/faq/sql-faq.md
+++ b/docs/en/faq/sql-faq.md
@@ -30,7 +30,7 @@ under the License.
 
 This happens because no queryable replica can be found for the corresponding tablet, usually because the BE is down, a replica is missing, etc. You can first run the `show tablet tablet_id` statement and then execute the following `show proc` statement to view the replica information corresponding to this tablet and check whether the replicas are complete. At the same time, you can also query the progress of replica scheduling and repair in the cluster through `show proc "/cluster_balan [...]
 
-For commands related to data copy management, please refer to [Data Copy Management](../admin-manual/maint-monitor/tablet-repair-and-balance.html).
+For commands related to data copy management, please refer to [Data Copy Management](../admin-manual/maint-monitor/tablet-repair-and-balance.md).
 
 ### Q2. Show backends/frontends The information viewed is incomplete
 
@@ -65,4 +65,4 @@ For example, the table is defined as k1, v1. A batch of imported data is as foll
 
 Then the result on replica 1 may be `1, "abc"` while the result on replica 2 is `1, "def"`, so the query results are inconsistent.
 
-To ensure that the data sequence between different replicas is unique, you can refer to the [Sequence Column](../data-operate/update-delete/sequence-column-manual.html) function.
\ No newline at end of file
+To ensure that the data sequence between different replicas is unique, you can refer to the [Sequence Column](../data-operate/update-delete/sequence-column-manual.md) function.
diff --git a/docs/en/get-starting/get-starting.md b/docs/en/get-starting/get-starting.md
index 4b0d690268..a68bdedddf 100644
--- a/docs/en/get-starting/get-starting.md
+++ b/docs/en/get-starting/get-starting.md
@@ -214,7 +214,7 @@ FE splits the query plan into fragments and sends them to BE for task execution.
 
 - After executing the SQL statement, you can view the corresponding SQL statement execution report information on the FE's WEB-UI interface
 
-For a complete parameter comparison table, please go to [Profile parameter analysis](../admin-manual/query-profile.html) View Details
+For a complete parameter comparison table, please go to [Profile parameter analysis](../admin-manual/query-profile.md) to view the details
 
 
 #### Library table operations
@@ -231,7 +231,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
    CREATE DATABASE database name;
    ````
 
-   > For more detailed syntax and best practices used by Create-DataBase, see [Create-DataBase](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.html) command manual.
+   > For more detailed syntax and best practices used by Create-DataBase, see [Create-DataBase](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.md) command manual.
    >
    > If you don't know the full name of the command, you can use "help command a field" for fuzzy query. If you type 'HELP CREATE', you can match `CREATE DATABASE`, `CREATE TABLE`, `CREATE USER` and other commands.
    
@@ -252,7 +252,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
    
 - Create data table
 
-  > For more detailed syntax and best practices used by Create-Table, see [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) command manual.
+  > For more detailed syntax and best practices used by Create-Table, see [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) command manual.
 
   Use the `CREATE TABLE` command to create a table (Table). More detailed parameters can be viewed:
 
@@ -266,7 +266,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
   USE example_db;
   ````
 
-  Doris supports two table creation methods, single partition and composite partition (for details, please refer to [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) command manual).
+  Doris supports two table creation methods, single partition and composite partition (for details, please refer to [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) command manual).
 
   The following takes the aggregation model as an example to demonstrate the table building statements for two partitions.
 
@@ -391,7 +391,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
 
 1. Insert Into
 
-   > For more detailed syntax and best practices for Insert usage, see [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) Command Manual.
+   > For more detailed syntax and best practices for Insert usage, see [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) Command Manual.
 
    The Insert Into statement is used in a similar way to the Insert Into statement in databases such as MySQL. But in Doris, all data writing is a separate import job. Therefore, Insert Into is also introduced as an import method here.
 
@@ -441,7 +441,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
         - If `status` is `visible`, the data import is successful.
       - If `warnings` is greater than 0, it means that data is filtered. You can get the url through the `show load` statement to view the filtered lines.
 
-   For more detailed instructions, see the [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) command manual.
+   For more detailed instructions, see the [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) command manual.
 
 2. Batch Import
 
@@ -449,7 +449,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
 
    - Stream-Load
 
-     > For more detailed syntax and best practices used by Stream-Load, see [Stream-Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) command manual.
+     > For more detailed syntax and best practices used by Stream-Load, see [Stream-Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) command manual.
 
      Streaming import transfers data to Doris through the HTTP protocol, and can directly import local data without relying on other systems or components. See `HELP STREAM LOAD;` for detailed syntax help.
 
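As a concrete sketch of the Stream Load flow described above: the FE host, HTTP port (8030), user, database, table, and label below are placeholder assumptions, not values taken from this document.

```shell
# Prepare a small CSV matching a table defined as (k1, v1).
printf '1,abc\n2,def\n' > /tmp/table1_data.csv

# Hedged sketch: push the file to FE over HTTP. Host, port, credentials and
# the label are assumptions -- adjust to your cluster before running.
curl --location-trusted -u root: \
     -H "label:table1_20220509_001" \
     -H "column_separator:," \
     -T /tmp/table1_data.csv \
     http://fe_host:8030/api/example_db/table1/_stream_load
```

On a real cluster the returned JSON should report a `Success` status; re-submitting the same label is rejected, which is how Stream Load avoids duplicate imports.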
@@ -498,7 +498,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
 
      Broker import uses the deployed Broker process to read data on external storage for import.
 
-     > For more detailed syntax and best practices used by Broker Load, see [Broker Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) command manual, you can also enter `HELP BROKER LOAD` in the MySql client command line for more help information.
+     > For more detailed syntax and best practices used by Broker Load, see [Broker Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) command manual, you can also enter `HELP BROKER LOAD` in the MySql client command line for more help information.
 
      Example: With "table1_20170708" as the Label, import the files on HDFS into table1
 
@@ -590,7 +590,7 @@ For a complete parameter comparison table, please go to [Profile parameter analy
 
 #### Update Data
 
-> For more detailed syntax and best practices used by Update, see [Update](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.html) Command Manual.
+> For more detailed syntax and best practices used by Update, see [Update](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.md) Command Manual.
 
 The current UPDATE statement **only supports** row updates on the Unique model, and concurrent updates may cause data conflicts. At present, Doris does not handle such conflicts, so users need to avoid them on the business side.
 
@@ -635,7 +635,7 @@ The current UPDATE statement **only supports** row updates on the Unique model,
 
 #### Delete Data
 
-> For more detailed syntax and best practices for Delete use, see [Delete](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) Command Manual.
+> For more detailed syntax and best practices for Delete use, see [Delete](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) Command Manual.
 
 1. Grammar rules
 
diff --git a/docs/en/install/install-deploy.md b/docs/en/install/install-deploy.md
index ce8fdab2e9..8c9ccd0045 100644
--- a/docs/en/install/install-deploy.md
+++ b/docs/en/install/install-deploy.md
@@ -351,7 +351,7 @@ You can also view the BE node through the front-end page connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../admin-manual/maint-monitor/tablet-meta-tool.html).
+The expansion and shrinking of BE nodes does not affect the currently running system or the tasks being executed, and does not affect the performance of the current system. Data balancing is done automatically; depending on the amount of data in the cluster, the cluster will return to a load-balanced state within a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../admin-manual/maint-monitor/tablet-meta-tool.md).
 
 #### Add BE nodes
 
@@ -385,7 +385,7 @@ DECOMMISSION clause:
 > 		```CANCEL ALTER SYSTEM DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```
> 	The command can be cancelled. When cancelled, the data on the BE will keep the amount currently remaining, and Doris will re-balance the load afterwards
 
-**For expansion and scaling of BE nodes in multi-tenant deployment environments, please refer to the [Multi-tenant Design Document](../admin-manual/maint-monitor/multi-tenant.html).**
+**For expansion and scaling of BE nodes in multi-tenant deployment environments, please refer to the [Multi-tenant Design Document](../admin-manual/maint-monitor/multi-tenant.md).**
 
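The BE scale-in flow above can be sketched from the MySQL client; all connection values (FE host, query port 9030, user) and the BE address are placeholders, not taken from this document.

```shell
# Hedged sketch: gracefully drain a BE via the MySQL client, then watch the
# node's remaining tablet count. All connection values are placeholders.
FE_HOST="fe_host"; BE_ADDR="be_host:9050"
SQL="ALTER SYSTEM DECOMMISSION BACKEND '${BE_ADDR}'"
mysql -h "$FE_HOST" -P 9030 -uroot -e "$SQL"
mysql -h "$FE_HOST" -P 9030 -uroot -e "SHOW PROC '/backends'\G"
```

While decommissioning, the node's tablet count should steadily drop to 0, after which the BE is removed; the `CANCEL ALTER SYSTEM DECOMMISSION` statement quoted above aborts the drain.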
 ### Broker Expansion and Shrinkage
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
index 499dc9b3b2..dfdcc00cbd 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
@@ -32,7 +32,7 @@ ALTER TABLE COLUMN
 
 ### Description
 
-This statement is used to perform a schema change operation on an existing table. The schema change is asynchronous, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER.html) command to view the progress.
+This statement is used to perform a schema change operation on an existing table. The schema change is asynchronous, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER.md) command to view the progress.
 
 Syntax:
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 1533bbee7c..074f8f67e3 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ Notice:
 - Partition ranges are left-closed and right-open. If the user only specifies the right boundary, the system will automatically determine the left boundary
 - If the bucketing method is not specified, the bucketing method used for creating the table is automatically used
 - If the bucketing method is specified, only the number of buckets can be modified, not the bucketing method or the bucketing column
-- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](./sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](./sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 - If the user does not explicitly create a partition when creating a table, adding a partition by ALTER is not supported
 
 2. Delete the partition
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index 1d140b172b..a0957af785 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -32,7 +32,7 @@ ALTER TABLE REPLACE
 
 ### Description
 
-This statement is used to modify the attributes of the schema of the existing table. The syntax is basically similar to [ALTER TABLE CULUMN](ALTER-TABLE-COLUMN.html).
+This statement is used to modify the schema attributes of an existing table. The syntax is basically similar to [ALTER TABLE COLUMN](ALTER-TABLE-COLUMN.md).
 
 ```sql
 ALTER TABLE [database.]table MODIFY NEW_COLUMN_INFO REPLACE OLD_COLUMN_INFO ;
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index f06a600c76..3d9dccabe7 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -32,7 +32,7 @@ ALTER TABLE ROLLUP
 
 ### Description
 
-This statement is used to perform a rollup modification operation on an existing table. The rollup is an asynchronous operation, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER.html) command to view the progress.
+This statement is used to perform a rollup modification operation on an existing table. The rollup is an asynchronous operation, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER.md) command to view the progress.
 
 Syntax:
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
index 41057a90ca..d779cb64e8 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
@@ -96,7 +96,7 @@ BACKUP
 
 1. Only one backup operation can be performed under the same database.
 
-2. The backup operation will back up the underlying table and [materialized view](../../../../advanced/materialized-view.html) of the specified table or partition, and only one copy will be backed up.
+2. The backup operation will back up the underlying table and [materialized view](../../../../advanced/materialized-view.md) of the specified table or partition, and only one replica will be backed up.
 
 3. Efficiency of backup operations
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
index 3f12e66a02..daded98ef4 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
@@ -116,5 +116,5 @@ CREATE, REPOSITORY
 ### Best Practice
 
 1. A cluster can create multiple repositories. Only users with ADMIN privileges can create repositories.
-2. Any user can view the created repositories through the [SHOW REPOSITORIES](../../Show-Statements/SHOW-REPOSITORIES.html) command.
+2. Any user can view the created repositories through the [SHOW REPOSITORIES](../../Show-Statements/SHOW-REPOSITORIES.md) command.
 3. When performing data migration operations, it is necessary to create the exact same repository in both the source cluster and the destination cluster, so that the destination cluster can view the data snapshots backed up by the source cluster through this repository.
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
index cbbafd6cdf..5d54c6c24a 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
@@ -119,4 +119,4 @@ RESTORE
 
 4. Efficiency of recovery operations:
 
-   In the case of the same cluster size, the time-consuming of the restore operation is basically the same as the time-consuming of the backup operation. If you want to speed up the recovery operation, you can first restore only one copy by setting the `replication_num` parameter, and then adjust the number of copies by [ALTER TABLE PROPERTY](../../Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.html), complete the copy.
+   With the same cluster size, a restore operation takes roughly as long as the corresponding backup operation. If you want to speed up the restore, you can first restore only a single replica by setting the `replication_num` parameter, and then adjust the number of replicas with [ALTER TABLE PROPERTY](../../Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md) to complete the replicas.
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
index 5ca46b1be0..77fb305062 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
@@ -32,7 +32,7 @@ CREATE EXTERNAL TABLE
 
 ### Description
 
-This statement is used to create an external table, see [CREATE TABLE](./CREATE-TABLE.html) for the specific syntax.
+This statement is used to create an external table, see [CREATE TABLE](./CREATE-TABLE.md) for the specific syntax.
 
 The type of the external table is mainly identified by the ENGINE type; currently MYSQL, BROKER, HIVE, and ICEBERG are supported
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
index 96fe9f79b7..ae242e5dae 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
@@ -34,7 +34,7 @@ CREATE MATERIALIZED VIEW
 
 This statement is used to create a materialized view.
 
-This operation is an asynchronous operation. After the submission is successful, you need to view the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.html). After displaying FINISHED, you can use the `desc [table_name] all` command to view the schema of the materialized view.
+This operation is an asynchronous operation. After the submission is successful, you need to view the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md). After displaying FINISHED, you can use the `desc [table_name] all` command to view the schema of the materialized view.
 
 Syntax:
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
index 79181e75b2..1c59e95638 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -28,7 +28,7 @@ under the License.
 
 ### Description
 
-This command is used to create a table. The subject of this document describes the syntax for creating Doris self-maintained tables. For external table syntax, please refer to the [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.html) document.
+This command is used to create a table. The subject of this document describes the syntax for creating Doris self-maintained tables. For external table syntax, please refer to the [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.md) document.
 
 ```sql
 CREATE TABLE [IF NOT EXISTS] [database.]table
@@ -149,7 +149,7 @@ distribution_info
 
 * `engine_type`
 
-    Table engine type. All types in this document are OLAP. For other external table engine types, see [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.html) document. Example:
+    Table engine type. All types in this document are OLAP. For other external table engine types, see [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.md) document. Example:
 
     `ENGINE=olap`
 
@@ -534,7 +534,7 @@ distribution_info
 
 #### Partitioning and bucketing
 
-A table must specify the bucket column, but it does not need to specify the partition. For the specific introduction of partitioning and bucketing, please refer to the [Data Division](../../../../data-table/data-partition.html) document.
+A table must specify the bucket column, but it does not need to specify the partition. For the specific introduction of partitioning and bucketing, please refer to the [Data Division](../../../../data-table/data-partition.md) document.
 
 Tables in Doris can be divided into partitioned tables and non-partitioned tables. This attribute is determined when the table is created and cannot be changed afterwards. That is, for partitioned tables, you can add or delete partitions in the subsequent use process, and for non-partitioned tables, you can no longer perform operations such as adding partitions afterwards.
 
@@ -544,7 +544,7 @@ Therefore, it is recommended to confirm the usage method to build the table reas
 
 #### Dynamic Partition
 
-The dynamic partition function is mainly used to help users automatically manage partitions. By setting certain rules, the Doris system regularly adds new partitions or deletes historical partitions. Please refer to [Dynamic Partition](../../../../advanced/partition/dynamic-partition.html) document for more help.
+The dynamic partition function is mainly used to help users automatically manage partitions. By setting certain rules, the Doris system regularly adds new partitions or deletes historical partitions. Please refer to [Dynamic Partition](../../../../advanced/partition/dynamic-partition.md) document for more help.
 
 #### Materialized View
 
@@ -554,7 +554,7 @@ If the materialized view is created when the table is created, all subsequent da
 
 If you add a materialized view in the subsequent use process, if there is data in the table, the creation time of the materialized view depends on the current amount of data.
 
-For the introduction of materialized views, please refer to the document [materialized views](../../../../advanced/materialized-view.html).
+For the introduction of materialized views, please refer to the document [materialized views](../../../../advanced/materialized-view.md).
 
 #### Index
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
index dae32c50a3..e0877d0b83 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
@@ -41,7 +41,7 @@ DROP DATABASE [IF EXISTS] db_name [FORCE];
 
 Note:
 
-- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details
+- For a period of time after DROP DATABASE is executed, the dropped database can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) statement for details
 - If you execute DROP DATABASE FORCE, the system will not check the database for unfinished transactions, the database will be deleted directly and cannot be recovered, this operation is generally not recommended
 
 ### Example
diff --git a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index 04d85f537f..260145ad03 100644
--- a/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/docs/en/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 Note:
 
-- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details
+- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) statement for details
 - If you execute DROP TABLE FORCE, the system will not check whether there are unfinished transactions in the table, the table will be deleted directly and cannot be recovered, this operation is generally not recommended
 
 ### Example
diff --git a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index 8b12356eef..458880199e 100644
--- a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](..../../../data-operate/import/import-scenes/load-data-convert.html) document.
+    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../advanced/broker.html) documentation for specific information.
+  Specifies the information required by the broker, which is usually needed by the broker to access remote storage systems such as BOS or HDFS. See the [Broker](../../../advanced/broker.md) documentation for details.
 
   ````text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../advanced/time-zone.html) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
+    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../advanced/time-zone.md) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
 
 ### Example
 
@@ -400,29 +400,29 @@ WITH BROKER broker_name
 
 1. Check the import task status
 
-   Broker Load is an asynchronous import process. The successful execution of the statement only means that the import task is submitted successfully, and does not mean that the data import is successful. The import status needs to be viewed through the [SHOW LOAD](../../Show-Statements/SHOW-LOAD.html) command.
+   Broker Load is an asynchronous import process. Successful execution of the statement only means that the import task was submitted; it does not mean that the data import has succeeded. The import status must be checked with the [SHOW LOAD](../../Show-Statements/SHOW-LOAD.md) command.
 
 2. Cancel the import task
 
-   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](./CANCEL-LOAD.html) command. After cancellation, the written data will also be rolled back and will not take effect.
+   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](./CANCEL-LOAD.md) command. After cancellation, the written data will also be rolled back and will not take effect.
 
 3. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.html) documentation.
+   All import tasks in Doris are atomic, and importing multiple tables in the same task also guarantees atomicity. Doris additionally uses the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
 
 4. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 5. Error data filtering
 
    Doris' import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, which means the entire import task fails when there is any error data. If the user wants to ignore some problematic data rows, this parameter can be set to a value between 0 and 1, and Doris will automatically skip rows with an incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
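   As a hedged sketch of the tolerance behavior described above (database, table, path, and broker names are placeholders, not taken from this patch), a Broker Load that tolerates up to 10% malformed rows could be submitted as:

   ```sql
   LOAD LABEL example_db.label_tolerant_load
   (
       DATA INFILE("hdfs://host:port/path/to/file.csv")
       INTO TABLE example_tbl
       COLUMNS TERMINATED BY ","
   )
   WITH BROKER broker_name
   PROPERTIES
   (
       "max_filter_ratio" = "0.1"
   );
   ```

   Roughly, the task fails only when the ratio of filtered rows to total (filtered plus loaded) rows exceeds 0.1; otherwise the malformed rows are skipped and the rest are imported. See the linked document for the exact calculation.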
 
 6. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.html) documentation.
+   The `strict_mode` attribute sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
 
 7. Timeout
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index 1e7c21c11a..950d407e66 100644
--- a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ The data synchronization (Sync Job) function supports users to submit a resident
 
 Currently, the data synchronization job only supports connecting to Canal, obtaining the parsed Binlog data from the Canal Server and importing it into Doris.
 
-Users can view the data synchronization job status through [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.html).
+Users can view the data synchronization job status through [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.md).
 
 Syntax:
 
diff --git a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
index 8e3d85bbeb..a5081e517f 100644
--- a/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
+++ b/docs/en/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
@@ -416,21 +416,21 @@ curl --location-trusted -u root -H "columns: k1,k2,source_sequence,v1,v2" -H "fu
 
 4. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.html) documentation.
+   All import tasks in Doris are atomic, and importing multiple tables in the same task also guarantees atomicity. Doris additionally uses the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
 
 5. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 6. Error data filtering
 
    Doris' import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, which means the entire import task fails when there is any error data. If the user wants to ignore some problematic data rows, this parameter can be set to a value between 0 and 1, and Doris will automatically skip rows with an incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 7. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.html) documentation.
+   The `strict_mode` attribute sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
 
 8. Timeout
 
diff --git a/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md b/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
index ee2d1a309c..9409d76246 100644
--- a/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
+++ b/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
@@ -80,20 +80,20 @@ mysql> show proc "/";
 Notes:
 
 1. Statistics: Mainly used to summarize the number of databases, tables, partitions, shards, and replicas in the Doris cluster, as well as the number of unhealthy replicas. This information helps gauge the overall size of the cluster metadata, view the cluster's sharding situation from an overall perspective, and quickly check shard health, which in turn helps locate problematic data shards.
-2. brokers : View cluster broker node information, equivalent to [SHOW BROKER](./SHOW-BROKER.html)
-3. frontends: Display all FE node information in the cluster, including IP address, role, status, whether it is a mater, etc., equivalent to [SHOW FRONTENDS](./SHOW-FRONTENDS.html)
+2. brokers : View cluster broker node information, equivalent to [SHOW BROKER](./SHOW-BROKER.md)
+3. frontends: Display all FE node information in the cluster, including IP address, role, status, whether it is the master, etc., equivalent to [SHOW FRONTENDS](./SHOW-FRONTENDS.md)
 4. routine_loads: Display all routine load job information, including job name, status, etc.
 5. auth: User name and corresponding permission information
 6. jobs:
 7. bdbje: To view the bdbje database list, you need to modify the `fe.conf` file to add `enable_bdbje_debug_mode=true`, and then start `FE` through `sh start_fe.sh --daemon` to enter the `debug` mode. After entering `debug` mode, only `http server` and `MySQLServer` will be started and the `BDBJE` instance will be opened, but no metadata loading and subsequent startup processes will be entered.
 8. dbs: Mainly used to view the metadata information of each database and the tables in the Doris cluster. This information includes table structure, partitions, materialized views, data shards and replicas, and more. Through this directory and its subdirectories, you can clearly display the table metadata in the cluster, and locate some problems such as data skew, replica failure, etc.
-9. resources : View system resources, ordinary accounts can only see resources that they have USAGE_PRIV permission to use. Only the root and admin accounts can see all resources. Equivalent to [SHOW RESOURCES](./SHOW-RESOURCES.html)
+9. resources : View system resources. Ordinary accounts can only see resources on which they have USAGE_PRIV permission; only the root and admin accounts can see all resources. Equivalent to [SHOW RESOURCES](./SHOW-RESOURCES.md)
 10. monitor : shows the resource usage of FE JVM
-11. transactions : used to view the transaction details of the specified transaction id, equivalent to [SHOW TRANSACTION](./SHOW-TRANSACTION.html)
-12. colocation_group : This command can view the existing Group information in the cluster. For details, please refer to the [Colocation Join](../../../advanced/join-optimization/colocation-join.html) chapter
-13. backends: Displays the node list of BE in the cluster, equivalent to [SHOW BACKENDS](./SHOW-BACKENDS.html)
-14. trash: This statement is used to view the space occupied by garbage data in the backend. Equivalent to [SHOW TRASH](./SHOW-TRASH.html)
-15. cluster_balance : To check the balance of the cluster, please refer to [Data Copy Management](../../../admin-manual/maint-monitor/tablet-repair-and-balance.html)
+11. transactions : used to view the transaction details of the specified transaction id, equivalent to [SHOW TRANSACTION](./SHOW-TRANSACTION.md)
+12. colocation_group : This command can view the existing Group information in the cluster. For details, please refer to the [Colocation Join](../../../advanced/join-optimization/colocation-join.md) chapter
+13. backends: Displays the node list of BE in the cluster, equivalent to [SHOW BACKENDS](./SHOW-BACKENDS.md)
+14. trash: This statement is used to view the space occupied by garbage data in the backend. Equivalent to [SHOW TRASH](./SHOW-TRASH.md)
+15. cluster_balance : To check the balance of the cluster, please refer to [Data Replica Management](../../../admin-manual/maint-monitor/tablet-repair-and-balance.md)
 16. current_queries : View the list of queries being executed, i.e. the SQL statements currently running.
 17. load_error_hub: Doris supports centralized storage of error information generated by load jobs in an error hub. Then view the error message directly through the <code>SHOW LOAD WARNINGS;</code> statement. Shown here is the configuration information of the error hub.
 18. current_backend_instances : Displays a list of be nodes that are currently executing jobs
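As a hedged sketch of how the directories listed above are accessed (paths taken from the list; output omitted):

```sql
SHOW PROC "/";                 -- top-level directory
SHOW PROC "/frontends";        -- equivalent to SHOW FRONTENDS
SHOW PROC "/dbs";              -- database and table metadata
SHOW PROC "/cluster_balance";  -- cluster balance status
```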
diff --git a/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md b/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
index 238d463340..7e5324f5fe 100644
--- a/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
+++ b/docs/en/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.html) statement.
+This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/docs/zh-CN/admin-manual/cluster-management/elastic-expansion.md b/docs/zh-CN/admin-manual/cluster-management/elastic-expansion.md
index 7f4238228a..71f4566d1f 100644
--- a/docs/zh-CN/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/zh-CN/admin-manual/cluster-management/elastic-expansion.md
@@ -128,7 +128,7 @@ DECOMMISSION 语句如下:
      > 		```CANCEL DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```  
      > 	命令取消。取消后,该 BE 上的数据将维持当前剩余的数据量。后续 Doris 重新进行负载均衡
 
-**对于多租户部署环境下,BE 节点的扩容和缩容,请参阅 [多租户设计文档](../multi-tenant.html)。**
+**对于多租户部署环境下,BE 节点的扩容和缩容,请参阅 [多租户设计文档](../multi-tenant.md)。**
 
 ## Broker 扩容缩容
 
diff --git a/docs/zh-CN/admin-manual/config/be-config.md b/docs/zh-CN/admin-manual/config/be-config.md
index 2d187298fd..873ef1d2b8 100644
--- a/docs/zh-CN/admin-manual/config/be-config.md
+++ b/docs/zh-CN/admin-manual/config/be-config.md
@@ -451,7 +451,7 @@ CumulativeCompaction会跳过最近发布的增量,以防止压缩可能被查
 ### `doris_max_scan_key_num`
 
 * 类型:int
-* 描述:用于限制一个查询请求中,scan node 节点能拆分的最大 scan key 的个数。当一个带有条件的查询请求到达 scan node 节点时,scan node 会尝试将查询条件中 key 列相关的条件拆分成多个 scan key range。之后这些 scan key range 会被分配给多个 scanner 线程进行数据扫描。较大的数值通常意味着可以使用更多的 scanner 线程来提升扫描操作的并行度。但在高并发场景下,过多的线程可能会带来更大的调度开销和系统负载,反而会降低查询响应速度。一个经验数值为 50。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../advanced/variables.html) 中 `max_scan_key_num` 的说明。
+* 描述:用于限制一个查询请求中,scan node 节点能拆分的最大 scan key 的个数。当一个带有条件的查询请求到达 scan node 节点时,scan node 会尝试将查询条件中 key 列相关的条件拆分成多个 scan key range。之后这些 scan key range 会被分配给多个 scanner 线程进行数据扫描。较大的数值通常意味着可以使用更多的 scanner 线程来提升扫描操作的并行度。但在高并发场景下,过多的线程可能会带来更大的调度开销和系统负载,反而会降低查询响应速度。一个经验数值为 50。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../advanced/variables.md) 中 `max_scan_key_num` 的说明。
 * 默认值:1024
 
 当在高并发场景下发现并发度无法提升时,可以尝试降低该数值并观察影响。
@@ -788,7 +788,7 @@ cumulative compaction策略:最大增量文件的数量
 ### `max_pushdown_conditions_per_column`
 
 * 类型:int
-* 描述:用于限制一个查询请求中,针对单个列,能够下推到存储引擎的最大条件数量。在查询计划执行的过程中,一些列上的过滤条件可以下推到存储引擎,这样可以利用存储引擎中的索引信息进行数据过滤,减少查询需要扫描的数据量。比如等值条件、IN 谓词中的条件等。这个参数在绝大多数情况下仅影响包含 IN 谓词的查询。如 `WHERE colA IN (1,2,3,4,...)`。较大的数值意味值 IN 谓词中更多的条件可以推送给存储引擎,但过多的条件可能会导致随机读的增加,某些情况下可能会降低查询效率。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../advanced/variables.html) 中 `max_pushdown_conditions_per_column ` 的说明。
+* 描述:用于限制一个查询请求中,针对单个列,能够下推到存储引擎的最大条件数量。在查询计划执行的过程中,一些列上的过滤条件可以下推到存储引擎,这样可以利用存储引擎中的索引信息进行数据过滤,减少查询需要扫描的数据量。比如等值条件、IN 谓词中的条件等。这个参数在绝大多数情况下仅影响包含 IN 谓词的查询。如 `WHERE colA IN (1,2,3,4,...)`。较大的数值意味着 IN 谓词中更多的条件可以推送给存储引擎,但过多的条件可能会导致随机读的增加,某些情况下可能会降低查询效率。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../advanced/variables.md) 中 `max_pushdown_conditions_per_column` 的说明。
 * 默认值:1024
 
 * 示例
diff --git a/docs/zh-CN/admin-manual/config/fe-config.md b/docs/zh-CN/admin-manual/config/fe-config.md
index 72f5fd3ad9..a4250994e7 100644
--- a/docs/zh-CN/admin-manual/config/fe-config.md
+++ b/docs/zh-CN/admin-manual/config/fe-config.md
@@ -82,7 +82,7 @@ FE 的配置项有两种方式进行配置:
 
 3. 通过 HTTP 协议动态配置
 
-   具体请参阅 [Set Config Action](http://doris.apache.org/master/zh-CN/administrator-guide/http-actions/fe/set-config-action.html)
+   具体请参阅 [Set Config Action](../http-actions/fe/set-config-action.md)
 
    该方式也可以持久化修改后的配置项。配置项将持久化在 `fe_custom.conf` 文件中,在 FE 重启后仍会生效。
 
diff --git a/docs/zh-CN/admin-manual/data-admin/backup.md b/docs/zh-CN/admin-manual/data-admin/backup.md
index d99e4ae624..80a80245e1 100644
--- a/docs/zh-CN/admin-manual/data-admin/backup.md
+++ b/docs/zh-CN/admin-manual/data-admin/backup.md
@@ -124,7 +124,7 @@ Doris 支持将当前数据以文件的形式,通过 broker 备份到远端存
    1 row in set (0.15 sec)
    ```
 
-BACKUP的更多用法可参考 [这里](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.html)。
+BACKUP的更多用法可参考 [这里](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md)。
 
 ## 最佳实践
 
@@ -153,7 +153,7 @@ BACKUP的更多用法可参考 [这里](../../sql-manual/sql-reference/Data-Defi
 
 1. CREATE REPOSITORY
 
-   创建一个远端仓库路径,用于备份或恢复。该命令需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.html),也可以直接通过S3 协议备份到支持AWS S3协议的远程存储上去,具体参考 [创建远程仓库文档](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md)
+   创建一个远端仓库路径,用于备份或恢复。该命令需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.md),也可以直接通过S3 协议备份到支持AWS S3协议的远程存储上去,具体参考 [创建远程仓库文档](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md)
 
 2. BACKUP
 
@@ -208,4 +208,4 @@ BACKUP的更多用法可参考 [这里](../../sql-manual/sql-reference/Data-Defi
 
 ## 更多帮助
 
- 关于 BACKUP 使用的更多详细语法及最佳实践,请参阅 [BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BACKUP` 获取更多帮助信息。
+ 关于 BACKUP 使用的更多详细语法及最佳实践,请参阅 [BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BACKUP` 获取更多帮助信息。
diff --git a/docs/zh-CN/admin-manual/data-admin/delete-recover.md b/docs/zh-CN/admin-manual/data-admin/delete-recover.md
index c11b865247..a9ba4777b9 100644
--- a/docs/zh-CN/admin-manual/data-admin/delete-recover.md
+++ b/docs/zh-CN/admin-manual/data-admin/delete-recover.md
@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
 
 ## 更多帮助
 
-关于 RECOVER 使用的更多详细语法及最佳实践,请参阅 [RECOVER](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RECOVER` 获取更多帮助信息。
+关于 RECOVER 使用的更多详细语法及最佳实践,请参阅 [RECOVER](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RECOVER.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RECOVER` 获取更多帮助信息。
diff --git a/docs/zh-CN/admin-manual/data-admin/restore.md b/docs/zh-CN/admin-manual/data-admin/restore.md
index 6c836662ed..188efe878d 100644
--- a/docs/zh-CN/admin-manual/data-admin/restore.md
+++ b/docs/zh-CN/admin-manual/data-admin/restore.md
@@ -126,7 +126,7 @@ Doris 支持将当前数据以文件的形式,通过 broker 备份到远端存
    1 row in set (0.01 sec)
    ```
 
-RESTORE的更多用法可参考 [这里](../../sql-manual/sql-reference/Show-Statements/RESTORE.html)。
+RESTORE的更多用法可参考 [这里](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md)。
 
 ## 相关命令
 
@@ -134,7 +134,7 @@ RESTORE的更多用法可参考 [这里](../../sql-manual/sql-reference/Show-Sta
 
 1. CREATE REPOSITORY
 
-   创建一个远端仓库路径,用于备份或恢复。该命令需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.html),也可以直接通过S3 协议备份到支持AWS S3协议的远程存储上去,具体参考 [创建远程仓库文档](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md)
+   创建一个远端仓库路径,用于备份或恢复。该命令需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.md),也可以直接通过S3 协议备份到支持AWS S3协议的远程存储上去,具体参考 [创建远程仓库文档](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md)
 
 2. RESTORE
 
@@ -180,5 +180,5 @@ RESTORE的更多用法可参考 [这里](../../sql-manual/sql-reference/Show-Sta
 
 ## 更多帮助
 
-关于 RESTORE 使用的更多详细语法及最佳实践,请参阅 [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RESTORE` 获取更多帮助信息。
+关于 RESTORE 使用的更多详细语法及最佳实践,请参阅 [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP RESTORE` 获取更多帮助信息。
 
diff --git a/docs/zh-CN/admin-manual/http-actions/fe/table-schema-action.md b/docs/zh-CN/admin-manual/http-actions/fe/table-schema-action.md
index 0d3ee704ff..a4ae8f6e18 100644
--- a/docs/zh-CN/admin-manual/http-actions/fe/table-schema-action.md
+++ b/docs/zh-CN/admin-manual/http-actions/fe/table-schema-action.md
@@ -97,7 +97,7 @@ under the License.
 	"count": 0
 }
 ```
-注意:区别为`http`方式比`http v2`方式多返回`aggregation_type`字段,`http v2`开启是通过`enable_http_server_v2`进行设置,具体参数说明详见[fe参数设置](https://doris.apache.org/zh-CN/administrator-guide/config/fe_config.html)
+注意:区别为`http`方式比`http v2`方式多返回`aggregation_type`字段,`http v2`开启是通过`enable_http_server_v2`进行设置,具体参数说明详见[fe参数设置](../../config/fe-config.md)
 
 ## Examples
 
diff --git a/docs/zh-CN/admin-manual/maint-monitor/disk-capacity.md b/docs/zh-CN/admin-manual/maint-monitor/disk-capacity.md
index d487335ac3..4b78898e8f 100644
--- a/docs/zh-CN/admin-manual/maint-monitor/disk-capacity.md
+++ b/docs/zh-CN/admin-manual/maint-monitor/disk-capacity.md
@@ -125,7 +125,7 @@ capacity_min_left_bytes_flood_stage 默认 1GB。
   - snapshot/: 快照目录下的快照文件。
   - trash/:回收站中的文件。
 
-  **这种操作会对 [从 BE 回收站中恢复数据](./tablet-restore-tool.html) 产生影响。**
+  **这种操作会对 [从 BE 回收站中恢复数据](./tablet-restore-tool.md) 产生影响。**
 
   如果BE还能够启动,则可以使用`ADMIN CLEAN TRASH ON(BackendHost:BackendHeartBeatPort);`来主动清理临时文件,会清理 **所有** trash文件和过期snapshot文件,**这将影响从回收站恢复数据的操作** 。
 
@@ -156,6 +156,6 @@ capacity_min_left_bytes_flood_stage 默认 1GB。
 
     `rm -rf data/0/12345/`
 
-  - 删除 Tablet 元数据(具体参考 [Tablet 元数据管理工具](./tablet-meta-tool.html))
+  - 删除 Tablet 元数据(具体参考 [Tablet 元数据管理工具](./tablet-meta-tool.md))
 
-    `./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash= 352781111`
\ No newline at end of file
+    `./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash=352781111`
diff --git a/docs/zh-CN/admin-manual/maint-monitor/metadata-operation.md b/docs/zh-CN/admin-manual/maint-monitor/metadata-operation.md
index 4c532a9add..f0008a781a 100644
--- a/docs/zh-CN/admin-manual/maint-monitor/metadata-operation.md
+++ b/docs/zh-CN/admin-manual/maint-monitor/metadata-operation.md
@@ -28,11 +28,11 @@ under the License.
 
 本文档主要介绍在实际生产环境中,如何对 Doris 的元数据进行管理。包括 FE 节点建议的部署方式、一些常用的操作方法、以及常见错误的解决方法。
 
-在阅读本文当前,请先阅读 [Doris 元数据设计文档](../../internal/metadata-design.md) 了解 Doris 元数据的工作原理。
+在阅读本文当前,请先阅读 [Doris 元数据设计文档](../../design/metadata-design.md) 了解 Doris 元数据的工作原理。
 
 ## 重要提示
 
-* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法再回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../admin-manual/cluster-management/upgrade.html) 中的操作,测试元数据兼容性。
+* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法再回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../admin-manual/cluster-management/upgrade.md) 中的操作,测试元数据兼容性。
 
 ## 元数据目录结构
 
@@ -136,7 +136,7 @@ under the License.
 
 ### 添加 FE
 
-添加 FE 流程在 [弹性扩缩容](../../admin-manual/cluster-management/elastic-expansion.html) 有详细介绍,不再赘述。这里主要说明一些注意事项,以及常见问题。
+添加 FE 流程在 [弹性扩缩容](../../admin-manual/cluster-management/elastic-expansion.md) 有详细介绍,不再赘述。这里主要说明一些注意事项,以及常见问题。
 
 1. 注意事项
 
diff --git a/docs/zh-CN/admin-manual/maint-monitor/tablet-repair-and-balance.md b/docs/zh-CN/admin-manual/maint-monitor/tablet-repair-and-balance.md
index 1d266fcc51..bfa095c8a1 100644
--- a/docs/zh-CN/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/docs/zh-CN/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -28,7 +28,7 @@ under the License.
 
 从 0.9.0 版本开始,Doris 引入了优化后的副本管理策略,同时支持了更为丰富的副本状态查看工具。本文档主要介绍 Doris 数据副本均衡、修复方面的调度策略,以及副本管理的运维方法。帮助用户更方便的掌握和管理集群中的副本状态。
 
-> Colocation 属性的表的副本修复和均衡可以参阅[这里](../../advanced/join-optimization/colocation-join.html)
+> Colocation 属性的表的副本修复和均衡可以参阅[这里](../../advanced/join-optimization/colocation-join.md)
 
 ## 名词解释
 
diff --git a/docs/zh-CN/admin-manual/privilege-ldap/user-privilege.md b/docs/zh-CN/admin-manual/privilege-ldap/user-privilege.md
index 6c42d3eb2f..592eb672f5 100644
--- a/docs/zh-CN/admin-manual/privilege-ldap/user-privilege.md
+++ b/docs/zh-CN/admin-manual/privilege-ldap/user-privilege.md
@@ -216,4 +216,4 @@ ADMIN_PRIV 和 GRANT_PRIV 权限同时拥有**授予权限**的权限,较为
 
 ## 更多帮助
 
- 关于 权限管理 使用的更多详细语法及最佳实践,请参阅 [GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP GRANTS` 获取更多帮助信息。
+ 关于 权限管理 使用的更多详细语法及最佳实践,请参阅 [GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP GRANTS` 获取更多帮助信息。
diff --git a/docs/zh-CN/admin-manual/sql-interception.md b/docs/zh-CN/admin-manual/sql-interception.md
index d6afcb7dcc..3cc682f824 100644
--- a/docs/zh-CN/admin-manual/sql-interception.md
+++ b/docs/zh-CN/admin-manual/sql-interception.md
@@ -37,7 +37,7 @@ under the License.
 ## 规则
 
 对SQL规则增删改查
-- 创建SQL阻止规则,更多创建语法请参阅[CREATE SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.html)
+- 创建SQL阻止规则,更多创建语法请参阅[CREATE SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
     - sql:匹配规则(基于正则匹配,特殊字符需要转义),可选,默认值为 "NULL"
     - sqlHash: sql hash值,用于完全匹配,我们会在`fe.audit.log`打印这个值,可选,这个参数和sql只能二选一,默认值为 "NULL"
     - partition_num: 一个扫描节点会扫描的最大partition数量,默认值为0L
@@ -65,12 +65,12 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = sql match regex sql block rule:
 CREATE SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "30", "cardinality"="10000000000","global"="false","enable"="true")
 ```
 
-- 查看已配置的SQL阻止规则,不指定规则名则为查看所有规则,具体语法请参阅 [SHOW SQL BLOCK RULE](../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.html)
+- 查看已配置的SQL阻止规则,不指定规则名则为查看所有规则,具体语法请参阅 [SHOW SQL BLOCK RULE](../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
 
 ```sql
 SHOW SQL_BLOCK_RULE [FOR RULE_NAME]
 ```
-- 修改SQL阻止规则,允许对sql/sqlHash/partition_num/tablet_num/cardinality/global/enable等每一项进行修改,具体语法请参阅[ALTER SQL BLOCK  RULE](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.html)
+- 修改SQL阻止规则,允许对sql/sqlHash/partition_num/tablet_num/cardinality/global/enable等每一项进行修改,具体语法请参阅[ALTER SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
     - sql 和 sqlHash 不能同时被设置。这意味着,如果一个rule设置了sql或者sqlHash,则另一个属性将无法被修改
     - sql/sqlHash 和 partition_num/tablet_num/cardinality 不能同时被设置。举个例子,如果一个rule设置了partition_num,那么sql或者sqlHash将无法被修改
 ```sql
@@ -81,7 +81,7 @@ ALTER SQL_BLOCK_RULE test_rule PROPERTIES("sql"="select \\* from test_table","en
 ALTER SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "10","tablet_num"="300","enable"="true")
 ```
 
-- 删除SQL阻止规则,支持多规则,以`,`隔开,具体语法请参阅 [DROP SQL BLOCK RULR](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.html)
+- 删除SQL阻止规则,支持多规则,以`,`隔开,具体语法请参阅 [DROP SQL BLOCK RULE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
 ```
 DROP SQL_BLOCK_RULE test_rule1,test_rule2
 ```
diff --git a/docs/zh-CN/advanced/alter-table/replace-table.md b/docs/zh-CN/advanced/alter-table/replace-table.md
index 05ed331efe..b17fa4ec2f 100644
--- a/docs/zh-CN/advanced/alter-table/replace-table.md
+++ b/docs/zh-CN/advanced/alter-table/replace-table.md
@@ -28,7 +28,7 @@ under the License.
 
 在 0.14 版本中,Doris 支持对两个表进行原子的替换操作。 该操作仅适用于 OLAP 表。
 
-分区级别的替换操作,请参阅 [临时分区文档](../partition/table-tmp-partition.html)
+分区级别的替换操作,请参阅 [临时分区文档](../partition/table-tmp-partition.md)
 
 ## 语法说明
 
@@ -68,4 +68,4 @@ ALTER TABLE [db.]tbl1 REPLACE WITH TABLE tbl2
 
 1. 原子的覆盖写操作
 
-   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../partition/table-tmp-partition.html)。
+   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../partition/table-tmp-partition.md)。
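The atomic-overwrite workflow described above can be sketched with the `ALTER TABLE ... REPLACE WITH TABLE` syntax shown earlier in this file (table names here are placeholders):

```sql
CREATE TABLE db1.tbl1_new LIKE db1.tbl1;
-- ... load the new data into db1.tbl1_new ...
ALTER TABLE db1.tbl1 REPLACE WITH TABLE tbl1_new;
```

Depending on the `swap` property of the replace operation, the old table's data is either kept under the other name or dropped.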
diff --git a/docs/zh-CN/advanced/best-practice/import-analysis.md b/docs/zh-CN/advanced/best-practice/import-analysis.md
index dba8762a51..e120ef34c2 100644
--- a/docs/zh-CN/advanced/best-practice/import-analysis.md
+++ b/docs/zh-CN/advanced/best-practice/import-analysis.md
@@ -32,9 +32,9 @@ Doris 提供了一个图形化的命令以帮助用户更方便的分析一个
 
 ## 导入计划树
 
-如果你对 Doris 的查询计划树还不太了解,请先阅读之前的文章 [DORIS/最佳实践/查询分析](./query-analysis.html)。
+如果你对 Doris 的查询计划树还不太了解,请先阅读之前的文章 [DORIS/最佳实践/查询分析](./query-analysis.md)。
 
-一个 [Broker Load](../../data-operate/import/import-way/broker-load-manual.html) 请求的执行过程,也是基于 Doris 的查询框架的。一个Broker Load 作业会根据导入请求中 DATA INFILE 子句的个数讲作业拆分成多个子任务。每个子任务可以视为是一个独立的导入执行计划。一个导入计划的组成只会有一个 Fragment,其组成如下:
+一个 [Broker Load](../../data-operate/import/import-way/broker-load-manual.md) 请求的执行过程,也是基于 Doris 的查询框架的。一个 Broker Load 作业会根据导入请求中 DATA INFILE 子句的个数将作业拆分成多个子任务。每个子任务可以视为是一个独立的导入执行计划。一个导入计划的组成只会有一个 Fragment,其组成如下:
 
 ```sql
 ┌─────────────┐
@@ -167,4 +167,4 @@ mysql> show load profile "/";
 
    上图展示了子任务 980014623046410a-88e260f0c43031f1 中,Instance 980014623046410a-88e260f0c43031f5 的各个算子的具体 Profile。
 
-通过以上3个步骤,我们可以逐步排查一个导入任务的执行瓶颈。
\ No newline at end of file
+通过以上3个步骤,我们可以逐步排查一个导入任务的执行瓶颈。
diff --git a/docs/zh-CN/advanced/broker.md b/docs/zh-CN/advanced/broker.md
index 0244c2fd9c..b63432c5f4 100644
--- a/docs/zh-CN/advanced/broker.md
+++ b/docs/zh-CN/advanced/broker.md
@@ -67,9 +67,9 @@ Broker 在 Doris 系统架构中的位置如下:
 
 ## 需要 Broker 的操作
 
-1. [Broker Load](../data-operate/import/import-way/broker-load-manual.html)
-2. [数据导出(Export)](../data-operate/export/export-manual.html)
-3. [数据备份](../admin-manual/data-admin/backup.html)
+1. [Broker Load](../data-operate/import/import-way/broker-load-manual.md)
+2. [数据导出(Export)](../data-operate/export/export-manual.md)
+3. [数据备份](../admin-manual/data-admin/backup.md)
 
 ## Broker 信息
 
@@ -190,4 +190,4 @@ WITH BROKER "broker_name"
    )
    ```
 
-   关于HDFS集群的配置可以写入hdfs-site.xml文件中,用户使用Broker进程读取HDFS集群的信息时,只需要填写集群的文件路径名和认证信息即可。
\ No newline at end of file
+   关于HDFS集群的配置可以写入hdfs-site.xml文件中,用户使用Broker进程读取HDFS集群的信息时,只需要填写集群的文件路径名和认证信息即可。
diff --git a/docs/zh-CN/advanced/materialized-view.md b/docs/zh-CN/advanced/materialized-view.md
index f7c9cd05f1..4064f4686b 100644
--- a/docs/zh-CN/advanced/materialized-view.md
+++ b/docs/zh-CN/advanced/materialized-view.md
@@ -74,7 +74,7 @@ Doris 系统提供了一整套对物化视图的 DDL 语法,包括创建,查
 
 创建物化视图是一个异步的操作,也就是说用户成功提交创建任务后,Doris 会在后台对存量的数据进行计算,直到创建成功。
 
-具体的语法可查看[CREATE MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.html) 。
+具体的语法可查看[CREATE MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) 。
 
 ### 支持聚合函数
 
@@ -144,7 +144,7 @@ MySQL [test]> desc mv_test all;
 
 如果用户不再需要物化视图,则可以通过命令删除物化视图。
 
-具体的语法可查看[DROP MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.html) 
+具体的语法可查看[DROP MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.md) 
 
 
 
@@ -484,4 +484,4 @@ MySQL [test]> desc advertiser_view_record;
 
 ## 更多帮助
 
-关于物化视图使用的更多详细语法及最佳实践,请参阅 [CREATE MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) 和 [DROP MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP CREATE MATERIALIZED VIEW` 和`HELP DROP MATERIALIZED VIEW`  获取更多帮助信息。
+关于物化视图使用的更多详细语法及最佳实践,请参阅 [CREATE MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) 和 [DROP MATERIALIZED VIEW](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-MATERIALIZED-VIEW.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP CREATE MATERIALIZED VIEW` 和`HELP DROP MATERIALIZED VIEW`  获取更多帮助信息。
diff --git a/docs/zh-CN/advanced/partition/table-tmp-partition.md b/docs/zh-CN/advanced/partition/table-tmp-partition.md
index cb98169017..87c8960785 100644
--- a/docs/zh-CN/advanced/partition/table-tmp-partition.md
+++ b/docs/zh-CN/advanced/partition/table-tmp-partition.md
@@ -275,7 +275,7 @@ PROPERTIES (
 
 1. 原子的覆盖写操作
 
-   某些情况下,用户希望能够重写某一分区的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先创建一个对应的临时分区,将新的数据导入到临时分区后,通过替换操作,原子的替换原有分区,以达到目的。对于非分区表的原子覆盖写操作,请参阅[替换表文档](../../advanced/alter-table/replace-table.html)
+   某些情况下,用户希望能够重写某一分区的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先创建一个对应的临时分区,将新的数据导入到临时分区后,通过替换操作,原子的替换原有分区,以达到目的。对于非分区表的原子覆盖写操作,请参阅[替换表文档](../../advanced/alter-table/replace-table.md)
 
 2. 修改分桶数
 
@@ -283,4 +283,4 @@ PROPERTIES (
 
 3. 合并或分割分区
 
-   某些情况下,用户希望对分区的范围进行修改,比如合并两个分区,或将一个大分区分割成多个小分区。则用户可以先建立对应合并或分割后范围的临时分区,然后通过 `INSERT INTO` 命令将正式分区的数据导入到临时分区中,通过替换操作,原子的替换原有分区,以达到目的。
\ No newline at end of file
+   某些情况下,用户希望对分区的范围进行修改,比如合并两个分区,或将一个大分区分割成多个小分区。则用户可以先建立对应合并或分割后范围的临时分区,然后通过 `INSERT INTO` 命令将正式分区的数据导入到临时分区中,通过替换操作,原子的替换原有分区,以达到目的。
diff --git a/docs/zh-CN/advanced/resource.md b/docs/zh-CN/advanced/resource.md
index e8f1b43efe..df01e72a76 100644
--- a/docs/zh-CN/advanced/resource.md
+++ b/docs/zh-CN/advanced/resource.md
@@ -40,15 +40,15 @@ under the License.
 
 1. CREATE RESOURCE
 
-   该语句用于创建资源。具体操作可参考 [CREATE RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.html)。
+   该语句用于创建资源。具体操作可参考 [CREATE RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md)。
 
 2. DROP RESOURCE
 
-   该命令可以删除一个已存在的资源。具体操作见 [DROP RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-RESOURCE.html) 。
+   该命令可以删除一个已存在的资源。具体操作见 [DROP RESOURCE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-RESOURCE.md) 。
 
 3. SHOW RESOURCES
 
-   该命令可以查看用户有使用权限的资源。具体操作见  [SHOW RESOURCES](../sql-manual/sql-reference/Show-Statements/SHOW-RESOURCES.html)。
+   该命令可以查看用户有使用权限的资源。具体操作见  [SHOW RESOURCES](../sql-manual/sql-reference/Show-Statements/SHOW-RESOURCES.md)。
 
 ## 支持的资源
 
@@ -127,7 +127,7 @@ PROPERTIES
 
 `driver`: 标示外部表使用的driver动态库,引用该resource的ODBC外表必填,旧的mysql外表选填。
 
-具体如何使用可以,可以参考[ODBC of Doris](../ecosystem/external-table/odbc-of-doris.html)
+具体如何使用,可以参考[ODBC of Doris](../ecosystem/external-table/odbc-of-doris.md)
 
 #### 示例
 
diff --git a/docs/zh-CN/advanced/small-file-mgr.md b/docs/zh-CN/advanced/small-file-mgr.md
index 749912b8bd..5325232287 100644
--- a/docs/zh-CN/advanced/small-file-mgr.md
+++ b/docs/zh-CN/advanced/small-file-mgr.md
@@ -75,7 +75,7 @@ Examples:
 
 ### SHOW FILE
 
-该语句可以查看已经创建成功的文件,具体操作可查看 [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.html)。
+该语句可以查看已经创建成功的文件,具体操作可查看 [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md)。
 
 Examples:
 
@@ -131,4 +131,4 @@ Examples:
 
 ## 更多帮助
 
-关于文件管理器使用的更多详细语法及最佳实践,请参阅 [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.html) 、[DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.html) 和 [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP CREATE FILE` 、`HELP DROP FILE`和`HELP SHOW FILE`  获取更多帮助信息。
+关于文件管理器使用的更多详细语法及最佳实践,请参阅 [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md) 、[DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md) 和 [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP CREATE FILE` 、`HELP DROP FILE`和`HELP SHOW FILE`  获取更多帮助信息。
diff --git a/docs/zh-CN/advanced/variables.md b/docs/zh-CN/advanced/variables.md
index 901cf4f8d8..1ad81c45a1 100644
--- a/docs/zh-CN/advanced/variables.md
+++ b/docs/zh-CN/advanced/variables.md
@@ -156,11 +156,11 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `disable_colocate_join`
 
-  控制是否启用 [Colocation Join](./join-optimization/colocation-join.html) 功能。默认为 false,表示启用该功能。true 表示禁用该功能。当该功能被禁用后,查询规划将不会尝试执行 Colocation Join。
+  控制是否启用 [Colocation Join](./join-optimization/colocation-join.md) 功能。默认为 false,表示启用该功能。true 表示禁用该功能。当该功能被禁用后,查询规划将不会尝试执行 Colocation Join。
 
 - `enable_bucket_shuffle_join`
 
-  控制是否启用 [Bucket Shuffle Join](./join-optimization/bucket-shuffle-join.html) 功能。默认为 true,表示启用该功能。false 表示禁用该功能。当该功能被禁用后,查询规划将不会尝试执行 Bucket Shuffle Join。
+  控制是否启用 [Bucket Shuffle Join](./join-optimization/bucket-shuffle-join.md) 功能。默认为 true,表示启用该功能。false 表示禁用该功能。当该功能被禁用后,查询规划将不会尝试执行 Bucket Shuffle Join。
 
 - `disable_streaming_preaggregations`
 
@@ -168,7 +168,7 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `enable_insert_strict`
 
-  用于设置通过 INSERT 语句进行数据导入时,是否开启 `strict` 模式。默认为 false,即不开启 `strict` 模式。关于该模式的介绍,可以参阅 [这里](../data-operate/import/import-way/insert-into-manual.html)。
+  用于设置通过 INSERT 语句进行数据导入时,是否开启 `strict` 模式。默认为 false,即不开启 `strict` 模式。关于该模式的介绍,可以参阅 [这里](../data-operate/import/import-way/insert-into-manual.md)。
 
 - `enable_spilling`
 
@@ -289,11 +289,11 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `max_pushdown_conditions_per_column`
 
-  该变量的具体含义请参阅 [BE 配置项](../admin-manual/config/be-config.html) 中 `max_pushdown_conditions_per_column` 的说明。该变量默认置为 -1,表示使用 `be.conf` 中的配置值。如果设置大于 0,则当前会话中的查询会使用该变量值,而忽略 `be.conf` 中的配置值。
+  该变量的具体含义请参阅 [BE 配置项](../admin-manual/config/be-config.md) 中 `max_pushdown_conditions_per_column` 的说明。该变量默认置为 -1,表示使用 `be.conf` 中的配置值。如果设置大于 0,则当前会话中的查询会使用该变量值,而忽略 `be.conf` 中的配置值。
 
 - `max_scan_key_num`
 
-  该变量的具体含义请参阅 [BE 配置项](../admin-manual/config/be-config.html) 中 `doris_max_scan_key_num` 的说明。该变量默认置为 -1,表示使用 `be.conf` 中的配置值。如果设置大于 0,则当前会话中的查询会使用该变量值,而忽略 `be.conf` 中的配置值。
+  该变量的具体含义请参阅 [BE 配置项](../admin-manual/config/be-config.md) 中 `doris_max_scan_key_num` 的说明。该变量默认置为 -1,表示使用 `be.conf` 中的配置值。如果设置大于 0,则当前会话中的查询会使用该变量值,而忽略 `be.conf` 中的配置值。
 
 - `net_buffer_length`
 
@@ -345,7 +345,7 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `sql_mode`
 
-  用于指定 SQL 模式,以适应某些 SQL 方言。关于 SQL 模式,可参阅 [这里](https://doris.apache.org/zh-CN/administrator-guide/sql-mode.html)。
+  用于指定 SQL 模式,以适应某些 SQL 方言。
 
 - `sql_safe_updates`
 
@@ -361,7 +361,7 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `time_zone`
 
-  用于设置当前会话的时区。时区会对某些时间函数的结果产生影响。关于时区,可以参阅 [这里](./time-zone.html)。
+  用于设置当前会话的时区。时区会对某些时间函数的结果产生影响。关于时区,可以参阅 [这里](./time-zone.md)。
 
 - `tx_isolation`
 
@@ -487,4 +487,4 @@ SELECT /*+ SET_VAR(query_timeout = 1, enable_partition_cache=true) */ sleep(3);
 
 - `enable_infer_predicate`
 
-  用于控制是否进行谓词推导。取值有两种:true 和 false。默认情况下关闭,系统不在进行谓词推导,采用原始的谓词进行相关操作。设置为 true 后,进行谓词扩展。
\ No newline at end of file
+  用于控制是否进行谓词推导。取值有两种:true 和 false。默认情况下关闭,系统不再进行谓词推导,采用原始的谓词进行相关操作。设置为 true 后,进行谓词扩展。
diff --git a/docs/zh-CN/benchmark/ssb.md b/docs/zh-CN/benchmark/ssb.md
index 5afb35d4b8..38207d02df 100644
--- a/docs/zh-CN/benchmark/ssb.md
+++ b/docs/zh-CN/benchmark/ssb.md
@@ -36,7 +36,7 @@ under the License.
 
 ## 环境准备
 
-请先参照 [官方文档](../install/install-deploy.html) 进行 Doris 的安装部署,以获得一个正常运行中的 Doris 集群(至少包含 1 FE,1 BE)。
+请先参照 [官方文档](../install/install-deploy.md) 进行 Doris 的安装部署,以获得一个正常运行中的 Doris 集群(至少包含 1 FE,1 BE)。
 
 以下文档中涉及的脚本都存放在 Doris 代码库的 `tools/ssb-tools/` 下。
 
@@ -178,5 +178,5 @@ SSB 测试集共 4 组 14 个 SQL。查询语句在 [queries/](https://github.co
     >
     > 注4:Parallelism 表示查询并发度,通过 `set parallel_fragment_exec_instance_num=8` 设置。
     >
-    > 注5:Runtime Filter Mode 是 Runtime Filter 的类型,通过 `set runtime_filter_type="BLOOM_FILTER"` 设置。([Runtime Filter](../advanced/join-optimization/runtime-filter.html) 功能对 SSB 测试集效果显著。因为该测试集中,Join 算子右表的数据可以对左表起到很好的过滤作用。你可以尝试通过 `set runtime_filter_mode=off` 关闭该功能,看看查询延迟的变化。)
+    > 注5:Runtime Filter Mode 是 Runtime Filter 的类型,通过 `set runtime_filter_type="BLOOM_FILTER"` 设置。([Runtime Filter](../advanced/join-optimization/runtime-filter.md) 功能对 SSB 测试集效果显著。因为该测试集中,Join 算子右表的数据可以对左表起到很好的过滤作用。你可以尝试通过 `set runtime_filter_mode=off` 关闭该功能,看看查询延迟的变化。)
 
diff --git a/docs/zh-CN/community/how-to-contribute/how-to-contribute.md b/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
index 3023fcaf14..7dc2418b3f 100644
--- a/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
+++ b/docs/zh-CN/community/how-to-contribute/how-to-contribute.md
@@ -78,7 +78,7 @@ under the License.
 
 ## 修改代码和提交PR(Pull Request)
 
-您可以下载代码,编译安装,部署运行试一试(可以参考[编译文档](../../installing/source-install/compilation.md)),看看是否与您预想的一样工作。如果有问题,您可以直接联系我们,提 Issue 或者通过阅读和分析源代码自己修复。
+您可以下载代码,编译安装,部署运行试一试(可以参考[编译文档](../../install/source-install/compilation.md)),看看是否与您预想的一样工作。如果有问题,您可以直接联系我们,提 Issue 或者通过阅读和分析源代码自己修复。
 
 无论是修复 Bug 还是增加 Feature,我们都非常欢迎。如果您希望给 Doris 提交代码,您需要从 GitHub 上 fork 代码库至您的项目空间下,为您提交的代码创建一个新的分支,添加源项目为upstream,并提交PR。
 提交PR的方式可以参考文档 [Pull Request](./pull-request.md)。
diff --git a/docs/zh-CN/community/release-and-verify/release-doris-manager.md b/docs/zh-CN/community/release-and-verify/release-doris-manager.md
index 7e1657a3b8..f10f8e35d7 100644
--- a/docs/zh-CN/community/release-and-verify/release-doris-manager.md
+++ b/docs/zh-CN/community/release-and-verify/release-doris-manager.md
@@ -299,4 +299,4 @@ xxx
 
 ## 完成发布
 
-请参阅 [完成发布](https://doris.apache.org/zh-CN/community/release-and-verify/release-complete.html) 文档完成所有发布流程。
+请参阅 [完成发布](./release-complete.md) 文档完成所有发布流程。
diff --git a/docs/zh-CN/community/release-and-verify/release-verify.md b/docs/zh-CN/community/release-and-verify/release-verify.md
index db2873886c..4030152baa 100644
--- a/docs/zh-CN/community/release-and-verify/release-verify.md
+++ b/docs/zh-CN/community/release-and-verify/release-verify.md
@@ -98,6 +98,6 @@ INFO Totally checked 5611 files, valid: 3926, invalid: 0, ignored: 1685, fixed:
 
 请参阅各组件的编译文档验证编译。
 
-* Doris 主代码编译,请参阅 [编译文档](../../install/source-install//compilation.md)
+* Doris 主代码编译,请参阅 [编译文档](../../install/source-install/compilation.md)
 * Flink Doris Connector 编译,请参阅 [编译文档](../../ecosystem/flink-doris-connector.md)
 * Spark Doris Connector 编译,请参阅 [编译文档](../../ecosystem/spark-doris-connector.md)
diff --git a/docs/zh-CN/data-operate/export/export-manual.md b/docs/zh-CN/data-operate/export/export-manual.md
index 3acc22bfaf..94d7de39b8 100644
--- a/docs/zh-CN/data-operate/export/export-manual.md
+++ b/docs/zh-CN/data-operate/export/export-manual.md
@@ -90,11 +90,11 @@ Doris 会首先在指定的远端存储的路径中,建立一个名为 `__dori
 
 ### Broker 参数
 
-Export 需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.html)
+Export 需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../advanced/broker.md)
 
 ## 开始导出
 
-Export 的详细用法可参考 [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html) 。
+Export 的详细用法可参考 [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) 。
 
 ### 导出到HDFS
 
@@ -128,7 +128,7 @@ WITH BROKER "hdfs"
 
 ### 查看导出状态
 
-提交作业后,可以通过  [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html) 命令查询导入作业状态。结果举例如下:
+提交作业后,可以通过  [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) 命令查询导入作业状态。结果举例如下:
 
 ```sql
 mysql> show EXPORT\G;
@@ -185,7 +185,7 @@ FinishTime: 2019-06-25 17:08:34
 * 在 Export 作业运行过程中,如果 FE 发生重启或切主,则 Export 作业会失败,需要用户重新提交。
 * 如果 Export 作业运行失败,在远端存储中产生的 `__doris_export_tmp_xxx` 临时目录,以及已经生成的文件不会被删除,需要用户手动删除。
 * 如果 Export 作业运行成功,在远端存储中产生的 `__doris_export_tmp_xxx` 目录,根据远端存储的文件系统语义,可能会保留,也可能会被清除。比如在百度对象存储(BOS)中,通过 rename 操作将一个目录中的最后一个文件移走后,该目录也会被删除。如果该目录没有被清除,用户可以手动清除。
-* 当 Export 运行完成后(成功或失败),FE 发生重启或切主,则  [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html) 展示的作业的部分信息会丢失,无法查看。
+* 当 Export 运行完成后(成功或失败),FE 发生重启或切主,则  [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) 展示的作业的部分信息会丢失,无法查看。
 * Export 作业只会导出 Base 表的数据,不会导出 Rollup Index 的数据。
 * Export 作业会扫描数据,占用 IO 资源,可能会影响系统的查询延迟。
 
@@ -200,4 +200,4 @@ FinishTime: 2019-06-25 17:08:34
 
 ## 更多帮助
 
-关于 Export 使用的更多详细语法及最佳实践,请参阅 [Export](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP EXPORT` 获取更多帮助信息。
+关于 Export 使用的更多详细语法及最佳实践,请参阅 [Export](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP EXPORT` 获取更多帮助信息。
diff --git a/docs/zh-CN/data-operate/export/outfile.md b/docs/zh-CN/data-operate/export/outfile.md
index d429ab3519..d52177df3d 100644
--- a/docs/zh-CN/data-operate/export/outfile.md
+++ b/docs/zh-CN/data-operate/export/outfile.md
@@ -26,7 +26,7 @@ under the License.
 
 # 导出查询结果集
 
-本文档介绍如何使用 [SELECT INTO OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html) 命令进行查询结果的导出操作。
+本文档介绍如何使用 [SELECT INTO OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md) 命令进行查询结果的导出操作。
 
 ## 示例
 
@@ -55,7 +55,7 @@ select * from tbl1 limit 10
 INTO OUTFILE "file:///home/work/path/result_";
 ```
 
-更多用法可查看[OUTFILE文档](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html)。
+更多用法可查看[OUTFILE文档](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md)。
 
 ## 并发导出
 
@@ -157,4 +157,4 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = Open broker writer failed ...
 
 ## 更多帮助
 
-关于 OUTFILE 使用的更多详细语法及最佳实践,请参阅 [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP OUTFILE` 获取更多帮助信息。
+关于 OUTFILE 使用的更多详细语法及最佳实践,请参阅 [OUTFILE](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP OUTFILE` 获取更多帮助信息。
diff --git a/docs/zh-CN/data-operate/import/import-scenes/external-storage-load.md b/docs/zh-CN/data-operate/import/import-scenes/external-storage-load.md
index d4bd6bf15b..9887459ff9 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/external-storage-load.md
@@ -36,7 +36,7 @@ under the License.
 
 ### 开始导入
 
-Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operate/import/import-way/broker-load-manual.html) 基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
+Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operate/import/import-way/broker-load-manual.md) 基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
 
 ```
   LOAD LABEL db_name.label_name 
@@ -49,7 +49,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
 
 1. 创建一张表
 
-   通过 `CREATE TABLE` 命令在`demo`创建一张表用于存储待导入的数据。具体的导入方式请查阅 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 命令手册。示例如下:
+   通过 `CREATE TABLE` 命令在`demo`创建一张表用于存储待导入的数据。具体的建表语法请查阅 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 命令手册。示例如下:
 
    ```sql
    CREATE TABLE IF NOT EXISTS load_hdfs_file_test
@@ -82,7 +82,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
        "max_filter_ratio"="0.1"
        );
    ```
+    关于参数介绍,请参阅[Broker Load](../../../data-operate/import/import-way/broker-load-manual.md);HA 集群的创建语法,可通过 `HELP BROKER LOAD` 查看
+    关于参数介绍,请参阅[Broker Load](../../../data-operate/import/import-way/broker-load-manual.md),HA集群的创建语法,通过`HELP BROKER LOAD`查看
   
 3. 查看导入状态
    
@@ -134,7 +134,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
 其他云存储系统可以相应的文档找到与S3兼容的相关信息
 
 ### 开始导入
-导入方式和[Broker Load](../../../data-operate/import/import-way/broker-load-manual.html) 基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
+导入方式和[Broker Load](../../../data-operate/import/import-way/broker-load-manual.md) 基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
 ```
     WITH S3
     (
diff --git a/docs/zh-CN/data-operate/import/import-scenes/external-table-load.md b/docs/zh-CN/data-operate/import/import-scenes/external-table-load.md
index b54253c322..19d85e419e 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/external-table-load.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/external-table-load.md
@@ -39,7 +39,7 @@ Doris 可以创建通过 ODBC 协议访问的外部表。创建完成后,可
 
 ## 创建外部表
 
-创建 ODBC 外部表的详细介绍请参阅 [CREATE EXTERNAL TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) 语法帮助手册。
+创建 ODBC 外部表的详细介绍请参阅 [CREATE EXTERNAL TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md) 语法帮助手册。
 
 这里仅通过示例说明使用方式。
 
@@ -61,7 +61,7 @@ Doris 可以创建通过 ODBC 协议访问的外部表。创建完成后,可
    );
    ```
 
-这里我们创建了一个名为 `oracle_test_odbc` 的 Resource,其类型为 `odbc_catalog`,表示这是一个用于存储 ODBC 信息的 Resource。`odbc_type` 为 `oracle`,表示这个 OBDC Resource 是用于连接 Oracle 数据库的。关于其他类型的资源,具体可参阅 [资源管理](../../../advanced/resource.html) 文档。
+这里我们创建了一个名为 `oracle_test_odbc` 的 Resource,其类型为 `odbc_catalog`,表示这是一个用于存储 ODBC 信息的 Resource。`odbc_type` 为 `oracle`,表示这个 ODBC Resource 是用于连接 Oracle 数据库的。关于其他类型的资源,具体可参阅 [资源管理](../../../advanced/resource.md) 文档。
 
 2. 创建外部表
 
@@ -104,7 +104,7 @@ PROPERTIES (
    );
    ```
 
-   关于创建 Doris 表的详细说明,请参阅 [CREATE-TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 语法帮助。
+   关于创建 Doris 表的详细说明,请参阅 [CREATE-TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 语法帮助。
 
 2. 导入数据 (从 `ext_oracle_demo`表 导入到 `doris_oralce_tbl` 表)
 
@@ -123,6 +123,6 @@ PROPERTIES (
 
 ## 更多帮助
 
-关于 CREATE EXTERNAL TABLE 的更多详细语法和最佳实践,请参阅 [CREATE EXTERNAL TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) 命令手册。
+关于 CREATE EXTERNAL TABLE 的更多详细语法和最佳实践,请参阅 [CREATE EXTERNAL TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md) 命令手册。
 
 Doris ODBC 更多使用示例请参考 [文章列表](https://doris.apache.org/zh-CN/article/article-list.html) 。
diff --git a/docs/zh-CN/data-operate/import/import-scenes/jdbc-load.md b/docs/zh-CN/data-operate/import/import-scenes/jdbc-load.md
index 3dd48b41ff..8aa52fcbba 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/jdbc-load.md
@@ -35,7 +35,7 @@ INSERT 语句的使用方式和 MySQL 等数据库中 INSERT 语句的使用方
 * INSERT INTO table VALUES(...)
 ```
 
-这里我们仅介绍第二种方式。关于 INSERT 命令的详细说明,请参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 命令文档。
+这里我们仅介绍第二种方式。关于 INSERT 命令的详细说明,请参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 命令文档。
 
 ## 单次写入
 
@@ -160,4 +160,4 @@ public class DorisJDBCDemo {
 
    前面提到,我们建议在使用 INSERT 导入数据时,采用 ”批“ 的方式进行导入,而不是单条插入。
 
-   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](./load-atomicity.html#label-机制) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 文档。
+   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](./load-atomicity.md#label-机制) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 文档。
diff --git a/docs/zh-CN/data-operate/import/import-scenes/kafka-load.md b/docs/zh-CN/data-operate/import/import-scenes/kafka-load.md
index 7854669743..0b26962076 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/kafka-load.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/kafka-load.md
@@ -42,14 +42,14 @@ Doris 自身能够保证不丢不重的订阅 Kafka 中的消息,即 `Exactly-
 1. 支持无认证的 Kafka 访问,以及通过 SSL 方式认证的 Kafka 集群。
 2. 支持的消息格式如下:
    - csv 文本格式。每一个 message 为一行,且行尾**不包含**换行符。
-   - Json 格式,详见 [导入 Json 格式数据](../import-way/load-json-format.html)。
+   - Json 格式,详见 [导入 Json 格式数据](../import-way/load-json-format.md)。
 3. 仅支持 Kafka 0.10.0.0(含) 以上版本。
 
 ### 访问 SSL 认证的 Kafka 集群
 
 例行导入功能支持无认证的 Kafka 集群,以及通过 SSL 认证的 Kafka 集群。
 
-访问 SSL 认证的 Kafka 集群需要用户提供用于认证 Kafka Broker 公钥的证书文件(ca.pem)。如果 Kafka 集群同时开启了客户端认证,则还需提供客户端的公钥(client.pem)、密钥文件(client.key),以及密钥密码。这里所需的文件需要先通过 `CREAE FILE` 命令上传到 Plao 中,并且 catalog 名称为 `kafka`。`CREATE FILE` 命令的具体帮助可以参见 [CREATE FILE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.html) 命令手册。这里给出示例:
+访问 SSL 认证的 Kafka 集群需要用户提供用于认证 Kafka Broker 公钥的证书文件(ca.pem)。如果 Kafka 集群同时开启了客户端认证,则还需提供客户端的公钥(client.pem)、密钥文件(client.key),以及密钥密码。这里所需的文件需要先通过 `CREATE FILE` 命令上传到 Doris 中,并且 catalog 名称为 `kafka`。`CREATE FILE` 命令的具体帮助可以参见 [CREATE FILE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md) 命令手册。这里给出示例:
 
 - 上传文件
 
@@ -59,11 +59,11 @@ Doris 自身能够保证不丢不重的订阅 Kafka 中的消息,即 `Exactly-
   CREATE FILE "client.pem" PROPERTIES("url" = "https://example_url/kafka-key/client.pem", "catalog" = "kafka");
   ```
 
-上传完成后,可以通过 [SHOW FILES](../../../sql-manual/sql-reference/Show-Statements/SHOW-FILE.html) 命令查看已上传的文件。
+上传完成后,可以通过 [SHOW FILES](../../../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) 命令查看已上传的文件。
 
 ### 创建例行导入作业
 
-创建例行导入任务的具体命令,请参阅 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) 命令手册。这里给出示例:
+创建例行导入任务的具体命令,请参阅 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md) 命令手册。这里给出示例:
 
 1. 访问无认证的 Kafka 集群
 
@@ -113,22 +113,22 @@ Doris 自身能够保证不丢不重的订阅 Kafka 中的消息,即 `Exactly-
 
 ### 查看导入作业状态
 
-查看**作业**状态的具体命令和示例请参阅 [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD.html) 命令文档。
+查看**作业**状态的具体命令和示例请参阅 [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD.md) 命令文档。
 
-查看某个作业的**任务**运行状态的具体命令和示例请参阅 [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD-TASK.html) 命令文档。
+查看某个作业的**任务**运行状态的具体命令和示例请参阅 [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference/Show-Statements/SHOW-ROUTINE-LOAD-TASK.md) 命令文档。
 
 只能查看当前正在运行中的任务,已结束和未开始的任务无法查看。
 
 ### 修改作业属性
 
-用户可以修改已经创建的作业的部分属性。具体说明请参阅 [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.html) 命令手册。
+用户可以修改已经创建的作业的部分属性。具体说明请参阅 [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.md) 命令手册。
 
 ### 作业控制
 
 用户可以通过 `STOP/PAUSE/RESUME` 三个命令来控制作业的停止,暂停和重启。
 
-具体命令请参阅 [STOP ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html),[PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html),[RESUME ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) 命令文档。
+具体命令请参阅 [STOP ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.md),[PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.md),[RESUME ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.md) 命令文档。
 
 ## 更多帮助
 
-关于 ROUTINE LOAD 的更多详细语法和最佳实践,请参阅 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) 命令手册。
+关于 ROUTINE LOAD 的更多详细语法和最佳实践,请参阅 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md) 命令手册。
diff --git a/docs/zh-CN/data-operate/import/import-scenes/load-atomicity.md b/docs/zh-CN/data-operate/import/import-scenes/load-atomicity.md
index 6279914937..b25aba9a94 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/load-atomicity.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/load-atomicity.md
@@ -28,9 +28,9 @@ under the License.
 
 Doris 中的所有导入操作都有原子性保证,即一个导入作业中的数据要么全部成功,要么全部失败。不会出现仅部分数据导入成功的情况。
 
-在 [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 中我们也可以实现多多表的原子性导入。
+在 [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 中我们也可以实现多表的原子性导入。
 
-对于表所附属的 [物化视图](../../../advanced/materialized-view.html),也同时保证和基表的原子性和一致性。
+对于表所附属的 [物化视图](../../../advanced/materialized-view.md),也同时保证和基表的原子性和一致性。
 
 ## Label 机制
 
diff --git a/docs/zh-CN/data-operate/import/import-scenes/load-data-convert.md b/docs/zh-CN/data-operate/import/import-scenes/load-data-convert.md
index a9650cdf7d..875798f262 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/load-data-convert.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/load-data-convert.md
@@ -28,7 +28,7 @@ under the License.
 
 ## 支持的导入方式
 
-- [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html)
+- [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md)
 
   ```sql
   LOAD LABEL example_db.label1
@@ -48,7 +48,7 @@ under the License.
   );
   ```
 
-- [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html)
+- [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md)
 
   ```bash
   curl
@@ -60,7 +60,7 @@ under the License.
   http://host:port/api/testDb/testTbl/_stream_load
   ```
 
-- [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html)
+- [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md)
 
   ```sql
   CREATE ROUTINE LOAD example_db.label1 ON my_table
diff --git a/docs/zh-CN/data-operate/import/import-scenes/load-strict-mode.md b/docs/zh-CN/data-operate/import/import-scenes/load-strict-mode.md
index 2aeccef410..8dcc2fd043 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/load-strict-mode.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/load-strict-mode.md
@@ -37,7 +37,7 @@ under the License.
 
 不同的导入方式设置严格模式的方式不尽相同。
 
-1. [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html)
+1. [BROKER LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md)
 
    ```sql
    LOAD LABEL example_db.label1
@@ -58,7 +58,7 @@ under the License.
    )
    ```
 
-2. [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html)
+2. [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md)
 
    ```bash
    curl --location-trusted -u user:passwd \
@@ -67,7 +67,7 @@ under the License.
    http://host:port/api/example_db/my_table/_stream_load
    ```
 
-3. [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html)
+3. [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md)
 
    ```sql
    CREATE ROUTINE LOAD example_db.test_job ON my_table
@@ -82,9 +82,9 @@ under the License.
    );
    ```
 
-4. [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html)
+4. [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md)
 
-   通过[会话变量](../../../advanced/variables.html)设置:
+   通过[会话变量](../../../advanced/variables.md)设置:
 
    ```sql
    SET enable_insert_strict = true;
diff --git a/docs/zh-CN/data-operate/import/import-scenes/local-file-load.md b/docs/zh-CN/data-operate/import/import-scenes/local-file-load.md
index 34fc28258a..35753e1ec0 100644
--- a/docs/zh-CN/data-operate/import/import-scenes/local-file-load.md
+++ b/docs/zh-CN/data-operate/import/import-scenes/local-file-load.md
@@ -50,7 +50,7 @@ PUT /api/{db}/{table}/_stream_load
 
 1. 创建一张表
 
-   通过 `CREATE TABLE` 命令在`demo`创建一张表用于存储待导入的数据。具体的导入方式请查阅 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 命令手册。示例如下:
+   通过 `CREATE TABLE` 命令在`demo`创建一张表用于存储待导入的数据。具体的建表语法请查阅 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 命令手册。示例如下:
 
    ```sql
    CREATE TABLE IF NOT EXISTS load_local_file_test
@@ -75,7 +75,7 @@ PUT /api/{db}/{table}/_stream_load
    - host:port 为 BE 的 HTTP 协议端口,默认是 8040,可以在 Doris 集群 WEB UI页面查看。
    - label: 可以在 Header 中指定 Label 唯一标识这个导入任务。
 
-   关于 Stream Load 命令的更多高级操作,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) 命令文档。
+   关于 Stream Load 命令的更多高级操作,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 命令文档。
 
 3. 等待导入结果
 
@@ -103,7 +103,7 @@ PUT /api/{db}/{table}/_stream_load
    ```
 
    - `Status` 字段状态为 `Success` 即表示导入成功。
-   - 其他字段的详细介绍,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) 命令文档。
+   - 其他字段的详细介绍,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 命令文档。
 
 ## 导入建议
 
diff --git a/docs/zh-CN/data-operate/import/import-way/binlog-load-manual.md b/docs/zh-CN/data-operate/import/import-way/binlog-load-manual.md
index c35b6d32f3..63f0f0ea64 100644
--- a/docs/zh-CN/data-operate/import/import-way/binlog-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/binlog-load-manual.md
@@ -337,7 +337,7 @@ canal client调用get命令时,canal server会产生数据batch发送给client
 
 Binlog Load只能支持Unique类型的目标表,且必须激活目标表的Batch Delete功能。
 
-开启Batch Delete的方法可以参考[ALTER TABLE PROPERTY](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.html)中的批量删除功能。
+开启Batch Delete的方法可以参考[ALTER TABLE PROPERTY](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md)中的批量删除功能。
 
 示例:
 
@@ -382,7 +382,7 @@ FROM BINLOG
 );
 ```
 
-创建数据同步作业的的详细语法可以连接到 Doris 后,[CREATE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.html) 查看语法帮助。这里主要详细介绍,创建作业时的注意事项。
+创建数据同步作业的详细语法可以连接到 Doris 后,通过 [CREATE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md) 查看语法帮助。这里主要详细介绍,创建作业时的注意事项。
 
 语法:
 ```
@@ -420,7 +420,7 @@ binlog_desc
 
 ### 查看作业状态
 
-查看作业状态的具体命令和示例可以通过 [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.html) 命令查看。
+查看作业状态的具体命令和示例可以通过 [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.md) 命令查看。
 
 返回结果集的参数意义如下:
 
@@ -462,11 +462,11 @@ binlog_desc
 
 ### 控制作业
 
-用户可以通过 STOP/PAUSE/RESUME 三个命令来控制作业的停止,暂停和恢复。可以通过 [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.html) ; [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.html); 以及 [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.html); 
+用户可以通过 STOP/PAUSE/RESUME 三个命令来控制作业的停止,暂停和恢复。可以通过 [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.md) ; [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.md); 以及 [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.md); 
 
 ## 案例实战
 
-[Apache Doris Binlog Load使用方法及示例](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.html)
+[Apache Doris Binlog Load使用方法及示例](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.md)
 
 ## 相关参数
 
@@ -540,5 +540,5 @@ binlog_desc
 
 ## 更多帮助
 
-关于 Binlog Load 使用的更多详细语法及最佳实践,请参阅 [Binlog Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BINLOG` 获取更多帮助信息。
+关于 Binlog Load 使用的更多详细语法及最佳实践,请参阅 [Binlog Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BINLOG` 获取更多帮助信息。
 
diff --git a/docs/zh-CN/data-operate/import/import-way/broker-load-manual.md b/docs/zh-CN/data-operate/import/import-way/broker-load-manual.md
index a86966ec00..feb30e39be 100644
--- a/docs/zh-CN/data-operate/import/import-way/broker-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/broker-load-manual.md
@@ -26,9 +26,9 @@ under the License.
 
 # Broker Load
 
-Broker load 是一个异步的导入方式,支持的数据源取决于 [Broker](../../../advanced/broker.html) 进程支持的数据源。
+Broker load 是一个异步的导入方式,支持的数据源取决于 [Broker](../../../advanced/broker.md) 进程支持的数据源。
 
-用户需要通过 MySQL 协议 创建 [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 导入,并通过查看导入命令检查导入结果。
+用户需要通过 MySQL 协议 创建 [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 导入,并通过查看导入命令检查导入结果。
 
 ## 适用场景
 
@@ -76,7 +76,7 @@ BE 在执行的过程中会从 Broker 拉取数据,在对数据 transform 之
 
 ## 开始导入
 
-下面我们通过几个实际的场景示例来看 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 的使用
+下面我们通过几个实际的场景示例来看 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 的使用
 
 ### Hive 分区表的数据导入
 
@@ -109,7 +109,7 @@ lines terminated by '\n'
 load data local inpath '/opt/custorm' into table ods_demo_detail;
 ```
 
-2. 创建 Doris 表,具体建表语法参照:[CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+2. 创建 Doris 表,具体建表语法参照:[CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 ```
 CREATE TABLE `doris_ods_test_detail` (
@@ -147,7 +147,7 @@ PROPERTIES (
 
 3. 开始导入数据
 
-   具体语法参照: [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 
+   具体语法参照: [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 
 ```sql
 LOAD LABEL broker_load_2022_03_23
 (
@@ -253,13 +253,13 @@ LOAD LABEL demo.label_20220402
         );
 ```
 
-这里的具体 参数可以参照:  [Broker](../../../advanced/broker.html)  及 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 文档
+这里的具体 参数可以参照:  [Broker](../../../advanced/broker.md)  及 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 文档
 
 ## 查看导入状态
 
 我们可以通过下面的命令查看上面导入任务的状态信息,
 
-具体的查看导入状态的语法参考 [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.html)
+具体的查看导入状态的语法参考 [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.md)
 
 ```sql
 mysql> show load order by createtime desc limit 1\G;
@@ -284,7 +284,7 @@ LoadFinishTime: 2022-04-01 18:59:11
 
 ## 取消导入
 
-当 Broker load 作业状态不为 CANCELLED 或 FINISHED 时,可以被用户手动取消。取消时需要指定待取消导入任务的 Label 。取消导入命令语法可执行 [CANCEL LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CANCEL-LOAD.html) 查看。
+当 Broker load 作业状态不为 CANCELLED 或 FINISHED 时,可以被用户手动取消。取消时需要指定待取消导入任务的 Label 。取消导入命令语法可执行 [CANCEL LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CANCEL-LOAD.md) 查看。
 
 例如:撤销数据库 demo 上, label 为 broker_load_2022_03_23 的导入作业
 
@@ -296,7 +296,7 @@ CANCEL LOAD FROM demo WHERE LABEL = "broker_load_2022_03_23";
 
 ###  Broker 参数
 
-Broker Load 需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../../advanced/broker.html) 。
+Broker Load 需要借助 Broker 进程访问远端存储,不同的 Broker 需要提供不同的参数,具体请参阅 [Broker文档](../../../advanced/broker.md) 。
 
 ### FE 配置
 
@@ -395,7 +395,7 @@ FE 的配置参数 `async_loading_load_task_pool_size` 用于限制同时运行
 
 可以在提交 LOAD 作业前,先执行 `set enable_profile=true` 打开会话变量。然后提交导入作业。待导入作业完成后,可以在 FE 的 web 页面的 `Queris` 标签中查看到导入作业的 Profile。
 
-可以查看 [SHOW LOAD PROFILE](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE.html) 帮助文档,获取更多使用帮助信息。
+可以查看 [SHOW LOAD PROFILE](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD-PROFILE.md) 帮助文档,获取更多使用帮助信息。
 
 这个 Profile 可以帮助分析导入作业的运行状态。
 
@@ -434,4 +434,4 @@ FE 的配置参数 `async_loading_load_task_pool_size` 用于限制同时运行
 
 ## 更多帮助
 
-关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
+关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
diff --git a/docs/zh-CN/data-operate/import/import-way/insert-into-manual.md b/docs/zh-CN/data-operate/import/import-way/insert-into-manual.md
index 34d19ac5c7..dae32aa340 100644
--- a/docs/zh-CN/data-operate/import/import-way/insert-into-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/insert-into-manual.md
@@ -59,7 +59,7 @@ INSERT INTO tbl1 VALUES ("qweasdzxcqweasdzxc"), ("a");
 > SELECT k1 FROM cte1 JOIN cte2 WHERE cte1.k1 = 1;
 > ```
 
-具体的参数说明,你可以参照 [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 命令或者执行`HELP INSERT` 来查看其帮助文档以便更好的使用这种导入方式。
+具体的参数说明,你可以参照 [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 命令或者执行`HELP INSERT` 来查看其帮助文档以便更好的使用这种导入方式。
 
 Insert Into 本身就是一个 SQL 命令,其**返回结果**会根据执行结果的不同,分为以下几种:
 
@@ -116,7 +116,7 @@ Insert Into 本身就是一个 SQL 命令,其**返回结果**会根据执行
 
       `err` 字段会显示一些其他非预期错误。
 
-      当需要查看被过滤的行时,用户可以通过[SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.html)语句
+      当需要查看被过滤的行时,用户可以通过[SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD.md)语句
 
       ```sql
       show load where label="xxx";
@@ -126,7 +126,7 @@ Insert Into 本身就是一个 SQL 命令,其**返回结果**会根据执行
               
       **数据不可见是一个临时状态,这批数据最终是一定可见的**
 
-      可以通过[SHOW TRANSACTION](../../../sql-manual/sql-reference/Show-Statements/SHOW-TRANSACTION.html)语句查看这批数据的可见状态:
+      可以通过[SHOW TRANSACTION](../../../sql-manual/sql-reference/Show-Statements/SHOW-TRANSACTION.md)语句查看这批数据的可见状态:
 
       ```sql
       show transaction where id=4005;
@@ -209,9 +209,9 @@ TransactionStatus: VISIBLE
 
 ### 应用场景
 
-1. 用户希望仅导入几条假数据,验证一下 Doris 系统的功能。此时适合使用 [INSERT INTO VALUES](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 的语法,这里语法和MySql语法一样。
+1. 用户希望仅导入几条假数据,验证一下 Doris 系统的功能。此时适合使用 [INSERT INTO VALUES](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 的语法,这里语法和MySql语法一样。
 2. 用户希望将已经在 Doris 表中的数据进行 ETL 转换并导入到一个新的 Doris 表中,此时适合使用 INSERT INTO SELECT 语法。
-3. 用户可以创建一种外部表,如 MySQL 外部表映射一张 MySQL 系统中的表。或者创建 [Broker](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 外部表来映射 HDFS 上的数据文件。然后通过 INSERT INTO SELECT 语法将外部表中的数据导入到 Doris 表中存储。
+3. 用户可以创建一种外部表,如 MySQL 外部表映射一张 MySQL 系统中的表。或者创建 [Broker](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 外部表来映射 HDFS 上的数据文件。然后通过 INSERT INTO SELECT 语法将外部表中的数据导入到 Doris 表中存储。
 
 ### 数据量
 
@@ -272,4 +272,4 @@ bj_store_sales schema:
 
 ## 更多帮助
 
-关于 **Insert Into** 使用的更多详细语法,请参阅 [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 命令手册,也可以在 Mysql 客户端命令行下输入 `HELP INSERT` 获取更多帮助信息。
+关于 **Insert Into** 使用的更多详细语法,请参阅 [INSERT INTO](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 命令手册,也可以在 Mysql 客户端命令行下输入 `HELP INSERT` 获取更多帮助信息。
diff --git a/docs/zh-CN/data-operate/import/import-way/load-json-format.md b/docs/zh-CN/data-operate/import/import-way/load-json-format.md
index 160969bde6..5c3d285f30 100644
--- a/docs/zh-CN/data-operate/import/import-way/load-json-format.md
+++ b/docs/zh-CN/data-operate/import/import-way/load-json-format.md
@@ -32,8 +32,8 @@ Doris 支持导入 JSON 格式的数据。本文档主要说明在进行JSON格
 
 目前只有以下导入方式支持 Json 格式的数据导入:
 
-- 将本地 JSON 格式的文件通过 [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) 方式导入。
-- 通过 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) 订阅并消费 Kafka 中的 JSON 格式消息。
+- 将本地 JSON 格式的文件通过 [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 方式导入。
+- 通过 [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md) 订阅并消费 Kafka 中的 JSON 格式消息。
 
 暂不支持其他方式的 JSON 格式数据导入。
 
@@ -81,7 +81,7 @@ Doris 支持导入 JSON 格式的数据。本文档主要说明在进行JSON格
 
 ### fuzzy_parse 参数
 
-在 [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html)中,可以添加 `fuzzy_parse` 参数来加速 JSON 数据的导入效率。
+在 [STREAM LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md)中,可以添加 `fuzzy_parse` 参数来加速 JSON 数据的导入效率。
 
 这个参数通常用于导入 **以 Array 表示的多行数据** 这种格式,所以一般要配合 `strip_outer_array=true` 使用。
 
diff --git a/docs/zh-CN/data-operate/import/import-way/routine-load-manual.md b/docs/zh-CN/data-operate/import/import-way/routine-load-manual.md
index c3a6d281ba..4d50b36c72 100644
--- a/docs/zh-CN/data-operate/import/import-way/routine-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/routine-load-manual.md
@@ -80,7 +80,7 @@ under the License.
 
 ### 创建任务
 
-创建例行导入任务的的详细语法可以连接到 Doris 后,查看[CREATE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html)命令手册,或者执行 `HELP ROUTINE LOAD;` 查看语法帮助。
+创建例行导入任务的的详细语法可以连接到 Doris 后,查看[CREATE ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md)命令手册,或者执行 `HELP ROUTINE LOAD;` 查看语法帮助。
 
 下面我们以几个例子说明如何创建Routine Load任务:
 
@@ -311,7 +311,7 @@ CREATE ROUTINE LOAD example_db.test1 ON example_tbl
 
 ### 修改作业属性
 
-用户可以修改已经创建的作业。具体说明可以通过 `HELP ALTER ROUTINE LOAD;` 命令查看或参阅 [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.html)。
+用户可以修改已经创建的作业。具体说明可以通过 `HELP ALTER ROUTINE LOAD;` 命令查看或参阅 [ALTER ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/ALTER-ROUTINE-LOAD.md)。
 
 ### 作业控制
 
diff --git a/docs/zh-CN/data-operate/import/import-way/s3-load-manual.md b/docs/zh-CN/data-operate/import/import-way/s3-load-manual.md
index 0c3be17d10..953370c596 100644
--- a/docs/zh-CN/data-operate/import/import-way/s3-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/s3-load-manual.md
@@ -44,7 +44,7 @@ under the License.
 
 ## 开始导入
 
-导入方式和 [Broker Load](broker-load-manual.html)  基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
+导入方式和 [Broker Load](broker-load-manual.md)  基本相同,只需要将 `WITH BROKER broker_name ()` 语句替换成如下部分
 
 ```text
     WITH S3
diff --git a/docs/zh-CN/data-operate/import/import-way/spark-load-manual.md b/docs/zh-CN/data-operate/import/import-way/spark-load-manual.md
index d4d8600b64..b43687596e 100644
--- a/docs/zh-CN/data-operate/import/import-way/spark-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/spark-load-manual.md
@@ -409,15 +409,15 @@ PROPERTIES
 
 **Label**
 
-导入任务的标识。每个导入任务,都有一个在单 database 内部唯一的 Label。具体规则与 [`Broker Load`](broker-load-manual.html) 一致。
+导入任务的标识。每个导入任务,都有一个在单 database 内部唯一的 Label。具体规则与 [`Broker Load`](broker-load-manual.md) 一致。
 
 **数据描述类参数**
 
-目前支持的数据源有CSV和hive table。其他规则与 [`Broker Load`](broker-load-manual.html) 一致。
+目前支持的数据源有CSV和hive table。其他规则与 [`Broker Load`](broker-load-manual.md) 一致。
 
 **导入作业参数**
 
-导入作业参数主要指的是 Spark load 创建导入语句中的属于 `opt_properties` 部分的参数。导入作业参数是作用于整个导入作业的。规则与 [`Broker Load`](broker-load-manual.html) 一致。
+导入作业参数主要指的是 Spark load 创建导入语句中的属于 `opt_properties` 部分的参数。导入作业参数是作用于整个导入作业的。规则与 [`Broker Load`](broker-load-manual.md) 一致。
 
 **Spark资源参数**
 
@@ -471,7 +471,7 @@ LoadFinishTime: 2019-07-27 11:50:16
     JobDetails: {"ScannedRows":28133395,"TaskNumber":1,"FileNumber":1,"FileSize":200000}
 ```
 
-返回结果集中参数意义可以参考 [Broker Load](broker-load-manual.html)。不同点如下:
+返回结果集中参数意义可以参考 [Broker Load](broker-load-manual.md)。不同点如下:
 
 - State
 
@@ -555,7 +555,7 @@ LoadFinishTime: 2019-07-27 11:50:16
 
 ### 应用场景
 
-使用 Spark Load 最适合的场景就是原始数据在文件系统(HDFS)中,数据量在 几十 GB 到 TB 级别。小数据量还是建议使用  [Stream Load](stream-load-manual.html) 或者 [Broker Load](broker-load-manual.html)。
+使用 Spark Load 最适合的场景就是原始数据在文件系统(HDFS)中,数据量在 几十 GB 到 TB 级别。小数据量还是建议使用  [Stream Load](stream-load-manual.md) 或者 [Broker Load](broker-load-manual.md)。
 
 ## 常见问题
 
diff --git a/docs/zh-CN/data-operate/import/import-way/stream-load-manual.md b/docs/zh-CN/data-operate/import/import-way/stream-load-manual.md
index f0e49c5524..e2897b3e45 100644
--- a/docs/zh-CN/data-operate/import/import-way/stream-load-manual.md
+++ b/docs/zh-CN/data-operate/import/import-way/stream-load-manual.md
@@ -391,5 +391,5 @@ timeout = 1000s 等于 10G / 10M/s
      ```
 ## 更多帮助
 
-关于 Stream Load 使用的更多详细语法及最佳实践,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP STREAM LOAD` 获取更多帮助信息。
+关于 Stream Load 使用的更多详细语法及最佳实践,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP STREAM LOAD` 获取更多帮助信息。
 
diff --git a/docs/zh-CN/data-operate/import/load-manual.md b/docs/zh-CN/data-operate/import/load-manual.md
index 5233c9b617..bdbd8765e7 100644
--- a/docs/zh-CN/data-operate/import/load-manual.md
+++ b/docs/zh-CN/data-operate/import/load-manual.md
@@ -34,25 +34,25 @@ Doris 提供多种数据导入方案,可以针对不同的数据源进行选
 
 | 数据源                               | 导入方式                                                     |
 | ------------------------------------ | ------------------------------------------------------------ |
-| 对象存储(s3),HDFS                  | [使用Broker导入数据](./import-scenes/external-storage-load.html) |
-| 本地文件                             | [导入本地数据](./import-scenes/local-file-load.html)         |
-| Kafka                                | [订阅Kafka数据](./import-scenes/kafka-load.html)             |
-| Mysql、PostgreSQL,Oracle,SQLServer | [通过外部表同步数据](./import-scenes/external-table-load.html) |
-| 通过JDBC导入                         | [使用JDBC同步数据](./import-scenes/jdbc-load.html)           |
-| 导入JSON格式数据                     | [JSON格式数据导入](./import-way/load-json-format.html)       |
-| MySQL Binlog                         | [Binlog Load](./import-way/binlog-load-manual.html)          |
+| 对象存储(s3),HDFS                  | [使用Broker导入数据](./import-scenes/external-storage-load.md) |
+| 本地文件                             | [导入本地数据](./import-scenes/local-file-load.md)         |
+| Kafka                                | [订阅Kafka数据](./import-scenes/kafka-load.md)             |
+| Mysql、PostgreSQL,Oracle,SQLServer | [通过外部表同步数据](./import-scenes/external-table-load.md) |
+| 通过JDBC导入                         | [使用JDBC同步数据](./import-scenes/jdbc-load.md)           |
+| 导入JSON格式数据                     | [JSON格式数据导入](./import-way/load-json-format.md)       |
+| MySQL Binlog                         | [Binlog Load](./import-way/binlog-load-manual.md)          |
 
 ### 按导入方式划分
 
 | 导入方式名称 | 使用方式                                                     |
 | ------------ | ------------------------------------------------------------ |
-| Spark Load   | [通过Spark导入外部数据](./import-way/spark-load-manual.html) |
-| Broker Load  | [通过Broker导入外部存储数据](./import-way/broker-load-manual.html) |
-| Stream Load  | [流式导入数据(本地文件及内存数据)](./import-way/stream-load-manual.html) |
-| Routine Load | [导入Kafka数据](./import-way/routine-load-manual.html)       |
-| Binlog Load  | [采集Mysql Binlog 导入数据](./import-way/binlog-load-manual.html) |
-| Insert Into  | [外部表通过INSERT方式导入数据](./import-way/insert-into-manual.html) |
-| S3 Load      | [S3协议的对象存储数据导入](./import-way/s3-load-manual.html) |
+| Spark Load   | [通过Spark导入外部数据](./import-way/spark-load-manual.md) |
+| Broker Load  | [通过Broker导入外部存储数据](./import-way/broker-load-manual.md) |
+| Stream Load  | [流式导入数据(本地文件及内存数据)](./import-way/stream-load-manual.md) |
+| Routine Load | [导入Kafka数据](./import-way/routine-load-manual.md)       |
+| Binlog Load  | [采集Mysql Binlog 导入数据](./import-way/binlog-load-manual.md) |
+| Insert Into  | [外部表通过INSERT方式导入数据](./import-way/insert-into-manual.md) |
+| S3 Load      | [S3协议的对象存储数据导入](./import-way/s3-load-manual.md) |
 
 ## 支持的数据格式
 
diff --git a/docs/zh-CN/data-operate/update-delete/batch-delete-manual.md b/docs/zh-CN/data-operate/update-delete/batch-delete-manual.md
index 2c9eb6eafc..44331ae31c 100644
--- a/docs/zh-CN/data-operate/update-delete/batch-delete-manual.md
+++ b/docs/zh-CN/data-operate/update-delete/batch-delete-manual.md
@@ -26,7 +26,7 @@ under the License.
 
 # 批量删除
 
-目前Doris 支持 [Broker Load](../import/import-way/broker-load-manual.html),[Routine Load](../import/import-way/routine-load-manual.html), [Stream Load](../import/import-way/stream-load-manual.html) 等多种导入方式,对于数据的删除目前只能通过delete语句进行删除,使用delete 语句的方式删除时,每执行一次delete 都会生成一个新的数据版本,如果频繁删除会严重影响查询性能,并且在使用delete方式删除时,是通过生成一个空的rowset来记录删除条件实现,每次读取都要对删除条件进行过滤,同样在条件较多时会对性能造成影响。对比其他的系统,greenplum 的实现方式更像是传统数据库产品,snowflake 通过merge 语法实现。
+目前Doris 支持 [Broker Load](../import/import-way/broker-load-manual.md),[Routine Load](../import/import-way/routine-load-manual.md), [Stream Load](../import/import-way/stream-load-manual.md) 等多种导入方式,对于数据的删除目前只能通过delete语句进行删除,使用delete 语句的方式删除时,每执行一次delete 都会生成一个新的数据版本,如果频繁删除会严重影响查询性能,并且在使用delete方式删除时,是通过生成一个空的rowset来记录删除条件实现,每次读取都要对删除条件进行过滤,同样在条件较多时会对性能造成影响。对比其他的系统,greenplum 的实现方式更像是传统数据库产品,snowflake 通过merge 语法实现。
 
 对于类似于cdc数据导入的场景,数据中insert和delete一般是穿插出现的,面对这种场景我们目前的导入方式也无法满足,即使我们能够分离出insert和delete虽然可以解决导入的问题,但是仍然解决不了删除的问题。使用批量删除功能可以解决这些个别场景的需求。数据导入有三种合并方式:
 
@@ -131,7 +131,7 @@ CREATE ROUTINE LOAD example_db.test1 ON example_tbl
 
 ## 注意事项
 
-1. 由于除`Stream Load` 外的导入操作在doris 内部有可能乱序执行,因此在使用`MERGE` 方式导入时如果不是`Stream Load`,需要与 load sequence 一起使用,具体的 语法可以参照[`sequence`](sequence-column-manual.html)列 相关的文档;
+1. 由于除`Stream Load` 外的导入操作在doris 内部有可能乱序执行,因此在使用`MERGE` 方式导入时如果不是`Stream Load`,需要与 load sequence 一起使用,具体的 语法可以参照[`sequence`](sequence-column-manual.md)列 相关的文档;
 2. `DELETE ON` 条件只能与 MERGE 一起使用。
 
 ## 使用示例
diff --git a/docs/zh-CN/data-operate/update-delete/delete-manual.md b/docs/zh-CN/data-operate/update-delete/delete-manual.md
index 3d410507ad..32e481f329 100644
--- a/docs/zh-CN/data-operate/update-delete/delete-manual.md
+++ b/docs/zh-CN/data-operate/update-delete/delete-manual.md
@@ -30,7 +30,7 @@ Delete不同于其他导入方式,它是一个同步过程,与Insert into相
 
 ## 语法
 
-delete操作的语法详见官网 [DELETE](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) 语法。
+delete操作的语法详见官网 [DELETE](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) 语法。
 
 ## 返回结果
 
@@ -148,9 +148,9 @@ mysql> show delete from test_db;
 
 ## 注意事项
 
-- 不同于 Insert into 命令,delete 不能手动指定`label`,有关 label 的概念可以查看[Insert Into](../import/import-way/insert-into-manual.html) 文档。
+- 不同于 Insert into 命令,delete 不能手动指定`label`,有关 label 的概念可以查看[Insert Into](../import/import-way/insert-into-manual.md) 文档。
 
 ## 更多帮助
 
-关于 **delete** 使用的更多详细语法,请参阅 [delete](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) 命令手册,也可以在Mysql客户端命令行下输入 `HELP DELETE` 获取更多帮助信息。
+关于 **delete** 使用的更多详细语法,请参阅 [delete](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) 命令手册,也可以在Mysql客户端命令行下输入 `HELP DELETE` 获取更多帮助信息。
 
diff --git a/docs/zh-CN/data-operate/update-delete/update.md b/docs/zh-CN/data-operate/update-delete/update.md
index 5cde2556bc..38fb2eab4d 100644
--- a/docs/zh-CN/data-operate/update-delete/update.md
+++ b/docs/zh-CN/data-operate/update-delete/update.md
@@ -113,5 +113,5 @@ Query OK, 1 row affected (0.11 sec)
 
 ## 更多帮助
 
-关于 **数据更新** 使用的更多详细语法,请参阅 [update](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.html) 命令手册,也可以在Mysql客户端命令行下输入 `HELP UPDATE` 获取更多帮助信息。
+关于 **数据更新** 使用的更多详细语法,请参阅 [update](../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.md) 命令手册,也可以在Mysql客户端命令行下输入 `HELP UPDATE` 获取更多帮助信息。
 
diff --git a/docs/zh-CN/data-table/advance-usage.md b/docs/zh-CN/data-table/advance-usage.md
index fd6652804d..6009c78107 100644
--- a/docs/zh-CN/data-table/advance-usage.md
+++ b/docs/zh-CN/data-table/advance-usage.md
@@ -30,7 +30,7 @@ under the License.
 
 ## 表结构变更
 
-使用 [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.html) 命令可以修改表的 Schema,包括如下修改:
+使用 [ALTER TABLE COLUMN](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) 命令可以修改表的 Schema,包括如下修改:
 
 - 增加列
 - 删除列
@@ -94,7 +94,7 @@ CANCEL ALTER TABLE COLUMN FROM table1;
 
 Rollup 可以理解为 Table 的一个物化索引结构。**物化** 是因为其数据在物理上独立存储,而 **索引** 的意思是,Rollup可以调整列顺序以增加前缀索引的命中率,也可以减少key列以增加数据的聚合度。
 
-使用[ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.html)可以进行Rollup的各种变更操作。
+使用[ALTER TABLE ROLLUP](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md)可以进行Rollup的各种变更操作。
 
 以下举例说明
 
diff --git a/docs/zh-CN/data-table/basic-usage.md b/docs/zh-CN/data-table/basic-usage.md
index 48462fce31..fe9ba6cd82 100644
--- a/docs/zh-CN/data-table/basic-usage.md
+++ b/docs/zh-CN/data-table/basic-usage.md
@@ -89,7 +89,7 @@ Query OK, 0 rows affected (0.00 sec)
 CREATE DATABASE example_db;
 ```
 
-> 所有命令都可以使用 `HELP command;` 查看到详细的语法帮助,如:`HELP CREATE DATABASE;`。也可以查阅官网 [SHOW CREATE DATABASE](../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE.html) 命令手册。
+> 所有命令都可以使用 `HELP command;` 查看到详细的语法帮助,如:`HELP CREATE DATABASE;`。也可以查阅官网 [SHOW CREATE DATABASE](../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE.md) 命令手册。
 >
 > 如果不清楚命令的全名,可以使用 "help 命令某一字段" 进行模糊查询。如键入 'HELP CREATE',可以匹配到 `CREATE DATABASE`, `CREATE TABLE`, `CREATE USER` 等命令。
 >
@@ -135,7 +135,7 @@ mysql> SHOW DATABASES;
 
 ### 账户授权
 
-example_db 创建完成之后,可以通过 root/admin 账户使用[GRANT](../sql-manual/sql-reference/Account-Management-Statements/GRANT.html)命令将 example_db 读写权限授权给普通账户,如 test。授权之后采用 test 账户登录就可以操作 example_db 数据库了。
+example_db 创建完成之后,可以通过 root/admin 账户使用[GRANT](../sql-manual/sql-reference/Account-Management-Statements/GRANT.md)命令将 example_db 读写权限授权给普通账户,如 test。授权之后采用 test 账户登录就可以操作 example_db 数据库了。
 
 ```sql
 mysql> GRANT ALL ON example_db TO test;
@@ -144,9 +144,9 @@ Query OK, 0 rows affected (0.01 sec)
 
 ### 建表
 
-使用 [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 命令建立一个表(Table)。更多详细参数可以 `HELP CREATE TABLE;`		
+使用 [CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 命令建立一个表(Table)。更多详细参数可以 `HELP CREATE TABLE;`		
 
-首先,我们需要使用[USE](../sql-manual/sql-reference/Utility-Statements/USE.html)命令来切换数据库:
+首先,我们需要使用[USE](../sql-manual/sql-reference/Utility-Statements/USE.md)命令来切换数据库:
 
 ```sql
 mysql> USE example_db;
@@ -272,7 +272,7 @@ mysql> DESC table2;
 
 ### 导入数据
 
-Doris 支持多种数据导入方式。具体可以参阅[数据导入](../data-operate/import/load-manual.html)文档。这里我们使用流式导入和 Broker 导入做示例。
+Doris 支持多种数据导入方式。具体可以参阅[数据导入](../data-operate/import/load-manual.md)文档。这里我们使用流式导入和 Broker 导入做示例。
 
 #### 流式导入
 
diff --git a/docs/zh-CN/data-table/best-practice.md b/docs/zh-CN/data-table/best-practice.md
index 05ab458cab..2761ce172a 100644
--- a/docs/zh-CN/data-table/best-practice.md
+++ b/docs/zh-CN/data-table/best-practice.md
@@ -128,7 +128,7 @@ Doris对数据进行有序存储, 在数据有序的基础上为其建立稀疏
 稀疏索引选取 schema 中固定长度的前缀作为索引内容, 目前 Doris 选取 36 个字节的前缀作为索引。
 
 - 建表时建议将查询中常见的过滤字段放在 Schema 的前面, 区分度越大,频次越高的查询字段越往前放。
-- 这其中有一个特殊的地方,就是 varchar 类型的字段。varchar 类型字段只能作为稀疏索引的最后一个字段。索引会在 varchar 处截断, 因此 varchar 如果出现在前面,可能索引的长度可能不足 36 个字节。具体可以参阅 [数据模型](./data-model.html)、[ROLLUP 及查询](./hit-the-rollup.html)。
+- 这其中有一个特殊的地方,就是 varchar 类型的字段。varchar 类型字段只能作为稀疏索引的最后一个字段。索引会在 varchar 处截断, 因此 varchar 如果出现在前面,可能索引的长度可能不足 36 个字节。具体可以参阅 [数据模型](./data-model.md)、[ROLLUP 及查询](./hit-the-rollup.md)。
 - 除稀疏索引之外, Doris还提供bloomfilter索引, bloomfilter索引对区分度比较大的列过滤效果明显。 如果考虑到varchar不能放在稀疏索引中, 可以建立bloomfilter索引。
 
 ### 物化视图(rollup)
@@ -195,4 +195,4 @@ ALTER TABLE site_visit MODIFY COLUMN username varchar(64);
 ALTER TABLE site_visit ADD COLUMN click bigint SUM default '0';
 ```
 
-建表时建议考虑好 Schema,这样在进行 Schema Change 时可以加快速度。
\ No newline at end of file
+建表时建议考虑好 Schema,这样在进行 Schema Change 时可以加快速度。
diff --git a/docs/zh-CN/data-table/data-model.md b/docs/zh-CN/data-table/data-model.md
index f4b05e5380..ba0f41a8cf 100644
--- a/docs/zh-CN/data-table/data-model.md
+++ b/docs/zh-CN/data-table/data-model.md
@@ -341,7 +341,7 @@ PROPERTIES (
 );
 ```
 
-这种数据模型区别于 Aggregate 和 Unique 模型。数据完全按照导入文件中的数据进行存储,不会有任何聚合。即使两行数据完全相同,也都会保留。 而在建表语句中指定的 DUPLICATE KEY,只是用来指明底层数据按照那些列进行排序。(更贴切的名称应该为 “Sorted Column”,这里取名 “DUPLICATE KEY” 只是用以明确表示所用的数据模型。关于 “Sorted Column”的更多解释,可以参阅[前缀索引](./index/prefix-index.html))。在 DUPLICATE KEY 的选择上,我们建议适当的选择前 2-4 列就可以。
+这种数据模型区别于 Aggregate 和 Unique 模型。数据完全按照导入文件中的数据进行存储,不会有任何聚合。即使两行数据完全相同,也都会保留。 而在建表语句中指定的 DUPLICATE KEY,只是用来指明底层数据按照那些列进行排序。(更贴切的名称应该为 “Sorted Column”,这里取名 “DUPLICATE KEY” 只是用以明确表示所用的数据模型。关于 “Sorted Column”的更多解释,可以参阅[前缀索引](./index/prefix-index.md))。在 DUPLICATE KEY 的选择上,我们建议适当的选择前 2-4 列就可以。
 
 这种数据模型适用于既没有聚合需求,又没有主键唯一性约束的原始数据的存储。更多使用场景,可参阅**聚合模型的局限性**小节。
 
@@ -458,4 +458,4 @@ Duplicate 模型没有聚合模型的这个局限性。因为该模型不涉及
 
 1. Aggregate 模型可以通过预聚合,极大地降低聚合查询时所需扫描的数据量和查询的计算量,非常适合有固定模式的报表类查询场景。但是该模型对 count(*) 查询很不友好。同时因为固定了 Value 列上的聚合方式,在进行其他类型的聚合查询时,需要考虑语意正确性。
 2. Unique 模型针对需要唯一主键约束的场景,可以保证主键唯一性约束。但是无法利用 ROLLUP 等预聚合带来的查询优势(因为本质是 REPLACE,没有 SUM 这种聚合方式)。
-3. Duplicate 适合任意维度的 Ad-hoc 查询。虽然同样无法利用预聚合的特性,但是不受聚合模型的约束,可以发挥列存模型的优势(只读取相关列,而不需要读取所有 Key 列)。
\ No newline at end of file
+3. Duplicate 适合任意维度的 Ad-hoc 查询。虽然同样无法利用预聚合的特性,但是不受聚合模型的约束,可以发挥列存模型的优势(只读取相关列,而不需要读取所有 Key 列)。
diff --git a/docs/zh-CN/data-table/data-partition.md b/docs/zh-CN/data-table/data-partition.md
index be968c1c43..937b553bac 100644
--- a/docs/zh-CN/data-table/data-partition.md
+++ b/docs/zh-CN/data-table/data-partition.md
@@ -40,7 +40,7 @@ under the License.
 
 - Column: 用于描述一行数据中不同的字段。
 
-  Column 可以分为两大类:Key 和 Value。从业务角度看,Key 和 Value 可以分别对应维度列和指标列。从聚合模型的角度来说,Key 列相同的行,会聚合成一行。其中 Value 列的聚合方式由用户在建表时指定。关于更多聚合模型的介绍,可以参阅 [Doris 数据模型](data-model.html)。
+  Column 可以分为两大类:Key 和 Value。从业务角度看,Key 和 Value 可以分别对应维度列和指标列。从聚合模型的角度来说,Key 列相同的行,会聚合成一行。其中 Value 列的聚合方式由用户在建表时指定。关于更多聚合模型的介绍,可以参阅 [Doris 数据模型](data-model.md)。
 
 ### Tablet & Partition
 
@@ -54,7 +54,7 @@ under the License.
 
 我们以一个建表操作来说明 Doris 的数据划分。
 
-Doris 的建表是一个同步命令,SQL执行完成即返回结果,命令返回成功即表示建表成功。具体建表语法可以参考[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html),也可以通过 `HELP CREATE TABLE;` 查看更多帮助。
+Doris 的建表是一个同步命令,SQL执行完成即返回结果,命令返回成功即表示建表成功。具体建表语法可以参考[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md),也可以通过 `HELP CREATE TABLE;` 查看更多帮助。
 
 本小节通过一个例子,来介绍 Doris 的建表方式。
 
@@ -126,7 +126,7 @@ PROPERTIES
 
 ### 列定义
 
-这里我们只以 AGGREGATE KEY 数据模型为例进行说明。更多数据模型参阅 [Doris 数据模型](./data-model.html)。
+这里我们只以 AGGREGATE KEY 数据模型为例进行说明。更多数据模型参阅 [Doris 数据模型](./data-model.md)。
 
 列的基本类型,可以通过在 mysql-client 中执行 `HELP CREATE TABLE;` 查看。
 
@@ -337,7 +337,7 @@ Doris 支持两层的数据划分。第一层是 Partition,支持 Range 和 Li
    - 一个 Partition 的 Bucket 数量一旦指定,不可更改。所以在确定 Bucket 数量时,需要预先考虑集群扩容的情况。比如当前只有 3 台 host,每台 host 有 1 块盘。如果 Bucket 的数量只设置为 3 或更小,那么后期即使再增加机器,也不能提高并发度。
    - 举一些例子:假设在有10台BE,每台BE一块磁盘的情况下。如果一个表总大小为 500MB,则可以考虑4-8个分片。5GB:8-16个分片。50GB:32个分片。500GB:建议分区,每个分区大小在 50GB 左右,每个分区16-32个分片。5TB:建议分区,每个分区大小在 50GB 左右,每个分区16-32个分片。
 
-   > 注:表的数据量可以通过 [`SHOW DATA`](../sql-manual/sql-reference/Show-Statements/SHOW-DATA.html) 命令查看,结果除以副本数,即表的数据量。
+   > 注:表的数据量可以通过 [`SHOW DATA`](../sql-manual/sql-reference/Show-Statements/SHOW-DATA.md) 命令查看,结果除以副本数,即表的数据量。
 
 #### 复合分区与单分区
 
@@ -356,7 +356,7 @@ Doris 支持两层的数据划分。第一层是 Partition,支持 Range 和 Li
 
 ### PROPERTIES
 
-在建表语句的最后 PROPERTIES 中,关于PROPERTIES中可以设置的相关参数,我们可以查看[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)中查看详细的介绍。
+在建表语句的最后 PROPERTIES 中,关于PROPERTIES中可以设置的相关参数,我们可以查看[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)中查看详细的介绍。
 
 ### ENGIN
 
@@ -387,11 +387,11 @@ Doris 支持两层的数据划分。第一层是 Partition,支持 Range 和 Li
    - 在 fe.log 中,查找对应时间点的 `Failed to create partition` 日志。在该日志中,会出现一系列类似 `{10001-10010}` 字样的数字对。数字对的第一个数字表示 Backend ID,第二个数字表示 Tablet ID。如上这个数字对,表示 ID 为 10001 的 Backend 上,创建 ID 为 10010 的 Tablet 失败了。
    - 前往对应 Backend 的 be.INFO 日志,查找对应时间段内,tablet id 相关的日志,可以找到错误信息。
    - 以下罗列一些常见的 tablet 创建失败错误,包括但不限于:
-     - BE 没有收到相关 task,此时无法在 be.INFO 中找到 tablet id 相关日志或者 BE 创建成功,但汇报失败。以上问题,请参阅 [安装与部署](../install/install-deploy.html) 检查 FE 和 BE 的连通性。
+     - BE 没有收到相关 task,此时无法在 be.INFO 中找到 tablet id 相关日志或者 BE 创建成功,但汇报失败。以上问题,请参阅 [安装与部署](../install/install-deploy.md) 检查 FE 和 BE 的连通性。
      - 预分配内存失败。可能是表中一行的字节长度超过了 100KB。
      - `Too many open files`。打开的文件句柄数超过了 Linux 系统限制。需修改 Linux 系统的句柄数限制。
 
-   如果创建数据分片时超时,也可以通过在 fe.conf 中设置 `tablet_create_timeout_second=xxx` 以及 `max_create_table_timeout_second=xxx` 来延长超时时间。其中 `tablet_create_timeout_second` 默认是1秒, `max_create_table_timeout_second` 默认是60秒,总体的超时时间为min(tablet_create_timeout_second * replication_num, max_create_table_timeout_second),具体参数设置可参阅 [FE配置项](../admin-manual/config/fe-config.html) 。
+   如果创建数据分片时超时,也可以通过在 fe.conf 中设置 `tablet_create_timeout_second=xxx` 以及 `max_create_table_timeout_second=xxx` 来延长超时时间。其中 `tablet_create_timeout_second` 默认是1秒, `max_create_table_timeout_second` 默认是60秒,总体的超时时间为min(tablet_create_timeout_second * replication_num, max_create_table_timeout_second),具体参数设置可参阅 [FE配置项](../admin-manual/config/fe-config.md) 。
 
 3. 建表命令长时间不返回结果。
 
@@ -401,4 +401,4 @@ Doris 支持两层的数据划分。第一层是 Partition,支持 Range 和 Li
 
 ## 更多帮助
 
-关于数据划分更多的详细说明,我们可以在[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)命令手册中查阅,也可以在Mysql客户端下输入 `HELP CREATE TABLE;` 获取更多的帮助信息。
+关于数据划分更多的详细说明,我们可以在[CREATE TABLE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)命令手册中查阅,也可以在Mysql客户端下输入 `HELP CREATE TABLE;` 获取更多的帮助信息。
diff --git a/docs/zh-CN/data-table/hit-the-rollup.md b/docs/zh-CN/data-table/hit-the-rollup.md
index 8d15fd7f5e..9950a57998 100644
--- a/docs/zh-CN/data-table/hit-the-rollup.md
+++ b/docs/zh-CN/data-table/hit-the-rollup.md
@@ -130,7 +130,7 @@ Doris 会执行这些sql时会自动命中这个 ROLLUP 表。
 
 ### Duplicate 模型中的 ROLLUP
 
-因为 Duplicate 模型没有聚合的语意。所以该模型中的 ROLLUP,已经失去了“上卷”这一层含义。而仅仅是作为调整列顺序,以命中前缀索引的作用。我们将在[前缀索引](./index/prefix-index.html)详细介绍前缀索引,以及如何使用ROLLUP改变前缀索引,以获得更好的查询效率。
+因为 Duplicate 模型没有聚合的语意。所以该模型中的 ROLLUP,已经失去了“上卷”这一层含义。而仅仅是作为调整列顺序,以命中前缀索引的作用。我们将在[前缀索引](./index/prefix-index.md)详细介绍前缀索引,以及如何使用ROLLUP改变前缀索引,以获得更好的查询效率。
 
 ## ROLLUP 调整前缀索引
 
@@ -187,7 +187,7 @@ mysql> SELECT * FROM table where age=20 and message LIKE "%error%";
 
 ### 索引
 
-前面的[前缀索引](./index/prefix-index.html)中已经介绍过 Doris 的前缀索引,即 Doris 会把 Base/Rollup 表中的前 36 个字节(有 varchar 类型则可能导致前缀索引不满 36 个字节,varchar 会截断前缀索引,并且最多使用 varchar 的 20 个字节)在底层存储引擎单独生成一份排序的稀疏索引数据(数据也是排序的,用索引定位,然后在数据中做二分查找),然后在查询的时候会根据查询中的条件来匹配每个 Base/Rollup 的前缀索引,并且选择出匹配前缀索引最长的一个 Base/Rollup。
+前面的[前缀索引](./index/prefix-index.md)中已经介绍过 Doris 的前缀索引,即 Doris 会把 Base/Rollup 表中的前 36 个字节(有 varchar 类型则可能导致前缀索引不满 36 个字节,varchar 会截断前缀索引,并且最多使用 varchar 的 20 个字节)在底层存储引擎单独生成一份排序的稀疏索引数据(数据也是排序的,用索引定位,然后在数据中做二分查找),然后在查询的时候会根据查询中的条件来匹配每个 Base/Rollup 的前缀索引,并且选择出匹配前缀索引最长的一个 Base/Rollup。
 
 ```text
        -----> 从左到右匹配
@@ -449,4 +449,4 @@ SELECT SUM(k11) FROM test_rollup WHERE k1 = 10 AND k2 > 200 AND k3 in (1,2,3);
 |      avgRowSize=0.0                                       |
 |      numNodes=0                                           |
 |      tuple ids: 0                                         |
-```
\ No newline at end of file
+```
diff --git a/docs/zh-CN/data-table/index/bitmap-index.md b/docs/zh-CN/data-table/index/bitmap-index.md
index 8f9ca7afe6..04dd7de8fa 100644
--- a/docs/zh-CN/data-table/index/bitmap-index.md
+++ b/docs/zh-CN/data-table/index/bitmap-index.md
@@ -34,7 +34,7 @@ under the License.
 
 ## 原理介绍
 
-创建和删除本质上是一个 schema change 的作业,具体细节可以参照 [Schema Change](../../advanced/alter-table/schema-change.html)。
+创建和删除本质上是一个 schema change 的作业,具体细节可以参照 [Schema Change](../../advanced/alter-table/schema-change.md)。
 
 ## 语法
 
@@ -84,4 +84,4 @@ DROP INDEX [IF EXISTS] index_name ON [db_name.]table_name;
 
 ## 更多帮助
 
-关于 bitmap索引 使用的更多详细语法及最佳实践,请参阅 [CREARE INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-INDEX.md) / [SHOW INDEX](../../sql-manual/sql-reference/Show-Statements/SHOW-INDEX.html) / [DROP INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-INDEX.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP CREATE INDEX` /  `HELP SHOW INDEX` / `HELP DROP INDEX`。
+关于 bitmap 索引使用的更多详细语法及最佳实践,请参阅 [CREATE INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-INDEX.md) / [SHOW INDEX](../../sql-manual/sql-reference/Show-Statements/SHOW-INDEX.md) / [DROP INDEX](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-INDEX.md) 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP CREATE INDEX` / `HELP SHOW INDEX` / `HELP DROP INDEX`。
diff --git a/docs/zh-CN/developer-guide/benchmark-tool.md b/docs/zh-CN/developer-guide/benchmark-tool.md
index 5cea54799d..5debb15278 100644
--- a/docs/zh-CN/developer-guide/benchmark-tool.md
+++ b/docs/zh-CN/developer-guide/benchmark-tool.md
@@ -33,7 +33,7 @@ under the License.
 
 ## 编译
 
-1. 确保环境已经能顺利编译Doris本体,可以参考[编译与部署](../install/source-install/compilation.html)。
+1. 确保环境已经能顺利编译Doris本体,可以参考[编译与部署](../install/source-install/compilation.md)。
 
 2. 运行目录下的`run-be-ut.sh`
 
diff --git a/docs/zh-CN/developer-guide/cpp-diagnostic-code.md b/docs/zh-CN/developer-guide/cpp-diagnostic-code.md
index 1cdf4acb82..62e365c6a0 100644
--- a/docs/zh-CN/developer-guide/cpp-diagnostic-code.md
+++ b/docs/zh-CN/developer-guide/cpp-diagnostic-code.md
@@ -26,7 +26,7 @@ under the License.
 
 # C++ 代码分析
 
-Doris支持使用[Clangd](https://clangd.llvm.org/)和[Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/)进行代码静态分析。Clangd和Clang-Tidy在[LDB-toolchain](../install/source-install/compilation-with-ldb-toolchain.html)中已经内置,另外也可以自己安装或者编译。
+Doris支持使用[Clangd](https://clangd.llvm.org/)和[Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/)进行代码静态分析。Clangd和Clang-Tidy在[LDB-toolchain](../install/source-install/compilation-with-ldb-toolchain.md)中已经内置,另外也可以自己安装或者编译。
 
 ### Clang-Tidy
 Clang-Tidy中可以做一些代码分析的配置,配置文件`.clang-tidy`在Doris根目录下。
diff --git a/docs/zh-CN/developer-guide/docker-dev.md b/docs/zh-CN/developer-guide/docker-dev.md
index 9cc5b127e0..6a9e3da1c2 100644
--- a/docs/zh-CN/developer-guide/docker-dev.md
+++ b/docs/zh-CN/developer-guide/docker-dev.md
@@ -29,9 +29,9 @@ under the License.
 
 ## 相关详细文档导航
 
-- [使用 Docker 开发镜像编译](../install/source-install/compilation.html)
-- [部署](../install/install-deploy.html)
-- [VSCode Be 开发调试](./be-vscode-dev.html)
+- [使用 Docker 开发镜像编译](../install/source-install/compilation.md)
+- [部署](../install/install-deploy.md)
+- [VSCode Be 开发调试](./be-vscode-dev.md)
 
 ## 环境准备
 
@@ -90,7 +90,7 @@ docker build -t doris .
 
 运行镜像
 
-此处按需注意 [挂载的问题](../install/source-install/compilation.html)
+此处按需注意 [挂载的问题](../install/source-install/compilation.md)
 
 > 见链接中:建议以挂载本地 Doris 源码目录的方式运行镜像 .....
 
diff --git a/docs/zh-CN/developer-guide/fe-vscode-dev.md b/docs/zh-CN/developer-guide/fe-vscode-dev.md
index 0e7df5e8a7..bbc5834281 100644
--- a/docs/zh-CN/developer-guide/fe-vscode-dev.md
+++ b/docs/zh-CN/developer-guide/fe-vscode-dev.md
@@ -72,7 +72,7 @@ example:
 ## 编译
 
 其他文章已经介绍的比较清楚了:
-* [使用 LDB toolchain 编译](../install/source-install/compilation-with-ldb-toolchain.html)
+* [使用 LDB toolchain 编译](../install/source-install/compilation-with-ldb-toolchain.md)
 * ......
 
 为了进行调试,需要在 fe 启动时,加上调试的参数,比如 `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005` 。
diff --git a/docs/zh-CN/downloads/downloads.md b/docs/zh-CN/downloads/downloads.md
index 22b78fcbdb..c1146196e5 100644
--- a/docs/zh-CN/downloads/downloads.md
+++ b/docs/zh-CN/downloads/downloads.md
@@ -86,4 +86,4 @@ under the License.
 
 关于如何校验下载文件,请参阅 [校验下载文件](../community/release-and-verify/release-verify.md),并使用这些[KEYS](https://downloads.apache.org/incubator/doris/KEYS)。
 
-校验完成后,可以参阅 [编译文档](../installing/compilation.md) 以及 [安装与部署文档](../installing/install-deploy.md) 进行 Doris 的编译、安装与部署。
+校验完成后,可以参阅 [编译文档](../install/source-install/compilation.md) 以及 [安装与部署文档](../install/install-deploy.md) 进行 Doris 的编译、安装与部署。
diff --git a/docs/zh-CN/ecosystem/external-table/doris-on-es.md b/docs/zh-CN/ecosystem/external-table/doris-on-es.md
index 824353082d..5a78b62ad2 100644
--- a/docs/zh-CN/ecosystem/external-table/doris-on-es.md
+++ b/docs/zh-CN/ecosystem/external-table/doris-on-es.md
@@ -105,7 +105,7 @@ POST /_bulk
 
 ### Doris中创建ES外表
 
-具体建表语法参照:[CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+具体建表语法参照:[CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 ```
 CREATE EXTERNAL TABLE `test` (
diff --git a/docs/zh-CN/ecosystem/external-table/iceberg-of-doris.md b/docs/zh-CN/ecosystem/external-table/iceberg-of-doris.md
index ba1015032d..c540721303 100644
--- a/docs/zh-CN/ecosystem/external-table/iceberg-of-doris.md
+++ b/docs/zh-CN/ecosystem/external-table/iceberg-of-doris.md
@@ -47,7 +47,7 @@ Iceberg External Table of Doris 提供了 Doris 直接访问 Iceberg 外部表
 可以通过以下两种方式在 Doris 中创建 Iceberg 外表。建外表时无需声明表的列定义,Doris 可以根据 Iceberg 中表的列定义自动转换。
 
 1. 创建一个单独的外表,用于挂载 Iceberg 表。  
-   具体相关语法,可以通过 [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 查看。
+   具体相关语法,可以通过 [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 查看。
 
     ```sql
     -- 语法
@@ -74,7 +74,7 @@ Iceberg External Table of Doris 提供了 Doris 直接访问 Iceberg 外部表
     ```
 
 2. 创建一个 Iceberg 数据库,用于挂载远端对应 Iceberg 数据库,同时挂载该 database 下的所有 table。  
-   具体相关语法,可以通过 [CREATE DATABASE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.html) 查看。
+   具体相关语法,可以通过 [CREATE DATABASE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.md) 查看。
 
     ```sql
     -- 语法
@@ -141,7 +141,7 @@ Iceberg External Table of Doris 提供了 Doris 直接访问 Iceberg 外部表
 
 ### 展示表结构
 
-展示表结构可以通过 [SHOW CREATE TABLE](../../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE.html) 查看。
+展示表结构可以通过 [SHOW CREATE TABLE](../../sql-manual/sql-reference/Show-Statements/SHOW-CREATE-TABLE.md) 查看。
 
 ### 同步挂载
 
diff --git a/docs/zh-CN/ecosystem/external-table/odbc-of-doris.md b/docs/zh-CN/ecosystem/external-table/odbc-of-doris.md
index 031b0ee2c2..8d1df916d6 100644
--- a/docs/zh-CN/ecosystem/external-table/odbc-of-doris.md
+++ b/docs/zh-CN/ecosystem/external-table/odbc-of-doris.md
@@ -44,7 +44,7 @@ ODBC External Table Of Doris 提供了Doris通过数据库访问的标准接口(
 
 ### Doris中创建ODBC的外表
 
-具体建表语法参照:[CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+具体建表语法参照:[CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
 
 #### 1. 不使用Resource创建ODBC的外表
 
@@ -327,7 +327,7 @@ sudo alien -i  oracle-instantclient19.13-sqlplus-19.13.0.0.0-2.x86_64.rpm
 
 适用于少数据量的同步
 
-例如Mysql中一张表有100万数据,想同步到doris,就可以采用ODBC的方式将数据映射过来,在使用[insert into](../../data-operate/import/import-way/insert-into-manual.html) 方式将数据同步到Doris中,如果想同步大批量数据,可以分批次使用[insert into](../../data-operate/import/import-way/insert-into-manual.html)同步(不建议使用)
+例如Mysql中一张表有100万数据,想同步到doris,就可以采用ODBC的方式将数据映射过来,再使用[insert into](../../data-operate/import/import-way/insert-into-manual.md) 方式将数据同步到Doris中,如果想同步大批量数据,可以分批次使用[insert into](../../data-operate/import/import-way/insert-into-manual.md)同步(不建议使用)
 
 ## Q&A
 
diff --git a/docs/zh-CN/ecosystem/logstash.md b/docs/zh-CN/ecosystem/logstash.md
index 0c595f47c9..aeb95e3cfd 100644
--- a/docs/zh-CN/ecosystem/logstash.md
+++ b/docs/zh-CN/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 该插件用于logstash输出数据到Doris,使用 HTTP 协议与 Doris FE Http接口交互,并通过 Doris 的 stream load 的方式进行数据导入.
 
-[了解Doris Stream Load ](../data-operate/import/import-way/stream-load-manual.html)
+[了解Doris Stream Load ](../data-operate/import/import-way/stream-load-manual.md)
 
 [了解更多关于Doris](../)
 
@@ -85,7 +85,7 @@ copy logstash-output-doris-{version}.gem 到 logstash 安装目录下
 `label_prefix` | 导入标识前缀,最终生成的标识为 *{label\_prefix}\_{db}\_{table}\_{time_stamp}*
 
 
-导入相关配置:([参考文档](../data-operate/import/import-way/stream-load-manual.html))
+导入相关配置:([参考文档](../data-operate/import/import-way/stream-load-manual.md))
 
 配置 | 说明
 --- | ---
diff --git a/docs/zh-CN/ecosystem/seatunnel/flink-sink.md b/docs/zh-CN/ecosystem/seatunnel/flink-sink.md
index c53ff5feb6..4b9c08fce9 100644
--- a/docs/zh-CN/ecosystem/seatunnel/flink-sink.md
+++ b/docs/zh-CN/ecosystem/seatunnel/flink-sink.md
@@ -78,7 +78,7 @@ flush 间隔时间(毫秒),超过该时间后异步线程将 缓存中数据
 
 Stream load 的导入参数。例如:'doris.column_separator' = ', '等
 
-[更多 Stream Load 参数配置](../../data-operate/import/import-way/stream-load-manual.html)
+[更多 Stream Load 参数配置](../../data-operate/import/import-way/stream-load-manual.md)
 
 ### Examples
 Socket 数据写入 Doris
@@ -113,4 +113,4 @@ sink {
 ### 启动命令
 ```
 sh bin/start-seatunnel-flink.sh --config config/flink.streaming.conf
-```
\ No newline at end of file
+```
diff --git a/docs/zh-CN/ecosystem/seatunnel/spark-sink.md b/docs/zh-CN/ecosystem/seatunnel/spark-sink.md
index cae2ff61ba..05530dc77a 100644
--- a/docs/zh-CN/ecosystem/seatunnel/spark-sink.md
+++ b/docs/zh-CN/ecosystem/seatunnel/spark-sink.md
@@ -76,7 +76,7 @@ Spark 通过 Stream Load 方式写入,每个批次提交条数
 
 Stream Load 方式写入的 Http 参数优化,在官网参数前加上'Doris.'前缀
 
-[更多 Stream Load 参数配置](../../data-operate/import/import-way/stream-load-manual.html)
+[更多 Stream Load 参数配置](../../data-operate/import/import-way/stream-load-manual.md)
 
 ### Examples
 Hive 迁移数据至 Doris
@@ -121,4 +121,4 @@ Doris {
 启动命令
 ```
 sh bin/start-waterdrop-spark.sh --master local[4] --deploy-mode client --config ./config/spark.conf
-```
\ No newline at end of file
+```
diff --git a/docs/zh-CN/faq/data-faq.md b/docs/zh-CN/faq/data-faq.md
index cfec057323..e3f4ee3b8b 100644
--- a/docs/zh-CN/faq/data-faq.md
+++ b/docs/zh-CN/faq/data-faq.md
@@ -72,7 +72,7 @@ Unique Key模型的表是一个对业务比较友好的表,因为其特有的
 
 通常出现在导入、Alter等操作中。这个错误意味着对应BE的对应磁盘的使用量超过了阈值(默认95%)。此时可以先通过 show backends 命令查看,其中MaxDiskUsedPct展示的是对应BE上,使用率最高的那块磁盘的使用率,如果超过95%,则会报这个错误。
 
-此时需要前往对应BE节点,查看数据目录下的使用量情况。其中trash目录和snapshot目录可以手动清理以释放空间。如果是data目录占用较大,则需要考虑删除部分数据以释放空间了。具体可以参阅[磁盘空间管理](../admin-manual/maint-monitor/disk-capacity.html)。
+此时需要前往对应BE节点,查看数据目录下的使用量情况。其中trash目录和snapshot目录可以手动清理以释放空间。如果是data目录占用较大,则需要考虑删除部分数据以释放空间了。具体可以参阅[磁盘空间管理](../admin-manual/maint-monitor/disk-capacity.md)。
 
 ### Q7. 通过 Java 程序调用 stream load 导入数据,在一批次数据量较大时,可能会报错 Broken Pipe
 
@@ -133,4 +133,4 @@ failed to initialize storage reader. tablet=63416.1050661139.aa4d304e7a7aff9c-f0
 
 ```
 brpc_max_body_size:默认 3GB.
-```
\ No newline at end of file
+```
diff --git a/docs/zh-CN/faq/install-faq.md b/docs/zh-CN/faq/install-faq.md
index b720741536..d3af2f6aac 100644
--- a/docs/zh-CN/faq/install-faq.md
+++ b/docs/zh-CN/faq/install-faq.md
@@ -81,7 +81,7 @@ Observer 角色和这个单词的含义一样,仅仅作为观察者来同步
 
 3. 使用API手动迁移数据
 
-   Doris提供了[HTTP API](../admin-manual/http-actions/tablet-migration-action.html),可以手动指定一个磁盘上的数据分片迁移到另一个磁盘上。
+   Doris提供了[HTTP API](../admin-manual/http-actions/tablet-migration-action.md),可以手动指定一个磁盘上的数据分片迁移到另一个磁盘上。
 
 ### Q5. 如何正确阅读 FE/BE 日志?
 
@@ -255,7 +255,7 @@ http {
 1. 本次FE启动时获取到的本机IP和上次启动不一致,通常是因为没有正确设置 `priority_network` 而导致 FE 启动时匹配到了错误的 IP 地址。需修改 `priority_network` 后重启 FE。
 2. 集群内多数 Follower FE 节点未启动。比如有 3 个 Follower,只启动了一个。此时需要将另外至少一个 FE 也启动,FE 可选举组方能选举出 Master 以提供服务。
 
-如果以上情况都不能解决,可以按照 Doris 官网文档中的[元数据运维文档](../admin-manual/maint-monitor/metadata-operation.html)进行恢复。
+如果以上情况都不能解决,可以按照 Doris 官网文档中的[元数据运维文档](../admin-manual/maint-monitor/metadata-operation.md)进行恢复。
 
 ### Q10. Lost connection to MySQL server at 'reading initial communication packet', system error: 0
 
@@ -265,11 +265,11 @@ http {
 
 有时重启 FE,会出现如上错误(通常只会出现在多 Follower 的情况下)。并且错误中的两个数值相差2。导致 FE 启动失败。
 
-这是 bdbje 的一个 bug,尚未解决。遇到这种情况,只能通过[元数据运维文档](../admin-manual/maint-monitor/metadata-operation.html) 中的 故障恢复 进行操作来恢复元数据了。
+这是 bdbje 的一个 bug,尚未解决。遇到这种情况,只能通过[元数据运维文档](../admin-manual/maint-monitor/metadata-operation.md) 中的 故障恢复 进行操作来恢复元数据了。
 
 ### Q12. Doris编译安装JDK版本不兼容问题
 
-在自己使用 Docker 编译 Doris 的时候,编译完成安装以后启动FE,出现 `java.lang.Suchmethoderror: java.nio. ByteBuffer. limit (I)Ljava/nio/ByteBuffer;` 异常信息,这是因为Docker里默认是JDK 11,如果你的安装环境是使用JDK8 ,需要在 Docker 里 JDK 环境切换成 JDK8,具体切换方法参照[编译文档](../install/source-install/compilation.html)
+在自己使用 Docker 编译 Doris 的时候,编译完成安装以后启动FE,出现 `java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;` 异常信息,这是因为Docker里默认是JDK 11,如果你的安装环境是使用JDK8,需要在 Docker 里将 JDK 环境切换成 JDK8,具体切换方法参照[编译文档](../install/source-install/compilation.md)
 
 ### Q13. 本地启动 FE 或者启动单元测试报错 Cannot find external parser table action_table.dat
 执行如下命令
@@ -287,7 +287,7 @@ cp fe-core/target/generated-sources/cup/org/apache/doris/analysis/action_table.d
 ```
 ERROR 1105 (HY000): errCode = 2, detailMessage = driver connect Error: HY000 [MySQL][ODBC 8.0(w) Driver]SSL connection error: Failed to set ciphers to use (2026)
 ```
-解决方式是使用`Connector/ODBC 8.0.28` 版本的 ODBC Connector, 并且选择 在操作系统处选择 `Linux - Generic`, 这个版本的ODBC Driver 使用 openssl 1.1 版本。具体使用方式见 [ODBC外表使用文档](../extending-doris/odbc-of-doris.md)
+解决方式是使用 `Connector/ODBC 8.0.28` 版本的 ODBC Connector,并且在操作系统处选择 `Linux - Generic`,这个版本的 ODBC Driver 使用 openssl 1.1 版本。具体使用方式见 [ODBC外表使用文档](../ecosystem/external-table/odbc-of-doris.md)
 可以通过如下方式验证 MySQL ODBC Driver 使用的openssl 版本
 ```
 ldd /path/to/libmyodbc8w.so |grep libssl.so
diff --git a/docs/zh-CN/faq/sql-faq.md b/docs/zh-CN/faq/sql-faq.md
index 45a620f8f2..ae8fadce10 100644
--- a/docs/zh-CN/faq/sql-faq.md
+++ b/docs/zh-CN/faq/sql-faq.md
@@ -30,7 +30,7 @@ under the License.
 
 这种情况是因为对应的 tablet 没有找到可以查询的副本,通常原因可能是 BE 宕机、副本缺失等。可以先通过 `show tablet tablet_id` 语句,然后执行后面的 `show proc` 语句,查看这个 tablet 对应的副本信息,检查副本是否完整。同时还可以通过 `show proc "/cluster_balance"` 信息来查询集群内副本调度和修复的进度。
 
-关于数据副本管理相关的命令,可以参阅 [数据副本管理](../admin-manual/maint-monitor/tablet-repair-and-balance.html)。
+关于数据副本管理相关的命令,可以参阅 [数据副本管理](../admin-manual/maint-monitor/tablet-repair-and-balance.md)。
 
 ### Q2. show backends/frontends 查看到的信息不完整
 
@@ -65,4 +65,4 @@ Doris的 Master FE 节点会主动发送心跳给各个FE或BE节点,并且在
 
 那么可能副本1 的结果是 `1, "abc"`,而副本2 的结果是 `1, "def"`。从而导致查询结果不一致。
 
-为了确保不同副本之间的数据先后顺序唯一,可以参考 [Sequence Column](../data-operate/update-delete/sequence-column-manual.html) 功能。
\ No newline at end of file
+为了确保不同副本之间的数据先后顺序唯一,可以参考 [Sequence Column](../data-operate/update-delete/sequence-column-manual.md) 功能。
diff --git a/docs/zh-CN/get-starting/get-starting.md b/docs/zh-CN/get-starting/get-starting.md
index 5b6c079c48..ee0da68266 100644
--- a/docs/zh-CN/get-starting/get-starting.md
+++ b/docs/zh-CN/get-starting/get-starting.md
@@ -214,7 +214,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
 - 执行 SQL 语句后,可在 FE 的 WEB-UI 界面查看对应的 SQL 语句执行 Report 信息
 
-如需获取完整的参数对照表,请至 [Profile 参数解析](../admin-manual/query-profile.html) 查看详情
+如需获取完整的参数对照表,请至 [Profile 参数解析](../admin-manual/query-profile.md) 查看详情
 
 #### 库表操作
 
@@ -230,7 +230,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
   CREATE DATABASE 数据库名;
   ```
 
-  > 关于 Create-DataBase 使用的更多详细语法及最佳实践,请参阅 [Create-DataBase](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.html) 命令手册。
+  > 关于 Create-DataBase 使用的更多详细语法及最佳实践,请参阅 [Create-DataBase](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-DATABASE.md) 命令手册。
   >
   > 如果不清楚命令的全名,可以使用 "help 命令某一字段" 进行模糊查询。如键入 'HELP CREATE',可以匹配到 `CREATE DATABASE`, `CREATE TABLE`, `CREATE USER` 等命令。
 
@@ -251,7 +251,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
 - 创建数据表
 
-  > 关于 Create-Table 使用的更多详细语法及最佳实践,请参阅 [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 命令手册。
+  > 关于 Create-Table 使用的更多详细语法及最佳实践,请参阅 [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 命令手册。
 
   使用 `CREATE TABLE` 命令建立一个表(Table)。更多详细参数可以查看:
 
@@ -265,7 +265,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
   USE example_db;
   ```
 
-  Doris 支持支持单分区和复合分区两种建表方式(详细区别请参阅 [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html) 命令手册) 。
+  Doris 支持单分区和复合分区两种建表方式(详细区别请参阅 [Create-Table](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) 命令手册)。
 
   下面以聚合模型为例,演示两种分区的建表语句。
 
@@ -390,7 +390,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
 1. Insert Into 插入
 
-   > 关于 Insert 使用的更多详细语法及最佳实践,请参阅 [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 命令手册。
+   > 关于 Insert 使用的更多详细语法及最佳实践,请参阅 [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 命令手册。
 
    Insert Into 语句的使用方式和 MySQL 等数据库中 Insert Into 语句的使用方式类似。但在 Doris 中,所有的数据写入都是一个独立的导入作业。所以这里将 Insert Into 也作为一种导入方式介绍。
 
@@ -440,7 +440,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
          - 如果 `status` 为 `visible`,表示数据导入成功。
       - 如果 `warnings` 大于 0,表示有数据被过滤,可以通过 `show load` 语句获取 url 查看被过滤的行。
 
-   更多详细说明,请参阅 [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.html) 命令手册。
+   更多详细说明,请参阅 [Insert](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) 命令手册。
 
 2. 批量导入
 
@@ -448,7 +448,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
    - Stream-Load
 
-     > 关于 Stream-Load 使用的更多详细语法及最佳实践,请参阅 [Stream-Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.html) 命令手册。
+     > 关于 Stream-Load 使用的更多详细语法及最佳实践,请参阅 [Stream-Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 命令手册。
 
      流式导入通过 HTTP 协议向 Doris 传输数据,可以不依赖其他系统或组件直接导入本地数据。详细语法帮助可以参阅 `HELP STREAM LOAD;`。
 
@@ -497,7 +497,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
      Broker 导入通过部署的 Broker 进程,读取外部存储上的数据进行导入。
 
-     > 关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.html) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
+     > 关于 Broker Load 使用的更多详细语法及最佳实践,请参阅 [Broker Load](../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP BROKER LOAD` 获取更多帮助信息。
 
      示例:以 "table1_20170708" 为 Label,将 HDFS 上的文件导入 table1 表
 
@@ -589,7 +589,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
 #### 更新数据
 
-> 关于 Update 使用的更多详细语法及最佳实践,请参阅 [Update](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.html) 命令手册。
+> 关于 Update 使用的更多详细语法及最佳实践,请参阅 [Update](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/UPDATE.md) 命令手册。
 
 当前 UPDATE 语句**仅支持**在 Unique 模型上的行更新,存在并发更新导致的数据冲突可能。 目前 Doris 并不处理这类问题,需要用户从业务侧规避这类问题。
 
@@ -634,7 +634,7 @@ FE 将查询计划拆分成为 Fragment 下发到 BE 进行任务执行。BE 在
 
 #### 删除数据
 
-> 关于 Delete 使用的更多详细语法及最佳实践,请参阅 [Delete](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.html) 命令手册。
+> 关于 Delete 使用的更多详细语法及最佳实践,请参阅 [Delete](../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/DELETE.md) 命令手册。
 
 1. 语法规则
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
index 12c8414614..f1da090d6b 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md
@@ -32,7 +32,7 @@ ALTER TABLE  COLUMN
 
 ### Description
 
-该语句用于对已有 table 进行 Schema change 操作。schema change 是异步的,任务提交成功则返回,之后可使用[SHOW ALTER](../../Show-Statements/SHOW-ALTER.html) 命令查看进度。
+该语句用于对已有 table 进行 Schema change 操作。schema change 是异步的,任务提交成功则返回,之后可使用[SHOW ALTER](../../Show-Statements/SHOW-ALTER.md) 命令查看进度。
 
 语法:
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 240526f74e..3e15d6f887 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ partition_desc ["key"="value"]
 - 分区为左闭右开区间,如果用户仅指定右边界,系统会自动确定左边界
 - 如果没有指定分桶方式,则自动使用建表使用的分桶方式
 - 如指定分桶方式,只能修改分桶数,不可修改分桶方式或分桶列
-- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](./sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.html)
+- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](../Create/CREATE-TABLE.md)
 - 如果建表时用户未显式创建Partition,则不支持通过ALTER的方式增加分区
 
 2. 删除分区
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index 40b4ad65fe..306ec4e6fb 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -32,7 +32,7 @@ ALTER TABLE REPLACE
 
 ### Description
 
-该语句用于对已有 table 的 Schema 的进行属性的修改操作。语法基本类似于 [ALTER TABLE CULUMN](ALTER-TABLE-COLUMN.html)。
+该语句用于对已有 table 的 Schema 进行属性修改操作。语法基本类似于 [ALTER TABLE COLUMN](ALTER-TABLE-COLUMN.md)。
 
 ```sql
 ALTER TABLE [database.]table MODIFY NEW_COLUMN_INFO REPLACE OLD_COLUMN_INFO ;
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index b5c520a072..0a683c3ff6 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -32,7 +32,7 @@ ALTER TABLE ROLLUP
 
 ### Description
 
-该语句用于对已有 table 进行 rollup 进行修改操作。rollup 是异步操作,任务提交成功则返回,之后可使用[SHOW ALTER](../../Show-Statements/SHOW-ALTER.html) 命令查看进度。
+该语句用于对已有 table 的 rollup 进行修改操作。rollup 是异步操作,任务提交成功则返回,之后可使用[SHOW ALTER](../../Show-Statements/SHOW-ALTER.md) 命令查看进度。
 
 语法:
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
index 1115f534b7..e5b1778713 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
@@ -96,7 +96,7 @@ BACKUP
 
 1. 同一个数据库下只能进行一个备份操作。
 
-2. 备份操作会备份指定表或分区的基础表及 [物化视图](../../../../advanced/materialized-view.html),并且仅备份一副本。
+2. 备份操作会备份指定表或分区的基础表及 [物化视图](../../../../advanced/materialized-view.md),并且仅备份一副本。
 
 3. 备份操作的效率
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
index 56e0366c78..b545a433cb 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/CREATE-REPOSITORY.md
@@ -114,5 +114,5 @@ PROPERTIES
 ### Best Practice
 
 1. 一个集群可以创建多个仓库。只有拥有 ADMIN 权限的用户才能创建仓库。
-2. 任何用户都可以通过 [SHOW REPOSITORIES](../../Show-Statements/SHOW-REPOSITORIES.html) 命令查看已经创建的仓库。
+2. 任何用户都可以通过 [SHOW REPOSITORIES](../../Show-Statements/SHOW-REPOSITORIES.md) 命令查看已经创建的仓库。
 3. 在做数据迁移操作时,需要在源集群和目的集群创建完全相同的仓库,以便目的集群可以通过这个仓库,查看到源集群备份的数据快照。
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
index 978f8fe6d9..c9ef66be9a 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md
@@ -116,4 +116,4 @@ PROPERTIES
 
 4. 恢复操作的效率:
 
-   在集群规模相同的情况下,恢复操作的耗时基本等同于备份操作的耗时。如果想加速恢复操作,可以先通过设置 `replication_num` 参数,仅恢复一个副本,之后在通过调整副本数 [ALTER TABLE PROPERTY](../../Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.html),将副本补齐。
+   在集群规模相同的情况下,恢复操作的耗时基本等同于备份操作的耗时。如果想加速恢复操作,可以先通过设置 `replication_num` 参数,仅恢复一个副本,之后再通过调整副本数 [ALTER TABLE PROPERTY](../../Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md),将副本补齐。
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
index 96bd2f7d9d..d22022ed2a 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.md
@@ -32,7 +32,7 @@ CREATE EXTERNAL TABLE
 
 ### Description
 
-此语句用来创建外部表,具体语法参阅 [CREATE TABLE](./CREATE-TABLE.html)。
+此语句用来创建外部表,具体语法参阅 [CREATE TABLE](./CREATE-TABLE.md)。
 
 主要通过 ENGINE 类型来标识是哪种类型的外部表,目前可选 MYSQL、BROKER、HIVE、ICEBERG
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
index 24caeb8976..4b2074dcc1 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
@@ -34,7 +34,7 @@ CREATE MATERIALIZED VIEW
 
 该语句用于创建物化视图。
 
-该操作为异步操作,提交成功后,需通过 [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.html) 查看作业进度。在显示 FINISHED 后既可通过 `desc [table_name] all` 命令来查看物化视图的 schema 了。
+该操作为异步操作,提交成功后,需通过 [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md) 查看作业进度。在显示 FINISHED 后即可通过 `desc [table_name] all` 命令来查看物化视图的 schema 了。
 
 语法:
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
index 54f2774b9d..34104be0f1 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -28,7 +28,7 @@ under the License.
 
 ### Description
 
-该命令用于创建一张表。本文档主语介绍创建 Doris 自维护的表的语法。外部表语法请参阅 [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.html)文档。
+该命令用于创建一张表。本文档主要介绍创建 Doris 自维护的表的语法。外部表语法请参阅 [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.md)文档。
 
 ```sql
 CREATE TABLE [IF NOT EXISTS] [database.]table
@@ -150,7 +150,7 @@ distribution_info
 
 * `engine_type`
 
-    表引擎类型。本文档中类型皆为 OLAP。其他外部表引擎类型见 [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.html) 文档。示例:
+    表引擎类型。本文档中类型皆为 OLAP。其他外部表引擎类型见 [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.md) 文档。示例:
     
     `ENGINE=olap`
     
@@ -536,7 +536,7 @@ distribution_info
 
 #### 分区和分桶
 
-一个表必须指定分桶列,但可以不指定分区。关于分区和分桶的具体介绍,可参阅 [数据划分](../../../../data-table/data-partition.html) 文档。
+一个表必须指定分桶列,但可以不指定分区。关于分区和分桶的具体介绍,可参阅 [数据划分](../../../../data-table/data-partition.md) 文档。
 
 Doris 中的表可以分为分区表和无分区的表。这个属性在建表时确定,之后不可更改。即对于分区表,可以在之后的使用过程中对分区进行增删操作,而对于无分区的表,之后不能再进行增加分区等操作。
 
@@ -546,7 +546,7 @@ Doris 中的表可以分为分区表和无分区的表。这个属性在建表
 
 #### 动态分区
 
-动态分区功能主要用于帮助用户自动的管理分区。通过设定一定的规则,Doris 系统定期增加新的分区或删除历史分区。可参阅 [动态分区](../../../../advanced/partition/dynamic-partition.html) 文档查看更多帮助。
+动态分区功能主要用于帮助用户自动的管理分区。通过设定一定的规则,Doris 系统定期增加新的分区或删除历史分区。可参阅 [动态分区](../../../../advanced/partition/dynamic-partition.md) 文档查看更多帮助。
 
 #### 物化视图
 
@@ -556,7 +556,7 @@ Doris 中的表可以分为分区表和无分区的表。这个属性在建表
 
 如果在之后的使用过程中添加物化视图,且表中已有数据,则物化视图的创建时间取决于当前数据量大小。
 
-关于物化视图的介绍,请参阅文档 [物化视图](../../../../advanced/materialized-view.html)。
+关于物化视图的介绍,请参阅文档 [物化视图](../../../../advanced/materialized-view.md)。
 
 #### 索引
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
index b85d7c9e8e..aeeb89f05b 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
@@ -41,7 +41,7 @@ DROP DATABASE [IF EXISTS] db_name [FORCE];
 
 说明:
 
-- 执行 DROP DATABASE 一段时间内,可以通过 RECOVER 语句恢复被删除的数据库。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) 语句
+- 执行 DROP DATABASE 一段时间内,可以通过 RECOVER 语句恢复被删除的数据库。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) 语句
 - 如果执行 DROP DATABASE FORCE,则系统不会检查该数据库是否存在未完成的事务,数据库将直接被删除并且不能被恢复,一般不建议执行此操作
 
 ### Example
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index acd771e49d..6a46b0e3e9 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 说明:
 
-- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) 语句
+- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) 语句
 - 如果执行 DROP TABLE FORCE,则系统不会检查该表是否存在未完成的事务,表将直接被删除并且不能被恢复,一般不建议执行此操作
 
 ### Example
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index 5ec53df9ac..f0d589351a 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.html) 文档。
+    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.html) 文档。
+    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.html) 文档。
+    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is typically used so that the Broker can access a remote storage system, such as BOS or HDFS. For specifics, see the [Broker](../../../../advanced/broker.html) document.
+  Specifies the information required by the broker. This information is typically used so that the Broker can access a remote storage system, such as BOS or HDFS. For specifics, see the [Broker](../../../../advanced/broker.md) document.
 
   ```text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    Specifies the time zone used by time-zone-sensitive functions such as `strftime/alignment_timestamp/from_unixtime`. See the [Time Zone](../../../../advanced/time-zone.html) document for details. If not specified, the "Asia/Shanghai" time zone is used.
+    Specifies the time zone used by time-zone-sensitive functions such as `strftime/alignment_timestamp/from_unixtime`. See the [Time Zone](../../../../advanced/time-zone.md) document for details. If not specified, the "Asia/Shanghai" time zone is used.
 
 ### Example
 
@@ -400,29 +400,29 @@ WITH BROKER broker_name
 
 1. Checking the import task status
 
-   Broker Load is an asynchronous import process. Successful execution of the statement only means the import task was submitted successfully; it does not mean the data was imported successfully. The import status must be checked with the [SHOW LOAD](../../Show-Statements/SHOW-LOAD.html) command.
+   Broker Load is an asynchronous import process. Successful execution of the statement only means the import task was submitted successfully; it does not mean the data was imported successfully. The import status must be checked with the [SHOW LOAD](../../Show-Statements/SHOW-LOAD.md) command.
 
 2. Cancelling an import task
 
-   An import task that has been submitted but has not yet finished can be cancelled with the [CANCEL LOAD](./CANCEL-LOAD.html) command. After cancellation, any data already written is rolled back and will not take effect.
+   An import task that has been submitted but has not yet finished can be cancelled with the [CANCEL LOAD](./CANCEL-LOAD.md) command. After cancellation, any data already written is rolled back and will not take effect.
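The two notes above can be exercised with statements like the following minimal sketch, in which the database name and label are hypothetical placeholders, not values from this patch:

```sql
-- Check the status of the asynchronous import job (example_db / example_label are placeholders)
SHOW LOAD FROM example_db WHERE LABEL = "example_label";

-- Cancel the job while it is submitted but not yet finished
CANCEL LOAD FROM example_db WHERE LABEL = "example_label";
```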
 
 3. Label, import transactions, and multi-table atomicity
 
-   All import tasks in Doris take effect atomically, and atomicity is also guaranteed when importing into multiple tables within the same import task. In addition, Doris can use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity.html) document.
+   All import tasks in Doris take effect atomically, and atomicity is also guaranteed when importing into multiple tables within the same import task. In addition, Doris can use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity.md) document.
 
 4. Column mapping, derived columns, and filtering
 
-   Doris supports a very rich set of column transformation and filtering operations in import statements, covering most built-in functions and UDFs. For how to use this feature correctly, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   Doris supports a very rich set of column transformation and filtering operations in import statements, covering most built-in functions and UDFs. For how to use this feature correctly, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 5. Filtering erroneous data
 
    Doris import tasks can tolerate a portion of incorrectly formatted data. The tolerance rate is set with `max_filter_ratio`. It defaults to 0, which means the entire import task fails as soon as there is a single row of erroneous data. If users want to ignore some problematic data rows, they can set this parameter to a value between 0 and 1, and Doris will automatically skip rows whose data format is incorrect.
 
-   For some of the ways the tolerance rate is calculated, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   For some of the ways the tolerance rate is calculated, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 6. Strict mode
 
-   The `strict_mode` property sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [Strict Mode](../../../../data-operate/import/import-scenes/load-strict-mode.html) document.
+   The `strict_mode` property sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [Strict Mode](../../../../data-operate/import/import-scenes/load-strict-mode.md) document.
 
 7. Timeout
 
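Pulling together the clauses discussed in the hunks above, a rough sketch of a Broker Load statement follows; all table names, paths, credentials, and property values are hypothetical placeholders, not taken from this patch:

```sql
LOAD LABEL example_db.example_label
(
    DATA INFILE("hdfs://host:port/path/to/file.csv")
    INTO TABLE example_tbl
    COLUMNS TERMINATED BY ","
    (k1, k2, tmpk1)                 -- column list: the column order in the source file
    PRECEDING FILTER k1 > 0         -- pre-filter applied to the assembled raw rows
    SET (k3 = tmpk1 + 1)            -- column mapping: a derived column
    WHERE k2 != "invalid"           -- filter applied after transformation
)
WITH BROKER broker_name
(
    "username" = "user",
    "password" = "pass"
)
PROPERTIES
(
    "timeout" = "3600",
    "max_filter_ratio" = "0.1",
    "strict_mode" = "false",
    "timezone" = "Asia/Shanghai"
);
```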
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index 4adfae3dcd..a62eda8bf5 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ CREATE SYNC JOB
 
 Currently, data sync jobs only support connecting to Canal: parsed Binlog data is obtained from the Canal Server and imported into Doris.
 
-Users can check the status of a data sync job with [SHOW SYNC JOB](../../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.html).
+Users can check the status of a data sync job with [SHOW SYNC JOB](../../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.md).
 
 Syntax:
 
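For orientation, a hedged sketch of what such a Canal-backed sync job and its status check might look like; every name and connection parameter below is a hypothetical placeholder:

```sql
CREATE SYNC example_db.example_job
(
    FROM mysql_db.source_tbl INTO target_tbl
)
FROM BINLOG
(
    "type" = "canal",
    "canal.server.ip" = "127.0.0.1",
    "canal.server.port" = "11111",
    "canal.destination" = "example",
    "canal.username" = "canal",
    "canal.password" = "canal"
);

-- Then check the job status
SHOW SYNC JOB FROM example_db;
```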
diff --git a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
index 764b82cc09..a3874b5f02 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
@@ -391,21 +391,21 @@ curl --location-trusted -u root -H "columns: k1,k2,source_sequence,v1,v2" -H "fu
 
 4. Label, import transactions, and multi-table atomicity
 
-   All import tasks in Doris take effect atomically, and atomicity is also guaranteed when importing into multiple tables within the same import task. In addition, Doris can use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity.html) document.
+   All import tasks in Doris take effect atomically, and atomicity is also guaranteed when importing into multiple tables within the same import task. In addition, Doris can use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity.md) document.
 
 5. Column mapping, derived columns, and filtering
 
-   Doris supports a very rich set of column transformation and filtering operations in import statements, covering most built-in functions and UDFs. For how to use this feature correctly, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   Doris supports a very rich set of column transformation and filtering operations in import statements, covering most built-in functions and UDFs. For how to use this feature correctly, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 6. Filtering erroneous data
 
    Doris import tasks can tolerate a portion of incorrectly formatted data. The tolerance rate is set with `max_filter_ratio`. It defaults to 0, which means the entire import task fails as soon as there is a single row of erroneous data. If users want to ignore some problematic data rows, they can set this parameter to a value between 0 and 1, and Doris will automatically skip rows whose data format is incorrect.
 
-   For some of the ways the tolerance rate is calculated, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.html) document.
+   For some of the ways the tolerance rate is calculated, see the [Column Mapping, Transformation and Filtering](../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 7. Strict mode
 
-   The `strict_mode` property sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [Strict Mode](../../../../data-operate/import/import-scenes/load-strict-mode.html) document.
+   The `strict_mode` property sets whether the import task runs in strict mode. This mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [Strict Mode](../../../../data-operate/import/import-scenes/load-strict-mode.md) document.
 
 8. Timeout
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md b/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
index 2c59440182..62ee04bd96 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-This command is used to check the execution status of materialized view creation jobs submitted with the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.html) statement.
+This command is used to check the execution status of materialized view creation jobs submitted with the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md b/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
index 16e482c609..87e094fa11 100644
--- a/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
+++ b/docs/zh-CN/sql-manual/sql-reference/Show-Statements/SHOW-PROC.md
@@ -80,20 +80,20 @@ mysql> show proc "/";
 Note:
 
 1. statistics: Mainly used for an aggregated view of the number of databases, tables, partitions, tablets, and replicas in the Doris cluster, as well as the number of unhealthy replicas. This information helps gauge the overall scale of the cluster's metadata and view the cluster's tablet health from a global perspective, so that problematic data tablets can be further located.
-2. brokers: Views the cluster's Broker node information, equivalent to [SHOW BROKER](./SHOW-BROKER.html)
-3. frontends: Shows information about all FE nodes in the cluster, including IP address, role, status, whether the node is master, etc., equivalent to [SHOW FRONTENDS](./SHOW-FRONTENDS.html)
+2. brokers: Views the cluster's Broker node information, equivalent to [SHOW BROKER](./SHOW-BROKER.md)
+3. frontends: Shows information about all FE nodes in the cluster, including IP address, role, status, whether the node is master, etc., equivalent to [SHOW FRONTENDS](./SHOW-FRONTENDS.md)
 4. routine_loads: Shows information about all routine load jobs, including job name, status, etc.
 5. auth: User names and their corresponding privilege information
 6. jobs:
 7. bdbje: Views the list of bdbje databases. You need to modify the `fe.conf` file to add `enable_bdbje_debug_mode=true`, then start `FE` with `sh start_fe.sh --daemon` to enter `debug` mode. In `debug` mode, only the `http server` and `MySQLServer` are started and the `BDBJE` instance is opened, but no metadata loading or other subsequent startup steps take place.
 8. dbs: Mainly used to view the metadata of each database and its tables in the Doris cluster, including table schema, partitions, materialized views, data tablets, replicas, and so on. Through this directory and its subdirectories, the table metadata in the cluster can be clearly displayed, and problems such as data skew and replica failures can be located.
-9. resources: Views system resources. Ordinary accounts can only see the resources on which they have the USAGE_PRIV privilege; only the root and admin accounts can see all resources. Equivalent to [SHOW RESOURCES](./SHOW-RESOURCES.html)
+9. resources: Views system resources. Ordinary accounts can only see the resources on which they have the USAGE_PRIV privilege; only the root and admin accounts can see all resources. Equivalent to [SHOW RESOURCES](./SHOW-RESOURCES.md)
 10. monitor: Shows the resource usage of the FE JVM
-11. transactions: Views the details of the transaction with the specified transaction id, equivalent to [SHOW TRANSACTION](./SHOW-TRANSACTION.html)
-12. colocation_group: Views information about the Groups that already exist in the cluster; see the [Colocation Join](../../../advanced/join-optimization/colocation-join.html) section for details
-13. backends: Shows the list of BE nodes in the cluster, equivalent to [SHOW BACKENDS](./SHOW-BACKENDS.html)
-14. trash: Views the space occupied by trash data on the backends, equivalent to [SHOW TRASH](./SHOW-TRASH.html)
-15. cluster_balance: Views the cluster balance status; see [Data Replica Management](../../../admin-manual/maint-monitor/tablet-repair-and-balance.html) for details
+11. transactions: Views the details of the transaction with the specified transaction id, equivalent to [SHOW TRANSACTION](./SHOW-TRANSACTION.md)
+12. colocation_group: Views information about the Groups that already exist in the cluster; see the [Colocation Join](../../../advanced/join-optimization/colocation-join.md) section for details
+13. backends: Shows the list of BE nodes in the cluster, equivalent to [SHOW BACKENDS](./SHOW-BACKENDS.md)
+14. trash: Views the space occupied by trash data on the backends, equivalent to [SHOW TRASH](./SHOW-TRASH.md)
+15. cluster_balance: Views the cluster balance status; see [Data Replica Management](../../../admin-manual/maint-monitor/tablet-repair-and-balance.md) for details
 16. current_queries: Views the list of queries currently executing, i.e. the SQL statements currently running.
 17. load_error_hub: Doris supports centrally storing error information generated by load jobs in an error hub; the errors can then be viewed directly with the <code>SHOW LOAD WARNINGS;</code> statement. Shown here is the error hub's configuration.
 18. current_backend_instances: Shows the list of BE nodes currently executing jobs
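The directory entries listed above are navigated by passing a path string to SHOW PROC; for example (paths follow the listing above):

```sql
SHOW PROC "/";                 -- top-level directory, as in the example output earlier in this file
SHOW PROC "/dbs";              -- entry 8: database and table metadata
SHOW PROC "/cluster_balance";  -- entry 15: cluster balance status
```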


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org