Posted to commits@doris.apache.org by ji...@apache.org on 2022/10/25 07:53:35 UTC

[doris-website] branch master updated: link 404

This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4b205bca302 link 404
4b205bca302 is described below

commit 4b205bca302e524e8d412cc101997e048a1f5de4
Author: jiafeng.zhang <zh...@gmail.com>
AuthorDate: Tue Oct 25 15:52:17 2022 +0800

    link 404
---
 docs/admin-manual/config/be-config.md              |   2 +-
 .../maint-monitor/metadata-operation.md            |   2 +-
 docs/admin-manual/maint-monitor/multi-tenant.md    | 238 ---------------------
 .../maint-monitor/tablet-repair-and-balance.md     |   2 +-
 docs/advanced/alter-table/replace-table.md         |   2 +-
 docs/advanced/alter-table/schema-change.md         |   2 +-
 docs/data-operate/export/outfile.md                |   2 +-
 .../import/import-scenes/external-storage-load.md  |   2 +-
 .../data-operate/import/import-scenes/jdbc-load.md |   2 +-
 .../import/import-scenes/kafka-load.md             |   2 +-
 docs/data-table/basic-usage.md                     |   2 +-
 docs/data-table/hit-the-rollup.md                  |   2 +-
 docs/ecosystem/doris-manager/space-list.md         |   2 +-
 docs/ecosystem/logstash.md                         |   2 +-
 docs/ecosystem/udf/contribute-udf.md               |   2 +-
 docs/faq/install-faq.md                            |   8 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |   2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |   2 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |   2 +-
 .../Drop/DROP-DATABASE.md                          |   2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |   2 +-
 .../Load/BROKER-LOAD.md                            |  18 +-
 .../Load/CREATE-SYNC-JOB.md                        |   2 +-
 .../Load/STREAM-LOAD.md                            |  10 +-
 .../SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md          |   2 +-
 .../cluster-management/elastic-expansion.md        |   2 +-
 .../current/admin-manual/config/be-config.md       |   2 +-
 .../maint-monitor/metadata-operation.md            |   2 +-
 .../maint-monitor/tablet-repair-and-balance.md     |   2 +-
 .../current/admin-manual/multi-tenant.md           | 232 --------------------
 .../current/advanced/alter-table/replace-table.md  |   2 +-
 .../current/advanced/alter-table/schema-change.md  |   2 +-
 .../import/import-scenes/external-storage-load.md  |   2 +-
 .../data-operate/import/import-scenes/jdbc-load.md |   2 +-
 .../current/data-table/basic-usage.md              |   2 +-
 .../current/data-table/hit-the-rollup.md           |   2 +-
 .../current/ecosystem/doris-manager/space-list.md  |   2 +-
 .../current/ecosystem/logstash.md                  |   2 +-
 .../current/ecosystem/udf/contribute-udf.md        |   2 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |   2 +-
 .../Alter/ALTER-TABLE-REPLACE.md                   |   4 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |   2 +-
 .../Drop/DROP-DATABASE.md                          |   2 +-
 .../Data-Definition-Statements/Drop/DROP-TABLE.md  |   2 +-
 .../Load/BROKER-LOAD.md                            |  22 +-
 .../Load/CREATE-SYNC-JOB.md                        |   2 +-
 .../SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md          |   2 +-
 .../alter-table/alter-table-bitmap-index.md        |  20 +-
 .../alter-table/alter-table-replace-table.md       |   4 +-
 .../alter-table/alter-table-temp-partition.md      |   2 +-
 .../administrator-guide/config/fe_config.md        |   2 +-
 .../load-data/broker-load-manual.md                |   2 +-
 .../load-data/routine-load-manual.md               |   2 +-
 .../administrator-guide/operation/disk-capacity.md |   4 +-
 .../version-0.15/administrator-guide/outfile.md    |   2 +-
 .../administrator-guide/resource-management.md     |   2 +-
 .../best-practices/star-schema-benchmark.md        |   8 +-
 .../version-0.15/extending-doris/logstash.md       |   4 +-
 .../alter-table/alter-table-bitmap-index.md        |  18 +-
 .../alter-table/alter-table-replace-table.md       |   4 +-
 .../alter-table/alter-table-temp-partition.md      |   2 +-
 .../administrator-guide/config/fe_config.md        |   2 +-
 .../load-data/broker-load-manual.md                |   2 +-
 .../load-data/routine-load-manual.md               |   2 +-
 .../administrator-guide/operation/disk-capacity.md |   4 +-
 .../version-0.15/administrator-guide/outfile.md    |   2 +-
 .../administrator-guide/resource-management.md     |   2 +-
 .../best-practices/star-schema-benchmark.md        |   4 +-
 68 files changed, 118 insertions(+), 588 deletions(-)

diff --git a/docs/admin-manual/config/be-config.md b/docs/admin-manual/config/be-config.md
index 66282b1acfb..e27118a83ed 100644
--- a/docs/admin-manual/config/be-config.md
+++ b/docs/admin-manual/config/be-config.md
@@ -450,7 +450,7 @@ Cgroups assigned to doris
 ### `doris_max_scan_key_num`
 
 * Type: int
-* Description: Used to limit the maximum number of scan keys that a scan node can split in a query request. When a conditional query request reaches the scan node, the scan node will try to split the conditions related to the key column in the query condition into multiple scan key ranges. After that, these scan key ranges will be assigned to multiple scanner threads for data scanning. A larger value usually means that more scanner threads can be used to increase the parallelism of the s [...]
+* Description: Used to limit the maximum number of scan keys that a scan node can split in a query request. When a conditional query request reaches the scan node, the scan node will try to split the conditions related to the key column in the query condition into multiple scan key ranges. After that, these scan key ranges will be assigned to multiple scanner threads for data scanning. A larger value usually means that more scanner threads can be used to increase the parallelism of the s [...]
 * Default value: 1024
 
 When the concurrency cannot be improved in high concurrency scenarios, try to reduce this value and observe the impact.
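 
 A minimal session-level sketch (assuming `max_scan_key_num` is the session variable referenced in the description above, and using 50, the empirical value it mentions):
 
 ```sql
 -- Lower the scan-key split limit for this session only, then re-check concurrency.
 SET max_scan_key_num = 50;
 ```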
diff --git a/docs/admin-manual/maint-monitor/metadata-operation.md b/docs/admin-manual/maint-monitor/metadata-operation.md
index bc2439ff586..ca038533ec5 100644
--- a/docs/admin-manual/maint-monitor/metadata-operation.md
+++ b/docs/admin-manual/maint-monitor/metadata-operation.md
@@ -32,7 +32,7 @@ For the time being, read the [Doris metadata design document](/community/design/
 
 ## Important tips
 
-* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../admin-manual/cluster-management/upgrade).
+* Current metadata design is not backward compatible. That is, if the new version has a new metadata structure change (you can see whether there is a new VERSION in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading to the new version. Therefore, before upgrading FE, be sure to test metadata compatibility according to the operations in the [Upgrade Document](../../admin-manual/cluster-management/upgrade.md).
 
 ## Metadata catalog structure
 
diff --git a/docs/admin-manual/maint-monitor/multi-tenant.md b/docs/admin-manual/maint-monitor/multi-tenant.md
deleted file mode 100644
index b47a523e5e6..00000000000
--- a/docs/admin-manual/maint-monitor/multi-tenant.md
+++ /dev/null
@@ -1,238 +0,0 @@
----
-{
-    "title": "Multi-tenancy(Deprecated)",
-    "language": "en"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Multi-tenancy(Deprecated)
-
-This function is deprecated. Please see [Multi-Tenant](../multi-tenant.md).
-
-## Background
-Doris, as a PB-scale online reporting and multi-dimensional analysis database, provides cloud-based database services on public cloud, deploying a physical cluster for each client. Internally, one physical cluster hosts multiple services, and separate clusters are built for services with high isolation requirements. This setup raises the following problems:
-
-- Maintaining multiple physical clusters is costly (upgrades, feature rollouts, bug fixes).
-- One user's query, or a bug triggered by a query, often affects other users.
-- In a production environment, only one BE process can be deployed on a single machine. Running multiple BEs better mitigates the fat-node problem and provides higher concurrency for join and aggregation operations.
-
-Given these three points, Doris needs a new multi-tenant scheme that achieves better resource and fault isolation while reducing maintenance costs, meeting the needs of both public and private clouds.
-
-## Design Principles
-
-- Easy to use
-- Low Development Cost
-- Convenient migration of existing clusters
-
-## Noun Interpretation
-
-- FE: Frontend, the module for metadata management or query planning in Doris.
-- BE: Backend, the module used to store and query data in Doris.
-- Master: A role for FE. A Doris cluster has only one Master; the other FEs are Observers or Followers.
-- instance: A single running BE process is an instance.
-- host: a single physical machine
-- Cluster: A cluster consisting of multiple instances.
-- Tenant: A cluster belongs to one tenant; clusters and tenants are in one-to-one correspondence.
-- database: A user-created database
-
-## Main Ideas
-
-- Deploy instances of multiple BEs on a host to isolate resources at the process level.
-- Multiple instances form a cluster, and a cluster is assigned to a business-independent tenant.
-- FE adds cluster level and is responsible for cluster management.
-- CPU, IO, memory and other resources are segregated by cgroup.
-
-## Design scheme
-
-In order to achieve isolation, the concept of **virtual cluster** is introduced.
-
-1. Cluster represents a virtual cluster consisting of instances of multiple BEs. Multiple clusters share FE.
-2. Multiple instances can be started on a host. When a cluster is created, an arbitrary number of instances are selected to form a cluster.
-3. While creating a cluster, an account named superuser is created, which belongs to the cluster. Super user can manage clusters, create databases, assign privileges, and so on.
-4. After Doris starts, the system creates a default cluster: default_cluster. If a user does not want to use the multi-cluster functionality, this default cluster is provided and the other operational details of multi-cluster are hidden.
-
-The concrete structure is as follows:
-![](/images/multi_tenant_arch.png)
-
-## SQL interface
-
-- Login
-
-	Default cluster login name: user_name@default_cluster or user_name
-
-	Custom cluster login name: user_name@cluster_name
-
-	`mysqlclient -h host -P port -u user_name@cluster_name -p password`
-
-- Add, delete, decommission and cancel BE
-
-	`ALTER SYSTEM ADD BACKEND "host:port"`
-	`ALTER SYSTEM DROP BACKEND "host:port"`
-	`ALTER SYSTEM DECOMMISSION BACKEND "host:port"`
-	`CANCEL DECOMMISSION BACKEND "host:port"`
-
-	It is strongly recommended to use DECOMMISSION rather than DROP to remove a BACKEND. The DECOMMISSION operation first copies the data on the node being removed to other instances in the cluster; only then is the node taken offline, as sketched below.
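-
-	A minimal sketch of the recommended flow (host name hypothetical):
-
-	```sql
-	-- Take the node offline safely; its data is copied away first.
-	ALTER SYSTEM DECOMMISSION BACKEND "host1:9050";
-	-- Watch progress; the node is removed once decommission completes.
-	SHOW PROC "/backends";
-	```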
-
-- Create a cluster and specify the password for the superuser account
-
-	`CREATE CLUSTER cluster_name PROPERTIES ("instance_num" = "10") identified by "password"`
-
-- Enter a cluster
-
-	`ENTER cluster_name`
-
-- Cluster Expansion and Shrinkage
-
-	`ALTER CLUSTER cluster_name PROPERTIES ("instance_num" = "10")`
-
-	If the specified number of instances is greater than the number of existing BEs in the cluster, the cluster is expanded; if it is smaller, the cluster is shrunk.
-
-- Link, migrate DB
-
-	`LINK DATABASE src_cluster_name.db_name dest_cluster_name.db_name`
-
-	Soft-links a database in one cluster to a database in another cluster, for users who need temporary access to another cluster's database without actually migrating the data.
-
-	`MIGRATE DATABASE src_cluster_name.db_name dest_cluster_name.db_name`
-
-	To migrate a database across clusters, link it first, then perform the actual data migration.
-
-	Migration does not affect queries, imports, or other operations on either database. It is an asynchronous operation; you can check its progress with `SHOW MIGRATIONS`. A combined example is sketched below.
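-
-	A minimal end-to-end sketch (cluster and database names hypothetical):
-
-	```sql
-	-- Make cluster_b's database visible from cluster_a without moving data.
-	LINK DATABASE cluster_b.sales cluster_a.sales;
-	-- Later, physically move the data; the link is removed when done.
-	MIGRATE DATABASE cluster_b.sales cluster_a.sales;
-	-- Track the asynchronous migration.
-	SHOW MIGRATIONS;
-	```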
-
-- Delete clusters
-
-	`DROP CLUSTER cluster_name`
-
-	Deleting a cluster requires that all databases in the cluster be deleted manually first.
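-
-	For example (names hypothetical):
-
-	```sql
-	-- Every database in the cluster must be dropped before the cluster itself.
-	DROP DATABASE example_db;
-	DROP CLUSTER cluster_a;
-	```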
-
-- Others
-
-	`SHOW CLUSTERS`
-
-	Show clusters that have been created in the system. Only root users have this permission.
-
-	`SHOW BACKENDS`
-
-	View the BE instance in the cluster.
-
-	`SHOW MIGRATIONS`
-
-	Shows current database migration tasks. After initiating a database migration, you can view its progress through this command.
-
-## Detailed design
-
-1. Namespace isolation
-
-	In order to introduce multi-tenant, the namespaces between clusters in the system need to be isolated.
-
-	Doris's existing metadata is image + journal (see the metadata design documents). Doris records metadata operations as journal entries, regularly writes images in the form of **Fig. 1**, and reads them back in write order when loading. But this brings a problem: a format that has already been written is not easy to modify. For example, the metadata format for recording data distribution is nested as database + table + tablet + replica. If we want to is [...]
-
-	- Adding one more layer is an incompatible metadata change. Metadata would need to be written at the cluster+db+table+tablet+replica level, as in Figure 2, which changes the previous organization of metadata and makes upgrading from old versions troublesome. The ideal way is to keep the existing metadata format and write the cluster information in the order shown in Figure 3.
-
-	- Every DB and user referenced in the code needs an extra cluster layer. The changes are extensive and deep: most of the code acquires a db, so almost every existing function would need to change, and a layer of cluster locks would have to be nested on top of the DB locks.
-
-	![](/images/palo_meta.png)
-
-	To sum up, we prefix DB and user names with their cluster name, which avoids the internal conflicts caused by duplicate DB and user names across clusters.
-
-	As shown below, wherever SQL input involves a db name or user name, the full name must be spelled out according to the cluster it belongs to.
-
-	![](/images/cluster_namaspace.png)
-
-	In this way, the above two problems no longer exist, and the metadata is organized in a relatively simple way. That is, as in **Figure 3**, each cluster records the db, user, and nodes that belong to it.
-
-2. BE node management
-
-	Each cluster has its own set of instances, which can be viewed through `SHOW BACKENDS`. In order to distinguish which cluster the instance belongs to and how it is used, BE introduces several states:
-
-	- Free: when a BE node is added to the system and does not belong to any cluster, it is idle.
-	- Use: once selected into a cluster during creation or expansion, it is in use.
-	- Cluster decommission: while a shrink is executing, the BE being shrunk is in this state; afterwards, its state becomes free.
-	- System decommission: the BE is being taken offline; once the offlining completes, the BE is permanently deleted.
-
-	Only the root user can check whether each BE in the system is in use, via the cluster field in `SHOW PROC "/backends"`: an empty field means the BE is idle, otherwise it is in use. `SHOW BACKENDS` can only see the nodes within the current cluster. The following is a schematic diagram of the state changes of BE nodes.
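-
-	A minimal sketch of that check, run as root (reading the cluster field described above):
-
-	```sql
-	SHOW PROC "/backends";  -- an empty cluster field means the instance is free
-	```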
-
-	![](/images/backend_state.png)
-
-3. Creating Clusters
-
-	Only root users can create a cluster and specify any number of BE instances.
-
-	Selecting multiple instances on the same machine is supported. The general principle is to pick BEs on as many different machines as possible and to keep the number of BEs used on each machine as uniform as possible.
-
-	Each user and DB belongs to a cluster (except root). To create a user or db, you first need to enter a cluster. When a cluster is created, the system creates its manager, the superuser account, by default. The superuser has the right to create dbs and users and to view the number of BE nodes in its cluster. All non-root user logins must specify a cluster, namely `user_name@cluster_name`.
-
-	Only the root user can view all clusters in the system through `SHOW CLUSTERS`, and can enter different clusters via `@cluster_name`. Clusters are invisible to all users except root.
-
-	For compatibility with older versions of Doris, a cluster named default_cluster is built in; this name cannot be used when creating a new cluster.
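-
-	Putting the pieces together, a minimal sketch (names and password hypothetical):
-
-	```sql
-	-- As root: create a 4-instance cluster; its superuser password is set here.
-	CREATE CLUSTER cluster_a PROPERTIES ("instance_num" = "4") identified by "password";
-	```
-
-	`mysqlclient -h host -P port -u superuser@cluster_a -p password`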
-
-	![](/images/user_authority.png)
-
-4. Cluster Expansion
-
-	The process of cluster expansion is the same as that of cluster creation: BE instances on hosts that are not yet part of the cluster are preferred, and the selection principles are the same as when creating a cluster.
-
-5. Cluster shrinkage (CLUSTER DECOMMISSION)
-
-	Users can shrink a cluster by setting its `instance_num`.
-
-	When shrinking, instances on the hosts with the largest number of BE instances are removed first.
-
-	Users can also directly use `ALTER CLUSTER DECOMMISSION BACKEND` to specify which BE to remove when shrinking the cluster.
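-
-	For example (host hypothetical, using the statement named above):
-
-	```sql
-	-- Shrink by taking one specific instance out of the cluster.
-	ALTER CLUSTER DECOMMISSION BACKEND "host2:9050";
-	```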
-
-![](/images/replica_recover.png)
-
-6. Create table
-
-	To ensure high availability, the replicas of each tablet must be on different machines. So when creating a table, the strategy for placing replicas is to randomly select one BE on each host, and then randomly pick as many of those BEs as replicas are needed. Overall, this distributes tablets evenly across machines.
-
-	Therefore, when creating a tablet that needs 3 replicas, if the cluster contains three or more instances spread over only two or fewer hosts, the tablet still cannot be created.
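-
-	As a hedged illustration (this document does not show table creation; `replication_num` is the standard Doris table property for the replica count, and the names are hypothetical):
-
-	```sql
-	CREATE TABLE example_db.tbl (k1 INT)
-	DISTRIBUTED BY HASH(k1) BUCKETS 8
-	PROPERTIES ("replication_num" = "3");  -- needs BEs on at least 3 different hosts
-	```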
-
-7. Load Balancing
-
-	Load balancing is performed at cluster granularity, and there is no load balancing between clusters. However, load is calculated at the host level, and a host may carry BE instances of different clusters. Within a cluster, the load on each host is computed from its number of tablets and its storage utilization, and tablets on heavily loaded machines are then copied to lightly loaded machines (see the load balancing documents for details).
-
-8. LINK DATABASE (Soft Chain)
-
-	Multiple clusters can access each other's data through soft links, which are established at the database level between clusters.
-
-	A cluster accesses a database in another cluster by registering that database's information locally.
-
-	When querying the linked db, the computing and storage resources used are those of the cluster where the source DB is located.
-
-	A soft-linked database cannot be deleted in the source cluster: the source DB can only be deleted after the linked DB has been deleted. Deleting the linked DB does not delete the source DB.
-
-9. MIGRATE DATABASE
-
-	DB can be physically migrated between clusters.
-
-	To migrate a database, you must first link it. After migration, the data moves to the cluster where the linked DB is located; the source DB is then deleted and the link is disconnected.
-
-	Data migration reuses the replica-copying process used for load balancing and replica recovery (see the load balancing documents for details). Specifically, after the `MIGRATE` command is executed, Doris modifies the cluster of all replicas of the source DB to the destination cluster in the metadata.
-
-	Doris regularly checks whether the machines in a cluster are balanced, whether replicas are complete, and whether redundant replicas exist. DB migration piggybacks on this process: while checking replica completeness, Doris also checks whether the BE hosting a replica belongs to the destination cluster; if not, the replica is recorded as needing repair. When redundant replicas are to be deleted, replicas outside the cluster are deleted first, and then chosen according to the exist [...]
-
-![](/images/cluster_link_and_migrate_db.png)
-
-10. BE process isolation
-
-	To achieve real CPU, IO, and memory isolation between BE processes, isolation must be handled at deployment time. When deploying, configure cgroups on the host and add all the BE processes to be deployed to the cgroups. For physical IO isolation, the data storage paths of the BEs must be configured on different disks; this is not covered further here.
diff --git a/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md b/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
index 5d0d2d98837..7abdbd8eb24 100644
--- a/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -28,7 +28,7 @@ under the License.
 
 Beginning with version 0.9.0, Doris introduced an optimized replica management strategy and supported a richer replica status viewing tool. This document focuses on Doris data replica balancing, repair scheduling strategies, and replica management operations and maintenance methods. Help users to more easily master and manage the replica status in the cluster.
 
-> Repairing and balancing copies of tables with Collocation attributes can be referred to [HERE](../../advanced/join-optimization/colocation-join.md)
+> Repairing and balancing copies of tables with Collocation attributes can be referred to [HERE](../../advanced/join-optimization/colocation-join)
 
 ## Noun Interpretation
 
diff --git a/docs/advanced/alter-table/replace-table.md b/docs/advanced/alter-table/replace-table.md
index 204b395a44d..9aa755710f0 100644
--- a/docs/advanced/alter-table/replace-table.md
+++ b/docs/advanced/alter-table/replace-table.md
@@ -29,7 +29,7 @@ under the License.
 In version 0.14, Doris supports atomic replacement of two tables.
 This operation only applies to OLAP tables.
 
-For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition)
+For partition level replacement operations, please refer to [Temporary Partition Document](../../partition/table-temp-partition)
 
 ## Syntax
 
diff --git a/docs/advanced/alter-table/schema-change.md b/docs/advanced/alter-table/schema-change.md
index a0c5b169d54..3cf43598ae7 100644
--- a/docs/advanced/alter-table/schema-change.md
+++ b/docs/advanced/alter-table/schema-change.md
@@ -282,5 +282,5 @@ SHOW ALTER TABLE COLUMN\G;
 
 ## More Help
 
-For more detailed syntax and best practices used by Schema Change, see [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md ) command manual, you can also enter `HELP ALTER TABLE COLUMN` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by Schema Change, see [ALTER TABLE COLUMN](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md ) command manual, you can also enter `HELP ALTER TABLE COLUMN` in the MySql client command line for more help information.
 
diff --git a/docs/data-operate/export/outfile.md b/docs/data-operate/export/outfile.md
index b94a80934c5..9532377cde5 100644
--- a/docs/data-operate/export/outfile.md
+++ b/docs/data-operate/export/outfile.md
@@ -106,7 +106,7 @@ Planning example for concurrent export:
 
 ## Usage example
 
-For details, please refer to [OUTFILE Document](../sql-reference/sql-statements/Data%20Manipulation/OUTFILE.md).
+For details, please refer to [OUTFILE Document](../../sql-manual/sql-reference/Data-Manipulation-Statements/OUTFILE.md).
 
 ## Return result
 
diff --git a/docs/data-operate/import/import-scenes/external-storage-load.md b/docs/data-operate/import/import-scenes/external-storage-load.md
index f2ebd53e3af..221a45c1f8c 100644
--- a/docs/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/data-operate/import/import-scenes/external-storage-load.md
@@ -82,7 +82,7 @@ Hdfs load creates an import statement. The import method is basically the same a
 
 3. Check import status
 
-   Broker load is an asynchronous import method. The specific import results can be accessed through [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command to view
+   Broker load is an asynchronous import method. The specific import results can be accessed through [SHOW LOAD](../../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command to view
    
    ```
    mysql> show load order by createtime desc limit 1\G;
diff --git a/docs/data-operate/import/import-scenes/jdbc-load.md b/docs/data-operate/import/import-scenes/jdbc-load.md
index 60e29236db0..27cc9cfe28e 100644
--- a/docs/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/data-operate/import/import-scenes/jdbc-load.md
@@ -160,5 +160,5 @@ Please note the following:
 
    As mentioned earlier, we recommend that when using INSERT to import data, use the "batch" method to import, rather than a single insert.
 
-   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity), the idempotency and atomicity of operations can be guaranteed, and the data will not be lost or heavy in the end. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) document.
+   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](../load-atomicity), the idempotency and atomicity of operations can be guaranteed, and the data will not be lost or heavy in the end. For the specific usage of Label in INSERT, you can refer to the [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT.md) document.
 
diff --git a/docs/data-operate/import/import-scenes/kafka-load.md b/docs/data-operate/import/import-scenes/kafka-load.md
index e43a20c8100..05f64ccbe5c 100644
--- a/docs/data-operate/import/import-scenes/kafka-load.md
+++ b/docs/data-operate/import/import-scenes/kafka-load.md
@@ -62,7 +62,7 @@ After the upload is complete, you can view the uploaded files through the [SHOW
 
 ### Create a routine import job
 
-For specific commands to create routine import tasks, see [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md ) command manual. Here is an example:
+For specific commands to create routine import tasks, see [ROUTINE LOAD](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.md) command manual. Here is an example:
 
 1. Access the Kafka cluster without authentication
 
diff --git a/docs/data-table/basic-usage.md b/docs/data-table/basic-usage.md
index 8d8ce9d1564..8022c39aacf 100644
--- a/docs/data-table/basic-usage.md
+++ b/docs/data-table/basic-usage.md
@@ -142,7 +142,7 @@ mysql> USE example_db;
 Database changed
 ```
 
-Doris supports [composite partition and single partition](./data-partition)  two table building methods. The following takes the aggregation model as an example to demonstrate how to create two partitioned data tables.
+Doris supports [composite partition and single partition](../data-partition)  two table building methods. The following takes the aggregation model as an example to demonstrate how to create two partitioned data tables.
 
 #### Single partition
 
diff --git a/docs/data-table/hit-the-rollup.md b/docs/data-table/hit-the-rollup.md
index 990518c39cd..85cac8c0df3 100644
--- a/docs/data-table/hit-the-rollup.md
+++ b/docs/data-table/hit-the-rollup.md
@@ -44,7 +44,7 @@ Because Uniq is only a special case of the Aggregate model, we do not distinguis
 
 Example 1: Get the total consumption per user
 
-Following [Data Model Aggregate Model](./data-model.md) in the **Aggregate Model** section, the Base table structure is as follows:
+Following [Data Model Aggregate Model](../data-model.md) in the **Aggregate Model** section, the Base table structure is as follows:
 
 | ColumnName        | Type         | AggregationType | Comment                                |
 |-------------------| ------------ | --------------- | -------------------------------------- |
diff --git a/docs/ecosystem/doris-manager/space-list.md b/docs/ecosystem/doris-manager/space-list.md
index 68c3647a368..d9ab3589cf7 100644
--- a/docs/ecosystem/doris-manager/space-list.md
+++ b/docs/ecosystem/doris-manager/space-list.md
@@ -104,7 +104,7 @@ Enter the host IP to add a new host, or add it in batches.
 
 1. Code package path
 
-   When deploying a cluster through Doris Manager, you need to provide the compiled Doris installation package. You can compile it yourself from the Doris source code, or use the officially provided [binary version](https://doris.apache.org/zh-CN/ downloads/downloads.html).
+   When deploying a cluster through Doris Manager, you need to provide the compiled Doris installation package. You can compile it yourself from the Doris source code.
 
 `Doris Manager will pull the Doris installation package through http. If you need to build your own http service, please refer to the bottom of the document - Self-built http service`.
 
diff --git a/docs/ecosystem/logstash.md b/docs/ecosystem/logstash.md
index 11ec6163db7..320b8f14d8c 100644
--- a/docs/ecosystem/logstash.md
+++ b/docs/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 This plugin is used to output data to Doris for logstash, use the HTTP protocol to interact with the Doris FE Http interface, and import data through Doris's stream load.
 
-[Learn more about Doris Stream Load ](../data-operate/import/import-way/stream-load-manual)
+[Learn more about Doris Stream Load ](../data-operate/import/import-way/stream-load-manual.md)
 
 [Learn more about Doris](/)
 
diff --git a/docs/ecosystem/udf/contribute-udf.md b/docs/ecosystem/udf/contribute-udf.md
index 0492c7e0139..d5ae69e576e 100644
--- a/docs/ecosystem/udf/contribute-udf.md
+++ b/docs/ecosystem/udf/contribute-udf.md
@@ -119,6 +119,6 @@ The user manual needs to include: UDF function definition description, applicabl
 
 ## Contribute UDF to the community
 
-When you meet the conditions and prepare the code, you can contribute UDF to the Doris community after the document. Simply submit the request (PR) on [Github](https://github.com/apache/incubator-doris). See the specific submission method: [Pull Request (PR)](https://help.github.com/articles/about-pull-requests/).
+When you meet the conditions and prepare the code, you can contribute UDF to the Doris community after the document. Simply submit the request (PR) on [Github](https://github.com/apache/doris). See the specific submission method: [Pull Request (PR)](https://help.github.com/articles/about-pull-requests/).
 
 Finally, when the PR assessment is passed and merged. Congratulations, your UDF becomes a third-party UDF supported by Doris. You can check it out in the ecological expansion section of [Doris official website](/)~.
diff --git a/docs/faq/install-faq.md b/docs/faq/install-faq.md
index e945ceb6c79..93f6bd00582 100644
--- a/docs/faq/install-faq.md
+++ b/docs/faq/install-faq.md
@@ -83,7 +83,7 @@ Here we provide 3 ways to solve this problem:
 
 3. Manually migrate data using the API
 
-   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action), which can manually specify the migration of data shards on one disk to another disk.
+   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action.md), which can manually specify the migration of data shards on one disk to another disk.
 
 ### Q5. How to read FE/BE logs correctly?
 
@@ -263,11 +263,11 @@ If the following problems occur when using MySQL client to connect to Doris, thi
 
 Sometimes when FE is restarted, the above error will occur (usually only in the case of multiple Followers). And the two values in the error differ by 2. Causes FE to fail to start.
 
-This is a bug in bdbje that has not yet been resolved. In this case, you can only restore the metadata by performing the operation of failure recovery in [Metadata Operation and Maintenance Documentation](../admin-manual/maint-monitor/metadata-operation.md).
+This is a bug in bdbje that has not yet been resolved. In this case, you can only restore the metadata by performing the failure recovery procedure described in the [Metadata Operation and Maintenance Documentation](../admin-manual/maint-monitor/metadata-operation.md).
 
 ### Q12. Doris compile and install JDK version incompatibility problem
 
-When compiling Doris using Docker, start FE after compiling and installing, and the exception message `java.lang.Suchmethoderror: java.nio.ByteBuffer.limit (I)Ljava/nio/ByteBuffer;` appears, this is because the default in Docker It is JDK 11. If your installation environment is using JDK8, you need to switch the JDK environment to JDK8 in Docker. For the specific switching method, please refer to [Compile Documentation](../install/source-install/compilation.md)
+When compiling Doris using Docker and then starting FE after installation, the exception message `java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;` may appear. This is because the default JDK in the Docker image is JDK 11; if your installation environment uses JDK 8, you need to switch the JDK environment in Docker to JDK 8. For the specific switching method, please refer to the [Compile Documentation](../install/source-install/compilation.md).
 
 ### Q13. Error starting FE or unit test locally Cannot find external parser table action_table.dat
 Run the following command
@@ -285,7 +285,7 @@ In doris 1.0 onwards, openssl has been upgraded to 1.1 and is built into the dor
 ```
 ERROR 1105 (HY000): errCode = 2, detailMessage = driver connect Error: HY000 [MySQL][ODBC 8.0(w) Driver]SSL connection error: Failed to set ciphers to use (2026)
 ```
-The solution is to use the `Connector/ODBC 8.0.28` version of ODBC Connector and select `Linux - Generic` in the operating system, this version of ODBC Driver uses openssl version 1.1. Or use a lower version of ODBC connector, e.g. [Connector/ODBC 5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, see the [ODBC exterior documentation](../ecosystem/external-table/odbc-of-doris.md).
+The solution is to use the `Connector/ODBC 8.0.28` version of the ODBC Connector and select `Linux - Generic` as the operating system; this version of the ODBC Driver uses openssl 1.1. Alternatively, use a lower version of the ODBC connector, e.g. [Connector/ODBC 5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, see the [ODBC exterior documentation](../ecosystem/external-table/odbc-of-doris.md).
 
 You can verify the version of openssl used by MySQL ODBC Driver by
 
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 69b81b5c18a..47db6d49752 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ Notice:
 - The partition is left closed and right open. If the user only specifies the right boundary, the system will automatically determine the left boundary
 - If the bucketing method is not specified, the bucketing method and bucket number used for creating the table would be automatically used
 - If the bucketing method is specified, only the number of buckets can be modified, not the bucketing method or the bucketing column. If the bucketing method is specified but the number of buckets not be specified, the default value `10` will be used for bucket number instead of the number specified when the table is created. If the number of buckets modified, the bucketing method needs to be specified simultaneously.
-- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](../Create/CREATE-TABLE)
+- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
 - If the user does not explicitly create a partition when creating a table, adding a partition by ALTER is not supported
 
 2. Delete the partition
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index a0c1616e0b9..e14188172b2 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "ALTER-TABLE-REPLACE-COLUMN",
+    "title": "ALTER-TABLE-REPLACE",
     "language": "en"
 }
 ---
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index 4d56d7d12ec..d8660a1424e 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -68,7 +68,7 @@ Notice:
 
 - If from_index_name is not specified, it will be created from base index by default
 - Columns in rollup table must be columns already in from_index
-- In properties, the storage format can be specified. For details, see [CREATE TABLE](../Create/CREATE-TABLE)
+- In properties, the storage format can be specified. For details, see [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
 
 3. Delete rollup index
 
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
index ef0ab3273d1..a5d329e7fa6 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
@@ -41,7 +41,7 @@ DROP DATABASE [IF EXISTS] db_name [FORCE];
 
 illustrate:
 
-- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../Database-Administration-Statements/RECOVER) statement for details
+- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) statement for details
 - If you execute DROP DATABASE FORCE, the system will not check the database for unfinished transactions, the database will be deleted directly and cannot be recovered, this operation is generally not recommended
 
 ### Example
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index 260145ad03d..8bb21c8e6e1 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 illustrate:
 
-- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) statement for details
+- After executing DROP TABLE for a period of time, the dropped table can be recovered through the RECOVER statement. See [RECOVER](../../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER.md) statement for details
 - If you execute DROP TABLE FORCE, the system will not check whether there are unfinished transactions in the table, the table will be deleted directly and cannot be recovered, this operation is generally not recommended
 
 ### Example
diff --git a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index d033478033e..41d42c71a78 100644
--- a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../../advanced/broker) documentation for specific information.
+  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../../../advanced/broker) documentation for specific information.
 
   ````text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../../../advanced/time-zone) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
+    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../../../../advanced/time-zone) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
     
   - `send_batch_parallelism` : 
   
@@ -409,19 +409,19 @@ WITH BROKER broker_name
 
 1. Check the import task status
 
-   Broker Load is an asynchronous import process. The successful execution of the statement only means that the import task is submitted successfully, and does not mean that the data import is successful. The import status needs to be viewed through the [SHOW LOAD](../../Show-Statements/SHOW-LOAD) command.
+   Broker Load is an asynchronous import process. The successful execution of the statement only means that the import task is submitted successfully, and does not mean that the data import is successful. The import status needs to be viewed through the [SHOW LOAD](../../../Show-Statements/SHOW-LOAD) command.
 
 2. Cancel the import task
 
-   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](./CANCEL-LOAD) command. After cancellation, the written data will also be rolled back and will not take effect.
+   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](../CANCEL-LOAD) command. After cancellation, the written data will also be rolled back and will not take effect.
 
 3. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity) documentation.
+   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity) documentation.
 
 4. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
+   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 5. Error data filtering
 
diff --git a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index 99b592c7619..c3222898b3f 100644
--- a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ The data synchronization (Sync Job) function supports users to submit a resident
 
 Currently, the data synchronization job only supports connecting to Canal, obtaining the parsed Binlog data from the Canal Server and importing it into Doris.
 
-Users can view the data synchronization job status through [SHOW SYNC JOB](../../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB).
+Users can view the data synchronization job status through [SHOW SYNC JOB](../../../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB).
 
 grammar:
 
diff --git a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
index 3e7ded9a1f7..c2d1d29bade 100644
--- a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
+++ b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
@@ -418,21 +418,21 @@ curl --location-trusted -u root -H "columns: k1,k2,source_sequence,v1,v2" -H "fu
 
 4. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
+   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
 
 5. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 6. Error data filtering
 
-   Doris' import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, which means that the entire import task will fail when there is an error data. If the user wants to ignore some problematic data rows, the secondary parameter can be set to a value between 0 and 1, and Doris will automatically skip the rows with incorrect data format.
+   Doris import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, which means that the entire import task will fail when there is an error data. If the user wants to ignore some problematic data rows, the secondary parameter can be set to a value between 0 and 1, and Doris will automatically skip the rows with incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert.md) document.
 
 7. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
+   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
 
 8. Timeout
 
diff --git a/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md b/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
index 7a75f9a94af..cfa1c922810 100644
--- a/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
+++ b/docs/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
+This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../../sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
index 52f1fb66a2a..66f1abb994c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
@@ -102,7 +102,7 @@ FE 分为 Leader,Follower 和 Observer 三种角色。 默认一个集群,
 
 以上方式,都需要 Doris 的 root 用户权限。
 
-BE 节点的扩容和缩容过程,不影响当前系统运行以及正在执行的任务,并且不会影响当前系统的性能。数据均衡会自动进行。根据集群现有数据量的大小,集群会在几个小时到1天不等的时间内,恢复到负载均衡的状态。集群负载情况,可以参见 [Tablet 负载均衡文档](../maint-monitor/tablet-repair-and-balance.md#%E5%89%AF%E6%9C%AC%E5%9D%87%E8%A1%A1)。
+BE 节点的扩容和缩容过程,不影响当前系统运行以及正在执行的任务,并且不会影响当前系统的性能。数据均衡会自动进行。根据集群现有数据量的大小,集群会在几个小时到1天不等的时间内,恢复到负载均衡的状态。集群负载情况,可以参见 [Tablet 负载均衡文档](../maint-monitor/tablet-repair-and-balance.md)。
 
 ### 增加 BE 节点
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md
index 47b85bff9c1..e6a470c6d4c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md
@@ -445,7 +445,7 @@ BaseCompaction触发条件之一:Singleton文件大小限制,100MB
 ### `doris_max_scan_key_num`
 
 * 类型:int
-* 描述:用于限制一个查询请求中,scan node 节点能拆分的最大 scan key 的个数。当一个带有条件的查询请求到达 scan node 节点时,scan node 会尝试将查询条件中 key 列相关的条件拆分成多个 scan key range。之后这些 scan key range 会被分配给多个 scanner 线程进行数据扫描。较大的数值通常意味着可以使用更多的 scanner 线程来提升扫描操作的并行度。但在高并发场景下,过多的线程可能会带来更大的调度开销和系统负载,反而会降低查询响应速度。一个经验数值为 50。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../advanced/variables.md) 中 `max_scan_key_num` 的说明。
+* 描述:用于限制一个查询请求中,scan node 节点能拆分的最大 scan key 的个数。当一个带有条件的查询请求到达 scan node 节点时,scan node 会尝试将查询条件中 key 列相关的条件拆分成多个 scan key range。之后这些 scan key range 会被分配给多个 scanner 线程进行数据扫描。较大的数值通常意味着可以使用更多的 scanner 线程来提升扫描操作的并行度。但在高并发场景下,过多的线程可能会带来更大的调度开销和系统负载,反而会降低查询响应速度。一个经验数值为 50。该配置可以单独进行会话级别的配置,具体可参阅 [变量](../../../advanced/variables.md) 中 `max_scan_key_num` 的说明。
 * 默认值:1024
 
 当在高并发场景下发现并发度无法提升时,可以尝试降低该数值并观察影响。
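 
 一个最小示例(假设 `max_scan_key_num` 即上文所述可在会话级别设置的变量,50 为上文给出的经验值):
 
 ```sql
 -- 仅在当前会话降低拆分上限,并观察高并发下的影响。
 SET max_scan_key_num = 50;
 ```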
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/metadata-operation.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/metadata-operation.md
index aa1752c93d4..86dda4da225 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/metadata-operation.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/metadata-operation.md
@@ -32,7 +32,7 @@ under the License.
 
 ## 重要提示
 
-* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法再回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../admin-manual/cluster-management/upgrade) 中的操作,测试元数据兼容性。
+* 当前元数据的设计是无法向后兼容的。即如果新版本有新增的元数据结构变动(可以查看 FE 代码中的 `FeMetaVersion.java` 文件中是否有新增的 VERSION),那么在升级到新版本后,通常是无法再回滚到旧版本的。所以,在升级 FE 之前,请务必按照 [升级文档](../../admin-manual/cluster-management/upgrade.md) 中的操作,测试元数据兼容性。
 
 ## 元数据目录结构
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/tablet-repair-and-balance.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/tablet-repair-and-balance.md
index 64b4a95542a..4a5b1d05120 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -216,7 +216,7 @@ TabletScheduler 里等待被调度的分片会根据状态不同,赋予不同
 
 ## 副本均衡
 
-Doris 会自动进行集群内的副本均衡。目前支持两种均衡策略,负载/分区。负载均衡适合需要兼顾节点磁盘使用率和节点副本数量的场景;而分区均衡会使每个分区的副本都均匀分布在各个节点,避免热点,适合对分区读写要求比较高的场景。但是,分区均衡不考虑磁盘使用率,使用分区均衡时需要注意磁盘的使用情况。 策略只能在fe启动前配置[tablet_rebalancer_type](../config/fe-config )  ,不支持运行时切换。
+Doris 会自动进行集群内的副本均衡。目前支持两种均衡策略,负载/分区。负载均衡适合需要兼顾节点磁盘使用率和节点副本数量的场景;而分区均衡会使每个分区的副本都均匀分布在各个节点,避免热点,适合对分区读写要求比较高的场景。但是,分区均衡不考虑磁盘使用率,使用分区均衡时需要注意磁盘的使用情况。策略只能在 FE 启动前配置 [tablet_rebalancer_type](../../config/fe-config),不支持运行时切换。
 
 ### 负载均衡
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/multi-tenant.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/multi-tenant.md
deleted file mode 100644
index acc1775c0a6..00000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/multi-tenant.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-{
-    "title": "多租户和资源划分",
-    "language": "zh-CN"
-}
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# 多租户和资源划分
-
-Doris 的多租户和资源隔离方案,主要目的是为了多用户在同一 Doris 集群内进行数据操作时,减少相互之间的干扰,能够将集群资源更合理的分配给各用户。
-
-该方案主要分为两部分,一是集群内节点级别的资源组划分,二是针对单个查询的资源限制。
-
-## Doris 中的节点
-
-首先先简单介绍一下 Doris 的节点组成。一个 Doris 集群中有两类节点:Frontend(FE) 和 Backend(BE)。
-
-FE 主要负责元数据管理、集群管理、用户请求的接入和查询计划的解析等工作。
-
-BE 主要负责数据存储、查询计划的执行等工作。
-
-FE 不参与用户数据的处理计算等工作,因此是一个资源消耗较低的节点。而 BE 负责所有的数据计算、任务处理,属于资源消耗型的节点。因此,本文所介绍的资源划分及资源限制方案,都是针对 BE 节点的。FE 节点因为资源消耗相对较低,并且还可以横向扩展,因此通常无需做资源上的隔离和限制,FE 节点由所有用户共享即可。
-
-## 节点资源划分
-
-节点资源划分,是指将一个 Doris 集群内的 BE 节点设置标签(Tag),标签相同的 BE 节点组成一个资源组(Resource Group)。资源组可以看作是数据存储和计算的一个管理单元。下面我们通过一个具体示例,来介绍资源组的使用方式。
-
-1. 为 BE 节点设置标签
-
-   假设当前 Doris 集群有 6 个 BE 节点。分别为 host[1-6]。在初始情况下,所有节点都属于一个默认资源组(Default)。
-
-   我们可以使用以下命令将这6个节点划分成3个资源组:group_a、group_b、group_c:
-
-   ```sql
-   alter system modify backend "host1:9050" set ("tag.location" = "group_a");
-   alter system modify backend "host2:9050" set ("tag.location" = "group_a");
-   alter system modify backend "host3:9050" set ("tag.location" = "group_b");
-   alter system modify backend "host4:9050" set ("tag.location" = "group_b");
-   alter system modify backend "host5:9050" set ("tag.location" = "group_c");
-   alter system modify backend "host6:9050" set ("tag.location" = "group_c");
-   ```
-
-   这里我们将 `host[1-2]` 组成资源组 `group_a`,`host[3-4]` 组成资源组 `group_b`,`host[5-6]` 组成资源组 `group_c`。
-
-   > 注:一个 BE 只支持设置一个 Tag。
-
-2. 按照资源组分配数据分布
-
-   资源组划分好后。我们可以将用户数据的不同副本分布在不同资源组内。假设一张用户表 UserTable。我们希望在3个资源组内各存放一个副本,则可以通过如下建表语句实现:
-
-   ```sql
-   create table UserTable
-   (k1 int, k2 int)
-   distributed by hash(k1) buckets 1
-   properties(
-       "replication_allocation"="tag.location.group_a:1, tag.location.group_b:1, tag.location.group_c:1"
-   )
-   ```
-
-   这样一来,表 UserTable 中的数据,将会以3副本的形式,分别存储在资源组 group_a、group_b、group_c所在的节点中。
-
-   下图展示了当前的节点划分和数据分布:
-
-   ```text
-    ┌────────────────────────────────────────────────────┐
-    │                                                    │
-    │         ┌──────────────────┐  ┌──────────────────┐ │
-    │         │ host1            │  │ host2            │ │
-    │         │  ┌─────────────┐ │  │                  │ │
-    │ group_a │  │   replica1  │ │  │                  │ │
-    │         │  └─────────────┘ │  │                  │ │
-    │         │                  │  │                  │ │
-    │         └──────────────────┘  └──────────────────┘ │
-    │                                                    │
-    ├────────────────────────────────────────────────────┤
-    ├────────────────────────────────────────────────────┤
-    │                                                    │
-    │         ┌──────────────────┐  ┌──────────────────┐ │
-    │         │ host3            │  │ host4            │ │
-    │         │                  │  │  ┌─────────────┐ │ │
-    │ group_b │                  │  │  │   replica2  │ │ │
-    │         │                  │  │  └─────────────┘ │ │
-    │         │                  │  │                  │ │
-    │         └──────────────────┘  └──────────────────┘ │
-    │                                                    │
-    ├────────────────────────────────────────────────────┤
-    ├────────────────────────────────────────────────────┤
-    │                                                    │
-    │         ┌──────────────────┐  ┌──────────────────┐ │
-    │         │ host5            │  │ host6            │ │
-    │         │                  │  │  ┌─────────────┐ │ │
-    │ group_c │                  │  │  │   replica3  │ │ │
-    │         │                  │  │  └─────────────┘ │ │
-    │         │                  │  │                  │ │
-    │         └──────────────────┘  └──────────────────┘ │
-    │                                                    │
-    └────────────────────────────────────────────────────┘
-   ```
-
-3. 使用不同资源组进行数据查询
-
-   在前两步执行完成后,我们就可以通过设置用户的资源使用权限,来限制某一用户的查询,只能使用指定资源组中的节点来执行。
-
-   比如我们可以通过以下语句,限制 user1 只能使用 `group_a` 资源组中的节点进行数据查询,user2 只能使用 `group_b` 资源组,而 user3 可以同时使用 3 个资源组:
-
-   ```sql
-   set property for 'user1' 'resource_tags.location' = 'group_a';
-   set property for 'user2' 'resource_tags.location' = 'group_b';
-   set property for 'user3' 'resource_tags.location' = 'group_a, group_b, group_c';
-   ```
-
-   设置完成后,user1 在发起对 UserTable 表的查询时,只会访问 `group_a` 资源组内节点上的数据副本,并且查询仅会使用 `group_a` 资源组内的节点计算资源。而 user3 的查询可以使用任意资源组内的副本和计算资源。
-
-   这样,我们通过对节点的划分,以及对用户的资源使用限制,实现了不同用户查询上的物理资源隔离。更进一步,我们可以给不同的业务部门创建不同的用户,并限制每个用户使用不同的资源组。以避免不同业务部门之间的资源干扰。比如集群内有一张业务表需要共享给所有9个业务部门使用,但是希望能够尽量避免不同部门之间的资源抢占。则我们可以为这张表创建3个副本,分别存储在3个资源组中。接下来,我们为9个业务部门创建9个用户,每3个用户限制使用一个资源组。这样,资源的竞争程度就由9降低到了3。
-
-   另一方面,针对在线和离线任务的隔离。我们可以利用资源组的方式实现。比如我们可以将节点划分为 Online 和 Offline 两个资源组。表数据依然以3副本的方式存储,其中 2 个副本存放在 Online 资源组,1 个副本存放在 Offline 资源组。Online 资源组主要用于高并发低延迟的在线数据服务,而一些大查询或离线ETL操作,则可以使用 Offline 资源组中的节点执行。从而实现在统一集群内同时提供在线和离线服务的能力。
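-
-   一个示意的建表片段(假设节点已分别打上 online / offline 两个标签,标签名与表名 ServiceTable 均为示例):
-
-   ```sql
-   -- 2 个副本放在 Online 资源组,1 个副本放在 Offline 资源组
-   create table ServiceTable
-   (k1 int, k2 int)
-   distributed by hash(k1) buckets 1
-   properties(
-       "replication_allocation"="tag.location.online:2, tag.location.offline:1"
-   );
-   ```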
-
-4. 导入作业的资源组分配
-
-   导入作业(包括insert、broker load、routine load、stream load等)的资源使用可以分为两部分:
-   1. 计算资源:负责读取数据源、数据转换和分发。
-   2. 写入资源:负责数据编码、压缩并写入磁盘。
-
-   其中写入资源必须是数据副本所在的节点,而计算资源理论上可以选择任意节点完成。所以对于导入作业的资源组的分配分成两个步骤:
-   1. 使用用户级别的 resource tag 来限定计算资源所能使用的资源组。
-   2. 使用副本的 resource tag 来限定写入资源所能使用的资源组。
-
-   所以如果希望导入操作所使用的全部资源都限定在数据所在的资源组的话,只需将用户级别的 resource tag 设置为和副本的 resource tag 相同即可。
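-
-   一个示意示例(假设 etl_user 为执行导入的示例用户,数据副本位于前文的 group_a):
-
-   ```sql
-   -- 将 etl_user 的计算资源限定在副本所在的 group_a,
-   -- 从而使导入的计算资源与写入资源落在同一资源组内
-   set property for 'etl_user' 'resource_tags.location' = 'group_a';
-   ```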
-
-## 单查询资源限制
-
-前面提到的资源组方法是节点级别的资源隔离和限制。而在资源组内,依然可能发生资源抢占问题。比如前文提到的将3个业务部门安排在同一资源组内。虽然降低了资源竞争程度,但是这3个部门的查询依然有可能相互影响。
-
-因此,除了资源组方案外,Doris 还提供了对单查询的资源限制功能。
-
-目前 Doris 对单查询的资源限制主要分为 CPU 和 内存限制两方面。
-
-1. 内存限制
-
-   Doris 可以限制一个查询被允许使用的最大内存开销。以保证集群的内存资源不会被某一个查询全部占用。我们可以通过以下方式设置内存限制:
-
-   ```sql
-   # 设置会话变量 exec_mem_limit。则之后该会话内(连接内)的所有查询都使用这个内存限制。
-   set exec_mem_limit=1G;
-   # 设置全局变量 exec_mem_limit。则之后所有新会话(新连接)的所有查询都使用这个内存限制。
-   set global exec_mem_limit=1G;
-   # 在 SQL 中设置变量 exec_mem_limit。则该变量仅影响这个 SQL。
-   select /*+ SET_VAR(exec_mem_limit=1G) */ id, name from tbl where xxx;
-   ```
-
-   因为 Doris 的查询引擎是基于全内存的 MPP 查询框架。因此当一个查询的内存使用超过限制后,查询会被终止。因此,当一个查询无法在合理的内存限制下运行时,我们就需要通过一些 SQL 优化手段,或者集群扩容的方式来解决了。
-
-2. CPU 限制
-
-   用户可以通过以下方式限制查询的 CPU 资源:
-
-   ```sql
-   # 设置会话变量 cpu_resource_limit。则之后该会话内(连接内)的所有查询都使用这个CPU限制。
-   set cpu_resource_limit = 2;
-   # 设置用户的属性 cpu_resource_limit,则所有该用户的查询情况都使用这个CPU限制。该属性的优先级高于会话变量 cpu_resource_limit
-   set property for 'user1' 'cpu_resource_limit' = '3';
-   ```
-
-   `cpu_resource_limit` 的取值是一个相对值,取值越大则能够使用的 CPU 资源越多。但一个查询能使用的CPU上限也取决于表的分区分桶数。原则上,一个查询的最大 CPU 使用量和查询涉及到的 tablet 数量正相关。极端情况下,假设一个查询仅涉及到一个 tablet,则即使 `cpu_resource_limit` 设置一个较大值,也仅能使用 1 个 CPU 资源。
-
-通过内存和CPU的资源限制。我们可以在一个资源组内,将用户的查询进行更细粒度的资源划分。比如我们可以让部分时效性要求不高,但是计算量很大的离线任务使用更少的CPU资源和更多的内存资源。而部分延迟敏感的在线任务,使用更多的CPU资源以及合理的内存资源。
-
-## 最佳实践和向前兼容
-
-Tag 划分和 CPU 限制是 0.15 版本中的新功能。为了保证可以从老版本平滑升级,Doris 做了如下的向前兼容:
-
-1. 每个 BE 节点会有一个默认的 Tag:`"tag.location": "default"`。
-2. 通过 `alter system add backend` 语句新增的 BE 节点也会默认设置 Tag:`"tag.location": "default"`。
-3. 所有表的副本分布默认修改为:`"tag.location.default:xx"`。其中 xx 为原副本数量。
-4. 用户依然可以通过 `"replication_num" = "xx"` 在建表语句中指定副本数,这种属性将会自动转换成:`"tag.location.default:xx"`。从而保证无需修改原建表语句。
-5. 默认情况下,单查询的内存限制为单节点2GB,CPU资源无限制,和原有行为保持一致。且用户的 `resource_tags.location` 属性为空,即默认情况下,用户可以访问任意 Tag 的 BE,和原有行为保持一致。
-
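-例如,升级后以下两种建表属性写法是等价的(示意示例,表名 tbl 为假设名称):
-
-```sql
--- 旧写法:仅指定副本数,会被自动转换为默认 Tag 的副本分布
-create table tbl (k1 int)
-distributed by hash(k1) buckets 1
-properties("replication_num" = "3");
-
--- 等价的新写法(与上面的写法二者择一执行)
-create table tbl (k1 int)
-distributed by hash(k1) buckets 1
-properties("replication_allocation" = "tag.location.default:3");
-```
-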
-这里我们给出一个从原集群升级到 0.15 版本后,开始使用资源划分功能的步骤示例:
-
-1. 关闭数据修复与均衡逻辑
-
-   因为升级后,BE的默认Tag为 `"tag.location": "default"`,而表的默认副本分布为:`"tag.location.default:xx"`。所以如果直接修改 BE 的 Tag,系统会自动检测到副本分布的变化,从而开始数据重分布。这可能会占用部分系统资源。所以我们可以在修改 Tag 前,先关闭数据修复与均衡逻辑,以保证我们在规划资源时,不会有副本重分布的操作。
-
-   ```sql
-   ADMIN SET FRONTEND CONFIG ("disable_balance" = "true");
-   ADMIN SET FRONTEND CONFIG ("disable_tablet_scheduler" = "true");
-   ```
-
-2. 设置 Tag 和表副本分布
-
-   接下来可以通过 `alter system modify backend` 语句进行 BE 的 Tag 设置。以及通过 `alter table` 语句修改表的副本分布策略。示例如下:
-
-   ```sql
-   alter system modify backend "host1:9050, 1212:9050" set ("tag.location" = "group_a");
-   alter table my_table modify partition p1 set ("replication_allocation" = "tag.location.group_a:2");
-   ```
-
-3. 开启数据修复与均衡逻辑
-
-   在 Tag 和副本分布都设置完毕后,我们可以开启数据修复与均衡逻辑来触发数据的重分布了。
-
-   ```sql
-   ADMIN SET FRONTEND CONFIG ("disable_balance" = "false");
-   ADMIN SET FRONTEND CONFIG ("disable_tablet_scheduler" = "false");
-   ```
-
-   该过程根据涉及到的数据量会持续一段时间。并且会导致部分 colocation table 无法进行 colocation 规划(因为副本在迁移中)。可以通过 `show proc "/cluster_balance/"` 来查看进度。也可以通过 `show proc "/statistic"` 中 `UnhealthyTabletNum` 的数量来判断进度。当 `UnhealthyTabletNum` 降为 0 时,则代表数据重分布完毕。
-
-4. 设置用户的资源标签权限。
-
-   等数据重分布完毕后。我们就可以开始设置用户的资源标签权限了。因为默认情况下,用户的 `resource_tags.location` 属性为空,即可以访问任意 Tag 的 BE。所以在前面步骤中,不会影响到已有用户的正常查询。当 `resource_tags.location` 属性非空时,用户将被限制访问指定 Tag 的 BE。
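-
-   一个示意示例(沿用前文的 user1 与 group_a):
-
-   ```sql
-   -- 限制 user1 只能访问带有 group_a 标签的 BE
-   set property for 'user1' 'resource_tags.location' = 'group_a';
-   ```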
-
-通过以上4步,我们可以较为平滑的在原有集群升级后,使用资源划分功能。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/replace-table.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/replace-table.md
index 7e24bfb28af..216350e2bb2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/replace-table.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/replace-table.md
@@ -28,7 +28,7 @@ under the License.
 
 在 0.14 版本中,Doris 支持对两个表进行原子的替换操作。 该操作仅适用于 OLAP 表。
 
-分区级别的替换操作,请参阅 [临时分区文档](../partition/table-temp-partition)
+分区级别的替换操作,请参阅 [临时分区文档](../../partition/table-temp-partition)
 
 ## 语法说明
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/schema-change.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/schema-change.md
index f2adbbe4e4e..7eff0b70998 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/schema-change.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/alter-table/schema-change.md
@@ -292,4 +292,4 @@ SHOW ALTER TABLE COLUMN\G;
 
 ## 更多帮助
 
-关于Schema Change使用的更多详细语法及最佳实践,请参阅 [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) 命令手册,你也可以在 MySql 客户端命令行下输入 `HELP ALTER TABLE COLUMN`  获取更多帮助信息。
+关于Schema Change使用的更多详细语法及最佳实践,请参阅 [ALTER TABLE COLUMN](../../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) 命令手册,你也可以在 MySQL 客户端命令行下输入 `HELP ALTER TABLE COLUMN` 获取更多帮助信息。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
index 1dd619cde04..88c5a271e71 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
@@ -86,7 +86,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat
   
 3. 查看导入状态
    
-   Broker load 是一个异步的导入方式,具体导入结果可以通过[SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD)命令查看
+   Broker load 是一个异步的导入方式,具体导入结果可以通过[SHOW LOAD](../../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD)命令查看
    
    ```
    mysql> show load order by createtime desc limit 1\G;
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/jdbc-load.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/jdbc-load.md
index 36725343f07..ada0d26ddce 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/jdbc-load.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/jdbc-load.md
@@ -160,4 +160,4 @@ public class DorisJDBCDemo {
 
    前面提到,我们建议在使用 INSERT 导入数据时,采用 ”批“ 的方式进行导入,而不是单条插入。
 
-   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](./load-atomicity) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) 文档。
+   同时,我们可以为每次 INSERT 操作设置一个 Label。通过 [Label 机制](../load-atomicity) 可以保证操作的幂等性和原子性,最终做到数据的不丢不重。关于 INSERT 中 Label 的具体用法,可以参阅 [INSERT](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) 文档。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md
index d60288e14c4..51d52d998b5 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md
@@ -165,7 +165,7 @@ mysql> USE example_db;
 Database changed
 ```
 
-Doris支持[复合分区和单分区](./data-partition)两种建表方式。下面以聚合模型为例,分别演示如何创建两种分区的数据表。
+Doris支持[复合分区和单分区](../data-partition)两种建表方式。下面以聚合模型为例,分别演示如何创建两种分区的数据表。
 
 #### 单分区
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/hit-the-rollup.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/hit-the-rollup.md
index 79eb4529ff3..335a8380d37 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/hit-the-rollup.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/hit-the-rollup.md
@@ -44,7 +44,7 @@ ROLLUP 表的基本作用,在于在 Base 表的基础上,获得更粗粒度
 
 1. 示例1:获得每个用户的总消费
 
-接 **[数据模型Aggregate 模型](./data-model)**小节的**示例2**,Base 表结构如下:
+接 **[数据模型Aggregate 模型](../data-model)**小节的**示例2**,Base 表结构如下:
 
 | ColumnName      | Type        | AggregationType | Comment                |
 | --------------- | ----------- | --------------- | ---------------------- |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-manager/space-list.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-manager/space-list.md
index fdbb578f8e5..a211454ea15 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-manager/space-list.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-manager/space-list.md
@@ -104,7 +104,7 @@ ssh agent01@xx.xxx.xx.xx
 
 1. 代码包路径
 
-   通过Doris Manager 进行集群部署时,需要提供已编译好的 Doris 安装包,您可以通过 Doris 源码自行编译,或使用官方提供的[二进制版本](https://doris.apache.org/zh-CN/download)。
+   通过Doris Manager 进行集群部署时,需要提供已编译好的 Doris 安装包,您可以通过 Doris 源码自行编译。
 
 `Doris Manager 将通过 http 方式拉取Doris安装包,若您需要自建 http 服务,请参考文档底部-自建http服务`。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/logstash.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/logstash.md
index 9b70559ca2c..2a6b98fb2b7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/logstash.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 该插件用于logstash输出数据到Doris,使用 HTTP 协议与 Doris FE HTTP 接口交互,并通过 Doris 的 stream load 的方式进行数据导入。
 
-[了解Doris Stream Load ](../data-operate/import/import-way/stream-load-manual)
+[了解Doris Stream Load ](../data-operate/import/import-way/stream-load-manual.md)
 
 [了解更多关于Doris](/zh-CN)
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/udf/contribute-udf.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/udf/contribute-udf.md
index ce83e7a5ac9..d3039447f19 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/udf/contribute-udf.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/udf/contribute-udf.md
@@ -119,6 +119,6 @@ under the License.
 
 ## 贡献 UDF 到社区
 
-当你符合前提条件并准备好代码,文档后就可以将 UDF 贡献到 Doris 社区了。在  [Github](https://github.com/apache/incubator-doris) 上面提交 Pull Request (PR) 即可。具体提交方式见:[Pull Request (PR)](https://help.github.com/articles/about-pull-requests/)。
+当你符合前提条件并准备好代码,文档后就可以将 UDF 贡献到 Doris 社区了。在  [Github](https://github.com/apache/doris) 上面提交 Pull Request (PR) 即可。具体提交方式见:[Pull Request (PR)](https://help.github.com/articles/about-pull-requests/)。
 
 最后,当 PR 评审通过并 Merge 后。恭喜你,你的 UDF 已经贡献给 Doris 社区,你可以在 [Doris 官网](/zh-CN) 的生态扩展部分查看到啦~。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 67ce90fe866..f2304ae3503 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ partition_desc ["key"="value"]
 - 分区为左闭右开区间,如果用户仅指定右边界,系统会自动确定左边界
 - 如果没有指定分桶方式,则自动使用建表使用的分桶方式和分桶数。
 - 如指定分桶方式,只能修改分桶数,不可修改分桶方式或分桶列。如果指定了分桶方式,但是没有指定分桶数,则分桶数会使用默认值10,不会使用建表时指定的分桶数。如果要指定分桶数,则必须指定分桶方式。
-- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](../Create/CREATE-TABLE)
+- ["key"="value"] 部分可以设置分区的一些属性,具体说明见 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
 - 如果建表时用户未显式创建Partition,则不支持通过ALTER的方式增加分区
 
 2. 删除分区
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
index 228ab208c75..3c23fbb5dcb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "ALTER-TABLE-REPLACE-COLUMN",
+    "title": "ALTER-TABLE-REPLACE",
     "language": "zh-CN"
 }
 ---
@@ -83,4 +83,4 @@ ALTER, TABLE, REPLACE, ALTER TABLE
 ### Best Practice
 1. 原子的覆盖写操作
 
-   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../../../../advanced/partition/table-temp-partition)。
+   某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../../../../../advanced/partition/table-temp-partition)。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index dbb36b9969d..1f74ab95d23 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -68,7 +68,7 @@ ADD ROLLUP [rollup_name (column_name1, column_name2, ...)
 
 - 如果没有指定 from_index_name,则默认从 base index 创建
 - rollup 表中的列必须是 from_index 中已有的列
-- 在 properties 中,可以指定存储格式。具体请参阅 [CREATE TABLE](../Create/CREATE-TABLE)
+- 在 properties 中,可以指定存储格式。具体请参阅 [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
 
 3. 删除 rollup index
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
index c4f8cad1852..2919dbc2770 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
@@ -41,7 +41,7 @@ DROP DATABASE [IF EXISTS] db_name [FORCE];
 
 说明:
 
-- 执行 DROP DATABASE 一段时间内,可以通过 RECOVER 语句恢复被删除的数据库。详见 [RECOVER](../../Database-Administration-Statements/RECOVER) 语句
+- 执行 DROP DATABASE 一段时间内,可以通过 RECOVER 语句恢复被删除的数据库。详见 [RECOVER](../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER) 语句
 - 如果执行 DROP DATABASE FORCE,则系统不会检查该数据库是否存在未完成的事务,数据库将直接被删除并且不能被恢复,一般不建议执行此操作
 
 ### Example
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
index 6a46b0e3e98..00bc33be82e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-TABLE.md
@@ -42,7 +42,7 @@ DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
 
 说明:
 
-- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) 语句
+- 执行 DROP TABLE 一段时间内,可以通过 RECOVER 语句恢复被删除的表。详见 [RECOVER](../../../../sql-manual/sql-reference/Database-Administration-Statements/RECOVER.md) 语句
 - 如果执行 DROP TABLE FORCE,则系统不会检查该表是否存在未完成的事务,表将直接被删除并且不能被恢复,一般不建议执行此操作
 
 ### Example
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index e30e5dcc85f..b55b83dfc14 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
+    用于指定原始文件中的列顺序。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
+    前置过滤条件。数据首先根据 `column list` 和 `COLUMNS FROM PATH AS` 按顺序拼接成原始数据行。然后按照前置过滤条件进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
+    根据条件对导入的数据进行过滤。关于这部分详细介绍,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker](../../../../advanced/broker.md) 文档。
+  指定 broker 所需的信息。这些信息通常被用于 Broker 能够访问远端存储系统。如 BOS 或 HDFS。关于具体信息,可参阅 [Broker](../../../../../advanced/broker.md) 文档。
 
   ```text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    指定某些受时区影响的函数的时区,如 `strftime/alignment_timestamp/from_unixtime` 等等,具体请查阅 [时区](../../../../advanced/time-zone.md) 文档。如果不指定,则使用 "Asia/Shanghai" 时区
+    指定某些受时区影响的函数的时区,如 `strftime/alignment_timestamp/from_unixtime` 等等,具体请查阅 [时区](../../../../../advanced/time-zone.md) 文档。如果不指定,则使用 "Asia/Shanghai" 时区
     
   - send_batch_parallelism: 用于设置发送批处理数据的并行度,如果并行度的值超过 BE 配置中的 `max_send_batch_parallelism_per_job`,那么作为协调点的 BE 将使用 `max_send_batch_parallelism_per_job` 的值。
   
@@ -406,29 +406,29 @@ WITH BROKER broker_name
 
 1. 查看导入任务状态
 
-   Broker Load 是一个异步导入过程,语句执行成功仅代表导入任务提交成功,并不代表数据导入成功。导入状态需要通过 [SHOW LOAD](../../Show-Statements/SHOW-LOAD.md) 命令查看。
+   Broker Load 是一个异步导入过程,语句执行成功仅代表导入任务提交成功,并不代表数据导入成功。导入状态需要通过 [SHOW LOAD](../../../Show-Statements/SHOW-LOAD) 命令查看。
 
 2. 取消导入任务
 
-   已提交切尚未结束的导入任务可以通过 [CANCEL LOAD](./CANCEL-LOAD.md) 命令取消。取消后,已写入的数据也会回滚,不会生效。
+   已提交且尚未结束的导入任务可以通过 [CANCEL LOAD](../CANCEL-LOAD) 命令取消。取消后,已写入的数据也会回滚,不会生效。
 
 3. Label、导入事务、多表原子性
 
-   Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 的机制来保证数据导入的不丢不重。具体说明可以参阅 [导入事务和原子性](../../../../data-operate/import/import-scenes/load-atomicity.md) 文档。
+   Doris 中所有导入任务都是原子生效的。并且在同一个导入任务中对多张表的导入也能够保证原子性。同时,Doris 还可以通过 Label 的机制来保证数据导入的不丢不重。具体说明可以参阅 [导入事务和原子性](../../../../../data-operate/import/import-scenes/load-atomicity) 文档。
 
 4. 列映射、衍生列和过滤
 
-   Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。支持绝大多数内置函数和 UDF。关于如何正确的使用这个功能,可参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
+   Doris 可以在导入语句中支持非常丰富的列转换和过滤操作。支持绝大多数内置函数和 UDF。关于如何正确的使用这个功能,可参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
 
 5. 错误数据过滤
 
   Doris 的导入任务可以容忍一部分格式错误的数据。容忍率通过 `max_filter_ratio` 设置。默认为0,即表示当有一条错误数据时,整个导入任务将会失败。如果用户希望忽略部分有问题的数据行,可以将此参数设置为 0~1 之间的数值,Doris 会自动跳过那些数据格式不正确的行。
 
-   关于容忍率的一些计算方式,可以参阅 [列的映射,转换与过滤](../../../../data-operate/import/import-scenes/load-data-convert.md) 文档。
+   关于容忍率的一些计算方式,可以参阅 [列的映射,转换与过滤](../../../../../data-operate/import/import-scenes/load-data-convert) 文档。
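 
   下面是一个示意片段(仅为说明 `max_filter_ratio` 的用法,非完整导入语句;0.1 表示允许过滤 10% 的错误行,数值仅为示例):
 
   ```sql
   -- BROKER LOAD 语句中 PROPERTIES 子句的片段
   PROPERTIES
   (
       "max_filter_ratio" = "0.1"
   )
   ```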
 
 6. 严格模式
 
-   `strict_mode` 属性用于设置导入任务是否运行在严格模式下。该格式会对列映射、转换和过滤的结果产生影响。关于严格模式的具体说明,可参阅 [严格模式](../../../../data-operate/import/import-scenes/load-strict-mode.md) 文档。
+   `strict_mode` 属性用于设置导入任务是否运行在严格模式下。该模式会对列映射、转换和过滤的结果产生影响。关于严格模式的具体说明,可参阅 [严格模式](../../../../../data-operate/import/import-scenes/load-strict-mode) 文档。
 
 7. 超时时间
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index d6253ec9f82..fc28b2e35ca 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ CREATE SYNC JOB
 
 目前数据同步作业只支持对接Canal,从Canal Server上获取解析好的Binlog数据,导入到Doris内。
 
-用户可通过 [SHOW SYNC JOB](../../Show-Statements/SHOW-SYNC-JOB) 查看数据同步作业状态。
+用户可通过 [SHOW SYNC JOB](../../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB) 查看数据同步作业状态。
 
 语法:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
index ba1f6590527..c9574bb0165 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-该命令用于查看通过 [CREATE-MATERIALIZED-VIEW](../../sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) 语句提交的创建物化视图作业的执行情况。
+该命令用于查看通过 [CREATE-MATERIALIZED-VIEW](../../../sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) 语句提交的创建物化视图作业的执行情况。
 
 > 该语句等同于 `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
index 7ddf145f557..c7a2c983023 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
@@ -32,34 +32,34 @@ under the License.
 * bitmap index:位图索引,是一种快速数据结构,能够加快查询速度
 
 ## 原理介绍
-创建和删除本质上是一个 schema change 的作业,具体细节可以参照 [Schema Change](./alter-table-schema-change)。
+创建和删除本质上是一个 schema change 的作业,具体细节可以参照 [Schema Change](../alter-table-schema-change)。
 
 ## 语法
index 创建和修改相关语法有两种形式,一种集成于 alter table 语句中,另一种是使用单独的 
 create/drop index 语法
 1. 创建索引
 
-    创建索引的的语法可以参见 [CREATE INDEX](../../sql-reference/sql-statements/Data-Definition/CREATE-INDEX) 
-    或 [ALTER TABLE](../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE) 中bitmap 索引相关的操作,
-    也可以通过在创建表时指定bitmap 索引,参见[CREATE TABLE](../../sql-reference/sql-statements/Data-Definition/CREATE-TABLE)
+    创建索引的的语法可以参见 [CREATE INDEX](../../../sql-reference/sql-statements/Data-Definition/CREATE-INDEX) 
+    或 [ALTER TABLE](../../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE) 中bitmap 索引相关的操作,
+    也可以通过在创建表时指定bitmap 索引,参见[CREATE TABLE](../../../sql-reference/sql-statements/Data-Definition/CREATE-TABLE)
 
 2. 查看索引
 
-    参照[SHOW INDEX](../../sql-reference/sql-statements/Administration/SHOW-INDEX)
+    参照[SHOW INDEX](../../../sql-reference/sql-statements/Administration/SHOW-INDEX)
 
 3. 删除索引
 
-    参照[DROP INDEX](../../sql-reference/sql-statements/Data-Definition/DROP-INDEX)
-    或者 [ALTER TABLE](../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE) 中bitmap 索引相关的操作
+    参照[DROP INDEX](../../../sql-reference/sql-statements/Data-Definition/DROP-INDEX)
+    或者 [ALTER TABLE](../../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE) 中bitmap 索引相关的操作
 
 ## 创建作业
-参照 schema change 文档 [Schema Change](./alter-table-schema-change)
+参照 schema change 文档 [Schema Change](../alter-table-schema-change)
 
 ## 查看作业
-参照 schema change 文档 [Schema Change](./alter-table-schema-change)
+参照 schema change 文档 [Schema Change](../alter-table-schema-change)
 
 ## 取消作业
-参照 schema change 文档 [Schema Change](./alter-table-schema-change)
+参照 schema change 文档 [Schema Change](../alter-table-schema-change)
 
 ## 注意事项
 * 目前索引仅支持 bitmap 类型的索引。 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
index ce477239184..37add211fd2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
@@ -29,7 +29,7 @@ under the License.
 在 0.14 版本中,Doris 支持对两个表进行原子的替换操作。
 该操作仅适用于 OLAP 表。
 
-分区级别的替换操作,请参阅 [临时分区文档](./alter-table-temp-partition.md)
+分区级别的替换操作,请参阅 [临时分区文档](../alter-table-temp-partition)
 
 ## 语法说明
 
@@ -70,4 +70,4 @@ ALTER TABLE [db.]tbl1 REPLACE WITH TABLE tbl2
 
 1. 原子的覆盖写操作
 
-    某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](./alter-table-temp-partition.md)
+    某些情况下,用户希望能够重写某张表的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先使用 `CREATE TABLE LIKE` 语句创建一个相同结构的新表,将新的数据导入到新表后,通过替换操作,原子的替换旧表,以达到目的。分区级别的原子覆盖写操作,请参阅 [临时分区文档](../alter-table-temp-partition)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
index b46759642f2..9d4b5e75815 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
@@ -277,7 +277,7 @@ PROPERTIES (
 
 1. 原子的覆盖写操作
 
-    某些情况下,用户希望能够重写某一分区的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先创建一个对应的临时分区,将新的数据导入到临时分区后,通过替换操作,原子的替换原有分区,以达到目的。对于非分区表的原子覆盖写操作,请参阅[替换表文档](./alter-table-replace-table.md)
+    某些情况下,用户希望能够重写某一分区的数据,但如果采用先删除再导入的方式进行,在中间会有一段时间无法查看数据。这时,用户可以先创建一个对应的临时分区,将新的数据导入到临时分区后,通过替换操作,原子的替换原有分区,以达到目的。对于非分区表的原子覆盖写操作,请参阅[替换表文档](../alter-table-replace-table)
     
 2. 修改分桶数
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/config/fe_config.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/config/fe_config.md
index 0ef6f996b15..802f3b23f60 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/config/fe_config.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/config/fe_config.md
@@ -82,7 +82,7 @@ FE 的配置项有两种方式进行配置:
 
 3. 通过 HTTP 协议动态配置
 
-   具体请参阅 [Set Config Action](../http-actions/fe/set-config-action)
+   具体请参阅 [Set Config Action](../../http-actions/fe/set-config-action)
 
    该方式也可以持久化修改后的配置项。配置项将持久化在 `fe_custom.conf` 文件中,在 FE 重启后仍会生效。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
index 69ba5255c26..9a9dec92136 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
@@ -513,7 +513,7 @@ LoadFinishTime: 2019-07-27 11:50:16
 
 * 导入报错:`failed to send batch` 或 `TabletWriter add batch with unknown id`
 
-    请参照 [导入手册](./load-manual) 中 **通用系统配置** 中 **BE 配置**,适当修改 `query_timeout` 和 `streaming_load_rpc_max_alive_time_sec`。
+    请参照 [导入手册](../load-manual) 中 **通用系统配置** 中 **BE 配置**,适当修改 `query_timeout` 和 `streaming_load_rpc_max_alive_time_sec`。
     
 * 导入报错:`LOAD_RUN_FAIL; msg:Invalid Column Name:xxx` 
     
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/routine-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
index aa150ec1d70..2fb2b97da68 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
@@ -242,7 +242,7 @@ FE 中的 JobScheduler 根据汇报结果,继续生成后续新的 Task,或
 
 ### 修改作业属性
 
-用户可以修改已经创建的作业。具体说明可以通过 `HELP ALTER ROUTINE LOAD;` 命令查看。或参阅 [ALTER ROUTINE LOAD](../../sql-reference/sql-statements/Data-Manipulation/alter-routine-load)。
+用户可以修改已经创建的作业。具体说明可以通过 `HELP ALTER ROUTINE LOAD;` 命令查看。或参阅 [ALTER ROUTINE LOAD](../../../sql-reference/sql-statements/Data-Manipulation/alter-routine-load)。
 
 ### 作业控制
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/disk-capacity.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/disk-capacity.md
index aa3ca1a79d8..d5a61024968 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/disk-capacity.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/disk-capacity.md
@@ -127,7 +127,7 @@ capacity_min_left_bytes_flood_stage 默认 1GB。
     * snapshot/: 快照目录下的快照文件。
     * trash/:回收站中的文件。
 
-    **这种操作会对 [从 BE 回收站中恢复数据](./tablet-restore-tool) 产生影响。**
+    **这种操作会对 [从 BE 回收站中恢复数据](../tablet-restore-tool) 产生影响。**
 
     如果BE还能够启动,则可以使用`ADMIN CLEAN TRASH ON(BackendHost:BackendHeartBeatPort);`来主动清理临时文件,会清理 **所有** trash文件和过期snapshot文件,**这将影响从回收站恢复数据的操作** 。
 
@@ -158,6 +158,6 @@ capacity_min_left_bytes_flood_stage 默认 1GB。
 
         ```rm -rf data/0/12345/```
 
-    * 删除 Tablet 元数据(具体参考 [Tablet 元数据管理工具](./tablet-meta-tool))
+    * 删除 Tablet 元数据(具体参考 [Tablet 元数据管理工具](../tablet-meta-tool))
 
        ```./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash=352781111```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/outfile.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/outfile.md
index 762ce21d5d1..585ebf79af9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/outfile.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/outfile.md
@@ -64,7 +64,7 @@ INTO OUTFILE "file_path"
 
     指定相关属性。目前支持通过 Broker 进程, 或通过 S3 协议进行导出。
 
-    + Broker 相关属性需加前缀 `broker.`。具体参阅[Broker 文档](./broker.html)。
+    + Broker 相关属性需加前缀 `broker.`。具体参阅[Broker 文档](../broker)。
     + HDFS 相关属性需加前缀 `hdfs.`。
    + S3 协议则直接使用 S3 协议配置即可。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/resource-management.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/resource-management.md
index 897fb53e802..90cd87195a0 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/resource-management.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/resource-management.md
@@ -149,7 +149,7 @@ PROPERTIES
 `driver`: 标示外部表使用的driver动态库,引用该resource的ODBC外表必填,旧的mysql外表选填。
 
 
-具体如何使用可以,可以参考[ODBC of Doris](../extending-doris/odbc-of-doris.html)
+具体如何使用,可以参考[ODBC of Doris](../../extending-doris/odbc-of-doris.html)
 
 #### 示例
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/best-practices/star-schema-benchmark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/best-practices/star-schema-benchmark.md
index 468cf6ca6aa..0df33cf5c5e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/best-practices/star-schema-benchmark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/best-practices/star-schema-benchmark.md
@@ -36,7 +36,7 @@ under the License.
 
 ## 环境准备
 
-请先参照 [官方文档](http://doris.incubator.apache.org/master/zh-CN/installing/install-deploy.html) 进行 Doris 的安装部署,以获得一个正常运行中的 Doris 集群(至少包含 1 FE,1 BE)。
+请先参照 [官方文档](../installing/install-deploy.html) 进行 Doris 的安装部署,以获得一个正常运行中的 Doris 集群(至少包含 1 FE,1 BE)。
 
 以下文档中涉及的脚本都存放在 Doris 代码库的 `tools/ssb-tools/` 下。
 
@@ -78,7 +78,7 @@ sh gen-ssb-data.sh -s 100 -c 100
 
 3. 建表
 
-    复制 [create-tables.sql](https://github.com/apache/incubator-doris/tree/master/tools/ssb-tools/create-tables.sql) 中的建表语句,在 Doris 中执行。
+    复制 [create-tables.sql](https://github.com/apache/doris/tree/master/tools/ssb-tools/create-tables.sql) 中的建表语句,在 Doris 中执行。
 
 4. 导入数据
 
@@ -116,7 +116,7 @@ SSB 测试集共 4 组 14 个 SQL。查询语句在  [queries/](https://github.c
 
 ## 测试报告
 
-以下测试报告基于 Doris [branch-0.15](https://github.com/apache/incubator-doris/tree/branch-0.15) 分支代码测试,仅供参考。(更新时间:2021年10月25号)
+以下测试报告基于 Doris [branch-0.15](https://github.com/apache/doris/tree/branch-0.15) 分支代码测试,仅供参考。(更新时间:2021年10月25号)
 
 1. 硬件环境
 
@@ -162,5 +162,5 @@ SSB 测试集共 4 组 14 个 SQL。查询语句在  [queries/](https://github.c
     >
     > 注4:Parallelism 表示查询并发度,通过 `set parallel_fragment_exec_instance_num=8` 设置。
     >
-    > 注5:Runtime Filter Mode 是 Runtime Filter 的类型,通过 `set runtime_filter_type="BLOOM_FILTER"` 设置。([Runtime Filter](http://doris.incubator.apache.org/master/zh-CN/administrator-guide/runtime-filter.html) 功能对 SSB 测试集效果显著。因为该测试集中,Join 算子右表的数据可以对左表起到很好的过滤作用。你可以尝试通过 `set runtime_filter_mode=off` 关闭该功能,看看查询延迟的变化。)
+    > 注5:Runtime Filter Mode 是 Runtime Filter 的类型,通过 `set runtime_filter_type="BLOOM_FILTER"` 设置。([Runtime Filter](../administrator-guide/runtime-filter/) 功能对 SSB 测试集效果显著。因为该测试集中,Join 算子右表的数据可以对左表起到很好的过滤作用。你可以尝试通过 `set runtime_filter_mode=off` 关闭该功能,看看查询延迟的变化。)
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/extending-doris/logstash.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/extending-doris/logstash.md
index 467a886aa08..72639a0f1ce 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/extending-doris/logstash.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/extending-doris/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 该插件用于logstash输出数据到Doris,使用 HTTP 协议与 Doris FE HTTP 接口交互,并通过 Doris 的 stream load 的方式进行数据导入。
 
-[了解Doris Stream Load ](http://doris.apache.org/master/zh-CN/administrator-guide/load-data/stream-load-manual.html)
+[了解Doris Stream Load ](https://doris.apache.org/zh-CN/docs/0.15/administrator-guide/load-data/stream-load-manual)
 
 [了解更多关于Doris](http://doris.apache.org/master/zh-CN/)
 
@@ -85,7 +85,7 @@ copy logstash-output-doris-{version}.gem 到 logstash 安装目录下
 `label_prefix` | 导入标识前缀,最终生成的标识为 *{label\_prefix}\_{db}\_{table}\_{time_stamp}*
 
 
-导入相关配置:([参考文档](http://doris.apache.org/master/zh-CN/administrator-guide/load-data/stream-load-manual.html))
+导入相关配置:([参考文档](https://doris.apache.org/zh-CN/docs/0.15/administrator-guide/load-data/stream-load-manual))
 
 配置 | 说明
 --- | ---
diff --git a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
index 95c1da79f28..330fdda6a04 100644
--- a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
+++ b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-bitmap-index.md
@@ -33,32 +33,32 @@ This document focuses on how to create an index job, as well as some considerati
 
 ## Basic Principles
 Creating and dropping index is essentially a schema change job. For details, please refer to
-[Schema Change](./alter-table-schema-change).
+[Schema Change](../alter-table-schema-change).
 
 ## Syntax
There are two forms of index creation and modification related syntax: one is integrated into the alter table statement, and the other uses a separate
create/drop index syntax.
 1. Create Index
 
-    Please refer to [CREATE INDEX](../../sql-reference/sql-statements/Data-Definition/CREATE-INDEX) 
-    or [ALTER TABLE](../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE),
-    You can also specify a bitmap index when creating a table, Please refer to [CREATE TABLE](../../sql-reference/sql-statements/Data-Definition/CREATE-TABLE)
+    Please refer to [CREATE INDEX](../../../sql-reference/sql-statements/Data-Definition/CREATE-INDEX) 
+    or [ALTER TABLE](../../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE),
+    You can also specify a bitmap index when creating a table, Please refer to [CREATE TABLE](../../../sql-reference/sql-statements/Data-Definition/CREATE-TABLE)
 
 2. Show Index
 
-    Please refer to [SHOW INDEX](../../sql-reference/sql-statements/Administration/SHOW-INDEX)
+    Please refer to [SHOW INDEX](../../../sql-reference/sql-statements/Administration/SHOW-INDEX)
 
 3. Drop Index
 
-    Please refer to [DROP INDEX](../../sql-reference/sql-statements/Data-Definition/DROP-INDEX) or [ALTER TABLE](../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE)
+    Please refer to [DROP INDEX](../../../sql-reference/sql-statements/Data-Definition/DROP-INDEX) or [ALTER TABLE](../../../sql-reference/sql-statements/Data-Definition/ALTER-TABLE)
 
 ## Create Job
-Please refer to [Schema Change](./alter-table-schema-change)
+Please refer to [Schema Change](../alter-table-schema-change)
 ## View Job
-Please refer to [Schema Change](./alter-table-schema-change)
+Please refer to [Schema Change](../alter-table-schema-change)
 
 ## Cancel Job
-Please refer to [Schema Change](./alter-table-schema-change)
+Please refer to [Schema Change](../alter-table-schema-change)
 
 ## Notice
 * Currently only index of bitmap type is supported.
diff --git a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
index 02532988d84..729a8b00bcf 100644
--- a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
+++ b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-replace-table.md
@@ -29,7 +29,7 @@ under the License.
 In version 0.14, Doris supports atomic replacement of two tables.
 This operation only applies to OLAP tables.
 
-For partition level replacement operations, please refer to [Temporary Partition Document](./alter-table-temp-partition.md)
+For partition level replacement operations, please refer to [Temporary Partition Document](../alter-table-temp-partition.md)
 
 ## Syntax
 
@@ -69,4 +69,4 @@ If `swap` is `false`, the operation is as follows:
 
 1. Atomic Overwrite Operation
 
-    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
+    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
diff --git a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
index 94e735d36b4..666f497c381 100644
--- a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
+++ b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-temp-partition.md
@@ -277,7 +277,7 @@ Users can load data into temporary partitions or specify temporary partitions fo
 
 1. Atomic overwrite
 
-    In some cases, the user wants to be able to rewrite the data of a certain partition, but if it is dropped first and then loaded, there will be a period of time when the data cannot be seen. At this moment, the user can first create a corresponding temporary partition, load new data into the temporary partition, and then replace the original partition atomically through the `REPLACE` operation to achieve the purpose. For atomic overwrite operations of non-partitioned tables, please re [...]
+    In some cases, the user wants to be able to rewrite the data of a certain partition, but if it is dropped first and then loaded, there will be a period of time when the data cannot be seen. At this moment, the user can first create a corresponding temporary partition, load new data into the temporary partition, and then replace the original partition atomically through the `REPLACE` operation to achieve the purpose. For atomic overwrite operations of non-partitioned tables, please re [...]
     
 2. Modify the number of buckets
 
diff --git a/versioned_docs/version-0.15/administrator-guide/config/fe_config.md b/versioned_docs/version-0.15/administrator-guide/config/fe_config.md
index 2852e88b539..f3e00267afc 100644
--- a/versioned_docs/version-0.15/administrator-guide/config/fe_config.md
+++ b/versioned_docs/version-0.15/administrator-guide/config/fe_config.md
@@ -83,7 +83,7 @@ There are two ways to configure FE configuration items:
     
 3. Dynamic configuration via HTTP protocol
 
-    For details, please refer to [Set Config Action](../http-actions/fe/set-config-action.md)
+    For details, please refer to [Set Config Action](../../http-actions/fe/set-config-action.md)
 
     This method can also persist the modified configuration items. The configuration items will be persisted in the `fe_custom.conf` file and will still take effect after FE is restarted.
 
diff --git a/versioned_docs/version-0.15/administrator-guide/load-data/broker-load-manual.md b/versioned_docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
index 930b5e1d5a7..253598d3514 100644
--- a/versioned_docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
+++ b/versioned_docs/version-0.15/administrator-guide/load-data/broker-load-manual.md
@@ -504,7 +504,7 @@ Cluster situation: The number of BEs in the cluster is about 3, and the Broker n
 
 *  failed with: `failed to send batch` or `TabletWriter add batch with unknown id`
 
-	Refer to **General System Configuration** in **BE Configuration** in the Import Manual (./load-manual.md), and modify `query_timeout` and `streaming_load_rpc_max_alive_time_sec` appropriately.
+	Refer to **General System Configuration** in **BE Configuration** in the [Import Manual](../load-manual), and modify `query_timeout` and `streaming_load_rpc_max_alive_time_sec` appropriately.
 	
 *  failed with: `LOAD_RUN_FAIL; msg: Invalid Column Name: xxx`
     
diff --git a/versioned_docs/version-0.15/administrator-guide/load-data/routine-load-manual.md b/versioned_docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
index 5fe5558626c..81ac1451136 100644
--- a/versioned_docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
+++ b/versioned_docs/version-0.15/administrator-guide/load-data/routine-load-manual.md
@@ -243,7 +243,7 @@ You can only view tasks that are currently running, and tasks that have ended an
 
 ### Alter job
 
-Users can modify jobs that have been created. Specific instructions can be viewed through the `HELP ALTER ROUTINE LOAD;` command. Or refer to [ALTER ROUTINE LOAD](../../sql-reference/sql-statements/Data-Manipulation/alter-routine-load).
+Users can modify jobs that have been created. Specific instructions can be viewed through the `HELP ALTER ROUTINE LOAD;` command. Or refer to [ALTER ROUTINE LOAD](../../../sql-reference/sql-statements/Data-Manipulation/alter-routine-load).
 
 ### Job Control
 
diff --git a/versioned_docs/version-0.15/administrator-guide/operation/disk-capacity.md b/versioned_docs/version-0.15/administrator-guide/operation/disk-capacity.md
index 515ce4b1737..c43ef4aa3b1 100644
--- a/versioned_docs/version-0.15/administrator-guide/operation/disk-capacity.md
+++ b/versioned_docs/version-0.15/administrator-guide/operation/disk-capacity.md
@@ -129,7 +129,7 @@ When the disk capacity is higher than High Watermark or even Flood Stage, many o
     * snapshot/: Snapshot files in the snapshot directory. 
     * trash/ Trash files in the trash directory. 
 
-    **This operation will affect [Restore data from BE Recycle Bin](./tablet-restore-tool.md).**
+    **This operation will affect [Restore data from BE Recycle Bin](../tablet-restore-tool).**
 
     If the BE can still be started, you can use `ADMIN CLEAN TRASH ON(BackendHost:BackendHeartBeatPort);` to actively clean up temporary files. **all trash files** and expired snapshot files will be cleaned up, **This will affect the operation of restoring data from the trash bin**.
 
@@ -164,6 +164,6 @@ When the disk capacity is higher than High Watermark or even Flood Stage, many o
 
         ```rm -rf data/0/12345/```
 
-    * Delete tablet metadata (refer to [Tablet metadata management tool](./tablet-meta-tool))
+    * Delete tablet metadata (refer to [Tablet metadata management tool](../tablet-meta-tool))
 
        ```./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash=352781111```
\ No newline at end of file
diff --git a/versioned_docs/version-0.15/administrator-guide/outfile.md b/versioned_docs/version-0.15/administrator-guide/outfile.md
index dbea062d9a6..17ebe8af65d 100644
--- a/versioned_docs/version-0.15/administrator-guide/outfile.md
+++ b/versioned_docs/version-0.15/administrator-guide/outfile.md
@@ -63,7 +63,7 @@ INTO OUTFILE "file_path"
 
     Specify the relevant attributes. Currently it supports exporting through the Broker process, or through the S3, HDFS protocol.
 
-    + Broker related attributes need to be prefixed with `broker.`. For details, please refer to [Broker Document](./broker).
+    + Broker related attributes need to be prefixed with `broker.`. For details, please refer to [Broker Document](../broker).
    + HDFS protocol related configuration can be specified directly.
    + S3 protocol related configuration can be specified directly.
 
diff --git a/versioned_docs/version-0.15/administrator-guide/resource-management.md b/versioned_docs/version-0.15/administrator-guide/resource-management.md
index b0a1e3e7cce..283e13ee427 100644
--- a/versioned_docs/version-0.15/administrator-guide/resource-management.md
+++ b/versioned_docs/version-0.15/administrator-guide/resource-management.md
@@ -147,7 +147,7 @@ PROPERTIES
 `driver`: Indicates the driver dynamic library used by the ODBC external table.
 The ODBC external table referring to the resource is required. The old MySQL external table referring to the resource is optional.
 
-For the usage of ODBC resource, please refer to [ODBC of Doris](../extending-doris/odbc-of-doris)
+For the usage of ODBC resource, please refer to [ODBC of Doris](../../extending-doris/odbc-of-doris)
 
 
 #### Example
diff --git a/versioned_docs/version-0.15/best-practices/star-schema-benchmark.md b/versioned_docs/version-0.15/best-practices/star-schema-benchmark.md
index 99174569059..aacb08635d7 100644
--- a/versioned_docs/version-0.15/best-practices/star-schema-benchmark.md
+++ b/versioned_docs/version-0.15/best-practices/star-schema-benchmark.md
@@ -78,7 +78,7 @@ Under the `-s 100` parameter, the generated data set size is:
 
 3. Build a table
 
-    Copy the table creation statement in [create-tables.sql](https://github.com/apache/incubator-doris/tree/master/tools/ssb-tools/create-tables.sql) and execute it in Doris.
+    Copy the table creation statement in [create-tables.sql](https://github.com/apache/doris/tree/master/tools/ssb-tools/create-tables.sql) and execute it in Doris.
 
 4. Import data
 
@@ -116,7 +116,7 @@ There are 4 groups of 14 SQL in the SSB test set. The query statement is in the
 
 ## testing report
 
-The following test report is based on Doris [branch-0.15](https://github.com/apache/incubator-doris/tree/branch-0.15) branch code test, for reference only. (Update time: October 25, 2021)
+The following test report is based on Doris [branch-0.15](https://github.com/apache/doris/tree/branch-0.15) branch code test, for reference only. (Update time: October 25, 2021)
 
 1. Hardware environment
 

