Posted to commits@doris.apache.org by ji...@apache.org on 2022/04/18 05:29:27 UTC

[incubator-doris] branch master updated: Modify some bad link in docs. (#9078)

This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-doris.git


The following commit(s) were added to refs/heads/master by this push:
     new dffd8513c6 Modify some bad link in docs. (#9078)
dffd8513c6 is described below

commit dffd8513c6f6f6027ccfa61a91437add9400a761
Author: smallhibiscus <84...@qq.com>
AuthorDate: Mon Apr 18 13:29:22 2022 +0800

    Modify some bad link in docs. (#9078)
    
    Modify some bad link in docs.
---
 new-docs/en/admin-manual/data-admin/backup.md      |  2 +-
 .../en/admin-manual/data-admin/delete-recover.md   |  2 +-
 new-docs/en/admin-manual/data-admin/restore.md     |  4 +-
 new-docs/en/advanced/alter-table/replace-table.md  |  4 +-
 new-docs/en/advanced/alter-table/schema-change.md  | 14 ++---
 new-docs/en/advanced/small-file-mgr.md             |  2 +-
 .../import/import-scenes/external-storage-load.md  |  2 +-
 .../import/import-scenes/external-table-load.md    |  2 +-
 .../import/import-scenes/kafka-load.md             | 10 ++--
 .../import/import-way/broker-load-manual.md        |  2 +-
 .../import/import-way/routine-load-manual.md       |  2 +-
 .../import/import-way/stream-load-manual.md        |  2 +-
 new-docs/en/data-table/best-practice.md            |  2 +-
 new-docs/en/data-table/data-partition.md           |  4 +-
 new-docs/en/data-table/hit-the-rollup.md           |  4 +-
 new-docs/en/get-starting/get-starting.md           |  2 +-
 .../sql-reference-v2/Show-Statements/SHOW-FILE.md  | 66 ++++++++++++++++++++++
 17 files changed, 96 insertions(+), 30 deletions(-)

diff --git a/new-docs/en/admin-manual/data-admin/backup.md b/new-docs/en/admin-manual/data-admin/backup.md
index 357c21886c..3e484a83bb 100644
--- a/new-docs/en/admin-manual/data-admin/backup.md
+++ b/new-docs/en/admin-manual/data-admin/backup.md
@@ -206,4 +206,4 @@ It is recommended to import the new and old clusters in parallel for a period of
 
 ## More Help
 
- For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) command manual, You can also type `HELP BACKUP` on the MySql client command line for more help.
+ For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) command manual. You can also type `HELP BACKUP` on the MySQL client command line for more help.
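For illustration, a minimal sketch of the statement (the repository `example_repo`, database `example_db`, and table `example_tbl` are placeholder names):

```sql
-- Back up one table as a full snapshot into an existing remote repository.
BACKUP SNAPSHOT example_db.snapshot_label1
TO example_repo
ON (example_tbl)
PROPERTIES ("type" = "full");
```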
diff --git a/new-docs/en/admin-manual/data-admin/delete-recover.md b/new-docs/en/admin-manual/data-admin/delete-recover.md
index 6f4330baf5..bb459dc98f 100644
--- a/new-docs/en/admin-manual/data-admin/delete-recover.md
+++ b/new-docs/en/admin-manual/data-admin/delete-recover.md
@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
 
 ## More Help
 
-For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) command manual, You can also type `HELP RECOVER` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) command manual. You can also type `HELP RECOVER` on the MySQL client command line for more help.
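For illustration, hedged one-liners for the common cases (all names are placeholders):

```sql
-- Recover a dropped table that is still within the trash retention window.
RECOVER TABLE example_tbl;
-- Recover a dropped database.
RECOVER DATABASE example_db;
```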
diff --git a/new-docs/en/admin-manual/data-admin/restore.md b/new-docs/en/admin-manual/data-admin/restore.md
index 46199bde73..6bc4314aec 100644
--- a/new-docs/en/admin-manual/data-admin/restore.md
+++ b/new-docs/en/admin-manual/data-admin/restore.md
@@ -126,7 +126,7 @@ The restore operation needs to specify an existing backup in the remote warehous
    1 row in set (0.01 sec)
    ```
 
-For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference-v2/Show-Statements/RESTORE.html).
+For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html).
 
 ## Related Commands
 
@@ -180,4 +180,4 @@ The commands related to the backup and restore function are as follows. For the
 
 ## More Help
 
-For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) command manual, You can also type `HELP RESTORE` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) command manual. You can also type `HELP RESTORE` on the MySQL client command line for more help.
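A minimal RESTORE sketch, assuming a repository `example_repo` holding a backup labeled `snapshot_label1` (the `backup_timestamp` value is a placeholder you would copy from `SHOW SNAPSHOT`):

```sql
RESTORE SNAPSHOT example_db.snapshot_label1
FROM example_repo
ON (example_tbl)
PROPERTIES
(
    "backup_timestamp" = "2022-04-08-15-52-29",
    "replication_num" = "1"
);
```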
diff --git a/new-docs/en/advanced/alter-table/replace-table.md b/new-docs/en/advanced/alter-table/replace-table.md
index 420cb094bf..e79a0b49b2 100644
--- a/new-docs/en/advanced/alter-table/replace-table.md
+++ b/new-docs/en/advanced/alter-table/replace-table.md
@@ -29,7 +29,7 @@ under the License.
 In version 0.14, Doris supports atomic replacement of two tables.
 This operation only applies to OLAP tables.
 
-For partition level replacement operations, please refer to [Temporary Partition Document](./alter-table-temp-partition.md)
+For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition.html)
 
 ## Syntax
 
@@ -69,4 +69,4 @@ If `swap` is `false`, the operation is as follows:
 
 1. Atomic Overwrite Operation
 
-    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
\ No newline at end of file
+    In some cases, the user wants to be able to rewrite the data of a certain table, but if it is dropped and then imported, there will be a period of time in which the data cannot be viewed. At this time, the user can first use the `CREATE TABLE LIKE` statement to create a new table with the same structure, import the new data into the new table, and replace the old table atomically through the replacement operation to achieve the goal. For partition level atomic overwrite operation, pl [...]
\ No newline at end of file
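The atomic-overwrite flow described above can be sketched as follows (table names are placeholders):

```sql
-- 1. Create an empty table with the same schema as the old one.
CREATE TABLE example_tbl_new LIKE example_tbl;
-- 2. Load the new data into example_tbl_new with any import method.
-- 3. Atomically replace the old table; with swap = false the old data is dropped.
ALTER TABLE example_tbl REPLACE WITH TABLE example_tbl_new
PROPERTIES ("swap" = "false");
```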
diff --git a/new-docs/en/advanced/alter-table/schema-change.md b/new-docs/en/advanced/alter-table/schema-change.md
index cda62024be..a4a4a769a8 100644
--- a/new-docs/en/advanced/alter-table/schema-change.md
+++ b/new-docs/en/advanced/alter-table/schema-change.md
@@ -97,20 +97,20 @@ TransactionId: 10023
 * JobId: A unique ID for each Schema Change job.
 * TableName: The table name of the base table corresponding to Schema Change.
 * CreateTime: Job creation time.
-* FinishedTime: The end time of the job. If it is not finished, "N / A" is displayed.
+* FinishedTime: The end time of the job. If it is not finished, "N/A" is displayed.
 * IndexName: The name of an Index involved in this modification.
 * IndexId: The unique ID of the new Index.
 * OriginIndexId: The unique ID of the old Index.
 * SchemaVersion: Displayed in M: N format. M is the version of this Schema Change, and N is the corresponding hash value. With each Schema Change, the version is incremented.
 * TransactionId: the watershed transaction ID of the conversion history data.
 * State: The phase of the operation.
-    * PENDING: The job is waiting in the queue to be scheduled.
-    * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
-        * RUNNING: Historical data conversion.
-        * FINISHED: The operation was successful.
-            * CANCELLED: The job failed.
+  * PENDING: The job is waiting in the queue to be scheduled.
+  * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
+  * RUNNING: Historical data conversion.
+  * FINISHED: The operation was successful.
+  * CANCELLED: The job failed.
 * Msg: If the job fails, a failure message is displayed here.
-* Progress: operation progress. Progress is displayed only in the RUNNING state. Progress is displayed in M ​​/ N. Where N is the total number of copies involved in the Schema Change. M is the number of copies of historical data conversion completed.
+* Progress: Operation progress, displayed only in the RUNNING state as M/N, where N is the total number of replicas involved in the Schema Change and M is the number of replicas whose historical data conversion has completed.
 * Timeout: Job timeout, in seconds.
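The fields above describe the output of the schema change job list; a hedged way to pull it for the current database:

```sql
-- Show schema change jobs along with their State and Progress fields.
SHOW ALTER TABLE COLUMN\G
```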
 
 ## Cancel Job
diff --git a/new-docs/en/advanced/small-file-mgr.md b/new-docs/en/advanced/small-file-mgr.md
index 5053431f43..9bc5b8eebd 100644
--- a/new-docs/en/advanced/small-file-mgr.md
+++ b/new-docs/en/advanced/small-file-mgr.md
@@ -129,4 +129,4 @@ Because the file meta-information and content are stored in FE memory. So by def
 
 ## More Help
 
-For more detailed syntax and best practices used by the file manager, see [CREATE FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-FILE.html), [DROP FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Drop/DROP-FILE.html) and [SHOW FILE](../sql-manual/sql-reference-v2 /Show-Statements/SHOW-FILE.md) command manual, you can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySql client command line to get more help information.
+For more detailed syntax and best practices used by the file manager, see the [CREATE FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-FILE.html), [DROP FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Drop/DROP-FILE.html) and [SHOW FILE](../sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.md) command manuals. You can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySQL client command line for more help information.
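A hedged end-to-end sketch of the three commands (URL, file, and database names are placeholders):

```sql
-- Upload a small file into the catalog "kafka" of the current database.
CREATE FILE "ca.pem"
PROPERTIES ("url" = "https://example_url/kafka-key/ca.pem", "catalog" = "kafka");
-- List the uploaded files, then remove one.
SHOW FILE FROM example_db;
DROP FILE "ca.pem" PROPERTIES ("catalog" = "kafka");
```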
diff --git a/new-docs/en/data-operate/import/import-scenes/external-storage-load.md b/new-docs/en/data-operate/import/import-scenes/external-storage-load.md
index e9d7dfd313..c377f51f27 100644
--- a/new-docs/en/data-operate/import/import-scenes/external-storage-load.md
+++ b/new-docs/en/data-operate/import/import-scenes/external-storage-load.md
@@ -82,7 +82,7 @@ Hdfs load creates an import statement. The import method is basically the same a
 
 3. Check import status
 
-   Broker load is an asynchronous import method. The specific import results can be accessed through [SHOW LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-LOAD.html#show- load) command to view
+   Broker load is an asynchronous import method. The specific import results can be viewed with the [SHOW LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-LOAD.html#show-load) command:
    
    ```
    mysql> show load order by createtime desc limit 1\G;
diff --git a/new-docs/en/data-operate/import/import-scenes/external-table-load.md b/new-docs/en/data-operate/import/import-scenes/external-table-load.md
index 25c9d21955..f28a644eba 100644
--- a/new-docs/en/data-operate/import/import-scenes/external-table-load.md
+++ b/new-docs/en/data-operate/import/import-scenes/external-table-load.md
@@ -38,7 +38,7 @@ This document describes how to create external tables accessed through the ODBC
 
 ## create external table
 
-For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE]((../../../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) syntax help manual.
+For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE](../../../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) syntax help manual.
 
 Here is just an example of how to use it.
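For instance, a hedged sketch assuming an ODBC resource named `mysql_resource` has already been created (all names are placeholders):

```sql
CREATE EXTERNAL TABLE ext_mysql_tbl (
    k1 INT,
    v1 VARCHAR(64)
) ENGINE = ODBC
PROPERTIES (
    "odbc_catalog_resource" = "mysql_resource",
    "database" = "test_db",
    "table" = "test_tbl"
);
```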
 
diff --git a/new-docs/en/data-operate/import/import-scenes/kafka-load.md b/new-docs/en/data-operate/import/import-scenes/kafka-load.md
index abfc0d3f40..20f5d6283a 100644
--- a/new-docs/en/data-operate/import/import-scenes/kafka-load.md
+++ b/new-docs/en/data-operate/import/import-scenes/kafka-load.md
@@ -58,7 +58,7 @@ Accessing an SSL-authenticated Kafka cluster requires the user to provide a cert
   CREATE FILE "client.pem" PROPERTIES("url" = "https://example_url/kafka-key/client.pem", "catalog" = "kafka");
   ````
 
-After the upload is complete, you can view the uploaded files through the [SHOW FILES]() command.
+After the upload is complete, you can view the uploaded files through the [SHOW FILES](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.html) command.
 
 ### Create a routine import job
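A hedged sketch of a job consuming the SSL-secured cluster above, reusing the certificate files uploaded earlier (job, table, topic, and broker address are placeholders):

```sql
CREATE ROUTINE LOAD example_db.example_job ON example_tbl
FROM KAFKA (
    "kafka_broker_list" = "broker1:9093",
    "kafka_topic" = "example_topic",
    "property.security.protocol" = "ssl",
    "property.ssl.ca.location" = "FILE:ca.pem",
    "property.ssl.certificate.location" = "FILE:client.pem",
    "property.ssl.key.location" = "FILE:client.key",
    "property.ssl.key.password" = "example_password"
);
```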
 
@@ -112,9 +112,9 @@ For specific commands to create routine import tasks, see [ROUTINE LOAD](../../.
 
 ### View import job status
 
-See [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD.html for specific commands and examples for checking the status of a **job** ) command documentation.
+See the [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD.html) command documentation for specific commands and examples for checking the status of a **job**.
 
-See [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW -ROUTINE-LOAD-TASK.html) command documentation.
+See [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD-TASK.html) command documentation.
 
 Only the currently running tasks can be viewed, and the completed and unstarted tasks cannot be viewed.
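For example (the job name is a placeholder):

```sql
-- Status of the job itself.
SHOW ROUTINE LOAD FOR example_db.example_job;
-- Status of its currently running tasks.
SHOW ROUTINE LOAD TASK WHERE JobName = "example_job";
```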
 
@@ -126,8 +126,8 @@ The user can modify some properties of the job that has been created. For detail
 
 The user can control the stop, pause and restart of the job through the `STOP/PAUSE/RESUME` three commands.
 
-For specific commands, please refer to [STOP ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html) , [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html), [RESUME ROUTINE LOAD](../../ ../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) command documentation.
+For specific commands, please refer to the [STOP ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html), [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html), and [RESUME ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) command documentation.
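For instance (the job name is a placeholder):

```sql
PAUSE ROUTINE LOAD FOR example_db.example_job;
RESUME ROUTINE LOAD FOR example_db.example_job;
STOP ROUTINE LOAD FOR example_db.example_job;
```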
 
 ## more help
 
-For more detailed syntax and best practices for ROUTINE LOAD, see [ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/CREATE-ROUTINE -LOAD.html) command manual.
\ No newline at end of file
+For more detailed syntax and best practices for ROUTINE LOAD, see the [ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) command manual.
\ No newline at end of file
diff --git a/new-docs/en/data-operate/import/import-way/broker-load-manual.md b/new-docs/en/data-operate/import/import-way/broker-load-manual.md
index 3ab23b9669..b0e14a8845 100644
--- a/new-docs/en/data-operate/import/import-way/broker-load-manual.md
+++ b/new-docs/en/data-operate/import/import-way/broker-load-manual.md
@@ -434,4 +434,4 @@ Currently the Profile can only be viewed after the job has been successfully exe
 
 ## more help
 
-For more detailed syntax and best practices used by Broker Load, see [Broker Load](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/BROKER- LOAD.html) command manual, you can also enter `HELP BROKER LOAD` in the MySql client command line for more help information.
+For more detailed syntax and best practices used by Broker Load, see the [Broker Load](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/BROKER-LOAD.html) command manual. You can also enter `HELP BROKER LOAD` in the MySQL client command line for more help information.
diff --git a/new-docs/en/data-operate/import/import-way/routine-load-manual.md b/new-docs/en/data-operate/import/import-way/routine-load-manual.md
index e989c8ce20..7ddefa9e22 100644
--- a/new-docs/en/data-operate/import/import-way/routine-load-manual.md
+++ b/new-docs/en/data-operate/import/import-way/routine-load-manual.md
@@ -232,7 +232,7 @@ Accessing the SSL-certified Kafka cluster requires the user to provide a certifi
 
 ### Viewing the status of the load job
 
-Specific commands and examples for viewing the status of the ** job** can be viewed with the `HELP SHOW ROUTINE LOAD;` command.
+Specific commands and examples for viewing the status of the **job** can be viewed with the `HELP SHOW ROUTINE LOAD;` command.
 
 Specific commands and examples for viewing the **Task** status can be viewed with the `HELP SHOW ROUTINE LOAD TASK;` command.
 
diff --git a/new-docs/en/data-operate/import/import-way/stream-load-manual.md b/new-docs/en/data-operate/import/import-way/stream-load-manual.md
index be48651550..373d7eabc1 100644
--- a/new-docs/en/data-operate/import/import-way/stream-load-manual.md
+++ b/new-docs/en/data-operate/import/import-way/stream-load-manual.md
@@ -312,7 +312,7 @@ Timeout = 1000s -31561;. 20110G / 10M /s
 ```
 
 ### Complete examples
-Data situation: In the local disk path / home / store_sales of the sending and importing requester, the imported data is about 15G, and it is hoped to be imported into the table store\_sales of the database bj_sales.
+Data situation: The data to be imported, about 15G, resides in the local disk path /home/store_sales of the host sending the import request, and is to be imported into the table store\_sales of the database bj_sales.
 
 Cluster situation: The concurrency of Stream load is not affected by cluster size.
 
diff --git a/new-docs/en/data-table/best-practice.md b/new-docs/en/data-table/best-practice.md
index 930bdb3a86..95921735df 100644
--- a/new-docs/en/data-table/best-practice.md
+++ b/new-docs/en/data-table/best-practice.md
@@ -129,7 +129,7 @@ Doris stores the data in an orderly manner, and builds a sparse index for Doris
 Sparse index chooses fixed length prefix in schema as index content, and Doris currently chooses 36 bytes prefix as index.
 
 * When building tables, it is suggested that the common filter fields in queries should be placed in front of Schema. The more distinguishable the query fields are, the more frequent the query fields are.
-* One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [data model, ROLLUP and prefix index] (. / data-model-rollup. md).
+* A special case is the varchar type field: a varchar column can only be the last field of the sparse index, because the index is truncated at the varchar column; if a varchar column appears earlier, the index may be shorter than 36 bytes. Specifically, you can refer to [data model](./data-model.html) and [ROLLUP and query](./hit-the-rollup.html).
 * In addition to sparse index, Doris also provides bloomfilter index. Bloomfilter index has obvious filtering effect on columns with high discrimination. If you consider that varchar cannot be placed in a sparse index, you can create a bloomfilter index.
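A hedged example of enabling a bloomfilter index on such a column through a table property (table and column names are placeholders):

```sql
-- "bloom_filter_columns" adds bloomfilter indexes to the listed columns.
ALTER TABLE example_tbl SET ("bloom_filter_columns" = "city");
```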
 
 ### 1.5 Materialized View (rollup)
diff --git a/new-docs/en/data-table/data-partition.md b/new-docs/en/data-table/data-partition.md
index 714c7340cb..c171c81a66 100644
--- a/new-docs/en/data-table/data-partition.md
+++ b/new-docs/en/data-table/data-partition.md
@@ -332,7 +332,7 @@ It is also possible to use only one layer of partitioning. When using a layer pa
     * Once the number of Buckets for a Partition is specified, it cannot be changed. Therefore, when determining the number of Buckets, you need to consider the expansion of the cluster in advance. For example, there are currently only 3 hosts, and each host has 1 disk. If the number of Buckets is only set to 3 or less, then even if you add more machines later, you can't increase the concurrency.
     * Give some examples: Suppose there are 10 BEs, one for each BE disk. If the total size of a table is 500MB, you can consider 4-8 shards. 5GB: 8-16. 50GB: 32. 500GB: Recommended partitions, each partition is about 50GB in size, with 16-32 shards per partition. 5TB: Recommended partitions, each with a size of around 50GB and 16-32 shards per partition.
     
-    > Note: The amount of data in the table can be viewed by the `[show data](../sql-manual/sql-reference-v2/Show-Statements/SHOW-DATA.html)` command. The result is divided by the number of copies, which is the amount of data in the table.
+    > Note: The amount of data in the table can be viewed by the [show data](../sql-manual/sql-reference-v2/Show-Statements/SHOW-DATA.html) command. The result is divided by the number of copies, which is the amount of data in the table.
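For example (names are placeholders); dividing the reported size by the replica count gives the table's actual data volume:

```sql
SHOW DATA FROM example_db.example_tbl;
```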
     
 
 #### Compound Partitions vs Single Partitions
@@ -352,7 +352,7 @@ The user can also use a single partition without using composite partitions. The
 
 ### PROPERTIES
 
-In the last PROPERTIES of the table building statement, for the relevant parameters that can be set in PROPERTIES, we can check [CREATE TABLE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-TABLE .html) for a detailed introduction.
+In the last PROPERTIES of the table building statement, for the relevant parameters that can be set in PROPERTIES, we can check [CREATE TABLE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-TABLE.html) for a detailed introduction.
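For illustration, a hypothetical minimal table ending in two commonly used PROPERTIES keys (all names and values are placeholders):

```sql
CREATE TABLE example_tbl (
    k1 INT,
    v1 VARCHAR(32)
)
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES (
    "replication_num" = "3",
    "storage_medium" = "SSD"
);
```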
 
 ### ENGINE
 
diff --git a/new-docs/en/data-table/hit-the-rollup.md b/new-docs/en/data-table/hit-the-rollup.md
index 6690d70d42..b8e078e39c 100644
--- a/new-docs/en/data-table/hit-the-rollup.md
+++ b/new-docs/en/data-table/hit-the-rollup.md
@@ -44,7 +44,7 @@ Because Uniq is only a special case of the Aggregate model, we do not distinguis
 
 Example 1: Get the total consumption per user
 
-Following [Data Model Aggregate Model](data-model.html#Aggregate Model) in the **Aggregate Model** section, the Base table structure is as follows:
+Following the **Aggregate Model** section of the [Data Model](./data-model.html) document, the Base table structure is as follows:
 
 | ColumnName        | Type         | AggregationType | Comment                                |
 |-------------------| ------------ | --------------- | -------------------------------------- |
@@ -128,7 +128,7 @@ Doris automatically hits the ROLLUP table.
 
 #### ROLLUP in Duplicate Model
 
-Because the Duplicate model has no aggregate semantics. So the ROLLLUP in this model has lost the meaning of "scroll up". It's just to adjust the column order to hit the prefix index. In the next section, we will introduce prefix index in [data model prefix index](data-model.html#prefix index), and how to use ROLLUP to change prefix index in order to achieve better query efficiency.
+Because the Duplicate model has no aggregate semantics, ROLLUP in this model loses the meaning of "rolling up"; it is used only to adjust the column order to hit the prefix index. In the next section, we will introduce the prefix index in [data model prefix index](./data-model.html), and how to use ROLLUP to change the prefix index in order to achieve better query efficiency.
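A hedged sketch of using ROLLUP this way (table and column names are placeholders):

```sql
-- Reorder columns so that queries filtering on k2 first can hit a prefix index.
ALTER TABLE example_tbl ADD ROLLUP r1 (k2, k1);
```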
 
 ## ROLLUP adjusts prefix index
 
diff --git a/new-docs/en/get-starting/get-starting.md b/new-docs/en/get-starting/get-starting.md
index a0b714dcdf..190def784a 100644
--- a/new-docs/en/get-starting/get-starting.md
+++ b/new-docs/en/get-starting/get-starting.md
@@ -144,7 +144,7 @@ This article is applicable to multi-platform (Win|Mac|Linux) and multi-mode (bar
    > 5. At the same time, if there is a data query, you should be able to see the log that keeps scrolling, and there is a log of execute time is xxx, indicating that the BE has been started successfully and the query is normal.
    > 6. You can also check whether the startup is successful through the following connection: http://be_host:be_http_port/api/health If it returns: {"status": "OK","msg": "To Be Added"}, it means the startup is successful, In other cases, there may be problems.
    >
-   > **Note: If you can't see the startup failure information in be.INFO, maybe you can see it in be.out. **
+   > **Note: If you can't see the startup failure information in be.INFO, maybe you can see it in be.out.**
 
    Register BE to FE (using MySQL-Client, you need to install it yourself)
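For reference, a hedged registration command (the host is a placeholder; 9050 is the default heartbeat_service_port):

```sql
ALTER SYSTEM ADD BACKEND "be_host:9050";
```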
 
diff --git a/new-docs/en/sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.md b/new-docs/en/sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.md
new file mode 100644
index 0000000000..acd5c6fc2c
--- /dev/null
+++ b/new-docs/en/sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.md
@@ -0,0 +1,66 @@
+---
+{
+    "title": "SHOW-FILE",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## SHOW-FILE
+
+### Name
+
+SHOW FILE
+
+### Description
+
+This statement is used to display the files created in a database.
+
+Syntax:
+
+```sql
+SHOW FILE [FROM database];
+```
+
+Explanation of the returned fields:
+
+```text
+FileId: file ID, globally unique
+DbName: the name of the database to which it belongs
+Catalog: Custom Category
+FileName: file name
+FileSize: file size, in bytes
+MD5: MD5 of the file
+```
+
+### Example
+
+1. View the uploaded files in the database my_database
+
+     ```sql
+     SHOW FILE FROM my_database;
+     ```
+
+### Keywords
+
+     SHOW, FILE
+
+### Best Practice
\ No newline at end of file

