Posted to commits@doris.apache.org by mo...@apache.org on 2022/04/23 14:05:18 UTC

[incubator-doris] branch master updated: [docs][typo] Fix some typos in "alter-table" content. (#9131)

This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 4911d6898a [docs][typo] Fix some typos in "alter-table" content. (#9131)
4911d6898a is described below

commit 4911d6898a1be1259c969935f87b880fc44a0739
Author: liuzhuang2017 <95...@users.noreply.github.com>
AuthorDate: Sat Apr 23 22:05:13 2022 +0800

    [docs][typo] Fix some typos in "alter-table" content. (#9131)
---
 .../alter-table/alter-table-schema-change.md       | 12 +++----
 .../alter-table/alter-table-temp-partition.md      |  2 +-
 .../load-data/broker-load-manual.md                | 38 +++++++++++-----------
 .../administrator-guide/load-data/delete-manual.md |  2 +-
 .../load-data/spark-load-manual.md                 |  8 ++---
 .../load-data/stream-load-manual.md                |  2 +-
 .../load-data/batch-delete-manual.md               |  2 +-
 7 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/docs/en/administrator-guide/alter-table/alter-table-schema-change.md b/docs/en/administrator-guide/alter-table/alter-table-schema-change.md
index 17d510af47..31755b02a0 100644
--- a/docs/en/administrator-guide/alter-table/alter-table-schema-change.md
+++ b/docs/en/administrator-guide/alter-table/alter-table-schema-change.md
@@ -101,11 +101,11 @@ TransactionId: 10023
 * SchemaVersion: Displayed in M:N format. M is the version of this Schema Change, and N is the corresponding hash value. The version is incremented with each Schema Change.
 * TransactionId: the watershed transaction ID used for converting historical data.
 * State: The phase of the operation.
-    * PENDING: The job is waiting in the queue to be scheduled.
-    * WAITING_TXN: Waiting for import tasks started before the watershed transaction ID to finish.
-    * RUNNING: Historical data is being converted.
-    * FINISHED: The operation was successful.
-    * CANCELLED: The job failed.
+  * PENDING: The job is waiting in the queue to be scheduled.
+  * WAITING_TXN: Waiting for import tasks started before the watershed transaction ID to finish.
+  * RUNNING: Historical data is being converted.
+  * FINISHED: The operation was successful.
+  * CANCELLED: The job failed.
 * Msg: If the job fails, a failure message is displayed here.
 * Progress: operation progress. Progress is displayed only in the RUNNING state, in M/N format, where N is the total number of replicas involved in the Schema Change and M is the number of replicas whose historical data conversion has completed.
 * Timeout: Job timeout, in seconds.
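
As a side note, these fields correspond to the output of `SHOW ALTER TABLE COLUMN`; a minimal sketch of checking the most recent job and cancelling an unfinished one (database and table names are illustrative) is:

```
-- most recently created Schema Change job in example_db
SHOW ALTER TABLE COLUMN FROM example_db ORDER BY CreateTime DESC LIMIT 1;

-- cancel a Schema Change job that has not yet finished on tbl1
CANCEL ALTER TABLE COLUMN FROM tbl1;
```
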
@@ -190,7 +190,7 @@ At the same time, columns that already exist in the Base table are not allowed t
 
     If you modify the column `k1 INT SUM NULL DEFAULT "1"` to type BIGINT, you need to execute the following command:
     
-    ```ALTER TABLE tbl1 MODIFY COLUMN `k1` BIGINT SUM NULL DEFAULT "1"; ```
+```ALTER TABLE tbl1 MODIFY COLUMN `k1` BIGINT SUM NULL DEFAULT "1";```
     
   Note that, apart from the new column type, the other attributes such as the aggregation mode, the Nullable attribute, and the default value must be filled in according to the original column definition.
     
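To make the note concrete, a hypothetical sketch: if the table also had a column originally defined as `v1 VARCHAR(32) REPLACE NOT NULL DEFAULT "none"` (a column invented purely for illustration), widening it would repeat every original attribute and change only the type:

```
ALTER TABLE tbl1 MODIFY COLUMN `v1` VARCHAR(64) REPLACE NOT NULL DEFAULT "none";
```
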
diff --git a/docs/en/administrator-guide/alter-table/alter-table-temp-partition.md b/docs/en/administrator-guide/alter-table/alter-table-temp-partition.md
index 94e735d36b..94f7440bf7 100644
--- a/docs/en/administrator-guide/alter-table/alter-table-temp-partition.md
+++ b/docs/en/administrator-guide/alter-table/alter-table-temp-partition.md
@@ -223,7 +223,7 @@ Users can load data into temporary partitions or specify temporary partitions fo
     ```
     LOAD LABEL example_db.label1
     (
-    DATA INFILE ("hdfs: // hdfs_host: hdfs_port / user / palo / data / input / file")
+    DATA INFILE ("hdfs://hdfs_host:hdfs_port/user/palo/data/input/file")
     INTO TABLE `my_table`
     TEMPORARY PARTITION (tp1, tp2, ...)
     ...
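
For completeness, once the data loaded into a temporary partition has been verified, a temporary partition can atomically replace a formal one; a sketch with illustrative partition names is:

```
ALTER TABLE `my_table` REPLACE PARTITION (p1) WITH TEMPORARY PARTITION (tp1);
```
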
diff --git a/docs/en/administrator-guide/load-data/broker-load-manual.md b/docs/en/administrator-guide/load-data/broker-load-manual.md
index 0a5d168284..72a1976f2f 100644
--- a/docs/en/administrator-guide/load-data/broker-load-manual.md
+++ b/docs/en/administrator-guide/load-data/broker-load-manual.md
@@ -393,11 +393,11 @@ The following configurations belong to the Broker load system-level configuratio
 
 Default configuration:
 
-	```
-	Parameter name: min_bytes_per_broker_scanner, default 64MB, unit bytes.
-	Parameter name: max_broker_concurrency, default 10.
-	Parameter name: max_bytes_per_broker_scanner, default 3G, unit bytes.
-	```
+```
+Parameter name: min_bytes_per_broker_scanner, default 64MB, unit bytes.
+Parameter name: max_broker_concurrency, default 10.
+Parameter name: max_bytes_per_broker_scanner, default 3GB, unit bytes.
+```
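
As a non-authoritative aside, these are FE configuration items. If the item is marked mutable in the running version, it can be adjusted from a MySQL client as sketched below (the 5 GB value is only an example); otherwise it goes into fe.conf and requires an FE restart:

```
-- inspect the current broker scanner limits
ADMIN SHOW FRONTEND CONFIG LIKE "%broker_scanner%";

-- raise the per-scanner byte limit to 5 GB (5368709120 bytes)
ADMIN SET FRONTEND CONFIG ("max_bytes_per_broker_scanner" = "5368709120");
```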
 
 ## Best Practices
 
@@ -455,7 +455,7 @@ We will only discuss the case of a single BE. If the user cluster has more than
 		```
 		Expected maximum imported file data = 14400s * 10M / s * BE number
 		For example, the BE number of clusters is 10.
-		Expected maximum imported file data volume = 14400 * 10M / s * 10 = 1440000M 1440G
+		Expected maximum imported file data volume = 14400 * 10M / s * 10 = 1440000M ≈ 1440G
 		
		Note: The average user's environment may not reach a speed of 10M/s, so it is recommended that files larger than 500G be split before being imported.
 		
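A rough, illustrative sketch of how the 14400s timeout from the calculation above is carried on the load statement itself (label, path, and broker credentials are placeholders):

```
LOAD LABEL example_db.label_big_file
(
    DATA INFILE("hdfs://hdfs_host:hdfs_port/user/palo/data/input/big_file")
    INTO TABLE `my_table`
)
WITH BROKER "broker_name" ("username" = "user", "password" = "pass")
PROPERTIES
(
    "timeout" = "14400",
    "max_filter_ratio" = "0.1"
);
```
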
@@ -521,16 +521,16 @@ Cluster situation: The number of BEs in the cluster is about 3, and the Broker n
 	Refer to **General System Configuration** in **BE Configuration** in the Import Manual (./load-manual.md), and modify `query_timeout` and `streaming_load_rpc_max_alive_time_sec` appropriately.
 	
 *  failed with: `LOAD_RUN_FAIL; msg: Invalid Column Name: xxx`
-    
-     If it is PARQUET or ORC format data, you need to keep the column names in the file header consistent with the column names in the doris table, such as:
-     `` `
-     (tmp_c1, tmp_c2)
-     SET
-     (
-         id = tmp_c2,
-         name = tmp_c1
-     )
-     `` `
-     This means that the columns named (tmp_c1, tmp_c2) in the parquet or orc file are mapped to the (id, name) columns in the doris table. If SET is not specified, the column names in the file are used directly as the mapping.
-
-     Note: If the orc file was generated directly by some hive versions, the table header in the orc file is not the column names from the hive metastore but (_col0, _col1, _col2, ...), which may cause the Invalid Column Name error; in that case you need to use SET to do the mapping.
+
+	If it is PARQUET or ORC format data, you need to keep the column names in the file header consistent with the column names in the doris table, such as:
+	```
+	(tmp_c1, tmp_c2)
+	SET
+	(
+	id = tmp_c2,
+	name = tmp_c1
+	)
+	```
+	This means that the columns named (tmp_c1, tmp_c2) in the parquet or orc file are mapped to the (id, name) columns in the doris table. If SET is not specified, the column names in the file are used directly as the mapping.
+
+	Note: If the orc file was generated directly by some hive versions, the table header in the orc file is not the column names from the hive metastore but (_col0, _col1, _col2, ...), which may cause the Invalid Column Name error; in that case you need to use SET to do the mapping.
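
Putting the two notes together, a hedged sketch of a complete broker load statement for such a file (label, path, and broker name are placeholders; the file headers are assumed to be (tmp_c1, tmp_c2)):

```
LOAD LABEL example_db.label_orc_map
(
    DATA INFILE("hdfs://hdfs_host:hdfs_port/user/palo/data/input/file.orc")
    INTO TABLE `my_table`
    FORMAT AS "orc"
    (tmp_c1, tmp_c2)
    SET
    (
        id = tmp_c2,
        name = tmp_c1
    )
)
WITH BROKER "broker_name" ("username" = "user", "password" = "pass");
```
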
diff --git a/docs/en/administrator-guide/load-data/delete-manual.md b/docs/en/administrator-guide/load-data/delete-manual.md
index 8ee465b80c..fc0302bb7b 100644
--- a/docs/en/administrator-guide/load-data/delete-manual.md
+++ b/docs/en/administrator-guide/load-data/delete-manual.md
@@ -161,7 +161,7 @@ In general, Doris's deletion timeout is limited from 30 seconds to 5 minutes. Th
   
 * `query_timeout`
   
-    Because delete itself is an SQL command, the deletion statement is also limited by the session variables, and the timeout is also affected by the session value `query'timeout`. You can increase the value by `set query'timeout = xxx`.
+    Because delete itself is an SQL command, the deletion statement is also limited by the session variables, and the timeout is also affected by the session value `query_timeout`. You can increase the value by `set query_timeout = xxx`.
 
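For illustration only, raising the session timeout before a large delete could look like this (table, partition, and values are placeholders):

```
-- allow the delete statement up to 10 minutes
SET query_timeout = 600;
DELETE FROM my_table PARTITION p1 WHERE k1 = 3;
```
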
 **InPredicate configuration**
 
diff --git a/docs/en/administrator-guide/load-data/spark-load-manual.md b/docs/en/administrator-guide/load-data/spark-load-manual.md
index 062e1823aa..3534be710d 100644
--- a/docs/en/administrator-guide/load-data/spark-load-manual.md
+++ b/docs/en/administrator-guide/load-data/spark-load-manual.md
@@ -96,7 +96,7 @@ The implementation of spark load task is mainly divided into the following five
 
 ### Applicable scenarios
 
-At present, the bitmap column in Doris is implemented using the class library '`roaringbitmap`', while the input data type of '`roaringbitmap`' can only be integer. Therefore, if you want to pre-calculate the bitmap column in the import process, you need to convert the type of input data to integer.
+At present, the bitmap column in Doris is implemented using the class library `roaringbitmap`, while the input data type of `roaringbitmap` can only be integer. Therefore, if you want to pre-calculate the bitmap column in the import process, you need to convert the type of input data to integer.
 
 In the existing Doris import process, the data structure of global dictionary is implemented based on hive table, which stores the mapping from original value to encoded value.
 
@@ -191,7 +191,7 @@ REVOKE USAGE_PRIV ON RESOURCE resource_name FROM ROLE role_name
 
   - Other parameters are optional, refer to `http://spark.apache.org/docs/latest/configuration.html`
 
-- `working_dir`: directory used by ETL. Spark is required when used as an ETL resource. For example: `hdfs://host :port/tmp/doris`.
+- `working_dir`: directory used by ETL. Spark is required when used as an ETL resource. For example: `hdfs://host:port/tmp/doris`.
 
 - `broker`: the name of the broker. Spark is required when used as an ETL resource. You need to use the 'alter system add broker' command to complete the configuration in advance.
 
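Tying these parameters together, a sketch of a Spark ETL resource definition (all hosts, ports, and names below are placeholders) could look like:

```
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.hadoop.yarn.resourcemanager.address" = "rm_host:8032",
    "spark.hadoop.fs.defaultFS" = "hdfs://nn_host:9000",
    "working_dir" = "hdfs://nn_host:9000/tmp/doris",
    "broker" = "broker0"
);
```
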
@@ -272,7 +272,7 @@ REVOKE USAGE_PRIV ON RESOURCE "spark0" FROM "user0"@"%";
 
 ### Configure spark client
 
-The FE submits the spark task by executing the spark-submit command, so a spark client needs to be configured for the FE. It is recommended to use an official Spark 2.x release, version 2.4.3 or above, [download spark here](https://archive.apache.org/dist/spark/). After downloading, please follow the steps below to complete the configuration.
+The FE submits the spark task by executing the spark-submit command, so a spark client needs to be configured for the FE. It is recommended to use an official Spark 2.x release, version 2.4.5 or above, [download spark here](https://archive.apache.org/dist/spark/). After downloading, please follow the steps below to complete the configuration.
 
 #### Configure SPARK_HOME environment variable
 
@@ -601,7 +601,7 @@ The directory where the spark client's commit log is stored (`Fe/log/spark)_laun
 
 + `yarn_client_path`
 
-The path of the yarn binary executable file (`Fe/lib/yarn-client/Hadoop/bin/yarn').
+The path of the yarn binary executable file (`Fe/lib/yarn-client/Hadoop/bin/yarn`).
 
 + `yarn_config_dir`
 
diff --git a/docs/en/administrator-guide/load-data/stream-load-manual.md b/docs/en/administrator-guide/load-data/stream-load-manual.md
index b91b339b21..edfcb8c96c 100644
--- a/docs/en/administrator-guide/load-data/stream-load-manual.md
+++ b/docs/en/administrator-guide/load-data/stream-load-manual.md
@@ -133,7 +133,7 @@ Stream load uses HTTP protocol, so all parameters related to import tasks are se
 
	``` dpp.abnorm.ALL``` denotes the number of rows whose data quality is not up to standard, such as type mismatch, column mismatch, length mismatch, and so on.
 
-	``` dpp.norm.ALL ``` refers to the number of correct data in the import process. The correct amount of data for the import task can be queried by the ``SHOW LOAD` command.
+	``` dpp.norm.ALL ``` refers to the number of correct data in the import process. The correct amount of data for the import task can be queried by the `SHOW LOAD` command.
 
 The number of rows in the original file = `dpp.abnorm.ALL + dpp.norm.ALL`
 
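For reference, a quick way to check those counters afterwards is the `SHOW LOAD` statement mentioned above (the label is a placeholder):

```
-- the returned job info includes the dpp.norm.ALL / dpp.abnorm.ALL counters
SHOW LOAD WHERE LABEL = "label1";
```
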
diff --git a/docs/zh-CN/administrator-guide/load-data/batch-delete-manual.md b/docs/zh-CN/administrator-guide/load-data/batch-delete-manual.md
index c7d9f398c2..e86cc895b6 100644
--- a/docs/zh-CN/administrator-guide/load-data/batch-delete-manual.md
+++ b/docs/zh-CN/administrator-guide/load-data/batch-delete-manual.md
@@ -36,7 +36,7 @@ under the License.
 ## 原理
 通过增加一个隐藏列`__DORIS_DELETE_SIGN__`实现,因为我们只是在unique 模型上做批量删除,因此只需要增加一个 类型为bool 聚合函数为replace 的隐藏列即可。在be 各种聚合写入流程都和正常列一样,读取方案有两个:
 
-在fe遇到 * 等扩展时去去掉`__DORIS_DELETE_SIGN__`,并且默认加上 `__DORIS_DELETE_SIGN__ != true` 的条件
+在fe遇到 * 等扩展时去掉`__DORIS_DELETE_SIGN__`,并且默认加上 `__DORIS_DELETE_SIGN__ != true` 的条件
 be 读取时都会加上一列进行判断,通过条件确定是否删除。
 
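As a side note on the hidden column described above, it can be made visible for inspection from a MySQL client; a hedged sketch (the table name is illustrative):

```
-- expose hidden columns such as __DORIS_DELETE_SIGN__ in DESC / SELECT
SET show_hidden_columns = true;
SELECT k1, __DORIS_DELETE_SIGN__ FROM tbl1;
```
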
 ### 导入


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org